
SSH: Optimize pack data handling for large repositories #62

@dcoric

Description

The current pack data capture approach buffers all incoming data in memory, which can cause problems when handling large repositories.

Current Approach:

const packDataChunks: Buffer[] = [];
// ... every incoming chunk is pushed onto this array, so the entire pack
// is held in memory until the stream ends
const packData = Buffer.concat(packDataChunks);

Potential Issues:

  • Memory exhaustion with large pushes (>500MB)
  • No progress reporting for slow connections
  • All-or-nothing approach (no partial processing)

Proposed Optimizations:

  1. Streaming to a temporary file (see the first sketch below):

    • Write chunks to a temporary file instead of memory
    • Process from the file after the stream completes
    • Automatic cleanup on completion/error

  2. Size limits (see the second sketch below):

    • Configurable maximum pack file size
    • Graceful degradation for oversized packs
    • Clear error messages when the limit is exceeded

  3. Progress reporting (see the third sketch below):

    • Track bytes received vs. expected
    • Timeout for stalled transfers
    • User-facing progress indicators
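
A minimal sketch of option 1, assuming the SSH handler in src/proxy/ssh/server.ts can expose the incoming pack data as a Readable stream; spoolPackToTempFile and the temp-dir prefix are hypothetical names, not existing code:

import { createWriteStream } from 'fs';
import { mkdtemp, rm } from 'fs/promises';
import { tmpdir } from 'os';
import { join } from 'path';
import { Readable } from 'stream';
import { pipeline } from 'stream/promises';

// Hypothetical helper: write incoming pack chunks to a temp file and
// return the file path; the temp directory is removed if anything fails.
async function spoolPackToTempFile(packStream: Readable): Promise<string> {
  const dir = await mkdtemp(join(tmpdir(), 'git-proxy-pack-'));
  const packPath = join(dir, 'incoming.pack');
  try {
    // pipeline handles backpressure and forwards errors from either side
    await pipeline(packStream, createWriteStream(packPath));
    return packPath;
  } catch (err) {
    // Automatic cleanup on error; the caller removes the dir after processing
    await rm(dir, { recursive: true, force: true });
    throw err;
  }
}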
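
A minimal sketch of option 2 as a Transform stream that could sit in the same pipeline; MAX_PACK_BYTES and PackSizeLimiter are hypothetical, and the real limit would be read from the configuration under src/config/:

import { Transform, TransformCallback } from 'stream';

// Hypothetical default limit; the actual value would come from src/config/
const MAX_PACK_BYTES = 500 * 1024 * 1024; // 500 MB

// Passes chunks through until the configured limit is exceeded, then
// fails the stream with a clear error message.
class PackSizeLimiter extends Transform {
  private received = 0;

  constructor(private readonly maxBytes: number = MAX_PACK_BYTES) {
    super();
  }

  _transform(chunk: Buffer, _enc: BufferEncoding, cb: TransformCallback): void {
    this.received += chunk.length;
    if (this.received > this.maxBytes) {
      cb(new Error(`pack data exceeds the configured limit of ${this.maxBytes} bytes`));
      return;
    }
    cb(null, chunk);
  }
}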
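
A minimal sketch of option 3 using a PassThrough stream that counts bytes and aborts stalled transfers; trackPackProgress, the timeout value, and the console logging are placeholders for whatever reporting mechanism the proxy already uses:

import { PassThrough } from 'stream';

// Hypothetical progress tracker: counts bytes flowing through and aborts
// the transfer if no data arrives within the stall timeout.
function trackPackProgress(stallTimeoutMs = 30_000): PassThrough {
  const stream = new PassThrough();
  let received = 0;
  let stallTimer: ReturnType<typeof setTimeout> | undefined;

  const resetStallTimer = () => {
    if (stallTimer) clearTimeout(stallTimer);
    stallTimer = setTimeout(() => {
      stream.destroy(new Error(`pack transfer stalled for ${stallTimeoutMs} ms`));
    }, stallTimeoutMs);
  };

  stream.on('data', (chunk: Buffer) => {
    received += chunk.length;
    resetStallTimer();
    // Placeholder for a user-facing indicator; a real implementation could
    // compare against an expected size when the client advertises one.
    console.log(`pack data received: ${received} bytes`);
  });
  stream.on('close', () => {
    if (stallTimer) clearTimeout(stallTimer);
  });

  resetStallTimer();
  return stream;
}

The three pieces could then be combined in one call, e.g. pipeline(packStream, new PackSizeLimiter(), trackPackProgress(), createWriteStream(packPath)).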

Affected Files:

  • src/proxy/ssh/server.ts (stream handling)
  • src/config/ (size limit configuration)

Metadata

Labels

enhancement (New feature or request)
