Description
Which project does this relate to?
Start
Describe the bug
Currently, TanStack Start automatically calls `await request.formData()` in server functions when the content-type is `multipart/form-data`, which loads the entire file into memory before the handler is invoked. This prevents implementing true streaming file uploads with size limit enforcement.
Reference: server-functions-handler.js:49
```ts
if (formDataContentTypes.some(
  (type) => contentType && contentType.includes(type)
)) {
  // ...
  const formData = await request.formData();  // <-- File fully loaded into memory here
  // ...
  return await action(params, signal);  // Handler called after file is in memory
}
```

Current Behavior
When a user uploads a large file (e.g., 100MB, 1GB, or even 10GB):
- The entire file is loaded into server memory via request.formData()
- Only then is the server function handler called
- File size limits cannot be enforced before memory consumption
- Large uploads can crash the server or cause OOM errors
- True streaming from network → storage is impossible
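The buffering can be demonstrated with the standard Fetch API alone, independent of any TanStack code: once `formData()` has parsed the body, the stream is consumed and a handler invoked afterwards has nothing left to read.

```ts
// Standalone Fetch API demo (no TanStack code involved): parsing FormData
// consumes the request body, so a later handler cannot re-read the stream.
const form = new FormData();
form.append('file', new Blob(['hello world']), 'upload.txt');
const req = new Request('http://localhost/upload', { method: 'POST', body: form });

await req.formData();        // what the framework does before the handler runs
console.log(req.bodyUsed);   // → true: the raw stream is gone for the handler
```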
Expected Behavior
Server functions should be able to access the raw request body stream via getRequest() before TanStack Start parses the FormData, allowing developers to:
- Enforce file size limits during upload (reject at 25MB instead of buffering 10GB)
- Stream files directly from network to cloud storage (S3, Azure Blob, etc.)
- Implement proper backpressure handling
- Minimize memory footprint for file uploads
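As a sketch of what in-flight size enforcement could look like once the raw stream is exposed (`limitBytes` is a hypothetical helper, not a TanStack API): a `TransformStream` that counts bytes and errors the pipe once the budget is exceeded, so only the first ~25MB ever pass through memory.

```ts
// Hypothetical helper (not part of TanStack Start): abort a body stream
// as soon as it exceeds a byte budget, instead of buffering it entirely.
function limitBytes(maxBytes: number): TransformStream<Uint8Array, Uint8Array> {
  let seen = 0;
  return new TransformStream({
    transform(chunk, controller) {
      seen += chunk.byteLength;
      if (seen > maxBytes) {
        // Erroring the transform propagates to both sides of the pipe,
        // cancelling the source (the network) and the destination (storage).
        controller.error(new Error(`body exceeds ${maxBytes} bytes`));
      } else {
        controller.enqueue(chunk);
      }
    },
  });
}

// Usage sketch: request.body.pipeThrough(limitBytes(25 * 1024 * 1024))
// can then be piped to blob storage with backpressure intact.
```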
Use Case
Streaming file uploads to Azure Blob Storage with size enforcement:
```ts
import { Readable } from 'node:stream';
import Busboy from 'busboy';

export const uploadFileRpc = createServerFn({ method: 'POST' })
  .middleware([authMiddleware])
  .handler(async (ctx) => {
    const request = getRequest();

    // Would like to access the raw body stream here
    const busboy = new Busboy({
      headers: {
        'content-type': request.headers.get('content-type') || '',
      },
      limits: {
        fileSize: 25 * 1024 * 1024, // Enforce 25MB limit DURING upload
      },
    });
    busboy.on('file', async (fieldname, fileStream) => {
      // Stream directly to Azure without buffering in memory
      const webStream = Readable.toWeb(fileStream);
      await objectStorage.upload({ stream: webStream });
    });
    // Currently doesn't work because request.body is already consumed
    Readable.fromWeb(request.body).pipe(busboy);
  });
```

Current Workaround
The only workaround is to bypass TanStack Start server functions entirely and create separate Hono/Express routes for file uploads, which defeats the purpose of having a unified server function API.
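For reference, the workaround boils down to a plain Request-to-Response handler mounted outside TanStack Start (e.g. on a Hono route via `c.req.raw`), so the body is never pre-parsed. This is an illustrative, framework-agnostic sketch; the route shape and the 25MB limit are assumptions from the use case above:

```ts
// Illustrative standalone upload handler of the kind mounted on a separate
// Hono/Express route; TanStack Start never sees or parses this body.
const MAX_BYTES = 25 * 1024 * 1024;

async function handleUpload(request: Request): Promise<Response> {
  if (!request.body) return new Response('missing body', { status: 400 });
  const reader = request.body.getReader();
  let received = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    received += value.byteLength;
    if (received > MAX_BYTES) {
      await reader.cancel(); // stop pulling from the socket
      return new Response('payload too large', { status: 413 });
    }
    // ...forward `value` to blob storage here (omitted)...
  }
  return new Response(`received ${received} bytes`, { status: 200 });
}
```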
Impact
This affects any TanStack Start application that needs to:
- Handle large file uploads (videos, backups, datasets)
- Implement proper file size validation
- Stream files to cloud storage
- Build production file upload services
Steps to Reproduce the Bug or Issue
Try to stream an uploaded file from the client to another destination (for example, blob storage) inside a server function.
Expected behavior
Add a configuration option or special handler type for server functions that need raw stream access:
Option 1: Skip FormData parsing when a flag is set
```ts
export const uploadFileRpc = createServerFn({
  method: 'POST',
  parseBody: false, // Skip automatic FormData parsing
})
  .handler(async (ctx) => {
    const request = getRequest();
    // request.body stream is still available
  });
```

Option 2: Provide a separate handler for raw body access
```ts
export const uploadFileRpc = createServerFn({ method: 'POST' })
  .rawBodyHandler(async (ctx, request) => {
    // Access raw request before any parsing
    // request.body stream is available here
  });
```

Screenshots or Videos
No response
Platform
- @tanstack/react-start: 1.133.20
- Runtime: Node.js with Hono
Additional context
No response