fix: file manager limits #663
📝 Walkthrough

Added a batch file upload handler supporting up to 50 files with per-file validation and error aggregation. Enhanced folderId normalization, standardized response shapes across endpoints to include owner/signatures/canPreview fields, exposed a getFileSignatures endpoint, and refined error-handling consistency.
Sequence Diagram

```mermaid
sequenceDiagram
    actor Client
    participant FileController
    participant FileService
    participant StorageSystem
    participant QuotaManager
    Client->>FileController: POST /uploadFiles (50 files)
    FileController->>FileController: Parse & validate array
    loop For each file
        FileController->>FileController: Validate file size (≤1GB)
        FileController->>QuotaManager: Check user quota
        alt Quota exceeded
            FileController-->>FileController: Add per-file error
        else Quota available
            FileController->>FileService: Create file record
            FileService->>StorageSystem: Store file
            StorageSystem-->>FileService: Return file metadata
            FileService-->>FileController: File created
            FileController-->>FileController: Add to success results
        end
    end
    FileController-->>Client: Return aggregated response (successes + errors)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @platforms/file-manager-api/src/controllers/FileController.ts:
- Around line 10-13: The uploadMultiple middleware uses multer.memoryStorage
which can OOM for large batches (e.g., 50×1GB); replace multer.memoryStorage()
with multer.diskStorage configured to write to a temporary directory (e.g.,
os.tmpdir()) or change to a streaming approach that pipes uploads directly to
your storage backend, and add stricter multer limits (limits: { files:
<reasonable_max>, fileSize: <per_file_limit> }) and ensure uploaded temp files
are cleaned up after processing; update the same upload middleware usage
elsewhere (uploadMultiple) to match.
- Around line 153-157: Remove the unreachable duplicate authentication guard
that checks if (!req.user) and returns res.status(401).json({ error:
"Authentication required" }) inside the same request handler; keep the original
early-return check (the one at the top of the handler) and delete the later
redundant block so authentication is only validated once per request in this
controller method.
🧹 Nitpick comments (5)
platforms/file-manager-api/src/controllers/FileController.ts (5)
5-13: Consolidate duplicate multer configurations.

Both `upload` and `uploadMultiple` have identical configuration (1GB limit, memoryStorage). A single multer instance can handle both single and multiple file uploads.

♻️ Proposed consolidation

```diff
-const upload = multer({
-  limits: { fileSize: 1024 * 1024 * 1024 }, // 1GB limit
-  storage: multer.memoryStorage(),
-});
-
-const uploadMultiple = multer({
+const upload = multer({
   limits: { fileSize: 1024 * 1024 * 1024 }, // 1GB limit
   storage: multer.memoryStorage(),
 });
```

Then use `upload.single("file")` and `upload.array("files", 50)` throughout.
36-44: Remove redundant file size check.

Multer already enforces the 1GB limit at configuration (line 6), so this manual check is unreachable. If a file exceeds 1GB, multer will reject it before reaching this code.

♻️ Proposed fix

```diff
-// Check file size limit (1GB)
-const MAX_FILE_SIZE = 1024 * 1024 * 1024; // 1GB in bytes
-if (req.file.size > MAX_FILE_SIZE) {
-  return res.status(413).json({
-    error: "File size exceeds 1GB limit",
-    maxSize: MAX_FILE_SIZE,
-    fileSize: req.file.size,
-  });
-}
```
62-68: Extract folderId normalization to a helper method.

This normalization logic is duplicated across at least 5 handlers (uploadFile, uploadFiles, updateFile, getFiles, moveFile). Extract it to reduce duplication and ensure consistency.

♻️ Proposed helper method

Add this private helper to the FileController class:

```typescript
private normalizeFolderId(folderId: any): string | null {
  return folderId === "null" ||
    folderId === "" ||
    folderId === null ||
    folderId === undefined
    ? null
    : folderId;
}
```

Then replace all occurrences with:

```diff
-const normalizedFolderId =
-  folderId === "null" ||
-  folderId === "" ||
-  folderId === null ||
-  folderId === undefined
-    ? null
-    : folderId;
+const normalizedFolderId = this.normalizeFolderId(folderId);
```
251-273: Consider extracting signature mapping to reduce duplication.

The signature mapping logic (including user details) is duplicated in `getFile` (lines 315-331) and `getFileSignatures` (lines 500-517). Consider extracting to a private helper method.

♻️ Example helper method

```typescript
private mapSignature(sig: any) {
  return {
    id: sig.id,
    userId: sig.userId,
    user: sig.user
      ? {
          id: sig.user.id,
          name: sig.user.name,
          ename: sig.user.ename,
          avatarUrl: sig.user.avatarUrl,
        }
      : null,
    md5Hash: sig.md5Hash,
    signature: sig.signature,
    publicKey: sig.publicKey,
    message: sig.message,
    createdAt: sig.createdAt,
  };
}
```
81-91: Response shape inconsistency across endpoints.

The `uploadFile` response doesn't include the `owner` or `canPreview` fields that are returned by `getFiles` (lines 261-271) and `getFile` (lines 302-332). Consider standardizing response shapes for consistency, unless this omission is intentional.
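One hedged sketch of such standardization: a single mapper used by every endpoint. The `toFileResponse` helper and its field list are assumptions drawn from the owner/signatures/canPreview shape described in this review, not existing code.

```typescript
// Hypothetical shared response mapper so uploadFile, getFiles, and
// getFile all return the same shape. Field names follow the review's
// description; the "previewable" set stands in for whatever logic
// the codebase actually uses to compute canPreview.
interface FileRecord {
  id: string;
  name: string;
  mimeType: string;
  owner?: { id: string; name: string } | null;
  signatures?: unknown[];
}

function toFileResponse(file: FileRecord, previewable: Set<string>) {
  return {
    id: file.id,
    name: file.name,
    mimeType: file.mimeType,
    owner: file.owner ?? null,          // always present, null when unknown
    signatures: file.signatures ?? [],  // always an array
    canPreview: previewable.has(file.mimeType),
  };
}
```

With a mapper like this, omitting a field in one endpoint becomes a deliberate choice rather than drift.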
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (4)

- `platforms/esigner/.svelte-kit/ambient.d.ts` is excluded by `!**/.svelte-kit/**`
- `platforms/esigner/.svelte-kit/non-ambient.d.ts` is excluded by `!**/.svelte-kit/**`
- `platforms/file-manager/.svelte-kit/ambient.d.ts` is excluded by `!**/.svelte-kit/**`
- `platforms/file-manager/.svelte-kit/non-ambient.d.ts` is excluded by `!**/.svelte-kit/**`
📒 Files selected for processing (2)

- `platforms/file-manager-api/src/controllers/FileController.ts`
- `platforms/file-manager/src/routes/(protected)/files/+page.svelte`
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build
🔇 Additional comments (4)
platforms/file-manager-api/src/controllers/FileController.ts (4)
181-190: Batch upload doesn't support per-file displayName and description.

Unlike `uploadFile` (line 59), the batch upload handler hardcodes `displayName` and `description` as `undefined`. This means all uploaded files will have null metadata. Consider whether this is intentional or if you want to support per-file metadata via a different mechanism (e.g., a parallel JSON array in the request body).
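A hedged sketch of the parallel-array idea: the client sends a JSON array in the multipart body, aligned by index with the uploaded files. The `metadata` field name and the helper below are hypothetical, not part of this codebase.

```typescript
// Parse a per-file metadata array sent alongside the files.
// Accepts either a JSON string (typical for multipart form fields)
// or an already-parsed array; anything malformed degrades to empty
// metadata rather than failing the whole batch.
interface FileMeta {
  displayName?: string;
  description?: string;
}

function parseFileMetadata(raw: unknown, fileCount: number): FileMeta[] {
  let parsed: unknown = raw;
  if (typeof raw === "string") {
    try {
      parsed = JSON.parse(raw);
    } catch {
      parsed = [];
    }
  }
  const list = Array.isArray(parsed) ? (parsed as FileMeta[]) : [];
  // Pad or truncate so every file index has an entry.
  return Array.from({ length: fileCount }, (_, i) => list[i] ?? {});
}
```

The handler would then pass `metas[i].displayName` and `metas[i].description` when creating each file record, instead of hardcoding `undefined`.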
159-217: Verify partial upload behavior is intentional.

The batch upload processes files sequentially and returns both successful uploads and errors. If some files fail, the successfully uploaded files remain in storage (no rollback). Verify this partial-success pattern aligns with your requirements, as users may expect an all-or-nothing transaction.
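If all-or-nothing semantics are wanted instead, one sketch is to track what has been stored and roll it back on the first failure. Here `store` and `remove` are hypothetical stand-ins for the real FileService/StorageSystem calls.

```typescript
// All-or-nothing batch upload sketch: on any failure, delete every
// file stored so far before re-throwing, so no partial state remains.
async function uploadAllOrNothing(
  files: string[],
  store: (f: string) => Promise<string>, // returns stored file id
  remove: (id: string) => Promise<void>,
): Promise<string[]> {
  const storedIds: string[] = [];
  try {
    for (const f of files) {
      storedIds.push(await store(f));
    }
    return storedIds;
  } catch (err) {
    // Best-effort rollback; allSettled so one failed delete
    // doesn't abort the cleanup of the rest.
    await Promise.allSettled(storedIds.map(remove));
    throw err;
  }
}
```

Note that rollback itself can fail, so the current partial-success pattern with explicit per-file errors is also a defensible design; the point is to pick one deliberately.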
412-418: LGTM: Proper Content-Disposition headers for download vs preview.

The distinction between `attachment` for downloads and `inline` for previews is correct and follows HTTP best practices.

Also applies to: 446-451
483-523: LGTM: File signatures endpoint properly secured.

The new `getFileSignatures` endpoint includes proper authentication checks and verifies file access before returning signature data. The response mapping includes user details consistently with other endpoints.
```typescript
const uploadMultiple = multer({
  limits: { fileSize: 1024 * 1024 * 1024 }, // 1GB limit
  storage: multer.memoryStorage(),
});
```
Memory storage risk with large batch uploads.
Using memoryStorage with 50 files × 1GB max = potential 50GB memory consumption. This can crash the Node.js process with out-of-memory errors. Consider using diskStorage for temporary file storage or streaming directly to your storage backend.
Also applies to: 103-103
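A sketch of the disk-backed alternative the review suggests, assuming the `multer` package's `diskStorage` API; the temp-file naming scheme and the cleanup note are assumptions about how this codebase would wire it up, not existing code.

```typescript
import os from "os";
import path from "path";
import crypto from "crypto";
import multer from "multer";

// Spool uploads to the OS temp directory instead of RAM, and cap
// both file count and per-file size so a 50-file batch cannot OOM
// the Node.js process.
const tmpStorage = multer.diskStorage({
  destination: (_req, _file, cb) => cb(null, os.tmpdir()),
  filename: (_req, file, cb) =>
    cb(null, `${crypto.randomUUID()}${path.extname(file.originalname)}`),
});

const uploadMultiple = multer({
  storage: tmpStorage,
  limits: {
    files: 50, // matches the batch cap
    fileSize: 1024 * 1024 * 1024, // 1GB per file
  },
});

// With diskStorage, each req.files[i] has a .path instead of a
// .buffer; the handler must unlink those temp files after
// processing (e.g., fs.promises.unlink in a finally block).
```

Streaming directly to the storage backend would avoid the temp files entirely, at the cost of more invasive changes to the service layer.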
```typescript
const { used, limit } =
  await this.fileService.getUserStorageUsage(req.user.id);
```
Potential storage quota race condition with concurrent uploads.
The storage quota is fetched once at the beginning (line 118-119), and while currentUsed is tracked locally within the batch (line 192), there's no protection against concurrent upload requests from the same user. If multiple batches are uploaded simultaneously, each will check against the same initial quota, potentially exceeding the limit.
Consider implementing database-level constraints or locks to ensure quota enforcement across concurrent requests.
Also applies to: 172-179
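One way to close the race is to make the quota check and the usage increment a single atomic step, e.g. a conditional `UPDATE ... WHERE storage_used + $bytes <= storage_limit` at the database level. The sketch below illustrates the reserve/release pattern with an in-memory stand-in for the database; the helper names are hypothetical.

```typescript
// Atomic quota reservation sketch. In production this would be one
// conditional UPDATE so two concurrent batches cannot both pass the
// same stale check; here a Map stands in for the users table.
interface QuotaRow {
  used: number;
  limit: number;
}

const quotas = new Map<string, QuotaRow>();

function tryReserve(userId: string, bytes: number): boolean {
  const row = quotas.get(userId);
  if (!row) return false;
  // Check and increment happen together, not as separate steps.
  if (row.used + bytes > row.limit) return false;
  row.used += bytes;
  return true;
}

function release(userId: string, bytes: number): void {
  // Called when a reserved upload ultimately fails, so quota
  // is not leaked by failed files.
  const row = quotas.get(userId);
  if (row) row.used = Math.max(0, row.used - bytes);
}
```

The batch handler would call `tryReserve` per file before storing it and `release` on storage failure, instead of comparing against a quota snapshot taken once at the top of the request.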
```typescript
if (!req.user) {
  return res
    .status(401)
    .json({ error: "Authentication required" });
}
```
Remove duplicate authentication check.
This authentication check is unreachable dead code. The same check already exists at lines 111-115 and would have returned early if req.user is not present.
🔧 Proposed fix

```diff
       const results: UploadResult[] = [];
       const errors: UploadError[] = [];
       let currentUsed = used;

-      if (!req.user) {
-        return res
-          .status(401)
-          .json({ error: "Authentication required" });
-      }
-
       for (const file of files) {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-if (!req.user) {
-  return res
-    .status(401)
-    .json({ error: "Authentication required" });
-}
 const results: UploadResult[] = [];
 const errors: UploadError[] = [];
 let currentUsed = used;

 for (const file of files) {
```
Description of change
Issue Number
Type of change
How the change has been tested
Change checklist
Summary by CodeRabbit
New Features
Bug Fixes & Improvements