Feat/evault file manager #654

Conversation
Caution: Review failed. The pull request is closed.

📝 Walkthrough

Adds a new File Manager platform (Express + TypeORM API, SvelteKit frontend, DB migrations, Web3 adapter mappings) and extends esigner signing support (fileSigneeId), webhook processing, and invitation-based single-use signature validation across services.

Changes
Sequence Diagram(s)

sequenceDiagram
participant Browser
participant FileManagerAPI
participant DB
participant Web3Adapter
participant EsignerAPI
Browser->>FileManagerAPI: GET /api/auth/offer
FileManagerAPI-->>Browser: QR data + session ID
Note over Browser: User scans and authenticates via eID Wallet
Browser->>FileManagerAPI: POST /api/auth (ename, session, signature)
FileManagerAPI->>DB: Lookup/create user
FileManagerAPI-->>Browser: JWT token + user
Browser->>FileManagerAPI: POST /api/files (multipart)
FileManagerAPI->>FileManagerAPI: calc MD5, persist via FileService
FileManagerAPI->>DB: Save File entity
DB->>Web3Adapter: afterInsert event (file)
Web3Adapter->>Web3Adapter: enrich relations, convert buffers->base64
Web3Adapter->>Web3Adapter: map to universal schema, debounce
Web3Adapter-->>EsignerAPI: forward change (universal)

sequenceDiagram
participant Signer as User (Signer)
participant EsignerApp
participant EsignerAPI
participant FileManagerAPI
participant DB
Signer->>EsignerApp: Accept invitation, sign file
EsignerApp->>EsignerAPI: POST signature (fileId, userId, invitationId)
EsignerAPI->>DB: Create SignatureContainer (fileSigneeId = invitationId)
EsignerAPI->>EsignerAPI: store mapping/lock
EsignerAPI->>FileManagerAPI: POST /api/webhook (signature_containers payload)
FileManagerAPI->>FileManagerAPI: resolve global->local IDs
FileManagerAPI->>DB: create SignatureContainer in file-manager DB
FileManagerAPI->>FileManagerAPI: map IDs, lock processed IDs
EsignerApp->>EsignerAPI: GET /api/files/:id/signatures
EsignerAPI->>DB: return signatures including fileSigneeId
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks and finishing touches

❌ Failed checks (2 inconclusive)
✅ Passed checks (1 passed)
📜 Recent review details

Configuration used: defaults
Review profile: CHILL
Plan: Pro

⛔ Files ignored due to path filters (4)
📒 Files selected for processing (3)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 13
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Fix all issues with AI Agents 🤖
In @platforms/esigner-api/src/controllers/WebhookController.ts:
- Around line 319-321: The code calls this.adapter.addToLockedIds(file.id)
before persisting the new File, risking an undefined id; change the order to
await this.fileRepository.save(file) first (or explicitly ensure the entity has
an id after creation), then call this.adapter.addToLockedIds(file.id); update
any related logic around this.fileRepository.create(...) and the save result so
you use the saved entity (with its assigned id) when calling addToLockedIds.
- Around line 402-409: The code in WebhookController is reading signature.id
before the persistent save completes; update the flow so you call await
this.signatureRepository.save(signature) first, then use the
returned/now-populated id for this.adapter.addToLockedIds(...) and for
mappingDb.storeMapping's localId and for assigning localId; specifically, move
the await this.signatureRepository.save(signature) above
this.adapter.addToLockedIds(signature.id) (or capture the saved entity/returned
id from save and use that) and then call this.adapter.mappingDb.storeMapping
with the saved id.
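A minimal sketch of the save-then-lock ordering both items above ask for, assuming repository and adapter members named as in the review (fileRepository, signatureRepository, adapter.addToLockedIds, adapter.mappingDb.storeMapping); the argument shapes are illustrative, not the controller's actual ones:

```typescript
// Persist first so TypeORM assigns the uuid primary key, then lock and map.
const savedFile = await this.fileRepository.save(this.fileRepository.create(fileData));
this.adapter.addToLockedIds(savedFile.id); // id is defined only after save()

const savedSignature = await this.signatureRepository.save(signature);
this.adapter.addToLockedIds(savedSignature.id);
await this.adapter.mappingDb.storeMapping({
    localId: savedSignature.id, // use the generated id, not the unsaved entity
    globalId: payload.id,       // illustrative field name
});
const localId = savedSignature.id;
```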
In @platforms/esigner-api/src/services/InvitationService.ts:
- Around line 30-37: The single-use check in InvitationService (the
existingSignatures lookup via this.signatureRepository.find with fileId) is
vulnerable to a TOCTOU race; wrap the check-and-create logic that spans the
validation and invitation creation (the block that precedes and includes the
create-invitation flow) in a database transaction, acquire a pessimistic
write/row lock on the target file record (or an appropriate row representing the
file) inside that transaction, replace the find(...) call with a count(...)
query for existence, and perform the invitation creation only within the same
transaction so concurrent requests are serialized and cannot both pass the
check.
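A sketch of the transactional single-use check described above, using a TypeORM pessimistic write lock and a count() existence query; the entity names (File, SignatureContainer, Invitation) and payload shape are assumptions, not the service's real signatures:

```typescript
import { DataSource } from "typeorm";

export async function createInvitationOnce(
    dataSource: DataSource,
    fileId: string,
    invitationData: { fileId: string; inviteeEname: string }
) {
    return dataSource.transaction(async (manager) => {
        // Row lock on the target file serializes concurrent invitation requests.
        await manager
            .getRepository("File")
            .createQueryBuilder("file")
            .setLock("pessimistic_write")
            .where("file.id = :fileId", { fileId })
            .getOne();

        // Existence check via count instead of loading full rows.
        const existing = await manager
            .getRepository("SignatureContainer")
            .count({ where: { fileId } });
        if (existing > 0) {
            throw new Error(
                "This file has already been used in a signature container and cannot be reused"
            );
        }

        // Creation happens inside the same transaction, so the check cannot race.
        const repo = manager.getRepository("Invitation");
        return repo.save(repo.create(invitationData));
    });
}
```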
In @platforms/esigner-api/src/web3adapter/mappings/file.mapping.json:
- Around line 6-17: Add the missing folderId mapping to the localToUniversalMap
so folder relationships are synchronized; update the localToUniversalMap (in
file.mapping.json) to include a "folderId" entry that mirrors how ownerId is
mapped (i.e., include the folder relationship mapping such as
folders(folder.id),folderId or a direct "folderId" mapping depending on existing
relationship patterns).
- Line 3: The file.mapping.json contains a placeholder schemaId
("a1b2c3d4-e5f6-7890-abcd-ef1234567890") for the File entity which will break
Web3 adapter mapping; replace that placeholder with the actual File schema UUID
used across file-manager-api and esigner-api, i.e., update the "schemaId" value
in platforms/esigner-api/src/web3adapter/mappings/file.mapping.json to the real
File schema identifier so all mappings reference the correct schema.
In @platforms/esigner-api/src/web3adapter/watchers/subscriber.ts:
- Around line 138-159: afterInsert duplicates relation-loading logic that's
already in enrichEntity, causing extra DB queries; extract the
relation-resolution into a single helper (e.g., loadEntityWithRelations(entity,
metadata) or getFullEntityWithRelations) that encapsulates the
AppDataSource.getRepository(...).findOne logic and the tableName -> relations
mapping for "files" and "signature_containers", call that helper from
afterInsert/afterUpdate (once) and from enrichEntity as needed, and remove the
inline relation-loading block currently inside afterInsert so the entity is
loaded only via the shared helper.
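One possible shape for the shared helper this item describes; the relation lists and the import path for AppDataSource are assumptions based on the tables named in the review ("files", "signature_containers"):

```typescript
import { AppDataSource } from "../../database/data-source"; // path is an assumption

const RELATIONS_BY_TABLE: Record<string, string[]> = {
    files: ["owner"],
    signature_containers: ["file", "user"],
};

export async function loadEntityWithRelations(
    entity: { id: string },
    tableName: string,
    entityTarget: any
): Promise<any> {
    const relations = RELATIONS_BY_TABLE[tableName];
    if (!relations) return entity; // nothing extra to load for other tables

    const reloaded = await AppDataSource.getRepository(entityTarget).findOne({
        where: { id: entity.id },
        relations,
    });
    return reloaded ?? entity;
}

// afterInsert, afterUpdate, and enrichEntity can all call this one helper
// instead of duplicating the findOne + relations logic inline.
```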
In @platforms/file-manager-api/package.json:
- Line 9: The build script uses the Unix-only `cp -r` which fails on Windows;
replace it with a cross-platform tool such as `cpy-cli`: update the "build"
script in package.json to use cpy (for example: `tsc && cpy
"src/web3adapter/mappings/**/*" dist/web3adapter/ --parents`) and add `cpy-cli`
to devDependencies (e.g., `"cpy-cli": "^5.0.0"`), or alternatively implement a
small Node copy script and call it from the "build" script; ensure the script
references the existing mappings path (`src/web3adapter/mappings`) and target
(`dist/web3adapter/`).
- Line 22: Update the package.json dependency entry for "jsonwebtoken" from
"^9.0.2" to "^9.0.3" (the version string in package.json is the unique symbol to
change), then run your package manager (npm install or yarn install) to update
node_modules and regenerate the lockfile (package-lock.json or yarn.lock);
commit both the updated package.json and the updated lockfile.
In @platforms/file-manager-api/src/controllers/AccessController.ts:
- Around line 188-214: revokeFolderAccess currently calls
this.accessService.revokeFolderAccess without verifying the requester owns the
folder; add the same ownership check used by revokeFileAccess (lines ~68-78)
before attempting revocation: fetch the folder by id (e.g. via
this.folderService.getById or the equivalent method used in revokeFileAccess),
compare its owner id to req.user.id, return 403 if not owner, and only then call
this.accessService.revokeFolderAccess(id, userId, req.user.id) and proceed with
the existing success/error handling.
- Around line 151-186: grantFolderAccess lacks a folder ownership check before
granting permissions; mirror the pattern used in grantFileAccess by fetching the
folder (e.g., this.folderService.getById or getOwner/getFolderById) for the
given id, compare its ownerId to req.user.id, and if they don't match return
res.status(403).json({ error: "Forbidden" }) (or similar) before calling
this.accessService.grantFolderAccess; keep the same error handling and response
shape as the existing method.
- Line 25: The controller currently uses dynamic require expressions like new
(require('../services/FileService').FileService)() inside request handlers (seen
at the FileService instantiation on lines around 25, 69, 109), which is
inefficient and breaks static analysis; instead import FileService at the top of
AccessController (e.g., import { FileService } from '../services/FileService')
and create a single shared instance (e.g., const fileService = new
FileService()) that the controller methods reuse, then replace the inline
dynamic new (require(...).FileService)() calls in the methods with references to
that shared fileService instance.
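A sketch combining the three AccessController items above: top-level imports with shared service instances instead of inline require(), plus an ownership guard before folder access changes. The FolderService method name (getFolderById) and the ownerId field are assumptions:

```typescript
import { Request, Response } from "express";
import { AccessService } from "../services/AccessService";
import { FileService } from "../services/FileService";
import { FolderService } from "../services/FolderService";

// Assumes auth middleware attaches the current user to the request.
interface AuthenticatedRequest extends Request {
    user?: { id: string };
}

export class AccessController {
    private accessService = new AccessService();
    private fileService = new FileService();     // single shared instance
    private folderService = new FolderService(); // single shared instance

    revokeFolderAccess = async (req: AuthenticatedRequest, res: Response) => {
        try {
            if (!req.user) {
                return res.status(401).json({ error: "Authentication required" });
            }
            const { id, userId } = req.params;

            // Mirror the ownership check used by revokeFileAccess.
            const folder = await this.folderService.getFolderById(id, req.user.id);
            if (!folder || folder.ownerId !== req.user.id) {
                return res.status(403).json({ error: "Forbidden" });
            }

            await this.accessService.revokeFolderAccess(id, userId, req.user.id);
            res.json({ message: "Folder access revoked" });
        } catch (error) {
            console.error("Error revoking folder access:", error);
            res.status(500).json({ error: "Failed to revoke folder access" });
        }
    };
}
```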
In @platforms/file-manager-api/src/controllers/AuthController.ts:
- Around line 108-112: After verifying the signature, the code currently throws
a generic Error when userService.findByEname(ename) returns null, which leads to
a 500 response; update the AuthController to return an appropriate HTTP status
(404 Not Found or 401 Unauthorized) instead of a generic Error: replace the
throw new Error("User not found") with a specific HTTP error (e.g., throw new
HttpError(404, "User not found") or set the response status and body like
res.status(404).json({ error: "User not found" })) so the catch handler doesn't
convert this case into a 500; locate the check around
userService.findByEname(ename) in AuthController and apply the specific status
approach consistent with your project's error types (HttpError/ApiError or
direct res/ctx status).
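A small excerpt showing the specific-status approach, assuming the handler can return through res directly (userService.findByEname comes from the item above):

```typescript
const user = await this.userService.findByEname(ename);
if (!user) {
    // Return a specific status instead of throwing a generic Error,
    // which the catch handler would otherwise convert into a 500.
    return res.status(404).json({ error: "User not found" });
}
```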
In @platforms/file-manager-api/src/controllers/FileController.ts:
- Around line 256-274: The deleteFile and getFileSignatures methods have extra
indentation compared to other controller methods; fix by realigning their
function declarations and bodies to the same indentation level as other methods
in FileController (e.g., adjust the indentation of deleteFile = async (req:
Request, res: Response) => { ... } and getFileSignatures to match the class
method block style), ensuring opening/closing braces and nested statements
follow the file's existing indentation conventions so the methods visually align
with the rest of the class.
- Around line 219-221: The Content-Disposition header in FileController uses
file.name directly, which risks header injection; sanitize or encode the
filename before setting the header in the method that sets 'Content-Disposition'
(where res.setHeader('Content-Disposition', `attachment;
filename="${file.name}"`) is used). Either validate/replace dangerous characters
(remove CR/LF, quotes, backslashes, and control chars and fallback to a safe
name) or generate a proper RFC5987-encoded filename using the
content-disposition package and use that encoded value for the header; ensure
the sanitized/encoded filename variable replaces file.name in the res.setHeader
call.
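A minimal filename sanitizer for the Content-Disposition item above; using the content-disposition npm package instead would also cover full RFC 5987 handling. The helper name is hypothetical:

```typescript
function safeContentDisposition(name: string): string {
    // Strip CR/LF, quotes, backslashes, and control characters; fall back to a safe name.
    const sanitized =
        name.replace(/[\r\n"\\\u0000-\u001f]/g, "").trim() || "download";
    // Plain filename for legacy clients, RFC 5987 filename* for non-ASCII names.
    return `attachment; filename="${sanitized}"; filename*=UTF-8''${encodeURIComponent(sanitized)}`;
}

// Usage in the controller, replacing the direct interpolation of file.name:
// res.setHeader("Content-Disposition", safeContentDisposition(file.name));
```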
In @platforms/file-manager-api/src/controllers/WebhookController.ts:
- Line 339: The calls to this.adapter.addToLockedIds(...) use file.id and
signature.id before the entities are persisted; with TypeORM
@PrimaryGeneratedColumn("uuid") IDs are assigned on save, so move the
addToLockedIds calls to after the corresponding await repository.save(...) calls
in WebhookController (e.g., after save() that persists the File entity and after
save() that persists the Signature entity) so you pass the generated id; update
the locations where file.id (around the current call near
this.adapter.addToLockedIds(file.id)) and signature.id (near
this.adapter.addToLockedIds(signature.id)) are referenced to occur only
post-save.
- Line 102: Replace fragile inline parsing (e.g., const userId =
ref.split("(")[1].split(")")[0];) with a small reusable helper like
extractIdFromReference(ref: string): string | null that uses a safe regex to
capture text inside parentheses and returns null on failure; update all
occurrences (references that set userId, ownerId, etc. in WebhookController
methods) to call this helper, check for null, log the invalid reference (include
the offending string) and return a 400 error instead of allowing an exception to
be thrown. Ensure the helper is exported/defined near the top of
WebhookController and used at the spots you noted (the lines that compute
userId/ownerId from ref/local.data.*).
- Around line 35-40: The axios.post in WebhookController (the call using new
URL("file-manager", process.env.ANCHR_URL).toString() with req.body) is
currently fire-and-forget and can cause unhandled promise rejections; wrap the
HTTP call with proper async error handling—either await it inside a try/catch or
explicitly attach .catch(...) (or use void with .catch) to log failures via your
logger and avoid silent rejections; ensure you log the URL/context and error
details so ANCHR failures are visible without changing the intended flow.
In @platforms/file-manager-api/src/database/data-source.ts:
- Line 36: The migrations config in data-source.ts uses the pattern "*.ts" which
will break in production after compilation; update the migrations entry in the
DataSource options (the migrations property) to load .js files in production
(e.g., "*.js") or use a conditional/ENV-aware pattern that switches between
"*.ts" for development and "*.js" for production (or use a runtime-resolved path
via __dirname with process.env.NODE_ENV) so migrations work both locally and in
the compiled build.
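An illustrative data-source excerpt that switches the glob on NODE_ENV so compiled .js migrations are picked up in production; the connection options and directory layout are assumptions:

```typescript
import path from "path";
import { DataSource } from "typeorm";

const isProd = process.env.NODE_ENV === "production";

export const AppDataSource = new DataSource({
    type: "postgres",
    url: process.env.DATABASE_URL,
    // Resolve relative to the running file so both ts-node and the compiled build work.
    entities: [path.join(__dirname, "entities", isProd ? "*.js" : "*.ts")],
    migrations: [path.join(__dirname, "migrations", isProd ? "*.js" : "*.ts")],
});
```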
In @platforms/file-manager-api/src/database/entities/File.ts:
- Around line 35-36: The File entity's size property is declared as a TypeORM
"bigint" column but typed as number, causing a runtime mismatch; update the File
class's size field to match TypeORM runtime behavior by either changing the
TypeScript type to string (size: string) or add a column transformer/mapper on
the @Column("bigint") definition to convert between string and number/BigInt (so
the File.size getter/setter returns a number/BigInt while persisting as string).
Ensure references to File.size elsewhere in the codebase are adjusted
accordingly (or perform explicit parsing where the service layer consumes the
entity).
- Around line 41-42: The File entity currently persists raw bytes via the data:
Buffer property with @Column({ type: "bytea" }), which is not scalable; replace
this by removing or deprecating the data field and add a storageKey (string)
column on the File entity to store the object storage key/URL, then
implement/use a FileStorageService to handle upload/download of file payloads
(upload in your create/update flows and save the returned key to storageKey;
download by fetching via FileStorageService using storageKey). Update any
repository/service methods that reference File.data to call FileStorageService
instead, add a DB migration to drop the bytea column and add storageKey, and
update tests and API payloads to pass/return storage keys or presigned URLs
rather than raw bytes.
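A sketch of a File entity addressing both items above: a ValueTransformer so the bigint size column surfaces as a number, and a storageKey column in place of raw bytea data. Column names other than size are assumptions:

```typescript
import { Entity, PrimaryGeneratedColumn, Column, ValueTransformer } from "typeorm";

// Postgres bigint round-trips as a string through the driver; convert at the boundary.
const bigintToNumber: ValueTransformer = {
    to: (value: number) => value,
    from: (value: string) => parseInt(value, 10),
};

@Entity("files")
export class File {
    @PrimaryGeneratedColumn("uuid")
    id!: string;

    @Column({ type: "bigint", transformer: bigintToNumber })
    size!: number;

    // Object-storage key instead of persisting the payload in the database.
    @Column({ type: "text" })
    storageKey!: string;
}
```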
In @platforms/file-manager-api/src/database/entities/UserEVaultMapping.ts:
- Around line 9-34: The entity UserEVaultMapping lacks DB-level uniqueness even
though code assumes one-to-one mappings; add @Unique decorators to the class to
enforce constraints (at minimum on localUserId) and also on evaultW3id or
evaultUri as appropriate so an eVault identity cannot be mapped to multiple
local users; update the UserEVaultMapping class to declare these unique
constraints (e.g., unique on localUserId and unique on evaultW3id/evaultUri) and
re-run migrations so the database schema prevents duplicate rows that would
break PlatformEVaultService's findOne() assumptions.
In @platforms/file-manager-api/src/index.ts:
- Around line 53-65: The CORS setup in the app.use(cors({...})) block is invalid
because it sets origin: "*" while credentials: true; change the configuration so
credentials are only enabled when a specific origin is allowed—either replace
origin: "*" with a whitelist (e.g., an array or a runtime origin-check function
that returns the request origin when it is in your allowed list) and keep
credentials: true, or keep origin: "*" and remove credentials: true; update the
cors(...) call accordingly so the origin and credentials options are consistent
(refer to the app.use(cors({...})) call and the origin and credentials
properties).
In @platforms/file-manager-api/src/services/FolderService.ts:
- Around line 249-270: The recursive deletion misses removing FileAccess entries
and uses a new repository instance instead of the injected this.fileRepository;
update deleteFolderRecursive to, for each file found, delete associated
FileAccess records (e.g., via this.fileAccessRepository.delete({ fileId: file.id
}) or remove via repository) before removing the file, and replace
AppDataSource.getRepository(File).remove(file) with
this.fileRepository.remove(file) to use the existing repository instance.
In @platforms/file-manager-api/src/services/GroupService.ts:
- Around line 58-77: The fallback loads all private groups into memory; replace
it with a DB-level exact-members query using groupRepository and
createQueryBuilder to avoid full scans: compute sortedMemberIds as before, then
query groups with .innerJoin("group.members","member").where("group.isPrivate =
:isPrivate", { isPrivate: true }).andWhere("member.id IN (:...ids)", { ids:
sortedMemberIds }).groupBy("group.id").having("COUNT(member.id) = :len AND
COUNT(DISTINCT member.id) = :len", { len: sortedMemberIds.length }).getOne() to
return a group whose member set exactly matches sortedMemberIds; alternatively
consider adding a deterministic members-hash column with a unique constraint and
query by that hash for faster lookups (refer to GroupService, groupRepository,
createQueryBuilder, members, sortedMemberIds).
- Around line 226-234: getUserGroups currently does three leftJoinAndSelects
(group.members, group.admins, group.participants) and a WHERE with OR which
causes Cartesian duplicates; change the WHERE to use subqueries that select
group IDs from the join tables instead of joining all relations (use
createQueryBuilder("group").where(qb => `group.id IN
(${qb.subQuery().select("gm.group_id").from("group_members","gm").where("gm.user_id
= :userId").getQuery()}) OR group.id IN
(${qb.subQuery().select("ga.group_id").from("group_admins","ga").where("ga.user_id
= :userId").getQuery()}) OR group.id IN
(${qb.subQuery().select("gp.group_id").from("group_participants","gp").where("gp.user_id
= :userId").getQuery()})`, { userId })) and keep the leftJoinAndSelects only if
needed, or alternatively first fetch matching group IDs via subqueries and then
call findByIds / findOne with relations: ["members","admins","participants"] to
load relations without producing a Cartesian product; update getUserGroups to
use one of these approaches.
- Line 21: In GroupService, memberIds.sort() mutates the caller's array; make a
non-mutating copy before sorting and assign that to sortedMemberIds (for
example, copy the memberIds array then sort the copy using Array.prototype.slice
or the spread operator) so the original memberIds isn't changed; apply the same
change to the second occurrence noted (the sort call around line 121) to avoid
side effects.
In @platforms/file-manager-api/src/services/NotificationService.ts:
- Around line 267-269: The ternary assigned to folderLink is redundant because
both branches produce the same string; replace the conditional expression in the
NotificationService where folderLink is set (currently using
folder.parentFolderId) with a single assignment building the URL from
fileManagerUrl and folder.id (i.e., set folderLink to
`${fileManagerUrl}/files?folderId=${folder.id}`), removing the unnecessary
conditional check.
In @platforms/file-manager-api/src/services/UserService.ts:
- Around line 89-109: The query builds LIKE patterns from searchQuery without
escaping SQL wildcard characters, so add an escapeLike(str: string) helper and
produce a safeQuery = escapeLike(searchQuery) before constructing patterns;
replace usages of searchQuery in the CASE/select and .where parameter values
with safeQuery-derived values (query: `%${safeQuery}%`, exactQuery: safeQuery,
fuzzyQuery: `%${safeQuery.split('').join('%')}%`) and update the WHERE clause
expressions referenced in UserService (the .addSelect CASE and the .where call)
to include ESCAPE '\' on each ILIKE so the backslash-escaped wildcards are
treated literally.
In @platforms/file-manager-api/src/utils/jwt.ts:
- Line 3: Remove the hardcoded fallback for JWT_SECRET in jwt.ts: stop using
"process.env.JWT_SECRET || 'your-secret-key'". Instead require a defined secret
at startup by reading process.env.JWT_SECRET and throwing a clear error (or
exiting) when it's missing so the app fails fast; update the JWT_SECRET constant
initialization and any initialization code that imports it (e.g., in jwt.ts and
any modules referencing JWT_SECRET) to validate presence and throw a descriptive
error like "JWT_SECRET must be set".
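A fail-fast version of the secret loading described above, with no hardcoded fallback:

```typescript
// jwt.ts: require the secret at startup instead of silently falling back.
const secret = process.env.JWT_SECRET;
if (!secret) {
    throw new Error("JWT_SECRET must be set");
}
export const JWT_SECRET: string = secret;
```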
In @platforms/file-manager-api/src/utils/version.ts:
- Around line 7-20: The compareVersions function needs input validation: ensure
both version1 and version2 are strings and extract only the numeric core (strip
any prerelease/build suffix starting at '-' or '+'), then split on '.' and
verify every part is a whole number (match /^\d+$/) before calling Number; if
any numeric part is invalid (would produce NaN) throw a clear Error; finally
perform the existing numeric comparison logic using those validated numeric
parts (this treats prerelease/build metadata as equal to the base numeric
version as suggested).
In @platforms/file-manager-api/src/web3adapter/mappings/group.mapping.json:
- Line 4: The mapping uses the wrong path for the group owner: change
ownerEnamePath from referencing participants to the explicit owner field; update
ownerEnamePath to use "users(owner.ename)" (matching the Group entity's owner
and the pattern used in file.mapping.json) so the mapping points to owner.ename
instead of participants[].ename.
In @platforms/file-manager-api/src/web3adapter/mappings/user.mapping.json:
- Around line 16-17: createdAt and updatedAt in user.mapping.json are missing
the __date() wrapper, causing inconsistent date handling across mappings; verify
and align them. Inspect the source storage/serialization for user dates (compare
with file.mapping.json and signature.mapping.json which use __date(), and
blabsy-w3ds-auth-api which uses __date(calc(...))) and either (a) add the
appropriate wrapper to "createdAt" and "updatedAt" in user.mapping.json (use
__date() or __date(calc(...)) to match the actual stored format) or (b) if
omission is intentional, add a clarifying comment in the mapping and a short
note in the repository docs explaining why user dates differ so others won't
change it inadvertently.
In @platforms/file-manager-api/src/web3adapter/watchers/subscriber.ts:
- Around line 15-20: The code constructs adapter with Web3Adapter using
environment vars cast to string without validation, which can be undefined;
update the adapter initialization (export const adapter = new Web3Adapter(...))
to validate and throw a clear error if any required env var
(FILE_MANAGER_MAPPING_DB_PATH, PUBLIC_REGISTRY_URL,
PUBLIC_FILE_MANAGER_BASE_URL) is missing, and ensure
schemasPath/dbPath/registryUrl/platform are derived from verified values (or
provide sensible defaults) before passing them into Web3Adapter so runtime
crashes are prevented and failures surface with actionable messages.
- Around line 244-257: afterRemove fails to pluralize the table name
consistently; change afterRemove to mirror afterInsert/afterUpdate by computing
a pluralized tableName (e.g., let tableName =
event.metadata.tableName.endsWith('s') ? event.metadata.tableName :
event.metadata.tableName + 's') and use that tableName when calling enrichEntity
and handleChange so both calls receive the same pluralized name; keep the
existing entity enrichment via enrichEntity and pass entity ?? event.entityId
into handleChange as before.
In @platforms/file-manager/package.json:
- Line 19: package.json currently declares both "@sveltejs/adapter-static" and
"@sveltejs/adapter-node", which conflicts and breaks the SvelteKit build; remove
the adapter you don't intend to use (keep only "@sveltejs/adapter-static" for
static deployments or only "@sveltejs/adapter-node" for Node server
deployments), then update svelte.config.js to import and use the matching
adapter name ("@sveltejs/adapter-static" or "@sveltejs/adapter-node") so the
adapter in dependencies and the adapter used in svelte.config.js are the same,
reinstall dependencies and re-run the build.
- Line 40: The package.json entry for the axios dependency is pinned to
"^1.6.7", which is vulnerable; update the axios version spec to a fixed safe
minimum (e.g., "^1.8.2" or a newer 1.x version like "^1.13.2") in package.json
(the "axios" dependency), then regenerate the lockfile by running your package
manager (npm/yarn/pnpm install) and commit the updated lockfile; after updating,
run the test suite and a security audit (npm audit or equivalent) to confirm
vulnerabilities are resolved.
- Line 10: The prepare script currently swallows errors via the fallback "||
echo ''" which hides failures; update the package.json "prepare" script entry
(the key "prepare" that runs svelte-kit sync) to stop silently suppressing
errors — either remove the "|| echo ''" fallback so svelte-kit sync can surface
failures, or replace it with logic that logs a clear warning to stderr
(including context) while preserving the non-zero exit when appropriate so setup
issues are visible.
In @platforms/file-manager/src/lib/stores/access.ts:
- Around line 90-104: Wrap both grantFolderAccess and revokeFolderAccess in
try/catch blocks, calling the API inside the try and only updating the
folderAccess store after a successful response; in the catch log the error
details (include the caught error) using the same logger pattern used for the
file access functions and rethrow the error so callers can handle it. Ensure you
reference the same apiClient calls and keep the existing folderAccess.update
logic but move it into the try after the response is validated.
- Around line 4-5: The stores fileAccess and folderAccess use non-specific any[]
which disables type checking; change their declarations to use the proper
interfaces FileAccess[] and FolderAccess[] (e.g. export const fileAccess =
writable<FileAccess[]>([]); export const folderAccess =
writable<FolderAccess[]>([])) and ensure FileAccess and FolderAccess are
imported or defined in the same module so the writable<T> generics provide
compile-time safety and better IDE hints.
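A sketch covering both store items above: a typed writable plus error handling that only updates the store after a successful API call. The FolderAccess fields and the API path are assumptions:

```typescript
import { writable } from "svelte/store";
import { apiClient } from "../utils/axios";

export interface FolderAccess {
    id: string;
    folderId: string;
    userId: string;
    permission: string;
}

export const folderAccess = writable<FolderAccess[]>([]);

export async function grantFolderAccess(folderId: string, userId: string): Promise<void> {
    try {
        const response = await apiClient.post(`/api/access/folders/${folderId}`, { userId });
        // Update the store only once the API has confirmed the grant.
        folderAccess.update((list) => [...list, response.data]);
    } catch (error) {
        console.error("Failed to grant folder access:", error);
        throw error; // let callers surface the failure
    }
}
```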
In @platforms/file-manager/src/lib/stores/auth.ts:
- Line 67: Remove the unreachable "return true;" statement that follows the
try/catch (it's dead code because the try returns on success and the catch
returns on error); either delete that trailing return or refactor the function
so there is a single, clear return path (e.g., return the successful value from
the try and let the catch rethrow or return a consistent failure value),
ensuring the function's return behavior is explicit and covered by existing
try/catch branches.
In @platforms/file-manager/src/lib/stores/files.ts:
- Around line 60-64: The multipart upload sets an explicit 'Content-Type' header
which omits the required boundary; in the apiClient.post call that sends
formData (the POST to '/api/files' in files.ts) remove the explicit headers
object (or at minimum remove the 'Content-Type' entry) so the browser/axios can
set the correct multipart/form-data boundary automatically when sending
FormData.
In @platforms/file-manager/src/lib/stores/signatures.ts:
- Around line 5-19: The Signature interface is missing the optional fileSigneeId
field which causes type and runtime errors; update the Signature interface
declaration for the exported type named Signature to include fileSigneeId?:
string | null (optional and nullable) alongside the existing properties so
returned API objects with fileSigneeId type-check and consumers can safely
access sig.fileSigneeId.
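An interface shape with the optional, nullable fileSigneeId the item above calls for; the remaining fields are abbreviated guesses at the existing declaration:

```typescript
export interface Signature {
    id: string;
    fileId: string;
    userId: string;
    signature: string;
    fileSigneeId?: string | null; // invitation id when the signer came via an invite
    createdAt: string;
}
```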
In @platforms/file-manager/src/lib/utils/axios.ts:
- Around line 13-25: All three functions (setAuthToken, removeAuthToken,
removeAuthId) access localStorage directly which breaks SSR; guard every
localStorage access with a runtime check (e.g., ensure typeof window !==
'undefined' or typeof localStorage !== 'undefined') before calling localStorage
methods and only modify apiClient.defaults.headers.common['Authorization'] when
running in the browser; update setAuthToken to no-op server-side and update
removeAuthToken/removeAuthId similarly so they safely return without touching
localStorage or apiClient on the server.
- Around line 6-11: apiClient is created without restoring a persisted auth
token, so authentication is lost on refresh; update the axios client
initialization in axios.ts to read the stored token (e.g., from localStorage)
when running in the browser and, if present, call the existing setAuthToken or
directly set apiClient.defaults.headers.Authorization to "Bearer <token>" so the
header is present immediately after axios.create; ensure access to
window/localStorage is guarded for SSR and keep use of API_BASE_URL and
apiClient unchanged.
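A sketch of an SSR-safe axios module covering both items above; the storage key name and the base-URL import are assumptions taken from the review's suggested fix and common SvelteKit conventions:

```typescript
import axios from "axios";
import { PUBLIC_FILE_MANAGER_BASE_URL } from "$env/static/public"; // assumed env import

const isBrowser = typeof window !== "undefined" && typeof localStorage !== "undefined";

export const apiClient = axios.create({
    baseURL: PUBLIC_FILE_MANAGER_BASE_URL,
    headers: { "Content-Type": "application/json" },
});

export function setAuthToken(token: string): void {
    if (!isBrowser) return; // no-op during SSR
    localStorage.setItem("file_manager_auth_token", token);
    apiClient.defaults.headers.common["Authorization"] = `Bearer ${token}`;
}

export function removeAuthToken(): void {
    if (!isBrowser) return;
    localStorage.removeItem("file_manager_auth_token");
    delete apiClient.defaults.headers.common["Authorization"];
}

// Restore a persisted token immediately after creating the client (browser only).
if (isBrowser) {
    const stored = localStorage.getItem("file_manager_auth_token");
    if (stored) setAuthToken(stored);
}
```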
In @platforms/file-manager/src/routes/(auth)/auth/+page.svelte:
- Around line 18-23: getDeepLinkUrl currently returns uri unchanged in both
branches, so remove the dead branching or implement the intended transformation:
either delete getDeepLinkUrl and replace its call sites with the plain uri, or
update getDeepLinkUrl to return a mobile deep-link (e.g., a custom-scheme or
intent URL) when isMobileDevice() is true and a normal web URL (or fallback)
otherwise; locate the function by name getDeepLinkUrl in +page.svelte and adjust
its callers accordingly so behavior is no longer a no-op.
- Around line 93-129: The EventSource opened in watchEventStream is never
cleaned up, causing a memory/leak on component unmount; modify the component to
retain the EventSource instance created by watchEventStream (or have
watchEventStream return it) and call eventSource.close() inside an onDestroy
handler so the connection is explicitly closed when the Svelte component is
destroyed; update references to watchEventStream, eventSource, and the onDestroy
import to ensure the SSE is closed on teardown.
In @platforms/file-manager/src/routes/(protected)/files/[id]/+page.svelte:
- Around line 27-49: The isAuthenticated.subscribe call inside onMount creates a
subscription that is never cleaned up; capture the returned unsubscribe function
(e.g., const unsubscribe = isAuthenticated.subscribe(...)) and ensure it is
called when the component unmounts by registering it in onDestroy (import
onDestroy from 'svelte' and call unsubscribe() inside onDestroy). Keep the
existing auth check logic inside the subscription and remove the subscription
when the component is destroyed to prevent the memory leak.
- Around line 119-136: The downloadFile function creates a Blob URL but never
revokes it, causing a memory leak; update downloadFile to revoke the object URL
after use (e.g., call window.URL.revokeObjectURL(url)) — revoke it in a finally
block or after the link.click() (with a small setTimeout if needed to ensure the
download starts) and keep the existing link.remove() behavior so the created URL
is always cleaned up even on error; reference the downloadFile function, the url
variable, and the created anchor element when applying the change.
- Around line 57-62: The client is embedding the long-lived auth token in
previewUrl (previewUrl, file.canPreview, token, PUBLIC_FILE_MANAGER_BASE_URL),
which exposes credentials in logs and history; instead, change the flow so the
client does not put the user auth token in the URL — either (A) call a backend
endpoint (e.g., /api/files/:id/preview-token) that validates the user session
server-side and returns a short-lived signed preview token or an already-signed
preview URL, then set previewUrl to use that short-lived token, or (B) fetch the
preview via XHR/fetch with the Authorization header and create an object URL
(blob URL) for img/iframe src; update the Svelte code that currently constructs
previewUrl to request the backend preview token (or fetch and blobify) and use
the returned value instead of localStorage token in the query string.
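A Svelte script sketch of option (B) from the last item: fetch the preview with the Authorization header instead of embedding the token in the URL, expose it as a blob URL, and revoke the URL on destroy (which also covers the revokeObjectURL concern above). Endpoint and import paths are assumptions:

```typescript
import { onDestroy } from "svelte";
import { apiClient } from "$lib/utils/axios";

let previewUrl: string | null = null;

async function loadPreview(fileId: string): Promise<void> {
    const response = await apiClient.get(`/api/files/${fileId}/download`, {
        responseType: "blob", // token travels in the header, never in the query string
    });
    previewUrl = URL.createObjectURL(response.data);
}

onDestroy(() => {
    // Always release the blob URL so it does not leak across navigations.
    if (previewUrl) URL.revokeObjectURL(previewUrl);
});
```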
In @platforms/file-manager/src/routes/(protected)/files/+page.svelte:
- Around line 115-120: The subscription to isAuthenticated inside onMount is
never cleaned up and may trigger multiple redirects; change it to capture the
unsubscribe function (e.g., const unsubscribe = isAuthenticated.subscribe(...))
and ensure you call unsubscribe in a cleanup (either return unsubscribe from
onMount or call unsubscribe in onDestroy). Also make the callback guard against
repeated redirects by only calling goto('/auth') when auth is strictly false and
the current route is not already '/auth' (or unsubscribe immediately after
performing the redirect) to avoid multiple navigation attempts.
- Around line 105-113: Subscriptions to the stores currentUser, files, and
folders are never cleaned up causing a memory leak; update the component to
either use Svelte auto-subscriptions (replace manual subscribe usage and direct
assignments to user/filesList/foldersList with $currentUser/$files/$folders in
the template/script) or capture the unsubscribe functions (e.g., const
unsubscribeUser = currentUser.subscribe(...), unsubscribeFiles =
files.subscribe(...), unsubscribeFolders = folders.subscribe(...)) and call them
in a lifecycle cleanup (onDestroy) to unsubscribe when the component is
destroyed; reference the currentUser, files, folders subscribe calls and the
user, filesList, foldersList variables to make the change.
In @platforms/file-manager/src/routes/+page.svelte:
- Around line 6-14: The component creates an isAuthenticated.subscribe inside
onMount but never unsubscribes, causing a memory leak; replace the raw
subscription with either Svelte auto-subscription (use the reactive
$isAuthenticated or a derived store to trigger navigation) or capture the
subscription returned from isAuthenticated.subscribe inside onMount and call its
unsubscribe in the onDestroy cleanup; update references to onMount,
isAuthenticated.subscribe and goto accordingly so navigation still occurs but
the subscription is properly disposed.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
platforms/registry/src/index.ts (1)
162-178: Filter out undefined platform URLs to prevent mapping failures. Three environment variables are not defined in .env.example and lack fallbacks in the registry service: PUBLIC_FILE_MANAGER_BASE_URL, VITE_ECURRENCY_BASE_URL, and PUBLIC_EMOVER_BASE_URL. The PlatformService in eReputation-api will fail to map undefined URLs when consuming this endpoint.

Proposed fix
 server.get("/platforms", async (request, reply) => {
     const platforms = [
         process.env.PUBLIC_PICTIQUE_BASE_URL,
         process.env.PUBLIC_BLABSY_BASE_URL,
         process.env.PUBLIC_GROUP_CHARTER_BASE_URL,
         process.env.PUBLIC_CERBERUS_BASE_URL,
         process.env.PUBLIC_EVOTING_BASE_URL,
         process.env.VITE_DREAMSYNC_BASE_URL,
         process.env.VITE_EREPUTATION_BASE_URL,
         process.env.VITE_ECURRENCY_BASE_URL,
         process.env.PUBLIC_EMOVER_BASE_URL,
         process.env.PUBLIC_ESIGNER_BASE_URL,
         process.env.PUBLIC_FILE_MANAGER_BASE_URL
-    ];
+    ].filter((url): url is string => url !== undefined);
     return platforms;
 });
🟠 Major comments (29)
platforms/file-manager-api/src/database/entities/FileAccess.ts-26-27 (1)
26-27: Permission model is too restrictive for a file manager. The permission column only supports "view", which is insufficient for a comprehensive file management system. Users typically need varying access levels (e.g., read, write, delete, admin).

🔎 Proposed enhancement to support multiple permission levels

- @Column({ default: "view" })
- permission!: "view";
+ @Column({ default: "view" })
+ permission!: "view" | "edit" | "delete" | "admin";

platforms/file-manager-api/src/database/entities/UserEVaultMapping.ts-9-34 (1)
9-34: Add unique constraints to enforce intended mapping cardinality at the database level. The user_evault_mappings table (lines 9-34) lacks unique constraints, despite the code querying with findOne() on localUserId, which assumes uniqueness. While the current application logic prevents duplicates through existence checks, the database schema does not enforce this constraint. This creates a silent failure risk: if duplicates are accidentally created (via direct SQL, migration errors, or future services), findOne() would return only the first row, causing hidden bugs. The intended cardinality appears to be one-to-one (one local user to one eVault identity) based on PlatformEVaultService usage, which maintains a single mapping for "file-manager-platform".

Add @Unique decorators to the entity:

Proposed fix

 import {
     Entity,
     PrimaryGeneratedColumn,
     Column,
     CreateDateColumn,
     UpdateDateColumn,
+    Unique
 } from "typeorm";

 @Entity("user_evault_mappings")
+@Unique(["localUserId"])
+@Unique(["evaultW3id"])
 export class UserEVaultMapping {

platforms/file-manager/src/lib/stores/access.ts-90-104 (1)
90-104: Same error handling issue in folder access functions. grantFolderAccess and revokeFolderAccess have the identical error-handling gap as the file access functions. Apply the same fix pattern here.

platforms/file-manager/src/lib/utils/axios.ts-6-11 (1)
6-11: Missing token restoration on client initialization. The apiClient is created without restoring the persisted token from localStorage. Users will lose authentication state on page refresh until setAuthToken is explicitly called again.

🔎 Proposed fix: Restore token on client-side initialization

 export const apiClient = axios.create({
     baseURL: API_BASE_URL,
     headers: {
         'Content-Type': 'application/json',
     },
 });
+
+// Restore auth token on client side
+if (typeof window !== 'undefined') {
+    const token = localStorage.getItem('file_manager_auth_token');
+    if (token) {
+        apiClient.defaults.headers.common['Authorization'] = `Bearer ${token}`;
+    }
+}

platforms/file-manager/src/lib/stores/access.ts-4-5 (1)
4-5: Replace any[] with properly typed arrays. Stores declared with any[] bypass TypeScript's type checking. Use FileAccess[] and FolderAccess[] (already defined below) for compile-time safety and better IDE support.

🔎 Proposed fix

-export const fileAccess = writable<any[]>([]);
-export const folderAccess = writable<any[]>([]);
+export const fileAccess = writable<FileAccess[]>([]);
+export const folderAccess = writable<FolderAccess[]>([]);

platforms/esigner-api/src/web3adapter/watchers/subscriber.ts-138-159 (1)
138-159: Duplicate relation-loading logic between afterInsert and enrichEntity. afterInsert manually reloads files and signature_containers with specific relations (lines 138-159), then immediately passes the entity to enrichEntity (lines 162-166), which performs similar owner/file/user lookups (lines 50-86). This duplication wastes queries and increases maintenance burden.

🔎 Proposed consolidation

Option 1: Remove the redundant loading in enrichEntity and rely solely on the reload in afterInsert/afterUpdate:

 async enrichEntity(entity: any, tableName: string, tableTarget: any) {
     try {
         const enrichedEntity = { ...entity };
-        // Special handling for File entities to ensure owner relation is loaded
-        if (tableName === "files" && (entity.ownerId || entity.owner)) {
-            const ownerId = entity.owner?.id || entity.ownerId;
-            if (ownerId) {
-                const owner = await AppDataSource.getRepository("User").findOne({
-                    where: { id: ownerId },
-                    select: ["id", "ename", "name"]
-                });
-                if (owner) {
-                    enrichedEntity.owner = owner;
-                }
-            }
-        }
-
-        // Special handling for SignatureContainer entities...
-        if (tableName === "signature_containers") {
-            // ... (remove this block too)
-        }
+        // Relations are already loaded by afterInsert/afterUpdate for files and signature_containers
+        // enrichEntity now only handles serialization via entityToPlain
         return this.entityToPlain(enrichedEntity);
     } catch (error) {
         console.error("Error loading relations:", error);
         return this.entityToPlain(entity);
     }
 }

Option 2: Move all relation loading into a shared helper and call it once per lifecycle hook.
Committable suggestion skipped: line range outside the PR's diff.
platforms/esigner-api/src/web3adapter/mappings/file.mapping.json-6-17 (1)
6-17: Add missing folderId field mapping. The AI summary indicates that the File entity includes a folderId field for folder relationships, but this field is not present in the localToUniversalMap. This omission will prevent folder hierarchy information from being synchronized via the Web3 adapter.

🔎 Proposed fix to add folderId mapping

 "localToUniversalMap": {
     "name": "name",
     "displayName": "displayName",
     "description": "description",
     "mimeType": "mimeType",
     "size": "size",
     "md5Hash": "md5Hash",
     "data": "data",
     "ownerId": "users(owner.id),ownerId",
+    "folderId": "folderId",
     "createdAt": "__date(createdAt)",
     "updatedAt": "__date(updatedAt)"
 }

platforms/esigner-api/src/web3adapter/mappings/file.mapping.json-3-3 (1)
3-3: Replace the schemaId with the actual file schema identifier. The schemaId "a1b2c3d4-e5f6-7890-abcd-ef1234567890" is a placeholder UUID pattern. It is used consistently across the file schema definition and the related mappings in file-manager-api and esigner-api. Replace it with the actual schema identifier for the File entity before deployment to prevent Web3 adapter failures.
platforms/file-manager-api/src/web3adapter/mappings/user.mapping.json-16-17 (1)
16-17: Inconsistent date field handling across platforms requires clarification. The createdAt and updatedAt fields lack the __date() wrapper in user.mapping.json, but this handling differs inconsistently across the codebase:

- Within file-manager-api: user.mapping.json has no wrapper, while file.mapping.json and signature.mapping.json use __date().
- Across platforms: blabsy-w3ds-auth-api applies __date(calc(...)) for user entities, whereas file-manager-api does not.

Determine whether the absence of the __date() wrapper in file-manager-api's user.mapping.json is intentional (e.g., due to a different date storage format) or an oversight that should be aligned with the file and signature entities.

platforms/file-manager-api/src/database/entities/File.ts-41-42 (1)
41-42: Storing files directly in the database is not scalable. Using bytea to store file data directly in PostgreSQL will cause:
- Rapid database growth
- Poor query performance as tables grow
- Increased backup/restore times
- Higher memory consumption
Consider using object storage (S3, MinIO, etc.) and storing only file metadata and storage keys in the database.
💡 Recommended architecture
- @Column({ type: "bytea" })
- data!: Buffer;
+ @Column({ type: "text" })
+ storageKey!: string; // S3 key or file path
+
+ @Column({ type: "varchar", nullable: true })
+ storageBucket!: string | null; // S3 bucket name

Then implement a FileStorageService to handle uploads/downloads to/from object storage.
platforms/file-manager-api/src/web3adapter/mappings/group.mapping.json-4-4 (1)
4-4: Use owner.ename instead of participants[].ename for ownerEnamePath. The ownerEnamePath maps to an entity owner, but the current implementation accesses all participants instead. The Group entity has an explicit owner field; the mapping should use "users(owner.ename)" to match the field semantics and align with similar patterns in other mappings (e.g., file.mapping.json).

platforms/file-manager-api/src/utils/version.ts-7-20 (1)
7-20: Add input validation for version strings. The current implementation doesn't validate version string format. Non-numeric components (e.g., "1.0.0-beta", "1.0.0-rc1") will produce NaN when parsed, leading to incorrect comparison results.

🔎 Recommended fix with validation

 export function compareVersions(version1: string, version2: string): number {
+    // Validate version format (basic semver: major.minor.patch with optional prerelease)
+    const semverRegex = /^(\d+)\.(\d+)\.(\d+)(?:-[\w.]+)?$/;
+    const match1 = version1.match(semverRegex);
+    const match2 = version2.match(semverRegex);
+
+    if (!match1 || !match2) {
+        throw new Error(`Invalid version format. Expected format: major.minor.patch (got: "${version1}", "${version2}")`);
+    }
+
+    // Extract only numeric parts for comparison (ignore prerelease)
-    const v1Parts = version1.split('.').map(Number);
-    const v2Parts = version2.split('.').map(Number);
+    const v1Parts = [match1[1], match1[2], match1[3]].map(Number);
+    const v2Parts = [match2[1], match2[2], match2[3]].map(Number);

     for (let i = 0; i < Math.max(v1Parts.length, v2Parts.length); i++) {
         const v1Part = v1Parts[i] || 0;
         const v2Part = v2Parts[i] || 0;
         if (v1Part < v2Part) return -1;
         if (v1Part > v2Part) return 1;
     }
     return 0;
 }

Note: This treats all prerelease versions as equal to their base version. If you need full semver comparison including prerelease ordering, consider using the semver package instead.

platforms/file-manager-api/package.json-9-9 (1)
9-9: Use cross-platform commands in the build script. The cp -r command is Unix-specific and will fail on Windows, breaking the build for developers on that platform.

🔎 Recommended fix using cross-platform tools

- "build": "tsc && cp -r src/web3adapter/mappings dist/web3adapter/",
+ "build": "tsc && node -e \"require('fs').cpSync('src/web3adapter/mappings', 'dist/web3adapter/mappings', {recursive: true})\"",

Alternatively, use a cross-platform package like cpy-cli:

- "build": "tsc && cp -r src/web3adapter/mappings dist/web3adapter/",
+ "build": "tsc && cpy 'src/web3adapter/mappings/**' dist/web3adapter/",

And add to devDependencies: "cpy-cli": "^5.0.0"

services/ontology/schemas/signature.json-26-37 (1)
26-37: Add format constraints and validation for cryptographic fields. The signature, publicKey, and message fields lack encoding and size constraints. These are security-critical fields that should comply with cryptographic standards:

- signature: Add pattern "^[A-Za-z0-9_-]+$" to enforce base64url encoding per RFC 7515 (JWS)
- publicKey: Currently a string but should be a structured JWK object per RFC 7517 with properties like kty, crv/n, x/y depending on key type (EC/RSA/OKP). Alternatively, if keeping it as a string, add the base64url pattern and document the expected format in the description
- message: Add a maxLength constraint to prevent excessively large payloads that could cause DoS

platforms/file-manager-api/src/controllers/WebhookController.ts-102-102 (1)
102-102: Fragile string parsing for reference extraction. The pattern value.split("(")[1].split(")")[0] is used throughout to extract IDs from reference strings like "users(uuid)". This will throw if the format is unexpected. Add validation or use a helper function with proper error handling.

🔎 Suggested helper function

function extractIdFromReference(ref: string): string | null {
    const match = ref.match(/\(([^)]+)\)/);
    return match ? match[1] : null;
}

// Usage:
const userId = extractIdFromReference(local.data.ownerId);
if (!userId) {
    console.error("Invalid reference format:", local.data.ownerId);
    return res.status(400).json({ error: "Invalid reference format" });
}

Also applies to: 184-184, 260-261, 355-358, 367-368
platforms/file-manager-api/src/index.ts-53-65 (1)
53-65: CORS configuration allows credentials with wildcard origin. Setting origin: "*" with credentials: true is invalid per the CORS specification; browsers will block credentialed requests when the origin is a wildcard. Either specify explicit allowed origins or remove credentials: true.

🔎 Suggested fix

 app.use(
     cors({
-        origin: "*",
+        origin: process.env.ALLOWED_ORIGINS?.split(",") || ["http://localhost:3000"],
         methods: ["GET", "POST", "OPTIONS", "PATCH", "DELETE"],
         allowedHeaders: [
             "Content-Type",
             "Authorization",
             "X-Webhook-Signature",
             "X-Webhook-Timestamp",
         ],
         credentials: true,
     }),
 );

platforms/file-manager-api/src/controllers/WebhookController.ts-35-40 (1)
35-40: Fire-and-forget HTTP request without error handling. The axios.post call is not awaited and has no error handling. If the ANCHR service is unavailable, errors will be silently swallowed as unhandled promise rejections.

🔎 Suggested fix

 if (process.env.ANCHR_URL) {
-    axios.post(
+    axios.post(
         new URL("file-manager", process.env.ANCHR_URL).toString(),
         req.body
-    );
+    ).catch(err => {
+        console.error("Failed to forward webhook to ANCHR:", err.message);
+    });
 }

platforms/file-manager-api/src/services/FolderService.ts-249-270 (1)
249-270: Recursive folder deletion doesn't clean up file access records. When deleting files inside folders, associated FileAccess records are not deleted, which could leave orphaned access records in the database. Also, line 268 creates a new repository instance instead of using this.fileRepository.

🔎 Suggested fix

 private async deleteFolderRecursive(folderId: string): Promise<void> {
     // Get all child folders
     const childFolders = await this.folderRepository.find({
         where: { parentFolderId: folderId },
     });

     // Recursively delete child folders
     for (const childFolder of childFolders) {
         await this.deleteFolderRecursive(childFolder.id);
         await this.folderAccessRepository.delete({ folderId: childFolder.id });
         await this.folderRepository.remove(childFolder);
     }

     // Delete all files in this folder
     const files = await this.fileRepository.find({
         where: { folderId },
     });
+    // Import or inject FileAccess repository
+    const fileAccessRepository = AppDataSource.getRepository(FileAccess);
     for (const file of files) {
-        await AppDataSource.getRepository(File).remove(file);
+        await fileAccessRepository.delete({ fileId: file.id });
+        await this.fileRepository.remove(file);
     }
 }

platforms/file-manager-api/src/controllers/FileController.ts-256-274 (1)
256-274: Inconsistent indentation in deleteFile and getFileSignatures methods. Lines 256-310 have extra indentation compared to other methods in the file. This appears to be a copy-paste error that should be corrected for consistency.

🔎 Fix indentation

-        deleteFile = async (req: Request, res: Response) => {
-            try {
-                if (!req.user) {
-                    return res.status(401).json({ error: "Authentication required" });
-                }
+    deleteFile = async (req: Request, res: Response) => {
+        try {
+            if (!req.user) {
+                return res.status(401).json({ error: "Authentication required" });
+            }

-                const { id } = req.params;
-                const deleted = await this.fileService.deleteFile(id, req.user.id);
+            const { id } = req.params;
+            const deleted = await this.fileService.deleteFile(id, req.user.id);

-                if (!deleted) {
-                    return res.status(404).json({ error: "File not found or not authorized" });
-                }
+            if (!deleted) {
+                return res.status(404).json({ error: "File not found or not authorized" });
+            }

-                res.json({ message: "File deleted successfully" });
-            } catch (error) {
-                console.error("Error deleting file:", error);
-                res.status(500).json({ error: "Failed to delete file" });
-            }
-        };
+            res.json({ message: "File deleted successfully" });
+        } catch (error) {
+            console.error("Error deleting file:", error);
+            res.status(500).json({ error: "Failed to delete file" });
+        }
+    };

platforms/file-manager-api/src/controllers/AccessController.ts-25-25 (1)
25-25: Dynamic require on every request is inefficient and breaks static analysis. new (require('../services/FileService').FileService)() is called on every request, preventing bundler optimizations and making the code harder to analyze.

🔎 Suggested fix: import at top and instantiate once

 import { Request, Response } from "express";
 import { AccessService } from "../services/AccessService";
+import { FileService } from "../services/FileService";

 export class AccessController {
     private accessService: AccessService;
+    private fileService: FileService;

     constructor() {
         this.accessService = new AccessService();
+        this.fileService = new FileService();
     }

 // Then replace:
-    const fileService = new (require('../services/FileService').FileService)();
-    const file = await fileService.getFileById(id, req.user.id);
+    const file = await this.fileService.getFileById(id, req.user.id);

Also applies to: 69-69, 109-109
platforms/file-manager-api/src/services/GroupService.ts-58-77 (1)
58-77: Scalability concern: loading all private groups into memory. The fallback path loads every private group with members for in-memory filtering. This won't scale as the number of groups grows.
Consider adding database-level constraints or a dedicated query for exact member matching instead of falling back to full table scans.
platforms/file-manager/src/routes/(auth)/auth/+page.svelte-18-23 (1)
18-23: Dead code: getDeepLinkUrl always returns the same value. Both branches return uri unchanged. Either implement the intended deep-link transformation or remove this function.

🔎 Suggested fix

-    function getDeepLinkUrl(uri: string): string {
-        if (isMobileDevice()) {
-            return uri;
-        }
-        return uri;
-    }
+    // If no transformation is needed, use uri directly at call sites
+    // or implement actual deep-link logic here

Committable suggestion skipped: line range outside the PR's diff.
platforms/file-manager-api/src/web3adapter/watchers/subscriber.ts-15-20 (1)
15-20: Environment variables used without validation. These env vars are cast to string but may be undefined at runtime, causing silent failures or crashes when Web3Adapter tries to use them.

🔎 Suggested fix: validate env vars

+const FILE_MANAGER_MAPPING_DB_PATH = process.env.FILE_MANAGER_MAPPING_DB_PATH;
+const PUBLIC_REGISTRY_URL = process.env.PUBLIC_REGISTRY_URL;
+const PUBLIC_FILE_MANAGER_BASE_URL = process.env.PUBLIC_FILE_MANAGER_BASE_URL;
+
+if (!FILE_MANAGER_MAPPING_DB_PATH || !PUBLIC_REGISTRY_URL || !PUBLIC_FILE_MANAGER_BASE_URL) {
+    throw new Error("Missing required environment variables for Web3Adapter");
+}
+
 export const adapter = new Web3Adapter({
     schemasPath: path.resolve(__dirname, "../mappings/"),
-    dbPath: path.resolve(process.env.FILE_MANAGER_MAPPING_DB_PATH as string),
-    registryUrl: process.env.PUBLIC_REGISTRY_URL as string,
-    platform: process.env.PUBLIC_FILE_MANAGER_BASE_URL as string,
+    dbPath: path.resolve(FILE_MANAGER_MAPPING_DB_PATH),
+    registryUrl: PUBLIC_REGISTRY_URL,
+    platform: PUBLIC_FILE_MANAGER_BASE_URL,
 });

platforms/file-manager-api/src/services/GroupService.ts-226-234 (1)
226-234: Refactor the query to avoid Cartesian product from multiple LEFT JOINs. The current approach with three leftJoinAndSelect operations and an OR condition across the joined relations will produce duplicate group results due to Cartesian multiplication. Use a subquery or UNION approach instead:

async getUserGroups(userId: string): Promise<Group[]> {
    return await this.groupRepository
        .createQueryBuilder("group")
        .leftJoinAndSelect("group.members", "members")
        .leftJoinAndSelect("group.admins", "admins")
        .leftJoinAndSelect("group.participants", "participants")
        .where((qb) => {
            const memberQuery = qb.subQuery()
                .select("gm.group_id")
                .from("group_members", "gm")
                .where("gm.user_id = :userId", { userId })
                .getQuery();
            const adminQuery = qb.subQuery()
                .select("ga.group_id")
                .from("group_admins", "ga")
                .where("ga.user_id = :userId", { userId })
                .getQuery();
            const participantQuery = qb.subQuery()
                .select("gp.group_id")
                .from("group_participants", "gp")
                .where("gp.user_id = :userId", { userId })
                .getQuery();
            return `group.id IN (${memberQuery}) OR group.id IN (${adminQuery}) OR group.id IN (${participantQuery})`;
        }, { userId })
        .getMany();
}

Alternatively, load the relations separately using findOne() with relations: ["members", "admins", "participants"] for groups found via subquery.

platforms/file-manager/src/routes/(auth)/auth/+page.svelte-93-129 (1)
93-129: EventSource not closed on component unmount - memory leak. The EventSource connection created in watchEventStream is never closed when the component is destroyed, leading to lingering connections.

🔎 Suggested fix using onDestroy

+    let eventSource: EventSource | null = null;
+
+    onDestroy(() => {
+        if (eventSource) {
+            eventSource.close();
+        }
+    });

     function watchEventStream(id: string) {
         const sseUrl = new URL(`/api/auth/sessions/${id}`, PUBLIC_FILE_MANAGER_BASE_URL).toString();
-        const eventSource = new EventSource(sseUrl);
+        eventSource = new EventSource(sseUrl);

Committable suggestion skipped: line range outside the PR's diff.
platforms/file-manager-api/src/services/UserService.ts-89-109 (1)
89-109: Escape LIKE wildcard characters before using in patterns. While TypeORM's parameterized queries prevent SQL injection, they do not automatically escape LIKE wildcard characters (% and _). If searchQuery contains these characters, they will be interpreted as pattern wildcards instead of literals, causing unintended matches.

Add an escape function and apply it before constructing the patterns:

function escapeLike(str: string): string {
    return str.replace(/([%_\\])/g, '\\$1');
}

const safeQuery = escapeLike(searchQuery);

// Then use with ESCAPE clause:
.where("user.name ILIKE :query ESCAPE '\\' OR ...", {
    query: `%${safeQuery}%`,
    exactQuery: safeQuery,
    fuzzyQuery: `%${safeQuery.split('').join('%')}%`
})

platforms/file-manager/src/lib/stores/files.ts-60-64 (1)
60-64: Remove explicitContent-Typeheader for multipart uploads.When using
FormData, the browser automatically sets theContent-Typeheader with the correct boundary. Manually setting it to'multipart/form-data'omits the boundary parameter, which can cause the server to fail parsing the request.Suggested fix
const response = await apiClient.post('/api/files', formData, { - headers: { - 'Content-Type': 'multipart/form-data', - }, + headers: { + // Let the browser set Content-Type automatically with the correct boundary + }, });Or simply remove the headers object entirely:
- const response = await apiClient.post('/api/files', formData, { - headers: { - 'Content-Type': 'multipart/form-data', - }, - }); + const response = await apiClient.post('/api/files', formData);platforms/file-manager/src/routes/(protected)/files/+page.svelte-115-120 (1)
115-120: Auth subscription is not cleaned up and redirects unconditionally.The
isAuthenticated.subscribe()creates a subscription that's never cleaned up. Also, the subscription callback runs immediately with the current value and on every change, potentially causing multiple redirect attempts.Suggested fix
onMount(async () => { - isAuthenticated.subscribe((auth) => { - if (!auth) { - goto('/auth'); - } - }); + // Check auth once on mount + if (!get(isAuthenticated)) { + goto('/auth'); + return; + }Or if you need reactive auth checking, store and clean up the unsubscribe function.
Committable suggestion skipped: line range outside the PR's diff.
platforms/file-manager/src/routes/(protected)/files/+page.svelte-105-113 (1)
105-113: Memory leak: Store subscriptions are not cleaned up.The
subscribe()calls create subscriptions that are never unsubscribed. This causes memory leaks when the component is destroyed.Suggested fix
Either clean up subscriptions in onMount's return function:
+ let unsubscribeUser: () => void; + let unsubscribeFiles: () => void; + let unsubscribeFolders: () => void; - currentUser.subscribe(u => { + unsubscribeUser = currentUser.subscribe(u => { user = u; }); - files.subscribe(f => { + unsubscribeFiles = files.subscribe(f => { filesList = f; }); - folders.subscribe(f => { + unsubscribeFolders = folders.subscribe(f => { foldersList = f; }); onMount(async () => { // ... existing code ... return () => { document.removeEventListener('click', handleClickOutside); + unsubscribeUser(); + unsubscribeFiles(); + unsubscribeFolders(); }; });Or use Svelte's auto-subscription with
$prefix in templates.Committable suggestion skipped: line range outside the PR's diff.
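As an illustration of that auto-subscription route, here is a minimal sketch; the import path and the `id`/`name` fields on the file objects are assumptions for the example, not taken from the PR:

```svelte
<script lang="ts">
    // Assumed import path; the actual store lives in src/lib/stores/files.ts in this PR.
    import { files } from '$lib/stores/files';
    // No manual subscribe()/unsubscribe(): `$files` is an auto-subscription that
    // Svelte sets up when the component mounts and tears down when it is destroyed.
</script>

<ul>
    {#each $files as file (file.id)}
        <li>{file.name}</li>
    {/each}
</ul>
```

With this pattern the component never holds a raw unsubscribe handle, so there is nothing left to leak.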
```typescript
// Check if file already has signatures (single-use enforcement)
const existingSignatures = await this.signatureRepository.find({
    where: { fileId },
});

if (existingSignatures.length > 0) {
    throw new Error("This file has already been used in a signature container and cannot be reused");
}
```
Critical: Race condition (TOCTOU) in single-use enforcement.
Between checking for existing signature containers (lines 31-33) and creating invitations (lines 52-96), concurrent requests can both pass the validation and proceed, violating the single-use constraint. This is a classic time-of-check to time-of-use vulnerability.
🔎 Recommended fix: Wrap in transaction with appropriate isolation
async inviteSignees(
fileId: string,
userIds: string[],
invitedBy: string
): Promise<FileSignee[]> {
+ return await AppDataSource.transaction(async (transactionalEntityManager) => {
+ const signatureRepository = transactionalEntityManager.getRepository(SignatureContainer);
+ const fileRepository = transactionalEntityManager.getRepository(File);
+ const fileSigneeRepository = transactionalEntityManager.getRepository(FileSignee);
+ const userRepository = transactionalEntityManager.getRepository(User);
+
// Verify file exists and user is owner
- const file = await this.fileRepository.findOne({
+ const file = await fileRepository.findOne({
where: { id: fileId, ownerId: invitedBy },
+ lock: { mode: "pessimistic_write" },
});
if (!file) {
throw new Error("File not found or user is not the owner");
}
// Check if file already has signatures (single-use enforcement)
- const existingSignatures = await this.signatureRepository.find({
+ const existingSignatureCount = await signatureRepository.count({
where: { fileId },
});
- if (existingSignatures.length > 0) {
+ if (existingSignatureCount > 0) {
throw new Error("This file has already been used in a signature container and cannot be reused");
}
// ... rest of the method using transactionalEntityManager repositories
+ });
}This approach:
- Uses a transaction to ensure atomicity
- Applies pessimistic write lock on the file to prevent concurrent modifications
- Improves performance by using `count()` instead of `find()`
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @platforms/esigner-api/src/services/InvitationService.ts around lines 30-37,
The single-use check in InvitationService (the existingSignatures lookup via
this.signatureRepository.find with fileId) is vulnerable to a TOCTOU race; wrap
the check-and-create logic that spans the validation and invitation creation
(the block that precedes and includes the create-invitation flow) in a database
transaction, acquire a pessimistic write/row lock on the target file record (or
an appropriate row representing the file) inside that transaction, replace the
find(...) call with a count(...) query for existence, and perform the invitation
creation only within the same transaction so concurrent requests are serialized
and cannot both pass the check.
```json
{
    "tableName": "signature_containers",
    "schemaId": "b2c3d4e5-f6a7-8901-bcde-f12345678901",
    "ownerEnamePath": "users(user.ename)",
    "ownedJunctionTables": [],
    "localToUniversalMap": {
        "fileId": "files(file.id),fileId",
        "userId": "users(user.id),userId",
        "md5Hash": "md5Hash",
        "signature": "signature",
        "publicKey": "publicKey",
        "message": "message",
        "createdAt": "__date(createdAt)",
        "updatedAt": "__date(updatedAt)"
    }
}
```
(no newline at end of file)
CRITICAL: This mapping file is identical to the one in file-manager-api.
Both platforms/file-manager-api/src/web3adapter/mappings/signature.mapping.json and this file share:
- Identical `schemaId`: `"b2c3d4e5-f6a7-8901-bcde-f12345678901"`
- Identical `tableName`, `ownerEnamePath`, and field mappings
This raises several concerns:
- If schemaIds must be globally unique: This is a data integrity violation that could cause collisions in distributed systems.
- If this is intentional synchronization: The duplication creates a maintenance burden—updates must be kept in sync manually.
- Configuration management: Consider whether this shared configuration should live in a common location.
Verify the intended architecture and either:
- Generate unique schemaIds if required, or
- Extract to a shared configuration module if both platforms truly share the same schema
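One possible shape for that shared-configuration option, sketched under the assumption that a common package (name and path hypothetical) can host the mapping files and both adapters can point their `schemasPath` at it:

```typescript
// Hypothetical shared module, e.g. packages/shared-mappings/index.ts
// (package name and location are illustrative, not part of this PR).
import path from "path";

// Single source of truth for signature.mapping.json and friends;
// both file-manager-api and esigner-api resolve their adapter's
// schemasPath to this directory instead of keeping local copies.
export const sharedMappingsDir = path.resolve(__dirname, "./mappings");
```

Each platform's subscriber would then pass `schemasPath: sharedMappingsDir` when constructing its `Web3Adapter`, so an edit to the mapping (or its `schemaId`) lands in both services at once.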
```typescript
grantFolderAccess = async (req: Request, res: Response) => {
    try {
        if (!req.user) {
            return res.status(401).json({ error: "Authentication required" });
        }

        const { id } = req.params;
        const { userId, permission = "view" } = req.body;

        if (!userId) {
            return res.status(400).json({ error: "userId is required" });
        }

        const access = await this.accessService.grantFolderAccess(
            id,
            userId,
            req.user.id,
            permission
        );

        res.status(201).json({
            id: access.id,
            folderId: access.folderId,
            userId: access.userId,
            grantedBy: access.grantedBy,
            permission: access.permission,
            createdAt: access.createdAt,
        });
    } catch (error) {
        console.error("Error granting folder access:", error);
        if (error instanceof Error) {
            return res.status(400).json({ error: error.message });
        }
        res.status(500).json({ error: "Failed to grant folder access" });
    }
};
```
Missing folder ownership check before granting access.
Unlike grantFileAccess which verifies file ownership (lines 24-34), grantFolderAccess doesn't verify the user owns the folder before granting access. This could allow unauthorized access grants.
🔎 Suggested fix: add ownership check
grantFolderAccess = async (req: Request, res: Response) => {
try {
if (!req.user) {
return res.status(401).json({ error: "Authentication required" });
}
const { id } = req.params;
const { userId, permission = "view" } = req.body;
if (!userId) {
return res.status(400).json({ error: "userId is required" });
}
+ // Check if the user is the owner of the folder
+ const folderAccess = await this.accessService.getFolderById(id);
+ if (!folderAccess) {
+ return res.status(404).json({ error: "Folder not found" });
+ }
+ if (folderAccess.ownerId !== req.user.id) {
+ return res.status(403).json({ error: "Only the folder owner can grant access" });
+ }
  const access = await this.accessService.grantFolderAccess(

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @platforms/file-manager-api/src/controllers/AccessController.ts around lines
151-186, grantFolderAccess lacks a folder ownership check before granting
permissions; mirror the pattern used in grantFileAccess by fetching the folder
(e.g., this.folderService.getById or getOwner/getFolderById) for the given id,
compare its ownerId to req.user.id, and if they don't match return
res.status(403).json({ error: "Forbidden" }) (or similar) before calling
this.accessService.grantFolderAccess; keep the same error handling and response
shape as the existing method.
```typescript
revokeFolderAccess = async (req: Request, res: Response) => {
    try {
        if (!req.user) {
            return res.status(401).json({ error: "Authentication required" });
        }

        const { id, userId } = req.params;

        const revoked = await this.accessService.revokeFolderAccess(
            id,
            userId,
            req.user.id
        );

        if (!revoked) {
            return res.status(404).json({ error: "Access not found or not authorized" });
        }

        res.json({ message: "Folder access revoked successfully" });
    } catch (error) {
        console.error("Error revoking folder access:", error);
        if (error instanceof Error) {
            return res.status(400).json({ error: error.message });
        }
        res.status(500).json({ error: "Failed to revoke folder access" });
    }
};
```
Missing folder ownership check before revoking access.
Similar to grantFolderAccess, revokeFolderAccess doesn't verify folder ownership before allowing revocation.
Apply the same ownership verification pattern as in revokeFileAccess (lines 68-78).
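A rough sketch of that check adapted to `revokeFolderAccess`; the `getFolderById` helper and its return shape are assumptions mirroring the file-side lookup, so the real service method may be named differently:

```typescript
// Inside revokeFolderAccess, before delegating to accessService.
// `getFolderById` is hypothetical here — use whatever lookup revokeFileAccess relies on.
const folder = await this.folderService.getFolderById(id);
if (!folder) {
    return res.status(404).json({ error: "Folder not found" });
}
if (folder.ownerId !== req.user.id) {
    return res.status(403).json({ error: "Only the folder owner can revoke access" });
}

const revoked = await this.accessService.revokeFolderAccess(id, userId, req.user.id);
```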
🤖 Prompt for AI Agents
In @platforms/file-manager-api/src/controllers/AccessController.ts around lines
188-214, revokeFolderAccess currently calls
this.accessService.revokeFolderAccess without verifying the requester owns the
folder; add the same ownership check used by revokeFileAccess (lines ~68-78)
before attempting revocation: fetch the folder by id (e.g. via
this.folderService.getById or the equivalent method used in revokeFileAccess),
compare its owner id to req.user.id, return 403 if not owner, and only then call
this.accessService.revokeFolderAccess(id, userId, req.user.id) and proceed with
the existing success/error handling.
```typescript
    data: fileData,
});

this.adapter.addToLockedIds(file.id);
```
Accessing entity ID before save() may yield undefined.
On lines 339 and 422, file.id and signature.id are accessed with addToLockedIds() before the entity is saved. With TypeORM's @PrimaryGeneratedColumn("uuid"), the ID is generated by the database upon insert, not at create() time. Move these calls after save().
🔎 Suggested fix for files (lines 338-345)
- this.adapter.addToLockedIds(file.id);
- await this.fileRepository.save(file);
+ const savedFile = await this.fileRepository.save(file);
+ this.adapter.addToLockedIds(savedFile.id);
await this.adapter.mappingDb.storeMapping({
- localId: file.id,
+ localId: savedFile.id,
globalId: req.body.id,
});
- localId = file.id;
+ localId = savedFile.id;Also applies to: 422-422
🤖 Prompt for AI Agents
In @platforms/file-manager-api/src/controllers/WebhookController.ts around line
339, The calls to this.adapter.addToLockedIds(...) use file.id and signature.id
before the entities are persisted; with TypeORM @PrimaryGeneratedColumn("uuid")
IDs are assigned on save, so move the addToLockedIds calls to after the
corresponding await repository.save(...) calls in WebhookController (e.g., after
save() that persists the File entity and after save() that persists the
Signature entity) so you pass the generated id; update the locations where
file.id (around the current call near this.adapter.addToLockedIds(file.id)) and
signature.id (near this.adapter.addToLockedIds(signature.id)) are referenced to
occur only post-save.
| "devDependencies": { | ||
| "@eslint/compat": "^1.2.5", | ||
| "@eslint/js": "^9.18.0", | ||
| "@sveltejs/adapter-static": "^3.0.8", |
Conflicting SvelteKit adapters will cause build failure.
Both @sveltejs/adapter-static (line 19) and @sveltejs/adapter-node (line 39) are declared. SvelteKit projects can only use one adapter. This configuration error is almost certainly causing the pipeline build failure reported in the logs.
Action required: Choose the appropriate adapter based on your deployment target:
- Use `adapter-static` for static site generation (JAMstack, CDN deployment)
- Use `adapter-node` for Node.js server deployment (SSR, dynamic routes)
🔎 Proposed fix (assuming static deployment)
"devDependencies": {
"@eslint/compat": "^1.2.5",
"@eslint/js": "^9.18.0",
"@sveltejs/adapter-static": "^3.0.8",
"@sveltejs/kit": "^2.16.0", "dependencies": {
- "@sveltejs/adapter-node": "^5.2.12",
"axios": "^1.6.7",Then ensure svelte.config.js imports and uses adapter-static.
🔎 Alternative fix (assuming Node.js server deployment)
"devDependencies": {
"@eslint/compat": "^1.2.5",
"@eslint/js": "^9.18.0",
- "@sveltejs/adapter-static": "^3.0.8",
"@sveltejs/kit": "^2.16.0", "dependencies": {
"@sveltejs/adapter-node": "^5.2.12",
"axios": "^1.6.7",Then ensure svelte.config.js imports and uses adapter-node.
Also applies to: 39-39
🤖 Prompt for AI Agents
In @platforms/file-manager/package.json around line 19, package.json currently
declares both "@sveltejs/adapter-static" and "@sveltejs/adapter-node", which
conflicts and breaks the SvelteKit build; remove the adapter you don't intend to
use (keep only "@sveltejs/adapter-static" for static deployments or only
"@sveltejs/adapter-node" for Node server deployments), then update
svelte.config.js to import and use the matching adapter name
("@sveltejs/adapter-static" or "@sveltejs/adapter-node") so the adapter in
dependencies and the adapter used in svelte.config.js are the same, reinstall
dependencies and re-run the build.
```json
},
"dependencies": {
    "@sveltejs/adapter-node": "^5.2.12",
    "axios": "^1.6.7",
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check axios version metadata and security information
echo "=== Axios 1.6.7 metadata from npm ==="
curl -s https://registry.npmjs.org/axios/1.6.7 | jq -r '{version: .version, published: .time["1.6.7"], dist_tags: .["dist-tags"]}'
echo -e "\n=== Latest axios version ==="
curl -s https://registry.npmjs.org/axios/latest | jq -r '.version'
echo -e "\n=== Axios releases near 1.6.7 ==="
curl -s https://registry.npmjs.org/axios | jq -r '.time | to_entries[] | select(.key | contains("1.6")) | "\(.key): \(.value)"' | sort
echo -e "\n=== Check axios for known vulnerabilities via npm audit ==="
npm audit --json --registry https://registry.npmjs.org axios 2>/dev/null || echo "npm audit not available in this context"

Repository: MetaState-Prototype-Project/prototype
Length of output: 886
🏁 Script executed:
#!/bin/bash
# Search for axios CVEs and security advisories
echo "=== Searching for axios security advisories ==="
curl -s "https://api.github.com/repos/axios/axios/issues?state=all&labels=security" | jq -r '.[] | select(.created_at > "2023-01-01") | "\(.number): \(.title) (\(.state))"' | head -20 || echo "GitHub API call failed"Repository: MetaState-Prototype-Project/prototype
Length of output: 128
🏁 Script executed:
#!/bin/bash
# Search for axios CVEs in public databases
echo "=== Checking for axios CVEs via web sources ==="
# Try accessing the npm advisory database
echo "Attempting to fetch axios security from npm database..."
curl -s "https://security.snyk.io/vulnerability/npm:axios" | head -50 2>/dev/null || echo "Direct Snyk query not available"
echo -e "\n=== Checking axios GitHub releases for security patches ==="
curl -s "https://api.github.com/repos/axios/axios/releases?per_page=50" | jq -r '.[] | select(.published_at > "2024-01-25") | "\(.tag_name): \(.published_at) - \(.body | split("\n")[0])"' | head -20
echo -e "\n=== Check version range info ==="
# The caret (^) allows: >=1.6.7 and <2.0.0
# Check what versions are available in that range
curl -s https://registry.npmjs.org/axios | jq -r '.versions | keys[] | select(. > "1.6.7" and . < "2.0.0")' | tail -20

Repository: MetaState-Prototype-Project/prototype
Length of output: 1674
🌐 Web query:
axios npm package security vulnerabilities CVE 2024 2025 2026
💡 Result:
Found three public axios CVEs (2024–2025) with fixes — summary:
- CVE-2024-39338 — SSRF via path-relative URLs (axios 1.3.2–1.7.3 / 1.7.2 cited). Upgrade to a patched release (see axios release notes). [1][2]
- CVE-2025-27152 — SSRF / credential leakage when passing absolute URLs; affects >=1.0.0, <1.8.2 (also older <0.30.0). Fixed in 1.8.2 (and 0.30.0). [3][4]
- CVE-2025-58754 — excessive memory allocation / DoS when given a data: URI on Node.js (ignores maxContentLength / maxBodyLength). Fixed in 1.12.0 and 0.30.2. [5]
References
- NVD entry — CVE-2024-39338 (axios SSRF). [link]. [1]
- Armis / advisories summary for CVE-2024-39338. [link]. [2]
- GitHub Advisory / axios — GHSA-jr5f-v2jv-69x6 (CVE-2025-27152), patched in 1.8.2/0.30.0. [link]. [3]
- NVD / vendor advisories for CVE-2025-27152. [link]. [4]
- IBM security bulletin describing CVE-2025-58754 and patched versions (1.12.0 / 0.30.2). [link]. [5]
If you want, I can: (a) list the exact patched axios release numbers and changelog links, (b) scan a package-lock or repo to check whether your project is using vulnerable versions. Which would you prefer?
Update axios to ≥1.8.2 to address critical security vulnerabilities.
The declared version ^1.6.7 is vulnerable to multiple CVEs:
- CVE-2024-39338: SSRF via path-relative URLs
- CVE-2025-27152: SSRF and credential leakage
- CVE-2025-58754: Denial of service via data: URI
Update to at least 1.8.2 (or the latest 1.13.2) which are available within the caret range and fix these issues.
🤖 Prompt for AI Agents
In @platforms/file-manager/package.json around line 40, The package.json entry
for the axios dependency is pinned to "^1.6.7", which is vulnerable; update the
axios version spec to a fixed safe minimum (e.g., "^1.8.2" or a newer 1.x
version like "^1.13.2") in package.json (the "axios" dependency), then
regenerate the lockfile by running your package manager (npm/yarn/pnpm install)
and commit the updated lockfile; after updating, run the test suite and a
security audit (npm audit or equivalent) to confirm vulnerabilities are
resolved.
```typescript
export interface Signature {
    id: string;
    userId: string;
    md5Hash: string;
    message: string;
    signature: string;
    publicKey: string;
    createdAt: string;
    user?: {
        id: string;
        name: string;
        ename: string;
        avatarUrl?: string;
    } | null;
}
```
Critical: Missing fileSigneeId field in Signature interface.
The Signature interface is missing the fileSigneeId?: string | null field that exists in the esigner version (see platforms/esigner/src/lib/stores/signatures.ts). This field is essential for per-invitation signature tracking implemented in this PR.
Without this field:
- Type mismatches will occur when the API returns signatures with `fileSigneeId`
- The file-manager won't be able to properly display per-invitation signature status
- Runtime errors may occur when accessing `sig.fileSigneeId`
🔎 Proposed fix
export interface Signature {
id: string;
userId: string;
+ fileSigneeId?: string | null;
md5Hash: string;
message: string;
signature: string;
publicKey: string;
createdAt: string;
user?: {
id: string;
name: string;
ename: string;
avatarUrl?: string;
} | null;
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 export interface Signature {
     id: string;
     userId: string;
+    fileSigneeId?: string | null;
     md5Hash: string;
     message: string;
     signature: string;
     publicKey: string;
     createdAt: string;
     user?: {
         id: string;
         name: string;
         ename: string;
         avatarUrl?: string;
     } | null;
 }
```
🤖 Prompt for AI Agents
In @platforms/file-manager/src/lib/stores/signatures.ts around lines 5-19, The
Signature interface is missing the optional fileSigneeId field which causes type
and runtime errors; update the Signature interface declaration for the exported
type named Signature to include fileSigneeId?: string | null (optional and
nullable) alongside the existing properties so returned API objects with
fileSigneeId type-check and consumers can safely access sig.fileSigneeId.
```typescript
export const setAuthToken = (token: string) => {
    localStorage.setItem('file_manager_auth_token', token);
    apiClient.defaults.headers.common['Authorization'] = `Bearer ${token}`;
};

export const removeAuthToken = () => {
    localStorage.removeItem('file_manager_auth_token');
    delete apiClient.defaults.headers.common['Authorization'];
};

export const removeAuthId = () => {
    localStorage.removeItem('file_manager_auth_id');
};
```
Critical: Unchecked localStorage access breaks SSR.
SvelteKit runs code on both server and client. All three functions (setAuthToken, removeAuthToken, removeAuthId) directly access localStorage without checking if it exists, causing runtime crashes during server-side rendering.
🔎 Proposed fix: Guard all localStorage access
export const setAuthToken = (token: string) => {
+ if (typeof window === 'undefined') return;
localStorage.setItem('file_manager_auth_token', token);
apiClient.defaults.headers.common['Authorization'] = `Bearer ${token}`;
};
export const removeAuthToken = () => {
+ if (typeof window === 'undefined') return;
localStorage.removeItem('file_manager_auth_token');
delete apiClient.defaults.headers.common['Authorization'];
};
export const removeAuthId = () => {
+ if (typeof window === 'undefined') return;
localStorage.removeItem('file_manager_auth_id');
};🤖 Prompt for AI Agents
In @platforms/file-manager/src/lib/utils/axios.ts around lines 13-25, All three
functions (setAuthToken, removeAuthToken, removeAuthId) access localStorage
directly which breaks SSR; guard every localStorage access with a runtime check
(e.g., ensure typeof window !== 'undefined' or typeof localStorage !==
'undefined') before calling localStorage methods and only modify
apiClient.defaults.headers.common['Authorization'] when running in the browser;
update setAuthToken to no-op server-side and update removeAuthToken/removeAuthId
similarly so they safely return without touching localStorage or apiClient on
the server.
```typescript
onMount(() => {
    isAuthenticated.subscribe((auth) => {
        if (auth) {
            goto('/files');
        } else {
            goto('/auth');
        }
    });
});
```
Memory leak: Subscription is never cleaned up.
The subscribe() call creates a subscription that is never unsubscribed, causing a memory leak. Each time this component mounts, a new subscription is created without cleaning up the previous one.
🔎 Recommended fix using auto-subscription or proper cleanup
Option 1 (Recommended): Use Svelte 5 $derived for reactive navigation
<script lang="ts">
import { goto } from '$app/navigation';
import { isAuthenticated } from '$lib/stores/auth';
- import { onMount } from 'svelte';
+ import { onMount } from 'svelte';
+
+ // Use $derived to reactively handle auth state
+ let auth = $state($isAuthenticated);
onMount(() => {
- isAuthenticated.subscribe((auth) => {
- if (auth) {
- goto('/files');
- } else {
- goto('/auth');
- }
- });
+ if ($isAuthenticated) {
+ goto('/files');
+ } else {
+ goto('/auth');
+ }
});
</script>

Option 2: Proper subscription cleanup
<script lang="ts">
import { goto } from '$app/navigation';
import { isAuthenticated } from '$lib/stores/auth';
- import { onMount } from 'svelte';
+ import { onMount, onDestroy } from 'svelte';
+
+ let unsubscribe: (() => void) | undefined;
onMount(() => {
- isAuthenticated.subscribe((auth) => {
+ unsubscribe = isAuthenticated.subscribe((auth) => {
if (auth) {
goto('/files');
} else {
goto('/auth');
}
});
});
+
+ onDestroy(() => {
+ unsubscribe?.();
+ });
</script>

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @platforms/file-manager/src/routes/+page.svelte around lines 6-14, The
component creates an isAuthenticated.subscribe inside onMount but never
unsubscribes, causing a memory leak; replace the raw subscription with either
Svelte auto-subscription (use the reactive $isAuthenticated or a derived store
to trigger navigation) or capture the subscription returned from
isAuthenticated.subscribe inside onMount and call its unsubscribe in the
onDestroy cleanup; update references to onMount, isAuthenticated.subscribe and
goto accordingly so navigation still occurs but the subscription is properly
disposed.