Conversation
…Polqt/iComplain into feat/student-ticket-attachments
…folder and integrate Celery for task scheduling
Reviewer's Guide
Adds Celery (with Redis) and email configuration for scheduled ticket-related tasks, tightens backend file upload validation, adjusts WebSocket consumer group handling, aligns comment/feedback APIs and types between frontend and backend, and wires up periodic ticket maintenance/reporting tasks.
Sequence diagram for scheduled Celery ticket maintenance and reporting
sequenceDiagram
participant CB as CeleryBeat
participant RB as RedisBroker
participant CW as CeleryWorker
participant DJ as DjangoApp
participant DB as PostgresDB
participant ES as EmailSMTPServer
CB->>RB: Enqueue auto_escalate_tickets
CB->>RB: Enqueue send_daily_summary
CB->>RB: Enqueue send_weekly_report
loop For each enqueued task
CW->>RB: Fetch next task
RB-->>CW: Task payload
alt auto_escalate_tickets
CW->>DJ: Load tickets nearing SLA
DJ->>DB: Query tickets
DB-->>DJ: Ticket records
DJ-->>CW: Tickets needing escalation
CW->>DJ: Update ticket status and escalation fields
DJ->>DB: Persist updates
DB-->>DJ: Update success
else send_daily_summary
CW->>DJ: Build daily summary data
DJ->>DB: Aggregate ticket metrics
DB-->>DJ: Aggregated metrics
DJ-->>CW: Summary payload
CW->>ES: Send summary emails
ES-->>CW: Email accepted
else send_weekly_report
CW->>DJ: Build weekly report data
DJ->>DB: Aggregate weekly metrics
DB-->>DJ: Aggregated metrics
DJ-->>CW: Report payload
CW->>ES: Send weekly reports
ES-->>CW: Email accepted
end
end
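The CeleryBeat side of this flow could be wired up roughly as below. This is a hedged sketch, not the PR's actual config: only the three task names come from the diagram above, while the task module paths, broker URL, and cron times are illustrative assumptions.

```python
from celery import Celery
from celery.schedules import crontab

app = Celery("config", broker="redis://localhost:6379/0")

app.conf.beat_schedule = {
    # schedules below are placeholders, not taken from the PR
    "auto-escalate-tickets": {
        "task": "apps.tickets.tasks.auto_escalate_tickets",
        "schedule": crontab(minute="*/30"),
    },
    "send-daily-summary": {
        "task": "apps.tickets.tasks.send_daily_summary",
        "schedule": crontab(hour=18, minute=0),
    },
    "send-weekly-report": {
        "task": "apps.tickets.tasks.send_weekly_report",
        "schedule": crontab(day_of_week="mon", hour=8, minute=0),
    },
}
```

A worker (`celery -A config worker`) and a beat process (`celery -A config beat`) would then consume and enqueue these tasks through the Redis broker, matching the sequence above.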
Class diagram for updated tickets validation, WebSocket consumer, and TicketComment type
classDiagram
class TicketsValidation {
+list ALLOWED_TYPES
+int MAX_FILE_SIZE
+dict MAGIC_BYTES
+bool validate_file(file)
}
class TicketWebSocketConsumer {
-string user_group
+connect()
+disconnect(close_code)
}
class TicketComment {
+int id
+int ticket_id
+User user
+string message
+string created_at
}
class User {
+int id
+string email
+string name
+string avatar
}
TicketComment --> User
class CommentAPI {
+fetchComments(ticketId)
+createComment(ticketId, payload)
+editComment(ticketId, commentId, payload)
}
class FeedbackAPI {
+getFeedback(ticketId)
+createFeedback(ticketId, payload)
+updateFeedback(ticketId, payload)
}
CommentAPI ..> TicketComment
FeedbackAPI ..> TicketComment
class CeleryConfig {
+string CELERY_BROKER_URL
+string CELERY_RESULT_BACKEND
+list CELERY_ACCEPT_CONTENT
+string CELERY_TASK_SERIALIZER
+config beat_schedule
}
class EmailConfig {
+string EMAIL_BACKEND
+string EMAIL_HOST
+int EMAIL_PORT
+bool EMAIL_USE_TLS
+string EMAIL_HOST_USER
+string EMAIL_HOST_PASSWORD
+string DEFAULT_FROM_EMAIL
}
CeleryConfig ..> TicketWebSocketConsumer
EmailConfig ..> CeleryConfig
Flow diagram for WebSocket connection and group subscription changes
flowchart TD
U["User Browser"] --> WSConnect["Open WebSocket connection"]
WSConnect --> Channels["Django Channels routing"]
Channels --> ConsumerConnect["TicketWebSocketConsumer.connect"]
subgraph ConsumerLogic
ConsumerConnect --> CheckTicketAccess["Check ticket access and user scope"]
CheckTicketAccess -->|Unauthorized| CloseConn["Close connection"]
CloseConn --> End["End"]
CheckTicketAccess -->|Authorized| AddTicketUpdatesGroup["group_add ticket_updates"]
AddTicketUpdatesGroup --> SetUserGroup["Set user_group user_id based group"]
SetUserGroup --> AddUserGroup["group_add user_group"]
AddUserGroup --> AcceptConn["accept"]
end
AcceptConn --> ActiveWS["Active WebSocket subscription"]
ActiveWS --> Disconnect["TicketWebSocketConsumer.disconnect"]
Disconnect --> DiscardTicketUpdates["group_discard ticket_updates"]
DiscardTicketUpdates --> CheckUserGroup["Check user_group attribute"]
CheckUserGroup -->|Has user_group| DiscardUserGroup["group_discard user_group"]
CheckUserGroup -->|No user_group| End
DiscardUserGroup --> End
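The group-subscription decisions in the flow above can be modeled as a small pure function. This is a hedged sketch, not the PR's consumer: the real code would call Django Channels' `group_add`/`group_discard` and `accept()`/`close()` on an async consumer, and the function name and signature here are invented for illustration.

```python
from typing import List, Optional

def groups_for_connection(authorized: bool, user_id: Optional[int]) -> List[str]:
    """Return the channel-layer groups a new connection should join."""
    if not authorized:
        return []  # the flow closes unauthorized connections before any group_add
    groups = ["ticket_updates"]
    if user_id is not None:
        groups.append(f"user_{user_id}")  # the guarded per-user group
    return groups
```

Modeling it this way makes the disconnect path symmetric: on teardown the consumer discards exactly the groups this function returned, which is why the flow checks for the `user_group` attribute before `group_discard`.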
📝 Walkthrough
Adds magic-byte validation (including WebP) for uploads; introduces Celery app and scheduled tasks plus SMTP settings; moves websocket routing to a dedicated ws/ module.
Sequence Diagram(s)
sequenceDiagram
participant Client as Client (Browser)
participant API as API Server (Django + Ninja)
participant DB as Database
participant WS as Channels / Channel Layer
Client->>API: POST /tickets/{id}/feedback/ (create feedback + optional attachments)
API->>DB: INSERT feedback, update ticket status/history
DB-->>API: OK (created feedback)
API->>WS: emit "feedback_created" to ticket/user groups
WS-->>Client: WebSocket broadcast -> clients subscribed to groups receive updated payload
Client-->>API: GET /stats/dashboard (optionally later)
API->>DB: aggregate metrics for dashboard
DB-->>API: metrics data
API-->>Client: 200 DashboardStats JSON
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Hey - I've found 2 issues, and left some high level feedback:
- The new magic-bytes check in `validate_file` has a couple of issues: `header.startwith` is a typo (`startswith`), and the `else` block that checks `header.startswith(magic)` is currently nested under the WebP-specific branch so it will never run for non-WebP types; this logic should be restructured so each content type is handled correctly.
- In the WebSocket consumer, `self.user_group = f"user_{user.id}"` now references `user` without first retrieving it from `self.scope` or checking authentication as before, which will raise a `NameError` or misbehave for anonymous connections; restore or rework the user lookup and guard logic.
- Adding `'celery'` to `INSTALLED_APPS` is unnecessary and can cause issues because Celery is not a Django app; it should be configured via the separate Celery app (as you already did in `config/celery.py`) without being listed in `INSTALLED_APPS`.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The new magic-bytes check in `validate_file` has a couple of issues: `header.startwith` is a typo (`startswith`), and the `else` block that checks `header.startswith(magic)` is currently nested under the WebP-specific branch so it will never run for non-WebP types; this logic should be restructured so each content type is handled correctly.
- In the WebSocket consumer, `self.user_group = f"user_{user.id}"` now references `user` without first retrieving it from `self.scope` or checking authentication as before, which will raise a `NameError` or misbehave for anonymous connections; restore or rework the user lookup and guard logic.
- Adding `'celery'` to `INSTALLED_APPS` is unnecessary and can cause issues because Celery is not a Django app; it should be configured via the separate Celery app (as you already did in `config/celery.py`) without being listed in `INSTALLED_APPS`.
## Individual Comments
### Comment 1
<location path="backend/apps/tickets/validation.py" line_range="38-42" />
<code_context>
+
+ for magic in magic_bytes_list:
+ if content_type == 'image/webp':
+ if header.startwith(b'RIFF') and header[8:12] == b'WEBP':
+ is_valid = True
+ break
+ else:
+ if header.startswith(magic):
+ is_valid = True
+ break
</code_context>
<issue_to_address>
**issue (bug_risk):** Fix `startswith` typo and ensure non-WebP types are actually validated against their magic bytes.
Two problems here:
1) `header.startwith` is a typo and will raise an `AttributeError`.
2) Because the loop body is inside `if content_type == 'image/webp':`, JPEG/PNG/PDF never run a `startswith` check and will always leave `is_valid = False`.
Consider fixing the typo and restructuring the logic so WebP is handled separately from the generic magic-byte validation, for example:
```python
if content_type == 'image/webp':
if header.startswith(b'RIFF') and header[8:12] == b'WEBP':
is_valid = True
else:
for magic in magic_bytes_list:
if header.startswith(magic):
is_valid = True
break
```
This preserves the WebP-specific offset check while allowing other types to validate correctly.
</issue_to_address>
### Comment 2
<location path="backend/apps/tickets/validation.py" line_range="30" />
<code_context>
+ header = file.read(12) # Read first 12 bytes for magic byte validation
+ file.seek(0) # Reset file pointer after reading
+
+ is_valid = False
+ content_type = file.content_type
+
</code_context>
<issue_to_address>
**issue (complexity):** Consider extracting the magic-byte matching into a helper and using early returns to simplify validate_file’s control flow and WebP handling.
You can simplify the magic-byte logic and make WebP handling clearer by extracting a small helper and removing the flag/loop complexity.
### 1. Extract magic-byte matching into a helper
This keeps `validate_file` focused and flattens control flow:
```python
MAGIC_BYTES = {
'image/jpeg': [
b'\xFF\xD8\xFF', # JPEG
],
'image/png': [
b'\x89PNG\r\n\x1a\n', # PNG
],
'image/webp': [
b'RIFF', # WebP
],
'application/pdf': [
b'%PDF-', # PDF
],
}
def matches_magic_bytes(header: bytes, content_type: str) -> bool:
if content_type == 'image/webp':
# WebP: "RIFF" at start and "WEBP" at offset 8
return header.startswith(b'RIFF') and header[8:12] == b'WEBP'
magic_bytes_list = MAGIC_BYTES.get(content_type, ())
return any(header.startswith(magic) for magic in magic_bytes_list)
```
### 2. Use the helper in `validate_file` and return early
No `is_valid` flag, no misleading WebP loop:
```python
def validate_file(file):
if file.content_type not in ALLOWED_TYPES:
raise ValueError("Unsupported file type.")
if file.size > MAX_FILE_SIZE:
raise ValueError("File size exceeds the maximum limit.")
file.seek(0)
header = file.read(12)
file.seek(0)
if file.content_type in MAGIC_BYTES and not matches_magic_bytes(header, file.content_type):
raise ValueError("File content does not match its declared type.")
return True
```
This preserves all current behavior while:
- Removing the redundant `for` loop for WebP.
- Eliminating the `is_valid` state flag.
- Making the WebP special case explicit and testable.
</issue_to_address>
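The restructured helper suggested in Comment 2 can be exercised standalone. This sketch inlines the `MAGIC_BYTES` table and `matches_magic_bytes` from the snippet above; the sample headers in the usage note are made up for illustration.

```python
MAGIC_BYTES = {
    'image/jpeg': [b'\xFF\xD8\xFF'],
    'image/png': [b'\x89PNG\r\n\x1a\n'],
    'application/pdf': [b'%PDF-'],
}

def matches_magic_bytes(header: bytes, content_type: str) -> bool:
    if content_type == 'image/webp':
        # WebP: "RIFF" at offset 0 and "WEBP" at offset 8
        return header.startswith(b'RIFF') and header[8:12] == b'WEBP'
    magic_bytes_list = MAGIC_BYTES.get(content_type, ())
    return any(header.startswith(magic) for magic in magic_bytes_list)
```

For example, a header of `b'RIFF\x00\x00\x00\x00WEBP'` passes as `image/webp`, while a GIF header declared as `image/png` fails, which is exactly the mismatch the original nested loop silently let through.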
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/apps/tickets/views.py (1)
402-415: ⚠️ Potential issue | 🟠 Major
Avoid broadcasting commenter email in the global comment event payload.
Line 414 adds PII to data sent on `ticket_updates`; this can leak user email to unintended subscribers.
Suggested safe payload change
 "comment": {
     "id": comment.id,
     "ticket_id": ticket.id,
     "user": {
         "id": comment.user.id,
-        "email": comment.user.email,
         "name": getattr(comment.user, "name", None),
         "avatar": getattr(comment.user, "avatar", None),
     },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/apps/tickets/views.py` around lines 402 - 415, The payload sent via channel_layer.group_send to the "ticket_updates" group currently includes commenter PII (comment.user.email); remove the email field from the comment payload (keep comment.id, ticket_id and non-PII user fields like user.id and user.name or a boolean is_anonymous) and update any consumer/handler expecting "user.email" (the "send_comment_update" consumer) to avoid relying on that field; ensure you only send non-PII values from the block constructing the "data" -> "comment" object and update tests/consumers accordingly.
🧹 Nitpick comments (2)
frontend/src/lib/api/feedback.ts (1)
17-17: Consider centralizing feedback URL construction to prevent slash drift.
Line 17 and Line 34 now use a trailing slash, which is good. A small URL helper would keep this consistent and avoid accidental mismatch across methods.
Refactor sketch
+const feedbackCollectionUrl = (ticketId: number) => `${BASE}/${ticketId}/feedback/`;
+const feedbackDetailUrl = (ticketId: number, feedbackId: number) =>
+  `${BASE}/${ticketId}/feedback/${feedbackId}/`;
-const res = await fetch(`${BASE}/${ticketId}/feedback/`, {
+const res = await fetch(feedbackCollectionUrl(ticketId), {
-const res = await fetch(`${BASE}/${ticketId}/feedback/${feedbackId}`, {
+const res = await fetch(feedbackDetailUrl(ticketId, feedbackId), {
Also applies to: 34-34
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/lib/api/feedback.ts` at line 17, Centralize construction of the feedback endpoint to avoid "slash drift": add a small helper (e.g., buildFeedbackUrl(ticketId) or feedbackUrl(ticketId)) that returns `${BASE}/${ticketId}/feedback/` ensuring the trailing slash is always present, then replace the direct fetch calls that use `${BASE}/${ticketId}/feedback/` (the occurrences around the fetch in this file) with fetch(buildFeedbackUrl(ticketId), ...) in both places so URL concatenation is consistent across methods.
backend/apps/tickets/views.py (1)
11-11: Replace `asyncio.log.logger` with the standard logging module pattern.
`asyncio.log` is an internal stdlib module, not a public API. For Django module logging, import and use `logging.getLogger(__name__)` instead.
Proposed fix
+import logging
 from django.contrib.auth import get_user_model
 from django.core.cache import cache
 from django.db.models import Count, Prefetch
 from django.shortcuts import get_object_or_404, redirect
 from django.utils import timezone
 from datetime import timedelta
 from ninja import File, Form, Router, UploadedFile
 from ninja.security import SessionAuth
 from channels.layers import get_channel_layer
 from asgiref.sync import async_to_sync
-from asyncio.log import logger
+
+logger = logging.getLogger(__name__)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/apps/tickets/views.py` at line 11, Replace the incorrect import from asyncio.log (the line importing logger) with the standard logging pattern used in Django modules: import the logging module and instantiate a module-level logger via logging.getLogger(__name__). Update any references to the top-level symbol logger to use that logger variable; ensure the import is "import logging" and the logger is created with logging.getLogger(__name__) so views.py uses the public logging API instead of asyncio.log.logger.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/apps/tickets/validation.py`:
- Around line 33-47: Fix the magic-byte validation loop: replace the typo
`startwith` with `startswith` and restructure the per-magic check so WebP uses
its special RIFF/WEBP check but other types still run their
header.startswith(magic) test. Specifically, inside the loop over
MAGIC_BYTES[content_type] (referencing MAGIC_BYTES, content_type, header,
is_valid), use an if content_type == 'image/webp' branch to check
header.startswith(b'RIFF') and header[8:12] == b'WEBP', else check
header.startswith(magic); set is_valid = True and break when either check passes
so non-WebP types are validated correctly.
In `@backend/config/settings.py`:
- Around line 213-219: Add a startup validation in settings.py to fail fast when
EMAIL_BACKEND is set to "django.core.mail.backends.smtp.EmailBackend"
(production) and any of EMAIL_HOST_USER, EMAIL_HOST_PASSWORD, or
DEFAULT_FROM_EMAIL are empty: replicate the existing ALLOWED_EMAIL_DOMAINS-style
guard by checking EMAIL_BACKEND and, if using SMTP, raise
django.core.exceptions.ImproperlyConfigured with a clear message listing the
missing env vars (EMAIL_HOST_USER, EMAIL_HOST_PASSWORD, DEFAULT_FROM_EMAIL) so
the app won't start with empty SMTP credentials.
- Line 36: The settings.py INSTALLED_APPS currently includes the literal
'celery' entry which should be removed; edit the INSTALLED_APPS list (symbol:
INSTALLED_APPS in settings.py) to delete the 'celery' string and keep only
Django apps and any Celery-related Django integrations you actually use (e.g.,
add 'django_celery_results' or 'django_celery_beat' only if those extensions are
required), relying on your separate Celery app instance (config/celery.py) for
task autodiscovery.
In `@frontend/src/lib/api/comment.ts`:
- Around line 58-61: The update routine currently always constructs a FormData
(const formData) and sends a PUT even when payload.message is omitted, causing
empty/no-op requests; modify the function that builds formData and performs the
PUT (the block that reads payload.message and the subsequent fetch/PUT call) to
fail fast: after attempting to append fields to formData (e.g.,
payload.message), check whether any fields were appended (or check payload
properties) and if none are present, return early or throw a clear error (e.g.,
"No fields to update") instead of performing the fetch; ensure you reference and
guard the same FormData instance used in the PUT call so the request is never
sent with an empty body.
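The fail-fast guard described for settings.py above could look roughly like this. It is a simplified sketch under stated assumptions: a plain dict stands in for Django settings, and `RuntimeError` stands in for `django.core.exceptions.ImproperlyConfigured`, which a real project would raise instead.

```python
def check_smtp_settings(settings: dict) -> None:
    """Fail fast when the SMTP backend is enabled with empty credentials."""
    if settings.get("EMAIL_BACKEND") != "django.core.mail.backends.smtp.EmailBackend":
        return  # console/locmem backends need no credentials
    required = ("EMAIL_HOST_USER", "EMAIL_HOST_PASSWORD", "DEFAULT_FROM_EMAIL")
    missing = [name for name in required if not settings.get(name)]
    if missing:
        raise RuntimeError(
            "SMTP email backend is configured but these settings are empty: "
            + ", ".join(missing)
        )
```

Calling this once at the bottom of settings.py (or from an `AppConfig.ready`) surfaces a misconfigured deployment at startup instead of at first email send.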
---
Outside diff comments:
In `@backend/apps/tickets/views.py`:
- Around line 402-415: The payload sent via channel_layer.group_send to the
"ticket_updates" group currently includes commenter PII (comment.user.email);
remove the email field from the comment payload (keep comment.id, ticket_id and
non-PII user fields like user.id and user.name or a boolean is_anonymous) and
update any consumer/handler expecting "user.email" (the "send_comment_update"
consumer) to avoid relying on that field; ensure you only send non-PII values
from the block constructing the "data" -> "comment" object and update
tests/consumers accordingly.
---
Nitpick comments:
In `@backend/apps/tickets/views.py`:
- Line 11: Replace the incorrect import from asyncio.log (the line importing
logger) with the standard logging pattern used in Django modules: import the
logging module and instantiate a module-level logger via
logging.getLogger(__name__). Update any references to the top-level symbol
logger to use that logger variable; ensure the import is "import logging" and
the logger is created with logging.getLogger(__name__) so views.py uses the
public logging API instead of asyncio.log.logger.
In `@frontend/src/lib/api/feedback.ts`:
- Line 17: Centralize construction of the feedback endpoint to avoid "slash
drift": add a small helper (e.g., buildFeedbackUrl(ticketId) or
feedbackUrl(ticketId)) that returns `${BASE}/${ticketId}/feedback/` ensuring the
trailing slash is always present, then replace the direct fetch calls that use
`${BASE}/${ticketId}/feedback/` (the occurrences around the fetch in this file)
with fetch(buildFeedbackUrl(ticketId), ...) in both places so URL concatenation
is consistent across methods.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
backend/apps/tickets/validation.py
backend/apps/tickets/views.py
backend/apps/tickets/ws/consumers.py
backend/apps/tickets/ws/routing.py
backend/config/asgi.py
backend/config/celery.py
backend/config/settings.py
backend/requirements.txt
frontend/src/lib/api/comment.ts
frontend/src/lib/api/feedback.ts
frontend/src/types/comments.ts
const formData = new FormData();
if (payload.message !== undefined) {
  formData.append("message", payload.message);
}
Guard against empty update payloads before sending PUT.
If payload.message is omitted, Line 58–61 builds an empty FormData and Line 65 still sends the request. Fail fast client-side to avoid no-op/validation-failure calls.
Suggested fix
const formData = new FormData();
if (payload.message !== undefined) {
formData.append("message", payload.message);
}
+if ([...formData.keys()].length === 0) {
+ throw new Error("No fields provided for comment update.");
+}
const res = await fetch(`${BASE}/${ticketId}/comments/${commentId}`, {
Also applies to: 65-65
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/lib/api/comment.ts` around lines 58 - 61, The update routine
currently always constructs a FormData (const formData) and sends a PUT even
when payload.message is omitted, causing empty/no-op requests; modify the
function that builds formData and performs the PUT (the block that reads
payload.message and the subsequent fetch/PUT call) to fail fast: after
attempting to append fields to formData (e.g., payload.message), check whether
any fields were appended (or check payload properties) and if none are present,
return early or throw a clear error (e.g., "No fields to update") instead of
performing the fetch; ensure you reference and guard the same FormData instance
used in the PUT call so the request is never sent with an empty body.
…y codes; fix the types by aligning it from the schema; integrate the feedback.
…ebP files); remove celery from INSTALLED_APPS per modern Celery integration guidelines
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
backend/apps/tickets/views.py (2)
523-531: ⚠️ Potential issue | 🔴 Critical
Fix inconsistent return tuple order in `create_feedback` error responses.
Error branches at lines 524, 527, and 531 return `(data, status)` while the success case at line 574 and all other functions in this file (e.g., `get_feedback`) use `(status, data)`. This breaks the Django Ninja response contract and will cause malformed API responses.
Suggested fix
 if ticket.student != request.user:
-    return {"detail": "You do not have permission to submit feedback."}, 403
+    return 403, {"detail": "You do not have permission to submit feedback."}
 if ticket.status != 'resolved':
-    return {"detail": "Feedback can only be submitted for resolved tickets."}, 400
+    return 400, {"detail": "Feedback can only be submitted for resolved tickets."}
 # Only one feedback per ticket
 if hasattr(ticket, 'feedback'):
-    return {"detail": "Feedback already submitted for this ticket."}, 400
+    return 400, {"detail": "Feedback already submitted for this ticket."}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/apps/tickets/views.py` around lines 523 - 531, In the create_feedback view, the error return tuples are reversed (returning (data, status)) which is inconsistent with the file's convention and Django Ninja expectations; update the three error branches that check ticket ownership (ticket.student != request.user), ticket.status != 'resolved', and hasattr(ticket, 'feedback') to return (status, data) instead of (data, status) so they match the success case and get_feedback behavior and restore proper response formatting.
405-419: ⚠️ Potential issue | 🟠 Major
Remove the `ticket_updates` group broadcast or use ticket-scoped channels.
Line 417 broadcasts `comment.user.email` (PII) to the global `"ticket_updates"` group (line 406). The consumer grants all authenticated users access to this group without per-ticket authorization (consumers.py:8-13), so any authenticated user receives all comment emails regardless of ticket access. Either remove the email from the payload or scope PII-bearing notifications to a `user_{id}` group for per-user notifications.
Verify each finding against the current code and only fix it if needed. In `@backend/apps/tickets/views.py` around lines 405 - 419, The broadcast to the global "ticket_updates" group is leaking PII via comment.user.email in the payload sent by async_to_sync(channel_layer.group_send); remove the email field from the payload (delete/comment out comment.user.email) and instead, if you must notify the comment author, send that piece of data only to a scoped group (e.g., async_to_sync(channel_layer.group_send)(f"user_{comment.user.id}", {...}) or to a ticket-scoped group like f"ticket_{ticket.id}" with proper per-ticket ACLs); update the send site in views.py (the group_send call that constructs the "comment" dict) and add a separate group_send to the user_{id} or ticket_{id} group for any PII-containing notifications so the global "ticket_updates" payload no longer includes email.
🧹 Nitpick comments (3)
frontend/src/components/ui/admin/AdminDashboard.svelte (1)
70-73: Prefer in-component refetch over full page reload for retry.
Line 70-73 currently refreshes the whole app; retrying the stats request directly is lighter and preserves UI state.
♻️ Suggested refactor
+  async function loadStats() {
+    isLoading = true;
+    error = null;
+    try {
+      stats = await getDashboardStats();
+    } catch (err) {
+      error = err instanceof Error ? err.message : "Failed to load stats";
+      console.error("Dashboard stats error:", err);
+    } finally {
+      isLoading = false;
+    }
+  }
+
-  onMount(async () => {
-    try {
-      stats = await getDashboardStats();
-    } catch (err) {
-      error = err instanceof Error ? err.message : "Failed to load stats";
-      console.error("Dashboard stats error:", err);
-    } finally {
-      isLoading = false;
-    }
-  });
+  onMount(() => {
+    void loadStats();
+  });
-  <button
-    class="btn btn-sm btn-ghost"
-    onclick={() => window.location.reload()}
-  >
+  <button class="btn btn-sm btn-ghost" onclick={() => void loadStats()}>
     Retry
   </button>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ui/admin/AdminDashboard.svelte` around lines 70 - 73, Replace the full-page reload triggered by window.location.reload() with an in-component refetch of the stats API: remove the onclick that calls window.location.reload() in AdminDashboard.svelte and instead call the component's stats-fetching function (e.g., fetchStats(), loadStats(), or whatever method is responsible for loading dashboard stats) so it retries only the stats request and preserves UI state; if such a function doesn't exist yet, implement a small async method (e.g., fetchStats) that re-runs the API call, updates the component state, and wire the button onclick to that method.
frontend/src/types/dashboard.ts (1)
6-6: Constrain `trend` to explicit literals.
Line 6 is overly broad (`string`) while UI logic treats it as a binary field (`metric.trend === 'success'` else `text-error`). Tighten the type to prevent invalid values and silent styling regressions.
♻️ Suggested type tightening
 export type DashboardMetric = {
   title: string;
   value: string;
   change: string;
   subtitle: string;
-  trend: string;
+  trend: "success" | "error";
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/types/dashboard.ts` at line 6, The trend field is typed too broadly as string; tighten it to an explicit union of allowed literals (e.g. 'success' | 'failure' or whatever exact labels the UI checks) by replacing trend: string with trend: 'success' | 'failure' (or include any other valid literal like 'neutral' if used elsewhere) in the dashboard type definition, and update any initializers, defaults or tests to use one of these literals so consumers (and the metric.trend === 'success' checks) are type-safe.
frontend/src/stores/feedback.store.ts (1)
53-57: Use structured status handling instead of parsing error text.
`errorMessage.includes("404")` / `"No feedback"` is brittle. A backend message change can incorrectly surface a user-facing error instead of the empty state. Prefer branching on structured status codes from `feedbackAPI.getFeedback()`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/stores/feedback.store.ts` around lines 53 - 57, Replace brittle string checks on errorMessage in the feedback load path with structured status handling: when calling feedbackAPI.getFeedback() inspect the thrown error for a status code (e.g., error.status or error.response?.status) and branch on status === 404 to treat “no feedback” as the empty state inside update(...) instead of using errorMessage.includes("404") or "No feedback"; remove the text parsing and fall back to treating other errors as real failures (logging/setting error state).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/apps/tickets/schemas.py`:
- Around line 65-71: The try/except is too broad: instead of catching Exception
around resolving ticket.feedback, replace the block so has_feedback is computed
via the simple pattern used elsewhere (use hasattr(ticket, 'feedback') and check
ticket.feedback is not None) or, if you prefer explicit exception handling, only
catch AttributeError when accessing ticket.feedback; update the variables
referenced (ticket, has_feedback, feedback_obj/getattr usage) to use that
narrower check and remove the broad except Exception branch.
In `@backend/apps/tickets/views.py`:
- Around line 505-512: The route decorator for get_feedback is missing a 403
response entry while the function returns 403 for unauthorized access; update
the response mapping on the `@router.get` for get_feedback to include 403: dict so
the declared responses match actual returns and Django Ninja's contract (locate
the `@router.get`(...) decorator above the get_feedback function).
- Around line 642-646: The dashboard_stats endpoint currently only declares
DashboardStatsSchema for 200 and returns an error using the wrong tuple order;
update the router.get response to include both success and error responses
(e.g., response={200: DashboardStatsSchema, 403: ErrorResponseSchema} or the
project's existing error schema used by get_feedback/create_feedback) and change
the unauthorized return to the status-first form like (403, {"detail": "You do
not have permission to view this data."}) so dashboard_stats and its
DashboardStatsSchema follow the same response pattern as
get_feedback/create_feedback.
- Around line 752-760: The weekday-to-label mapping is incorrect because
day_date.weekday() returns 0=Monday..6=Sunday while the days list is
Sunday..Saturday; update the code that builds the TicketVolumeDataPointSchema
(the days array and the loop that sets day=...) to compute the label using
days[(day_date.weekday() + 1) % 7] (and remove the special-case i == 0 / i == 6
logic) so labels align with day_date correctly when creating
TicketVolumeDataPointSchema.
In `@frontend/src/routes/tickets/[id]/+page.svelte`:
- Around line 59-66: Remove the duplicate feedback fetch by keeping a single
guarded reactive fetch that only runs when ticket?.id is truthy; specifically,
eliminate the initial awaited call (the block that does "if (ticket?.id) { await
feedbackStore.loadFeedbackForTicket(ticket.id); }" inside the
onMount/initializer) or remove the unguarded reactive $: block and retain only
one $: if (ticket?.id) { feedbackStore.loadFeedbackForTicket(ticket.id); } so
feedbackStore.loadFeedbackForTicket is invoked exactly once per ticket id
change; ensure you reference ticket and feedbackStore.loadFeedbackForTicket in
the remaining guarded reactive block.
In `@frontend/src/stores/feedback.store.ts`:
- Around line 104-107: The current updater passed to update(...) replaces
feedbacks with s.feedbacks.map(...) which leaves the array unchanged if no id
matches; change the logic in the update callback used in the store (the update(s
=> ({ ... })) block) to detect whether any item was replaced and, if not, append
the updated feedback to s.feedbacks (or initialize the array with [feedback]
when s.feedbacks is missing), then set isLoading:false; reference the variables
used: s, feedbacks, feedbackId, feedback and the update(...) callback to
implement this fallback.
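The replace-or-append fallback is easiest to see in isolation. A minimal sketch of the same logic in Python (the real store is TypeScript; the names here are illustrative, not the store's API):

```python
def upsert_feedback(feedbacks, feedback_id, feedback):
    """Replace the item whose id matches feedback_id; if nothing
    matched (or the list is missing), append instead of silently
    leaving the state unchanged."""
    items = list(feedbacks) if feedbacks else []  # initialize when missing
    replaced = False
    for i, item in enumerate(items):
        if item["id"] == feedback_id:
            items[i] = feedback
            replaced = True
    if not replaced:
        items.append(feedback)
    return items

# Nothing matches in an empty list, so the update is appended
print(upsert_feedback(None, 1, {"id": 1, "rating": 5}))
```

A plain `map(...)` is the first branch only; the `replaced` flag is what turns it into an upsert.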
---
Outside diff comments:
In `@backend/apps/tickets/views.py`:
- Around line 523-531: In the create_feedback view, the error return tuples are
reversed (returning (data, status)) which is inconsistent with the file's
convention and Django Ninja expectations; update the three error branches that
check ticket ownership (ticket.student != request.user), ticket.status !=
'resolved', and hasattr(ticket, 'feedback') to return (status, data) instead of
(data, status) so they match the success case and get_feedback behavior and
restore proper response formatting.
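The status-first convention can be exercised without the framework; this sketch mimics the three guards with plain dicts standing in for ORM objects (the detail messages are invented for illustration, not the view's actual text):

```python
def create_feedback_guards(user, ticket):
    """Return Django Ninja-style (status, data) tuples, status first,
    matching the convention described above."""
    if ticket["student"] != user:
        return 403, {"detail": "You can only give feedback on your own tickets."}
    if ticket["status"] != "resolved":
        return 400, {"detail": "Feedback requires a resolved ticket."}
    if ticket.get("feedback") is not None:
        return 400, {"detail": "Feedback already exists for this ticket."}
    return 201, {"detail": "Feedback accepted."}

status, data = create_feedback_guards(
    "alice", {"student": "bob", "status": "resolved", "feedback": None}
)
print(status)  # 403: not the ticket owner
```

With the tuple reversed to (data, status), Django Ninja would try to serialize the dict against the 200 schema instead of selecting the error response, which is why the order matters.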
- Around line 405-419: The broadcast to the global "ticket_updates" group is
leaking PII via comment.user.email in the payload sent by
async_to_sync(channel_layer.group_send); remove the email field from the payload
(delete/comment out comment.user.email) and instead, if you must notify the
comment author, send that piece of data only to a scoped group (e.g.,
async_to_sync(channel_layer.group_send)(f"user_{comment.user.id}", {...}) or to
a ticket-scoped group like f"ticket_{ticket.id}" with proper per-ticket ACLs);
update the send site in views.py (the group_send call that constructs the
"comment" dict) and add a separate group_send to the user_{id} or ticket_{id}
group for any PII-containing notifications so the global "ticket_updates"
payload no longer includes email.
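One way to keep the global broadcast PII-free is to build the two payloads up front and send each to its own group. A sketch with illustrative field names (not the view's exact dict):

```python
def split_comment_payloads(comment):
    """Build a PII-free payload for the global "ticket_updates" group
    and a richer one for the author-scoped f"user_{id}" group."""
    public = {
        "id": comment["id"],
        "ticket_id": comment["ticket_id"],
        "message": comment["message"],
        "user": {"id": comment["user"]["id"], "name": comment["user"]["name"]},
    }
    scoped = {
        **public,
        "user": {**public["user"], "email": comment["user"]["email"]},
    }
    return public, scoped

comment = {
    "id": 7,
    "ticket_id": 42,
    "message": "Any update?",
    "user": {"id": 3, "name": "Alice", "email": "alice@example.edu"},
}
public, scoped = split_comment_payloads(comment)
print("email" in public["user"])  # False: safe for the global group
print("email" in scoped["user"])  # True: only sent to user_3
```

The actual sends would then be two `async_to_sync(channel_layer.group_send)` calls, one per group, so the email never reaches subscribers of "ticket_updates".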
---
Nitpick comments:
In `@frontend/src/components/ui/admin/AdminDashboard.svelte`:
- Around line 70-73: Replace the full-page reload triggered by
window.location.reload() with an in-component refetch of the stats API: remove
the onclick that calls window.location.reload() in AdminDashboard.svelte and
instead call the component's stats-fetching function (e.g., fetchStats(),
loadStats(), or whatever method is responsible for loading dashboard stats) so
it retries only the stats request and preserves UI state; if such a function
doesn't exist yet, implement a small async method (e.g., fetchStats) that
re-runs the API call, updates the component state, and wire the button onclick
to that method.
In `@frontend/src/stores/feedback.store.ts`:
- Around line 53-57: Replace brittle string checks on errorMessage in the
feedback load path with structured status handling: when calling
feedbackAPI.getFeedback() inspect the thrown error for a status code (e.g.,
error.status or error.response?.status) and branch on status === 404 to treat
“no feedback” as the empty state inside update(...) instead of using
errorMessage.includes("404") or "No feedback"; remove the text parsing and fall
back to treating other errors as real failures (logging/setting error state).
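Branching on a structured status instead of message text might look like this in outline (sketched in Python with a hypothetical APIError; the real store is TypeScript and would read `error.status` or `error.response?.status`):

```python
class APIError(Exception):
    """Hypothetical error type carrying a structured HTTP status."""
    def __init__(self, status, message):
        super().__init__(message)
        self.status = status

def load_feedback(fetch):
    """Treat 404 as the empty state; surface anything else as an error."""
    try:
        return {"feedback": fetch(), "error": None}
    except APIError as err:
        if err.status == 404:
            return {"feedback": None, "error": None}  # no feedback yet
        return {"feedback": None, "error": str(err)}  # real failure

def missing():
    raise APIError(404, "No feedback found")

print(load_feedback(missing))  # {'feedback': None, 'error': None}
```

No string parsing of the message is involved, so rewording the backend's 404 detail text cannot break the empty-state branch.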
In `@frontend/src/types/dashboard.ts`:
- Line 6: The trend field is typed too broadly as string; tighten it to an
explicit union of allowed literals (e.g. 'success' | 'failure' or whatever exact
labels the UI checks) by replacing trend: string with trend: 'success' |
'failure' (or include any other valid literal like 'neutral' if used elsewhere)
in the dashboard type definition, and update any initializers, defaults or tests
to use one of these literals so consumers (and the metric.trend === 'success'
checks) are type-safe.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
backend/apps/tickets/schemas.py
backend/apps/tickets/views.py
backend/apps/users/views.py
frontend/src/components/ui/admin/AdminDashboard.svelte
frontend/src/components/ui/feedback/FeedbackForm.svelte
frontend/src/lib/api/ticket.ts
frontend/src/routes/tickets/[id]/+page.svelte
frontend/src/stores/comment.store.ts
frontend/src/stores/feedback.store.ts
frontend/src/stores/tickets.store.ts
frontend/src/types/dashboard.ts
frontend/src/types/feedback.ts
frontend/src/types/tickets.ts
💤 Files with no reviewable changes (1)
- backend/apps/users/views.py
✅ Files skipped from review due to trivial changes (1)
- frontend/src/components/ui/feedback/FeedbackForm.svelte
@router.get("/stats/dashboard", response=DashboardStatsSchema)
def dashboard_stats(request):
    if not request.user.is_staff:
        return {"detail": "You do not have permission to view this data."}, 403
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# First, let's look at the file and its imports to understand the routing framework
head -50 backend/apps/tickets/views.py | cat -n
Repository: Polqt/iComplain
Length of output: 1991
🏁 Script executed:
#!/bin/bash
# Get the specific lines mentioned in the review
sed -n '640,650p' backend/apps/tickets/views.py | cat -n
Length of output: 505
🏁 Script executed:
#!/bin/bash
# Check imports to identify the routing framework
rg "^from|^import" backend/apps/tickets/views.py | head -20
Length of output: 828
🏁 Script executed:
#!/bin/bash
# Search for other error handling patterns with return statements containing status codes
rg "return.*,\s*(40[0-9]|50[0-9])" backend/apps/tickets/views.py -A 1 -B 1
Length of output: 1854
🏁 Script executed:
#!/bin/bash
# Search for endpoints with multiple response declarations to see the pattern used
rg "@router\.(get|post|put|delete).*response=\{" backend/ -A 5
Length of output: 4542
🏁 Script executed:
#!/bin/bash
# Search for error responses with 403 status codes in the entire backend
rg "403" backend/apps/tickets/views.py -B 3 -A 1
Length of output: 2822
dashboard_stats endpoint needs multiple response declarations and status-first return order.
The endpoint declares only DashboardStatsSchema for the 200 response but returns a 403 error using the incorrect tuple order (data, status). Update the response declaration to include both success and error responses, and change the return to use (status, data) order to match the Django Ninja convention and the pattern used consistently throughout the codebase (e.g., get_feedback, create_feedback endpoints).
Suggested fix
-@router.get("/stats/dashboard", response=DashboardStatsSchema)
+@router.get("/stats/dashboard", response={200: DashboardStatsSchema, 403: dict})
def dashboard_stats(request):
if not request.user.is_staff:
- return {"detail": "You do not have permission to view this data."}, 403
+ return 403, {"detail": "You do not have permission to view this data."}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/apps/tickets/views.py` around lines 642 - 646, The dashboard_stats
endpoint currently only declares DashboardStatsSchema for 200 and returns an
error using the wrong tuple order; update the router.get response to include
both success and error responses (e.g., response={200: DashboardStatsSchema,
403: ErrorResponseSchema} or the project's existing error schema used by
get_feedback/create_feedback) and change the unauthorized return to the
status-first form like (403, {"detail": "You do not have permission to view this
data."}) so dashboard_stats and its DashboardStatsSchema follow the same
response pattern as get_feedback/create_feedback.
  if (ticket?.id) {
    await feedbackStore.loadFeedbackForTicket(ticket.id);
  }
});

$: if (ticket?.id) {
  feedbackStore.loadFeedbackForTicket(ticket.id);
}
🧩 Analysis chain
🏁 Script executed:
fd -a '+page.svelte' | rg 'frontend/src/routes/tickets/\[id\]/\+page\.svelte$' | head -n1
Length of output: 350
🏁 Script executed:
find . -path '*/routes/tickets/\[id\]/+page.svelte' -o -path '*/routes/tickets/*/+page.svelte' 2>/dev/null | head -5
Length of output: 106
🏁 Script executed:
rg -l 'loadFeedbackForTicket' --type svelte 2>/dev/null | head -5
Length of output: 41
🏁 Script executed:
cat -n ./frontend/src/routes/tickets/[id]/+page.svelte | sed -n '50,80p'
Length of output: 1145
🏁 Script executed:
rg -n 'loadFeedbackForTicket' ./frontend/src/routes/tickets/[id]/+page.svelte
Length of output: 239
Prevent duplicate feedback fetches for the same ticket.
Lines 59-61 and 64-66 both fetch feedback for the same ticket on initial load, causing duplicate requests. The unguarded reactive block also retriggers on unrelated store invalidations.
Suggested fix (single guarded reactive fetch)
let showFeedbackForm = false;
+ let loadedFeedbackTicketId: number | null = null;
onMount(async () => {
@@
- if (ticket?.id) {
- await feedbackStore.loadFeedbackForTicket(ticket.id);
- }
});
- $: if (ticket?.id) {
- feedbackStore.loadFeedbackForTicket(ticket.id);
- }
+ $: if (ticket?.id && ticket.id !== loadedFeedbackTicketId) {
+ loadedFeedbackTicketId = ticket.id;
+ void feedbackStore.loadFeedbackForTicket(ticket.id);
+ }
async function handleFeedbackSaved() {
showFeedbackForm = false;
if (ticket?.id) {
+ loadedFeedbackTicketId = null; // force one refresh after save
await feedbackStore.loadFeedbackForTicket(ticket.id);
await ticketsStore.loadTicketById(ticket.id);
}
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/routes/tickets/[id]/+page.svelte` around lines 59 - 66, Remove
the duplicate feedback fetch by keeping a single guarded reactive fetch that
only runs when ticket?.id is truthy; specifically, eliminate the initial awaited
call (the block that does "if (ticket?.id) { await
feedbackStore.loadFeedbackForTicket(ticket.id); }" inside the
onMount/initializer) or remove the unguarded reactive $: block and retain only
one $: if (ticket?.id) { feedbackStore.loadFeedbackForTicket(ticket.id); } so
feedbackStore.loadFeedbackForTicket is invoked exactly once per ticket id
change; ensure you reference ticket and feedbackStore.loadFeedbackForTicket in
the remaining guarded reactive block.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
🧹 Nitpick comments (2)
backend/apps/tickets/validation.py (2)
25-28: Consider adding a guard for very small files.

If a file is smaller than 12 bytes, header[8:12] returns an incomplete slice and validation fails gracefully. However, adding an explicit check provides a clearer error message and documents the minimum size requirement.

♻️ Proposed enhancement

     # Validate magic bytes (prevent extension spoofing)
     file.seek(0)
     header = file.read(12)  # Read first 12 bytes for magic byte validation
     file.seek(0)  # Reset file pointer after reading
+
+    if len(header) < 12:
+        raise ValueError("File is too small to validate.")

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/apps/tickets/validation.py` around lines 25 - 28, After reading the first 12 bytes into header in validation.py (the code that does file.seek(0); header = file.read(12); file.seek(0)), add an explicit guard that checks len(header) < 12 and raises a clear validation error (e.g., ValueError or the module's ValidationError) with a message like "file too small for magic-byte validation" before any use of header[8:12]; this ensures tiny files are rejected with a clear message rather than relying on incomplete slicing of header when validating magic bytes.
11-13: Unused MAGIC_BYTES entry for WebP.

The 'image/webp' entry in MAGIC_BYTES is never used because WebP validation is handled by the special branch at lines 33-34. Consider removing it to avoid confusion, or add a comment explaining why it exists.

♻️ Option 1: Remove unused entry

 MAGIC_BYTES = {
     'image/jpeg': [
         b'\xFF\xD8\xFF',  # JPEG
     ],
     'image/png': [
         b'\x89PNG\r\n\x1a\n',  # PNG
     ],
-    'image/webp': [
-        b'RIFF',  # WebP (first 4 bytes, followed by 'WEBP' at offset 8)
-    ],
     'application/pdf': [
         b'%PDF-',  # PDF
     ],
 }

♻️ Option 2: Add clarifying comment

     'image/webp': [
-        b'RIFF',  # WebP (first 4 bytes, followed by 'WEBP' at offset 8)
+        b'RIFF',  # WebP - Note: handled specially in validate_file() due to additional offset check
     ],

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/apps/tickets/validation.py` around lines 11 - 13, The MAGIC_BYTES dict contains an unused 'image/webp' entry while WebP is handled by the special-case branch in the validation logic; either remove the 'image/webp' key from MAGIC_BYTES or retain it but add a clear comment next to the 'image/webp' entry (referencing MAGIC_BYTES and the 'image/webp' key) explaining that WebP is validated separately by the special branch so the entry is kept only for completeness and should not be relied upon for WebP detection.
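Both nitpicks concern the same validation flow, which can be sketched compactly with in-memory files (a simplified stand-in for the project's validate_file(), not the actual implementation):

```python
import io

# Subset of the MAGIC_BYTES table discussed above
MAGIC_BYTES = {
    "image/png": [b"\x89PNG\r\n\x1a\n"],
    "application/pdf": [b"%PDF-"],
}

def header_matches(file, content_type):
    """Read 12 bytes, guard against tiny files, then match magic bytes.
    WebP is handled specially: 'RIFF' at offset 0 plus 'WEBP' at offset 8,
    which is why a flat prefix table cannot cover it."""
    file.seek(0)
    header = file.read(12)
    file.seek(0)  # reset pointer so later reads see the whole file
    if len(header) < 12:
        raise ValueError("File is too small to validate.")
    if content_type == "image/webp":
        return header[:4] == b"RIFF" and header[8:12] == b"WEBP"
    return any(header.startswith(magic) for magic in MAGIC_BYTES.get(content_type, []))

png = io.BytesIO(b"\x89PNG\r\n\x1a\n\x00\x00\x00\x0d")
webp = io.BytesIO(b"RIFF\x10\x00\x00\x00WEBPVP8 ")
print(header_matches(png, "image/png"))    # True
print(header_matches(webp, "image/webp"))  # True
```

The explicit length guard turns a silently-short slice into a clear error, and the WebP branch shows why the 'image/webp' prefix entry in a flat table is redundant.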
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@backend/apps/tickets/validation.py`:
- Around line 25-28: After reading the first 12 bytes into header in
validation.py (the code that does file.seek(0); header = file.read(12);
file.seek(0)), add an explicit guard that checks len(header) < 12 and raises a
clear validation error (e.g., ValueError or the module's ValidationError) with a
message like "file too small for magic-byte validation" before any use of
header[8:12]; this ensures tiny files are rejected with a clear message rather
than relying on incomplete slicing of header when validating magic bytes.
- Around line 11-13: The MAGIC_BYTES dict contains an unused 'image/webp' entry
while WebP is handled by the special-case branch in the validation logic; either
remove the 'image/webp' key from MAGIC_BYTES or retain it but add a clear
comment next to the 'image/webp' entry (referencing MAGIC_BYTES and the
'image/webp' key) explaining that WebP is validated separately by the special
branch so the entry is kept only for completeness and should not be relied upon
for WebP detection.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
backend/apps/tickets/validation.py
backend/config/settings.py
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/config/settings.py
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
…icket auto-escalation and reporting tasks
…Polqt/iComplain into feat/student-ticket-attachments
Summary by Sourcery
Integrate Celery-based scheduled tasks for ticket operations, tighten file upload validation, and align frontend APIs and types with updated backend behavior.
New Features:
Enhancements:
Build:
Summary by CodeRabbit
New Features
Improvements
Removed