Conversation
Force-pushed from 78041e9 to 5a83c9f
Pull request overview
This PR implements a batch delete operation for the CRUD API, adding a DELETE /records/{resource}/many endpoint that allows deletion of multiple records in a single request using query parameters.
Key Changes:
- Adds a `delete_many()` function in `dynamodb.rs` that uses DynamoDB's `batch_write_item` API with automatic chunking (25 items per batch)
- Adds `delete_many_records_handler()` in `main.rs` with ownership verification for user-scoped resources
- Updates the route configuration to support the DELETE method on the `/records/{resource}/many` endpoint
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| crud_api/src/dynamodb.rs | Implements delete_many() function using DynamoDB batch_write_item API with chunking and unprocessed item tracking |
| crud_api/src/main.rs | Adds delete_many_records_handler() with user ownership verification, updates route to include DELETE method, and modifies unused parameter naming in get_many_related_records_handler |
Force-pushed from 5a83c9f to c332e00
@copilot apply changes based on the comments in this thread
Co-authored-by: saebyn <185030+saebyn@users.noreply.github.com>
…fy deletion tracking Co-authored-by: saebyn <185030+saebyn@users.noreply.github.com>
… fix security gap, and improve consistency Co-authored-by: saebyn <185030+saebyn@users.noreply.github.com>
Force-pushed from 9c27f44 to af71aa1
@copilot apply changes based on the comments in this thread
… chunking logic Co-authored-by: saebyn <185030+saebyn@users.noreply.github.com>
```rust
// Log warning if there are still unprocessed items after all retries
if !requests_to_process.is_empty() {
    tracing::warn!(
        "Failed to delete {} items after {} retries",
        requests_to_process.len(),
        MAX_RETRIES
    );
}
```
When batch delete still has requests_to_process after exhausting retries, the function only logs a warning and proceeds, returning 200 from the handler with a potentially incomplete deleted list. This can lead to silent partial deletes under throttling. Consider failing the operation (return an error) when unprocessed items remain after retries, or returning an explicit "failed" list so the handler can communicate partial failure to the caller.
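One way to act on this suggestion is to have `delete_many` report both outcomes instead of a flat list. The sketch below is a minimal, client-free illustration of that idea; `BatchDeleteOutcome` and `summarize_batch_delete` are hypothetical names, not the PR's actual API, and the real function would build the `failed` list from the leftover `requests_to_process` keys.

```rust
// Hypothetical sketch: split requested IDs into deleted vs. failed, given the
// IDs that remained unprocessed after the retry budget was exhausted, so the
// handler can return an explicit partial-failure response instead of a 200.
#[derive(Debug, PartialEq)]
pub struct BatchDeleteOutcome {
    pub deleted: Vec<String>,
    pub failed: Vec<String>,
}

pub fn summarize_batch_delete(requested: &[&str], unprocessed: &[&str]) -> BatchDeleteOutcome {
    let failed: Vec<String> = unprocessed.iter().map(|s| s.to_string()).collect();
    let deleted: Vec<String> = requested
        .iter()
        .copied()
        .filter(|id| !unprocessed.contains(id))
        .map(|s| s.to_string())
        .collect();
    BatchDeleteOutcome { deleted, failed }
}

fn main() {
    let outcome = summarize_batch_delete(&["a", "b", "c"], &["b"]);
    // deleted: ["a", "c"], failed: ["b"]
    println!("{outcome:?}");
}
```

With a non-empty `failed` list, the handler could return a 207-style body or a 503 rather than silently dropping items.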
```rust
fn test_delete_many_empty_ids() {
    // Test that empty ID list returns empty result without errors
    let ids: Vec<&str> = vec![];

    // Verify chunking logic: empty input should result in no chunks
    let chunks: Vec<_> = ids.chunks(25).collect();
    assert_eq!(chunks.len(), 0);
}

#[test]
fn test_delete_many_chunking_under_limit() {
    // Test that IDs under the 25-item limit fit in a single chunk
    let ids: Vec<&str> = (0..20).map(|_| "test-id").collect();

    let chunks: Vec<_> = ids.chunks(25).collect();
    assert_eq!(chunks.len(), 1);
    assert_eq!(chunks[0].len(), 20);
}

#[test]
fn test_delete_many_chunking_at_limit() {
    // Test that exactly 25 IDs fit in a single chunk
    let ids: Vec<&str> = (0..25).map(|_| "test-id").collect();

    let chunks: Vec<_> = ids.chunks(25).collect();
    assert_eq!(chunks.len(), 1);
    assert_eq!(chunks[0].len(), 25);
}

#[test]
```
The new unit tests only validate Rust slice chunking (ids.chunks(...)) rather than exercising get_many/delete_many behavior (request construction, retry handling, and unprocessed-key/item behavior). As written they won’t catch regressions in the actual DynamoDB logic. Consider extracting the chunking/batch-building into a helper you can unit test, and/or adding tests that drive the functions with a mocked/stubbed DynamoDB client to verify retries/unprocessed handling.
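A sketch of the extraction this comment suggests: pull the batch-building step out of `delete_many` so it can be unit tested without a DynamoDB client. `build_delete_batches` is a hypothetical helper name; the real code would map each inner vector into `WriteRequest` values using the SDK types it already imports.

```rust
// Hypothetical helper: group IDs into DynamoDB-sized delete batches.
// Testing this directly exercises the same chunking the production path uses,
// unlike asserting on ids.chunks(25) in isolation.
const BATCH_LIMIT: usize = 25;

pub fn build_delete_batches(ids: &[&str]) -> Vec<Vec<String>> {
    ids.chunks(BATCH_LIMIT)
        .map(|chunk| chunk.iter().map(|s| s.to_string()).collect())
        .collect()
}

fn main() {
    let ids: Vec<&str> = (0..60).map(|_| "test-id").collect();
    let batches = build_delete_batches(&ids);
    // 60 IDs -> 3 batches of 25, 25, and 10
    println!("{} batches", batches.len());
}
```

Retry and unprocessed-item handling would still need tests driven through a mocked or stubbed client, but this at least covers request construction.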
```rust
let ids = query_params
    .id
    .iter()
    .map(String::as_str)
    .collect::<Vec<_>>();

// Enforce a maximum of 100 IDs per request to align with DynamoDB batch limits
if ids.len() > 100 {
    return (
        StatusCode::BAD_REQUEST,
        [(header::CONTENT_TYPE, "application/json")],
        Json(json!({
            "message": "Too many IDs provided. Maximum of 100 IDs per request.",
        })),
    );
}

// If user_scoped, verify ownership of all items before deleting
if table_config.user_scoped {
    match dynamodb::get_many(&state.dynamodb, &table_config, &ids).await {
        Ok(items) => {
            if let Some(user) = &user_id {
                // Verify all requested IDs exist and belong to the user
                if items.len() != ids.len() {
                    return (
```
delete_many_records_handler treats duplicate IDs as an error because items.len() != ids.len() will trip even when all records exist (BatchGet returns one item per unique key). Consider de-duplicating the requested IDs (while optionally preserving original order) before doing existence/ownership checks and before calling delete_many, or return a 400 with a clear message when duplicates are provided.
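The de-duplication this comment proposes can be done in one pass while preserving first-seen order, so the `items.len() != ids.len()` check holds whenever every unique record exists. `dedup_preserving_order` is a hypothetical helper name for illustration:

```rust
use std::collections::HashSet;

// Hypothetical sketch: drop repeated IDs while keeping first-seen order.
// HashSet::insert returns false for values already seen, so filter keeps
// only the first occurrence of each ID.
pub fn dedup_preserving_order<'a>(ids: &[&'a str]) -> Vec<&'a str> {
    let mut seen = HashSet::new();
    ids.iter().copied().filter(|id| seen.insert(*id)).collect()
}

fn main() {
    let unique = dedup_preserving_order(&["a", "b", "a", "c", "b"]);
    // ["a", "b", "c"]
    println!("{unique:?}");
}
```

The handler would run this before the ownership check and before calling `delete_many`, or alternatively reject duplicates with a 400 and a clear message.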
```rust
// Log warning if there are still unprocessed keys after all retries
if !request_items.is_empty() {
    tracing::warn!(
        "Failed to get some keys after {} retries for {} table(s)",
        MAX_RETRIES,
        request_items.len()
    );
```
get_many logs a warning and returns partial results when UnprocessedKeys remain after retries. Callers (notably ownership verification in batch delete) can misinterpret this as “not found” and return 404/403 incorrectly. Consider returning an error when unprocessed keys remain after the retry budget so callers can surface a 5xx/503 instead of silently dropping items.
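Turning the exhausted-retry case into an error is a small change at this point in `get_many`. The sketch below uses a hypothetical `GetManyError` type to show the shape; the real code would return it from `get_many` so the handler can map it to a 503 instead of a misleading 404/403.

```rust
// Hypothetical sketch: fail loudly when keys remain unprocessed after the
// retry budget, instead of returning partial results that callers may
// misread as "record not found".
#[derive(Debug, PartialEq)]
pub enum GetManyError {
    UnprocessedAfterRetries { remaining: usize },
}

pub fn check_unprocessed(remaining_keys: usize) -> Result<(), GetManyError> {
    if remaining_keys > 0 {
        Err(GetManyError::UnprocessedAfterRetries { remaining: remaining_keys })
    } else {
        Ok(())
    }
}

fn main() {
    // Called where the warning currently sits, with request_items.len().
    println!("{:?}", check_unprocessed(0));
}
```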
Adds a `DELETE /records/{resource}/many` endpoint to delete multiple records in a single request.

Implementation

`dynamodb.rs`:
- `delete_many()` using the `batch_write_item` API with chunking (25 items per batch) and exponential backoff retry for unprocessed items
- `get_many()` with 100-item chunking and retry logic for unprocessed keys

`main.rs`:
- `delete_many_records_handler` with user ownership verification
- Accepts IDs as repeated query parameters: `?id=id1&id=id2&id=id3`
- Returns `{"deleted": [...], "count": n}`

Usage
Features
Testing