Implement batch delete operation for CRUD API#614

Open
Copilot wants to merge 4 commits into main from copilot/implement-delete-many-operation
Conversation

Contributor

Copilot AI commented Dec 24, 2025

Adds a DELETE /records/{resource}/many endpoint that deletes multiple records in a single request.

Implementation

  • dynamodb.rs:

    • Added delete_many() using batch_write_item API with chunking (25 items per batch) and exponential backoff retry for unprocessed items
    • Enhanced get_many() with 100-item chunking and retry logic for unprocessed keys
    • Added 9 unit tests covering chunking behavior for both functions
  • main.rs:

    • Added delete_many_records_handler with user ownership verification
    • Accepts query params: ?id=id1&id=id2&id=id3
    • Enforces maximum of 100 IDs per request (returns 400 Bad Request if exceeded)
    • Returns {"deleted": [...], "count": n}
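
The 25-item chunking described above can be sketched with a std-only helper. This is an illustrative outline, not the PR's actual code: `chunk_ids` and `BATCH_WRITE_LIMIT` are hypothetical names, and the real delete_many() builds a DynamoDB WriteRequest per ID in each chunk.

```rust
// DynamoDB's batch_write_item accepts at most 25 write requests per call,
// so the requested IDs are split into chunks of that size before batching.
const BATCH_WRITE_LIMIT: usize = 25;

fn chunk_ids<'a>(ids: &'a [&'a str]) -> Vec<&'a [&'a str]> {
    // An empty input yields no chunks, so no batch calls are made.
    ids.chunks(BATCH_WRITE_LIMIT).collect()
}

fn main() {
    let ids: Vec<&str> = (0..60).map(|_| "some-id").collect();
    // 60 IDs split as 25 + 25 + 10
    for (i, chunk) in chunk_ids(&ids).iter().enumerate() {
        println!("batch {}: {} delete requests", i, chunk.len());
    }
}
```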

Usage

DELETE /records/streams/many?id=abc123&id=def456&id=ghi789

{
  "deleted": ["abc123", "def456", "ghi789"],
  "count": 3
}

Features

  • Automatic Chunking: Handles DynamoDB batch limits (25 for writes, 100 for reads)
  • Retry Logic: Exponential backoff (100ms, 200ms, 400ms) for throttled requests
  • Security: Verifies all requested IDs exist and belong to user for user-scoped resources
  • Validation: Returns 404 if any ID not found, 403 if not owned, 400 if >100 IDs requested
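
The retry schedule listed above (100ms, 200ms, 400ms) is a doubling backoff; a minimal sketch, assuming the constants below match the PR (`backoff_delay_ms` is a hypothetical helper name):

```rust
// Exponential backoff doubling from a 100ms base, used when DynamoDB
// returns unprocessed items (writes) or unprocessed keys (reads).
const MAX_RETRIES: u32 = 3;
const BASE_DELAY_MS: u64 = 100;

fn backoff_delay_ms(attempt: u32) -> u64 {
    // attempt 0 -> 100ms, attempt 1 -> 200ms, attempt 2 -> 400ms
    BASE_DELAY_MS * (1u64 << attempt)
}

fn main() {
    for attempt in 0..MAX_RETRIES {
        println!("retry {} sleeps {} ms", attempt + 1, backoff_delay_ms(attempt));
    }
}
```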

Testing

  • ✅ 11 unit tests pass (9 new tests for chunking logic)
  • ✅ Code builds successfully
  • ✅ Clippy checks pass
Original prompt

This section details the original issue you should resolve

<issue_title>Implement delete many operation</issue_title>
<issue_description>in our crud_api service, so that we can delete multiple records in a single request from the frontend.</issue_description>

Comments on the Issue (you are @copilot in this section)



Copilot AI changed the title from "[WIP] Add delete many operation to CRUD API" to "Implement batch delete operation for CRUD API" on Dec 24, 2025
Copilot AI requested a review from saebyn on December 24, 2025 16:56
@saebyn force-pushed the copilot/implement-delete-many-operation branch from 78041e9 to 5a83c9f on December 24, 2025 19:44
@saebyn marked this pull request as ready for review on December 24, 2025 19:44
Copilot AI review requested due to automatic review settings on December 24, 2025 19:44
Contributor

Copilot AI left a comment


Pull request overview

This PR implements a batch delete operation for the CRUD API, adding a DELETE /records/{resource}/many endpoint that allows deletion of multiple records in a single request using query parameters.

Key Changes:

  • Adds delete_many() function in dynamodb.rs that uses DynamoDB's batch_write_item API with automatic chunking (25 items per batch)
  • Adds delete_many_records_handler() in main.rs with ownership verification for user-scoped resources
  • Updates route configuration to support DELETE method on /records/{resource}/many endpoint

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 7 comments.

  • crud_api/src/dynamodb.rs: Implements the delete_many() function using the DynamoDB batch_write_item API with chunking and unprocessed-item tracking
  • crud_api/src/main.rs: Adds delete_many_records_handler() with user ownership verification, updates the route to include the DELETE method, and renames an unused parameter in get_many_related_records_handler

@saebyn force-pushed the copilot/implement-delete-many-operation branch from 5a83c9f to c332e00 on February 8, 2026 18:39
@saebyn
Owner

saebyn commented Feb 8, 2026

@copilot apply changes based on the comments in this thread

Copilot AI and others added 3 commits February 8, 2026 16:39
Co-authored-by: saebyn <185030+saebyn@users.noreply.github.com>
…fy deletion tracking

Co-authored-by: saebyn <185030+saebyn@users.noreply.github.com>
… fix security gap, and improve consistency

Co-authored-by: saebyn <185030+saebyn@users.noreply.github.com>
@saebyn force-pushed the copilot/implement-delete-many-operation branch from 9c27f44 to af71aa1 on February 9, 2026 00:39
@saebyn requested a review from Copilot on February 9, 2026 00:50
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

@saebyn
Owner

saebyn commented Feb 9, 2026

@copilot apply changes based on the comments in this thread

… chunking logic

Co-authored-by: saebyn <185030+saebyn@users.noreply.github.com>
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated 5 comments.

Comment on lines +600 to +607
// Log warning if there are still unprocessed items after all retries
if !requests_to_process.is_empty() {
tracing::warn!(
"Failed to delete {} items after {} retries",
requests_to_process.len(),
MAX_RETRIES
);
}

Copilot AI Feb 9, 2026


When batch delete still has requests_to_process after exhausting retries, the function only logs a warning and proceeds, returning 200 from the handler with a potentially incomplete deleted list. This can lead to silent partial deletes under throttling. Consider failing the operation (return an error) when unprocessed items remain after retries, or returning an explicit "failed" list so the handler can communicate partial failure to the caller.
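
One way to surface the partial failure this comment describes is to return the leftover IDs alongside the deleted ones. A minimal sketch of that suggestion, not the PR's actual code (`DeleteOutcome` and `finish_batch` are illustrative names):

```rust
// Instead of only logging a warning, report which IDs were still
// unprocessed after retries so the handler can communicate partial
// failure (e.g. a non-200 status or a "failed" field in the body).
struct DeleteOutcome {
    deleted: Vec<String>,
    failed: Vec<String>,
}

fn finish_batch(requested: Vec<String>, unprocessed: Vec<String>) -> DeleteOutcome {
    // Everything not left in the unprocessed set was deleted.
    let deleted = requested
        .into_iter()
        .filter(|id| !unprocessed.contains(id))
        .collect();
    DeleteOutcome { deleted, failed: unprocessed }
}

fn main() {
    let out = finish_batch(
        vec!["abc123".into(), "def456".into(), "ghi789".into()],
        vec!["def456".into()],
    );
    println!("deleted={:?} failed={:?}", out.deleted, out.failed);
}
```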

Comment on lines +770 to +799
fn test_delete_many_empty_ids() {
// Test that empty ID list returns empty result without errors
let ids: Vec<&str> = vec![];

// Verify chunking logic: empty input should result in no chunks
let chunks: Vec<_> = ids.chunks(25).collect();
assert_eq!(chunks.len(), 0);
}

#[test]
fn test_delete_many_chunking_under_limit() {
// Test that IDs under the 25-item limit fit in a single chunk
let ids: Vec<&str> = (0..20).map(|_| "test-id").collect();

let chunks: Vec<_> = ids.chunks(25).collect();
assert_eq!(chunks.len(), 1);
assert_eq!(chunks[0].len(), 20);
}

#[test]
fn test_delete_many_chunking_at_limit() {
// Test that exactly 25 IDs fit in a single chunk
let ids: Vec<&str> = (0..25).map(|_| "test-id").collect();

let chunks: Vec<_> = ids.chunks(25).collect();
assert_eq!(chunks.len(), 1);
assert_eq!(chunks[0].len(), 25);
}

#[test]

Copilot AI Feb 9, 2026


The new unit tests only validate Rust slice chunking (ids.chunks(...)) rather than exercising get_many/delete_many behavior (request construction, retry handling, and unprocessed-key/item behavior). As written they won’t catch regressions in the actual DynamoDB logic. Consider extracting the chunking/batch-building into a helper you can unit test, and/or adding tests that drive the functions with a mocked/stubbed DynamoDB client to verify retries/unprocessed handling.


Comment on lines +733 to +757
let ids = query_params
.id
.iter()
.map(String::as_str)
.collect::<Vec<_>>();

// Enforce a maximum of 100 IDs per request to align with DynamoDB batch limits
if ids.len() > 100 {
return (
StatusCode::BAD_REQUEST,
[(header::CONTENT_TYPE, "application/json")],
Json(json!({
"message": "Too many IDs provided. Maximum of 100 IDs per request.",
})),
);
}

// If user_scoped, verify ownership of all items before deleting
if table_config.user_scoped {
match dynamodb::get_many(&state.dynamodb, &table_config, &ids).await {
Ok(items) => {
if let Some(user) = &user_id {
// Verify all requested IDs exist and belong to the user
if items.len() != ids.len() {
return (

Copilot AI Feb 9, 2026


delete_many_records_handler treats duplicate IDs as an error because items.len() != ids.len() will trip even when all records exist (BatchGet returns one item per unique key). Consider de-duplicating the requested IDs (while optionally preserving original order) before doing existence/ownership checks and before calling delete_many, or return a 400 with a clear message when duplicates are provided.
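
The de-duplication this comment suggests can be sketched as a std-only helper that drops repeated IDs while keeping first-seen order, so the existence check compares against the unique count (`dedup_preserve_order` is a hypothetical name, not code from the PR):

```rust
use std::collections::HashSet;

// Keep only the first occurrence of each ID, preserving request order.
// HashSet::insert returns false for values already seen, which makes it
// a natural filter predicate.
fn dedup_preserve_order(ids: &[&str]) -> Vec<String> {
    let mut seen = HashSet::new();
    ids.iter()
        .filter(|id| seen.insert(**id))
        .map(|id| id.to_string())
        .collect()
}

fn main() {
    let ids = ["abc123", "def456", "abc123", "ghi789"];
    // Duplicate "abc123" is dropped; order of first appearance is kept.
    println!("{:?}", dedup_preserve_order(&ids));
}
```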

Comment on lines +301 to +307
// Log warning if there are still unprocessed keys after all retries
if !request_items.is_empty() {
tracing::warn!(
"Failed to get some keys after {} retries for {} table(s)",
MAX_RETRIES,
request_items.len()
);

Copilot AI Feb 9, 2026


get_many logs a warning and returns partial results when UnprocessedKeys remain after retries. Callers (notably ownership verification in batch delete) can misinterpret this as “not found” and return 404/403 incorrectly. Consider returning an error when unprocessed keys remain after the retry budget so callers can surface a 5xx/503 instead of silently dropping items.



Development

Successfully merging this pull request may close these issues.

Implement delete many operation

3 participants