Merged
1 change: 1 addition & 0 deletions .serena/.gitignore
Original file line number Diff line number Diff line change
@@ -0,0 +1 @@
/cache
1 change: 1 addition & 0 deletions .serena/memories/csv_workflow_documentation_oct_18_2025.md
@@ -0,0 +1 @@
Oct 18 2025: Documented the CSV upload workflow in TECHNICAL_DOCUMENTATION.md, covering upload/validate (CsvUploadCard), transform (CsvTransformPanel), review (CsvDataReviewPanel), and the shared useCsvUploadWorkflow hook that coordinates inventory vs sales imports.
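The three-step flow that `useCsvUploadWorkflow` coordinates can be sketched as a simple step progression. The hook itself is not shown in this diff, so everything beyond the step names in the note above is illustrative:

```javascript
// Hedged sketch of the upload → transform → review progression described above.
// Only the step names come from the memory note; the function is illustrative,
// not the hook's actual API.
const STEPS = ['upload', 'transform', 'review'];

function nextStep(current) {
  const i = STEPS.indexOf(current);
  // Returns null when the step is unknown or the workflow is finished
  if (i === -1 || i === STEPS.length - 1) return null;
  return STEPS[i + 1];
}

console.log(nextStep('upload'));  // → 'transform'
console.log(nextStep('review'));  // → null
```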
1 change: 1 addition & 0 deletions .serena/memories/frontend_lint_cleanup_oct_18_2025.md
@@ -0,0 +1 @@
Oct 18 2025: Cleared all ESLint errors in the Vite React frontend by fixing prop-types (PeriodSelector, POS Square panels), escaping unescaped quotes, replacing unused imports, and stabilizing hooks (useDataReview via refs, useCallback in POS panels). Updated CSV and inventory test suites to import Vitest globals explicitly so npm run lint:frontend now passes cleanly.
9 changes: 9 additions & 0 deletions .serena/memories/issue47_backend_phase1_progress.md
@@ -0,0 +1,9 @@
## Issue 47 – Backend Phase 1 Progress (CSV Upload)
- Added multer/csv-parse dependencies via npm workspace install; noted upstream advisory on Multer 1.x for future upgrade.
- Extended backend settings with configurable CSV upload limits (max size, mime types, extensions) driven by env overrides.
- Created migrations for `csv_uploads` and `csv_upload_batches` tables to persist validation results in 1000-row batches.
- Implemented Sequelize models `CsvUpload` and `CsvUploadBatch` with associations and helper methods.
- Built CSV upload middleware, controller, and routes under `/api/v1/data/csv` for inventory/sales upload endpoints.
- Implemented parser + service layer (header normalization, validation rules, batch persistence, Restaurant365-aligned aliases) with summary responses and structured error reporting.
- Added Vitest coverage for `CsvUploadService` (valid upload path + header failure) using mocked models.
- Updated API router metadata and refreshed task TODO memory to track remaining phases.
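The header normalization and alias matching mentioned above can be sketched roughly as follows. The alias table, column names, and function names are assumptions for illustration, not the actual `CsvUploadService` internals:

```javascript
// Illustrative alias map in the spirit of the Restaurant365-aligned aliases
// noted above; the real table lives in the parser/service layer.
const HEADER_ALIASES = {
  item: ['item', 'item name', 'product'],
  quantity: ['quantity', 'qty', 'count'],
  unit_cost: ['unit cost', 'cost', 'price per unit']
};

// Lowercase, trim, and collapse whitespace before alias lookup
function normalizeHeader(raw) {
  return String(raw).trim().toLowerCase().replace(/\s+/g, ' ');
}

// Maps raw CSV headers to canonical field names; null marks an
// unrecognized column that validation can then report.
function mapHeaders(rawHeaders) {
  return rawHeaders.map((raw) => {
    const normalized = normalizeHeader(raw);
    const match = Object.entries(HEADER_ALIASES)
      .find(([, aliases]) => aliases.includes(normalized));
    return match ? match[0] : null;
  });
}

console.log(mapHeaders(['Item Name', ' QTY ', 'Price Per Unit', 'Mystery']));
// → ['item', 'quantity', 'unit_cost', null]
```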
7 changes: 7 additions & 0 deletions .serena/memories/issue47_backend_phase2_progress.md
@@ -0,0 +1,7 @@
## Issue 47 – Backend Phase 2 Progress (CSV Transformation)
- Implemented `CsvInventoryTransformer` with shared helper reuse, unit tests, and validation coverage for inventory CSV headers and data coercion.
- Implemented `CsvSalesTransformer` with fuzzy location/item matching, unit tests, and safeguards for missing metadata.
- Added `CsvTransformService` orchestrating inventory/sales transforms with error thresholds, persistence to `CsvTransform`, and structured result summaries.
- Wired up `transformInventoryUpload` and `transformSalesUpload` controller endpoints plus routing under `/api/v1/data/csv`, including dry-run handling and request validation.
- Extended Vitest coverage: `CsvTransformService.test.js`, `CsvSalesTransformer.test.js`, and updated `CsvUploadService.test.js` leveraging non-mutating model overrides.
- Verified backend suite via `npm test`; all suites pass with CSV upload + transform workflows operational.
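The fuzzy location/item matching mentioned above might look something like the following. The real `CsvSalesTransformer` logic and thresholds are not shown in this diff, so the normalization rules and fallback strategy here are assumptions:

```javascript
// Strip punctuation and case so near-identical names compare equal
function normalizeName(name) {
  return String(name).trim().toLowerCase().replace(/[^a-z0-9 ]/g, '');
}

// Returns the candidate whose normalized name matches exactly, falls back
// to a prefix match, and returns null when nothing is close enough.
function fuzzyMatch(target, candidates) {
  const wanted = normalizeName(target);
  const exact = candidates.find((c) => normalizeName(c.name) === wanted);
  if (exact) return exact;
  return candidates.find((c) => normalizeName(c.name).startsWith(wanted)) || null;
}

const locations = [
  { id: 1, name: 'Main St. Cafe' },
  { id: 2, name: 'Airport Kiosk' }
];

console.log(fuzzyMatch('main st cafe', locations).id); // → 1 (exact after normalization)
console.log(fuzzyMatch('Airport', locations).id);      // → 2 (prefix fallback)
console.log(fuzzyMatch('Downtown', locations));        // → null (no match)
```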
6 changes: 6 additions & 0 deletions .serena/memories/project_overview.md
@@ -0,0 +1,6 @@
# CostFX Project Overview
- Purpose: AI-driven restaurant operations platform covering inventory, cost, and forecasting workflows with multi-agent architecture.
- Key components: Node.js/Express backend with PostgreSQL, React/Vite frontend with Redux Toolkit, AI agents (Inventory, Cost, Forecast), Docker-based local env, Terraform deploy scripts (currently de-emphasized).
- Domain focus: POS integrations (Square), variance analysis, recipe and inventory management for restaurants.
- Current emphasis: Square sales/import workflows, inventory variance features, strict testing and documentation standards.
- Important docs: docs/PROJECT_STATUS.md for progress, docs/TECHNICAL_DOCUMENTATION.md for architecture, .claude/claude.md for process rules.
8 changes: 8 additions & 0 deletions .serena/memories/suggested_commands.md
@@ -0,0 +1,8 @@
# Suggested Commands
- Setup: `npm install`, `npm run docker:up`, `npm run db:migrate`, `npm run db:seed` (or `npm run setup`).
- Dev servers: `npm run dev` (full stack), `npm run dev:backend`, `npm run dev:frontend`.
- Testing: `npm run test`, `npm run test:backend`, `npm run test:frontend`, plus `npm run test:watch` for both packages.
- Linting: `npm run lint`, `npm run lint:backend`, `npm run lint:frontend`, with `lint:fix` variants inside packages.
- Backend DB ops: `cd backend && npm run migrate`, `npm run migrate:up`, `npm run migrate:down`, `npm run db:seed`, `npm run setup` (test db prep).
- Frontend build/preview: `cd frontend && npm run build`, `npm run preview`.
- Docker helpers: `npm run docker:up`, `npm run docker:down`.
7 changes: 7 additions & 0 deletions .serena/memories/task_completion_checklist.md
@@ -0,0 +1,7 @@
# Task Completion Checklist
- Ensure plan approved before coding; document decisions and updates in docs/TECHNICAL_DOCUMENTATION.md and docs/PROJECT_STATUS.md when phases finish.
- Run `npm run lint`, `npm run build`, `npm run test` (all packages) and confirm success; rerun any failed command immediately.
- Verify `npm run dev` (or individual dev servers) start without errors.
- Update GitHub issue with progress comments and maintain knowledge graph memories.
- Avoid git commits/pushes/reverts unless supervisor directs; never leave TODOs or skipped tests.
- Add/refresh Swagger docs for API changes and keep documentation synchronized.
6 changes: 6 additions & 0 deletions .serena/memories/tech_stack_and_style.md
@@ -0,0 +1,6 @@
# Tech Stack & Conventions
- Backend: Node.js 18+, Express, Sequelize ORM, PostgreSQL, Redis (optional), Vitest for tests, ESLint (eslint:recommended, no unused vars, prefer const), services hold business logic.
- Frontend: React 18, Vite, Redux Toolkit, React Query, Tailwind, Vitest + Testing Library, ESLint with React plugins (no warnings allowed).
- Shared patterns: Service layer for business logic, models keep schema only, heavy use of custom hooks/components for frontend, Swagger docs required for APIs.
- Process rules: Follow .claude/claude.md (research → plan → implement, ask before coding, spawn agents for parallel work, never skip tests, no TODOs, add docs to docs/TECHNICAL_DOCUMENTATION.md only).
- Coding style: Meaningful names, early returns, no hard-coded secrets, no versioned function names, add concise comments only for complex logic.
31 changes: 31 additions & 0 deletions .serena/memories/wip_issue47_todos.md
@@ -0,0 +1,31 @@
# Issue 47 CSV Upload – TODOs

## Backend Phase 1 – Upload Infrastructure
- [x] Extend settings for CSV upload limits (max size, mime types)
- [x] Create Sequelize models for CSV uploads (metadata + staged data)
- [x] Add `/api/v1/data/csv` router with upload endpoints and multer middleware
- [x] Implement CSVParserService with validation + tests

## Backend Phase 2 – Transformation Pipeline
- [x] Implement CSVInventoryTransformer with helper reuse and tests
- [x] Implement CSVSalesTransformer with fuzzy matching + tests
- [x] Build transform controllers/routes (dry-run, error thresholds) + integration tests

## Frontend CSV UI
- [ ] New `/data-import/csv` route + module separate from Square components
- [ ] CSVUploadPanel with react-dropzone + validation feedback
- [ ] CSVFormatGuide with Restaurant365-style samples
- [ ] CSVTransformPanel + CSVDataReviewPanel reusing shared components/hooks
- [ ] Manual mapping interface for ambiguous matches
- [ ] Frontend service methods + Vitest component/service tests

## Documentation & Process
- [ ] Update Swagger docs for new API endpoints
- [ ] Append CSV architecture section to docs/TECHNICAL_DOCUMENTATION.md
- [ ] Update docs/PROJECT_STATUS.md after major milestones
- [ ] Keep GitHub issue comments and knowledge graph memories in sync

## Validation & QA
- [ ] End-to-end backend integration tests (upload → transform → DB)
- [ ] Frontend flow tests
- [ ] Final `npm run lint && npm run build && npm run test` plus dev server check
67 changes: 67 additions & 0 deletions .serena/project.yml
@@ -0,0 +1,67 @@
# language of the project (csharp, python, rust, java, typescript, go, cpp, or ruby)
# * For C, use cpp
# * For JavaScript, use typescript
# Special requirements:
# * csharp: Requires the presence of a .sln file in the project folder.
language: typescript

# whether to use the project's gitignore file to ignore files
# Added on 2025-04-07
ignore_all_files_in_gitignore: true
# list of additional paths to ignore
# same syntax as gitignore, so you can use * and **
# Was previously called `ignored_dirs`, please update your config if you are using that.
# Added (renamed) on 2025-04-07
ignored_paths: []

# whether the project is in read-only mode
# If set to true, all editing tools will be disabled and attempts to use them will result in an error
# Added on 2025-04-18
read_only: false

# list of tool names to exclude. We recommend not excluding any tools, see the readme for more details.
# Below is the complete list of tools for convenience.
# To make sure you have the latest list of tools, and to view their descriptions,
# execute `uv run scripts/print_tool_overview.py`.
#
# * `activate_project`: Activates a project by name.
# * `check_onboarding_performed`: Checks whether project onboarding was already performed.
# * `create_text_file`: Creates/overwrites a file in the project directory.
# * `delete_lines`: Deletes a range of lines within a file.
# * `delete_memory`: Deletes a memory from Serena's project-specific memory store.
# * `execute_shell_command`: Executes a shell command.
# * `find_referencing_code_snippets`: Finds code snippets in which the symbol at the given location is referenced.
# * `find_referencing_symbols`: Finds symbols that reference the symbol at the given location (optionally filtered by type).
# * `find_symbol`: Performs a global (or local) search for symbols with/containing a given name/substring (optionally filtered by type).
# * `get_current_config`: Prints the current configuration of the agent, including the active and available projects, tools, contexts, and modes.
# * `get_symbols_overview`: Gets an overview of the top-level symbols defined in a given file.
# * `initial_instructions`: Gets the initial instructions for the current project.
# Should only be used in settings where the system prompt cannot be set,
# e.g. in clients you have no control over, like Claude Desktop.
# * `insert_after_symbol`: Inserts content after the end of the definition of a given symbol.
# * `insert_at_line`: Inserts content at a given line in a file.
# * `insert_before_symbol`: Inserts content before the beginning of the definition of a given symbol.
# * `list_dir`: Lists files and directories in the given directory (optionally with recursion).
# * `list_memories`: Lists memories in Serena's project-specific memory store.
# * `onboarding`: Performs onboarding (identifying the project structure and essential tasks, e.g. for testing or building).
# * `prepare_for_new_conversation`: Provides instructions for preparing for a new conversation (in order to continue with the necessary context).
# * `read_file`: Reads a file within the project directory.
# * `read_memory`: Reads the memory with the given name from Serena's project-specific memory store.
# * `remove_project`: Removes a project from the Serena configuration.
# * `replace_lines`: Replaces a range of lines within a file with new content.
# * `replace_symbol_body`: Replaces the full definition of a symbol.
# * `restart_language_server`: Restarts the language server, may be necessary when edits not through Serena happen.
# * `search_for_pattern`: Performs a search for a pattern in the project.
# * `summarize_changes`: Provides instructions for summarizing the changes made to the codebase.
# * `switch_modes`: Activates modes by providing a list of their names
# * `think_about_collected_information`: Thinking tool for pondering the completeness of collected information.
# * `think_about_task_adherence`: Thinking tool for determining whether the agent is still on track with the current task.
# * `think_about_whether_you_are_done`: Thinking tool for determining whether the task is truly completed.
# * `write_memory`: Writes a named memory (for future reference) to Serena's project-specific memory store.
excluded_tools: []

# initial prompt for the project. It will always be given to the LLM upon activating the project
# (contrary to the memories, which are loaded on demand).
initial_prompt: ""

project_name: "CostFX"
92 changes: 92 additions & 0 deletions backend/migrations/1760500000000_create-csv-upload-tables.js
@@ -0,0 +1,92 @@
/**
* Migration: Create CSV upload metadata tables
*
* Purpose: Persist CSV upload validation results for multi-step transformation workflow
* Related: Issue #47 - CSV Upload for Inventory & Sales Data Import
*/

export const up = async (pgm) => {
pgm.createTable('csv_uploads', {
id: { type: 'serial', primaryKey: true, notNull: true },
restaurant_id: {
type: 'integer',
notNull: true,
references: 'restaurants',
onDelete: 'CASCADE'
},
upload_type: {
type: 'varchar(50)',
notNull: true,
comment: 'Inventory or sales CSV upload'
},
filename: { type: 'varchar(255)', notNull: true },
file_size_bytes: { type: 'integer', notNull: true },
mime_type: { type: 'varchar(100)', notNull: true },
extension: { type: 'varchar(10)', notNull: true },
status: {
type: 'varchar(50)',
notNull: true,
default: 'uploaded',
comment: 'uploaded | validated | failed | transformed'
},
rows_total: { type: 'integer', notNull: true, default: 0 },
rows_valid: { type: 'integer', notNull: true, default: 0 },
rows_invalid: { type: 'integer', notNull: true, default: 0 },
validation_errors: {
type: 'jsonb',
notNull: false,
comment: 'Aggregated validation error summary with counts'
},
metadata: {
type: 'jsonb',
notNull: false,
comment: 'Optional metadata (e.g., column headers, sample rows)'
},
created_at: { type: 'timestamptz', notNull: true, default: pgm.func('NOW()') },
updated_at: { type: 'timestamptz', notNull: true, default: pgm.func('NOW()') }
});

pgm.addConstraint('csv_uploads', 'csv_uploads_upload_type_check', {
check: "upload_type IN ('inventory', 'sales')"
});

pgm.createTable('csv_upload_batches', {
id: { type: 'serial', primaryKey: true, notNull: true },
upload_id: {
type: 'integer',
notNull: true,
references: 'csv_uploads',
onDelete: 'CASCADE'
},
batch_index: { type: 'integer', notNull: true, comment: 'Zero-based batch index' },
rows_total: { type: 'integer', notNull: true, default: 0 },
rows_valid: { type: 'integer', notNull: true, default: 0 },
rows_invalid: { type: 'integer', notNull: true, default: 0 },
rows: {
type: 'jsonb',
notNull: true,
default: pgm.func("'[]'::jsonb"),
comment: 'Validated CSV rows for this batch'
},
errors: {
type: 'jsonb',
notNull: true,
default: pgm.func("'[]'::jsonb"),
comment: 'Row-level validation errors for this batch'
},
created_at: { type: 'timestamptz', notNull: true, default: pgm.func('NOW()') }
});

pgm.addConstraint('csv_upload_batches', 'csv_upload_batches_unique_batch', {
unique: ['upload_id', 'batch_index']
});

pgm.createIndex('csv_uploads', ['restaurant_id', 'upload_type']);
pgm.createIndex('csv_uploads', ['status']);
pgm.createIndex('csv_uploads', ['created_at']);
};

export const down = async (pgm) => {
pgm.dropTable('csv_upload_batches');
pgm.dropTable('csv_uploads');
};
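A minimal sketch of the 1000-row batching these tables persist, mirroring the `csv_upload_batches` `rows`/`batch_index` layout (illustrative only; the actual service code is not part of this migration):

```javascript
// Splits validated rows into fixed-size batches; batch_index is zero-based,
// matching the column comment in the migration above.
function chunkRows(rows, batchSize = 1000) {
  const batches = [];
  for (let i = 0; i < rows.length; i += batchSize) {
    batches.push({
      batch_index: batches.length,
      rows: rows.slice(i, i + batchSize)
    });
  }
  return batches;
}

const batches = chunkRows(Array.from({ length: 2500 }, (_, i) => i));
console.log(batches.length);          // → 3
console.log(batches[2].rows.length);  // → 500
```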
116 changes: 116 additions & 0 deletions backend/migrations/1760500000001_create-csv-transforms.js
@@ -0,0 +1,116 @@
/* eslint-disable camelcase */

// Migration: Create csv_transforms table and expand source POS provider constraint
// Date: 2024-10-15
// Related to Issue #47 - CSV Import Pipeline (Transformation Phase)

export const up = async function(pgm) {
// Create csv_transforms table to track transformation runs
pgm.createTable('csv_transforms', {
id: {
type: 'serial',
primaryKey: true,
notNull: true
},
upload_id: {
type: 'integer',
notNull: true,
references: 'csv_uploads(id)',
onDelete: 'CASCADE'
},
restaurant_id: {
type: 'integer',
notNull: true,
references: 'restaurants(id)',
onDelete: 'CASCADE'
},
transform_type: {
type: 'varchar(50)',
notNull: true
},
status: {
type: 'varchar(50)',
notNull: true,
default: 'processing'
},
dry_run: {
type: 'boolean',
notNull: true,
default: false
},
processed_count: {
type: 'integer',
notNull: true,
default: 0
},
created_count: {
type: 'integer',
notNull: true,
default: 0
},
updated_count: {
type: 'integer',
notNull: true,
default: 0
},
skipped_count: {
type: 'integer',
notNull: true,
default: 0
},
error_count: {
type: 'integer',
notNull: true,
default: 0
},
error_rate: {
type: 'numeric(6,3)',
notNull: true,
default: 0
},
summary: {
type: 'jsonb',
notNull: true,
default: pgm.func('jsonb_build_object()')
},
errors: {
type: 'jsonb',
notNull: true,
default: pgm.func('jsonb_build_array()')
},
created_at: {
type: 'timestamp',
notNull: true,
default: pgm.func('current_timestamp')
},
updated_at: {
type: 'timestamp',
notNull: true,
default: pgm.func('current_timestamp')
},
completed_at: {
type: 'timestamp',
notNull: false
}
});

pgm.createIndex('csv_transforms', ['upload_id']);
pgm.createIndex('csv_transforms', ['restaurant_id']);
pgm.createIndex('csv_transforms', ['transform_type']);
pgm.createIndex('csv_transforms', ['status']);

// Update valid_pos_provider constraint to allow CSV sourced items
pgm.dropConstraint('inventory_items', 'valid_pos_provider', { ifExists: true });
pgm.addConstraint('inventory_items', 'valid_pos_provider', {
check: "source_pos_provider IN ('square', 'toast', 'clover', 'csv') OR source_pos_provider IS NULL"
});
};

export const down = async function(pgm) {
pgm.dropConstraint('inventory_items', 'valid_pos_provider', { ifExists: true });
pgm.addConstraint('inventory_items', 'valid_pos_provider', {
check: "source_pos_provider IN ('square', 'toast', 'clover') OR source_pos_provider IS NULL"
});

pgm.dropTable('csv_transforms');
};
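The `error_rate` column above (with its `numeric(6,3)` precision) suggests bookkeeping along these lines; the rounding, default threshold, and function name are assumptions, not the actual `CsvTransformService` code:

```javascript
// Sketch of the error-rate summary implied by the csv_transforms columns above.
// Rounds to three decimals to match numeric(6,3); the 10% threshold is an
// illustrative default, not the service's configured value.
function summarizeTransform({ processed, errors }, threshold = 0.1) {
  const errorRate = processed === 0 ? 0 : Number((errors / processed).toFixed(3));
  return {
    processed_count: processed,
    error_count: errors,
    error_rate: errorRate,
    status: errorRate > threshold ? 'failed' : 'completed'
  };
}

console.log(summarizeTransform({ processed: 200, errors: 5 }));
// → { processed_count: 200, error_count: 5, error_rate: 0.025, status: 'completed' }
```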
8 changes: 5 additions & 3 deletions backend/package.json
@@ -31,6 +31,7 @@
     "bcryptjs": "^2.4.3",
     "compression": "^1.7.4",
     "cors": "^2.8.5",
+    "csv-parse": "^5.6.0",
     "date-fns": "^3.6.0",
     "dotenv": "^16.4.5",
     "env-var": "^7.5.0",
@@ -41,15 +42,16 @@
     "jsonwebtoken": "^9.0.2",
     "langchain": "^0.2.16",
     "lodash": "^4.17.21",
+    "multer": "^1.4.5-lts.1",
+    "node-pg-migrate": "^8.0.3",
     "openai": "^4.56.0",
     "pg": "^8.12.0",
     "pg-hstore": "^2.3.4",
     "redis": "^4.7.0",
     "sequelize": "^6.37.3",
+    "square": "^37.1.0",
     "uuid": "^10.0.0",
-    "winston": "^3.14.2",
-    "node-pg-migrate": "^8.0.3",
-    "square": "^37.1.0"
+    "winston": "^3.14.2"
   },
   "devDependencies": {
     "@vitest/coverage-v8": "^2.0.0",