
Conversation

@drewsephski
Owner

@drewsephski drewsephski commented Oct 28, 2025

Summary by CodeRabbit

  • New Features

    • AI-powered workflow generation using Google Gemini
    • Guest mode for local workflow creation without sign-up
    • Credentials management system with encryption
    • Execution tracking and history with detailed logs
    • Webhook and scheduled workflow triggers
    • AI provider integrations (OpenAI, Claude, Google Gemini)
    • Communication nodes (Slack, Discord, Email)
    • Data operations (JSON parsing, filtering, code execution)
    • Database query nodes (PostgreSQL, MongoDB)
    • Node drag-and-drop to canvas
    • Animated dark theme toggle
  • Bug Fixes

    • Routing improvements
    • UX enhancements for guest and authenticated users
    • Landing page performance optimization
    • Guest workflow persistence
  • Documentation

    • Expanded feature descriptions and node type documentation
    • Added guest mode usage guide

@codecapyai

codecapyai bot commented Oct 28, 2025

🔍 Analyzing PR changes and preparing to run tests...

@netlify

netlify bot commented Oct 28, 2025

Deploy Preview for nodebasex failed.

🔨 Latest commit: 01f34d6
🔍 Latest deploy log: https://app.netlify.com/projects/nodebasex/deploys/69004775cbc64e000842b4a0

@vercel

vercel bot commented Oct 28, 2025

The latest updates on your projects.

Project: nodebase — Deployment: Error — Preview: Error — Updated: Oct 28, 2025 4:37am (UTC)

@coderabbitai

coderabbitai bot commented Oct 28, 2025

Caution

Review failed

Failed to post review comments

Walkthrough

This PR introduces a comprehensive workflow automation platform with credentials management, execution tracking, guest mode support, and 15+ new node types spanning webhooks, scheduling, AI services, database operations, and utilities. It adds encryption for credential storage, a workflow execution engine with job queuing, expanded database schema, and enhanced authentication flows supporting both authenticated and guest users.

Changes

Cohort / File(s) Summary
Windsurf Configuration
.windsurfrules/NEXTJS.md
Adds Next.js conventions and best practices guide including App Router, client components, naming conventions, and React Server Components guidance.
Documentation
README.md
Expands key features, adds AI Workflow Generation section with Google Gemini details, introduces Available Node Types documentation, Guest Mode subsection, reworked Getting Started with prerequisites/environment setup, expanded Credential Types, and Recent Bug Fixes section.
Authentication & Layout
lib/auth-utils.ts, app/(dashboard)/(editor)/layout.tsx
Adds getAuth() utility for retrieving session without enforcement. Introduces new layout component for editor dashboard with flex container.
Page Updates & Routing
app/(dashboard)/(editor)/workflows/[workflowId]/page.tsx, app/(dashboard)/(rest)/credentials/*, app/(dashboard)/(rest)/executions/*, app/(dashboard)/(rest)/workflows/page.tsx, app/page.tsx
Updates authentication checks from requireAuth() to getAuth(), adds prefetching guards for guest workflows, introduces HomePage landing component with auth/signup flows, and restructures pages with HydrateClient/ErrorBoundary/Suspense patterns.
Credentials Feature
features/credentials/*, lib/encryption.ts
Introduces complete CRUD UI for credentials with form validation, list view with search/pagination, hooks for data fetching/mutations, server-side TRPC router with encryption/decryption, and encryption utilities (AES-256-GCM).
Execution System
lib/execution-engine.ts, lib/execution-queue.ts, lib/execution-context.ts, lib/workflow-scheduler.ts
Adds ExecutionEngine orchestrating workflow node execution, ExecutionQueue for job management, ExecutionContextManager for variable/output tracking, and WorkflowScheduler for background processing.
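For context, the topological ordering an execution engine like this needs can be sketched with Kahn's algorithm; the node/edge shapes below are assumptions, not the PR's actual lib/execution-engine.ts code:

```typescript
type Edge = { from: string; to: string };

// Kahn's algorithm: repeatedly emit nodes with no remaining incoming edges.
export function topoSort(nodes: string[], edges: Edge[]): string[] {
  const indegree = new Map(nodes.map((n) => [n, 0]));
  const adjacency = new Map<string, string[]>(nodes.map((n) => [n, []]));
  for (const { from, to } of edges) {
    adjacency.get(from)!.push(to);
    indegree.set(to, (indegree.get(to) ?? 0) + 1);
  }
  const queue = nodes.filter((n) => indegree.get(n) === 0);
  const order: string[] = [];
  while (queue.length > 0) {
    const node = queue.shift()!;
    order.push(node);
    for (const next of adjacency.get(node)!) {
      const remaining = indegree.get(next)! - 1;
      indegree.set(next, remaining);
      if (remaining === 0) queue.push(next);
    }
  }
  // If any node was never emitted, the graph has a cycle and cannot be executed.
  if (order.length !== nodes.length) throw new Error("Cycle detected in workflow graph");
  return order;
}
```

The cycle check matters for workflow editors: a user can wire nodes into a loop, and the engine should fail the execution up front rather than spin forever.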
Guest Workflows & Storage
lib/guest-workflow-storage.ts
Provides localStorage-backed guest workflow persistence with migration to authenticated user accounts.
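A minimal sketch of localStorage-backed guest persistence, with the storage injected so the logic is testable outside a browser; the key name, workflow shape, and function names are illustrative assumptions, not the PR's actual lib/guest-workflow-storage.ts API:

```typescript
type GuestWorkflow = { id: string; name: string; nodes: unknown[] };

// Structural subset of the browser Storage interface, so localStorage fits directly.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
  removeItem(key: string): void;
}

const STORAGE_KEY = "guest-workflows"; // assumed key, for illustration

export function getGuestWorkflows(store: KVStore): GuestWorkflow[] {
  try {
    return JSON.parse(store.getItem(STORAGE_KEY) ?? "[]") as GuestWorkflow[];
  } catch {
    return []; // corrupted storage degrades to an empty list instead of crashing
  }
}

export function saveGuestWorkflow(store: KVStore, workflow: GuestWorkflow): void {
  // Upsert by id so repeated saves don't accumulate duplicates.
  const all = getGuestWorkflows(store).filter((w) => w.id !== workflow.id);
  all.push(workflow);
  store.setItem(STORAGE_KEY, JSON.stringify(all));
}

export function clearGuestWorkflows(store: KVStore): void {
  store.removeItem(STORAGE_KEY);
}
```

In the browser these would be called with window.localStorage; the review's note about migration atomicity amounts to only calling clearGuestWorkflows after the server confirms the migration succeeded.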
Node Helpers & Utilities
lib/node-helpers.ts
Exports 13+ utility functions for credential access, node data validation, condition evaluation, JSON path extraction, HTTP requests with retry, and data type conversions.
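One of the listed helpers, JSON path extraction, can be illustrated with a small dot-path walker; the function name and exact semantics are assumptions about what lib/node-helpers.ts provides:

```typescript
// Walk a dot-separated path ("user.name", "items.0.id") through nested data,
// returning undefined rather than throwing when any segment is missing.
export function extractByPath(data: unknown, path: string): unknown {
  return path.split(".").reduce<unknown>((current, segment) => {
    if (current === null || typeof current !== "object") return undefined;
    return (current as Record<string, unknown>)[segment];
  }, data);
}
```

Returning undefined on missing segments (instead of throwing) lets downstream nodes treat absent upstream output as a normal, handleable case.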
Node Components — Triggers
features/triggers/components/webhook-trigger/*, features/triggers/components/schedule-trigger/*
Adds WebhookTriggerNode/Dialog and ScheduleTriggerNode/Dialog with configuration UI for HTTP methods/auth types and cron/interval scheduling.
Node Components — Communication
features/communication/components/{slack-send,discord-send,email-send}/*
Introduces SlackSendNode/Dialog, DiscordSendNode/Dialog, EmailSendNode/Dialog with credential selection and message configuration.
Node Components — AI Services
features/ai/components/{openai-chat,anthropic-chat,google-gemini-chat}/*, features/ai/server/workflow-generator.ts
Adds OpenAIChatNode, AnthropicChatNode, GoogleGeminiChatNode with model/prompt configuration; introduces WorkflowChatDialog for AI-powered workflow generation and TRPC router with rate limiting.
Node Components — Data Operations
features/data-operations/components/{json-parse,filter,set-variable,code-execute}/*
Adds JsonParseNode, FilterNode, SetVariableNode, CodeExecuteNode with operation-specific dialogs and condition/transformation UIs.
Node Components — Database
features/database/components/{postgres-query,mongodb-query}/*
Introduces PostgresQueryNode/Dialog and MongoDBQueryNode/Dialog for SQL/MongoDB operations with credential integration.
Node Components — Utilities
features/utilities/components/{delay,if-condition,merge}/*
Adds DelayNode, IfConditionNode, MergeNode with configurable timing, condition logic, and merge strategies.
Workflow Features
features/workflows/components/workflows.tsx, features/workflows/components/execution-history.tsx, features/workflows/components/inline-workflow-editor.tsx, features/workflows/hooks/use-guest-workflows.ts, features/workflows/hooks/use-workflows.ts, features/workflows/server/routers.ts
Restructures workflows UI for guest/authenticated splits, adds execution history display, inline editor dialog, guest workflow hooks, and TRPC router with execution/webhook/schedule triggers.
Execution Features
features/executions/components/*, features/executions/hooks/*, features/executions/params.ts, features/executions/server/*
Implements execution listing/details views, suspense hooks for data fetching, params descriptor, and server-side prefetch helpers.
Editor Components
features/editor/components/editor-client.tsx, features/editor/components/editor-header.tsx, features/editor/components/editor.tsx, features/editor/components/add-node-button.tsx
Introduces EditorClient wrapper with error boundary, updates editor header with guest/auth-aware save/execute flows, splits Editor into guest/authenticated paths, and connects AddNodeButton to node selector.
Auth Forms
features/auth/components/{login-form,register-form}.tsx
Adds guest workflow migration logic to login/signup flows with progress toasts.
UI Components
components/ui/{checkbox,scroll-area,slider,tabs}.tsx
Adds Radix UI wrapper components for Checkbox, ScrollArea, Slider, and Tabs with Tailwind styling.
Layout & Common UI
components/app-header.tsx, components/entity-components.tsx, components/error-boundary.tsx, components/guest-mode-banner.tsx, components/auth-prompt.tsx, components/upgrade-modal.tsx, components/node-selector.tsx
Adds AnimatedThemeToggler to app header, extends EntityHeader with extraActions prop, introduces ErrorBoundary for error catching, GuestModeBanner with dismissal, AuthPrompt dialog, removes onClose from UpgradeModal, adds drag-and-drop and category metadata to NodeSelector.
Configuration
config/node-components.ts
Expands nodeComponents mapping to include 17 new node types across categories.
Database Schema
prisma/schema.prisma
Adds Credential model with encryption, extends User/Session/Workflow models, introduces Execution/ExecutionStep/ExecutionLog models, adds ExecutionJob model for queue, expands NodeType enum, and adds LogLevel/ExecutionStatus/StepStatus/JobStatus enums.
API Routes
app/api/webhooks/[workflowId]/route.ts
Implements POST webhook handler that validates workflow ownership and enqueues execution jobs.
TRPC Setup
trpc/client.tsx, trpc/routers/_app.ts
Enhances HTTP batch link with error logging and exponential backoff retry logic; adds credentialsRouter and workflowGeneratorRouter to app router.
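The retry behavior described here — exponential backoff that never retries auth failures — can be sketched as follows; the function name, error shape, and delays are assumptions, not the PR's actual trpc/client.tsx code:

```typescript
// Retry a transient failure with exponential backoff (base, 2x, 4x, ...),
// but rethrow 401/403 immediately since retrying auth errors cannot succeed.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const status = (err as { status?: number }).status;
      const isAuthError = status === 401 || status === 403;
      if (isAuthError || attempt >= maxAttempts - 1) throw err;
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```

This matches the review checklist item below: transient failures (timeouts, 5xx) get retried, while auth errors surface to the caller on the first attempt.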
Dependencies
package.json
Adds Radix UI checkbox, scroll-area, slider, tabs packages.
Testing
test-workflow-auth.js
Adds test script validating auth flows and drag-and-drop interactions.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Browser as Browser/UI
    participant Server
    participant Queue as Job Queue
    participant Engine as Execution Engine
    participant DB as Database

    rect rgb(200, 220, 255)
    note over User,DB: Authenticated Workflow Execution
    User->>Browser: Clicks execute button
    Browser->>Server: POST /trpc/workflows.execute
    Server->>DB: Verify workflow ownership
    Server->>Engine: executeWorkflow(workflowId)
    Engine->>DB: Create Execution record
    Engine->>DB: Load workflow nodes/connections
    Engine->>Engine: Calculate topological sort
    loop For each node (in order)
        Engine->>Engine: executeNode(nodeId)
        Engine->>Engine: dispatch by NodeType
        Engine->>DB: Save execution step + logs
    end
    Engine->>DB: Update Execution (COMPLETED/FAILED)
    Server->>Browser: Return jobId
    end

    rect rgb(220, 255, 220)
    note over User,DB: Webhook/Scheduled Trigger
    Server->>Queue: enqueueJob(webhook payload)
    Queue->>DB: Save ExecutionJob (PENDING)
    Note over Queue: Background Scheduler
    Queue->>Queue: dequeueJob()
    Queue->>Engine: executeWorkflow()
    Engine->>DB: Create Execution, run nodes
    Queue->>DB: markJobCompleted()
    end

    rect rgb(255, 240, 220)
    note over User,DB: Guest Mode Workflow
    User->>Browser: Create/edit workflow (no auth)
    Browser->>Browser: saveGuestWorkflow() → localStorage
    User->>Browser: Sign up
    Browser->>Server: POST /trpc/workflows.migrateGuestWorkflows
    Server->>DB: Create workflows for user
    Server->>Browser: Return migrated IDs
    Browser->>Browser: clearGuestWorkflows() → localStorage
    end

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120+ minutes

Areas requiring extra attention:

  • Encryption & Decryption Logic (lib/encryption.ts, credentials TRPC router): Verify AES-256-GCM implementation, IV/AAD handling, and key management from environment.
  • Execution Engine (lib/execution-engine.ts): Complex orchestration with 18 node type handlers, topological sorting, and error propagation; verify all node handlers are correctly implemented and error states propagate properly.
  • Job Queue & Scheduler (lib/execution-queue.ts, lib/workflow-scheduler.ts): Verify atomic dequeue operations, job status transitions, and scheduler's background loop doesn't create race conditions.
  • Auth Flow Changes: Multiple pages changed from requireAuth() to getAuth(), affecting access control; verify guest/authenticated/unauthenticated paths are correctly guarded.
  • Guest Workflow Migration (lib/guest-workflow-storage.ts, auth forms): Verify localStorage quota handling and migration atomicity; ensure failed migrations don't lose data.
  • Database Schema Migrations: New Credential, Execution, ExecutionStep, ExecutionLog, ExecutionJob models with foreign keys; verify Prisma migrations and relation constraints.
  • TRPC Retry Logic (trpc/client.tsx): Exponential backoff and retry conditions; verify no retries on 401/403 (auth errors) but retries on transient failures.
  • Node Drag-and-Drop (components/node-selector.tsx, features/editor/components/editor.tsx): Verify dataTransfer serialization, drop zone hit detection, and coordinate transformation from screen to canvas.
  • Credential Storage & Masking: Verify masked data doesn't reveal sensitive info in API responses and that decryption is only done when necessary.
  • Error Boundary & Fallback UI (components/error-boundary.tsx): Verify error categorization and navigation redirects work correctly in all scenarios.
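As background for the drag-and-drop item above, the dataTransfer payload round-trip typically looks like the following sketch. The MIME type, payload shape, and function names are assumptions; real code would pass these strings to event.dataTransfer.setData on dragstart and getData on drop:

```typescript
type DragPayload = { nodeType: string; label: string };

// Custom MIME type (assumed name) keeps node drags distinguishable from plain text.
export const NODE_DRAG_MIME = "application/x-node";

export function serializeDragPayload(payload: DragPayload): string {
  return JSON.stringify(payload);
}

// Validate on the drop side: browsers hand back arbitrary strings, so malformed
// or foreign drag data should be rejected (null) rather than crash the canvas.
export function parseDragPayload(raw: string): DragPayload | null {
  try {
    const parsed = JSON.parse(raw) as Partial<DragPayload>;
    if (typeof parsed.nodeType !== "string" || typeof parsed.label !== "string") return null;
    return { nodeType: parsed.nodeType, label: parsed.label };
  } catch {
    return null;
  }
}
```

Validating on drop is the defensive half of the review item: serialization is trivial, but the drop handler cannot trust that the incoming string came from this app's drag source.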

Possibly related PRs

  • PR 07: ai-providers #6: Adds AI-related router and testAi procedures in TRPC setup, parallel work on AI workflow generation feature.
  • PR 16: node-selector #15: Updates NodeSelector functionality with NodeTypeOption category metadata and drag-and-drop, overlapping node UI changes.
  • PR 09: sidebar-layout #8: Modifies auth handling in editor workflow page (requireAuth → getAuth pattern), same authentication refactoring approach.

Poem

🐰 A workflow engine springs to life,
With nodes that execute without strife,
Credentials locked tight with cipher's might,
Guest workflows saved in browser's night,
From queue to queue, jobs take their flight!

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Docstring Coverage — ⚠️ Warning: docstring coverage is 44.68%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
Title Check — ✅ Passed: the title "18:ai-integration" relates to the changeset, since AI workflow generation via Google Gemini is a significant addition here. It is too generic for the breadth of this PR, though, which also adds a credentials management system, an execution tracking and logging pipeline, many new node types (triggers, communication, AI services, data operations, database operations, utilities), guest workflow enhancements, authentication refactoring, and schema expansions; a teammate scanning history would not get a clear picture of what the PR primarily accomplishes.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch 18-ai-integration

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@codecapyai

codecapyai bot commented Oct 28, 2025

🚀 Scrapybara Ubuntu instance started!

Interactive stream

⚠️ Error fetching GitHub variables, continuing setup:

status_code: 502, body: {'detail': "Error communicating with container: Client error '400 Bad Request' for url 'http://54.153.120.199:30029/env'\nFor more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400"}

@codecapyai

codecapyai bot commented Oct 28, 2025

🔧 Setting up test environment...

Agent Steps
Cloning repository drewsephski/nodebase...

First, let's verify and cd into the repository location:
bash: {'command': 'cd /home/scrapybara/Documents/drewsephski/nodebase && pwd'}
Let's check if Node.js and PostgreSQL are installed:
bash: {'command': 'node --version && which psql'}
PostgreSQL command isn't found, so let's install it:
bash: {'command': 'sudo apt update && sudo apt install -y postgresql postgresql-contrib'}
Install pnpm globally:
bash: {'command': 'sudo npm install -g pnpm'}
Let's check if .env.example exists and create .env.local:
str_replace_editor: [REDACTED]
Let's check for .env:
str_replace_editor: [REDACTED]
Before proceeding further, I need to know what environment variables are available to create the .env file properly. Could you please share:

1. The list of environment variables that are available in the environment
2. If possible, share the contents of .env.example so we know the required format

This information is crucial because:
- The instructions mention several required variables (DATABASE_URL, BETTER_AUTH_SECRET, BETTER_AUTH_URL, etc.)
- We need to create a proper .env file for the application to work
- Without knowing the available variables and their format, we risk creating an incomplete or incorrect configuration

Please provide these details so I can continue with the setup process.

@codecapyai

codecapyai bot commented Oct 28, 2025

❌ Something went wrong:

1 validation error for SetupSchema
  Input should be a valid dictionary or instance of SetupSchema [type=model_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.10/v/model_type


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 80

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (7)
components/entity-components.tsx (1)

1-1: Remove unused import.

The never import from zod is not used anywhere in this file.

Apply this diff:

-import { never } from "zod";
 import { Button } from "./ui/button";
app/(dashboard)/(rest)/credentials/[credentialId]/page.tsx (1)

3-15: Fix params typing and remove unnecessary await.

Next.js 14 and earlier pass params as a plain object, so the Promise typing and the await are unnecessary there. (Awaiting a non-Promise is a harmless no-op rather than an error; note that in Next.js 15 params is delivered as a Promise, in which case the original typing would be correct.)

 interface PageProps {
-  params: Promise<{ credentialId: string }>;
+  params: { credentialId: string };
 }
 
 // http://localhost:3000/credentials/123
 
 const Page = async ({ params }: PageProps) => {
   const session = await getAuth();
-  const { credentialId } = await params;
+  const { credentialId } = params;
 
   // If user is not authenticated, still show the page but they won't be able to perform actions
   return <p>Credential id: {credentialId}</p>;
 }
features/auth/components/login-form.tsx (2)

38-59: Fix double navigation and improve error handling.

The current implementation has multiple issues:

  1. Double navigation: The callbackURL: "/workflows" causes automatic navigation, then router.push("/workflows") navigates again.
  2. Race condition: Migration and navigation happen concurrently, potentially navigating before migration completes.
  3. Silent failures: Migration errors are only logged to console—users receive no feedback.

Apply this diff to fix the issues:

 const onSubmit = async (values: LoginFormValues) => {
     await authClient.signIn.email({
         email: values.email,
         password: values.password,
-        callbackURL: "/workflows",
     }, {
         onSuccess: async () => {
             const guestWorkflows = getGuestWorkflows();
             if (guestWorkflows.length > 0) {
                 try {
                     await migrateMutation.mutateAsync();
+                    toast.success("Workflows migrated successfully!");
                 } catch (error) {
-                    console.error("Failed to migrate guest workflows:", error);
+                    toast.error("Failed to migrate workflows. Please try again.");
+                    console.error("Failed to migrate guest workflows:", error);
                 }
             }
             router.push("/workflows");
         },
         onError: (ctx) => {
             toast.error(ctx.error.message);
         },
     });
 };

61-81: Apply the same fixes to social login flow.

The social login handler has identical issues: double navigation, race conditions, and silent error handling.

Apply this diff:

 const handleSocialLogin = async (provider: "github" | "google") => {
     await authClient.signIn.social({
         provider,
-        callbackURL: "/workflows",
     }, {
         onSuccess: async () => {
             const guestWorkflows = getGuestWorkflows();
             if (guestWorkflows.length > 0) {
                 try {
                     await migrateMutation.mutateAsync();
+                    toast.success("Workflows migrated successfully!");
                 } catch (error) {
-                    console.error("Failed to migrate guest workflows:", error);
+                    toast.error("Failed to migrate workflows. Please try again.");
+                    console.error("Failed to migrate guest workflows:", error);
                 }
             }
             router.push("/workflows");
         },
         onError: (ctx) => {
             toast.error(ctx.error.message);
         },
     });
 };
features/workflows/server/routers.ts (1)

44-52: Validate node.type against enum; don’t cast strings/null.

Accepting string/nullish and casting to NodeType will break writes (e.g., "initial" vs NodeType.INITIAL).

-            type: z.string().nullish(),
+            type: z.nativeEnum(NodeType),

Apply in both updateNodes and update input schemas. Also ensure client sends valid enum or map client types to enum before persisting.

Also applies to: 110-116

features/workflows/components/workflows.tsx (2)

324-355: Hooks rule violation: conditional hook call via ternary.

isAuthenticated ? useRemoveWorkflow(...) : useDeleteGuestWorkflow() calls hooks conditionally. This can break render order.

-export const WorkflowItem = ({ data }: { data: Workflow }) => {
+export const WorkflowItem = ({ data }: { data: Workflow }) => {
   const { data: authData } = useQuery({
     queryKey: ["session"],
     queryFn: () => authClient.useSession(),
   });
   const isAuthenticated = !!authData?.data?.user;
-  const removeWorkflow = isAuthenticated ? useRemoveWorkflow(data.id) : useDeleteGuestWorkflow();
+  const removeWorkflowMutation = useRemoveWorkflow(data.id);
+  const deleteGuestWorkflowMutation = useDeleteGuestWorkflow();
 
   const handleRemove = () => {
-    removeWorkflow.mutate(isAuthenticated ? { id: data.id } : data.id, {
+    const mutate = isAuthenticated
+      ? () => removeWorkflowMutation.mutate({ id: data.id }, callbacks)
+      : () => deleteGuestWorkflowMutation.mutate(data.id, callbacks);
+    const callbacks = {
       onSuccess: () => {
         toast.success(`Workflow ${data.name} removed`);
       },
       onError: (error) => {
         toast.error(`Failed to remove workflow: ${error.message}`);
       },
-    });
+    };
+    mutate();
   };
 
   return (
     <EntityItem
...
-      isRemoving={removeWorkflow.isPending}
+      isRemoving={isAuthenticated ? removeWorkflowMutation.isPending : deleteGuestWorkflowMutation.isPending}
     />
   );
 };

231-243: AuthenticatedWorkflowsPagination must be wrapped in a Suspense boundary.

The pagination component calls useSuspenseWorkflows() at line 233 but is rendered outside the existing Suspense boundary. The boundary in app/(dashboard)/(rest)/workflows/page.tsx only wraps <WorkflowsList />, while the pagination is rendered as a separate prop in EntityContainer within WorkflowsContainer, leaving it unprotected.

Comment on lines +1 to +5
---
trigger: glob
description: This rule explains Next.js conventions and best practices for fullstack development.
globs: **/*.js,**/*.jsx,**/*.ts,**/*.tsx
---

🛠️ Refactor suggestion | 🟠 Major

Fix front matter: make globs a YAML list (prevents rule parser misread).
Using a comma-separated string is brittle. Prefer explicit array.

Apply:

 --- 
 trigger: glob
 description: This rule explains Next.js conventions and best practices for fullstack development.
-globs: **/*.js,**/*.jsx,**/*.ts,**/*.tsx
+globs:
+  - "**/*.js"
+  - "**/*.jsx"
+  - "**/*.ts"
+  - "**/*.tsx"
 ---
🤖 Prompt for AI Agents
.windsurfrules/NEXTJS.md lines 1-5: the front matter uses a comma-separated
string for `globs`, which can be misparsed; replace the `globs` value with a
YAML sequence (each pattern on its own line, prefixed with a dash) so it becomes
an explicit list of glob strings in the front matter.

@@ -0,0 +1,9 @@
const Layout = ({ children }: { children: React.ReactNode }) => {

⚠️ Potential issue | 🟡 Minor

Add missing React import for React.ReactNode type.

The component uses React.ReactNode in the type annotation but doesn't import React. While the new JSX transform in Next.js 15/React 19 eliminates the need to import React for JSX, the React.ReactNode type reference still requires it.

Apply this diff to add the import:

+import React from "react";
+
 const Layout = ({ children }: { children: React.ReactNode }) => {
🤖 Prompt for AI Agents
In app/(dashboard)/(editor)/layout.tsx around line 1, the component's type
annotation uses React.ReactNode but React isn't imported; add a type-only import
from React and update the prop type to use it (e.g., import type { ReactNode }
from 'react' and annotate children as ReactNode) so the type reference is
resolved without introducing a runtime React import.

Comment on lines +13 to +30
const session = await getAuth();
const { executionId } = await params;

return <p>Execution id: {executionId}</p>;
}
// Only prefetch execution details if user is authenticated
if (session) {
await prefetchExecutionDetails(executionId);
}

return (
<HydrateClient>
<ErrorBoundary>
<Suspense fallback={<div>Loading...</div>}>
<ExecutionDetail executionId={executionId} />
</Suspense>
</ErrorBoundary>
</HydrateClient>
);
};

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify authentication enforcement and user experience for execution details.

The page conditionally prefetches execution data based on session presence, but unconditionally renders ExecutionDetail. This creates two concerns:

  1. Security: If the tRPC procedure called by useSuspenseExecutionDetails doesn't enforce authentication, execution data could be exposed to unauthenticated users.
  2. UX: Unauthenticated users will render the component but have no prefetched data, likely resulting in errors or confusing empty states.

Run this script to verify that execution detail procedures have proper authentication guards:

If guest access is not intended, consider either:

  • Redirecting unauthenticated users before rendering, or
  • Showing a proper authentication required state instead of the detail component

🏁 Script executed:

#!/bin/bash
# Description: Check if execution detail procedures enforce authentication

# Search for getExecutionDetails procedure definition
rg -A 10 "getExecutionDetails.*procedure" --type ts -g "**/workflows/server/routers.ts"

# Check if it uses protectedProcedure or baseProcedure
rg "(protectedProcedure|baseProcedure).*getExecutionDetails" --type ts -g "**/workflows/server/**"

Length of output: 0


I need to search more broadly to locate the actual procedure definitions and understand how data is being fetched.


🏁 Script executed:

#!/bin/bash
# Search for ExecutionDetail component to find how it fetches data
echo "=== Finding ExecutionDetail component ==="
fd -e tsx -e ts | xargs grep -l "ExecutionDetail" | head -20

# Search for tRPC procedures related to execution broadly
echo -e "\n=== Finding execution-related procedures ==="
fd -path "*/server/**" -e ts | xargs grep -n "procedure.*execution" -i | head -20

# Search for procedure definitions more broadly
echo -e "\n=== Finding procedure file structure ==="
fd -path "*/workflows/server/**" -e ts | head -20

# Look for useSuspenseExecutionDetails hook
echo -e "\n=== Finding useSuspenseExecutionDetails hook ==="
rg "useSuspenseExecutionDetails" --type ts -n

Length of output: 1079


🏁 Script executed:

#!/bin/bash
# Read the useSuspenseExecutionDetails hook to find the procedure call
echo "=== useSuspenseExecutionDetails hook implementation ==="
cat features/executions/hooks/use-executions.ts

# Look for execution-related tRPC routes/routers
echo -e "\n=== Finding tRPC router files ==="
fd -type f -name "*router*" | grep -E "(execution|trpc)" | head -10

# Find all server directories related to executions
echo -e "\n=== Finding server files in executions ==="
find features/executions -type f -name "*.ts" -path "*/server/*"

# Check prefetch.ts for auth context
echo -e "\n=== prefetch.ts content ==="
cat features/executions/server/prefetch.ts

Length of output: 1556


🏁 Script executed:

#!/bin/bash
# Find tRPC router structure
echo "=== Finding trpc directories and files ==="
find . -type d -name "trpc" | head -10

# Find workflows router specifically
echo -e "\n=== Searching for workflows router ==="
fd -e ts | xargs grep -l "getExecutionDetails" | head -20

# Search for router definitions with procedure pattern
echo -e "\n=== Searching for procedure definitions ==="
rg "getExecutionDetails" -n -A 3 -B 3 --type ts | head -50

Length of output: 1015


🏁 Script executed:

#!/bin/bash
# Explore trpc structure
echo "=== Contents of ./trpc ==="
find ./trpc -type f -name "*.ts" | head -30

# Look for workflows router specifically
echo -e "\n=== Searching for workflows router definition ==="
fd -e ts -path "*/trpc/*" | xargs grep -l "workflows" | head -20

# Search for procedure definitions with router pattern
echo -e "\n=== Searching in router files ==="
rg "\.query.*getExecutionDetails|getExecutionDetails.*query" --type ts -A 5 -B 5

Length of output: 1466


🏁 Script executed:

#!/bin/bash
# Read the main app router
echo "=== Main tRPC router (_app.ts) ==="
cat ./trpc/routers/_app.ts

# Also check if there are more router files
echo -e "\n=== All files in routers directory ==="
find ./trpc/routers -type f

Length of output: 562


🏁 Script executed:

#!/bin/bash
# Find and read the workflow router
echo "=== Finding workflows/server/routers files ==="
find features/workflows/server -type f -name "*router*"

# Also check what files are in that directory
echo -e "\n=== All files in workflows/server ==="
find features/workflows/server -type f

# Read the routers file
echo -e "\n=== Workflow router content ==="
cat features/workflows/server/routers.ts 2>/dev/null || cat features/workflows/server/routers/index.ts 2>/dev/null || find features/workflows/server -name "*router*" -exec cat {} \;

Length of output: 12058


🏁 Script executed:

#!/bin/bash
# Search comprehensively for getExecutionDetails definition
echo "=== Searching for getExecutionDetails definition anywhere ==="
rg "getExecutionDetails" -n --type ts

# Check if there are any other router files
echo -e "\n=== Checking for other router files ==="
find . -type f -name "*.ts" | xargs grep -l "\.query.*\(.*executionId" | head -20

Length of output: 513


The getExecutionDetails tRPC procedure is missing from the workflow router and must be implemented.

The page calls trpc.workflows.getExecutionDetails via both useSuspenseExecutionDetails hook and prefetchExecutionDetails, but this procedure is not defined in features/workflows/server/routers.ts. The router only has getExecutions (plural) for listing executions, not a query to fetch a single execution's details.

Add the missing procedure to features/workflows/server/routers.ts:

getExecutionDetails: protectedProcedure
  .input(z.object({ executionId: z.string() }))
  .query(async ({ ctx, input }) => {
    // Verify execution belongs to user's workflow
    const execution = await prisma.execution.findUniqueOrThrow({
      where: { id: input.executionId },
      include: { steps: true, logs: true },
    });
    
    // Verify ownership via workflow
    const workflow = await prisma.workflow.findUniqueOrThrow({
      where: { id: execution.workflowId, userId: ctx.auth.user.id },
    });
    
    return execution;
  }),
🤖 Prompt for AI Agents
In features/workflows/server/routers.ts (add near the other workflow
procedures), implement a new protectedProcedure named getExecutionDetails that
accepts input z.object({ executionId: z.string() }) and performs a query which:
fetches the execution by id including its steps and logs (use
prisma.execution.findUniqueOrThrow with include: { steps: true, logs: true }),
then verifies ownership by ensuring the related workflow exists and belongs to
ctx.auth.user.id (use prisma.workflow.findUniqueOrThrow with where: { id:
execution.workflowId, userId: ctx.auth.user.id }), and finally returns the
execution; export/include this procedure in the router alongside getExecutions.

Comment on lines +12 to +23
// Verify workflow exists and is owned by a user
const workflow = await prisma.workflow.findUnique({
  where: { id: workflowId },
  select: { id: true, userId: true },
});

if (!workflow) {
  return NextResponse.json(
    { error: "Workflow not found" },
    { status: 404 }
  );
}

⚠️ Potential issue | 🔴 Critical

Add authentication for webhook calls (currently unauthenticated).

Anyone can enqueue executions if they know/guess a workflowId. Require a secret immediately; then move to per‑workflow secrets/HMAC.

Apply minimal hardening now (env-scoped shared secret):

 export async function POST(
   request: NextRequest,
   { params }: { params: { workflowId: string } }
 ) {
   try {
     const { workflowId } = params;

     // Verify workflow exists and is owned by a user
     const workflow = await prisma.workflow.findUnique({
       where: { id: workflowId },
-      select: { id: true, userId: true },
+      select: { id: true, userId: true },
     });

     if (!workflow) {
       return NextResponse.json(
         { error: "Workflow not found" },
         { status: 404 }
       );
     }
+
+    // Basic auth: require shared secret header (short-term mitigation)
+    const provided = request.headers.get("x-webhook-secret");
+    const expected = process.env.WEBHOOK_SHARED_SECRET;
+    if (!expected || !provided || provided !== expected) {
+      return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
+    }

     // Get webhook data from request body
     const webhookData = await request.json();

Next steps (recommended):

  • Per‑workflow secret in DB (e.g., workflow.webhookSecret) and rotateable keys.
  • HMAC signature verification (e.g., X-Signature: sha256=...) with constant‑time compare.
  • Basic rate limiting and body size limits.

Also applies to: 25-36, 43-49

🤖 Prompt for AI Agents
In app/api/webhooks/[workflowId]/route.ts around lines 12-23 (and similarly for
25-36, 43-49) add a minimal authentication gate: require a shared secret header
(e.g., "x-webhook-secret") on incoming requests, load the expected secret from
an env var (e.g., process.env.WEBHOOK_SHARED_SECRET), and if the env var is
missing return 500; if the header is missing or does not match the expected
secret return 401 JSON. Perform the comparison using a constant-time method
(crypto.timingSafeEqual) on Buffer values to avoid timing attacks and ensure you
normalize types/lengths before comparing; log only non-sensitive context on
failures. This is a stopgap until per-workflow secrets and HMAC verification are
implemented.
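A minimal sketch of the recommended HMAC step (the header name, helper name, and secret source are assumptions for illustration, not part of this PR):

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Hypothetical helper: verify a header such as "X-Signature: sha256=<hex>"
// against the raw request body using a per-workflow secret.
function verifySignature(rawBody: string, header: string | null, secret: string): boolean {
  if (!header || !header.startsWith("sha256=")) return false;
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const provided = header.slice("sha256=".length);
  const a = Buffer.from(provided, "hex");
  const b = Buffer.from(expected, "hex");
  // Reject length mismatches up front; timingSafeEqual throws on unequal lengths.
  if (a.length !== b.length) return false;
  // Constant-time comparison to avoid timing attacks.
  return timingSafeEqual(a, b);
}
```

In the route handler this would replace the plain `!==` comparison on the shared secret once per-workflow secrets exist.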

Comment on lines +221 to 239
setNodes((nodes) => {
  const hasInitialTrigger = nodes.some(
    (node) => node.type === NodeType.INITIAL
  );

  const flowPosition = screenToFlowPosition({ x, y });
  const newNode = {
    id: createId(),
    data: {},
    type: nodeType.type,
    position: flowPosition,
  };

  if (hasInitialTrigger) {
    return [newNode];
  }

  return [...nodes, newNode];
});

⚠️ Potential issue | 🔴 Critical

Destructive logic: existing nodes are wiped when INITIAL exists.

Returning [newNode] replaces the whole graph. You likely meant to append.

-        if (hasInitialTrigger) {
-          return [newNode];
-        }
-
-        return [...nodes, newNode];
+        // Keep existing nodes; just append
+        return [...nodes, newNode];
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
setNodes((nodes) => {
  const hasInitialTrigger = nodes.some(
    (node) => node.type === NodeType.INITIAL
  );

  const flowPosition = screenToFlowPosition({ x, y });
  const newNode = {
    id: createId(),
    data: {},
    type: nodeType.type,
    position: flowPosition,
  };

  if (hasInitialTrigger) {
    return [newNode];
  }

  return [...nodes, newNode];
});
setNodes((nodes) => {
  const hasInitialTrigger = nodes.some(
    (node) => node.type === NodeType.INITIAL
  );

  const flowPosition = screenToFlowPosition({ x, y });
  const newNode = {
    id: createId(),
    data: {},
    type: nodeType.type,
    position: flowPosition,
  };

  // Keep existing nodes; just append
  return [...nodes, newNode];
});
🤖 Prompt for AI Agents
In components/node-selector.tsx around lines 221 to 239, the setter currently
returns [newNode] when an INITIAL node exists which wipes the entire node list;
replace that destructive return with an append so existing nodes are preserved
by returning [...nodes, newNode] (i.e., change the hasInitialTrigger branch to
return [...nodes, newNode] instead of [newNode]).
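The fix can be demonstrated with a minimal, framework-free reproduction (plain objects stand in for React Flow nodes; all names here are hypothetical):

```typescript
// Shape stripped down to what the demonstration needs.
type FlowNode = { id: string; type: string };

// Corrected updater: always append, never replace the whole graph.
// (The buggy branch returned [newNode] whenever an INITIAL node was
// present, silently wiping every existing node.)
function addNode(nodes: FlowNode[], newNode: FlowNode): FlowNode[] {
  return [...nodes, newNode];
}
```

Passing this through `setNodes` preserves the existing canvas contents regardless of whether an INITIAL trigger exists.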


**Required Variables:**
- `DATABASE_URL` - PostgreSQL connection string
- `BETTER_AUTH_SECRET` - Random secret key for authentication

⚠️ Potential issue | 🟡 Minor

Wrap bare URL in inline code or markdown link syntax.

Line 211 contains a bare URL http://localhost:3000 which violates markdown linting rules (MD034). Wrap it in backticks or use proper markdown link syntax.

- `BETTER_AUTH_URL` - Your app's URL (http://localhost:3000 for development)
+ `BETTER_AUTH_URL` - Your app's URL (`http://localhost:3000` for development)

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In README.md around line 211, a bare URL `http://localhost:3000` is used which
breaks MD034 linting; wrap the URL in inline code backticks or convert it to a
proper markdown link (e.g., [http://localhost:3000](http://localhost:3000)) so
the URL is not left bare in the text.

Comment on lines +5 to +26
const testGetMany = () => {
console.log("✓ Test 1: getMany procedure should return empty list for unauthenticated users");
return true;
};

// Test 2: Verify AddNodeButton opens selector
const testAddNodeButton = () => {
console.log("✓ Test 2: AddNodeButton should have onClick handler to open selector");
return true;
};

// Test 3: Verify drag and drop functionality
const testDragDrop = () => {
console.log("✓ Test 3: Node items should be draggable and have onDragStart handler");
return true;
};

// Test 4: Verify ReactFlow handles drops
const testReactFlowDrop = () => {
console.log("✓ Test 4: Editor component should have onDragOver, onDrop, and onDragLeave handlers");
return true;
};

⚠️ Potential issue | 🟠 Major

Stub tests provide false confidence.

All test functions unconditionally return true without performing any actual validation. These placeholder tests will always pass regardless of whether the functionality works correctly, which defeats the purpose of testing.

Either implement real assertions or remove this file. For example, actual tests would:

  • Make HTTP requests to verify unauthenticated access
  • Query the DOM to verify UI elements exist
  • Test event handlers are properly attached

Would you like me to help generate actual test implementations using a testing framework like Jest or Vitest?

🤖 Prompt for AI Agents
In test-workflow-auth.js around lines 5–26 the test functions are just stubs
returning true which gives false confidence; replace each stub with real
assertions or delete the file. Implement concrete tests using your test runner
(Jest/Vitest) and React Testing Library: 1) for getMany perform an HTTP call (or
call the procedure directly) and assert unauthenticated requests return 401 or
an empty list; 2) render the component and assert the AddNodeButton exists and
fires its click handler (use a mock or spy) to open the selector; 3) render node
items and assert they have draggable=true and that onDragStart is called when
you simulate a dragStart event; 4) render the editor/ReactFlow component and
simulate dragOver/drop/dragLeave events asserting the corresponding handlers are
invoked and expected side effects occur; if you don’t want to implement tests,
delete the file to avoid misleading passing tests.

Comment on lines +49 to +61
// Enhanced error handling with detailed logging
onError: ({ error, path, input, ctx, type }) => {
  console.error('tRPC Error:', {
    error: error.message,
    code: error.code,
    path,
    input,
    type,
    url: getUrl(),
    httpStatus: error.data?.httpStatus,
    stack: error.stack,
  });
},

⚠️ Potential issue | 🔴 Critical

httpBatchLink doesn’t support onError; use loggerLink instead

Move error logging to loggerLink and drop onError from httpBatchLink.

-import { createTRPCClient, httpBatchLink } from '@trpc/client';
+import { createTRPCClient, httpBatchLink, loggerLink } from '@trpc/client';
@@
-      links: [
-        httpBatchLink({
+      links: [
+        loggerLink({
+          enabled: () => process.env.NODE_ENV !== 'production',
+        }),
+        httpBatchLink({
           transformer: superjson,
           url: getUrl(),
-          // Enhanced error handling with detailed logging
-          onError: ({ error, path, input, ctx, type }) => {
-            console.error('tRPC Error:', {
-              error: error.message,
-              code: error.code,
-              path,
-              input,
-              type,
-              url: getUrl(),
-              httpStatus: error.data?.httpStatus,
-              stack: error.stack,
-            });
-          },
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Enhanced error handling with detailed logging
onError: ({ error, path, input, ctx, type }) => {
  console.error('tRPC Error:', {
    error: error.message,
    code: error.code,
    path,
    input,
    type,
    url: getUrl(),
    httpStatus: error.data?.httpStatus,
    stack: error.stack,
  });
},
import { createTRPCClient, httpBatchLink, loggerLink } from '@trpc/client';
// ... other code ...
links: [
  loggerLink({
    enabled: () => process.env.NODE_ENV !== 'production',
  }),
  httpBatchLink({
    transformer: superjson,
    url: getUrl(),
  }),
],
🤖 Prompt for AI Agents
In trpc/client.tsx around lines 49–61, remove the onError option from the
httpBatchLink and instead add a loggerLink before the httpBatchLink; configure
loggerLink to capture and log error details (message, code, path, input,
url/getUrl(), httpStatus and stack) using console.error (or your process logger)
when a response contains an error, and keep the httpBatchLink strictly for
transport configuration. Ensure the new loggerLink is placed earlier in the
links array so it sees responses, and delete the onError block entirely from the
httpBatchLink call.

Comment on lines +63 to +77
retryDelay: (attemptIndex) => Math.min(1000 * 2 ** attemptIndex, 30000),
retryCondition: (error, attemptIndex) => {
  // Don't retry on authentication errors (401, 403)
  if (error.data?.httpStatus === 401 || error.data?.httpStatus === 403) {
    return false;
  }
  // Retry on network errors, 5xx server errors, or other transient issues (up to 3 attempts)
  return (
    attemptIndex < 3 &&
    (error.code === 'INTERNAL_SERVER_ERROR' ||
      error.code === 'TIMEOUT' ||
      !error.data?.httpStatus ||
      error.data.httpStatus >= 500)
  );
},

⚠️ Potential issue | 🔴 Critical

Retry options are not part of httpBatchLink config

Implement retries in the fetch option (below) and remove retryDelay/retryCondition here.

-          // Retry logic with exponential backoff, avoiding retries on auth errors
-          retryDelay: (attemptIndex) => Math.min(1000 * 2 ** attemptIndex, 30000),
-          retryCondition: (error, attemptIndex) => {
-            // Don't retry on authentication errors (401, 403)
-            if (error.data?.httpStatus === 401 || error.data?.httpStatus === 403) {
-              return false;
-            }
-            // Retry on network errors, 5xx server errors, or other transient issues (up to 3 attempts)
-            return (
-              attemptIndex < 3 &&
-              (error.code === 'INTERNAL_SERVER_ERROR' ||
-                error.code === 'TIMEOUT' ||
-                !error.data?.httpStatus ||
-                error.data.httpStatus >= 500)
-            );
-          },

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In trpc/client.tsx around lines 63 to 77, the current retryDelay and
retryCondition belong in the fetch implementation, not the httpBatchLink config;
remove retryDelay and retryCondition from the link options and implement the
same retry logic inside the fetch option: wrap the original fetch call to
perform up to 3 attempts with exponential backoff (Math.min(1000 * 2 **
attemptIndex, 30000)), do not retry for HTTP 401 or 403, retry on network
errors, timeouts, or server 5xx responses, and surface the final error after all
attempts; ensure the wrapper preserves request/response shape expected by tRPC
and respects abort signals/timeouts.
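A minimal standalone sketch of that retry policy (helper name and defaults are assumptions for illustration; the repo's actual RetryService may differ):

```typescript
// Hypothetical retry helper mirroring the suggested policy: up to `attempts`
// tries, exponential backoff capped at 30s, and a `shouldRetry` predicate so
// auth errors (401/403) can fail fast instead of being retried.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseMs = 1000,
  shouldRetry: (err: unknown) => boolean = () => true
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (!shouldRetry(err) || i === attempts - 1) break;
      // 1s, 2s, 4s, ... capped at 30s
      const delay = Math.min(baseMs * 2 ** i, 30000);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastErr;
}
```

The custom `fetch` option would wrap its network call in this helper, passing a predicate that returns false for 401/403 responses.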

Comment on lines +79 to +88
fetch: async (input, init) => {
  const response = await fetch(input, init);
  const contentType = response.headers.get('content-type');
  if (contentType && contentType.includes('text/html')) {
    throw new Error(
      `Received HTML response instead of JSON from ${input}. This may indicate an authentication error, server error page, or misconfigured API endpoint. Check server logs and authentication state.`
    );
  }
  return response;
},

🛠️ Refactor suggestion | 🟠 Major

Do retries inside fetch with exponential backoff and auth error short‑circuit

Use the existing RetryService and keep the HTML-response guard.

-          // Custom fetch to detect HTML responses and throw descriptive errors
-          fetch: async (input, init) => {
-            const response = await fetch(input, init);
-            const contentType = response.headers.get('content-type');
-            if (contentType && contentType.includes('text/html')) {
-              throw new Error(
-                `Received HTML response instead of JSON from ${input}. This may indicate an authentication error, server error page, or misconfigured API endpoint. Check server logs and authentication state.`
-              );
-            }
-            return response;
-          },
+          // Custom fetch with retry + HTML response detection
+          fetch: async (input, init) => {
+            const doFetch = async () => {
+              const res = await fetch(input, init);
+              // Auth errors: don't retry, fail fast
+              if (res.status === 401 || res.status === 403) return res;
+              const ct = res.headers.get('content-type');
+              if (ct && ct.includes('text/html')) {
+                throw new Error(
+                  `Received HTML response instead of JSON from ${input}. This may indicate an auth error, server error page, or misconfigured API endpoint.`
+                );
+              }
+              if (res.status >= 500) {
+                // trigger retry for 5xx
+                throw new Error(`HTTP ${res.status} from ${input}`);
+              }
+              return res;
+            };
+            // 3 attempts, 1s * 2^n backoff
+            return RetryService.withRetry(doFetch, 3, 1000, 2);
+          },

Add the import if not present:

import { RetryService } from '@/lib/retry';

Also avoid logging request inputs in production; loggerLink above is gated by NODE_ENV.

🤖 Prompt for AI Agents
In trpc/client.tsx around lines 79 to 88, replace the direct fetch with a call
through the existing RetryService: add "import { RetryService } from
'@/lib/retry';", wrap the network call in RetryService.retry or equivalent so it
uses exponential backoff and limited attempts, but preserve the current
HTML-response guard and short-circuit by immediately throwing on content-type
'text/html' (do not retry HTML/auth error responses). Ensure any logging inside
the retry logic does not include request inputs when NODE_ENV === 'production'
(only log inputs in non-production or remove them), and retain the same returned
Response when successful.

@coderabbitai coderabbitai bot left a comment

Review continued from previous batch...

Comment on lines +66 to +73
const { data: credentials = [] } = useQuery({
  queryKey: ["credentials"],
  queryFn: async () => {
    const response = await fetch("/api/trpc/credentials.list");
    const data = await response.json();
    return data.result?.data || [];
  },
});

⚠️ Potential issue | 🟠 Major

Add error handling for credential fetching.

The fetch operation has no error handling. If the API call fails, users will see an empty credential list with no indication of what went wrong.

Add error handling:

  const { data: credentials = [] } = useQuery({
    queryKey: ["credentials"],
    queryFn: async () => {
-      const response = await fetch("/api/trpc/credentials.list");
-      const data = await response.json();
-      return data.result?.data || [];
+      const response = await fetch("/api/trpc/credentials.list");
+      if (!response.ok) {
+        throw new Error(`Failed to fetch credentials: ${response.statusText}`);
+      }
+      const data = await response.json();
+      return data.result?.data || [];
    },
  });

Consider showing an error message in the UI when credential fetching fails.

🤖 Prompt for AI Agents
In features/ai/components/openai-chat/dialog.tsx around lines 66 to 73, the
credential fetch lacks error handling so failures silently return an empty list;
update the useQuery to handle errors by making the queryFn throw when
response.ok is false or parsing fails, and add onError (or let react-query
surface the error) to capture the error message; propagate the error state to
the component (e.g., data, isError, error) and render an inline error message or
fallback UI when isError is true so users see a clear failure notice instead of
an empty credential list.

Comment on lines +91 to +94
const google = createGoogleGenerativeAI({
  apiKey: process.env.NEXT_PUBLIC_GOOGLE_AI_API_KEY!,
});


⚠️ Potential issue | 🔴 Critical

Critical: Don’t expose AI provider API keys in the client. Move generation server‑side.

NEXT_PUBLIC_GOOGLE_AI_API_KEY ships to browsers and can be abused.

  • Remove client SDK initialization.
  • Call a server endpoint (TRPC router) to generate and return the workflow.
-  const google = createGoogleGenerativeAI({
-    apiKey: process.env.NEXT_PUBLIC_GOOGLE_AI_API_KEY!,
-  });
+  // Use TRPC on the client; the server holds the provider key securely.
+  import { useTRPC } from "@/trpc/client";
+  const trpc = useTRPC();

Then in sendMessage:

-      const { text } = await generateText({
-        model: google("gemini-2.0-flash-exp"),
-        system: systemPrompt,
-        prompt: content,
-        experimental_output: workflowSchema,
-      });
-
-      const parsedWorkflow = workflowSchema.parse(JSON.parse(text));
+      const result = await trpc.workflowGenerator.generateWorkflow.mutate({
+        prompt: content,
+        systemPrompt,
+      });
+      const parsedWorkflow = workflowSchema.parse(result.workflow);

Adjust the router/method names to your actual TRPC API. Based on learnings.

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +37 to +43
const slackFormSchema = z.object({
  credentialId: z.string().min(1, "Please select a credential"),
  channel: z.string().min(1, "Channel is required"),
  messageType: z.enum(["text", "blocks"]),
  messageText: z.string().min(1, "Message text is required"),
  blocksJson: z.string().optional(),
});

⚠️ Potential issue | 🔴 Critical

Schema bug: blocks mode still requires messageText; also validate Blocks JSON.

When messageType="blocks", the form omits messageText but zod still requires it, causing save failures. Use a discriminated union and validate blocksJson.

-const slackFormSchema = z.object({
-  credentialId: z.string().min(1, "Please select a credential"),
-  channel: z.string().min(1, "Channel is required"),
-  messageType: z.enum(["text", "blocks"]),
-  messageText: z.string().min(1, "Message text is required"),
-  blocksJson: z.string().optional(),
-});
+const slackFormSchema = z.discriminatedUnion("messageType", [
+  z.object({
+    messageType: z.literal("text"),
+    credentialId: z.string().min(1, "Please select a credential"),
+    channel: z.string().min(1, "Channel is required"),
+    messageText: z.string().min(1, "Message text is required"),
+    blocksJson: z.string().optional(),
+  }),
+  z
+    .object({
+      messageType: z.literal("blocks"),
+      credentialId: z.string().min(1, "Please select a credential"),
+      channel: z.string().min(1, "Channel is required"),
+      blocksJson: z.string().min(1, "Blocks JSON is required"),
+      messageText: z.string().optional(),
+    })
+    .superRefine(({ blocksJson }, ctx) => {
+      try {
+        JSON.parse(blocksJson);
+      } catch {
+        ctx.addIssue({ code: z.ZodIssueCode.custom, message: "Invalid Blocks JSON", path: ["blocksJson"] });
+      }
+    }),
+]);

Also applies to: 200-252

🤖 Prompt for AI Agents
In features/communication/components/slack-send/dialog.tsx around lines 37 to 43
(and similarly at lines ~200-252), the zod schema currently always requires
messageText even when messageType === "blocks" and does not validate blocksJson;
replace the flat schema with a discriminated union on messageType (one branch
for "text" that requires messageText and optional blocksJson, and one branch for
"blocks" that requires blocksJson and makes messageText optional/omitted), and
add a custom refinement or transform to validate that blocksJson is present and
is valid JSON (and optionally validate the parsed structure if needed); update
any form types/usages accordingly so the form no longer fails when using blocks
mode.

Comment on lines +63 to +70
const { data: credentials = [] } = useQuery({
  queryKey: ["credentials"],
  queryFn: async () => {
    const response = await fetch("/api/trpc/credentials.list");
    const data = await response.json();
    return data.result?.data || [];
  },
});

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

Prefer the typed TRPC client over raw fetch for credentials.

Use your generated TRPC React hooks (credentials.list) instead of calling /api/trpc directly; you’ll get types, errors, and batching.

Run to locate the TRPC React client in the repo:


🏁 Script executed:

#!/bin/bash
rg -nP --type=ts --type=tsx -C2 '(createTRPCReact|createTRPCNext|createTRPCProxyClient)'
rg -nP --type=ts --type=tsx -C2 'credentials\.(list|get|all)'
fd -t f -a 'trpc' -x sed -n '1,120p' {}

Length of output: 55


I need to adjust the search strategy since the file type flags failed. Let me run corrected scripts to locate the TRPC client and verify the review comment.


🏁 Script executed:

#!/bin/bash
# Search for TRPC client creation with broader approach
rg -n 'createTRPCReact|createTRPCNext|createTRPCProxyClient' --max-count=5

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Find all files with 'trpc' in name or path
fd -t f 'trpc' | head -20

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Search for credentials hooks/usage
rg -n 'credentials\.list|credentials\.get|useQuery.*credentials' --max-count=10

Length of output: 2326


🏁 Script executed:

#!/bin/bash
# Look for @/ import aliases to understand import pattern
rg -n "from ['\"]@/" --max-count=5 features/communication/components/slack-send/dialog.tsx

Length of output: 211


Comment on lines +76 to +89
{selectedType === "CUSTOM" ? (
  <Textarea
    placeholder="Enter JSON data..."
    value={typeof field.value === 'object' ? JSON.stringify(field.value, null, 2) : ''}
    onChange={(e) => {
      try {
        const parsed = JSON.parse(e.target.value);
        field.onChange(parsed);
      } catch {
        field.onChange({});
      }
    }}
  />
) : (

⚠️ Potential issue | 🟠 Major

Fix CUSTOM JSON editing UX; don’t overwrite with {} on partial input.

Parsing on every keystroke resets the field and loses user input. Keep raw text, validate on blur, and wrap with FormControl.

-            {selectedType === "CUSTOM" ? (
-              <Textarea
-                placeholder="Enter JSON data..."
-                value={typeof field.value === 'object' ? JSON.stringify(field.value, null, 2) : ''}
-                onChange={(e) => {
-                  try {
-                    const parsed = JSON.parse(e.target.value);
-                    field.onChange(parsed);
-                  } catch {
-                    field.onChange({});
-                  }
-                }}
-              />
-            ) : (
+            {selectedType === "CUSTOM" ? (
+              <FormControl>
+                <Textarea
+                  placeholder="Enter JSON data..."
+                  value={customJson}
+                  onChange={(e) => setCustomJson(e.target.value)}
+                  onBlur={() => {
+                    try {
+                      const parsed = JSON.parse(customJson);
+                      setJsonError(undefined);
+                      field.onChange(parsed);
+                    } catch (err) {
+                      setJsonError("Invalid JSON");
+                    }
+                  }}
+                />
+              </FormControl>
+            ) : (

Add these helper states at the top of the component (outside renderCredentialDataFields):

// additions needed near other hooks in CredentialForm
const [customJson, setCustomJson] = React.useState(
  typeof defaultValues?.data === "object" ? JSON.stringify(defaultValues?.data, null, 2) : "{}"
);
const [jsonError, setJsonError] = React.useState<string | undefined>();

Optionally surface the error:

{jsonError && <FormMessage>{jsonError}</FormMessage>}
🤖 Prompt for AI Agents
In features/credentials/components/credential-form.tsx around lines 76 to 89,
the CUSTOM JSON textarea currently parses on every keystroke and replaces the
field with {} on parse errors, which wipes user input; instead add two hook
states near the other hooks (customJson initialized from defaultValues.data JSON
stringified, and jsonError as undefined), wrap the textarea with a
FormControl/FormMessage, bind the textarea value to customJson and update
customJson on change without parsing, validate/parse JSON only on blur (on
successful parse call field.onChange(parsed) and clear jsonError, on failure set
jsonError but do not change field), and render jsonError in a FormMessage so
partial/invalid input isn’t discarded during typing.

Comment on lines +1061 to +1114
private async executeHttpRequestNode(node: any): Promise<NodeExecutionResult> {
  const nodeData = node.data as any || {};

  if (!nodeData.endpoint) {
    return {
      success: false,
      error: "HTTP Request node requires an endpoint configuration",
    };
  }

  try {
    const response = await fetch(nodeData.endpoint, {
      method: nodeData.method || "GET",
      headers: {
        "Content-Type": "application/json",
        ...nodeData.headers,
      },
      body: nodeData.method !== "GET" && nodeData.method !== "HEAD"
        ? JSON.stringify(nodeData.body)
        : undefined,
    });

    const data = await response.text();

    return {
      success: response.ok,
      output: {
        status: response.status,
        statusText: response.statusText,
        headers: Object.fromEntries(response.headers.entries()),
        data: data,
      },
      logs: [{
        level: response.ok ? "INFO" : "ERROR",
        message: `HTTP request to ${nodeData.endpoint} returned ${response.status}`,
        data: {
          endpoint: nodeData.endpoint,
          method: nodeData.method,
          status: response.status,
        },
      }],
    };

  } catch (error) {
    return {
      success: false,
      error: error instanceof Error ? error.message : "HTTP request failed",
      logs: [{
        level: "ERROR",
        message: `HTTP request failed: ${error instanceof Error ? error.message : "Unknown error"}`,
      }],
    };
  }
}

🛠️ Refactor suggestion | 🟠 Major

Use makeHttpRequest helper and add authentication support.

The HTTP request node uses plain fetch without timeouts or retries, while the codebase provides a makeHttpRequest helper with retry logic. Additionally, it lacks credential/authentication support.

Improvements:

private async executeHttpRequestNode(node: any): Promise<NodeExecutionResult> {
  const nodeData = node.data as any || {};

  if (!nodeData.endpoint) {
    return {
      success: false,
      error: "HTTP Request node requires an endpoint configuration",
    };
  }

  try {
    // Build headers
    const headers: Record<string, string> = {
      'Content-Type': 'application/json',
      ...nodeData.headers,
    };

    // Add authentication if credential provided
    if (nodeData.credentialId) {
      const credential = await getCredential(this.prisma, nodeData.credentialId);
      if (credential) {
        switch (credential.type) {
          case 'API_KEY':
            headers['X-API-Key'] = credential.data as string;
            break;
          case 'BEARER_TOKEN':
            headers['Authorization'] = `Bearer ${credential.data}`;
            break;
          case 'BASIC_AUTH':
            const basicAuth = credential.data as { username: string; password: string };
            headers['Authorization'] = `Basic ${Buffer.from(`${basicAuth.username}:${basicAuth.password}`).toString('base64')}`;
            break;
        }
      }
    }

    // Use makeHttpRequest helper with retries
    const response = await makeHttpRequest(
      nodeData.endpoint,
      {
        method: nodeData.method || 'GET',
        headers,
        body: nodeData.method !== 'GET' && nodeData.method !== 'HEAD'
          ? JSON.stringify(nodeData.body)
          : undefined,
      },
      nodeData.retries || 3
    );

    return {
      success: response.success,
      output: response.data,
      error: response.error,
      logs: [{
        level: response.success ? "INFO" : "ERROR",
        message: `HTTP ${nodeData.method || 'GET'} request to ${nodeData.endpoint} ${response.success ? 'succeeded' : 'failed'}`,
        data: {
          endpoint: nodeData.endpoint,
          method: nodeData.method,
          status: response.status,
        },
      }],
    };
  } catch (error) {
    return {
      success: false,
      error: error instanceof Error ? error.message : "HTTP request failed",
      logs: [{
        level: "ERROR",
        message: `HTTP request failed: ${error instanceof Error ? error.message : "Unknown error"}`,
      }],
    };
  }
}
🤖 Prompt for AI Agents
lib/execution-engine.ts lines 1061-1114: The current HTTP node implementation
uses raw fetch without timeouts/retries and lacks credential support; replace
the fetch usage with the shared makeHttpRequest helper, build headers first
(merge nodeData.headers with Content-Type), if nodeData.credentialId fetch
credential via getCredential(this.prisma, nodeData.credentialId) and attach
credentials (API_KEY -> X-API-Key, BEARER_TOKEN -> Authorization: Bearer
<token>, BASIC_AUTH -> Authorization: Basic <base64 user:pass>), pass
method/body as before, call makeHttpRequest with nodeData.retries (default 3),
then map the helper response into NodeExecutionResult (use response.success,
response.data, response.error, include status in logs) and preserve the existing
error catch behavior.
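The credential-to-header mapping described in this suggestion is easy to isolate into a pure helper. The sketch below assumes the credential shapes named in the comment above (`API_KEY`, `BEARER_TOKEN`, `BASIC_AUTH`); they are taken from this review, not verified against the actual `getCredential` implementation:

```typescript
// Sketch of the credential-to-header mapping suggested above.
// The Credential union and its type tags are assumptions from the
// review comment, not the real codebase types.
type Credential =
  | { type: "API_KEY"; data: string }
  | { type: "BEARER_TOKEN"; data: string }
  | { type: "BASIC_AUTH"; data: { username: string; password: string } };

function buildAuthHeaders(
  credential: Credential,
  base: Record<string, string> = {}
): Record<string, string> {
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
    ...base,
  };
  switch (credential.type) {
    case "API_KEY":
      headers["X-API-Key"] = credential.data;
      break;
    case "BEARER_TOKEN":
      headers["Authorization"] = `Bearer ${credential.data}`;
      break;
    case "BASIC_AUTH": {
      const { username, password } = credential.data;
      headers["Authorization"] =
        `Basic ${Buffer.from(`${username}:${password}`).toString("base64")}`;
      break;
    }
  }
  return headers;
}

const basic = buildAuthHeaders({
  type: "BASIC_AUTH",
  data: { username: "u", password: "p" },
});
console.log(basic["Authorization"]);
```

Keeping this logic out of `executeHttpRequestNode` also makes the Basic/Bearer/API-key branches unit-testable without a database or network.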

Comment on lines +1116 to +1131
  static async createExecution(
    workflowId: string,
    userId: string,
    triggerType: "manual" | "webhook" | "schedule" = "manual",
    prisma: PrismaClient
  ): Promise<ExecutionEngine> {
    const context: ExecutionContext = {
      executionId: "", // Will be set when execution starts
      workflowId,
      userId,
      triggerType,
    };

    return new ExecutionEngine(context, prisma);
  }
}

⚠️ Potential issue | 🟠 Major

executionId initialization is problematic.

Line 1123: The executionId is initialized as an empty string with a comment "Will be set when execution starts", but it's never actually set in the createExecution factory method. This causes the context to have an invalid executionId until executeWorkflow creates the execution record.

This was already flagged in the review of executeWorkflow (lines 40-108), where the execution.id should be stored in context but isn't.

The factory method should either:

  1. Accept the executionId as a parameter if the execution record is created elsewhere
  2. Create the execution record and return both the engine and execution ID

Recommended approach:

static async createExecution(
  workflowId: string,
  userId: string,
  triggerType: "manual" | "webhook" | "schedule" = "manual",
  prisma: PrismaClient
): Promise<{ engine: ExecutionEngine; executionId: string }> {
  // Create the execution record
  const execution = await prisma.execution.create({
    data: {
      workflowId,
      status: "PENDING",
      triggeredBy: userId,
    },
  });

  const context: ExecutionContext = {
    executionId: execution.id,
    workflowId,
    userId,
    triggerType,
  };

  return {
    engine: new ExecutionEngine(context, prisma),
    executionId: execution.id,
  };
}

Then update executeWorkflow to not create the execution record since it's already created.

🤖 Prompt for AI Agents
lib/execution-engine.ts lines 1116-1131: the factory currently sets executionId
to an empty string and never populates it, leaving the context invalid; change
createExecution to create the execution record via prisma (status PENDING,
triggeredBy userId, workflowId), set context.executionId to the created
execution.id, and return both the new ExecutionEngine and the executionId (e.g.,
Promise<{ engine: ExecutionEngine; executionId: string }>) so callers get a
valid ID; also update executeWorkflow to stop creating a duplicate execution
record when using this factory.

Comment on lines +34 to +68
async dequeueJob(): Promise<{ jobId: string; workflowId: string; userId: string; triggerType: "webhook" | "schedule"; scheduledAt?: Date; webhookData?: any } | null> {
  // Find the next pending job
  const job = await this.prisma.executionJob.findFirst({
    where: {
      status: "PENDING",
      OR: [
        { scheduledAt: { lte: new Date() } },
        { scheduledAt: null },
      ],
    },
    orderBy: [
      { scheduledAt: "asc" },
      { createdAt: "asc" },
    ],
  });

  if (!job) {
    return null;
  }

  // Mark as processing
  await this.prisma.executionJob.update({
    where: { id: job.id },
    data: { status: "PROCESSING" },
  });

  return {
    jobId: job.id,
    workflowId: job.workflowId,
    userId: job.userId,
    triggerType: job.triggerType as "webhook" | "schedule",
    scheduledAt: job.scheduledAt || undefined,
    webhookData: job.webhookData,
  };
}

⚠️ Potential issue | 🔴 Critical

Fix race condition in job dequeuing.

The dequeueJob method has a race condition: between findFirst (line 36) and update (line 55), another worker could dequeue the same job. This can lead to duplicate job processing.

Wrap the operation in a transaction and use optimistic locking:

  async dequeueJob(): Promise<{ jobId: string; workflowId: string; userId: string; triggerType: "webhook" | "schedule"; scheduledAt?: Date; webhookData?: any } | null> {
-    // Find the next pending job
-    const job = await this.prisma.executionJob.findFirst({
+    return await this.prisma.$transaction(async (tx) => {
+      const job = await tx.executionJob.findFirst({
-      where: {
-        status: "PENDING",
-        OR: [
-          { scheduledAt: { lte: new Date() } },
-          { scheduledAt: null },
-        ],
-      },
-      orderBy: [
-        { scheduledAt: "asc" },
-        { createdAt: "asc" },
-      ],
-    });
+        where: {
+          status: "PENDING",
+          OR: [
+            { scheduledAt: { lte: new Date() } },
+            { scheduledAt: null },
+          ],
+        },
+        orderBy: [
+          { scheduledAt: "asc" },
+          { createdAt: "asc" },
+        ],
+      });
  
-    if (!job) {
-      return null;
-    }
+      if (!job) {
+        return null;
+      }
  
-    // Mark as processing
-    await this.prisma.executionJob.update({
-      where: { id: job.id },
-      data: { status: "PROCESSING" },
-    });
+      // Atomically update only if still PENDING
+      const updated = await tx.executionJob.updateMany({
+        where: { 
+          id: job.id,
+          status: "PENDING" // Optimistic lock
+        },
+        data: { status: "PROCESSING" },
+      });
+
+      // If no rows updated, job was claimed by another worker
+      if (updated.count === 0) {
+        return null;
+      }
  
-    return {
-      jobId: job.id,
-      workflowId: job.workflowId,
-      userId: job.userId,
-      triggerType: job.triggerType as "webhook" | "schedule",
-      scheduledAt: job.scheduledAt || undefined,
-      webhookData: job.webhookData,
-    };
+      return {
+        jobId: job.id,
+        workflowId: job.workflowId,
+        userId: job.userId,
+        triggerType: job.triggerType as "webhook" | "schedule",
+        scheduledAt: job.scheduledAt || undefined,
+        webhookData: job.webhookData,
+      };
+    });
  }
🤖 Prompt for AI Agents
In lib/execution-queue.ts around lines 34 to 68, the current findFirst then
update flow has a race condition; replace it with an atomic optimistic-locking
pattern: inside a short retry loop (or transaction), select the candidate job
(same findFirst criteria), then attempt a conditional update that only sets
status to "PROCESSING" when the job still has status "PENDING" (e.g. update
where { id: job.id, status: "PENDING" }); if the conditional update returns no
rows (another worker claimed it), retry or return null; optionally wrap the
select+conditional-update in a prisma.transaction and limit retries to avoid
livelock. Ensure you check the conditional update result before returning the
job object so a job is only returned when the update succeeded.
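The optimistic-lock pattern described above can be illustrated without a database. In the sketch below, `claim` stands in for the conditional `updateMany({ where: { id, status: "PENDING" } })`; the in-memory store and all names are purely illustrative:

```typescript
// In-memory illustration of the claim-if-still-PENDING pattern.
// In the real fix, the check and the status flip happen in one
// conditional updateMany, so the database makes them atomic.
type Job = { id: string; status: "PENDING" | "PROCESSING" };

const jobs: Job[] = [{ id: "job-1", status: "PENDING" }];

// Succeeds only if the job is still PENDING, so exactly one
// caller can flip it to PROCESSING; later callers get false.
function claim(id: string): boolean {
  const job = jobs.find((j) => j.id === id && j.status === "PENDING");
  if (!job) return false; // already claimed by another worker
  job.status = "PROCESSING";
  return true;
}

const workerA = claim("job-1");
const workerB = claim("job-1"); // races for the same job
console.log({ workerA, workerB });
```

The point of routing the update through a status-filtered `updateMany` is that the row count tells the worker whether its claim won; a count of zero means another worker got there first and the job must not be returned.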

Comment on lines +139 to +187
model Execution {
  id          String          @id @default(cuid())
  workflowId  String
  status      ExecutionStatus @default(PENDING)
  startedAt   DateTime?
  completedAt DateTime?
  error       String?
  triggeredBy String? // user ID or "webhook" or "schedule"
  createdAt   DateTime @default(now())
  updatedAt   DateTime @updatedAt

  workflow Workflow        @relation(fields: [workflowId], references: [id], onDelete: Cascade)
  steps    ExecutionStep[]
  logs     ExecutionLog[]

  @@map("execution")
}

model ExecutionStep {
  id          String     @id @default(cuid())
  executionId String
  nodeId      String
  status      StepStatus @default(PENDING)
  startedAt   DateTime?
  completedAt DateTime?
  error       String?
  input       Json? // Input data for this step
  output      Json? // Output data from this step
  createdAt   DateTime @default(now())
  updatedAt   DateTime @updatedAt

  execution Execution @relation(fields: [executionId], references: [id], onDelete: Cascade)

  @@map("execution_step")
}

model ExecutionLog {
  id          String   @id @default(cuid())
  executionId String
  level       LogLevel @default(INFO)
  message     String
  nodeId      String? // Which node generated this log
  data        Json? // Additional structured data
  createdAt   DateTime @default(now())

  execution Execution @relation(fields: [executionId], references: [id], onDelete: Cascade)

  @@map("execution_log")
}

🛠️ Refactor suggestion | 🟠 Major

Add database indices for execution query performance.

The execution tracking models will be queried frequently for status monitoring, filtering by workflow, and retrieving logs. Missing indices will cause performance issues as data grows.

Apply these indices:

  execution Execution @relation(fields: [executionId], references: [id], onDelete: Cascade)

+  @@index([executionId])
+  @@index([nodeId])
   @@map("execution_step")
 }

 model ExecutionLog {
   id          String   @id @default(cuid())
   executionId String
   level       LogLevel @default(INFO)
   message     String
   nodeId      String?
   data        Json?
   createdAt   DateTime @default(now())

   execution Execution @relation(fields: [executionId], references: [id], onDelete: Cascade)

+  @@index([executionId])
+  @@index([level])
+  @@index([nodeId])
+  @@index([createdAt])
   @@map("execution_log")
 }

Additionally, add indices to the Execution model:

  workflow      Workflow       @relation(fields: [workflowId], references: [id], onDelete: Cascade)
  steps         ExecutionStep[]
  logs          ExecutionLog[]

+  @@index([workflowId])
+  @@index([status])
+  @@index([startedAt])
+  @@index([triggeredBy])
   @@map("execution")
 }

Comment on lines +247 to 270
model ExecutionJob {
  id          String    @id @default(cuid())
  workflowId  String
  userId      String
  triggerType String // "webhook" or "schedule"
  status      JobStatus @default(PENDING)
  scheduledAt DateTime?
  webhookData Json?
  error       String?
  createdAt   DateTime  @default(now())
  updatedAt   DateTime  @updatedAt
  completedAt DateTime?

  workflow Workflow @relation(fields: [workflowId], references: [id], onDelete: Cascade)

  @@map("execution_job")
}

enum JobStatus {
  PENDING
  PROCESSING
  COMPLETED
  FAILED
}

🛠️ Refactor suggestion | 🟠 Major

Add indices and consider triggerType enum for consistency.

  1. Missing indices: The ExecutionJob table will be queried frequently by schedulers and job processors. Add indices:
  workflow Workflow @relation(fields: [workflowId], references: [id], onDelete: Cascade)

+  @@index([workflowId])
+  @@index([status])
+  @@index([scheduledAt])
+  @@index([userId])
   @@map("execution_job")
 }
  2. Type consistency: The triggerType field is a String but could be an enum for type safety:
+enum TriggerType {
+  MANUAL
+  WEBHOOK
+  SCHEDULE
+}

 model ExecutionJob {
   id          String   @id @default(cuid())
   workflowId  String
   userId      String
-  triggerType String   // "webhook" or "schedule"
+  triggerType TriggerType

This would align with the ExecutionContext interface in lib/execution-engine.ts (line 17).

🤖 Prompt for AI Agents
In prisma/schema.prisma around lines 247 to 270, the ExecutionJob model lacks
database indices and uses a plain String for triggerType; add an enum
TriggerType { WEBHOOK SCHEDULE } and change triggerType to use that enum, then
add appropriate indexes to optimize scheduler/processor queries (e.g., index on
scheduledAt, index on status, index on workflowId and userId, and a composite
index commonly queried together such as (status, scheduledAt)); ensure the new
enum is declared in the schema and the model includes the @@index declarations
(or @index on fields) to match the suggested indexed fields.
