18:ai-integration #18
base: main
Conversation
❌ Deploy Preview for nodebasex failed.
Caution: Review failed (failed to post review comments).

Walkthrough

This PR introduces a comprehensive workflow automation platform with credentials management, execution tracking, guest mode support, and 15+ new node types spanning webhooks, scheduling, AI services, database operations, and utilities. It adds encryption for credential storage, a workflow execution engine with job queuing, an expanded database schema, and enhanced authentication flows supporting both authenticated and guest users.
Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Browser as Browser/UI
    participant Server
    participant Queue as Job Queue
    participant Engine as Execution Engine
    participant DB as Database

    rect rgb(200, 220, 255)
        note over User,DB: Authenticated Workflow Execution
        User->>Browser: Clicks execute button
        Browser->>Server: POST /trpc/workflows.execute
        Server->>DB: Verify workflow ownership
        Server->>Engine: executeWorkflow(workflowId)
        Engine->>DB: Create Execution record
        Engine->>DB: Load workflow nodes/connections
        Engine->>Engine: Calculate topological sort
        loop For each node (in order)
            Engine->>Engine: executeNode(nodeId)
            Engine->>Engine: dispatch by NodeType
            Engine->>DB: Save execution step + logs
        end
        Engine->>DB: Update Execution (COMPLETED/FAILED)
        Server->>Browser: Return jobId
    end

    rect rgb(220, 255, 220)
        note over User,DB: Webhook/Scheduled Trigger
        Server->>Queue: enqueueJob(webhook payload)
        Queue->>DB: Save ExecutionJob (PENDING)
        Note over Queue: Background Scheduler
        Queue->>Queue: dequeueJob()
        Queue->>Engine: executeWorkflow()
        Engine->>DB: Create Execution, run nodes
        Queue->>DB: markJobCompleted()
    end

    rect rgb(255, 240, 220)
        note over User,DB: Guest Mode Workflow
        User->>Browser: Create/edit workflow (no auth)
        Browser->>Browser: saveGuestWorkflow() → localStorage
        User->>Browser: Sign up
        Browser->>Server: POST /trpc/workflows.migrateGuestWorkflows
        Server->>DB: Create workflows for user
        Server->>Browser: Return migrated IDs
        Browser->>Browser: clearGuestWorkflows() → localStorage
    end
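The "Calculate topological sort" step in the diagram above can be sketched with Kahn's algorithm over the workflow's connections. The node/connection shapes and names below are illustrative assumptions, not the PR's actual engine types:

```typescript
// Hypothetical sketch of the engine's "topological sort" step.
// Connection shape is assumed; the real PR code may differ.
type Connection = { from: string; to: string };

function topoSort(nodeIds: string[], connections: Connection[]): string[] {
  const indegree = new Map<string, number>();
  const adj = new Map<string, string[]>();
  for (const id of nodeIds) {
    indegree.set(id, 0);
    adj.set(id, []);
  }
  for (const { from, to } of connections) {
    adj.get(from)?.push(to);
    indegree.set(to, (indegree.get(to) ?? 0) + 1);
  }
  // Kahn's algorithm: repeatedly emit nodes with no remaining incoming edges
  const queue = nodeIds.filter((id) => indegree.get(id) === 0);
  const order: string[] = [];
  while (queue.length > 0) {
    const id = queue.shift()!;
    order.push(id);
    for (const next of adj.get(id) ?? []) {
      const d = (indegree.get(next) ?? 0) - 1;
      indegree.set(next, d);
      if (d === 0) queue.push(next);
    }
  }
  // If some nodes never reached indegree 0, the graph has a cycle
  if (order.length !== nodeIds.length) {
    throw new Error("Cycle detected in workflow graph");
  }
  return order;
}
```

A cycle check like this also gives the engine a natural failure mode before any node executes.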
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120+ minutes

Areas requiring extra attention:

Pre-merge checks: ❌ 1 failed check (warning) | ✅ 2 passed checks
Actionable comments posted: 80
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (7)
components/entity-components.tsx (1)
1-1: Remove unused import. The `never` import from zod is not used anywhere in this file. Apply this diff:

-import { never } from "zod";
 import { Button } from "./ui/button";

app/(dashboard)/(rest)/credentials/[credentialId]/page.tsx (1)

3-15: Fix params typing and remove unnecessary await. Next.js passes params as a plain object; awaiting it will throw.

 interface PageProps {
-  params: Promise<{ credentialId: string }>;
+  params: { credentialId: string };
 }

 // http://localhost:3000/credentials/123
 const Page = async ({ params }: PageProps) => {
   const session = await getAuth();
-  const { credentialId } = await params;
+  const { credentialId } = params;

   // If user is not authenticated, still show the page but they won't be able to perform actions
   return <p>Credential id: {credentialId}</p>;
 };

features/auth/components/login-form.tsx (2)
38-59: Fix double navigation and improve error handling. The current implementation has multiple issues:

- Double navigation: `callbackURL: "/workflows"` causes automatic navigation, then `router.push("/workflows")` navigates again.
- Race condition: migration and navigation happen concurrently, potentially navigating before migration completes.
- Silent failures: migration errors are only logged to the console; users receive no feedback.

Apply this diff to fix the issues:

 const onSubmit = async (values: LoginFormValues) => {
   await authClient.signIn.email({
     email: values.email,
     password: values.password,
-    callbackURL: "/workflows",
   }, {
     onSuccess: async () => {
       const guestWorkflows = getGuestWorkflows();
       if (guestWorkflows.length > 0) {
         try {
           await migrateMutation.mutateAsync();
+          toast.success("Workflows migrated successfully!");
         } catch (error) {
-          console.error("Failed to migrate guest workflows:", error);
+          toast.error("Failed to migrate workflows. Please try again.");
+          console.error("Failed to migrate guest workflows:", error);
         }
       }
       router.push("/workflows");
     },
     onError: (ctx) => {
       toast.error(ctx.error.message);
     },
   });
 };
61-81: Apply the same fixes to the social login flow. The social login handler has identical issues: double navigation, race conditions, and silent error handling.

Apply this diff:

 const handleSocialLogin = async (provider: "github" | "google") => {
   await authClient.signIn.social({
     provider,
-    callbackURL: "/workflows",
   }, {
     onSuccess: async () => {
       const guestWorkflows = getGuestWorkflows();
       if (guestWorkflows.length > 0) {
         try {
           await migrateMutation.mutateAsync();
+          toast.success("Workflows migrated successfully!");
         } catch (error) {
-          console.error("Failed to migrate guest workflows:", error);
+          toast.error("Failed to migrate workflows. Please try again.");
+          console.error("Failed to migrate guest workflows:", error);
         }
       }
       router.push("/workflows");
     },
     onError: (ctx) => {
       toast.error(ctx.error.message);
     },
   });
 };

features/workflows/server/routers.ts (1)
44-52: Validate node.type against the enum; don't cast strings/null. Accepting string/nullish values and casting to NodeType will break writes (e.g., "initial" vs NodeType.INITIAL).

-  type: z.string().nullish(),
+  type: z.nativeEnum(NodeType),

Apply in both the updateNodes and update input schemas. Also ensure the client sends a valid enum value, or map client types to the enum before persisting.

Also applies to: 110-116
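For illustration, the same validation can be expressed without zod as a plain guard. The enum members below are assumptions for the sketch; the PR's real NodeType enum has many more values:

```typescript
// Hypothetical subset of the PR's NodeType enum (illustrative only).
enum NodeType {
  INITIAL = "INITIAL",
  WEBHOOK = "WEBHOOK",
  HTTP_REQUEST = "HTTP_REQUEST",
}

// Validate an incoming string against the enum instead of blindly casting;
// invalid values (e.g. lowercase "initial") fail loudly at the boundary.
function toNodeType(value: string): NodeType {
  const values = Object.values(NodeType) as string[];
  if (values.includes(value)) {
    return value as NodeType;
  }
  throw new Error(`Invalid node type: ${value}`);
}
```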
features/workflows/components/workflows.tsx (2)
324-355: Hooks rule violation: conditional hook call via ternary. `isAuthenticated ? useRemoveWorkflow(...) : useDeleteGuestWorkflow()` calls hooks conditionally, which can break render order.

 export const WorkflowItem = ({ data }: { data: Workflow }) => {
   const { data: authData } = useQuery({
     queryKey: ["session"],
     queryFn: () => authClient.useSession(),
   });
   const isAuthenticated = !!authData?.data?.user;

-  const removeWorkflow = isAuthenticated ? useRemoveWorkflow(data.id) : useDeleteGuestWorkflow();
+  const removeWorkflowMutation = useRemoveWorkflow(data.id);
+  const deleteGuestWorkflowMutation = useDeleteGuestWorkflow();

   const handleRemove = () => {
-    removeWorkflow.mutate(isAuthenticated ? { id: data.id } : data.id, {
+    const callbacks = {
       onSuccess: () => {
         toast.success(`Workflow ${data.name} removed`);
       },
       onError: (error) => {
         toast.error(`Failed to remove workflow: ${error.message}`);
       },
-    });
+    };
+    if (isAuthenticated) {
+      removeWorkflowMutation.mutate({ id: data.id }, callbacks);
+    } else {
+      deleteGuestWorkflowMutation.mutate(data.id, callbacks);
+    }
   };

   return (
     <EntityItem
       ...
-      isRemoving={removeWorkflow.isPending}
+      isRemoving={isAuthenticated ? removeWorkflowMutation.isPending : deleteGuestWorkflowMutation.isPending}
     />
   );
 };
231-243: AuthenticatedWorkflowsPagination must be wrapped in a Suspense boundary. The pagination component calls `useSuspenseWorkflows()` at line 233 but is rendered outside the existing Suspense boundary. The boundary in app/(dashboard)/(rest)/workflows/page.tsx only wraps `<WorkflowsList />`, while the pagination is rendered as a separate prop in `EntityContainer` within `WorkflowsContainer`, leaving it unprotected.
---
trigger: glob
description: This rule explains Next.js conventions and best practices for fullstack development.
globs: **/*.js,**/*.jsx,**/*.ts,**/*.tsx
---
🛠️ Refactor suggestion | 🟠 Major
Fix front matter: make globs a YAML list (prevents rule parser misread).
Using a comma-separated string is brittle. Prefer explicit array.
Apply:
---
trigger: glob
description: This rule explains Next.js conventions and best practices for fullstack development.
-globs: **/*.js,**/*.jsx,**/*.ts,**/*.tsx
+globs:
+ - "**/*.js"
+ - "**/*.jsx"
+ - "**/*.ts"
+ - "**/*.tsx"
---
🤖 Prompt for AI Agents
.windsurfrules/NEXTJS.md lines 1-5: the front matter uses a comma-separated
string for `globs`, which can be misparsed; replace the `globs` value with a
YAML sequence (each pattern on its own line, prefixed with a dash) so it becomes
an explicit list of glob strings in the front matter.
@@ -0,0 +1,9 @@
const Layout = ({ children }: { children: React.ReactNode }) => {
Add missing React import for React.ReactNode type.
The component uses React.ReactNode in the type annotation but doesn't import React. While the new JSX transform in Next.js 15/React 19 eliminates the need to import React for JSX, the React.ReactNode type reference still requires it.
Apply this diff to add the import:
+import React from "react";
+
 const Layout = ({ children }: { children: React.ReactNode }) => {
🤖 Prompt for AI Agents
In app/(dashboard)/(editor)/layout.tsx around line 1, the component's type
annotation uses React.ReactNode but React isn't imported; add a type-only import
from React and update the prop type to use it (e.g., import type { ReactNode }
from 'react' and annotate children as ReactNode) so the type reference is
resolved without introducing a runtime React import.
const session = await getAuth();
const { executionId } = await params;

return <p>Execution id: {executionId}</p>;
}

// Only prefetch execution details if user is authenticated
if (session) {
  await prefetchExecutionDetails(executionId);
}

return (
  <HydrateClient>
    <ErrorBoundary>
      <Suspense fallback={<div>Loading...</div>}>
        <ExecutionDetail executionId={executionId} />
      </Suspense>
    </ErrorBoundary>
  </HydrateClient>
);
};
🧩 Analysis chain
Verify authentication enforcement and user experience for execution details.
The page conditionally prefetches execution data based on session presence, but unconditionally renders ExecutionDetail. This creates two concerns:
- Security: If the tRPC procedure called by `useSuspenseExecutionDetails` doesn't enforce authentication, execution data could be exposed to unauthenticated users.
- UX: Unauthenticated users will render the component but have no prefetched data, likely resulting in errors or confusing empty states.
Run this script to verify that execution detail procedures have proper authentication guards:
If guest access is not intended, consider either:
- Redirecting unauthenticated users before rendering, or
- Showing a proper authentication required state instead of the detail component
🏁 Script executed:
#!/bin/bash
# Description: Check if execution detail procedures enforce authentication
# Search for getExecutionDetails procedure definition
rg -A 10 "getExecutionDetails.*procedure" --type ts -g "**/workflows/server/routers.ts"
# Check if it uses protectedProcedure or baseProcedure
rg "(protectedProcedure|baseProcedure).*getExecutionDetails" --type ts -g "**/workflows/server/**"Length of output: 0
I need to search more broadly to locate the actual procedure definitions and understand how data is being fetched.
🏁 Script executed:
#!/bin/bash
# Search for ExecutionDetail component to find how it fetches data
echo "=== Finding ExecutionDetail component ==="
fd -e tsx -e ts | xargs grep -l "ExecutionDetail" | head -20
# Search for tRPC procedures related to execution broadly
echo -e "\n=== Finding execution-related procedures ==="
fd -path "*/server/**" -e ts | xargs grep -n "procedure.*execution" -i | head -20
# Search for procedure definitions more broadly
echo -e "\n=== Finding procedure file structure ==="
fd -path "*/workflows/server/**" -e ts | head -20
# Look for useSuspenseExecutionDetails hook
echo -e "\n=== Finding useSuspenseExecutionDetails hook ==="
rg "useSuspenseExecutionDetails" --type ts -nLength of output: 1079
🏁 Script executed:
#!/bin/bash
# Read the useSuspenseExecutionDetails hook to find the procedure call
echo "=== useSuspenseExecutionDetails hook implementation ==="
cat features/executions/hooks/use-executions.ts
# Look for execution-related tRPC routes/routers
echo -e "\n=== Finding tRPC router files ==="
fd -type f -name "*router*" | grep -E "(execution|trpc)" | head -10
# Find all server directories related to executions
echo -e "\n=== Finding server files in executions ==="
find features/executions -type f -name "*.ts" -path "*/server/*"
# Check prefetch.ts for auth context
echo -e "\n=== prefetch.ts content ==="
cat features/executions/server/prefetch.tsLength of output: 1556
🏁 Script executed:
#!/bin/bash
# Find tRPC router structure
echo "=== Finding trpc directories and files ==="
find . -type d -name "trpc" | head -10
# Find workflows router specifically
echo -e "\n=== Searching for workflows router ==="
fd -e ts | xargs grep -l "getExecutionDetails" | head -20
# Search for router definitions with procedure pattern
echo -e "\n=== Searching for procedure definitions ==="
rg "getExecutionDetails" -n -A 3 -B 3 --type ts | head -50Length of output: 1015
🏁 Script executed:
#!/bin/bash
# Explore trpc structure
echo "=== Contents of ./trpc ==="
find ./trpc -type f -name "*.ts" | head -30
# Look for workflows router specifically
echo -e "\n=== Searching for workflows router definition ==="
fd -e ts -path "*/trpc/*" | xargs grep -l "workflows" | head -20
# Search for procedure definitions with router pattern
echo -e "\n=== Searching in router files ==="
rg "\.query.*getExecutionDetails|getExecutionDetails.*query" --type ts -A 5 -B 5Length of output: 1466
🏁 Script executed:
#!/bin/bash
# Read the main app router
echo "=== Main tRPC router (_app.ts) ==="
cat ./trpc/routers/_app.ts
# Also check if there are more router files
echo -e "\n=== All files in routers directory ==="
find ./trpc/routers -type fLength of output: 562
🏁 Script executed:
#!/bin/bash
# Find and read the workflow router
echo "=== Finding workflows/server/routers files ==="
find features/workflows/server -type f -name "*router*"
# Also check what files are in that directory
echo -e "\n=== All files in workflows/server ==="
find features/workflows/server -type f
# Read the routers file
echo -e "\n=== Workflow router content ==="
cat features/workflows/server/routers.ts 2>/dev/null || cat features/workflows/server/routers/index.ts 2>/dev/null || find features/workflows/server -name "*router*" -exec cat {} \;Length of output: 12058
🏁 Script executed:
#!/bin/bash
# Search comprehensively for getExecutionDetails definition
echo "=== Searching for getExecutionDetails definition anywhere ==="
rg "getExecutionDetails" -n --type ts
# Check if there are any other router files
echo -e "\n=== Checking for other router files ==="
find . -type f -name "*.ts" | xargs grep -l "\.query.*\(.*executionId" | head -20Length of output: 513
The getExecutionDetails tRPC procedure is missing from the workflow router and must be implemented.
The page calls trpc.workflows.getExecutionDetails via both useSuspenseExecutionDetails hook and prefetchExecutionDetails, but this procedure is not defined in features/workflows/server/routers.ts. The router only has getExecutions (plural) for listing executions, not a query to fetch a single execution's details.
Add the missing procedure to features/workflows/server/routers.ts:
getExecutionDetails: protectedProcedure
.input(z.object({ executionId: z.string() }))
.query(async ({ ctx, input }) => {
// Verify execution belongs to user's workflow
const execution = await prisma.execution.findUniqueOrThrow({
where: { id: input.executionId },
include: { steps: true, logs: true },
});
// Verify ownership via workflow
const workflow = await prisma.workflow.findUniqueOrThrow({
where: { id: execution.workflowId, userId: ctx.auth.user.id },
});
return execution;
  }),

🤖 Prompt for AI Agents
In features/workflows/server/routers.ts (add near the other workflow
procedures), implement a new protectedProcedure named getExecutionDetails that
accepts input z.object({ executionId: z.string() }) and performs a query which:
fetches the execution by id including its steps and logs (use
prisma.execution.findUniqueOrThrow with include: { steps: true, logs: true }),
then verifies ownership by ensuring the related workflow exists and belongs to
ctx.auth.user.id (use prisma.workflow.findUniqueOrThrow with where: { id:
execution.workflowId, userId: ctx.auth.user.id }), and finally returns the
execution; export/include this procedure in the router alongside getExecutions.
// Verify workflow exists and is owned by a user
const workflow = await prisma.workflow.findUnique({
  where: { id: workflowId },
  select: { id: true, userId: true },
});

if (!workflow) {
  return NextResponse.json(
    { error: "Workflow not found" },
    { status: 404 }
  );
}
Add authentication for webhook calls (currently unauthenticated).
Anyone can enqueue executions if they know/guess a workflowId. Require a secret immediately; then move to per‑workflow secrets/HMAC.
Apply minimal hardening now (env-scoped shared secret):
export async function POST(
request: NextRequest,
{ params }: { params: { workflowId: string } }
) {
try {
const { workflowId } = params;
// Verify workflow exists and is owned by a user
const workflow = await prisma.workflow.findUnique({
where: { id: workflowId },
- select: { id: true, userId: true },
+ select: { id: true, userId: true },
});
if (!workflow) {
return NextResponse.json(
{ error: "Workflow not found" },
{ status: 404 }
);
}
+
+ // Basic auth: require shared secret header (short-term mitigation)
+ const provided = request.headers.get("x-webhook-secret");
+ const expected = process.env.WEBHOOK_SHARED_SECRET;
+ if (!expected || !provided || provided !== expected) {
+ return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
+ }
// Get webhook data from request body
  const webhookData = await request.json();

Next steps (recommended):
- Per-workflow secret in DB (e.g., `workflow.webhookSecret`) and rotateable keys.
- HMAC signature verification (e.g., `X-Signature: sha256=...`) with constant-time compare.
- Basic rate limiting and body size limits.
Also applies to: 25-36, 43-49
🤖 Prompt for AI Agents
In app/api/webhooks/[workflowId]/route.ts around lines 12-23 (and similarly for
25-36, 43-49) add a minimal authentication gate: require a shared secret header
(e.g., "x-webhook-secret") on incoming requests, load the expected secret from
an env var (e.g., process.env.WEBHOOK_SHARED_SECRET), and if the env var is
missing return 500; if the header is missing or does not match the expected
secret return 401 JSON. Perform the comparison using a constant-time method
(crypto.timingSafeEqual) on Buffer values to avoid timing attacks and ensure you
normalize types/lengths before comparing; log only non-sensitive context on
failures. This is a stopgap until per-workflow secrets and HMAC verification are
implemented.
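As a hedged sketch of the constant-time comparison and the follow-up HMAC verification described above: the header and env names follow the suggestion, and both helpers are hypothetical, not existing project code.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical helper: constant-time check of a shared webhook secret.
// timingSafeEqual throws if buffer lengths differ, so both sides are first
// HMAC-hashed with a fixed key to normalize length (the "double HMAC" trick),
// keeping the comparison constant-time without leaking length information.
function secretsMatch(provided: string, expected: string): boolean {
  const a = createHmac("sha256", "normalize").update(provided).digest();
  const b = createHmac("sha256", "normalize").update(expected).digest();
  return timingSafeEqual(a, b);
}

// Sketch of the recommended next step: verify an `X-Signature: sha256=<hex>`
// header computed as HMAC-SHA256 over the raw request body with a
// per-workflow secret.
function verifySignature(rawBody: string, header: string, secret: string): boolean {
  const expected = "sha256=" + createHmac("sha256", secret).update(rawBody).digest("hex");
  return secretsMatch(header, expected);
}
```

In the route handler, a missing `WEBHOOK_SHARED_SECRET` env var would return 500 and a failed `secretsMatch` would return 401, per the prompt above.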
setNodes((nodes) => {
  const hasInitialTrigger = nodes.some(
    (node) => node.type === NodeType.INITIAL
  );

  const flowPosition = screenToFlowPosition({ x, y });
  const newNode = {
    id: createId(),
    data: {},
    type: nodeType.type,
    position: flowPosition,
  };

  if (hasInitialTrigger) {
    return [newNode];
  }

  return [...nodes, newNode];
});
Destructive logic: existing nodes are wiped when INITIAL exists.
Returning [newNode] replaces the whole graph. You likely meant to append.
- if (hasInitialTrigger) {
- return [newNode];
- }
-
- return [...nodes, newNode];
+ // Keep existing nodes; just append
+  return [...nodes, newNode];
🤖 Prompt for AI Agents
In components/node-selector.tsx around lines 221 to 239, the setter currently
returns [newNode] when an INITIAL node exists which wipes the entire node list;
replace that destructive return with an append so existing nodes are preserved
by returning [...nodes, newNode] (i.e., change the hasInitialTrigger branch to
return [...nodes, newNode] instead of [newNode]).
**Required Variables:**
- `DATABASE_URL` - PostgreSQL connection string
- `BETTER_AUTH_SECRET` - Random secret key for authentication
Wrap bare URL in inline code or markdown link syntax.
Line 211 contains a bare URL http://localhost:3000 which violates markdown linting rules (MD034). Wrap it in backticks or use proper markdown link syntax.
- `BETTER_AUTH_URL` - Your app's URL (http://localhost:3000 for development)
+ `BETTER_AUTH_URL` - Your app's URL (`http://localhost:3000` for development)

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In README.md around line 211, a bare URL `http://localhost:3000` is used which
breaks MD034 linting; wrap the URL in inline code backticks or convert it to a
proper markdown link (e.g., [http://localhost:3000](http://localhost:3000)) so
the URL is not left bare in the text.
const testGetMany = () => {
  console.log("✓ Test 1: getMany procedure should return empty list for unauthenticated users");
  return true;
};

// Test 2: Verify AddNodeButton opens selector
const testAddNodeButton = () => {
  console.log("✓ Test 2: AddNodeButton should have onClick handler to open selector");
  return true;
};

// Test 3: Verify drag and drop functionality
const testDragDrop = () => {
  console.log("✓ Test 3: Node items should be draggable and have onDragStart handler");
  return true;
};

// Test 4: Verify ReactFlow handles drops
const testReactFlowDrop = () => {
  console.log("✓ Test 4: Editor component should have onDragOver, onDrop, and onDragLeave handlers");
  return true;
};
Stub tests provide false confidence.
All test functions unconditionally return true without performing any actual validation. These placeholder tests will always pass regardless of whether the functionality works correctly, which defeats the purpose of testing.
Either implement real assertions or remove this file. For example, actual tests would:
- Make HTTP requests to verify unauthenticated access
- Query the DOM to verify UI elements exist
- Test event handlers are properly attached
Would you like me to help generate actual test implementations using a testing framework like Jest or Vitest?
🤖 Prompt for AI Agents
In test-workflow-auth.js around lines 5–26 the test functions are just stubs
returning true which gives false confidence; replace each stub with real
assertions or delete the file. Implement concrete tests using your test runner
(Jest/Vitest) and React Testing Library: 1) for getMany perform an HTTP call (or
call the procedure directly) and assert unauthenticated requests return 401 or
an empty list; 2) render the component and assert the AddNodeButton exists and
fires its click handler (use a mock or spy) to open the selector; 3) render node
items and assert they have draggable=true and that onDragStart is called when
you simulate a dragStart event; 4) render the editor/ReactFlow component and
simulate dragOver/drop/dragLeave events asserting the corresponding handlers are
invoked and expected side effects occur; if you don’t want to implement tests,
delete the file to avoid misleading passing tests.
// Enhanced error handling with detailed logging
onError: ({ error, path, input, ctx, type }) => {
  console.error('tRPC Error:', {
    error: error.message,
    code: error.code,
    path,
    input,
    type,
    url: getUrl(),
    httpStatus: error.data?.httpStatus,
    stack: error.stack,
  });
},
httpBatchLink doesn’t support onError; use loggerLink instead
Move error logging to loggerLink and drop onError from httpBatchLink.
-import { createTRPCClient, httpBatchLink } from '@trpc/client';
+import { createTRPCClient, httpBatchLink, loggerLink } from '@trpc/client';
@@
- links: [
- httpBatchLink({
+ links: [
+ loggerLink({
+ enabled: () => process.env.NODE_ENV !== 'production',
+ }),
+ httpBatchLink({
transformer: superjson,
url: getUrl(),
- // Enhanced error handling with detailed logging
- onError: ({ error, path, input, ctx, type }) => {
- console.error('tRPC Error:', {
- error: error.message,
- code: error.code,
- path,
- input,
- type,
- url: getUrl(),
- httpStatus: error.data?.httpStatus,
- stack: error.stack,
- });
-      },
🤖 Prompt for AI Agents
In trpc/client.tsx around lines 49–61, remove the onError option from the
httpBatchLink and instead add a loggerLink before the httpBatchLink; configure
loggerLink to capture and log error details (message, code, path, input,
url/getUrl(), httpStatus and stack) using console.error (or your process logger)
when a response contains an error, and keep the httpBatchLink strictly for
transport configuration. Ensure the new loggerLink is placed earlier in the
links array so it sees responses, and delete the onError block entirely from the
httpBatchLink call.
retryDelay: (attemptIndex) => Math.min(1000 * 2 ** attemptIndex, 30000),
retryCondition: (error, attemptIndex) => {
  // Don't retry on authentication errors (401, 403)
  if (error.data?.httpStatus === 401 || error.data?.httpStatus === 403) {
    return false;
  }
  // Retry on network errors, 5xx server errors, or other transient issues (up to 3 attempts)
  return (
    attemptIndex < 3 &&
    (error.code === 'INTERNAL_SERVER_ERROR' ||
      error.code === 'TIMEOUT' ||
      !error.data?.httpStatus ||
      error.data.httpStatus >= 500)
  );
},
Retry options are not part of httpBatchLink config
Implement retries in the fetch option (below) and remove retryDelay/retryCondition here.
- // Retry logic with exponential backoff, avoiding retries on auth errors
- retryDelay: (attemptIndex) => Math.min(1000 * 2 ** attemptIndex, 30000),
- retryCondition: (error, attemptIndex) => {
- // Don't retry on authentication errors (401, 403)
- if (error.data?.httpStatus === 401 || error.data?.httpStatus === 403) {
- return false;
- }
- // Retry on network errors, 5xx server errors, or other transient issues (up to 3 attempts)
- return (
- attemptIndex < 3 &&
- (error.code === 'INTERNAL_SERVER_ERROR' ||
- error.code === 'TIMEOUT' ||
- !error.data?.httpStatus ||
- error.data.httpStatus >= 500)
- );
-  },

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In trpc/client.tsx around lines 63 to 77, the current retryDelay and
retryCondition belong in the fetch implementation, not the httpBatchLink config;
remove retryDelay and retryCondition from the link options and implement the
same retry logic inside the fetch option: wrap the original fetch call to
perform up to 3 attempts with exponential backoff (Math.min(1000 * 2 **
attemptIndex, 30000)), do not retry for HTTP 401 or 403, retry on network
errors, timeouts, or server 5xx responses, and surface the final error after all
attempts; ensure the wrapper preserves request/response shape expected by tRPC
and respects abort signals/timeouts.
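The backoff schedule referenced in both the removed config and the suggested fetch wrapper (`Math.min(1000 * 2 ** attemptIndex, 30000)`) can be checked in isolation. A minimal sketch; `backoffDelay` is a hypothetical helper name, not part of the PR's code:

```typescript
// Exponential backoff capped at 30s, mirroring the formula in the suggestion above.
function backoffDelay(attemptIndex: number): number {
  return Math.min(1000 * 2 ** attemptIndex, 30000);
}

// Delays double per attempt: 1000, 2000, 4000, 8000, 16000...
console.log(backoffDelay(0)); // 1000
console.log(backoffDelay(3)); // 8000
// The cap takes effect from attempt 5 onward (1000 * 2**5 = 32000 > 30000).
console.log(backoffDelay(5)); // 30000
```

The cap matters: without it, attempt 15 alone would wait over nine hours.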
fetch: async (input, init) => {
  const response = await fetch(input, init);
  const contentType = response.headers.get('content-type');
  if (contentType && contentType.includes('text/html')) {
    throw new Error(
      `Received HTML response instead of JSON from ${input}. This may indicate an authentication error, server error page, or misconfigured API endpoint. Check server logs and authentication state.`
    );
  }
  return response;
},
🛠️ Refactor suggestion | 🟠 Major
Do retries inside fetch with exponential backoff and auth error short‑circuit
Use the existing RetryService and keep the HTML-response guard.
- // Custom fetch to detect HTML responses and throw descriptive errors
- fetch: async (input, init) => {
- const response = await fetch(input, init);
- const contentType = response.headers.get('content-type');
- if (contentType && contentType.includes('text/html')) {
- throw new Error(
- `Received HTML response instead of JSON from ${input}. This may indicate an authentication error, server error page, or misconfigured API endpoint. Check server logs and authentication state.`
- );
- }
- return response;
- },
+ // Custom fetch with retry + HTML response detection
+ fetch: async (input, init) => {
+ const doFetch = async () => {
+ const res = await fetch(input, init);
+ // Auth errors: don't retry, fail fast
+ if (res.status === 401 || res.status === 403) return res;
+ const ct = res.headers.get('content-type');
+ if (ct && ct.includes('text/html')) {
+ throw new Error(
+ `Received HTML response instead of JSON from ${input}. This may indicate an auth error, server error page, or misconfigured API endpoint.`
+ );
+ }
+ if (res.status >= 500) {
+ // trigger retry for 5xx
+ throw new Error(`HTTP ${res.status} from ${input}`);
+ }
+ return res;
+ };
+ // 3 attempts, 1s * 2^n backoff
+ return RetryService.withRetry(doFetch, 3, 1000, 2);
+  },

Add the import if not present:
import { RetryService } from '@/lib/retry';

Also avoid logging request inputs in production; loggerLink above is gated by NODE_ENV.
🤖 Prompt for AI Agents
In trpc/client.tsx around lines 79 to 88, replace the direct fetch with a call
through the existing RetryService: add "import { RetryService } from
'@/lib/retry';", wrap the network call in RetryService.retry or equivalent so it
uses exponential backoff and limited attempts, but preserve the current
HTML-response guard and short-circuit by immediately throwing on content-type
'text/html' (do not retry HTML/auth error responses). Ensure any logging inside
the retry logic does not include request inputs when NODE_ENV === 'production'
(only log inputs in non-production or remove them), and retain the same returned
Response when successful.
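The HTML-response guard is a pure content-type check, so it can be unit-tested apart from any fetch wiring. A sketch with a hypothetical helper name:

```typescript
// Returns true when a Content-Type header indicates an HTML page was returned
// where JSON was expected (e.g. a login redirect or an error page).
function looksLikeHtmlResponse(contentType: string | null): boolean {
  return contentType !== null && contentType.includes("text/html");
}

console.log(looksLikeHtmlResponse("text/html; charset=utf-8")); // true
console.log(looksLikeHtmlResponse("application/json"));         // false
console.log(looksLikeHtmlResponse(null));                       // false
```

Extracting the check this way also makes the "don't retry HTML/auth responses" rule a one-line condition in the retry wrapper.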
const { data: credentials = [] } = useQuery({
  queryKey: ["credentials"],
  queryFn: async () => {
    const response = await fetch("/api/trpc/credentials.list");
    const data = await response.json();
    return data.result?.data || [];
  },
});
Add error handling for credential fetching.
The fetch operation has no error handling. If the API call fails, users will see an empty credential list with no indication of what went wrong.
Add error handling:
const { data: credentials = [] } = useQuery({
queryKey: ["credentials"],
queryFn: async () => {
- const response = await fetch("/api/trpc/credentials.list");
- const data = await response.json();
- return data.result?.data || [];
+ const response = await fetch("/api/trpc/credentials.list");
+ if (!response.ok) {
+ throw new Error(`Failed to fetch credentials: ${response.statusText}`);
+ }
+ const data = await response.json();
+ return data.result?.data || [];
},
});

Consider showing an error message in the UI when credential fetching fails.
🤖 Prompt for AI Agents
In features/ai/components/openai-chat/dialog.tsx around lines 66 to 73, the
credential fetch lacks error handling so failures silently return an empty list;
update the useQuery to handle errors by making the queryFn throw when
response.ok is false or parsing fails, and add onError (or let react-query
surface the error) to capture the error message; propagate the error state to
the component (e.g., data, isError, error) and render an inline error message or
fallback UI when isError is true so users see a clear failure notice instead of
an empty credential list.
const google = createGoogleGenerativeAI({
  apiKey: process.env.NEXT_PUBLIC_GOOGLE_AI_API_KEY!,
});
Critical: Don’t expose AI provider API keys in the client. Move generation server‑side.
NEXT_PUBLIC_GOOGLE_AI_API_KEY ships to browsers and can be abused.
- Remove client SDK initialization.
- Call a server endpoint (TRPC router) to generate and return the workflow.
- const google = createGoogleGenerativeAI({
- apiKey: process.env.NEXT_PUBLIC_GOOGLE_AI_API_KEY!,
- });
+ // Use TRPC on the client; the server holds the provider key securely.
+ import { useTRPC } from "@/trpc/client";
+ const trpc = useTRPC();

Then in sendMessage:
- const { text } = await generateText({
- model: google("gemini-2.0-flash-exp"),
- system: systemPrompt,
- prompt: content,
- experimental_output: workflowSchema,
- });
-
- const parsedWorkflow = workflowSchema.parse(JSON.parse(text));
+ const result = await trpc.workflowGenerator.generateWorkflow.mutate({
+ prompt: content,
+ systemPrompt,
+ });
+ const parsedWorkflow = workflowSchema.parse(result.workflow);

Adjust the router/method names to your actual TRPC API. Based on learnings.
Committable suggestion skipped: line range outside the PR's diff.
const slackFormSchema = z.object({
  credentialId: z.string().min(1, "Please select a credential"),
  channel: z.string().min(1, "Channel is required"),
  messageType: z.enum(["text", "blocks"]),
  messageText: z.string().min(1, "Message text is required"),
  blocksJson: z.string().optional(),
});
Schema bug: blocks mode still requires messageText; also validate Blocks JSON.
When messageType="blocks", the form omits messageText but zod still requires it, causing save failures. Use a discriminated union and validate blocksJson.
-const slackFormSchema = z.object({
- credentialId: z.string().min(1, "Please select a credential"),
- channel: z.string().min(1, "Channel is required"),
- messageType: z.enum(["text", "blocks"]),
- messageText: z.string().min(1, "Message text is required"),
- blocksJson: z.string().optional(),
-});
+const slackFormSchema = z.discriminatedUnion("messageType", [
+ z.object({
+ messageType: z.literal("text"),
+ credentialId: z.string().min(1, "Please select a credential"),
+ channel: z.string().min(1, "Channel is required"),
+ messageText: z.string().min(1, "Message text is required"),
+ blocksJson: z.string().optional(),
+ }),
+ z
+ .object({
+ messageType: z.literal("blocks"),
+ credentialId: z.string().min(1, "Please select a credential"),
+ channel: z.string().min(1, "Channel is required"),
+ blocksJson: z.string().min(1, "Blocks JSON is required"),
+ messageText: z.string().optional(),
+ })
+ .superRefine(({ blocksJson }, ctx) => {
+ try {
+ JSON.parse(blocksJson);
+ } catch {
+ ctx.addIssue({ code: z.ZodIssueCode.custom, message: "Invalid Blocks JSON", path: ["blocksJson"] });
+ }
+ }),
+]);

Also applies to: 200-252
🤖 Prompt for AI Agents
In features/communication/components/slack-send/dialog.tsx around lines 37 to 43
(and similarly at lines ~200-252), the zod schema currently always requires
messageText even when messageType === "blocks" and does not validate blocksJson;
replace the flat schema with a discriminated union on messageType (one branch
for "text" that requires messageText and optional blocksJson, and one branch for
"blocks" that requires blocksJson and makes messageText optional/omitted), and
add a custom refinement or transform to validate that blocksJson is present and
is valid JSON (and optionally validate the parsed structure if needed); update
any form types/usages accordingly so the form no longer fails when using blocks
mode.
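The required-field rule behind the discriminated union can also be stated as a plain validator, which makes the logic easy to test without zod. This is a hypothetical sketch with invented names, not the PR's schema:

```typescript
// Plain-TypeScript restatement of the rule: "text" requires messageText,
// "blocks" requires parseable blocksJson. Returns an error message or null.
type SlackForm = {
  messageType: "text" | "blocks";
  messageText?: string;
  blocksJson?: string;
};

function validateSlackForm(form: SlackForm): string | null {
  if (form.messageType === "text") {
    return form.messageText && form.messageText.length > 0
      ? null
      : "Message text is required";
  }
  // blocks mode: messageText is optional, blocksJson must be valid JSON
  if (!form.blocksJson) return "Blocks JSON is required";
  try {
    JSON.parse(form.blocksJson);
    return null;
  } catch {
    return "Invalid Blocks JSON";
  }
}
```

Note that the buggy flat schema above fails the third case: blocks mode without messageText.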
const { data: credentials = [] } = useQuery({
  queryKey: ["credentials"],
  queryFn: async () => {
    const response = await fetch("/api/trpc/credentials.list");
    const data = await response.json();
    return data.result?.data || [];
  },
});
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Prefer the typed TRPC client over raw fetch for credentials.
Use your generated TRPC React hooks (credentials.list) instead of calling /api/trpc directly; you’ll get types, errors, and batching.
Run to locate the TRPC React client in the repo:
🏁 Script executed:
#!/bin/bash
rg -nP --type=ts --type=tsx -C2 '(createTRPCReact|createTRPCNext|createTRPCProxyClient)'
rg -nP --type=ts --type=tsx -C2 'credentials\.(list|get|all)'
fd -t f -a 'trpc' -x sed -n '1,120p' {}

Length of output: 55
I need to adjust the search strategy since the file type flags failed. Let me run corrected scripts to locate the TRPC client and verify the review comment.
---
🏁 Script executed:
#!/bin/bash
# Search for TRPC client creation with broader approach
rg -n 'createTRPCReact|createTRPCNext|createTRPCProxyClient' --max-count=5
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Find all files with 'trpc' in name or path
fd -t f 'trpc' | head -20

Length of output: 0
🏁 Script executed:
#!/bin/bash
# Search for credentials hooks/usage
rg -n 'credentials\.list|credentials\.get|useQuery.*credentials' --max-count=10

Length of output: 2326
🏁 Script executed:
#!/bin/bash
# Look for @/ import aliases to understand import pattern
rg -n "from ['\"]@/" --max-count=5 features/communication/components/slack-send/dialog.tsx

Length of output: 211
{selectedType === "CUSTOM" ? (
  <Textarea
    placeholder="Enter JSON data..."
    value={typeof field.value === 'object' ? JSON.stringify(field.value, null, 2) : ''}
    onChange={(e) => {
      try {
        const parsed = JSON.parse(e.target.value);
        field.onChange(parsed);
      } catch {
        field.onChange({});
      }
    }}
  />
) : (
Fix CUSTOM JSON editing UX; don’t overwrite with {} on partial input.
Parsing on every keystroke resets the field and loses user input. Keep raw text, validate on blur, and wrap with FormControl.
- {selectedType === "CUSTOM" ? (
- <Textarea
- placeholder="Enter JSON data..."
- value={typeof field.value === 'object' ? JSON.stringify(field.value, null, 2) : ''}
- onChange={(e) => {
- try {
- const parsed = JSON.parse(e.target.value);
- field.onChange(parsed);
- } catch {
- field.onChange({});
- }
- }}
- />
- ) : (
+ {selectedType === "CUSTOM" ? (
+ <FormControl>
+ <Textarea
+ placeholder="Enter JSON data..."
+ value={customJson}
+ onChange={(e) => setCustomJson(e.target.value)}
+ onBlur={() => {
+ try {
+ const parsed = JSON.parse(customJson);
+ setJsonError(undefined);
+ field.onChange(parsed);
+ } catch (err) {
+ setJsonError("Invalid JSON");
+ }
+ }}
+ />
+ </FormControl>
+ ) : (

Add these helper states at the top of the component (outside renderCredentialDataFields):
// additions needed near other hooks in CredentialForm
const [customJson, setCustomJson] = React.useState(
typeof defaultValues?.data === "object" ? JSON.stringify(defaultValues?.data, null, 2) : "{}"
);
const [jsonError, setJsonError] = React.useState<string | undefined>();

Optionally surface the error:
{jsonError && <FormMessage>{jsonError}</FormMessage>}

🤖 Prompt for AI Agents
In features/credentials/components/credential-form.tsx around lines 76 to 89,
the CUSTOM JSON textarea currently parses on every keystroke and replaces the
field with {} on parse errors, which wipes user input; instead add two hook
states near the other hooks (customJson initialized from defaultValues.data JSON
stringified, and jsonError as undefined), wrap the textarea with a
FormControl/FormMessage, bind the textarea value to customJson and update
customJson on change without parsing, validate/parse JSON only on blur (on
successful parse call field.onChange(parsed) and clear jsonError, on failure set
jsonError but do not change field), and render jsonError in a FormMessage so
partial/invalid input isn’t discarded during typing.
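The parse-on-blur behavior reduces to one decision: keep raw text while the user types, attempt a parse only on blur, and never replace the committed value on failure. A hypothetical framework-free sketch of that commit step:

```typescript
// Parse-on-blur helper: returns the parsed value on success, or the previous
// committed value plus an error message on failure. Names are invented for
// illustration; the real component would wire this into field.onChange.
function commitJsonOnBlur<T>(
  raw: string,
  previous: T
): { value: T; error?: string } {
  try {
    return { value: JSON.parse(raw) as T };
  } catch {
    // Invalid input: keep the previous committed value instead of wiping it to {}.
    return { value: previous, error: "Invalid JSON" };
  }
}
```

The key property is the second branch: a half-typed object never destroys what the user already saved.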
private async executeHttpRequestNode(node: any): Promise<NodeExecutionResult> {
  const nodeData = node.data as any || {};

  if (!nodeData.endpoint) {
    return {
      success: false,
      error: "HTTP Request node requires an endpoint configuration",
    };
  }

  try {
    const response = await fetch(nodeData.endpoint, {
      method: nodeData.method || "GET",
      headers: {
        "Content-Type": "application/json",
        ...nodeData.headers,
      },
      body: nodeData.method !== "GET" && nodeData.method !== "HEAD"
        ? JSON.stringify(nodeData.body)
        : undefined,
    });

    const data = await response.text();

    return {
      success: response.ok,
      output: {
        status: response.status,
        statusText: response.statusText,
        headers: Object.fromEntries(response.headers.entries()),
        data: data,
      },
      logs: [{
        level: response.ok ? "INFO" : "ERROR",
        message: `HTTP request to ${nodeData.endpoint} returned ${response.status}`,
        data: {
          endpoint: nodeData.endpoint,
          method: nodeData.method,
          status: response.status,
        },
      }],
    };
  } catch (error) {
    return {
      success: false,
      error: error instanceof Error ? error.message : "HTTP request failed",
      logs: [{
        level: "ERROR",
        message: `HTTP request failed: ${error instanceof Error ? error.message : "Unknown error"}`,
      }],
    };
  }
}
🛠️ Refactor suggestion | 🟠 Major
Use makeHttpRequest helper and add authentication support.
The HTTP request node uses plain fetch without timeouts or retries, while the codebase provides a makeHttpRequest helper with retry logic. Additionally, it lacks credential/authentication support.
Improvements:
private async executeHttpRequestNode(node: any): Promise<NodeExecutionResult> {
const nodeData = node.data as any || {};
if (!nodeData.endpoint) {
return {
success: false,
error: "HTTP Request node requires an endpoint configuration",
};
}
try {
// Build headers
const headers: Record<string, string> = {
'Content-Type': 'application/json',
...nodeData.headers,
};
// Add authentication if credential provided
if (nodeData.credentialId) {
const credential = await getCredential(this.prisma, nodeData.credentialId);
if (credential) {
switch (credential.type) {
case 'API_KEY':
headers['X-API-Key'] = credential.data as string;
break;
case 'BEARER_TOKEN':
headers['Authorization'] = `Bearer ${credential.data}`;
break;
case 'BASIC_AUTH':
const basicAuth = credential.data as { username: string; password: string };
headers['Authorization'] = `Basic ${Buffer.from(`${basicAuth.username}:${basicAuth.password}`).toString('base64')}`;
break;
}
}
}
// Use makeHttpRequest helper with retries
const response = await makeHttpRequest(
nodeData.endpoint,
{
method: nodeData.method || 'GET',
headers,
body: nodeData.method !== 'GET' && nodeData.method !== 'HEAD'
? JSON.stringify(nodeData.body)
: undefined,
},
nodeData.retries || 3
);
return {
success: response.success,
output: response.data,
error: response.error,
logs: [{
level: response.success ? "INFO" : "ERROR",
message: `HTTP ${nodeData.method || 'GET'} request to ${nodeData.endpoint} ${response.success ? 'succeeded' : 'failed'}`,
data: {
endpoint: nodeData.endpoint,
method: nodeData.method,
status: response.status,
},
}],
};
} catch (error) {
return {
success: false,
error: error instanceof Error ? error.message : "HTTP request failed",
logs: [{
level: "ERROR",
message: `HTTP request failed: ${error instanceof Error ? error.message : "Unknown error"}`,
}],
};
}
}

🤖 Prompt for AI Agents
lib/execution-engine.ts lines 1061-1114: The current HTTP node implementation
uses raw fetch without timeouts/retries and lacks credential support; replace
the fetch usage with the shared makeHttpRequest helper, build headers first
(merge nodeData.headers with Content-Type), if nodeData.credentialId fetch
credential via getCredential(this.prisma, nodeData.credentialId) and attach
credentials (API_KEY -> X-API-Key, BEARER_TOKEN -> Authorization: Bearer
<token>, BASIC_AUTH -> Authorization: Basic <base64 user:pass>), pass
method/body as before, call makeHttpRequest with nodeData.retries (default 3),
then map the helper response into NodeExecutionResult (use response.success,
response.data, response.error, include status in logs) and preserve the existing
error catch behavior.
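Of the three credential mappings in the suggestion, Basic auth is the only one with an encoding step worth verifying. A minimal sketch using Node's `Buffer`; the helper name is hypothetical:

```typescript
// Builds an RFC 7617 Basic auth header value from a username/password pair,
// mirroring the BASIC_AUTH branch of the suggestion above.
function basicAuthHeader(username: string, password: string): string {
  return `Basic ${Buffer.from(`${username}:${password}`).toString("base64")}`;
}

console.log(basicAuthHeader("user", "pass")); // "Basic dXNlcjpwYXNz"
```

Note the colon-joined pair is encoded as a whole; encoding the parts separately produces an invalid header.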
static async createExecution(
  workflowId: string,
  userId: string,
  triggerType: "manual" | "webhook" | "schedule" = "manual",
  prisma: PrismaClient
): Promise<ExecutionEngine> {
  const context: ExecutionContext = {
    executionId: "", // Will be set when execution starts
    workflowId,
    userId,
    triggerType,
  };

  return new ExecutionEngine(context, prisma);
}
}
executionId initialization is problematic.
Line 1123: The executionId is initialized as an empty string with a comment "Will be set when execution starts", but it's never actually set in the createExecution factory method. This causes the context to have an invalid executionId until executeWorkflow creates the execution record.
This was already flagged in the review of executeWorkflow (lines 40-108), where the execution.id should be stored in context but isn't.
The factory method should either:
- Accept the executionId as a parameter if the execution record is created elsewhere
- Create the execution record and return both the engine and execution ID
Recommended approach:
static async createExecution(
workflowId: string,
userId: string,
triggerType: "manual" | "webhook" | "schedule" = "manual",
prisma: PrismaClient
): Promise<{ engine: ExecutionEngine; executionId: string }> {
// Create the execution record
const execution = await prisma.execution.create({
data: {
workflowId,
status: "PENDING",
triggeredBy: userId,
},
});
const context: ExecutionContext = {
executionId: execution.id,
workflowId,
userId,
triggerType,
};
return {
engine: new ExecutionEngine(context, prisma),
executionId: execution.id,
};
}

Then update executeWorkflow to not create the execution record since it's already created.
🤖 Prompt for AI Agents
lib/execution-engine.ts lines 1116-1131: the factory currently sets executionId
to an empty string and never populates it, leaving the context invalid; change
createExecution to create the execution record via prisma (status PENDING,
triggeredBy userId, workflowId), set context.executionId to the created
execution.id, and return both the new ExecutionEngine and the executionId (e.g.,
Promise<{ engine: ExecutionEngine; executionId: string }>) so callers get a
valid ID; also update executeWorkflow to stop creating a duplicate execution
record when using this factory.
async dequeueJob(): Promise<{ jobId: string; workflowId: string; userId: string; triggerType: "webhook" | "schedule"; scheduledAt?: Date; webhookData?: any } | null> {
  // Find the next pending job
  const job = await this.prisma.executionJob.findFirst({
    where: {
      status: "PENDING",
      OR: [
        { scheduledAt: { lte: new Date() } },
        { scheduledAt: null },
      ],
    },
    orderBy: [
      { scheduledAt: "asc" },
      { createdAt: "asc" },
    ],
  });

  if (!job) {
    return null;
  }

  // Mark as processing
  await this.prisma.executionJob.update({
    where: { id: job.id },
    data: { status: "PROCESSING" },
  });

  return {
    jobId: job.id,
    workflowId: job.workflowId,
    userId: job.userId,
    triggerType: job.triggerType as "webhook" | "schedule",
    scheduledAt: job.scheduledAt || undefined,
    webhookData: job.webhookData,
  };
}
Fix race condition in job dequeuing.
The dequeueJob method has a race condition: between findFirst (line 36) and update (line 55), another worker could dequeue the same job. This can lead to duplicate job processing.
Wrap the operation in a transaction and use optimistic locking:
async dequeueJob(): Promise<{ jobId: string; workflowId: string; userId: string; triggerType: "webhook" | "schedule"; scheduledAt?: Date; webhookData?: any } | null> {
- // Find the next pending job
- const job = await this.prisma.executionJob.findFirst({
+ return await this.prisma.$transaction(async (tx) => {
+ const job = await tx.executionJob.findFirst({
- where: {
- status: "PENDING",
- OR: [
- { scheduledAt: { lte: new Date() } },
- { scheduledAt: null },
- ],
- },
- orderBy: [
- { scheduledAt: "asc" },
- { createdAt: "asc" },
- ],
- });
+ where: {
+ status: "PENDING",
+ OR: [
+ { scheduledAt: { lte: new Date() } },
+ { scheduledAt: null },
+ ],
+ },
+ orderBy: [
+ { scheduledAt: "asc" },
+ { createdAt: "asc" },
+ ],
+ });
- if (!job) {
- return null;
- }
+ if (!job) {
+ return null;
+ }
- // Mark as processing
- await this.prisma.executionJob.update({
- where: { id: job.id },
- data: { status: "PROCESSING" },
- });
+ // Atomically update only if still PENDING
+ const updated = await tx.executionJob.updateMany({
+ where: {
+ id: job.id,
+ status: "PENDING" // Optimistic lock
+ },
+ data: { status: "PROCESSING" },
+ });
+
+ // If no rows updated, job was claimed by another worker
+ if (updated.count === 0) {
+ return null;
+ }
- return {
- jobId: job.id,
- workflowId: job.workflowId,
- userId: job.userId,
- triggerType: job.triggerType as "webhook" | "schedule",
- scheduledAt: job.scheduledAt || undefined,
- webhookData: job.webhookData,
- };
+ return {
+ jobId: job.id,
+ workflowId: job.workflowId,
+ userId: job.userId,
+ triggerType: job.triggerType as "webhook" | "schedule",
+ scheduledAt: job.scheduledAt || undefined,
+ webhookData: job.webhookData,
+ };
+ });
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In lib/execution-queue.ts around lines 34 to 68, the current findFirst then
update flow has a race condition; replace it with an atomic optimistic-locking
pattern: inside a short retry loop (or transaction), select the candidate job
(same findFirst criteria), then attempt a conditional update that only sets
status to "PROCESSING" when the job still has status "PENDING" (e.g. update
where { id: job.id, status: "PENDING" }); if the conditional update returns no
rows (another worker claimed it), retry or return null; optionally wrap the
select+conditional-update in a prisma.transaction and limit retries to avoid
livelock. Ensure you check the conditional update result before returning the
job object so a job is only returned when the update succeeded.
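The claim-only-if-still-PENDING pattern can be demonstrated without Prisma using an in-memory job map. This is a hypothetical sketch, not the PR's code; the invariant it shows is that of two claimers on the same job, exactly one wins:

```typescript
// In-memory sketch of the optimistic-lock dequeue: a claim succeeds only if
// the job is still PENDING, mirroring the conditional
// updateMany({ where: { id, status: "PENDING" } }) in the suggestion above.
type JobStatus = "PENDING" | "PROCESSING";
const jobs = new Map<string, JobStatus>([["job-1", "PENDING"]]);

function claim(jobId: string): boolean {
  if (jobs.get(jobId) !== "PENDING") return false; // another worker got it
  jobs.set(jobId, "PROCESSING");
  return true;
}

const first = claim("job-1");  // true — job transitions to PROCESSING
const second = claim("job-1"); // false — already claimed
```

In single-threaded JavaScript the check-then-set above is trivially atomic; against a shared database, the same guarantee needs the conditional update (row count checked) rather than a separate find and update.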
model Execution {
  id          String          @id @default(cuid())
  workflowId  String
  status      ExecutionStatus @default(PENDING)
  startedAt   DateTime?
  completedAt DateTime?
  error       String?
  triggeredBy String? // user ID or "webhook" or "schedule"
  createdAt   DateTime @default(now())
  updatedAt   DateTime @updatedAt

  workflow Workflow        @relation(fields: [workflowId], references: [id], onDelete: Cascade)
  steps    ExecutionStep[]
  logs     ExecutionLog[]

  @@map("execution")
}

model ExecutionStep {
  id          String     @id @default(cuid())
  executionId String
  nodeId      String
  status      StepStatus @default(PENDING)
  startedAt   DateTime?
  completedAt DateTime?
  error       String?
  input       Json? // Input data for this step
  output      Json? // Output data from this step
  createdAt   DateTime @default(now())
  updatedAt   DateTime @updatedAt

  execution Execution @relation(fields: [executionId], references: [id], onDelete: Cascade)

  @@map("execution_step")
}

model ExecutionLog {
  id          String   @id @default(cuid())
  executionId String
  level       LogLevel @default(INFO)
  message     String
  nodeId      String? // Which node generated this log
  data        Json? // Additional structured data
  createdAt   DateTime @default(now())

  execution Execution @relation(fields: [executionId], references: [id], onDelete: Cascade)

  @@map("execution_log")
}
🛠️ Refactor suggestion | 🟠 Major
Add database indices for execution query performance.
The execution tracking models will be queried frequently for status monitoring, filtering by workflow, and retrieving logs. Missing indices will cause performance issues as data grows.
Apply these indices:
execution Execution @relation(fields: [executionId], references: [id], onDelete: Cascade)
+ @@index([executionId])
+ @@index([nodeId])
@@map("execution_step")
}
model ExecutionLog {
id String @id @default(cuid())
executionId String
level LogLevel @default(INFO)
message String
nodeId String?
data Json?
createdAt DateTime @default(now())
execution Execution @relation(fields: [executionId], references: [id], onDelete: Cascade)
+ @@index([executionId])
+ @@index([level])
+ @@index([nodeId])
+ @@index([createdAt])
@@map("execution_log")
+ }

Additionally, add indices to the Execution model:
workflow Workflow @relation(fields: [workflowId], references: [id], onDelete: Cascade)
steps ExecutionStep[]
logs ExecutionLog[]
+ @@index([workflowId])
+ @@index([status])
+ @@index([startedAt])
+ @@index([triggeredBy])
@@map("execution")
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```prisma
model Execution {
  id          String          @id @default(cuid())
  workflowId  String
  status      ExecutionStatus @default(PENDING)
  startedAt   DateTime?
  completedAt DateTime?
  error       String?
  triggeredBy String? // user ID or "webhook" or "schedule"
  createdAt   DateTime @default(now())
  updatedAt   DateTime @updatedAt

  workflow Workflow        @relation(fields: [workflowId], references: [id], onDelete: Cascade)
  steps    ExecutionStep[]
  logs     ExecutionLog[]

  @@index([workflowId])
  @@index([status])
  @@index([startedAt])
  @@index([triggeredBy])
  @@map("execution")
}

model ExecutionStep {
  id          String     @id @default(cuid())
  executionId String
  nodeId      String
  status      StepStatus @default(PENDING)
  startedAt   DateTime?
  completedAt DateTime?
  error       String?
  input       Json? // Input data for this step
  output      Json? // Output data from this step
  createdAt   DateTime @default(now())
  updatedAt   DateTime @updatedAt

  execution Execution @relation(fields: [executionId], references: [id], onDelete: Cascade)

  @@index([executionId])
  @@index([nodeId])
  @@map("execution_step")
}

model ExecutionLog {
  id          String   @id @default(cuid())
  executionId String
  level       LogLevel @default(INFO)
  message     String
  nodeId      String? // Which node generated this log
  data        Json? // Additional structured data
  createdAt   DateTime @default(now())

  execution Execution @relation(fields: [executionId], references: [id], onDelete: Cascade)

  @@index([executionId])
  @@index([level])
  @@index([nodeId])
  @@index([createdAt])
  @@map("execution_log")
}
```
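To make the motivation concrete, the hot path these indices target is the status-monitoring lookup (roughly `prisma.execution.findMany({ where: { workflowId, status } })`). Below is a minimal in-memory stand-in for that filter; the `ExecutionStatus` values are assumptions, since the enum body is not shown in this diff:

```typescript
// Illustrative stand-in for the execution-status lookup. The real query runs
// in the database, where @@index([workflowId]) and @@index([status]) let the
// planner seek to matching rows instead of scanning the whole table the way
// this filter does.
type ExecutionStatus = "PENDING" | "RUNNING" | "COMPLETED" | "FAILED"; // assumed values

interface ExecutionRow {
  id: string;
  workflowId: string;
  status: ExecutionStatus;
}

function executionsByStatus(
  rows: ExecutionRow[],
  workflowId: string,
  status: ExecutionStatus,
): ExecutionRow[] {
  return rows.filter((r) => r.workflowId === workflowId && r.status === status);
}
```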
```prisma
model ExecutionJob {
  id          String    @id @default(cuid())
  workflowId  String
  userId      String
  triggerType String // "webhook" or "schedule"
  status      JobStatus @default(PENDING)
  scheduledAt DateTime?
  webhookData Json?
  error       String?
  createdAt   DateTime @default(now())
  updatedAt   DateTime @updatedAt
  completedAt DateTime?

  workflow Workflow @relation(fields: [workflowId], references: [id], onDelete: Cascade)

  @@map("execution_job")
}

enum JobStatus {
  PENDING
  PROCESSING
  COMPLETED
  FAILED
}
```
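The job lifecycle above implies a poller that repeatedly claims the oldest due `PENDING` job — the access pattern the index suggestions below are aimed at. A sketch of that pattern against an in-memory array (the real worker would run the equivalent `findFirst`/`update` against the database, inside a transaction):

```typescript
// In-memory sketch of the scheduler's claim step; field names mirror
// ExecutionJob. The filter-then-sort here is the full-scan equivalent of
// findFirst({ where: { status: "PENDING", scheduledAt: { lte: now } },
//             orderBy: { scheduledAt: "asc" } }).
type JobStatus = "PENDING" | "PROCESSING" | "COMPLETED" | "FAILED";

interface Job {
  id: string;
  status: JobStatus;
  scheduledAt: Date | null;
}

function claimNextJob(jobs: Job[], now: Date): Job | undefined {
  const due = jobs
    .filter(
      (j) =>
        j.status === "PENDING" &&
        j.scheduledAt !== null &&
        j.scheduledAt.getTime() <= now.getTime(),
    )
    .sort((a, b) => a.scheduledAt!.getTime() - b.scheduledAt!.getTime());
  const next = due[0];
  if (next) next.status = "PROCESSING"; // a real worker would do this atomically
  return next;
}
```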
🛠️ Refactor suggestion | 🟠 Major
Add indices and consider triggerType enum for consistency.
- **Missing indices:** The `ExecutionJob` table will be queried frequently by schedulers and job processors. Add indices:

```diff
   workflow Workflow @relation(fields: [workflowId], references: [id], onDelete: Cascade)

+  @@index([workflowId])
+  @@index([status])
+  @@index([scheduledAt])
+  @@index([userId])
   @@map("execution_job")
 }
```

- **Type consistency:** The `triggerType` field is a String but could be an enum for type safety:

```diff
+enum TriggerType {
+  MANUAL
+  WEBHOOK
+  SCHEDULE
+}

 model ExecutionJob {
   id          String @id @default(cuid())
   workflowId  String
   userId      String
-  triggerType String // "webhook" or "schedule"
+  triggerType TriggerType
```

This would align with the `ExecutionContext` interface in `lib/execution-engine.ts` (line 17).
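On the application side, the proposed enum could be mirrored as a TypeScript union with a narrowing helper at the boundary. This is a hypothetical sketch — the names are assumptions, not taken from the codebase:

```typescript
// Hypothetical app-side mirror of the proposed TriggerType Prisma enum;
// narrowing raw strings at the boundary keeps invalid trigger values out of
// the execution engine.
type TriggerType = "MANUAL" | "WEBHOOK" | "SCHEDULE";

function parseTriggerType(raw: string): TriggerType {
  if (raw === "MANUAL" || raw === "WEBHOOK" || raw === "SCHEDULE") {
    return raw;
  }
  throw new Error(`Unknown trigger type: ${raw}`);
}
```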
🤖 Prompt for AI Agents
In prisma/schema.prisma around lines 247 to 270, the ExecutionJob model lacks
database indices and uses a plain String for triggerType; add an enum
TriggerType { WEBHOOK SCHEDULE } and change triggerType to use that enum, then
add appropriate indexes to optimize scheduler/processor queries (e.g., index on
scheduledAt, index on status, index on workflowId and userId, and a composite
index commonly queried together such as (status, scheduledAt)); ensure the new
enum is declared in the schema and the model includes the @@index declarations
(or @index on fields) to match the suggested indexed fields.
Summary by CodeRabbit
New Features
Bug Fixes
Documentation