paragraph counts. restore unwrapped and other fixes #10
base: main
Changes from all commits
2d6c7ea
03d4d5c
096e5cd
1bf9a2a
f28e761
03c72fa
```diff
@@ -68,3 +68,6 @@ fastlane/test_output
+claude-agent-sdk-demos
+xcuserdata
+apps/pipeline/app.log
+.tmp
```
```diff
@@ -6,10 +6,9 @@ import {
 } from "@google/genai";
 import { type z } from "zod";
 import { google } from "@ai-sdk/google";
-import { generateObject, generateText, streamText, wrapLanguageModel } from "ai";
+import { generateObject, generateText, streamText } from "ai";
 import { toGeminiSchema } from "gemini-zod";
 import "dotenv/config";
-import { openrouter } from "@openrouter/ai-sdk-provider";
 import type { LanguageModelV2Middleware } from "@ai-sdk/provider";

 export const callFastGemini = async (
```
```diff
@@ -76,44 +75,25 @@ Based on the book text answer the user's question, using quotes from the wider b
 };

 export const callGeminiWithThinking = async (prompt: string) => {
-  const ai = new GoogleGenAI({ apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY });
-  const config = {
-    responseMimeType: "text/plain",
-    httpOptions: {
-      timeout: 15 * 60 * 1000, // 15 minutes in milliseconds
-    },
-  };
-  const model = "gemini-3-flash-preview";
-  // const model = "gemini-3-pro-preview";
-
-  const contents = [{ role: "user", parts: [{ text: prompt }] }];
   const safetySettings = [
-    { category: HarmCategory.HARM_CATEGORY_HARASSMENT, threshold: HarmBlockThreshold.BLOCK_NONE },
-    { category: HarmCategory.HARM_CATEGORY_HATE_SPEECH, threshold: HarmBlockThreshold.BLOCK_NONE },
-    {
-      category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
-      threshold: HarmBlockThreshold.BLOCK_NONE,
-    },
-    {
-      category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
-      threshold: HarmBlockThreshold.BLOCK_NONE,
-    },
-    {
-      category: HarmCategory.HARM_CATEGORY_CIVIC_INTEGRITY,
-      threshold: HarmBlockThreshold.BLOCK_NONE,
-    },
+    { category: HarmCategory.HARM_CATEGORY_HARASSMENT, threshold: HarmBlockThreshold.OFF },
+    { category: HarmCategory.HARM_CATEGORY_HATE_SPEECH, threshold: HarmBlockThreshold.OFF },
+    { category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT, threshold: HarmBlockThreshold.OFF },
+    { category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold: HarmBlockThreshold.OFF },
+    { category: HarmCategory.HARM_CATEGORY_CIVIC_INTEGRITY, threshold: HarmBlockThreshold.OFF },
   ];

-  console.log("before response", model);
-  const response = await ai.models.generateContent({
-    model,
-    config: { ...config, safetySettings },
-    contents,
+  console.log("CALLING GEMINI WITH THINKING");
+  const { textStream } = await streamText({
+    model: google("gemini-3-flash-preview"),
+    prompt,
+    experimental_telemetry: { isEnabled: true, recordInputs: true, recordOutputs: true },
+    providerOptions: { google: { safetySettings } },
   });

-  console.log("after response");
-
-  return response?.text;
+  let text = "";
+  for await (const textPart of textStream) {
+    text += textPart;
+  }
+  return text;
 };
```
Comment on lines 77 to 97 (Contributor)

Missing thinking configuration. The function `callGeminiWithThinking` never passes a `thinkingConfig`, so despite its name it does not enable thinking. Additionally, this function uses `streamText` but buffers the entire stream into a string before returning, so a non-streaming `generateText` call is sufficient. Proposed fix to add thinking configuration:

```diff
 const { text } = await generateText({
   model: google("gemini-3-flash-preview"),
   prompt,
   experimental_telemetry: { isEnabled: true, recordInputs: true, recordOutputs: true },
-  providerOptions: { google: { safetySettings } },
+  providerOptions: {
+    google: {
+      safetySettings,
+      thinkingConfig: { thinkingBudget: 1024, includeThoughts: true },
+    },
+  },
 });
```
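Separately from the thinking-budget question, the accumulation pattern in the new implementation (concatenating an async text stream into one string) can be exercised in isolation with a mock stream. This is a sketch; the helper names `collectStream` and `mockTextStream` are ours, not part of the PR:

```typescript
// Collect an async iterable of string chunks into one string, mirroring the
// `for await (const textPart of textStream)` loop in the new implementation.
async function collectStream(stream: AsyncIterable<string>): Promise<string> {
  let text = "";
  for await (const part of stream) {
    text += part;
  }
  return text;
}

// Mock stream standing in for the AI SDK's `textStream`.
async function* mockTextStream(): AsyncGenerator<string> {
  yield "Hello, ";
  yield "world";
  yield "!";
}

const collected = await collectStream(mockTextStream());
console.log(collected); // "Hello, world!"
```

If the buffered result is all that is needed, `generateText` avoids this loop entirely, which is the reviewer's point above.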
```diff
 export const callGeminiWithThinkingAndSchema = async <T>(
```
```diff
@@ -270,54 +250,6 @@ export const anthropicThinkingSchemaMiddleware: LanguageModelV2Middleware = {
   },
 };

-export const callSlowGeminiWithThinkingAndSchemaAndParsed = async <T>(
-  prompt: string,
-  zodSchema: z.ZodSchema<T>,
-  model: string = "google/gemini-3-flash-preview",
-) => {
-  const claudeModel = wrapLanguageModel({
-    model: openrouter(model),
-    middleware: anthropicThinkingSchemaMiddleware,
-  });
-  const { object } = await generateObject({
-    model:
-      model.includes("claude") || model.includes("minimax") || model.includes("kimi")
-        ? claudeModel
-        : openrouter(model),
-    schema: zodSchema,
-    prompt,
-    experimental_telemetry: { isEnabled: true, recordInputs: true, recordOutputs: true },
-    providerOptions: {
-      openrouter: {
-        safetySettings: [
-          {
-            category: HarmCategory.HARM_CATEGORY_HARASSMENT,
-            threshold: HarmBlockThreshold.BLOCK_NONE,
-          },
-          {
-            category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
-            threshold: HarmBlockThreshold.BLOCK_NONE,
-          },
-          {
-            category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
-            threshold: HarmBlockThreshold.BLOCK_NONE,
-          },
-          {
-            category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
-            threshold: HarmBlockThreshold.BLOCK_NONE,
-          },
-          {
-            category: HarmCategory.HARM_CATEGORY_CIVIC_INTEGRITY,
-            threshold: HarmBlockThreshold.BLOCK_NONE,
-          },
-        ],
-      },
-    },
-  });
-
-  return object as T;
-};

 export const callGeminiWithImage = async <T>(
   prompt: string,
   imageBase64: string,
```
New file:

```diff
@@ -0,0 +1,50 @@
+import OpenAI from "openai";
+import { type z } from "zod";
+
+const endpoint = "https://bookgenius.services.ai.azure.com/openai/v1/";
+const model = "grok-4-fast-reasoning";
+const api_key = process.env.AZURE_GROK_KEY;
+
+const client = new OpenAI({ baseURL: endpoint, apiKey: api_key });
```
Comment on lines +6 to +8 (Contributor)

Missing API key validation before client initialization. If `AZURE_GROK_KEY` is unset, `api_key` is `undefined` and the client is constructed with an invalid key, which only fails later, at request time, with a confusing error. Validate the variable up front:

```diff
 const api_key = process.env.AZURE_GROK_KEY;
+
+if (!api_key) {
+  throw new Error("Missing AZURE_GROK_KEY environment variable");
+}
 const client = new OpenAI({ baseURL: endpoint, apiKey: api_key });
```
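The reviewer's fail-fast check can also be factored into a small reusable helper. This is a sketch of that pattern; the `requireEnv` name is ours, not part of the PR:

```typescript
// Read an environment variable and fail fast with a clear message if it is
// unset or empty, instead of letting an undefined key surface downstream.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing environment variable: ${name}`);
  }
  return value;
}

// Usage mirroring the reviewed code:
// const api_key = requireEnv("AZURE_GROK_KEY");
```

This keeps the module-load failure mode explicit: the process exits at startup with a named variable rather than on the first API call.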
```diff
+export const callGrokAzure = async (prompt: string) => {
+  const completion = await client.chat.completions.create({
+    messages: [{ role: "user", content: prompt }],
+    model,
+  });
+
+  return completion.choices[0].message.content;
+};
```
Comment on lines +10 to +17 (Contributor)

Return type may be null; consider handling. `completion.choices[0].message.content` is typed `string | null`, so callers can silently receive `null`. 🛡️ Proposed fix to handle null content:

```diff
-export const callGrokAzure = async (prompt: string) => {
+export const callGrokAzure = async (prompt: string): Promise<string> => {
   const completion = await client.chat.completions.create({
     messages: [{ role: "user", content: prompt }],
     model,
   });
-  return completion.choices[0].message.content;
+  const content = completion.choices[0].message.content;
+  if (content === null) {
+    throw new Error("Model returned null content");
+  }
+  return content;
 };
```
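The null-guard in the proposed fix generalizes to a one-line narrowing helper that several call sites in this file could share. A sketch under our own naming (`requireContent` is not in the PR):

```typescript
// Narrow `string | null` (the type of `message.content` in the OpenAI SDK)
// to `string`, throwing a descriptive error when the model returns null.
function requireContent(content: string | null): string {
  if (content === null) {
    throw new Error("Model returned null content");
  }
  return content;
}

// Usage mirroring the proposed fix:
// return requireContent(completion.choices[0].message.content);
```

Routing every `message.content` access through one guard keeps the `Promise<string>` return types honest without repeating the check.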
```diff
+export const callGrokAzureWithSchema = async <T>(prompt: string, zodSchema: z.ZodSchema<T>) => {
+  const completion = await client.chat.completions.create({
+    messages: [{ role: "user", content: prompt }],
+    model,
+    response_format: {
+      type: "json_schema",
+      json_schema: {
+        name: "response",
+        strict: true,
+        // @ts-expect-error(zod typing)
+        schema: zodSchema.shape,
```
Comment on lines +28 to +29 (Contributor)

Using `zodSchema.shape` passes Zod's internal shape object, which is not a JSON Schema, so the `json_schema` response format will not receive a valid schema. Consider using a library like `zod-to-json-schema` to convert the Zod schema properly:

```typescript
import { zodToJsonSchema } from 'zod-to-json-schema';

// ... inside callGrokAzureWithSchema
const jsonSchema = zodToJsonSchema(zodSchema);
const completion = await client.chat.completions.create({
  // ...
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "response",
      strict: true,
      schema: jsonSchema,
    },
  },
});
```

This will require adding `zod-to-json-schema` as a dependency.
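To make the distinction concrete: `response_format.json_schema.schema` expects a standard JSON Schema document. A hand-written schema matching the `{ name, age }` shape used in the commented-out demo at the bottom of this file would look like the sketch below (the literal schema is ours, for illustration, not generated by any library):

```typescript
// A plain JSON Schema document of the kind the json_schema response format
// expects; this is what zod-to-json-schema would produce for
// z.object({ name: z.string(), age: z.number() }), modulo metadata keys.
const jsonSchema = {
  type: "object",
  properties: {
    name: { type: "string" },
    age: { type: "number" },
  },
  required: ["name", "age"],
  additionalProperties: false,
};

// The response_format payload built from it:
const responseFormat = {
  type: "json_schema" as const,
  json_schema: {
    name: "response",
    strict: true,
    schema: jsonSchema,
  },
};

console.log(responseFormat.json_schema.schema.type); // "object"
```

A Zod `.shape`, by contrast, is a map of field names to live `ZodType` instances, which the API cannot interpret.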
```diff
+      },
+    },
+  });
+  let result: T;
+  try {
+    result = JSON.parse(completion.choices[0].message.content as string) as T;
```
Contributor

The API response `content` may be `null`; casting it to `string` and parsing it blindly produces an unhelpful `JSON.parse` error. Guard before parsing:

```typescript
const content = completion.choices[0].message.content;
if (content === null) {
  throw new Error("Received null content from the model.");
}
result = JSON.parse(content) as T;
```
```diff
+  } catch (e) {
+    console.error("Error parsing JSON", e);
+    throw e;
+  }
+  return result;
+};
```
```diff
+
+// if (require.main === module) {
+//   const schema = z.object({ name: z.string(), age: z.number() });
+//   const prompt = "What is my name? My name is John Doe and I'm 30";
+//   const result = await callGrokAzureWithSchema(prompt, schema);
+//   console.log(result);
+//   console.log(result.name);
+//   console.log(result.age);
+// }
```
```diff
@@ -22,7 +22,6 @@ export const callO3WithSchema = async <T>(
     model: openai(model),
     schema: zodSchema,
     prompt,
-    // providerOptions: { google: { thinkingConfig: { thinkingBudget: 0, includeThoughts: true } } },
     experimental_telemetry: { isEnabled: true, recordInputs: true, recordOutputs: true },
   });

@@ -36,7 +35,7 @@ export const callGpt5 = async <T = string>(
 ) => {
   const chatCompletion = await client.chat.completions.create({
     messages: [{ role: "user", content: prompt }],
-    model: "gpt-5.1",
+    model: "gpt-5.2",
     reasoning_effort: "medium",
```
Comment on lines 36 to 39 (Contributor)

Verified: as of February 5, 2026, `gpt-5.2` is available to all developers in the OpenAI API, including the Chat Completions API, so the model upgrade is supported on the direct OpenAI SDK path used in this file.
```diff
   });
   return chatCompletion.choices[0].message.content as string;
```
Contributor

Update `@ai-sdk/cerebras` to v3.x and audit migration impact. The peer dependencies are aligned (`zod@^4.1.8`, `react@^19.2.0`), but `@ai-sdk/cerebras@^2.0.30` should be `@ai-sdk/cerebras@^3.x` to match the `ai@^6` ecosystem: per Vercel's AI SDK 6 migration guide, `ai@^6` pairs with `@ai-sdk/*@^3`, `@ai-sdk/provider@^3`, and `@ai-sdk/provider-utils@^4`.

Also review the code for ai v6 breaking changes:

- `generateObject` and `streamObject` are deprecated; migrate to `generateText`/`streamText` with the new `output` setting
- `convertToCoreMessages` is replaced by `convertToModelMessages` (now async, requires `await`)
- `Experimental_Agent` is replaced by `ToolLoopAgent`; `system` is renamed to `instructions`
- `@ai-sdk/openai@3.0`: `strictJsonSchema` defaults to `true` for JSON outputs and tool calls (may require schema adjustments, e.g. `null` instead of `undefined`)
generateObjectandstreamObjectare deprecated; migrate togenerateText/streamTextwith theoutputsettingconvertToCoreMessagesreplaced byconvertToModelMessages(now async, requiresawait)Experimental_Agentreplaced byToolLoopAgent;systemrenamed toinstructions@ai-sdk/openai@3.0:strictJsonSchemadefaults totrue(may require schema adjustments)🤖 Prompt for AI Agents