# optimize_cot.py
# Compiles a Chain-of-Thought (CoT) module for grant question answering using
# DSPy's BootstrapFewShot optimizer: the script configures the LLM, defines a
# simple match metric, and bootstraps few-shot demonstrations from a training set.
import dspy # type: ignore
from dspy.teleprompt import BootstrapFewShot # type: ignore
from grant_cot_module import GrantCoTModule
from grant_cot_examples import trainset
import streamlit as st
OPENAI_API_KEY = st.secrets["OPENAI_API_KEY"]
# Configure the LLM
lm = dspy.LM("openai/gpt-4o", api_key=OPENAI_API_KEY)
dspy.configure(lm=lm)
# Metric for optimization: the gold answer must appear (case-insensitively) in the prediction
def metric(example, prediction, trace=None):
    return int(example.answer.strip().lower() in prediction.answer.strip().lower())
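The metric counts a prediction as correct when the gold answer appears, case-insensitively, inside the predicted answer. A quick sanity check of that behavior, using `types.SimpleNamespace` stand-ins for DSPy's example/prediction objects (the answer strings are illustrative, not from the real training set):

```python
from types import SimpleNamespace

# Same substring metric as defined in the script.
def metric(example, prediction, trace=None):
    return int(example.answer.strip().lower() in prediction.answer.strip().lower())

# Hypothetical gold/predicted answers for illustration.
gold = SimpleNamespace(answer="Up to $50,000")
hit = SimpleNamespace(answer="The grant provides up to $50,000 per project.")
miss = SimpleNamespace(answer="Funding amounts vary by program.")

print(metric(gold, hit))   # 1: gold answer is contained in the prediction
print(metric(gold, miss))  # 0: no substring match
```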
# Initialize your module
cot = GrantCoTModule()
# Compile with BootstrapFewShot
teleprompter = BootstrapFewShot(metric=metric, max_bootstrapped_demos=2, max_labeled_demos=2)
optimized_cot = teleprompter.compile(cot, trainset=trainset)
# Save the compiled program as JSON (loaded later by the Streamlit app)
optimized_cot.save("optimized_grant_cot.json")
# The saved JSON holds the compiled program state: the few-shot demonstrations
# and prompt structure that BootstrapFewShot selected during compilation, so the
# Streamlit app can reload them without re-running the optimization.
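For completeness, a sketch of how the Streamlit app might reload the compiled program. This assumes `grant_cot_module` is importable in the app and uses DSPy's standard `Module.load`; the question string is purely illustrative:

```python
import dspy
from grant_cot_module import GrantCoTModule

cot = GrantCoTModule()
cot.load("optimized_grant_cot.json")  # restores the bootstrapped demos and prompt state
result = cot(question="Who is eligible to apply for this grant?")
```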