cmr care ported to akd extension #2
base: develop
Conversation
@sanzog03 Thanks for working on this. A couple of things:
I'd create a PR template so it's easier to fill in next time.
NISH1001 left a comment
Thank you for getting this PR up quickly. I have added detailed feedback, particularly on how we want to design the OpenAI SDK porting.
class OpenAIBaseAgentConfig(BaseAgentConfig):
    """Configuration for OpenAI agents based AKD Agents.

    This configuration extends BaseConfig to provide OpenAI-specific
    settings for agents built with OpenAI's platform agent builder.
    """
    pass
If this doesn't add any new fields, maybe just removing the config would be better. Or add tracing_params: dict[str, Any]. I'd also add reasoning_effort as part of OpenAIBaseAgentConfig, and probably mcp as well.
class OpenAIBaseAgentConfig(BaseAgentConfig):
    """Configuration for OpenAI agents based AKD Agents.

    This configuration extends BaseConfig to provide OpenAI-specific
    settings for agents built with OpenAI's platform agent builder.
    """
    reasoning_effort: Literal["low", "medium", "high"] = Field(
        default="low", description="Reasoning effort for OpenAI reasoning models."
    )
    tracing_params: dict[str, Any] | None = Field(
        default_factory=dict,
        description="Parameters that can be passed to the OpenAI Agents SDK for traceability in the platform API.",
    )

input_schema = InputSchema
output_schema = OutputSchema

def __init__(self,
    config: OpenAIBaseAgentConfig | None = None,
    debug: bool = False
) -> None:
    super().__init__(config=config, debug=debug)
    self.config = config

async def _arun(self, params: InSchema, **kwargs) -> OutSchema:
    raise NotImplementedError("Subclasses must implement this method.")
We can remove this boilerplate since akd.agents._base.BaseAgent already does it. We also don't need self.config = config because the base metaclass already handles this.
What we can do is:
class OpenAIBaseAgent[
    InSchema: InputSchema,
    OutSchema: OutputSchema,
](BaseAgent):
    """Base class for OpenAI agents.

    Any agent generated from OpenAI's platform agent builder should inherit from this class.
    """
    config = OpenAIBaseAgentConfig

    def __init__(self,
                 config: OpenAIBaseAgentConfig | None = None,
                 debug: bool = False
                 ) -> None:
        super().__init__(config=config, debug=debug)
Please change the base filename to akd_ext.agents._base.py. This will keep the naming convention consistent.
class CMRCareAgentConfig(OpenAIBaseAgentConfig):
    pass
Do we need this now? If it's just a dummy, we can in fact just remove the config.
Normally, I'd inherit the config when there are extra fields being added (but in this case OpenAIBaseAgentConfig or BaseAgentConfig already handles it), or when I want to override some field values, e.g. reasoning_effort could default to medium for CMR while the base defaults to low.
Based on the comment in another thread, I think this can take advantage of just overriding system_prompt with the CARE prompt we have (see the sketch below).
This way, we can make sure the OpenAI Agents SDK is properly ported for configurability.
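A rough sketch of that override, assuming the reasoning_effort and system_prompt fields exist on the base config and the CARE prompt is available as a constant (CMR_CARE_PROMPT is an illustrative name):

from typing import Literal

class CMRCareAgentConfig(OpenAIBaseAgentConfig):
    """CMR CARE agent config: only overrides defaults from the base."""
    # hypothetical override: bump reasoning effort for CMR while the base stays at "low"
    reasoning_effort: Literal["low", "medium", "high"] = "medium"
    # reuse the existing CARE prompt instead of keeping a dummy subclass
    system_prompt: str = CMR_CARE_PROMPT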
class CMRCareInputSchema(InputSchema):
    """Input schema for CMR Care Agent."""
    input_as_text: str = Field(..., description="Input query")
Rename input_as_text to just query so we have a consistent input field across everything, and reflect the change accordingly. This keeps it consistent with the agents we have in akd core; see the sketch below.
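For example, a minimal sketch of the renamed schema:

class CMRCareInputSchema(InputSchema):
    """Input schema for CMR Care Agent."""
    query: str = Field(..., description="Input query")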
nasa_cmr_data_search_agent = Agent(
    name="NASA CMR Data Search Agent",
    instructions="""ROLE
Move this prompt to CMRCareAgentConfig.system_prompt.
akd.agents._base.BaseAgentConfig.system_prompt already exists, so CMRCareAgentConfig.system_prompt can simply override it. That makes it proper: a newer CARE version for CMR would really just be a change in the system prompt and instructions, so CMRCareAgentConfig.system_prompt is the right place for it (see the sketch below).
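Roughly, assuming the Agent is constructed somewhere with access to the config (the surrounding wiring here is illustrative):

# the long inline prompt becomes the config's system_prompt override
nasa_cmr_data_search_agent = Agent(
    name="NASA CMR Data Search Agent",
    instructions=config.system_prompt,
)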
# Main code entrypoint
async def run_workflow(workflow_input: WorkflowInput):
    with trace("Workshop Data agent"):
The name could come from tracing_params.
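Something like the following, assuming a workflow_name key in tracing_params (the key name is illustrative):

with trace((self.config.tracing_params or {}).get("workflow_name", "Workshop Data agent")):
    ...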
class OutputAgentSchema(BaseModel):
    dataset_concept_ids: list[str]


nasa_cmr_data_search_agent = Agent(
Also, if we want to properly generalize the OpenAI base agent, the Agent object could in fact be abstracted out within the base class as self.agent or something similar.
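For instance, one way to do that (the _build_agent hook is a made-up name):

class OpenAIBaseAgent[
    InSchema: InputSchema,
    OutSchema: OutputSchema,
](BaseAgent):
    def __init__(self, config: OpenAIBaseAgentConfig | None = None, debug: bool = False) -> None:
        super().__init__(config=config, debug=debug)
        self.agent = self._build_agent()  # the SDK Agent lives on the instance

    def _build_agent(self) -> Agent:
        # hypothetical hook: each ported agent constructs its openai-agents Agent here
        raise NotImplementedError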
| "__trace_source__": "agent-builder", | ||
| "workflow_id": "wf_6949ac60e244819082e6ed0bf22ccead09adcd4d789b8c37" |
Again, this can come from self.config.tracing_params.
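A sketch, assuming the tracing_params field suggested above: the caller supplies the dict, and the agent reads self.config.tracing_params wherever the literal dict was.

config = CMRCareAgentConfig(
    tracing_params={
        "__trace_source__": "agent-builder",
        "workflow_id": "wf_6949ac60e244819082e6ed0bf22ccead09adcd4d789b8c37",
    }
)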
    })
)

conversation_history.extend([item.to_input_item() for item in output_agent_result_temp.new_items])
I think we should make use of the conversation tracking we have in BaseAgent. Since memory is abstracted in the base, we can have OpenAIBaseAgent.memory as a list of dicts. Looking at it, it does seem like it's already a list.
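Something along these lines, assuming BaseAgent.memory is a plain list we can append to:

# track the conversation on the agent instead of a local conversation_history variable
self.memory.extend(item.to_input_item() for item in output_agent_result_temp.new_items)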
source = { editable = "." }
dependencies = [
    { name = "akd" },
    { name = "deepeval" },
no need to install deepeval for akd-ext.
Please remove deepeval now
Summary 📝
Porting a CMR agent, created through the CARE process on the OpenAI agent platform, into AKD as an extension.
Details
A new CMR agent was created by going through the CARE process on the OpenAI agent creation platform. The generated agent code is ported into AKD as an extension so it can easily be used on the AKD backend. To port the generated code with minimal changes, it is copied into a get_response_async method on a class that extends the base agent.
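A minimal sketch of that wrapping, under the assumption that the class and schema names used here match the PR (everything except get_response_async is illustrative):

class CMRCareAgent(OpenAIBaseAgent[CMRCareInputSchema, CMRCareOutputSchema]):
    config = CMRCareAgentConfig

    async def get_response_async(self, params: CMRCareInputSchema, **kwargs) -> CMRCareOutputSchema:
        # generated platform-builder code pasted here with minimal edits
        ...

    async def _arun(self, params: CMRCareInputSchema, **kwargs) -> CMRCareOutputSchema:
        return await self.get_response_async(params, **kwargs)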
Usage
Tested Changes