Replies: 1 comment
Shipped: https://docs.agenta.ai/changelog/customize-llm-as-a-judge-output-schemas
Problem Statement
Currently, LLM-as-a-judge evaluators in Agenta return a score and basic feedback. They cannot use structured output formats or generate multiple output fields. This limits the depth of insights teams can extract from automated evaluations.
Proposed Solution
We are enhancing LLM-as-a-judge evaluators with two capabilities:
Structured Output Support
Define JSON schemas for evaluator outputs to ensure consistent, parseable results.
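To make this concrete, here is a minimal sketch of what such a schema could look like and how a judge's raw response might be checked against it. The field names (`score`, `reasoning`, `verdict`) and the hand-rolled validator are illustrative assumptions, not Agenta's actual defaults or implementation.

```python
import json

# Hypothetical output schema for an LLM-as-a-judge evaluator.
# Field names ("score", "reasoning", "verdict") are illustrative only.
OUTPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "score": {"type": "number", "minimum": 0, "maximum": 1},
        "reasoning": {"type": "string"},
        "verdict": {"type": "string", "enum": ["pass", "fail"]},
    },
    "required": ["score", "reasoning", "verdict"],
}

def conforms(raw: str, schema: dict) -> bool:
    """Minimal check that a judge response is valid JSON and that all
    required keys are present with the expected JSON types."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    type_map = {"number": (int, float), "string": str}
    for key in schema["required"]:
        expected = type_map[schema["properties"][key]["type"]]
        if not isinstance(data.get(key), expected):
            return False
    return True

reply = '{"score": 0.8, "reasoning": "Grounded answer.", "verdict": "pass"}'
print(conforms(reply, OUTPUT_SCHEMA))  # True
```

A real evaluator would typically pass such a schema to the judge model's structured-output mode and use a full JSON Schema validator; this sketch only shows why a schema makes results consistently parseable.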
Multiple Output Fields
Generate not just a score, but also additional output fields. These fields will be configurable when setting up the evaluator, and teams will be able to analyze and filter evaluation results by any of them.
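As a sketch of what filtering on multiple output fields could enable, the snippet below slices a set of evaluation results by a structured field rather than by score alone. The result records and field names are hypothetical examples, not Agenta's data model.

```python
# Hypothetical evaluation results with multiple output fields;
# the records and field names are illustrative.
results = [
    {"score": 0.9, "verdict": "pass", "reasoning": "Accurate and complete."},
    {"score": 0.4, "verdict": "fail", "reasoning": "Misses the main point."},
    {"score": 0.7, "verdict": "pass", "reasoning": "Mostly correct."},
]

# Filter by any structured field, not just the numeric score.
failures = [r for r in results if r["verdict"] == "fail"]
avg_score = sum(r["score"] for r in results) / len(results)
print(len(failures))        # 1
print(round(avg_score, 2))  # 0.67
```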