[FT] Add response_format support to LiteLLM #1115

@pjavanrood

Description

Issue encountered

The response_format parameter was not passed to litellm API calls in the JudgeLM class, even though it is available as an instance variable and is already used by the other backends (OpenAI/TGI). As a result, structured outputs (e.g., JSON mode) could not be requested when using the litellm backend.

Solution/Feature

Add "response_format": self.response_format to the kwargs dictionary when calling litellm.completion() in the __call_litellm method, matching the pattern used in the __call_api method for OpenAI/TGI backends.

Possible alternatives

  1. Make response_format backend-specific via configuration, but this adds complexity and breaks consistency.
  2. Handle response_format only in backends that support it, but this reduces feature parity.
  3. Remove response_format support entirely, but this removes a useful feature.

The chosen solution maintains consistency across backends and enables structured output formats with litellm.
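For illustration, once the parameter is forwarded, the litellm backend can request JSON output the same way a direct litellm call does. The model name and prompt below are placeholders, and structured-output support varies by provider:

```python
import litellm

# Hypothetical usage: response_format is forwarded to providers that
# support structured output.
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": 'Score this answer as JSON: {"score": <0-10>}'}],
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)
```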
