Conversation

@fabianvf fabianvf commented Dec 16, 2025

Summary by CodeRabbit

Release Notes

  • New Features
    • Added support for configurable llama-stack image across bundle generation and deployment
    • Llama-stack image reference can now be customized through configuration parameters with sensible defaults
    • Introduced environment variable support for flexible image reference management during deployment


coderabbitai bot commented Dec 16, 2025

Walkthrough

Adds llama_stack image parameter propagation across the build pipeline and deployment configuration, including GitHub Actions inputs, workflow bundle arguments, Kubernetes manifests, Helm charts, and Ansible role defaults.

Changes

GitHub Actions & Workflows
(.github/actions/make-bundle/action.yml, .github/workflows/create-release.yml)
Added a llama_stack input to the make-bundle action and propagated it through the bundle generation commands with conditional option appending. The release workflow now passes the llama_stack image in its bundle build step arguments.

Kubernetes & Container Configuration
(bundle/manifests/konveyor-operator.clusterserviceversion.yaml)
Added llama-stack to the relatedImages section and introduced a RELATED_IMAGE_LLAMA_STACK environment variable in the tackle-operator container.

Helm Configuration
(helm/values.yaml, helm/templates/deployment.yaml)
Added a llama_stack image entry with a default value in values.yaml and a corresponding RELATED_IMAGE_LLAMA_STACK environment variable in the deployment template.

Ansible Configuration
(roles/tackle/defaults/main.yml)
Converted kai_llm_proxy_image_fqin from a static image reference to an environment-driven lookup with a fallback default.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~15 minutes

  • The changes follow a consistent, repetitive pattern of adding the same image reference across different configuration layers, which reduces cognitive load per file
  • However, the spread across 6 files in different formats (GitHub Actions YAML, Kubernetes manifests, Helm templates, Ansible defaults) requires familiarity with multiple configuration styles
  • The environment variable indirection in roles/tackle/defaults/main.yml warrants verification that the Jinja2 lookup syntax is correct and the fallback behavior is as intended

Suggested reviewers

  • djzager

Poem

🐰 A new stack arrives, from llama's embrace,
Through workflows and Helm, it finds its place,
From actions to manifests, configs align,
Each layer now knows the image to assign!
The operator stands ready, no stone left unturned.

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check — Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — Passed: the title mentions adding llama_stack to release workflows, which aligns with the changes across the GitHub Actions, Helm, and Ansible configurations that introduce llama_stack image support to the release process.
  • Docstring Coverage — Passed: no functions found in the changed files to evaluate, so the docstring coverage check was skipped.


@fabianvf fabianvf force-pushed the llm-proxy-release-changes branch from eee02df to 0c6c79f Compare December 16, 2025 16:55
Signed-off-by: Fabian von Feilitzsch <fabian@fabianism.us>
@fabianvf fabianvf force-pushed the llm-proxy-release-changes branch from 0c6c79f to 96f65ff Compare December 16, 2025 16:57
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fc1083f and 96f65ff.

📒 Files selected for processing (6)
  • .github/actions/make-bundle/action.yml (2 hunks)
  • .github/workflows/create-release.yml (1 hunks)
  • bundle/manifests/konveyor-operator.clusterserviceversion.yaml (2 hunks)
  • helm/templates/deployment.yaml (1 hunks)
  • helm/values.yaml (1 hunks)
  • roles/tackle/defaults/main.yml (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: run-ci / e2e-api-integration-tests
🔇 Additional comments (7)
helm/templates/deployment.yaml (1)

66-67: LGTM! Environment variable correctly references Helm values.

The RELATED_IMAGE_LLAMA_STACK environment variable is properly configured using the Helm template syntax and follows the same pattern as other related image environment variables.
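The diff hunk itself is not shown here, so the following is a sketch of the pattern the comment describes; the values key `images.llama_stack` is an assumption, not quoted from the PR:

```yaml
# helm/templates/deployment.yaml — sketch only; the exact values path is assumed
env:
  - name: RELATED_IMAGE_LLAMA_STACK
    value: {{ .Values.images.llama_stack | quote }}
```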

.github/workflows/create-release.yml (1)

231-231: LGTM! Release workflow correctly includes llama_stack with mirrored image.

The llama_stack parameter uses the quay.io/konveyor registry, which is correct for the release process. The image mirroring step (lines 187-210) will copy external images like docker.io/llamastack/distribution-starter to quay.io/konveyor/distribution-starter during the release.
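Based on this comment, the bundle build step plausibly passes the mirrored image along these lines; the step name, tag, and any sibling inputs are illustrative assumptions:

```yaml
# .github/workflows/create-release.yml — sketch; only the registry/repo is grounded in the review
- name: Make bundle
  uses: ./.github/actions/make-bundle
  with:
    llama_stack: quay.io/konveyor/distribution-starter:latest
```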

bundle/manifests/konveyor-operator.clusterserviceversion.yaml (2)

362-363: LGTM! Environment variable properly added to ClusterServiceVersion.

The RELATED_IMAGE_LLAMA_STACK environment variable follows the established pattern for other related images in the operator deployment specification.


567-568: LGTM! Related image entry correctly configured.

The llama-stack entry in the relatedImages list uses the same image reference as the environment variable, maintaining consistency. The hyphenated name format matches the convention used for other related images.
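Taken together, the two hunks this comment and the previous one describe would look roughly like the following; the tag and surrounding fields are assumptions:

```yaml
# bundle/manifests/konveyor-operator.clusterserviceversion.yaml — sketch
# In the tackle-operator container spec:
env:
  - name: RELATED_IMAGE_LLAMA_STACK
    value: quay.io/konveyor/distribution-starter:latest
# In spec.relatedImages:
relatedImages:
  - name: llama-stack
    image: quay.io/konveyor/distribution-starter:latest
```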

.github/actions/make-bundle/action.yml (2)

64-67: LGTM! Input parameter properly defined.

The llama_stack input follows the established convention for other image inputs in this action, with appropriate description, optional requirement, and empty default value.


107-107: LGTM! Conditional logic correctly propagates llama_stack to bundle options.

The conditional check and OPTS append follow the same pattern as other image parameters, ensuring the llama_stack value is properly passed to the Helm-based bundle generation process.
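A sketch of the input definition and conditional OPTS append these two comments describe; the variable name, script context, and Helm option key are assumed, not quoted from the diff:

```yaml
# .github/actions/make-bundle/action.yml — sketch
inputs:
  llama_stack:
    description: llama-stack image reference (registry/repo:tag)
    required: false
    default: ""
runs:
  using: composite
  steps:
    - shell: bash
      run: |
        OPTS=""
        if [ -n "${{ inputs.llama_stack }}" ]; then
          OPTS="${OPTS} --set images.llama_stack=${{ inputs.llama_stack }}"
        fi
        # OPTS is then passed to the Helm-based bundle generation command
```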

roles/tackle/defaults/main.yml (1)

332-332: LGTM! Environment-driven image configuration correctly implemented.

The change to kai_llm_proxy_image_fqin properly reads from the RELATED_IMAGE_LLAMA_STACK environment variable with an appropriate fallback to the default image. This aligns with the pattern used for other image configurations in this file (e.g., hub_image_fqin on line 28) and integrates well with the environment variables added in the deployment manifests.
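In Ansible role defaults, the environment-driven lookup with fallback that this comment describes typically takes the following shape; the fallback image and tag here are assumptions:

```yaml
# roles/tackle/defaults/main.yml — sketch; fallback image/tag assumed
kai_llm_proxy_image_fqin: "{{ lookup('env', 'RELATED_IMAGE_LLAMA_STACK') or 'quay.io/konveyor/distribution-starter:latest' }}"
```

Ansible's `env` lookup returns an empty string when the variable is unset, so the `or` expression falls through to the hard-coded default.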
