Adds deployment-id based rollout #2
Signed-off-by: Marcos Candeia <marrcooos@gmail.com>
Walkthrough
Replaces the
Changes
Sequence Diagram(s)
sequenceDiagram
participant Controller as Decofile Controller
participant Notifier as Notifier
participant Pod as Pod (selected by label)
rect rgba(150,220,180,0.2)
note over Controller,Notifier: New flow (deploymentId-driven)
Controller->>Notifier: dataChanged + deploymentId (fallback=name)
Notifier->>Pod: List pods where label app.deco/deploymentId=deploymentId
Notifier->>Pod: Notify pods (batched, timeout)
Controller->>Controller: Update status PodsNotified using deploymentId
end
sequenceDiagram
participant Service as Knative Service
participant Webhook as Service Custom Defaulter
participant Decofile as Decofile CR
participant ConfigMap as ConfigMap
participant PodSpec as PodTemplate
rect rgba(200,160,160,0.15)
note over Webhook: Mutation on Create/Update
Service->>Webhook: Admission request
Webhook->>Webhook: getDeploymentId(service)
Webhook->>Decofile: findDecofileByDeploymentId(namespace, deploymentId)
alt Decofile & ConfigMap ready
Webhook->>ConfigMap: ensure configMap name present
Webhook->>PodSpec: addOrUpdateVolume(configMap)
Webhook->>PodSpec: addOrUpdateVolumeMount(targetContainer, mountDir)
Webhook->>PodSpec: addOrUpdateEnvVars(DECO_RELEASE, RELOAD_TOKEN)
Webhook->>Service: return mutated PodTemplate
else Decofile missing/not-ready
Webhook->>Service: allow creation (non-blocking), log info
end
end
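To make the first diagram concrete, here is a minimal, hypothetical sketch of the label-based pod lookup it describes. The `app.deco/deploymentId` label value matches what the review discusses, but the function signature and package layout are illustrative assumptions, not the repository's actual API.

```go
// Hypothetical sketch of the label-based pod selection shown in the first diagram.
// deploymentIdLabel matches the label discussed in this review; the function
// signature is illustrative, not the repository's actual API.
package controller

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

const deploymentIdLabel = "app.deco/deploymentId"

// listPodsForDeployment returns the pods that should be notified after a data change,
// selected by the app.deco/deploymentId label rather than by Decofile name.
func listPodsForDeployment(ctx context.Context, c client.Client, namespace, deploymentId string) (*corev1.PodList, error) {
	pods := &corev1.PodList{}
	err := c.List(ctx, pods,
		client.InNamespace(namespace),
		client.MatchingLabels{deploymentIdLabel: deploymentId},
	)
	return pods, err
}
```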
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/webhook/v1/service_webhook.go (1)
66-67: Services with the decofile finalizer will get stuck in Terminating state because the mutating webhook cannot remove the finalizer during DELETE operations

The mutating webhook is configured with `verbs=create;update` (line 66), which means it will not be invoked when a DELETE request is made. According to Kubernetes admission control, when a resource with finalizers is deleted, the API server sets `metadata.deletionTimestamp` instead of immediately deleting it. The initial DELETE request IS delivered to mutating webhooks, but only if the webhook includes `delete` in its `verbs` list.

The `Default()` method checks `DeletionTimestamp` at line 287 and calls `handleDeletion()` to remove the finalizer. However, since the webhook will never be invoked on DELETE operations (missing `verbs=delete`), this code path is unreachable. The `RemoveFinalizer` call at line 92 will never execute, and services will remain stuck in `Terminating` state indefinitely. Additionally, `cleanupDecofiles()` will never be called, leaking associated Decofiles.

The only place in the codebase where this finalizer is managed is in this webhook file. There is no separate controller to handle finalizer removal during deletion.

Fix this by either:

- Adding `delete` to the webhook verbs and regenerating manifests:

  -// +kubebuilder:webhook:path=/mutate-serving-knative-dev-v1-service,mutating=true,failurePolicy=fail,sideEffects=None,groups=serving.knative.dev,resources=services,verbs=create;update,versions=v1,name=mservice-v1.kb.io,admissionReviewVersions=v1
  +// +kubebuilder:webhook:path=/mutate-serving-knative-dev-v1-service,mutating=true,failurePolicy=fail,sideEffects=None,groups=serving.knative.dev,resources=services,verbs=create;update;delete,versions=v1,name=mservice-v1.kb.io,admissionReviewVersions=v1

- Or, move the finalizer-removal and Decofile cleanup logic into a dedicated controller that watches Services and reconciles on `DeletionTimestamp` (a rough sketch of this option follows below).

Also applies to: 79-99, 278-289, 329-333, 340-376
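A minimal sketch of that second option, assuming a finalizer name and a `cleanupDecofiles` helper that are not confirmed by the diff; the actual wiring (SetupWithManager, predicates) is omitted.

```go
// Illustrative sketch of a dedicated controller that removes the decofile finalizer
// when a Service is being deleted. Type names, the finalizer string, and the cleanup
// step are placeholders, not the repository's actual API.
package controller

import (
	"context"

	servingv1 "knative.dev/serving/pkg/apis/serving/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

const decofileFinalizer = "deco.sites/decofile-finalizer" // assumed finalizer name

type ServiceCleanupReconciler struct {
	client.Client
}

func (r *ServiceCleanupReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	svc := &servingv1.Service{}
	if err := r.Get(ctx, req.NamespacedName, svc); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// Only act while the Service is terminating and still carries our finalizer.
	if svc.DeletionTimestamp.IsZero() || !controllerutil.ContainsFinalizer(svc, decofileFinalizer) {
		return ctrl.Result{}, nil
	}
	// ... Decofile cleanup (e.g. a cleanupDecofiles equivalent) would run here ...
	controllerutil.RemoveFinalizer(svc, decofileFinalizer)
	return ctrl.Result{}, r.Update(ctx, svc)
}
```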
🧹 Nitpick comments (3)
api/v1alpha1/decofile_types.go (1)
42-45: Consider adding validation for label value constraints.

The `DeploymentId` is used as a Kubernetes label value (`app.deco/deploymentId`). Kubernetes label values have restrictions: max 63 characters, must start/end with an alphanumeric, and can only contain alphanumerics, `-`, `_`, or `.`.

If you want to enforce valid label values at admission time, consider adding a validation pattern:

  // DeploymentId is used for pod label matching (defaults to metadata.name if absent)
  // Pods are queried using the app.deco/deploymentId label
  // +optional
+ // +kubebuilder:validation:MaxLength=63
+ // +kubebuilder:validation:Pattern=`^([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]$`
  DeploymentId string `json:"deploymentId,omitempty"`

This is optional since the fallback to `metadata.name` already conforms to these constraints.

internal/webhook/v1/service_webhook.go (2)
175-276: Volume/mount/env mutation logic is nicely idempotent but could be slightly simplified

The helpers `addOrUpdateVolume`, `addOrUpdateVolumeMount`, and `addOrUpdateEnvVars` correctly:

- Avoid duplicating entries on repeated invocations.
- Update existing entries in-place when they already exist.
- Handle `nil` slices safely via `append`.

Two minor polish points you might consider later:

- Stick to either `service.Spec.Template.Spec.Volumes` or `...Spec.PodSpec.Volumes` consistently to improve readability; both refer to the same underlying field but mixing them is a bit confusing.
- In `addOrUpdateEnvVars`, you regenerate `DECO_RELEASE_RELOAD_TOKEN` on every admission call, even when neither the Decofile nor mount path has changed. If you ever want to tie reloads strictly to config changes, you may want to compute the token from a hash instead of a fresh UUID (a rough sketch follows below).

Functionally this looks sound and idempotent as-is.
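As a rough illustration of the hash-based alternative in the second bullet, the sketch below derives a deterministic token from the ConfigMap data, so the token only changes when the data does. The function name and the choice of SHA-256 are assumptions, not what the PR implements.

```go
// Sketch: a reload token derived from ConfigMap data instead of a fresh UUID.
// Repeated admissions with identical data yield the same token.
package v1

import (
	"crypto/sha256"
	"encoding/hex"
	"sort"
)

// reloadTokenFromData hashes keys and values in a stable order and returns a short token.
func reloadTokenFromData(data map[string]string) string {
	keys := make([]string, 0, len(data))
	for k := range data {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0})
		h.Write([]byte(data[k]))
		h.Write([]byte{0})
	}
	return hex.EncodeToString(h.Sum(nil)[:8]) // short, stable identifier
}
```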
114-134: Decofile lookup/cleanup is O(N) per request; consider label/field selection or indexing

Both `findDecofileByDeploymentId` and `cleanupDecofiles` list all `Decofile` objects in the namespace and then scan in-process for a matching `spec.deploymentId` (or name fallback):

  err := d.Client.List(ctx, decofileList, client.InNamespace(namespace))
  // ...
  for i := range decofileList.Items {
      df := &decofileList.Items[i]
      dfDeploymentId := df.Spec.DeploymentId
      if dfDeploymentId == "" {
          dfDeploymentId = df.Name
      }
      if dfDeploymentId == deploymentId {
          // match
      }
  }

This is fine for a small number of Decofiles but becomes a scalability and latency concern as the count grows, especially since this runs on the admission path. The codebase already uses `client.MatchingLabels` to efficiently query Pods by `deploymentId` (see `notifier.go` line 91).

I'd suggest adding a field index in `DecofileReconciler.SetupWithManager()` (currently lines 384-390) to index `spec.deploymentId`, then use `client.MatchingFields` in both functions:

  // In SetupWithManager
  mgr.GetFieldIndexer().IndexField(ctx, &decositesv1alpha1.Decofile{}, "spec.deploymentId", func(o client.Object) []string {
      d := o.(*decositesv1alpha1.Decofile)
      if d.Spec.DeploymentId == "" {
          return []string{d.Name}
      }
      return []string{d.Spec.DeploymentId}
  })

  // In findDecofileByDeploymentId & cleanupDecofiles
  err := d.Client.List(ctx, decofileList, client.InNamespace(namespace), client.MatchingFields{"spec.deploymentId": deploymentId})

That keeps admission latency predictable even with many Decofiles in a namespace.
Also applies to: 352-373
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- api/v1alpha1/decofile_types.go (1 hunks)
- chart/templates/clusterrole-operator-manager-role.yaml (1 hunks)
- chart/templates/customresourcedefinition-decofiles.deco.sites.yaml (1 hunks)
- config/crd/bases/deco.sites_decofiles.yaml (1 hunks)
- config/rbac/role.yaml (1 hunks)
- internal/controller/decofile_controller.go (4 hunks)
- internal/controller/notifier.go (2 hunks)
- internal/webhook/v1/service_webhook.go (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
internal/controller/decofile_controller.go (1)
internal/controller/notifier.go (1)
NewNotifier (52-59)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Run Tests
- GitHub Check: Build Binary
🔇 Additional comments (13)
config/crd/bases/deco.sites_decofiles.yaml (1)
42-46: LGTM! The `deploymentId` field addition is well-documented, correctly typed as string, and appropriately optional. The description clearly explains the pod label matching behavior with fallback to `metadata.name`.

config/rbac/role.yaml (1)
54-69: LGTM! The RBAC additions for Knative Serving resources are appropriate. The permissions (get, list, patch, update, watch for services; update for finalizers) align with the webhook mutation and cleanup requirements without granting unnecessary delete access.
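For reference, kubebuilder RBAC markers along these lines would generate equivalent rules; exactly where such markers live in this repository is an assumption.

```go
// Sketch of kubebuilder RBAC markers matching the reviewed permissions.
// controller-gen scans these comments to produce the ClusterRole rules.
package controller

// +kubebuilder:rbac:groups=serving.knative.dev,resources=services,verbs=get;list;watch;update;patch
// +kubebuilder:rbac:groups=serving.knative.dev,resources=services/finalizers,verbs=update
```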
chart/templates/clusterrole-operator-manager-role.yaml (1)
53-68: LGTM! The Helm chart ClusterRole template mirrors the RBAC additions in `config/rbac/role.yaml`, maintaining consistency between the Helm deployment and kustomize-based deployment.

chart/templates/customresourcedefinition-decofiles.deco.sites.yaml (1)
41-45: LGTM! The Helm chart CRD template is consistent with the base CRD definition in `config/crd/bases/deco.sites_decofiles.yaml`.

internal/controller/notifier.go (3)
38-38: LGTM! The `deploymentIdLabel` constant follows Kubernetes label naming conventions with a valid prefix (`app.deco/`) and key (`deploymentId`).
75-78: LGTM! The method signature update from `decofileName` to `deploymentId` is consistent with the broader refactoring. The function continues to provide proper timeout handling and parallel batch processing.
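For readers unfamiliar with that pattern, here is a hypothetical sketch of timeout handling plus bounded parallelism using `errgroup`; the 30-second timeout, batch size of 5, and `notifyPod` callback are illustrative assumptions, not the actual `notifier.go` implementation.

```go
// Sketch: notify many pods in parallel with a concurrency limit and an overall timeout.
package controller

import (
	"context"
	"time"

	"golang.org/x/sync/errgroup"
	corev1 "k8s.io/api/core/v1"
)

func notifyPods(ctx context.Context, pods []corev1.Pod, notifyPod func(context.Context, corev1.Pod) error) error {
	// Bound the whole notification round with a timeout (value assumed here).
	ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
	defer cancel()

	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(5) // assumed batch size: at most 5 notifications in flight
	for i := range pods {
		pod := pods[i]
		g.Go(func() error { return notifyPod(ctx, pod) })
	}
	return g.Wait() // first error (if any) is returned after all goroutines finish
}
```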
87-94: LGTM! The pod selection using `deploymentIdLabel` correctly filters pods by the `app.deco/deploymentId` label, aligning with the new deployment-id based rollout mechanism.

internal/controller/decofile_controller.go (5)
238-242: LGTM! The deploymentId computation with fallback to `decofile.Name` is correctly implemented, ensuring backward compatibility when the field is not specified.
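For clarity, the fallback amounts to something like the following tiny helper; the names are illustrative, not the controller's actual code.

```go
// Sketch of the deploymentId fallback: spec.deploymentId wins, otherwise the
// Decofile's metadata.name is used (the backward-compatible path noted above).
package controller

func deploymentIdFor(name, specDeploymentId string) string {
	if specDeploymentId != "" {
		return specDeploymentId
	}
	return name
}
```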
167-170: Verify: Pods are not notified on initial ConfigMap creation.

When a new ConfigMap is created, `dataChanged` is set to `false` (line 170), so pods will not be notified. This is likely intentional since pods should mount the ConfigMap on startup, but confirm this is the expected behavior.
275-288: LGTM! The notification flow is clean: create a new `Notifier`, call `NotifyPodsForDecofile` with the computed `deploymentId`, and properly handle both success and failure cases. The error is captured for status update while allowing the reconciliation to continue.
320-350: LGTM! The PodsNotified condition logic correctly distinguishes between success and failure cases, includes useful identifiers (commit or timestamp) in the message for debugging, and only updates when `dataChanged` is true.
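A hedged sketch of how such a condition can be recorded with `meta.SetStatusCondition`; the condition type, reasons, and message format below are assumptions based on the description above, not the controller's actual code.

```go
// Sketch: set a PodsNotified-style status condition that flips between True and False
// depending on whether the notification round succeeded.
package controller

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func setPodsNotifiedCondition(conditions *[]metav1.Condition, notifyErr error, identifier string) {
	cond := metav1.Condition{
		Type:    "PodsNotified",
		Status:  metav1.ConditionTrue,
		Reason:  "NotificationSucceeded",
		Message: fmt.Sprintf("pods notified for %s", identifier),
	}
	if notifyErr != nil {
		cond.Status = metav1.ConditionFalse
		cond.Reason = "NotificationFailed"
		cond.Message = fmt.Sprintf("failed to notify pods for %s: %v", identifier, notifyErr)
	}
	// SetStatusCondition updates or appends the condition and manages LastTransitionTime.
	meta.SetStatusCondition(conditions, cond)
}
```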
361-363: LGTM on the requeue behavior. Returning an error when notification fails ensures the controller will requeue and retry, which aligns with the incomplete notification detection logic at lines 88-103.
internal/webhook/v1/service_webhook.go (1)
100-112: Verification Results
Confirmed facts about the admission error handling:
- failurePolicy is explicitly set to `fail` (line 66): any error returned from the Default() method results in the Service admission being rejected, with the error message displayed to the user.
- Error translation is correct: Go webhook frameworks return errors that translate to AdmissionReview responses with `Allowed=false` and a Status carrying the error message. kubectl displays this message to the user.
- Error messages are clear and user-facing:
- "service has deco.sites/decofile-inject annotation but no labels"
- "service has deco.sites/decofile-inject annotation but no app.deco/deploymentId label"
- "decofile %s does not have a ConfigMap created yet"
The strict behavior appears intentional:
- `failurePolicy=fail` is explicit in the kubebuilder annotation (not accidentally inherited)
- Error-on-missing-dependency pattern is the standard safeguard approach in Kubernetes webhooks
- No TODOs or comments suggesting uncertainty about this approach
This is the correct pattern: The strictness prevents Services from being created in an inconsistent state (annotation present but no Decofile injection possible), which would be worse than requiring users to fix their setup first.
The review comment is asking valid clarifying questions about design intent, but the code itself implements this correctly and intentionally. No changes are needed to the code.
Admission errors on missing deploymentId/Decofile are intentional and correctly handled
The webhook has `failurePolicy=fail` (line 66), so all errors from the Default() method properly translate to admission denials with clear messages displayed to kubectl users. The strict contract (requiring both the `app.deco/deploymentId` label and a ready Decofile before allowing Service creation/update) is the correct pattern to prevent inconsistent state. This aligns with Kubernetes admission webhook best practices (fail closed by default, clear error messages). No changes needed; this is solid design.
func (d *ServiceCustomDefaulter) injectDecofileVolume(ctx context.Context, service *servingknativedevv1.Service, decofile *decositesv1alpha1.Decofile, mountDir string) error {
	// Check if ConfigMap is compressed to set correct file extension
	configMap := &corev1.ConfigMap{}
-	err = d.Client.Get(ctx, types.NamespacedName{
+	err := d.Client.Get(ctx, types.NamespacedName{
		Name:      decofile.Status.ConfigMapName,
		Namespace: service.Namespace,
	}, configMap)

	fileExtension := "json"
	if err == nil {
		// Check if compressed
		if _, hasCompressed := configMap.Data["decofile.bin"]; hasCompressed {
			fileExtension = "bin"
		}
	}
🧩 Analysis chain
Script executed:
# First, let's examine the file and see the full function context
cd /tmp && git clone --depth 1 https://github.com/placeholder/placeholder.git 2>/dev/null || true
# Read the service_webhook.go file to understand the full context
wc -l internal/webhook/v1/service_webhook.go 2>/dev/null || wc -l $(find . -name "service_webhook.go" -type f 2>/dev/null | head -1)
Repository: decocms/operator
<shell_result>
Repository: decocms/operator
Exit code: 0
stdout:
0
</shell_result>
Script executed:
# Find and read the service_webhook.go file
fd service_webhook.go --type f
Repository: decocms/operator
<shell_result>
Repository: decocms/operator
Exit code: 0
stdout:
internal/webhook/v1/service_webhook.go
</shell_result>
Script executed:
# Read the relevant section of the file to see the full function
if [ -f "internal/webhook/v1/service_webhook.go" ]; then
head -200 internal/webhook/v1/service_webhook.go | tail -100
fi
Repository: decocms/operator
<shell_result>
Repository: decocms/operator
Exit code: 0
stdout:
func (d *ServiceCustomDefaulter) getDeploymentId(service *servingknativedevv1.Service) (string, error) {
if service.Labels == nil {
return "", fmt.Errorf("service has deco.sites/decofile-inject annotation but no labels")
}
deploymentId, exists := service.Labels[deploymentIdLabel]
if !exists || deploymentId == "" {
return "", fmt.Errorf("service has deco.sites/decofile-inject annotation but no app.deco/deploymentId label")
}
return deploymentId, nil
}
// findDecofileByDeploymentId finds a Decofile matching the given deploymentId
func (d *ServiceCustomDefaulter) findDecofileByDeploymentId(ctx context.Context, namespace, deploymentId string) (*decositesv1alpha1.Decofile, error) {
decofileList := &decositesv1alpha1.DecofileList{}
err := d.Client.List(ctx, decofileList, client.InNamespace(namespace))
if err != nil {
return nil, fmt.Errorf("failed to list Decofiles: %w", err)
}
for i := range decofileList.Items {
df := &decofileList.Items[i]
dfDeploymentId := df.Spec.DeploymentId
if dfDeploymentId == "" {
dfDeploymentId = df.Name
}
if dfDeploymentId == deploymentId {
return df, nil
}
}
return nil, fmt.Errorf("no Decofile found with deploymentId %s in namespace %s", deploymentId, namespace)
}
// injectDecofileVolume injects the Decofile ConfigMap as a volume into the Service
func (d *ServiceCustomDefaulter) injectDecofileVolume(ctx context.Context, service *servingknativedevv1.Service, decofile *decositesv1alpha1.Decofile, mountDir string) error {
// Check if ConfigMap is compressed to set correct file extension
configMap := &corev1.ConfigMap{}
err := d.Client.Get(ctx, types.NamespacedName{
Name: decofile.Status.ConfigMapName,
Namespace: service.Namespace,
}, configMap)
fileExtension := "json"
if err == nil {
if _, hasCompressed := configMap.Data["decofile.bin"]; hasCompressed {
fileExtension = "bin"
}
}
// Create DECO_RELEASE environment variable
decoReleaseValue := fmt.Sprintf("file://%s/decofile.%s", mountDir, fileExtension)
// Ensure volumes array exists
if service.Spec.Template.Spec.Volumes == nil {
service.Spec.Template.Spec.Volumes = []corev1.Volume{}
}
// Add or update volume
d.addOrUpdateVolume(service, decofile.Status.ConfigMapName)
// Find target container and add volumeMount + env vars
if len(service.Spec.Template.Spec.Containers) == 0 {
return fmt.Errorf("no containers found in Service spec")
}
targetContainerIdx := d.findTargetContainer(service)
d.addOrUpdateVolumeMount(service, targetContainerIdx, mountDir)
d.addOrUpdateEnvVars(service, targetContainerIdx, decoReleaseValue)
return nil
}
// addOrUpdateVolume adds or updates the decofile volume
func (d *ServiceCustomDefaulter) addOrUpdateVolume(service *servingknativedevv1.Service, configMapName string) {
volumeName := "decofile-config"
volumeExists := false
for i, vol := range service.Spec.Template.Spec.Volumes {
if vol.Name == volumeName {
service.Spec.Template.Spec.PodSpec.Volumes[i].VolumeSource = corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
},
}
volumeExists = true
break
}
}
if !volumeExists {
service.Spec.Template.Spec.Volumes = append(service.Spec.Template.Spec.Volumes, corev1.Volume{
Name: volumeName,
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
},
},
})
</shell_result>
Script executed:
# Search for the file in the repository
fd "service_webhook" --type fRepository: decocms/operator
<shell_result>
Repository: decocms/operator
Exit code: 0
stdout:
internal/webhook/v1/service_webhook.go
internal/webhook/v1/service_webhook_test.go
</shell_result>
Script executed:
# If found, read lines around 137-173 to see the full function context
if [ -f "internal/webhook/v1/service_webhook.go" ]; then
sed -n '130,180p' internal/webhook/v1/service_webhook.go
fi
Repository: decocms/operator
<shell_result>
Repository: decocms/operator
Exit code: 0
stdout:
}
}
return nil, fmt.Errorf("no Decofile found with deploymentId %s in namespace %s", deploymentId, namespace)
}
// injectDecofileVolume injects the Decofile ConfigMap as a volume into the Service
func (d *ServiceCustomDefaulter) injectDecofileVolume(ctx context.Context, service *servingknativedevv1.Service, decofile *decositesv1alpha1.Decofile, mountDir string) error {
// Check if ConfigMap is compressed to set correct file extension
configMap := &corev1.ConfigMap{}
err := d.Client.Get(ctx, types.NamespacedName{
Name: decofile.Status.ConfigMapName,
Namespace: service.Namespace,
}, configMap)
fileExtension := "json"
if err == nil {
if _, hasCompressed := configMap.Data["decofile.bin"]; hasCompressed {
fileExtension = "bin"
}
}
// Create DECO_RELEASE environment variable
decoReleaseValue := fmt.Sprintf("file://%s/decofile.%s", mountDir, fileExtension)
// Ensure volumes array exists
if service.Spec.Template.Spec.Volumes == nil {
service.Spec.Template.Spec.Volumes = []corev1.Volume{}
}
// Add or update volume
d.addOrUpdateVolume(service, decofile.Status.ConfigMapName)
// Find target container and add volumeMount + env vars
if len(service.Spec.Template.Spec.Containers) == 0 {
return fmt.Errorf("no containers found in Service spec")
}
targetContainerIdx := d.findTargetContainer(service)
d.addOrUpdateVolumeMount(service, targetContainerIdx, mountDir)
d.addOrUpdateEnvVars(service, targetContainerIdx, decoReleaseValue)
return nil
}
// addOrUpdateVolume adds or updates the decofile volume
func (d *ServiceCustomDefaulter) addOrUpdateVolume(service *servingknativedevv1.Service, configMapName string) {
volumeName := "decofile-config"
volumeExists := false
for i, vol := range service.Spec.Template.Spec.Volumes {
</shell_result>
Script executed:
# Also search for ConfigMapName validation to see if it's treated as hard error elsewhere
rg "ConfigMapName" -A 3 -B 3 --type goRepository: decocms/operator
<shell_result>
Repository: decocms/operator
Exit code: 0
stdout:
internal/webhook/v1/service_webhook.go- // Check if ConfigMap is compressed to set correct file extension
internal/webhook/v1/service_webhook.go- configMap := &corev1.ConfigMap{}
internal/webhook/v1/service_webhook.go- err := d.Client.Get(ctx, types.NamespacedName{
internal/webhook/v1/service_webhook.go: Name: decofile.Status.ConfigMapName,
internal/webhook/v1/service_webhook.go- Namespace: service.Namespace,
internal/webhook/v1/service_webhook.go- }, configMap)
internal/webhook/v1/service_webhook.go-
--
internal/webhook/v1/service_webhook.go- }
internal/webhook/v1/service_webhook.go-
internal/webhook/v1/service_webhook.go- // Add or update volume
internal/webhook/v1/service_webhook.go: d.addOrUpdateVolume(service, decofile.Status.ConfigMapName)
internal/webhook/v1/service_webhook.go-
internal/webhook/v1/service_webhook.go- // Find target container and add volumeMount + env vars
internal/webhook/v1/service_webhook.go- if len(service.Spec.Template.Spec.Containers) == 0 {
--
internal/webhook/v1/service_webhook.go- }
internal/webhook/v1/service_webhook.go-
internal/webhook/v1/service_webhook.go- // Check if ConfigMap is ready
internal/webhook/v1/service_webhook.go: if decofile.Status.ConfigMapName == "" {
internal/webhook/v1/service_webhook.go- return fmt.Errorf("decofile %s does not have a ConfigMap created yet", decofile.Name)
internal/webhook/v1/service_webhook.go- }
internal/webhook/v1/service_webhook.go-
--
internal/webhook/v1/service_webhook.go- servicelog.Info("Added finalizer to Service", "service", service.Name)
internal/webhook/v1/service_webhook.go- }
internal/webhook/v1/service_webhook.go-
internal/webhook/v1/service_webhook.go: servicelog.Info("Successfully injected Decofile into Service", "service", service.Name, "deploymentId", deploymentId, "configmap", decofile.Status.ConfigMapName)
internal/webhook/v1/service_webhook.go-
internal/webhook/v1/service_webhook.go- return nil
internal/webhook/v1/service_webhook.go-}
--
internal/controller/decofile_controller.go- }
internal/controller/decofile_controller.go-
internal/controller/decofile_controller.go- // Update Decofile status
internal/controller/decofile_controller.go: freshDecofile.Status.ConfigMapName = configMapName
internal/controller/decofile_controller.go- freshDecofile.Status.LastUpdated = metav1.Time{Time: time.Now()}
internal/controller/decofile_controller.go- freshDecofile.Status.SourceType = sourceType
internal/controller/decofile_controller.go-
--
api/v1alpha1/decofile_types.go-
api/v1alpha1/decofile_types.go-// DecofileStatus defines the observed state of Decofile.
api/v1alpha1/decofile_types.go-type DecofileStatus struct {
api/v1alpha1/decofile_types.go: // ConfigMapName is the name of the ConfigMap created for this Decofile
api/v1alpha1/decofile_types.go- // +optional
api/v1alpha1/decofile_types.go: ConfigMapName string `json:"configMapName,omitempty"`
api/v1alpha1/decofile_types.go-
api/v1alpha1/decofile_types.go- // LastUpdated is the timestamp of the last update
api/v1alpha1/decofile_types.go- // +optional
</shell_result>
Script executed:
wc -l internal/webhook/v1/service_webhook.go
Repository: decocms/operator
<shell_result>
Repository: decocms/operator
Exit code: 0
stdout:
431 internal/webhook/v1/service_webhook.go
</shell_result>
Script executed:
sed -n '100,190p' internal/webhook/v1/service_webhook.go
Repository: decocms/operator
<shell_result>
Repository: decocms/operator
Exit code: 0
stdout:
// getDeploymentId extracts deploymentId from Service labels
func (d *ServiceCustomDefaulter) getDeploymentId(service *servingknativedevv1.Service) (string, error) {
if service.Labels == nil {
return "", fmt.Errorf("service has deco.sites/decofile-inject annotation but no labels")
}
deploymentId, exists := service.Labels[deploymentIdLabel]
if !exists || deploymentId == "" {
return "", fmt.Errorf("service has deco.sites/decofile-inject annotation but no app.deco/deploymentId label")
}
return deploymentId, nil
}
// findDecofileByDeploymentId finds a Decofile matching the given deploymentId
func (d *ServiceCustomDefaulter) findDecofileByDeploymentId(ctx context.Context, namespace, deploymentId string) (*decositesv1alpha1.Decofile, error) {
decofileList := &decositesv1alpha1.DecofileList{}
err := d.Client.List(ctx, decofileList, client.InNamespace(namespace))
if err != nil {
return nil, fmt.Errorf("failed to list Decofiles: %w", err)
}
for i := range decofileList.Items {
df := &decofileList.Items[i]
dfDeploymentId := df.Spec.DeploymentId
if dfDeploymentId == "" {
dfDeploymentId = df.Name
}
if dfDeploymentId == deploymentId {
return df, nil
}
}
return nil, fmt.Errorf("no Decofile found with deploymentId %s in namespace %s", deploymentId, namespace)
}
// injectDecofileVolume injects the Decofile ConfigMap as a volume into the Service
func (d *ServiceCustomDefaulter) injectDecofileVolume(ctx context.Context, service *servingknativedevv1.Service, decofile *decositesv1alpha1.Decofile, mountDir string) error {
// Check if ConfigMap is compressed to set correct file extension
configMap := &corev1.ConfigMap{}
err := d.Client.Get(ctx, types.NamespacedName{
Name: decofile.Status.ConfigMapName,
Namespace: service.Namespace,
}, configMap)
fileExtension := "json"
if err == nil {
if _, hasCompressed := configMap.Data["decofile.bin"]; hasCompressed {
fileExtension = "bin"
}
}
// Create DECO_RELEASE environment variable
decoReleaseValue := fmt.Sprintf("file://%s/decofile.%s", mountDir, fileExtension)
// Ensure volumes array exists
if service.Spec.Template.Spec.Volumes == nil {
service.Spec.Template.Spec.Volumes = []corev1.Volume{}
}
// Add or update volume
d.addOrUpdateVolume(service, decofile.Status.ConfigMapName)
// Find target container and add volumeMount + env vars
if len(service.Spec.Template.Spec.Containers) == 0 {
return fmt.Errorf("no containers found in Service spec")
}
targetContainerIdx := d.findTargetContainer(service)
d.addOrUpdateVolumeMount(service, targetContainerIdx, mountDir)
d.addOrUpdateEnvVars(service, targetContainerIdx, decoReleaseValue)
return nil
}
// addOrUpdateVolume adds or updates the decofile volume
func (d *ServiceCustomDefaulter) addOrUpdateVolume(service *servingknativedevv1.Service, configMapName string) {
volumeName := "decofile-config"
volumeExists := false
for i, vol := range service.Spec.Template.Spec.Volumes {
if vol.Name == volumeName {
service.Spec.Template.Spec.PodSpec.Volumes[i].VolumeSource = corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
},
}
volumeExists = true
break
}
}
</shell_result>
Script executed:
# Search for other uses of ConfigMapName to understand the pattern better
rg "ConfigMapName" internal/webhook/v1/service_webhook.go -B 2 -A 2Repository: decocms/operator
<shell_result>
Repository: decocms/operator
Exit code: 0
stdout:
configMap := &corev1.ConfigMap{}
err := d.Client.Get(ctx, types.NamespacedName{
Name: decofile.Status.ConfigMapName,
Namespace: service.Namespace,
}, configMap)
--
// Add or update volume
d.addOrUpdateVolume(service, decofile.Status.ConfigMapName)
// Find target container and add volumeMount + env vars
--
// Check if ConfigMap is ready
if decofile.Status.ConfigMapName == "" {
return fmt.Errorf("decofile %s does not have a ConfigMap created yet", decofile.Name)
}
--
}
servicelog.Info("Successfully injected Decofile into Service", "service", service.Name, "deploymentId", deploymentId, "configmap", decofile.Status.ConfigMapName)
return nil
</shell_result>
Kubernetes pod CreateContainerConfigError missing ConfigMap volume mounting behavior
<web_search_result>
Short answer
CreateContainerConfigError when using a ConfigMap usually means Kubernetes could not assemble the container config because a referenced ConfigMap/key/volumeMount is wrong or the kubelet failed to prepare the mount (common causes: missing ConfigMap or key, mismatched volume/volumeMount names, subPath misuse, or kubelet/subPath bugs). See checks and fixes below. [1][2][3]
Quick checks to run
- kubectl describe pod -n and kubectl get events -n — read the kubelet event message (it usually names the missing ConfigMap/key or mount error). [3]
- kubectl get configmap -n — confirm the ConfigMap exists and keys match what you reference. [1][3]
- Inspect pod spec: ensure .spec.volumes[].configMap.name matches the ConfigMap name, and each container’s volumeMount.name matches a volume. Check any configMapKeyRef names/keys. [1]
Common causes and how they produce CreateContainerConfigError
- Missing ConfigMap or key: envFrom/configMapKeyRef or volumes referencing a non‑existent ConfigMap/key will fail pod creation. Fix: create the ConfigMap or correct the name/key. [1][3]
- volume/volumeMount name mismatch: a mount refers to a volume name that doesn’t exist in .spec.volumes. Fix: align the names. [2][3]
- subPath misuse: using subPath requires the subPath name to correspond to an entry in the projected volume; a wrong subPath causes “failed to prepare subPath” and CreateContainerConfigError. Also note: mounts with subPath do not receive ConfigMap updates. Use full-volume mounts or write an initContainer if you need single-file behavior without these limitations. [1][2][4]
- Mounting hides existing files: mounting a ConfigMap at a path overwrites (hides) whatever was already at that mountPath in the image filesystem — that can break startup if the app expects bundled files. Use subPath to mount a single file or arrange mount points so you don’t overwrite required files. [1][5]
- Kubelet / platform bugs or permission issues: kubelet errors (e.g., unable to create/cleanup subpath directories) can also cause failures; check kubelet logs on the node. There are recent reported subPath-related issues (see upstream issue). [2]
Minimal remediation steps
- kubectl describe pod -n and read the Warning/Error message (it often says the exact problem). [3]
- kubectl get configmap -n and verify keys; validate any configMapKeyRef names/keys used in env or volumes. [1]
- Verify .spec.volumes[].configMap.name and .spec.containers[].volumeMounts[].name match. Confirm subPath values match actual keys if used. [1][4]
- If the error is a subPath/kubelet failure, check kubelet logs on the node for more detail and search for any known bugs/patches for your Kubernetes version. [2]
- As a test, try removing subPath / mounting the whole configMap into an empty directory to see if the pod then starts (this narrows whether subPath/overwrite is the issue). [1][4]
References
- Kubernetes ConfigMap docs (volumes, keys, update behavior). [1]
- GitHub issue and recent reports about subPath / kubelet failures impacting ConfigMap mounts. [2]
- Common CreateContainerConfigError causes and diagnostics (examples). [3]
- StackOverflow on subPath preparing errors and expected subPath/key behavior. [4]
- Notes about mounting hiding files and recommended patterns (subPath, initContainer). [5]
If you want, paste your Pod YAML (or the kubectl describe output) and I’ll point to the exact field causing CreateContainerConfigError.
</web_search_result>
Script executed:
sed -n '160,173p' internal/webhook/v1/service_webhook.go
Repository: decocms/operator
<shell_result>
Repository: decocms/operator
Exit code: 0
stdout:
// Add or update volume
d.addOrUpdateVolume(service, decofile.Status.ConfigMapName)
// Find target container and add volumeMount + env vars
if len(service.Spec.Template.Spec.Containers) == 0 {
return fmt.Errorf("no containers found in Service spec")
}
targetContainerIdx := d.findTargetContainer(service)
d.addOrUpdateVolumeMount(service, targetContainerIdx, mountDir)
d.addOrUpdateEnvVars(service, targetContainerIdx, decoReleaseValue)
return nil
}
</shell_result>
Script executed:
# Get broader context to see what function contains lines 160-173
sed -n '155,180p' internal/webhook/v1/service_webhook.go
Repository: decocms/operator
<shell_result>
Repository: decocms/operator
Exit code: 0
stdout:
// Ensure volumes array exists
if service.Spec.Template.Spec.Volumes == nil {
service.Spec.Template.Spec.Volumes = []corev1.Volume{}
}
// Add or update volume
d.addOrUpdateVolume(service, decofile.Status.ConfigMapName)
// Find target container and add volumeMount + env vars
if len(service.Spec.Template.Spec.Containers) == 0 {
return fmt.Errorf("no containers found in Service spec")
}
targetContainerIdx := d.findTargetContainer(service)
d.addOrUpdateVolumeMount(service, targetContainerIdx, mountDir)
d.addOrUpdateEnvVars(service, targetContainerIdx, decoReleaseValue)
return nil
}
// addOrUpdateVolume adds or updates the decofile volume
func (d *ServiceCustomDefaulter) addOrUpdateVolume(service *servingknativedevv1.Service, configMapName string) {
volumeName := "decofile-config"
volumeExists := false
for i, vol := range service.Spec.Template.Spec.Volumes {
</shell_result>
ConfigMap lookup errors are silently ignored, producing invalid pod specs at runtime
Verified: The code at lines 137-174 (injectDecofileVolume) ignores errors from Client.Get when attempting to retrieve the ConfigMap. Even if the ConfigMap is missing or access is denied, the function proceeds to:
- Call `addOrUpdateVolume()` with the ConfigMapName (line 161)
- Set volumeMount and environment variables (lines 167–168)
This creates a Service that passes admission but whose pods will fail at runtime with CreateContainerConfigError when the kubelet cannot find the referenced ConfigMap.
This is inconsistent with the check at line ~127 (if decofile.Status.ConfigMapName == "") which treats an empty ConfigMapName as a hard error. Treating ConfigMap retrieval failure the same way would make failures deterministic at admission time. The web search confirms that missing ConfigMaps result in pod-level CreateContainerConfigError, not admission rejection.
The suggested refactor is valid:
- configMap := &corev1.ConfigMap{}
- err := d.Client.Get(ctx, types.NamespacedName{
+ configMap := &corev1.ConfigMap{}
+ err := d.Client.Get(ctx, types.NamespacedName{
Name: decofile.Status.ConfigMapName,
Namespace: service.Namespace,
}, configMap)
- fileExtension := "json"
- if err == nil {
+ if err != nil {
+ return fmt.Errorf("failed to get Decofile ConfigMap %q: %w", decofile.Status.ConfigMapName, err)
+ }
+
+ fileExtension := "json"
+ if _, hasCompressed := configMap.Data["decofile.bin"]; hasCompressed {
+ fileExtension = "bin"
+ }
- if _, hasCompressed := configMap.Data["decofile.bin"]; hasCompressed {
- fileExtension = "bin"
- }
-	}

Also applies to lines 160–173 (the entire section that injects volume/mount/env remains unconditional).
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
func (d *ServiceCustomDefaulter) injectDecofileVolume(ctx context.Context, service *servingknativedevv1.Service, decofile *decositesv1alpha1.Decofile, mountDir string) error {
	// Check if ConfigMap is compressed to set correct file extension
	configMap := &corev1.ConfigMap{}
	err = d.Client.Get(ctx, types.NamespacedName{
	err := d.Client.Get(ctx, types.NamespacedName{
		Name:      decofile.Status.ConfigMapName,
		Namespace: service.Namespace,
	}, configMap)
	fileExtension := "json"
	if err == nil {
		// Check if compressed
		if _, hasCompressed := configMap.Data["decofile.bin"]; hasCompressed {
			fileExtension = "bin"
		}
	}

func (d *ServiceCustomDefaulter) injectDecofileVolume(ctx context.Context, service *servingknativedevv1.Service, decofile *decositesv1alpha1.Decofile, mountDir string) error {
	// Check if ConfigMap is compressed to set correct file extension
	configMap := &corev1.ConfigMap{}
	err := d.Client.Get(ctx, types.NamespacedName{
		Name:      decofile.Status.ConfigMapName,
		Namespace: service.Namespace,
	}, configMap)
	if err != nil {
		return fmt.Errorf("failed to get Decofile ConfigMap %q: %w", decofile.Status.ConfigMapName, err)
	}
	fileExtension := "json"
	if _, hasCompressed := configMap.Data["decofile.bin"]; hasCompressed {
		fileExtension = "bin"
	}
🤖 Prompt for AI Agents
In internal/webhook/v1/service_webhook.go around lines 137 to 151, the function
injectDecofileVolume currently ignores errors from d.Client.Get when fetching
the ConfigMap and proceeds to inject a volume referencing a
non-existent/unreadable ConfigMap; change this so that if d.Client.Get returns
any error (including NotFound or permission errors) you return that error (or
wrap it with context) immediately instead of continuing, so admission fails
deterministically; use the k8s API errors helpers (e.g., apierrors.IsNotFound)
if you want to customize the returned message, but do not proceed to call
addOrUpdateVolume, set mounts, or env vars when the ConfigMap fetch fails.
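One possible shape for that change, sketched with `apierrors.IsNotFound` to give the NotFound case a friendlier message; the helper name and exact wording are illustrative, not the committed fix.

```go
// Sketch: fail admission on any ConfigMap Get error, with a clearer message when
// the ConfigMap simply does not exist yet.
package v1

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func getDecofileConfigMap(ctx context.Context, c client.Client, name, namespace string) (*corev1.ConfigMap, error) {
	cm := &corev1.ConfigMap{}
	err := c.Get(ctx, types.NamespacedName{Name: name, Namespace: namespace}, cm)
	if apierrors.IsNotFound(err) {
		return nil, fmt.Errorf("decofile ConfigMap %q not found in namespace %q; wait for the Decofile controller to create it", name, namespace)
	}
	if err != nil {
		return nil, fmt.Errorf("failed to get decofile ConfigMap %q: %w", name, err)
	}
	return cm, nil
}
```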
Signed-off-by: Marcos Candeia <marrcooos@gmail.com>
Summary by CodeRabbit
New Features
Chores