MON-4031: Add prometheusOperatorConfig API #2481
Conversation
Hello @marioferh! Some important instructions when contributing to openshift/api:

/assign
everettraven left a comment
Most, if not all, of my recent comments from #2461 apply here
```go
// Specifically, it can configure how the Prometheus Operator instance is deployed, pod scheduling, and resource allocation.
// When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time.
// +optional
PrometheusOperatorConfig PrometheusOperatorConfig `json:"prometheusOperatorConfig,omitempty,omitzero"`
```
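For context, a condensed sketch of the type this field points at, as described by the walkthrough and CRD schemas later in this thread (doc comments abbreviated; the 1–10 bounds and list-type markers are taken from the generated manifests, so treat this as a reading aid rather than the PR's exact source):

```go
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
)

// Condensed sketch of PrometheusOperatorConfig; LogLevel and
// ContainerResource are existing types in config/v1alpha1, and the
// real doc comments in the PR are much longer.
type PrometheusOperatorConfig struct {
	// logLevel defines the verbosity of logs emitted by the Prometheus Operator.
	// +optional
	LogLevel LogLevel `json:"logLevel,omitempty"`

	// nodeSelector defines the nodes on which the Pods are scheduled.
	// +kubebuilder:validation:MinProperties=1
	// +kubebuilder:validation:MaxProperties=10
	// +optional
	NodeSelector map[string]string `json:"nodeSelector,omitempty"`

	// resources defines compute resource requests and limits for the
	// Prometheus Operator container.
	// +kubebuilder:validation:MinItems=1
	// +kubebuilder:validation:MaxItems=10
	// +listType=map
	// +listMapKey=name
	// +optional
	Resources []ContainerResource `json:"resources,omitempty"`

	// tolerations defines tolerations for the Prometheus Operator Pods.
	// +kubebuilder:validation:MinItems=1
	// +kubebuilder:validation:MaxItems=10
	// +listType=atomic
	// +optional
	Tolerations []corev1.Toleration `json:"tolerations,omitempty"`

	// topologySpreadConstraints defines how Prometheus Operator Pods
	// are spread across topology domains.
	// +kubebuilder:validation:MinItems=1
	// +kubebuilder:validation:MaxItems=10
	// +listType=map
	// +listMapKey=topologyKey
	// +listMapKey=whenUnsatisfiable
	// +optional
	TopologySpreadConstraints []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"`
}
```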
How does this configuration relate to the configuration proposed in #2463?
This is the Prometheus Operator config; the other one is the Prometheus config. Of course they are related, but they have different configs.
Is the Prometheus config used by the PrometheusOperator?
Would it make sense to co-locate the configurations under a top-level prometheus field?
Not directly. The Prometheus config is used by Prometheus. The Prometheus Operator manages Prometheus instances, Alertmanager, etc.
So what configures the Prometheus instances created by the Prometheus Operator to use the Prometheus Config?
CMO takes the PrometheusK8sConfig ConfigMap and creates a CR.
The Prometheus Operator takes that CR and configures Prometheus.
I can understand your idea, but the Prometheus Operator manages all these components, and I think it's not a good idea to have all fields inside PrometheusOperator.
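A rough sketch of that flow in Go, using prometheus-operator's monitoring/v1 types (illustrative only, not CMO's actual code; exact field names vary between prometheus-operator versions):

```go
package main

import (
	"fmt"

	monv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// renderPrometheusCR mimics the step where CMO turns values from its
// ConfigMap into a Prometheus custom resource; the Prometheus Operator
// then reconciles that CR into the running Prometheus deployment.
func renderPrometheusCR(retention monv1.Duration) *monv1.Prometheus {
	return &monv1.Prometheus{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "k8s",
			Namespace: "openshift-monitoring",
		},
		Spec: monv1.PrometheusSpec{
			// Values here would come from the prometheusK8s section of
			// the cluster-monitoring-config ConfigMap.
			Retention: retention,
		},
	}
}

func main() {
	cr := renderPrometheusCR("15d")
	fmt.Println(cr.Name, cr.Spec.Retention)
}
```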
A core feature of the Prometheus Operator is to monitor the Kubernetes API server for changes to specific objects and ensure that the current Prometheus deployments match these objects. The Operator acts on the following [Custom Resource Definitions (CRDs)](https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/):
- Prometheus, which defines a desired Prometheus deployment.
- PrometheusAgent, which defines a desired Prometheus deployment, but running in Agent mode.
- Alertmanager, which defines a desired Alertmanager deployment.
- ThanosRuler, which defines a desired Thanos Ruler deployment.
- ServiceMonitor, which declaratively specifies how groups of Kubernetes services should be monitored. The Operator automatically generates Prometheus scrape configuration based on the current state of the objects in the API server.
- PodMonitor, which declaratively specifies how groups of pods should be monitored. The Operator automatically generates Prometheus scrape configuration based on the current state of the objects in the API server.
- Probe, which declaratively specifies how groups of ingresses or static targets should be monitored. The Operator automatically generates Prometheus scrape configuration based on the definition.
- ScrapeConfig, which declaratively specifies scrape configurations to be added to Prometheus. This CustomResourceDefinition helps with scraping resources outside the Kubernetes cluster.
- PrometheusRule, which defines a desired set of Prometheus alerting and/or recording rules. The Operator generates a rule file, which can be used by Prometheus instances.
- AlertmanagerConfig, which declaratively specifies subsections of the Alertmanager configuration, allowing routing of alerts to custom receivers, and setting inhibit rules.
So to make sure I am following along, the CMO will:
- Deploy the Prometheus Operator based on the PrometheusOperatorConfig
- Create Prometheus CRs using the configurations provided in PrometheusK8sConfig. Does this apply to all Prometheus CRs?
While these are two distinct things, they are both inherently related to how the CMO handles prometheus configuration on the cluster.
I can understand your idea, but the Prometheus Operator manages all these components, and I think it's not a good idea to have all fields inside PrometheusOperator.
I'm not suggesting that we put all the fields under PrometheusOperatorConfig, I'm suggesting we use a shared parent field named prometheus that can have sibling fields for configuring the Prometheus Operator itself and, separately, configuring the individual Prometheus instance configurations. This way, if you want to add additional configuration options related to prometheus in the future, you don't have to add another Prometheus* field.
- Deploy the PrometheusOperator based on the PrometheusOperatorConfig
- Create Prometheus CRs using the configurations provided in PrometheusK8sConfig. Does this apply to all Prometheus CRs?
Correct
I'm not suggesting that we put all the fields under PrometheusOperatorConfig, I'm suggesting we use a shared parent field named prometheus that can have sibling fields for configuring the Prometheus Operator itself and, separately, configuring the individual Prometheus instance configurations. This way, if you want to add additional configuration options related to prometheus in the future, you don't have to add another Prometheus* field.
But they are different things; they are related, but from my point of view, and given how CMO works, it makes no sense.
https://github.com/prometheus-operator/prometheus-operator
https://github.com/prometheus/prometheus
@danielmellado @simonpasquier any thoughts?
I would keep @marioferh's approach. While I understand @everettraven's concern about API organization, the reality is that prometheusOperatorConfig and prometheusK8sConfig are solving different problems, and IMHO having them under a shared parent would actually make things more confusing.
Think about it from an operator's perspective: when you're configuring prometheusOperatorConfig, you're basically saying "how should we deploy and run the Prometheus Operator pods themselves" - stuff like resource limits, node scheduling, log levels. But when you're dealing with prometheusK8sConfig, you're configuring "what should the actual Prometheus servers do" - scraping rules, storage, retention policies, etc. Again, I think mixing them together would be confusing.
Plus, we already have a working pattern with alertmanagerConfig and metricsServerConfig that users understand. Why break that consistency for a theoretical future problem?
If we do end up with too many prometheus fields later, I'm totally happy to revisit the structure, but I think that for now the separation actually makes the API clearer and more intuitive.
@simonpasquier wdyt?
If we do end up with too many prometheus fields later, I'm totally happy to revisit the structure
If you go with a distributed field approach, you have to maintain that essentially forever or go through a pretty painful process to refine the structure once you've promoted the API to v1.
Why break that consistency for a theoretical future problem?
I'm not sold that doing something like:

```yaml
prometheusConfig:
  operator:
    ...
  servers:
    ...
```

breaks that consistency, but if you folks feel strongly that users will have a better experience with multiple prometheus*Config fields I won't block it.
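In Go terms, that umbrella shape would look roughly like this (field and type names here are assumptions for illustration, not part of this PR):

```go
// Hypothetical umbrella type for the layout sketched above; the
// PrometheusK8sConfig type name is assumed, mirroring the discussion.
type PrometheusConfig struct {
	// operator configures the Prometheus Operator deployment itself.
	// +optional
	Operator PrometheusOperatorConfig `json:"operator,omitempty"`

	// servers configures the Prometheus instances the operator manages.
	// +optional
	Servers PrometheusK8sConfig `json:"servers,omitempty"`
}
```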
If you don't think you'll ever have more than the two fields for the operator and servers respectively, this probably isn't that big of a deal.
Think about it from an operator's perspective: when you're configuring prometheusOperatorConfig, you're basically saying "how should we deploy and run the Prometheus Operator pods themselves" - stuff like resource limits, node scheduling, log levels. But when you're dealing with prometheusK8sConfig, you're configuring "what should the actual Prometheus servers do" - scraping rules, storage, retention policies, etc. Again, I think mixing them together would be confusing
I think the example above still considers this perspective and difference of field responsibilities. Except now you have a dedicated umbrella field that captures everything related to the configuration of the "Prometheus stack" (operator and the servers).
Again, if you folks feel strongly that this doesn't make sense and users would have a better experience with the currently implemented approach I won't stop it from being done, but it must be clear to users what each prometheus*Config field is responsible for and when they should/should not be specifying the fields.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
@marioferh: This pull request references MON-4031 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the sub-task to target the "4.21.0" version, but no target version was set.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close.

/lifecycle stale
📝 Walkthrough

Added a new public type PrometheusOperatorConfig in config/v1alpha1 and an optional prometheusOperatorConfig field on ClusterMonitoringSpec. PrometheusOperatorConfig exposes logLevel, nodeSelector, resources, tolerations, and topologySpreadConstraints. Generated deepcopy, SwaggerDoc, and OpenAPI mappings were added for the new type and wired into ClusterMonitoringSpec. The ClusterMonitoring CRD payloads (CustomNoUpgrade, DevPreviewNoUpgrade, TechPreviewNoUpgrade) were updated to include the prometheusOperatorConfig property with a detailed nested schema and validations. MetricsServerConfig also gained a topologySpreadConstraints field, and several validation comment punctuation marks were normalized.
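For a sense of how a consumer would set the new field through the Go API, a minimal hypothetical example (values illustrative; field names are from this PR):

```go
package main

import (
	"fmt"

	configv1alpha1 "github.com/openshift/api/config/v1alpha1"
)

func main() {
	// Populate the new optional field on ClusterMonitoringSpec; omitting
	// it entirely leaves the platform defaults in place.
	spec := configv1alpha1.ClusterMonitoringSpec{
		PrometheusOperatorConfig: configv1alpha1.PrometheusOperatorConfig{
			LogLevel:     "Debug",
			NodeSelector: map[string]string{"kubernetes.io/os": "linux"},
		},
	}
	fmt.Println(spec.PrometheusOperatorConfig.LogLevel)
}
```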
Force-pushed fca2297 to 30171eb.
Actionable comments posted: 14
🤖 Fix all issues with AI agents
In @config/v1alpha1/types_cluster_monitoring.go:
- Around line 451-471: The doc comment for the Resources field incorrectly
references "KubeStateMetrics container"; update the comment above the Resources
[]ContainerResource declaration to state that it defines compute resource
requests and limits for the Prometheus Operator container (not
KubeStateMetrics), keep the rest of the documentation intact (including
defaults, optionality, list validation tags and MaxItems/MinItems constraints)
and ensure any explanatory lines (e.g., "When specified, resources must contain
at least 1 entry...") remain accurate and refer to the Prometheus Operator
container.
- Around line 440-450: The comment for the NodeSelector field incorrectly refers
to "resources must contain at least 1 entry"; update the documentation text for
the NodeSelector map (symbol: NodeSelector map[string]string
`json:"nodeSelector,omitempty"`) to reference "nodeSelector entries" instead of
"resources" and ensure the MinProperties/MaxProperties validation comments
remain accurate for nodeSelector (i.e., change the phrasing to "When specified,
nodeSelector must contain at least 1 entry and must not contain more than 10
entries.").
- Around line 428-439: The doc comment for the LogLevel field incorrectly says
"logs emitted by Alertmanager"; update the comment to refer to the Prometheus
Operator component instead and adjust any wording that mentions Alertmanager
specifically (e.g., the first sentence and any examples) so it accurately
documents that LogLevel controls the verbosity of logs from the Prometheus
Operator; keep existing allowed values, default note, and +optional tag intact
for the LogLevel `logLevel` field.
In @openapi/generated_openapi/zz_generated.openapi.go:
- Around line 23699-23720: Update the Description string for the SchemaProps of
the resources field (the struct containing VendorExtensible/SchemaProps with
Type []string{"array"} and Ref to ContainerResource) to replace the incorrect
"KubeStateMetrics container" text with "Prometheus Operator container"; locate
the resources SchemaProps block in zz_generated.openapi.go (the one with Default
map[string]interface{} and
Ref("github.com/openshift/api/config/v1alpha1.ContainerResource")) and edit only
the wording in the Description to reference the Prometheus Operator container.
- Around line 23676-23682: The logLevel schema description incorrectly mentions
"Alertmanager"; update the description for the "logLevel" SchemaProps (the
logLevel field in the generated OpenAPI schema for PrometheusOperatorConfig) to
reference "Prometheus Operator" instead, keeping the rest of the text and
allowed values the same and preserving the note about omission/default; ensure
you only change the wording "Alertmanager" → "Prometheus Operator" in the
Description string.
In @openapi/openapi.json:
- Around line 12962-12969: The nodeSelector object schema currently documents
"at least 1 entry and ... not more than 10 entries" but lacks enforcement;
update the "nodeSelector" schema in openapi.json to add minProperties: 1 and
maxProperties: 10 so the OpenAPI validator enforces the constraint, keeping the
existing type/object and additionalProperties unchanged.
- Around line 12991-13003: The schema for the topologySpreadConstraints array is
missing validation for the described bounds; update the
topologySpreadConstraints property in the OpenAPI schema to include minItems: 1
and maxItems: 10 (the array that currently references
io.k8s.api.core.v1.TopologySpreadConstraint), ensuring the JSON object that
contains "type": "array" for topologySpreadConstraints gains these two keys so
the API enforces the documented minimum and maximum lengths.
- Around line 12982-12990: The OpenAPI schema for the "tolerations" array
declares a minimum length in the description but lacks an enforced constraint;
add "minItems: 1" to the "tolerations" array definition so the schema enforces
the described minimum (keep the existing "type": "array", "items": { "$ref":
"#/definitions/io.k8s.api.core.v1.Toleration", "default": {} }, and
"x-kubernetes-list-type": "atomic" intact).
- Around line 12970-12981: The "resources" array schema in the OpenAPI
definition lacks the validation constraints described in its description; add
minItems: 1 and maxItems: 10 to the "resources" property in the schema (the
object with "description", "type": "array", "items", "x-kubernetes-list-*" etc.)
so the API enforces at least one and at most ten entries for the resources
array.
- Around line 12958-12961: The "logLevel" schema description incorrectly names
"Alertmanager" and lacks an enum; update the "logLevel" property description to
state it configures the Prometheus Operator (not Alertmanager) and reword the
text accordingly, then add an "enum" array to the "logLevel" schema restricting
values to ["Error","Warn","Info","Debug"] so the OpenAPI spec enforces allowed
values; target the "logLevel" property in the openapi.json schema to make these
changes. A sketch of the Go-side enum declaration appears after this list.
In
@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-CustomNoUpgrade.crd.yaml:
- Around line 1312-1325: The generated CRD description for nodeSelector contains
a copy-paste phrase "resources must contain..."—update the Go type that defines
the nodeSelector field (the struct field named NodeSelector in the type that
generates this CRD) to correct the field description so it refers to
"nodeSelector entries must contain at least 1 entry and must not contain more
than 10 entries" (or similar wording referencing nodeSelector entries), then
re-run the CRD generation to regenerate
payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-CustomNoUpgrade.crd.yaml
so the description no longer mentions "resources".
- Around line 1294-1311: The CRD `logLevel` description incorrectly references
"Alertmanager"—update the Go type comment that generates this CRD (the struct
field named LogLevel in the relevant ClusterMonitoring/Prometheus Operator
config type) to describe that it controls the Prometheus Operator's verbosity
(allowed values Error, Warn, Info, Debug), then regenerate the CRD so the
corrected description appears in the YAML; ensure the Go doc comment tied to the
LogLevel field is changed, rebuild the CRD generation artifacts, and commit the
regenerated YAML.
- Around line 1326-1342: The CRD description for the resources field incorrectly
says "KubeStateMetrics container"; update the Go type comment that generates the
CRD (the comment/documentation for the Resources field on the
ClusterMonitoring/Prometheus spec type that produces the clustermonitorings CRD)
to reference "Prometheus Operator container" instead, then re-run the CRD
generation (controller-gen / make generate or your repo's CRD build step) to
regenerate
payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-CustomNoUpgrade.crd.yaml
so the resources description is corrected.
In
@payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-DevPreviewNoUpgrade.crd.yaml:
- Around line 1287-1648: The docs for PrometheusOperatorConfig contain
copy‑paste errors; update the Go struct comments in PrometheusOperatorConfig
(config/v1alpha1/types_cluster_monitoring.go) so LogLevel's comment refers to
the Prometheus Operator instead of Alertmanager, NodeSelector's description
references nodeSelector limits (min/max entries) rather than "resources", and
Resources describes the Prometheus Operator container (not KubeStateMetrics);
after fixing the comments for the fields LogLevel, NodeSelector and Resources on
the PrometheusOperatorConfig type, regenerate the CRD YAMLs so the corrected
text appears in payload-manifests/crds/...crd.yaml.
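Regarding the logLevel enum item above: in the Go source such an enum is usually declared with a kubebuilder marker on the type. A sketch of the common pattern follows (the constant names are assumptions for illustration; only the four string values come from this PR):

```go
package v1alpha1

// LogLevel controls component log verbosity.
// +kubebuilder:validation:Enum=Error;Warn;Info;Debug
// +enum
type LogLevel string

const (
	LogLevelError LogLevel = "Error"
	LogLevelWarn  LogLevel = "Warn"
	LogLevelInfo  LogLevel = "Info"
	LogLevelDebug LogLevel = "Debug"
)
```

With that marker in place, the enum lands in the generated CRD and OpenAPI schema on regeneration, rather than being hand-edited into openapi.json.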
🧹 Nitpick comments (1)

config/v1alpha1/zz_generated.swagger_doc_generated.go (1)

221-232: Fix copy-paste mistakes in PrometheusOperatorConfig Swagger docs (Alertmanager/KubeStateMetrics wording, nodeSelector sentence)

The descriptions here look copied from other components and are misleading for this API:
- Line 223: logLevel "verbosity of logs emitted by Alertmanager" – should be Prometheus Operator.
- Line 225: resources "for the KubeStateMetrics container" – should be the Prometheus Operator container.
- Line 224: the nodeSelector description's last sentence talks about "resources must contain at least 1 entry…" – that constraint belongs to the resources list, not nodeSelector.

These strings will surface in generated API docs and CRDs, so it's worth correcting at the source (the PrometheusOperatorConfig comment in config/v1alpha1/types_cluster_monitoring.go) and then regenerating swagger/OpenAPI/CRDs.

Example of corrected wording (apply in the Go type comment, then regenerate):

```diff
- // logLevel defines the verbosity of logs emitted by Alertmanager.
+ // logLevel defines the verbosity of logs emitted by the Prometheus Operator.
- // resources defines the compute resource requests and limits for the KubeStateMetrics container.
+ // resources defines the compute resource requests and limits for the Prometheus Operator container.
- // When specified, resources must contain at least 1 entry and must not contain more than 10 entries.
+ // When specified, nodeSelector must contain at least 1 entry and must not contain more than 10 entries.
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
⛔ Files ignored due to path filters (4)
- config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings-CustomNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
- config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings-DevPreviewNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
- config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings-TechPreviewNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
- config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
📒 Files selected for processing (8)
- config/v1alpha1/types_cluster_monitoring.go
- config/v1alpha1/zz_generated.deepcopy.go
- config/v1alpha1/zz_generated.swagger_doc_generated.go
- openapi/generated_openapi/zz_generated.openapi.go
- openapi/openapi.json
- payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-CustomNoUpgrade.crd.yaml
- payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-DevPreviewNoUpgrade.crd.yaml
- payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-TechPreviewNoUpgrade.crd.yaml
🧰 Additional context used
🧬 Code graph analysis (2)
config/v1alpha1/zz_generated.swagger_doc_generated.go (1)
- config/v1alpha1/types_cluster_monitoring.go (1)
  - PrometheusOperatorConfig (427-504)

config/v1alpha1/types_cluster_monitoring.go (2)
- config/v1alpha1/zz_generated.swagger_doc_generated.go (2)
  - PrometheusOperatorConfig (230-232), ContainerResource (203-205)
- operator/v1/types.go (1)
  - LogLevel (94-94)
🔇 Additional comments (9)
config/v1alpha1/zz_generated.swagger_doc_generated.go (1)

176-182: ClusterMonitoringSpec swagger entry for prometheusOperatorConfig looks consistent

The new field description is consistent with the Go type intent and other config blocks (Alertmanager, MetricsServer). No functional or doc issues here from my side.
config/v1alpha1/zz_generated.deepcopy.go (1)

363-380: Deep-copy semantics for PrometheusOperatorConfig look correct and consistent

ClusterMonitoringSpec.DeepCopyInto now deep-copies PrometheusOperatorConfig, and the new PrometheusOperatorConfig.DeepCopyInto/DeepCopy implementation mirrors the existing patterns for MetricsServerConfig and AlertmanagerCustomConfig:
- Maps and slices are reallocated and populated element-wise.
- Nested Kubernetes types (v1.Toleration, v1.TopologySpreadConstraint) use their own DeepCopyInto.
- Value fields (like LogLevel) are copied by value.

I don't see any nil-safety or aliasing issues here.

Also applies to: 930-972
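For readers unfamiliar with deepcopy-gen output, the generated methods follow this general shape (abridged sketch, not the verbatim generated code; Resources and TopologySpreadConstraints are handled the same way as Tolerations, and corev1 here stands for k8s.io/api/core/v1):

```go
// Abridged sketch of deepcopy-gen output for this type.
func (in *PrometheusOperatorConfig) DeepCopyInto(out *PrometheusOperatorConfig) {
	*out = *in // copies value fields such as LogLevel
	if in.NodeSelector != nil {
		in, out := &in.NodeSelector, &out.NodeSelector
		*out = make(map[string]string, len(*in))
		for key, val := range *in {
			(*out)[key] = val
		}
	}
	if in.Tolerations != nil {
		in, out := &in.Tolerations, &out.Tolerations
		*out = make([]corev1.Toleration, len(*in))
		for i := range *in {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
}

func (in *PrometheusOperatorConfig) DeepCopy() *PrometheusOperatorConfig {
	if in == nil {
		return nil
	}
	out := new(PrometheusOperatorConfig)
	in.DeepCopyInto(out)
	return out
}
```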
config/v1alpha1/types_cluster_monitoring.go (2)

97-101: LGTM!

The new PrometheusOperatorConfig field follows the established pattern for optional configuration fields in ClusterMonitoringSpec, using consistent JSON tags (omitempty, omitzero) and proper documentation.

472-504: LGTM!

The tolerations and topologySpreadConstraints fields are correctly implemented with appropriate validation markers, list type annotations, and documentation that correctly references "Prometheus Operator Pods".

payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-CustomNoUpgrade.crd.yaml (1)
1287-1292: LGTM for structure and validation rules.

The prometheusOperatorConfig section is correctly structured with:
- minProperties: 1 constraint on the object
- Proper validation rules for nested fields (resources, tolerations, topologySpreadConstraints)
- Correct list type annotations (x-kubernetes-list-type: map/atomic)
- Consistent patterns with existing metricsServerConfig and alertmanagerConfig sections

Also applies to: 1406-1455, 1456-1648
openapi/openapi.json (1)
10354-10360: Verify this change is intentional; it appears unrelated to the PR objective.

The modification to the policyType field description (line 10357) seems unrelated to the PrometheusOperatorConfig API addition. Clarify whether this is a deliberate change, an accidental inclusion from a rebase, or scope creep.

openapi/generated_openapi/zz_generated.openapi.go (3)
480-486: LGTM!

The mapping entry for PrometheusOperatorConfig is correctly added in alphabetical order following the existing pattern.

22730-22741: LGTM!

The prometheusOperatorConfig field is properly added to ClusterMonitoringSpec with an appropriate description, default value, and the dependencies list is correctly updated.

23740-23768: LGTM!

The remaining schema fields (tolerations, topologySpreadConstraints) and the dependencies list are correctly defined with appropriate list types and map keys for Kubernetes-style merge semantics.
Force-pushed 10f5dfa to 8054d15.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @openapi/openapi.json:
- Around line 12973-13024: The PrometheusOperatorConfig schema lacks enforcement
of documented limits: add an "enum" with ["Error","Warn","Info","Debug"] to the
logLevel property; add "minProperties": 1 and "maxProperties": 10 to the
nodeSelector property; add "minItems": 1 and "maxItems": 10 to the resources
array (property name resources); add "minItems": 1 and "maxItems": 10 to the
tolerations array (property name tolerations); and add "minItems": 1 and
"maxItems": 10 to the topologySpreadConstraints array (property name
topologySpreadConstraints) so the schema enforces the constraints described in
PrometheusOperatorConfig.
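As a sketch of what the requested bounds could look like using kube-openapi's spec types (illustrative only, not the verbatim generator output; in practice the fix usually belongs in the kubebuilder markers plus regeneration rather than hand-editing the JSON):

```go
package main

import (
	"fmt"

	"k8s.io/kube-openapi/pkg/validation/spec"
)

// nodeSelectorSchema shows the nodeSelector property carrying the
// minProperties/maxProperties bounds the review asks for; the array
// fields would gain MinItems/MaxItems the same way.
func nodeSelectorSchema() spec.Schema {
	minProps, maxProps := int64(1), int64(10)
	return spec.Schema{
		SchemaProps: spec.SchemaProps{
			Description:   "nodeSelector defines the nodes on which the Pods are scheduled.",
			Type:          []string{"object"},
			MinProperties: &minProps,
			MaxProperties: &maxProps,
		},
	}
}

func main() {
	s := nodeSelectorSchema()
	fmt.Println(*s.MinProperties, *s.MaxProperties)
}
```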
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (4)
- config/v1alpha1/zz_generated.deepcopy.go
- config/v1alpha1/zz_generated.swagger_doc_generated.go
- openapi/generated_openapi/zz_generated.openapi.go
- openapi/openapi.json
🧰 Additional context used
🧬 Code graph analysis (2)
config/v1alpha1/zz_generated.deepcopy.go (2)
- config/v1alpha1/zz_generated.swagger_doc_generated.go (2)
  - PrometheusOperatorConfig (230-232), ContainerResource (203-205)
- config/v1alpha1/types_cluster_monitoring.go (2)
  - PrometheusOperatorConfig (427-505), ContainerResource (299-329)

config/v1alpha1/zz_generated.swagger_doc_generated.go (1)
- config/v1alpha1/types_cluster_monitoring.go (1)
  - PrometheusOperatorConfig (427-505)
🔇 Additional comments (7)
config/v1alpha1/zz_generated.deepcopy.go (2)

368-368: LGTM: ClusterMonitoringSpec correctly updated for new field.

The integration of PrometheusOperatorConfig into ClusterMonitoringSpec.DeepCopyInto is properly implemented.

956-998: LGTM: PrometheusOperatorConfig deepcopy implementation is correct.

The generated deepcopy methods properly handle all fields:
- Simple field (LogLevel) copied by value assignment
- Map field (NodeSelector) with proper nil check and key-value copying
- Slice fields (Resources, Tolerations, TopologySpreadConstraints) with nil checks and element-wise deep copying

The implementation follows the established patterns for similar types in this codebase.
config/v1alpha1/zz_generated.swagger_doc_generated.go (2)

177-182: LGTM: ClusterMonitoringSpec swagger documentation correctly updated.

The new prometheusOperatorConfig field is properly documented in the ClusterMonitoringSpec swagger map with an appropriate description.

221-232: LGTM: PrometheusOperatorConfig swagger documentation is complete.

The generated Swagger documentation comprehensively covers all fields of PrometheusOperatorConfig:
- logLevel with detailed verbosity level explanations
- nodeSelector with scheduling context and validation constraints
- resources with default values and list constraints
- tolerations with defaults and list constraints
- topologySpreadConstraints with high availability context and constraints

The documentation quality and style are consistent with similar configuration types in this file.
openapi/generated_openapi/zz_generated.openapi.go (3)

484-484: LGTM: OpenAPI definition registered correctly.

The PrometheusOperatorConfig schema is properly registered in the GetOpenAPIDefinitions map, following the established pattern and alphabetical ordering.

22730-22741: LGTM: ClusterMonitoringSpec properly extended.

The new prometheusOperatorConfig field is correctly integrated into ClusterMonitoringSpec with:
- Clear description indicating it's optional and subject to platform defaults
- Proper reference to the PrometheusOperatorConfig schema
- Updated dependencies list

23697-23797: LGTM: PrometheusOperatorConfig schema correctly generated.

The complete schema definition includes all expected fields with proper:
- Type definitions (string, object, array)
- Kubernetes list semantics (map/atomic with appropriate keys)
- Standard Kubernetes type references (Toleration, TopologySpreadConstraint)
- Detailed descriptions for each field
- Appropriate defaults and constraints

The past review issues regarding incorrect component references in descriptions have been successfully addressed.

Note: This is generated code (indicated by the zz_generated prefix) and should be regenerated from source if changes are needed, not manually edited.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In @openapi/openapi.json:
- Around line 13001-13012: The "resources" array schema is missing runtime
validation for the described bounds; add "minItems": 1 and "maxItems": 10 to the
"resources" object (next to "type": "array") so the OpenAPI definition enforces
the documented minimum and maximum list lengths for the "resources" property
that references
"#/definitions/com.github.openshift.api.config.v1alpha1.ContainerResource".
- Around line 13013-13021: The tolerations array schema is missing validation
for the described bounds; update the "tolerations" property in the OpenAPI
definition to include minItems: 1 and maxItems: 10 so the schema enforces the
stated "Minimum length for this list is 1" and "Maximum length for this list is
10" (keep the existing "type": "array", "items": { "$ref":
"#/definitions/io.k8s.api.core.v1.Toleration", "default": {} } and
"x-kubernetes-list-type": "atomic").
- Around line 12993-13000: The schema for nodeSelector in openapi/openapi.json
documents a requirement of "must contain at least 1 entry and must not contain
more than 10 entries" but lacks validation; update the nodeSelector object
schema to include minProperties: 1 and maxProperties: 10 so the OpenAPI
validates the documented constraints (locate the nodeSelector object under the
relevant resource in openapi.json and add the minProperties and maxProperties
fields alongside type/additionalProperties).
- Around line 13022-13034: The schema for the topologySpreadConstraints array is
missing the documented validation bounds; update the topologySpreadConstraints
definition to include "minItems": 1 and "maxItems": 10 so the OpenAPI spec
enforces the described minimum and maximum lengths for the array (the block that
currently defines "type": "array", "items": { "$ref":
"#/definitions/io.k8s.api.core.v1.TopologySpreadConstraint" } should be extended
with minItems and maxItems).
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (3)
- config/v1alpha1/types_cluster_monitoring.go
- openapi/generated_openapi/zz_generated.openapi.go
- openapi/openapi.json
🔇 Additional comments (9)
openapi/openapi.json (2)

12405-12409: LGTM: PrometheusOperatorConfig field properly integrated.

The new prometheusOperatorConfig field is correctly added to the ClusterMonitoringSpec schema with appropriate description, optional semantics, and reference to the new definition.

12983-12992: LGTM: Previous logLevel issues have been addressed.

The logLevel property now correctly references "Prometheus Operator" in its description and includes the enum constraint with allowed values ["Debug", "Error", "Info", "Warn"].

config/v1alpha1/types_cluster_monitoring.go (4)
97-101: LGTM! PrometheusOperatorConfig field addition follows established patterns.

The field documentation is clear, the optional tagging is correct, and it follows the same pattern as other configuration fields in the spec. The structure is consistent with the team's decision to maintain separate top-level config fields rather than nesting under a common parent.

283-283: Good addition of the +enum marker for consistency.

Adding the +enum marker to the LogLevel type aligns it with other enumerated types in the file and aids code generation tools.

425-506: Well-structured PrometheusOperatorConfig follows established patterns.

The struct definition is consistent with existing config types in the codebase:
- Proper validation tags and constraints throughout
- Clear, accurate documentation for all fields
- Follows the same pattern as AlertmanagerCustomConfig and MetricsServerConfig
- All previously identified documentation errors have been resolved

458-465: Verify appropriateness of Prometheus Operator resource defaults against upstream recommendations and operational experience.

The documented defaults (4m CPU request, 40Mi memory request) do match those of Alertmanager and MetricsServer. However, there is no justification in the codebase for whether these minimal defaults are suitable for the Prometheus Operator, which manages multiple Prometheus instances and related resources. Consider consulting upstream Prometheus Operator documentation or performance testing data to confirm these defaults are adequate.
openapi/generated_openapi/zz_generated.openapi.go (3)
484-484: LGTM: PrometheusOperatorConfig registered correctly.The new type is properly wired into the OpenAPI definitions registry with the correct package path and function reference.
23698-23799: LGTM: PrometheusOperatorConfig schema well-defined.The PrometheusOperatorConfig OpenAPI schema is correctly structured with all necessary fields:
- ✅ logLevel: Properly references "Prometheus Operator" in description (past copy-paste issue resolved)
- ✅ nodeSelector: Standard map[string]string pattern with reasonable constraints
- ✅ resources: Properly references "Prometheus Operator container" in description (past copy-paste issue resolved), with list-map semantics
- ✅ tolerations: Standard Kubernetes toleration array with atomic list type
- ✅ topologySpreadConstraints: Proper map-type list with composite keys
All type references and dependencies are correct. The x-kubernetes-list-type annotations are appropriate for their respective field semantics.
Note: Validation constraints (min/max items) are documented in descriptions. Actual enforcement is typically done via kubebuilder validation markers in the source types, which should be reflected in the CRD manifests.
22731-22737: This pattern is standard throughout the OpenAPI schema file: 187+ optional fields with default values follow the same convention. Both prometheusOperatorConfig and neighboring fields like metricsServerConfig and alertmanagerConfig are described as optional with Default: map[string]interface{}{}, and their descriptions explicitly clarify: "When omitted, this means no opinion and the platform is left to choose a reasonable default." The empty default object does not contradict the optional nature; it represents the zero-value for optional configuration objects in Kubernetes APIs. No changes needed.

Likely an incorrect or invalid review comment.
Signed-off-by: Mario Fernandez <mariofer@redhat.com>
Force-pushed ae9ccea to 66fff25.
@marioferh: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @openapi/openapi.json:
- Around line 13001-13009: Fix the punctuation in the Go doc comment that
generates the OpenAPI description for the "tolerations" field
(io.k8s.api.core.v1.Toleration list) by adding a period between the two
sentences so it reads "Maximum length for this list is 10. Minimum length for
this list is 1"; update the comment on the Go struct field (Tolerations) in the
core v1 type, then regenerate the OpenAPI JSON so the corrected sentence appears
in the "tolerations" description.
- Around line 12989-13000: The OpenAPI schema for PrometheusOperatorConfig is
missing JSON Schema constraints that the Go kubebuilder markers declared; update
the OpenAPI definitions for the fields resources, tolerations,
topologySpreadConstraints and nodeSelector to include the corresponding
constraints (add "minItems" and "maxItems" for array fields resources,
tolerations, topologySpreadConstraints and add "minProperties" and
"maxProperties" for the object field nodeSelector) matching the values from the
kubebuilder markers so the generated schema enforces the documented bounds.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
⛔ Files ignored due to path filters (4)
- config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings-CustomNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
- config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings-DevPreviewNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
- config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings-TechPreviewNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
- config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
📒 Files selected for processing (8)
- config/v1alpha1/types_cluster_monitoring.go
- config/v1alpha1/zz_generated.deepcopy.go
- config/v1alpha1/zz_generated.swagger_doc_generated.go
- openapi/generated_openapi/zz_generated.openapi.go
- openapi/openapi.json
- payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-CustomNoUpgrade.crd.yaml
- payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-DevPreviewNoUpgrade.crd.yaml
- payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-TechPreviewNoUpgrade.crd.yaml
🧰 Additional context used
🧬 Code graph analysis (3)
config/v1alpha1/zz_generated.deepcopy.go (1)
- config/v1alpha1/types_cluster_monitoring.go (2)
  - PrometheusOperatorConfig (427-505), ContainerResource (299-329)

config/v1alpha1/types_cluster_monitoring.go (2)
- config/v1alpha1/zz_generated.swagger_doc_generated.go (2)
  - PrometheusOperatorConfig (230-232), ContainerResource (203-205)
- operator/v1/types.go (1)
  - LogLevel (94-94)

config/v1alpha1/zz_generated.swagger_doc_generated.go (1)
- config/v1alpha1/types_cluster_monitoring.go (1)
  - PrometheusOperatorConfig (427-505)
🔇 Additional comments (14)
payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-TechPreviewNoUpgrade.crd.yaml (1)

1287-1649: PrometheusOperatorConfig schema matches Go type and existing patterns

The new prometheusOperatorConfig block mirrors the MetricsServerConfig/AlertmanagerCustomConfig patterns: enums, nodeSelector Min/MaxProperties, ContainerResource list semantics, tolerations, and topology spread constraints all align with the kubebuilder validations on the Go types. This looks correct and internally consistent for the TechPreview CRD.

payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-CustomNoUpgrade.crd.yaml (1)

1287-1649: CustomNoUpgrade CRD stays consistent with TechPreview schema

The prometheusOperatorConfig definition here matches the TechPreview CRD (logLevel enum, nodeSelector constraints, ContainerResource list, tolerations, topologySpreadConstraints), keeping the API shape consistent across feature sets. No issues spotted.

config/v1alpha1/zz_generated.deepcopy.go (1)

363-369: DeepCopy coverage for PrometheusOperatorConfig is complete and consistent

ClusterMonitoringSpec.DeepCopyInto now deep-copies PrometheusOperatorConfig, and the new PrometheusOperatorConfig.DeepCopyInto/DeepCopy correctly handle all mutable fields (map, slices of ContainerResource, v1.Toleration, and v1.TopologySpreadConstraint) in the same style as MetricsServerConfig and AlertmanagerCustomConfig. This avoids aliasing and looks like the expected codegen output.

Also applies to: 956-998

config/v1alpha1/types_cluster_monitoring.go (1)

97-101: PrometheusOperatorConfig API is well-shaped and aligned with existing config types

Adding PrometheusOperatorConfig to ClusterMonitoringSpec as an optional value-type field follows the existing pattern (AlertmanagerConfig, MetricsServerConfig, UserDefined). The new PrometheusOperatorConfig struct reuses the shared LogLevel and ContainerResource types and applies the same validation scheme as Metrics Server (MinProperties on the struct, bounded nodeSelector, resource list, tolerations, and topologySpreadConstraints). This keeps the API consistent and supports the CRD schema you added.

Also applies to: 424-505
payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-DevPreviewNoUpgrade.crd.yaml (1)

1287-1649: LGTM! Clean CRD schema for prometheusOperatorConfig.

The new prometheusOperatorConfig field is well-structured and follows the established patterns used for similar configuration objects in this CRD. The schema includes:
- Proper constraints (minProperties, maxProperties, minItems, maxItems)
- Comprehensive validation rules for resources, tolerations, and topology spread constraints
- Clear, user-friendly documentation for all fields
- Appropriate Kubernetes-specific validations (x-kubernetes-validations, x-kubernetes-list-type)

The structure is consistent with metricsServerConfig and alertmanagerConfig, ensuring a uniform API surface.

config/v1alpha1/zz_generated.swagger_doc_generated.go (2)

221-232: LGTM! Swagger documentation properly generated.

The new map_PrometheusOperatorConfig and its SwaggerDoc() method follow the established pattern used throughout this auto-generated file. All field documentation is comprehensive and matches the CRD schema.

177-181: LGTM! ClusterMonitoringSpec properly updated.

The prometheusOperatorConfig field is correctly added to the map_ClusterMonitoringSpec with appropriate documentation that matches the CRD definition.

openapi/generated_openapi/zz_generated.openapi.go (6)
484-484: LGTM: Registry entry is properly ordered.

The new OpenAPI definition mapping for PrometheusOperatorConfig is correctly placed in alphabetical order.

22730-22741: LGTM: Field integration follows OpenAPI conventions.

The prometheusOperatorConfig field is properly integrated into ClusterMonitoringSpec with appropriate description, default value, and dependency references.

23749-23767: Verify validation constraints are enforced for the tolerations field.

Similar to the resources field, the description on line 23756 mentions "Maximum length for this list is 10 Minimum length for this list is 1" but the schema lacks minItems and maxItems constraints. This uses the same verification approach as the previous comment: checking whether validation markers exist in the source Go types or whether validation is handled elsewhere.

23768-23790: Verify validation constraints are enforced for topologySpreadConstraints.

The description on line 23779 states "Maximum length for this list is 10. Minimum length for this list is 1. Entries must have unique topologyKey and whenUnsatisfiable pairs," but the schema only includes the x-kubernetes-list-map-keys extension without minItems/maxItems constraints. The uniqueness constraint is partially addressed by the x-kubernetes-list-type: "map" extension, but the length constraints are not enforced in the schema. Please confirm these constraints are properly validated, either through the source type definitions or admission webhooks.

23727-23748: Validation constraints are already enforced in the source code.

The resources field in PrometheusOperatorConfig (and AlertmanagerOperatorConfig, MetricsServerConfig) already has kubebuilder validation markers in config/v1alpha1/types_cluster_monitoring.go:
- Line 470: // +kubebuilder:validation:MaxItems=10
- Line 471: // +kubebuilder:validation:MinItems=1

These markers enforce the documented constraints at the source level. If the generated OpenAPI schema is missing these constraints, the issue lies with the OpenAPI code generation process, not the source type definitions.

Likely an incorrect or invalid review comment.
23711-23725: Note that validation constraints are defined in source but not translated to the OpenAPI schema.

The +kubebuilder:validation:MinProperties=1 and +kubebuilder:validation:MaxProperties=10 markers are already present in the source type (./config/v1alpha1/types_cluster_monitoring.go), but the kube-openapi generator does not translate them to minProperties and maxProperties in the generated OpenAPI schema. Similarly, kubebuilder validation markers for array constraints (MinItems/MaxItems) are also absent from the generated schema despite being defined in the source.

This is a systematic limitation of the code generation tool: the OpenAPI schema lacks enforcement of cardinality constraints while the descriptions preserve the textual constraints. Additionally, the Default: "" on line 23719 for map values is incorrect; map value defaults are not meaningful in OpenAPI schemas.

Consider whether the generation tool should be updated to convert kubebuilder validation markers to OpenAPI schema constraints, or document this limitation explicitly.
openapi/openapi.json (1)
12399-12403: LGTM!

The prometheusOperatorConfig field is properly defined with a clear description and appropriate reference to the PrometheusOperatorConfig type. The optional semantics and default behavior are well documented.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @openapi/openapi.json:
- Around line 12973-13024: PrometheusOperatorConfig schema is missing
enforcement of documented constraints: add an "enum" with
["Error","Warn","Info","Debug"] to the logLevel property; add "minProperties": 1
and "maxProperties": 10 to the nodeSelector object; and add "minItems": 1 and
"maxItems": 10 to each of the array properties resources, tolerations, and
topologySpreadConstraints so the OpenAPI schema enforces the boundaries
described in the property descriptions.
🧹 Nitpick comments (1)

config/v1alpha1/types_cluster_monitoring.go (1)

424-505: LGTM! Well-structured configuration type.

The PrometheusOperatorConfig type is well-designed and follows the established patterns for similar configuration types in this file. All fields, validations, and documentation are appropriate.

Optional: minor documentation style inconsistency. Line 446 includes the extra clarification "When specified, nodeSelector must contain at least 1 entry and must not contain more than 10 entries." that isn't present in the similar nodeSelector fields in AlertmanagerCustomConfig (lines 168-176) and MetricsServerConfig (lines 348-357). While this additional clarification is helpful, you may want to either:
- Remove it here for consistency, or
- Add similar clarification to the other nodeSelector fields

This is purely a style consideration and doesn't affect functionality.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
⛔ Files ignored due to path filters (4)
- config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings-CustomNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
- config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings-DevPreviewNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
- config/v1alpha1/zz_generated.crd-manifests/0000_10_config-operator_01_clustermonitorings-TechPreviewNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/*
- config/v1alpha1/zz_generated.featuregated-crd-manifests/clustermonitorings.config.openshift.io/ClusterMonitoringConfig.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
📒 Files selected for processing (7)
- config/v1alpha1/types_cluster_monitoring.go
- config/v1alpha1/zz_generated.swagger_doc_generated.go
- openapi/generated_openapi/zz_generated.openapi.go
- openapi/openapi.json
- payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-CustomNoUpgrade.crd.yaml
- payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-DevPreviewNoUpgrade.crd.yaml
- payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-TechPreviewNoUpgrade.crd.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
- config/v1alpha1/zz_generated.swagger_doc_generated.go
- payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-CustomNoUpgrade.crd.yaml
🧰 Additional context used
🧬 Code graph analysis (1)
config/v1alpha1/types_cluster_monitoring.go (2)
- config/v1alpha1/zz_generated.swagger_doc_generated.go (2)
  - PrometheusOperatorConfig (230-232), ContainerResource (203-205)
- operator/v1/types.go (1)
  - LogLevel (94-94)
🔇 Additional comments (11)
config/v1alpha1/types_cluster_monitoring.go (3)

97-101: LGTM! Clean integration of the new configuration field.

The PrometheusOperatorConfig field is properly integrated into ClusterMonitoringSpec with clear documentation and appropriate validation tags.

226-227: LGTM! Helpful consistency improvements.

The punctuation cleanup improves documentation consistency across the file.

Also applies to: 243-243, 364-365

403-422: LGTM! Proper implementation of topology spread constraints.

The TopologySpreadConstraints field addition to MetricsServerConfig follows the established pattern and includes appropriate validations.

payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-TechPreviewNoUpgrade.crd.yaml (1)
1287-1649: LGTM! CRD schema properly reflects the new API type.

The prometheusOperatorConfig schema definition is complete and correctly generated from the Go type definitions, with all validations and nested properties properly structured.

openapi/openapi.json (1)

12399-12403: LGTM! New API field addition looks correct.

The prometheusOperatorConfig field is properly structured with a clear description, appropriate reference to the new schema type, and correct default handling for optional behavior.

payload-manifests/crds/0000_10_config-operator_01_clustermonitorings-DevPreviewNoUpgrade.crd.yaml (2)
219-220: LGTM: Documentation consistency improvements.

The addition of periods to validation descriptions improves documentation consistency across the CRD schema.

Also applies to: 273-273, 1036-1037, 1090-1090

1287-1649: LGTM: Well-structured prometheusOperatorConfig field.

The new prometheusOperatorConfig field follows established patterns from metricsServerConfig and alertmanagerConfig.customConfig. The schema includes:
- Proper optional field design with minProperties: 1 validation
- Standard Kubernetes resource, toleration, and topology spread constraint schemas
- Consistent validation rules and documentation
- Appropriate constraints (1-10 items/properties for lists/maps)

The structure is consistent with the existing CRD patterns and includes all necessary Kubernetes-style validations.
openapi/generated_openapi/zz_generated.openapi.go (4)

484-484: LGTM: Registry entry follows the expected pattern.

The new PrometheusOperatorConfig mapping is correctly added to the OpenAPI definitions registry and follows the established pattern for type registration.

22218-22218: LGTM: Documentation punctuation normalized.

The description updates normalize the ending punctuation for consistency across similar field descriptions.

Also applies to: 22241-22241, 23361-23361, 23413-23413

22730-22741: LGTM: PrometheusOperatorConfig properly integrated into ClusterMonitoringSpec.

The new field is correctly defined with appropriate description, default value (empty map), and reference to the PrometheusOperatorConfig schema. The dependencies list is properly updated to include the new type.

23697-23797: The schema structure is correct and Kubernetes type versions are properly aligned with the project's dependencies.

Verification confirms:
- The project uses k8s.io/api v0.34.1, which correctly supports the referenced types k8s.io/api/core/v1.Toleration and k8s.io/api/core/v1.TopologySpreadConstraint
- The PrometheusOperatorConfig OpenAPI schema follows proper conventions with correct Kubernetes extensions (x-kubernetes-list-type, x-kubernetes-list-map-keys)
- All field definitions and type references are consistent with the project's dependency versions

Note: This is an auto-generated file (marked with //go:build !ignore_autogenerated); the schema content is automatically generated and does not require manual code review.
| "com.github.openshift.api.config.v1alpha1.PrometheusOperatorConfig": { | ||
| "description": "PrometheusOperatorConfig provides configuration options for the Prometheus Operator instance Use this configuration to control how the Prometheus Operator instance is deployed, how it logs, and how its pods are scheduled.", | ||
| "type": "object", | ||
| "properties": { | ||
| "logLevel": { | ||
| "description": "logLevel defines the verbosity of logs emitted by Prometheus Operator. This field allows users to control the amount and severity of logs generated, which can be useful for debugging issues or reducing noise in production environments. Allowed values are Error, Warn, Info, and Debug. When set to Error, only errors will be logged. When set to Warn, both warnings and errors will be logged. When set to Info, general information, warnings, and errors will all be logged. When set to Debug, detailed debugging information will be logged. When omitted, this means no opinion and the platform is left to choose a reasonable default, that is subject to change over time. The current default value is `Info`.", | ||
| "type": "string" | ||
| }, | ||
| "nodeSelector": { | ||
| "description": "nodeSelector defines the nodes on which the Pods are scheduled nodeSelector is optional.\n\nWhen omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. The current default value is `kubernetes.io/os: linux`. When specified, nodeSelector must contain at least 1 entry and must not contain more than 10 entries.", | ||
| "type": "object", | ||
| "additionalProperties": { | ||
| "type": "string", | ||
| "default": "" | ||
| } | ||
| }, | ||
| "resources": { | ||
| "description": "resources defines the compute resource requests and limits for the Prometheus Operator container. This includes CPU, memory and HugePages constraints to help control scheduling and resource usage. When not specified, defaults are used by the platform. Requests cannot exceed limits. This field is optional. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ This is a simplified API that maps to Kubernetes ResourceRequirements. The current default values are:\n resources:\n - name: cpu\n request: 4m\n limit: null\n - name: memory\n request: 40Mi\n limit: null\nMaximum length for this list is 10. Minimum length for this list is 1.", | ||
| "type": "array", | ||
| "items": { | ||
| "default": {}, | ||
| "$ref": "#/definitions/com.github.openshift.api.config.v1alpha1.ContainerResource" | ||
| }, | ||
| "x-kubernetes-list-map-keys": [ | ||
| "name" | ||
| ], | ||
| "x-kubernetes-list-type": "map" | ||
| }, | ||
| "tolerations": { | ||
| "description": "tolerations defines tolerations for the pods. tolerations is optional.\n\nWhen omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. Defaults are empty/unset. Maximum length for this list is 10. Minimum length for this list is 1.", | ||
| "type": "array", | ||
| "items": { | ||
| "default": {}, | ||
| "$ref": "#/definitions/io.k8s.api.core.v1.Toleration" | ||
| }, | ||
| "x-kubernetes-list-type": "atomic" | ||
| }, | ||
| "topologySpreadConstraints": { | ||
| "description": "topologySpreadConstraints defines rules for how Prometheus Operator Pods should be distributed across topology domains such as zones, nodes, or other user-defined labels. topologySpreadConstraints is optional. This helps improve high availability and resource efficiency by avoiding placing too many replicas in the same failure domain.\n\nWhen omitted, this means no opinion and the platform is left to choose a default, which is subject to change over time. This field maps directly to the `topologySpreadConstraints` field in the Pod spec. Default is empty list. Maximum length for this list is 10. Minimum length for this list is 1. Entries must have unique topologyKey and whenUnsatisfiable pairs.", | ||
| "type": "array", | ||
| "items": { | ||
| "default": {}, | ||
| "$ref": "#/definitions/io.k8s.api.core.v1.TopologySpreadConstraint" | ||
| }, | ||
| "x-kubernetes-list-map-keys": [ | ||
| "topologyKey", | ||
| "whenUnsatisfiable" | ||
| ], | ||
| "x-kubernetes-list-type": "map" | ||
| } | ||
| } | ||
| }, |
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Verify that all referenced type definitions exist in the OpenAPI schema
echo "Checking for referenced type definitions in openapi/openapi.json..."

# Check for ContainerResource
if jq -e '.definitions["com.github.openshift.api.config.v1alpha1.ContainerResource"]' openapi/openapi.json > /dev/null 2>&1; then
  echo "✓ ContainerResource definition found"
else
  echo "✗ ContainerResource definition NOT FOUND"
fi

# Check for Toleration
if jq -e '.definitions["io.k8s.api.core.v1.Toleration"]' openapi/openapi.json > /dev/null 2>&1; then
  echo "✓ Toleration definition found"
else
  echo "✗ Toleration definition NOT FOUND"
fi

# Check for TopologySpreadConstraint
if jq -e '.definitions["io.k8s.api.core.v1.TopologySpreadConstraint"]' openapi/openapi.json > /dev/null 2>&1; then
  echo "✓ TopologySpreadConstraint definition found"
else
  echo "✗ TopologySpreadConstraint definition NOT FOUND"
fi

echo ""
echo "Checking if validation constraints are present in the PrometheusOperatorConfig schema..."

# Check if logLevel has enum constraint
if jq -e '.definitions["com.github.openshift.api.config.v1alpha1.PrometheusOperatorConfig"].properties.logLevel.enum' openapi/openapi.json > /dev/null 2>&1; then
  echo "✓ logLevel has enum constraint"
  jq '.definitions["com.github.openshift.api.config.v1alpha1.PrometheusOperatorConfig"].properties.logLevel.enum' openapi/openapi.json
else
  echo "✗ logLevel missing enum constraint (described values: Error, Warn, Info, Debug)"
fi

# Check if nodeSelector has min/max properties
if jq -e '.definitions["com.github.openshift.api.config.v1alpha1.PrometheusOperatorConfig"].properties.nodeSelector.minProperties' openapi/openapi.json > /dev/null 2>&1; then
  echo "✓ nodeSelector has minProperties"
  jq '.definitions["com.github.openshift.api.config.v1alpha1.PrometheusOperatorConfig"].properties.nodeSelector.minProperties' openapi/openapi.json
else
  echo "✗ nodeSelector missing minProperties constraint (described min: 1)"
fi

if jq -e '.definitions["com.github.openshift.api.config.v1alpha1.PrometheusOperatorConfig"].properties.nodeSelector.maxProperties' openapi/openapi.json > /dev/null 2>&1; then
  echo "✓ nodeSelector has maxProperties"
  jq '.definitions["com.github.openshift.api.config.v1alpha1.PrometheusOperatorConfig"].properties.nodeSelector.maxProperties' openapi/openapi.json
else
  echo "✗ nodeSelector missing maxProperties constraint (described max: 10)"
fi

# Check array constraints
for field in resources tolerations topologySpreadConstraints; do
  if jq -e ".definitions[\"com.github.openshift.api.config.v1alpha1.PrometheusOperatorConfig\"].properties.${field}.minItems" openapi/openapi.json > /dev/null 2>&1; then
    echo "✓ ${field} has minItems"
    jq ".definitions[\"com.github.openshift.api.config.v1alpha1.PrometheusOperatorConfig\"].properties.${field}.minItems" openapi/openapi.json
  else
    echo "✗ ${field} missing minItems constraint (described min: 1)"
  fi
  if jq -e ".definitions[\"com.github.openshift.api.config.v1alpha1.PrometheusOperatorConfig\"].properties.${field}.maxItems" openapi/openapi.json > /dev/null 2>&1; then
    echo "✓ ${field} has maxItems"
    jq ".definitions[\"com.github.openshift.api.config.v1alpha1.PrometheusOperatorConfig\"].properties.${field}.maxItems" openapi/openapi.json
  else
    echo "✗ ${field} missing maxItems constraint (described max: 10)"
  fi
done
```

Repository: openshift/api
Length of output: 933
Add missing validation constraints to PrometheusOperatorConfig schema.
The referenced type definitions (ContainerResource, Toleration, TopologySpreadConstraint) all exist in the schema, but the PrometheusOperatorConfig definition is missing validation constraints that its field descriptions document:

- logLevel: missing an `enum` constraint for the documented allowed values (Error, Warn, Info, Debug)
- nodeSelector: missing the `minProperties: 1` and `maxProperties: 10` constraints
- resources, tolerations, topologySpreadConstraints: missing the `minItems: 1` and `maxItems: 10` constraints
The descriptions promise these validation boundaries, but the OpenAPI schema does not enforce them. These constraints should be added to the schema definitions.
🤖 Prompt for AI Agents
In @openapi/openapi.json around lines 12973 - 13024, PrometheusOperatorConfig
schema is missing enforcement of documented constraints: add an "enum" with
["Error","Warn","Info","Debug"] to the logLevel property; add "minProperties": 1
and "maxProperties": 10 to the nodeSelector object; and add "minItems": 1 and
"maxItems": 10 to each of the array properties resources, tolerations, and
topologySpreadConstraints so the OpenAPI schema enforces the boundaries
described in the property descriptions.
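One caveat worth noting: openapi/openapi.json is generated, so a direct hand-edit like the prompt describes would be overwritten on the next regeneration. The gap the script found is also consistent with `+kubebuilder:validation:*` markers driving only the CRD YAML (where the review above found the constraints present), since openapi-gen, which produces openapi.json, does not interpret those markers. Below is a hedged sketch of how an enum could be made visible to both generators via kube-openapi's `+enum` marker on a named string type; the names are illustrative, not taken from the PR:

```go
// LogLevel enumerates the verbosity levels accepted by the operator.
// +enum
type LogLevel string

const (
	LogLevelError LogLevel = "Error"
	LogLevelWarn  LogLevel = "Warn"
	LogLevelInfo  LogLevel = "Info"
	LogLevelDebug LogLevel = "Debug"
)

// PrometheusOperatorConfig (excerpt) shows how the two markers coexist.
type PrometheusOperatorConfig struct {
	// The kubebuilder marker below feeds the CRD YAML validation; the
	// +enum marker on the LogLevel type above feeds openapi-gen.
	// +kubebuilder:validation:Enum=Error;Warn;Info;Debug
	// +optional
	LogLevel LogLevel `json:"logLevel,omitempty"`
}
```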
@marioferh When this PR is ready for another round of review, please remove the … Thank you!
/remove-lifecycle stale |
Done. About this: #2481 (comment), I think the current approach is the better one.
No description provided.