
Conversation

Contributor

@Nikokolas3270 Nikokolas3270 commented Jan 2, 2026

The RHOBS or classic webhook may process an alert twice, either sequentially or in parallel, due to Prometheus or AlertManager redundancy. This change makes sure that the custom resources used to track notifications are test-and-set (TAS) in an atomic way. This avoids sending notifications for duplicate alerts.

What type of PR is this?

bug

What this PR does / why we need it?

Customers are currently being spammed: they receive the same notification several times.

Which Jira/Github issue(s) this PR fixes?

Fixes SREP-2079

Special notes for your reviewer:

For RHOBS webhook:

  • For the sake of atomicity, counters are now incremented before sending the service log or the limited support notification.
  • lastTransitionTime is now only updated when sending a notification.
  • The whole status is test-and-set in an atomic way.
  • Counters are decremented if, for some reason, the notification ultimately cannot be sent or is discarded (see the sketch after these notes).

For classic webhook:

  • For the sake of atomicity, the AlertFiring and AlertResolved conditions are test-and-set in an atomic way.
  • The AlertFiring timestamp is only updated when the condition status changes.
  • The AlertResolved timestamp changes any time the webhook is called.
  • The ServiceLogSent condition is not processed in an atomic way, as it is not used to determine whether the alert was already firing.
  • Unlike the RHOBS webhook, there is no need to restore conditions to their previous state if a service log cannot be sent: counting the number of SLs sent and recording the time at which the SL was sent is handled by the ServiceLogSent condition, which is processed later, asynchronously.

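The intended flow for the RHOBS webhook roughly looks like the sketch below (the statusClient interface and helper names are illustrative only, not the actual ocm-agent code):

```go
package webhook

import (
	"context"
	"errors"
)

// statusClient is an illustrative stand-in for the ocm-agent client; the two
// methods model the atomic test-and-set on the custom resource status and its
// rollback.
type statusClient interface {
	// TestAndSetFiring atomically increments the firing counter and updates
	// lastTransitionTime in a single status update; it returns false when the
	// alert is already recorded as firing (i.e. a duplicate).
	TestAndSetFiring(ctx context.Context, name string) (bool, error)
	// RollbackFiring decrements the counter again.
	RollbackFiring(ctx context.Context, name string) error
}

// notifyOnce increments the counter before sending and rolls it back if the
// notification ultimately cannot be sent.
func notifyOnce(ctx context.Context, c statusClient, name string, send func() error) error {
	updated, err := c.TestAndSetFiring(ctx, name)
	if err != nil || !updated {
		return err // duplicate alert (or API error): nothing to send
	}
	if err := send(); err != nil {
		// The notification could not be sent: undo the counter increment so
		// the record does not claim a notification that never went out.
		if rbErr := c.RollbackFiring(ctx, name); rbErr != nil {
			return errors.Join(err, rbErr)
		}
		return err
	}
	return nil
}
```
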
Pre-checks (if applicable):

  • Tested latest changes against a cluster
  • Ran make generate command locally to validate code changes -> There is no make generate command.
  • Included documentation changes with PR -> Not needed, no API change


openshift-ci-robot commented Jan 2, 2026

@Nikokolas3270: This pull request references SREP-2079 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the bug to target the "4.22.0" version, but no target version was set.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Jan 2, 2026
@openshift-ci openshift-ci bot requested review from Tafhim and ravitri January 2, 2026 18:49
Contributor

openshift-ci bot commented Jan 2, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Nikokolas3270

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 2, 2026

codecov-commenter commented Jan 2, 2026

Codecov Report

❌ Patch coverage is 89.59538% with 36 lines in your changes missing coverage. Please review.
✅ Project coverage is 55.67%. Comparing base (f601961) to head (f12430d).

Files with missing lines               Patch %   Lines
pkg/handlers/webhookreceiver.go        86.25%    15 Missing and 7 partials ⚠️
pkg/handlers/webhookrhobsreceiver.go   92.47%    10 Missing and 4 partials ⚠️
Additional details and impacted files

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #175      +/-   ##
==========================================
+ Coverage   53.95%   55.67%   +1.71%     
==========================================
  Files          23       23              
  Lines        1820     1895      +75     
==========================================
+ Hits          982     1055      +73     
- Misses        780      785       +5     
+ Partials       58       55       -3     
Files with missing lines               Coverage Δ
pkg/handlers/webhookrhobsreceiver.go   90.90% <92.47%> (+5.34%) ⬆️
pkg/handlers/webhookreceiver.go        81.28% <86.25%> (+1.88%) ⬆️

... and 2 files with indirect coverage changes


Contributor

openshift-ci bot commented Jan 2, 2026

@Nikokolas3270: all tests passed!

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

return err
for _, limitedSupportReason := range limitedSupportReasons {
// If the reason matches the fleet notification LS reason, remove it
// TODO(ngrauss): The limitedSupportReason.ID() should be stored in the ManagedFleetNotificationRecord record item object


there is a todo here, is this something outside of the scope of this PR? :)

Contributor Author


Yes, I will open a new ticket for that.
Essentially the custom resource definitions should be changed to make sure that the LS which is removed is really the one signalled by the code.

return c.inPlaceStatusUpdate()
}

// TODO(ngrauss): to be removed


there is a todo here, is this something outside of the scope of this PR? :)

Contributor Author


Yes, once the CRD model is changed we won't need to restore the status to the way it was anymore.

func (c *fleetNotificationContext) inPlaceStatusUpdate() error {
// c.notificationRecordItem is a pointer but it is not part of the managedFleetNotificationRecord object
// Below code makes sure to update the oav1alpha1.NotificationRecordItem inside the managedFleetNotificationRecord object with the latest values.
// TODO(ngrauss): refactor GetNotificationRecordItem method to return a reference to the object inside the managedFleetNotificationRecord


todo? ;)

Contributor Author


Yes, one more todo :)
The "record item" returned by GetNotificationRecordItem is a copy of the one in the oav1alpha1.ManagedFleetNotificationRecord object.
Hence changing it (as done in the fleetNotificationContext.updateNotificationStatus method) does not change the one in the root object (oav1alpha1.ManagedFleetNotificationRecord); this is why the first part of this method finds the record item back in the root record object and updates it with the one from the context.
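For illustration, here is a minimal sketch (with simplified stand-in types, not the real oav1alpha1 structs) of why the returned copy has to be written back into the parent object:

```go
package webhook

// Simplified stand-ins for the real oav1alpha1 types.
type NotificationRecordItem struct {
	FiringNotificationSentCount int
}

type ManagedFleetNotificationRecord struct {
	Items []NotificationRecordItem
}

// getNotificationRecordItem mimics the current behaviour: it returns a copy
// of the slice element, so mutating the returned value does not touch r.
func (r *ManagedFleetNotificationRecord) getNotificationRecordItem(i int) NotificationRecordItem {
	return r.Items[i]
}

func example() {
	r := &ManagedFleetNotificationRecord{Items: []NotificationRecordItem{{}}}

	item := r.getNotificationRecordItem(0)
	item.FiringNotificationSentCount++ // only the local copy changes

	// r.Items[0].FiringNotificationSentCount is still 0 here, which is why
	// inPlaceStatusUpdate first has to locate the matching item in r and
	// copy the updated values back:
	r.Items[0] = item
}
```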

notificationRecordItem.FiringNotificationSentCount > notificationRecordItem.ResolvedNotificationSentCount
// Counters are identical when no limited support is active
// Sent counter is higher than resolved counter by 1 when limited support is active
// TODO(ngrauss): record the limited support reason ID in the NotificationRecordItem object to be able to


todo? ;)

Contributor Author


Yes, comparing counters to know if a LS was sent or not is not the proper way to track a state.
Ideally there should be:

  • A boolean field to know whether the alert is firing or not. This field would be updated in an atomic way.
  • A string field containing the LS reason ID. This field wouldn't be updated in an atomic way. It would be possible to have the above "firing" boolean field set to "true" while this LS ID field is still empty, for instance if the LS failed to be sent.

The ticket I discussed before should cover that (a rough sketch of such fields follows below).
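A rough sketch of what such a record item could look like (the field names are hypothetical, not an agreed API change):

```go
// Hypothetical shape for the record item discussed above; this is not the
// current oav1alpha1.NotificationRecordItem API.
type NotificationRecordItem struct {
	// Firing is flipped with an atomic test-and-set on the status.
	Firing bool `json:"firing"`
	// LimitedSupportReasonID is filled in after the LS has actually been
	// sent; it may still be empty while Firing is true (e.g. the LS send
	// failed and will be retried).
	LimitedSupportReasonID string `json:"limitedSupportReasonID,omitempty"`
}
```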

if resolvedCondition != nil {
lastWebhookCallTime := resolvedCondition.LastTransitionTime

if nowTime.Before(lastWebhookCallTime.Add(3 * time.Minute)) {


question: is the 3 minutes here intentional? The comment above says ServiceLogSent may be updated within up to 2 minutes

learning question for me, where does the 2 minutes max allowed time come from? :D

Contributor Author


It comes from here:
https://pkg.go.dev/k8s.io/client-go/util/retry#RetryOnConflict
And the associated call to this retry.RetryOnConflict function later in the code.

Remark that the sleep duration between the retries is specified there:
https://pkg.go.dev/k8s.io/client-go/util/retry#pkg-variables

The DefaultRetry does not specify a Cap, but doing the math, knowing that:

  • The initial duration is 10 ms
  • The multiplication factor is 1 at minimum and 1.1 at maximum due to jittering
  • There are at most 5 retries (so 4 sleeps between retries)

the maximum cumulated sleep time is 4 × 1.1^4 × 10 ms ≈ 60 ms.

Now for each retry there are 2 API calls (so 10 calls in total):

  • One to get the object
  • One to update the object status

Each of those calls may take time to return... it is not yet clear whether a timeout is defined in the kubeconfig used by the ocm-agent pods, but to stay within the 2 minutes window discussed in the comment, each call must not exceed 12 seconds. A minimal sketch of this retry loop follows below.
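For reference, a minimal sketch of that loop, assuming the record is fetched and its status updated inside retry.RetryOnConflict with retry.DefaultRetry (the recordClient interface and Record type are illustrative, not the real ocm-agent client):

```go
package webhook

import (
	"context"

	"k8s.io/client-go/util/retry"
)

// Record stands in for oav1alpha1.ManagedFleetNotificationRecord.
type Record struct {
	Status string
}

// recordClient models the two API calls made on each attempt.
type recordClient interface {
	GetRecord(ctx context.Context, name string) (*Record, error)
	UpdateRecordStatus(ctx context.Context, rec *Record) error
}

// atomicStatusUpdate re-reads the object and retries the status update on
// conflict. retry.DefaultRetry is {Steps: 5, Duration: 10ms, Factor: 1.0,
// Jitter: 0.1}, so the cumulated sleep between attempts stays around the
// ~60 ms computed above; the rest of the 2 minutes budget is the latency of
// the two API calls performed on each attempt.
func atomicStatusUpdate(ctx context.Context, c recordClient, name, desired string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// First API call: fetch the latest version of the record.
		rec, err := c.GetRecord(ctx, name)
		if err != nil {
			return err
		}
		// Mutate the fresh copy, then attempt the status update.
		rec.Status = desired
		// Second API call: a Conflict error makes RetryOnConflict loop again.
		return c.UpdateRecordStatus(ctx, rec)
	})
}
```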

if c.retriever.fleetNotification.ResendWait > 0 {
dontResendDuration = time.Duration(c.retriever.fleetNotification.ResendWait) * time.Hour
} else {
dontResendDuration = time.Duration(3) * time.Minute


thought: if this is the default resendWait time, is 3 minutes a bit low?
Not sure IIUC: if an alert fires every 4 minutes, will this resend every time?

Contributor Author


This 3 minutes resend time is only there to avoid the duplicates; it is not there to handle a flickering alert.
You see, AlertManager may send the same alert several times to OCM agent (due to redundancy).
Let's imagine we are sending service logs and not LS: in that case the "resolved" counter is not updated, and there is therefore nothing in the record we can use to know whether the alert was already firing or not.
The first alert will update the LastTransitionTime with the current time and a SL will be sent.
The second alert will try to do the same, but as the LastTransitionTime and the current time are more or less the same, it won't (see the sketch below).
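A minimal sketch of that duplicate check, assuming the firing condition carries the LastTransitionTime mentioned above (the Condition type and field names are simplified):

```go
package webhook

import "time"

// Condition is a simplified stand-in for the AlertFiring condition tracked on
// the ManagedNotification status.
type Condition struct {
	LastTransitionTime time.Time
}

// isDuplicate reports whether an incoming firing alert should be treated as a
// duplicate of one that was already notified: the firing condition last
// transitioned less than dontResendDuration ago (3 minutes by default when
// ResendWait is not set).
func isDuplicate(firing *Condition, now time.Time, dontResendDuration time.Duration) bool {
	if firing == nil {
		return false // never notified before, so definitely not a duplicate
	}
	return now.Before(firing.LastTransitionTime.Add(dontResendDuration))
}
```

With the default 3 minutes window, the redundant copy of the alert arrives well inside the window and is dropped.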
