I'm seeing what looks like a race condition: the limiter key is still present in the Redis store even after the code has waited until Context.Reset. The next iteration is therefore rate-limited again, but because Context.Reset is already in the past, the computed sleep duration is negative.
Been seeing this with the following:
- ulule/limiter v3.5.0
- redis store (redis 5.0.8)
Here's a snippet of the code which should be able to reproduce it:
```go
for {
	limit, err := r.Limiter.Get(ctx, key)
	if err != nil {
		log.Printf("[ERROR] failed to fetch rate-limit context: %s", err)
		return err
	}
	log.Printf("[DEBUG] ratelimit context: %+v", limit)

	if limit.Reached {
		// Wait until the limiter window is supposed to reset, then retry.
		sleep := time.Until(time.Unix(limit.Reset, 0))
		log.Printf("[ERROR] client has proactively throttled for %s", sleep.String())
		<-time.After(sleep)
		continue
	}

	// do stuff
}
```
And here are the logs:
```
2020/04/07 19:41:41 [DEBUG] ratelimit context: {Limit:2 Remaining:0 Reset:1586288531 Reached:true}
2020/04/07 19:41:41 [ERROR] client has proactively throttled for 29.954364124s
2020/04/07 19:42:11 [DEBUG] ratelimit context: {Limit:2 Remaining:0 Reset:1586288531 Reached:true}
2020/04/07 19:42:11 [ERROR] client has proactively throttled for -81.245359ms
```
Can anyone else reproduce this?
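
A possible defensive workaround (not a fix for the underlying race) is to clamp the computed sleep to a small minimum backoff, so a negative duration doesn't turn into an immediate retry against a key that is still in the store. A minimal sketch; the `waitForReset` helper and the buffer value are my own naming and choice, not part of ulule/limiter:

```go
package ratelimit

import "time"

// waitForReset sleeps until the given Unix reset timestamp, but never for
// less than minBackoff, so a Reset that is already in the past doesn't
// result in a zero-length sleep and an immediate, still-throttled retry.
func waitForReset(reset int64, minBackoff time.Duration) {
	sleep := time.Until(time.Unix(reset, 0))
	if sleep < minBackoff {
		sleep = minBackoff // assumed buffer, tune as needed
	}
	time.Sleep(sleep)
}
```

In the loop above this would replace the `<-time.After(sleep)` line with something like `waitForReset(limit.Reset, 100*time.Millisecond)`, but it only papers over the negative sleep; the key still seems to expire later than Context.Reset suggests.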