Change Rate Limiter state dynamically based on external value #320
Hi Jose - There's nothing like this currently. One option would be to support building a rate limiter whose rate is computed from the execution context, for example:

```java
RateLimiter<HttpResponse> limiter = RateLimiter.smoothBuilder(ctx -> {
  int rate = Integer.valueOf(ctx.getLastResult().getHeader("rate"));
  return Rate.of(rate, Duration.ofSeconds(1));
}).build();
```

Another alternative would be to allow you to simply set a new rate limit against the policy:

```java
rateLimiter.setRate(10, Duration.ofSeconds(1));
```

Do you have a preference either way? It would also be good to know: how often do you see the rate changing? Will it change on each execution?

One of the complexities with changing a rate limiter that's in use is that some threads may be waiting to acquire a permit (sleeping) based on the current rate limiter configuration, and when the configuration changes they may need to wait even longer, or they may wait longer than needed.
Hi Jonathan,

Thanks for the quick reply. I quite like the idea of updating the policy by setting a new rate limit. This could work pretty well with the flow I have in mind (an OkHttp network interceptor that checks the values of the headers and updates the policy accordingly).

To answer your question, the rate will remain consistent: it will always allow 60 requests per minute. However, as mentioned before, the problem is that you never know how many requests you have already used in that 60-second window before the first request (at application start). Of course, this is all for a single-instance application. If you have more than one instance running, you'd definitely need to check the response headers to determine how many requests you've got left. In that situation, being able to update the rate limiter dynamically becomes extremely important.

As for existing threads waiting to acquire a permit to execute, I think that could be avoided.

Thanks,
Jose
So it sounds like on the server side the accepted rate will be constant, but on the client side the response header will be constantly changing? Presumably this means we may need to constantly update the RateLimiter.

Maybe you could describe how you see it being used. For example, create the RateLimiter with some initial rate, then update it from an interceptor as the response headers come in?
Definitely. With multiple clients our client side rate limiter is eventually consistent with the server at best. So there's always a chance we'll attempt a request that the server rejects due to rate limiting. In this case, if a server is already going to reject client side requests, then I wonder what the goal of a client side rate limiter is: to evenly spread out requests (smooth)? I'm not sure a bursty rate limiter is useful since the server already performs that job for us (though I haven't thought about it much yet).
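For reference, the two flavors look roughly like this; a minimal sketch assuming the Failsafe 3.x RateLimiter builders, with the 60-per-minute figures as placeholders:

```java
import dev.failsafe.RateLimiter;
import java.time.Duration;

class RateLimiterFlavors {
  public static void main(String[] args) {
    // Smooth: spreads permits evenly across the period
    // (60 per minute works out to roughly one permit per second).
    RateLimiter<Object> smooth =
        RateLimiter.<Object>smoothBuilder(60, Duration.ofMinutes(1)).build();

    // Bursty: allows up to 60 executions anywhere within each one-minute
    // window, even all at once, then makes callers wait for the next window.
    RateLimiter<Object> bursty =
        RateLimiter.<Object>burstyBuilder(60, Duration.ofMinutes(1)).build();

    System.out.println(smooth.tryAcquirePermit()); // true on the first call
    System.out.println(bursty.tryAcquirePermit()); // true on the first call
  }
}
```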
Yep, but they still could end up waiting longer than necessary if the rate changes while they're waiting.
Let's imagine the following scenario: the application has already used part of its allowance in the current 60-second window and is then restarted.
As you can see, once the application is restarted the RateLimiter no longer knows how many requests have already been consumed in that window. The response returned by the target API is the only source of truth.
Yes, that's exactly how I see it being used. Using an interceptor that checks the response headers (max, used, and remaining requests) and updates the RateLimiter accordingly.
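Roughly what I have in mind for the interceptor, as a sketch only: the header name is made up, and the part that actually adjusts the rate limiter is left out, since that's exactly what's missing today:

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;
import okhttp3.Interceptor;
import okhttp3.Response;

// Tracks the allowance reported by the API after every response.
// Register with: new OkHttpClient.Builder().addNetworkInterceptor(new RateLimitInterceptor()).build()
public class RateLimitInterceptor implements Interceptor {
  private final AtomicInteger remaining = new AtomicInteger(Integer.MAX_VALUE);

  @Override
  public Response intercept(Chain chain) throws IOException {
    Response response = chain.proceed(chain.request());
    String header = response.header("X-RateLimit-Remaining"); // assumed header name
    if (header != null) {
      remaining.set(Integer.parseInt(header));
    }
    return response;
  }

  // How many requests the server says are left in the current window.
  public int remaining() {
    return remaining.get();
  }
}
```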
Perhaps. I probably went for a RateLimiter because, at first glance, it sounded like the most natural option. However, the rate information is returned by the target API, and all my client needs to do is open/close the gate to allow more requests to go through. The other thing that I forgot to mention is that a failed request due to the rate being exceeded (i.e. response code 429) also counts as a request.
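The "gate" I'm picturing is nothing more than this kind of toggle (a toy sketch, not anything from Failsafe), which the interceptor would close on a 429 or when the remaining count hits zero:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A trivial on/off gate; callers check it before sending a request.
class RequestGate {
  private final AtomicBoolean open = new AtomicBoolean(true);

  boolean isOpen() { return open.get(); }
  void close() { open.set(false); }
  void reopen() { open.set(true); }
}
```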
That's what I'm wondering since the server is already doing the rate limiting for you. The client just needs to follow the response from the server.
Yea, I think either a RetryPolicy or a CircuitBreaker could work here. With either policy, you'd start by handling a 429 response:

```java
handleResultIf(response -> response.getStatus() == 429)
```

If your server returns something like an X-Retry-After header, you could use that as the delay:

```java
withDelayFn(ctx -> Duration.ofSeconds(Long.parseLong(ctx.getLastResult().getHeader("X-Retry-After"))))
```

Else you'd have to guess at some other delay. For a CircuitBreaker, that could be a fixed delay:

```java
withDelay(Duration.ofMillis(500))
```

For a RetryPolicy, you could use a backoff:

```java
withBackoff(Duration.ofMillis(10), Duration.ofSeconds(1))
```

When using a RetryPolicy, you'd also want to cap the number of attempts:

```java
withMaxRetries(3)
```

Whether to use a RetryPolicy or a CircuitBreaker is up to you.
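Putting those pieces together, here's a minimal sketch assuming the Failsafe 3.x builder API and a hypothetical HttpResponse wrapper; the real response type and HTTP call are up to you:

```java
import dev.failsafe.Failsafe;
import dev.failsafe.RetryPolicy;
import java.time.Duration;

class RateLimitRetry {
  // Hypothetical response wrapper; getStatus()/getHeader(...) stand in for
  // whatever the real HTTP client exposes.
  interface HttpResponse {
    int getStatus();
    String getHeader(String name);
  }

  static final RetryPolicy<HttpResponse> RETRY_ON_429 =
      RetryPolicy.<HttpResponse>builder()
          // Treat 429 responses as retryable failures.
          .handleResultIf(response -> response.getStatus() == 429)
          // Back off between attempts, from 10 ms up to 1 s.
          .withBackoff(Duration.ofMillis(10), Duration.ofSeconds(1))
          // Give up after a few attempts.
          .withMaxRetries(3)
          .build();

  static HttpResponse send() {
    // sendRequest() is a placeholder for the real HTTP call.
    return Failsafe.with(RETRY_ON_429).get(RateLimitRetry::sendRequest);
  }

  static HttpResponse sendRequest() {
    throw new UnsupportedOperationException("placeholder for the real HTTP call");
  }
}
```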
Hi Jonathan,

Apologies for the late reply, but life got in the way. In the end, I opted for doing the following:
This approach seems to be working quite well for now. Thanks a lot for your input on this issue, much appreciated. You can close the issue now.
Hi,
I recently started using this library. My apologies if this is covered somewhere in the docs and I missed it.
My situation: I'm consuming a third-party API that enforces a rate limit and reports the current usage through response headers (max, used, and remaining requests).
Problem:
When the application starts, the "remaining" value is unknown until the first response is received. For this reason, I cannot configure a RateLimiter properly.
Question:
Is there a way to update the state of the rate limiter dynamically? I'd like to, on every request/response loop, check the rate limit headers and update the rate limiter accordingly.
Would the above be possible, or is there any other way to achieve the same result?
I was also considering making an initial request to the API to get the value before instantiating the Rate Limiter.
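A sketch of that priming-request idea using the JDK HTTP client; the URL and header name are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Call the API once just to read the rate limit headers before
// configuring the rate limiter.
class RateLimitProbe {
  static int fetchRemaining() throws Exception {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request =
        HttpRequest.newBuilder(URI.create("https://api.example.com/ping")).build();
    HttpResponse<Void> response =
        client.send(request, HttpResponse.BodyHandlers.discarding());
    return response.headers()
        .firstValue("X-RateLimit-Remaining") // assumed header name
        .map(Integer::parseInt)
        .orElse(-1); // header missing: remaining allowance unknown
  }
}
```

Keep in mind the probe itself uses up one request from the current window.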
Thanks!
Best,
Jose.-