grpclb: include fallback reason in error status of failing to fallback #8035
Conversation
Force-pushed from 51142fc to e5e2e04:

…lback (aka, no fallback addresses provided by resolver), by including the original cause of entering fallback. This falls into cases:
- balancer RPC timeout (includes a timeout message)
- balancer RPC failed before receiving any backend addresses (use the error that occurred in the balancer RPC)
- all balancer-provided addresses failed, while the balancer RPC had failed causing fallback (use the error status for one of the balancer-provided backends)
Force-pushed from e5e2e04 to 4558b74.
@@ -717,6 +743,7 @@ private void handleStreamClosed(Status error) {
     cleanUp();
     propagateError(error);
     balancerWorking = false;
+    fallbackReason = error;
This may not be UNAVAILABLE. We need to create a new Status.
What about the propagateError(error) two lines above? I was wanting to delete that line. That line fails RPCs for a short time window between the balancer RPC being closed and trying fallback. Right after fallback is attempted, if failing to fallback, RPCs will change to fail with fallbackReason (which is the same status as the balancer's failure plus a "fail to fallback" message).

So I am wondering if we should remove the propagateError(error) line here and fail RPCs with a single status, after attempting fallback.
propagateError() is called in two places. One of them isn't what it seems: InetAddress.getByAddress() only throws UnknownHostException "if IP address is of illegal length", so the error string "Host for server not found" is wrong.

propagateError() does two things: log and adjust the picker. For logging, we really want to log the original Status, so error here. But we can't use error directly for the picker, even if it is only for a short period of time.

> So I am wondering if we should remove the propagateError(error) line here and fail RPCs with a single status, after attempting fallback.

That's a functional change, as you no longer cause failures if fallback succeeds. I don't think we'd choose the behavior based on what makes the implementation easiest. I think we want it to behave a certain way in this case. I thought grpclb was supposed to try fallback before failing RPCs, at least when starting up. I honestly don't know where to look up the expected behavior in this case.

Calling @markdroth to help inform us of when gRPC-LB should begin failing RPCs.
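For reference, a standalone sketch of the InetAddress.getByAddress() behavior mentioned above (plain JDK usage, not grpclb code): the call does no host resolution and only rejects byte arrays of illegal length.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class GetByAddressDemo {
  public static void main(String[] args) throws UnknownHostException {
    // A 4-byte (IPv4) or 16-byte (IPv6) array is accepted without any lookup.
    InetAddress ok = InetAddress.getByAddress(new byte[] {127, 0, 0, 1});
    System.out.println(ok); // prints "/127.0.0.1"

    try {
      // Only an array of illegal length makes this throw UnknownHostException,
      // so a "Host for server not found" message would be misleading here.
      InetAddress.getByAddress(new byte[] {127, 0, 0});
    } catch (UnknownHostException e) {
      System.out.println("illegal length: " + e);
    }
  }
}
```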
I don't have enough context here to know which specific cases you're asking about.
In general, there are two types of grpclb fallback, fallback at startup and fallback after startup.
Fallback at startup is triggered in the following cases:
- When the fallback timer fires before we have received the first response from the balancer.
- When the balancer channel goes into TRANSIENT_FAILURE before reaching READY. (This short-circuits the fallback timer.)
- When the balancer call finishes (regardless of status) without receiving the first response from the balancer. (This short-circuits the fallback timer.)
Fallback after startup occurs only after we receive an initial response from the balancer. It is triggered in the following cases:
- When we get an explicit response from the balancer telling us to go into fallback.
- When both of the following are true:
  - The balancer call has finished (regardless of status) and we have not yet received the first response on the subsequent call.
  - We cannot connect to any of the backends in the last response we received from the balancer.
None of these cases have anything to do with the status of individual data plane calls. However, there are two cases above where fallback is triggered by receiving status on the balancer call, but only when other conditions are also met.
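A rough sketch of these trigger conditions expressed as boolean checks; the class and field names below are hypothetical bookkeeping, not the actual grpclb state machine.

```java
/**
 * Sketch of the fallback trigger conditions described above.
 * The flags are hypothetical, not grpclb internals.
 */
final class FallbackTriggers {
  boolean seenFirstBalancerResponse;           // ever received a response on any balancer call
  boolean fallbackTimerFired;                  // the startup fallback timer expired
  boolean balancerChannelFailedBeforeReady;    // channel hit TRANSIENT_FAILURE before READY
  boolean balancerCallFinishedWithoutResponse; // call ended before its first response
  boolean balancerRequestedFallback;           // explicit "go into fallback" response
  boolean anyBalancerBackendUsable;            // can connect to backends from the last response

  boolean fallbackAtStartup() {
    return !seenFirstBalancerResponse
        && (fallbackTimerFired
            || balancerChannelFailedBeforeReady
            || balancerCallFinishedWithoutResponse);
  }

  boolean fallbackAfterStartup() {
    return seenFirstBalancerResponse
        && (balancerRequestedFallback
            || (balancerCallFinishedWithoutResponse && !anyBalancerBackendUsable));
  }
}
```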
This still did not directly answer the question of whether we should fail RPCs before trying fallback. The specific case we are talking about is when the balancer RPC finishes (regardless of status) and none of the connections to the previously received backends is READY. Do we fail RPCs immediately while trying to use fallback addresses (which implies RPCs may start succeeding again if connections to the fallback backends succeed)? Or do we wait until fallback has been attempted?
In the fallback-at-startup case, we should be in state CONNECTING until we either get connected or go into fallback mode, so we should not fail data plane RPCs until one of those two things happens.
In the fallback-after-startup case, the "get an explicit response from the balancer telling us to go into fallback" case should not depend on whether there are currently any READY connections to balancer-given backends, since it's intended to force clients to go to fallback regardless of whether they are currently connected to backends, and you should fix your implementation if it's not doing that. Given that, there are several cases here:
- If we can't reach any of the balancer-provided backends before we go into fallback mode (e.g., if the backend connections fail before either the balancer connection fails or the balancer explicitly tells us to go into fallback), then we will fail some data plane RPCs.
- If we are in contact with the balancer-provided backends and the balancer tells us to go into fallback mode, we should not fail any RPCs; we should keep using the balancer-provided backends while we get in contact with the fallback backends.
- If we are in contact with the balancer-provided backends and the balancer call fails, and then we lose contact with the balancer-provided backends, it's a bit of a grey area. In principle, I suppose we should go into state CONNECTING here and queue data plane RPCs instead of failing them, but if we actually fail some RPCs instead, I think we can probably live with that.
@apolcyn may want to weigh in here as well.
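To illustrate "stay CONNECTING and queue RPCs" versus failing them, here is a minimal sketch using gRPC Java's generic picker API; these pickers are illustrative, not the grpclb implementation.

```java
import io.grpc.LoadBalancer.PickResult;
import io.grpc.LoadBalancer.PickSubchannelArgs;
import io.grpc.LoadBalancer.SubchannelPicker;
import io.grpc.Status;

/** Sketch only: a picker that queues RPCs while the balancer policy is CONNECTING. */
final class QueueingPicker extends SubchannelPicker {
  @Override
  public PickResult pickSubchannel(PickSubchannelArgs args) {
    // "No result" buffers the RPC until the LB updates the picker again.
    return PickResult.withNoResult();
  }
}

/** Sketch only: a picker that fails RPCs, as a TRANSIENT_FAILURE state would. */
final class FailingPicker extends SubchannelPicker {
  private final Status status;

  FailingPicker(Status status) {
    this.status = status;
  }

  @Override
  public PickResult pickSubchannel(PickSubchannelArgs args) {
    return PickResult.withError(status);
  }
}
```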
+1 to everything @markdroth just described.
Also note that go/grpclb-explicit-fallback describes the expected behavior of clients when receiving a fallback response from a balancer.
> That line fails RPCs for a short time window between the balancer RPC being closed and trying fallback.
I just realized that sounded similar to b/138458426. I had found a path through the code that could cause that but #6657 looked like it'd fix it. Maybe there was a second path through the code? And apparently Go might still have this problem?
From the description in b/138458426#comment4 ("the client enters transient failure because all subchannels are 'connecting', and one has entered 'transient failure', so the pending pick fails"), I'd suspect that was due to the issue described in #7959, which was fixed recently.
> In the fallback-after-startup case, the "get an explicit response from the balancer telling us to go into fallback" case should not depend on whether there are currently any READY connections to balancer-given backends, since it's intended to force clients to go to fallback regardless of whether they are currently connected to backends, and you should fix your implementation if it's not doing that.
Sorry, what I mentioned in #8035 (comment) was wrong. Balancer-forced fallback is handled correctly: the client stops using balancer-provided backends immediately, even if there are READY connections.

Actually our implementation looks fine for handling the grey area:
- If connections to all balancer-provided backends fail before the balancer RPC becomes broken: client RPCs fail with the status from one of the broken subchannels. After the balancer RPC fails, before fallback is attempted, the status used to fail client RPCs changes to the one from the balancer RPC.
- If the balancer RPC fails before connections to all balancer-provided backends become broken: client RPCs do not fail until the latter happens. After connections to all balancer-provided backends fail, before fallback is attempted, the status used to fail client RPCs is from one of the broken subchannels.
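One way to read those two cases: client RPCs only start failing once all balancer-provided backends are broken, and the failing status comes from whichever failure happened most recently. A hypothetical sketch of that reading, not the actual grpclb code:

```java
import io.grpc.Status;

/**
 * Hypothetical sketch of the grey-area behavior described above:
 * RPCs only fail once all balancer-provided backends are broken,
 * using the status from the most recent failure.
 */
final class GreyAreaStatusTracker {
  private Status latestFailure;   // subchannel or balancer RPC failure, whichever is newer
  private boolean allBackendsBroken;

  void onAllBackendConnectionsBroken(Status subchannelStatus) {
    allBackendsBroken = true;
    latestFailure = subchannelStatus;
  }

  void onBalancerRpcBroken(Status balancerStatus) {
    latestFailure = balancerStatus;
  }

  /** Status to fail client RPCs with, or null to keep serving/queueing them. */
  Status statusForFailingClientRpcs() {
    return allBackendsBroken ? latestFailure : null;
  }
}
```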
…k reason being overwritten by timeout waiting for balancer.

…s when failing to fallback, attach the original fallback reason to it. This ensures all client RPCs fail with the UNAVAILABLE status code. Errors being logged still carry their original status code.
I updated this a bit: RPCs that fail for reasons not directly related to their connections to the backends (that is, anything that happens before connections to backends are made, such as the balancer RPC breaking before any backends are received, failing to fallback, etc.) will always end up with the UNAVAILABLE status code, with the cause and description attached from the original (immediate-fail or fallback) reason. Some examples are:
- the status code for failing RPCs within the window between the balancer RPC being closed and fallback being attempted (aka, caused by …)

PTAL.
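A minimal sketch of that convention using gRPC Java's Status API; the class name and message text are illustrative, not necessarily the PR's actual code.

```java
import io.grpc.Status;

final class FallbackStatusUtil {
  private FallbackStatusUtil() {}

  /**
   * Sketch: client RPCs always see UNAVAILABLE, while the description and cause
   * of the original reason are preserved. The original status (with its own code)
   * can still be used for logging.
   */
  static Status failToFallbackStatus(Status originalReason) {
    return Status.UNAVAILABLE
        .withDescription(
            "Unable to fallback, no fallback addresses found. Original reason: "
                + originalReason.getDescription())
        .withCause(originalReason.getCause());
  }
}
```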
Enhance error information reflected by RPC status when failing to fallback (aka, no fallback addresses provided by resolver), by including the original cause of entering fallback. This falls into the following cases:
- balancer RPC timeout (includes a timeout message)
- balancer RPC failed before receiving any backend addresses (use the error that occurred in the balancer RPC)
- all balancer-provided addresses failed, while the balancer RPC had failed causing fallback (use the error status for one of the balancer-provided backends)

Note that for cases where connections to fallback addresses fail, the picker already uses one of the fallback addresses' errors. See handleSubchannelState(...) -> maybeUseFallbackBackends() (no-op as it's already using fallback backends) -> maybeUpdatePicker() with backendList being non-empty.

Fixes #7997