Watch Request Blocked When Member Cluster Offline #5672
Comments
/cc @RainbowMango @XiShanYongYe-Chang @ikaven1024 Let's take a look at this issue together.
Are events in the other, healthy clusters affected?
@XiShanYongYe-Chang We are unable to receive events from any member cluster through the aggregated apiserver; I suspect the watch request is blocked in
Thank you for your reply. According to your method, the watch connection will be disconnected after a certain period of time, and then the client needs to initiate a new watch request. Do I understand it correctly?
Yes, but my scenario is quite special. The member cluster has gone offline, but it hasn't been removed from the … I will conduct a test to verify.
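To make the exchange above concrete, here is a minimal client-side re-watch loop in Go. It assumes a plain client-go pod watch; the function name and flow are illustrative and not taken from the Karmada code base. It shows both the normal pattern (the server closes the watch, the client starts a new one from the last observed resourceVersion) and where the hang occurs when a member cluster is offline but its watch stream is never closed.

```go
package rewatch

import (
	"context"
	"fmt"

	apimeta "k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchPodsForever keeps re-establishing a pod watch whenever the previous
// one ends, resuming from the last observed resourceVersion. If the server
// (for example, a proxy fronting an offline member cluster) never delivers
// events and never closes the stream, the inner range loop blocks forever,
// which is the behaviour reported in this issue.
func watchPodsForever(ctx context.Context, client kubernetes.Interface, namespace string) error {
	var resourceVersion string
	for {
		w, err := client.CoreV1().Pods(namespace).Watch(ctx, metav1.ListOptions{ResourceVersion: resourceVersion})
		if err != nil {
			return fmt.Errorf("starting watch: %w", err)
		}
		for ev := range w.ResultChan() { // closes when the server terminates the watch
			if acc, err := apimeta.Accessor(ev.Object); err == nil {
				resourceVersion = acc.GetResourceVersion()
			}
			// ... handle the event ...
		}
		// The channel closed; start a new watch unless the caller cancelled.
		if ctx.Err() != nil {
			return ctx.Err()
		}
	}
}
```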
/close
@xigang: Closing this issue.
Hi @xigang, why close this issue?
/reopen
@xigang: Reopened this issue.
@XiShanYongYe-Chang I will submit a fix PR later.
@XiShanYongYe-Chang PR submitted. PTAL.
Has this been confirmed? If so, they can use this case to reproduce it.
What happened:
When a member cluster goes offline, there is a scenario in which the client's Watch request gets blocked and no longer receives pod events.
What you expected to happen:
Should we set a timeout? For example, set a reasonable timeout on cache.Watch() calls using context.WithTimeout or context.WithDeadline so the operation cannot block indefinitely (a rough sketch follows the linked line below).
https://github.com/karmada-io/karmada/blob/master/pkg/search/proxy/store/multi_cluster_cache.go#L354
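A rough sketch of what such a timeout could look like, assuming the underlying Watch respects context cancellation. The watcher interface and helper name below are illustrative, not the actual MultiClusterCache API; they only demonstrate wrapping a Watch call in context.WithTimeout so a call against an offline cluster eventually returns instead of blocking forever.

```go
package proxytimeout

import (
	"context"
	"time"

	metainternalversion "k8s.io/apimachinery/pkg/apis/meta/internalversion"
	"k8s.io/apimachinery/pkg/watch"
)

// watcher abstracts anything that can start a watch, e.g. a per-cluster cache.
// The signature is an assumption modeled on the apiserver storage style, not
// the real Karmada interface.
type watcher interface {
	Watch(ctx context.Context, options *metainternalversion.ListOptions) (watch.Interface, error)
}

// watchWithTimeout starts the watch under a context that is cancelled after
// the given timeout. If the underlying Watch blocks on an offline cluster,
// the deadline forces it to return (provided it honours ctx); an already
// established watch is stopped once the deadline fires.
func watchWithTimeout(parent context.Context, w watcher, opts *metainternalversion.ListOptions, timeout time.Duration) (watch.Interface, error) {
	ctx, cancel := context.WithTimeout(parent, timeout)
	wi, err := w.Watch(ctx, opts)
	if err != nil {
		cancel()
		return nil, err
	}
	go func() {
		<-ctx.Done() // deadline reached or parent cancelled
		wi.Stop()
		cancel()
	}()
	return wi, nil
}
```

One trade-off to weigh: a hard deadline also terminates watches against healthy clusters, so clients must be prepared to re-watch (as discussed earlier in this thread); an alternative is to fail fast only for clusters already known to be offline.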
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
Karmada version (use kubectl-karmada version or karmadactl version):