CloudEventBus circuit breaker stuck in a loop: The circuit is now open and is not allowing calls. #75
Comments
The error is unfortunately due to a payload size restriction on |
Not sure, I guess the dead letter queue might be an option. Anything that isn't getting stuck in a loop forever is better than the current behavior, I think. |
I tend to disagree. My main focus was to ensure reliable and ordered delivery of outbound cloud events. I personally don't see the harm in being stuck in the loop (that's what it's intended for), even though the queue might quickly become absolutely huge in case of delivery failures. @tsurdilo @antmendoza Do you guys have any idea regarding that issue? |
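For illustration, here is a minimal sketch (in Python, not Synapse's actual code) of the behavior described above: outbound events are queued and the head of the queue is retried until delivery succeeds, which preserves ordering at the cost of blocking on a persistent failure. The `publish` callable and the retry delay are assumptions made for the example.

```python
# Illustrative sketch only, not Synapse code: ordered outbound delivery where
# the head of the queue is retried until it succeeds.
import time
from collections import deque

def drain_in_order(queue: deque, publish, retry_delay: float = 5.0) -> None:
    """Deliver queued events in order; block on the head until it succeeds."""
    while queue:
        event = queue[0]
        try:
            publish(event)           # e.g. POST the cloud event to its sink
            queue.popleft()          # only advance once delivery has succeeded
        except Exception:
            time.sleep(retry_delay)  # intentionally stays "stuck in the loop"
```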
No idea @cdavernas, sorry. But the error is weird: is the payload really that big? I have not been able to see the error in my environment; it seems to fail earlier, I think.
Do I have to provide any input to the workflow? |
Sorry, I was running the previous version of Synapse; now the error is different. Pasting it here just in case you find it useful.
|
@antmendoza you might need to create the user first on the Petstore Swagger demo API (https://petstore.swagger.io/#/user/createUser); a sketch of that call follows the payload below. Then use a payload similar to this one to start an instance: {
"username": "test",
"password": "test",
"quantityToOrder": 1
} |
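For completeness, a small illustrative script (not part of Synapse) that creates that user against the public Swagger Petstore API referenced above; the username and password simply mirror the sample payload.

```python
# Illustrative helper: create the demo user on the public Swagger Petstore API
# (the createUser operation linked above) before starting a workflow instance.
import requests

def create_petstore_user(username: str = "test", password: str = "test") -> None:
    resp = requests.post(
        "https://petstore.swagger.io/v2/user",
        json={"username": username, "password": password},
        timeout=10,
    )
    resp.raise_for_status()
    print(f"Created Petstore user '{username}'")

if __name__ == "__main__":
    create_petstore_user()
```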
@antmendoza In addition, I see that whenever a worker exits, even successfully, it does not seem to gracefully shut down the gRPC connection, thus resulting in the first set of errors. The second set, though, is due to a bad routing configuration for the error endpoint. |
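As a rough illustration of the graceful shutdown being asked for (a Python grpcio sketch, not the actual Synapse worker code; the address is a placeholder), the client channel should be closed explicitly before the worker process exits:

```python
# Illustrative sketch: explicitly close the gRPC channel when the worker exits,
# instead of letting the process terminate with the connection still open.
import grpc

def run_worker() -> None:
    channel = grpc.insecure_channel("localhost:41387")  # address is an assumption
    try:
        # ... perform the worker's calls against the runtime API here ...
        pass
    finally:
        channel.close()  # graceful shutdown of the connection

if __name__ == "__main__":
    run_worker()
```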
Yes, in some cases, such as when taking the unfiltered result of Swagger's Petstore findPetByStatus action, the payload can (but really shouldn't) be considerable. And, being a free service, there is not much we can do about that payload size restriction. The question here is: do we want to enforce proper and ordered delivery of outbound cloud events (and keep retrying indefinitely), or give up after a limited number of retries and move failed events to a dead-letter queue? I personally prefer the first option, as the dead-letter queue would anyway grow continuously if/when/while facing delivery failures. |
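And, for comparison, a hedged sketch of the second option being weighed: cap the number of retries and divert the event to a dead-letter queue instead of blocking forever. The attempt limit, backoff, and `dead_letter` container are illustrative assumptions, not anything Synapse implements.

```python
# Illustrative sketch of the alternative: bounded retries, then dead-letter.
import time
from collections import deque

MAX_ATTEMPTS = 5   # assumed retry budget
BASE_DELAY = 1.0   # assumed backoff base, in seconds

def deliver_or_dead_letter(event, publish, dead_letter: deque) -> bool:
    """Try to deliver an event; park it on the dead-letter queue after MAX_ATTEMPTS."""
    for attempt in range(MAX_ATTEMPTS):
        try:
            publish(event)
            return True
        except Exception:
            time.sleep(BASE_DELAY * 2 ** attempt)  # exponential backoff
    dead_letter.append(event)  # ordering is no longer guaranteed for this event
    return False
```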
Closed as fixed in #366 |
What happened:
The following error keeps appearing in the server logs:
(possibly because the output payload of a workflow/state/action was too big?)
What you expected to happen:
No error, or, at worst, a limited number of retries?
How to reproduce it:
Not sure, try:
Environment:
Windows 10 x64, self-hosted