Replies: 1 comment
-
@LewisRye Hello there! I'm here to help you with your Elasticsearch issue. Let's get to the bottom of this together. To address the "all shards failed" error in Elasticsearch when used as a Jaeger backend, you can try the following steps to increase the memory allocation for Elasticsearch:
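(The step list itself did not survive the page load, but a typical approach with the official `elastic/elasticsearch` Helm chart is to raise the JVM heap and the container memory limit together — the heap is conventionally set to about half the container memory, and the two must stay in sync. The value names below, `esJavaOpts` and `resources`, are the elastic chart's values; treat the exact sizes as placeholder assumptions and adjust to your chart and workload:)

```yaml
# values-es.yaml — a minimal sketch, assuming the elastic/elasticsearch Helm chart.
# The JVM heap (-Xms/-Xmx) should be roughly 50% of the container memory limit.
esJavaOpts: "-Xms2g -Xmx2g"

resources:
  requests:
    cpu: "1000m"
    memory: "4Gi"
  limits:
    cpu: "1000m"
    memory: "4Gi"
```

Applied with something like `helm upgrade elasticsearch elastic/elasticsearch -f values-es.yaml`. If setting the heap environment variable directly on the pod had no effect, a likely cause is that the chart templates `ES_JAVA_OPTS` from its own `esJavaOpts` value, so edits made outside the chart's values are overwritten on the next sync.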
These steps should help mitigate the "all shards failed" error by ensuring Elasticsearch has sufficient resources to handle the load.
-
What happened?
I have installed Elasticsearch as the Jaeger storage backend through the Helm chart. Whenever I run a load test against my deployment, traces are not collected and the Jaeger UI returns error 503: service unavailable (all shards failed). It should be noted that everything works as intended when there are fewer traces.
I believe this error is caused by a lack of memory allocated to Elasticsearch, and the low memory allocation shown by kubectl supports this. However, I do not know how to increase the limit. I have tried the Java heap allocation environment variable and Kubernetes resource requests, but without success.
Does anyone know why this is happening?
Steps to reproduce
Expected behavior
I want to give Elasticsearch enough memory that it stops returning error 503.
Relevant log output
Screenshot
Additional context
No response
Jaeger backend version
No response
SDK
No response
Pipeline
No response
Storage backend
Elasticsearch
Operating system
No response
Deployment model
No response
Deployment configs