Performance issue #1695
I am writing this on the assumption that the GET requests between Vuls and the DB (in this case, the gost DB) are failing due to a timeout. It may be solved by making the gost timeout adjustable.
Line 133 in 4253550
Line 158 in 4253550
If you don't mind, could you please set the hard-coded timeout to a longer value and verify whether the error still occurs? Alternatively, it may be possible to select a DB type that responds a little faster than the one currently in use. If the timeout is not the cause, a different countermeasure must be considered.
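For reference, the lines above point at hard-coded HTTP timeouts in gost/util.go. Below is a minimal sketch of the kind of change being suggested, assuming a plain net/http client; the variable names and timeout values are illustrative and are not the actual Vuls code:

```go
package gost

import (
	"net/http"
	"time"
)

// Hypothetical example: raising the hard-coded timeout on the HTTP client
// used for GET requests against the gost DB. The values below are
// illustrative, not the values in the referenced commit.
var gostHTTPClient = &http.Client{
	// Before: a short hard-coded timeout that can expire when the DB pod
	// is slow to respond.
	// Timeout: 10 * time.Second,

	// After: a longer timeout, to verify whether the errors disappear.
	Timeout: 180 * time.Second,
}

// httpGet routes every lookup against the gost DB through the shared
// client, so raising the timeout here affects all GET requests.
func httpGet(url string) (*http.Response, error) {
	return gostHTTPClient.Get(url)
}
```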
For everyone with the same or a similar error: the fix provided by @MaineK00n works. We had the following errors:
After increasing the timeouts in vuls/gost/util.go, everything works as expected. @MaineK00n, is there a possibility of adding new configuration parameters to set the timeouts in config.toml?
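Purely as an illustration of that request, a configurable timeout could look roughly like the following, assuming a hypothetical timeoutSeconds key in a [gost] section of config.toml decoded with BurntSushi/toml. The key and struct names are invented for this sketch; no such option exists in the current code:

```go
package config

import (
	"time"

	"github.com/BurntSushi/toml"
)

// GostConf is a hypothetical config section; the field and key names
// are invented to illustrate the request, not taken from Vuls.
type GostConf struct {
	URL            string `toml:"url"`
	TimeoutSeconds int    `toml:"timeoutSeconds"`
}

type Config struct {
	Gost GostConf `toml:"gost"`
}

// LoadTimeout reads config.toml and returns the gost HTTP timeout,
// falling back to a default when the key is absent or invalid.
func LoadTimeout(path string) (time.Duration, error) {
	var c Config
	if _, err := toml.DecodeFile(path, &c); err != nil {
		return 0, err
	}
	if c.Gost.TimeoutSeconds <= 0 {
		return 60 * time.Second, nil
	}
	return time.Duration(c.Gost.TimeoutSeconds) * time.Second, nil
}
```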
@GEownt
Hello,
We have deployed the Vuls application following a client-server architecture with the databases as separate services. I will try to explain:
In our architecture, the clients connect to the Vuls server over HTTP, sending the appropriate curl request:
We see this error in the vuls-server:
And after that message appears 3 times (MAX_RETRIES, I guess), we see this error in the vuls-server:
And if we check the curl output, we see the following error:
The architecture we are following has the Vuls server in one k8s pod and each database in a separate pod.
The user connects to the server with curl over HTTP, and the server connects to the databases over HTTP as well.
We think it is a performance issue. We have tried increasing the pods' resources, and that seems to solve the problem partially for a few endpoints, but as soon as we scale to multiple endpoints (more than 3) we seem to hit a performance ceiling and the issues start to appear again. If we only get the CVEs for one endpoint, everything works fine, but the larger the number of endpoints, the more issues appear.
Do you know what could be causing the problem? Is there a parameter we need to tune to improve the performance? Something like spawning child threads?
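As an aside on the last question: in Go, the rough equivalent of child threads is goroutines. The sketch below shows how per-endpoint lookups could be fanned out with a bounded number of workers and a per-request timeout. The endpoint URLs are placeholders and this is only an illustration of the idea, not Vuls's actual scanning code:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"sync"
	"time"
)

// fetch issues a single GET with its own timeout so one slow DB pod
// cannot stall the whole run.
func fetch(ctx context.Context, client *http.Client, url string) error {
	ctx, cancel := context.WithTimeout(ctx, 180*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	// Placeholder DB endpoints, one per pod.
	urls := []string{
		"http://gost-db:1325/health",
		"http://cve-db:1323/health",
		"http://oval-db:1324/health",
	}

	client := &http.Client{}
	sem := make(chan struct{}, 2) // at most 2 requests in flight
	var wg sync.WaitGroup

	for _, u := range urls {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a worker slot
			defer func() { <-sem }() // release it

			if err := fetch(context.Background(), client, u); err != nil {
				fmt.Printf("GET %s failed: %v\n", u, err)
			}
		}(u)
	}
	wg.Wait()
}
```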
Thank you for your time!