Re Q2: resetting the counter every 1 s could have a huge impact on the CPU. You could implement a new sampling policy through a plugin. We don't have one upstream, but it is not hard to build yourself.

Re Q3: 3 billion is huge, and I highly doubt that 3 machines could support that. Usually 10-15 high-spec machines for OAP plus another 30 nodes for Elasticsearch can support a billion-level deployment; a 32U64G spec is very common for high-payload deployments. We are working toward BanyanDB 1.0, which should require much less than Elasticsearch, but BanyanDB is a new thing and only suitable for people willing to adopt new things.

Q4 has been covered by the answer to Q1.
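To illustrate the kind of policy Q2 asks for, here is a minimal sketch of an "at least one sample per second" decision. This is plain Java, not the agent's actual plugin API: the `MinRateSampler` class and `trySample` method are hypothetical names, and a real plugin would still have to wire this logic into the agent's sampling service for the agent version you run.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical sketch of a "minimum one sample per second" policy.
 * It keeps an upper bound per 1-second window, but because the counter
 * resets every second, the first request of each second always gets
 * sampled, so low-traffic endpoints are never completely invisible.
 */
public class MinRateSampler {

    private final int samplesPerSecond; // upper bound per 1s window
    private final AtomicInteger windowCount = new AtomicInteger(0);
    private final AtomicLong windowStartMillis =
            new AtomicLong(System.currentTimeMillis());

    public MinRateSampler(int samplesPerSecond) {
        this.samplesPerSecond = samplesPerSecond;
    }

    /** Returns true if this invocation should be traced. */
    public boolean trySample() {
        long now = System.currentTimeMillis();
        long windowStart = windowStartMillis.get();

        // Roll the 1s window lazily; the thread that wins the CAS
        // resets the counter. (A concurrent increment between the CAS
        // and the reset may slightly undercount -- fine for a sketch.)
        if (now - windowStart >= 1000
                && windowStartMillis.compareAndSet(windowStart, now)) {
            windowCount.set(0);
        }

        // Sample while the current window is still under its cap.
        return windowCount.incrementAndGet() <= samplesPerSecond;
    }
}
```

Usage would look like `if (sampler.trySample()) { /* create a traced span */ }`. Note the window is rolled lazily on the calling thread via compare-and-set rather than by a scheduled task that fires every second, which avoids the per-second reset overhead mentioned above.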
I'd like some help with a few questions.

Q1: Isn't metrics aggregation affected by the sample rate? If I invoke a method 10 times in 1 minute but the sample rate is 1/10000, should the CPM be 10 or 0? If the sample rate doesn't affect metrics aggregation, please show me where the aggregation logic lives.

Q2: Do you have a plan to support a minimum-time sampling policy, e.g. take at least 1 sample per second?

Q3: If I want to build an OAP cluster of about 3 machines to support a billion-level data volume, what spec do you suggest? Would 16C64G be fine?

Q4: If the sample rate does affect metrics aggregation, what spec can support a 100% sample rate at a billion-level data volume? And any suggestions for accurate statistics of metrics like CPM?