Replies: 3 comments 1 reply
-
Currently we have this running, and it looks like Vector is having trouble keeping up: the Recv-Q from a high-volume host keeps growing when checked with "ss -t -a -n".
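For anyone following along, a rough way to keep an eye on that queue is to filter ss to the listener port (10001, per the config further down in this thread) and watch it over time; this is just a sketch of the check, not output from the affected host:
# watch the Recv-Q (second column) of established connections to the listener;
# a steadily growing value means Vector is not draining the socket fast enough
watch -n 1 'ss -t -n state established "( sport = :10001 )"'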
-
More comments on the issue. Output of ss -t -m -n | grep 175 -A1:
ESTAB 1605607548 0 10.xxx.xxx.xxx:10001 10.xxx.xxx.xxx:37677
This indicates that the receive buffer has almost been fully used. netstat -s ...
Both outputs indicate that Vector's ability to handle data is far behind the rate at which data is received. Note that during this time, CPU, memory, and I/O all look fine (util% < 5%), which means Vector is hung on something or waiting for something; it is not fully using OS resources.
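A hedged sketch of the same checks for anyone reproducing this; the grep patterns below are generic Linux counters, not taken from the original output:
# per-socket memory: with -m, skmem shows the allocated receive buffer (rb)
# and how much of it is currently in use (r)
ss -t -m -n '( sport = :10001 )'
# kernel-wide counters that climb when the application cannot drain sockets
# fast enough (pruned / collapsed receive queues, listen overflows)
netstat -s | grep -iE 'prune|collapse|overflow'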
-
Performance begins to degrade after 22-35 TCP connections.
-
Is there any way to improve Events Per Second (EPS) using TCP/TLS? We seem to be throttled at around 40k EPS, which is less than our UDP rate of about 80k EPS:
source_palo_tls:
  type: syslog
  address: 0.0.0.0:10001
  mode: tcp
  receive_buffer_bytes: 268435456
  max_length: 409600
  tls:
    enabled: true
    crt_file: /etc/vector/cert/logs.pem
    key_file: /etc/vector/cert/logs.key
    ca_file: /etc/vector/cert/ca_chain.pem
    # if sending system presents a cert, set to true
    verify_certificate: false
    verify_hostname: false
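For what it's worth, after tweaking the source it can help to sanity-check the file before restarting; the config path here is an assumption about where the config lives:
vector validate /etc/vector/vector.yaml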
We also set the following in sysctl.d:
net.core.rmem_default = 268435456
net.core.wmem_default = 268435456
net.core.rmem_max = 268435456
net.core.wmem_max = 268435456
net.core.netdev_max_backlog = 2000
net.ipv4.tcp_rmem = 8192 2097152 268435456
net.ipv4.tcp_wmem = 4096 524288 268435456
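In case it is useful for comparison, we apply and check these roughly like this (the exact sysctl.d file name is whatever you used; that part is assumed):
# reload everything under /etc/sysctl.d (and the other standard locations)
sysctl --system
# confirm the running values actually took effect
sysctl net.core.rmem_max net.ipv4.tcp_rmem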