BENCHMARK: Pipy 0.90 Multi-Thread HTTP/1.1
Yesterday Flomesh released Pipy 0.90. The main change in this release is the addition of multi-threading support. The detailed release notes can be found here.
Pipy was originally designed and developed as a sidecar proxy. As it has evolved, it has been used in more and more non-sidecar scenarios. For example, the MQTT protocol support introduced in 0.50 was added to meet users' needs for connecting hundreds of millions of IoT devices, and the multi-threading mode introduced in the current 0.90 release is aimed at users who want to build their own load balancers with Pipy on high-performance hardware. By running Pipy on high-performance white-box servers, one can achieve load-balancing capability comparable to commercial hardware products such as F5 BIG-IP, at a much lower overall cost.
Pipy's multi-threading is built on asio's thread library, and it uses the Linux kernel's port reuse (SO_REUSEPORT) to balance load across the threads.
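A quick way to observe the effect of port reuse (our own illustration, not part of the benchmark itself): with SO_REUSEPORT every worker thread opens its own listening socket on the same port, so listing the listeners shows one entry per thread. Assuming Pipy was started with --reuse-port --threads=4 on port 8080:
ss -lnt 'sport = :8080'
This should print four LISTEN lines, all bound to the same port, and the kernel distributes incoming connections across them.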
In this benchmark we mainly look at three things: whether the number of HTTP/1.1 requests Pipy can handle grows linearly with the number of threads in multi-threaded mode; how many resources are needed to reach one million TPS on the given hardware; and whether there is any obvious memory leak under sustained high-load HTTP/1.1 processing.
For the hardware, we started with a server equipped with a single Intel Xeon Gold 6144. This was a high-end processor when it launched in 2017, with 8 cores, 16 threads, and 24.75 MB of L3 cache; we bought this kind of second-hand server mainly for cost reasons. We tested Pipy with one, two, four, eight, and twelve threads, increasing step by step, and used wrk as the load generator. In the early stages we ran pipy and wrk on the same server, but for the 12-thread test we had to run wrk on a separate AMD Ryzen 5 5600G desktop, with the Intel server running pipy and the AMD desktop running wrk connected by a 10G fiber link. This is a basic test of HTTP/1.1 parsing and network I/O, so it does not demand much memory: the Intel server has 32 GB of RAM and the AMD desktop 64 GB, but very little of it was actually used. HTTP/1.1 is a protocol with a great many details and is the most widely used protocol on the Internet today. This is a baseline test in which pipy returns "hi" directly via PipyJS, similar to a hello-world test; it does not cover complex scenarios, which can be built on top of it.
The Intel server running pipy:
[root@localhost ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6144 CPU @ 3.50GHz
Stepping: 4
CPU MHz: 3500.000
BogoMIPS: 7000.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 25344K
NUMA node0 CPU(s): 0-15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp_epp
[root@localhost ~]# free
              total        used        free      shared  buff/cache   available
Mem:       32253728      839788    22518024       17416     8895916    30993832
Swap:      16252924           0    16252924
The AMD desktop that ran wrk during the 12-thread test:
root@pve8:~# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 80
Model name: AMD Ryzen 5 5600G with Radeon Graphics
Stepping: 0
Frequency boost: enabled
CPU MHz: 782.422
CPU max MHz: 4463.6709
CPU min MHz: 1400.0000
BogoMIPS: 7785.53
Virtualization: AMD-V
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 3 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full AMD retpoline, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
root@pve8:~# free
              total        used        free      shared  buff/cache   available
Mem:       61615912    48749632     5226164       59712     7640116    12094084
Swap:       8388604       54016     8334588
During the 12-thread test, the Intel server and the AMD desktop were connected by a 10G fiber link; the ping latency between them was:
root@pve8:~# ping 10.10.6.1
PING 10.10.6.1 (10.10.6.1) 56(84) bytes of data.
64 bytes from 10.10.6.1: icmp_seq=1 ttl=64 time=0.084 ms
64 bytes from 10.10.6.1: icmp_seq=2 ttl=64 time=0.093 ms
64 bytes from 10.10.6.1: icmp_seq=3 ttl=64 time=0.094 ms
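To double-check that the link actually negotiated 10 Gbit/s, the interface speed can be read with ethtool (an illustrative command, not part of the original test; the interface name enp1s0 is a placeholder):
ethtool enp1s0 | grep -i speed
On a healthy 10G link this reports Speed: 10000Mb/s.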
The server running pipy uses CentOS 7.9:
[root@localhost conf]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
[root@localhost conf]# uname -a
Linux localhost.localdomain 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
In the 12-thread test case, the AMD machine running wrk uses Debian 11.1:
root@pve8:~# cat /etc/debian_version
11.1
root@pve8:~# uname -a
Linux pve8 5.13.19-2-pve #1 SMP PVE 5.13.19-4 (Mon, 29 Nov 2021 12:10:09 +0100) x86_64 GNU/Linux
pipy was downloaded directly from the GitHub releases page (download link). The command used to run it was:
pipy -e 'pipy().listen(8080).serveHTTP(new Message("hi"))' --reuse-port --threads=12
In the tests we tried --threads=1, 2, 4, 8, 10, and 12, and correspondingly ran wrk with 1, 2, 4, 6, 10, and 10 threads. In the 8-thread pipy case, wrk used 6 threads and ran on the same host as pipy; in the 10- and 12-thread pipy cases, we ran a 10-thread wrk on the AMD desktop. The pipy and wrk versions are as follows:
[root@localhost conf]# pipy -v
Version : 0.90.0-18
Commit : d0ffc6f7613f8b6c4bf79461ea6b546eeb80b378
Commit Date : Thu, 26 Jan 2023 09:36:30 +0800
Host : Linux-5.15.0-1031-azure x86_64
OpenSSL : OpenSSL 1.1.1q 5 Jul 2022
Builtin GUI : No
Samples : No
[root@localhost conf]# wrk -v
wrk 4.2.0 [epoll] Copyright (C) 2012 Will Glozer
Usage: wrk <options> <url>
Options:
-c, --connections <N> Connections to keep open
-d, --duration <T> Duration of test
-t, --threads <N> Number of threads to use
-s, --script <S> Load Lua script file
-H, --header <H> Add header to request
--latency Print latency statistics
--timeout <T> Socket/request timeout
-v, --version Print version details
Numeric arguments may include a SI unit (1k, 1M, 1G)
Time arguments may include a time unit (2s, 2m, 2h)
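The exact wrk command line is not recorded above; as an illustration, a run against the Intel server from the AMD desktop could look like the following (the connection count and duration are assumptions, not the values used in the original test; 10.10.6.1 is the Intel server's address on the 10G link):
wrk -t 10 -c 1000 -d 60s --latency http://10.10.6.1:8080/
All of these options (-t, -c, -d, --latency) appear in the wrk usage output above.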
Throughout the whole test, apart from raising the maximum number of open files, we did not tune any other kernel parameters.
[root@localhost conf]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 120116
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 120116
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
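For reference, the open-files limit shown above can be raised in a couple of standard ways; this is a generic sketch, not necessarily how it was configured on this machine:
# raise the limit for the current shell before starting pipy
ulimit -n 1024000
# or make it persistent by adding these lines to /etc/security/limits.conf
# *  soft  nofile  1024000
# *  hard  nofile  1024000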