I wrote two minimal HTTP servers for performance testing, one for tide (1.6) and one for Node.js. I expected the tide one to be faster than the Node.js one, but the result is the opposite... Has anyone tried this? :)
Testing environment:

- macOS 10.14.6
- Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz (6 cores)
- rustc 1.50.0 (cb75ad5db 2021-02-10)
Here is the source code for both versions. Both HTTP servers have the same routes and responses:
- `HTML` response ("Benchmark testing") for the default route `/`
- `JSON` response for the `/json-benchmark` route:

```json
{
  "name": "Wison Ye",
  "role": "Administrator",
  "settings": {
    "prefer_language": "English",
    "reload_when_changed": true
  }
}
```
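The server sources themselves are not inlined in this issue. As a rough illustration only (not the author's actual code, which presumably serializes a struct with serde), the `/json-benchmark` payload above could be produced by a hypothetical std-only helper like this:

```rust
// Hypothetical sketch: the exact JSON body served on `/json-benchmark`,
// built with std only for illustration.
fn json_benchmark_body() -> String {
    r#"{"name":"Wison Ye","role":"Administrator","settings":{"prefer_language":"English","reload_when_changed":true}}"#
        .to_string()
}

fn main() {
    let body = json_benchmark_body();
    // The real servers would send this with `Content-Type: application/json`.
    println!("{}", body);
}
```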
Here is my `ulimit -a` output:

```
Maximum size of core files created                         (kB, -c)  0
Maximum size of a process's data segment                   (kB, -d)  unlimited
Maximum size of files created by the shell                 (kB, -f)  unlimited
Maximum size that may be locked into memory                (kB, -l)  unlimited
Maximum resident set size                                  (kB, -m)  unlimited
Maximum number of open file descriptors                        (-n)  1000000
Maximum stack size                                         (kB, -s)  8192
Maximum amount of cpu time in seconds                 (seconds, -t)  unlimited
Maximum number of processes available to a single user         (-u)  3546
Maximum amount of virtual memory available to the shell    (kB, -v)  unlimited
```
Here is the test result:
- NodeJS version:

Recreate the Node project:

```bash
npm init -y
# copy the `benchmark_server.js` to the current folder
npm install restify restify-errors

node --version
# v14.16.0
```
Node spawns 6 cluster workers to serve:
```bash
node benchmark_server.js
# setupMaster Cluster worker amount: 6
# setupMaster Cluster worker "1" (PID: 10719) is online.
# setupMaster Cluster worker "3" (PID: 10721) is online.
# setupMaster Cluster worker "2" (PID: 10720) is online.
# setupMaster Cluster worker "4" (PID: 10722) is online.
# setupMaster Cluster worker "5" (PID: 10723) is online.
# setupMaster Cluster worker "6" (PID: 10724) is online.
# run Worker Process 3 (PID: 10721) | "Benchmark Http Server" is running at http://127.0.0.1:8080
# setupMaster Cluster worker "3" (PID: 10721) is listening on 127.0.0.1:8080.
# run Worker Process 2 (PID: 10720) | "Benchmark Http Server" is running at http://127.0.0.1:8080
# setupMaster Cluster worker "2" (PID: 10720) is listening on 127.0.0.1:8080.
# run Worker Process 6 (PID: 10724) | "Benchmark Http Server" is running at http://127.0.0.1:8080
# setupMaster Cluster worker "6" (PID: 10724) is listening on 127.0.0.1:8080.
# run Worker Process 1 (PID: 10719) | "Benchmark Http Server" is running at http://127.0.0.1:8080
# setupMaster Cluster worker "1" (PID: 10719) is listening on 127.0.0.1:8080.
# run Worker Process 4 (PID: 10722) | "Benchmark Http Server" is running at http://127.0.0.1:8080
# setupMaster Cluster worker "4" (PID: 10722) is listening on 127.0.0.1:8080.
# run Worker Process 5 (PID: 10723) | "Benchmark Http Server" is running at http://127.0.0.1:8080
# setupMaster Cluster worker "5" (PID: 10723) is listening on 127.0.0.1:8080.
```
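Node is explicitly forking one worker per core here, while async-std (tide's default runtime) sizes its executor thread pool from the detected CPU count on its own. As a hedged aside (this is not part of the benchmark code, and `available_parallelism` needs a newer toolchain than the rustc 1.50 used in this issue), that count can be queried from std:

```rust
fn main() {
    // Hardware threads visible to the process; async runtimes typically
    // size their worker pools from this figure.
    let n = std::thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1);
    println!("available parallelism: {}", n);
}
```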
```bash
# `/` default route
wrk --thread 8 --connections 5000 --duration 10s --latency http://127.0.0.1:8080/

Running 10s test @ http://127.0.0.1:8080/
  8 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     7.21ms    7.06ms 165.76ms   90.33%
    Req/Sec    10.55k     5.94k   40.11k    80.46%
  Latency Distribution
     50%    5.57ms
     75%    9.21ms
     90%   14.07ms
     99%   31.48ms
  769416 requests in 10.06s, 151.16MB read
  Socket errors: connect 0, read 1251, write 0, timeout 0
Requests/sec:  76466.35
Transfer/sec:     15.02MB
```
```bash
# `/json-benchmark` route
wrk --thread 8 --connections 5000 --duration 10s --latency http://127.0.0.1:8080/json-benchmark

Running 10s test @ http://127.0.0.1:8080/json-benchmark
  8 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     8.45ms    8.56ms 327.08ms   92.21%
    Req/Sec     9.94k     4.52k   28.71k    73.51%
  Latency Distribution
     50%    6.66ms
     75%    9.82ms
     90%   15.27ms
     99%   34.71ms
  729305 requests in 10.06s, 206.57MB read
  Socket errors: connect 0, read 1488, write 3, timeout 0
Requests/sec:  72481.25
Transfer/sec:     20.53MB
```
- Rust version:

Recreate the Rust project:

```bash
cargo new benchmark
```

Add the dependencies to `Cargo.toml`:

```toml
tide = "~0.15"
async-std = { version = "1.8.0", features = ["attributes"] }
async-trait = "^0.1.41"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
```

Build the release version:

```bash
cargo build --bin benchmark_server --release
```
```bash
./target/release/benchmark_server
[ Benchmark Server Demo ]
Benchmark Server is listening on: 0.0.0.0:8080
```
```bash
# `/` default route
wrk --thread 8 --connections 5000 --duration 10s --latency http://127.0.0.1:8080/

Running 10s test @ http://127.0.0.1:8080/
  8 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    10.16ms    6.56ms 228.77ms   69.81%
    Req/Sec     8.15k     5.77k   42.38k    88.54%
  Latency Distribution
     50%    9.14ms
     75%   14.20ms
     90%   17.50ms
     99%   23.26ms
  577136 requests in 10.08s, 73.82MB read
  Socket errors: connect 0, read 1493, write 3, timeout 0
Requests/sec:  57266.86
Transfer/sec:      7.32MB
```
```bash
# `/json-benchmark` route
wrk --thread 8 --connections 5000 --duration 10s --latency http://127.0.0.1:8080/json-benchmark

Running 10s test @ http://127.0.0.1:8080/json-benchmark
  8 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    11.39ms    5.93ms 216.63ms   66.60%
    Req/Sec     7.73k     3.47k   24.06k    67.97%
  Latency Distribution
     50%   10.51ms
     75%   15.89ms
     90%   18.91ms
     99%   22.02ms
  573039 requests in 10.08s, 119.74MB read
  Socket errors: connect 0, read 1120, write 0, timeout 0
Requests/sec:  56862.43
Transfer/sec:     11.88MB
```
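Putting the two wrk runs side by side, Node's reported throughput is roughly 1.3x tide's on both routes. A trivial check of the numbers above:

```rust
// Requests/sec reported by wrk above (Node vs. tide).
fn ratio(node: f64, tide: f64) -> f64 {
    node / tide
}

fn main() {
    println!("`/`:               {:.2}x", ratio(76466.35, 57266.86)); // ~1.34x
    println!("`/json-benchmark`: {:.2}x", ratio(72481.25, 56862.43)); // ~1.27x
}
```

One caveat worth noting from the same output: the per-request transfer differs between the two servers (roughly 206 bytes vs. 134 bytes per request on `/`, from the "MB read" totals), so the two servers are not returning byte-identical responses, which complicates a strictly apples-to-apples comparison.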