Intro
This issue tracks a potential bug or misbehaviour when using several workers on VPP (tested on VPP 23.06).
Problem
We are facing an issue when several workers are configured in the VPP startup config: a basic ping command sent through the GoVPP CLI reports 100% packet loss. A sketch of the underlying API call is shown below.
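For context, the govpp CLI sends the command in-band over the VPP binary API. A minimal sketch of the equivalent call, assuming the stock go.fd.io/govpp bindings and the generated vlib binapi package, looks roughly like this:

package main

import (
	"fmt"
	"log"

	"go.fd.io/govpp"
	"go.fd.io/govpp/binapi/vlib"
)

func main() {
	// Connect to the VPP binary API socket (the same one the govpp CLI uses).
	conn, err := govpp.Connect("/run/vpp/api.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Disconnect()

	ch, err := conn.NewAPIChannel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// Send the CLI command in-band and print whatever VPP returns.
	reply := &vlib.CliInbandReply{}
	if err := ch.SendRequest(&vlib.CliInband{Cmd: "ping 192.168.200.1"}).ReceiveReply(reply); err != nil {
		log.Fatal(err)
	}
	fmt.Println(reply.Reply)
}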
Here are the logs (1 main thread + 2 workers):
vpp# sh threads
ID   Name        Type      LWP     Sched Policy (Priority)  lcore  Core  Socket State
0    vpp_main              160368  other (0)                1      0     0
1    vpp_wk_0    workers   160371  other (0)                2      0     0
2    vpp_wk_1    workers   160372  other (0)                3      0     0
vpp# sh hardware-interfaces
              Name                Idx   Link  Hardware
eth4                               1     up   eth4
  Link speed: unknown
  RX Queues:
    queue thread         mode
    0     vpp_wk_0 (1)   polling
  TX Queues:
    TX Hash: [name: hash-eth-l34 priority: 50 description: Hash ethernet L34 headers]
    queue shared thread(s)
    0     yes    0-2
  Ethernet address fa:16:3e:67:61:ef
  Red Hat Virtio
root@ddd:/home/lab# ./govpp -L trace cli ping 192.168.200.1
TRAC[0000]options.go:74 main.InitOptions() log level set to: trace
TRAC[0000]options.go:84 main.InitOptions() init global options: &{Debug:false LogLevel:trace Color:}
DEBU[0000]cmd_cli.go:127 main.newCliCommand.func1() provided 2 args
TRAC[0000]cmd_cli.go:227 main.newBinapiVppCli() connecting to VPP API socket "/run/vpp/api.sock"
vpp# ping 192.168.200.1
DEBU[0000]cmd_cli.go:192 main.runCliCmd() executing CLI command: ping 192.168.200.1
TRAC[0000]cmd_cli.go:255 main.(*vppcliBinapi).Execute() sending CLI command: "ping 192.168.200.1"
Statistics: 5 sent, 0 received, 100% packet loss
DEBU[0005]cmd_cli.go:269 main.(*vppcliBinapi).Close() disconnecting VPP API connection
We captured a dpdk-input trace: the ICMP packets are received correctly, but the result does not seem to be communicated back properly. Is this because the API is running on the main thread?
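(For reference, such a trace can be captured with the standard VPP trace CLI; the packet count here is arbitrary:)

vpp# clear trace
vpp# trace add dpdk-input 50
vpp# show trace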
Here are the logs (1 main thread):
vpp# sh threads
ID   Name        Type      LWP     Sched Policy (Priority)  lcore  Core  Socket State
0    vpp_main              160396  other (0)                1      0     0
vpp# sh hardware-interfaces
              Name                Idx   Link  Hardware
eth4                               1     up   eth4
  Link speed: unknown
  RX Queues:
    queue thread         mode
    0     main (0)       polling
  TX Queues:
    TX Hash: [name: hash-eth-l34 priority: 50 description: Hash ethernet L34 headers]
    queue shared thread(s)
    0     no     0
  Ethernet address fa:16:3e:67:61:ef
root@ddd:/home/lab# ./govpp -L trace cli ping 192.168.200.1
TRAC[0000]options.go:74 main.InitOptions() log level set to: trace
TRAC[0000]options.go:84 main.InitOptions() init global options: &{Debug:false LogLevel:trace Color:}
DEBU[0000]cmd_cli.go:127 main.newCliCommand.func1() provided 2 args
TRAC[0000]cmd_cli.go:227 main.newBinapiVppCli() connecting to VPP API socket "/run/vpp/api.sock"
vpp# ping 192.168.200.1
DEBU[0000]cmd_cli.go:192 main.runCliCmd() executing CLI command: ping 192.168.200.1
TRAC[0000]cmd_cli.go:255 main.(*vppcliBinapi).Execute() sending CLI command: "ping 192.168.200.1"
116 bytes from 192.168.200.1: icmp_seq=1 ttl=64 time=10.8021 ms
116 bytes from 192.168.200.1: icmp_seq=2 ttl=64 time=.3975 ms
116 bytes from 192.168.200.1: icmp_seq=3 ttl=64 time=.3383 ms
116 bytes from 192.168.200.1: icmp_seq=4 ttl=64 time=.2925 ms
116 bytes from 192.168.200.1: icmp_seq=5 ttl=64 time=.2718 ms
Statistics: 5 sent, 5 received, 0% packet loss
DEBU[0005]cmd_cli.go:269 main.(*vppcliBinapi).Close() disconnecting VPP API connection
Solution
This problem can be worked around by running VPP with only the main thread (no workers), or by pinning the interface's RX queue to the main thread: set interface rx-placement eth4 queue 0 main. Both are sketched below.
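For illustration, the two workarounds look roughly like this (the main-core value below is an assumption; adjust it to your deployment):

# startup.conf: run VPP with the main thread only (no workers configured)
cpu {
  main-core 1
}

# or, at runtime, move the RX queue back to the main thread and verify:
vpp# set interface rx-placement eth4 queue 0 main
vpp# show interface rx-placement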