
Gossipsub backpressure seems to not work for forwarded messages #6117

@sirandreww-starkware

Description

Summary

When there is a lot of traffic in a gossipsub network, message prioritization favours published messages over forwarded messages, so published messages tend to go through without issue. As a result, the producers of these messages get no indication of backpressure, even though the entire network is effectively dropping all gossip and forwarding, because those are not prioritized.
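
To make the asymmetry concrete, here is a minimal, self-contained model using plain std channels (not the libp2p types; the queue sizes are made up for illustration): forwarded traffic hits a small bounded queue and starts being dropped, while locally published traffic keeps being accepted without complaint.

use std::sync::mpsc;

fn main() {
    // Illustrative sizes only; the real queues and caps live inside gossipsub.
    let (forward_tx, _forward_rx) = mpsc::sync_channel::<u64>(16); // bounded, like the non-priority queue
    let (publish_tx, _publish_rx) = mpsc::channel::<u64>(); // unbounded, like the priority queue

    let mut forwards_dropped = 0u32;
    let mut publishes_rejected = 0u32;

    for i in 0..1_000u64 {
        // Forwarded traffic hits the bounded queue and starts being dropped early...
        if forward_tx.try_send(i).is_err() {
            forwards_dropped += 1;
        }
        // ...while locally published traffic keeps being accepted.
        if publish_tx.send(i).is_err() {
            publishes_rejected += 1;
        }
    }

    println!("forwards dropped: {forwards_dropped}, publishes rejected: {publishes_rejected}");
    // Prints something like: forwards dropped: 984, publishes rejected: 0
}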

Expected behavior

When the network is flooded with messages, publishing should start to fail as soon as forwarded or gossip messages are being dropped.

Actual behavior

There is no indication on the publishing side that this is happening; publish calls keep succeeding.
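
For illustration, a hedged sketch of the producer-side signal one would expect (the function name is made up, and it assumes the PublishError::AllQueuesFull variant of current gossipsub releases; in the scenario described here that error is never reached, because only the priority queue feeds back into publish):

use libp2p::gossipsub::{Behaviour, IdentTopic, PublishError};

// Sketch only: in a real application the swarm must be polled concurrently
// for any of these messages to actually leave the node.
fn publish_until_backpressure(gossipsub: &mut Behaviour, topic: &IdentTopic, payload: &[u8]) {
    loop {
        match gossipsub.publish(topic.clone(), payload.to_vec()) {
            // Keeps succeeding even while peer queues are dropping forwards.
            Ok(_message_id) => continue,
            // The backpressure signal a producer would want to react to.
            Err(PublishError::AllQueuesFull(peers)) => {
                eprintln!("all {peers} peer queues full, backing off");
                break;
            }
            Err(other) => {
                eprintln!("publish failed: {other}");
                break;
            }
        }
    }
}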

Relevant log output

When 10 nodes flood the network with as many messages as backpressure allows, these are the combined logs I start receiving:

2025-08-05T15:52:33.265365Z  WARN libp2p_gossipsub::behaviour: Send Queue full. Could not send Forward { message: RawMessage { source: Some(PeerId("12D3KooWQYhTNQdmr3ArTeUHRYzFg94BKyTkoWBDWez9kSCVe2Xo")), data: [0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 7, 31, 0, 0, 0, 0, 0, 0, 0, 0, 24, 88, 233, 212, 41, 209, 248, 37, 0, 0, 0, 0, 0, 0, 0, 30, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], sequence_number: Some(1754409146581427900), topic: TopicHash { hash: "9pwO9iZEpm2K+wAEFbckQaShVNAVcmbmTr+sZZ/iW0s=" }, signature: Some([175, 42, 27, 53, 9, 247, 193, 143, 23, 169, 98, 238, 26, 253, 156, 144, 52, 11, 81, 95, 232, 205, 205, 121, 118, 121, 26, 10, 38, 253, 33, 106, 10, 17, 84, 204, 227, 51, 160, 74, 118, 13, 201, 172, 204, 38, 105, 219, 112, 150, 177, 98, 119, 16, 154, 92, 9, 69, 10, 111, 43, 168, 97, 9]), key: None, validated: true }, timeout: Delay }. peer=12D3KooWDMCQbZZvLgHiHntG1KwcHoqHPAxL37KvhgibWqFtpqUY
2025-08-05T15:52:33.265379Z  WARN libp2p_gossipsub::behaviour: Send Queue full. Could not send Forward { message: RawMessage { source: Some(PeerId("12D3KooWH3uVF6wv47WnArKHk5p6cvgCJEb74UTmxztmQDc298L3")), data: [0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 5, 243, 0, 0, 0, 0, 0, 0, 0, 0, 24, 88, 233, 211, 227, 232, 160, 112, 0, 0, 0, 0, 0, 0, 0, 30, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], sequence_number: Some(1754409146581785812), topic: TopicHash { hash: "9pwO9iZEpm2K+wAEFbckQaShVNAVcmbmTr+sZZ/iW0s=" }, signature: Some([31, 75, 20, 186, 227, 209, 209, 161, 216, 121, 30, 105, 21, 224, 113, 212, 93, 199, 75, 206, 25, 155, 110, 146, 48, 214, 83, 177, 226, 167, 223, 169, 204, 69, 175, 94, 98, 208, 142, 17, 97, 105, 80, 111, 218, 52, 105, 115, 229, 239, 224, 218, 212, 184, 186, 126, 179, 81, 27, 108, 130, 37, 181, 10]), key: None, validated: true }, timeout: Delay }. peer=12D3KooWLJtG8fd2hkQzTn96MrLvThmnNQjTUFZwGEsLRz5EmSzc
2025-08-05T15:52:33.265413Z  WARN libp2p_gossipsub::behaviour: Send Queue full. Could not send Forward { message: RawMessage { source: Some(PeerId("12D3KooWQYhTNQdmr3ArTeUHRYzFg94BKyTkoWBDWez9kSCVe2Xo")), data: [0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 7, 157, 0, 0, 0, 0, 0, 0, 0, 0, 24, 88, 233, 212, 62, 155, 77, 73, 0, 0, 0, 0, 0, 0, 0, 30, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], sequence_number: Some(1754409146581428026), topic: TopicHash { hash: "9pwO9iZEpm2K+wAEFbckQaShVNAVcmbmTr+sZZ/iW0s=" }, signature: Some([175, 142, 40, 145, 159, 138, 97, 56, 109, 240, 226, 35, 145, 37, 35, 157, 113, 171, 174, 229, 243, 199, 118, 220, 155, 181, 230, 8, 97, 145, 217, 227, 115, 240, 18, 208, 5, 182, 81, 71, 217, 38, 150, 78, 151, 235, 88, 88, 110, 36, 45, 166, 249, 226, 150, 240, 113, 198, 27, 98, 39, 255, 241, 9]), key: None, validated: true }, timeout: Delay }. peer=12D3KooWLJtG8fd2hkQzTn96MrLvThmnNQjTUFZwGEsLRz5EmSzc
2025-08-05T15:52:33.265487Z  WARN libp2p_gossipsub::behaviour: Send Queue full. Could not send Forward { message: RawMessage { source: Some(PeerId("12D3KooWLnZUpcaBwbz9uD1XsyyHnbXUrJRmxnsMiRnuCmvPix67")), data: [0, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 9, 107, 0, 0, 0, 0, 0, 0, 0, 0, 24, 88, 233, 212, 69, 110, 159, 45, 0, 0, 0, 0, 0, 0, 0, 30, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], sequence_number: Some(1754409146592128455), topic: TopicHash { hash: "9pwO9iZEpm2K+wAEFbckQaShVNAVcmbmTr+sZZ/iW0s=" }, signature: Some([167, 24, 77, 215, 227, 232, 194, 209, 180, 116, 225, 134, 97, 199, 48, 51, 104, 54, 10, 86, 236, 213, 183, 253, 51, 12, 125, 242, 217, 37, 115, 1, 210, 153, 189, 118, 167, 213, 114, 148, 76, 252, 232, 144, 49, 45, 182, 249, 255, 36, 70, 95, 38, 63, 43, 215, 173, 192, 16, 116, 168, 202, 49, 0]), key: None, validated: true }, timeout: Delay }. peer=12D3KooWLJtG8fd2hkQzTn96MrLvThmnNQjTUFZwGEsLRz5EmSzc
2025-08-05T15:52:33.265522Z  WARN libp2p_gossipsub::behaviour: Send Queue full. Could not send Forward { message: RawMessage { source: Some(PeerId("12D3KooWQYhTNQdmr3ArTeUHRYzFg94BKyTkoWBDWez9kSCVe2Xo")), data: [0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 7, 162, 0, 0, 0, 0, 0, 0, 0, 0, 24, 88, 233, 212, 64, 251, 67, 40, 0, 0, 0, 0, 0, 0, 0, 30, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], sequence_number: Some(1754409146581428031), topic: TopicHash { hash: "9pwO9iZEpm2K+wAEFbckQaShVNAVcmbmTr+sZZ/iW0s=" }, signature: Some([164, 204, 153, 224, 252, 7, 107, 162, 52, 99, 31, 142, 251, 50, 143, 54, 144, 93, 96, 67, 99, 56, 134, 103, 196, 151, 245, 203, 125, 124, 102, 224, 138, 164, 122, 235, 101, 77, 3, 25, 236, 229, 101, 180, 230, 19, 50, 85, 10, 57, 19, 93, 131, 223, 14, 122, 70, 221, 201, 75, 154, 153, 108, 2]), key: None, validated: true }, timeout: Delay }. peer=12D3KooWLJtG8fd2hkQzTn96MrLvThmnNQjTUFZwGEsLRz5EmSzc

Possible Solution

    #[allow(clippy::result_large_err)]
    pub(crate) fn send_message(&self, rpc: RpcOut) -> Result<(), RpcOut> {
        if let RpcOut::Publish { .. } = rpc {
            // Update number of publish message in queue.
            let len = self.len.load(Ordering::Relaxed);
            if len >= self.priority_cap {
                return Err(rpc);
            }
            self.len.store(len + 1, Ordering::Relaxed);
        }
        let sender = match rpc {
            RpcOut::Publish { .. }
            | RpcOut::Graft(_)
            | RpcOut::Prune(_)
            | RpcOut::Subscribe(_)
            | RpcOut::Unsubscribe(_) => &self.priority_sender,
            RpcOut::Forward { .. } | RpcOut::IHave(_) | RpcOut::IWant(_) | RpcOut::IDontWant(_) => {
                &self.non_priority_sender
            }
        };
        sender.try_send(rpc).map_err(|err| err.into_inner())
    }

In the snippet above, Publish (together with Graft, Prune, Subscribe and Unsubscribe) goes to the priority channel and is only rejected once priority_cap publish messages are already queued, while Forward, IHave, IWant and IDontWant go to the bounded non-priority channel, whose try_send fails when the queue is full; the behaviour then just logs the warning shown above and drops the message. Perhaps publish messages should be treated as non_priority as well, so that a full queue is reported back to the publisher.
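
A minimal sketch of that change, mirroring the match in the snippet above (whether publish should share the bounded queue with forwards, or get its own bounded queue, is left open):

        let sender = match rpc {
            RpcOut::Graft(_)
            | RpcOut::Prune(_)
            | RpcOut::Subscribe(_)
            | RpcOut::Unsubscribe(_) => &self.priority_sender,
            // Moved: publish is now subject to the same bounded queue as forwards,
            // so a full queue is reported back to the caller of `publish`.
            RpcOut::Publish { .. }
            | RpcOut::Forward { .. }
            | RpcOut::IHave(_)
            | RpcOut::IWant(_)
            | RpcOut::IDontWant(_) => &self.non_priority_sender,
        };
        sender.try_send(rpc).map_err(|err| err.into_inner())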

Version

libp2p = "0.56.0"

Would you like to work on fixing this bug?

Yes
