
Conversation

JasMetzger
Contributor

The device is modeled as an interface, with the number of queues specified in the configuration. open() creates a socket for each queue. The queue IDs are assumed to be 0 .. numQueues - 1, which covers the vast majority of cases.

Aside: if we were only going to outfit the existing solution with a single queue ID, it might as well be 0 and hardwired; no change would be necessary. Dimensioning by the number of queues seemed the right way to go.

Everything should be compatible with the existing API when the default is one queue and queue ID 0. Sockets, stats, etc. are all dimensioned by the number of queues. Any queue ID is checked against the number of queues in the configuration, and an error is logged if it is out of bounds. If an illegal queue ID is passed to getStatistics(), it returns zeroed-out device stats, since that function has to return something.
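
For illustration, the bounds check described above might look roughly like this inside XdpDevice (a sketch only; member names follow this description and are not necessarily the final code):

// Sketch only: queue-checked statistics accessor.
XdpDeviceStats getStatistics(uint32_t queueid = 0)
{
    if (queueid >= m_NumQueues)
    {
        PCPP_LOG_ERROR("Invalid queue id " << queueid << "; the device is configured with " << m_NumQueues << " queues");
        return XdpDeviceStats{};  // zeroed-out stats, since the method must return something
    }
    return m_Stats[queueid];
}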

I can test once I write the kernel side, but I wanted to push this for consideration before going further.

@JasMetzger requested a review from seladb as a code owner on October 20, 2025 12:50

codecov bot commented Oct 20, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 83.81%. Comparing base (0132d27) to head (739fd67).

Additional details and impacted files
@@            Coverage Diff             @@
##              dev    #2000      +/-   ##
==========================================
+ Coverage   83.41%   83.81%   +0.40%     
==========================================
  Files         311      307       -4     
  Lines       55019    54388     -631     
  Branches    11816    11658     -158     
==========================================
- Hits        45892    45588     -304     
- Misses       7852     7942      +90     
+ Partials     1275      858     -417     
Flag Coverage Δ
alpine320 75.89% <ø> (ø)
fedora42 75.83% <ø> (-0.01%) ⬇️
macos-14 81.50% <ø> (ø)
macos-15 81.51% <ø> (ø)
mingw32 70.53% <ø> (ø)
mingw64 70.51% <ø> (+0.10%) ⬆️
npcap ?
rhel94 75.85% <ø> (-0.01%) ⬇️
ubuntu2004 60.13% <ø> (ø)
ubuntu2004-zstd 60.23% <ø> (ø)
ubuntu2204 75.80% <ø> (+<0.01%) ⬆️
ubuntu2204-icpx 60.55% <ø> (ø)
ubuntu2404 75.88% <ø> (ø)
ubuntu2404-arm64 75.56% <ø> (+<0.01%) ⬆️
unittest 83.81% <ø> (+0.40%) ⬆️
windows-2022 85.42% <ø> (+0.16%) ⬆️
windows-2025 85.45% <ø> (+0.11%) ⬆️
winpcap 85.45% <ø> (-0.09%) ⬇️
xdp ?

Flags with carried forward coverage won't be shown.


Collaborator

@Dimi1010 left a comment

Thanks for taking the time to submit this PR. 🙂
I added some comments on the implementation.

{

// used to dimension sockets
#define MAXIMUM_NUMBER_QUEUES 8
Collaborator

For hard constants, prefer using C++ const / constexpr variables.

If macros are needed in public headers, prefix them with PCPP_ so they don't conflict with other macros the users might have.
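
For what it's worth, a minimal sketch of both options (the names and the value 8 are taken from the snippet above and are illustrative):

#include <cstdint>

// Preferred: a typed, scoped constant instead of a macro.
static constexpr uint32_t MAXIMUM_NUMBER_QUEUES = 8;

// If a macro must live in a public header, give it the project prefix so it cannot clash with user macros:
#define PCPP_MAXIMUM_NUMBER_QUEUES 8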

Contributor Author

OK, perhaps with std::array I don't need this definition anymore.

/// Stop receiving packets. Call this method from within the callback passed to receivePackets() whenever you
/// want to stop receiving packets.
void stopReceivePackets();
void stopReceivePackets(uint32_t queueid = 0);
Collaborator

@Dimi1010 Oct 20, 2025

How would the user call the correct queue id from inside the receive loop (OnPacketsArrive callback)? 🤔
The users are expected to call stopReceivePackets inside the receive callback for their own queue.

Of course, they can save it externally or in the user cookie, but that seems like a code smell to me, as it should be provided to them in the callback.

@seladb IMO, it would be useful if the callback signature were changed to take an XdpReceiveContext& parameter instead of a pointer to XdpDevice. That would allow some methods to be contextualized for the receive operation. Access to the device could also go through the context. For example, in the current case, XdpReceiveContext::stopReceive() would automatically cancel further receives. It would also allow cleaner addition and removal of parameters to the callback. The downside is that it is a breaking change.

An alternative is to have the callback return a "continue receiving" bool, which is easier for simpler cases but does not help with future maintenance if other changes are needed.

XdpReceiveContext version sample:

Unsure if the raw packets should be added to the context. There are pros and cons to both approaches:

  • Packets inside context pros:
    • More consistent stable API signature. No need to change signature if packets structure changes.
      Each param change is breaking, while adding and deprecating accessors is non-breaking.
  • Packets as separate params pros:
    • Better focus on the packet variables. They are generally the primary parameters the users want.
    • Independent lifetime can allow reuse of the context from the previous callback, as packets change more often than the rest of the context values.
void handlePacket(RawPacket packets[], uint32_t packetCount, XdpReceiveContext& ctx)
{
  XdpDevice* device = ctx.getDevice(); // Get the device if needed.

  void* userCookie = ctx.getUserCookie();
  T* typedCookie = ctx.getUserCookieAs<T>(); // Utility method to reinterpret_cast the cookie to T?

  // Assume foo is a future value that is added later. Notice how it does not modify the callback signature
  // and keeps backwards compatibility. An extra parameter would have required all callbacks to be modified.
  Foo* futureValue = ctx.getFoo();

  /* handle packet */

  // Stop operation sample.
  if (stopCondition)
  {
    ctx.requestStopReceiving();
  }
}

"Continue receiving" bool version sample:

bool handlePacket2(RawPacket packets[], uint32_t packetCount, XdpDevice* device, void* userCookie)
{
  /* handle packet */

  if (shouldStopCondition)
  {
    return false;
  }

  return true;
}

Collaborator

PS: IMO, the context idea is relevant to all device type callback signatures, not just XdpDevice.

Contributor Author

I agree that OnPacketsArrive should indicate which queue the packets arrive on. This and any other information should be passed back to the callback.

Comment on lines 306 to 311
uint32_t m_NumQueues;
bool m_ReceivingPackets[MAXIMUM_NUMBER_QUEUES];
XdpUmem* m_Umem[MAXIMUM_NUMBER_QUEUES];
void* m_SocketInfo[MAXIMUM_NUMBER_QUEUES];
XdpDeviceStats m_Stats[MAXIMUM_NUMBER_QUEUES];
XdpPrevDeviceStats m_PrevStats[MAXIMUM_NUMBER_QUEUES];
Collaborator

@Dimi1010 Oct 20, 2025

We prefer std::array<T, N> over raw arrays in new code. It gives a better API at essentially no extra cost.

IMO, the main benefit is the addition of std::array<T, N>::size() which makes it easy to get the size of the array without external variables.
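
For illustration, the members quoted above could look something like this with std::array (a sketch only; stub types stand in for the real PcapPlusPlus ones):

#include <array>
#include <cstdint>

class XdpUmem;                 // stub forward declaration
struct XdpDeviceStats {};      // stub for the real stats struct
struct XdpPrevDeviceStats {};  // stub for the real prev-stats struct

static constexpr uint32_t MAXIMUM_NUMBER_QUEUES = 8;  // illustrative constant

std::array<bool, MAXIMUM_NUMBER_QUEUES> m_ReceivingPackets{};
std::array<XdpUmem*, MAXIMUM_NUMBER_QUEUES> m_Umem{};
std::array<void*, MAXIMUM_NUMBER_QUEUES> m_SocketInfo{};
std::array<XdpDeviceStats, MAXIMUM_NUMBER_QUEUES> m_Stats{};
std::array<XdpPrevDeviceStats, MAXIMUM_NUMBER_QUEUES> m_PrevStats{};

// m_Stats.size() now yields the dimension without a separate external variable.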

Contributor Author

Yes. I was just seeing how things propagate with the added dimension. I will make this change on my fork.

Comment on lines 306 to 311
uint32_t m_NumQueues;
bool m_ReceivingPackets[MAXIMUM_NUMBER_QUEUES];
XdpUmem* m_Umem[MAXIMUM_NUMBER_QUEUES];
void* m_SocketInfo[MAXIMUM_NUMBER_QUEUES];
XdpDeviceStats m_Stats[MAXIMUM_NUMBER_QUEUES];
XdpPrevDeviceStats m_PrevStats[MAXIMUM_NUMBER_QUEUES];
Collaborator

@Dimi1010 Oct 20, 2025

Additionally, I am debating the viability of encapsulating a single open socket in an XdpSocket class.
It would allow simpler methods on a per-socket basis. XdpDevice can then store std::vector<XdpSocket> m_sockets.

The current API can either forward to the selected socket (via queueId), or remain unchanged and operate on all open sockets (except receive and send, which default to queue 0 for backwards compatibility). If option 2 is taken, XdpDevice can expose XdpSocket* getSocket() (WIP name) to allow per-socket configuration.

The idea is that most elements are accessed on a "per thread queue" basis rather than a "per array type" basis, so this would clean up the [queueId] indexing sprinkled everywhere. It would also make it easier (if needed) to share Umem regions between sockets, since they aren't required to be in a 1:1 relationship with the open sockets.

Any thoughts?

Contributor Author

I think so. Would the XdpSocket hold the socket info, queue ID, and the Umem parts? I am still finding my way around XDP, but it seems like a natural progression.

Collaborator

@Dimi1010 Oct 20, 2025

Essentially, yes. The XdpSocket holds:

  • queue id
  • socket info
  • stats
  • prev stats
  • bool receivingPackets
  • any other per socket variables
  • (if needed) a pointer to the parent xdp device.

It can be a POD struct that just holds them for the device to use, or it can be a full class with its own methods (essentially some of the current XdpDevice methods that operate on a single socket) so it can be exposed to the user as a full XdpSocket object.

I somewhat like the full object solution, as it would provide simpler methods in the API.
You get the socket object from the device, store a reference to it somewhere, and then operate on it. There is no need to keep track of queue ids after fetching the open socket from the device for simple operations that run on a single queue.
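
A rough sketch of the full-object direction (all names and members are hypothetical, following the bullet list above):

#include <cstdint>

class XdpDevice;           // forward declaration of the parent device
struct XdpDeviceStats {};  // stub for the real stats struct

// Hypothetical per-queue socket object.
class XdpSocket
{
public:
    XdpSocket(XdpDevice* device, uint32_t queueId) : m_Device(device), m_QueueId(queueId) {}

    uint32_t getQueueId() const { return m_QueueId; }
    const XdpDevice* getDevice() const { return m_Device; }
    XdpDeviceStats getStatistics() const { return m_Stats; }
    void stopReceivePackets() { m_ReceivingPackets = false; }

private:
    XdpDevice* m_Device;
    uint32_t m_QueueId;
    void* m_SocketInfo = nullptr;  // xsk_socket_info*, kept opaque here
    XdpDeviceStats m_Stats{};
    XdpDeviceStats m_PrevStats{};
    bool m_ReceivingPackets = false;
};

// The device would then hold something like: std::vector<XdpSocket> m_Sockets;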

Contributor Author

I would make it a class subordinate to XdpDevice. I can start there and see how it pans out. For now, might we make the Umem region part of XdpSocket too, since it's dimensioned that way currently?

Contributor Author

I pushed something that encapsulates this in a Socket class.

Comment on lines 187 to 230
/// Start receiving packets. In order to use this method the device should be open. Note that this method is
/// blocking and will return if:
/// - stopReceivePackets() was called from within the user callback
/// - timeoutMS passed without receiving any packets
/// - Some error occurred (an error log will be printed)
/// @param[in] onPacketsArrive A callback to be called when packets are received
/// @param[in] onPacketsArriveUserCookie The callback is invoked with this cookie as a parameter. It can be used
/// to pass information from the user application to the callback
/// @param[in] timeoutMS Timeout in milliseconds to stop if no packets are received. The default value is 5000
/// ms
/// @return True if stopped receiving packets because stopReceivePackets() was called or because timeoutMS
/// passed, or false if an error occurred.
bool receivePackets(OnPacketsArrive onPacketsArrive, void* onPacketsArriveUserCookie, int timeoutMS = 5000);

/// Stop receiving packets. Call this method from within the callback passed to receivePackets() whenever you
/// want to stop receiving packets.
void stopReceivePackets();

/// Send a vector of packet pointers.
/// @param[in] packets A vector of packet pointers to send
/// @param[in] waitForTxCompletion Wait for confirmation from the kernel that packets were sent. If set to true
/// this method will wait until the number of packets in the completion ring is equal or greater to the number
/// of packets that were sent. The default value is false
/// @param[in] waitForTxCompletionTimeoutMS If waitForTxCompletion is set to true, poll the completion ring with
/// this timeout. The default value is 5000 ms
/// @return True if all packets were sent, or if waitForTxCompletion is true - all sent packets were confirmed.
/// Returns false if an error occurred or if poll timed out.
bool sendPackets(const RawPacketVector& packets, bool waitForTxCompletion = false,
int waitForTxCompletionTimeoutMS = 5000);

/// Send an array of packets.
/// @param[in] packets An array of raw packets to send
/// @param[in] packetCount The length of the packet array
/// @param[in] waitForTxCompletion Wait for confirmation from the kernel that packets were sent. If set to true
/// this method will wait until the number of packets in the completion ring is equal or greater to the number
/// of packets sent. The default value is false
/// @param[in] waitForTxCompletionTimeoutMS If waitForTxCompletion is set to true, poll the completion ring with
/// this timeout. The default value is 5000 ms
/// @return True if all packets were sent, or if waitForTxCompletion is true - all sent packets were confirmed.
/// Returns false if an error occurred or if poll timed out.
bool sendPackets(RawPacket packets[], size_t packetCount, bool waitForTxCompletion = false,
int waitForTxCompletionTimeoutMS = 5000);

Collaborator

Can we keep this API, but have it forward to XdpSocket m_Sockets[0] when called, to keep the existing compatibility?

These methods can still fail if the device is not open and there are no sockets.
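
For illustration, the forwarding might amount to something like this inside XdpDevice (a sketch; the container name and the error log are assumptions, not the actual implementation):

// Legacy single-queue API, forwarding to the first open socket.
bool receivePackets(OnPacketsArrive onPacketsArrive, void* onPacketsArriveUserCookie, int timeoutMS = 5000)
{
    if (m_Sockets.empty())
    {
        PCPP_LOG_ERROR("Device is not open, no sockets available");
        return false;
    }
    // Queue 0 preserves the original behaviour for existing callers.
    return m_Sockets[0].receivePackets(onPacketsArrive, onPacketsArriveUserCookie, timeoutMS);
}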

Comment on lines 236 to 239
/// @return Current device statistics
XdpDeviceStats getStatistics();
Collaborator

ditto, API compat.

bool configureSocket();
bool initUmem();
uint32_t m_NumQueues; // number of queues
std::array<XdpSocket*, PCPP_MAXIMUM_NUMBER_QUEUES> m_Socket;
Collaborator

Btw, any specific reason std::array<XdpSocket*, N> was chosen instead of std::vector<XdpSocket>?

Vector would allow keeping only the active sockets, without having to deal with nullptr or manually deleting them. It would also remove a level of indirection when accessing XdpSocket through the container.

Also, since we are mapping thread queue to socket 1:1, using a std::vector<XdpSocket> we can use m_Socket.size() to fetch the number of active queues, removing the need for m_NumQueues.
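
A minimal sketch of the vector-based layout (illustrative stand-ins only):

#include <cstdint>
#include <vector>

class XdpSocket { /* per-queue socket wrapper discussed above */ };

class XdpDeviceSketch  // stand-in for XdpDevice
{
public:
    // The vector index doubles as the queue id; size() replaces m_NumQueues.
    uint32_t getNumQueues() const
    {
        return static_cast<uint32_t>(m_Sockets.size());
    }

private:
    std::vector<XdpSocket> m_Sockets;  // only active sockets, no nullptr slots to manage
};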

Collaborator

PS: It should probably allow further changes to XdpSocket to be fully RAII compliant, eliminating the need for configure and having it happen in the constructor, but that can be done later.

Collaborator

It also removes the MAX_QUEUES limitation.

Contributor Author

I thought about these things. Because the index into the array is also the queue ID, I thought the array was more natural, but we could do the same thing with a vector. I also thought of making configure part of the constructor, but the current code has these as separate phases. I will give this another go; this was the initial concept of XdpSocket.

/// @param[in] packetCount The number of packets received
/// @param[in] device The XdpDevice packets are received from (represents the AF_XDP socket)
/// @param[in] userCookie A pointer to an object set by the user when receivePackets() started
typedef void (*OnPacketsArrive)(RawPacket packets[], uint32_t packetCount, XdpSocket* socket, void* userCookie);
Collaborator

@Dimi1010 Oct 20, 2025

nit: Not critical, but having the socket nested inside the class might make the signature a bit long? 🤔

External code would have to define it as:
void handler(pcpp::RawPacket packets[], uint32_t packetCount, pcpp::XdpDevice::XdpSocket* socket, void* userCookie).

Anyway, it's not critical and can be decided later.

Contributor Author

Yeah, I was beginning to think that too. But it also suggests that the socket is subordinate to the device, which it kind of is, and I thought that was a good idea. The order of things in the XdpDevice class got messy, though. I also thought it could go in its own file set, which might make things tidier; I'll have to figure that out if you agree.

Collaborator

@Dimi1010 Oct 21, 2025

Hmm. I don't think we need a new file for now.

I think the following order would work:

root (namespace pcpp)
- XdpDevice - forward declare.
- XdpSocket - full declaration.
  - XdpUmem - nested, private visibility.
- XdpDevice - full declaration.

I think this should be fine since XdpUmem was currently only used inside XdpSocket? (Famous last words)
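
Roughly, the header layout being suggested (a sketch, not the final declarations):

namespace pcpp
{
    // Forward declaration so XdpSocket can refer back to its parent device.
    class XdpDevice;

    class XdpSocket
    {
        // ... per-socket API ...
    private:
        class XdpUmem;  // nested with private visibility, only used inside XdpSocket
    };

    class XdpDevice
    {
        // ... full declaration, owning the XdpSocket objects ...
    };
}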

/// @param[in] packetCount The number of packets received
/// @param[in] device The XdpDevice packets are received from (represents the AF_XDP socket)
/// @param[in] userCookie A pointer to an object set by the user when receivePackets() started
typedef void (*OnPacketsArrive)(RawPacket packets[], uint32_t packetCount, XdpSocket* socket, void* userCookie);
Collaborator

Also, this is a breaking change to the existing callback API signature, but there is not much that can be done here.

If we are keeping the old XdpDevice API that works on Queue 0, we can probably provide a wrapper for code that essentially does:

void handler(RawPacket packets[], uint32_t packetCount, XdpSocket* socket, void* userCookie)
{
  oldCallback(packets, packetCount, socket->getDevice(), userCookie);
}

to allow code using the old API to work seamlessly if it does not need extra queues.

It would probably require storing the callback in a std::function instead of a raw function pointer, but that has the secondary benefit of allowing user code to use lambdas. For plain function pointers, the difference between std::function and a raw pointer is pretty negligible, as most mainstream implementations optimize that case.
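
A sketch of what that wrapping could look like with std::function (the type and function names here are hypothetical; the device is captured directly so the sketch stays self-contained with forward declarations):

#include <cstdint>
#include <functional>

class RawPacket;
class XdpDevice;
class XdpSocket;

// Old-style callback kept for compatibility: raw function pointer taking the device.
typedef void (*OnPacketsArriveOld)(RawPacket packets[], uint32_t packetCount, XdpDevice* device, void* userCookie);

// New-style callback stored as std::function, so both function pointers and lambdas work.
using OnPacketsArriveNew =
    std::function<void(RawPacket packets[], uint32_t packetCount, XdpSocket* socket, void* userCookie)>;

// Wrap an old callback so it can be registered with the new signature.
OnPacketsArriveNew wrapOldCallback(OnPacketsArriveOld oldCallback, XdpDevice* device)
{
    return [oldCallback, device](RawPacket packets[], uint32_t packetCount, XdpSocket* /*socket*/, void* userCookie) {
        oldCallback(packets, packetCount, device, userCookie);
    };
}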

delete m_Umem;
m_Umem = nullptr;
m_Socket[i] = new XdpSocket(this, i);
m_Socket[i]->configure();
Collaborator

Configure can fail. It should be handled by aborting the operation, cleaning up already opened resources and returning false.
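
For illustration, the open loop could unwind like this when configure() fails (a sketch based on the quoted snippet; the cleanup details are assumptions):

for (uint32_t i = 0; i < m_NumQueues; ++i)
{
    m_Socket[i] = new XdpSocket(this, i);
    if (!m_Socket[i]->configure())
    {
        // Abort: release the socket that failed plus everything opened so far, then report failure.
        for (uint32_t j = 0; j <= i; ++j)
        {
            delete m_Socket[j];
            m_Socket[j] = nullptr;
        }
        return false;
    }
}
return true;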

Contributor Author

I will look into this. Perhaps configure can go away if the constructor can do that work. I will investigate.

Comment on lines 146 to 147
auto socketInfo = static_cast<xsk_socket_info*>(m_SocketInfo);
xsk_socket__delete(socketInfo->xsk);
Collaborator

This should check for nullptr before delete.
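
A guarded version of the quoted teardown might look like this (a sketch; it does not assume xsk_socket__delete tolerates null):

if (m_SocketInfo != nullptr)
{
    auto socketInfo = static_cast<xsk_socket_info*>(m_SocketInfo);
    if (socketInfo->xsk != nullptr)
    {
        xsk_socket__delete(socketInfo->xsk);
    }
    m_SocketInfo = nullptr;  // leave the member in a safe state for repeated close calls
}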

Contributor Author

Agreed.

Comment on lines 149 to 150
delete m_Umem;
m_Umem = nullptr;
Collaborator

ditto: nullptr check before delete.

Contributor Author

Yep, I have to go over this again; it was my initial attempt.

Collaborator

Related to: https://github.com/seladb/PcapPlusPlus/pull/2000/files#r2446086115

If we restore the old API, revert the changes to the tests for it. XdpSocket can have new tests.

@seladb
Owner

seladb commented Oct 21, 2025

@JasMetzger didn't we want to just add the RX queue to XdpDeviceConfiguration and use this value in xsk_socket__create instead of always passing 0 (link)?

This PR seems to do much more than just this...
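
For reference, the smaller change being suggested would roughly amount to this (a sketch; the configuration field and the local variable names are hypothetical placeholders, only the xsk_socket__create call itself is the real libxdp API):

// A single new field in XdpDeviceConfiguration:
uint32_t rxQueueId = 0;

// At socket creation, pass the configured queue instead of a hard-coded 0
// (the other arguments are placeholders standing in for the existing code):
int ret = xsk_socket__create(&socketInfo->xsk, m_InterfaceName.c_str(), config->rxQueueId, umem,
                             &socketInfo->rx, &socketInfo->tx, &xskConfig);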

@Dimi1010
Collaborator

Dimi1010 commented Oct 21, 2025

@JasMetzger didn't we want to just add the RX queue to XdpDeviceConfiguration and use this value in xsk_socket__create instead of always passing 0 (link)?

This PR seems to do much more than just this...

@seladb Fair enough, we did initially talk about that.

IMO, this is a slightly better approach as it separates the XdpDevice, which would be the NIC, from the open sockets on it — XdpSocket.

Having the user create multiple devices from the same interface for different thread queues seems confusing to me, as all other ***Device implementations seem to match the NIC they are used on 1:1 (one NIC makes one device instance). It would also be slightly more brittle, as we can't really prevent the user from opening a thread queue twice that way.

@seladb
Owner

seladb commented Oct 21, 2025

@JasMetzger didn't we want to just add the RX queue to XdpDeviceConfiguration and use this value in xsk_socket__create instead of always passing 0 (link)?
This PR seems to do much more than just this...

@seladb Fair enough, we did initially talk about that.

IMO, this is a slightly better approach as it separates the XdpDevice, which would be the NIC, from the open sockets on it — XdpSocket.

Having the user create multiple devices from the same interface for different thread queues seems confusing to me, as all other ***Device implementations seem to match the NIC they are used on 1:1 (one NIC makes one device instance). It would also be slightly more brittle, as we can't really prevent the user from opening a thread queue twice that way.

This PR introduces a breaking change, which might not be great for users already using this device. While I agree we can think about the bigger refactoring, I'd start simple by allowing RX queue ID configuration, then see the demand for the bigger change. Please let me know what you think.

@Dimi1010
Collaborator

This PR introduces a breaking change, which might not be great for users already using this device. While I agree we can think about the bigger refactoring, I'd start simple by allowing RX queue ID configuration, then see the demand for the bigger change. Please let me know what you think

@seladb Do you mean the breaking changes in the callbacks and API of XdpDevice? If so, I have ideas for mitigation.

We keep the old API. It forwards its operations to socket[0] to keep old behaviour that operated on queue 0.

For the callbacks, the mitigation is:

  • XdpDevice::OnPacketArrives remains unchanged and receives a device pointer.
  • XdpSocket::OnPacketArrives has XdpSocket pointer instead of XdpDevice pointer.
  • When the user uses the old XdpDevice::receivePackets, the user callback is wrapped in a lambda that does the following:
[userCallback](RawPacket* packets, int packetCount, XdpSocket* socket, void* userCookie)
{
   return userCallback(packets, packetCount, socket->getDevice(), userCookie);
}

Alternatively, it may capture the device from the this pointer as it will be created inside receivePackets, to avoid a const cast (XdpSocket returns a const device).

This should keep the changes backward compatible. Let me know if I missed something.

@seladb
Owner

seladb commented Oct 21, 2025

I want to make this PR small and simple. Later on, we can discuss bigger changes if needed...

@Dimi1010
Collaborator

I want to make this PR small and simple. Later on, we can discuss bigger changes if needed...

@seladb, the small change would turn this refactor into a breaking change later. That means we would be locking ourselves out of this cleaner solution unless we accept a bigger breaking change down the line, while adding it now makes it possible without breaking changes.

@seladb
Owner

seladb commented Oct 21, 2025

I want to make this PR small and simple. Later on, we can discuss bigger changes if needed...

@seladb, the small change would turn this refactor into a breaking change later. That means we would be locking ourselves out of this cleaner solution unless we accept a bigger breaking change down the line, while adding it now makes it possible without breaking changes.

Since this feature was never brought up until now, I wonder how much it's going to be used. Even @JasMetzger is not sure he really needs it. I suggest we start small. Worst case, if this feature becomes popular, we can figure it out later.

@JasMetzger
Contributor Author

If all you want to do is set a single RX queue ID in the config, I'm not sure that will accomplish anything, because the likelihood of that queue ID being anything other than zero is extremely low; you might as well leave it alone. Achieving wire-speed processing (e.g. 10G) will likely require multiple queues.

The code is not ready. I'll integrate the suggested changes and leave it.

@Dimi1010
Collaborator

Dimi1010 commented Oct 21, 2025

Since this feature was never brought up until now, I wonder how much it's going to be used. Even @JasMetzger is not sure he really needs it. I suggest we start small. Worst case, if this feature becomes popular, we can figure it out later.

IMO, figuring it out after it becomes popular is a bad way to go, as then you have to work around breaking user code that depends on the initial implementation of the change and all the headaches that brings. In the initial addition, you have a lot more freedom, as no code depends on it, so getting a good initial solution that is flexible and maintainable is important.

At its core, the PR is not that complex, only somewhat long. Wrap the logic that operates on a single socket in XdpSocket and make XdpDevice be able to hold more than one socket.

It can even be split in two stages:

  1. Wrapping the logic inside XdpSocket
  2. Making XdpDevice be able to hold more than 1 socket.

PS: Sorry for the mentions in the quoted text, Jas. 😅

@JasMetzger
Contributor Author

I pushed changes that preserve the original API and tests and integrate the suggested changes. I also did the doxygen. Apologies for the thrash; this is the first time I've done this sort of thing.
