Hi,
When porting drivers to the new HAL traits we've found an issue that used to be "implementation specific" but is now part of the traits: how to insert delays required by the chip.
In practice this means requirements like the following (these come from the DW1000 chip):
- CS must be low for 50 ns before a transaction starts
- CS must stay low for 200 ns after a transaction finishes
- CS must stay high for 250 ns between two transactions
With the current API there is no way to represent these kinds of delays, as the `Operation` API is microsecond based.
As it stands, selecting a 1 us delay means paying up to 20x more time than required, and in practice even more: due to implementation details of timers, a requested 1 us delay actually takes between 1 and 2 us.
So with the three requirements stacked, one gets 3-6 us of delay (!) instead of the 500 ns actually needed.
I propose adding an `Operation::DelayNs` variant to cover this use case.
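For illustration, here is a minimal sketch of what that could look like. The existing variants below mirror the current microsecond-based API; only `DelayNs` is new, and the exact shape is of course up for discussion:

```rust
/// Sketch only: today's `Operation` enum with the proposed variant added.
pub enum Operation<'a, Word: 'static> {
    Read(&'a mut [Word]),
    Write(&'a [Word]),
    Transfer(&'a mut [Word], &'a [Word]),
    TransferInPlace(&'a mut [Word]),
    /// Existing microsecond-based delay.
    DelayUs(u32),
    /// Proposed: delay for at least the given number of nanoseconds.
    DelayNs(u32),
}
```

A driver could then express the DW1000 requirements above directly in a transaction (hypothetical register write, `spi` being any `SpiDevice` implementation):

```rust
fn write_register<SPI: SpiDevice>(spi: &mut SPI) -> Result<(), SPI::Error> {
    spi.transaction(&mut [
        Operation::DelayNs(50),    // CS low for >= 50 ns before the transfer starts
        Operation::Write(&[0x00]), // hypothetical register address byte
        Operation::DelayNs(200),   // CS kept low for >= 200 ns after the transfer
    ])
}
```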
For the requirement between two transactions, I think this might be out of scope for this API?
But if there are any ideas here I'd love to hear them.
I would argue that if it cannot be added to this trait, then it's not possible to write generic drivers for chips with CS timing requirements, because one must always sidestep the trait to support such a chip in a robust manner.
Granted, chips generally "work" even when these timings are not upheld, but I do not think we should build the SPI traits on the hope that things will work when the requirements cannot even be specified.
If I were to shoot from the hip, I'd say that the trait could have a constant such as

```rust
const TIME_BETWEEN_TRANSACTIONS_NS: u32 = 0; // 0 for the current behavior
```

which would use the same nanosecond granularity proposed above.
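As a rough sketch of where such a constant could live (this is not the actual `SpiDevice` definition, just enough to show the idea; the name is my suggestion):

```rust
/// Hedged sketch of the proposal, not the real embedded-hal trait.
pub trait SpiDevice {
    /// Minimum time CS must stay high between two consecutive
    /// transactions, in nanoseconds. The default of 0 keeps the
    /// current behavior.
    const TIME_BETWEEN_TRANSACTIONS_NS: u32 = 0;

    // ...the existing `transaction` method and friends would go here...
}
```

An implementation could then override the default, e.g. `const TIME_BETWEEN_TRANSACTIONS_NS: u32 = 250;` to satisfy the DW1000's 250 ns CS-high requirement listed above.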
Looking forward to your feedback!
BR Emil