Commit 7f8ecd0
context: add nostd version of global context
My initial plan for this commit was to implement the nostd version without
randomization support, and patch it in later. However, I realized that even
without rerandomization, I still needed synchronization logic in order to
initialize the global context object. (Upstream provides a static "no precomp"
context object, but it has no precomputation tables and therefore can't be
used for verification, which makes it unusable for our purposes.)

In order to implement initialization, with ChatGPT's help I implemented a
simple spinlock. However, there are a number of problems with spinlocks -- see
this article (from Kix in #346) for some of them:
https://matklad.github.io/2020/01/02/spinlocks-considered-harmful.html

To avoid these problems, we tweak the spinlock logic so that we only try
spinning a small finite number of times, then give up. Our "give up" logic is:

1. When initializing the global context, if we can't get the lock, we just
   initialize a new stack-local context and use that. (A parallel thread must
   be initializing the context, which is wasteful but harmless.)
2. Once we lock the context, we copy it onto the stack and release the lock,
   in order to minimize the time holding the lock. (The exception is during
   initialization, where we hold the lock for the whole initialization, in
   the hopes that other threads will block on us instead of doing their own
   initialization.) If we rerandomize, we do this on the stack-local copy and
   then only re-lock to copy it back.
3. If we fail to get the lock to copy the rerandomized context back, we just
   don't copy it. The result is that we wasted some time rerandomizing without
   any benefit, which is not the end of the world.

Next steps are:

1. Update the API to use this logic everywhere; on validation functions we
   don't need to rerandomize, and on signing/keygen functions we should
   rerandomize using our secret key material.
2. Remove the existing "no context" API, along with the global-context and
   global-context-less-secure features.
3. Improve our entropy story on nostd by scraping system time or CPU jitter
   or something and hashing that into our rerandomization. We don't need to do
   a great job here -- if we can get even a bit or two per signature, that
   will completely BTFO a timing attacker.
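The bounded-spin "give up" strategy described above can be sketched in isolation. This is a minimal illustration only, not the crate's code: the names `try_lock` and `MAX_SPIN_ATTEMPTS` are made up here, and it uses a bare `AtomicBool` with std rather than the RAII guard the commit adds.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

const MAX_SPIN_ATTEMPTS: usize = 128;

/// Attempt to take the lock, spinning a bounded number of times, then give
/// up instead of blocking forever -- so code that re-enters while the lock
/// is held (e.g. an interrupt handler in nostd) can never deadlock on us.
fn try_lock(flag: &AtomicBool) -> bool {
    for _ in 0..MAX_SPIN_ATTEMPTS {
        if flag
            .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_ok()
        {
            return true; // acquired; caller must later `store(false, Release)`
        }
        std::hint::spin_loop();
    }
    false // gave up: caller falls back to a stack-local context
}

fn main() {
    let flag = AtomicBool::new(false);
    assert!(try_lock(&flag)); // uncontended: acquired
    assert!(!try_lock(&flag)); // still held: bounded spin gives up, no deadlock
    flag.store(false, Ordering::Release); // unlock
    assert!(try_lock(&flag)); // can be acquired again
}
```

On failure the caller does not retry in a loop (which would reintroduce a true spinlock); it proceeds with its own stack-local context as described in point 1.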
1 parent dcba49c commit 7f8ecd0

File tree

2 files changed: +212 -8 lines changed


src/context.rs

Lines changed: 209 additions & 2 deletions
@@ -10,6 +10,213 @@ use crate::ffi::types::{c_uint, c_void, AlignedType};
 use crate::ffi::{self, CPtr};
 use crate::{Error, Secp256k1};
 
+#[cfg(not(feature = "std"))]
+mod internal {
+    use core::cell::UnsafeCell;
+    use core::hint::spin_loop;
+    use core::marker::PhantomData;
+    use core::mem::ManuallyDrop;
+    use core::ops::{Deref, DerefMut};
+    use core::ptr::NonNull;
+    use core::sync::atomic::{AtomicBool, Ordering};
+
+    use crate::ffi::types::{c_void, AlignedType};
+    use crate::{ffi, AllPreallocated, Context, Secp256k1};
+
+    const MAX_SPINLOCK_ATTEMPTS: usize = 128;
+    const MAX_PREALLOC_SIZE: usize = 16; // measured at 208 bytes on Andrew's 64-bit system
+
+    static SECP256K1: SpinLock = SpinLock::new();
+
+    // Simple spinlock-gated structure which holds the backing store for a
+    // secp256k1 context.
+    //
+    // To obtain exclusive access, call [`Self::try_lock`], which will spinlock
+    // for some small number of iterations before giving up. By trying again in
+    // a loop, you could emulate a "true" spinlock that will only yield once it
+    // has access. However, this would be very dangerous, especially in a nostd
+    // environment, because if we are pre-empted by an interrupt handler while
+    // the lock is held, and that interrupt handler attempts to take the lock,
+    // then we deadlock.
+    //
+    // Instead, the strategy we take within this module is to simply create a
+    // new stack-local context object if we are unable to obtain a lock on the
+    // global one. This is slow and loses the defense-in-depth "rerandomization"
+    // anti-sidechannel measure, but it is better than deadlocking.
+    struct SpinLock {
+        flag: AtomicBool,
+        // Invariant: if the pointer is non-None, then the store is valid and
+        // can be used with `ffi::secp256k1_context_preallocated_create`.
+        data: UnsafeCell<([AlignedType; MAX_PREALLOC_SIZE], Option<NonNull<ffi::Context>>)>,
+    }
+
+    // Required by rustc if we have a static of this type.
+    // Safety: `data` is accessed only while the `flag` is held.
+    unsafe impl Sync for SpinLock {}
+    unsafe impl Send for SpinLock {}
+
+    impl SpinLock {
+        const fn new() -> Self {
+            Self {
+                flag: AtomicBool::new(false),
+                data: UnsafeCell::new(([AlignedType::ZERO; MAX_PREALLOC_SIZE], None)),
+            }
+        }
+
+        /// Tries to acquire the lock, spinning a bounded number of times;
+        /// returns an RAII guard on success and `None` on failure.
+        fn try_lock(&self) -> Option<SpinLockGuard<'_>> {
+            for _ in 0..MAX_SPINLOCK_ATTEMPTS {
+                // `compare_exchange_weak` is fine here: we're spinning anyway.
+                if self
+                    .flag
+                    .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
+                    .is_ok()
+                {
+                    return Some(SpinLockGuard { lock: self });
+                }
+                spin_loop();
+            }
+            None
+        }
+
+        #[inline(always)]
+        fn unlock(&self) { self.flag.store(false, Ordering::Release); }
+    }
+
+    /// Drops the lock when it goes out of scope.
+    pub struct SpinLockGuard<'a> {
+        lock: &'a SpinLock,
+    }
+
+    impl Deref for SpinLockGuard<'_> {
+        type Target = ([AlignedType; MAX_PREALLOC_SIZE], Option<NonNull<ffi::Context>>);
+        fn deref(&self) -> &Self::Target {
+            // Safe: we hold the lock.
+            unsafe { &*self.lock.data.get() }
+        }
+    }
+
+    impl DerefMut for SpinLockGuard<'_> {
+        fn deref_mut(&mut self) -> &mut Self::Target {
+            // Safe: mutable access is unique while the guard lives.
+            unsafe { &mut *self.lock.data.get() }
+        }
+    }
+
+    impl Drop for SpinLockGuard<'_> {
+        fn drop(&mut self) { self.lock.unlock(); }
+    }
+
+    /// Borrows the global context and does some operation on it.
+    ///
+    /// If `rerandomize_seed` is provided, it is used to call
+    /// [`rerandomize_global_context`] on the context after the operation is
+    /// complete. If it is not provided, randomization is skipped.
+    ///
+    /// Only a bit or two per signing operation is needed; if you have any entropy at all,
+    /// you should provide it, even if you can't provide 32 random bytes.
+    pub fn with_global_context<T, Ctx: Context, F: FnOnce(&Secp256k1<Ctx>) -> T>(
+        f: F,
+        rerandomize_seed: Option<&[u8; 32]>,
+    ) -> T {
+        with_raw_global_context(
+            |ctx| {
+                let secp = ManuallyDrop::new(Secp256k1 { ctx, phantom: PhantomData });
+                f(&*secp)
+            },
+            rerandomize_seed,
+        )
+    }
+
+    /// Borrows the global context as a raw pointer and does some operation on it.
+    ///
+    /// If `rerandomize_seed` is provided, it is used to call
+    /// [`rerandomize_global_context`] on the context after the operation is
+    /// complete. If it is not provided, randomization is skipped.
+    ///
+    /// Only a bit or two per signing operation is needed; if you have any entropy at all,
+    /// you should provide it, even if you can't provide 32 random bytes.
+    pub fn with_raw_global_context<T, F: FnOnce(NonNull<ffi::Context>) -> T>(
+        f: F,
+        rerandomize_seed: Option<&[u8; 32]>,
+    ) -> T {
+        assert!(
+            unsafe {
+                ffi::secp256k1_context_preallocated_size(AllPreallocated::FLAGS)
+                    <= core::mem::size_of::<[AlignedType; MAX_PREALLOC_SIZE]>()
+            },
+            "prealloc size exceeds our guessed compile-time upper bound"
+        );
+
+        // Our function may be expensive, so before calling it, we copy the global
+        // context into this local buffer on the stack. Then we can release it,
+        // allowing other callers to use it simultaneously.
+        let mut store = [AlignedType::ZERO; MAX_PREALLOC_SIZE];
+        let buf = NonNull::new(store.as_mut_ptr() as *mut c_void).unwrap();
+
+        let ctx = match SECP256K1.try_lock() {
+            None => unsafe {
+                // If we can't get the lock, just do everything on the stack.
+                ffi::secp256k1_context_preallocated_create(buf, AllPreallocated::FLAGS)
+            },
+            Some(ref mut guard) => unsafe {
+                // If we *can* get the lock, use it and update it.
+                let (ref mut store, ref mut ctx) = **guard;
+                let global_ctx = ctx.get_or_insert_with(|| {
+                    let buf = NonNull::new(store.as_mut_ptr() as *mut c_void).unwrap();
+                    ffi::secp256k1_context_preallocated_create(buf, AllPreallocated::FLAGS)
+                });
+                ffi::secp256k1_context_preallocated_clone(global_ctx.as_ptr(), buf)
+            },
+        };
+        // The lock is now dropped. Call the function.
+        let ret = f(ctx);
+        // ...then rerandomize the local copy, and try to replace the global one
+        // with this. There are three cases for how this can work:
+        //
+        // 1. In the happy path, we succeeded in getting the lock above, have
+        //    a copy of the global context, are rerandomizing and storing it.
+        //    Great.
+        // 2. Same as above, except that another thread is doing the same thing
+        //    in parallel. Now we both have copies that we're rerandomizing, and
+        //    both will try to store it. One of us will clobber the other, wasting
+        //    work but otherwise not causing any problems.
+        // 3. If we *failed* to get the lock above, we are rerandomizing a fresh
+        //    copy of the context object. This may "undo" previous rerandomization.
+        //    In theory if an attacker is able to reliably and repeatedly trigger
+        //    this situation, they will have defeated the rerandomization. Since
+        //    this is a defense-in-depth measure, we will accept this.
+        if let Some(seed) = rerandomize_seed {
+            // Safety: this is a FFI call. It's fine.
+            unsafe {
+                assert_eq!(ffi::secp256k1_context_randomize(ctx, seed.as_ptr()), 1);
+            }
+            if let Some(ref mut guard) = SECP256K1.try_lock() {
+                let (ref mut global_store, ref mut global_ctx_ptr) = **guard;
+                unsafe {
+                    ffi::secp256k1_context_preallocated_clone(
+                        ctx.as_ptr(),
+                        NonNull::new(global_store.as_mut_ptr() as *mut _).unwrap(),
+                    );
+                }
+
+                // 2. Update the pointer to refer to the *global* buffer, **not** the stack.
+                *global_ctx_ptr =
+                    Some(NonNull::new(global_store.as_mut_ptr() as *mut ffi::Context).unwrap());
+            }
+        }
+        ret
+    }
+
+    /// Rerandomize the global context, using the given data as a seed.
+    ///
+    /// The provided data will be mixed with the entropy from previous calls in a timing
+    /// analysis resistant way. It is safe to directly pass secret data to this function.
+    pub fn rerandomize_global_context(seed: &[u8; 32]) {
+        with_raw_global_context(|_| {}, Some(seed))
+    }
+}
+
 #[cfg(feature = "std")]
 mod internal {
     use std::cell::RefCell;
@@ -109,7 +316,6 @@ mod internal {
         });
     }
 }
-#[cfg(feature = "std")]
 pub use internal::{rerandomize_global_context, with_global_context, with_raw_global_context};
 
 #[cfg(all(feature = "global-context", feature = "std"))]
@@ -471,7 +677,8 @@ impl<'buf> Secp256k1<AllPreallocated<'buf>> {
     /// * The version of `libsecp256k1` used to create `raw_ctx` must be **exactly the one linked
     ///   into this library**.
     /// * The lifetime of the `raw_ctx` pointer must outlive `'buf`.
-    /// * `raw_ctx` must point to writable memory (cannot be `ffi::secp256k1_context_no_precomp`).
+    /// * `raw_ctx` must point to writable memory (cannot be `ffi::secp256k1_context_no_precomp`),
+    ///   **or** the user must never attempt to rerandomize the context.
     pub unsafe fn from_raw_all(
         raw_ctx: NonNull<ffi::Context>,
     ) -> ManuallyDrop<Secp256k1<AllPreallocated<'buf>>> {
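The copy-out / operate / copy-back flow in `with_raw_global_context` can be illustrated with a toy std analogue. This is a sketch of the pattern only, not the crate's API: `GLOBAL` and `with_copy` are invented names, and a `Mutex<u64>` stands in for the preallocated FFI context.

```rust
use std::sync::Mutex;

// Stand-in for the global context's backing store.
static GLOBAL: Mutex<u64> = Mutex::new(0);

/// Copy the shared state out under the lock, run `f` on the local copy with
/// the lock released, then make one best-effort attempt to write the updated
/// copy back. If that write-back attempt fails, the update is simply dropped
/// (case 3 in the diff's comments): wasted work, but no deadlock.
fn with_copy<T>(f: impl FnOnce(&mut u64) -> T) -> T {
    // Copy out under the lock; fall back to a fresh value if contended,
    // analogous to building a fresh stack-local context.
    let mut local = match GLOBAL.try_lock() {
        Ok(guard) => *guard,
        Err(_) => 0,
    };
    // Operate without holding the lock, so other threads can proceed.
    let ret = f(&mut local);
    // Best-effort write-back (this is where rerandomization would be copied
    // back; one of several racing threads may clobber the others).
    if let Ok(mut guard) = GLOBAL.try_lock() {
        *guard = local;
    }
    ret
}

fn main() {
    let out = with_copy(|v| {
        *v += 1; // stands in for rerandomizing the local copy
        *v
    });
    assert_eq!(out, 1);
    assert_eq!(*GLOBAL.lock().unwrap(), 1); // write-back succeeded
}
```

The design choice being illustrated: correctness never depends on the write-back succeeding, so every lock acquisition can be allowed to fail.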

src/lib.rs

Lines changed: 3 additions & 6 deletions
@@ -184,16 +184,13 @@ pub use secp256k1_sys as ffi;
 #[cfg(feature = "serde")]
 pub use serde;
 
-#[cfg(feature = "std")]
 pub use crate::context::{
-    rerandomize_global_context, with_global_context, with_raw_global_context,
+    rerandomize_global_context, with_global_context, with_raw_global_context, AllPreallocated,
+    Context, PreallocatedContext, SignOnlyPreallocated, Signing, Verification,
+    VerifyOnlyPreallocated,
 };
 #[cfg(feature = "alloc")]
 pub use crate::context::{All, SignOnly, VerifyOnly};
-pub use crate::context::{
-    AllPreallocated, Context, PreallocatedContext, SignOnlyPreallocated, Signing, Verification,
-    VerifyOnlyPreallocated,
-};
 use crate::ffi::types::AlignedType;
 use crate::ffi::CPtr;
 pub use crate::key::{InvalidParityValue, Keypair, Parity, PublicKey, SecretKey, XOnlyPublicKey};

0 commit comments