Commit bb1779b

real-or-random authored and theStack committed
Add secp256k1_memclear() for clearing secret data
We rely on memset() and an __asm__ memory barrier where it's available, or on SecureZeroMemory() on Windows. The fallback implementation uses a volatile function pointer to memset, which the compiler is not clever enough to optimize away.
1 parent 331586e commit bb1779b

File tree

1 file changed: +30 −0 lines changed

src/util.h

Lines changed: 30 additions & 0 deletions
@@ -9,10 +9,15 @@

 #include "../include/secp256k1.h"

+#include <string.h>
 #include <stdlib.h>
 #include <stdint.h>
 #include <stdio.h>
 #include <limits.h>
+#if defined(_MSC_VER)
+/* For SecureZeroMemory */
+#include <Windows.h>
+#endif

 #define STR_(x) #x
 #define STR(x) STR_(x)
@@ -221,6 +226,31 @@ static SECP256K1_INLINE void secp256k1_memczero(void *s, size_t len, int flag) {
     }
 }

+/* Cleanses memory to prevent leaking sensitive info. Won't be optimized out. */
+static SECP256K1_INLINE void secp256k1_memclear(void *ptr, size_t len) {
+#if defined(_MSC_VER)
+    /* SecureZeroMemory is guaranteed not to be optimized out by MSVC. */
+    SecureZeroMemory(ptr, len);
+#elif defined(__GNUC__)
+    /* We use a memory barrier that scares the compiler away from optimizing out the memset.
+     *
+     * Quoting Adam Langley <agl@google.com> in commit ad1907fe73334d6c696c8539646c21b11178f20f
+     * in BoringSSL (ISC License):
+     *   As best as we can tell, this is sufficient to break any optimisations that
+     *   might try to eliminate "superfluous" memsets.
+     * This method is used by memzero_explicit() in the Linux kernel, too. Its advantage is
+     * that it is pretty efficient, because the compiler can still implement the memset()
+     * efficiently, just not remove it entirely. See "Dead Store Elimination (Still)
+     * Considered Harmful" by Yang et al. (USENIX Security 2017) for more background.
+     */
+    memset(ptr, 0, len);
+    __asm__ __volatile__("" : : "r"(ptr) : "memory");
+#else
+    void *(*volatile const volatile_memset)(void *, int, size_t) = memset;
+    volatile_memset(ptr, 0, len);
+#endif
+}
+
 /** Semantics like memcmp. Variable-time.
  *
  * We use this to avoid possible compiler bugs with memcmp, e.g.
