Document unsafe in libcore #66506
Changes from 1 commit
@@ -112,8 +112,6 @@
 //! println!("live threads: {}", old_thread_count + 1);
 //! ```

-// ignore-tidy-undocumented-unsafe
-
 #![stable(feature = "rust1", since = "1.0.0")]
 #![cfg_attr(not(target_has_atomic_load_store = "8"), allow(dead_code))]
 #![cfg_attr(not(target_has_atomic_load_store = "8"), allow(unused_imports))]
@@ -355,6 +353,7 @@ impl AtomicBool {
 #[inline]
 #[stable(feature = "atomic_access", since = "1.15.0")]
 pub fn get_mut(&mut self) -> &mut bool {
+    // SAFETY: the mutable reference guarantees unique ownership
     unsafe { &mut *(self.v.get() as *mut bool) }
 }
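For context (not part of the diff): this SAFETY comment leans on Rust's aliasing rules. Holding `&mut AtomicBool` proves the caller has exclusive access, so no other thread can touch the value and a plain non-atomic access is sound. A minimal sketch of the safe API this enables:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

fn main() {
    let mut flag = AtomicBool::new(false);
    // The &mut borrow statically rules out any concurrent access,
    // so get_mut can hand out a plain &mut bool without atomics.
    *flag.get_mut() = true;
    assert!(flag.load(Ordering::Relaxed));
}
```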
@@ -405,6 +404,7 @@ impl AtomicBool {
 #[inline]
 #[stable(feature = "rust1", since = "1.0.0")]
 pub fn load(&self, order: Ordering) -> bool {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { atomic_load(self.v.get(), order) != 0 }
 }
Review thread on this SAFETY comment:

- We could change the […]
- There are other reasons. Atomic operations require natural alignment (align = size), which is sometimes more than the "ABI alignment" required by references (e.g., for `u64` on some 32-bit targets). There may be more caveats I'm forgetting right now.
- There's also #55005, which basically requires the use of raw pointers for some kinds of concurrent code (the kind that synchronizes to handle deallocation).
- Heh, ok. So alignment and dangling-ness are what should be mentioned here, not data races.
- Data races are also relevant: this must not race with a non-atomic access to the same memory.
- So what would be the best thing to say here? I'm not sure what kind of guarantees we can make about […]
- I created another pull request with a smaller number of changes (including […]
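One way to see the alignment point raised above (a hedged sketch; the printed values depend on the target):

```rust
use std::mem::align_of;
use std::sync::atomic::AtomicU64;

fn main() {
    // Atomics require natural alignment (align = size). On a 32-bit
    // target such as i686, align_of::<u64>() can be 4 while AtomicU64
    // still requires 8, so "the pointer came from a reference" is not
    // by itself enough to justify an atomic access on a plain u64.
    println!("u64:       {}", align_of::<u64>());
    println!("AtomicU64: {}", align_of::<AtomicU64>());
}
```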
@@ -437,6 +437,7 @@ impl AtomicBool {
 #[inline]
 #[stable(feature = "rust1", since = "1.0.0")]
 pub fn store(&self, val: bool, order: Ordering) {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe {
         atomic_store(self.v.get(), val as u8, order);
     }
@@ -468,6 +469,7 @@ impl AtomicBool {
 #[stable(feature = "rust1", since = "1.0.0")]
 #[cfg(target_has_atomic = "8")]
 pub fn swap(&self, val: bool, order: Ordering) -> bool {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { atomic_swap(self.v.get(), val as u8, order) != 0 }
 }
@@ -562,6 +564,7 @@ impl AtomicBool {
                          success: Ordering,
                          failure: Ordering)
                          -> Result<bool, bool> {
+    // SAFETY: data races are prevented by atomic intrinsics
     match unsafe {
         atomic_compare_exchange(self.v.get(), current as u8, new as u8, success, failure)
     } {
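The unsafe block never leaks to callers; typical use of this method stays entirely in safe code, e.g.:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

fn main() {
    let locked = AtomicBool::new(false);
    // Ok(previous) if the value matched `current` and was replaced,
    // Err(actual) otherwise.
    assert_eq!(
        locked.compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed),
        Ok(false)
    );
    assert_eq!(
        locked.compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed),
        Err(true)
    );
}
```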
@@ -618,6 +621,7 @@ impl AtomicBool {
                          success: Ordering,
                          failure: Ordering)
                          -> Result<bool, bool> {
+    // SAFETY: data races are prevented by atomic intrinsics
     match unsafe {
         atomic_compare_exchange_weak(self.v.get(), current as u8, new as u8, success, failure)
     } {
@@ -664,6 +668,7 @@ impl AtomicBool {
 #[stable(feature = "rust1", since = "1.0.0")]
 #[cfg(target_has_atomic = "8")]
 pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { atomic_and(self.v.get(), val as u8, order) != 0 }
 }
@@ -759,6 +764,7 @@ impl AtomicBool {
 #[stable(feature = "rust1", since = "1.0.0")]
 #[cfg(target_has_atomic = "8")]
 pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { atomic_or(self.v.get(), val as u8, order) != 0 }
 }
@@ -800,6 +806,7 @@ impl AtomicBool {
 #[stable(feature = "rust1", since = "1.0.0")]
 #[cfg(target_has_atomic = "8")]
 pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { atomic_xor(self.v.get(), val as u8, order) != 0 }
 }
 }
@@ -839,6 +846,7 @@ impl<T> AtomicPtr<T> {
 #[inline]
 #[stable(feature = "atomic_access", since = "1.15.0")]
 pub fn get_mut(&mut self) -> &mut *mut T {
+    // SAFETY: the mutable reference guarantees unique ownership
     unsafe { &mut *self.p.get() }
 }
@@ -890,6 +898,7 @@ impl<T> AtomicPtr<T> {
 #[inline]
 #[stable(feature = "rust1", since = "1.0.0")]
 pub fn load(&self, order: Ordering) -> *mut T {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { atomic_load(self.p.get() as *mut usize, order) as *mut T }
 }
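Here the intrinsic is driven through a `usize` cast of the pointer slot, but callers still see only the safe signature; a small usage sketch:

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

fn main() {
    let mut value = 10;
    let ptr = AtomicPtr::new(&mut value);
    // The atomic load itself is safe; dereferencing the returned
    // raw pointer remains the caller's obligation.
    let p = ptr.load(Ordering::SeqCst);
    assert_eq!(unsafe { *p }, 10);
}
```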
@@ -924,6 +933,7 @@ impl<T> AtomicPtr<T> {
 #[inline]
 #[stable(feature = "rust1", since = "1.0.0")]
 pub fn store(&self, ptr: *mut T, order: Ordering) {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe {
         atomic_store(self.p.get() as *mut usize, ptr as usize, order);
     }
@@ -957,6 +967,7 @@ impl<T> AtomicPtr<T> {
 #[stable(feature = "rust1", since = "1.0.0")]
 #[cfg(target_has_atomic = "ptr")]
 pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { atomic_swap(self.p.get() as *mut usize, ptr as usize, order) as *mut T }
 }
@@ -1040,6 +1051,7 @@ impl<T> AtomicPtr<T> {
                          success: Ordering,
                          failure: Ordering)
                          -> Result<*mut T, *mut T> {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe {
         let res = atomic_compare_exchange(self.p.get() as *mut usize,
                                           current as usize,
@@ -1100,6 +1112,7 @@ impl<T> AtomicPtr<T> {
                          success: Ordering,
                          failure: Ordering)
                          -> Result<*mut T, *mut T> {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe {
         let res = atomic_compare_exchange_weak(self.p.get() as *mut usize,
                                                current as usize,
@@ -1245,6 +1258,7 @@ assert_eq!(some_var.load(Ordering::SeqCst), 5);
 #[inline]
 #[$stable_access]
 pub fn get_mut(&mut self) -> &mut $int_type {
+    // SAFETY: the mutable reference guarantees unique ownership
     unsafe { &mut *self.v.get() }
 }
 }
@@ -1299,6 +1313,7 @@ assert_eq!(some_var.load(Ordering::Relaxed), 5);
 #[inline]
 #[$stable]
 pub fn load(&self, order: Ordering) -> $int_type {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { atomic_load(self.v.get(), order) }
 }
 }
@@ -1333,6 +1348,7 @@ assert_eq!(some_var.load(Ordering::Relaxed), 10);
 #[inline]
 #[$stable]
 pub fn store(&self, val: $int_type, order: Ordering) {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { atomic_store(self.v.get(), val, order); }
 }
 }
@@ -1363,6 +1379,7 @@ assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
 #[$stable]
 #[$cfg_cas]
 pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { atomic_swap(self.v.get(), val, order) }
 }
 }
@@ -1465,6 +1482,7 @@ assert_eq!(some_var.load(Ordering::Relaxed), 10);
                          new: $int_type,
                          success: Ordering,
                          failure: Ordering) -> Result<$int_type, $int_type> {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { atomic_compare_exchange(self.v.get(), current, new, success, failure) }
 }
 }
@@ -1517,6 +1535,7 @@ loop {
                          new: $int_type,
                          success: Ordering,
                          failure: Ordering) -> Result<$int_type, $int_type> {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe {
         atomic_compare_exchange_weak(self.v.get(), current, new, success, failure)
     }
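Since `compare_exchange_weak` may fail spuriously, the canonical caller (echoing the `loop` shown in the doc comment above) retries until the exchange succeeds:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let val = AtomicUsize::new(4);
    let mut old = val.load(Ordering::Relaxed);
    loop {
        let new = old * 2;
        match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
            Ok(_) => break,
            // A spurious failure reports the current value; retry with it.
            Err(x) => old = x,
        }
    }
    assert_eq!(val.load(Ordering::Relaxed), 8);
}
```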
@@ -1551,6 +1570,7 @@ assert_eq!(foo.load(Ordering::SeqCst), 10);
 #[$stable]
 #[$cfg_cas]
 pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { atomic_add(self.v.get(), val, order) }
 }
 }
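A matching safe-caller sketch for the fetch-and-modify family (mirroring the `foo` asserts in the doc comments above):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let foo = AtomicUsize::new(0);
    // Returns the previous value; the addition wraps on overflow.
    assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
    assert_eq!(foo.load(Ordering::SeqCst), 10);
}
```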
@@ -1583,6 +1603,7 @@ assert_eq!(foo.load(Ordering::SeqCst), 10);
 #[$stable]
 #[$cfg_cas]
 pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { atomic_sub(self.v.get(), val, order) }
 }
 }
@@ -1618,6 +1639,7 @@ assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
 #[$stable]
 #[$cfg_cas]
 pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { atomic_and(self.v.get(), val, order) }
 }
 }
@@ -1654,6 +1676,7 @@ assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
 #[$stable_nand]
 #[$cfg_cas]
 pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { atomic_nand(self.v.get(), val, order) }
 }
 }
@@ -1689,6 +1712,7 @@ assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
 #[$stable]
 #[$cfg_cas]
 pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { atomic_or(self.v.get(), val, order) }
 }
 }
@@ -1724,6 +1748,7 @@ assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
 #[$stable]
 #[$cfg_cas]
 pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { atomic_xor(self.v.get(), val, order) }
 }
 }
@@ -1835,6 +1860,7 @@ assert!(max_foo == 42);
 issue = "48655")]
 #[$cfg_cas]
 pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { $max_fn(self.v.get(), val, order) }
 }
 }
@@ -1887,6 +1913,7 @@ assert_eq!(min_foo, 12);
 issue = "48655")]
 #[$cfg_cas]
 pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
+    // SAFETY: data races are prevented by atomic intrinsics
     unsafe { $min_fn(self.v.get(), val, order) }
 }
 }
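At the time of this diff, `fetch_max`/`fetch_min` were unstable (tracking issue #48655), so exercising them required a nightly feature gate; a sketch under that assumption (on later stable toolchains the gate can simply be dropped):

```rust
#![feature(atomic_min_max)] // nightly-only when this PR was written
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let min_foo = AtomicUsize::new(23);
    // Stores the minimum of the current and given values,
    // returning the previous value.
    assert_eq!(min_foo.fetch_min(12, Ordering::SeqCst), 23);
    assert_eq!(min_foo.load(Ordering::SeqCst), 12);
}
```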
@@ -2429,7 +2456,9 @@ pub fn fence(order: Ordering) {
 // to conventionally implement fences at
 // https://github.com/WebAssembly/tool-conventions/issues/59. We should
 // follow that discussion and implement a solution when one comes about!

 #[cfg(not(target_arch = "wasm32"))]
+// SAFETY: using an atomic fence is safe
 unsafe {
     match order {
         Acquire => intrinsics::atomic_fence_acq(),
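For reference, the kind of code this function enables: a release/acquire fence pair synchronizing through a relaxed flag (a sketch of the standard fence pattern, not taken from this diff):

```rust
use std::sync::atomic::{fence, AtomicBool, Ordering};

static FLAG: AtomicBool = AtomicBool::new(false);
static mut DATA: u32 = 0;

fn writer() {
    unsafe { DATA = 42; }      // plain, non-atomic write
    fence(Ordering::Release);  // everything above happens-before...
    FLAG.store(true, Ordering::Relaxed);
}

fn reader() {
    if FLAG.load(Ordering::Relaxed) {
        fence(Ordering::Acquire); // ...everything after this fence
        assert_eq!(unsafe { DATA }, 42);
    }
}

fn main() {
    // Single-threaded driver; across threads, the fence pair makes
    // the DATA accesses race-free once the flag is observed.
    writer();
    reader();
}
```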
@@ -2518,6 +2547,7 @@ pub fn fence(order: Ordering) {
 #[inline]
 #[stable(feature = "compiler_fences", since = "1.21.0")]
 pub fn compiler_fence(order: Ordering) {
+    // SAFETY: doesn't compile to machine code
     unsafe {
         match order {
             Acquire => intrinsics::atomic_singlethreadfence_acq(),
Review thread on this unsafe block:

- Are any of these intrinsics unsafe for a reason? Invoking the […]
- Found this: https://doc.rust-lang.org/1.1.0/book/intrinsics.html. The docs are old, but I think they're still true. Meaning, I don't think there's a way around it unless you add a wrapper function, but I'm not sure that adds anything.
- We have safe intrinsics these days, like […]
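On the question above: the intrinsic is unsafe only because all intrinsics are declared unsafe as a language rule, not because misuse can corrupt memory; the safe wrapper is exactly the way around it. A usage sketch showing what the wrapper buys callers (it only constrains compiler reordering and emits no machine instruction):

```rust
use std::sync::atomic::{compiler_fence, AtomicBool, AtomicUsize, Ordering};

static IMPORTANT_VARIABLE: AtomicUsize = AtomicUsize::new(0);
static IS_READY: AtomicBool = AtomicBool::new(false);

fn main() {
    IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
    // Keeps the compiler from reordering the two relaxed stores
    // (e.g., around a same-thread signal handler). No fence
    // instruction is emitted, which is what the SAFETY comment notes.
    compiler_fence(Ordering::Release);
    IS_READY.store(true, Ordering::Relaxed);
}
```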
Review thread on the "data races are prevented by atomic intrinsics" comments:

- I believe the reason this needs an unsafe block is the passing of the raw pointer, and we know that raw pointer is valid because we got it from a reference.
- Just noticed this. Would saying something like […] be better? Or should we just leave out the first bit about atomics preventing data races?
- Atomic and non-atomic accesses still race with each other, right?
- That's true, but I think in the context of `AtomicBool`, any accesses are going to be atomic or unique because of `mut` access, which is why it's safe.
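A sketch of that last argument (illustrative only): while the value is shared, the API exposes nothing but atomic operations; non-atomic access requires `&mut`, which the borrow checker grants only when access is exclusive.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let flag = Arc::new(AtomicBool::new(false));
    let shared = Arc::clone(&flag);
    // Shared phase: every access goes through an atomic operation.
    let t = thread::spawn(move || shared.store(true, Ordering::SeqCst));
    t.join().unwrap();

    // Exclusive phase: with the other handle gone, we recover unique
    // ownership, so the non-atomic access in get_mut cannot race.
    let mut flag = Arc::try_unwrap(flag).unwrap();
    assert!(*flag.get_mut());
}
```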