
Document unsafe in libcore #66506


Closed · wants to merge 12 commits
34 changes: 32 additions & 2 deletions src/libcore/sync/atomic.rs
@@ -112,8 +112,6 @@
//! println!("live threads: {}", old_thread_count + 1);
//! ```

// ignore-tidy-undocumented-unsafe

#![stable(feature = "rust1", since = "1.0.0")]
#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(dead_code))]
#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(unused_imports))]
@@ -355,6 +353,7 @@ impl AtomicBool {
#[inline]
#[stable(feature = "atomic_access", since = "1.15.0")]
pub fn get_mut(&mut self) -> &mut bool {
// SAFETY: the mutable reference guarantees unique ownership
unsafe { &mut *(self.v.get() as *mut bool) }
}

@@ -405,6 +404,7 @@ impl AtomicBool {
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn load(&self, order: Ordering) -> bool {
// SAFETY: data races are prevented by atomic intrinsics
Contributor:
I believe the reason this needs an unsafe block is the passing of the raw pointer, and we know that raw pointer is valid because we got it from a reference

Contributor Author:

Just noticed this. Would saying something like

SAFETY: any data races are prevented by atomic intrinsics, and the raw pointer passed in is valid because we got it from a reference

be better? Or should we just leave out the first bit about atomics preventing data races?

Member:

Atomic and non-atomic accesses still race with each other, right?

Contributor Author:

That's true, but I think in the context of AtomicBool, any accesses are going to be either atomic or unique (because of `&mut` access), which is why it's safe.
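The argument above can be sketched in safe code. This is a minimal illustration, not part of the PR: shared accesses to an `AtomicBool` can only go through atomic operations, and a non-atomic access is only reachable through `&mut`, which the borrow checker makes exclusive.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

fn main() {
    static FLAG: AtomicBool = AtomicBool::new(false);

    // Shared accesses all go through atomic operations,
    // so they cannot data-race with each other.
    let handles: Vec<_> = (0..4)
        .map(|_| thread::spawn(|| FLAG.store(true, Ordering::SeqCst)))
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert!(FLAG.load(Ordering::SeqCst));

    // The only non-atomic access is through `&mut` (e.g. `get_mut`),
    // which is guaranteed unique, so it cannot race either.
    let mut local = AtomicBool::new(false);
    *local.get_mut() = true;
    assert!(local.load(Ordering::SeqCst));
}
```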

unsafe { atomic_load(self.v.get(), order) != 0 }
Contributor:

We could change the atomic_* intrinsics to take references instead of raw pointers and make them safe. AFAICT, the only reason they're unsafe is that they take raw pointers.

Contributor (@hanna-kruppe, Nov 19, 2019):

There are other reasons. Atomic operations require natural alignment (align = size), which is sometimes stricter than the "ABI alignment" required by references (e.g., for u64 on some 32-bit targets). There may be more caveats I'm forgetting right now.
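The alignment point can be checked directly. As a sketch: the atomic wrapper types carry the natural alignment themselves (size == align), whereas the plain integer's alignment is target-dependent and may be smaller on 32-bit targets, so a `&u64` is not automatically valid for an atomic access.

```rust
use std::mem::{align_of, size_of};
use std::sync::atomic::AtomicU64;

fn main() {
    // Atomic operations need "natural" alignment: align == size.
    // AtomicU64 is declared with that alignment, so this always holds:
    assert_eq!(size_of::<AtomicU64>(), align_of::<AtomicU64>());

    // The plain integer's ABI alignment is target-dependent; on some
    // 32-bit targets align_of::<u64>() is 4 while AtomicU64 still needs 8.
    println!("u64:       size={}, align={}", size_of::<u64>(), align_of::<u64>());
    println!("AtomicU64: size={}, align={}", size_of::<AtomicU64>(), align_of::<AtomicU64>());
}
```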

Member:

There's also #55005, which basically requires the use of raw pointers for some kinds of concurrent code (the kind that synchronizes to handle deallocation).

Contributor:

Heh, OK. So alignment and dangling-ness are what should be mentioned here, not data races.

Member:

Data races are also relevant: this must not race with a non-atomic access to the same memory.

Contributor Author:

So what would be the best thing to say here? I'm not sure what kind of guarantees we can make about UnsafeCell.

Contributor Author:

I created another pull request with a smaller number of changes (including core::sync::atomic). I'm going to close this request, so it might be better to finish this discussion there: #66564

}

@@ -437,6 +437,7 @@ impl AtomicBool {
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn store(&self, val: bool, order: Ordering) {
// SAFETY: data races are prevented by atomic intrinsics
unsafe {
atomic_store(self.v.get(), val as u8, order);
}
@@ -468,6 +469,7 @@ impl AtomicBool {
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(target_has_atomic = "8")]
pub fn swap(&self, val: bool, order: Ordering) -> bool {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { atomic_swap(self.v.get(), val as u8, order) != 0 }
}

@@ -562,6 +564,7 @@ impl AtomicBool {
success: Ordering,
failure: Ordering)
-> Result<bool, bool> {
// SAFETY: data races are prevented by atomic intrinsics
match unsafe {
atomic_compare_exchange(self.v.get(), current as u8, new as u8, success, failure)
} {
@@ -618,6 +621,7 @@ impl AtomicBool {
success: Ordering,
failure: Ordering)
-> Result<bool, bool> {
// SAFETY: data races are prevented by atomic intrinsics
match unsafe {
atomic_compare_exchange_weak(self.v.get(), current as u8, new as u8, success, failure)
} {
@@ -664,6 +668,7 @@ impl AtomicBool {
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(target_has_atomic = "8")]
pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { atomic_and(self.v.get(), val as u8, order) != 0 }
}

@@ -759,6 +764,7 @@ impl AtomicBool {
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(target_has_atomic = "8")]
pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { atomic_or(self.v.get(), val as u8, order) != 0 }
}

@@ -800,6 +806,7 @@ impl AtomicBool {
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(target_has_atomic = "8")]
pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { atomic_xor(self.v.get(), val as u8, order) != 0 }
}
}
@@ -839,6 +846,7 @@ impl<T> AtomicPtr<T> {
#[inline]
#[stable(feature = "atomic_access", since = "1.15.0")]
pub fn get_mut(&mut self) -> &mut *mut T {
// SAFETY: the mutable reference guarantees unique ownership
unsafe { &mut *self.p.get() }
}

@@ -890,6 +898,7 @@ impl<T> AtomicPtr<T> {
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn load(&self, order: Ordering) -> *mut T {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { atomic_load(self.p.get() as *mut usize, order) as *mut T }
}

@@ -924,6 +933,7 @@ impl<T> AtomicPtr<T> {
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn store(&self, ptr: *mut T, order: Ordering) {
// SAFETY: data races are prevented by atomic intrinsics
unsafe {
atomic_store(self.p.get() as *mut usize, ptr as usize, order);
}
@@ -957,6 +967,7 @@ impl<T> AtomicPtr<T> {
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(target_has_atomic = "ptr")]
pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { atomic_swap(self.p.get() as *mut usize, ptr as usize, order) as *mut T }
}

@@ -1040,6 +1051,7 @@ impl<T> AtomicPtr<T> {
success: Ordering,
failure: Ordering)
-> Result<*mut T, *mut T> {
// SAFETY: data races are prevented by atomic intrinsics
unsafe {
let res = atomic_compare_exchange(self.p.get() as *mut usize,
current as usize,
@@ -1100,6 +1112,7 @@ impl<T> AtomicPtr<T> {
success: Ordering,
failure: Ordering)
-> Result<*mut T, *mut T> {
// SAFETY: data races are prevented by atomic intrinsics
unsafe {
let res = atomic_compare_exchange_weak(self.p.get() as *mut usize,
current as usize,
@@ -1245,6 +1258,7 @@ assert_eq!(some_var.load(Ordering::SeqCst), 5);
#[inline]
#[$stable_access]
pub fn get_mut(&mut self) -> &mut $int_type {
// SAFETY: the mutable reference guarantees unique ownership
unsafe { &mut *self.v.get() }
}
}
@@ -1299,6 +1313,7 @@ assert_eq!(some_var.load(Ordering::Relaxed), 5);
#[inline]
#[$stable]
pub fn load(&self, order: Ordering) -> $int_type {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { atomic_load(self.v.get(), order) }
}
}
@@ -1333,6 +1348,7 @@ assert_eq!(some_var.load(Ordering::Relaxed), 10);
#[inline]
#[$stable]
pub fn store(&self, val: $int_type, order: Ordering) {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { atomic_store(self.v.get(), val, order); }
}
}
@@ -1363,6 +1379,7 @@ assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
#[$stable]
#[$cfg_cas]
pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { atomic_swap(self.v.get(), val, order) }
}
}
@@ -1465,6 +1482,7 @@ assert_eq!(some_var.load(Ordering::Relaxed), 10);
new: $int_type,
success: Ordering,
failure: Ordering) -> Result<$int_type, $int_type> {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { atomic_compare_exchange(self.v.get(), current, new, success, failure) }
}
}
@@ -1517,6 +1535,7 @@ loop {
new: $int_type,
success: Ordering,
failure: Ordering) -> Result<$int_type, $int_type> {
// SAFETY: data races are prevented by atomic intrinsics
unsafe {
atomic_compare_exchange_weak(self.v.get(), current, new, success, failure)
}
@@ -1551,6 +1570,7 @@ assert_eq!(foo.load(Ordering::SeqCst), 10);
#[$stable]
#[$cfg_cas]
pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { atomic_add(self.v.get(), val, order) }
}
}
@@ -1583,6 +1603,7 @@ assert_eq!(foo.load(Ordering::SeqCst), 10);
#[$stable]
#[$cfg_cas]
pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { atomic_sub(self.v.get(), val, order) }
}
}
@@ -1618,6 +1639,7 @@ assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
#[$stable]
#[$cfg_cas]
pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { atomic_and(self.v.get(), val, order) }
}
}
@@ -1654,6 +1676,7 @@ assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
#[$stable_nand]
#[$cfg_cas]
pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { atomic_nand(self.v.get(), val, order) }
}
}
@@ -1689,6 +1712,7 @@ assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
#[$stable]
#[$cfg_cas]
pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { atomic_or(self.v.get(), val, order) }
}
}
@@ -1724,6 +1748,7 @@ assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
#[$stable]
#[$cfg_cas]
pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { atomic_xor(self.v.get(), val, order) }
}
}
@@ -1835,6 +1860,7 @@ assert!(max_foo == 42);
issue = "48655")]
#[$cfg_cas]
pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { $max_fn(self.v.get(), val, order) }
}
}
@@ -1887,6 +1913,7 @@ assert_eq!(min_foo, 12);
issue = "48655")]
#[$cfg_cas]
pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
// SAFETY: data races are prevented by atomic intrinsics
unsafe { $min_fn(self.v.get(), val, order) }
}
}
@@ -2429,7 +2456,9 @@ pub fn fence(order: Ordering) {
// to conventionally implement fences at
// https://github.com/WebAssembly/tool-conventions/issues/59. We should
// follow that discussion and implement a solution when one comes about!

#[cfg(not(target_arch = "wasm32"))]
// SAFETY: using an atomic fence is safe
unsafe {
match order {
Acquire => intrinsics::atomic_fence_acq(),
@@ -2518,6 +2547,7 @@ pub fn fence(order: Ordering) {
#[inline]
#[stable(feature = "compiler_fences", since = "1.21.0")]
pub fn compiler_fence(order: Ordering) {
// SAFETY: doesn't compile to machine code
unsafe {
match order {
Acquire => intrinsics::atomic_singlethreadfence_acq(),
Contributor:

Are any of these intrinsics unsafe for a reason? Invoking the compiler_fence function is safe, so the intrinsics could be safe.

Contributor Author:

Found this:

As with any other FFI functions, these are always unsafe to call.

https://doc.rust-lang.org/1.1.0/book/intrinsics.html

The docs are old, but I think they're still true. Meaning, I don't think there's a way around it unless you add a wrapper function, and I'm not sure that adds anything.

Member:

We have safe intrinsics these days, like size_of.
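For context, both of the functions mentioned in this thread are callable from safe code today; this small sketch just demonstrates that, and is not part of the PR:

```rust
use std::mem;
use std::sync::atomic::{compiler_fence, Ordering};

fn main() {
    // `mem::size_of` is a safe function backed by a compiler intrinsic.
    assert_eq!(mem::size_of::<u32>(), 4);

    // `compiler_fence` is likewise safe to invoke; the `unsafe` block in its
    // body exists only because the underlying intrinsic is declared unsafe.
    compiler_fence(Ordering::SeqCst);
}
```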
