core/sync/atomic.rs
1//! Atomic types
2//!
3//! Atomic types provide primitive shared-memory communication between
4//! threads, and are the building blocks of other concurrent
5//! types.
6//!
7//! This module defines atomic versions of a select number of primitive
8//! types, including [`AtomicBool`], [`AtomicIsize`], [`AtomicUsize`],
9//! [`AtomicI8`], [`AtomicU16`], etc.
10//! Atomic types present operations that, when used correctly, synchronize
11//! updates between threads.
12//!
13//! Atomic variables are safe to share between threads (they implement [`Sync`])
14//! but they do not themselves provide the mechanism for sharing and follow the
15//! [threading model](../../../std/thread/index.html#the-threading-model) of Rust.
16//! The most common way to share an atomic variable is to put it into an [`Arc`][arc] (an
17//! atomically-reference-counted shared pointer).
18//!
19//! [arc]: ../../../std/sync/struct.Arc.html
20//!
21//! Atomic types may be stored in static variables, initialized using
22//! the constant initializers like [`AtomicBool::new`]. Atomic statics
23//! are often used for lazy global initialization.
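//!
//! For example, an atomic static can serve as a simple one-time initialization flag
//! (a minimal sketch of the pattern; the names here are purely illustrative):
//!
//! ```
//! use std::sync::atomic::{AtomicBool, Ordering};
//!
//! static INITIALIZED: AtomicBool = AtomicBool::new(false);
//!
//! fn init_once() {
//!     // `swap` returns the previous value, so only the first caller observes `false`.
//!     if !INITIALIZED.swap(true, Ordering::AcqRel) {
//!         // ... perform the one-time setup here ...
//!     }
//! }
//!
//! init_once();
//! assert!(INITIALIZED.load(Ordering::Relaxed));
//! ```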
24//!
25//! ## Memory model for atomic accesses
26//!
27//! Rust atomics currently follow the same rules as [C++20 atomics][cpp], specifically the rules
28//! from the [`intro.races`][cpp-intro.races] section, without the "consume" memory ordering. Since
29//! C++ uses an object-based memory model whereas Rust is access-based, a bit of translation work
30//! has to be done to apply the C++ rules to Rust: whenever C++ talks about "the value of an
31//! object", we understand that to mean the resulting bytes obtained when doing a read. When the C++
32//! standard talks about "the value of an atomic object", this refers to the result of doing an
33//! atomic load (via the operations provided in this module). A "modification of an atomic object"
34//! refers to an atomic store.
35//!
36//! The end result is *almost* equivalent to saying that creating a *shared reference* to one of the
37//! Rust atomic types corresponds to creating an `atomic_ref` in C++, with the `atomic_ref` being
38//! destroyed when the lifetime of the shared reference ends. The main difference is that Rust
//! permits concurrent atomic and non-atomic reads to the same memory; those cause no issue in the
//! C++ memory model, and are only forbidden in C++ because memory is partitioned into "atomic
//! objects" and "non-atomic objects" (with `atomic_ref` temporarily converting a non-atomic object
//! into an atomic object).
43//!
44//! The most important aspect of this model is that *data races* are undefined behavior. A data race
45//! is defined as conflicting non-synchronized accesses where at least one of the accesses is
46//! non-atomic. Here, accesses are *conflicting* if they affect overlapping regions of memory and at
47//! least one of them is a write. (A `compare_exchange` or `compare_exchange_weak` that does not
48//! succeed is not considered a write.) They are *non-synchronized* if neither of them
49//! *happens-before* the other, according to the happens-before order of the memory model.
50//!
//! The other possible cause of undefined behavior in the memory model is mixed-size accesses: Rust
52//! inherits the C++ limitation that non-synchronized conflicting atomic accesses may not partially
53//! overlap. In other words, every pair of non-synchronized atomic accesses must be either disjoint,
54//! access the exact same memory (including using the same access size), or both be reads.
55//!
56//! Each atomic access takes an [`Ordering`] which defines how the operation interacts with the
57//! happens-before order. These orderings behave the same as the corresponding [C++20 atomic
58//! orderings][cpp_memory_order]. For more information, see the [nomicon].
59//!
60//! [cpp]: https://en.cppreference.com/w/cpp/atomic
61//! [cpp-intro.races]: https://timsong-cpp.github.io/cppwp/n4868/intro.multithread#intro.races
62//! [cpp_memory_order]: https://en.cppreference.com/w/cpp/atomic/memory_order
63//! [nomicon]: ../../../nomicon/atomics.html
64//!
//! ```rust,no_run undefined_behavior
//! use std::sync::atomic::{AtomicU16, AtomicU8, Ordering};
//! use std::mem::transmute;
//! use std::thread;
//!
//! let atomic = AtomicU16::new(0);
//!
//! thread::scope(|s| {
//!     // This is UB: conflicting non-synchronized accesses, at least one of which is non-atomic.
//!     s.spawn(|| atomic.store(1, Ordering::Relaxed)); // atomic store
//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) }); // non-atomic write
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: the accesses do not conflict (as none of them performs any modification).
//!     // In C++ this would be disallowed since creating an `atomic_ref` precludes
//!     // further non-atomic accesses, but Rust does not have that limitation.
//!     s.spawn(|| atomic.load(Ordering::Relaxed)); // atomic load
//!     s.spawn(|| unsafe { atomic.as_ptr().read() }); // non-atomic read
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: `join` synchronizes the code in a way such that the atomic
//!     // store happens-before the non-atomic write.
//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed)); // atomic store
//!     handle.join().expect("thread won't panic"); // synchronize
//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) }); // non-atomic write
//! });
//!
//! thread::scope(|s| {
//!     // This is UB: non-synchronized conflicting differently-sized atomic accesses.
//!     s.spawn(|| atomic.store(1, Ordering::Relaxed));
//!     s.spawn(|| unsafe {
//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
//!         differently_sized.store(2, Ordering::Relaxed);
//!     });
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: `join` synchronizes the code in a way such that
//!     // the 1-byte store happens-before the 2-byte store.
//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed));
//!     handle.join().expect("thread won't panic");
//!     s.spawn(|| unsafe {
//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
//!         differently_sized.store(2, Ordering::Relaxed);
//!     });
//! });
//! ```
114//!
115//! # Portability
116//!
117//! All atomic types in this module are guaranteed to be [lock-free] if they're
118//! available. This means they don't internally acquire a global mutex. Atomic
119//! types and operations are not guaranteed to be wait-free. This means that
120//! operations like `fetch_or` may be implemented with a compare-and-swap loop.
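//!
//! For illustration, a `fetch_or`-style operation can be emulated with such a
//! compare-and-swap loop (a minimal sketch; `fetch_or_with_cas` is just an
//! illustrative helper, not how any particular platform implements it):
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! // Illustrative helper: OR `val` into `atomic` with a CAS loop, returning the old value.
//! fn fetch_or_with_cas(atomic: &AtomicUsize, val: usize) -> usize {
//!     let mut old = atomic.load(Ordering::Relaxed);
//!     loop {
//!         let new = old | val;
//!         match atomic.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
//!             Ok(prev) => return prev,
//!             Err(prev) => old = prev,
//!         }
//!     }
//! }
//!
//! let x = AtomicUsize::new(0b0011);
//! assert_eq!(fetch_or_with_cas(&x, 0b0110), 0b0011);
//! assert_eq!(x.load(Ordering::Relaxed), 0b0111);
//! ```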
121//!
122//! Atomic operations may be implemented at the instruction layer with
//! larger-size atomics. For example, some platforms use 4-byte atomic
//! instructions to implement `AtomicI8`. Note that this emulation should not
//! affect the correctness of code; it's just something to be aware of.
126//!
127//! The atomic types in this module might not be available on all platforms. The
128//! atomic types here are all widely available, however, and can generally be
//! relied upon to exist. Some notable exceptions are:
130//!
//! * PowerPC and MIPS platforms with 32-bit pointers do not have `AtomicU64` or
//!   `AtomicI64` types.
//! * ARM platforms like `armv5te` that don't target Linux only provide `load`
//!   and `store` operations, and do not support Compare and Swap (CAS)
//!   operations, such as `swap`, `fetch_add`, etc. Additionally on Linux,
//!   these CAS operations are implemented via [operating system support], which
//!   may come with a performance penalty.
//! * ARM targets with `thumbv6m` only provide `load` and `store` operations,
//!   and do not support Compare and Swap (CAS) operations, such as `swap`,
//!   `fetch_add`, etc.
141//!
142//! [operating system support]: https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt
143//!
144//! Note that future platforms may be added that also do not have support for
145//! some atomic operations. Maximally portable code will want to be careful
146//! about which atomic types are used. `AtomicUsize` and `AtomicIsize` are
147//! generally the most portable, but even then they're not available everywhere.
148//! For reference, the `std` library requires `AtomicBool`s and pointer-sized atomics, although
149//! `core` does not.
150//!
151//! The `#[cfg(target_has_atomic)]` attribute can be used to conditionally
152//! compile based on the target's supported bit widths. It is a key-value
153//! option set for each supported size, with values "8", "16", "32", "64",
154//! "128", and "ptr" for pointer-sized atomics.
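//!
//! For example, a counter can fall back to a narrower atomic type when 64-bit
//! atomics are unavailable (a minimal sketch of the pattern; `COUNTER` is just an
//! illustrative static):
//!
//! ```
//! use std::sync::atomic::Ordering;
//!
//! #[cfg(target_has_atomic = "64")]
//! static COUNTER: std::sync::atomic::AtomicU64 = std::sync::atomic::AtomicU64::new(0);
//! #[cfg(not(target_has_atomic = "64"))]
//! static COUNTER: std::sync::atomic::AtomicU32 = std::sync::atomic::AtomicU32::new(0);
//!
//! COUNTER.fetch_add(1, Ordering::Relaxed);
//! ```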
155//!
156//! [lock-free]: https://en.wikipedia.org/wiki/Non-blocking_algorithm
157//!
158//! # Atomic accesses to read-only memory
159//!
160//! In general, *all* atomic accesses on read-only memory are undefined behavior. For instance, attempting
161//! to do a `compare_exchange` that will definitely fail (making it conceptually a read-only
162//! operation) can still cause a segmentation fault if the underlying memory page is mapped read-only. Since
163//! atomic `load`s might be implemented using compare-exchange operations, even a `load` can fault
164//! on read-only memory.
165//!
166//! For the purpose of this section, "read-only memory" is defined as memory that is read-only in
167//! the underlying target, i.e., the pages are mapped with a read-only flag and any attempt to write
168//! will cause a page fault. In particular, an `&u128` reference that points to memory that is
169//! read-write mapped is *not* considered to point to "read-only memory". In Rust, almost all memory
170//! is read-write; the only exceptions are memory created by `const` items or `static` items without
171//! interior mutability, and memory that was specifically marked as read-only by the operating
172//! system via platform-specific APIs.
173//!
174//! As an exception from the general rule stated above, "sufficiently small" atomic loads with
175//! `Ordering::Relaxed` are implemented in a way that works on read-only memory, and are hence not
176//! undefined behavior. The exact size limit for what makes a load "sufficiently small" varies
177//! depending on the target:
178//!
179//! | `target_arch` | Size limit |
180//! |---------------|---------|
181//! | `x86`, `arm`, `mips`, `mips32r6`, `powerpc`, `riscv32`, `sparc`, `hexagon` | 4 bytes |
182//! | `x86_64`, `aarch64`, `loongarch64`, `mips64`, `mips64r6`, `powerpc64`, `riscv64`, `sparc64`, `s390x` | 8 bytes |
183//!
//! Atomic loads that are larger than this limit, atomic loads with an ordering other
//! than `Relaxed`, and *all* atomic loads on targets not listed in the table might still
//! work on read-only memory under certain conditions, but that is not a stable guarantee and
//! should not be relied upon.
188//!
189//! If you need to do an acquire load on read-only memory, you can do a relaxed load followed by an
190//! acquire fence instead.
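//!
//! For example (a minimal sketch of that pattern; the helper name is illustrative):
//!
//! ```
//! use std::sync::atomic::{fence, AtomicU32, Ordering};
//!
//! fn load_acquire_from_readonly(shared: &AtomicU32) -> u32 {
//!     // A sufficiently small relaxed load is guaranteed to work on read-only memory...
//!     let value = shared.load(Ordering::Relaxed);
//!     // ...and the acquire fence provides the acquire ordering for that load.
//!     fence(Ordering::Acquire);
//!     value
//! }
//!
//! let x = AtomicU32::new(42);
//! assert_eq!(load_acquire_from_readonly(&x), 42);
//! ```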
191//!
192//! # Examples
193//!
194//! A simple spinlock:
195//!
//! ```ignore-wasm
//! use std::sync::Arc;
//! use std::sync::atomic::{AtomicUsize, Ordering};
//! use std::{hint, thread};
//!
//! fn main() {
//!     let spinlock = Arc::new(AtomicUsize::new(1));
//!
//!     let spinlock_clone = Arc::clone(&spinlock);
//!
//!     let thread = thread::spawn(move || {
//!         spinlock_clone.store(0, Ordering::Release);
//!     });
//!
//!     // Wait for the other thread to release the lock
//!     while spinlock.load(Ordering::Acquire) != 0 {
//!         hint::spin_loop();
//!     }
//!
//!     if let Err(panic) = thread.join() {
//!         println!("Thread had an error: {panic:?}");
//!     }
//! }
//! ```
220//!
221//! Keep a global count of live threads:
222//!
223//! ```
224//! use std::sync::atomic::{AtomicUsize, Ordering};
225//!
226//! static GLOBAL_THREAD_COUNT: AtomicUsize = AtomicUsize::new(0);
227//!
228//! // Note that Relaxed ordering doesn't synchronize anything
229//! // except the global thread counter itself.
230//! let old_thread_count = GLOBAL_THREAD_COUNT.fetch_add(1, Ordering::Relaxed);
231//! // Note that this number may not be true at the moment of printing
//! // because some other thread may have changed the static value already.
233//! println!("live threads: {}", old_thread_count + 1);
234//! ```
235
236#![stable(feature = "rust1", since = "1.0.0")]
237#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(dead_code))]
238#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(unused_imports))]
239#![rustc_diagnostic_item = "atomic_mod"]
240// Clippy complains about the pattern of "safe function calling unsafe function taking pointers".
241// This happens with AtomicPtr intrinsics but is fine, as the pointers clippy is concerned about
242// are just normal values that get loaded/stored, but not dereferenced.
243#![allow(clippy::not_unsafe_ptr_arg_deref)]
244
245use self::Ordering::*;
246use crate::cell::UnsafeCell;
247use crate::hint::spin_loop;
248use crate::intrinsics::AtomicOrdering as AO;
249use crate::{fmt, intrinsics};
250
251trait Sealed {}
252
253/// A marker trait for primitive types which can be modified atomically.
254///
255/// This is an implementation detail for <code>[Atomic]\<T></code> which may disappear or be replaced at any time.
256///
257/// # Safety
258///
259/// Types implementing this trait must be primitives that can be modified atomically.
260///
261/// The associated `Self::AtomicInner` type must have the same size and bit validity as `Self`,
262/// but may have a higher alignment requirement, so the following `transmute`s are sound:
263///
264/// - `&mut Self::AtomicInner` as `&mut Self`
265/// - `Self` as `Self::AtomicInner` or the reverse
266#[unstable(
267 feature = "atomic_internals",
268 reason = "implementation detail which may disappear or be replaced at any time",
269 issue = "none"
270)]
271#[expect(private_bounds)]
272pub unsafe trait AtomicPrimitive: Sized + Copy + Sealed {
273 /// Temporary implementation detail.
274 type AtomicInner: Sized;
275}
276
277macro impl_atomic_primitive(
278 $Atom:ident $(<$T:ident>)? ($Primitive:ty),
279 size($size:literal),
280 align($align:literal) $(,)?
281) {
282 impl $(<$T>)? Sealed for $Primitive {}
283
284 #[unstable(
285 feature = "atomic_internals",
286 reason = "implementation detail which may disappear or be replaced at any time",
287 issue = "none"
288 )]
289 #[cfg(target_has_atomic_load_store = $size)]
290 unsafe impl $(<$T>)? AtomicPrimitive for $Primitive {
291 type AtomicInner = $Atom $(<$T>)?;
292 }
293}
294
295impl_atomic_primitive!(AtomicBool(bool), size("8"), align(1));
296impl_atomic_primitive!(AtomicI8(i8), size("8"), align(1));
297impl_atomic_primitive!(AtomicU8(u8), size("8"), align(1));
298impl_atomic_primitive!(AtomicI16(i16), size("16"), align(2));
299impl_atomic_primitive!(AtomicU16(u16), size("16"), align(2));
300impl_atomic_primitive!(AtomicI32(i32), size("32"), align(4));
301impl_atomic_primitive!(AtomicU32(u32), size("32"), align(4));
302impl_atomic_primitive!(AtomicI64(i64), size("64"), align(8));
303impl_atomic_primitive!(AtomicU64(u64), size("64"), align(8));
304impl_atomic_primitive!(AtomicI128(i128), size("128"), align(16));
305impl_atomic_primitive!(AtomicU128(u128), size("128"), align(16));
306
307#[cfg(target_pointer_width = "16")]
308impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(2));
309#[cfg(target_pointer_width = "32")]
310impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(4));
311#[cfg(target_pointer_width = "64")]
312impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(8));
313
314#[cfg(target_pointer_width = "16")]
315impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(2));
316#[cfg(target_pointer_width = "32")]
317impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(4));
318#[cfg(target_pointer_width = "64")]
319impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(8));
320
321#[cfg(target_pointer_width = "16")]
322impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(2));
323#[cfg(target_pointer_width = "32")]
324impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(4));
325#[cfg(target_pointer_width = "64")]
326impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(8));
327
328/// A memory location which can be safely modified from multiple threads.
329///
330/// This has the same size and bit validity as the underlying type `T`. However,
331/// the alignment of this type is always equal to its size, even on targets where
332/// `T` has alignment less than its size.
333///
334/// For more about the differences between atomic types and non-atomic types as
335/// well as information about the portability of this type, please see the
336/// [module-level documentation].
337///
338/// **Note:** This type is only available on platforms that support atomic loads
339/// and stores of `T`.
340///
341/// [module-level documentation]: crate::sync::atomic
342#[unstable(feature = "generic_atomic", issue = "130539")]
343pub type Atomic<T> = <T as AtomicPrimitive>::AtomicInner;
344
345// Some architectures don't have byte-sized atomics, which results in LLVM
346// emulating them using a LL/SC loop. However for AtomicBool we can take
347// advantage of the fact that it only ever contains 0 or 1 and use atomic OR/AND
348// instead, which LLVM can emulate using a larger atomic OR/AND operation.
349//
350// This list should only contain architectures which have word-sized atomic-or/
351// atomic-and instructions but don't natively support byte-sized atomics.
352#[cfg(target_has_atomic = "8")]
353const EMULATE_ATOMIC_BOOL: bool =
354 cfg!(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64"));
355
356/// A boolean type which can be safely shared between threads.
357///
358/// This type has the same size, alignment, and bit validity as a [`bool`].
359///
360/// **Note**: This type is only available on platforms that support atomic
361/// loads and stores of `u8`.
362#[cfg(target_has_atomic_load_store = "8")]
363#[stable(feature = "rust1", since = "1.0.0")]
364#[rustc_diagnostic_item = "AtomicBool"]
365#[repr(C, align(1))]
366pub struct AtomicBool {
367 v: UnsafeCell<u8>,
368}
369
370#[cfg(target_has_atomic_load_store = "8")]
371#[stable(feature = "rust1", since = "1.0.0")]
372impl Default for AtomicBool {
373 /// Creates an `AtomicBool` initialized to `false`.
374 #[inline]
375 fn default() -> Self {
376 Self::new(false)
377 }
378}
379
380// Send is implicitly implemented for AtomicBool.
381#[cfg(target_has_atomic_load_store = "8")]
382#[stable(feature = "rust1", since = "1.0.0")]
383unsafe impl Sync for AtomicBool {}
384
385/// A raw pointer type which can be safely shared between threads.
386///
387/// This type has the same size and bit validity as a `*mut T`.
388///
389/// **Note**: This type is only available on platforms that support atomic
390/// loads and stores of pointers. Its size depends on the target pointer's size.
391#[cfg(target_has_atomic_load_store = "ptr")]
392#[stable(feature = "rust1", since = "1.0.0")]
393#[rustc_diagnostic_item = "AtomicPtr"]
394#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
395#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
396#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
397pub struct AtomicPtr<T> {
398 p: UnsafeCell<*mut T>,
399}
400
401#[cfg(target_has_atomic_load_store = "ptr")]
402#[stable(feature = "rust1", since = "1.0.0")]
403impl<T> Default for AtomicPtr<T> {
404 /// Creates a null `AtomicPtr<T>`.
405 fn default() -> AtomicPtr<T> {
406 AtomicPtr::new(crate::ptr::null_mut())
407 }
408}
409
410#[cfg(target_has_atomic_load_store = "ptr")]
411#[stable(feature = "rust1", since = "1.0.0")]
412unsafe impl<T> Send for AtomicPtr<T> {}
413#[cfg(target_has_atomic_load_store = "ptr")]
414#[stable(feature = "rust1", since = "1.0.0")]
415unsafe impl<T> Sync for AtomicPtr<T> {}
416
417/// Atomic memory orderings
418///
419/// Memory orderings specify the way atomic operations synchronize memory.
/// In the weakest ordering, [`Ordering::Relaxed`], only the memory directly touched by the
421/// operation is synchronized. On the other hand, a store-load pair of [`Ordering::SeqCst`]
422/// operations synchronize other memory while additionally preserving a total order of such
423/// operations across all threads.
424///
425/// Rust's memory orderings are [the same as those of
426/// C++20](https://en.cppreference.com/w/cpp/atomic/memory_order).
427///
428/// For more information see the [nomicon].
429///
430/// [nomicon]: ../../../nomicon/atomics.html
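///
/// For example, a release store paired with an acquire load can publish data from
/// one thread to another (a minimal sketch of a release/acquire handoff; the statics
/// here are purely illustrative):
///
/// ```ignore-wasm
/// use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
/// use std::thread;
///
/// static DATA: AtomicU32 = AtomicU32::new(0);
/// static READY: AtomicBool = AtomicBool::new(false);
///
/// let producer = thread::spawn(|| {
///     DATA.store(123, Ordering::Relaxed);
///     READY.store(true, Ordering::Release); // everything before this store is published
/// });
///
/// // Once the acquire load observes `true`, the store to `DATA` is guaranteed to be visible.
/// while !READY.load(Ordering::Acquire) {
///     std::hint::spin_loop();
/// }
/// assert_eq!(DATA.load(Ordering::Relaxed), 123);
/// producer.join().unwrap();
/// ```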
431#[stable(feature = "rust1", since = "1.0.0")]
432#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
433#[non_exhaustive]
434#[rustc_diagnostic_item = "Ordering"]
435pub enum Ordering {
436 /// No ordering constraints, only atomic operations.
437 ///
438 /// Corresponds to [`memory_order_relaxed`] in C++20.
439 ///
440 /// [`memory_order_relaxed`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Relaxed_ordering
441 #[stable(feature = "rust1", since = "1.0.0")]
442 Relaxed,
443 /// When coupled with a store, all previous operations become ordered
444 /// before any load of this value with [`Acquire`] (or stronger) ordering.
445 /// In particular, all previous writes become visible to all threads
446 /// that perform an [`Acquire`] (or stronger) load of this value.
447 ///
448 /// Notice that using this ordering for an operation that combines loads
449 /// and stores leads to a [`Relaxed`] load operation!
450 ///
451 /// This ordering is only applicable for operations that can perform a store.
452 ///
453 /// Corresponds to [`memory_order_release`] in C++20.
454 ///
455 /// [`memory_order_release`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
456 #[stable(feature = "rust1", since = "1.0.0")]
457 Release,
458 /// When coupled with a load, if the loaded value was written by a store operation with
459 /// [`Release`] (or stronger) ordering, then all subsequent operations
460 /// become ordered after that store. In particular, all subsequent loads will see data
461 /// written before the store.
462 ///
463 /// Notice that using this ordering for an operation that combines loads
464 /// and stores leads to a [`Relaxed`] store operation!
465 ///
466 /// This ordering is only applicable for operations that can perform a load.
467 ///
468 /// Corresponds to [`memory_order_acquire`] in C++20.
469 ///
470 /// [`memory_order_acquire`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
471 #[stable(feature = "rust1", since = "1.0.0")]
472 Acquire,
473 /// Has the effects of both [`Acquire`] and [`Release`] together:
/// For loads it uses [`Acquire`] ordering. For stores it uses [`Release`] ordering.
475 ///
476 /// Notice that in the case of `compare_and_swap`, it is possible that the operation ends up
477 /// not performing any store and hence it has just [`Acquire`] ordering. However,
478 /// `AcqRel` will never perform [`Relaxed`] accesses.
479 ///
480 /// This ordering is only applicable for operations that combine both loads and stores.
481 ///
482 /// Corresponds to [`memory_order_acq_rel`] in C++20.
483 ///
484 /// [`memory_order_acq_rel`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
485 #[stable(feature = "rust1", since = "1.0.0")]
486 AcqRel,
487 /// Like [`Acquire`]/[`Release`]/[`AcqRel`] (for load, store, and load-with-store
488 /// operations, respectively) with the additional guarantee that all threads see all
489 /// sequentially consistent operations in the same order.
490 ///
491 /// Corresponds to [`memory_order_seq_cst`] in C++20.
492 ///
493 /// [`memory_order_seq_cst`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Sequentially-consistent_ordering
494 #[stable(feature = "rust1", since = "1.0.0")]
495 SeqCst,
496}
497
498/// An [`AtomicBool`] initialized to `false`.
499#[cfg(target_has_atomic_load_store = "8")]
500#[stable(feature = "rust1", since = "1.0.0")]
501#[deprecated(
502 since = "1.34.0",
503 note = "the `new` function is now preferred",
504 suggestion = "AtomicBool::new(false)"
505)]
506pub const ATOMIC_BOOL_INIT: AtomicBool = AtomicBool::new(false);
507
508#[cfg(target_has_atomic_load_store = "8")]
509impl AtomicBool {
510 /// Creates a new `AtomicBool`.
511 ///
512 /// # Examples
513 ///
514 /// ```
515 /// use std::sync::atomic::AtomicBool;
516 ///
517 /// let atomic_true = AtomicBool::new(true);
518 /// let atomic_false = AtomicBool::new(false);
519 /// ```
520 #[inline]
521 #[stable(feature = "rust1", since = "1.0.0")]
522 #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
523 #[must_use]
524 pub const fn new(v: bool) -> AtomicBool {
525 AtomicBool { v: UnsafeCell::new(v as u8) }
526 }
527
528 /// Creates a new `AtomicBool` from a pointer.
529 ///
530 /// # Examples
531 ///
532 /// ```
533 /// use std::sync::atomic::{self, AtomicBool};
534 ///
535 /// // Get a pointer to an allocated value
536 /// let ptr: *mut bool = Box::into_raw(Box::new(false));
537 ///
538 /// assert!(ptr.cast::<AtomicBool>().is_aligned());
539 ///
/// {
///     // Create an atomic view of the allocated value
///     let atomic = unsafe { AtomicBool::from_ptr(ptr) };
///
///     // Use `atomic` for atomic operations, possibly share it with other threads
///     atomic.store(true, atomic::Ordering::Relaxed);
/// }
547 ///
548 /// // It's ok to non-atomically access the value behind `ptr`,
549 /// // since the reference to the atomic ended its lifetime in the block above
550 /// assert_eq!(unsafe { *ptr }, true);
551 ///
552 /// // Deallocate the value
553 /// unsafe { drop(Box::from_raw(ptr)) }
554 /// ```
555 ///
556 /// # Safety
557 ///
558 /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that this is always true, since
559 /// `align_of::<AtomicBool>() == 1`).
560 /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
561 /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
562 /// allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes,
563 /// without synchronization.
564 ///
565 /// [valid]: crate::ptr#safety
566 /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
567 #[inline]
568 #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
569 #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
570 pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a AtomicBool {
571 // SAFETY: guaranteed by the caller
572 unsafe { &*ptr.cast() }
573 }
574
575 /// Returns a mutable reference to the underlying [`bool`].
576 ///
577 /// This is safe because the mutable reference guarantees that no other threads are
578 /// concurrently accessing the atomic data.
579 ///
580 /// # Examples
581 ///
582 /// ```
583 /// use std::sync::atomic::{AtomicBool, Ordering};
584 ///
585 /// let mut some_bool = AtomicBool::new(true);
586 /// assert_eq!(*some_bool.get_mut(), true);
587 /// *some_bool.get_mut() = false;
588 /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
589 /// ```
590 #[inline]
591 #[stable(feature = "atomic_access", since = "1.15.0")]
592 pub fn get_mut(&mut self) -> &mut bool {
593 // SAFETY: the mutable reference guarantees unique ownership.
594 unsafe { &mut *(self.v.get() as *mut bool) }
595 }
596
597 /// Gets atomic access to a `&mut bool`.
598 ///
599 /// # Examples
600 ///
601 /// ```
602 /// #![feature(atomic_from_mut)]
603 /// use std::sync::atomic::{AtomicBool, Ordering};
604 ///
605 /// let mut some_bool = true;
606 /// let a = AtomicBool::from_mut(&mut some_bool);
607 /// a.store(false, Ordering::Relaxed);
608 /// assert_eq!(some_bool, false);
609 /// ```
610 #[inline]
611 #[cfg(target_has_atomic_equal_alignment = "8")]
612 #[unstable(feature = "atomic_from_mut", issue = "76314")]
613 pub fn from_mut(v: &mut bool) -> &mut Self {
614 // SAFETY: the mutable reference guarantees unique ownership, and
615 // alignment of both `bool` and `Self` is 1.
616 unsafe { &mut *(v as *mut bool as *mut Self) }
617 }
618
619 /// Gets non-atomic access to a `&mut [AtomicBool]` slice.
620 ///
621 /// This is safe because the mutable reference guarantees that no other threads are
622 /// concurrently accessing the atomic data.
623 ///
624 /// # Examples
625 ///
626 /// ```ignore-wasm
627 /// #![feature(atomic_from_mut)]
628 /// use std::sync::atomic::{AtomicBool, Ordering};
629 ///
630 /// let mut some_bools = [const { AtomicBool::new(false) }; 10];
631 ///
632 /// let view: &mut [bool] = AtomicBool::get_mut_slice(&mut some_bools);
633 /// assert_eq!(view, [false; 10]);
634 /// view[..5].copy_from_slice(&[true; 5]);
635 ///
/// std::thread::scope(|s| {
///     for t in &some_bools[..5] {
///         s.spawn(move || assert_eq!(t.load(Ordering::Relaxed), true));
///     }
///
///     for f in &some_bools[5..] {
///         s.spawn(move || assert_eq!(f.load(Ordering::Relaxed), false));
///     }
/// });
645 /// ```
646 #[inline]
647 #[unstable(feature = "atomic_from_mut", issue = "76314")]
648 pub fn get_mut_slice(this: &mut [Self]) -> &mut [bool] {
649 // SAFETY: the mutable reference guarantees unique ownership.
650 unsafe { &mut *(this as *mut [Self] as *mut [bool]) }
651 }
652
653 /// Gets atomic access to a `&mut [bool]` slice.
654 ///
655 /// # Examples
656 ///
657 /// ```rust,ignore-wasm
658 /// #![feature(atomic_from_mut)]
659 /// use std::sync::atomic::{AtomicBool, Ordering};
660 ///
661 /// let mut some_bools = [false; 10];
662 /// let a = &*AtomicBool::from_mut_slice(&mut some_bools);
/// std::thread::scope(|s| {
///     for i in 0..a.len() {
///         s.spawn(move || a[i].store(true, Ordering::Relaxed));
///     }
/// });
668 /// assert_eq!(some_bools, [true; 10]);
669 /// ```
670 #[inline]
671 #[cfg(target_has_atomic_equal_alignment = "8")]
672 #[unstable(feature = "atomic_from_mut", issue = "76314")]
673 pub fn from_mut_slice(v: &mut [bool]) -> &mut [Self] {
674 // SAFETY: the mutable reference guarantees unique ownership, and
675 // alignment of both `bool` and `Self` is 1.
676 unsafe { &mut *(v as *mut [bool] as *mut [Self]) }
677 }
678
679 /// Consumes the atomic and returns the contained value.
680 ///
681 /// This is safe because passing `self` by value guarantees that no other threads are
682 /// concurrently accessing the atomic data.
683 ///
684 /// # Examples
685 ///
686 /// ```
687 /// use std::sync::atomic::AtomicBool;
688 ///
689 /// let some_bool = AtomicBool::new(true);
690 /// assert_eq!(some_bool.into_inner(), true);
691 /// ```
692 #[inline]
693 #[stable(feature = "atomic_access", since = "1.15.0")]
694 #[rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0")]
695 pub const fn into_inner(self) -> bool {
696 self.v.into_inner() != 0
697 }
698
699 /// Loads a value from the bool.
700 ///
701 /// `load` takes an [`Ordering`] argument which describes the memory ordering
702 /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
703 ///
704 /// # Panics
705 ///
706 /// Panics if `order` is [`Release`] or [`AcqRel`].
707 ///
708 /// # Examples
709 ///
710 /// ```
711 /// use std::sync::atomic::{AtomicBool, Ordering};
712 ///
713 /// let some_bool = AtomicBool::new(true);
714 ///
715 /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
716 /// ```
717 #[inline]
718 #[stable(feature = "rust1", since = "1.0.0")]
719 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
720 pub fn load(&self, order: Ordering) -> bool {
721 // SAFETY: any data races are prevented by atomic intrinsics and the raw
722 // pointer passed in is valid because we got it from a reference.
723 unsafe { atomic_load(self.v.get(), order) != 0 }
724 }
725
726 /// Stores a value into the bool.
727 ///
728 /// `store` takes an [`Ordering`] argument which describes the memory ordering
729 /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
730 ///
731 /// # Panics
732 ///
733 /// Panics if `order` is [`Acquire`] or [`AcqRel`].
734 ///
735 /// # Examples
736 ///
737 /// ```
738 /// use std::sync::atomic::{AtomicBool, Ordering};
739 ///
740 /// let some_bool = AtomicBool::new(true);
741 ///
742 /// some_bool.store(false, Ordering::Relaxed);
743 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
744 /// ```
745 #[inline]
746 #[stable(feature = "rust1", since = "1.0.0")]
747 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
748 pub fn store(&self, val: bool, order: Ordering) {
749 // SAFETY: any data races are prevented by atomic intrinsics and the raw
750 // pointer passed in is valid because we got it from a reference.
751 unsafe {
752 atomic_store(self.v.get(), val as u8, order);
753 }
754 }
755
756 /// Stores a value into the bool, returning the previous value.
757 ///
758 /// `swap` takes an [`Ordering`] argument which describes the memory ordering
759 /// of this operation. All ordering modes are possible. Note that using
760 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
761 /// using [`Release`] makes the load part [`Relaxed`].
762 ///
763 /// **Note:** This method is only available on platforms that support atomic
764 /// operations on `u8`.
765 ///
766 /// # Examples
767 ///
768 /// ```
769 /// use std::sync::atomic::{AtomicBool, Ordering};
770 ///
771 /// let some_bool = AtomicBool::new(true);
772 ///
773 /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
774 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
775 /// ```
776 #[inline]
777 #[stable(feature = "rust1", since = "1.0.0")]
778 #[cfg(target_has_atomic = "8")]
779 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
780 pub fn swap(&self, val: bool, order: Ordering) -> bool {
781 if EMULATE_ATOMIC_BOOL {
782 if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
783 } else {
784 // SAFETY: data races are prevented by atomic intrinsics.
785 unsafe { atomic_swap(self.v.get(), val as u8, order) != 0 }
786 }
787 }
788
789 /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
790 ///
791 /// The return value is always the previous value. If it is equal to `current`, then the value
792 /// was updated.
793 ///
794 /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
795 /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
796 /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
797 /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
798 /// happens, and using [`Release`] makes the load part [`Relaxed`].
799 ///
800 /// **Note:** This method is only available on platforms that support atomic
801 /// operations on `u8`.
802 ///
803 /// # Migrating to `compare_exchange` and `compare_exchange_weak`
804 ///
805 /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
806 /// memory orderings:
807 ///
808 /// Original | Success | Failure
809 /// -------- | ------- | -------
810 /// Relaxed | Relaxed | Relaxed
811 /// Acquire | Acquire | Acquire
812 /// Release | Release | Relaxed
813 /// AcqRel | AcqRel | Acquire
814 /// SeqCst | SeqCst | SeqCst
815 ///
816 /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
817 /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
818 /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
819 /// rather than to infer success vs failure based on the value that was read.
820 ///
821 /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
822 /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
823 /// which allows the compiler to generate better assembly code when the compare and swap
824 /// is used in a loop.
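///
/// For example, a spin loop written with `compare_and_swap` can be migrated roughly
/// like this (a minimal sketch of the rewrite, not a drop-in recipe for every use):
///
/// ```
/// use std::sync::atomic::{AtomicBool, Ordering};
///
/// let flag = AtomicBool::new(false);
///
/// // Before: loop until we are the caller that flips `false` to `true`.
/// // while flag.compare_and_swap(false, true, Ordering::AcqRel) {}
///
/// // After: check the `Result` instead of comparing the returned value.
/// while flag
///     .compare_exchange_weak(false, true, Ordering::AcqRel, Ordering::Relaxed)
///     .is_err()
/// {}
/// assert!(flag.load(Ordering::Relaxed));
/// ```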
825 ///
826 /// # Examples
827 ///
828 /// ```
829 /// use std::sync::atomic::{AtomicBool, Ordering};
830 ///
831 /// let some_bool = AtomicBool::new(true);
832 ///
833 /// assert_eq!(some_bool.compare_and_swap(true, false, Ordering::Relaxed), true);
834 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
835 ///
836 /// assert_eq!(some_bool.compare_and_swap(true, true, Ordering::Relaxed), false);
837 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
838 /// ```
839 #[inline]
840 #[stable(feature = "rust1", since = "1.0.0")]
841 #[deprecated(
842 since = "1.50.0",
843 note = "Use `compare_exchange` or `compare_exchange_weak` instead"
844 )]
845 #[cfg(target_has_atomic = "8")]
846 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
847 pub fn compare_and_swap(&self, current: bool, new: bool, order: Ordering) -> bool {
848 match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
849 Ok(x) => x,
850 Err(x) => x,
851 }
852 }
853
854 /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
855 ///
856 /// The return value is a result indicating whether the new value was written and containing
857 /// the previous value. On success this value is guaranteed to be equal to `current`.
858 ///
859 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
860 /// ordering of this operation. `success` describes the required ordering for the
861 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
862 /// `failure` describes the required ordering for the load operation that takes place when
863 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
864 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
865 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
866 ///
867 /// **Note:** This method is only available on platforms that support atomic
868 /// operations on `u8`.
869 ///
870 /// # Examples
871 ///
872 /// ```
873 /// use std::sync::atomic::{AtomicBool, Ordering};
874 ///
875 /// let some_bool = AtomicBool::new(true);
876 ///
/// assert_eq!(some_bool.compare_exchange(true,
///                                       false,
///                                       Ordering::Acquire,
///                                       Ordering::Relaxed),
///            Ok(true));
/// assert_eq!(some_bool.load(Ordering::Relaxed), false);
///
/// assert_eq!(some_bool.compare_exchange(true, true,
///                                       Ordering::SeqCst,
///                                       Ordering::Acquire),
///            Err(false));
888 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
889 /// ```
890 #[inline]
891 #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
892 #[doc(alias = "compare_and_swap")]
893 #[cfg(target_has_atomic = "8")]
894 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
895 pub fn compare_exchange(
896 &self,
897 current: bool,
898 new: bool,
899 success: Ordering,
900 failure: Ordering,
901 ) -> Result<bool, bool> {
902 if EMULATE_ATOMIC_BOOL {
903 // Pick the strongest ordering from success and failure.
904 let order = match (success, failure) {
905 (SeqCst, _) => SeqCst,
906 (_, SeqCst) => SeqCst,
907 (AcqRel, _) => AcqRel,
908 (_, AcqRel) => {
909 panic!("there is no such thing as an acquire-release failure ordering")
910 }
911 (Release, Acquire) => AcqRel,
912 (Acquire, _) => Acquire,
913 (_, Acquire) => Acquire,
914 (Release, Relaxed) => Release,
915 (_, Release) => panic!("there is no such thing as a release failure ordering"),
916 (Relaxed, Relaxed) => Relaxed,
917 };
918 let old = if current == new {
919 // This is a no-op, but we still need to perform the operation
920 // for memory ordering reasons.
921 self.fetch_or(false, order)
922 } else {
923 // This sets the value to the new one and returns the old one.
924 self.swap(new, order)
925 };
926 if old == current { Ok(old) } else { Err(old) }
927 } else {
928 // SAFETY: data races are prevented by atomic intrinsics.
929 match unsafe {
930 atomic_compare_exchange(self.v.get(), current as u8, new as u8, success, failure)
931 } {
932 Ok(x) => Ok(x != 0),
933 Err(x) => Err(x != 0),
934 }
935 }
936 }
937
938 /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
939 ///
940 /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
941 /// comparison succeeds, which can result in more efficient code on some platforms. The
942 /// return value is a result indicating whether the new value was written and containing the
943 /// previous value.
944 ///
945 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
946 /// ordering of this operation. `success` describes the required ordering for the
947 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
948 /// `failure` describes the required ordering for the load operation that takes place when
949 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
950 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
951 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
952 ///
953 /// **Note:** This method is only available on platforms that support atomic
954 /// operations on `u8`.
955 ///
956 /// # Examples
957 ///
958 /// ```
959 /// use std::sync::atomic::{AtomicBool, Ordering};
960 ///
961 /// let val = AtomicBool::new(false);
962 ///
963 /// let new = true;
964 /// let mut old = val.load(Ordering::Relaxed);
/// loop {
///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
///         Ok(_) => break,
///         Err(x) => old = x,
///     }
/// }
971 /// ```
972 #[inline]
973 #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
974 #[doc(alias = "compare_and_swap")]
975 #[cfg(target_has_atomic = "8")]
976 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
977 pub fn compare_exchange_weak(
978 &self,
979 current: bool,
980 new: bool,
981 success: Ordering,
982 failure: Ordering,
983 ) -> Result<bool, bool> {
984 if EMULATE_ATOMIC_BOOL {
985 return self.compare_exchange(current, new, success, failure);
986 }
987
988 // SAFETY: data races are prevented by atomic intrinsics.
989 match unsafe {
990 atomic_compare_exchange_weak(self.v.get(), current as u8, new as u8, success, failure)
991 } {
992 Ok(x) => Ok(x != 0),
993 Err(x) => Err(x != 0),
994 }
995 }
996
997 /// Logical "and" with a boolean value.
998 ///
999 /// Performs a logical "and" operation on the current value and the argument `val`, and sets
1000 /// the new value to the result.
1001 ///
1002 /// Returns the previous value.
1003 ///
1004 /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
1005 /// of this operation. All ordering modes are possible. Note that using
1006 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1007 /// using [`Release`] makes the load part [`Relaxed`].
1008 ///
1009 /// **Note:** This method is only available on platforms that support atomic
1010 /// operations on `u8`.
1011 ///
1012 /// # Examples
1013 ///
1014 /// ```
1015 /// use std::sync::atomic::{AtomicBool, Ordering};
1016 ///
1017 /// let foo = AtomicBool::new(true);
1018 /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
1019 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1020 ///
1021 /// let foo = AtomicBool::new(true);
1022 /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
1023 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1024 ///
1025 /// let foo = AtomicBool::new(false);
1026 /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
1027 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1028 /// ```
1029 #[inline]
1030 #[stable(feature = "rust1", since = "1.0.0")]
1031 #[cfg(target_has_atomic = "8")]
1032 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1033 pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
1034 // SAFETY: data races are prevented by atomic intrinsics.
1035 unsafe { atomic_and(self.v.get(), val as u8, order) != 0 }
1036 }
1037
1038 /// Logical "nand" with a boolean value.
1039 ///
1040 /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
1041 /// the new value to the result.
1042 ///
1043 /// Returns the previous value.
1044 ///
1045 /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
1046 /// of this operation. All ordering modes are possible. Note that using
1047 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1048 /// using [`Release`] makes the load part [`Relaxed`].
1049 ///
1050 /// **Note:** This method is only available on platforms that support atomic
1051 /// operations on `u8`.
1052 ///
1053 /// # Examples
1054 ///
1055 /// ```
1056 /// use std::sync::atomic::{AtomicBool, Ordering};
1057 ///
1058 /// let foo = AtomicBool::new(true);
1059 /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
1060 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1061 ///
1062 /// let foo = AtomicBool::new(true);
1063 /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
1064 /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
1065 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1066 ///
1067 /// let foo = AtomicBool::new(false);
1068 /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
1069 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1070 /// ```
1071 #[inline]
1072 #[stable(feature = "rust1", since = "1.0.0")]
1073 #[cfg(target_has_atomic = "8")]
1074 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1075 pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
1076 // We can't use atomic_nand here because it can result in a bool with
1077 // an invalid value. This happens because the atomic operation is done
1078 // with an 8-bit integer internally, which would set the upper 7 bits.
1079 // So we just use fetch_xor or swap instead.
1080 if val {
1081 // !(x & true) == !x
1082 // We must invert the bool.
1083 self.fetch_xor(true, order)
1084 } else {
1085 // !(x & false) == true
1086 // We must set the bool to true.
1087 self.swap(true, order)
1088 }
1089 }
1090
1091 /// Logical "or" with a boolean value.
1092 ///
1093 /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
1094 /// new value to the result.
1095 ///
1096 /// Returns the previous value.
1097 ///
1098 /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
1099 /// of this operation. All ordering modes are possible. Note that using
1100 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1101 /// using [`Release`] makes the load part [`Relaxed`].
1102 ///
1103 /// **Note:** This method is only available on platforms that support atomic
1104 /// operations on `u8`.
1105 ///
1106 /// # Examples
1107 ///
1108 /// ```
1109 /// use std::sync::atomic::{AtomicBool, Ordering};
1110 ///
1111 /// let foo = AtomicBool::new(true);
1112 /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
1113 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1114 ///
1115 /// let foo = AtomicBool::new(true);
1116 /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
1117 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1118 ///
1119 /// let foo = AtomicBool::new(false);
1120 /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
1121 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1122 /// ```
1123 #[inline]
1124 #[stable(feature = "rust1", since = "1.0.0")]
1125 #[cfg(target_has_atomic = "8")]
1126 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1127 pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
1128 // SAFETY: data races are prevented by atomic intrinsics.
1129 unsafe { atomic_or(self.v.get(), val as u8, order) != 0 }
1130 }
1131
1132 /// Logical "xor" with a boolean value.
1133 ///
1134 /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
1135 /// the new value to the result.
1136 ///
1137 /// Returns the previous value.
1138 ///
1139 /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
1140 /// of this operation. All ordering modes are possible. Note that using
1141 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1142 /// using [`Release`] makes the load part [`Relaxed`].
1143 ///
1144 /// **Note:** This method is only available on platforms that support atomic
1145 /// operations on `u8`.
1146 ///
1147 /// # Examples
1148 ///
1149 /// ```
1150 /// use std::sync::atomic::{AtomicBool, Ordering};
1151 ///
1152 /// let foo = AtomicBool::new(true);
1153 /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
1154 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1155 ///
1156 /// let foo = AtomicBool::new(true);
1157 /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
1158 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1159 ///
1160 /// let foo = AtomicBool::new(false);
1161 /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
1162 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1163 /// ```
1164 #[inline]
1165 #[stable(feature = "rust1", since = "1.0.0")]
1166 #[cfg(target_has_atomic = "8")]
1167 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1168 pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
1169 // SAFETY: data races are prevented by atomic intrinsics.
1170 unsafe { atomic_xor(self.v.get(), val as u8, order) != 0 }
1171 }
1172
1173 /// Logical "not" with a boolean value.
1174 ///
1175 /// Performs a logical "not" operation on the current value, and sets
1176 /// the new value to the result.
1177 ///
1178 /// Returns the previous value.
1179 ///
1180 /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
1181 /// of this operation. All ordering modes are possible. Note that using
1182 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1183 /// using [`Release`] makes the load part [`Relaxed`].
1184 ///
1185 /// **Note:** This method is only available on platforms that support atomic
1186 /// operations on `u8`.
1187 ///
1188 /// # Examples
1189 ///
1190 /// ```
1191 /// use std::sync::atomic::{AtomicBool, Ordering};
1192 ///
1193 /// let foo = AtomicBool::new(true);
1194 /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
1195 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1196 ///
1197 /// let foo = AtomicBool::new(false);
1198 /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
1199 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1200 /// ```
1201 #[inline]
1202 #[stable(feature = "atomic_bool_fetch_not", since = "1.81.0")]
1203 #[cfg(target_has_atomic = "8")]
1204 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1205 pub fn fetch_not(&self, order: Ordering) -> bool {
1206 self.fetch_xor(true, order)
1207 }
1208
1209 /// Returns a mutable pointer to the underlying [`bool`].
1210 ///
1211 /// Doing non-atomic reads and writes on the resulting boolean can be a data race.
1212 /// This method is mostly useful for FFI, where the function signature may use
1213 /// `*mut bool` instead of `&AtomicBool`.
1214 ///
1215 /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
1216 /// atomic types work with interior mutability. All modifications of an atomic change the value
1217 /// through a shared reference, and can do so safely as long as they use atomic operations. Any
1218 /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
1219 /// restriction: operations on it must be atomic.
1220 ///
1221 /// # Examples
1222 ///
1223 /// ```ignore (extern-declaration)
1224 /// # fn main() {
1225 /// use std::sync::atomic::AtomicBool;
1226 ///
/// extern "C" {
///     fn my_atomic_op(arg: *mut bool);
/// }
///
/// let mut atomic = AtomicBool::new(true);
/// unsafe {
///     my_atomic_op(atomic.as_ptr());
/// }
1235 /// # }
1236 /// ```
1237 #[inline]
1238 #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
1239 #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
1240 #[rustc_never_returns_null_ptr]
1241 pub const fn as_ptr(&self) -> *mut bool {
1242 self.v.get().cast()
1243 }
1244
1245 /// Fetches the value, and applies a function to it that returns an optional
1246 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1247 /// returned `Some(_)`, else `Err(previous_value)`.
1248 ///
1249 /// Note: This may call the function multiple times if the value has been
1250 /// changed from other threads in the meantime, as long as the function
1251 /// returns `Some(_)`, but the function will have been applied only once to
1252 /// the stored value.
1253 ///
1254 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1255 /// ordering of this operation. The first describes the required ordering for
1256 /// when the operation finally succeeds while the second describes the
1257 /// required ordering for loads. These correspond to the success and failure
1258 /// orderings of [`AtomicBool::compare_exchange`] respectively.
1259 ///
1260 /// Using [`Acquire`] as success ordering makes the store part of this
1261 /// operation [`Relaxed`], and using [`Release`] makes the final successful
1262 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1263 /// [`Acquire`] or [`Relaxed`].
1264 ///
1265 /// **Note:** This method is only available on platforms that support atomic
1266 /// operations on `u8`.
1267 ///
1268 /// # Considerations
1269 ///
1270 /// This method is not magic; it is not provided by the hardware.
1271 /// It is implemented in terms of [`AtomicBool::compare_exchange_weak`], and suffers from the same drawbacks.
1272 /// In particular, this method will not circumvent the [ABA Problem].
1273 ///
1274 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1275 ///
1276 /// # Examples
1277 ///
1278 /// ```rust
1279 /// use std::sync::atomic::{AtomicBool, Ordering};
1280 ///
1281 /// let x = AtomicBool::new(false);
1282 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1283 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1284 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1285 /// assert_eq!(x.load(Ordering::SeqCst), false);
1286 /// ```
1287 #[inline]
1288 #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
1289 #[cfg(target_has_atomic = "8")]
1290 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1291 pub fn fetch_update<F>(
1292 &self,
1293 set_order: Ordering,
1294 fetch_order: Ordering,
1295 mut f: F,
1296 ) -> Result<bool, bool>
1297 where
1298 F: FnMut(bool) -> Option<bool>,
1299 {
1300 let mut prev = self.load(fetch_order);
1301 while let Some(next) = f(prev) {
1302 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1303 x @ Ok(_) => return x,
1304 Err(next_prev) => prev = next_prev,
1305 }
1306 }
1307 Err(prev)
1308 }
1309
1310 /// Fetches the value, and applies a function to it that returns an optional
1311 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1312 /// returned `Some(_)`, else `Err(previous_value)`.
1313 ///
1314 /// See also: [`update`](`AtomicBool::update`).
1315 ///
1316 /// Note: This may call the function multiple times if the value has been
1317 /// changed from other threads in the meantime, as long as the function
1318 /// returns `Some(_)`, but the function will have been applied only once to
1319 /// the stored value.
1320 ///
1321 /// `try_update` takes two [`Ordering`] arguments to describe the memory
1322 /// ordering of this operation. The first describes the required ordering for
1323 /// when the operation finally succeeds while the second describes the
1324 /// required ordering for loads. These correspond to the success and failure
1325 /// orderings of [`AtomicBool::compare_exchange`] respectively.
1326 ///
1327 /// Using [`Acquire`] as success ordering makes the store part of this
1328 /// operation [`Relaxed`], and using [`Release`] makes the final successful
1329 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1330 /// [`Acquire`] or [`Relaxed`].
1331 ///
1332 /// **Note:** This method is only available on platforms that support atomic
1333 /// operations on `u8`.
1334 ///
1335 /// # Considerations
1336 ///
1337 /// This method is not magic; it is not provided by the hardware.
1338 /// It is implemented in terms of [`AtomicBool::compare_exchange_weak`], and suffers from the same drawbacks.
1339 /// In particular, this method will not circumvent the [ABA Problem].
1340 ///
1341 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1342 ///
1343 /// # Examples
1344 ///
1345 /// ```rust
1346 /// #![feature(atomic_try_update)]
1347 /// use std::sync::atomic::{AtomicBool, Ordering};
1348 ///
1349 /// let x = AtomicBool::new(false);
1350 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1351 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1352 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1353 /// assert_eq!(x.load(Ordering::SeqCst), false);
1354 /// ```
1355 #[inline]
1356 #[unstable(feature = "atomic_try_update", issue = "135894")]
1357 #[cfg(target_has_atomic = "8")]
1358 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1359 pub fn try_update(
1360 &self,
1361 set_order: Ordering,
1362 fetch_order: Ordering,
1363 f: impl FnMut(bool) -> Option<bool>,
1364 ) -> Result<bool, bool> {
1365 // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
1366 // when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
1367 self.fetch_update(set_order, fetch_order, f)
1368 }
1369
1370 /// Fetches the value, and applies a function to it that returns a new value.
1371 /// The new value is stored and the old value is returned.
1372 ///
1373 /// See also: [`try_update`](`AtomicBool::try_update`).
1374 ///
1375 /// Note: This may call the function multiple times if the value has been changed from other threads in
1376 /// the meantime, but the function will have been applied only once to the stored value.
1377 ///
1378 /// `update` takes two [`Ordering`] arguments to describe the memory
1379 /// ordering of this operation. The first describes the required ordering for
1380 /// when the operation finally succeeds while the second describes the
1381 /// required ordering for loads. These correspond to the success and failure
1382 /// orderings of [`AtomicBool::compare_exchange`] respectively.
1383 ///
1384 /// Using [`Acquire`] as success ordering makes the store part
1385 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
1386 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1387 ///
1388 /// **Note:** This method is only available on platforms that support atomic operations on `u8`.
1389 ///
1390 /// # Considerations
1391 ///
1392 /// This method is not magic; it is not provided by the hardware.
1393 /// It is implemented in terms of [`AtomicBool::compare_exchange_weak`], and suffers from the same drawbacks.
1394 /// In particular, this method will not circumvent the [ABA Problem].
1395 ///
1396 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1397 ///
1398 /// # Examples
1399 ///
1400 /// ```rust
1401 /// #![feature(atomic_try_update)]
1402 ///
1403 /// use std::sync::atomic::{AtomicBool, Ordering};
1404 ///
1405 /// let x = AtomicBool::new(false);
1406 /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x), false);
1407 /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x), true);
1408 /// assert_eq!(x.load(Ordering::SeqCst), false);
1409 /// ```
1410 #[inline]
1411 #[unstable(feature = "atomic_try_update", issue = "135894")]
1412 #[cfg(target_has_atomic = "8")]
1413 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1414 pub fn update(
1415 &self,
1416 set_order: Ordering,
1417 fetch_order: Ordering,
1418 mut f: impl FnMut(bool) -> bool,
1419 ) -> bool {
1420 let mut prev = self.load(fetch_order);
1421 loop {
1422 match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
1423 Ok(x) => break x,
1424 Err(next_prev) => prev = next_prev,
1425 }
1426 }
1427 }
1428}
1429
1430#[cfg(target_has_atomic_load_store = "ptr")]
1431impl<T> AtomicPtr<T> {
1432 /// Creates a new `AtomicPtr`.
1433 ///
1434 /// # Examples
1435 ///
1436 /// ```
1437 /// use std::sync::atomic::AtomicPtr;
1438 ///
1439 /// let ptr = &mut 5;
1440 /// let atomic_ptr = AtomicPtr::new(ptr);
1441 /// ```
1442 #[inline]
1443 #[stable(feature = "rust1", since = "1.0.0")]
1444 #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
1445 pub const fn new(p: *mut T) -> AtomicPtr<T> {
1446 AtomicPtr { p: UnsafeCell::new(p) }
1447 }
1448
1449 /// Creates a new `AtomicPtr` from a pointer.
1450 ///
1451 /// # Examples
1452 ///
1453 /// ```
1454 /// use std::sync::atomic::{self, AtomicPtr};
1455 ///
1456 /// // Get a pointer to an allocated value
1457 /// let ptr: *mut *mut u8 = Box::into_raw(Box::new(std::ptr::null_mut()));
1458 ///
1459 /// assert!(ptr.cast::<AtomicPtr<u8>>().is_aligned());
1460 ///
1461 /// {
1462 /// // Create an atomic view of the allocated value
1463 /// let atomic = unsafe { AtomicPtr::from_ptr(ptr) };
1464 ///
1465 /// // Use `atomic` for atomic operations, possibly share it with other threads
1466 /// atomic.store(std::ptr::NonNull::dangling().as_ptr(), atomic::Ordering::Relaxed);
1467 /// }
1468 ///
1469 /// // It's ok to non-atomically access the value behind `ptr`,
1470 /// // since the reference to the atomic ended its lifetime in the block above
1471 /// assert!(!unsafe { *ptr }.is_null());
1472 ///
1473 /// // Deallocate the value
1474 /// unsafe { drop(Box::from_raw(ptr)) }
1475 /// ```
1476 ///
1477 /// # Safety
1478 ///
1479 /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this
1480 /// can be bigger than `align_of::<*mut T>()`).
1481 /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
1482 /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
1483 /// allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes,
1484 /// without synchronization.
1485 ///
1486 /// [valid]: crate::ptr#safety
1487 /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
1488 #[inline]
1489 #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
1490 #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
1491 pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a AtomicPtr<T> {
1492 // SAFETY: guaranteed by the caller
1493 unsafe { &*ptr.cast() }
1494 }
1495
1496 /// Returns a mutable reference to the underlying pointer.
1497 ///
1498 /// This is safe because the mutable reference guarantees that no other threads are
1499 /// concurrently accessing the atomic data.
1500 ///
1501 /// # Examples
1502 ///
1503 /// ```
1504 /// use std::sync::atomic::{AtomicPtr, Ordering};
1505 ///
1506 /// let mut data = 10;
1507 /// let mut atomic_ptr = AtomicPtr::new(&mut data);
1508 /// let mut other_data = 5;
1509 /// *atomic_ptr.get_mut() = &mut other_data;
1510 /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
1511 /// ```
1512 #[inline]
1513 #[stable(feature = "atomic_access", since = "1.15.0")]
1514 pub fn get_mut(&mut self) -> &mut *mut T {
1515 self.p.get_mut()
1516 }
1517
1518 /// Gets atomic access to a pointer.
1519 ///
1520 /// # Examples
1521 ///
1522 /// ```
1523 /// #![feature(atomic_from_mut)]
1524 /// use std::sync::atomic::{AtomicPtr, Ordering};
1525 ///
1526 /// let mut data = 123;
1527 /// let mut some_ptr = &mut data as *mut i32;
1528 /// let a = AtomicPtr::from_mut(&mut some_ptr);
1529 /// let mut other_data = 456;
1530 /// a.store(&mut other_data, Ordering::Relaxed);
1531 /// assert_eq!(unsafe { *some_ptr }, 456);
1532 /// ```
1533 #[inline]
1534 #[cfg(target_has_atomic_equal_alignment = "ptr")]
1535 #[unstable(feature = "atomic_from_mut", issue = "76314")]
1536 pub fn from_mut(v: &mut *mut T) -> &mut Self {
1537 let [] = [(); align_of::<AtomicPtr<()>>() - align_of::<*mut ()>()];
1538 // SAFETY:
1539 // - the mutable reference guarantees unique ownership.
1540 // - the alignment of `*mut T` and `Self` is the same on all platforms
1541 // supported by rust, as verified above.
1542 unsafe { &mut *(v as *mut *mut T as *mut Self) }
1543 }
1544
1545 /// Gets non-atomic access to a `&mut [AtomicPtr]` slice.
1546 ///
1547 /// This is safe because the mutable reference guarantees that no other threads are
1548 /// concurrently accessing the atomic data.
1549 ///
1550 /// # Examples
1551 ///
1552 /// ```ignore-wasm
1553 /// #![feature(atomic_from_mut)]
1554 /// use std::ptr::null_mut;
1555 /// use std::sync::atomic::{AtomicPtr, Ordering};
1556 ///
1557 /// let mut some_ptrs = [const { AtomicPtr::new(null_mut::<String>()) }; 10];
1558 ///
1559 /// let view: &mut [*mut String] = AtomicPtr::get_mut_slice(&mut some_ptrs);
1560 /// assert_eq!(view, [null_mut::<String>(); 10]);
1561 /// view
1562 /// .iter_mut()
1563 /// .enumerate()
1564 /// .for_each(|(i, ptr)| *ptr = Box::into_raw(Box::new(format!("iteration#{i}"))));
1565 ///
1566 /// std::thread::scope(|s| {
1567 /// for ptr in &some_ptrs {
1568 /// s.spawn(move || {
1569 /// let ptr = ptr.load(Ordering::Relaxed);
1570 /// assert!(!ptr.is_null());
1571 ///
1572 /// let name = unsafe { Box::from_raw(ptr) };
1573 /// println!("Hello, {name}!");
1574 /// });
1575 /// }
1576 /// });
1577 /// ```
1578 #[inline]
1579 #[unstable(feature = "atomic_from_mut", issue = "76314")]
1580 pub fn get_mut_slice(this: &mut [Self]) -> &mut [*mut T] {
1581 // SAFETY: the mutable reference guarantees unique ownership.
1582 unsafe { &mut *(this as *mut [Self] as *mut [*mut T]) }
1583 }
1584
1585 /// Gets atomic access to a slice of pointers.
1586 ///
1587 /// # Examples
1588 ///
1589 /// ```ignore-wasm
1590 /// #![feature(atomic_from_mut)]
1591 /// use std::ptr::null_mut;
1592 /// use std::sync::atomic::{AtomicPtr, Ordering};
1593 ///
1594 /// let mut some_ptrs = [null_mut::<String>(); 10];
1595 /// let a = &*AtomicPtr::from_mut_slice(&mut some_ptrs);
1596 /// std::thread::scope(|s| {
1597 /// for i in 0..a.len() {
1598 /// s.spawn(move || {
1599 /// let name = Box::new(format!("thread{i}"));
1600 /// a[i].store(Box::into_raw(name), Ordering::Relaxed);
1601 /// });
1602 /// }
1603 /// });
1604 /// for p in some_ptrs {
1605 /// assert!(!p.is_null());
1606 /// let name = unsafe { Box::from_raw(p) };
1607 /// println!("Hello, {name}!");
1608 /// }
1609 /// ```
1610 #[inline]
1611 #[cfg(target_has_atomic_equal_alignment = "ptr")]
1612 #[unstable(feature = "atomic_from_mut", issue = "76314")]
1613 pub fn from_mut_slice(v: &mut [*mut T]) -> &mut [Self] {
1614 // SAFETY:
1615 // - the mutable reference guarantees unique ownership.
1616 // - the alignment of `*mut T` and `Self` is the same on all platforms
1617 // supported by rust, as verified above.
1618 unsafe { &mut *(v as *mut [*mut T] as *mut [Self]) }
1619 }
1620
1621 /// Consumes the atomic and returns the contained value.
1622 ///
1623 /// This is safe because passing `self` by value guarantees that no other threads are
1624 /// concurrently accessing the atomic data.
1625 ///
1626 /// # Examples
1627 ///
1628 /// ```
1629 /// use std::sync::atomic::AtomicPtr;
1630 ///
1631 /// let mut data = 5;
1632 /// let atomic_ptr = AtomicPtr::new(&mut data);
1633 /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
1634 /// ```
1635 #[inline]
1636 #[stable(feature = "atomic_access", since = "1.15.0")]
1637 #[rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0")]
1638 pub const fn into_inner(self) -> *mut T {
1639 self.p.into_inner()
1640 }
1641
1642 /// Loads a value from the pointer.
1643 ///
1644 /// `load` takes an [`Ordering`] argument which describes the memory ordering
1645 /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
1646 ///
1647 /// # Panics
1648 ///
1649 /// Panics if `order` is [`Release`] or [`AcqRel`].
1650 ///
1651 /// # Examples
1652 ///
1653 /// ```
1654 /// use std::sync::atomic::{AtomicPtr, Ordering};
1655 ///
1656 /// let ptr = &mut 5;
1657 /// let some_ptr = AtomicPtr::new(ptr);
1658 ///
1659 /// let value = some_ptr.load(Ordering::Relaxed);
1660 /// ```
1661 #[inline]
1662 #[stable(feature = "rust1", since = "1.0.0")]
1663 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1664 pub fn load(&self, order: Ordering) -> *mut T {
1665 // SAFETY: data races are prevented by atomic intrinsics.
1666 unsafe { atomic_load(self.p.get(), order) }
1667 }
1668
1669 /// Stores a value into the pointer.
1670 ///
1671 /// `store` takes an [`Ordering`] argument which describes the memory ordering
1672 /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
1673 ///
1674 /// # Panics
1675 ///
1676 /// Panics if `order` is [`Acquire`] or [`AcqRel`].
1677 ///
1678 /// # Examples
1679 ///
1680 /// ```
1681 /// use std::sync::atomic::{AtomicPtr, Ordering};
1682 ///
1683 /// let ptr = &mut 5;
1684 /// let some_ptr = AtomicPtr::new(ptr);
1685 ///
1686 /// let other_ptr = &mut 10;
1687 ///
1688 /// some_ptr.store(other_ptr, Ordering::Relaxed);
1689 /// ```
1690 #[inline]
1691 #[stable(feature = "rust1", since = "1.0.0")]
1692 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1693 pub fn store(&self, ptr: *mut T, order: Ordering) {
1694 // SAFETY: data races are prevented by atomic intrinsics.
1695 unsafe {
1696 atomic_store(self.p.get(), ptr, order);
1697 }
1698 }
1699
1700 /// Stores a value into the pointer, returning the previous value.
1701 ///
1702 /// `swap` takes an [`Ordering`] argument which describes the memory ordering
1703 /// of this operation. All ordering modes are possible. Note that using
1704 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1705 /// using [`Release`] makes the load part [`Relaxed`].
1706 ///
1707 /// **Note:** This method is only available on platforms that support atomic
1708 /// operations on pointers.
1709 ///
1710 /// # Examples
1711 ///
1712 /// ```
1713 /// use std::sync::atomic::{AtomicPtr, Ordering};
1714 ///
1715 /// let ptr = &mut 5;
1716 /// let some_ptr = AtomicPtr::new(ptr);
1717 ///
1718 /// let other_ptr = &mut 10;
1719 ///
1720 /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
1721 /// ```
1722 #[inline]
1723 #[stable(feature = "rust1", since = "1.0.0")]
1724 #[cfg(target_has_atomic = "ptr")]
1725 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1726 pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
1727 // SAFETY: data races are prevented by atomic intrinsics.
1728 unsafe { atomic_swap(self.p.get(), ptr, order) }
1729 }
1730
1731 /// Stores a value into the pointer if the current value is the same as the `current` value.
1732 ///
1733 /// The return value is always the previous value. If it is equal to `current`, then the value
1734 /// was updated.
1735 ///
1736 /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
1737 /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
1738 /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
1739 /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
1740 /// happens, and using [`Release`] makes the load part [`Relaxed`].
1741 ///
1742 /// **Note:** This method is only available on platforms that support atomic
1743 /// operations on pointers.
1744 ///
1745 /// # Migrating to `compare_exchange` and `compare_exchange_weak`
1746 ///
1747 /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
1748 /// memory orderings:
1749 ///
1750 /// Original | Success | Failure
1751 /// -------- | ------- | -------
1752 /// Relaxed | Relaxed | Relaxed
1753 /// Acquire | Acquire | Acquire
1754 /// Release | Release | Relaxed
1755 /// AcqRel | AcqRel | Acquire
1756 /// SeqCst | SeqCst | SeqCst
1757 ///
1758 /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
1759 /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
1760 /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
1761 /// rather than to infer success vs failure based on the value that was read.
1762 ///
1763 /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
1764 /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
1765 /// which allows the compiler to generate better assembly code when the compare and swap
1766 /// is used in a loop.
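///
/// For illustration, a sketch of migrating a `SeqCst` call by hand, using only
/// the mapping from the table above (`SeqCst` maps to `SeqCst` for both the
/// success and the failure ordering):
///
/// ```
/// use std::sync::atomic::{AtomicPtr, Ordering};
///
/// let ptr = &mut 5;
/// let some_ptr = AtomicPtr::new(ptr);
///
/// let other_ptr = &mut 10;
///
/// // Before: some_ptr.compare_and_swap(ptr, other_ptr, Ordering::SeqCst)
/// let value = some_ptr
///     .compare_exchange(ptr, other_ptr, Ordering::SeqCst, Ordering::SeqCst)
///     .unwrap_or_else(|x| x);
/// ```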
1767 ///
1768 /// # Examples
1769 ///
1770 /// ```
1771 /// use std::sync::atomic::{AtomicPtr, Ordering};
1772 ///
1773 /// let ptr = &mut 5;
1774 /// let some_ptr = AtomicPtr::new(ptr);
1775 ///
1776 /// let other_ptr = &mut 10;
1777 ///
1778 /// let value = some_ptr.compare_and_swap(ptr, other_ptr, Ordering::Relaxed);
1779 /// ```
1780 #[inline]
1781 #[stable(feature = "rust1", since = "1.0.0")]
1782 #[deprecated(
1783 since = "1.50.0",
1784 note = "Use `compare_exchange` or `compare_exchange_weak` instead"
1785 )]
1786 #[cfg(target_has_atomic = "ptr")]
1787 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1788 pub fn compare_and_swap(&self, current: *mut T, new: *mut T, order: Ordering) -> *mut T {
1789 match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
1790 Ok(x) => x,
1791 Err(x) => x,
1792 }
1793 }
1794
1795 /// Stores a value into the pointer if the current value is the same as the `current` value.
1796 ///
1797 /// The return value is a result indicating whether the new value was written and containing
1798 /// the previous value. On success this value is guaranteed to be equal to `current`.
1799 ///
1800 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
1801 /// ordering of this operation. `success` describes the required ordering for the
1802 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1803 /// `failure` describes the required ordering for the load operation that takes place when
1804 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1805 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1806 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1807 ///
1808 /// **Note:** This method is only available on platforms that support atomic
1809 /// operations on pointers.
1810 ///
1811 /// # Examples
1812 ///
1813 /// ```
1814 /// use std::sync::atomic::{AtomicPtr, Ordering};
1815 ///
1816 /// let ptr = &mut 5;
1817 /// let some_ptr = AtomicPtr::new(ptr);
1818 ///
1819 /// let other_ptr = &mut 10;
1820 ///
1821 /// let value = some_ptr.compare_exchange(ptr, other_ptr,
1822 /// Ordering::SeqCst, Ordering::Relaxed);
1823 /// ```
1824 #[inline]
1825 #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1826 #[cfg(target_has_atomic = "ptr")]
1827 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1828 pub fn compare_exchange(
1829 &self,
1830 current: *mut T,
1831 new: *mut T,
1832 success: Ordering,
1833 failure: Ordering,
1834 ) -> Result<*mut T, *mut T> {
1835 // SAFETY: data races are prevented by atomic intrinsics.
1836 unsafe { atomic_compare_exchange(self.p.get(), current, new, success, failure) }
1837 }
1838
1839 /// Stores a value into the pointer if the current value is the same as the `current` value.
1840 ///
1841 /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
1842 /// comparison succeeds, which can result in more efficient code on some platforms. The
1843 /// return value is a result indicating whether the new value was written and containing the
1844 /// previous value.
1845 ///
1846 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1847 /// ordering of this operation. `success` describes the required ordering for the
1848 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1849 /// `failure` describes the required ordering for the load operation that takes place when
1850 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1851 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1852 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1853 ///
1854 /// **Note:** This method is only available on platforms that support atomic
1855 /// operations on pointers.
1856 ///
1857 /// # Examples
1858 ///
1859 /// ```
1860 /// use std::sync::atomic::{AtomicPtr, Ordering};
1861 ///
1862 /// let some_ptr = AtomicPtr::new(&mut 5);
1863 ///
1864 /// let new = &mut 10;
1865 /// let mut old = some_ptr.load(Ordering::Relaxed);
1866 /// loop {
1867 /// match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
1868 /// Ok(_) => break,
1869 /// Err(x) => old = x,
1870 /// }
1871 /// }
1872 /// ```
1873 #[inline]
1874 #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1875 #[cfg(target_has_atomic = "ptr")]
1876 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1877 pub fn compare_exchange_weak(
1878 &self,
1879 current: *mut T,
1880 new: *mut T,
1881 success: Ordering,
1882 failure: Ordering,
1883 ) -> Result<*mut T, *mut T> {
1884 // SAFETY: This intrinsic is unsafe because it operates on a raw pointer
1885 // but we know for sure that the pointer is valid (we just got it from
1886 // an `UnsafeCell` that we have by reference) and the atomic operation
1887 // itself allows us to safely mutate the `UnsafeCell` contents.
1888 unsafe { atomic_compare_exchange_weak(self.p.get(), current, new, success, failure) }
1889 }
1890
1891 /// Fetches the value, and applies a function to it that returns an optional
1892 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1893 /// returned `Some(_)`, else `Err(previous_value)`.
1894 ///
1895 /// Note: This may call the function multiple times if the value has been
1896 /// changed from other threads in the meantime, as long as the function
1897 /// returns `Some(_)`, but the function will have been applied only once to
1898 /// the stored value.
1899 ///
1900 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1901 /// ordering of this operation. The first describes the required ordering for
1902 /// when the operation finally succeeds while the second describes the
1903 /// required ordering for loads. These correspond to the success and failure
1904 /// orderings of [`AtomicPtr::compare_exchange`] respectively.
1905 ///
1906 /// Using [`Acquire`] as success ordering makes the store part of this
1907 /// operation [`Relaxed`], and using [`Release`] makes the final successful
1908 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1909 /// [`Acquire`] or [`Relaxed`].
1910 ///
1911 /// **Note:** This method is only available on platforms that support atomic
1912 /// operations on pointers.
1913 ///
1914 /// # Considerations
1915 ///
1916 /// This method is not magic; it is not provided by the hardware.
1917 /// It is implemented in terms of [`AtomicPtr::compare_exchange_weak`], and suffers from the same drawbacks.
1918 /// In particular, this method will not circumvent the [ABA Problem].
1919 ///
1920 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1921 ///
1922 /// # Examples
1923 ///
1924 /// ```rust
1925 /// use std::sync::atomic::{AtomicPtr, Ordering};
1926 ///
1927 /// let ptr: *mut _ = &mut 5;
1928 /// let some_ptr = AtomicPtr::new(ptr);
1929 ///
1930 /// let new: *mut _ = &mut 10;
1931 /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
1932 /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
1933 /// if x == ptr {
1934 /// Some(new)
1935 /// } else {
1936 /// None
1937 /// }
1938 /// });
1939 /// assert_eq!(result, Ok(ptr));
1940 /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
1941 /// ```
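///
/// A sketch of a concurrent "claim" pattern, in which several threads race to
/// install a pointer into an initially null slot. The closure may run more than
/// once in the losing threads, but only one installation ever succeeds:
///
/// ```ignore-wasm
/// use std::ptr::null_mut;
/// use std::sync::atomic::{AtomicPtr, Ordering};
///
/// let slot = AtomicPtr::new(null_mut::<i32>());
///
/// std::thread::scope(|s| {
///     for i in 0..4 {
///         let slot = &slot;
///         s.spawn(move || {
///             let candidate = Box::into_raw(Box::new(i));
///             // Install `candidate` only while the slot is still null.
///             let res = slot.fetch_update(Ordering::AcqRel, Ordering::Acquire, |cur| {
///                 if cur.is_null() { Some(candidate) } else { None }
///             });
///             if res.is_err() {
///                 // Lost the race: reclaim our allocation.
///                 drop(unsafe { Box::from_raw(candidate) });
///             }
///         });
///     }
/// });
///
/// let winner = slot.load(Ordering::Acquire);
/// assert!(!winner.is_null());
/// assert!((0..4).contains(&unsafe { *winner }));
/// // Reclaim the winning allocation.
/// drop(unsafe { Box::from_raw(winner) });
/// ```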
1942 #[inline]
1943 #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
1944 #[cfg(target_has_atomic = "ptr")]
1945 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1946 pub fn fetch_update<F>(
1947 &self,
1948 set_order: Ordering,
1949 fetch_order: Ordering,
1950 mut f: F,
1951 ) -> Result<*mut T, *mut T>
1952 where
1953 F: FnMut(*mut T) -> Option<*mut T>,
1954 {
1955 let mut prev = self.load(fetch_order);
1956 while let Some(next) = f(prev) {
1957 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1958 x @ Ok(_) => return x,
1959 Err(next_prev) => prev = next_prev,
1960 }
1961 }
1962 Err(prev)
1963 }

1964 /// Fetches the value, and applies a function to it that returns an optional
1965 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1966 /// returned `Some(_)`, else `Err(previous_value)`.
1967 ///
1968 /// See also: [`update`](`AtomicPtr::update`).
1969 ///
1970 /// Note: This may call the function multiple times if the value has been
1971 /// changed from other threads in the meantime, as long as the function
1972 /// returns `Some(_)`, but the function will have been applied only once to
1973 /// the stored value.
1974 ///
1975 /// `try_update` takes two [`Ordering`] arguments to describe the memory
1976 /// ordering of this operation. The first describes the required ordering for
1977 /// when the operation finally succeeds while the second describes the
1978 /// required ordering for loads. These correspond to the success and failure
1979 /// orderings of [`AtomicPtr::compare_exchange`] respectively.
1980 ///
1981 /// Using [`Acquire`] as success ordering makes the store part of this
1982 /// operation [`Relaxed`], and using [`Release`] makes the final successful
1983 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1984 /// [`Acquire`] or [`Relaxed`].
1985 ///
1986 /// **Note:** This method is only available on platforms that support atomic
1987 /// operations on pointers.
1988 ///
1989 /// # Considerations
1990 ///
1991 /// This method is not magic; it is not provided by the hardware.
1992 /// It is implemented in terms of [`AtomicPtr::compare_exchange_weak`], and suffers from the same drawbacks.
1993 /// In particular, this method will not circumvent the [ABA Problem].
1994 ///
1995 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1996 ///
1997 /// # Examples
1998 ///
1999 /// ```rust
2000 /// #![feature(atomic_try_update)]
2001 /// use std::sync::atomic::{AtomicPtr, Ordering};
2002 ///
2003 /// let ptr: *mut _ = &mut 5;
2004 /// let some_ptr = AtomicPtr::new(ptr);
2005 ///
2006 /// let new: *mut _ = &mut 10;
2007 /// assert_eq!(some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
2008 /// let result = some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
2009 /// if x == ptr {
2010 /// Some(new)
2011 /// } else {
2012 /// None
2013 /// }
2014 /// });
2015 /// assert_eq!(result, Ok(ptr));
2016 /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2017 /// ```
2018 #[inline]
2019 #[unstable(feature = "atomic_try_update", issue = "135894")]
2020 #[cfg(target_has_atomic = "ptr")]
2021 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2022 pub fn try_update(
2023 &self,
2024 set_order: Ordering,
2025 fetch_order: Ordering,
2026 f: impl FnMut(*mut T) -> Option<*mut T>,
2027 ) -> Result<*mut T, *mut T> {
2028 // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
2029 // when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
2030 self.fetch_update(set_order, fetch_order, f)
2031 }
2032
2033 /// Fetches the value, and applies a function to it that returns a new value.
2034 /// The new value is stored and the old value is returned.
2035 ///
2036 /// See also: [`try_update`](`AtomicPtr::try_update`).
2037 ///
2038 /// Note: This may call the function multiple times if the value has been changed from other threads in
2039 /// the meantime, but the function will have been applied only once to the stored value.
2040 ///
2041 /// `update` takes two [`Ordering`] arguments to describe the memory
2042 /// ordering of this operation. The first describes the required ordering for
2043 /// when the operation finally succeeds while the second describes the
2044 /// required ordering for loads. These correspond to the success and failure
2045 /// orderings of [`AtomicPtr::compare_exchange`] respectively.
2046 ///
2047 /// Using [`Acquire`] as success ordering makes the store part
2048 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
2049 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2050 ///
2051 /// **Note:** This method is only available on platforms that support atomic
2052 /// operations on pointers.
2053 ///
2054 /// # Considerations
2055 ///
2056 /// This method is not magic; it is not provided by the hardware.
2057 /// It is implemented in terms of [`AtomicPtr::compare_exchange_weak`], and suffers from the same drawbacks.
2058 /// In particular, this method will not circumvent the [ABA Problem].
2059 ///
2060 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2061 ///
2062 /// # Examples
2063 ///
2064 /// ```rust
2065 /// #![feature(atomic_try_update)]
2066 ///
2067 /// use std::sync::atomic::{AtomicPtr, Ordering};
2068 ///
2069 /// let ptr: *mut _ = &mut 5;
2070 /// let some_ptr = AtomicPtr::new(ptr);
2071 ///
2072 /// let new: *mut _ = &mut 10;
2073 /// let result = some_ptr.update(Ordering::SeqCst, Ordering::SeqCst, |_| new);
2074 /// assert_eq!(result, ptr);
2075 /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2076 /// ```
2077 #[inline]
2078 #[unstable(feature = "atomic_try_update", issue = "135894")]
2079 #[cfg(target_has_atomic = "ptr")]
2080 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2081 pub fn update(
2082 &self,
2083 set_order: Ordering,
2084 fetch_order: Ordering,
2085 mut f: impl FnMut(*mut T) -> *mut T,
2086 ) -> *mut T {
2087 let mut prev = self.load(fetch_order);
2088 loop {
2089 match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
2090 Ok(x) => break x,
2091 Err(next_prev) => prev = next_prev,
2092 }
2093 }
2094 }
2095
2096 /// Offsets the pointer's address by adding `val` (in units of `T`),
2097 /// returning the previous pointer.
2098 ///
2099 /// This is equivalent to using [`wrapping_add`] to atomically perform the
2100 /// equivalent of `ptr = ptr.wrapping_add(val);`.
2101 ///
2102 /// This method operates in units of `T`, which means that it cannot be used
2103 /// to offset the pointer by an amount which is not a multiple of
2104 /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2105 /// work with a deliberately misaligned pointer. In such cases, you may use
2106 /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
2107 ///
2108 /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
2109 /// memory ordering of this operation. All ordering modes are possible. Note
2110 /// that using [`Acquire`] makes the store part of this operation
2111 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2112 ///
2113 /// **Note**: This method is only available on platforms that support atomic
2114 /// operations on [`AtomicPtr`].
2115 ///
2116 /// [`wrapping_add`]: pointer::wrapping_add
2117 ///
2118 /// # Examples
2119 ///
2120 /// ```
2121 /// #![feature(strict_provenance_atomic_ptr)]
2122 /// use core::sync::atomic::{AtomicPtr, Ordering};
2123 ///
2124 /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2125 /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
2126 /// // Note: units of `size_of::<i64>()`.
2127 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
2128 /// ```
2129 #[inline]
2130 #[cfg(target_has_atomic = "ptr")]
2131 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2132 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2133 pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
2134 self.fetch_byte_add(val.wrapping_mul(size_of::<T>()), order)
2135 }
2136
2137 /// Offsets the pointer's address by subtracting `val` (in units of `T`),
2138 /// returning the previous pointer.
2139 ///
2140 /// This is equivalent to using [`wrapping_sub`] to atomically perform the
2141 /// equivalent of `ptr = ptr.wrapping_sub(val);`.
2142 ///
2143 /// This method operates in units of `T`, which means that it cannot be used
2144 /// to offset the pointer by an amount which is not a multiple of
2145 /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2146 /// work with a deliberately misaligned pointer. In such cases, you may use
2147 /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
2148 ///
2149 /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
2150 /// ordering of this operation. All ordering modes are possible. Note that
2151 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2152 /// and using [`Release`] makes the load part [`Relaxed`].
2153 ///
2154 /// **Note**: This method is only available on platforms that support atomic
2155 /// operations on [`AtomicPtr`].
2156 ///
2157 /// [`wrapping_sub`]: pointer::wrapping_sub
2158 ///
2159 /// # Examples
2160 ///
2161 /// ```
2162 /// #![feature(strict_provenance_atomic_ptr)]
2163 /// use core::sync::atomic::{AtomicPtr, Ordering};
2164 ///
2165 /// let array = [1i32, 2i32];
2166 /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
2167 ///
2168 /// assert!(core::ptr::eq(
2169 /// atom.fetch_ptr_sub(1, Ordering::Relaxed),
2170 /// &array[1],
2171 /// ));
2172 /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
2173 /// ```
2174 #[inline]
2175 #[cfg(target_has_atomic = "ptr")]
2176 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2177 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2178 pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
2179 self.fetch_byte_sub(val.wrapping_mul(size_of::<T>()), order)
2180 }
2181
2182 /// Offsets the pointer's address by adding `val` *bytes*, returning the
2183 /// previous pointer.
2184 ///
2185 /// This is equivalent to using [`wrapping_byte_add`] to atomically
2186 /// perform `ptr = ptr.wrapping_byte_add(val)`.
2187 ///
2188 /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
2189 /// memory ordering of this operation. All ordering modes are possible. Note
2190 /// that using [`Acquire`] makes the store part of this operation
2191 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2192 ///
2193 /// **Note**: This method is only available on platforms that support atomic
2194 /// operations on [`AtomicPtr`].
2195 ///
2196 /// [`wrapping_byte_add`]: pointer::wrapping_byte_add
2197 ///
2198 /// # Examples
2199 ///
2200 /// ```
2201 /// #![feature(strict_provenance_atomic_ptr)]
2202 /// use core::sync::atomic::{AtomicPtr, Ordering};
2203 ///
2204 /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2205 /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
2206 /// // Note: in units of bytes, not `size_of::<i64>()`.
2207 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
2208 /// ```
2209 #[inline]
2210 #[cfg(target_has_atomic = "ptr")]
2211 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2212 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2213 pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
2214 // SAFETY: data races are prevented by atomic intrinsics.
2215 unsafe { atomic_add(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
2216 }
2217
2218 /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
2219 /// previous pointer.
2220 ///
2221 /// This is equivalent to using [`wrapping_byte_sub`] to atomically
2222 /// perform `ptr = ptr.wrapping_byte_sub(val)`.
2223 ///
2224 /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
2225 /// memory ordering of this operation. All ordering modes are possible. Note
2226 /// that using [`Acquire`] makes the store part of this operation
2227 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2228 ///
2229 /// **Note**: This method is only available on platforms that support atomic
2230 /// operations on [`AtomicPtr`].
2231 ///
2232 /// [`wrapping_byte_sub`]: pointer::wrapping_byte_sub
2233 ///
2234 /// # Examples
2235 ///
2236 /// ```
2237 /// #![feature(strict_provenance_atomic_ptr)]
2238 /// use core::sync::atomic::{AtomicPtr, Ordering};
2239 ///
2240 /// let atom = AtomicPtr::<i64>::new(core::ptr::without_provenance_mut(1));
2241 /// assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1);
2242 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 0);
2243 /// ```
2244 #[inline]
2245 #[cfg(target_has_atomic = "ptr")]
2246 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2247 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2248 pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
2249 // SAFETY: data races are prevented by atomic intrinsics.
2250 unsafe { atomic_sub(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
2251 }
2252
2253 /// Performs a bitwise "or" operation on the address of the current pointer,
2254 /// and the argument `val`, and stores a pointer with provenance of the
2255 /// current pointer and the resulting address.
2256 ///
2257 /// This is equivalent to using [`map_addr`] to atomically perform
2258 /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
2259 /// pointer schemes to atomically set tag bits.
2260 ///
2261 /// **Caveat**: This operation returns the previous value. To compute the
2262 /// stored value without losing provenance, you may use [`map_addr`]. For
2263 /// example: `a.fetch_or(val).map_addr(|a| a | val)`.
2264 ///
2265 /// `fetch_or` takes an [`Ordering`] argument which describes the memory
2266 /// ordering of this operation. All ordering modes are possible. Note that
2267 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2268 /// and using [`Release`] makes the load part [`Relaxed`].
2269 ///
2270 /// **Note**: This method is only available on platforms that support atomic
2271 /// operations on [`AtomicPtr`].
2272 ///
2273 /// This API and its claimed semantics are part of the Strict Provenance
2274 /// experiment, see the [module documentation for `ptr`][crate::ptr] for
2275 /// details.
2276 ///
2277 /// [`map_addr`]: pointer::map_addr
2278 ///
2279 /// # Examples
2280 ///
2281 /// ```
2282 /// #![feature(strict_provenance_atomic_ptr)]
2283 /// use core::sync::atomic::{AtomicPtr, Ordering};
2284 ///
2285 /// let pointer = &mut 3i64 as *mut i64;
2286 ///
2287 /// let atom = AtomicPtr::<i64>::new(pointer);
2288 /// // Tag the bottom bit of the pointer.
2289 /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
2290 /// // Extract and untag.
2291 /// let tagged = atom.load(Ordering::Relaxed);
2292 /// assert_eq!(tagged.addr() & 1, 1);
2293 /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2294 /// ```
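///
/// To also compute the pointer that was stored, as described in the caveat
/// above, re-apply the mask to the returned value; a small sketch:
///
/// ```
/// #![feature(strict_provenance_atomic_ptr)]
/// use core::sync::atomic::{AtomicPtr, Ordering};
///
/// let pointer = &mut 3i64 as *mut i64;
/// let atom = AtomicPtr::<i64>::new(pointer);
///
/// // `fetch_or` returns the *previous* pointer, so mask the result again
/// // to reconstruct the pointer that is now stored in `atom`.
/// let stored = atom.fetch_or(1, Ordering::Relaxed).map_addr(|a| a | 1);
/// assert_eq!(stored.addr(), atom.load(Ordering::Relaxed).addr());
/// ```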
2295 #[inline]
2296 #[cfg(target_has_atomic = "ptr")]
2297 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2298 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2299 pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
2300 // SAFETY: data races are prevented by atomic intrinsics.
2301 unsafe { atomic_or(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
2302 }
2303
2304 /// Performs a bitwise "and" operation on the address of the current
2305 /// pointer, and the argument `val`, and stores a pointer with provenance of
2306 /// the current pointer and the resulting address.
2307 ///
2308 /// This is equivalent to using [`map_addr`] to atomically perform
2309 /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
2310 /// pointer schemes to atomically unset tag bits.
2311 ///
2312 /// **Caveat**: This operation returns the previous value. To compute the
2313 /// stored value without losing provenance, you may use [`map_addr`]. For
2314 /// example: `a.fetch_and(val).map_addr(|a| a & val)`.
2315 ///
2316 /// `fetch_and` takes an [`Ordering`] argument which describes the memory
2317 /// ordering of this operation. All ordering modes are possible. Note that
2318 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2319 /// and using [`Release`] makes the load part [`Relaxed`].
2320 ///
2321 /// **Note**: This method is only available on platforms that support atomic
2322 /// operations on [`AtomicPtr`].
2323 ///
2324 /// This API and its claimed semantics are part of the Strict Provenance
2325 /// experiment, see the [module documentation for `ptr`][crate::ptr] for
2326 /// details.
2327 ///
2328 /// [`map_addr`]: pointer::map_addr
2329 ///
2330 /// # Examples
2331 ///
2332 /// ```
2333 /// #![feature(strict_provenance_atomic_ptr)]
2334 /// use core::sync::atomic::{AtomicPtr, Ordering};
2335 ///
2336 /// let pointer = &mut 3i64 as *mut i64;
2337 /// // A tagged pointer
2338 /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2339 /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
2340 /// // Untag, and extract the previously tagged pointer.
2341 /// let untagged = atom.fetch_and(!1, Ordering::Relaxed)
2342 /// .map_addr(|a| a & !1);
2343 /// assert_eq!(untagged, pointer);
2344 /// ```
2345 #[inline]
2346 #[cfg(target_has_atomic = "ptr")]
2347 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2348 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2349 pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
2350 // SAFETY: data races are prevented by atomic intrinsics.
2351 unsafe { atomic_and(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
2352 }
2353
2354 /// Performs a bitwise "xor" operation on the address of the current
2355 /// pointer, and the argument `val`, and stores a pointer with provenance of
2356 /// the current pointer and the resulting address.
2357 ///
2358 /// This is equivalent to using [`map_addr`] to atomically perform
2359 /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
2360 /// pointer schemes to atomically toggle tag bits.
2361 ///
2362 /// **Caveat**: This operation returns the previous value. To compute the
2363 /// stored value without losing provenance, you may use [`map_addr`]. For
2364 /// example: `a.fetch_xor(val).map_addr(|a| a ^ val)`.
2365 ///
2366 /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
2367 /// ordering of this operation. All ordering modes are possible. Note that
2368 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2369 /// and using [`Release`] makes the load part [`Relaxed`].
2370 ///
2371 /// **Note**: This method is only available on platforms that support atomic
2372 /// operations on [`AtomicPtr`].
2373 ///
2374 /// This API and its claimed semantics are part of the Strict Provenance
2375 /// experiment, see the [module documentation for `ptr`][crate::ptr] for
2376 /// details.
2377 ///
2378 /// [`map_addr`]: pointer::map_addr
2379 ///
2380 /// # Examples
2381 ///
2382 /// ```
2383 /// #![feature(strict_provenance_atomic_ptr)]
2384 /// use core::sync::atomic::{AtomicPtr, Ordering};
2385 ///
2386 /// let pointer = &mut 3i64 as *mut i64;
2387 /// let atom = AtomicPtr::<i64>::new(pointer);
2388 ///
2389 /// // Toggle a tag bit on the pointer.
2390 /// atom.fetch_xor(1, Ordering::Relaxed);
2391 /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2392 /// ```
2393 #[inline]
2394 #[cfg(target_has_atomic = "ptr")]
2395 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2396 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2397 pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
2398 // SAFETY: data races are prevented by atomic intrinsics.
2399 unsafe { atomic_xor(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
2400 }
2401
2402 /// Returns a mutable pointer to the underlying pointer.
2403 ///
2404 /// Doing non-atomic reads and writes on the resulting pointer can be a data race.
2405 /// This method is mostly useful for FFI, where the function signature may use
2406 /// `*mut *mut T` instead of `&AtomicPtr<T>`.
2407 ///
2408 /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
2409 /// atomic types work with interior mutability. All modifications of an atomic change the value
2410 /// through a shared reference, and can do so safely as long as they use atomic operations. Any
2411 /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
2412 /// restriction: operations on it must be atomic.
2413 ///
2414 /// # Examples
2415 ///
2416 /// ```ignore (extern-declaration)
2417 /// use std::sync::atomic::AtomicPtr;
2418 ///
2419 /// extern "C" {
2420 /// fn my_atomic_op(arg: *mut *mut u32);
2421 /// }
2422 ///
2423 /// let mut value = 17;
2424 /// let atomic = AtomicPtr::new(&mut value);
2425 ///
2426 /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
2427 /// unsafe {
2428 /// my_atomic_op(atomic.as_ptr());
2429 /// }
2430 /// ```
2431 #[inline]
2432 #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
2433 #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
2434 #[rustc_never_returns_null_ptr]
2435 pub const fn as_ptr(&self) -> *mut *mut T {
2436 self.p.get()
2437 }
2438}
2439
2440#[cfg(target_has_atomic_load_store = "8")]
2441#[stable(feature = "atomic_bool_from", since = "1.24.0")]
2442impl From<bool> for AtomicBool {
2443 /// Converts a `bool` into an `AtomicBool`.
2444 ///
2445 /// # Examples
2446 ///
2447 /// ```
2448 /// use std::sync::atomic::AtomicBool;
2449 /// let atomic_bool = AtomicBool::from(true);
2450 /// assert_eq!(format!("{atomic_bool:?}"), "true")
2451 /// ```
2452 #[inline]
2453 fn from(b: bool) -> Self {
2454 Self::new(b)
2455 }
2456}
2457
2458#[cfg(target_has_atomic_load_store = "ptr")]
2459#[stable(feature = "atomic_from", since = "1.23.0")]
2460impl<T> From<*mut T> for AtomicPtr<T> {
2461 /// Converts a `*mut T` into an `AtomicPtr<T>`.
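///
/// # Examples
///
/// A minimal conversion sketch, mirroring the `From<bool>` example above:
///
/// ```
/// use std::sync::atomic::AtomicPtr;
///
/// let mut data = 5;
/// let atomic_ptr = AtomicPtr::from(&mut data as *mut i32);
/// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
/// ```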
2462 #[inline]
2463 fn from(p: *mut T) -> Self {
2464 Self::new(p)
2465 }
2466}
2467
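// Helper for the docs generated by `atomic_int!` below: expands to the `yes = [...]`
// tokens for the 8-bit integer types (`u8`/`i8`) and to the `no = [...]` tokens for
// every other integer type.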
2468#[allow(unused_macros)] // This macro ends up being unused on some architectures.
2469macro_rules! if_8_bit {
2470 (u8, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($yes)*)?) };
2471 (i8, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($yes)*)?) };
2472 ($_:ident, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($no)*)?) };
2473}
2474
2475#[cfg(target_has_atomic_load_store)]
2476macro_rules! atomic_int {
2477 ($cfg_cas:meta,
2478 $cfg_align:meta,
2479 $stable:meta,
2480 $stable_cxchg:meta,
2481 $stable_debug:meta,
2482 $stable_access:meta,
2483 $stable_from:meta,
2484 $stable_nand:meta,
2485 $const_stable_new:meta,
2486 $const_stable_into_inner:meta,
2487 $diagnostic_item:meta,
2488 $s_int_type:literal,
2489 $extra_feature:expr,
2490 $min_fn:ident, $max_fn:ident,
2491 $align:expr,
2492 $int_type:ident $atomic_type:ident) => {
2493 /// An integer type which can be safely shared between threads.
2494 ///
2495 /// This type has the same
2496 #[doc = if_8_bit!(
2497 $int_type,
2498 yes = ["size, alignment, and bit validity"],
2499 no = ["size and bit validity"],
2500 )]
2501 /// as the underlying integer type, [`
2502 #[doc = $s_int_type]
2503 /// `].
2504 #[doc = if_8_bit! {
2505 $int_type,
2506 no = [
2507 "However, the alignment of this type is always equal to its ",
2508 "size, even on targets where [`", $s_int_type, "`] has a ",
2509 "lesser alignment."
2510 ],
2511 }]
2512 ///
2513 /// For more about the differences between atomic types and
2514 /// non-atomic types as well as information about the portability of
2515 /// this type, please see the [module-level documentation].
2516 ///
2517 /// **Note:** This type is only available on platforms that support
2518 /// atomic loads and stores of [`
2519 #[doc = $s_int_type]
2520 /// `].
2521 ///
2522 /// [module-level documentation]: crate::sync::atomic
2523 #[$stable]
2524 #[$diagnostic_item]
2525 #[repr(C, align($align))]
2526 pub struct $atomic_type {
2527 v: UnsafeCell<$int_type>,
2528 }
2529
2530 #[$stable]
2531 impl Default for $atomic_type {
2532 #[inline]
2533 fn default() -> Self {
2534 Self::new(Default::default())
2535 }
2536 }
2537
2538 #[$stable_from]
2539 impl From<$int_type> for $atomic_type {
2540 #[doc = concat!("Converts an `", stringify!($int_type), "` into an `", stringify!($atomic_type), "`.")]
2541 #[inline]
2542 fn from(v: $int_type) -> Self { Self::new(v) }
2543 }
2544
2545 #[$stable_debug]
2546 impl fmt::Debug for $atomic_type {
2547 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
2548 fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
2549 }
2550 }
2551
2552 // Send is implicitly implemented.
2553 #[$stable]
2554 unsafe impl Sync for $atomic_type {}
2555
2556 impl $atomic_type {
2557 /// Creates a new atomic integer.
2558 ///
2559 /// # Examples
2560 ///
2561 /// ```
2562 #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2563 ///
2564 #[doc = concat!("let atomic_forty_two = ", stringify!($atomic_type), "::new(42);")]
2565 /// ```
2566 #[inline]
2567 #[$stable]
2568 #[$const_stable_new]
2569 #[must_use]
2570 pub const fn new(v: $int_type) -> Self {
2571 Self {v: UnsafeCell::new(v)}
2572 }
2573
2574 /// Creates a new reference to an atomic integer from a pointer.
2575 ///
2576 /// # Examples
2577 ///
2578 /// ```
2579 #[doc = concat!($extra_feature, "use std::sync::atomic::{self, ", stringify!($atomic_type), "};")]
2580 ///
2581 /// // Get a pointer to an allocated value
2582 #[doc = concat!("let ptr: *mut ", stringify!($int_type), " = Box::into_raw(Box::new(0));")]
2583 ///
2584 #[doc = concat!("assert!(ptr.cast::<", stringify!($atomic_type), ">().is_aligned());")]
2585 ///
2586 /// {
2587 /// // Create an atomic view of the allocated value
2588 // SAFETY: this is a doc comment, tidy, it can't hurt you (also guaranteed by the construction of `ptr` and the assert above)
2589 #[doc = concat!(" let atomic = unsafe {", stringify!($atomic_type), "::from_ptr(ptr) };")]
2590 ///
2591 /// // Use `atomic` for atomic operations, possibly share it with other threads
2592 /// atomic.store(1, atomic::Ordering::Relaxed);
2593 /// }
2594 ///
2595 /// // It's ok to non-atomically access the value behind `ptr`,
2596 /// // since the reference to the atomic ended its lifetime in the block above
2597 /// assert_eq!(unsafe { *ptr }, 1);
2598 ///
2599 /// // Deallocate the value
2600 /// unsafe { drop(Box::from_raw(ptr)) }
2601 /// ```
2602 ///
2603 /// # Safety
2604 ///
2605 /// * `ptr` must be aligned to
2606 #[doc = concat!(" `align_of::<", stringify!($atomic_type), ">()`")]
2607 #[doc = if_8_bit!{
2608 $int_type,
2609 yes = [
2610 " (note that this is always true, since `align_of::<",
2611 stringify!($atomic_type), ">() == 1`)."
2612 ],
2613 no = [
2614 " (note that on some platforms this can be bigger than `align_of::<",
2615 stringify!($int_type), ">()`)."
2616 ],
2617 }]
2618 /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2619 /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
2620 /// allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes,
2621 /// without synchronization.
2622 ///
2623 /// [valid]: crate::ptr#safety
2624 /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
2625 #[inline]
2626 #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
2627 #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
2628 pub const unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a $atomic_type {
2629 // SAFETY: guaranteed by the caller
2630 unsafe { &*ptr.cast() }
2631 }
2632
2633
2634 /// Returns a mutable reference to the underlying integer.
2635 ///
2636 /// This is safe because the mutable reference guarantees that no other threads are
2637 /// concurrently accessing the atomic data.
2638 ///
2639 /// # Examples
2640 ///
2641 /// ```
2642 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2643 ///
2644 #[doc = concat!("let mut some_var = ", stringify!($atomic_type), "::new(10);")]
2645 /// assert_eq!(*some_var.get_mut(), 10);
2646 /// *some_var.get_mut() = 5;
2647 /// assert_eq!(some_var.load(Ordering::SeqCst), 5);
2648 /// ```
2649 #[inline]
2650 #[$stable_access]
2651 pub fn get_mut(&mut self) -> &mut $int_type {
2652 self.v.get_mut()
2653 }
2654
2655 #[doc = concat!("Get atomic access to a `&mut ", stringify!($int_type), "`.")]
2656 ///
2657 #[doc = if_8_bit! {
2658 $int_type,
2659 no = [
2660 "**Note:** This function is only available on targets where `",
2661 stringify!($atomic_type), "` has the same alignment as `", stringify!($int_type), "`."
2662 ],
2663 }]
2664 ///
2665 /// # Examples
2666 ///
2667 /// ```
2668 /// #![feature(atomic_from_mut)]
2669 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2670 ///
2671 /// let mut some_int = 123;
2672 #[doc = concat!("let a = ", stringify!($atomic_type), "::from_mut(&mut some_int);")]
2673 /// a.store(100, Ordering::Relaxed);
2674 /// assert_eq!(some_int, 100);
2675 /// ```
2676 ///
2677 #[inline]
2678 #[$cfg_align]
2679 #[unstable(feature = "atomic_from_mut", issue = "76314")]
2680 pub fn from_mut(v: &mut $int_type) -> &mut Self {
2681 let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2682 // SAFETY:
2683 // - the mutable reference guarantees unique ownership.
2684 // - the alignment of `$int_type` and `Self` is the
2685 // same, as promised by $cfg_align and verified above.
2686 unsafe { &mut *(v as *mut $int_type as *mut Self) }
2687 }
2688
2689 #[doc = concat!("Get non-atomic access to a `&mut [", stringify!($atomic_type), "]` slice")]
2690 ///
2691 /// This is safe because the mutable reference guarantees that no other threads are
2692 /// concurrently accessing the atomic data.
2693 ///
2694 /// # Examples
2695 ///
2696 /// ```ignore-wasm
2697 /// #![feature(atomic_from_mut)]
2698 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2699 ///
2700 #[doc = concat!("let mut some_ints = [const { ", stringify!($atomic_type), "::new(0) }; 10];")]
2701 ///
2702 #[doc = concat!("let view: &mut [", stringify!($int_type), "] = ", stringify!($atomic_type), "::get_mut_slice(&mut some_ints);")]
2703 /// assert_eq!(view, [0; 10]);
2704 /// view
2705 /// .iter_mut()
2706 /// .enumerate()
2707 /// .for_each(|(idx, int)| *int = idx as _);
2708 ///
2709 /// std::thread::scope(|s| {
2710 /// some_ints
2711 /// .iter()
2712 /// .enumerate()
2713 /// .for_each(|(idx, int)| {
2714 /// s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
2715 /// })
2716 /// });
2717 /// ```
2718 #[inline]
2719 #[unstable(feature = "atomic_from_mut", issue = "76314")]
2720 pub fn get_mut_slice(this: &mut [Self]) -> &mut [$int_type] {
2721 // SAFETY: the mutable reference guarantees unique ownership.
2722 unsafe { &mut *(this as *mut [Self] as *mut [$int_type]) }
2723 }
2724
2725 #[doc = concat!("Get atomic access to a `&mut [", stringify!($int_type), "]` slice.")]
2726 ///
2727 /// # Examples
2728 ///
2729 /// ```ignore-wasm
2730 /// #![feature(atomic_from_mut)]
2731 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2732 ///
2733 /// let mut some_ints = [0; 10];
2734 #[doc = concat!("let a = &*", stringify!($atomic_type), "::from_mut_slice(&mut some_ints);")]
2735 /// std::thread::scope(|s| {
2736 /// for i in 0..a.len() {
2737 /// s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
2738 /// }
2739 /// });
2740 /// for (i, n) in some_ints.into_iter().enumerate() {
2741 /// assert_eq!(i, n as usize);
2742 /// }
2743 /// ```
2744 #[inline]
2745 #[$cfg_align]
2746 #[unstable(feature = "atomic_from_mut", issue = "76314")]
2747 pub fn from_mut_slice(v: &mut [$int_type]) -> &mut [Self] {
2748 let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2749 // SAFETY:
2750 // - the mutable reference guarantees unique ownership.
2751 // - the alignment of `$int_type` and `Self` is the
2752 // same, as promised by $cfg_align and verified above.
2753 unsafe { &mut *(v as *mut [$int_type] as *mut [Self]) }
2754 }
2755
2756 /// Consumes the atomic and returns the contained value.
2757 ///
2758 /// This is safe because passing `self` by value guarantees that no other threads are
2759 /// concurrently accessing the atomic data.
2760 ///
2761 /// # Examples
2762 ///
2763 /// ```
2764 #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2765 ///
2766 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2767 /// assert_eq!(some_var.into_inner(), 5);
2768 /// ```
2769 #[inline]
2770 #[$stable_access]
2771 #[$const_stable_into_inner]
2772 pub const fn into_inner(self) -> $int_type {
2773 self.v.into_inner()
2774 }
2775
2776 /// Loads a value from the atomic integer.
2777 ///
2778 /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2779 /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
2780 ///
2781 /// # Panics
2782 ///
2783 /// Panics if `order` is [`Release`] or [`AcqRel`].
2784 ///
2785 /// # Examples
2786 ///
2787 /// ```
2788 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2789 ///
2790 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2791 ///
2792 /// assert_eq!(some_var.load(Ordering::Relaxed), 5);
2793 /// ```
2794 #[inline]
2795 #[$stable]
2796 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2797 pub fn load(&self, order: Ordering) -> $int_type {
2798 // SAFETY: data races are prevented by atomic intrinsics.
2799 unsafe { atomic_load(self.v.get(), order) }
2800 }
2801
2802 /// Stores a value into the atomic integer.
2803 ///
2804 /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2805 /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
2806 ///
2807 /// # Panics
2808 ///
2809 /// Panics if `order` is [`Acquire`] or [`AcqRel`].
2810 ///
2811 /// # Examples
2812 ///
2813 /// ```
2814 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2815 ///
2816 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2817 ///
2818 /// some_var.store(10, Ordering::Relaxed);
2819 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2820 /// ```
2821 #[inline]
2822 #[$stable]
2823 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2824 pub fn store(&self, val: $int_type, order: Ordering) {
2825 // SAFETY: data races are prevented by atomic intrinsics.
2826 unsafe { atomic_store(self.v.get(), val, order); }
2827 }
2828
2829 /// Stores a value into the atomic integer, returning the previous value.
2830 ///
2831 /// `swap` takes an [`Ordering`] argument which describes the memory ordering
2832 /// of this operation. All ordering modes are possible. Note that using
2833 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2834 /// using [`Release`] makes the load part [`Relaxed`].
2835 ///
2836 /// **Note**: This method is only available on platforms that support atomic operations on
2837 #[doc = concat!("[`", $s_int_type, "`].")]
2838 ///
2839 /// # Examples
2840 ///
2841 /// ```
2842 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2843 ///
2844 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2845 ///
2846 /// assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
2847 /// ```
2848 #[inline]
2849 #[$stable]
2850 #[$cfg_cas]
2851 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2852 pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
2853 // SAFETY: data races are prevented by atomic intrinsics.
2854 unsafe { atomic_swap(self.v.get(), val, order) }
2855 }
2856
2857 /// Stores a value into the atomic integer if the current value is the same as
2858 /// the `current` value.
2859 ///
2860 /// The return value is always the previous value. If it is equal to `current`, then the
2861 /// value was updated.
2862 ///
2863 /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
2864 /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
2865 /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
2866 /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
2867 /// happens, and using [`Release`] makes the load part [`Relaxed`].
2868 ///
2869 /// **Note**: This method is only available on platforms that support atomic operations on
2870 #[doc = concat!("[`", $s_int_type, "`].")]
2871 ///
2872 /// # Migrating to `compare_exchange` and `compare_exchange_weak`
2873 ///
2874 /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
2875 /// memory orderings:
2876 ///
2877 /// Original | Success | Failure
2878 /// -------- | ------- | -------
2879 /// Relaxed | Relaxed | Relaxed
2880 /// Acquire | Acquire | Acquire
2881 /// Release | Release | Relaxed
2882 /// AcqRel | AcqRel | Acquire
2883 /// SeqCst | SeqCst | SeqCst
2884 ///
2885 /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
2886 /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
2887 /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
2888 /// rather than to infer success vs failure based on the value that was read.
2889 ///
2890 /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
2891 /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
2892 /// which allows the compiler to generate better assembly code when the compare and swap
2893 /// is used in a loop.
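        ///
        /// For example, a `compare_and_swap` call that used [`AcqRel`] could be migrated as
        /// follows (a minimal sketch applying the ordering mapping and the `unwrap_or_else`
        /// pattern described above):
        ///
        /// ```
        #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
        ///
        #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
        ///
        /// // Before: let prev = some_var.compare_and_swap(5, 10, Ordering::AcqRel);
        /// // After: success ordering `AcqRel`, failure ordering `Acquire` (see the table above).
        /// let prev = some_var
        ///     .compare_exchange(5, 10, Ordering::AcqRel, Ordering::Acquire)
        ///     .unwrap_or_else(|x| x);
        /// assert_eq!(prev, 5);
        /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
        /// ```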
2894 ///
2895 /// # Examples
2896 ///
2897 /// ```
2898 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2899 ///
2900 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2901 ///
2902 /// assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
2903 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2904 ///
2905 /// assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
2906 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2907 /// ```
2908 #[inline]
2909 #[$stable]
2910 #[deprecated(
2911 since = "1.50.0",
2912 note = "Use `compare_exchange` or `compare_exchange_weak` instead")
2913 ]
2914 #[$cfg_cas]
2915 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2916 pub fn compare_and_swap(&self,
2917 current: $int_type,
2918 new: $int_type,
2919 order: Ordering) -> $int_type {
2920 match self.compare_exchange(current,
2921 new,
2922 order,
2923 strongest_failure_ordering(order)) {
2924 Ok(x) => x,
2925 Err(x) => x,
2926 }
2927 }
2928
2929 /// Stores a value into the atomic integer if the current value is the same as
2930 /// the `current` value.
2931 ///
2932 /// The return value is a result indicating whether the new value was written and
2933 /// containing the previous value. On success this value is guaranteed to be equal to
2934 /// `current`.
2935 ///
2936 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
2937 /// ordering of this operation. `success` describes the required ordering for the
2938 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
2939 /// `failure` describes the required ordering for the load operation that takes place when
2940 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
2941 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
2942 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2943 ///
2944 /// **Note**: This method is only available on platforms that support atomic operations on
2945 #[doc = concat!("[`", $s_int_type, "`].")]
2946 ///
2947 /// # Examples
2948 ///
2949 /// ```
2950 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2951 ///
2952 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2953 ///
2954 /// assert_eq!(some_var.compare_exchange(5, 10,
2955 /// Ordering::Acquire,
2956 /// Ordering::Relaxed),
2957 /// Ok(5));
2958 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2959 ///
2960 /// assert_eq!(some_var.compare_exchange(6, 12,
2961 /// Ordering::SeqCst,
2962 /// Ordering::Acquire),
2963 /// Err(10));
2964 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2965 /// ```
2966 #[inline]
2967 #[$stable_cxchg]
2968 #[$cfg_cas]
2969 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2970 pub fn compare_exchange(&self,
2971 current: $int_type,
2972 new: $int_type,
2973 success: Ordering,
2974 failure: Ordering) -> Result<$int_type, $int_type> {
2975 // SAFETY: data races are prevented by atomic intrinsics.
2976 unsafe { atomic_compare_exchange(self.v.get(), current, new, success, failure) }
2977 }
2978
2979 /// Stores a value into the atomic integer if the current value is the same as
2980 /// the `current` value.
2981 ///
2982 #[doc = concat!("Unlike [`", stringify!($atomic_type), "::compare_exchange`],")]
2983 /// this function is allowed to spuriously fail even
2984 /// when the comparison succeeds, which can result in more efficient code on some
2985 /// platforms. The return value is a result indicating whether the new value was
2986 /// written and containing the previous value.
2987 ///
2988 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
2989 /// ordering of this operation. `success` describes the required ordering for the
2990 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
2991 /// `failure` describes the required ordering for the load operation that takes place when
2992 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
2993 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
2994 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2995 ///
2996 /// **Note**: This method is only available on platforms that support atomic operations on
2997 #[doc = concat!("[`", $s_int_type, "`].")]
2998 ///
2999 /// # Examples
3000 ///
3001 /// ```
3002 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3003 ///
3004 #[doc = concat!("let val = ", stringify!($atomic_type), "::new(4);")]
3005 ///
3006 /// let mut old = val.load(Ordering::Relaxed);
3007 /// loop {
3008 /// let new = old * 2;
3009 /// match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
3010 /// Ok(_) => break,
3011 /// Err(x) => old = x,
3012 /// }
3013 /// }
3014 /// ```
3015 #[inline]
3016 #[$stable_cxchg]
3017 #[$cfg_cas]
3018 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3019 pub fn compare_exchange_weak(&self,
3020 current: $int_type,
3021 new: $int_type,
3022 success: Ordering,
3023 failure: Ordering) -> Result<$int_type, $int_type> {
3024 // SAFETY: data races are prevented by atomic intrinsics.
3025 unsafe {
3026 atomic_compare_exchange_weak(self.v.get(), current, new, success, failure)
3027 }
3028 }
3029
3030 /// Adds to the current value, returning the previous value.
3031 ///
3032 /// This operation wraps around on overflow.
3033 ///
3034 /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
3035 /// of this operation. All ordering modes are possible. Note that using
3036 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3037 /// using [`Release`] makes the load part [`Relaxed`].
3038 ///
3039 /// **Note**: This method is only available on platforms that support atomic operations on
3040 #[doc = concat!("[`", $s_int_type, "`].")]
3041 ///
3042 /// # Examples
3043 ///
3044 /// ```
3045 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3046 ///
3047 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0);")]
3048 /// assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
3049 /// assert_eq!(foo.load(Ordering::SeqCst), 10);
3050 /// ```
3051 #[inline]
3052 #[$stable]
3053 #[$cfg_cas]
3054 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3055 pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
3056 // SAFETY: data races are prevented by atomic intrinsics.
3057 unsafe { atomic_add(self.v.get(), val, order) }
3058 }
3059
3060 /// Subtracts from the current value, returning the previous value.
3061 ///
3062 /// This operation wraps around on overflow.
3063 ///
3064 /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
3065 /// of this operation. All ordering modes are possible. Note that using
3066 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3067 /// using [`Release`] makes the load part [`Relaxed`].
3068 ///
3069 /// **Note**: This method is only available on platforms that support atomic operations on
3070 #[doc = concat!("[`", $s_int_type, "`].")]
3071 ///
3072 /// # Examples
3073 ///
3074 /// ```
3075 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3076 ///
3077 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(20);")]
3078 /// assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
3079 /// assert_eq!(foo.load(Ordering::SeqCst), 10);
3080 /// ```
3081 #[inline]
3082 #[$stable]
3083 #[$cfg_cas]
3084 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3085 pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
3086 // SAFETY: data races are prevented by atomic intrinsics.
3087 unsafe { atomic_sub(self.v.get(), val, order) }
3088 }
3089
3090 /// Bitwise "and" with the current value.
3091 ///
3092 /// Performs a bitwise "and" operation on the current value and the argument `val`, and
3093 /// sets the new value to the result.
3094 ///
3095 /// Returns the previous value.
3096 ///
3097 /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
3098 /// of this operation. All ordering modes are possible. Note that using
3099 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3100 /// using [`Release`] makes the load part [`Relaxed`].
3101 ///
3102 /// **Note**: This method is only available on platforms that support atomic operations on
3103 #[doc = concat!("[`", $s_int_type, "`].")]
3104 ///
3105 /// # Examples
3106 ///
3107 /// ```
3108 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3109 ///
3110 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3111 /// assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
3112 /// assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3113 /// ```
3114 #[inline]
3115 #[$stable]
3116 #[$cfg_cas]
3117 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3118 pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
3119 // SAFETY: data races are prevented by atomic intrinsics.
3120 unsafe { atomic_and(self.v.get(), val, order) }
3121 }
3122
3123 /// Bitwise "nand" with the current value.
3124 ///
3125 /// Performs a bitwise "nand" operation on the current value and the argument `val`, and
3126 /// sets the new value to the result.
3127 ///
3128 /// Returns the previous value.
3129 ///
3130 /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
3131 /// of this operation. All ordering modes are possible. Note that using
3132 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3133 /// using [`Release`] makes the load part [`Relaxed`].
3134 ///
3135 /// **Note**: This method is only available on platforms that support atomic operations on
3136 #[doc = concat!("[`", $s_int_type, "`].")]
3137 ///
3138 /// # Examples
3139 ///
3140 /// ```
3141 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3142 ///
3143 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0x13);")]
3144 /// assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
3145 /// assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
3146 /// ```
3147 #[inline]
3148 #[$stable_nand]
3149 #[$cfg_cas]
3150 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3151 pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
3152 // SAFETY: data races are prevented by atomic intrinsics.
3153 unsafe { atomic_nand(self.v.get(), val, order) }
3154 }
3155
3156 /// Bitwise "or" with the current value.
3157 ///
3158 /// Performs a bitwise "or" operation on the current value and the argument `val`, and
3159 /// sets the new value to the result.
3160 ///
3161 /// Returns the previous value.
3162 ///
3163 /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
3164 /// of this operation. All ordering modes are possible. Note that using
3165 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3166 /// using [`Release`] makes the load part [`Relaxed`].
3167 ///
3168 /// **Note**: This method is only available on platforms that support atomic operations on
3169 #[doc = concat!("[`", $s_int_type, "`].")]
3170 ///
3171 /// # Examples
3172 ///
3173 /// ```
3174 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3175 ///
3176 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3177 /// assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
3178 /// assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3179 /// ```
3180 #[inline]
3181 #[$stable]
3182 #[$cfg_cas]
3183 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3184 pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
3185 // SAFETY: data races are prevented by atomic intrinsics.
3186 unsafe { atomic_or(self.v.get(), val, order) }
3187 }
3188
3189 /// Bitwise "xor" with the current value.
3190 ///
3191 /// Performs a bitwise "xor" operation on the current value and the argument `val`, and
3192 /// sets the new value to the result.
3193 ///
3194 /// Returns the previous value.
3195 ///
3196 /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
3197 /// of this operation. All ordering modes are possible. Note that using
3198 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3199 /// using [`Release`] makes the load part [`Relaxed`].
3200 ///
3201 /// **Note**: This method is only available on platforms that support atomic operations on
3202 #[doc = concat!("[`", $s_int_type, "`].")]
3203 ///
3204 /// # Examples
3205 ///
3206 /// ```
3207 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3208 ///
3209 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
3210 /// assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
3211 /// assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3212 /// ```
3213 #[inline]
3214 #[$stable]
3215 #[$cfg_cas]
3216 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3217 pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
3218 // SAFETY: data races are prevented by atomic intrinsics.
3219 unsafe { atomic_xor(self.v.get(), val, order) }
3220 }
3221
3222 /// Fetches the value, and applies a function to it that returns an optional
3223 /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3224 /// `Err(previous_value)`.
3225 ///
3226 /// Note: This may call the function multiple times if the value has been changed from other threads in
3227 /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3228 /// only once to the stored value.
3229 ///
3230 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3231 /// The first describes the required ordering for when the operation finally succeeds while the second
3232 /// describes the required ordering for loads. These correspond to the success and failure orderings of
3233 #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3234 /// respectively.
3235 ///
3236 /// Using [`Acquire`] as success ordering makes the store part
3237 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3238 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3239 ///
3240 /// **Note**: This method is only available on platforms that support atomic operations on
3241 #[doc = concat!("[`", $s_int_type, "`].")]
3242 ///
3243 /// # Considerations
3244 ///
3245 /// This method is not magic; it is not provided by the hardware.
3246 /// It is implemented in terms of
3247 #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange_weak`],")]
3248 /// and suffers from the same drawbacks.
3249 /// In particular, this method will not circumvent the [ABA Problem].
3250 ///
3251 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3252 ///
3253 /// # Examples
3254 ///
3255 /// ```rust
3256 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3257 ///
3258 #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3259 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3260 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3261 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3262 /// assert_eq!(x.load(Ordering::SeqCst), 9);
3263 /// ```
3264 #[inline]
3265 #[stable(feature = "no_more_cas", since = "1.45.0")]
3266 #[$cfg_cas]
3267 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3268 pub fn fetch_update<F>(&self,
3269 set_order: Ordering,
3270 fetch_order: Ordering,
3271 mut f: F) -> Result<$int_type, $int_type>
3272 where F: FnMut($int_type) -> Option<$int_type> {
3273 let mut prev = self.load(fetch_order);
3274 while let Some(next) = f(prev) {
3275 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
3276 x @ Ok(_) => return x,
3277 Err(next_prev) => prev = next_prev
3278 }
3279 }
3280 Err(prev)
3281 }
3282
3283 /// Fetches the value, and applies a function to it that returns an optional
3284 /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3285 /// `Err(previous_value)`.
3286 ///
3287 #[doc = concat!("See also: [`update`](`", stringify!($atomic_type), "::update`).")]
3288 ///
3289 /// Note: This may call the function multiple times if the value has been changed from other threads in
3290 /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3291 /// only once to the stored value.
3292 ///
3293 /// `try_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3294 /// The first describes the required ordering for when the operation finally succeeds while the second
3295 /// describes the required ordering for loads. These correspond to the success and failure orderings of
3296 #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3297 /// respectively.
3298 ///
3299 /// Using [`Acquire`] as success ordering makes the store part
3300 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3301 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3302 ///
3303 /// **Note**: This method is only available on platforms that support atomic operations on
3304 #[doc = concat!("[`", $s_int_type, "`].")]
3305 ///
3306 /// # Considerations
3307 ///
3308 /// This method is not magic; it is not provided by the hardware.
3309 /// It is implemented in terms of
3310 #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange_weak`],")]
3311 /// and suffers from the same drawbacks.
3312 /// In particular, this method will not circumvent the [ABA Problem].
3313 ///
3314 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3315 ///
3316 /// # Examples
3317 ///
3318 /// ```rust
3319 /// #![feature(atomic_try_update)]
3320 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3321 ///
3322 #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3323 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3324 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3325 /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3326 /// assert_eq!(x.load(Ordering::SeqCst), 9);
3327 /// ```
3328 #[inline]
3329 #[unstable(feature = "atomic_try_update", issue = "135894")]
3330 #[$cfg_cas]
3331 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3332 pub fn try_update(
3333 &self,
3334 set_order: Ordering,
3335 fetch_order: Ordering,
3336 f: impl FnMut($int_type) -> Option<$int_type>,
3337 ) -> Result<$int_type, $int_type> {
3338 // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`;
3339 // when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`.
3340 self.fetch_update(set_order, fetch_order, f)
3341 }
3342
3343        /// Fetches the value, and applies a function to it that returns a new value.
3344 /// The new value is stored and the old value is returned.
3345 ///
3346 #[doc = concat!("See also: [`try_update`](`", stringify!($atomic_type), "::try_update`).")]
3347 ///
3348 /// Note: This may call the function multiple times if the value has been changed from other threads in
3349 /// the meantime, but the function will have been applied only once to the stored value.
3350 ///
3351 /// `update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3352 /// The first describes the required ordering for when the operation finally succeeds while the second
3353 /// describes the required ordering for loads. These correspond to the success and failure orderings of
3354 #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
3355 /// respectively.
3356 ///
3357 /// Using [`Acquire`] as success ordering makes the store part
3358 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3359 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3360 ///
3361 /// **Note**: This method is only available on platforms that support atomic operations on
3362 #[doc = concat!("[`", $s_int_type, "`].")]
3363 ///
3364 /// # Considerations
3365 ///
3366 /// This method is not magic; it is not provided by the hardware.
3367 /// It is implemented in terms of
3368 #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange_weak`],")]
3369 /// and suffers from the same drawbacks.
3370 /// In particular, this method will not circumvent the [ABA Problem].
3371 ///
3372 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3373 ///
3374 /// # Examples
3375 ///
3376 /// ```rust
3377 /// #![feature(atomic_try_update)]
3378 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3379 ///
3380 #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
3381 /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 1), 7);
3382 /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 1), 8);
3383 /// assert_eq!(x.load(Ordering::SeqCst), 9);
3384 /// ```
3385 #[inline]
3386 #[unstable(feature = "atomic_try_update", issue = "135894")]
3387 #[$cfg_cas]
3388 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3389 pub fn update(
3390 &self,
3391 set_order: Ordering,
3392 fetch_order: Ordering,
3393 mut f: impl FnMut($int_type) -> $int_type,
3394 ) -> $int_type {
3395 let mut prev = self.load(fetch_order);
3396 loop {
3397 match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) {
3398 Ok(x) => break x,
3399 Err(next_prev) => prev = next_prev,
3400 }
3401 }
3402 }
3403
3404 /// Maximum with the current value.
3405 ///
3406 /// Finds the maximum of the current value and the argument `val`, and
3407 /// sets the new value to the result.
3408 ///
3409 /// Returns the previous value.
3410 ///
3411 /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
3412 /// of this operation. All ordering modes are possible. Note that using
3413 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3414 /// using [`Release`] makes the load part [`Relaxed`].
3415 ///
3416 /// **Note**: This method is only available on platforms that support atomic operations on
3417 #[doc = concat!("[`", $s_int_type, "`].")]
3418 ///
3419 /// # Examples
3420 ///
3421 /// ```
3422 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3423 ///
3424 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3425 /// assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
3426 /// assert_eq!(foo.load(Ordering::SeqCst), 42);
3427 /// ```
3428 ///
3429 /// If you want to obtain the maximum value in one step, you can use the following:
3430 ///
3431 /// ```
3432 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3433 ///
3434 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3435 /// let bar = 42;
3436 /// let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
3437        /// assert_eq!(max_foo, 42);
3438 /// ```
3439 #[inline]
3440 #[stable(feature = "atomic_min_max", since = "1.45.0")]
3441 #[$cfg_cas]
3442 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3443 pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
3444 // SAFETY: data races are prevented by atomic intrinsics.
3445 unsafe { $max_fn(self.v.get(), val, order) }
3446 }
3447
3448 /// Minimum with the current value.
3449 ///
3450 /// Finds the minimum of the current value and the argument `val`, and
3451 /// sets the new value to the result.
3452 ///
3453 /// Returns the previous value.
3454 ///
3455 /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
3456 /// of this operation. All ordering modes are possible. Note that using
3457 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3458 /// using [`Release`] makes the load part [`Relaxed`].
3459 ///
3460 /// **Note**: This method is only available on platforms that support atomic operations on
3461 #[doc = concat!("[`", $s_int_type, "`].")]
3462 ///
3463 /// # Examples
3464 ///
3465 /// ```
3466 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3467 ///
3468 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3469 /// assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
3470 /// assert_eq!(foo.load(Ordering::Relaxed), 23);
3471 /// assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
3472 /// assert_eq!(foo.load(Ordering::Relaxed), 22);
3473 /// ```
3474 ///
3475 /// If you want to obtain the minimum value in one step, you can use the following:
3476 ///
3477 /// ```
3478 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
3479 ///
3480 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
3481 /// let bar = 12;
3482 /// let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
3483 /// assert_eq!(min_foo, 12);
3484 /// ```
3485 #[inline]
3486 #[stable(feature = "atomic_min_max", since = "1.45.0")]
3487 #[$cfg_cas]
3488 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3489 pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
3490 // SAFETY: data races are prevented by atomic intrinsics.
3491 unsafe { $min_fn(self.v.get(), val, order) }
3492 }
3493
3494 /// Returns a mutable pointer to the underlying integer.
3495 ///
3496 /// Doing non-atomic reads and writes on the resulting integer can be a data race.
3497 /// This method is mostly useful for FFI, where the function signature may use
3498 #[doc = concat!("`*mut ", stringify!($int_type), "` instead of `&", stringify!($atomic_type), "`.")]
3499 ///
3500 /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
3501 /// atomic types work with interior mutability. All modifications of an atomic change the value
3502 /// through a shared reference, and can do so safely as long as they use atomic operations. Any
3503 /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
3504 /// restriction: operations on it must be atomic.
3505 ///
3506 /// # Examples
3507 ///
3508 /// ```ignore (extern-declaration)
3509 /// # fn main() {
3510 #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
3511 ///
3512 /// extern "C" {
3513 #[doc = concat!(" fn my_atomic_op(arg: *mut ", stringify!($int_type), ");")]
3514 /// }
3515 ///
3516 #[doc = concat!("let atomic = ", stringify!($atomic_type), "::new(1);")]
3517 ///
3518 /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
3519 /// unsafe {
3520 /// my_atomic_op(atomic.as_ptr());
3521 /// }
3522 /// # }
3523 /// ```
3524 #[inline]
3525 #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
3526 #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
3527 #[rustc_never_returns_null_ptr]
3528 pub const fn as_ptr(&self) -> *mut $int_type {
3529 self.v.get()
3530 }
3531 }
3532 }
3533}
3534
3535#[cfg(target_has_atomic_load_store = "8")]
3536atomic_int! {
3537 cfg(target_has_atomic = "8"),
3538 cfg(target_has_atomic_equal_alignment = "8"),
3539 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3540 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3541 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3542 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3543 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3544 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3545 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3546 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3547 rustc_diagnostic_item = "AtomicI8",
3548 "i8",
3549 "",
3550 atomic_min, atomic_max,
3551 1,
3552 i8 AtomicI8
3553}
3554#[cfg(target_has_atomic_load_store = "8")]
3555atomic_int! {
3556 cfg(target_has_atomic = "8"),
3557 cfg(target_has_atomic_equal_alignment = "8"),
3558 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3559 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3560 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3561 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3562 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3563 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3564 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3565 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3566 rustc_diagnostic_item = "AtomicU8",
3567 "u8",
3568 "",
3569 atomic_umin, atomic_umax,
3570 1,
3571 u8 AtomicU8
3572}
3573#[cfg(target_has_atomic_load_store = "16")]
3574atomic_int! {
3575 cfg(target_has_atomic = "16"),
3576 cfg(target_has_atomic_equal_alignment = "16"),
3577 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3578 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3579 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3580 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3581 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3582 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3583 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3584 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3585 rustc_diagnostic_item = "AtomicI16",
3586 "i16",
3587 "",
3588 atomic_min, atomic_max,
3589 2,
3590 i16 AtomicI16
3591}
3592#[cfg(target_has_atomic_load_store = "16")]
3593atomic_int! {
3594 cfg(target_has_atomic = "16"),
3595 cfg(target_has_atomic_equal_alignment = "16"),
3596 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3597 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3598 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3599 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3600 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3601 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3602 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3603 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3604 rustc_diagnostic_item = "AtomicU16",
3605 "u16",
3606 "",
3607 atomic_umin, atomic_umax,
3608 2,
3609 u16 AtomicU16
3610}
3611#[cfg(target_has_atomic_load_store = "32")]
3612atomic_int! {
3613 cfg(target_has_atomic = "32"),
3614 cfg(target_has_atomic_equal_alignment = "32"),
3615 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3616 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3617 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3618 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3619 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3620 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3621 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3622 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3623 rustc_diagnostic_item = "AtomicI32",
3624 "i32",
3625 "",
3626 atomic_min, atomic_max,
3627 4,
3628 i32 AtomicI32
3629}
3630#[cfg(target_has_atomic_load_store = "32")]
3631atomic_int! {
3632 cfg(target_has_atomic = "32"),
3633 cfg(target_has_atomic_equal_alignment = "32"),
3634 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3635 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3636 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3637 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3638 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3639 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3640 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3641 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3642 rustc_diagnostic_item = "AtomicU32",
3643 "u32",
3644 "",
3645 atomic_umin, atomic_umax,
3646 4,
3647 u32 AtomicU32
3648}
3649#[cfg(target_has_atomic_load_store = "64")]
3650atomic_int! {
3651 cfg(target_has_atomic = "64"),
3652 cfg(target_has_atomic_equal_alignment = "64"),
3653 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3654 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3655 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3656 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3657 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3658 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3659 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3660 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3661 rustc_diagnostic_item = "AtomicI64",
3662 "i64",
3663 "",
3664 atomic_min, atomic_max,
3665 8,
3666 i64 AtomicI64
3667}
3668#[cfg(target_has_atomic_load_store = "64")]
3669atomic_int! {
3670 cfg(target_has_atomic = "64"),
3671 cfg(target_has_atomic_equal_alignment = "64"),
3672 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3673 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3674 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3675 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3676 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3677 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3678 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3679 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3680 rustc_diagnostic_item = "AtomicU64",
3681 "u64",
3682 "",
3683 atomic_umin, atomic_umax,
3684 8,
3685 u64 AtomicU64
3686}
3687#[cfg(target_has_atomic_load_store = "128")]
3688atomic_int! {
3689 cfg(target_has_atomic = "128"),
3690 cfg(target_has_atomic_equal_alignment = "128"),
3691 unstable(feature = "integer_atomics", issue = "99069"),
3692 unstable(feature = "integer_atomics", issue = "99069"),
3693 unstable(feature = "integer_atomics", issue = "99069"),
3694 unstable(feature = "integer_atomics", issue = "99069"),
3695 unstable(feature = "integer_atomics", issue = "99069"),
3696 unstable(feature = "integer_atomics", issue = "99069"),
3697 rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3698 rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3699 rustc_diagnostic_item = "AtomicI128",
3700 "i128",
3701 "#![feature(integer_atomics)]\n\n",
3702 atomic_min, atomic_max,
3703 16,
3704 i128 AtomicI128
3705}
3706#[cfg(target_has_atomic_load_store = "128")]
3707atomic_int! {
3708 cfg(target_has_atomic = "128"),
3709 cfg(target_has_atomic_equal_alignment = "128"),
3710 unstable(feature = "integer_atomics", issue = "99069"),
3711 unstable(feature = "integer_atomics", issue = "99069"),
3712 unstable(feature = "integer_atomics", issue = "99069"),
3713 unstable(feature = "integer_atomics", issue = "99069"),
3714 unstable(feature = "integer_atomics", issue = "99069"),
3715 unstable(feature = "integer_atomics", issue = "99069"),
3716 rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3717 rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
3718 rustc_diagnostic_item = "AtomicU128",
3719 "u128",
3720 "#![feature(integer_atomics)]\n\n",
3721 atomic_umin, atomic_umax,
3722 16,
3723 u128 AtomicU128
3724}
3725
3726#[cfg(target_has_atomic_load_store = "ptr")]
3727macro_rules! atomic_int_ptr_sized {
3728 ( $($target_pointer_width:literal $align:literal)* ) => { $(
3729 #[cfg(target_pointer_width = $target_pointer_width)]
3730 atomic_int! {
3731 cfg(target_has_atomic = "ptr"),
3732 cfg(target_has_atomic_equal_alignment = "ptr"),
3733 stable(feature = "rust1", since = "1.0.0"),
3734 stable(feature = "extended_compare_and_swap", since = "1.10.0"),
3735 stable(feature = "atomic_debug", since = "1.3.0"),
3736 stable(feature = "atomic_access", since = "1.15.0"),
3737 stable(feature = "atomic_from", since = "1.23.0"),
3738 stable(feature = "atomic_nand", since = "1.27.0"),
3739 rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
3740 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3741 rustc_diagnostic_item = "AtomicIsize",
3742 "isize",
3743 "",
3744 atomic_min, atomic_max,
3745 $align,
3746 isize AtomicIsize
3747 }
3748 #[cfg(target_pointer_width = $target_pointer_width)]
3749 atomic_int! {
3750 cfg(target_has_atomic = "ptr"),
3751 cfg(target_has_atomic_equal_alignment = "ptr"),
3752 stable(feature = "rust1", since = "1.0.0"),
3753 stable(feature = "extended_compare_and_swap", since = "1.10.0"),
3754 stable(feature = "atomic_debug", since = "1.3.0"),
3755 stable(feature = "atomic_access", since = "1.15.0"),
3756 stable(feature = "atomic_from", since = "1.23.0"),
3757 stable(feature = "atomic_nand", since = "1.27.0"),
3758 rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
3759 rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"),
3760 rustc_diagnostic_item = "AtomicUsize",
3761 "usize",
3762 "",
3763 atomic_umin, atomic_umax,
3764 $align,
3765 usize AtomicUsize
3766 }
3767
3768 /// An [`AtomicIsize`] initialized to `0`.
3769 #[cfg(target_pointer_width = $target_pointer_width)]
3770 #[stable(feature = "rust1", since = "1.0.0")]
3771 #[deprecated(
3772 since = "1.34.0",
3773 note = "the `new` function is now preferred",
3774 suggestion = "AtomicIsize::new(0)",
3775 )]
3776 pub const ATOMIC_ISIZE_INIT: AtomicIsize = AtomicIsize::new(0);
3777
3778 /// An [`AtomicUsize`] initialized to `0`.
3779 #[cfg(target_pointer_width = $target_pointer_width)]
3780 #[stable(feature = "rust1", since = "1.0.0")]
3781 #[deprecated(
3782 since = "1.34.0",
3783 note = "the `new` function is now preferred",
3784 suggestion = "AtomicUsize::new(0)",
3785 )]
3786 pub const ATOMIC_USIZE_INIT: AtomicUsize = AtomicUsize::new(0);
3787 )* };
3788}
3789
3790#[cfg(target_has_atomic_load_store = "ptr")]
3791atomic_int_ptr_sized! {
3792 "16" 2
3793 "32" 4
3794 "64" 8
3795}
3796
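/// Returns the strongest failure ordering that is allowed for a given success ordering;
/// used by the deprecated `compare_and_swap` to pick a `compare_exchange` failure ordering.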
3797#[inline]
3798#[cfg(target_has_atomic)]
3799fn strongest_failure_ordering(order: Ordering) -> Ordering {
3800 match order {
3801 Release => Relaxed,
3802 Relaxed => Relaxed,
3803 SeqCst => SeqCst,
3804 Acquire => Acquire,
3805 AcqRel => Acquire,
3806 }
3807}
3808
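/// Atomically stores `val` into `*dst` with the given ordering (which must not be `Acquire` or `AcqRel`).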
3809#[inline]
3810#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3811unsafe fn atomic_store<T: Copy>(dst: *mut T, val: T, order: Ordering) {
3812 // SAFETY: the caller must uphold the safety contract for `atomic_store`.
3813 unsafe {
3814 match order {
3815 Relaxed => intrinsics::atomic_store::<T, { AO::Relaxed }>(dst, val),
3816 Release => intrinsics::atomic_store::<T, { AO::Release }>(dst, val),
3817 SeqCst => intrinsics::atomic_store::<T, { AO::SeqCst }>(dst, val),
3818 Acquire => panic!("there is no such thing as an acquire store"),
3819 AcqRel => panic!("there is no such thing as an acquire-release store"),
3820 }
3821 }
3822}
3823
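/// Atomically loads the value at `*dst` with the given ordering (which must not be `Release` or `AcqRel`).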
3824#[inline]
3825#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3826unsafe fn atomic_load<T: Copy>(dst: *const T, order: Ordering) -> T {
3827 // SAFETY: the caller must uphold the safety contract for `atomic_load`.
3828 unsafe {
3829 match order {
3830 Relaxed => intrinsics::atomic_load::<T, { AO::Relaxed }>(dst),
3831 Acquire => intrinsics::atomic_load::<T, { AO::Acquire }>(dst),
3832 SeqCst => intrinsics::atomic_load::<T, { AO::SeqCst }>(dst),
3833 Release => panic!("there is no such thing as a release load"),
3834 AcqRel => panic!("there is no such thing as an acquire-release load"),
3835 }
3836 }
3837}
3838
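/// Atomically stores `val` into `*dst`, returning the previous value.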
3839#[inline]
3840#[cfg(target_has_atomic)]
3841#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3842unsafe fn atomic_swap<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3843 // SAFETY: the caller must uphold the safety contract for `atomic_swap`.
3844 unsafe {
3845 match order {
3846 Relaxed => intrinsics::atomic_xchg::<T, { AO::Relaxed }>(dst, val),
3847 Acquire => intrinsics::atomic_xchg::<T, { AO::Acquire }>(dst, val),
3848 Release => intrinsics::atomic_xchg::<T, { AO::Release }>(dst, val),
3849 AcqRel => intrinsics::atomic_xchg::<T, { AO::AcqRel }>(dst, val),
3850 SeqCst => intrinsics::atomic_xchg::<T, { AO::SeqCst }>(dst, val),
3851 }
3852 }
3853}
3854
3855/// Returns the previous value (like __sync_fetch_and_add).
3856#[inline]
3857#[cfg(target_has_atomic)]
3858#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3859unsafe fn atomic_add<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3860 // SAFETY: the caller must uphold the safety contract for `atomic_add`.
3861 unsafe {
3862 match order {
3863 Relaxed => intrinsics::atomic_xadd::<T, { AO::Relaxed }>(dst, val),
3864 Acquire => intrinsics::atomic_xadd::<T, { AO::Acquire }>(dst, val),
3865 Release => intrinsics::atomic_xadd::<T, { AO::Release }>(dst, val),
3866 AcqRel => intrinsics::atomic_xadd::<T, { AO::AcqRel }>(dst, val),
3867 SeqCst => intrinsics::atomic_xadd::<T, { AO::SeqCst }>(dst, val),
3868 }
3869 }
3870}
3871
3872/// Returns the previous value (like __sync_fetch_and_sub).
3873#[inline]
3874#[cfg(target_has_atomic)]
3875#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3876unsafe fn atomic_sub<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3877 // SAFETY: the caller must uphold the safety contract for `atomic_sub`.
3878 unsafe {
3879 match order {
3880 Relaxed => intrinsics::atomic_xsub::<T, { AO::Relaxed }>(dst, val),
3881 Acquire => intrinsics::atomic_xsub::<T, { AO::Acquire }>(dst, val),
3882 Release => intrinsics::atomic_xsub::<T, { AO::Release }>(dst, val),
3883 AcqRel => intrinsics::atomic_xsub::<T, { AO::AcqRel }>(dst, val),
3884 SeqCst => intrinsics::atomic_xsub::<T, { AO::SeqCst }>(dst, val),
3885 }
3886 }
3887}
3888
3889/// Publicly exposed for stdarch; nobody else should use this.
3890#[inline]
3891#[cfg(target_has_atomic)]
3892#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3893#[unstable(feature = "core_intrinsics", issue = "none")]
3894#[doc(hidden)]
3895pub unsafe fn atomic_compare_exchange<T: Copy>(
3896 dst: *mut T,
3897 old: T,
3898 new: T,
3899 success: Ordering,
3900 failure: Ordering,
3901) -> Result<T, T> {
3902 // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange`.
3903 let (val, ok) = unsafe {
3904 match (success, failure) {
3905 (Relaxed, Relaxed) => {
3906 intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::Relaxed }>(dst, old, new)
3907 }
3908 (Relaxed, Acquire) => {
3909 intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::Acquire }>(dst, old, new)
3910 }
3911 (Relaxed, SeqCst) => {
3912 intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::SeqCst }>(dst, old, new)
3913 }
3914 (Acquire, Relaxed) => {
3915 intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::Relaxed }>(dst, old, new)
3916 }
3917 (Acquire, Acquire) => {
3918 intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::Acquire }>(dst, old, new)
3919 }
3920 (Acquire, SeqCst) => {
3921 intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::SeqCst }>(dst, old, new)
3922 }
3923 (Release, Relaxed) => {
3924 intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::Relaxed }>(dst, old, new)
3925 }
3926 (Release, Acquire) => {
3927 intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::Acquire }>(dst, old, new)
3928 }
3929 (Release, SeqCst) => {
3930 intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::SeqCst }>(dst, old, new)
3931 }
3932 (AcqRel, Relaxed) => {
3933 intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::Relaxed }>(dst, old, new)
3934 }
3935 (AcqRel, Acquire) => {
3936 intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::Acquire }>(dst, old, new)
3937 }
3938 (AcqRel, SeqCst) => {
3939 intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::SeqCst }>(dst, old, new)
3940 }
3941 (SeqCst, Relaxed) => {
3942 intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::Relaxed }>(dst, old, new)
3943 }
3944 (SeqCst, Acquire) => {
3945 intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::Acquire }>(dst, old, new)
3946 }
3947 (SeqCst, SeqCst) => {
3948 intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::SeqCst }>(dst, old, new)
3949 }
3950 (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
3951 (_, Release) => panic!("there is no such thing as a release failure ordering"),
3952 }
3953 };
3954 if ok { Ok(val) } else { Err(val) }
3955}
3956
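/// Like `atomic_compare_exchange`, but may fail spuriously even when the comparison succeeds.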
3957#[inline]
3958#[cfg(target_has_atomic)]
3959#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3960unsafe fn atomic_compare_exchange_weak<T: Copy>(
3961 dst: *mut T,
3962 old: T,
3963 new: T,
3964 success: Ordering,
3965 failure: Ordering,
3966) -> Result<T, T> {
3967 // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange_weak`.
3968 let (val, ok) = unsafe {
3969 match (success, failure) {
3970 (Relaxed, Relaxed) => {
3971 intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::Relaxed }>(dst, old, new)
3972 }
3973 (Relaxed, Acquire) => {
3974 intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::Acquire }>(dst, old, new)
3975 }
3976 (Relaxed, SeqCst) => {
3977 intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::SeqCst }>(dst, old, new)
3978 }
3979 (Acquire, Relaxed) => {
3980 intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::Relaxed }>(dst, old, new)
3981 }
3982 (Acquire, Acquire) => {
3983 intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::Acquire }>(dst, old, new)
3984 }
3985 (Acquire, SeqCst) => {
3986 intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::SeqCst }>(dst, old, new)
3987 }
3988 (Release, Relaxed) => {
3989 intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::Relaxed }>(dst, old, new)
3990 }
3991 (Release, Acquire) => {
3992 intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::Acquire }>(dst, old, new)
3993 }
3994 (Release, SeqCst) => {
3995 intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::SeqCst }>(dst, old, new)
3996 }
3997 (AcqRel, Relaxed) => {
3998 intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::Relaxed }>(dst, old, new)
3999 }
4000 (AcqRel, Acquire) => {
4001 intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::Acquire }>(dst, old, new)
4002 }
4003 (AcqRel, SeqCst) => {
4004 intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::SeqCst }>(dst, old, new)
4005 }
4006 (SeqCst, Relaxed) => {
4007 intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::Relaxed }>(dst, old, new)
4008 }
4009 (SeqCst, Acquire) => {
4010 intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::Acquire }>(dst, old, new)
4011 }
4012 (SeqCst, SeqCst) => {
4013 intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::SeqCst }>(dst, old, new)
4014 }
4015 (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
4016 (_, Release) => panic!("there is no such thing as a release failure ordering"),
4017 }
4018 };
4019 if ok { Ok(val) } else { Err(val) }
4020}
4021
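/// Atomically computes `*dst &= val`, returning the previous value.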
4022#[inline]
4023#[cfg(target_has_atomic)]
4024#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4025unsafe fn atomic_and<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4026 // SAFETY: the caller must uphold the safety contract for `atomic_and`
4027 unsafe {
4028 match order {
4029 Relaxed => intrinsics::atomic_and::<T, { AO::Relaxed }>(dst, val),
4030 Acquire => intrinsics::atomic_and::<T, { AO::Acquire }>(dst, val),
4031 Release => intrinsics::atomic_and::<T, { AO::Release }>(dst, val),
4032 AcqRel => intrinsics::atomic_and::<T, { AO::AcqRel }>(dst, val),
4033 SeqCst => intrinsics::atomic_and::<T, { AO::SeqCst }>(dst, val),
4034 }
4035 }
4036}
4037
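/// Atomically computes `*dst = !(*dst & val)`, returning the previous value.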
4038#[inline]
4039#[cfg(target_has_atomic)]
4040#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4041unsafe fn atomic_nand<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4042 // SAFETY: the caller must uphold the safety contract for `atomic_nand`
4043 unsafe {
4044 match order {
4045 Relaxed => intrinsics::atomic_nand::<T, { AO::Relaxed }>(dst, val),
4046 Acquire => intrinsics::atomic_nand::<T, { AO::Acquire }>(dst, val),
4047 Release => intrinsics::atomic_nand::<T, { AO::Release }>(dst, val),
4048 AcqRel => intrinsics::atomic_nand::<T, { AO::AcqRel }>(dst, val),
4049 SeqCst => intrinsics::atomic_nand::<T, { AO::SeqCst }>(dst, val),
4050 }
4051 }
4052}
4053
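/// Atomically computes `*dst |= val`, returning the previous value.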
4054#[inline]
4055#[cfg(target_has_atomic)]
4056#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4057unsafe fn atomic_or<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4058 // SAFETY: the caller must uphold the safety contract for `atomic_or`
4059 unsafe {
4060 match order {
4061 SeqCst => intrinsics::atomic_or::<T, { AO::SeqCst }>(dst, val),
4062 Acquire => intrinsics::atomic_or::<T, { AO::Acquire }>(dst, val),
4063 Release => intrinsics::atomic_or::<T, { AO::Release }>(dst, val),
4064 AcqRel => intrinsics::atomic_or::<T, { AO::AcqRel }>(dst, val),
4065 Relaxed => intrinsics::atomic_or::<T, { AO::Relaxed }>(dst, val),
4066 }
4067 }
4068}
4069
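/// Updates `*dst` to the bitwise XOR of `val` and the old value, returning the previous value.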
4070#[inline]
4071#[cfg(target_has_atomic)]
4072#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4073unsafe fn atomic_xor<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4074 // SAFETY: the caller must uphold the safety contract for `atomic_xor`
4075 unsafe {
4076 match order {
4077 SeqCst => intrinsics::atomic_xor::<T, { AO::SeqCst }>(dst, val),
4078 Acquire => intrinsics::atomic_xor::<T, { AO::Acquire }>(dst, val),
4079 Release => intrinsics::atomic_xor::<T, { AO::Release }>(dst, val),
4080 AcqRel => intrinsics::atomic_xor::<T, { AO::AcqRel }>(dst, val),
4081 Relaxed => intrinsics::atomic_xor::<T, { AO::Relaxed }>(dst, val),
4082 }
4083 }
4084}
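// Illustrative sketch (hypothetical code): the bitwise helpers above back the
// public `fetch_and`/`fetch_nand`/`fetch_or`/`fetch_xor` methods, each of which
// returns the previous value:
//
//     use std::sync::atomic::{AtomicU8, Ordering};
//
//     let v = AtomicU8::new(0b1100);
//     assert_eq!(v.fetch_and(0b1010, Ordering::SeqCst), 0b1100);  // value is now 0b1000
//     assert_eq!(v.fetch_or(0b0001, Ordering::SeqCst), 0b1000);   // value is now 0b1001
//     assert_eq!(v.fetch_xor(0b1111, Ordering::SeqCst), 0b1001);  // value is now 0b0110
//     assert_eq!(v.fetch_nand(0b0100, Ordering::SeqCst), 0b0110); // value is now !(0b0110 & 0b0100) == 0b1111_1011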
4085
4086/// Updates `*dst` to the maximum of `val` and the old value (signed comparison), returning the previous value.
4087#[inline]
4088#[cfg(target_has_atomic)]
4089#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4090unsafe fn atomic_max<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4091 // SAFETY: the caller must uphold the safety contract for `atomic_max`
4092 unsafe {
4093 match order {
4094 Relaxed => intrinsics::atomic_max::<T, { AO::Relaxed }>(dst, val),
4095 Acquire => intrinsics::atomic_max::<T, { AO::Acquire }>(dst, val),
4096 Release => intrinsics::atomic_max::<T, { AO::Release }>(dst, val),
4097 AcqRel => intrinsics::atomic_max::<T, { AO::AcqRel }>(dst, val),
4098 SeqCst => intrinsics::atomic_max::<T, { AO::SeqCst }>(dst, val),
4099 }
4100 }
4101}
4102
4103/// Updates `*dst` to the minimum of `val` and the old value (signed comparison), returning the previous value.
4104#[inline]
4105#[cfg(target_has_atomic)]
4106#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4107unsafe fn atomic_min<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4108 // SAFETY: the caller must uphold the safety contract for `atomic_min`
4109 unsafe {
4110 match order {
4111 Relaxed => intrinsics::atomic_min::<T, { AO::Relaxed }>(dst, val),
4112 Acquire => intrinsics::atomic_min::<T, { AO::Acquire }>(dst, val),
4113 Release => intrinsics::atomic_min::<T, { AO::Release }>(dst, val),
4114 AcqRel => intrinsics::atomic_min::<T, { AO::AcqRel }>(dst, val),
4115 SeqCst => intrinsics::atomic_min::<T, { AO::SeqCst }>(dst, val),
4116 }
4117 }
4118}
4119
4120/// Updates `*dst` to the maximum of `val` and the old value (unsigned comparison), returning the previous value.
4121#[inline]
4122#[cfg(target_has_atomic)]
4123#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4124unsafe fn atomic_umax<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4125 // SAFETY: the caller must uphold the safety contract for `atomic_umax`
4126 unsafe {
4127 match order {
4128 Relaxed => intrinsics::atomic_umax::<T, { AO::Relaxed }>(dst, val),
4129 Acquire => intrinsics::atomic_umax::<T, { AO::Acquire }>(dst, val),
4130 Release => intrinsics::atomic_umax::<T, { AO::Release }>(dst, val),
4131 AcqRel => intrinsics::atomic_umax::<T, { AO::AcqRel }>(dst, val),
4132 SeqCst => intrinsics::atomic_umax::<T, { AO::SeqCst }>(dst, val),
4133 }
4134 }
4135}
4136
4137/// Updates `*dst` to the minimum of `val` and the old value (unsigned comparison), returning the previous value.
4138#[inline]
4139#[cfg(target_has_atomic)]
4140#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4141unsafe fn atomic_umin<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
4142 // SAFETY: the caller must uphold the safety contract for `atomic_umin`
4143 unsafe {
4144 match order {
4145 Relaxed => intrinsics::atomic_umin::<T, { AO::Relaxed }>(dst, val),
4146 Acquire => intrinsics::atomic_umin::<T, { AO::Acquire }>(dst, val),
4147 Release => intrinsics::atomic_umin::<T, { AO::Release }>(dst, val),
4148 AcqRel => intrinsics::atomic_umin::<T, { AO::AcqRel }>(dst, val),
4149 SeqCst => intrinsics::atomic_umin::<T, { AO::SeqCst }>(dst, val),
4150 }
4151 }
4152}
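// Illustrative sketch (hypothetical code): the four helpers above back
// `fetch_max`/`fetch_min` on the signed and unsigned integer atomics; the
// signed/unsigned split matters once the high bit is set:
//
//     use std::sync::atomic::{AtomicI8, AtomicU8, Ordering};
//
//     let s = AtomicI8::new(-1);
//     // Signed comparison: max(-1, 3) == 3, so the value is updated.
//     assert_eq!(s.fetch_max(3, Ordering::SeqCst), -1);
//     assert_eq!(s.load(Ordering::SeqCst), 3);
//
//     let u = AtomicU8::new(0xFF); // same bit pattern as -1_i8
//     // Unsigned comparison: max(255, 3) == 255, so the value is unchanged.
//     assert_eq!(u.fetch_max(3, Ordering::SeqCst), 0xFF);
//     assert_eq!(u.load(Ordering::SeqCst), 0xFF);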
4153
4154/// An atomic fence.
4155///
4156/// Fences create synchronization between themselves and atomic operations or fences in other
4157/// threads. To achieve this, a fence prevents the compiler and CPU from reordering certain types of
4158/// memory operations around it.
4159///
4160/// A fence 'A' which has (at least) [`Release`] ordering semantics synchronizes
4161/// with a fence 'B' with (at least) [`Acquire`] semantics if and only if there
4162/// exist operations X and Y, both operating on some atomic object 'm', such
4163/// that A is sequenced before X, Y is sequenced before B, and Y observes
4164/// the change to m. This provides a happens-before dependence between A and B.
4165///
4166/// ```text
4167/// Thread 1 Thread 2
4168///
4169/// fence(Release); A --------------
4170/// m.store(3, Relaxed); X --------- |
4171/// | |
4172/// | |
4173/// -------------> Y if m.load(Relaxed) == 3 {
4174/// |-------> B fence(Acquire);
4175/// ...
4176/// }
4177/// ```
4178///
4179/// Note that in the example above, it is crucial that the accesses to `m` are atomic. Fences cannot
4180/// be used to establish synchronization among non-atomic accesses in different threads. However,
4181/// thanks to the happens-before relationship between A and B, any non-atomic accesses that
4182/// happen-before A are now also properly synchronized with any non-atomic accesses that
4183/// happen-after B.
4184///
4185/// Atomic operations with [`Release`] or [`Acquire`] semantics can also synchronize
4186/// with a fence.
4187///
4188/// A fence which has [`SeqCst`] ordering, in addition to having both [`Acquire`]
4189/// and [`Release`] semantics, participates in the global program order of the
4190/// other [`SeqCst`] operations and/or fences.
4191///
4192/// Accepts [`Acquire`], [`Release`], [`AcqRel`] and [`SeqCst`] orderings.
4193///
4194/// # Panics
4195///
4196/// Panics if `order` is [`Relaxed`].
4197///
4198/// # Examples
4199///
4200/// ```
4201/// use std::sync::atomic::AtomicBool;
4202/// use std::sync::atomic::fence;
4203/// use std::sync::atomic::Ordering;
4204///
4205/// // A mutual exclusion primitive based on a spinlock.
4206/// pub struct Mutex {
4207/// flag: AtomicBool,
4208/// }
4209///
4210/// impl Mutex {
4211/// pub fn new() -> Mutex {
4212/// Mutex {
4213/// flag: AtomicBool::new(false),
4214/// }
4215/// }
4216///
4217/// pub fn lock(&self) {
4218/// // Wait until the old value is `false`.
4219/// while self
4220/// .flag
4221/// .compare_exchange_weak(false, true, Ordering::Relaxed, Ordering::Relaxed)
4222/// .is_err()
4223/// {}
4224/// // This fence synchronizes-with the store in `unlock`.
4225/// fence(Ordering::Acquire);
4226/// }
4227///
4228/// pub fn unlock(&self) {
4229/// self.flag.store(false, Ordering::Release);
4230/// }
4231/// }
4232/// ```
4233#[inline]
4234#[stable(feature = "rust1", since = "1.0.0")]
4235#[rustc_diagnostic_item = "fence"]
4236#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4237pub fn fence(order: Ordering) {
4238 // SAFETY: using an atomic fence is safe.
4239 unsafe {
4240 match order {
4241 Acquire => intrinsics::atomic_fence::<{ AO::Acquire }>(),
4242 Release => intrinsics::atomic_fence::<{ AO::Release }>(),
4243 AcqRel => intrinsics::atomic_fence::<{ AO::AcqRel }>(),
4244 SeqCst => intrinsics::atomic_fence::<{ AO::SeqCst }>(),
4245 Relaxed => panic!("there is no such thing as a relaxed fence"),
4246 }
4247 }
4248}
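// Illustrative sketch (hypothetical code, mirroring the diagram in the docs above):
// a release fence sequenced before a relaxed store synchronizes-with an acquire
// fence sequenced after a relaxed load that observes that store, which is what
// makes the final assertion sound:
//
//     use std::sync::atomic::{fence, AtomicBool, AtomicUsize, Ordering};
//     use std::thread;
//
//     static DATA: AtomicUsize = AtomicUsize::new(0);
//     static READY: AtomicBool = AtomicBool::new(false);
//
//     let t = thread::spawn(|| {
//         DATA.store(42, Ordering::Relaxed);
//         fence(Ordering::Release);             // fence A
//         READY.store(true, Ordering::Relaxed); // store X
//     });
//     while !READY.load(Ordering::Relaxed) {}   // load Y (eventually observes X)
//     fence(Ordering::Acquire);                 // fence B: A synchronizes-with B
//     assert_eq!(DATA.load(Ordering::Relaxed), 42);
//     t.join().unwrap();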
4249
4250/// A "compiler-only" atomic fence.
4251///
4252/// Like [`fence`], this function establishes synchronization with other atomic operations and
4253/// fences. However, unlike [`fence`], `compiler_fence` only establishes synchronization with
4254/// operations *in the same thread*. This may at first sound rather useless, since code within a
4255/// thread is typically already totally ordered and does not need any further synchronization.
4256/// However, there are cases where code can run on the same thread without being ordered:
4257/// - The most common case is that of a *signal handler*: a signal handler runs in the same thread
4258/// as the code it interrupted, but it is not ordered with respect to that code. `compiler_fence`
4259/// can be used to establish synchronization between a thread and its signal handler, the same way
4260/// that `fence` can be used to establish synchronization across threads.
4261/// - Similar situations can arise in embedded programming with interrupt handlers, or in custom
4262/// implementations of preemptive green threads. In general, `compiler_fence` can establish
4263/// synchronization with code that is guaranteed to run on the same hardware CPU.
4264///
4265/// See [`fence`] for how a fence can be used to achieve synchronization. Note that just like
4266/// [`fence`], synchronization still requires atomic operations to be used in both threads -- it is
4267/// not possible to perform synchronization entirely with fences and non-atomic operations.
4268///
4269/// `compiler_fence` does not emit any machine code, but restricts the kinds of memory re-ordering
4270/// the compiler is allowed to do. `compiler_fence` corresponds to [`atomic_signal_fence`] in C and
4271/// C++.
4272///
4273/// [`atomic_signal_fence`]: https://en.cppreference.com/w/cpp/atomic/atomic_signal_fence
4274///
4275/// # Panics
4276///
4277/// Panics if `order` is [`Relaxed`].
4278///
4279/// # Examples
4280///
4281/// Without the two `compiler_fence` calls, the read of `IMPORTANT_VARIABLE` in `signal_handler`
4282/// is *undefined behavior* due to a data race, despite everything happening in a single thread.
4283/// This is because the signal handler is considered to run concurrently with its associated
4284/// thread, and explicit synchronization is required to pass data between a thread and its
4285/// signal handler. The code below uses two `compiler_fence` calls to establish the usual
4286/// release-acquire synchronization pattern (see [`fence`] for an image).
4287///
4288/// ```
4289/// use std::sync::atomic::AtomicBool;
4290/// use std::sync::atomic::Ordering;
4291/// use std::sync::atomic::compiler_fence;
4292///
4293/// static mut IMPORTANT_VARIABLE: usize = 0;
4294/// static IS_READY: AtomicBool = AtomicBool::new(false);
4295///
4296/// fn main() {
4297/// unsafe { IMPORTANT_VARIABLE = 42 };
4298/// // Marks earlier writes as being released with future relaxed stores.
4299/// compiler_fence(Ordering::Release);
4300/// IS_READY.store(true, Ordering::Relaxed);
4301/// }
4302///
4303/// fn signal_handler() {
4304/// if IS_READY.load(Ordering::Relaxed) {
4305/// // Acquires writes that were released with relaxed stores that we read from.
4306/// compiler_fence(Ordering::Acquire);
4307/// assert_eq!(unsafe { IMPORTANT_VARIABLE }, 42);
4308/// }
4309/// }
4310/// ```
4311#[inline]
4312#[stable(feature = "compiler_fences", since = "1.21.0")]
4313#[rustc_diagnostic_item = "compiler_fence"]
4314#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4315pub fn compiler_fence(order: Ordering) {
4316 // SAFETY: using an atomic fence is safe.
4317 unsafe {
4318 match order {
4319 Acquire => intrinsics::atomic_singlethreadfence::<{ AO::Acquire }>(),
4320 Release => intrinsics::atomic_singlethreadfence::<{ AO::Release }>(),
4321 AcqRel => intrinsics::atomic_singlethreadfence::<{ AO::AcqRel }>(),
4322 SeqCst => intrinsics::atomic_singlethreadfence::<{ AO::SeqCst }>(),
4323 Relaxed => panic!("there is no such thing as a relaxed fence"),
4324 }
4325 }
4326}
4327
4328#[cfg(target_has_atomic_load_store = "8")]
4329#[stable(feature = "atomic_debug", since = "1.3.0")]
4330impl fmt::Debug for AtomicBool {
4331 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
4332 fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
4333 }
4334}
4335
4336#[cfg(target_has_atomic_load_store = "ptr")]
4337#[stable(feature = "atomic_debug", since = "1.3.0")]
4338impl<T> fmt::Debug for AtomicPtr<T> {
4339 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
4340 fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
4341 }
4342}
4343
4344#[cfg(target_has_atomic_load_store = "ptr")]
4345#[stable(feature = "atomic_pointer", since = "1.24.0")]
4346impl<T> fmt::Pointer for AtomicPtr<T> {
4347 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
4348 fmt::Pointer::fmt(&self.load(Ordering::Relaxed), f)
4349 }
4350}
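// Illustrative sketch (hypothetical code): these impls let atomics be formatted
// directly; formatting performs a `Relaxed` load of the current value:
//
//     use std::sync::atomic::{AtomicBool, AtomicPtr, Ordering};
//
//     let flag = AtomicBool::new(true);
//     assert_eq!(format!("{flag:?}"), "true");
//
//     let ptr: AtomicPtr<u8> = AtomicPtr::new(std::ptr::null_mut());
//     // `Debug` and `Pointer` both print the loaded pointer value.
//     assert_eq!(format!("{ptr:?}"), "0x0");
//     assert_eq!(format!("{ptr:p}"), "0x0");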
4351
4352/// Signals the processor that it is inside a busy-wait spin-loop ("spin lock").
4353///
4354/// This function is deprecated in favor of [`hint::spin_loop`].
4355///
4356/// [`hint::spin_loop`]: crate::hint::spin_loop
4357#[inline]
4358#[stable(feature = "spin_loop_hint", since = "1.24.0")]
4359#[deprecated(since = "1.51.0", note = "use hint::spin_loop instead")]
4360pub fn spin_loop_hint() {
4361 spin_loop()
4362}