C++ and Rust memory model skill for concurrent programming. Use when understanding memory ordering, writing lock-free data structures, using std::atomic or Rust atomics, diagnosing data races, or selecting the correct memory order for atomic operations. Activates on queries about memory ordering, acquire-release, seq_cst, relaxed atomics, happens-before, memory barriers, std::atomic, or Rust atomic ordering.
npx skill4agent add mohitmishra786/low-level-dev-skills memory-model

Ordering strength (weakest to strongest):
Relaxed < Release/Acquire < AcqRel < SeqCst
Stronger ordering = more synchronization = easier to get right, but slower
Weaker ordering = fewer barriers = faster, but needs careful analysis

| Order | C++ | Rust | What it means |
|---|---|---|---|
| Relaxed | std::memory_order_relaxed | Ordering::Relaxed | No ordering guarantee; just atomicity |
| Consume | std::memory_order_consume | (use Acquire) | Data-dependency ordering; compilers promote it to acquire in practice |
| Acquire | std::memory_order_acquire | Ordering::Acquire | This load sees all writes before the matching release |
| Release | std::memory_order_release | Ordering::Release | All writes before this store are visible to the matching acquire |
| AcqRel | std::memory_order_acq_rel | Ordering::AcqRel | Both acquire and release on RMW ops |
| SeqCst | std::memory_order_seq_cst | Ordering::SeqCst | Single total order across all seq_cst operations |
#include <atomic>
#include <cassert>
#include <thread>

int data = 0;                    // plain, non-atomic payload
std::atomic<bool> ready{false};

// Producer thread
void producer() {
    data = 42;                                    // (1) write data
    ready.store(true, std::memory_order_release); // (2) signal
}

// Consumer thread
void consumer() {
    while (!ready.load(std::memory_order_acquire)) // (3) wait
        ;
    assert(data == 42); // (4) guaranteed to see (1)
}

Use case?
├── Counter (just needs atomicity, order irrelevant) → Relaxed
├── Reference counting (decrement + final check) → AcqRel (dec), Acquire (load 0 check)
├── Publish data from one thread to another → Release (store), Acquire (load)
├── Mutual exclusion / mutex implementation → AcqRel / SeqCst
├── Lock-free queue multiple producers/consumers → SeqCst (safest to start)
└── Sequence number check (simple flag) → Release + Acquire

// Pattern 1: Spinlock
class Spinlock {
std::atomic_flag flag = ATOMIC_FLAG_INIT;
public:
void lock() {
while (flag.test_and_set(std::memory_order_acquire))
; // spin
}
void unlock() {
flag.clear(std::memory_order_release);
}
};
// Pattern 2: Reference counting
class RefCounted {
std::atomic<int> refcount{1};
public:
void addref() {
refcount.fetch_add(1, std::memory_order_relaxed); // only need atomicity
}
void release() {
if (refcount.fetch_sub(1, std::memory_order_acq_rel) == 1) {
// AcqRel ensures we see all writes from other releasers
delete this;
}
}
};
// Pattern 3: One-time initialisation
class LazyInit {
std::atomic<void*> ptr{nullptr};
std::mutex mtx;
public:
void* get() {
void* p = ptr.load(std::memory_order_acquire);
if (p == nullptr) {
std::lock_guard lock(mtx);
p = ptr.load(std::memory_order_relaxed);
if (p == nullptr) {
p = create(); // create() is the user-supplied initialiser
ptr.store(p, std::memory_order_release);
}
}
return p;
}
};

use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::Arc;
// Simple counter
let counter = Arc::new(AtomicUsize::new(0));
// Increment
counter.fetch_add(1, Ordering::Relaxed);
// Read
let val = counter.load(Ordering::Relaxed);
// Publish/subscribe pattern
static READY: AtomicBool = AtomicBool::new(false);
static mut DATA: i32 = 0; // plain data, published through READY
// Publisher thread
unsafe { DATA = 42; } // Write data
READY.store(true, Ordering::Release); // Signal
// Subscriber thread
while !READY.load(Ordering::Acquire) {}
let d = unsafe { DATA }; // Safe: guaranteed to see publisher's write

// C++ fences — standalone barriers, not tied to a particular atomic object
std::atomic_thread_fence(std::memory_order_acquire); // Acquire fence
std::atomic_thread_fence(std::memory_order_release); // Release fence
// Typical use: multiple atomic writes then one fence
relaxed_atomic_a.store(1, std::memory_order_relaxed);
relaxed_atomic_b.store(2, std::memory_order_relaxed);
std::atomic_thread_fence(std::memory_order_release); // barrier for all above
sentinel.store(true, std::memory_order_relaxed);

| Mistake | Fix |
|---|---|
| Using Relaxed for publish/subscribe | Use Release on store, Acquire on load |
| Using SeqCst everywhere | Profile first; use weakest correct ordering |
| Mixing atomic and non-atomic access to shared data | All shared mutable data needs atomics or a mutex |
| Assuming sequential consistency without SeqCst | Compilers and CPUs may reorder weaker operations; only seq_cst operations share a single total order |
Related skills: skills/runtimes/sanitizers, skills/rust/rust-sanitizers-miri, skills/low-level-programming/assembly-x86, skills/debuggers/gdb