Shared Memory Architecture Made Easy
2.1 Uniform Memory Access (UMA) - "Same Speed for All"
👉 Imagine a library where everyone takes the same time to reach the bookshelves. 📚
Think of it like: A single big fridge in the kitchen where everyone gets food at the same speed. 🏠
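In a UMA-style shared-memory system, every process works on the same physical memory. A minimal sketch using Python's `multiprocessing.shared_memory` (the worker function, byte values, and process count are illustrative): four workers all write into one shared block, just as everyone reaches the same single fridge.

```python
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory

def worker(name, index, value):
    # Attach to the one shared block and write into it.
    shm = SharedMemory(name=name)
    shm.buf[index] = value
    shm.close()

def demo():
    # One "fridge": a single buffer every process reaches the same way.
    shm = SharedMemory(create=True, size=4)
    try:
        procs = [Process(target=worker, args=(shm.name, i, (i + 1) * 10))
                 for i in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        return list(shm.buf)
    finally:
        shm.close()
        shm.unlink()

if __name__ == "__main__":
    print(demo())  # [10, 20, 30, 40]
```

No messages were sent: each worker simply wrote straight into the common buffer, which is what makes shared-memory programming feel "automatic."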
2.2 Non-Uniform Memory Access (NUMA) - "Faster for Nearby, Slower for Far"
👉 Imagine a large supermarket 🛒—if you’re closer to an aisle, you get items faster than
someone far away.
Definition: Some processors access memory faster than others, depending on location.
How it Works: Memory is divided into multiple sections, and each processor has its
own closest section.
Best For: Large-scale, high-performance computing (HPC) systems.
Example: Supercomputers, data centers.
Think of it like: A house with multiple fridges—people grab food faster from the fridge closest
to them! 🏠🏠
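On a NUMA machine, performance depends on running work close to the memory it uses. A hedged, Linux-only sketch: `os.sched_setaffinity` pins a process to chosen CPUs; real NUMA programs pair this with memory binding through tools like `numactl` or `libnuma`, which are not shown here.

```python
import os

def bind_to_cpus(cpus):
    # Pin the calling process to the given CPU set (Linux-only API).
    # NUMA-aware programs pair CPU pinning like this with memory
    # binding (e.g. via numactl or libnuma) so the process allocates
    # from the memory node closest to the CPUs it runs on.
    os.sched_setaffinity(0, set(cpus))
    return os.sched_getaffinity(0)

if __name__ == "__main__":
    print(bind_to_cpus({0}))  # the process now runs only on CPU 0
```

In the fridge analogy: pinning decides which room you stand in, and memory binding makes sure your food is stocked in the fridge in that same room.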
2.3 Cache-Only Memory Architecture (COMA) - "No Fixed Home, Data Moves Around"
👉 Imagine a fast-food restaurant where every table already has some food, so you don’t need to
go to the kitchen. 🍔
Think of it like: A restaurant where waiters keep food already on the table instead of bringing
it from the kitchen every time! 🍽️
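The COMA idea of data migrating to whoever uses it can be mimicked with a toy model (the `Node` class and `read` helper below are purely illustrative, not a real coherence protocol): on a cache miss, a copy of the data moves into the requesting node's cache.

```python
class Node:
    # Toy node: its "attraction memory" is just a local cache dict.
    def __init__(self, name):
        self.name = name
        self.cache = {}

def read(key, node, all_nodes):
    # COMA sketch: data has no fixed home; a copy migrates into the
    # cache of whichever node asks for it.
    if key in node.cache:                 # hit: food already on your table
        return node.cache[key]
    for other in all_nodes:               # miss: find whoever holds a copy
        if key in other.cache:
            node.cache[key] = other.cache[key]  # migrate the data here
            return node.cache[key]
    raise KeyError(key)

a, b = Node("A"), Node("B")
b.cache["recipe"] = "burger"
print(read("recipe", a, [a, b]))   # burger — fetched from B
print("recipe" in a.cache)         # True — next read is local to A
```

After the first read, the data lives at node A too, so repeat accesses are served "from the table" without going back to node B.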
Key Takeaways 📝
1. UMA – Same memory speed for all (like a single fridge for the whole family).
2. NUMA – Faster for nearby processors (like multiple fridges in different rooms).
3. COMA – No fixed memory; data moves around (like food already at your table).
Advantages ✅
1. High Performance (for small systems) – Works great when there are fewer processors and
fast memory access is needed.
Disadvantages ⚠️
1. Scalability Issues – Too many processors slow down memory access (like traffic on a
busy road). 🚗
2. Synchronization Overhead – Preventing data conflicts adds complexity (like taking
turns writing on a shared board). 📝
3. Hardware Complexity – Needs advanced tech to manage memory properly (like a
smart toll system). 🚦
4. Non-Uniform Performance – Some processors work faster than others due to memory
placement (like students sitting close to a teacher). 🎓
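The synchronization overhead above can be seen directly in code. A sketch (the counter size and process count are arbitrary): four processes share one counter and must take turns through a lock, so every increment pays the cost of "waiting for the board."

```python
from multiprocessing import Process, Value, Lock

def add(counter, lock, n):
    for _ in range(n):
        with lock:               # take turns: only one writer at a time
            counter.value += 1

def demo():
    counter = Value("i", 0)      # one shared integer: the "shared board"
    lock = Lock()
    procs = [Process(target=add, args=(counter, lock, 1000)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return counter.value

if __name__ == "__main__":
    print(demo())  # 4000: correct, but every increment waited on the lock
```

Without the lock, concurrent read-modify-write updates could interleave and lose increments; the lock buys correctness at the price of serialized access.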
| Feature 🏷️ | Shared Memory 🖥️ | Distributed Memory 🌐 |
| --- | --- | --- |
| Memory Access 🧠 | All processors share the same memory. | Each processor has its own separate memory. |
| Speed 🚀 | Faster for small systems (low latency). | Can be faster for large systems (scales better). |
| Programming Complexity 🖥️ | Easier (data sharing is automatic). | Harder (requires sending messages). |
| Failure Handling ⚠️ | If memory fails, the whole system may fail. | More fault-tolerant (if one node fails, others keep running). |
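The programming-complexity contrast above can be sketched in Python: in distributed-memory style, nothing is shared, so data must be explicitly sent to a worker and the answer explicitly received back (the doubling worker here is just an illustration).

```python
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # Distributed-memory style: no shared variables, only messages.
    value = inbox.get()
    outbox.put(value * 2)

def demo():
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put(21)           # explicitly send data to the worker...
    result = outbox.get()   # ...and explicitly receive the answer back
    p.join()
    return result

if __name__ == "__main__":
    print(demo())  # 42
```

Compare this with the shared-memory examples earlier: there, processes simply wrote into common memory; here, every exchange is a deliberate send/receive pair, which is exactly the extra work the table calls "harder."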