Multicore02 1 Updated

The document provides an introduction to multicore computing, detailing the structure and function of multicore processors, including examples from Intel and Apple. It discusses parallel computing, the distinction between parallelism and concurrency, and various programming techniques and environments for parallel processing. Additionally, it covers concepts like cluster and grid computing, as well as cloud computing, emphasizing the importance of writing efficient parallel programs.

Lecture 2-1:

Introduction to Multicore Computing
Bong-Soo Sohn
Associate Professor
School of Computer Science and Engineering
Chung-Ang University
Multicore Processor
 A single computing component with two or more independent cores
 Core (CPU): computing unit that reads/executes program instructions
 Ex) dual-core, quad-core, hexa-core, octa-core, …
 Cores may or may not share a cache
 Symmetric or asymmetric core designs
 Ex) Intel i9-13900K (link)

Intel Quad-core Processors (Sandy Bridge)

Intel Multicore CPU (Recent) [source (2024.3): enuri.com]
 Performance Cores vs. Efficiency Cores

Intel i9-13900K
 L3 cache (shared)
Apple CPUs: M1 (2021), M3 (2023)
 CPU clock rate: 3.2GHz (M1), 4.05GHz (M3)
Apple M4 CPU (2024)
Apple Unified Memory
 Unified memory
 CPU, GPU, NPU share memory on a single chip
 +: efficient
 -: cannot be changed/replaced after production
Multicore Processor
 Multiple cores run multiple instructions at the same time (concurrently)
 Increases overall program speed (performance)
 Performance gained by a multicore processor is strongly dependent on the software algorithms and implementation
 Desktop PCs, mobile devices, servers


Manycore Processor (GPU)
 Multicore architectures with an especially high number of cores (thousands)
 Ex) NVIDIA GeForce RTX 3080 Ti (2021.06)
Manycore Processor (GPU)
 CUDA
 Compute Unified Device Architecture
 Parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce
 GPGPU (General-Purpose computing on Graphics Processing Units)

 OpenCL
 Open standard parallel programming platform
Parallel Applications
 Image and video processing
 Encoding, editing, filtering
 3D graphics (rendering, animation)
 3D gaming
 Simulation
 e.g. protein folding, climate modeling
 Machine learning, deep learning
Thread/Process
 Both
 Independent sequence of execution
 Process
 Runs in a separate memory space
 Thread
 Runs in a shared memory space within a process
 One process may have multiple threads
[Figure: a process with two threads of execution on a single processor]
 Multithreaded Program
 A program running with multiple threads that are executed simultaneously
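The shared-memory property of threads can be illustrated with a short sketch (a Python illustration, not from the slides; the slides reference pthreads, but the same idea applies):

```python
import threading

shared = []  # one list living in the process's single address space

def worker(i):
    # every thread sees and mutates the same list object,
    # because threads of one process share its memory
    shared.append(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))  # [0, 1, 2, 3]
```

Separate processes, by contrast, would each get their own copy of `shared`, and updates in one process would be invisible to the others unless explicitly communicated.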
What is Parallel Computing?
 Parallel computing
 Using multiple processors in parallel to solve problems more quickly than with a single processor
 Examples of parallel machines:
 A cluster computer that contains multiple PCs combined together with a high-speed network
 A shared memory multiprocessor, built by connecting multiple processors to a single memory system
 A Chip Multi-Processor (CMP) contains multiple processors (called cores) on a single chip
 Concurrent execution comes from the desire for performance; unlike the inherent concurrency in a multi-user distributed system
Parallelism vs Concurrency
 Parallel Programming
 Using additional computational resources to produce an answer faster
 Problem: how to use the extra resources effectively
 Example
 Summing up all the numbers in an array with n processors
Parallelism vs Concurrency
 Concurrent Programming
 Correctly and efficiently controlling access by multiple threads to shared resources
 Problem: preventing a bad interleaving of operations from different threads
 Example
 Implementation of a dictionary with a hashtable
 Operations insert, update, lookup, delete occur simultaneously (concurrently)
 Multiple threads access the same hashtable
 Web visit counter
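The web visit counter is the classic case of a bad interleaving: two threads read the same count, both increment their copy, and one update is lost. A lock makes the read-modify-write step atomic. A minimal Python sketch (an illustration added here, not from the slides):

```python
import threading

class VisitCounter:
    """Visit counter safely shared by many threads."""

    def __init__(self):
        self._count = 0
        self._lock = threading.Lock()

    def visit(self):
        # the lock prevents a bad interleaving: without it, two threads
        # could both read the old count and one increment would be lost
        with self._lock:
            self._count += 1

    def value(self):
        with self._lock:
            return self._count

counter = VisitCounter()

def record_visits():
    for _ in range(1000):
        counter.visit()

threads = [threading.Thread(target=record_visits) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter.value())  # 8000: no increments lost
```

The same principle scales up to the hashtable example: each insert/update/lookup/delete must appear atomic to the other threads, whether via one lock for the whole table or finer-grained per-bucket locks.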
Parallelism vs Concurrency
 Often used interchangeably
 In practice, the distinction between parallelism and concurrency is not absolute
 Many multithreaded programs have aspects of both
Parallel Programming Techniques
 Shared Memory
 OpenMP, pthreads

 Distributed Memory
 MPI

 Distributed/Shared Memory
 Hybrid (MPI+OpenMP)

 GPU Parallel Programming
 CUDA programming (NVIDIA)
 OpenCL
Parallel Processing Systems
 Small-Scale Multicore Environment
 Notebook, workstation, server
 OS supports multicore
 POSIX threads (pthreads), Win32 threads
 GPGPU-based supercomputer
 Development of CUDA/OpenCL/GPGPU

 Large-Scale Multicore Environment
 Supercomputer: more than 10,000 cores
 Clusters
 Servers
 Grid Computing
Parallel Computing vs. Distributed Computing

 Parallel Computing
 All processors may have access to a shared memory to exchange information between processors
 More tightly coupled to multi-threading

 Distributed Computing
 Multiple computers communicate through a network
 Each processor has its own private memory (distributed memory)
 Executing sub-tasks on different machines and then merging the results
Parallel Computing vs. Distributed Computing
[Figure: parallel computing vs. distributed computing]
 No clear distinction
Cluster Computing vs. Grid Computing
 Cluster Computing
 A set of loosely connected computers that work together so that in many respects they can be viewed as a single system
 Good price/performance
 Memory not shared

 Grid Computing
 Federation of computer resources from multiple locations to reach a common goal (a large-scale distributed system)
 Grids tend to be more loosely coupled, heterogeneous, and geographically dispersed
Cluster Computing vs. Grid Computing
Example of Grid Computing (Folding@Home)
Cloud Computing

 Shares networked computing resources rather than having local servers or personal devices handle applications

 "Cloud" is used as a metaphor for "Internet," meaning "a type of Internet-based computing"

 Different services, such as servers, storage, and applications, are delivered to a user's computers and smartphones through the Internet
Good Parallel Program
 Writing good parallel programs
 Correct (result)
 Good performance
 Scalability
 Load balance
 Portability
 Hardware-specific utilization
