
Introduction to High Performance Computing

Shaohao Chen
Research Computing Services (RCS)
Boston University
Outline

• What is HPC? Why computer cluster?


• Basic structure of a computer cluster
• Computer performance and the top 500 list
• HPC for scientific research and parallel computing
• Nationwide HPC resources: XSEDE
• BU SCC and RCS tutorials
What is HPC?
• High Performance Computing (HPC) refers to the practice of aggregating computing power
in order to solve large problems in science, engineering, or business.
• Purpose of HPC: to accelerate computer programs and thus speed up the work process.
• Computer cluster: A set of connected computers that work together. They can be viewed
as a single system.

• Similar terminologies: supercomputing, parallel computing.


• Parallel computing: many computations are carried out simultaneously, typically on a computer cluster.

• Related terminologies: grid computing, cloud computing.


Computing power of a single CPU chip

• Moore's law is the observation that the number of transistors on a CPU chip, and hence its computing power, doubles approximately every two years.

• Nowadays multi-core technology is the key to keeping up with Moore's law.
Why computer cluster?
• Drawbacks of increasing CPU clock frequency:
--- Electric power consumption is proportional to the cube of the CPU clock frequency (ν³).
--- Higher frequencies generate more heat.
• A drawback of increasing the number of cores within one CPU chip:
--- Heat dissipation becomes difficult.

• Computer cluster: connect many computers with high-speed networks.
• Currently a computer cluster is the best solution for scaling up computing power.
• Consequently, software and programs need to be designed for parallel computing.
Basic structure of a computer cluster
• Cluster – a collection of many computers/nodes.
• Rack – a cabinet that holds a group of nodes.
• Node – a computer (with processors, memory, hard disk, etc.)
• Socket/processor – one multi-core processor.
• Core – one actual processing unit (a processor contains several cores).
 Figure: IBM Blue Gene supercomputer

• Network switch
• Storage system

• Power supply system


• Cooling system
Inside a node
1. Network device --- InfiniBand card: transfers data between nodes.
2. CPU --- Xeon multi-core processors: carry out the instructions of programs.
3. Memory: fast and temporary storage, to hold data for immediate use.
4. Hard disk: slow and permanent storage, to store data permanently.
5. Space for possible upgrades.
6. Accelerator --- Intel Xeon Phi Coprocessor (Knights Corner): accelerates programs.

 Figure: A node of the supercomputer Stampede at TACC.
Accelerators
 NVIDIA GPU (Tesla P100):
• Multiprocessors: 56
• CUDA cores: 3584
• Memory: 12 GB
• PCI connection to host CPU
• Peak DP compute: 4036‒4670 GFLOPS
 Intel Xeon Phi MIC processor (Knights Landing):
• Cores: 68; Threads: 272
• Frequency: 1.4 GHz; Two 512-bit VPUs
• Memory: 16 GB MCDRAM + external RAM
• Self-hosted
• Peak DP compute: 3046 GFLOPS
What resources does an HPC system provide?
• A large number of compute nodes and cores.
• Large-size (~ TB) and high-bandwidth memory.
• Large-size (~ PB) and fast-speed storage system; storage for parallel I/O.
• High-speed network: high-bandwidth Ethernet, Infiniband, Omni Path, etc.
• Graphics Processing Units (GPUs).
• Xeon Phi many-integrated-core (MIC) processor/coprocessor.

• A stable and efficient operating system.


• A large number of software packages and applications.
• User services.
How to measure computer performance?
• Floating-point operations per second (FLOPS):

   FLOPS = nodes × (cores per node) × (cycles per second) × (FLOPs per cycle)

• The 3rd term, clock cycles per second, is known as the clock frequency, typically 2 to 3 GHz.
• The 4th term, FLOPs per cycle, is the number of floating-point operations completed in one clock cycle.
Typical values for Intel Xeon CPUs are:
--- Sandy Bridge and Ivy Bridge: 8 DP FLOPs/cycle, 16 SP FLOPs/cycle.
--- Haswell and Broadwell : 16 DP FLOPs/cycle, 32 SP FLOPs/cycle.
• GigaFLOPS – 10^9 FLOPS; TeraFLOPS – 10^12 FLOPS; PetaFLOPS – 10^15 FLOPS; ExaFLOPS – 10^18 FLOPS.
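As an illustration with hypothetical numbers (not from the slides): one node with 28 cores at 2.5 GHz on a Haswell/Broadwell-class CPU (16 DP FLOPs per cycle) has a theoretical peak of

   1 × 28 × 2.5×10^9 × 16 ≈ 1.12×10^12 FLOPS ≈ 1.12 TFLOPS.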
Computing power grows rapidly
• iPhone 4 vs. the 1985 Cray-2 supercomputer
• Rapid growth of the power of the top-500 supercomputers (logarithmic y-axis, in GFLOPS)
Top 500 Supercomputers
• The list of June 2017
Statistics of the Top 500
HPC user environment
• Operating systems: Linux (RedHat/CentOS, Ubuntu, etc.), Unix.
• Login: ssh, gsissh.
• File transfer: secure copy (scp), GridFTP (Globus).
• Job scheduler: Slurm, PBS, SGE, Loadleveler.
• Software management: module.
• Compilers: Intel, GNU, PGI.
• MPI implementations: OpenMPI, MPICH, MVAPICH, Intel MPI.
• Debugging and profiling tools: Totalview, Tau, DDT, Vtune.

• Programming Languages: C, C++, Fortran, Python, Perl, R, MATLAB, Julia


Scientific disciplines in HPC
 Typical scientific computing fields that can benefit from HPC:
• Computational physics
• High-energy physics
• Astrophysics
• Geophysics
• Climate and weather science
• Computational fluid dynamics
• Computer-aided engineering
• Material sciences
• Computational chemistry
• Molecular dynamics
• Linear algebra
• Computer science
• Data science
• Machine/deep learning
• Biophysics
• Bioinformatics
• Finance informatics
• Scientific visualization
• Social sciences
CPU-hours by field of science
• Statistics from XSEDE
Scientific computing software

• Numerical Libraries: Lapack/Blas, FFTw, MKL, GSL, PETSc, Slepc, HDF5, NetCDF, Numpy, Scipy.
• Physics and Engineering: BerkeleyGW, Root, Gurobi, Abaqus, Openfoam, Fluent, Ansys, WRF
• Chemistry and material science: Gaussian, NWChem, VASP, QuantumEspresso, Gamess, Octopus
• Molecular dynamics: Lammps, Namd, Gromacs, Charmm, Amber
• Bioinformatics: Bowtie, BLAST, Bwa, Impute, Minimac, Picard, Plink, Solar, Tophat, Velvet.
• Data science and machine learning: Hadoop, Spark, Tensorflow, Caffe, Torch, cuDNN, Scikit-learn.
 XSEDE software: https://portal.xsede.org/software/
 BU SCC software: http://sccsvc.bu.edu/software/
Parallel Computing
 Parallel computing is a type of computation in
which many calculations are carried out
simultaneously, based on the principle that
large problems can often be divided into
smaller ones, which are then solved at the
same time.
 Speedup of a parallel program (Amdahl's law):

   speedup = 1 / (α + (1 − α) / p)

where p is the number of processors/cores and α is the fraction of the program that is serial.
• The figure is from: https://en.wikipedia.org/wiki/Parallel_computing
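As a quick illustration with hypothetical values: with α = 0.05 (5% of the program is serial) and p = 64 cores, the speedup is 1 / (0.05 + 0.95/64) ≈ 15.4, far short of the ideal 64x, which is why reducing the serial fraction is essential for scaling.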
Distributed or shared memory systems

• Shared memory system (e.g., a single node on a cluster): programmed with Open Multi-Processing (OpenMP) or MPI. A minimal OpenMP sketch follows below.
• Distributed memory system (e.g., multiple nodes on a cluster): programmed with the Message Passing Interface (MPI). A minimal MPI sketch follows the weather example.

 Figures are from the book Using OpenMP: Portable Shared Memory Parallel Programming
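The following is a minimal sketch (not from the original slides) of the shared-memory model: a C program that uses OpenMP to spread a loop over the cores of a single node. The array size and the computation are arbitrary illustration values.

/* openmp_sum.c -- hypothetical shared-memory example.
 * Compile (e.g., with GCC): gcc -fopenmp openmp_sum.c -o openmp_sum */
#include <omp.h>
#include <stdio.h>

#define N 1000000   /* arbitrary problem size for illustration */

static double a[N];

int main(void) {
    double sum = 0.0;

    /* The threads share the array a; each thread handles a chunk of the loop,
     * and the partial sums are combined by the reduction clause. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        sum += a[i];
    }

    printf("max threads available: %d, sum = %f\n", omp_get_max_threads(), sum);
    return 0;
}

The same binary can use however many cores the OMP_NUM_THREADS environment variable allows, which is the typical way shared-memory jobs are sized on one node.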
An example: weather science
• Serial weather model
• Shared-memory weather model (for several cores within one node)
• Distributed-memory weather model (for many nodes within one cluster)
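Purely as an illustrative sketch (not the actual weather code), the following C/MPI program divides a 1-D grid among processes, the way a distributed-memory model assigns each node its own sub-domain; the grid size and the per-point work are hypothetical.

/* mpi_grid.c -- hypothetical distributed-memory example.
 * Compile (e.g.): mpicc mpi_grid.c -o mpi_grid
 * Run (e.g.):     mpirun -np 4 ./mpi_grid */
#include <mpi.h>
#include <stdio.h>

#define NGRID 1024   /* total number of grid points (illustration only) */

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank owns a contiguous block of the global grid. */
    int chunk = NGRID / size;
    int start = rank * chunk;
    int end   = (rank == size - 1) ? NGRID : start + chunk;

    /* Local work on this rank's sub-domain (placeholder computation). */
    double local = 0.0;
    for (int i = start; i < end; i++)
        local += 0.5 * i;

    /* Combine the partial results from all ranks on rank 0. */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks: %d, global sum = %f\n", size, global);

    MPI_Finalize();
    return 0;
}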
Nationwide HPC resources: XSEDE
• XSEDE (eXtreme Science and Engineering Discovery Environment) is a virtual system that
provides compute resources for scientists and researchers from all over the US.
• Its mission is to facilitate research collaboration among institutions, enhance research
productivity, provide remote data transfer, and enable remote instrumentation.
• A combination of supercomputers hosted at many institutions across the US.
• Available to BU users. How to apply for an XSEDE account and allocations? See details at
http://www.bu.edu/tech/support/research/computing-resources/external/xsede/ .
• XSEDE provides regular HPC training and workshops:
--- online training: https://www.xsede.org/web/xup/online-training
--- monthly workshops: https://www.xsede.org/web/xup/course-calendar
XSEDE resources (1)
XSEDE resources (2)
XSEDE resources (3)
BU Shared Computer Cluster (SCC)
• A Linux cluster with over 580 nodes, 11,000 processors,
and 252 GPUs. Currently over 3 Petabytes of disk.
• Located in Holyoke, MA at the Massachusetts Green
High Performance Computing Center (MGHPCC), a
collaboration between 5 major universities and the
Commonwealth of Massachusetts.
• Went into production for Research Computing in June 2013. Continues to be updated and expanded.

• Webpage:
http://www.bu.edu/tech/support/research/computing-
resources/scc/
BU RCS tutorials (1)
 Linux system:
• Introduction to Linux
• Build Software from Source Codes in Linux
 BU SCC:
• Introduction to SCC
• Intermediate Usage of SCC
• Managing Projects on the SCC
 Visualization:
• Introduction to Maya
• Introduction to ImageJ
 Mathematics and Data Analysis:
• Introduction to R
• Graphics in R
• Programming in R
• R Code Optimization
• Introduction to MATLAB
• Introduction to SPSS
• Introduction to SAS
• Python for Data Analysis
BU RCS tutorials (2)
 Computer programming:
• Introduction to C
• Introduction to C++
• Introduction to Python
• Introduction to Python for Non-programmers
• Introduction to Perl
• Version Control and Git
 High-performance computing:
• Introduction to MPI
• Introduction to OpenMP
• Introduction to GPU
• Introduction to CUDA
• Introduction to OpenACC
• MATLAB for HPC
• MATLAB Parallel Toolbox

 Upcoming tutorials: http://www.bu.edu/tech/about/training/classroom/rcs-tutorials/


 Tutorial documents: http://www.bu.edu/tech/support/research/training-consulting/live-tutorials/
