Overview of Computing Paradigms
COMPUTING PARADIGMS

Cloud computing can be described as distributed computing on the internet, or as the delivery of computing services over the internet.

The chapter will cover multiple computing paradigms:

1. High Performance Computing (HPC): Focuses on maximizing computational speed
and efficiency, often used for tasks that require immense processing power, like scientific
simulations.
2. Cluster Computing: Utilizes a group of linked computers (a cluster) to work together on
tasks as if they are a single system, sharing resources to handle workloads more
efficiently.
3. Grid Computing: Similar to cluster computing but on a larger scale, it connects
geographically dispersed computers to share resources for large computational tasks,
often across organizations.
4. Cloud Computing: Provides on-demand access to computing resources over the internet,
enabling users to access storage, processing power, and applications without needing to
manage physical hardware.
5. Bio-computing: Combines biology and computing, focusing on computational
techniques that can model or manipulate biological systems, often in fields like genomics
or bioinformatics.
6. Mobile Computing: Centers on devices like smartphones and tablets, allowing
computing on-the-go, with considerations for power efficiency, connectivity, and user
interface.
7. Quantum Computing: Uses quantum-mechanical phenomena to process information at
speeds and complexities that surpass traditional computers, promising advancements in
cryptography, material science, and more.
8. Optical Computing: Processes data using light (photons) instead of electrical signals,
which could potentially lead to faster and more energy-efficient computing.
9. Nanocomputing: Involves computing at the nanoscale, often dealing with incredibly
small components, opening possibilities for highly compact and efficient computing
systems.
10. Network Computing: Focuses on interconnected computers that communicate over a
network to share resources, enabling distributed computing and information sharing.
HIGH PERFORMANCE COMPUTING (HPC)

HPC systems consist of a networked pool of processors (like CPUs) linked to resources such as
memory, storage, and input/output devices. These systems allow software to run across all
connected components. HPC systems are commonly associated with supercomputers but also
include smaller setups, like clusters of desktop PCs. They are designed to solve complex
scientific problems requiring vast computational power.

Key Features of HPC

● High-Speed Data Processing: HPC systems are capable of performing billions to
quadrillions of calculations per second, far beyond ordinary desktops.
● Parallel Processing: Thousands of compute nodes (similar to PCs) work together in HPC
systems, enhancing processing speed by performing tasks in parallel.
● Use of Clusters: Multiple servers form a cluster and run algorithms concurrently,
connected to data storage to retrieve results.

Components of HPC

1. Compute: High-speed processors perform calculations.
2. Network: The connectivity between servers enables data sharing.
3. Storage: Linked storage systems handle large volumes of data required for simulations
and calculations.

Examples and Applications of HPC

● Scientific Research: Used in protein folding, nuclear fusion modeling, and
environmental modeling.
● Industries: HPC systems are used in areas such as finance (for real-time stock analysis),
healthcare (for disease diagnosis and research), and the entertainment industry (for film
editing and special effects).
● Oil and Gas: Helps in analyzing drilling sites.
● Artificial Intelligence (AI) and Machine Learning (ML): Enhances processes like
cancer screening and autonomous driving.

Benefits of HPC

1. Cost Savings: HPC’s fast processing ability allows companies to produce results quickly,
reducing costs.
2. Process Efficiency: It streamlines business processes, helping companies analyze data
faster and identify issues.

Performance Evolution

● HPC systems have evolved from Gflops (billions of floating-point operations per
second) in the 1990s to Pflops (quadrillions of floating-point operations per second)
by 2010.
● Scientific and engineering communities primarily drive these improvements, although
most users today rely on desktops and servers for internet searches and commercial
computing tasks.
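To put the Gflops-to-Pflops jump in perspective, a quick back-of-the-envelope calculation (the workload size is chosen only for illustration):

```python
# Illustrative comparison of a 1990s-era Gflops machine vs a 2010-era Pflops machine.
GFLOPS = 1e9   # floating-point operations per second
PFLOPS = 1e15

operations = 1e18  # a hypothetical workload of 10^18 operations

time_gflops = operations / GFLOPS  # seconds on a 1 Gflops machine
time_pflops = operations / PFLOPS  # seconds on a 1 Pflops machine

print(time_gflops)  # -> 1e9 seconds, roughly 31.7 years
print(time_pflops)  # -> 1e3 seconds, under 17 minutes
print(time_gflops / time_pflops)  # -> a factor of one million
```

The six-orders-of-magnitude gap is why problems like climate modeling only became tractable in the Pflops era.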

High-Throughput Computing (HTC)

In contrast to HPC, HTC focuses on high-flux computing for handling large volumes of tasks,
often used for internet-based services (e.g., web searches). HTC emphasizes task completion
rate rather than raw processing speed.
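The HPC-vs-HTC distinction can be sketched in code: an HTC-style workload is many small, independent tasks, and the metric that matters is tasks completed per second, not the speed of any single task. A minimal illustration (task names and timings are invented):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(query):
    """Stand-in for one small, independent task, e.g. a web search."""
    time.sleep(0.01)  # simulate a short I/O-bound operation
    return f"results for {query}"

queries = [f"query-{i}" for i in range(100)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(handle_request, queries))
elapsed = time.perf_counter() - start

# The HTC metric: task completion rate (tasks/second), not per-task latency.
throughput = len(results) / elapsed
print(f"{len(results)} tasks in {elapsed:.2f}s -> {throughput:.0f} tasks/s")
```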

New Computing Paradigms Linked to HPC

1. Service-Oriented Architecture (SOA) and Web 2.0 expand web services for users.
2. Virtualization leads to cloud computing, a modern computing paradigm.
3. IoT (Internet of Things): Enabled by technologies like RFID, GPS, and sensors,
integrating computing with physical devices in real time.

PARALLEL COMPUTING
Parallel computing is a key component of High-Performance Computing (HPC) where multiple
processors work together to solve a single computational problem. This approach differs from
traditional serial computing by enabling simultaneous processing, which significantly speeds up
complex calculations.

Characteristics of Parallel Computing

● Processor Type: Generally, homogeneous (identical) processors are used.
● Cooperative Work: Multiple processors work in parallel to handle different parts of a
problem.
● Inclusion of Supercomputers: Parallel computing includes supercomputers with
hundreds or thousands of interconnected processors.

Serial vs. Parallel Computing

1. Serial Computing:
○ Runs on a single CPU.
○ A problem is broken into a series of instructions executed one after the other.
○ Limited by the speed and capacity of a single processor.
2. Parallel Computing:
○ Uses multiple CPUs.
○ A problem is divided into independent parts, allowing concurrent processing.
○ Each segment of the problem is further broken down, with instructions executed
simultaneously across different processors.
○ Requires a control mechanism to coordinate processes across CPUs.
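The serial-vs-parallel contrast above can be sketched directly: the same sum is computed once by a single loop, and once by dividing the data into independent parts, processing them concurrently, and combining the partial results. Threads stand in for processors here purely for illustration; real parallel computing distributes the parts across separate CPUs.

```python
from concurrent.futures import ThreadPoolExecutor

def serial_sum(data):
    """Serial computing: one 'processor' walks the whole problem."""
    total = 0
    for x in data:
        total += x
    return total

def parallel_sum(data, workers=4):
    """Parallel computing: split the problem into independent parts,
    process each part concurrently, then combine partial results."""
    chunk = len(data) // workers
    parts = [data[i * chunk:(i + 1) * chunk] for i in range(workers - 1)]
    parts.append(data[(workers - 1) * chunk:])  # last part takes the remainder
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, parts)
    # The combining step is the 'control mechanism' that coordinates processors.
    return sum(partials)

data = list(range(1_000_001))
assert serial_sum(data) == parallel_sum(data) == 500_000_500_000
```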

Types of Parallel Computing Systems

● Tightly Coupled Systems: Processors share a centralized memory, allowing direct
communication.
● Loosely Coupled Systems: Each processor has its own memory, and communication happens
via message passing.

Key Terminology in Parallel Computing

● Parallel Computers: Systems capable of parallel processing.
● Parallel Programs: Software specifically designed to run on parallel computing systems.
● Parallel Programming: The practice of writing code that can execute across multiple
processors simultaneously.
DISTRIBUTED COMPUTING

Distributed computing involves a network of multiple autonomous computers, each with its own
memory, working together as a single system. These computers, connected through either a local
network or a wide-area network, communicate via message passing. Distributed computing
systems can include a variety of devices, such as mainframes, PCs, workstations, and
minicomputers, and may vary between homogeneous and heterogeneous configurations.

Key Characteristics

1. Scalability: Easily expands by adding more machines as needed without disrupting the
current system.
2. Redundancy/Replication: Multiple machines provide the same services, ensuring
continuity of service even if one machine fails.

Advantages

Distributed computing offers flexibility and reliability over centralized systems. By allowing
geographically distant processors to work in harmony, it makes efficient use of resources while
minimizing the risk of system-wide failure.
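The redundancy/replication property can be sketched as a client that falls back to another replica when one machine fails. The hostnames and the `fake_send` transport below are hypothetical stand-ins for a real network layer:

```python
def query_with_failover(replicas, request, send):
    """Try each replica in turn; redundancy means the service survives
    the failure of any single machine."""
    errors = {}
    for host in replicas:
        try:
            return send(host, request)
        except ConnectionError as exc:
            errors[host] = exc  # record the failure and try the next replica
    raise RuntimeError(f"all replicas failed: {list(errors)}")

# Simulated transport: 'node-a' is down, 'node-b' answers.
def fake_send(host, request):
    if host == "node-a":
        raise ConnectionError("node-a unreachable")
    return f"{host} handled {request}"

print(query_with_failover(["node-a", "node-b"], "balance?", fake_send))
# -> node-b handled balance?
```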
CLUSTER COMPUTING

Cluster computing is an HPC (High-Performance Computing) setup where a group of similar or
identical processor machines, known as nodes, are interconnected through a dedicated network.
All nodes in a cluster share resources such as a common directory and have software like a
Message Passing Interface (MPI) installed, enabling them to run programs across all nodes
simultaneously.

Key Features

● Resource Sharing: Nodes share a common home directory.
● Communication: Nodes use MPI to communicate, allowing them to work together on
large problems.
● Homogeneity: Typically, nodes are of the same or similar type; however, experimental
clusters may include heterogeneous nodes.

Purpose

Clusters are used in situations where individual computers cannot efficiently handle large tasks.
By working cooperatively, cluster nodes tackle complex computations more efficiently than a
single machine could manage on its own.
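MPI itself is a library used from C, Fortran, or Python; as a stand-in that runs anywhere, the rank-and-message-passing programming model it enables can be emulated with threads and queues. This is an illustration of the style (blocking send/receive between numbered nodes), not real MPI:

```python
import queue
import threading

def worker(rank, inbox, results):
    """Each 'node' receives its slice of work by message passing,
    computes a partial result, and sends it back."""
    part = inbox.get()                 # blocking receive, in the spirit of MPI_Recv
    results.put((rank, sum(part)))     # send back, in the spirit of MPI_Send

nodes = 4
inboxes = [queue.Queue() for _ in range(nodes)]
results = queue.Queue()
threads = [threading.Thread(target=worker, args=(r, inboxes[r], results))
           for r in range(nodes)]
for t in threads:
    t.start()

# 'Rank 0' scatters the data across the cluster: one slice per node.
data = list(range(100))
for r in range(nodes):
    inboxes[r].put(data[r::nodes])

for t in threads:
    t.join()
total = sum(partial for _, partial in (results.get() for _ in range(nodes)))
print(total)  # -> 4950
```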
GRID COMPUTING

Grid computing is a distributed computing model where geographically dispersed computers
work collaboratively to achieve a common goal. Unlike cluster computing, where nodes often
share tasks, grid computing assigns each node a different task or application, making it ideal for
utilizing underused resources across different organizations. The management of resources in a grid
network is handled by middleware, which provides grid services for access control, security, and
data management.

Key Features

● Resource Utilization: Makes use of unused computing power, enhancing ROI.
● Heterogeneity: Incorporates diverse resources, including supercomputers, PCs, and
laptops.
● Middleware Management: Middleware facilitates resource access and control remotely.

Electrical Power Grid | Grid Computing
Supplies electricity seamlessly across regions. | Supplies computing resources across networks.
Infrastructure connects diverse power plants to homes. | Infrastructure links computers and storage resources.
Electricity is available everywhere. | Computing power is accessible from various devices.
Pay for electricity as a utility. | Pay for computing power or storage as a utility.

Advantages

● Cost-effective: Maximizes unused resources, reducing the need for new investments.
● Collaborative Problem Solving: Heterogeneous systems work together on scientific or
HPC applications.
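The idea that each grid node receives a different task can be sketched as a middleware-style matcher that assigns jobs to whichever heterogeneous resource advertises the needed capability. Node names, capabilities, and job names below are all invented for illustration:

```python
def assign_jobs(jobs, nodes):
    """Greedy middleware sketch: give each job to the first free node
    that advertises the capability the job needs."""
    assignments = {}
    busy = set()
    for job, needed in jobs:
        for name, capabilities in nodes.items():
            if name not in busy and needed in capabilities:
                assignments[job] = name
                busy.add(name)
                break
        else:
            assignments[job] = None  # no suitable resource is free
    return assignments

nodes = {
    "campus-supercomputer": {"gpu", "large-memory"},
    "lab-pc-01": {"cpu"},
    "idle-laptop": {"cpu"},
}
jobs = [("protein-folding", "gpu"), ("log-analysis", "cpu"), ("render", "gpu")]
print(assign_jobs(jobs, nodes))
# protein-folding -> campus-supercomputer, log-analysis -> lab-pc-01, render -> None
```

Real grid middleware additionally handles authentication, data transfer, and fault recovery; this sketch shows only the scheduling decision.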
CLOUD COMPUTING

Cloud computing is an evolution from grid computing, emphasizing resource scalability and
on-demand service delivery. It uses the concept of computing power as a utility, allowing IT and
business resources—such as servers, storage, and applications—to be dynamically provisioned
based on user needs and workload. While grid computing aggregates computers in parallel for
specific tasks, cloud computing delivers a unified service by pooling multiple resources, not
limited to computing alone.

Key Differences from Grid Computing

● Resource Aggregation: Cloud leverages diverse resources (e.g., servers, storage,
applications) for unified services, while grid focuses on parallel computing.
● Dynamic Provisioning: Resources are provisioned according to changing workloads.
● Versatility: Cloud environments support both grid and non-grid setups, like multi-tier
web architectures.

Key Features

● On-demand Resource Provisioning: Meets varying user demands instantly.
● Scalability: Resources scale up or down as needed, ideal for fluctuating workloads.
● Unified Service Delivery: Provides a seamless, integrated experience for end-users,
supporting diverse applications.
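Dynamic provisioning can be sketched as a simple autoscaling rule: size the resource pool to the current workload, clamped to sensible bounds. The capacity figures and thresholds here are invented for illustration:

```python
import math

def target_instances(requests_per_sec, capacity_per_instance=100,
                     min_instances=1, max_instances=20):
    """On-demand provisioning sketch: compute how many instances the
    current workload needs, within the allowed pool size."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

for load in (0, 250, 5000):
    print(load, "->", target_instances(load), "instances")
# 0 -> 1, 250 -> 3, 5000 -> 20 (capped at max_instances)
```

A real cloud platform applies rules like this continuously, which is what lets resources scale up and down with fluctuating workloads.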
BIOCOMPUTING

Biocomputing is a field combining biology and computational science, using biologically
derived or simulated molecules, like DNA and proteins, to perform computational tasks and
solve complex problems. It provides both theoretical frameworks and practical tools for
scientists, especially in the study of proteins and DNA, allowing for the exploration of biological
structures and processes.

Key Aspects

● Biologically-Derived Models: Uses molecular structures to form computational models
that influence program design for various applications.
● Protein and DNA Exploration: These molecules serve as fundamental units, with their
functional roles dependent on their sequence order, guiding bio-computing applications.
● Disease Understanding: Facilitates insights into molecular structures and functions that
help uncover the causes of diseases at a biological level.

Applications

● Biochemical Computing Components: Construction of computing devices using
biochemical elements.
● Biologically-Inspired Algorithms: Approaches in programming that model biological
processes.
● Biological Context Computing: Applications within biological research and
environments, aiding in disease research, genetic analysis, and synthetic biology.
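Since sequence order determines function, the most basic operations in biological-context computing work directly on sequences. Two standard bioinformatics primitives, as a minimal sketch:

```python
def reverse_complement(seq):
    """Return the reverse complement of a DNA sequence: the strand that
    would pair with it, read in the conventional 5'-to-3' direction."""
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(seq))

def gc_content(seq):
    """Fraction of G and C bases, a basic genomic statistic."""
    return sum(base in "GC" for base in seq) / len(seq)

strand = "ATGCGC"
print(reverse_complement(strand))  # -> GCGCAT
print(gc_content(strand))          # about 0.667
```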

MOBILE COMPUTING

Mobile computing involves small, handheld devices (like smartphones and tablets) that utilize
wireless communication to exchange information. This technology is widespread, especially in
voice communication (e.g., cellular phones), with a rapid expansion in subscribers and
functionality.

Key Aspects

● Handheld Devices: Compact computing elements used for various applications.
● Wireless Communication: Data and voice transmitted via cellular and wireless
networks.
● Data Transmission: Capability for remote data exchange, such as video calls and
conferencing.

Applications

● Voice and Data Communication: Extends beyond voice to include data services,
enabling functions like video calls and online conferencing.
● Remote Data Access: Allows users to send and receive information from remote to fixed
or other remote locations, making it vital for real-time data access and communication
across distances.

Mobile computing is advancing quickly, providing flexible, on-the-go computing solutions for a
range of personal and business applications.

QUANTUM COMPUTING

Quantum computing represents a significant shift in computational technology, addressing
limitations faced by traditional silicon-based systems.

Key Features

● Transistor Limitations: Current technology is approaching physical limits in
miniaturizing transistors on integrated circuits, which have historically doubled processing
power roughly every 18 months (Moore's law).
● Quantum Speed: Quantum computers operate on principles of quantum mechanics,
allowing them, for certain classes of problems, to outpace today's most advanced
supercomputers by enormous factors.

Fundamental Differences

● Quantum Information: Utilizes quantum bits (qubits) that can exist in multiple states
simultaneously, unlike classical bits that are either 0 or 1. This enables more complex
computations in parallel.
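The qubit idea can be made concrete with a tiny state-vector simulation: applying a Hadamard gate to a qubit that starts in the definite state |0⟩ puts it into an equal superposition, where measuring 0 or 1 is equally likely. This is a pure-Python sketch of the mathematics, not quantum hardware:

```python
import math

def apply_gate(gate, state):
    """Multiply a 2x2 gate matrix by a 2-element state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

# The Hadamard gate: maps |0> to (|0> + |1>) / sqrt(2).
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

qubit = [1.0, 0.0]            # the definite state |0>, like a classical bit
qubit = apply_gate(H, qubit)  # now a superposition of |0> and |1>

# Born rule (amplitudes are real here): probability = amplitude squared.
probs = [amp ** 2 for amp in qubit]
print(probs)  # each outcome has probability ~0.5
```

Simulating n qubits this way needs a state vector of 2^n amplitudes, which is exactly why classical machines cannot keep up as n grows.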

Current Status

● Prototypes: Although there are working quantum computer prototypes, they have not yet
reached a level where they can replace or serve as practical alternatives to conventional
computing systems.

Quantum computing holds the potential to revolutionize areas such as cryptography,
optimization, and complex simulations, but further advancements are needed before it can fully
integrate into mainstream computing.
OPTICAL COMPUTING

Optical computing utilizes photons from visible light or infrared beams instead of electric
currents to perform digital computations, offering a potential speed advantage over traditional
electronic systems.

Key Features

● Photon Use: By leveraging photons, optical computing systems can transmit data more
quickly than systems reliant on electric currents.
● Speed Limitations of Electric Currents: Electric currents travel at about 10% of the
speed of light, limiting data exchange rates, especially over long distances.
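The 10%-of-light-speed figure translates directly into latency. A quick calculation of one-way signal travel time over 1,000 km, using round numbers for illustration (light in real fiber travels somewhat below vacuum speed):

```python
C = 3.0e8            # speed of light in vacuum, m/s (approximate)
ELECTRIC = 0.1 * C   # electrical signals: roughly 10% of light speed

distance = 1_000_000.0  # 1,000 km in metres

t_optical = distance / C          # roughly 3.3 milliseconds
t_electric = distance / ELECTRIC  # roughly 33 milliseconds

print(f"optical:  {t_optical * 1e3:.1f} ms")
print(f"electric: {t_electric * 1e3:.1f} ms")
```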

Advantages

● Increased Speed: Optical computers can potentially perform operations at speeds 10
times faster or more than conventional electronic computers.
● Impact on Data Transmission: The principles of optical computing have contributed to
the development of optical fiber technology, improving long-distance communication.

Optical computing represents a promising frontier in computing technology, focusing on speed
and efficiency, and is poised to address some limitations of existing electronic systems.

NANOCOMPUTING

Nanocomputing involves computing systems built from nanoscale components, promising
advancements in performance and efficiency over traditional computing technologies.

Key Features

● Nanoscale Components: Nanocomputing systems may replace silicon transistors with
transistors made from carbon nanotubes, which can enhance processing speed and reduce
size.
● Scale Issues: The dimensions of the components are typically just a few nanometers
across, posing unique engineering challenges.
● Integration Challenges:
○ Manufacturing Complexity: Creating complex patterns at the nanoscale can be
economically challenging.
○ Device Density: Nanocomputers may need to integrate vast quantities of devices,
complicating their design and production.
