Ch3 Cloud Data Center

Cloud computing & Data center

Thoai Nam
High Performance Computing Lab (HPC Lab)
Faculty of Computer Science and Engineering
HCMC University of Technology

HPC Lab-CSE-HCMUT 1
Cloud computing
Cloud computing is the on-demand delivery of IT resources over
the Internet with pay-as-you-go pricing.

Cloud computing is the delivery of computing resources (including storage,
processing power, databases, networking, analytics, artificial intelligence, and
software applications) over the internet (the cloud).

▪ Instead of buying, owning, and maintaining physical data centers and servers, you can access
technology services, such as computing power, storage, and databases, on an as-needed basis
from a cloud provider like Amazon Web Services (AWS)
▪ By outsourcing these resources, companies can access the computational assets they need,
when they need them, without needing to purchase and maintain a physical, on-premise IT
infrastructure
▪ This provides flexible resources, faster innovation, and economies of scale
▪ For many companies, a cloud migration is directly related to data and IT modernization.

Cloud computing: models, attributes, and resources
▪ Deployment models
  o Public cloud
  o Private cloud
  o Community cloud
  o Hybrid cloud
▪ Delivery models
  o Software as a Service (SaaS)
  o Platform as a Service (PaaS)
  o Infrastructure as a Service (IaaS)
▪ Defining attributes
  o Massive infrastructure
  o Utility computing
  o Pay-per-usage
  o Accessible via the Internet
  o Elasticity
▪ Infrastructure
  o Distributed infrastructure
  o Resource virtualization
  o Autonomous systems
▪ Resources
  o Compute & storage servers
  o Networks
  o Services
  o Applications

Other models
▪ Basic reasoning: information and data processing can be done more efficiently
on large farms of computing and storage systems accessible via the Internet
▪ Two early models:
▪ Grid computing – initiated by the National Labs in the early 1990s;
targeted primarily at scientific computing
“Grid computing is the collection of computer resources from multiple locations to reach
a common goal. The grid can be thought of as a distributed system with non-interactive
workloads that involve a large number of files.” from Wikipedia
▪ Utility computing – initiated in 2005-2006 by IT companies and targeted at
enterprise computing
“Utility computing is a service provisioning model in which a service provider makes
computing resources and infrastructure management available to the customer as
needed, and charges them for specific usage rather than a flat rate.” from Wikipedia

Cloud computing: characteristics (1)
“Cloud Computing offers on-demand, scalable and elastic computing (and storage services).
The resources used for these services can be metered and users are charged only for the
resources used. “

Shared Resources and Resource Management:
1. Cloud uses a shared pool of resources
2. Uses Internet technologies to offer scalable and elastic services.
3. The term “elastic computing” refers to the ability to acquire computing
resources dynamically, on demand, and to support a variable workload.
4. Resources are metered and users are charged accordingly.
5. It is more cost-effective due to resource multiplexing. Lower costs for the cloud
service provider are passed on to the cloud users.
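The metering-and-charging idea above can be sketched in a few lines. This is an illustrative calculation only; the resource names and rates are invented, not any provider's real pricing.

```python
# Illustrative sketch of pay-per-use metering: resources are metered
# and the user is charged only for what was actually consumed.
# Resource names and rates below are hypothetical examples.

def metered_charge(usage, rates):
    """Charge per metered unit of each resource actually used."""
    return sum(usage[resource] * rates[resource] for resource in usage)

# Hypothetical metered usage for one billing period.
usage = {"cpu_hours": 12.0, "gb_stored": 50.0, "gb_transferred": 5.0}
rates = {"cpu_hours": 0.04, "gb_stored": 0.02, "gb_transferred": 0.09}

cost = metered_charge(usage, rates)  # 0.48 + 1.00 + 0.45 = 1.93
```

The key contrast with a flat rate is that an idle period contributes nothing to `usage` and therefore nothing to the bill.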

Cloud computing: characteristics (2)
Data Storage:
6. Data is stored:
o in the “cloud”, in certain cases closer to the site where it is used.
o appears to the users as if stored in a location-independent manner.
7. The data storage strategy can increase reliability, as well as security, and can
lower communication costs.
Management:
8. The maintenance and security are operated by service providers.
9. The service providers can operate more efficiently due to specialization and
centralization.

Cloud computing: advantages
1. Resources, such as CPU cycles, storage, network bandwidth, are shared
2. When multiple applications share a system, their peak demands for resources are not synchronized
thus, multiplexing leads to a higher resource utilization
3. Resources can be aggregated to support data-intensive applications
4. Data sharing facilitates collaborative activities. Many applications require multiple types of analysis of
shared data sets and multiple decisions carried out by groups scattered around the globe
5. Eliminates the initial investment costs for a private computing infrastructure and the maintenance
and operation costs
6. Cost reduction: concentration of resources creates the opportunity to pay as you go for computing
7. Elasticity: the ability to accommodate workloads with very large peak-to-average ratios
8. User convenience: virtualization allows users to operate in familiar environments rather than in
idiosyncratic ones.
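Point 2 above, that unsynchronized peaks make multiplexing pay off, can be made concrete with a toy simulation. The workloads below are invented numbers chosen only to illustrate the effect.

```python
# Sketch of why multiplexing raises utilization: when applications'
# peak demands are not synchronized, the peak of the aggregate load is
# far below the sum of the individual peaks, so shared capacity can be
# sized for the former rather than the latter. Workloads are made up.

app_a = [10, 80, 10, 10]  # demand per time slot (e.g., CPU units)
app_b = [10, 10, 80, 10]  # peaks in a different slot than app_a
app_c = [10, 10, 10, 80]  # peaks in yet another slot

# Dedicated sizing: each application gets capacity for its own peak.
sum_of_peaks = max(app_a) + max(app_b) + max(app_c)   # 240 units

# Shared sizing: one pool sized for the peak of the aggregate load.
aggregate = [a + b + c for a, b, c in zip(app_a, app_b, app_c)]
peak_of_sum = max(aggregate)                           # 100 units
```

With these numbers the shared pool needs less than half the capacity of three dedicated deployments, which is the economy-of-scale argument in miniature.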

Deployment models
▪ Public Cloud - the infrastructure is made available to the general public or a large
industry group and is owned by the organization selling cloud services
▪ Private Cloud – the infrastructure is operated solely for an organization
▪ Hybrid Cloud - composition of two or more Clouds (public, private, or community) as
unique entities but bound by a standardized technology that enables data and
application portability
▪ Other types: e.g., Community/Federated Cloud - the infrastructure is shared by
several organizations and supports a community that has shared concerns.

Why is Cloud computing successful?
▪ It is in a better position to exploit recent advances in software, networking,
storage, and processor technologies promoted by the same companies who
provide Cloud services
▪ Economic reasons: It is used for enterprise computing; its adoption by
industrial organizations, financial institutions, government, and so on has a huge
impact on the economy
▪ Infrastructure management reasons:
o A single Cloud consists of a mostly homogeneous (now more heterogeneous)
set of hardware and software resources
o The resources are in a single administrative domain (AD). Security, resource
management, fault-tolerance, and quality of service are less challenging than
in a heterogeneous environment with resources in multiple ADs.
Challenges for Cloud computing
▪ Availability of service: what happens when the service provider cannot deliver?
▪ Data confidentiality and auditability, a serious problem
▪ Diversity of services, data organization, user interfaces available at different service providers limit
user mobility; once a customer is hooked to one provider it is hard to move to another
▪ Data transfer bottleneck; many applications are data-intensive
▪ Performance unpredictability, one of the consequences of resource sharing.
o How to use resource virtualization and performance isolation for QoS guarantees?
o How to support elasticity, the ability to scale up and down quickly?
▪ Resource management: It is a big challenge to manage different workloads running on large data
centers. Are self-organization and self-management the solution?
▪ Security and confidentiality: major concern for sensitive applications, e.g., healthcare applications.
=> Addressing these challenges is on-going work!

Delivery models
1. Software as a Service (SaaS) (high level)
2. Platform as a Service (PaaS)
3. Infrastructure as a Service (IaaS) (low level).

Infrastructure as a Service (IaaS)
▪ Infrastructure consists of compute resources: CPUs, VMs, storage, etc.

▪ The user is able to deploy and run arbitrary software, which can include
operating systems and applications
▪ The user does not manage or control the underlying Cloud infrastructure but
has control over operating systems, storage, deployed applications, and
possibly limited control of some networking components, e.g., host firewalls.

▪ Services offered by this delivery model include: server hosting, storage,
computing hardware, operating systems, virtual instances, load balancing,
Internet access, and bandwidth provisioning

▪ Example: Amazon EC2.

Platform as a Service (PaaS)
▪ Allows a cloud user to deploy consumer-created or acquired applications using
programming languages and tools supported by the service provider.
▪ The user:
o Has control over the deployed applications and, possibly, application
hosting environment configurations.
o Does not manage or control the underlying Cloud infrastructure including
network, servers, operating systems, or storage.
▪ Not particularly useful when:
o The application must be portable.
o Proprietary programming languages are used.
o The hardware and software must be customized to improve the
performance of the application.
▪ Examples: Google App Engine, Windows Azure.
Software as a Service (SaaS)
▪ Applications are supplied by the service provider
▪ The user does not manage or control the underlying Cloud infrastructure or
individual application capabilities
▪ Services offered include:
o Enterprise services such as: workflow management, communications,
digital signature, customer relationship management (CRM), desktop
software, financial management, geo-spatial, and search
▪ Not suitable for real-time applications or for those where data is not allowed
to be hosted externally

▪ Examples: Gmail, Salesforce.

Cloud activities (1)
▪ Service management and provisioning including:
o Virtualization.
o Service provisioning.
o Call center.
o Operations management.
o Systems management.
o QoS management.
o Billing and accounting, asset management.
o SLA management.
o Technical support and backups.

Cloud activities (2)
▪ Security management including:
o ID and authentication.
o Certification and accreditation.
o Intrusion prevention.
o Intrusion detection.
o Virus protection.
o Cryptography.
o Physical security, incident response.
o Access control, audit and trails, and firewalls.

Cloud activities (3)
▪ Customer services such as:
o Customer assistance and on-line help
o Subscriptions
o Business intelligence
o Reporting
o Customer preferences
o Personalization.
▪ Integration services including:
o Data management
o Development.

Ethical issues
▪ Paradigm shift with implications on computing ethics:
o The control is relinquished to third party services
o Data is stored on multiple sites administered by several organizations
o Multiple services interoperate across the network

▪ Implications:
o Unauthorized access
o Data corruption
o Infrastructure failure, and service unavailability.

De-perimeterisation
▪ Systems can span the boundaries of multiple organizations and cross the
security borders
▪ The complex structure of Cloud services can make it difficult to determine who
is responsible in case something undesirable happens
▪ Identity fraud and theft are made possible by the unauthorized access to
personal data in circulation and by new forms of dissemination through social
networks and they could also pose a danger to Cloud Computing.

Privacy issues
▪ Cloud service providers have already collected petabytes of sensitive personal
information stored in data centers around the world. The acceptance of Cloud
Computing therefore will be determined by privacy issues addressed by these
companies and the countries where the data centers are located
▪ Privacy is affected by cultural differences; some cultures favor privacy, others
emphasize community. This leads to an ambivalent attitude towards privacy in
the Internet which is a global system.

Cloud vulnerabilities
▪ Clouds are affected by malicious attacks and failures of the infrastructure,
e.g., power failures
▪ Such events can affect the Internet domain name servers and prevent
access to a Cloud or can directly affect the Clouds:
o in 2004 an attack at Akamai caused a domain name outage and a major blackout
that affected Google, Yahoo, and other sites
o in 2009, Google was the target of a denial-of-service attack which took down
Google News and Gmail for several days
o in 2012 lightning caused prolonged downtime at Amazon.

Cloud native
▪ Cloud native refers less to where an application resides and more to how it is built
and deployed
▪ A cloud native application consists of discrete, reusable components that are
known as microservices that are designed to integrate into any cloud environment
▪ These microservices act as building blocks and are often packaged in containers
▪ Microservices work together as a whole to comprise an application, yet each can
be independently scaled, continuously improved, and quickly iterated through
automation and orchestration processes
▪ The flexibility of each microservice adds to the agility and continuous improvement
of cloud-native applications.

Serverless computing

Serverless computing
Serverless computing is an application development and execution model that
enables developers to build and run application code without provisioning or
managing servers or back-end infrastructure.
IBM

▪ Along with infrastructure as a service (IaaS), platform as a service (PaaS), function as a
service (FaaS) and software as a service (SaaS), serverless has become a leading cloud
service offering
▪ Together, serverless computing, microservices and containers form a triumvirate of
technologies at the core of cloud-native application development.
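The execution model described above can be sketched with a toy runtime. The `handler(event, context)` shape mirrors common FaaS platforms, but the runtime class, event fields, and handler below are all invented for illustration.

```python
# Toy sketch of the FaaS execution model: the developer supplies only a
# handler; a (here, simulated) platform invokes it on demand per event
# and scales to zero between events. All names here are hypothetical.

def thumbnail_handler(event, context):
    """Hypothetical function reacting to an object-storage upload event."""
    return {"status": "ok", "object": event["object_key"]}

class ToyFaaSRuntime:
    """Stand-in for the provider-managed platform the developer never sees."""

    def __init__(self):
        self.routes = {}  # event type -> registered handler

    def register(self, event_type, handler):
        self.routes[event_type] = handler

    def dispatch(self, event):
        # The platform, not the developer, decides when and where this runs.
        handler = self.routes[event["type"]]
        return handler(event, context={"request_id": 1})

runtime = ToyFaaSRuntime()
runtime.register("object_created", thumbnail_handler)
result = runtime.dispatch({"type": "object_created", "object_key": "img.png"})
```

Note that no server, process, or capacity appears anywhere in the developer-visible code; only the handler body does.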

Serverless computing history
▪ Serverless originated in 2008 when Google released Google App Engine (GAE), a platform
for developing and hosting web applications in Google-managed data centers
With GAE, a software developer could create and launch software on Google's Cloud
without worrying about server management tasks such as patching or load balancing, which
Google handled.
▪ The term ''serverless'' first appeared in a tech article by cloud computing specialist Ken
Fromm in 2012
▪ In 2014, Amazon introduced AWS Lambda, the first serverless platform
▪ In 2016, Microsoft and Google launched their own serverless platforms, Azure Functions
and Google Cloud Functions
▪ Other major players in today's serverless platform market include IBM Cloud® Code
Engine, Oracle Cloud Infrastructure (OCI) Functions, Cloudflare Workers and Alibaba
Function Compute.

Serverless and FaaS
▪ Serverless is more than Function as a Service (FaaS) – the cloud computing service that
enables developers to run code or containers in response to specific events or requests
without specifying or managing the infrastructure required to run the code
▪ Compared to FaaS, serverless is an entire stack of services that can respond to specific
events or requests and scale to zero when no longer in use—and for which provisioning,
management and billing are handled by the cloud provider and invisible to developers
▪ FaaS is the compute model central to serverless, and the two terms are often used
interchangeably
▪ In addition to FaaS, these services include databases and storage, Application
programming interface (API) gateways and event-driven architecture.

Storage and API gateways

▪ Serverless databases and storage
o Databases (SQL and NoSQL) and storage (particularly object storage) are the foundation of the
data layer
o A serverless approach to these technologies involves transitioning away from provisioning
"instances" with defined capacity, connection and query limits and moving toward models that
scale linearly with demand in both infrastructure and pricing
▪ API gateways
o API gateways act as proxies to web application actions and provide HTTP method routing, client
ID and secrets, rate limits, CORS, viewing API usage, viewing response logs and API sharing
policies.
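Two of the gateway duties listed above, HTTP method routing and per-client rate limits, can be sketched in a few lines. This is a simplified illustration; the class, limit policy, and status tuples are invented, not a real gateway's API.

```python
# Minimal sketch of two API-gateway duties: routing requests by
# (method, path) to a backend action, and enforcing a per-client
# request limit. All names and the simple counting policy are made up.

class ToyGateway:
    def __init__(self, limit_per_client):
        self.routes = {}    # (method, path) -> backend action (a callable)
        self.limit = limit_per_client
        self.counts = {}    # client_id -> requests seen so far

    def add_route(self, method, path, action):
        self.routes[(method, path)] = action

    def handle(self, client_id, method, path):
        # Rate limiting: reject once a client exceeds its quota.
        self.counts[client_id] = self.counts.get(client_id, 0) + 1
        if self.counts[client_id] > self.limit:
            return 429, "rate limit exceeded"
        # Method routing: proxy the call to the registered backend action.
        action = self.routes.get((method, path))
        if action is None:
            return 404, "no such route"
        return 200, action()

gw = ToyGateway(limit_per_client=2)
gw.add_route("GET", "/orders", lambda: "order list")
```

A real gateway would use a sliding time window rather than a lifetime counter, but the proxy-plus-policy structure is the same.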

Serverless versus PaaS (1)
▪ Provisioning time: Measured in milliseconds for serverless versus minutes to hours for the
other models
▪ Administrative burden: None for serverless, compared to a continuum from light to
medium to heavy for PaaS, containers and VMs, respectively
▪ Maintenance: Serverless architectures are managed 100% by CSPs. The same is true for
PaaS, but containers and VMs require significant maintenance, including updating and
managing operating systems, container images, connections and so on
▪ Scaling: Autoscaling, including autoscaling to zero, is instant and inherent for serverless.
The other models offer automatic but slow scaling that requires careful tuning of
autoscaling rules and no scaling to zero
▪ Capacity planning: None is needed for serverless. The other models require a mix of
automatic scalability and capacity planning
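The scaling contrast above, in particular autoscaling to zero, can be sketched as a tiny capacity policy. The per-instance throughput and the load trace are invented numbers for illustration only.

```python
# Sketch of serverless-style autoscaling: capacity tracks the
# instantaneous request load, including scaling down to zero when
# idle. The 100-requests-per-instance figure is a made-up assumption.

REQUESTS_PER_INSTANCE = 100

def instances_needed(requests_in_flight):
    """Zero instances when idle; otherwise one per 100 in-flight requests."""
    if requests_in_flight == 0:
        return 0
    return -(-requests_in_flight // REQUESTS_PER_INSTANCE)  # ceiling division

load_over_time = [0, 40, 250, 1000, 30, 0]
capacity = [instances_needed(r) for r in load_over_time]
```

Between the first and last time slots no capacity is provisioned at all, which is exactly the behavior VM- and PaaS-based autoscaling rules cannot reach.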

Serverless versus PaaS (2)
▪ Statelessness: Inherent for serverless, which means scalability is never a problem; state is
maintained in an external service or resource. PaaS, containers and VMs can use HTTP,
keep an open socket or connection for long periods and store state in memory between
calls
▪ High availability (HA) and disaster recovery (DR): Serverless offers both high availability
and disaster recovery with no extra effort or extra cost. The other models require extra
cost and management effort. Infrastructure can be restarted automatically with VMs and
containers
▪ Resource utilization: Serverless is 100% efficient because there is no idle capacity—it is
invoked only upon request. All other models feature at least some degree of idle capacity
▪ Billing and savings: Serverless is metered in units of 100 milliseconds. PaaS, containers
and VMs are typically metered by the hour or the minute.
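The 100-millisecond metering granularity above translates into a simple round-up calculation. The per-unit rate below is a made-up figure, not any provider's price.

```python
# Sketch of serverless billing at 100 ms granularity: execution time
# is rounded up to whole 100 ms units, then multiplied by a per-unit
# rate. The rate used here is an invented example value.

import math

UNIT_MS = 100

def serverless_bill(duration_ms, rate_per_unit):
    """Return (billed units, cost) for one invocation."""
    units = math.ceil(duration_ms / UNIT_MS)
    return units, units * rate_per_unit

units, cost = serverless_bill(230, rate_per_unit=0.0001)  # 3 units billed
```

A 230 ms invocation is billed as 3 units; an hourly-metered VM would charge for the full hour regardless of how little of it was used.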

Pros of Serverless
▪ Improved developer productivity: As noted, serverless enables development teams to focus on
writing code, not managing infrastructure. It gives developers more time to innovate and optimize
their front-end application functions and business logic
▪ Pay for execution only: The meter starts when the request is made and ends when execution
finishes. Compare this to the IaaS compute model, where customers pay for the physical servers,
VMs and other resources required to run applications, from when they provision those resources
until they explicitly decommission them
▪ Develop in any language: Serverless is a polyglot environment that enables developers to code in
any language or framework (Java, Python, JavaScript, Node.js) that they're comfortable with
▪ Streamlined development or DevOps cycles: Serverless simplifies deployment and, in a larger
sense, simplifies DevOps because developers don't spend time defining the infrastructure required
to integrate, test, deliver and deploy code builds into production
▪ Cost-effective performance: For specific workloads (for example, embarrassingly parallel processing,
stream processing, specific data processing tasks), serverless computing can be both faster and
more cost-effective than other forms of compute
▪ Reduce latency: In a serverless environment, code can run closer to the end user, decreasing latency
▪ Usage visibility: Serverless platforms provide near-total visibility into system and user times and can
aggregate usage information systematically.
Cons of Serverless
▪ Less control: In a serverless setting, an organization hands server control over to a third-party CSP,
thus relinquishing the management of hardware and execution environments
▪ Vendor lock-in: Each service provider offers unique serverless capabilities and features that are
incompatible with other vendors
▪ Slow startup: Also known as "cold start," slow startup can affect the performance and
responsiveness of serverless applications, particularly in real-time demand environments
▪ Complex testing and debugging: Debugging can be more complicated with a serverless computing
model as developers lack visibility into back-end processes
▪ Higher cost for running long applications: Serverless execution models are not designed to execute
code for extended periods. Therefore, long-running processes can cost more than traditional
dedicated server or VM environments.

Sustainability
▪ Unlike traditional on-prem data center environments, a serverless computing
model can help organizations reduce energy consumption and lower their
carbon footprint for IT operations.
▪ A serverless model allows companies to optimize their emissions through
resource efficiency, paying for and using only the resources they need. This
results in less energy wasted on idle or excess processes.

Edge computing & Cloud

[Source: https://www.winsystems.com/cloud-fog-and-edge-computing-whats-the-difference/]
Fog (including Edge) computing
▪ Fog technology complements the role of
cloud computing and distributes the data
processing at the edge of the network,
which provides faster responses to
application queries and saves the network
resources
▪ Fog computing model
o Sensors
o Actuators
o Fog nodes at T1, T2, T3, etc. levels
o Cloud
▪ Benefits of Fog computing
o Move data to the best place for processing
o Optimize latency
o Conserve network bandwidth
o Collect and secure data

Cloud is the centralized storage situated further from the endpoints than any other type of storage. This
explains the highest latency, bandwidth cost, and network requirements. On the other hand, cloud is a
powerful global solution that can handle huge amounts of data and scale effectively by engaging more
computing resources and server space. It works great for big data analytics, long-term data storage and
historical data analysis.

Fog acts as a middle layer between cloud and edge and provides the benefits of both. It relies on and works
directly with the cloud, handing off data that does not need to be processed on the go. At the same time, fog is
placed closer to the edge. If necessary, it engages local computing and storage resources for real-time
analytics and quick response to events.

Just like edge, fog is decentralized meaning that it consists of many nodes. However, unlike edge, fog has a
network architecture. Fog nodes are connected with each other and can redistribute computing and storage
to better solve given tasks.

Edge is the closest you can get to end devices, hence the lowest latency and immediate response to data.
This approach allows computing to be performed, and a limited volume of data to be stored, directly on
devices, applications, and edge gateways. It usually has a loosely connected structure where edge nodes work
with data independently. This is what differentiates edge from network-based fog.
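The placement trade-off described in the paragraphs above can be sketched as a small decision rule: put a workload at the nearest tier that meets its latency budget and can hold its data. The thresholds are invented purely for illustration.

```python
# Illustrative decision sketch for the cloud/fog/edge split: the closer
# the tier, the lower the latency but the smaller the storage. The
# latency and data-volume thresholds below are made-up assumptions.

def place_workload(latency_budget_ms, data_gb):
    """Pick the nearest tier satisfying a workload's latency and data needs."""
    if latency_budget_ms < 10 and data_gb <= 1:
        return "edge"   # immediate response, but only limited local storage
    if latency_budget_ms < 100:
        return "fog"    # nearby nodes, real-time analytics
    return "cloud"      # big data analytics, long-term storage

placements = [
    place_workload(5, 0.1),      # sensor alerting: tight latency, tiny data
    place_workload(50, 10),      # local analytics: moderate latency, more data
    place_workload(5000, 500),   # historical analysis: latency-tolerant, big data
]
```

Any real placement policy would also weigh bandwidth cost and security, as the slide notes; this sketch isolates just the latency/volume axis.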

Here's a cloud vs. fog vs. edge computing comparison chart that gives a quick overview of these and other
differences between these approaches.
Cloudlet
▪ A cloudlet is a mobility-enhanced small-scale
cloud datacenter that is located at the edge of
the Internet
▪ The main purpose of the cloudlet is supporting
resource-intensive and interactive mobile
applications by providing powerful computing
resources to mobile devices with lower latency
▪ It is a new architectural element that extends
today’s cloud computing infrastructure
▪ It represents the middle tier of a 3-tier
hierarchy: mobile device - cloudlet - cloud.

Edge computing paradigms comparison

From cloud to fog to edge

Evolution of edge computing

Data center

What is a Data Center?
▪ A data center is a physical facility that organizations use to house their critical
applications and data
▪ A data center's design is based on a network of computing and storage
resources that enable the delivery of shared applications and data
▪ The key components of a data center design include routers, switches,
firewalls, storage systems, servers, and application-delivery controllers
CISCO

A modern Data Center
▪ Infrastructure has shifted from traditional on-premises physical servers to
virtual networks that support applications and workloads across pools of
physical infrastructure and into a multicloud environment
▪ In this era, data exists and is connected across multiple data centers, the edge,
and public and private clouds
▪ The data center must be able to communicate across these multiple sites,
both on-premises and in the cloud
▪ Even the public cloud is a collection of data centers. When applications are
hosted in the cloud, they are using data center resources from the cloud
provider.

Why are data centers important to business?
▪ Email and file sharing
▪ Productivity applications
▪ Customer relationship management (CRM)
▪ Enterprise resource planning (ERP) and databases
▪ Big data, artificial intelligence, and machine learning
▪ Virtual desktops, communications and collaboration services.

What are the core components of a data center?
▪ Network infrastructure: This connects servers (physical and virtualized), data
center services, storage, and external connectivity to end-user locations
▪ Storage infrastructure: Data is the fuel of the modern data center. Storage
systems are used to hold this valuable commodity
▪ Computing resources: Applications are the engines of a data center. These
servers provide the processing, memory, local storage, and network
connectivity that drive applications.

Data center design includes routers, switches, firewalls, storage systems, servers,
and application delivery controllers. Because these components store and manage
business-critical data and applications, data center security is critical in data center
design.

How do data centers operate?
▪ Data center services are typically deployed to protect the performance and
integrity of the core data center components
▪ Network security appliances: these include firewall and intrusion protection to
safeguard the data center
▪ Application delivery assurance: to maintain application performance, these
mechanisms provide application resiliency and availability via automatic
failover and load balancing.
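The failover-plus-load-balancing mechanism above can be sketched with a toy round-robin balancer that skips unhealthy backends. The class, backend names, and health flags are all simulated, not a real appliance's API.

```python
# Sketch of application delivery assurance: round-robin load balancing
# with automatic failover past backends marked unhealthy. Everything
# here (names, health flags) is an invented simulation.

class ToyLoadBalancer:
    def __init__(self, backends):
        self.backends = backends        # name -> currently healthy?
        self.order = list(backends)     # fixed round-robin order
        self.next = 0

    def route(self):
        """Return the next healthy backend, skipping failed ones."""
        for _ in range(len(self.order)):
            name = self.order[self.next]
            self.next = (self.next + 1) % len(self.order)
            if self.backends[name]:
                return name
        return None                     # every backend is down

lb = ToyLoadBalancer({"srv1": True, "srv2": False, "srv3": True})
first, second = lb.route(), lb.route()  # "srv1", then past srv2 to "srv3"
```

From the client's point of view the failed `srv2` is invisible, which is the "resiliency and availability" the slide describes.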

What is in a data center facility?
▪ Data center components require significant infrastructure to support the
center's hardware and software:
o Power subsystems
o Uninterruptible power supplies (UPS)
o Ventilation
o Cooling systems
o Fire suppression
o Backup generators
o Connections to external networks.

The standards for data center infrastructure
The most widely adopted standard for data center design and data center infrastructure is ANSI/TIA-942.
It includes standards for ANSI/TIA-942-ready certification, which ensures compliance with one of four
categories of data center tiers rated for levels of redundancy and fault tolerance.

▪ Tier 1: Basic site infrastructure. A Tier 1 data center offers limited protection against
physical events. It has single-capacity components and a single, nonredundant
distribution path.
▪ Tier 2: Redundant-capacity component site infrastructure. This data center offers
improved protection against physical events. It has redundant-capacity components
and a single, nonredundant distribution path.
▪ Tier 3: Concurrently maintainable site infrastructure. This data center protects against
virtually all physical events, providing redundant-capacity components and multiple
independent distribution paths. Each component can be removed or replaced without
disrupting services to end users.
▪ Tier 4: Fault-tolerant site infrastructure. This data center provides the highest levels of
fault tolerance and redundancy. Redundant-capacity components and multiple
independent distribution paths enable concurrent maintainability and one fault
anywhere in the installation without causing downtime.
Types of data centers
▪ Enterprise data centers: These are built, owned, and operated by companies and are
optimized for their end users. Most often they are housed on the corporate campus
▪ Managed services data centers: These data centers are managed by a third party (or a
managed services provider) on behalf of a company. The company leases the equipment and
infrastructure instead of buying it
▪ Colocation data centers: In colocation ("colo") data centers, a company rents space within a
data center owned by others and located off company premises. The colocation data center
hosts the infrastructure: building, cooling, bandwidth, security, etc., while the company
provides and manages the components, including servers, storage, and firewalls
▪ Cloud data centers: In this off-premises form of data center, data and applications are hosted
by a cloud services provider such as Amazon Web Services (AWS), Microsoft (Azure), or IBM
Cloud or other public cloud provider.

Infrastructure evolution
Computing infrastructure has experienced three macro waves of evolution over
the last 65 years:
▪ The first wave saw the shift from proprietary mainframes to x86-based
servers, based on-premises and managed by internal IT teams
▪ A second wave saw widespread virtualization of the infrastructure that
supported applications. This allowed for improved use of resources and
mobility of workloads across pools of physical infrastructure
▪ The third wave finds us in the present, where we are seeing the move to
cloud, hybrid cloud and cloud-native. The latter describes applications born in
the cloud.

Technologies used in Data Center Infrastructure (1)
▪ Liquid cooling: While air cooling remains the norm, liquid cooling technologies, particularly
for high-density installations, are on the rise. In high-density settings, liquid cooling offers a
more cost-effective solution than air cooling, and user-friendly products, changing deployment
methods, and improved manufacturer support are expected to drive wide adoption in the future
▪ New application architecture: The need for sophisticated infrastructure solutions is rising due
to the widespread use of microservices and cloud-native applications. New application
architectures that adapt to the ever-changing requirements of contemporary application
development are expected to become a main focus in the coming years. Serverless computing is
one of these architectural innovations that is getting a lot of attention because of its
scalability and operational efficiency.
▪ Artificial Intelligence: AI will be a major topic of discussion as it becomes more integrated into
data centers, enabling enhanced security, proactive issue prediction, and optimized
operations. Imagine cooling systems driven by AI that dynamically adjust to server loads in
real time, guaranteeing peak performance and minimal energy use. Software with intelligence
will constantly monitor network traffic, identify and isolate cyber threats in advance, and
provide countermeasures before they become more serious.
Technologies used in Data Center Infrastructure (2)
▪ Sustainability: The emphasis will be on approaches that minimize the environmental impact of
data centers. Anticipate a rise in the use of renewable energy sources such as wind and solar,
along with cutting-edge cooling solutions that use as little water as possible. Data center
operators will align their operations with environmentally sustainable practices by prioritizing
energy savings through better server design and AI-powered optimization.
o A sustainable data center minimizes its environmental impact while maintaining reliable
data processing and storage. Prioritizing energy efficiency, using renewable energy
sources, saving water, and reducing waste all help achieve this.
▪ Hybrid Cloud Deployment: More businesses are expected to adopt multi-cloud and hybrid
strategies. A hybrid approach combines the public cloud's agility and scalability with the
security and control of on-premises infrastructure, allowing each workload to be placed where
its requirements are best met. Meeting contemporary enterprises' changing needs in this way
improves both performance and cost-effectiveness.
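Hybrid workload allocation ultimately reduces to a placement policy. The toy rule below keeps sensitive workloads on-premises for control and compliance and sends elastic ones to the public cloud for burst capacity; the field names (`sensitive`, `elastic`) are invented for the example, and a real policy would weigh cost, latency, and data-residency rules as well.

```python
def place_workload(workload):
    """Toy hybrid-cloud placement rule.

    Sensitive workloads stay on-premises; elastic, non-sensitive
    workloads go to the public cloud for scalability.
    """
    if workload["sensitive"]:
        return "on-premises"
    return "public-cloud" if workload["elastic"] else "on-premises"
```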

Emerging technologies shaping the future of Data Center
Infrastructure (1)
▪ Edge computing: It brings data processing closer to the source of data generation, reducing
latency and improving performance for applications like gaming, IoT devices, and autonomous
vehicles.
▪ AI and ML: Data centers are getting smarter with the help of artificial intelligence (AI) and
machine learning. These technologies optimize resource allocation, predict equipment failures
before they happen, and automate routine tasks, making data centers more efficient and
reliable.
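Predicting equipment failures before they happen can start far simpler than deep learning: flag sensor readings that deviate strongly from the norm. The sketch below uses a z-score test on a window of readings; the threshold of three standard deviations is a common rule of thumb, chosen here purely for illustration.

```python
from statistics import mean, stdev

def flag_anomalies(readings, threshold=3.0):
    """Return readings more than `threshold` standard deviations from the mean.

    A crude stand-in for ML-based failure prediction: a fan-speed or
    temperature sample this far out of band is worth investigating.
    """
    if len(readings) < 2:
        return []
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [x for x in readings if abs(x - mu) / sigma > threshold]
```

Production systems replace this with trained models over many correlated signals, but the goal is the same: surface the outlier before it becomes an outage.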
▪ Quantum computing: Quantum computers exploit quantum effects to attack certain classes of
problems far faster than classical machines, making them potentially invaluable for tasks like
cryptography, drug discovery, and weather forecasting. Data centers are exploring how to
integrate quantum computing into their infrastructure to unlock new possibilities
▪ 5G networks: With 5G, we're entering the era of ultra-fast, low-latency wireless connectivity.
Data centers are adapting to support the massive increase in data traffic that 5G brings,
ensuring seamless connectivity for mobile users and IoT devices

Emerging technologies shaping the future of Data Center
Infrastructure (2)
▪ Immersive Technologies (AR/VR): Augmented reality (AR) and virtual reality (VR) are
transforming how we interact with digital content. Data centers are enhancing their
capabilities to deliver immersive experiences by providing high-performance computing and
low-latency networks for AR/VR applications
▪ Modular Data Centers: Modular data centers are like building blocks that can be quickly
assembled and deployed wherever they're needed. These prefabricated units offer scalability,
flexibility, and cost-effectiveness, allowing organizations to expand their infrastructure rapidly
without compromising reliability
▪ Sustainable Practices: As concerns about climate change grow, data centers are adopting
more sustainable practices. From using renewable energy sources to implementing innovative
cooling technologies, data centers are reducing their environmental impact while maintaining
optimal performance

Emerging technologies shaping the future of Data Center
Infrastructure (3)
▪ Blockchain Technology: Blockchain is revolutionizing data security and transparency. Data
centers are exploring how blockchain can be used to secure transactions, authenticate users,
and ensure the integrity of data stored within their infrastructure
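Blockchain's integrity guarantee comes from hash chaining: each record's hash covers the previous record's hash, so tampering with any record invalidates every later link. A minimal sketch (a real blockchain adds consensus, signatures, and distribution on top of this):

```python
import hashlib

GENESIS = "0" * 64  # placeholder "previous hash" for the first block

def chain_blocks(records):
    """Link each record to its predecessor via SHA-256."""
    blocks, prev = [], GENESIS
    for data in records:
        h = hashlib.sha256((prev + data).encode()).hexdigest()
        blocks.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return blocks

def verify_chain(blocks):
    """Recompute every hash; any tampered record breaks the chain."""
    prev = GENESIS
    for b in blocks:
        expected = hashlib.sha256((prev + b["data"]).encode()).hexdigest()
        if b["prev"] != prev or b["hash"] != expected:
            return False
        prev = b["hash"]
    return True
```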
▪ Cybersecurity Innovations: With cyber threats on the rise, data centers are investing in
advanced cybersecurity solutions to protect against breaches and attacks. These include AI-
driven threat detection, encryption technologies, and rigorous access controls to safeguard
sensitive data
▪ Data Center Automation: Automation is streamlining data center operations by reducing
manual intervention and human error. Through automation, data centers can provision
resources, optimize workloads, and troubleshoot issues more efficiently, freeing up IT staff to
focus on strategic initiatives.
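Automated provisioning is usually a policy loop: measure utilization, compare to a target, act. The toy autoscaling decision below shows the "decide" step; the target utilization and server counts are invented for the example.

```python
import math

def scale_decision(cpu_utils, current_servers, target=0.5):
    """Return the server count needed to bring average CPU near the target.

    An autoscaler would call this each cycle and add or remove capacity
    to match, without a human in the loop.
    """
    avg = sum(cpu_utils) / len(cpu_utils)
    return max(1, math.ceil(current_servers * avg / target))
```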

