
Digital & Smart Cities

Paper code: OAE421T


Unit-II
Digital Infrastructure for Smart Cities

Digital Infrastructure for Smart Cities: Digital infrastructure in cities is the set of elements
that are crucial for adapting urban management to a predominantly digital environment.
Consolidated through an integration plan, this infrastructure supports every aspect of urban
management and provides significant competitive advantages such as cost savings, process
automation and knowledge acquisition.

Smart city’s digital infrastructure: Digital infrastructure, often called framework
infrastructure, comprises the elements capable of supporting urban management. In the
case of a city, this includes the companies that lay fiber optic cables, the IoT objects that
collect data and the ability to complete bureaucratic processes online. It is a framework on
which to work.
As digitalization has permeated every sector, demand for this infrastructure has grown.
What initially required only a connection and basic hardware now focuses on more specific
tools: connected transport, Bluetooth beacons in stores, smart lighting, big data
anonymization and management services via apps.
The importance of the integration plan
The digital infrastructure integration plan is an iterative roadmap (repeated over time and
improved with each version) that lays out the steps for adopting digital capabilities in
smart municipalities and regions. Some of the advantages include:
Process simplification
Investing in digital infrastructure tends to help automate and simplify processes, which
improves the productivity and efficiency of city systems. For example, digitalizing public
transport makes it possible to collect statistics that were previously gathered by hand,
alerts security staff when someone tries to ride without paying, enables the fleet to be
managed in real time and helps automate signage, which increases safety.
Information about everything and advanced analytics
As a city’s systems become digital and are integrated into the same database, integrated
data analyses can be conducted. This advanced analytics yields greater knowledge of the
city. Following the example: Which lines are used the most? When are more buses needed
during the day?
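Questions like these can be answered with a simple aggregation once ridership is digitized. A minimal sketch in Python, using invented smart-card tap records (line names and hours are illustrative, not real data):

```python
from collections import Counter

# Hypothetical smart-card taps: (bus_line, hour_of_day)
rides = [
    ("L1", 8), ("L1", 8), ("L2", 9), ("L1", 17),
    ("L3", 8), ("L1", 18), ("L2", 17), ("L1", 8),
]

# Which lines are used the most?
by_line = Counter(line for line, _ in rides)

# When are more buses needed during the day?
by_hour = Counter(hour for _, hour in rides)

busiest_line = by_line.most_common(1)[0][0]  # line with the most taps
peak_hour = by_hour.most_common(1)[0][0]     # hour with the most taps
```

In a real deployment the same aggregation would run over millions of records in a database or stream processor, but the logic is the same.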
Automation and smart machines
Some of the solutions that have emerged thanks to this improved infrastructure include
conversational chatbots, which help citizens with urban procedures, and knowledge
acquisition through machine learning techniques, which can determine bus routes or the
order in which trash bins are collected.
Examples of how digital infrastructure can be improved

Although each city is unique, there are many projects designed to improve digital
infrastructure. Below are three complementary possibilities, none superior to the others.
Connection and IoT: digitalize the physical world
The first step toward digitizing a city is sensorization, and this requires a good connection
at every point. This is why fiber optics and state-of-the-art networks tend to be essential.
After that, alternatives include geolocating objects or converting passive objects into
connected ones: for example, smart traffic lights or trash bins that report when they are
full.
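As a sketch of how a connected trash bin might decide when to request collection (the threshold and field names are illustrative assumptions, not a real device API):

```python
from typing import Optional

# Sketch of a "smart" trash bin: a passive object made connected.
# The threshold and field names are illustrative assumptions.
FULL_THRESHOLD = 0.8  # request collection at 80% full

def bin_status(fill_level: float, bin_id: str) -> Optional[dict]:
    """Return a collection request if the bin needs emptying, else None."""
    if fill_level >= FULL_THRESHOLD:
        return {"bin_id": bin_id, "fill": fill_level, "action": "collect"}
    return None

# Latest fill-level readings from three hypothetical bins
readings = {"B-01": 0.35, "B-02": 0.92, "B-03": 0.81}
to_collect = [r for b, f in readings.items() if (r := bin_status(f, b))]
```

Only the bins over the threshold generate a collection request, which is the basic pattern behind demand-driven collection routes.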
Database integration
Working with a single database or making different databases connect with one another
provides key value in urban management. If the cleaning service knows the route and
location of public transport lines, together with complaints about dirt, it will be able to do its
job more efficiently.
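The cleaning-service example can be sketched as a join across two services' tables in one shared database (the schema and figures are invented for illustration):

```python
import sqlite3

# Two city services sharing one database (illustrative schema).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE bus_stops (stop_id TEXT, district TEXT);
    CREATE TABLE dirt_complaints (district TEXT, complaints INTEGER);
    INSERT INTO bus_stops VALUES ('S1', 'north'), ('S2', 'south');
    INSERT INTO dirt_complaints VALUES ('north', 12), ('south', 3);
""")

# Cleaning service prioritizes stops in districts with the most complaints.
rows = db.execute("""
    SELECT b.stop_id, c.complaints
    FROM bus_stops b JOIN dirt_complaints c ON b.district = c.district
    ORDER BY c.complaints DESC
""").fetchall()
```

The value comes from the join itself: neither table alone tells the cleaning service where to go first.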
Blockchain technology and transparency
Blockchain technology, given the way this data registry is designed, facilitates
transparency. A city that embraces blockchain to index its systems will not only create
traceable mechanisms but also provide its citizens with more transparent information.
Urban sensing and data collection technologies:
Urban sensing and data collection technologies refer to systems and devices used to monitor
and gather data from city environments. These technologies help understand urban dynamics,
improve infrastructure, and enhance city management. Key technologies include:
1. IoT Sensors: These devices monitor environmental factors like air quality, traffic
patterns, and energy consumption. Placed throughout the city, they collect real-time
data to help with urban planning and resource management.
2. Smart Cameras: Used for traffic monitoring, safety, and security, they analyze
movements, identify anomalies, and provide insights for law enforcement and city
planners.
3. Mobile and Wearable Devices: Smartphones and wearable tech (e.g., fitness trackers)
contribute data on mobility, public transportation usage, and citizen health patterns.
4. Drones and Satellites: Used for mapping, surveillance, and environmental monitoring,
drones and satellites provide aerial views and data for urban planning and emergency
response.
5. Crowdsourcing Platforms: Citizens contribute data on issues like road conditions,
noise levels, or pollution, offering a bottom-up approach to urban sensing.
6. Big Data Analytics: The collected data is processed and analyzed to generate
actionable insights for decision-making, improving city services, sustainability, and
residents' quality of life.
These technologies help create smart cities, enhancing efficiency and the overall urban
experience.
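As a minimal illustration of turning collected sensor data into actionable insight (step 6 above), raw IoT readings can be reduced to an alert. Sensor names and the threshold are assumptions for the sketch:

```python
# Sketch: turn raw IoT air-quality readings into an actionable insight.
# Sensor IDs and the alert threshold are illustrative assumptions.
readings = {
    "sensor-north": [42, 55, 61],   # PM2.5, in µg/m³
    "sensor-south": [18, 22, 20],
}
ALERT_PM25 = 50  # hypothetical city alert level

# Average each sensor's readings, then flag locations over the threshold
averages = {sid: sum(v) / len(v) for sid, v in readings.items()}
alerts = [sid for sid, avg in averages.items() if avg > ALERT_PM25]
```

Real pipelines add time windows, anomaly detection and anonymization, but reduction-then-threshold is the core shape of most urban sensing analytics.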

Cloud computing:
Cloud computing is the on-demand access of computing resources—physical servers or
virtual servers, data storage, networking capabilities, application development tools, software,
AI-powered analytic tools and more—over the internet with pay-per-use pricing.
The cloud computing model offers customers greater flexibility and scalability compared to
traditional on-premises infrastructure.
Cloud computing plays a pivotal role in our everyday lives, whether accessing a cloud
application like Google Gmail, streaming a movie on Netflix or playing a cloud-hosted video
game.
Cloud computing has also become indispensable in business settings, from small startups to
global enterprises. Its many business applications include enabling remote work by making
data and applications accessible from anywhere, creating the framework for seamless
omnichannel customer engagement and providing the vast computing power and other
resources needed to take advantage of cutting-edge technologies like generative
AI and quantum computing.
A cloud services provider (CSP) manages cloud-based technology services hosted at a
remote data center and typically makes these resources available for a pay-as-you-go or
monthly subscription fee.
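The pay-as-you-go model amounts to metering usage. A toy billing calculation in Python (rates are invented for illustration, not any provider's actual pricing):

```python
# Toy pay-per-use billing; rates are invented, not a real price list.
RATE_PER_VM_HOUR = 0.05    # USD per virtual-server hour
RATE_PER_GB_MONTH = 0.02   # USD per GB of storage per month

def monthly_bill(vm_hours: float, storage_gb: float) -> float:
    """Charge only for what was actually consumed."""
    return round(vm_hours * RATE_PER_VM_HOUR + storage_gb * RATE_PER_GB_MONTH, 2)

# One VM running all month (≈720 hours) plus 100 GB of storage
bill = monthly_bill(vm_hours=720, storage_gb=100)
```

The contrast with on-premises IT is that the bill goes to zero when usage does, instead of being a fixed capital cost.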
Benefits of cloud computing
Compared to traditional on-premises IT that involves a company owning and maintaining
physical data centers and servers to access computing power, data storage and other resources
(and depending on the cloud services you select), cloud computing offers many benefits,
including the following:
Cost-effectiveness
Cloud computing lets you offload some or all of the expense and effort of purchasing,
installing, configuring and managing mainframe computers and other on-premises
infrastructure. You pay only for cloud-based infrastructure and other computing resources as
you use them.
Increased speed and agility
With cloud computing, your organization can use enterprise applications in minutes instead
of waiting weeks or months for IT to respond to a request, purchase and configure supporting
hardware and install software. This speed empowers users, particularly DevOps and other
development teams, to take advantage of cloud-based software and supporting infrastructure.
Unlimited scalability
Cloud computing provides elasticity and self-service provisioning, so instead of purchasing
excess capacity that sits unused during slow periods, you can scale capacity up and down in
response to spikes and dips in traffic. You can also use your cloud provider’s global network
to spread your applications closer to users worldwide.
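Elastic scaling can be sketched as capacity that follows traffic rather than the peak (all figures are illustrative):

```python
import math

# Sketch of elastic scaling: capacity follows traffic instead of being
# provisioned for the peak. Figures are illustrative.
REQUESTS_PER_SERVER = 1000  # hypothetical capacity of one server

def servers_needed(requests_per_min: int) -> int:
    return max(1, math.ceil(requests_per_min / REQUESTS_PER_SERVER))

traffic = [200, 4500, 12000, 800]            # demand over the day
plan = [servers_needed(r) for r in traffic]  # scale up and down with demand
peak_only = max(plan)                        # what fixed on-prem capacity would need
```

With fixed infrastructure you pay for the peak around the clock; with elasticity, capacity tracks the dips as well as the spikes.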
Enhanced strategic value
Cloud computing enables organizations to use various technologies and the most up-to-date
innovations to gain a competitive edge. For instance, in retail, banking and other customer-
facing industries, generative AI-powered virtual assistants deployed over the cloud can
deliver better customer response time and free up teams to focus on higher-level work. In
manufacturing, teams can collaborate and use cloud-based software to monitor real-time data
across logistics and supply chain processes.
Origins of cloud computing
The origins of cloud computing technology go back to the early 1960s, when Dr. Joseph Carl
Robnett Licklider, an American computer scientist and psychologist known as the "father of
cloud computing", introduced the earliest ideas of global networking in a series of memos
discussing an Intergalactic Computer Network. However, it wasn’t until the early 2000s that
modern cloud infrastructure for business emerged.
In 2002, Amazon Web Services started cloud-based storage and computing services. In 2006,
it introduced Elastic Compute Cloud (EC2), an offering that allowed users to rent virtual
computers to run their applications. That same year, Google introduced the Google Apps suite
(now called Google Workspace), a collection of SaaS productivity applications. In 2009,
Microsoft launched its first SaaS application, Microsoft Office 2011. Today, Gartner
predicts that worldwide end-user spending on the public cloud will total USD 679 billion
and exceed USD 1 trillion in 2027.
Cloud computing components: The following are a few of the most integral components of
modern cloud computing architecture.
Data centers
CSPs own and operate remote data centers that house physical or bare metal servers, cloud
storage systems and other physical hardware that create the underlying infrastructure and
provide the physical foundation for cloud computing.
Networking capabilities
In cloud computing, high-speed networking connections are crucial. Typically, an internet
connection known as a wide-area network (WAN) connects front-end users (for example,
client-side interface made visible through web-enabled devices) with back-end functions (for
example, data centers and cloud-based applications and services). Other advanced cloud
computing networking technologies, including load balancers, content delivery networks
(CDNs) and software-defined networking (SDN), are also incorporated to ensure data flows
quickly, easily and securely between front-end users and back-end resources.
Virtualization
Cloud computing relies heavily on the virtualization of IT infrastructure—servers, operating
system software, networking and other infrastructure that’s abstracted using special software
so that it can be pooled and divided irrespective of physical hardware boundaries. For
example, a single hardware server can be divided into multiple virtual servers. Virtualization
enables cloud providers to make maximum use of their data center resources.
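Dividing a physical server into virtual servers can be sketched as carving a pooled resource (vCPU sizes are illustrative):

```python
# Sketch: one physical server pooled into virtual servers.
# Capacity and request sizes are illustrative.
PHYSICAL_VCPUS = 64

def carve_vms(requests: list[int]) -> list[int]:
    """Grant each requested VM size while physical capacity remains."""
    granted, free = [], PHYSICAL_VCPUS
    for size in requests:
        if size <= free:
            granted.append(size)
            free -= size
    return granted

# The 32-vCPU request doesn't fit after 16+16+8 are granted, but 4 does
vms = carve_vms([16, 16, 8, 32, 4])
```

Real hypervisors also oversubscribe and migrate workloads, but the essential idea is that virtual capacity is sliced out of shared physical hardware.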
Cloud computing services
IaaS (Infrastructure-as-a-Service), PaaS (Platform-as-a-Service), SaaS (Software-as-a-
Service) and serverless computing are the most common models of cloud services, and it’s
not uncommon for an organization to use some combination of all four.
IaaS (Infrastructure-as-a-Service)
IaaS (Infrastructure-as-a-Service) provides on-demand access to fundamental computing
resources—physical and virtual servers, networking and storage—over the internet on a pay-
as-you-go basis. IaaS enables end users to scale and shrink resources on an as-needed basis,
reducing the need for high up-front capital expenditures or unnecessary on-premises or
"owned" infrastructure and for overbuying resources to accommodate periodic spikes in
usage.
According to a Business Research Company report, the IaaS market is predicted to grow
rapidly in the next few years, reaching $212.34 billion in 2028 at a compound annual
growth rate (CAGR) of 14.2%.
PaaS (Platform-as-a-Service)
PaaS (Platform-as-a-Service) provides software developers with an on-demand platform—
hardware, complete software stack, infrastructure and development tools—for running,
developing and managing applications without the cost, complexity and inflexibility of
maintaining that platform on-premises. With PaaS, the cloud provider hosts everything at its
data center: servers, networks, storage, operating system software, middleware and
databases. Developers simply pick from a menu to spin up the servers and environments
they need to run, build, test, deploy, maintain, update and scale applications.
Today, PaaS is typically built around containers, a virtualized compute model one step
removed from virtual servers. Containers virtualize the operating system, enabling developers
to package an application with only the operating system services it needs, so it can run on
any platform without modification and without the need for middleware.
Red Hat® OpenShift® is a popular PaaS built around Docker containers and Kubernetes, an
open source container orchestration solution that automates deployment, scaling, load
balancing and more for container-based applications.
SaaS (Software-as-a-Service)
SaaS (Software-as-a-Service), also known as cloud-based software or cloud applications, is
application software hosted in the cloud. Users access SaaS through a web browser, a
dedicated desktop client or an API that integrates with a desktop or mobile operating system.
Cloud service providers offer SaaS based on a monthly or annual subscription fee. They may
also provide these services through pay-per-usage pricing.
In addition to the cost savings, time-to-value and scalability benefits of cloud, SaaS offers the
following:
 Automatic upgrades: With SaaS, users take advantage of new features as soon as the
cloud service provider adds them, without orchestrating an on-premises upgrade.
 Protection from data loss: Because SaaS stores application data in the cloud with the
application, users don’t lose data if their device crashes or breaks.
SaaS is the primary delivery model for most commercial software today. Hundreds of SaaS
solutions exist, from focused industry and broad administrative (for example, Salesforce) to
robust enterprise database and artificial intelligence (AI) software. According to an
International Data Corporation (IDC) survey, SaaS applications represent the largest cloud
computing segment, accounting for more than 48% of the $778 billion worldwide cloud
software revenue.

Serverless computing
Serverless computing, or simply serverless, is a cloud computing model that offloads all the
back-end infrastructure management tasks, including provisioning, scaling, scheduling and
patching to the cloud provider. This frees developers to focus all their time and effort on the
code and business logic specific to their applications.
Moreover, serverless runs application code on a per-request basis only and automatically
scales the supporting infrastructure up and down in response to the number of requests. With
serverless, customers pay only for the resources used when the application runs; they never
pay for idle capacity.
FaaS, or Function-as-a-Service, is often confused with serverless computing when, in fact,
it’s a subset of serverless. FaaS allows developers to run portions of application code (called
functions) in response to specific events. Everything besides the code—physical hardware,
virtual machine (VM) operating system and web server software management—is
provisioned automatically by the cloud service provider in real-time as the code runs and is
spun back down once the execution is complete. Billing starts when execution starts and
stops when execution stops.
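The FaaS model described above can be sketched in Python (the event shape, handler and price are hypothetical, not any provider's actual API or rate):

```python
import time

# Sketch of the FaaS model: a function runs per event, and billing
# covers only execution time. Names and the rate are hypothetical.
PRICE_PER_MS = 0.000002  # USD, invented for illustration

def handler(event: dict) -> dict:
    """A hypothetical function triggered by an 'image uploaded' event."""
    return {"thumbnail_of": event["key"], "status": "ok"}

def invoke(event: dict) -> tuple[dict, float]:
    """The platform's role: run the function, bill only while it runs."""
    start = time.perf_counter()
    result = handler(event)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms * PRICE_PER_MS

result, cost = invoke({"key": "photo.jpg"})
```

Everything around `handler` (provisioning, scaling, teardown) is what the provider automates; the developer writes only the function body.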
Types of cloud computing
Public cloud
A public cloud is a type of cloud computing in which a cloud service provider makes
computing resources available to users over the public internet. These include SaaS
applications, individual virtual machines (VMs), bare metal computing hardware, complete
enterprise-grade infrastructures and development platforms. These resources might be
accessible for free or according to subscription-based or pay-per-usage pricing models.
The public cloud provider owns, manages and assumes all responsibility for the data centers,
hardware and infrastructure on which its customers’ workloads run. It typically provides
high-bandwidth network connectivity to ensure high performance and rapid access to
applications and data.
Public cloud is a multi-tenant environment where all customers pool and share the cloud
provider’s data center infrastructure and other resources. In the world of the leading public
cloud vendors, such as Amazon Web Services (AWS), Google Cloud, IBM Cloud®,
Microsoft Azure and Oracle Cloud, these customers can number in the millions.
Most enterprises have moved portions of their computing infrastructure to the public cloud
since public cloud services are elastic and readily scalable, flexibly adjusting to meet
changing workload demands. The promise of greater efficiency and cost savings through
paying only for what they use attracts customers to the public cloud, while others seek to
reduce spending on hardware and on-premises infrastructure. Gartner predicts that by 2026,
75% of organizations will adopt a digital transformation model predicated on cloud as the
fundamental underlying platform.

Private cloud
A private cloud is a cloud environment where all cloud infrastructure and computing
resources are dedicated to one customer only. Private cloud combines many benefits of cloud
computing—including elasticity, scalability and ease of service delivery—with the access
control, security and resource customization of on-premises infrastructure.
A private cloud is typically hosted on-premises in the customer’s data center. However, it can
also be hosted on an independent cloud provider’s infrastructure or built on rented
infrastructure housed in an offsite data center.
Many companies choose a private cloud over a public cloud environment to meet their
regulatory compliance requirements. Entities like government agencies, healthcare
organizations and financial institutions often opt for private cloud settings for workloads that
deal with confidential documents, personally identifiable information (PII), intellectual
property, medical records, financial data or other sensitive data.
By building private cloud architecture according to cloud-native principles, an organization
can quickly move workloads to a public cloud or run them within a hybrid cloud (see below)
environment whenever ready.
Hybrid cloud
A hybrid cloud is just what it sounds like: a combination of public cloud, private cloud and
on-premises environments. Specifically (and ideally), a hybrid cloud connects a combination
of these three environments into a single, flexible infrastructure for running the organization’s
applications and workloads.
At first, organizations turned to hybrid cloud computing models primarily to migrate portions
of their on-premises data into private cloud infrastructure and then connect that infrastructure
to public cloud infrastructure hosted off-premises by cloud vendors. This process was done
through a packaged hybrid cloud solution like Red Hat® OpenShift® or middleware and IT
management tools to create a "single pane of glass." Teams and administrators rely on this
unified dashboard to view their applications, networks and systems.
Today, hybrid cloud architecture has expanded beyond physical connectivity and cloud
migration to offer a flexible, secure and cost-effective environment that supports the
portability and automated deployment of workloads across multiple environments. This
feature enables an organization to meet its technical and business objectives more effectively
and cost-efficiently than with a public or private cloud alone. For instance, a hybrid cloud
environment is ideal for DevOps and other teams to develop and test web applications. This
frees organizations from purchasing and expanding the on-premises physical hardware
needed to run application testing, offering faster time to market. Once a team has developed
an application in the public cloud, they may move it to a private cloud environment based on
business needs or security factors.
A public cloud also allows companies to quickly scale resources in response to unplanned
spikes in traffic without impacting private cloud workloads, a feature known as cloud
bursting. Streaming services such as Amazon’s use cloud bursting to support increased
viewership traffic when they release new shows.
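Cloud bursting can be sketched as simple overflow routing (capacity figures are illustrative):

```python
# Sketch of cloud bursting: demand beyond private-cloud capacity
# overflows to the public cloud. Figures are illustrative.
PRIVATE_CAPACITY = 5000  # requests/min the private cloud can absorb

def route(requests_per_min: int) -> dict:
    private = min(requests_per_min, PRIVATE_CAPACITY)
    return {"private": private, "public_burst": requests_per_min - private}

normal = route(3000)     # fits entirely on the private cloud
premiere = route(12000)  # a launch-day spike bursts to the public cloud
```

The private workloads are untouched in both cases; only the overflow is sent out, and it disappears when demand falls back below capacity.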
Most enterprise organizations today rely on a hybrid cloud model because it offers greater
flexibility, scalability and cost optimization than traditional on-premises infrastructure setups.
According to the IBM Transformation Index: State of Cloud, more than 77% of businesses
and IT professionals have adopted a hybrid cloud approach.
Multicloud
Multicloud uses two or more clouds from two or more different cloud providers. A
multicloud environment can be as simple as email SaaS from one vendor and image editing
SaaS from another. But when enterprises talk about multicloud, they typically refer to using
multiple cloud services—including SaaS, PaaS and IaaS services—from two or more leading
public cloud providers.
Organizations choose multicloud to avoid vendor lock-in, to have more services to select
from and to access more innovation. With multicloud, organizations can choose and
customize a unique set of cloud features and services to meet their business needs. This
freedom of choice includes selecting “best-of-breed” technologies from any CSP, as needed
or as they emerge, rather than being locked into the offerings of a single vendor. For example,
an organization may choose AWS for its global reach in web hosting, IBM Cloud for data
analytics and machine learning platforms and Microsoft Azure for its security features.
A multicloud environment also reduces exposure to licensing, security and compatibility
issues that can result from "shadow IT"— any software, hardware or IT resource used on an
enterprise network without the IT department’s approval and often without IT’s knowledge or
oversight.
The modern hybrid multicloud
Today, most enterprise organizations use a hybrid multicloud model. Apart from the
flexibility to choose the most cost-effective cloud service, hybrid multicloud offers the most
control over workload deployment, enabling organizations to operate more efficiently,
improve performance and optimize costs. According to an IBM® Institute for Business Value
study, the value derived from a full hybrid multicloud platform technology and operating
model at scale is two-and-a-half times the value derived from a single-platform, single-cloud
vendor approach.
Yet the modern hybrid multicloud model comes with more complexity. The more clouds you
use—each with its own management tools, data transmission rates and security protocols—
the more difficult it can be to manage your environment. With over 97% of enterprises
operating on more than one cloud and most organizations running 10 or more clouds, a
hybrid cloud management approach has become crucial. Hybrid multicloud management
platforms provide visibility across multiple provider clouds through a central dashboard
where development teams can see their projects and deployments, operations teams can
monitor clusters and nodes and the cybersecurity staff can monitor for threats.
Cloud security
Traditionally, security concerns have been the primary obstacle for organizations considering
cloud services, mainly public cloud services. Maintaining cloud security demands different
procedures and employee skillsets than in legacy IT environments. Some cloud security best
practices include the following:
 Shared responsibility for security: Generally, the cloud service provider is responsible
for securing cloud infrastructure, and the customer is responsible for protecting its
data within the cloud. However, it’s also essential to clearly define data ownership
between private and public third parties.
 Data encryption: Data should be encrypted while at rest, in transit and in use.
Customers need to maintain complete control over security keys and hardware
security modules.
 Collaborative management: Proper communication and clear, understandable
processes between IT, operations and security teams will ensure seamless cloud
integrations that are secure and sustainable.
 Security and compliance monitoring: This begins with understanding all regulatory
compliance standards applicable to your industry and establishing active monitoring
of all connected systems and cloud-based services to maintain visibility of all data
exchanges across all environments, on-premises, private cloud, hybrid cloud and
edge.
Cloud security is constantly changing to keep pace with new threats. Today’s CSPs offer a
wide array of cloud security management tools, including the following:
 Identity and access management (IAM): IAM tools and services that automate policy-
driven enforcement protocols for all users attempting to access both on-premises and
cloud-based services.
 Data loss prevention (DLP): DLP services that combine remediation alerts, data
encryption and other preventive measures to protect all stored data, whether at rest or
in motion.
 Security information and event management (SIEM): SIEM is a comprehensive
security orchestration solution that automates threat monitoring, detection and
response in cloud-based environments. SIEM technology uses artificial intelligence
(AI)-driven technologies to correlate log data across multiple platforms and digital
assets. This allows IT teams to successfully apply their network security protocols,
enabling them to react to potential threats quickly.
 Automated data compliance platforms: Automated software solutions provide
compliance controls and centralized data collection to help organizations adhere to
regulations specific to their industry. Regular compliance updates can be baked into
these platforms so organizations can adapt to ever-changing regulatory compliance
standards.
Cloud sustainability
Sustainability in business, a company’s strategy to reduce the negative environmental
impact of its operations in a particular market, has become an essential corporate
governance mandate. Moreover, Gartner predicts that by 2025, the carbon emissions of
hyperscale cloud services will be a top-three criterion in cloud purchase decisions.
As companies strive to advance their sustainability objectives, cloud computing has evolved
to play a significant role in helping them reduce their carbon emissions and manage climate-
related risks. For instance, traditional data centers require power supplies and cooling
systems, which depend on large amounts of electrical power. By migrating IT resources and
applications to the cloud, organizations not only enhance operational and cost efficiencies
but also boost overall energy efficiency through pooled CSP resources.
All major cloud players have made net-zero commitments to reduce their carbon footprints
and help clients reduce the energy they typically consume using an on-premises setup. For
instance, IBM is driven by sustainable procurement initiatives to reach net zero by 2030. By
2025, 75% of the energy procured for IBM Cloud’s worldwide data centers will come from
renewable sources.

Edge Computing:
Edge computing optimizes Internet devices and web applications by bringing computing
closer to the source of the data. This minimizes the need for long distance communications
between client and server, which reduces latency and bandwidth usage.
Edge computing is a networking philosophy focused on bringing computing as close to the
source of data as possible in order to reduce latency and bandwidth use. In simpler terms,
edge computing means running fewer processes in the cloud and moving those processes to
local places, such as on a user’s computer, an IoT device, or an edge server. Bringing
computation to the network’s edge minimizes the amount of long-distance communication
that has to happen between a client and server.
Network edge: For Internet devices, the network edge is where the device, or the local
network containing the device, communicates with the Internet. The edge is a bit of a fuzzy
term; for example, a user’s computer or the processor inside an IoT camera can be
considered the network edge, but the user’s router, ISP, or local edge server are also
considered the edge. The important takeaway is that the edge of the network is geographically
close to the device, unlike origin servers and cloud servers, which can be very far from the
devices they communicate with.
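The effect of that distance on latency can be estimated with a back-of-the-envelope calculation (the propagation speed and distances are rough illustrative figures; real networks add routing and processing overhead):

```python
# Back-of-the-envelope latency: round-trip time grows with distance.
# The figure below is a rough speed of light in fiber; real paths are slower.
SIGNAL_KM_PER_MS = 200

def round_trip_ms(distance_km: float) -> float:
    """Ideal propagation-only round-trip time in milliseconds."""
    return 2 * distance_km / SIGNAL_KM_PER_MS

cloud_rtt = round_trip_ms(3000)  # a distant cloud data center
edge_rtt = round_trip_ms(30)     # a nearby edge server
```

Even in this idealized model the edge server is two orders of magnitude closer in time, which is the physical basis for edge computing's latency advantage.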
What differentiates edge computing from other computing models?
The first computers were large, bulky machines that could only be accessed directly or via
terminals that were basically an extension of the computer. With the invention of personal
computers, computing could take place in a much more distributed fashion. For a time,
personal computing was the dominant computing model. Applications ran and data was
stored locally on a user's device, or sometimes within an on-premise data center.
Cloud computing, a more recent development, offered a number of advantages over this
locally based, on-premise computing. Cloud services are centralized in a vendor-managed
"cloud" (or collection of data centers) and can be accessed from any device over the Internet.
However, cloud computing can introduce latency because of the distance between users and
the data centers where cloud services are hosted. Edge computing moves computing closer to
end users to minimize the distance that data has to travel, while still retaining the centralized
nature of cloud computing.
To summarize:
 Early computing: Centralized applications only running on one isolated computer
 Personal computing: Decentralized applications running locally
 Cloud computing: Centralized applications running in data centers
 Edge computing: Centralized applications running close to users, either on the device
itself or on the network edge
Example of edge computing:
Consider a building secured with dozens of high-definition IoT video cameras. These are
"dumb" cameras that simply output a raw video signal and continuously stream that signal to
a cloud server. On the cloud server, the video output from all the cameras is put through a
motion-detection application to ensure that only clips featuring activity are saved to the
server’s database. This means there is a constant and significant strain on the building’s
Internet infrastructure, as significant bandwidth gets consumed by the high volume of video
footage being transferred. Additionally, there is very heavy load on the cloud server that has
to process the video footage from all the cameras simultaneously.
Now imagine that the motion sensor computation is moved to the network edge. What if each
camera used its own internal computer to run the motion-detecting application and then sent
footage to the cloud server as needed? This would result in a significant reduction in
bandwidth use, because much of the camera footage will never have to travel to the cloud
server.
Additionally, the cloud server would now only be responsible for storing the important
footage, meaning that the server could communicate with a higher number of cameras
without getting overloaded. This is what edge computing looks like.
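The camera scenario above can be sketched in a few lines of Python. This is an illustrative sketch, not a real camera API: frames are modeled as flat lists of pixel intensities, the motion check is a simple mean absolute difference, and the threshold and helper names are made up.

```python
# Hypothetical edge-side motion filtering: only frames that show motion
# ever leave the camera for the cloud server.

MOTION_THRESHOLD = 10.0  # assumed mean pixel change that counts as motion

def has_motion(prev_frame, frame, threshold=MOTION_THRESHOLD):
    """Return True if the average pixel change between frames exceeds the threshold."""
    diff = sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)
    return diff > threshold

def filter_on_edge(frames):
    """Yield only frames that show motion; the rest are discarded on-device."""
    prev = frames[0]
    for frame in frames[1:]:
        if has_motion(prev, frame):
            yield frame
        prev = frame

# Example: a static scene, a sudden change, then static again.
static = [0] * 100
moving = [50] * 100
uploaded = list(filter_on_edge([static, static, moving, static]))
print(len(uploaded))  # → 2 (only the transitions around the event are sent)
```

Everything filtered out here is bandwidth that never touches the building's Internet connection, which is exactly the saving the example describes.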
Other possible use cases for edge computing:
Edge computing can be incorporated into a wide variety of applications, products, and
services. A few possibilities include:
 Security system monitoring: As described above.
 IoT devices: Smart devices that connect to the Internet can benefit from running code
on the device itself, rather than in the cloud, for more efficient user interactions.
 Self-driving cars: Autonomous vehicles need to react in real time, without waiting for
instructions from a server.
 More efficient caching: By running code on a CDN edge network, an application can
customize how content is cached to more efficiently serve content to users.
 Medical monitoring devices: It is crucial for medical devices to respond in real time
without waiting to hear from a cloud server.
 Video conferencing: Interactive live video takes quite a bit of bandwidth, so moving
backend processes closer to the source of the video can decrease lag and latency.
Benefits of edge computing:
Cost savings
As seen in the example above, edge computing helps minimize bandwidth use and server
resources. Bandwidth and cloud resources are finite and cost money. With every household
and office becoming equipped with smart cameras, printers, thermostats, and even toasters,
Statista predicts that by 2025 there will be over 75 billion IoT devices installed worldwide. In
order to support all those devices, significant amounts of computation will have to be moved
to the edge.
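A back-of-the-envelope calculation illustrates the bandwidth saving. The numbers below (50 cameras, 4 Mbps streams, 5% of footage containing motion) are assumptions for illustration, not figures from the text:

```python
# Rough upstream-bandwidth comparison: continuous streaming vs. uploading
# only the clips that an edge motion detector flags as interesting.

cameras = 50                 # assumed number of cameras in the building
bitrate_mbps = 4.0           # assumed per-camera HD stream bitrate
activity_fraction = 0.05     # assumed share of footage that contains motion

always_on = cameras * bitrate_mbps
edge_filtered = always_on * activity_fraction

print(f"continuous streaming: {always_on:.0f} Mbps")   # 200 Mbps
print(f"edge-filtered upload: {edge_filtered:.0f} Mbps")  # 10 Mbps
```

Under these assumptions the edge-filtered setup uses a twentieth of the bandwidth, which is why pushing computation outward scales to large device counts.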
Performance
Another significant benefit of moving processes to the edge is to reduce latency. Every time a
device needs to communicate with a distant server somewhere, that creates a delay. For
example, two coworkers in the same office chatting over an IM platform might experience a
sizable delay because each message has to be routed out of the building, communicate with a
server somewhere across the globe, and be brought back before it appears on the recipient’s
screen. If that process is brought to the edge, and the company’s internal router is in charge of
transferring intra-office chats, that noticeable delay would not exist.
Similarly, when users of all kinds of web applications run into processes that have to
communicate with an external server, they will encounter delays. The duration of these delays
will vary based upon their available bandwidth and the location of the server, but these delays
can be avoided altogether by bringing more processes to the network edge.
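The effect of distance on delay can be made concrete with a rough propagation-time estimate. The distances are assumptions, and a real round trip also adds routing, queuing, and processing time, so these figures are lower bounds only:

```python
# Minimum round-trip propagation delay over fiber for two assumed distances:
# a server across the globe vs. an edge server in the same city.

SPEED_IN_FIBER_KM_S = 200_000  # light in fiber travels at roughly 2/3 of c

def round_trip_ms(distance_km):
    """Round-trip propagation delay in milliseconds, ignoring processing time."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

print(f"server across the globe (10,000 km): {round_trip_ms(10_000):.0f} ms")  # 100 ms
print(f"edge server in the same city (50 km): {round_trip_ms(50):.1f} ms")     # 0.5 ms
```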
New functionality
In addition, edge computing can provide new functionality that wasn’t previously available.
For example, a company can use edge computing to process and analyze their data at the
edge, which makes it possible to do so in real time.
The key benefits of edge computing are:
 Decreased latency
 Decrease in bandwidth use and associated cost
 Decrease in server resources and associated cost
 Added functionality
Drawbacks of edge computing:
One drawback of edge computing is that it can increase attack vectors. With the addition of
more "smart" devices into the mix, such as edge servers and IoT devices that have robust
built-in computers, there are new opportunities for malicious attackers to compromise these
devices.
Another drawback with edge computing is that it requires more local hardware. For example,
while an IoT camera needs a built-in computer to send its raw video data to a web server, it
would require a much more sophisticated computer with more processing power in order for
it to run its own motion-detection algorithms. But the dropping costs of hardware are making
it cheaper to build smarter devices.
One way to completely mitigate the need for extra hardware is to take advantage of edge
servers. For example, with Cloudflare’s network of 330 geographically distributed edge
locations, Cloudflare customers can have edge code running worldwide using Cloudflare
Workers.
Urban sensing and data collection technologies refer to systems and devices used to monitor
and gather data from city environments. These technologies help understand urban dynamics,
improve infrastructure, and enhance city management. Key technologies include:
1. IoT Sensors: These devices monitor environmental factors like air quality, traffic
patterns, and energy consumption. Placed throughout the city, they collect real-time
data to help with urban planning and resource management.
2. Smart Cameras: Used for traffic monitoring, safety, and security, they analyze
movements, identify anomalies, and provide insights for law enforcement and city
planners.
3. Mobile and Wearable Devices: Smartphones and wearable tech (e.g., fitness
trackers) contribute data on mobility, public transportation usage, and citizen health
patterns.
4. Drones and Satellites: Used for mapping, surveillance, and environmental
monitoring, drones and satellites provide aerial views and data for urban planning and
emergency response.
5. Crowdsourcing Platforms: Citizens contribute data on issues like road conditions,
noise levels, or pollution, offering a bottom-up approach to urban sensing.
6. Big Data Analytics: The collected data is processed and analyzed to generate
actionable insights for decision-making, improving city services, sustainability, and
residents' quality of life.
These technologies help create smart cities, enhancing efficiency and the overall urban
experience.
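As a small illustration of step 6 (big data analytics), the sketch below aggregates air-quality readings by district and flags any district exceeding an alert threshold. All sensor IDs, readings, and the threshold are made-up values for illustration:

```python
# Turn raw urban sensor readings into a per-district summary that a city
# dashboard could act on.

from collections import defaultdict
from statistics import mean

readings = [
    {"sensor": "air-01", "district": "north", "pm25": 12.0},
    {"sensor": "air-02", "district": "north", "pm25": 18.0},
    {"sensor": "air-03", "district": "south", "pm25": 41.0},
    {"sensor": "air-04", "district": "south", "pm25": 39.0},
]

ALERT_PM25 = 35.0  # hypothetical alert threshold (µg/m³)

by_district = defaultdict(list)
for r in readings:
    by_district[r["district"]].append(r["pm25"])

for district, values in sorted(by_district.items()):
    avg = mean(values)
    flag = "ALERT" if avg > ALERT_PM25 else "ok"
    print(f"{district}: avg PM2.5 {avg:.1f} ({flag})")
```

The same group-then-summarize pattern applies to traffic counts, noise levels, or energy readings; only the field names change.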
Data centers in smart cities
Data centers in smart cities play a crucial role in managing, processing, and storing the vast
amounts of data generated by various urban sensing and data collection technologies. Their
importance lies in supporting the digital infrastructure that powers smart city initiatives.
Here’s how data centers function within smart cities:
1. Data Storage and Processing:
 Smart cities generate enormous data from IoT sensors, cameras, traffic systems, and
citizen services. Data centers store and process this information, enabling real-time
decision-making and efficient urban management.
2. Cloud Computing:
 Many smart cities rely on cloud services for scalable computing power and storage.
Data centers act as the backbone for these cloud services, facilitating data processing
and analytics without the need for localized infrastructure.
3. Edge Computing:
 To reduce latency, some smart cities deploy edge data centers closer to the source of
data generation. These localized centers process data at the edge of the network,
ensuring faster response times, especially for critical applications like traffic
management and emergency services.
4. Energy Efficiency and Sustainability:
 Smart cities prioritize sustainability, and data centers are a key component. Many data
centers adopt energy-efficient technologies, renewable energy sources, and cooling
systems to reduce their environmental impact and align with the green goals of smart
city initiatives.
5. Security and Privacy:
 Data centers are responsible for maintaining the security and privacy of sensitive
urban data, such as surveillance footage, health data, and personal information. They
implement strong cybersecurity measures and compliance with regulations (e.g.,
GDPR) to protect citizens' data.
6. Smart Grids and Infrastructure:
 In smart cities, data centers help manage smart grids, optimizing energy distribution
and consumption. They also play a role in infrastructure management, such as
monitoring water supply systems, waste management, and public utilities.
7. Supporting AI and Machine Learning:
 Advanced analytics, AI, and machine learning require significant computational
resources. Data centers in smart cities support these technologies, allowing for
predictive analytics and automation in areas like traffic control, crime prevention, and
energy optimization.
Data centers are thus the digital hub of smart cities, enabling the seamless operation of
various connected systems that improve city living.
Cybersecurity:
Cybersecurity is the practice of protecting systems, networks, and programs from digital
attacks. These cyberattacks are usually aimed at accessing, changing, or destroying sensitive
information; extorting money from users via ransomware; or interrupting normal business
processes.
Implementing effective cybersecurity measures is particularly challenging today because
there are more devices than people, and attackers are becoming more innovative.
A successful cybersecurity approach has multiple layers of protection spread across the
computers, networks, programs, or data that one intends to keep safe. In an organization, the
people, processes, and technology must all complement one another to create an effective
defense from cyber attacks. A unified threat management system can automate integrations
across select Cisco Security products and accelerate key security operations functions:
detection, investigation, and remediation.
People
Users must understand and comply with basic data security principles like choosing strong
passwords, being wary of attachments in email, and backing up data.
Processes
Organizations must have a framework for how they deal with both attempted and successful
cyber attacks. One well-respected framework, the NIST Cybersecurity Framework, can guide you.
It explains how you can identify attacks, protect systems, detect and respond to threats, and
recover from successful attacks.
Technology
Technology is essential to giving organizations and individuals the computer security tools
needed to protect themselves from cyber attacks. Three main entities must be protected:
endpoint devices like computers, smart devices, and routers; networks; and the cloud.
Common technology used to protect these entities include next-generation firewalls, DNS
filtering, malware protection, antivirus software, and email security solutions.
Importance of Cyber Security:
In today’s connected world, everyone benefits from advanced cyber defense programs. At an
individual level, a cybersecurity attack can result in everything from identity theft, to
extortion attempts, to the loss of important data like family photos. Everyone relies on critical
infrastructure like power plants, hospitals, and financial service companies. Securing these
and other organizations is essential to keeping our society functioning.
Everyone also benefits from the work of cyberthreat researchers, like the team of 250 threat
researchers at Talos, who investigate new and emerging threats and cyber attack strategies.
They reveal new vulnerabilities, educate the public on the importance of cybersecurity, and
strengthen open source tools. Their work makes the Internet safer for everyone.
Types of cybersecurity threats
Phishing
Phishing is the practice of sending fraudulent emails that resemble emails from reputable
sources. The aim is to steal sensitive data like credit card numbers and login information. It’s
the most common type of cyber attack. You can help protect yourself through education or a
technology solution that filters malicious emails.
Social engineering
Social engineering is a tactic that adversaries use to trick you into revealing sensitive
information. They can solicit a monetary payment or gain access to your confidential data.
Social engineering can be combined with any of the other threats listed here to make you more
likely to click on links, download malware, or trust a malicious source.
Ransomware
Ransomware is a type of malicious software. It is designed to extort money by blocking
access to files or the computer system until the ransom is paid. Paying the ransom does not
guarantee that the files will be recovered or the system restored.
Malware
Malware is a type of software designed to gain unauthorized access or to cause damage to a
computer.
Cybersecurity and privacy challenges in smart city infrastructures:
Smart city infrastructures face several cybersecurity and privacy challenges due to their
interconnected systems and the vast amounts of personal and sensitive data they collect.
These challenges arise from the reliance on technology like IoT sensors, data centers, and
cloud services. Below are some of the major concerns:
1. Data Breaches and Unauthorized Access:
 Smart cities generate large volumes of personal and sensitive data (e.g., traffic
patterns, healthcare, surveillance footage). If not secured properly, this data is
vulnerable to breaches and unauthorized access, leading to identity theft, surveillance
abuse, or the exposure of private information.
2. IoT Vulnerabilities:
 Many smart city systems rely on Internet of Things (IoT) devices, which are often
difficult to secure due to their limited computing power and mass deployment. These
devices can become entry points for cyberattacks, such as Distributed Denial of
Service (DDoS) attacks or ransomware, compromising the entire network.
3. Data Privacy Concerns:
 The extensive data collection required for services like traffic monitoring, smart
utilities, and public safety can infringe on citizens' privacy. Continuous surveillance or
personal data tracking (e.g., location data, shopping habits) raises concerns about how
this data is used, who controls it, and whether it could be misused or sold to third
parties.
4. Interconnected Systems Risk:
 The integration of multiple city services (e.g., transportation, energy, healthcare)
increases the risk of cascading failures. If one system is compromised, it can lead to a
wider network attack, disrupting essential services such as power grids, emergency
response systems, or public transportation.
5. Legacy Systems and Infrastructures:
 Many cities still use older, outdated infrastructure that may not have been designed
with cybersecurity in mind. Integrating these legacy systems into a smart city network
can introduce vulnerabilities that hackers can exploit.
6. Lack of Standardization:
 There is no universal cybersecurity standard for smart cities. Different vendors and
systems may have different levels of security, creating inconsistencies and gaps. The
lack of common frameworks for securing urban infrastructures leads to fragmentation
and difficulty in enforcing strong cybersecurity policies across all components.
7. Ransomware and Malware Attacks:
 Critical smart city services like water management, transportation, and electricity
grids can be targeted by ransomware attacks. Hackers can encrypt or disable these
systems and demand ransom, potentially causing significant disruptions to city
services and infrastructure.
8. Third-Party Risk:
 Many smart city solutions involve partnerships with third-party vendors, which
increases the risk of data leakage or cybersecurity threats through external contractors.
Without strict security protocols, vulnerabilities can be introduced through external
sources.
9. Surveillance and Citizen Trust:
 Widespread deployment of cameras, facial recognition, and tracking systems for
public safety or law enforcement can lead to concerns about mass surveillance.
Citizens may feel their privacy is being invaded, and there are concerns about the
misuse of data for profiling, discrimination, or authoritarian purposes.
10. Weak Encryption and Data Transmission:
 Many smart city devices transmit data over wireless networks that may use weak or
outdated encryption protocols. If these networks are compromised, attackers can
intercept and manipulate data, leading to altered outcomes (e.g., traffic signals being
tampered with or falsified sensor data).
Mitigation Strategies:
 Strong Encryption: Encrypt all data, both at rest and in transit, to protect against
unauthorized access.
 Regular Updates and Patch Management: Ensure IoT devices and systems are
updated regularly to fix vulnerabilities.
 Multi-Factor Authentication (MFA): Implement MFA for accessing critical smart
city systems.
 Zero Trust Architecture: Adopt a “zero trust” approach, where no device or user is
trusted by default.
 Data Minimization and Anonymization: Collect only the necessary data, and
anonymize sensitive information to protect citizen privacy.
 Public Awareness and Legal Frameworks: Engage the public about privacy
concerns and implement transparent legal frameworks (e.g., GDPR) to ensure
accountability in data use and storage.
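The data minimization and anonymization bullet can be illustrated with keyed pseudonymization: a stored record keeps a stable, non-reversible identifier so analytics can still link trips to the same (unknown) person without holding the raw citizen ID. The key, record fields, and truncation length are illustrative choices, not a prescribed scheme; a real deployment would manage the key in a secure store.

```python
# Replace a direct identifier with a keyed, non-reversible pseudonym
# before the record is stored.

import hmac
import hashlib

SECRET_KEY = b"rotate-me-regularly"  # hypothetical key, for illustration only

def pseudonymize(identifier: str) -> str:
    """Deterministic pseudonym: same input, same output; not reversible without the key."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"citizen_id": "DL-0423-9981", "route": "bus-42", "boarded": "08:15"}
stored = {**record, "citizen_id": pseudonymize(record["citizen_id"])}
print(stored["citizen_id"])  # a 16-hex-character pseudonym, not the raw ID
```

Because the mapping is keyed, an attacker who obtains the stored data cannot recover the original IDs by simply hashing candidate values, and rotating the key severs linkability over time.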
Addressing these cybersecurity and privacy challenges is vital for the trust, safety, and
resilience of smart city infrastructures.