
BRAINWARE UNIVERSITY

[ESCD 301] CLASS NOTES [Internet of Things]

4 Pillars of Internet of Things (IoT)

1. Device: The device is the first pillar of IoT. The Internet of Things links devices to the internet so they can share and exchange data. Mobile devices, medical equipment, cars, and electronic appliances are all examples of IoT devices.
2. Data: The primary aim of IoT is to collect and store data. The gathered data is processed to enhance the functionality of various devices and applications.
3. Analytics: Proper analysis and efficient processing of data are crucial for the effectiveness of IoT applications in daily life. Data-analysis tools and procedures are used to analyze the various types of generated data. For example, analytics let a workout app track your daily steps and the equivalent number of calories burned.
4. Connectivity: The fourth and final pillar of IoT is connectivity, which ties the previous three pillars together. Uninterrupted connectivity is crucial for the smooth flow of real-time data processing and analysis. Without connectivity, it is impossible to optimize the processing and use of data across different systems and software, and poor connectivity can cause inaccuracies and data loss during analysis.

M2M

Machine-to-Machine (M2M) refers to the networking of machines (or devices) for the purpose of remote monitoring, control, and data exchange. M2M communication takes place between devices without human intervention. An end-to-end M2M architecture comprises the following components:
• M2M Device
• M2M Area Network (Device Domain)
• M2M Gateway
• M2M Communication Networks (Network Domain)

M2M Device: A device capable of replying to requests for the data it contains, or of transmitting data autonomously. Sensors and communication devices are the endpoints of M2M applications. Devices can connect directly to an operator's network, or they may interconnect using WPAN technologies such as ZigBee or Bluetooth.

2024-25 Prepared by: Faculty of CSE-CSDS Department ( Brainware University, Barasat)


M2M Area Network (Device Domain): A local area network (LAN) or personal area network (PAN) that provides connectivity between M2M devices and M2M gateways. Typical networking technologies are IEEE 802.15.1 (Bluetooth) and IEEE 802.15.4 (ZigBee).

M2M Gateway: Equipment that uses M2M capabilities to ensure that M2M devices can interwork and connect to the communication network. Gateways and routers are the endpoints of the operator's network in scenarios where sensors and M2M devices do not connect directly to the network. The task of gateways and routers is therefore twofold. Firstly, they have to ensure that devices on the capillary network can be reached from outside, and vice versa.

[A capillary network, in the context of networking and technology, typically refers to a type of
network used to connect a large number of devices in a dense, localized area, often for the purpose
of data collection, monitoring, and control.]

Secondly, they have to support the access enablers provided by the operator's platform, such as identification, addressing, and accounting, at the gateway's side as well.

Supporting Functions at the Gateway

Gateways play a critical role in managing communication between networks, especially when they have
different protocols or addressing schemes. Here’s what they need to support:

• Protocol Translation: Capillary networks may use different communication protocols compared
to the external network. The gateway must translate between these protocols to ensure
seamless communication.
• Address Translation: The gateway often needs to handle address translation (like Network
Address Translation, NAT) to map addresses from the internal network to the external network
and vice versa.
• Security: Implementing security measures like encryption, authentication, and authorization to
protect the data and ensure that only authorized entities can access the network.
• Data Management: Handling data format conversions, aggregating data, and possibly
performing local processing before sending it to the external network.
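The two translation duties above can be sketched as follows. This is an illustrative toy, not a real M2M API: the class name, the NAT-style table, and the tuple-to-JSON conversion are all assumptions made for the example.

```python
import json

# Hypothetical sketch of two gateway duties: address translation (a NAT-style
# mapping from internal device IDs to public ports) and protocol translation
# (a compact capillary-network record converted to JSON for the external
# network). All names here are illustrative.
class M2MGateway:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.nat_table = {}               # internal device id -> external port

    def register(self, device_id, port):
        """Address translation: expose an internal device on a public port."""
        self.nat_table[device_id] = port

    def external_address(self, device_id):
        port = self.nat_table[device_id]
        return f"{self.public_ip}:{port}"

    def translate(self, device_id, raw):
        """Protocol translation: capillary-network tuple -> JSON payload."""
        sensor_type, value = raw
        return json.dumps({
            "source": self.external_address(device_id),
            "type": sensor_type,
            "value": value,
        })

gw = M2MGateway("203.0.113.10")
gw.register("zigbee-node-7", 5001)
msg = gw.translate("zigbee-node-7", ("temperature", 23.5))
print(msg)
```

A real gateway would of course also handle security and data aggregation, as listed above; the sketch shows only the mapping logic.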

M2M Communication Networks (Network Domain): It covers the communications between the M2M
Gateway(s) and M2M application(s), e.g. xDSL, LTE, WiMAX, and WLAN.

M2M Applications: As the name suggests, the M2M application domain offers applications to use M2M
technology conveniently. Examples include server and end-user applications.


Applications and examples of M2M

The most common use of M2M is remote monitoring. For example, a vending machine can notify the
merchant in case a product is out of stock. M2M is also used in supply chain management and
warehouse management systems.

Difference between M2M and IOT

Machine-to-Machine (M2M):

Definition: M2M refers to direct communication between devices (machines) without human
intervention. It focuses on enabling devices to exchange information and perform tasks autonomously.
Scope: M2M is more narrowly focused on communication between devices, often within a specific
application or industry.
Technology: M2M technologies typically use point-to-point communication or a fixed network to
enable device interaction.
Applications: Often used in specific industries for targeted applications such as remote monitoring of
industrial equipment, automated meter reading, and vehicle telematics.
Data Management: Data is typically managed within the specific application or system where M2M is
deployed. Analytics may be performed locally or in a centralized manner based on the application.
Protocols: Traditional, less secure communication protocols are typically used.

Internet of Things (IoT):

Definition: IoT is a broader concept that encompasses M2M communication but extends it to a vast
network of interconnected devices, systems, and people. IoT involves devices connected to the
internet, allowing them to collect, exchange, and act on data in a more comprehensive and integrated
manner.
Scope: IoT encompasses a wide range of applications and devices beyond just M2M.
Technology: IoT involves a diverse range of technologies including sensors, actuators, cloud computing,
data analytics, and advanced communication protocols like MQTT, CoAP, and HTTP.
Applications: Covers a broad spectrum of use cases across different domains, including smart home
devices (e.g., thermostats, smart locks), healthcare (e.g., wearable health monitors), agriculture (e.g.,
soil moisture sensors), and smart cities (e.g., traffic management, smart lighting).
Data Management: Data from IoT devices is often collected and stored in the cloud, enabling large-
scale data analysis and integration with other data sources.
Protocols: IoT commonly uses TCP/IP-based protocols such as HTTP, HTTPS, FTP, and Telnet.

RFID
RFID stands for Radio Frequency Identification. It is a technology that is used to track RFID tags and to
capture the data encoded in these tags. It uses radio waves to identify and track tags attached to objects.
These tags contain electronically stored information that can be read from several meters away, without
requiring direct line-of-sight. RFID is an Automatic Identification and Data Capture (AIDC) technology.
Using radio waves, it automatically identifies objects, gathers data about them, and enters it into computer
systems with little or no human intervention.
The components of RFID
RFID tracking systems consist of three main components:
Component #1 – RFID tags: RFID tags contain microchips that store information about the object they
are attached to. These can then be read remotely by a scanning device using radio waves and
electromagnetic fields. The RFID tag has a built-in antenna that communicates to a scanning device that
reads the data remotely. The data is then transferred from the scanning device to the enterprise application
software that houses the data. Each RFID tag has its own unique identifying number.
There are two types of RFID tags: passive and active. Passive tags are popular in retail settings because they do not require a power supply. Active tags have their own power source and can collect more detailed information about the object they are attached to. RFID tags can be attached to objects or embedded in devices like cameras and GPS sensors, allowing you to identify and locate them easily.
Component #2 – The RFID reader: An RFID reader is a device that scans RFID tags and collects information about the asset or inventory item attached to the tag. Readers can be hand-held or fixed, and typically connect via USB or Bluetooth.
Component #3 – The RFID applications or software: RFID inventory or asset tracking software controls
and monitors RFID tags associated with your assets. This can be a mobile application or a standard
software package.


Working
The RFID reader is a network-connected device that can be portable or permanently mounted. It uses radio waves to transmit signals that activate the tag. Once activated, the tag sends a signal back to the antenna, where it is translated into data.
The transponder is in the RFID tag itself. The read range for RFID tags varies based on factors including
the type of tag, type of reader, RFID frequency, and interference in the surrounding environment or from
other RFID tags and readers. Tags that have a stronger power source also have a longer read range.
Types of RFID systems
There are three main types of RFID systems: low frequency (LF), high frequency (HF) and ultra-high
frequency (UHF). Microwave RFID is also available. Frequencies vary greatly by country and region.
• Low-frequency (LF) RFID systems. These range from 30 kHz to 500 kHz, though the typical frequency is 125 kHz. LF RFID has short transmission ranges, generally from a few inches to less than six feet.
• High-frequency (HF) RFID systems. These range from 3 MHz to 30 MHz, with the typical HF frequency being 13.56 MHz. The standard range is anywhere from a few inches to several feet.
• Ultra-high-frequency (UHF) RFID systems. These range from 300 MHz to 960 MHz, with a typical frequency of 433 MHz, and can generally be read from 25-plus feet away.
• Microwave RFID systems. These run at 2.45 GHz and can be read from 30-plus feet away.
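The band boundaries above can be captured in a small classifier. The boundaries are taken directly from these notes; this is an illustrative sketch, not a regulatory definition (exact allocations vary by country and region).

```python
# Classify an RFID carrier frequency (in Hz) into the bands described in the
# notes: LF, HF, UHF, and microwave. Frequencies falling between the listed
# ranges are reported as unclassified.
def rfid_band(freq_hz):
    if 30e3 <= freq_hz <= 500e3:
        return "LF"
    if 3e6 <= freq_hz <= 30e6:
        return "HF"
    if 300e6 <= freq_hz <= 960e6:
        return "UHF"
    if freq_hz == 2.45e9:
        return "Microwave"
    return "Unclassified"

print(rfid_band(125e3))    # typical LF tag frequency
print(rfid_band(13.56e6))  # typical HF tag frequency
print(rfid_band(433e6))    # typical UHF tag frequency
```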
Benefits of RFID technology
Increased efficiency and accuracy in tracking
Since this technology uses radio waves to identify and track assets, it provides a much faster and more
accurate tracking process than traditional methods.
Real-time visibility in managing inventory and assets
With RFID tags, enterprises can track location, movement, lifecycle, custodianship history, and more
data in real-time. This greatly improves asset and inventory visibility.
Reduced labor costs
RFID tracking technology eliminates the need for manual data entry, reducing labor costs associated with
tracking inventory and assets.
Enhanced security
RFID tracking technology can help enhance security by allowing businesses to monitor the movement of
assets and personnel in real-time.
RFID challenges
RFID security and privacy: A common concern is that anyone with a compatible reader can read RFID tag data. Because RFID tags have little computing power, they cannot accommodate encryption such as might be used in a challenge-response authentication system.
Cost of RFID Technology: The major challenge is the price and return on investment (ROI) of implementing RFID technology. High-frequency RFID tags are constrained in size and cost because they require miniature antenna designs, and expensive readers are needed to transmit RF signals.
Comparison Between RFID and Barcode


Application
Inventory Management with RFID
Most people think of RFID technology as an enabler of real-time inventory tracking. Improved inventory control and accuracy have a broad range of benefits, from fewer stockouts and lost orders to reduced manual labor and improved utilization of company cash. A well-implemented RFID application for inventory management can eliminate the need for staff to manually scan or otherwise track the receipt, shipment, and movement of materials.
RFID Asset Tracking
By attaching an RFID tag to assets such as equipment, tools, and vehicles, you can find lost items faster with an RFID reader and obtain real-time location and utilization information with the right RFID asset tracking software. Tool tracking and high-value asset monitoring is a sometimes overlooked but excellent use case for RFID. Asset tracking applications may require active RFID tags, but with the right software, you may be able to achieve the benefits with passive RFID tags.

WSN
A Wireless Sensor Network (WSN) is a self-configured, infrastructure-less wireless network. It comprises a group of wireless sensor nodes that communicate wirelessly and are distributed in an ad-hoc (random) manner to monitor various conditions, such as environmental or physical parameters within a system.
In a WSN, each sensor node is a small but capable device equipped with a microcontroller, a radio transceiver, a power source, and memory for wireless communication. These nodes are designed to operate independently, configuring themselves into a network without needing pre-existing infrastructure or transmission media such as cables. Sensor nodes can collect data continuously or in response to specific events, like a security camera that only records when it detects movement. The data collected by individual sensor nodes is transmitted to a central node known as the Base Station. The Base Station acts as the point where data from across the network is compiled and sent on through the Internet.
Components of Wireless Sensor Networks
• Sensors: These components are responsible for data acquisition, i.e., they collect environmental
data (variables) and convert it into electrical signals through a process known as transduction.
• Radio Nodes: These components are equipped with a microcontroller for data processing, a
transceiver for wireless communication, external memory for data storage, and a power source to
remain operational. They receive the sensor's electrical signals and send this data to
the WLAN access point.
• WLAN Access Points: This component receives data wirelessly from the radio nodes and forwards it, often via the Internet, to the evaluation software.
• Evaluation Software: This software analyses the data received from the WLAN Access Point and
turns raw data into actionable information for the user.
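The four components above form a simple data path, which can be sketched end to end. Everything here is a hypothetical placeholder (the 10 mV-per-degree transduction factor, the node names, the packet format), not a real WSN stack.

```python
# Illustrative WSN data path matching the four components described above:
# sensor (transduction) -> radio node (packaging) -> access point (forwarding)
# -> evaluation software (raw data to actionable information).
def sensor_read(temperature_c):
    # Transduction: physical variable -> electrical signal (here, millivolts),
    # assuming a made-up 10 mV per degree Celsius.
    return temperature_c * 10.0

def radio_node(signal_mv, node_id):
    # The radio node packages the signal with its origin for transmission.
    return {"node": node_id, "mv": signal_mv}

def access_point(packet):
    # The access point forwards packets onward unchanged in this sketch.
    return packet

def evaluate(packet):
    # Evaluation software converts the raw signal back to readable info.
    return f"node {packet['node']}: {packet['mv'] / 10.0:.1f} degC"

report = evaluate(access_point(radio_node(sensor_read(21.5), "n1")))
print(report)
```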
Types of Wireless Sensor Networks

Terrestrial WSNs
Terrestrial WSNs communicate with base stations efficiently and consist of hundreds to thousands of wireless sensor nodes deployed either in an unstructured (ad hoc) or structured (pre-planned) manner. These networks use sensors spread across a space to monitor and report data to a central location. For example, the GPS sensors and motion detectors in a smartphone are part of a terrestrial WSN.

Underground WSNs
Underground WSNs are more expensive than terrestrial WSNs in terms of deployment, maintenance, and equipment costs, and they require careful planning. These networks consist of sensor nodes buried in the ground to monitor underground conditions.

Underwater WSNs

More than 70% of the earth's surface is covered by water. These networks consist of several sensor nodes and vehicles deployed underwater. Autonomous underwater vehicles are used to gather data from these sensor nodes.
Static & Mobile WSNs
In many applications the sensor nodes are deployed without movement; such networks are static WSNs.
Mobile WSNs consist of sensor nodes that can move on their own and interact with the physical environment.

Challenges of WSN
The different challenges in wireless sensor networks include the following.
Fault tolerance: Some sensor nodes may stop working because of power loss or physical damage. This should not affect the overall performance of the sensor network; this requirement is known as fault tolerance.

Scalability: The number of nodes in the sensing area may be on the order of hundreds or thousands, and routing schemes must be scalable enough to respond to events.
Limited power and energy: WSNs are typically composed of battery-powered sensors that have limited energy resources.
Data latency: Latency is an essential factor influencing the design of routing protocols. Data latency can be caused by data aggregation and multi-hop relays.

Data aggregation refers to the process of combining and summarizing data from multiple sensor nodes
before it is transmitted to the sink node. This is important because it reduces the amount of data
transmitted, minimizes energy consumption, and alleviates network congestion, leading to more efficient
use of network resources and prolonged sensor node lifetimes.
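Aggregation as described above can be illustrated with a minimal example in which several raw readings are combined into one summary record before transmission to the sink. The summary fields are invented for the example; real WSNs perform aggregation in-network, often hop by hop.

```python
# Minimal data-aggregation sketch: four node readings become a single
# summary record, so one message is transmitted instead of four, saving
# energy and reducing congestion.
def aggregate(readings):
    """Summarize raw readings into a single record for the sink node."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "avg": sum(readings) / len(readings),
    }

raw = [21.0, 21.5, 20.5, 22.0]   # four separate sensor readings
summary = aggregate(raw)         # one message to the sink
print(summary)
```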

Limited processing and storage capabilities: Sensor nodes in a WSN are typically small and have limited
processing and storage capabilities.
Interference: WSNs are often deployed in environments where there is a lot of interference from other
wireless devices.
Data security: WSNs are vulnerable to security threats, such as eavesdropping, tampering, and denial of
service attacks, which can compromise the confidentiality, integrity, and availability of data.
Limited bandwidth: Bandwidth limitation directly affects message exchanges among sensors, and
synchronization is impossible without message exchanges. Sensor networks often operate in a bandwidth
and performance-constrained multi-hop wireless communications medium.
Applications of WSN
• Surveillance and Monitoring for security, threat detection
• Environmental monitoring: temperature, humidity, and air pressure
• Agriculture
• Landslide Detection

SCADA

Supervisory Control and Data Acquisition (SCADA) systems are used for controlling, monitoring,
and analyzing industrial devices and processes. The system consists of software and hardware
components and enables remote and on-site data gathering from the industrial equipment. These systems
remotely monitor and control installations, collect and analyze data, detect faults, and optimize energy
output. Across the board, SCADA systems enhance efficiency, reduce costs, and ensure compliance with
regulatory standards.
Components of a SCADA system
SCADA systems include components deployed in the field to gather real-time data, as well as related systems to enable data collection and enhance industrial automation. SCADA components include the following:
Sensors and actuators. A sensor is a feature of a device or system that detects inputs from industrial
processes. An actuator is a feature of the device or system that controls the mechanism of the
process. Both sensors and actuators are controlled and monitored by SCADA field controllers.
SCADA field controllers. These interface directly with sensors and actuators. There are two categories
of field controllers:

Remote telemetry units, also called remote terminal units (RTUs), interface with sensors to
collect telemetry data and forward it to a primary system for further action.
Programmable logic controllers (PLCs) interface with actuators to control industrial processes, usually
based on current telemetry collected by RTUs and the standards set for the processes.

SCADA supervisory computers. These control all SCADA processes and are used to gather data from
field devices and to send commands to those devices to control industrial processes.
HMI (human-machine interface) software. This provides a system that consolidates and presents data from SCADA field devices and enables operators to understand and, if needed, modify the status of SCADA-controlled processes.
Communication infrastructure. This enables SCADA supervisory systems to communicate with field
devices and field controllers.
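The interaction between these components can be sketched as a toy supervisory loop: the supervisory computer polls an RTU for telemetry and, when a reading drifts past a setpoint, commands a PLC that drives an actuator. The classes, values, and setpoint policy are hypothetical stand-ins, not a real SCADA product API.

```python
# Toy supervisory cycle: RTU supplies telemetry, the supervisory logic
# compares it to a setpoint, and the PLC actuates a (simulated) valve.
class RTU:
    def __init__(self, sensor_value):
        self.sensor_value = sensor_value

    def poll(self):
        return self.sensor_value          # telemetry from a field sensor

class PLC:
    def __init__(self):
        self.valve_open = False

    def command(self, open_valve):
        self.valve_open = open_valve      # actuator control

def supervisory_cycle(rtu, plc, setpoint):
    pressure = rtu.poll()
    plc.command(pressure > setpoint)      # open relief valve if over setpoint
    return pressure

rtu, plc = RTU(sensor_value=7.2), PLC()
reading = supervisory_cycle(rtu, plc, setpoint=5.0)
print(reading, plc.valve_open)
```

In a real deployment the poll and the command would travel over the communication infrastructure using a protocol such as DNP3 or Modbus, and the HMI would display the reading to operators.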

Advantages of SCADA
Extensive data storage capacity
One of the main advantages of SCADA systems is their ability to store large amounts of data generated by industrial processes. This extensive storage capacity enables organizations to analyze historical data trends, identify patterns, and make informed decisions to optimize operations and improve efficiency.
Real-time data simulation capabilities
Another critical advantage of SCADA is its real-time data simulation capability, which enables operators to model different scenarios and analyze the impact of changes before implementation. This proactive approach helps minimize risks, optimize processes, and ensure smoother operations.
Rapid response time
With its real-time monitoring capabilities, SCADA facilitates rapid response to critical events or
emergencies. By providing instant alerts and notifications, operators can take prompt action to mitigate
risks, minimize downtime, and prevent potential disruptions to operations.
Customizable data visualization
SCADA systems provide customizable data visualization tools that allow users to tailor information
display according to their specific requirements. Whether it's through charts, graphs, or dashboards,
operators can visualize data to facilitate quick decision-making and enhance situational awareness.
Broad sensor connectivity
SCADA systems offer compatibility with a wide range of sensors and devices, allowing seamless
integration into diverse industrial environments.
Disadvantages of SCADA
Complexity of PLC-based system
One of the major drawbacks of SCADA systems is the complexity associated with programmable logic controller (PLC)-based architectures. Configuring and programming PLCs require specialized skills and expertise, leading to higher training and maintenance costs for personnel.
Constrained software and hardware compatibility
SCADA systems often face software and hardware compatibility issues, especially when integrating with
legacy systems or third-party devices.
High installation expenses
The initial installation cost of SCADA systems can be substantial, particularly for small and medium-sized enterprises with limited budgets.
Potential impact on employment rates
The automation capabilities of SCADA systems can affect employment rates, particularly in industries where manual labour is prevalent. While automation can increase productivity and efficiency, it can also result in job displacement or the need to retrain the existing workforce to operate and maintain the SCADA infrastructure.
SCADA Protocols
For a SCADA system to function, it needs a protocol for transmitting data. Some SCADA protocols, such as Modbus RTU, RP-570, Profibus, and Conitel, are vendor-specific but widely adopted. Standard protocols include IEC 61850, IEC 60870-5-101 (T101) and -104, and DNP3; these are standardized and recognized by all major SCADA vendors.
DNP is widely used in North America, South America, South Africa, Asia, and Australia, while IEC 60870-5-101 (T101) is strongly supported in Europe.

DNP3 (Distributed Network Protocol) is a set of communications protocols used between components in process automation systems. It was specifically developed to facilitate communication between various types of data acquisition and control systems. The DNP3 standard is specified only for IPv4. Like many other SCADA protocols, DNP3 is based on a master/slave relationship. The master is typically a powerful computer located in a utility's control center, while a slave is a remote device with computing resources located, for example, in a substation. DNP3 refers to slaves specifically as outstations.

Big Data Analytics


Big data analytics refers to methods for studying massive volumes of data. Big data is data whose volume, velocity, or variety is too great to store, manage, process, and examine using traditional databases. It is gathered from a variety of sources, including social network videos, digital images, sensors, and sales transaction records.
Big data is often characterized by six Vs:
• Volume: Refers to the huge volume of data aggregated from various sources.

• Variety: Refers to different types of data. Data can be structured, semi-structured or unstructured.
• Velocity: Refers to the speed at which data is generated. Nowadays, the amount of data generated on the Internet per minute is several petabytes or more.
• Veracity: Refers to the degree to which the data can be trusted. If the collected data is incorrect or contains manipulated or wrong values, analysis of that data is useless.

• Value: Refers to the business value of the collected data. A huge amount of data that cannot be used to gain business value is of little worth.
• Variability: Refers to how the big data can be used and formatted.
Sources of Big Data
• Bank transactions
• Data generated by IoT systems for location and tracking of vehicles
• E-commerce transactions (e.g., Big Basket)
• Health and fitness data generated by IoT systems such as fitness bands
Several steps are involved in analysing big data:
1. Data cleaning: Data cleaning is fixing or removing incorrect, incomplete, or duplicate data from
a dataset.
2. Munging: Data munging, also known as data wrangling, is converting raw data into a more usable
format.
3. Processing: It involves a series of actions or steps that turn data into insights, knowledge, or
information that can be used for decision-making, analysis, or other purposes.
4. Visualization: Data visualization is the graphical representation of information and data. By using
visual elements like charts, graphs, and maps, data visualization tools provide an accessible way
to see and understand trends, outliers, and patterns in data.
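The first three steps above can be illustrated in miniature: cleaning drops incorrect and duplicate records, munging reshapes raw tuples into a usable format, and processing derives an insight. The records and field names are invented for the example.

```python
# Raw sensor records: a duplicate ("s1" twice) and an incorrect value ("bad").
raw = [("s1", "21.0"), ("s1", "21.0"), ("s2", "bad"), ("s3", "19.5")]

def clean(rows):
    """Data cleaning: remove duplicate and incorrect records."""
    seen, out = set(), []
    for sensor, value in rows:
        if sensor in seen:                # drop duplicates
            continue
        try:
            v = float(value)              # drop non-numeric values
        except ValueError:
            continue
        seen.add(sensor)
        out.append((sensor, v))
    return out

def munge(rows):
    """Data munging: convert raw tuples into a more usable format."""
    return [{"sensor": s, "temp_c": v} for s, v in rows]

def process(records):
    """Processing: turn the data into an insight (here, the mean)."""
    return sum(r["temp_c"] for r in records) / len(records)

records = munge(clean(raw))
print(records, process(records))
```

The final step, visualization, would plot `records` with a charting library rather than print them.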
Challenges of Big Data analytics
Data Volume:
The most apparent challenge with Big Data is the sheer volume of data being generated (on the order of petabytes and exabytes). This vast amount of data requires advanced storage infrastructure, which can be costly and complex to maintain.
Adopting scalable cloud storage solutions, such as Amazon S3, Google Cloud Storage, or Microsoft
Azure, can help manage large volumes of data.
Data Variety:
Big Data encompasses a wide variety of data types, including structured data (e.g., databases), semi-
structured data (e.g., XML, JSON), and unstructured data (e.g., text, images, videos). The diversity
of data types can make it difficult to integrate, analyze, and extract meaningful insights.
To address the challenge of data variety, organizations can employ data integration platforms and
tools like Apache Nifi, Talend, or Informatica.
Processing Data in Real-Time
The speed at which data is generated and needs to be processed is another significant challenge.
To handle high-velocity data, organizations can implement real-time data processing frameworks
such as Apache Kafka, Apache Flink, or Apache Storm.
Ensuring Data Quality and Accuracy
With Big Data, ensuring the quality, accuracy, and reliability of data—referred to as data veracity—
becomes increasingly difficult. Inaccurate or low-quality data can lead to misleading insights and
poor decision-making.
Tools like Trifacta, Talend Data Quality, and Apache Griffin can help automate and streamline data
quality management processes.
Data Security and Privacy:
As organizations collect and store more data, they face increasing risks related to data security and
privacy.
To mitigate security and privacy risks, organizations must adopt comprehensive data protection
strategies.

Embedded Systems
An embedded system is a combination of hardware and software used to perform a dedicated task. It includes a microcontroller or microprocessor, memory, networking units (Ethernet or Wi-Fi adapters), input/output units (display, keyboard, etc.), and storage devices (flash memory). In an IoT context, it collects data and sends it to the internet.
Examples of embedded systems: 1. Digital cameras 2. DVD players, music players 3. Industrial robots 4. Wireless routers, etc.
A real-time embedded system is strictly time-specific: it must provide output within a particular, defined time interval. Real-time embedded systems are divided into two types:
• Soft Real-Time Embedded Systems: In these systems the time/deadline is not strictly enforced. If a task misses its deadline (the system does not give a result within the defined time), the result or output is still accepted.
Example: multimedia applications such as streaming video or playing music, which can tolerate small delays without significantly impacting the user experience.
• Hard Real-Time Embedded Systems: In these systems the time/deadline of a task is strictly enforced. The task must be completed within the defined time interval, otherwise its result/output is not accepted.
Example: flight control systems, missile guidance systems, weapons defense systems, medical systems, and air traffic control systems.
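The soft versus hard distinction can be sketched by checking a task's result against its deadline: a soft system tolerates a late result, a hard system rejects it. The timings, task names, and rejection policy here are illustrative, not how a real RTOS scheduler works.

```python
import time

# Run a task and apply a deadline policy: accept on-time results always,
# accept late results only in soft real-time mode.
def run_with_deadline(task, deadline_s, hard):
    start = time.monotonic()
    result = task()
    elapsed = time.monotonic() - start
    if elapsed <= deadline_s:
        return result                 # deadline met: result accepted
    if hard:
        return None                   # hard real-time: late result rejected
    return result                     # soft real-time: late result tolerated

slow_task = lambda: (time.sleep(0.02) or "frame")   # takes ~20 ms
print(run_with_deadline(slow_task, deadline_s=0.01, hard=False))  # accepted late
print(run_with_deadline(slow_task, deadline_s=0.01, hard=True))   # rejected
```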

Cloud Computing

"The cloud" refers to servers that are accessed over the Internet, and the software and databases that run
on those servers. Cloud servers are in data centres all over the world. By using cloud computing, users
and companies do not have to manage physical servers themselves or run software applications on their
own machines.


• The cloud enables users to access the same files and applications from almost any device, because the
computing and storage take place on servers in a data centre, instead of locally on the user's device.
Therefore, a user can log into their Instagram account on a new phone after their old phone breaks and
still find their old account in place, with all their photos, videos, and conversation history. It works the
same way with cloud email providers like Gmail or Microsoft Office 365, and with cloud storage
providers like Dropbox or Google Drive.
• For businesses, switching to cloud computing removes some IT costs and overhead: for instance, they no
longer need to update and maintain their own servers, as the cloud vendor they are using will do that.
This especially makes an impact for small businesses that may not have been able to afford their own
internal infrastructure but can outsource their infrastructure needs affordably via the cloud. The cloud
can also make it easier for companies to operate internationally, because employees and customers can
access the same files and applications from any location.

• Definition of Cloud Computing: Cloud computing is the delivery of computing services (servers,
storage, databases, networking, software, analytics, intelligence, and more) over the internet
("the cloud").
• Cloud computing services are divided into three classes, according to the abstraction level of the
capability provided and the service model of the provider:
• 1. Infrastructure as a Service (IaaS)
• 2. Platform as a Service (PaaS)
• 3. Software as a Service (SaaS)
• Infrastructure as a Service: A cloud infrastructure enables on-demand provisioning of servers running
a choice of operating systems and a customized software stack. Infrastructure services form the
bottom layer of cloud computing systems. Offering virtualized resources (computation,
storage, and communication) on demand is known as Infrastructure as a Service (IaaS).
• Amazon Web Services (AWS) is a prime example of a cloud computing platform that primarily offers
Infrastructure as a Service (IaaS). AWS EC2 (Elastic Compute Cloud) allows users to rent virtual
machines (VMs) where they can install and configure their own software stack, including the operating
system, middleware, and applications, similar to how they would set up a physical server. This provides
flexibility and scalability, enabling businesses to manage computing resources based on their specific
needs without the overhead of managing physical infrastructure.
• Platform as a Service: A cloud platform offers an environment in which developers create and deploy
applications without necessarily needing to know how many processors or how much memory their

applications will be using. In addition, multiple programming models and specialized services (e.g., data
access, authentication, and payments) are offered as building blocks to new applications.
• Google App Engine (GAE) is a great example of Platform as a Service (PaaS). It provides a fully
managed, scalable environment for developers to build, deploy, and run web applications without
worrying about the underlying infrastructure. The key characteristic of PaaS, as demonstrated by Google
App Engine, is that it abstracts away much of the lower-level system management (like servers,
networking, and storage), letting developers focus solely on the application code. GAE supports specific
programming languages like Python, Java, Node.js, Go, and more.
• Software as a Service: Traditional desktop applications such as word processors and spreadsheets can
now be accessed as services on the Web. This model of delivering applications, known as Software as a
Service (SaaS), alleviates the burden of software maintenance for customers and simplifies development
and testing for providers.
• Salesforce.com is a prime example of the Software as a Service (SaaS) model. With Salesforce,
businesses can customize the platform to fit their needs, without worrying about the infrastructure,
maintenance, or software updates, as these are all handled by Salesforce. Since it’s hosted in the cloud,
customers can access it on-demand from any location via a web browser, making it highly flexible and
scalable.
Key Differences Between IaaS, PaaS, and SaaS

| Feature      | IaaS                                        | PaaS                                       | SaaS                           |
|--------------|---------------------------------------------|--------------------------------------------|--------------------------------|
| User control | Maximum control over infrastructure         | Control over application development       | No control over infrastructure |
| Management   | Managed by the user (OS, runtime, apps)     | Managed by the provider (infrastructure)   | Fully managed by the provider  |
| Usage        | Custom deployments (virtual machines, etc.) | Development and deployment of applications | Access to ready-made software  |
| Costs        | Pay-per-use (per resource)                  | Pay for platform services                  | Subscription-based model       |
| Examples     | AWS EC2, Google Compute Engine              | Heroku, Google App Engine                  | Google Workspace, Salesforce   |
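The division of responsibility in the table can also be expressed as a small lookup. This is a minimal sketch: the layer names and the `CUSTOMER_MANAGED` mapping are an illustrative simplification of the comparison above, not a formal standard.

```python
# Hypothetical layer stack, from bottom (hardware) to top (application).
LAYERS = ["hardware", "virtualization", "os", "runtime", "application"]

# For each service model, the layers the *customer* still manages.
# (An assumption mirroring the table above, not a formal specification.)
CUSTOMER_MANAGED = {
    "iaas": ["os", "runtime", "application"],
    "paas": ["application"],
    "saas": [],
}

def provider_managed(model: str):
    """Return the layers handled by the cloud vendor for a service model."""
    customer_layers = set(CUSTOMER_MANAGED[model.lower()])
    return [layer for layer in LAYERS if layer not in customer_layers]

print(provider_managed("paas"))  # provider runs everything below the app code
```

Moving from IaaS to PaaS to SaaS shifts layers out of the customer list and into the provider list, which is exactly the "User control" row of the table.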

Deployment Models
Based on the deployment model, a cloud can be classified as public, private, community, or hybrid.
Private Deployment Model
• It provides enhanced protection and customization by provisioning cloud resources to meet
specific requirements. It is ideal for companies with strict security and
compliance needs.
Public Deployment Model
• It offers cloud resources to many users on a pay-as-you-go basis, providing scalability and
accessibility. It ensures cost-effectiveness by delivering the services enterprises need without
large upfront infrastructure investment.
Hybrid Deployment Model

It combines elements of both private and public clouds, allowing seamless data and application
processing across both environments. It offers flexibility in optimizing resources, for example
keeping sensitive data in a private cloud while running large, scalable applications in the public cloud.
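The hybrid placement idea can be sketched as a tiny routing rule. This is an illustrative policy only (`choose_cloud` and the `sensitive` flag are hypothetical names); real workload placement considers cost, latency, regulation, and much more.

```python
def choose_cloud(workload: dict) -> str:
    """Toy hybrid-cloud placement rule: keep sensitive data in the
    private cloud, send everything else to the scalable public cloud.
    (Illustrative sketch, not a production placement policy.)"""
    if workload.get("sensitive"):
        return "private"
    return "public"

jobs = [
    {"name": "patient-records", "sensitive": True},
    {"name": "web-frontend", "sensitive": False},
]
for job in jobs:
    print(job["name"], "->", choose_cloud(job))
```

The point is that a single application can span both environments, with the rule above deciding where each piece of work runs.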

How cloud computing supports IoT applications


Cloud computing supports IoT by providing scalable and flexible infrastructure for storing, processing,
and analyzing the massive amounts of data generated by IoT devices. It enables real-time data processing,
offering the ability to manage vast datasets across geographically distributed devices. Cloud platforms
also provide services like data analytics, machine learning, and device management, making it easier for
IoT applications to scale. By offloading processing to the cloud, IoT devices can be smaller, more power-
efficient, and less expensive, with computation-intensive tasks handled remotely.
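The offloading idea can be sketched as follows: a constrained device buffers readings and hands whole batches to a cloud-side function. The `IoTDevice` class and `cloud_analytics` function are hypothetical stand-ins; a real device would send the batch over an uplink such as MQTT or HTTPS rather than call a local function.

```python
import statistics

class IoTDevice:
    """Minimal sketch of a constrained device that buffers sensor
    readings and offloads heavy analysis to 'the cloud'
    (modelled here as a plain callable)."""

    def __init__(self, upload, batch_size=4):
        self.upload = upload          # stands in for an MQTT/HTTPS uplink
        self.batch_size = batch_size
        self.buffer = []

    def record(self, reading: float):
        self.buffer.append(reading)
        # Sending in batches (instead of per reading) saves power and
        # bandwidth on the device side.
        if len(self.buffer) >= self.batch_size:
            self.upload(self.buffer)
            self.buffer = []

def cloud_analytics(batch):
    # Computation-intensive work done server-side, not on the device.
    print("mean of batch:", statistics.mean(batch))

sensor = IoTDevice(cloud_analytics)
for temp in [21.0, 21.5, 22.0, 21.5]:
    sensor.record(temp)
```

Because the device only buffers and transmits, it can stay small and power-efficient while the cloud handles storage, analytics, and machine learning on the accumulated data.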
