
Module-2

IoT Analytics for the Cloud


● As more devices connect to the internet, they generate large volumes of data. Analyzing this data requires substantial computing power, especially because it is not always clear in advance how valuable the resulting insights will be.

● To handle this, you need a system that can quickly adjust to the volume of incoming data and analyze it on demand, without crashing or slowing down. You also need that system to be quick to set up and inexpensive to run and maintain.

● One way to do this is by using cloud-based solutions. They can automatically scale up or down
based on how much data there is to analyze, they're usually cheaper than buying your own
equipment, and they require less work to keep running smoothly.
Why experimentation matters

Complexity of Business Environment: In today's dynamic and competitive business landscape, factors such as market conditions, customer preferences, and technological advancements constantly evolve. This uncertainty can make it challenging to predict which initiatives will generate the most value.

Variability in Customer Needs: Customer needs and preferences can vary widely, and what works well for one segment of customers may not
resonate with others. Experimentation allows businesses to test different approaches and gather feedback to understand what truly adds value to
their target audience.

Rapid Technological Changes: Technological advancements introduce new opportunities and challenges for businesses. Experimentation helps
organizations stay ahead of the curve by testing innovative solutions and adapting to changing technologies.

Risk Mitigation: Experimentation allows businesses to mitigate risks associated with uncertainty. By testing ideas on a smaller scale before fully
implementing them, organizations can identify potential pitfalls and refine their strategies to maximize value while minimizing risks.

Iterative Improvement: Continuous experimentation enables iterative improvement and optimization of business processes, products, and
services. By gathering data and insights from experiments, organizations can learn from successes and failures, leading to more informed decision-making and better outcomes over time.
The "cloud" refers to a network of servers that are accessed over the internet. Instead of storing data or running applications on a local
server or computer, you can store and access data, run software, and perform various tasks on remote servers that are hosted by a cloud
service provider.

You don't need to worry about buying and maintaining your own hardware or infrastructure. Instead, you can use the resources provided
by the cloud service provider on a pay-as-you-go basis.
The cloud enables users to access the same files and applications from almost any device,
because the computing and storage takes place on servers in a data center, instead of locally on
the user device.

This is why a user can log in to their Instagram account on a new phone after their old phone
breaks and still find their old account in place, with all their photos, videos, and conversation
history.
For businesses, switching to cloud computing removes some IT costs and overhead: for
instance, they no longer need to update and maintain their own servers, as the cloud vendor they
are using will do that.

This especially makes an impact for small businesses that may not have been able to afford their
own internal infrastructure but can outsource their infrastructure needs affordably via the cloud.

The cloud can also make it easier for companies to operate internationally, because employees
and customers can access the same files and applications from any location.
The National Institute of Standards and Technology (NIST) defines five essential characteristics of cloud computing:
● On-demand self-service: You can easily access and manage resources like servers and storage without needing human
intervention. This means you can quickly get what you need when you need it.

● Broad network access: Your cloud resources are available over the internet, so you can access them from anywhere using
different devices like computers, smartphones, or tablets.

● Resource pooling: Cloud providers share their computing resources, like servers and storage, among multiple users. This means
resources are allocated and reassigned dynamically based on demand. Users don't need to know where their data is stored; the
system manages it for them.

● Rapid elasticity: Cloud resources can be quickly scaled up or down to meet changing demands. This can happen
automatically, and resources are virtually unlimited from the user's perspective. You can expand or reduce your resources as
needed without much delay.

● Measured service: Cloud providers track and measure your resource usage, providing transparency into how your resources
are being utilized. This allows you to monitor and control your usage and costs effectively. Additionally, cloud systems
continuously optimize resources to improve performance and efficiency.
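As a small illustration of measured service, here is a hedged sketch that queries usage-based costs through the AWS Cost Explorer API via boto3 (the dates are placeholder assumptions; other providers expose similar billing APIs):

```python
import boto3

# Cost Explorer reports the metered usage the provider tracks for you.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # placeholder month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print what each service cost, making resource consumption transparent.
for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${cost:.2f}")
```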
Advantages of Cloud
● Speed: You can bring cloud resources online in minutes
● Agility: The ability to quickly create and destroy resources leads to ease of
experimentation
● Variety of services: Cloud providers have many services available to support
analytics workflows that can be deployed within minutes
● Global reach: You can extend the reach of analytics to the other side of the world
with a few clicks.
● Cost control: You only pay for the resources you need at the time you need them.
You can do more for less.
Elastic analytics
Elastic computing is the ability of a cloud service provider to swiftly scale resources such as storage, compute, memory, and input/output bandwidth up and down to adapt to changing resource demands and dynamically meet workload requirements. Elastic computing is the part of cloud computing that entails dynamically managing cloud servers.

With cloud elasticity, a company avoids paying for unused capacity or idle resources and doesn’t have to worry about investing in the
purchase or maintenance of additional resources and equipment.
● Cloud computing allows small and large organizations to move their data to cloud storage and use various services such as
online servers, software data platforms, storage space, and others over the internet.

● A cloud service provider’s support to quickly expand and shrink capacity at any given time provides organizations with
incredible flexibility to make quick adjustments in resources without disrupting the flow of their operations.

● Focus on the potential value of your analytics rather than on the limits of what can be done with existing hardware.
● You also want your analytics to be able to scale. It should go from supporting 100 IoT devices to 1 million IoT devices without
requiring any fundamental changes.

● With traditional hardware, you buy one device with 16 GB of memory and a 500 GB hard drive because you think that will meet 90% of your needs and it is the top of your budget. Cloud infrastructure abstracts this trade-off away.

● Doing analytics in the cloud is like renting a magic laptop where you can change 4 GB of memory into 16 GB by snapping your fingers. Your rental bill increases only for the time you have it at 16 GB.

● You snap your fingers again and drop it back down to 4 GB to save some money. Your hard drive can grow and shrink
independently of the memory specification. You are not stuck having to choose a good balance between them. You can match
compute needs with requirements.
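As a concrete illustration of the magic-laptop idea, here is a minimal sketch using the AWS boto3 SDK to resize a virtual server; the instance ID and instance types are placeholder assumptions, and other providers offer equivalent operations:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance ID

def resize_instance(instance_id: str, new_type: str) -> None:
    """Stop an EC2 instance, change its instance type, then start it again."""
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    # Change the hardware profile; billing reflects the new size from restart.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": new_type},
    )
    ec2.start_instances(InstanceIds=[instance_id])

# Snap your fingers: scale up for a heavy job, then back down to save money.
resize_instance(INSTANCE_ID, "t3.xlarge")  # 16 GB of memory
# ... run the analytics job ...
resize_instance(INSTANCE_ID, "t3.medium")  # back to 4 GB
```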
Use software, services, and programming code that can scale from 1 to 1 million without
changes. Each analytic process you put in production has continuing maintenance efforts
that will build up over time as you add more and more processes. Make it easy on yourself
later on. You do not want to have to stop what you are doing to re-architect a process you
built a year ago because it hit the limits of scale.
● Unlike traditional enterprise architecture, where hardware purchases incur capital expenses, cloud computing operates
on a pay-as-you-go model. This means you don't have to invest in hardware upfront, and you can avoid capital
expenses that may restrict your spending.

● With cloud resources, there's no need to limit your analytics based on a set number of servers or hardware constraints.
You can scale resources up or down as needed, allowing for dynamic expansion and contraction of resources based on
demand.

● By managing to a spend budget, you can focus more on achieving results and driving value from your analytics
projects, rather than being constrained by resource limitations or budgetary concerns.
Designing for scale
● Decoupled, or decoupling, is a state of an IT environment in which two or more systems somehow work or
are connected without being directly connected. Or Decoupling means separating functional groups into
components so they are not dependent upon each other to operate. This allows functionality to change or new
functionality to be added with minimal impact on other components.

● In a decoupled microservices architecture, for example, software services have little or no knowledge of the other services. In theory, this means that a change can be made to one service without the developer having to worry about how the change will impact other services.

● Cloud computing architecture is often referred to as an example of decoupled architecture because the vendor
and consumer independently operate and manage their resources.
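A minimal sketch of decoupling in Python: the analytics component depends only on a narrow interface rather than on any concrete downstream service, so either side can change independently. All class and function names here are illustrative, not from any particular framework.

```python
from typing import Protocol

class MessageSink(Protocol):
    """The only thing the analyzer knows about its downstream consumer."""
    def send(self, payload: dict) -> None: ...

class ConsoleSink:
    """One concrete implementation; a queue-backed sink could replace it."""
    def send(self, payload: dict) -> None:
        print("received:", payload)

class TemperatureAnalyzer:
    """Depends on the MessageSink interface, not on any implementation."""
    def __init__(self, sink: MessageSink) -> None:
        self.sink = sink

    def process(self, reading: float) -> None:
        if reading > 30.0:
            self.sink.send({"alert": "high temperature", "value": reading})

# Swapping ConsoleSink for another sink requires no change to the analyzer.
analyzer = TemperatureAnalyzer(ConsoleSink())
analyzer.process(31.5)
```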
Messaging in the Cloud refers to the concept of utilizing cloud-based services to manage, store, and relay messages
in a distributed network environment. Queues for cloud services are essentially lines of messages that are waiting to
be processed. They play a significant role in ensuring smooth communication and synchronization between
different components of a cloud-based application, thereby enhancing its overall performance and reliability.

Cloud messaging, also known as cloud-based messaging or message queuing, is a method of communication
between different components of an application which are distributed across multiple servers or systems. This
communication technique is asynchronous, meaning that the sending and receiving components do not need to
interact with each other at the same time.

Instead, messages are stored in a queue until they can be processed. This is particularly beneficial for applications
that need to handle high volumes of data and transactions, as it allows for efficient load balancing and ensures that
no data is lost if a component becomes unavailable.
This approach is fundamental in the cloud computing environment, where resources are distributed and need to
communicate effectively. It helps in maintaining the loose coupling of components, thus enabling scalability and
reliability. It also provides a way to handle the intermittency and latency inherent in such environments, ensuring
that messages are delivered and processed even in the event of network disruptions or system failures.
The process that adds a message is called the publisher, and the process that receives the message is called the subscriber.

The subscriber does not have to wait until the publisher is ready to send, and vice versa.

The size of the queue can also grow and shrink as needed. If the subscriber gets behind, the queue just grows to compensate until the subscriber can catch up.
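As an illustration, here is a hedged sketch of a publisher and subscriber built on Amazon SQS via boto3; the queue URL is a placeholder, and services such as Azure Queue Storage or Google Pub/Sub work similarly:

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/iot-readings"  # placeholder

def publish(reading: dict) -> None:
    """Publisher: adds a message and moves on; never waits for the subscriber."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(reading))

def consume() -> None:
    """Subscriber: pulls messages whenever ready; the queue absorbs any backlog."""
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=5
    )
    for message in response.get("Messages", []):
        print("processing:", json.loads(message["Body"]))
        # Delete only after successful processing so no data is lost.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])

publish({"device_id": "sensor-42", "temp_c": 21.7})
consume()
```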
Queues play a crucial role in the operation of cloud services. They act as buffers that store
messages until they can be processed. This helps to ensure that all messages are handled,
even when the system is under heavy load. Furthermore, queues can help to smooth out
traffic spikes by storing excess messages until the system has the capacity to handle them.

Queues also provide a level of abstraction, helping to decouple the sending and receiving
components of an application. This means that these components do not need to know
about each other's state or existence. Instead, they can focus on their own tasks, safe in the
knowledge that all messages will be properly handled. This decoupling makes the system
more robust and easier to maintain and scale.
Benefits of Cloud-based Messaging Queues

Cloud-based messaging queues offer several significant advantages. First and foremost, they can handle large
volumes of messages without requiring a significant investment in infrastructure. This is because the cloud provider
manages the underlying hardware and software, allowing businesses to focus on their core competencies. The
provider also ensures that the service is always available and can scale to meet changing demand.

In addition, cloud-based queues can provide a high level of reliability. Messages are stored until they can be
processed, ensuring that no data is lost. They can also be replicated across multiple servers or regions to protect
against system failures. This can provide a level of assurance that is difficult to achieve with traditional, on-premises messaging systems.
While messaging queues can provide many benefits, it's important to use them correctly to get the most out of them.
One best practice is to ensure that messages are idempotent. This means that they can be processed multiple times
without causing unwanted effects. This is important because it allows for messages to be retried in the event of a
failure, without the risk of duplicating transactions or data.

Another best practice is to monitor the size of the queue. If the queue becomes too large, it can indicate that the
system is struggling to process messages fast enough. This can be a sign that more resources are needed, or that
there is a problem with the application that needs to be addressed. Regular monitoring and management of the
queue can help to ensure that the system remains healthy and responsive.
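A minimal sketch of both practices, reusing the hypothetical SQS client and queue URL from the sketch above: deduplicating by message ID makes retries safe, and a queue-depth check flags when the system is falling behind.

```python
processed_ids = set()  # in production, keep this in a durable store

def handle_idempotently(message: dict) -> None:
    """Safe to call repeatedly with the same message: duplicates are skipped."""
    if message["MessageId"] in processed_ids:
        return  # already handled; a retry causes no duplicated transaction
    # ... apply the transaction or data update exactly once ...
    processed_ids.add(message["MessageId"])

def queue_depth(queue_url: str) -> int:
    """Approximate number of waiting messages, useful for monitoring."""
    attrs = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["ApproximateNumberOfMessages"]
    )
    return int(attrs["Attributes"]["ApproximateNumberOfMessages"])

if queue_depth(QUEUE_URL) > 10_000:  # threshold is an arbitrary example
    print("Backlog growing: add consumers or investigate the application.")
```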
Encapsulate means grouping together similar functions and activities into distinct units.

As your analytics develop, you will have a list of actions that either transform the data, run it through a model or algorithm, or react to the result. This can get complicated quickly.

1. Listing Steps: Begin by listing all the steps involved in your analytics process, including data transformation, model execution,
and result reaction.
2. Organizing into Groups: Organize these steps into logical groups based on their similarities or dependencies. This helps in
identifying patterns and reducing redundancy.
3. Identifying Change Patterns: Consider which groups of steps are likely to change together in the future. This could be due to
evolving requirements, updates in data sources, or improvements in algorithms.
4. Separating Independent Groups: Separate the groups of steps that are independent of each other into their own encapsulated processes, as sketched below. This ensures that changes made to one group do not affect the functionality of others, promoting modularity and maintainability.
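A minimal sketch of these steps in Python, with each group encapsulated as its own function so one can change without touching the others; the names, thresholds, and logic are purely illustrative:

```python
def transform(raw_readings: list) -> list:
    """Group 1: data transformation, e.g. extracting and cleaning one field."""
    return [r["temp_c"] for r in raw_readings if r.get("temp_c") is not None]

def run_model(values: list) -> float:
    """Group 2: model execution; swapping the algorithm touches only this unit."""
    return sum(values) / len(values) if values else 0.0

def react(score: float) -> None:
    """Group 3: reacting to the result, independent of how it was computed."""
    if score > 30.0:
        print(f"average {score:.1f} exceeds threshold, raising alert")

# The pipeline simply composes the encapsulated units.
readings = [{"temp_c": 29.0}, {"temp_c": 33.5}, {"temp_c": None}]
react(run_model(transform(readings)))
```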
Distributed computing
Also called cluster computing, distributed computing refers to spreading processes across multiple servers using frameworks that abstract the coordination of each individual server. The frameworks make it appear as if you are using one unified system.
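As a small illustration, here is a sketch using Python's standard multiprocessing module to spread work across local worker processes; cluster frameworks such as Apache Spark or Dask apply the same idea across many servers while hiding the coordination:

```python
from multiprocessing import Pool

def summarize(chunk: list) -> float:
    """Work performed independently on each partition of the data."""
    return sum(chunk)

if __name__ == "__main__":
    # Partitions of a larger dataset; on a cluster each would live on its own server.
    chunks = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
    with Pool(processes=3) as pool:
        partials = pool.map(summarize, chunks)  # coordination is abstracted away
    print("total:", sum(partials))  # results combine as if on one unified system
```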
Here are some guidelines on when to keep it simple and on one server:
● There is not much need for scale: Your analytics process needs little change even if the
number of IoT devices and data explodes. For example, the analytics process runs a
forecast on data already summarized by month. The volume of devices makes little
difference in that case.

● Small data instead of big data: The analytics run on a small subset of data without much
impact from data size. Analytics on random samples is an example.

● Resource needs are minimal: Even at orders of magnitude more data, you are unlikely to need more than what is available with a standard server. In this case, keep the process on a single server and avoid the complexity of distribution.
