
5. Hardware and Software for AI


5.1. Data center

A data center is a building, a dedicated space within a building, or a group of buildings used to house
computer systems and associated components, such as telecommunications and storage systems.
Since IT operations are crucial for business continuity, a data center generally includes redundant or
backup components and infrastructure for power supply, data communication connections, environmental
controls (e.g., air conditioning, fire suppression), and various security devices. A large data center is an
industrial-scale operation that can use as much electricity as a small town.

Modernization and data center transformation enhance performance and energy efficiency.
Information security is also a concern: a data center has to offer a secure environment that minimizes
the chance of a security breach. A data center must therefore maintain high standards for assuring the
integrity and functionality of its hosted computing environment.

Industry research company International Data Corporation (IDC) puts the average age of a data center at
nine years. Gartner, another research company, says data centers older than seven years are
obsolete. The growth in data (forecast to reach 163 zettabytes by 2025) is one factor driving the need for
data centers to modernize.

Data Center: Types

Data centers vary in size, from a small server room all the way up to groups of geographically distributed
buildings, but they all share one thing in common: they are a critical business asset where companies
often invest in and deploy the latest advancements in data center networking, compute and storage
technologies.

1. Enterprise data centers are typically built and used by a single organization for its own
internal purposes. These are common among tech giants.
2. Colocation data centers function as a kind of rental property, where the space and resources of a
data center are made available to customers willing to rent them.
3. Managed service data centers offer aspects such as data storage, computing, and other services
as a third party, serving customers directly.
4. Cloud data centers are distributed and are sometimes offered to customers with the help of a
third-party managed service provider.

Data Center Computing

Servers are the engines of the data center. On servers, the processing and memory used to run
applications may be physical, virtualized, distributed across containers, or distributed among remote
nodes in an edge computing model.

Data Center Storage

Data centers host large quantities of sensitive information, both for their own purposes and for the needs
of their customers. Decreasing costs of storage media increase the amount of storage available for
backing up the data either locally, remotely, or both.
Data Center Networks

Data center network equipment includes the cabling, switches, routers, and firewalls that connect servers
to each other and to the outside world. Properly configured and structured, these networks can manage
high volumes of traffic without compromising performance. A typical three-tier network topology is made
up of core switches at the edge connecting the data center to the Internet, a middle aggregation layer
that connects the core layer to the access layer, and the access layer where the servers reside.
Advancements such as hyperscale network security and software-defined networking bring cloud-level
agility and scalability to on-premises networks.

Impact of AI on Data Centers

1. Organizations can deploy AI in the data center for data security. For this purpose, AI can learn normal
network behavior and detect cyber threats based on deviations from that behavior.

2. AI deployed in the data center can detect malware and identify security loopholes in data
center systems.

3. AI-based cybersecurity can thoroughly screen and analyze incoming and outgoing data for security
threats.
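
As a hedged illustration of the first point above, the sketch below trains an unsupervised anomaly detector on features derived from "normal" network flows and flags deviations from that behavior. The feature names, numbers, and the choice of scikit-learn's IsolationForest are assumptions made for illustration, not details from the source.

```python
# Illustrative sketch only: flag network flows that deviate from learned "normal" behavior.
# Feature names and data are hypothetical; a real deployment would use flow logs/telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per flow: [bytes_sent, packets, distinct_ports, duration_s]
normal_flows = rng.normal(loc=[5_000, 40, 3, 12], scale=[1_000, 8, 1, 3], size=(2_000, 4))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_flows)                     # learn what "normal" traffic looks like

new_flows = np.array([
    [5_200, 42, 3, 11],                        # looks like normal traffic
    [90_000, 900, 60, 2],                      # unusually large, bursty flow
])
print(detector.predict(new_flows))             # 1 = normal, -1 = flagged as anomalous
```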

Source: 1. Formal Powerpoint presentation (mitu.co.in)

2. Exploring The Impact Of AI In The Data Center (forbes.com)


5.2 Gateway Edge Computing

Edge Devices:

An edge device is a device that provides an entry point into enterprise or service provider core
networks. Examples include routers, routing switches, integrated access devices (IADs), multiplexers,
and a variety of metropolitan area network (MAN) and wide area network (WAN) access devices. Edge
devices also provide connections into carrier and service provider networks. An edge device that
connects a local area network to a high-speed switch or backbone (such as an ATM switch) may be called
an edge concentrator.

Functions:

In general, edge devices are normally routers that provide authenticated access (most commonly PPPoA
and PPPoE) to faster, more efficient backbone and core networks. The trend is to make the edge device
smart and the core device(s) "dumb and fast", so edge routers often include quality of service (QoS) and
multiservice functions to manage different types of traffic.

Consequently, core networks are often designed with switches that use routing protocols such as Open
Shortest Path First (OSPF) or Multiprotocol Label Switching (MPLS) for reliability and scalability, allowing
edge routers to have redundant links to the core network. Links between core networks are different—
for example, Border Gateway Protocol (BGP) routers are often used for peering exchanges.

Edge Computing:

Edge computing is a distributed computing paradigm that brings computation and data storage closer to
the sources of data. In edge computing, data may travel between different distributed nodes connected
through the Internet and thus requires special encryption mechanisms independent of the cloud. Edge
computing brings analytical computational resources close to the end users and therefore can increase
the responsiveness and throughput of applications.

Applications:

Cloud gaming is one example: some aspects of a game run in the cloud, while the rendered video is
transferred to lightweight clients running on devices such as mobile phones, VR glasses, etc. This type of
streaming is also known as pixel streaming. Other notable applications include connected cars,
autonomous cars, smart cities, Industry 4.0 (smart industry), and home automation systems.

Source: Formal Powerpoint presentation (mitu.co.in)


5.3 Key Processor for AI

1. Two types of processors exist in the market: the conventional CPU and the Graphics Processing Unit
(GPU). A typical CPU is composed of 1 to 8 cores, while a GPU has thousands of cores. A CPU is
good for sequential processing, while a GPU is good at accelerating software with heavy parallel execution.
The GPU was initially dedicated to 3D graphics.

2. When GPUs started to adopt general-purpose cores, it was noticed that this architecture could be used as
a general-purpose massively parallel processor. NVIDIA developed a software framework, Compute
Unified Device Architecture (CUDA), that makes it possible to easily program the GPU for these
applications. With CUDA, GPUs started to be used widely in workstations and supercomputers. Recently,
two key technologies are highlighted in the industry: artificial intelligence (AI) and autonomous
driving cars.

3. The latest multi-GPU system with the P100 makes it possible to finish training in a few hours. For
autonomous driving cars, TOPS-class performance is required to implement perception, localization, and
path planning, and again an SoC with an integrated GPU will play a key role there.

Source: GPU: the biggest key processor for AI and parallel processing (spiedigitallibrary.org)
5.4 CPU and GPU

1. A GPU or ‘Graphics Processing Unit’ is a mini version of an entire computer but only dedicated to a
specific task.

2. This is unlike a CPU, which carries out multiple tasks at the same time. A GPU comes with its own
processor embedded on its own board, coupled with VRAM (video RAM), and also a proper
thermal design for ventilation and cooling.

In the term 'Graphics Processing Unit', 'Graphics' refers to rendering an image at specified coordinates
in a 2D or 3D space. A viewport or viewpoint is a viewer's perspective of an object, depending
upon the type of projection used. Rasterisation and ray tracing are two ways of rendering 3D
scenes; both are based on a type of projection called perspective projection.

3. One of the most admired characteristics of a GPU is the ability to compute processes in parallel. This
is the point where the concept of parallel computing kicks in.

4. A CPU in general completes its task in a sequential manner. A CPU can be divided into cores and each
core takes up one task at a time.

5. General-purpose CPUs struggle when operating on large amounts of data, e.g., performing linear
algebra operations on matrices with tens or hundreds of thousands of floating-point numbers. Under the
hood, deep neural networks are mostly composed of operations like matrix multiplications and vector
additions.

6. GPUs were developed (primarily catering to the video gaming industry) to handle a massive degree of
parallel computation using thousands of tiny computing cores. They also feature large memory
bandwidth to deal with the rapid dataflow (processing unit to cache to the slower main memory and
back) needed for these computations when the neural network is training through hundreds of epochs.
This makes them the ideal commodity hardware to deal with the computation load of computer vision
tasks.
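
As a minimal sketch of the parallel matrix math described in points 5 and 6, the snippet below times the same large matrix multiplication on the CPU and, if one is available, on a CUDA GPU using PyTorch. The matrix size and the use of PyTorch are illustrative assumptions, not details from the source.

```python
# Illustrative sketch: the same matrix multiplication on CPU and (if present) GPU.
# Matrix size is arbitrary; install PyTorch separately (pip install torch).
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()               # make sure setup has finished
    start = time.perf_counter()
    c = a @ b                                  # the kind of op deep nets are built from
    if device == "cuda":
        torch.cuda.synchronize()               # wait for the GPU kernel to complete
    return time.perf_counter() - start

print("CPU :", time_matmul("cpu"))
if torch.cuda.is_available():
    print("GPU :", time_matmul("cuda"))
```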

Source: Formal Powerpoint presentation (mitu.co.in)


5.5 Field Programmable Gate Array

Field-programmable gate array (FPGA) chips enable you to reprogram logic gates. You can use FPGA
technology to overwrite chip configurations and create custom circuits. FPGA chips are especially useful
for machine learning and deep learning.

There is a wide range of FPGA applications. You can configure an FPGA with thousands of memory units.
This enables the circuits to work in a massively parallel computing model, like GPUs. With FPGAs, you
gain access to an adaptable architecture that enables you to optimize throughput. This means you can
use FPGAs to meet or exceed the performance of GPUs.

Compared to CPUs and GPUs, FPGAs are well suited for embedded applications and have lower power
consumption. These circuits can be used with custom data types and are not limited by architecture like
GPUs. Also, the programmability of FPGAs makes it easier to adapt them for safety and security concerns.
FPGAs have been successfully used in safety-critical, regulated environments like ADAS (Advanced
Driver Assistance Systems).

GPU vs FPGA for Machine Learning

Compute power

According to research by Xilinx, FPGAs can produce roughly the same or greater compute power as
comparable GPUs. FPGAs also have better on-chip memory, resulting in higher compute capability. This
memory reduces bottlenecks caused by external memory access and reduces the cost and power
required for high memory bandwidth solutions.

In computations, FPGAs can support a full range of data types, including FP32, INT8, binary, and custom
types. With FPGAs you can make modifications as needed, while GPUs require vendors to adapt
architectures to provide compatibility. This may mean pausing projects while vendors make changes.

Efficiency and power

According to research by Microsoft, FPGAs can perform almost 10x better than GPUs in terms of power
consumption. The reason for this is that GPUs require complex compute resources to enable software
programmability, which consumes more power.

This doesn't mean that all GPUs are less efficient. The NVIDIA V100 has been found to provide efficiency
comparable to Xilinx FPGAs for deep learning tasks, due to its hardened Tensor Cores. However,
for general-purpose workloads this GPU isn't comparable.

Functional safety

GPUs were designed for high-performance computing systems and graphics workloads. Safety concerns
were not relevant. However, GPUs have been used in applications, like ADAS, where functional safety is
a concern. In these cases, GPUs must be designed to meet safety requirements, which can be time-
consuming for vendors.
In contrast, the programmability of FPGAs enables you to design them in a way that meets whatever
safety requirements you face. These circuits have been successfully used in automation, avionics, and
defense without custom manufacturing requirements.

Using FPGAs for deep learning enables you to optimize throughput and adapt processors to meet the
specific needs of different deep learning architectures. Increasing network traffic and the need for high
data processing across data centers are creating strong growth opportunities for the FPGA market. FPGA
devices are integrated into various data center hardware such as storage and server racks and networking
equipment, among others. These devices help data center hardware reduce network latency and
improve storage and computing applications.

Advantages of FPGA technology include:

Flexibility—reprogrammability is the greatest benefit of FPGA for deep learning, and adds significant
flexibility to operations. You can program individual blocks or your entire circuit to fit the requirements
of your particular algorithm. If the programming doesn’t fit as well as you expected, you can modify it as
needed.

Parallelism—you can switch between programs to adapt to changing workloads with an FPGA. You can
also handle multiple workloads without sacrificing performance. This enables you to work on different
stages of tasks concurrently which you can’t do with GPUs.

Decreased latency—larger memory bandwidths result in lower latency than GPUs. This enables you to
process significant amounts of data in real-time, including streaming data. Additionally, FPGAs can
provide extremely precise timing and reliability without sacrificing flexibility.

Energy efficiency—lower power requirements for FPGAs can help reduce overall power consumption for
machine learning and deep learning implementations. This can reduce the overall costs of training and
potentially extend the life of equipment.

Disadvantages of FPGA technology include:

Programming—programming FPGA circuits requires significant expertise that is not easy to obtain.
For example, programmers must be familiar with a hardware description language (HDL). A lack of
experienced programmers can make it difficult to adopt FPGAs reliably.

Implementation complexity—implementing FPGAs for deep learning is relatively untested and may be
too risky for conservative organizations. Lack of support and minimal community knowledge means that
FPGAs are not yet widely accessible for deep learning applications.

Expense—the cost of the FPGAs themselves, in combination with implementation and programming
costs, makes the circuits a considerable investment. This technology is currently ill-suited for smaller
projects and is better justified by larger, ongoing implementations.
Lack of libraries—currently, there are very few, if any, deep learning libraries that support FPGAs without
modification. There is, however, a project called LeFlow that researchers from the University of British
Columbia are working on. This project is attempting to create compatibility between FPGAs and
TensorFlow.
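
As a purely illustrative aside (not LeFlow's actual workflow or API), the snippet below shows the kind of small, fixed-shape TensorFlow computation that FPGA compilation flows generally target; the function, shapes, and values are hypothetical.

```python
# Hypothetical example of a small, fixed-shape TensorFlow computation of the kind
# FPGA tool flows aim to compile; this is plain TensorFlow, not LeFlow's API.
import tensorflow as tf

@tf.function
def dense_layer(x: tf.Tensor, w: tf.Tensor, b: tf.Tensor) -> tf.Tensor:
    # One fully connected layer: matrix multiply, bias add, ReLU activation.
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([1, 8])      # one input vector of 8 features
w = tf.random.normal([8, 4])      # weights for a layer with 4 units
b = tf.zeros([4])                 # bias
print(dense_layer(x, w, b))
```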

Source: 1. Field Programmable Gate Array (FPGA) Market Size & Share | Global Forecasts 2027
(gminsights.com)

2. FPGA for Deep Learning (run.ai)


6. Applications for AI

6.1 Robotic Process Automation and Chatbots

Robotic Process Automation

RPA is a tool or method for automating manual, time-consuming, complex, rule-based workflows
using software robots. These software robots, which are traditionally used for back-end administrative
IT work, can perform various tasks and transactions in databases, enterprise systems, and websites
more efficiently than humans and other automation solutions by reducing cycle times. They are often
used either to replace the people who interact with these applications or to take over the responsibility
of interacting with them.

These logic-driven, algorithmic robots execute pre-programmed rules on structured and semi-
structured data, although the former is still the most common. RPA robots mimic humans and the manner
in which they interact with applications, the decisions they make in relation to these applications, and the
logical processes they follow. Unlike chatbots, they do not require active human intelligence to manage,
except in the case of exceptions or errors and during initial deployment. More complex
implementations have focused on robots and employees working together on semi-automated processes.

Chatbots

Chatbots are intermediaries between systems that you can talk to. They communicate with applications,
things, and people to take action on revenue-generating activities immediately. A chatbot is basically a
program that's designed to talk to you and collect information from your conversation.

Depending on how it’s developed or how intelligently it’s been built, a bot can use that information to
do things for you, such as book you a flight, suggest personalized offers or promotions, pass you along to
a human in times of frustration, based on how you’d like the bot to help. Common examples of chatbots
for customer use include providing one-to-one guidance, triaging customer service and support
requests, assisting in the completion of transactions or data entry, the delivery of right-time, right-fit
offers and promotions, and much more.

Chatbots are typically composed of fundamental elements that serve to provide rich conversational
interfaces for customers and employees:

• Tasks

• Channels

• NLP & Speech

• Intelligence

Advanced chatbot platforms also provide the following fundamental elements:

• Bot & Dialog Development Tools

• Platform Middleware
• Enterprise Capabilities (advanced encryption, administration, and compliance functionality)

Blending Together Chatbots and RPA in the Enterprise

Enterprises are constantly faced with new threats to nearly every aspect of their businesses, from
emerging technology to never-ending media hype surrounding new products to tectonic shifts in
consumer preferences and disposable income. Cutting-edge businesses and start-ups have always been
at the forefront of innovation, with many choosing the role of the early adopter of promising, potentially
paradigm shifting technology. But, like most new tech, high costs, long development times, functionality
issues, and integration challenges from fragmented legacy systems often delay implementation and
reduce the likelihood of a successful rollout.

Chatbots and automation solutions, such as RPA robots, largely avoid these stereotypes, and in many
ways, exist as a direct result of these challenges. While chatbot technology and robotic process
automation have advanced significantly in just a few short years, there are some limitations to the
usability and impact of each. With a focused and systemic approach, however, both solutions can act in
concert to address and solve key pain points in back-end systems (RPA), such as Enterprise Resource
Planning suites, front-end systems (chatbots), such as Customer Relationship Management applications,
and on the front lines directly interacting with customers (chatbots).

Both technologies are relatively low cost. Both can be deployed quickly, and both can offer cost
savings and operational flexibility. It's likely we will see more organizations deploy either chatbots or
RPA tools to complement an existing legacy deployment of one of these solutions, or even consider a
dual deployment in some fashion. Here are two example use cases to show how these technologies
could work to help transform your organization, drive greater efficiencies, and improve your bottom
line.

1. Chatbots (front office bots) converse with customers or employees to send information, complete
tasks or capture their requests. Based on the use case, a bot needs to integrate with and access
information from different enterprise systems.

2. Integration of RPA helps chatbots effectively navigate legacy enterprise systems that do not
have modern APIs. The RPA-chatbot integration is a powerful combination and a serious game-changer
for two reasons:

a. An RPA-powered chatbot can integrate with disparate and multiple back-end enterprise systems. RPA
enables chatbots to retrieve information from these systems and handle more complex, real-time
customer/employee requests and queries at scale.

b. In the same way, chatbots, upon a user’s request, can trigger RPA to perform specific mundane tasks
without routing them to a human agent.
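
The following is a hedged, purely illustrative Python sketch of this chatbot-plus-RPA pattern: a toy intent detector recognizes a request and triggers a stand-in RPA task that would otherwise drive a legacy system. Every function and identifier here is invented for illustration; real deployments would use a vendor's chatbot and RPA SDKs.

```python
# Purely illustrative sketch of a chatbot triggering an RPA task; all names are hypothetical.

def detect_intent(message: str) -> str:
    """Toy intent detection: a real chatbot would use an NLP/NLU service."""
    if "refund" in message.lower():
        return "request_refund"
    return "unknown"

def rpa_process_refund(customer_id: str, order_id: str) -> str:
    """Stand-in for an RPA robot that would navigate a legacy ERP screen without an API."""
    # In practice this step would drive the legacy UI (clicks, keystrokes, form fills).
    return f"Refund for order {order_id} submitted on behalf of customer {customer_id}."

def handle_message(message: str, customer_id: str, order_id: str) -> str:
    intent = detect_intent(message)
    if intent == "request_refund":
        # The chatbot delegates the mundane back-office work to the RPA robot.
        return rpa_process_refund(customer_id, order_id)
    return "Sorry, I didn't understand that. Let me route you to a human agent."

print(handle_message("I'd like a refund please", customer_id="C-42", order_id="O-1001"))
```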

Source: 1. WP-RPA-Software-Robots-vs-Chatbots.pdf (smallake.kr)


6.2 NLP

Natural Language Processing—also known as NLP or computational linguistics—is a subfield of Artificial
Intelligence (AI), Machine Learning (ML), and linguistics.

A branch of AI, it helps computers or machines understand, manipulate, and interpret human language.
For several decades now, humans have been communicating with machines through coding and
programming languages, which, in binary form, consist of millions of zeroes and ones. According to
Gartner, by 2025, nearly 60% of analytical queries will be generated through speech, Natural Language
Processing (NLP), or voice, or will be generated automatically.

As mentioned, we learn language context through associated relationships from childhood and through
education. A symbolic approach works the same way, but for AI systems. It embeds a knowledge graph,
a repository built to include the concepts of a language and the relationships between these concepts.
Commonly used for NLP and natural language understanding (NLU), symbolic AI then leverages the
knowledge graph to understand the meaning of words in context and follows an IF-THEN logic structure:
when an IF linguistic condition is met, a THEN output is generated.
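
A minimal, hypothetical sketch of that IF-THEN style in Python follows; the rules and phrases are invented, and a real symbolic NLP system would draw its conditions from a knowledge graph rather than a hard-coded list.

```python
# Toy illustration of symbolic IF-THEN rules for interpreting a sentence.
# The rules are hypothetical; a real system would consult a knowledge graph.
RULES = [
    ("book", "flight", "intent: book_flight"),
    ("cancel", "order", "intent: cancel_order"),
]

def interpret(sentence: str) -> str:
    words = sentence.lower().split()
    for condition_a, condition_b, output in RULES:
        # IF both linguistic conditions are met, THEN the associated output is generated.
        if condition_a in words and condition_b in words:
            return output
    return "intent: unknown"

print(interpret("Please book a flight to Pune"))    # intent: book_flight
print(interpret("Cancel my order from yesterday"))  # intent: cancel_order
```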

This method helps to address a major flaw often associated with AI: the black box, where a system
performs tasks and reaches results that cannot be audited or explained. This creates a huge problem,
particularly when it comes to concerns over privacy and bias. The black box scenario is endemic to the
best-known AI technique, ML, and presents an unacceptable level of risk for enterprises, where issues
could damage a reputation, deter or even harm consumers, and undermine support for future AI
initiatives.

There are several tools available for working in NLP. The Python programming language provides the
Natural Language Toolkit (NLTK) and other open-source libraries and educational resources for NLP
programming. Statistical analysis combines machine learning and deep learning models with computer
algorithms to extract and differentiate text and voice data and statistically assign meaning to all the
elements.
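
As a small hedged sketch of working with NLTK, the snippet below tokenizes a sentence and tags parts of speech; the example sentence is arbitrary, and the named NLTK data packages are assumptions about a typical setup.

```python
# Minimal NLTK sketch: tokenize a sentence and tag each word's part of speech.
# Requires: pip install nltk
import nltk

nltk.download("punkt")                         # tokenizer models
nltk.download("averaged_perceptron_tagger")    # part-of-speech tagger model

sentence = "Natural language processing helps computers interpret human language."
tokens = nltk.word_tokenize(sentence)
print(tokens)                                  # ['Natural', 'language', 'processing', ...]
print(nltk.pos_tag(tokens))                    # [('Natural', 'JJ'), ('language', 'NN'), ...]
```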

1. Speech Recognition
Speech recognition is a technology that enables the computer to convert voice input data to a
machine-readable format. There are a lot of fields where speech recognition is used, like virtual
assistants, adding speech-to-text, translating speech, sending emails, etc.
It is used in search engines, where users can voice out their search requirements
and get the desired result, making the work easier than typing out the entire command.

2. Autocorrect and Autoprediction
There are many software tools available nowadays that check the grammar and spelling of the text we
type and save us from embarrassing spelling and grammatical mistakes in our emails, texts, or
other documents. NLP plays an important role in those tools and functions.
This is one of the most widely used applications of NLP. These tools offer a lot of features,
like suggesting synonyms, correcting grammar and spelling, rephrasing sentences, and giving
clarity to the document, and can even predict the tone that the user might intend for a sentence.
3. Translation
Social media has brought the entire world together, but with that unity come challenges like the
language barrier. With different translation tools that work on their own or are integrated
within other applications, this hurdle has largely been overcome.
This is called machine translation, which uses Natural Language Processing and has improved a great
deal thanks to the availability of huge amounts of data, powerful machines, and advances in machine
learning and neural networks.

4. Text Summarisation
There is a huge amount of data available on the internet, and it is very hard to go through all of it to
extract a single piece of information. With the help of NLP, text summarization has been made available
to users. This helps simplify huge amounts of data in articles, news, research papers, etc.; a simple
frequency-based sketch follows this list. This application is used in investigative discovery to identify
patterns in written reports, in social media analytics to track awareness and identify influencers, and in
subject-matter expertise to classify content into meaningful topics.
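
As a hedged illustration of extractive text summarization, the sketch below scores sentences by word frequency and keeps the top-scoring ones. It is a toy approach with invented example text; production summarizers typically rely on trained models.

```python
# Toy extractive summarizer: score sentences by how frequent their words are overall.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "for", "on", "that"}

def summarize(text: str, max_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence: str) -> int:
        # Score a sentence by the total corpus frequency of its words.
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    # Keep the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in top)

text = ("NLP helps computers understand human language. It powers translation, "
        "summarization, and chatbots. Summarization condenses long documents. "
        "Many organizations use NLP to process text at scale.")
print(summarize(text))
```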

Source: 1. Top 10 Applications for Natural Language Processing (NLP) | Analytics Steps

2. Formal Powerpoint presentation (mitu.co.in)


6.3 Image Processing

Image processing is the procedure of manipulating an image for two prime purposes: enhancing the
image quality or extracting vital details from an image. Most organizations follow two main
kinds of image processing. The first is analog image processing, in which hard copies of images are
processed. The second is digital image processing, in which digital images are manipulated with the
support of computer algorithms.

Image processing is broadly incorporated for the following objectives:

1. Processing images so they can be presented in a meaningful way through visualization
2. Enhancing the image quality of processed images
3. Easing the process of image retrieval
4. Helping you measure the objects in an image through object recognition
5. Enabling object distinguishing and classification in an image

In simple terms, image processing is all about extracting the specific features of an object.
Artificial intelligence models in image processing are widely used for image detection
and image classification.

The concept of image classification is widely categorized into two sub-domains: classifiers and
detectors.

Need for Artificial Intelligence in Image Processing

1. Image augmentation
2. Image enhancement
3. Processing of colored images
4. Image Reconstruction
5. Morphological processing
6. Image recognition
7. Image visualization

However, most business organizations find it difficult to efficiently process the bulk of images or
data they receive every day. To overcome such challenges, implementing machine learning
algorithms can turn out to be productive. Machine learning and deep learning techniques
speed up image processing.

Image Processing: Open Libraries

Computer vision libraries contain common image processing functions and algorithms. There are several
open-source libraries you can use when developing image processing and computer vision features:

1. OpenCV
2. Visualization Library
3. VGG Image Annotator
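
As a small hedged sketch using OpenCV (the first library listed above), the snippet below loads an image, converts it to grayscale, and runs Canny edge detection; the file names and thresholds are placeholder assumptions.

```python
# Minimal OpenCV sketch: load an image, convert to grayscale, detect edges.
# Requires: pip install opencv-python ; "input.jpg" is a placeholder file name.
import cv2

image = cv2.imread("input.jpg")                            # read the image from disk (BGR)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)             # convert to a single-channel image
edges = cv2.Canny(gray, threshold1=100, threshold2=200)    # Canny edge detection
cv2.imwrite("edges.jpg", edges)                            # save the extracted edge map
print("Edge map saved to edges.jpg")
```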

Machine Learning Frameworks used for Image Processing

To make development a bit faster and easier, you can use special platforms and frameworks. Below, we
take a look at some of the most popular ones:

1. TensorFlow – Google's TensorFlow is a popular open-source framework with support for
machine learning and deep learning.
2. PyTorch – PyTorch is an open-source deep learning framework initially created by the Facebook
AI Research lab (FAIR).
3. MATLAB Image Processing Toolbox – MATLAB is an abbreviation for matrix laboratory. It's the
name of both a popular platform for solving scientific and mathematical problems and a
programming language.
4. Microsoft Computer Vision – Computer Vision is a cloud-based service provided by Microsoft
that gives you access to advanced algorithms for image processing and data extraction. It allows
you to analyze visual features and characteristics of an image, moderate image content, and
extract text from images.
5. Google Cloud Vision – Cloud Vision is part of the Google Cloud platform and offers a set of image
processing features. It provides an API for integrating features such as image labeling and
classification, object localization, and object recognition.
6. Google Colaboratory (Colab) – Google Colaboratory, otherwise known as Colab, is a free cloud
service that can be used not only for improving your coding skills but also for developing deep
learning applications from scratch.
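
To make item 1 (TensorFlow) concrete, here is a hedged sketch of a tiny image classifier defined with TensorFlow/Keras; the choice of the built-in MNIST digits dataset, the layer sizes, and the single training epoch are illustrative assumptions, not details from the source.

```python
# Illustrative TensorFlow/Keras sketch: a tiny classifier for 28x28 grayscale images.
# Dataset choice (MNIST) and layer sizes are arbitrary illustration.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0            # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),           # flatten each image to a vector
    tf.keras.layers.Dense(128, activation="relu"),            # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),          # one probability per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=1, verbose=1)              # short demo run
print(model.evaluate(x_test, y_test, verbose=0))              # [loss, accuracy]
```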

Progress in the implementation of AI algorithms for image processing is impressive and opens a wide
range of opportunities in fields from medicine and agriculture to retail and law enforcement.

Source: 1. Application of Artificial Intelligence in Image Processing (allianzeinfosoft.com)

2. Formal Powerpoint presentation (mitu.co.in)


6.4 Speech Recognition

Speech recognition refers to a computer interpreting the words spoken by a person and converting
them into a format that is understandable by a machine. Depending on the end goal, the input is then
converted to text, voice, or another required format.
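
A minimal sketch of speech-to-text in Python using the open-source SpeechRecognition package is shown below; the audio file name is a placeholder, and the Google Web Speech API backend is just one of several recognizers the library supports.

```python
# Hedged sketch: transcribe a short WAV file with the SpeechRecognition package.
# Requires: pip install SpeechRecognition ; "sample.wav" is a placeholder file name.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("sample.wav") as source:
    audio = recognizer.record(source)          # read the entire audio file into memory

try:
    text = recognizer.recognize_google(audio)  # send audio to the Google Web Speech API
    print("Transcript:", text)
except sr.UnknownValueError:
    print("Speech was not intelligible.")
except sr.RequestError as err:
    print("Could not reach the recognition service:", err)
```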

For instance, Apple's Siri and Amazon's Alexa use AI-powered speech recognition to provide voice or text
support, whereas voice-to-text applications like Google Dictate transcribe your dictated words to text.

Voice recognition is another form of speech recognition where a source sound is recognized and
matched to a person’s voice.

Speech recognition AI applications have seen significant growth in numbers in recent times as
businesses are increasingly adopting digital assistants and automated support to streamline their
services.

Speech recognition is fast overcoming the challenges of poor recording equipment and noise,
variations in people's voices, accents, dialects, semantics, contexts, etc., using artificial
intelligence and machine learning.

This also includes the challenges of understanding human disposition and varying human language
elements like colloquialisms, acronyms, etc. The technology can now provide around 95% accuracy,
compared to traditional models of speech recognition, which is on par with regular human
communication.

Speech Recognition and NLP

Natural language processing (NLP) is a division of artificial intelligence that involves analyzing natural
language data and converting it into a machine-readable format. Speech recognition and AI play an
integral role in NLP models in improving the accuracy and efficiency of human language recognition.

From smart home devices and appliances that take instructions, and can be switched on and off
remotely, digital assistants that can set reminders, schedule meetings, recognize a song playing in a pub,
to search engines that respond with relevant search results to user queries, speech recognition has
become an indispensable part of our lives.

Use Cases

1. The uses of speech recognition applications in different fields:

– Voice-based speech recognition software is now used to initiate purchases, send emails, transcribe
meetings, doctor appointments, and court proceedings, etc.

– Virtual assistants or digital assistants and smart home devices use voice recognition software to
answer questions, provide weather news, play music, check traffic, place an order, and so on.

2. Companies like Venmo and PayPal allow customers to make transactions using voice assistants.
Several banks in North America also provide online banking using voice-based software.

3. E-commerce is significantly powered by voice-based assistants and allows users to make purchases
quickly and seamlessly.

4. Speech recognition is poised to impact transportation services and streamline scheduling, routing,
and navigating across cities.

Global Impact

1. Speech recognition has by far been one of the most powerful products of technological advancement.
As the likes of Siri, Alexa, Echo Dot, Google Assistant, and Google Dictate continue to make our daily
lives easier, the demand for such automated technologies is only bound to increase.

2. Businesses worldwide are investing in automating their services to improve operational efficiency,
increase productivity and accuracy, and make data-driven decisions by studying customer behaviours
and purchasing habits.

Source: Formal Powerpoint presentation (mitu.co.in)
