
ST PAUL’S HOSPITAL

MILLENNIUM MEDICAL COLLEGE

Introduction to Emerging Technologies

(EMTE1011/1012)
Belay Alemayehu (MSc)

E-mail: [email protected]
Introduction to Emerging Technologies
Emerging technology is a term generally used to describe a new technology, but it may also refer to the continuing development of an existing technology. It covers technologies that are currently developing, or that are expected to be available within the next five to ten years, and the term is usually reserved for technologies that are creating, or are expected to create, significant social or economic effects. Technological evolution is a theory of radical transformation of society through technological development.

Introduction to Information Technology 2


Introduction to Emerging Technologies
In other words, an emerging technology can be defined as "a radically novel and relatively fast-growing technology characterized by a certain degree of coherence persisting over time and with the potential to exert a considerable impact on the socio-economic domain(s), which is observed in terms of the composition of actors, institutions and patterns of interactions among those, along with the associated knowledge production processes." Its most prominent impact, however, lies in the future, and so in the emergence phase it is still somewhat uncertain and ambiguous.


Introduction to the Industrial Revolution (IR)

• The primary industry involves getting raw materials, e.g. mining, farming, and fishing.
• The secondary industry involves manufacturing, e.g. making cars and steel.
• Tertiary industries provide a service, e.g. teaching and nursing.
• The quaternary industry involves research and development industries, e.g. IT.


1st Industrial Revolution

The Industrial Revolution (IR) is described as a transition to new manufacturing processes. The term was first coined in the 1760s, when this revolution began. The transitions in the first IR included going from hand production methods to machines and the increasing use of steam power.


2nd Industrial Revolution
The Second IR, also known as the Technological Revolution, began sometime in the 1870s. The advancements in IR 2.0 included the development of methods for manufacturing interchangeable parts and the widespread adoption of pre-existing technological systems such as telegraph and railroad networks. This adoption allowed the vast movement of people and ideas, enhancing communication. Moreover, new technological systems were introduced, such as electrical power.


3rd Industrial Revolution

The Third Industrial Revolution began in the 1970s through partial automation using memory-programmable controls and computers. Since the introduction of these technologies, we are now able to automate an entire production process without human assistance. Known examples of this are robots that perform programmed sequences without human intervention.


4th Industrial Revolution
It builds on the developments of the Third Industrial Revolution. Production systems that already have computer technology are expanded by a network connection and have a digital twin on the Internet, so to speak. These connections allow communication with other facilities and the output of information about themselves. This is the next step in production automation. The networking of all systems leads to "cyber-physical production systems" and therefore smart factories, in which production systems, components, and people communicate via a network and production is nearly autonomous.


Role of Data for Emerging Technologies
Data is regarded as the new oil and a strategic asset, since we are living in the age of big data; it drives, or even determines, the future of science, technology, the economy, and possibly everything in our world today and tomorrow. Data has not only triggered tremendous hype and buzz but, more importantly, presents enormous challenges that in turn bring incredible innovation and economic opportunities. This reshaping and paradigm shift is driven not just by data itself but by all the other aspects that could be created, transformed, and/or adjusted by understanding, exploring, and utilizing data.


Enabling devices and network (Programmable
devices)
In the world of digital electronic systems, there are four basic kinds of devices: memory, microprocessors, logic, and networks. Memory devices store information such as the contents of a spreadsheet or database. Microprocessors execute software instructions to perform a wide variety of tasks, such as running a word processing program or video game. Logic devices provide specific functions, including device-to-device interfacing, data communication, signal processing, data display, timing and control operations, and almost every other function a system must perform.


Human to Machine Interaction

Human-machine interaction (HMI) refers to the communication and interaction between a human and a machine via a user interface. Nowadays, natural user interfaces such as gestures have gained increasing attention, as they allow humans to control machines through natural and intuitive behaviors.


Future Trends in Emerging Technologies
1. Artificial Intelligence (AI) and Machine Learning
2. Robotic Process Automation (RPA)
3. Edge Computing
4. Quantum Computing
5. Virtual Reality and Augmented Reality
6. Blockchain
7. Internet of Things (IoT)
8. 5G
9. Cybersecurity


Data Science

Data science is a multi-disciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured, semi-structured, and unstructured data. Data science is much more than simply analyzing data. It offers a range of roles and requires a range of skills.


Data Science
• As an academic discipline and profession, data science continues to
evolve as one of the most promising and in-demand career paths for
skilled professionals. Today, successful data professionals
understand that they must advance past the traditional skills of
analyzing large amounts of data, data mining, and programming
skills. In order to uncover useful intelligence for their organizations,
data scientists must master the full spectrum of the data science life
cycle and possess a level of flexibility and understanding to maximize
returns at each phase of the process.


Data Science
• Data scientists need to be curious and results-oriented, with exceptional industry-specific knowledge and communication skills that allow them to explain highly technical results to their non-technical counterparts. They possess a strong quantitative background in statistics and linear algebra as well as programming knowledge with focuses on data warehousing, mining, and modeling to build and analyze algorithms. In this chapter, we will talk about basic definitions of data and information, data types and representation, the data value chain, and basic concepts of big data.


What are data and information?
• Data can be defined as a representation of facts, concepts, or instructions in a formalized manner, which should be suitable for communication, interpretation, or processing by human or electronic machines. It can be described as unprocessed facts and figures. It is represented with the help of characters such as alphabets (A-Z, a-z), digits (0-9) or special characters (+, -, /, *, <, >, =, etc.).
• Information, by contrast, is the processed data on which decisions and actions are based. It is data that has been processed into a form that is meaningful to the recipient and is of real or perceived value in the current or prospective action or decision of the recipient. Furthermore, information is interpreted data; it is created from organized, structured, and processed data in a particular context.
Data Processing Cycle

• Data processing is the re-structuring or re-ordering of data by people or machines to increase its usefulness and add value for a particular purpose. Data processing consists of three basic steps: input, processing, and output. These three steps constitute the data processing cycle.
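The three steps above can be sketched in a few lines of Python. The data (exam scores) and the processing step (averaging) are hypothetical examples chosen only to illustrate the input, processing, and output stages.

```python
# A minimal sketch of the data processing cycle: input -> processing -> output.

def data_processing_cycle(raw_scores):
    # Input: accept unprocessed facts and figures.
    data = list(raw_scores)
    # Processing: restructure the data to add value (here, compute the average).
    average = sum(data) / len(data)
    # Output: return information that is meaningful to the recipient.
    return round(average, 2)

print(data_processing_cycle([70, 85, 90]))  # prints 81.67
```

The raw list of scores is the data; the average produced at the end is the information derived from it.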


Data types and their representation

• Data types can be described from diverse perspectives. In computer science and computer programming, for instance, a data type is simply an attribute of data that tells the compiler or interpreter how the programmer intends to use the data.
• Data types from a computer programming perspective
Almost all programming languages explicitly include the notion of data type, though different languages may use different terminology. Common data types include:


Data types and their representation

– Integers (int): used to store whole numbers, mathematically known as integers
– Booleans (bool): used to represent values restricted to one of two values: true or false
– Characters (char): used to store a single character
– Floating-point numbers (float): used to store real numbers
– Alphanumeric strings (string): used to store a combination of characters and numbers
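The common data types listed above can be illustrated with Python's built-in types. Note that Python has no separate char type, so a single character is simply a string of length 1; the variable names and values are arbitrary examples.

```python
# The common data types, shown with Python's built-in types.
whole_number = 42        # integer (int)
flag = True              # Boolean (bool): true or false
letter = "A"             # character: a one-character string in Python
real_number = 3.14       # floating-point number (float)
label = "Room12B"        # alphanumeric string

# The interpreter uses each value's type to decide how it may be used.
for value in (whole_number, flag, letter, real_number, label):
    print(type(value).__name__, value)
```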


Data types from Data Analytics perspective

• From a data analytics point of view, it is important to understand that there are three common data types or structures: structured, semi-structured, and unstructured data. Fig. 2.2 below describes the three types of data and metadata.


Structured Data

• Structured data is data that adheres to a pre-defined data model and is therefore straightforward to analyze. Structured data conforms to a tabular format with a relationship between the different rows and columns. Common examples of structured data are Excel files or SQL databases. Each of these has structured rows and columns that can be sorted.
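A minimal sketch of structured data, using the standard-library sqlite3 module with an in-memory SQL database. The table name, columns, and rows are hypothetical; the point is that because every row follows the same pre-defined schema, the data can be queried and sorted directly.

```python
import sqlite3

# Structured data: rows and columns conforming to a pre-defined model.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id INTEGER, name TEXT, score REAL)")
conn.executemany("INSERT INTO students VALUES (?, ?, ?)",
                 [(1, "Abel", 88.5), (2, "Sara", 92.0)])

# The fixed schema makes sorting and querying straightforward.
for row in conn.execute("SELECT name, score FROM students ORDER BY score DESC"):
    print(row)
```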


Semi-structured Data

• Semi-structured data is a form of structured data that does not conform to the formal structure of data models associated with relational databases or other forms of data tables, but nonetheless contains tags or other markers to separate semantic elements and enforce hierarchies of records and fields within the data. Therefore, it is also known as a self-describing structure. JSON and XML are common forms of semi-structured data.
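A short sketch of semi-structured data using the standard-library json module. The JSON document below is hypothetical; what matters is that its tags (keys) separate the semantic elements and enforce a hierarchy, so the structure describes itself without any fixed relational schema.

```python
import json

# A self-describing JSON document: keys act as tags, nesting as hierarchy.
doc = '{"name": "Abel", "courses": ["EMTE1011"], "contact": {"city": "Addis Ababa"}}'
record = json.loads(doc)

# Elements are reached through the tags rather than through fixed columns.
print(record["name"])
print(record["contact"]["city"])
```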


Unstructured Data

• Unstructured data is information that either does not have a predefined data model or is not organized in a pre-defined manner. Unstructured information is typically text-heavy but may contain data such as dates, numbers, and facts as well. This results in irregularities and ambiguities that make it difficult to understand using traditional programs, as compared to data stored in structured databases. Common examples of unstructured data include audio, video files, or NoSQL databases.


Metadata – Data about Data

• The last category of data type is metadata. From a technical point of view, this is not a separate data structure, but it is one of the most important elements for Big Data analysis and big data solutions. Metadata is data about data. It provides additional information about a specific set of data.
• In a set of photographs, for example, metadata could describe when and where the photos were taken. The metadata then provides fields for dates and locations which, by themselves, can be considered structured data. For this reason, metadata is frequently used by Big Data solutions for initial analysis.
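The photograph example above can be sketched as a small Python structure. All field names and values are hypothetical: the image bytes are the data itself, and the date/location fields describing them are the metadata, which is itself structured and directly analyzable.

```python
# Metadata is "data about data": the photo bytes are the data,
# the descriptive fields alongside them are the metadata.
photo = {
    "pixels": b"...raw image bytes...",   # the data itself (unstructured)
    "metadata": {                          # data about the data (structured)
        "taken_on": "2021-03-04",
        "location": "Addis Ababa",
        "camera": "Phone-X",
    },
}

# Because the metadata fields are structured, analysis can start with them,
# e.g. filtering a photo collection by date without decoding any images.
print(photo["metadata"]["taken_on"])
```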


Data value Chain

• The Data Value Chain describes the information flow within a big data system as a series of steps needed to generate value and useful insights from data. The Big Data Value Chain identifies the following key high-level activities:


Data Acquisition

• Data acquisition is the process of gathering, filtering, and cleaning data before it is put in a data warehouse or any other storage solution on which data analysis can be carried out. It is one of the major big data challenges in terms of infrastructure requirements. The infrastructure required to support the acquisition of big data must deliver low, predictable latency both in capturing data and in executing queries; be able to handle very high transaction volumes, often in a distributed environment; and support flexible and dynamic data structures.


Data Analysis

• Data analysis is concerned with making the raw data that has been acquired amenable to use in decision-making as well as domain-specific usage. It involves exploring, transforming, and modeling data with the goal of highlighting relevant data and synthesizing and extracting useful hidden information with high potential from a business point of view. Related areas include data mining, business intelligence, and machine learning.


Data Curation
• Data curation is the active management of data over its life cycle to ensure it meets the necessary data quality requirements for its effective usage. Data curation processes can be categorized into different activities such as content creation, selection, classification, transformation, validation, and preservation. Data curation is performed by expert curators who are responsible for improving the accessibility and quality of data. Data curators (also known as scientific curators or data annotators) hold the responsibility of ensuring that data are trustworthy, discoverable, accessible, reusable, and fit for purpose. A key trend for the curation of big data utilizes community and crowdsourcing approaches.


Data Storage
• Data storage is the persistence and management of data in a scalable way that satisfies the needs of applications requiring fast access to the data. Relational Database Management Systems (RDBMS) have been the main, and almost only, solution to the storage paradigm for nearly 40 years. However, the ACID (Atomicity, Consistency, Isolation, and Durability) properties that guarantee database transactions lack flexibility with regard to schema changes, and their performance and fault tolerance suffer when data volumes and complexity grow, making them unsuitable for big data scenarios. NoSQL technologies have been designed with the scalability goal in mind and present a wide range of solutions based on alternative data models.


Data Usage

• Data usage covers the data-driven business activities that need access to data, its analysis, and the tools needed to integrate the data analysis within the business activity. Data usage in business decision-making can enhance competitiveness through the reduction of costs, increased added value, or any other parameter that can be measured against existing performance criteria.


Basic concepts of big data

• Big data is a blanket term for the non-traditional strategies and technologies needed to gather, organize, process, and gain insights from large datasets. While the problem of working with data that exceeds the computing power or storage of a single computer is not new, the pervasiveness, scale, and value of this type of computing have greatly expanded in recent years.
• In this section, we will talk about big data on a fundamental level and define common concepts you might come across. We will also take a high-level look at some of the processes and technologies currently being used in this space.


What Is Big Data?

• Big data is the term for a collection of data sets so large and complex that it becomes difficult to process them using on-hand database management tools or traditional data processing applications.
• In this context, a "large dataset" means a dataset too large to reasonably process or store with traditional tooling or on a single computer. This means that the common scale of big datasets is constantly shifting and may vary significantly from organization to organization.


What Is Big Data?

• Big data is characterized by the 3Vs and more:
– Volume: large amounts of data; zettabytes/massive datasets
– Velocity: data is live-streaming or in motion
– Variety: data comes in many different forms from diverse sources
– Veracity: can we trust the data? How accurate is it? etc.


Clustered Computing and Hadoop Ecosystem
• Clustered Computing
Because of the qualities of big data, individual computers are often
inadequate for handling the data at most stages. To better address the
high storage and computational needs of big data, computer clusters
are a better fit.


Clustered Computing and Hadoop Ecosystem

Big data clustering software combines the resources of many smaller machines, seeking to provide a number of benefits:
• Resource Pooling: Combining the available storage space to hold data is a clear benefit, but CPU and memory pooling are also extremely important. Processing large datasets requires large amounts of all three of these resources.
• High Availability: Clusters can provide varying levels of fault tolerance and availability guarantees to prevent hardware or software failures from affecting access to data and processing. This becomes increasingly important as we continue to emphasize the importance of real-time analytics.
Clustered Computing and Hadoop Ecosystem
• Easy Scalability: Clusters make it easy to scale horizontally by
adding additional machines to the group. This means the system can
react to changes in resource requirements without expanding the
physical resources on a machine.


Hadoop and its Ecosystem

• Hadoop is an open-source framework intended to make interaction with big data easier. It is a framework that allows for the distributed processing of large datasets across clusters of computers using simple programming models. It is inspired by a technical document published by Google.


Hadoop and its Ecosystem

• The four key characteristics of Hadoop are:
» Economical: Its systems are highly economical, as ordinary computers can be used for data processing.
» Reliable: It is reliable, as it stores copies of the data on different machines and is resistant to hardware failure.
» Scalable: It is easily scalable, both horizontally and vertically. A few extra nodes help in scaling up the framework.
» Flexible: It is flexible, and you can store as much structured and unstructured data as you need to and decide to use it later.


Hadoop has an ecosystem that has evolved from its four core
components: data management, access, processing, and storage. It is
continuously growing to meet the needs of Big Data. It comprises the
following components and many others:
• HDFS: Hadoop Distributed File System
• YARN: Yet Another Resource Negotiator
• MapReduce: Programming based Data Processing
• Spark: In-Memory data processing


• PIG, HIVE: Query-based processing of data services
• HBase: NoSQL database
• Mahout, Spark MLlib: Machine learning algorithm libraries
• Solr, Lucene: Searching and indexing
• Zookeeper: Cluster management
• Oozie: Job scheduling
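The MapReduce model listed among these components can be sketched with a classic word-count example in plain Python, with no Hadoop cluster involved. This is only an illustration of the programming model: map emits (word, 1) pairs, the pairs are grouped by key, and reduce sums each group; on a real cluster these phases run distributed across many machines.

```python
from collections import defaultdict

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in the input line.
    return [(word, 1) for word in line.split()]

def reduce_phase(pairs):
    # Shuffle/reduce: group the pairs by key and sum each group's counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big ideas", "data drives ideas"]
pairs = [pair for line in lines for pair in map_phase(line)]
print(reduce_phase(pairs))  # {'big': 2, 'data': 2, 'ideas': 2, 'drives': 1}
```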


Big Data Life Cycle with Hadoop
Ingesting data into the system
• The first stage of Big Data processing is Ingest. The data is ingested or transferred to Hadoop from various sources such as relational databases, systems, or local files. Sqoop transfers data from RDBMS to HDFS, whereas Flume transfers event data.
Processing the data in storage
• The second stage is Processing. In this stage, the data is stored and processed. The data is stored in the distributed file system, HDFS, and the NoSQL distributed database, HBase. Spark and MapReduce perform the data processing.
Big Data Life Cycle with Hadoop
Computing and analyzing data
• The third stage is Analyze. Here, the data is analyzed by processing frameworks such as Pig, Hive, and Impala. Pig converts the data using map and reduce and then analyzes it. Hive is also based on map and reduce programming and is most suitable for structured data.
Visualizing the results
• The fourth stage is Access, which is performed by tools such as Hue and Cloudera Search. In this stage, the analyzed data can be accessed by users.


Artificial Intelligence (AI)
Artificial Intelligence (AI) is the branch of computer science by which we can create intelligent machines that can behave like humans, think like humans, and make decisions. Intelligence, as we know, is the ability to acquire and apply knowledge. Knowledge is information acquired through experience. Experience is knowledge gained through exposure (training).


Intelligence is composed of:
➢ Reasoning
➢ Learning
➢ Problem Solving
➢ Perception
➢ Linguistic Intelligence


Advantages of Artificial Intelligence:
➢ High accuracy with fewer errors: AI machines or systems are prone to fewer errors and high accuracy, as they take decisions based on prior experience or information.
➢ High speed: AI systems can make very fast, high-speed decisions; because of this, AI systems can beat a chess champion at the game of chess.
➢ High reliability: AI machines are highly reliable and can perform the same action multiple times with high accuracy.
➢ Useful for risky areas: AI machines can be helpful in situations such as defusing a bomb or exploring the ocean floor, where employing a human can be risky.
Advantages of Artificial Intelligence:
➢ Digital assistant: AI can be very useful as a digital assistant to users; for example, AI technology is currently used by various e-commerce websites to show products according to customer requirements.
➢ Useful as a public utility: AI can be very useful for public utilities, such as self-driving cars that can make our journeys safer and hassle-free, facial recognition for security purposes, and natural language processing (for search engines, spelling checkers, assistants like Siri, and translation like Google Translate), etc.


Disadvantages of AI:
➢ High cost: The hardware and software requirements of AI are very costly, as it requires lots of maintenance to meet current world requirements.
➢ Can't think outside the box: Even though we are making smarter machines with AI, they still cannot think outside the box; a robot will only do the work for which it is trained or programmed.
➢ No feelings and emotions: An AI machine can be an outstanding performer, but it does not have feelings, so it cannot form any kind of emotional attachment with humans, and it may sometimes be harmful to users if proper care is not taken.


Disadvantages of AI:
➢ Increased dependence on machines: With the advance of technology, people are getting more dependent on devices and hence are losing some of their mental capabilities.
➢ No original creativity: Humans are highly creative and can imagine new ideas, but AI machines cannot yet match this power of human intelligence and cannot be creative and imaginative.


History of AI
A. Maturation of Artificial Intelligence (1943-1952)
➢ The year 1943: The first work which is now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
➢ The year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning.
➢ The year 1950: Alan Turing, an English mathematician, pioneered machine learning in 1950. Turing published "Computing Machinery and Intelligence", in which he proposed a test that can check a machine's ability to exhibit intelligent behavior equivalent to human intelligence, called the Turing test.
History of AI
B. The birth of Artificial Intelligence (1952-1956)
➢ The year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program", which was named "Logic Theorist". This program proved 38 of 52 mathematics theorems, and found new and more elegant proofs for some theorems.
➢ The year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as an academic field. At that time high-level computer languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very high.
C. The golden years - Early enthusiasm (1956-1974)
➢ The year 1966: Researchers emphasized developing algorithms that can solve mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, which was named ELIZA.
➢ The year 1972: The first intelligent humanoid robot was built in Japan; it was named WABOT-1.


D. The first AI winter (1974-1980)
➢ The duration between 1974 and 1980 was the first AI winter. AI winter refers to a time period in which computer scientists dealt with a severe shortage of government funding for AI research.
➢ During AI winters, interest in and publicity around artificial intelligence decreased.


E. A boom of AI (1980-1987)
➢ The year 1980: After the AI winter, AI came back with "expert systems". Expert systems were programs that emulate the decision-making ability of a human expert.
➢ In the year 1980, the first national conference of the American Association of Artificial Intelligence was held at Stanford University.


F. The second AI winter (1987-1993)
➢ The duration between 1987 and 1993 was the second AI winter.
➢ Again, investors and the government stopped funding AI research due to high costs and inefficient results, even though expert systems such as XCON had been very cost-effective.


G. The emergence of intelligent agents (1993-2011)
➢ The year 1997: In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first computer to beat a world chess champion.
➢ The year 2002: For the first time, AI entered the home, in the form of Roomba, a vacuum cleaner.
➢ The year 2006: AI came into the business world. Companies like Facebook, Twitter, and Netflix also started using AI.


H. Deep learning, big data and artificial general intelligence (2011-present)
➢ The year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
➢ The year 2012: Google launched an Android app feature, "Google Now", which was able to provide information to the user as a prediction.


➢ The year 2014: In 2014, the chatbot "Eugene Goostman" won a competition in the famous "Turing test".
➢ The year 2018: IBM's "Project Debater" debated complex topics with two master debaters and performed extremely well.
➢ Google demonstrated an AI program, "Duplex", a virtual assistant that took a hairdresser appointment over the phone, and the lady on the other end didn't notice that she was talking with a machine.


Levels of AI
Stage 1 – Rule-Based Systems
➢ The most common uses of AI today fit in this bracket, covering everything from business software (Robotic Process Automation) and domestic appliances to aircraft autopilots.
Stage 2 – Context Awareness and Retention
➢ Algorithms that develop information about the specific domain they are being applied in. They are trained on the knowledge and experience of the best humans, and their knowledge base can be updated as new situations and queries arise. Well-known applications of this level are chatbots and "robo-advisors".


Levels of AI
Stage 3 – Domain-Specific Expertise
➢ Going beyond the capability of humans, these systems build up expertise in a specific context by taking in massive volumes of information which they can use for decision-making. Successful use cases have been seen in cancer diagnosis and the well-known Google DeepMind AlphaGo. Currently, this type is limited to one domain only, and would forget all it knows about that domain if you started to teach it something else.


Levels of AI

Stage 4 – Reasoning Machines
➢ These algorithms have some ability to attribute mental states to themselves and others: they have a sense of beliefs, intentions, knowledge, and how their own logic works. This means they could reason or negotiate with humans and other machines. At the moment these algorithms are still in development; however, commercial applications are expected within the next few years.



Levels of AI
Stage 5 – Self-Aware Systems / Artificial General Intelligence (AGI)
➢ These systems have human-like intelligence – the most commonly portrayed AI in media – however, no such system exists today. It is the goal of many working in AI, and some believe it could be realized as early as the mid-2020s.



Levels of AI
Stage 6 – Artificial Superintelligence (ASI)
➢ AI algorithms can outsmart even the most intelligent humans in every domain. Logically, it is difficult for humans to articulate what the capabilities might be, yet we would hope examples would include solving problems we have failed to solve so far, such as world hunger and dangerous environmental change. Views vary as to when and whether such a capability could even be possible, yet there are a few experts who claim it can be realized by 2029. Fiction has tackled this idea for a long time, for example in the films Ex Machina and Terminator.



Stage 7 – Singularity and Transcendence
➢ This is the idea that development provided by ASI (Stage 6)
leads to a massive expansion in human capability. Human
augmentation could connect our brains to each other and to a
future successor of the current internet, creating a “hive mind” that
shares ideas, solves problems collectively, and even gives others
access to our dreams as observers or participants.



Artificial Intelligence can be divided into various types:
A. Based on Capabilities
1. Weak AI or Narrow AI:
➢ Narrow AI is a type of AI which is able to perform a dedicated
task with intelligence. The most common and currently available AI
is Narrow AI in the world of Artificial Intelligence.
➢ Narrow AI cannot perform beyond its field or limitations, as it is only trained for one specific task; hence it is also termed weak AI. Narrow AI can fail in unpredictable ways if pushed beyond its limits.
➢ Apple Siri is a good example of Narrow AI, but it operates within a limited pre-defined range of functions.
2. General AI:
➢ General AI is a type of intelligence that could perform any
intellectual task with efficiency like a human.
➢ The idea behind general AI is to make a system that could be smarter and think like a human on its own.
➢ Currently, no system exists that could come under general AI and perform any task as perfectly as a human. It may arrive within the next 20 or so years, but there are challenges relating to hardware, the energy consumption required by today’s powerful machines, and the need to solve the catastrophic forgetting that affects even the most advanced deep learning algorithms of today.
3. Super AI:
➢ Super AI is a level of system intelligence at which machines could surpass human intelligence and perform any task better than a human, with cognitive properties such as general wisdom, problem solving, and creativity. It is an outcome of general AI.
➢ Some key characteristics of super AI include the ability to think, reason, solve puzzles, make judgments, plan, learn, and communicate on its own.
➢ Super AI is still a hypothetical concept of Artificial Intelligence. Developing such systems in reality remains a world-changing task.
B. Based on the functionality

1. Reactive Machines
➢ Purely reactive machines are the most basic types of Artificial
Intelligence.
➢ Such AI systems do not store memories or past experiences for future actions.
➢ These machines focus only on current scenarios and react to them with the best possible action.
➢ IBM's Deep Blue system is an example of reactive machines.
➢ Google's AlphaGo is also an example of reactive machines.



B. Based on the functionality

2. Limited Memory
➢ Limited memory machines can store past experiences or some
data for a short period of time.
➢ These machines can use stored data for a limited time period
only.
➢ Self-driving cars are one of the best examples of Limited
Memory systems. These cars can store the recent speed of nearby
cars, the distance of other cars, speed limits, and other information
to navigate the road.
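The "short window of past data" idea can be sketched with a fixed-size buffer; the speed values and capacity below are invented for illustration.

```python
from collections import deque

class LimitedMemoryTracker:
    """Keeps only the most recent observations, the way a self-driving
    car might track recent speeds of nearby vehicles (illustrative)."""

    def __init__(self, capacity=3):
        # A deque with maxlen silently discards the oldest entry when full
        self.observations = deque(maxlen=capacity)

    def observe(self, speed_kmh):
        self.observations.append(speed_kmh)

    def average_speed(self):
        return sum(self.observations) / len(self.observations)

tracker = LimitedMemoryTracker(capacity=3)
for speed in [50, 60, 70, 80]:  # the first reading is forgotten
    tracker.observe(speed)
print(tracker.average_speed())  # 70.0
```

Once the buffer is full, each new observation evicts the oldest one, so the system can never reason about anything beyond its short window, which is the defining limitation of this level.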



3. Theory of Mind
➢ Theory of Mind AI should understand human emotions, people, and beliefs, and be able to interact socially like humans.
➢ This type of AI machine is not yet developed, but researchers are making great efforts toward developing such machines.



4. Self-Awareness
➢ Self-aware AI is the future of Artificial Intelligence. These machines will be super intelligent and will have their own consciousness, sentiments, and self-awareness.
➢ These machines will be smarter than the human mind.
➢ Self-aware AI does not yet exist in reality; it is a hypothetical concept.



Applications of AI
1. AI in agriculture
➢ Agriculture is an area that requires various resources, labor, money, and time for the best result. Nowadays agriculture is becoming digital, and AI is emerging in this field. Agriculture is applying AI in the form of agricultural robotics, soil and crop monitoring, and predictive analysis. AI in agriculture can be very helpful for farmers.



2. AI in Healthcare
➢ In the last five to ten years, AI has become more advantageous for the healthcare industry and is going to have a significant impact on it.
➢ Healthcare industries are applying AI to make better and faster diagnoses than humans. AI can help doctors with diagnoses and can warn when patients are worsening so that medical help can reach the patient before hospitalization.



3. AI in education:
➢ AI can automate grading so that the tutor has more time to teach. An AI chatbot can communicate with students as a teaching assistant.
➢ In the future, AI can work as a personal virtual tutor for students, easily accessible at any time and any place.



4. AI in Finance and E-commerce
➢ AI and the finance industry are a great match for each other. The finance industry is implementing automation, chatbots, adaptive intelligence, algorithmic trading, and machine learning in financial processes.
➢ AI is providing a competitive edge to the e-commerce industry,
and it is becoming more demanding in the e-commerce business.
AI is helping shoppers to discover associated products with
recommended size, color, or even brand.



5. AI in Gaming
➢ AI can be used for gaming purposes. AI machines can play strategic games like chess, where the machine needs to think through a large number of possible positions.
6. AI in Data Security
➢ The security of data is crucial for every company and cyber-
attacks are growing very rapidly in the digital world. AI can be used
to make your data more safe and secure. Some examples such as
AEG bot, AI2 Platform, are used to determine software bugs and
cyber-attacks in a better way.
7. AI in Social Media
➢ Social Media sites such as Facebook, Twitter, and Snapchat
contain billions of user profiles, which need to be stored and
managed in a very efficient way. AI can organize and manage
massive amounts of data. AI can analyze lots of data to identify the
latest trends, hashtags, and requirements of different users.



8. AI in Travel & Transport
➢ AI is in high demand in the travel industry. AI is capable of handling various travel-related tasks, from making travel arrangements to suggesting hotels, flights, and the best routes to customers. Travel industries are using AI-powered chatbots that can hold human-like interactions with customers for a better and faster response.



9. AI in the Automotive Industry
➢ Some automotive companies are using AI to provide virtual assistants to their users for better performance. For example, Tesla has introduced TeslaBot, an intelligent virtual assistant.
➢ Various companies are currently working on developing self-driving cars, which can make your journey safer and more secure.



10. AI in Robotics:
➢ Artificial Intelligence has a remarkable role in robotics. Usually, general robots are programmed to perform some repetitive task, but with the help of AI we can create intelligent robots that perform tasks from their own experience without being pre-programmed.
➢ Humanoid robots are the best examples of AI in robotics; recently, the intelligent humanoid robots named Erica and Sophia have been developed, which can talk and behave like humans.



11. AI in Entertainment
➢ We are currently using some AI-based applications in our daily
life with some entertainment services such as Netflix or Amazon.
With the help of ML/AI algorithms, these services show the
recommendations for programs or shows.
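A minimal sketch of how such a recommendation might be computed is genre overlap; the shows, genres, and scoring rule below are all made up for illustration.

```python
def recommend(user_history, catalog):
    """Toy content-based recommender: score each unseen show by how many
    of its genre tags overlap with genres the user already watched."""
    watched_genres = set()
    for show in user_history:
        watched_genres |= set(catalog[show])
    scores = {
        show: len(watched_genres & set(genres))
        for show, genres in catalog.items()
        if show not in user_history
    }
    return max(scores, key=scores.get)

catalog = {  # hypothetical catalog of shows and their genre tags
    "Space Saga": ["sci-fi", "drama"],
    "Robot Wars": ["sci-fi", "action"],
    "Baking Show": ["reality"],
}
print(recommend(["Space Saga"], catalog))  # Robot Wars
```

Production systems combine many more signals (ratings, viewing time, collaborative filtering), but the core step is the same: rank unseen items by a learned or computed similarity score.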



Internet of Things (IoT)
AI − IoT essentially makes virtually anything “smart”,
meaning it enhances every aspect of life with the power of
data collection, artificial intelligence algorithms, and
networks. This can mean something as simple as
enhancing your refrigerator and cabinets to detect when
milk and your favorite cereal run low, and to then place an
order with your preferred grocer.



Connectivity − New enabling technologies for networking and
specifically IoT networking, mean networks are no longer
exclusively tied to major providers. Networks can exist on a much
smaller and cheaper scale while still being practical. IoT creates
these small networks between its system devices.
Sensors − IoT loses its distinction without sensors. They act as
defining instruments that transform IoT from a standard passive
network of devices into an active system capable of real-world
integration.
Active Engagement − Much of today's interaction with connected
technology happens through passive engagement. IoT introduces
a new paradigm for active content, product, or service
engagement.



What is IoT?
➢ According to the Internet Architecture Board’s (IAB) definition,
IoT is the networking of smart objects, meaning a huge number
of devices intelligently communicating in the presence of internet
protocol that cannot be directly operated by human beings but
exist as components in buildings, vehicles or the environment.



➢ According to the Internet Engineering Task Force (IETF)
organization’s definition, IoT is the networking of smart objects in
which smart objects have some constraints such as limited
bandwidth, power, and processing accessibility for achieving
interoperability among smart objects.



➢ According to the IEEE Communications category magazine’s
definition, IoT is a framework of all things that have a
representation in the presence of the internet in such a way that
new applications and services enable the interaction in the
physical and virtual world in the form of Machine-to-Machine
(M2M) communication in the cloud.



➢ According to the Oxford dictionary’s definition, IoT is the interaction of everyday objects’ computing devices through the Internet that enables the sending and receiving of useful data.
➢ The term Internet of Things (IoT) according to the 2020
conceptual framework is expressed through a simple formula
such as: IoT= Services+ Data+ Networks + Sensors



History of IoT
The Internet of Things has not been around for very long.
However, there have been visions of machines communicating
with one another since the early 1800s. Machines have been
providing direct communications since the telegraph (the first
landline) was developed in the 1830s and 1840s. Described as
“wireless telegraphy,” the first radio voice transmission took
place on June 3, 1900, providing another necessary
component for developing the Internet of Things. The
development of computers began in the 1950s.



The Internet, itself a significant component of the IoT, started out as
part of DARPA (Defense Advanced Research Projects Agency) in
1962 and evolved into ARPANET in 1969. In the 1980s, commercial
service providers began supporting public use of ARPANET, allowing
it to evolve into our modern Internet.
The Global Positioning System (GPS) became a reality in early 1993, with the Department of Defense providing a stable, highly functional system of 24 satellites. This was quickly followed by privately owned, commercial satellites being placed in orbit. Satellites and landlines provide basic communications for much of the IoT. One additional and important component in developing a functional IoT was IPv6’s remarkably intelligent decision to increase address space.



The Internet of Things, as a concept, wasn’t officially named until
1999. One of the first examples of an Internet of Things is from the
early 1980s and was a Coca-Cola machine located at Carnegie Mellon University. Local programmers would connect over the Internet to the refrigerated appliance and check to see if there was a drink available, and if it was cold, before making the trip. By the year 2013,
the Internet of Things had evolved into a system using multiple
technologies, ranging from the Internet to wireless communication and
from micro-electromechanical systems (MEMS) to embedded systems.
The traditional fields of automation (including the automation of
buildings and homes), wireless sensor networks, GPS, control
systems, and others, all support the IoT.



Kevin Ashton, the Executive Director of Auto-ID Labs at MIT, was
the first to describe the Internet of Things, during his 1999
speech. Kevin Ashton stated that Radio Frequency Identification
(RFID) was a prerequisite for the Internet of Things. He
concluded if all devices were “tagged,” computers could manage,
track, and inventory them. To some extent, the tagging of things
has been achieved through technologies such as digital
watermarking, barcodes, and QR codes. Inventory control is one
of the more obvious advantages of the Internet of Things.



IoT − Advantages
• Improved Customer Engagement − Current analytics suffer from blind spots and significant flaws in accuracy; and as noted, engagement remains passive. IoT completely transforms this to achieve richer and more effective engagement with audiences.
• Technology Optimization − The same technologies and data
which improve the customer experience also improve device use,
and aid in more potent improvements to technology. IoT unlocks a
world of critical functional and field data.



• Reduced Waste − IoT makes areas of improvement clear. Current
analytics give us superficial insight, but IoT provides real-world
information leading to the more effective management of
resources.
• Enhanced Data Collection − Modern data collection suffers from
its limitations and its design for passive use. IoT breaks it out of
those spaces and places it exactly where humans really want to go
to analyze our world. It allows an accurate picture of everything.



IoT – Disadvantages
• As the number of connected devices increases and more
information is shared between devices, the potential that a hacker
could steal confidential information also increases. If there’s a
bug in the system, it’s likely that every connected device will
become corrupted.
• Since there’s no international standard of compatibility for IoT,
it’s difficult for devices from different manufacturers to
communicate with each other.
• Enterprises may eventually have to deal with massive numbers
maybe even millions of IoT devices and collecting and managing
the data from all those devices will be challenging.
Challenges of IoT
• Security − IoT creates an ecosystem of constantly connected
devices communicating over networks. The system offers little
control despite any security measures. This leaves users exposed
to various kinds of attackers.
• Privacy − The sophistication of IoT provides substantial personal
data in extreme detail without the user's active participation.
• Complexity − Some find IoT systems complicated in terms of
design, deployment, and maintenance given their use of multiple
technologies and a large set of new enabling technologies.



• Flexibility − Many are concerned about the flexibility of an IoT
system to integrate easily with another. They worry about finding
themselves with several conflicting or locking systems.
• Compliance − IoT, like any other technology in the realm of
business, must comply with regulations. Its complexity makes the
issue of compliance seem incredibly challenging when many
consider standard software compliance a battle.



Architecture of IoT



Devices and Networks
Connected devices are part of a scenario in which every
device talks to other related devices in an environment to
automate home and industrial tasks, and to communicate
usable sensor data to users, businesses and other
interested parties. IoT devices are meant to work in
concert for people at home, in industry or in the
enterprise. As such, the devices can be categorized into
three main groups: consumer, enterprise and industrial.



IoT Tools and Platforms

 GE Predix
 Cisco IoTCloud
 IBM Watson IoT
 PTC ThingWorx



Applications of IoT
Agriculture - For indoor planting, IoT makes monitoring and
management of microclimate conditions a reality, which in turn
increases production. For outside planting, devices using IoT technology can sense soil moisture and nutrients and, in conjunction with weather data, better control smart irrigation and fertilizer systems. If the sprinkler systems dispense water only when needed, for example, this prevents wasting a precious resource.
Consumer Use - For private citizens, IoT devices in the form of
wearables and smart homes make life easier. Wearables cover
accessories such as Fitbit, smartphones, Apple watches, health
monitors, to name a few.
Applications of IoT
Healthcare - First and foremost, wearable IoT devices let hospitals monitor their patients’ health at home, thereby reducing hospital stays while still providing up-to-the-minute, real-time information that could save lives. In hospitals, smart beds keep the staff informed of availability, thereby cutting wait time for free space.
Insurance - Even the insurance industry can benefit from the IoT
revolution. Insurance companies can offer their policyholders
discounts for IoT wearables such as Fitbit. By employing fitness
tracking, the insurer can offer customized policies and encourage
healthier habits, which in the long run benefits everyone, insurer,
and customer alike.
Applications of IoT
Manufacturing - The world of manufacturing and industrial
automation is another big winner in the IoT sweepstakes.
RFID and GPS technology can help a manufacturer track a
product from its start on the factory floor to its placement in
the destination store, the whole supply chain from start to
finish.
Retail - IoT technology has a lot to offer the world of retail. Online and in-store shopping sales figures, together with information gleaned from IoT sensors, can drive warehouse automation and robotics.



Applications of IoT
Transportation - By this time, most people have heard about the progress
being made with self-driving cars. But that’s just one bit of the vast potential
in the field of transportation. GPS, which if you think about it is another example of IoT, is being utilized to help transportation companies plot faster and more efficient routes for trucks hauling freight, thereby speeding up delivery times.
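Route planning of this kind typically reduces to a shortest-path search; the sketch below uses Dijkstra's algorithm over an invented road network with travel times in minutes.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: repeatedly expand the cheapest frontier node
    until the goal is reached. Edge weights are travel times in minutes."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, minutes in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
    return None  # goal unreachable

roads = {  # hypothetical road network
    "depot": [("A", 10), ("B", 4)],
    "B": [("A", 3), ("store", 12)],
    "A": [("store", 5)],
}
print(shortest_route(roads, "depot", "store"))  # (12, ['depot', 'B', 'A', 'store'])
```

Real routing services layer live traffic data and heuristics (such as A*) on top of the same idea, re-computing the cheapest path as conditions change.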
Utilities - IoT sensors can be employed to monitor environmental conditions
such as humidity, temperature, and lighting. The information provided by IoT
sensors can aid in the creation of algorithms that regulate energy usage and
make the appropriate adjustments, eliminating the human equation (and let’s
face it, who of us hasn’t forgotten to switch off lights in a room or turn down
the thermostat?).
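The "regulate and adjust" step often amounts to simple hysteresis control, sketched below with illustrative target and band values.

```python
def regulate_heater(temp_c, heater_on, target=21.0, band=0.5):
    """Hysteresis control: only switch state when the temperature leaves
    the comfort band, which avoids rapid on/off cycling.
    Target and band values are illustrative."""
    if temp_c < target - band:
        return True            # too cold: turn heating on
    if temp_c > target + band:
        return False           # too warm: turn heating off
    return heater_on           # inside the band: keep current state

print(regulate_heater(19.0, heater_on=False))  # True
```

Feeding each IoT sensor reading through a rule like this removes the forgotten-light, forgotten-thermostat problem the human equation introduces.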



IoT Based Smart Home
Smart Home initiative allows subscribers to remotely manage
and monitor different home devices from anywhere via
smartphones or over the web with no physical distance
limitations. With the ongoing development of mass-deployed
broadband internet connectivity and wireless technology, the
concept of a Smart Home has become a reality where all
devices are integrated and interconnected via the wireless
network.



IoT Based Smart Home
• Remote Control Appliances: Switching appliances on and off remotely to avoid accidents and save energy.
• Weather: Displays outdoor weather conditions such as humidity, temperature, pressure, wind speed
and rain levels with the ability to transmit data over long distances.
• Smart Home Appliances: Refrigerators with an LCD screen telling you what’s inside, which food is about to expire, and which ingredients you need to buy, with all the information available in a smartphone app. Washing machines allowing you to monitor the laundry remotely. Kitchen ranges with an interface to a smartphone app allowing remotely adjustable temperature control and monitoring of the oven’s self-cleaning feature.
• Safety Monitoring: Cameras and home alarm systems making people feel safe in their daily life at home.
• Intrusion Detection Systems: Detection of window and door openings and violations to prevent
intruders.
• Energy and Water Use: Energy and water supply consumption monitoring to obtain advice on how to
save cost and resources, & many more.
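A remote-control setup like the one above can be sketched as a hub relaying commands to registered devices; the class names and commands are hypothetical, and a real deployment would sit behind a protocol such as MQTT or HTTP.

```python
class SmartAppliance:
    """Stand-in for a network-controllable appliance (illustrative)."""
    def __init__(self, name):
        self.name = name
        self.on = False

class SmartHomeHub:
    """Central hub: registers appliances and relays remote on/off commands."""
    def __init__(self):
        self.devices = {}

    def register(self, appliance):
        self.devices[appliance.name] = appliance

    def handle_command(self, device_name, command):
        device = self.devices[device_name]
        device.on = (command == "on")
        return f"{device_name} is now {'on' if device.on else 'off'}"

hub = SmartHomeHub()
hub.register(SmartAppliance("kitchen_light"))
print(hub.handle_command("kitchen_light", "on"))   # kitchen_light is now on
print(hub.handle_command("kitchen_light", "off"))  # kitchen_light is now off
```

The hub pattern is what removes the physical distance limitation: the smartphone app talks only to the hub, and the hub talks to every appliance on the local network.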



Augmented Reality (AR)
Augmented reality (AR) is a form of emerging technology that allows users to overlay computer-generated content on the real world. AR refers to a live view of a physical real-world environment
whose elements are merged with augmented computer-generated
images creating a mixed reality. The augmentation is typically done
in real-time and in semantic context with environmental elements.
By using the latest AR techniques and technologies, the information
about the surrounding real world becomes interactive and digitally
usable. Through this augmented vision, a user can digitally interact
with and adjust information about their surrounding environment.



Virtual Reality (VR)
VR is fully immersive, which tricks your senses into thinking you’re in a different environment or world
apart from the real world. Using a head-mounted
display (HMD) or headset, you’ll experience a
computer-generated world of imagery and sounds in
which you can manipulate objects and move around
using haptic controllers while tethered to a console or
PC.



Mixed Reality (MR)
Mixed Reality (MR), sometimes referred to as hybrid reality, is the merging of real and virtual worlds to
produce new environments and visualizations where
physical and digital objects co-exist and interact in
real-time. It means placing new imagery within a real
space in such a way that the new imagery is able to
interact, to an extent, with what is real in the physical world we know.



Applications of AR Systems
Technology is ever-changing and ever-growing. One of the newest developing technologies is augmented
reality (AR), which can be applied to many different
disciplines such as education, medicine,
entertainment, military, etc. Let us see some of its
applications.

