Artificial Intelligence

INTRODUCTION

How do machines become intelligent? Most of us have heard terms like AI, Deep Learning, Machine Learning, and Algorithms. Throughout this publication, the aim is to make these terms much easier to understand. The overarching field is called Artificial Intelligence, or AI. Put briefly, AI is where machines can do what humans can do. That is not quite the whole picture: current systems do not have a sense of self-awareness. As we progress through the study, we will look at the various levels of AI.

In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals. Artificial Intelligence is an approach to making a computer, a robot, or a product think the way an intelligent human thinks; the result of this study is intelligent software systems. The aim of AI is to improve computer functions related to human knowledge, for example reasoning, learning, and problem-solving.

Artificial Intelligence

The short answer to what is Artificial Intelligence is that it depends on who you ask.

A layman with a fleeting understanding of technology would link it to robots. They’d say Artificial Intelligence is a Terminator-like figure that can act and think on its own.

An AI researcher would say that it’s a set of algorithms that can produce results without having
to be explicitly instructed to do so. And they would all be right.

Conceptual view of an AI system

The present description of an AI system is based on the conceptual view of AI detailed in Artificial Intelligence: A Modern Approach. A conceptual view of AI is first presented as the high-level structure of a generic AI system (also referred to as “intelligent agent”). An AI system consists of three main elements: sensors, operational logic and actuators. Sensors collect raw data from the environment, while actuators act to change the state of the environment.

Environment:

An environment in relation to an AI system is a space observable through perceptions (via sensors) and influenced through actions (via actuators). Sensors and actuators are either machines or humans. Environments are either real (e.g. physical, social, mental) and usually only partially observable, or else virtual (e.g. board games) and generally fully observable.
AI system:

An AI system is a machine-based system that can, for a given set of human-defined objectives,
make predictions, recommendations or decisions influencing real or virtual environments. It does
so by using machine and/or human-based inputs to: i) perceive real and/or virtual environments;
ii) abstract such perceptions into models through analysis in an automated manner (e.g. with ML,
or manually); and iii) use model inference to formulate options for information or action. AI
systems are designed to operate with varying levels of autonomy.
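The short Python sketch below illustrates this structure in miniature: a toy agent perceives its environment through a sensor reading, applies simple operational logic against a human-defined objective, and acts through an actuator. The thermostat scenario and all names in it are illustrative assumptions, not part of any referenced system.

# A minimal sketch of the generic agent structure described above.
# The environment and the thermostat scenario are hypothetical.

class SimpleThermostatAgent:
    """Perceives a temperature, applies simple operational logic,
    and acts by switching a heater on or off."""

    def __init__(self, target_temp=21.0):
        self.target_temp = target_temp          # human-defined objective

    def perceive(self, sensor_reading):
        # Sensors collect raw data from the environment.
        return float(sensor_reading)

    def decide(self, temperature):
        # Operational logic: turn the perception and the objective
        # into an option for action.
        return "heater_on" if temperature < self.target_temp else "heater_off"

    def act(self, action, environment):
        # Actuators act to change the state of the environment.
        environment["heater"] = (action == "heater_on")

# Example use with a toy environment represented as a dictionary.
env = {"temperature": 18.5, "heater": False}
agent = SimpleThermostatAgent()
action = agent.decide(agent.perceive(env["temperature"]))
agent.act(action, env)
print(action, env)   # heater_on {'temperature': 18.5, 'heater': True}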

History of Artificial Intelligence

In 1950, British mathematician Alan Turing published a paper on computing machinery and intelligence (Turing, 1950), posing the question of whether machines can think. He developed a simple heuristic to test his hypothesis: could a computer have a conversation and answer questions in a way that would trick a suspicious human into thinking the computer was actually a human? The resulting “Turing test” is still used today. That same year, Claude Shannon proposed the creation of a machine that could be taught to play chess (Shannon, 1950). The machine could be trained by using brute force or by evaluating a small set of an opponent’s strategic moves (UW, 2006).

Many consider the Dartmouth Summer Research Project in the summer of 1956 to be the birthplace of artificial intelligence (AI). At this workshop, the principle of AI was conceptualized by John McCarthy, Allen Newell, Arthur Samuel, Herbert Simon and Marvin Minsky. While AI research has steadily progressed over the past 60 years, the promises of early AI promoters proved to be overly optimistic. This led to an “AI winter” of reduced funding and interest in AI research during the 1970s.

The AI winter ended in the 1990s, when advances in computational power and data storage made complex tasks feasible and brought new funding and interest to the field (UW, 2006). In 1995, AI took a major step forward with Richard Wallace’s development of the Artificial Linguistic Internet Computer Entity (ALICE), which could hold basic conversations. Also in the 1990s, IBM developed a computer named Deep Blue that used a brute-force approach to play against world chess champion Garry Kasparov. Deep Blue would look ahead six steps or more and could calculate 330 million positions per second (Somers, 2013). In 1996, Deep Blue lost to Kasparov, but it won the rematch a year later.

In 2015, Alphabet’s DeepMind launched software to play the ancient game of Go against the best players in the world. It used an artificial neural network that was trained on thousands of human amateur and professional games to learn how to play. In 2016, AlphaGo beat the world’s best player at the time, Lee Sedol, four games to one. AlphaGo’s developers then let the program play against itself using trial and error, starting from completely random play with a few simple guiding rules. The result was a program (AlphaGo Zero) that trained itself faster and was able to beat the original AlphaGo by 100 games to 0. Entirely from self-play – with no human intervention and using no historical data – AlphaGo Zero surpassed all other versions of AlphaGo in 40 days (Silver et al., 2017).

What are the Types of Artificial Intelligence?

Not all types of AI work across all of these fields simultaneously. Different Artificial Intelligence entities are built for different purposes, and that is how they vary. There are three types of Artificial Intelligence:

1. Artificial Narrow Intelligence (ANI)
2. Artificial General Intelligence (AGI)
3. Artificial Super Intelligence (ASI)
• What is Artificial Narrow Intelligence (ANI)?

This is the most common form of AI that you’d find in the market now. These Artificial Intelligence systems are designed to solve one single problem and are able to execute a single task really well. By definition, they have narrow capabilities, like recommending a product for an e-commerce user or predicting the weather. This is the only kind of Artificial Intelligence that exists today. They are able to come close to human functioning in very specific contexts, and even surpass it in many instances, but they excel only in very controlled environments with a limited set of parameters.
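As a toy illustration of how narrow such a system can be, the sketch below recommends a product by comparing made-up feature vectors with a simple similarity score. The products, features and numbers are all assumptions for illustration only.

# A toy example of a narrow AI task: product recommendation by similarity.
from math import sqrt

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical feature vectors: [price tier, popularity, rating].
products = {
    "wireless mouse": [1, 5, 4],
    "mechanical keyboard": [2, 4, 5],
    "garden hose": [1, 2, 3],
}
user_profile = [2, 5, 5]   # built from a user's purchase history

best = max(products, key=lambda name: cosine_similarity(products[name], user_profile))
print("Recommended:", best)   # "mechanical keyboard" for this made-up data

The same program cannot predict the weather or hold a conversation; that narrowness is exactly what distinguishes ANI from AGI.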

• What is Artificial General Intelligence (AGI)?

AGI is still a theoretical concept. It’s defined as AI which has a human level of cognitive function across a wide variety of domains, such as language processing, image processing, computational functioning, reasoning and so on.

We’re still a long way away from building an AGI system. An AGI system would need to comprise thousands of Artificial Narrow Intelligence systems working in tandem, communicating with each other to mimic human reasoning. Even with the most advanced computing systems and infrastructures, such as Fujitsu’s K or IBM’s Watson, it has taken them 40 minutes to simulate a single second of neuronal activity. This speaks both to the immense complexity and interconnectedness of the human brain, and to the magnitude of the challenge of building an AGI with our current resources.

• What is Artificial Super Intelligence (ASI)?

We’re almost entering science-fiction territory here, but ASI is seen as the logical progression from AGI. An Artificial Super Intelligence (ASI) system would be able to surpass all human capabilities. This would include decision making, taking rational decisions, and even things like making better art and building emotional relationships.

Once we achieve Artificial General Intelligence, AI systems would rapidly be able to improve their capabilities and advance into realms that we might not even have dreamed of. While the gap between AGI and ASI would be relatively narrow (some say as little as a nanosecond, because that’s how fast Artificial Intelligence would learn), the long journey ahead of us towards AGI itself makes this seem like a concept that lies far in the future.
The AI system lifecycle

The AI system lifecycle phases can be described as follows:

1. Design, data and modeling: It includes several activities, whose order may vary for
different AI systems:
• Planning and design of the AI system involves articulating the system’s concept and objectives, underlying assumptions, context and requirements, and potentially building a prototype.
• Data collection and processing includes gathering and cleaning data, performing checks for completeness and quality, and documenting the metadata and characteristics of the dataset. Dataset metadata include information on how a dataset was created, its composition, its intended uses and how it has been maintained over time.
• Model building and interpretation involves the creation or selection of models or algorithms, their calibration and/or training, and their interpretation (a minimal sketch of this phase and the next appears after this list).
2. Verification and validation involves executing and tuning models, with tests to assess
performance across various dimensions and considerations.
3. Deployment into live production involves piloting, checking compatibility with legacy
systems, ensuring regulatory compliance, managing organizational change and evaluating
user experience.
4. Operation and monitoring of an AI system involves operating the AI system and
continuously assessing its recommendations and impacts (both intended and unintended)
in light of objectives and ethical considerations. This phase identifies problems and
adjusts by reverting to other phases or, if necessary, retiring an AI system from
production.
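The Python sketch below illustrates the model-building and verification/validation phases on a standard toy dataset. The use of scikit-learn, logistic regression and the iris dataset are illustrative assumptions; nothing in the lifecycle above prescribes these choices.

# A minimal sketch of "model building" followed by "verification and validation".
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Design, data and modelling: obtain and split the data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Model building: select and train a model.
model = LogisticRegression(max_iter=200)
model.fit(X_train, y_train)

# Verification and validation: assess performance on held-out data.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Held-out accuracy: {accuracy:.2f}")

Deployment and continuous monitoring would only follow if this measured performance meets the objectives set in the design phase; otherwise the lifecycle loops back to earlier phases.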
How does Artificial Intelligence (AI) work?

Building an AI system is a careful process of reverse-engineering human traits and capabilities in a machine, and using its computational prowess to surpass what we are capable of. Artificial Intelligence is built over a diverse set of components and functions as an amalgamation of:

• Philosophy
• Mathematics
• Economics
• Neuroscience
• Psychology
• Computer Engineering
• Control Theory and Cybernetics
• Linguistics

• Philosophy
The purpose of philosophy for humans is to help us understand our actions, their consequences, and how we can make better decisions. Modern intelligent systems can be built by following the different approaches of philosophy, enabling these systems to make the right decisions, mirroring the way an ideal human being would think and behave. Philosophy would help these machines think about and understand the nature of knowledge itself. It would also help them make the connection between knowledge and action through goal-based analysis to achieve desirable outcomes.

• Mathematics

Mathematics is the language of the universe, and a system built to solve universal problems would need to be proficient in it. For machines, an understanding of logic, computation, and probability is necessary. The earliest algorithms were just mathematical pathways to make calculations easy, soon to be followed by theorems, hypotheses and more, all of which followed a pre-defined logic to arrive at a computational output.
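As a small worked example of the probabilistic reasoning this refers to, the sketch below applies Bayes' rule to a made-up diagnostic test. All the numbers are illustrative assumptions.

# Bayes' rule on a hypothetical diagnostic test.
p_disease = 0.01             # prior probability of the condition
p_pos_given_disease = 0.95   # test sensitivity
p_pos_given_healthy = 0.05   # false-positive rate

p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))

# Posterior: probability of the condition given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_positive
print(f"P(disease | positive test) = {p_disease_given_pos:.3f}")   # ~0.161

Even with a 95%-sensitive test, the posterior is only about 16% because the condition is rare; this is exactly the kind of computation an intelligent system needs to get right.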

• Economics

Economics is the study of how people make choices according to their preferred outcomes. It’s not just about money, although money is the medium through which people’s preferences are manifested in the real world. There are many important concepts in economics, such as decision theory, operations research and Markov decision processes. They have all contributed to our understanding of ‘rational agents’ and laws of thought, by using mathematics to show how these decisions are made at large scale and what their collective outcomes are. These kinds of decision-theoretic techniques help build intelligent systems.
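To make one of these tools concrete, the sketch below runs value iteration on a tiny, entirely made-up Markov decision process with two states. The states, actions, rewards and discount factor are all illustrative assumptions.

# Value iteration on a toy Markov decision process.
# (state, action) -> (next_state, reward); transitions here are deterministic.
transitions = {
    ("idle", "rest"):        ("idle", 0.0),
    ("idle", "start"):       ("working", 1.0),
    ("working", "continue"): ("working", 2.0),
    ("working", "stop"):     ("idle", 0.5),
}
gamma = 0.9                          # discount factor for future rewards
values = {"idle": 0.0, "working": 0.0}

for _ in range(100):                 # sweep until the values stabilise
    for state in values:
        values[state] = max(
            reward + gamma * values[next_state]
            for (s, _action), (next_state, reward) in transitions.items()
            if s == state
        )

print(values)   # "working" ends up more valuable than "idle"

The resulting values tell a rational agent which states are worth being in; choosing the action with the highest expected discounted reward is the decision-theoretic behaviour described above.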

• Neuroscience

Since neuroscience studies how the brain functions and Artificial Intelligence is trying to replicate the same, there’s an obvious overlap here. The biggest difference between human brains and machines is that computers are millions of times faster than the human brain, but the human brain still has the advantage in terms of storage capacity and interconnections. This gap is slowly being closed with advances in computer hardware and more sophisticated software, but there’s still a big challenge to overcome, as we are still not sure how to use computer resources to achieve the brain’s level of intelligence.

• Psychology
Psychology can be viewed as the middle point between neuroscience and philosophy. It tries to understand how our specially configured and developed brain reacts to stimuli and responds to its environment, both of which are important to building an intelligent system. Cognitive psychology views the brain as an information-processing device, operating on the basis of beliefs and goals, similar to how we would build an intelligent machine of our own. Many cognitive theories have already been codified to build the algorithms that power the chatbots of today.

• Computer Engineering
This is the most obvious application here, but we’ve put it at the end to help you understand what all of this computer engineering is going to be based on. Computer engineering translates all our theories and concepts into a machine-readable language so that the machine can make its computations and produce an output that we can understand. Each advance in computer engineering has opened up more possibilities to build even more powerful Artificial Intelligence systems, based on advanced operating systems, programming languages, information management systems, tools and state-of-the-art hardware.

• Control Theory and Cybernetics

To be truly intelligent, a system needs to be able to control and modify its actions to produce the desired output. The desired output is defined as an objective function, towards which the system tries to move by continually modifying its actions based on changes in its environment, using mathematical computations and logic to measure and optimize its behaviors.
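A minimal sketch of that feedback idea follows: the system repeatedly measures the gap between its output and the objective and applies a proportional correction. The setpoint, starting output and gain are illustrative assumptions, not a real controller design.

# A toy proportional feedback loop.
setpoint = 100.0   # the objective the system should reach
output = 20.0      # current state of the system
gain = 0.5         # proportional gain: how strongly to correct

for step in range(10):
    error = setpoint - output      # deviation from the objective
    output += gain * error         # corrective action changes the state
    print(f"step {step}: output = {output:.1f}")   # converges towards 100

Each pass through the loop shrinks the error, which is the essence of the self-correcting behaviour that control theory contributes to AI.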

• Linguistics
All thought is expressed through some form of language, and language is the most understandable representation of thought. Linguistics has led to the formation of natural language processing, which helps machines understand our syntactic language and also produce output in a manner that is understandable to almost anyone. Understanding a language is more than just learning how sentences are structured; it also requires a knowledge of the subject matter and context, which has given rise to the knowledge representation branch of linguistics.
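The toy sketch below shows the flavour of such processing: it tokenizes a sentence and matches the tokens against a tiny keyword-to-intent table. Real natural language processing uses far richer models; the vocabulary and intents here are illustrative assumptions.

# A toy intent detector built on simple tokenization.
import re

intents = {
    "greeting": {"hello", "hi", "hey"},
    "weather": {"weather", "rain", "sunny", "forecast"},
}

def tokenize(text):
    # Lowercase the text and split it into alphabetic tokens.
    return set(re.findall(r"[a-z]+", text.lower()))

def detect_intent(text):
    tokens = tokenize(text)
    # Pick the intent whose keywords overlap most with the input.
    best = max(intents, key=lambda name: len(intents[name] & tokens))
    return best if intents[best] & tokens else "unknown"

print(detect_intent("Hi, what is the weather forecast today?"))   # "weather"

Notice that the program knows nothing about what weather actually is; supplying that kind of background knowledge is where knowledge representation comes in.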

AI applications

AI in transportation with autonomous vehicles

Artificial intelligence (AI) systems are emerging across the economy. However, one of the most
transformational shifts has been with transportation and the transition to self-driving, or
autonomous vehicles (AVs).

At the basic level, AVs have new systems of sensors and processing capacity that generate new
complexities in the extract, transform and load process of their data systems. Innovation is
flourishing amid high levels of investment in all key areas for AV. Less expensive light detection
and ranging systems, for example, can map out the environment. In addition, new computer
vision technologies can track the eyes and focus of drivers and determine when they are
distracted. Now, after pulling in data and processing it, AI is adding another step: split-second
operational decisions.

The core standard for measuring the progress of AV development is a six-stage standard
developed by the Society of Automotive Engineers (SAE). The levels can be summarized as
follows:

Level 0 (no driving automation): A human driver controls everything. There is no automated
steering, acceleration, braking, etc.

Level 1 (driver assistance): There is a basic level of automation, but the driver remains in
control of most functions. The SAE says lateral (steering) or longitudinal control (e.g.
acceleration) can be done autonomously, but not simultaneously, at this level.

Level 2 (partial driving automation): Both lateral and longitudinal motion are controlled autonomously, for example with adaptive cruise control and functionality that keeps the car in its lane.

Level 3 (conditional driving automation): A car can drive on its own, but needs to be able to
tell the human driver when to take over. The driver is considered the fallback for the system and
must stay alert and ready.

Level 4 (high driving automation): The car can drive itself and does not rely on a human to
take over in case of a problem. However, the system is not yet capable of autonomous driving in
all circumstances (depending on situation, geographic area, etc.).

Level 5 (full driving automation): The car can drive itself without any expectation of human
intervention, and can be used in all driving situations. There is significant debate among
stakeholders about how far the process has come towards fully autonomous driving.
Stakeholders also disagree about the right approach to introduce autonomous functionality into
vehicles.

AI in agriculture

Improving accuracy of cognitive computing technologies such as image recognition is changing agriculture. Traditionally, agriculture has relied on the eyes and hands of experienced farmers to identify the right crops to pick. “Harvesting” robots equipped with AI technologies and data from cameras and sensors can now make this decision in real time. This type of robot can increasingly perform tasks that previously required human labour and knowledge. Technology start-ups are creating innovative solutions leveraging AI in agriculture. They can be categorized as follows:
Agricultural robots handle essential agricultural tasks such as harvesting crops. Compared to
human workers, these robots are increasingly fast and productive.

Crop and soil monitoring leverages computer vision and deep-learning algorithms to monitor crop and soil health. Monitoring has improved due to greater availability of satellite data.

Predictive analytics use ML models to track and predict the impact of environmental factors on
crop yield.
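The sketch below gives the flavour of such predictive analytics: a simple regression model relating environmental factors to yield. The data values and the use of scikit-learn are illustrative assumptions, not taken from any real agricultural dataset.

# Predicting crop yield from environmental factors with a simple model.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical seasons: [rainfall (mm), average temperature (°C)] -> yield (t/ha)
X = np.array([[450, 18], [500, 20], [380, 16], [620, 22], [550, 19]])
y = np.array([3.1, 3.6, 2.7, 4.2, 3.8])

model = LinearRegression().fit(X, y)

# Estimate the yield for a new season's expected conditions.
print(model.predict(np.array([[520, 21]])))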

AI in financial services

In the financial sector, large companies are rapidly deploying AI. Financial service companies are combining different ML practices, using language processing, deep learning, graph theory and more to develop AI solutions for corporate financial decision making. Deploying AI in the financial sector has many significant benefits. These include improving customer experience, rapidly identifying smart investment opportunities and possibly granting customers more credit with better conditions. However, it raises policy questions related to ensuring accuracy and preventing discrimination, as well as the broader impact of automation on jobs.

AI in marketing and advertising

AI is influencing marketing and advertising in many ways. At the core, AI is enabling the
personalization of online experiences. This helps display the content in which consumers are
most likely to be interested. Developments in ML, coupled with the large quantities of data being
generated, increasingly allow advertisers to target their campaigns. They can deliver
personalized and dynamic ads to consumers at an unprecedented scale. Personalized advertising
offers significant benefits to enterprises and consumers. For enterprises, it could increase sales
and the return on investment of marketing campaigns. For consumers, online services funded by
advertising revenue are often provided free of charge to end users and can significantly decrease
consumers’ research costs.

AI in science

Global challenges today range from climate change to antibiotic resistance in bacteria. Solutions
to many of these challenges require increases in scientific knowledge. AI could increase the
productivity of science, at a time when some scholars are claiming that new ideas may be
becoming harder to find. AI also promises to improve research productivity even as pressure on
public research budgets is increasing. Scientific insight depends on drawing understanding from
vast amounts of scientific data generated by new scientific instrumentation. In this context, using
AI in science is becoming indispensable. Furthermore, AI will be a necessary complement to
human scientists because the volume of scientific papers is vast and growing rapidly, and
scientists may have reached “peak reading”. The use of AI in science may also enable novel
forms of discovery and enhance the reproducibility of scientific research. AI’s applications in
science and industry have become numerous and increasingly significant. For instance, AI has
predicted the behavior of chaotic systems, tackled complex computational problems in genetics,
improved the quality of astronomical imaging and helped discover the rules of chemical
synthesis. In addition, AI is being deployed in functions that range from analysis of large
datasets, hypothesis generation, and comprehension and analysis of scientific literature to
facilitation of data gathering, experimental design and experimentation itself.

AI in health

AI applications in healthcare and pharmaceuticals can help detect health conditions early, deliver preventative services, optimize clinical decision making, and discover new
treatments and medications. They can facilitate personalized healthcare and precision medicine,
while powering self-monitoring tools, applications and trackers. AI in healthcare offers potential
benefits for quality and cost of care. Nevertheless, it also raises policy questions, in particular
concerning access to (health) data and privacy. This section focuses on AI’s specific implications
for healthcare. In some ways, the health sector is an ideal platform for AI systems and a perfect
illustration of its potential impacts. A knowledge-intensive industry, it depends on data and
analytics to improve therapies and practices. There has been tremendous growth in the range of
information collected, including clinical, genetic, behavioral and environmental data. Every day,
healthcare professionals, biomedical researchers and patients produce vast amounts of data from
an array of devices. These include electronic health records (EHRs), genome sequencing
machines, high-resolution medical imaging, Smartphone applications and ubiquitous sensing, as
well as Internet of Things devices that monitor patient health.

AI in security

AI promises to help address complex digital and physical security challenges. AI is already
broadly used in digital security applications such as network security, anomaly detection,
security operations automation and threat detection. At the same time, malicious use of AI is
expected to increase. Such malicious activities include identifying software vulnerabilities with
the goal of exploiting them to breach the availability, integrity or confidentiality of systems,
networks and data. This will affect the nature and overall level of digital security risk.
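As an illustration of the anomaly-detection use mentioned above, the sketch below flags unusual network connections. The traffic features and the choice of scikit-learn's IsolationForest are illustrative assumptions.

# Flagging anomalous network connections with an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per connection: [bytes sent, duration (s)].
normal_traffic = np.array([[500, 1.2], [520, 1.0], [480, 1.1],
                           [510, 1.3], [495, 0.9]])
new_connections = np.array([[505, 1.1], [50000, 30.0]])   # the second looks unusual

detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_traffic)
print(detector.predict(new_connections))   # 1 = looks normal, -1 = flagged as anomaly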

AI in the public sector

The potential of AI for public administrations is manifold. The development of AI technologies is already having an impact on how the public sector works and designs policies to serve citizens and businesses. Applications touch on areas such as health, transportation and security services.

AI applications using augmented and virtual reality

Companies are using AI technology for high-level visual recognition tasks such as image classification and object detection to develop augmented reality (AR) and virtual reality (VR) hardware and software. Benefits include offering immersive experiences, training and education, helping people with disabilities and providing entertainment.

What are the advantages of Artificial Intelligence?

There’s no doubt that technology has made our lives better. From music recommendations, map directions and mobile banking to fraud prevention, AI and other technologies have taken over. There’s a fine line between advancement and destruction. Let us take a look at some advantages of Artificial Intelligence:

• Reduction in human error
• Available 24×7
• Helps in repetitive work
• Digital assistance
• Faster decisions
• Rational Decision Maker
• Medical applications
• Improves Security
• Efficient Communication

• Reduction in human error

In an Artificial Intelligence model, all decisions are taken from previously gathered information after applying a certain set of algorithms. Hence, errors are reduced and the chances of accuracy increase with a greater degree of precision. When humans perform any task, there’s always a chance of error. We aren’t powered by algorithms and programs, and thus AI can be used to avoid such human error.
• Available 24×7

While an average human works 6-8 hours a day, AI makes machines work 24×7 without any breaks or boredom. As one might know, humans cannot work for long periods; our bodies require rest. An AI-powered system won’t require any breaks in between and is best used for tasks that need 24/7 attention.

• Helps in repetitive work

AI can productively automate mundane human tasks and free people up to be increasingly creative – from sending a thank-you mail or verifying documents to decluttering or answering queries. A repetitive task such as making food in a restaurant or a factory can be messed up because humans get tired or lose interest over time. Such tasks can easily be performed efficiently with the help of AI.

• Digital assistance

Many highly advanced organizations use digital assistants to interact with users in order to save human resources. These digital assistants are also used on many websites to answer user queries and provide a smoothly functioning interface. Chatbots are a great example of this.

• Faster decisions

AI, alongside other technologies, can make machines take decisions and carry out actions faster than an average human. This is because while making a decision, humans tend to analyze many factors both emotionally and practically, whereas AI-powered machines deliver programmed results quickly.

• Rational Decision Maker

We as humans may have evolved to a great extent technologically, but when it comes to decision making, we still allow our emotions to take over. In certain situations, it becomes important to take quick, efficient and logical decisions without letting our emotions control the way we think. AI-powered decision making is controlled by algorithms, and thus there is no scope for emotional decision making. This ensures that efficiency is not affected and productivity increases.

• Medical applications

One of the biggest advantages of Artificial Intelligence is its use in the medical industry. Doctors are now able to assess their patients’ health risks with the help of medical applications built for AI. Radiosurgery is being used to operate on tumors in such a way that it doesn’t damage surrounding tissues or cause any further harm. Medical professionals have been trained to use AI for surgery. AI systems can also help in efficiently detecting and monitoring various neurological disorders and stimulating brain functions.

• Improves Security

With advancements in technology, there is a chance of it being used for the wrong reasons, such as fraud and identity theft. But if used in the right manner, AI can be very helpful in keeping our security intact. It is being developed to help protect our lives and property. One major area where we can already see the implementation of AI in security is cyber security. AI has completely transformed the way we are able to secure ourselves against cyber threats.

• Efficient Communication

Looking at life just a couple of years ago, people who didn’t speak the same language weren’t able to communicate with each other without the help of a human translator who could understand and speak both languages. With the help of AI, such a problem no longer exists. Natural Language Processing (NLP) allows systems to translate words from one language to another, eliminating the middleman. Google Translate has advanced to a great extent and even provides an audio example of how a word or sentence in another language should be pronounced.

What are the disadvantages of Artificial Intelligence?

Let’s have a look at the disadvantages of Artificial Intelligence:

• Cost overruns
• Dearth of talent
• Lack of practical products
• Lack of standards in software development
• Potential for misuse
• Highly dependent on machines
• Requires Supervision

Conclusion
