Artificial Intelligence
By Srishti Bhatia

What is artificial intelligence?

"CAN MACHINES THINK?" - ALAN TURING, 1950

The idea of intelligent, artificial beings appears as early as the ancient Greek myths of Antiquity. Aristotle's development of the syllogism and its use of deductive reasoning was a key moment in mankind's quest to understand its own intelligence. While the roots are long and deep, the history of artificial intelligence as we think of it today spans less than a century. The following is a quick look at some of the most important events in AI.

1940s
• (1943) Warren McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity." The paper proposed the first mathematical model for building a neural network.
• (1949) In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposes the theory that neural pathways are created from experiences and that connections between neurons become stronger the more frequently they're used. Hebbian learning continues to be an important model in AI.
AI from 1950 to 2020

1950s
• (1950) Alan Turing publishes "Computing Machinery and Intelligence," proposing what is now known as the Turing Test, a method for determining if a machine is intelligent.
• (1950) Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer.
• (1950) Claude Shannon publishes the paper "Programming a Computer for Playing Chess."
• (1950) Isaac Asimov publishes the "Three Laws of Robotics."
• (1952) Arthur Samuel develops a self-learning program to play checkers.
• (1954) The Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English.
• (1956) The phrase "artificial intelligence" is coined at the "Dartmouth Summer Research Project on Artificial Intelligence." Led by John McCarthy, the conference, which defined the scope and goals of AI, is widely considered the birth of artificial intelligence as we know it today.
• (1956) Allen Newell and Herbert Simon demonstrate Logic Theorist (LT), the first reasoning program.
• (1958) John McCarthy develops the AI programming language Lisp and publishes the paper "Programs with Common Sense," which proposed the hypothetical Advice Taker, a complete AI system with the ability to learn from experience as effectively as humans do.
• (1959) Allen Newell, Herbert Simon and J.C. Shaw develop the General Problem Solver (GPS), a program designed to imitate human problem-solving.
• (1959) Herbert Gelernter develops the Geometry Theorem Prover program.
• (1959) Arthur Samuel coins the term "machine learning" while at IBM.
• (1959) John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.
Continued (1960s-1980s)

1960s
• (1963) John McCarthy starts the AI Lab at Stanford.
• (1966) The Automatic Language Processing Advisory Committee (ALPAC) report by the U.S. government details the lack of progress in machine translation research, a major Cold War initiative with the promise of automatic and instantaneous translation of Russian. The ALPAC report leads to the cancellation of all government-funded MT projects.
• (1969) The first successful expert systems, DENDRAL (a program for analyzing chemical compounds) and MYCIN (designed to diagnose blood infections), are created at Stanford.

1970s
• (1972) The logic programming language PROLOG is created.
• (1973) The "Lighthill Report," detailing the disappointments in AI research, is released by the British government and leads to severe cuts in funding for artificial intelligence projects.
• (1974-1980) Frustration with the progress of AI development leads to major DARPA cutbacks in academic grants. Combined with the earlier ALPAC report and the previous year's "Lighthill Report," artificial intelligence funding dries up and research stalls. This period is known as the "First AI Winter."

1980s
• (1980) Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off an investment boom in expert systems that will last for much of the decade, effectively ending the first "AI Winter."
• (1982) Japan's Ministry of International Trade and Industry launches the ambitious Fifth Generation Computer Systems (FGCS) project. The goal of FGCS is to develop supercomputer-like performance and a platform for AI development.
• (1983) In response to Japan's FGCS, the U.S. government launches the Strategic Computing Initiative to provide DARPA-funded research in advanced computing and artificial intelligence.
• (1985) Companies are spending more than a billion dollars a year on expert systems, and an entire industry known as the Lisp machine market springs up to support them. Companies like Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.
• (1987-1993) As computing technology improved, cheaper alternatives emerged and the Lisp machine market collapsed in 1987, ushering in the "Second AI Winter." During this period, expert systems proved too expensive to maintain and update, eventually falling out of favor.
Continued (1990-2020)

1990s
• (1991) U.S. forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.
• (1992) Japan terminates the FGCS project, citing failure to meet the ambitious goals outlined a decade earlier.
• (1993) DARPA ends the Strategic Computing Initiative after spending nearly $1 billion and falling far short of expectations.
• (1997) IBM's Deep Blue beats world chess champion Garry Kasparov.

2000s
• (2005) STANLEY, a self-driving car, wins the DARPA Grand Challenge.
• (2005) The U.S. military begins investing in autonomous robots like Boston Dynamics' "Big Dog" and iRobot's "PackBot."
• (2008) Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app.

2010-2014
• (2011) IBM's Watson trounces the competition on Jeopardy!
• (2011) Apple releases Siri, an AI-powered virtual assistant, through its iOS operating system.
• (2012) Andrew Ng, founder of the Google Brain Deep Learning project, feeds a neural network using deep learning algorithms 10 million YouTube videos as a training set. The neural network learned to recognize a cat without being told what a cat is, ushering in a breakthrough era for neural networks and deep learning funding.
• (2014) Google builds the first self-driving car to pass a state driving test.
• (2014) Amazon's Alexa, a virtual home assistant, is released.

2015-2021
• (2016) Google DeepMind's AlphaGo defeats world champion Go player Lee Sedol. The complexity of the ancient Chinese game was seen as a major hurdle to clear in AI.
• (2016) The first "robot citizen," a humanoid robot named Sophia, is created by Hanson Robotics and is capable of facial recognition, verbal communication and facial expression.
• (2018) Google releases natural language processing engine BERT, reducing barriers in translation and understanding for machine learning applications.
• (2018) Waymo launches its Waymo One service, allowing users throughout the Phoenix metropolitan area to request a pick-up from one of the company's self-driving vehicles.
• (2020) Baidu releases its LinearFold AI algorithm to scientific and medical teams working to develop a vaccine during the early stages of the SARS-CoV-2 pandemic. The algorithm is able to predict the RNA sequence of the virus in just 27 seconds, 120 times faster than other methods.
Why is artificial intelligence important?
AI is important because it can give enterprises insights into their operations that they may not have been aware of previously and because, in some cases, AI can perform tasks better than humans. Particularly when it comes to repetitive, detail-oriented tasks like analyzing large numbers of legal documents to ensure relevant fields are filled in properly, AI tools often complete jobs quickly and with relatively few errors.

This has helped fuel an explosion in efficiency and opened the door to entirely new business opportunities for some larger enterprises. Prior to the current wave of AI, it would have been hard to imagine using computer software to connect riders to taxis, but today Uber has become one of the largest companies in the world by doing just that. It utilizes sophisticated machine learning algorithms to predict when people are likely to need rides in certain areas, which helps proactively get drivers on the road before they're needed. As another example, Google has become one of the largest players for a range of online services by using machine learning to understand how people use their services and then improving them. In 2017, the company's CEO, Sundar Pichai, pronounced that Google would operate as an "AI first" company.

Today's largest and most successful enterprises have used AI to improve their operations and gain an advantage over their competitors.
How does AI work?
As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI. Often what they refer to as AI is simply one component of AI, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No one programming language is synonymous with AI, but a few, including Python, R and Java, are popular.

In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states (a minimal code sketch of this loop follows at the end of this section). In this way, a chatbot that is fed examples of text chats can learn to produce lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples.

AI programming focuses on three cognitive skills: learning, reasoning and self-correction.

Learning processes. This aspect of AI programming focuses on acquiring data and creating rules for how to turn the data into actionable information. The rules, which are called algorithms, provide computing devices with step-by-step instructions for how to complete a specific task.

Reasoning processes. This aspect of AI programming focuses on choosing the right algorithm to reach a desired outcome.

Self-correction processes. This aspect of AI programming is designed to continually fine-tune algorithms and ensure they provide the most accurate results possible.
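The sketch below illustrates the ingest-learn-predict loop described above. It is a minimal example assuming Python and scikit-learn; the features, labels and "support ticket" framing are invented for illustration and are not from the original text.

```python
# Minimal sketch of "ingest labeled data -> find patterns -> predict".
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Labeled training data: each row is [hour_of_day, message_length]; each label
# marks whether a hypothetical support ticket turned out to be urgent (1) or not (0).
X = [[9, 120], [14, 40], [2, 300], [23, 250], [11, 60], [3, 280], [15, 35], [22, 220]]
y = [0, 0, 1, 1, 0, 1, 0, 1]

# Hold out part of the data to check that the learned patterns generalize.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)         # learning: extract patterns from labeled examples
print(model.predict(X_test))        # reasoning: apply those patterns to unseen cases
print(model.score(X_test, y_test))  # self-correction begins with measuring accuracy
```

Real systems differ mainly in scale (far more data, richer features and more capable models), but the cycle of ingesting labeled data, learning patterns and making predictions is the same.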
What are the 4 types of artificial intelligence?
Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, explained in a 2016 article that AI can be categorized into four types, beginning with the task-specific intelligent systems in wide use today and progressing to sentient systems, which do not yet exist. The categories are as follows:

Type 1: Reactive machines. These AI systems have no memory and are task specific. An example is Deep Blue, the IBM chess program that beat Garry Kasparov in the 1990s. Deep Blue can identify pieces on the chessboard and make predictions, but because it has no memory, it cannot use past experiences to inform future ones.

Type 2: Limited memory. These AI systems have memory, so they can use past experiences to inform future decisions. Some of the decision-making functions in self-driving cars are designed this way.

Type 3: Theory of mind. Theory of mind is a psychology term. When applied to AI, it means that the system would have the social intelligence to understand emotions. This type of AI will be able to infer human intentions and predict behavior, a necessary skill for AI systems to become integral members of human teams.

Type 4: Self-awareness. In this category, AI systems have a sense of self, which gives them consciousness. Machines with self-awareness understand their own current state. This type of AI does not yet exist.
What are examples of AI technology and how is it used today?
AI is incorporated into a variety of different types of technology. Here are six examples:

• Automation. When paired with AI technologies, automation tools can expand the volume and types of tasks performed. An example is robotic process automation (RPA), a type of software that automates repetitive, rules-based data processing tasks traditionally done by humans. When combined with machine learning and emerging AI tools, RPA can automate bigger portions of enterprise jobs, enabling RPA's tactical bots to pass along intelligence from AI and respond to process changes.

• Machine learning. This is the science of getting a computer to act without explicit programming. Deep learning is a subset of machine learning that, in very simple terms, can be thought of as the automation of predictive analytics. There are three types of machine learning algorithms:
  • Supervised learning. Data sets are labeled so that patterns can be detected and used to label new data sets.
  • Unsupervised learning. Data sets aren't labeled and are sorted according to similarities or differences.
  • Reinforcement learning. Data sets aren't labeled but, after performing an action or several actions, the AI system is given feedback.

• Machine vision. This technology gives a machine the ability to see. Machine vision captures and analyzes visual information using a camera, analog-to-digital conversion and digital signal processing. It is often compared to human eyesight, but machine vision isn't bound by biology and can be programmed to see through walls, for example. It is used in a range of applications from signature identification to medical image analysis. Computer vision, which is focused on machine-based image processing, is often conflated with machine vision.

• Natural language processing (NLP). This is the processing of human language by a computer program. One of the older and best-known examples of NLP is spam detection, which looks at the subject line and text of an email and decides if it's junk (a short code sketch of this idea follows this list). Current approaches to NLP are based on machine learning. NLP tasks include text translation, sentiment analysis and speech recognition.

• Robotics. This field of engineering focuses on the design and manufacturing of robots. Robots are often used to perform tasks that are difficult for humans to perform or to perform consistently. For example, robots are used in assembly lines for car production or by NASA to move large objects in space. Researchers are also using machine learning to build robots that can interact in social settings.

• Self-driving cars. Autonomous vehicles use a combination of computer vision, image recognition and deep learning to build automated skill at piloting a vehicle while staying in a given lane and avoiding unexpected obstructions, such as pedestrians.
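To connect the NLP and machine learning bullets above, here is a minimal, hypothetical supervised-learning sketch of spam detection using Python and scikit-learn. The example emails and labels are invented for illustration; real filters train on far larger labeled corpora.

```python
# Spam detection as supervised learning: labeled emails in, word patterns learned,
# predictions out for emails the model has never seen.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now, click here",        # spam
    "Limited offer: cheap loans, act today",   # spam
    "Meeting moved to 3pm, see agenda",        # not spam
    "Can you review the attached report?",     # not spam
]
labels = ["spam", "spam", "ham", "ham"]        # the labeled training data

# Turn each email's words into counts, then learn which words predict "spam".
classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(emails, labels)

print(classifier.predict(["Click here for a free loan offer"]))      # likely "spam"
print(classifier.predict(["Agenda for tomorrow's review meeting"]))  # likely "ham"
```

The design mirrors the supervised-learning description above: labeled examples go in, word-frequency patterns are learned, and a prediction comes out for email the model has never seen.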
What are the applications of AI?
• AI in healthcare. The biggest bets are on improving patient outcomes and reducing costs. Companies are applying machine learning to make better and faster diagnoses than humans. One of the best-known healthcare technologies is IBM Watson. It understands natural language and can respond to questions asked of it. The system mines patient data and other available data sources to form a hypothesis, which it then presents with a confidence scoring schema. Other AI applications include using online virtual health assistants and chatbots to help patients and healthcare customers find medical information, schedule appointments, understand the billing process and complete other administrative processes. An array of AI technologies is also being used to predict, fight and understand pandemics such as COVID-19.

• AI in business. Machine learning algorithms are being integrated into analytics and customer relationship management (CRM) platforms to uncover information on how to better serve customers. Chatbots have been incorporated into websites to provide immediate service to customers. Automation of job positions has also become a talking point among academics and IT analysts.

• AI in education. AI can automate grading, giving educators more time. It can assess students and adapt to their needs, helping them work at their own pace. AI tutors can provide additional support to students, ensuring they stay on track. And it could change where and how students learn, perhaps even replacing some teachers.

• AI in finance. AI in personal finance applications, such as Intuit Mint or TurboTax, is disrupting financial institutions. Applications such as these collect personal data and provide financial advice. Other programs, such as IBM Watson, have been applied to the process of buying a home. Today, artificial intelligence software performs much of the trading on Wall Street.
Continued
• AI in manufacturing. Manufacturing has been at the forefront of incorporating robots into the workflow. For example, the industrial robots that were at one time programmed to perform single tasks and separated from human workers increasingly function as cobots: smaller, multitasking robots that collaborate with humans and take on responsibility for more parts of the job in warehouses, factory floors and other workspaces.

• AI in banking. Banks are successfully employing chatbots to make their customers aware of services and offerings and to handle transactions that don't require human intervention. AI virtual assistants are being used to improve and cut the costs of compliance with banking regulations. Banking organizations are also using AI to improve their decision-making for loans, set credit limits and identify investment opportunities.

• AI in law. The discovery process -- sifting through documents -- in law is often overwhelming for humans. Using AI to help automate the legal industry's labor-intensive processes is saving time and improving client service. Law firms are using machine learning to describe data and predict outcomes, computer vision to classify and extract information from documents, and natural language processing to interpret requests for information.

• AI in transportation. In addition to AI's fundamental role in operating autonomous vehicles, AI technologies are used in transportation to manage traffic, predict flight delays, and make ocean shipping safer and more efficient.

• Security. AI and machine learning are at the top of the buzzword list security vendors use today to differentiate their offerings, but those terms also represent truly viable technologies. Organizations use machine learning in security information and event management (SIEM) software and related areas to detect anomalies and identify suspicious activities that indicate threats. By analyzing data and using logic to identify similarities to known malicious code, AI can provide alerts to new and emerging attacks much sooner than human employees and previous technology iterations. The maturing technology is playing a big role in helping organizations fight off cyber attacks.
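As a rough illustration of the anomaly-detection idea in the Security bullet, the sketch below uses scikit-learn's IsolationForest on invented "login event" features (hour of login, megabytes transferred). It is a minimal, assumption-laden example, not a real SIEM integration.

```python
# Flagging anomalous events: fit on mostly routine activity and let the
# model mark the events that look least like the rest.
from sklearn.ensemble import IsolationForest

# Each event is [hour_of_login, megabytes_transferred] -- invented data.
events = [
    [9, 5], [10, 8], [11, 6], [14, 7], [15, 5],
    [9, 6], [13, 9], [16, 4],
    [3, 500],   # an unusual 3 a.m. event with a large transfer
]

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(events)

# predict() returns -1 for events the model considers anomalous, 1 for normal ones.
print(detector.predict(events))
print(detector.predict([[2, 450]]))  # a new event resembling the outlier
```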
Ethical use of artificial intelligence
• While AI tools present a range of new functionality for businesses, the use of artificial intelligence also raises ethical questions because, for better or worse, an AI system will reinforce what it has already learned.

• This can be problematic because machine learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human being selects what data is used to train an AI program, the potential for machine learning bias is inherent and must be monitored closely.

• Anyone looking to use machine learning as part of real-world, in-production systems needs to factor ethics into their AI training processes and strive to avoid bias. This is especially true when using AI algorithms that are inherently unexplainable in deep learning and generative adversarial network (GAN) applications.

• Explainability is a potential stumbling block to using AI in industries that operate under strict regulatory compliance requirements. For example, financial institutions in the United States operate under regulations that require them to explain their credit-issuing decisions. When a decision to refuse credit is made by AI programming, however, it can be difficult to explain how the decision was arrived at, because the AI tools used to make such decisions operate by teasing out subtle correlations between thousands of variables. When the decision-making process cannot be explained, the program may be referred to as black box AI.
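One common way practitioners begin to probe a black-box model of the kind described above is to measure which inputs its decisions lean on most. The sketch below is a hypothetical illustration using scikit-learn's permutation importance on an invented credit-style data set; the features, the hidden approval rule and the model choice are all assumptions, not a description of any real credit system.

```python
# Probing a black-box model: permutation importance estimates how much each
# input feature contributes to the model's decisions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
income = rng.normal(50, 15, 200)        # invented applicant features
debt = rng.normal(20, 8, 200)
account_age = rng.normal(5, 2, 200)
X = np.column_stack([income, debt, account_age])
# Hypothetical rule hidden in the data: approval depends on income versus debt.
y = (income - 1.5 * debt > 15).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["income", "debt", "account_age"], result.importances_mean):
    print(f"{name}: {score:.3f}")  # higher score = the model relies on this input more
```

Techniques like this do not make a black-box model fully explainable, but they indicate which variables drive its decisions, which is a starting point for the kind of regulatory explanations discussed above.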
Augmented intelligence vs. artificial intelligence