Artificial Intelligence - Introduction
HISTORY OF AI
• 1950: Alan Turing published a landmark research paper in which he speculated about the possibility of creating machines that can think.
• 1951: Game AI: Christopher Strachey wrote a checkers program, and Dietrich Prinz wrote one for chess.
• 1956: The birth of AI: John McCarthy first coined the term "AI" at the Dartmouth Conference.
• 1959: First AI laboratory: the MIT AI Lab was set up, and research on AI began.
• 1997: IBM's Deep Blue beat the world chess champion Garry Kasparov.
• AI is the study of how to make computers do things that, at the moment, people do better.
• By John McCarthy: "The science and engineering of making intelligent machines."
• Creating systems that can exhibit intelligent behavior, learn new things on their own, demonstrate, explain, and advise their users.
ADVANTAGES OF ARTIFICIAL INTELLIGENCE
• High accuracy with fewer errors: AI systems are less error-prone and more accurate because they make decisions based on prior experience or information.
• High speed: AI systems can make decisions very quickly, which is how an AI system can beat a champion at chess.
• High reliability: AI machines are highly reliable and can perform the same action many times with consistent accuracy.
• Useful in risky areas: AI machines can help in situations such as defusing a bomb or exploring the ocean floor, where employing a human would be risky.
• Digital assistance: AI can provide digital assistants to users; for example, various e-commerce websites currently use AI to show products matching customer requirements.
• Useful as a public utility: AI can serve public needs, such as self-driving cars that make journeys safer and hassle-free, facial recognition for security, and natural language processing for communicating with humans in their own language.
DISADVANTAGES OF ARTIFICIAL INTELLIGENCE
• High cost: AI hardware and software are very costly and require substantial maintenance to meet current requirements.
• Can't think outside the box: Even as we build smarter machines with AI, they still cannot work outside the box; a robot will only do the work for which it is trained or programmed.
• No feelings and emotions: An AI machine can be an outstanding performer, but it has no feelings, so it cannot form an emotional attachment with humans and may sometimes be harmful to users if proper care is not taken.
• Increased dependency on machines: As technology advances, people are becoming more dependent on devices and may lose some of their own mental capabilities.
• No original creativity: Humans are creative and can imagine new ideas, but AI machines cannot match this power of human intelligence; they are neither creative nor imaginative.
TURING TEST: ACTING HUMANLY
• In 1950, Alan Turing introduced a test to check whether a machine can think like a human; this test is known as the Turing Test. In it, Turing proposed that a computer can be said to be intelligent if it can mimic human responses under specific conditions.
• The Turing Test was introduced in Turing's 1950 paper, "Computing Machinery and Intelligence," which considered the question, "Can machines think?"
QUALITIES REQUIRED TO PASS THE TEST:
• Natural language processing (NLP)
• Knowledge representation
• Automated reasoning
• Machine learning
• Computer vision
• Robotics
• The Turing test is based on a party game, the "imitation game," with some modifications. The game involves three players: one is a computer, another is a human responder, and the third is a human interrogator, who is isolated from the other two players and whose job is to determine which of the two is the machine.
• Suppose Player A is a computer, Player B is a human, and Player C is the interrogator. The interrogator knows that one of them is a machine, but must identify which one on the basis of questions and responses.
• The conversation between all players takes place via keyboard and screen, so the result does not depend on the machine's ability to render words as speech.
• The test result does not depend on getting each answer correct, but only on how closely the responses resemble human answers. The computer is permitted to do everything possible to force a wrong identification by the interrogator.
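The three-player setup described above can be sketched as a simple text-exchange loop. Everything here is a hypothetical illustration: the questions, the hard-coded "human" answers, and the canned machine replies stand in for real participants and a real chatbot.

```python
import random

def human_responder(question):
    # Hypothetical human answers (hard-coded for illustration).
    answers = {"What is 2 + 2?": "4, obviously.",
               "Do you enjoy music?": "Yes, I love jazz."}
    return answers.get(question, "I'm not sure.")

def machine_responder(question):
    # A naive machine: canned replies that try to imitate a human.
    answers = {"What is 2 + 2?": "Four, I think.",
               "Do you enjoy music?": "Sure, who doesn't?"}
    return answers.get(question, "Interesting question!")

def imitation_game(questions):
    # Randomly seat the machine as A or B; the interrogator sees only
    # text labeled A and B, never who is behind each label.
    machine_seat = random.choice(["A", "B"])
    transcript = []
    for q in questions:
        for seat in ("A", "B"):
            responder = machine_responder if seat == machine_seat else human_responder
            transcript.append((seat, q, responder(q)))
    return machine_seat, transcript

seat, transcript = imitation_game(["What is 2 + 2?", "Do you enjoy music?"])
print(f"Machine was in seat {seat}; the interrogator must guess from the transcript.")
```

The interrogator's only evidence is the transcript, which is exactly why the test result depends on how human-like the replies read, not on how they were produced.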
COGNITIVE MODELING: THINKING HUMANLY
• Cognitive modeling is an area of computer science that deals with simulating human problem-solving and mental processing in a computerized model. Such a model can be used to simulate or predict human behavior or performance on tasks similar to the ones modeled, and to improve human-computer interaction.
• Cognitive modeling is used in numerous artificial intelligence applications, such as expert systems, natural language processing, neural networks, robotics, and virtual reality. Cognitive models are also used to improve products in manufacturing, in areas such as human factors engineering, computer game design, and user interface design.
• Neural networks work similarly to the human brain by running training data through
a large number of computational nodes, called artificial neurons, which pass
information back and forth between each other. By accumulating information in this
distributed way, applications can make predictions about future inputs.
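As a minimal sketch of that idea in plain Python, a tiny feed-forward network passes an input through a layer of artificial neurons and on to an output neuron. The weights here are made up for illustration, not trained:

```python
import math

def sigmoid(x):
    # Squash a neuron's weighted sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each output neuron computes sigmoid(w . x + b).
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A 2-input, 2-hidden-neuron, 1-output network; weights are illustrative.
hidden_w = [[0.5, -0.6], [0.8, 0.2]]
hidden_b = [0.1, -0.1]
out_w = [[1.2, -0.7]]
out_b = [0.3]

x = [1.0, 0.0]                      # an example input
h = layer(x, hidden_w, hidden_b)    # hidden-layer activations
y = layer(h, out_w, out_b)          # the network's prediction
print(y)
```

Training a real network consists of adjusting those weights so the predictions move closer to known answers; here they stay fixed just to show how information flows through the nodes.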
• Reinforcement learning is an increasingly prominent area of cognitive modeling.
This approach has algorithms run through many iterations of a task that takes
multiple steps, incentivizing actions that eventually produce positive outcomes while
penalizing actions that lead to negative ones. This is a primary part of the AI
algorithm that Google's DeepMind used for its AlphaGo application, which bested
the top human Go players in 2016.
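A minimal sketch of that reward-and-penalty loop is tabular Q-learning on a toy five-state corridor. The environment, rewards, and parameters below are invented for illustration; DeepMind's actual AlphaGo system is far more elaborate:

```python
import random

random.seed(0)

N_STATES = 5          # corridor states 0..4; state 4 is the rewarded goal
ACTIONS = (-1, +1)    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    # Pick the best-known action in state s, breaking ties randomly.
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0   # reward only at the goal
        # Nudge Q toward the reward plus the discounted best future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# The learned policy should step right (+1) toward the goal in every state.
policy = [greedy(s) for s in range(N_STATES - 1)]
print(policy)
```

The key mechanism is the update line: actions that eventually lead to the reward accumulate higher Q-values over many iterations, while detours are implicitly penalized by the discount factor.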
MILESTONES IN THE HISTORY OF AI
• Artificial Intelligence is neither a new term nor a new technology for researchers; it is much older than you might imagine. There are even myths of mechanical men in ancient Greek and Egyptian mythology. The following milestones trace the journey of AI from its origins to the present day.
MATURATION OF ARTIFICIAL INTELLIGENCE (1943-
1952)
• Year 1943: The first work now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
• Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning.
• Year 1950: Alan Turing, an English mathematician who pioneered machine learning, published "Computing Machinery and Intelligence" in 1950, in which he proposed a test of a machine's ability to exhibit intelligent behavior equivalent to human intelligence, now called the Turing test.
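The first two ideas above can be sketched together in a few lines of Python: a McCulloch-Pitts-style threshold neuron whose connection weights are strengthened by a simple form of Hebb's rule (Δw = η·x·y, often summarized as "neurons that fire together wire together"). The inputs, weights, learning rate, and threshold are illustrative assumptions, not values from the original papers:

```python
def fires(weights, inputs, threshold):
    # McCulloch-Pitts neuron: output 1 if the weighted input sum
    # reaches the threshold, else 0.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def hebbian_update(weights, inputs, output, rate=0.1):
    # Hebb's rule: strengthen each connection in proportion to how
    # strongly its input and the neuron's output are active together.
    return [w + rate * x * output for w, x in zip(weights, inputs)]

weights = [0.4, 0.6]
threshold = 0.5

for _ in range(3):                    # present the same pattern three times
    x = [1, 1]
    y = fires(weights, x, threshold)  # the neuron fires for this pattern
    weights = hebbian_update(weights, x, y)

print(weights)   # each active connection grows by 0.1 per firing
```

Repeatedly presenting a pattern that makes the neuron fire strengthens exactly the connections carrying that pattern, which is the core intuition behind Hebb's 1949 rule.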
THE BIRTH OF ARTIFICIAL INTELLIGENCE (1952-1956)
• Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program," named the "Logic Theorist." This program proved 38 of 52 mathematical theorems and found new, more elegant proofs for some of them.
• Year 1956: The term "Artificial Intelligence" was first adopted by the American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as an academic field.
• Around that time, high-level computer languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very high.
THE GOLDEN YEARS - EARLY ENTHUSIASM (1956-1974)
• The years 1956 to 1974 were a period of high enthusiasm and generous funding for AI research.
THE FIRST AI WINTER (1974-1980)
• The period from 1974 to 1980 was the first AI winter. "AI winter" refers to a period in which computer scientists faced a severe shortage of government funding for AI research.
• During AI winters, public interest in artificial intelligence declined.
A BOOM OF AI (1980-1987)
• Year 1980: After the AI winter, AI came back with "expert systems." Expert systems were programs that emulated the decision-making ability of a human expert.
• Also in 1980, the first national conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford University.
THE SECOND AI WINTER (1987-1993)
• The period between 1987 and 1993 was the second AI winter.
• Investors and governments again stopped funding AI research because of high costs and inefficient results; even expert systems such as XCON proved very expensive to maintain.
THE EMERGENCE OF INTELLIGENT AGENTS (1993-2011)
• Year 1997: IBM's Deep Blue beat the world chess champion Garry Kasparov, becoming the first computer to defeat a reigning world chess champion.
• Year 2002: For the first time, AI entered the home in the form of Roomba, a robot vacuum cleaner.
• Year 2006: By 2006, AI had entered the business world; companies like Facebook, Twitter, and Netflix began using AI.
DEEP LEARNING, BIG DATA AND ARTIFICIAL GENERAL
INTELLIGENCE (2011-PRESENT)
• Year 2011: IBM's Watson won the quiz show Jeopardy!, where it had to answer complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
• Year 2012: Google launched the Android feature "Google Now," which could provide information to the user as a prediction.
• Year 2014: The chatbot "Eugene Goostman" won a competition in the famous "Turing test."
• Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and performed remarkably well.
• In the same year, Google demonstrated "Duplex," a virtual assistant that booked a hairdresser appointment over the phone; the person on the other end did not realize she was talking to a machine.
• AI has now developed to a remarkable level. The concepts of deep learning, big data, and data science are booming. Companies like Google, Facebook, IBM, and Amazon are working with AI to create amazing devices. The future of Artificial Intelligence is inspiring, promising ever higher levels of intelligence.