
Evolution of Artificial Intelligence
Aarav Shah
27/8/24
What is AI?
Artificial intelligence (AI) is technology that enables
computers and machines to simulate human learning,
comprehension, problem-solving, decision-making,
creativity, and autonomy. Applications and devices
equipped with AI can see and identify objects. They can
understand and respond to human language. They can
learn from new information and experience. They can
make detailed recommendations to users and
experts. They can also act independently, replacing the need
for human intelligence or intervention (a classic example
being a self-driving car).
Evolution of AI
Artificial intelligence is neither a new term nor a new
technology for researchers; the idea is far older than most
people imagine. Myths of mechanical men even appear in
ancient Greek and Egyptian legends. The following milestones
in the history of AI trace its journey from those early days
to present-day development.
Maturation of Artificial Intelligence (1943-1952)
Between 1943 and 1952, artificial intelligence (AI) made notable early progress,
moving from a mere concept to tangible experiments and early working models.
Here are some key events from this period:
•Year 1943: The first work now recognized as AI was done by Warren McCulloch
and Walter Pitts in 1943, when they proposed a mathematical model of artificial neurons.
•Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection
strength between neurons. His rule is now called Hebbian learning.
•Year 1950: Alan Turing, the English mathematician and machine learning
pioneer, published "Computing Machinery and Intelligence" in 1950. In it he
proposed a test, now called the Turing test, of a machine's ability to exhibit
intelligent behavior equivalent to that of a human.
•Year 1951: Marvin Minsky and Dean Edmonds built SNARC, the first artificial
neural network machine, using 3,000 vacuum tubes to simulate a network of
40 neurons.
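Hebb's 1949 rule is often summarized as "neurons that fire together, wire together": the strength of a connection grows in proportion to the joint activity of the neurons it links. A minimal sketch in Python (the function name, learning rate, and activity values are illustrative choices, not from Hebb's original work):

```python
# Hebbian learning sketch: the weight between two neurons increases
# in proportion to their simultaneous activity.
# delta_w = eta * pre_activity * post_activity

def hebbian_update(w, pre, post, eta=0.1):
    """Return the connection weight after one Hebbian step."""
    return w + eta * pre * post

w = 0.0
# When both neurons are repeatedly active together, the weight strengthens.
for _ in range(5):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(w)  # weight has grown from 0.0 toward 0.5
```

Note that the plain rule only ever strengthens weights; later variants (such as Oja's rule) add normalization so weights do not grow without bound.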
The birth of Artificial Intelligence (1952-1956)
From 1952 to 1956, AI surfaced as a unique domain of investigation. During this
period, pioneers and forward-thinkers commenced the groundwork for what would
ultimately transform into a revolutionary technological domain. Here are notable
occurrences from this era:
•Year 1952: Arthur Samuel pioneered the creation of the Samuel Checkers-Playing
Program, which marked the world's first self-learning program for playing games.
•Year 1955: Allen Newell and Herbert A. Simon created the first artificial
intelligence program, named "Logic Theorist". The program went on to prove
38 of 52 mathematics theorems and found new, more elegant proofs for some
of them.
•Year 1956: The term "Artificial Intelligence" was coined by American computer
scientist John McCarthy at the Dartmouth Conference, establishing AI as an
academic field for the first time.
The golden years - Early enthusiasm (1956-1974)
The period from 1956 to 1974 is commonly known as the "Golden Age" of artificial intelligence (AI).
In this timeframe, AI researchers and innovators were filled with enthusiasm and achieved
remarkable advancements in the field. Here are some notable events from this era:
•Year 1958: During this period, Frank Rosenblatt introduced the perceptron, one of the early artificial
neural networks with the ability to learn from data. This invention laid the foundation for modern
neural networks. Simultaneously, John McCarthy developed the Lisp programming language, which
swiftly found favor within the AI community, becoming highly popular among developers.
•Year 1959: Arthur Samuel is credited with introducing the phrase "machine learning" in a pivotal
paper in which he proposed that computers could be programmed to surpass their creators in
performance. Additionally, Oliver Selfridge made a notable contribution to machine learning with his
publication "Pandemonium: A Paradigm for Learning." This work outlined a model capable of self-
improvement, enabling it to discover patterns in events more effectively.
•Year 1964: During his time as a doctoral candidate at MIT, Daniel Bobrow created STUDENT, one of
the early programs for natural language processing (NLP), with the specific purpose of solving
algebra word problems.
•Year 1965: The initial expert system, Dendral, was devised by Edward Feigenbaum, Bruce G.
Buchanan, Joshua Lederberg, and Carl Djerassi. It aided organic chemists in identifying unfamiliar
organic compounds.
•Year 1966: Researchers emphasized developing algorithms that could solve
mathematical problems. Joseph Weizenbaum created ELIZA, the first chatbot, in 1966.
Furthermore, the Stanford Research Institute created Shakey, the earliest mobile
intelligent robot to combine AI, computer vision, navigation, and NLP. It can be
considered a precursor to today's self-driving cars and drones.
•Year 1968: Terry Winograd developed SHRDLU, which was the pioneering multimodal AI
capable of following user instructions to manipulate and reason within a world of blocks.
•Year 1969: Arthur Bryson and Yu-Chi Ho outlined a learning algorithm known as
backpropagation, which enabled the development of multilayer artificial neural networks.
This represented a significant advancement beyond the perceptron and laid the groundwork
for deep learning. Additionally, Marvin Minsky and Seymour Papert authored the book
"Perceptrons," which elucidated the constraints of basic neural networks. This publication
led to a decline in neural network research and a resurgence in symbolic AI research.
•Year 1972: The first intelligent humanoid robot was built in Japan, which was named
WABOT-1.
•Year 1973: James Lighthill published the report titled "Artificial Intelligence: A General
Survey," resulting in a substantial reduction in the British government's backing for AI
research.
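Rosenblatt's perceptron, mentioned above for 1958, can be sketched in a few lines: a weighted sum passed through a threshold, with weights nudged whenever a prediction is wrong. A minimal illustration learning logical AND (the learning rate, epoch count, and function names are our own choices, not Rosenblatt's):

```python
# Perceptron sketch (after Rosenblatt, 1958): a thresholded weighted
# sum whose weights are adjusted on every misclassification.

def predict(weights, bias, x):
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=10):
    weights, bias = [0, 0], 0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Error-driven update: shift weights toward the correct label.
            weights = [w + error * xi for w, xi in zip(weights, x)]
            bias += error
    return weights, bias

# Logical AND is linearly separable, so a single perceptron can learn it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(data)
print([predict(weights, bias, x) for x, _ in data])  # [0, 0, 0, 1]
```

The limitation Minsky and Papert highlighted in "Perceptrons" follows directly from this structure: a single thresholded sum can only separate classes with a straight line, so it can never learn a function like XOR.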
The first AI winter (1974-1980)
The first AI winter, lasting from 1974 to 1980, was a tough
period for artificial intelligence (AI). An "AI winter" refers
to a period in which computer scientists faced a severe
shortage of government funding for AI research.
•Research funding dropped substantially, and AI faced a
widespread sense of disappointment.
•Public interest in artificial intelligence also declined
during this period.
A boom of AI (1980-1987)
Between 1980 and 1987, AI underwent a renaissance and newfound vitality after the
challenging era of the First AI Winter. Here are notable occurrences from this
timeframe:
•In 1980, the first national conference of the American Association for
Artificial Intelligence (AAAI) was held at Stanford University.
•Year 1980: After the AI winter, AI returned in the form of expert systems,
programs designed to emulate the decision-making ability of a human
expert. Additionally, Symbolics Lisp machines were brought into commercial use,
marking the onset of an AI resurgence. In subsequent years, however, the Lisp
machine market experienced a significant downturn.
•Year 1981: Danny Hillis created parallel computers tailored for AI and various
computational functions, featuring an architecture akin to contemporary GPUs.
•Year 1984: Marvin Minsky and Roger Schank introduced the phrase "AI winter"
during a gathering of the Association for the Advancement of Artificial Intelligence.
They cautioned the business world that exaggerated expectations about AI would
result in disillusionment and the eventual downfall of the industry, which indeed
occurred three years later.
•Year 1985: Judea Pearl introduced Bayesian network causal analysis, presenting
statistical methods for encoding uncertainty in computer systems.
The second AI winter (1987-1993)
•The period from 1987 to 1993 was the second AI winter.
•Investors and governments again stopped funding AI research because of
high costs and disappointing results. Even expert systems such as XCON,
though initially cost-effective, proved expensive to maintain and update.
The emergence of intelligent agents (1993-2011)
Between 1993 and 2011, there were significant leaps forward in artificial
intelligence (AI), particularly in the development of intelligent computer
programs. During this era, AI professionals shifted their emphasis from
attempting to match human intelligence to crafting pragmatic, ingenious software
tailored to specific tasks. Here are some noteworthy occurrences from this
timeframe:
•Year 1997: In 1997, IBM's Deep Blue achieved a historic milestone by defeating
world chess champion Garry Kasparov, marking the first time a computer
triumphed over a reigning world chess champion. Moreover, Sepp Hochreiter and
Jürgen Schmidhuber introduced the Long Short-Term Memory recurrent neural
network, revolutionizing the capability to process entire sequences of data such
as speech or video.
•Year 2002: For the first time, AI entered the home in the form of Roomba, a
robotic vacuum cleaner.
•Year 2006: By 2006, AI had entered the business world. Companies like
Facebook, Twitter, and Netflix started using AI.
•Year 2009: Rajat Raina, Anand Madhavan, and Andrew Ng
released the paper "Large-Scale Deep Unsupervised Learning
Using Graphics Processors," introducing the idea of
employing GPUs to train large neural networks.
•Year 2011: Jürgen Schmidhuber, Dan Claudiu Cireșan, Ueli
Meier, and Jonathan Masci created the first CNN that attained
"superhuman" performance by emerging as the victor in the
German Traffic Sign Recognition competition. Furthermore, Apple
launched Siri, a voice-activated personal assistant capable of
generating responses and executing actions in response to voice
commands.
Deep learning, big data and artificial general intelligence (2011-present)
From 2011 to the present moment, significant advancements have unfolded within the artificial intelligence (AI)
domain. These achievements can be attributed to the amalgamation of deep learning, extensive data
application, and the ongoing quest for artificial general intelligence (AGI). Here are notable occurrences from this
timeframe:
•Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show requiring it to answer complex questions as well as
riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
•Year 2012: Google launched an Android app feature, "Google Now", which was able to provide information to
the user as a prediction. Further, Geoffrey Hinton, Ilya Sutskever, and Alex Krizhevsky presented a deep CNN
structure that emerged victorious in the ImageNet challenge, sparking the proliferation of research and
application in the field of deep learning.
•Year 2013: China's Tianhe-2 system achieved a remarkable feat by doubling the speed of the world's leading
supercomputers to reach 33.86 petaflops. It retained its status as the world's fastest system for the third
consecutive time. Furthermore, DeepMind unveiled deep reinforcement learning, a CNN that acquired skills
through repetitive learning and rewards, ultimately surpassing human experts in playing games. Also, Google
researcher Tomas Mikolov and his team introduced Word2vec, a tool designed to automatically discern the
semantic connections among words.
•Year 2014: In 2014, the chatbot "Eugene Goostman" was claimed to have passed the famous Turing test in a
competition. The same year, Ian Goodfellow and his team pioneered generative adversarial networks (GANs), a type
of machine learning framework employed for producing images, altering pictures, and crafting deepfakes, while
Diederik Kingma and Max Welling introduced variational autoencoders (VAEs) for generating images, videos, and text.
•Year 2016: DeepMind's AlphaGo secured victory over the esteemed Go
player Lee Sedol in Seoul, South Korea, recalling Kasparov's chess match
against Deep Blue nearly two decades earlier. Meanwhile, Uber launched
a pilot program for self-driving cars in Pittsburgh, catering to a
limited group of users.
•Year 2018: IBM's "Project Debater" debated complex topics with
two master debaters and performed extremely well.
•The same year, Google demonstrated Duplex, an AI virtual
assistant that booked a hairdresser appointment over the phone,
without the person on the other end realizing she was talking
to a machine.
•Year 2021: OpenAI unveiled the Dall-E multimodal AI system, capable of
producing images based on textual prompts.
•Year 2022: In November, OpenAI launched ChatGPT, offering a chat-
oriented interface to its GPT-3.5 LLM.
