
AI Lec 1 Introduction, Foundation, History and State of The Art

This document provides a summary of key concepts in the history and development of artificial intelligence (AI). It discusses early definitions and approaches to AI from the 1940s-1950s in fields like philosophy, mathematics, economics, and neuroscience. Major developments are outlined, such as the Dartmouth workshop in 1956 that defined the field of AI. The summary also briefly discusses important eras and subfields in AI's advancement, including knowledge-based systems, the AI industry boom, neural networks, and modern capabilities in areas like natural language processing, computer vision, and machine/deep learning.


Mirpur University of Science & Technology, Mirpur AJK

Department of Software Engineering

Lecture # 01

Artificial Intelligence

Engr Abdul Qadir Khan


Lecturer

[email protected]
1. Definitions of AI:
• Thinking Humanly:
• The effort to make machines think, that is, machines with minds in the full and
literal sense.
• The automation of activities associated with human thinking, such as
decision-making, problem-solving, and learning.
• Thinking Rationally:
• The study of mental faculties through the use of computational models.
• The study of the computations that make it possible to perceive, reason, and act.
• Acting Humanly:
• The art of creating machines that perform functions requiring intelligence when
performed by people.
• The study of how to make computers do things at which, at the moment, people
are better.
• Acting Rationally:
• The study of the design of intelligent agents.
• Concerned with intelligent behavior in artifacts.
2. Dimensions of AI:
• Definitions are laid out along two dimensions: thought processes vs. behavior and fidelity
to human performance vs. rationality.
• Rationality is defined as doing the "right thing" given what is known.
3. Approaches to AI:
• Four approaches to AI: Thinking Humanly, Thinking Rationally, Acting Humanly, and
Acting Rationally.
• Historically, different researchers have pursued each of these approaches with different methods.
4. Thinking Humanly (Cognitive Modeling):
• Employs introspection, psychological experiments, and brain imaging to understand
human thought processes.
• Cognitive science combines AI models with experimental techniques from psychology to
construct theories of the human mind.
5. Acting Rationally (Rational Agent Approach):
• An agent is something that acts, and a rational agent acts to achieve the best outcome or
the best expected outcome.
• Emphasizes more than correct inferences; includes autonomous operation, perception,
persistence, adaptation, and goal pursuit.
• The approach is more general and amenable to scientific development compared to
approaches based on human behavior or thought.
6. Achieving Rationality:
• Perfect rationality is the standard, but it's acknowledged that achieving it in complicated
environments is not feasible due to computational demands.
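The rational-agent idea of acting to achieve the best expected outcome can be sketched in a few lines. The actions, probabilities, and utilities below are invented for the example, not taken from the lecture:

```python
# A minimal sketch of rational choice under uncertainty: pick the action
# whose *expected* utility is highest. All numbers here are illustrative.

actions = {
    # action: list of (probability, utility) outcome pairs
    "take_umbrella":  [(0.3, 70), (0.7, 80)],   # rain vs. no rain
    "leave_umbrella": [(0.3, 0), (0.7, 100)],
}

def expected_utility(outcomes):
    """Utility of each outcome weighted by its probability."""
    return sum(p * u for p, u in outcomes)

def rational_choice(actions):
    """Return the action with maximum expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

print(rational_choice(actions))  # → take_umbrella (77 vs. 70 expected utility)
```

Note that the rational choice maximizes the *expected* outcome, not the best-case one: leaving the umbrella has the higher best-case utility but the lower expectation.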
Foundations of Artificial Intelligence: A Historical Overview
In tracing the foundations of Artificial Intelligence (AI), we explore key disciplines that
contributed concepts, viewpoints, and techniques. The journey begins with fundamental
questions across various domains:
Philosophy:
• Key Questions:
• Can formal rules draw valid conclusions?
• How does the mind emerge from the physical brain?
• Where does knowledge originate, and how does it guide actions?
• Historical Contributors:
• Aristotle formulated laws governing the rational mind.
• Descartes introduced dualism, distinguishing mind and matter.
• Empiricism emerged, emphasizing knowledge from sensory experience.
Mathematics:
• Key Questions:
• What are the formal rules for valid conclusions?
• What can be computed?
• How to reason with uncertain information?
• Contributions:
• Formal logic, initiated by George Boole, laid the groundwork.
• Computation theory explored what could be algorithmically computed.
• Probability theory, from Cardano to Bayes, handled uncertain information.
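Bayes' rule, the centerpiece of the probabilistic reasoning mentioned above, can be computed directly. The sensitivity, false-positive rate, and base rate below are hypothetical numbers chosen for illustration:

```python
# Illustrative sketch of Bayes' rule, the tool probability theory provides
# for reasoning with uncertain information.

def bayes(prior, likelihood, false_positive_rate):
    """P(H | E) = P(E | H) P(H) / P(E), with P(E) via total probability."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# A test that is 90% sensitive with a 5% false-positive rate, applied to
# a condition with a 1% base rate:
posterior = bayes(prior=0.01, likelihood=0.9, false_positive_rate=0.05)
print(round(posterior, 3))  # → 0.154
```

Despite the accurate-sounding test, the posterior stays low because the condition is rare; this interaction between prior and likelihood is exactly what Bayesian reasoning captures.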
Economics:
• Key Questions:
• How to make decisions for maximum payoff?
• How to decide when others may not align?
• How to optimize decisions for future payoffs?
• Influence:
• By treating decision-makers as agents maximizing expected utility, economics
contributed decision theory.
• Game theory extended decision theory to interactions among multiple agents.
Neuroscience:
• Key Question:
• How do brains process information?
• Milestones:
• Advancements from Broca's study of brain localization to modern imaging
techniques.
• Neuroscience strives to unravel how the brain enables thought.
Psychology and Behaviorism:
• Key Questions:
• How do humans and animals think and act?
• Development:
• Behaviorism, led by Watson, rejected the study of mental processes in favor of observable behavior.
• Cognitive science emerged, marrying psychology with computer modeling.
Computer Engineering:
• Key Question:
• How to build an efficient computer?
• Innovation:
• The invention of the modern digital electronic computer was pivotal for AI.
Control Theory and Cybernetics:
• Key Question:
• How can artifacts operate under their own control?
• Pioneers:
• Cybernetics, led by Wiener, explored self-regulating systems and feedback
control.
Linguistics and Computational Linguistics:
• Key Questions:
• How does language work, and how can it be understood by machines?
• Intersection:
• Linguistics and AI intersected, giving rise to computational linguistics.
• Early challenges in language understanding spurred developments in knowledge
representation.
Example:
• Consider the evolution of AI in healthcare:
• Philosophy: Ethical considerations in AI-assisted medical decision-making.
• Mathematics: Algorithms for medical image analysis and predictive modeling.
• Economics: Optimization of resource allocation in healthcare systems.
• Neuroscience: Brain-inspired AI models for understanding and treating
neurological disorders.
• Psychology and Behaviorism: AI-driven mental health applications and therapy
bots.
• Computer Engineering: Hardware advancements for efficient medical AI
processing.
• Control Theory and Cybernetics: AI systems adapting responses based on
patient feedback.
• Linguistics and Computational Linguistics: Natural language interfaces for
medical chatbots.
1. The Gestation of Artificial Intelligence (1943–1955):
• Warren McCulloch and Walter Pitts (1943): Proposed a model of artificial neurons,
laying the groundwork for neural networks.
• Donald Hebb (1949): Introduced Hebbian learning, an influential model for modifying
connection strengths between neurons.
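These two ideas can be sketched together: a McCulloch–Pitts unit fires when its weighted input sum reaches a threshold, and Hebb's rule strengthens connections between units that fire together. The weights, threshold, and learning rate below are illustrative choices, not historical values:

```python
# Sketch of a McCulloch-Pitts threshold neuron and a Hebbian weight update.

def mp_neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of inputs reaches the threshold."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

def hebbian_update(weights, inputs, output, rate=0.1):
    """Hebb's rule: strengthen weights where input and output are active together."""
    return [w + rate * x * output for w, x in zip(weights, inputs)]

# An AND gate as a threshold unit: fires only when both inputs are 1.
weights, threshold = [1.0, 1.0], 2.0
print([mp_neuron([a, b], weights, threshold) for a in (0, 1) for b in (0, 1)])
# → [0, 0, 0, 1]

# Repeated co-activation strengthens the active connections:
weights = hebbian_update(weights, [1, 1], mp_neuron([1, 1], weights, threshold))
print(weights)  # → [1.1, 1.1]
```

Choosing a different threshold (e.g. 1.0) turns the same unit into an OR gate, which is why McCulloch and Pitts could argue that networks of such units compute any logical function.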
2. The Birth of Artificial Intelligence (1956):
• Dartmouth Workshop (1956): John McCarthy, Marvin Minsky, Claude Shannon, and
others organized a workshop to explore the idea that every aspect of intelligence could be
precisely described, simulated, and improved through machines.
• Allen Newell and Herbert Simon: Developed the Logic Theorist, a program capable of
non-numerical thinking.
3. Knowledge-Based Systems: The Key to Power? (1969–1979):
• DENDRAL Program (1969): Ed Feigenbaum, Bruce Buchanan, and Joshua Lederberg
developed a knowledge-based system to infer molecular structures from mass
spectrometer data.
4. AI Becomes an Industry (1980–Present):
• Commercial Expert Systems: R1, the first successful commercial expert system,
began operation at Digital Equipment Corporation in 1982 and was soon saving
the company an estimated $40 million a year.
• Japanese Fifth Generation Project (1981): Japan initiated a 10-year plan to build
intelligent computers, leading to the formation of the Microelectronics and Computer
Technology Corporation (MCC) in the U.S.
• AI Industry Boom (1980–1988): AI became a billion-dollar industry, with numerous
companies working on expert systems, vision systems, and robotics.
5. The Return of Neural Networks (1986–Present):
• Back-Propagation Algorithm (Mid-1980s): Rediscovery and widespread use of the
back-propagation learning algorithm for neural networks.
• Connectionist Models: Considered as competitors to symbolic and logic-based AI
models.
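The idea behind back-propagation can be illustrated with a small sigmoid network trained on XOR by stochastic gradient descent, where the error signal is propagated backward layer by layer. This is a didactic sketch, not the historical implementation; the network size and hyperparameters are arbitrary:

```python
import math
import random

# Back-propagation sketch: a 2-3-1 sigmoid network learning XOR.
random.seed(0)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

H = 3                                                                # hidden units
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]   # [w0, w1, bias]
W2 = [random.uniform(-1, 1) for _ in range(H + 1)]                   # weights + bias
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    y = sig(sum(W2[i] * h[i] for i in range(H)) + W2[H])
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

err_before = mse()
lr = 0.5
for _ in range(20000):
    x, t = random.choice(data)
    h, y = forward(x)
    dy = (y - t) * y * (1 - y)               # output-layer error signal
    for i in range(H):
        dh = dy * W2[i] * h[i] * (1 - h[i])  # error propagated back to hidden unit i
        W2[i] -= lr * dy * h[i]
        W1[i][0] -= lr * dh * x[0]
        W1[i][1] -= lr * dh * x[1]
        W1[i][2] -= lr * dh
    W2[H] -= lr * dy

print(mse() < err_before)  # training reduced the squared error
```

The key step is computing `dh` from `dy`: the output error is pushed backward through the weights, which is what makes training multi-layer networks possible and what the rediscovery in the mid-1980s enabled at scale.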
6. The State of the Art:
• Robotic Vehicles: Autonomous vehicles such as STANLEY (winner of the 2005
DARPA Grand Challenge) and BOSS (winner of the 2007 DARPA Urban Challenge)
demonstrated advanced capabilities in navigating complex terrain and urban traffic.
• Speech Recognition: Automated systems, like those used by United Airlines, enabled
natural language interaction.
• Autonomous Planning and Scheduling: NASA's Remote Agent program demonstrated
autonomous planning for spacecraft operations.
7. Natural Language Processing (NLP):
• Language Understanding: AI systems, like chatbots and virtual assistants, have
improved natural language understanding, enabling more sophisticated interactions with
users.
• Language Translation: Advanced language translation models, such as those using
transformer architectures, have achieved remarkable accuracy.
8. Computer Vision:
• Object Recognition: Deep learning models have significantly improved object
recognition in images and videos, contributing to applications in autonomous vehicles,
security systems, and more.
• Facial Recognition: Facial recognition technology has become more prevalent in
security, authentication, and personalized user experiences.
9. Machine Learning and Deep Learning:
• Deep Learning Advances: Breakthroughs in deep learning, especially with
convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have led
to improved performance in various tasks, from image recognition to natural language
processing.
• Transfer Learning: Techniques like transfer learning have enabled models to leverage
knowledge gained from one task to perform well in related tasks, reducing the need for
extensive training data.
10. Reinforcement Learning:
• Game Playing: Reinforcement learning algorithms have demonstrated exceptional
performance in playing complex games, including board games like Go and video games.
11. AI in Healthcare:
• Medical Diagnosis: AI systems, particularly deep learning models, are making strides in
medical image analysis for diagnosing conditions such as cancer from radiological
images.
• Drug Discovery: AI is being used to accelerate drug discovery processes by predicting
potential drug candidates and their effects.
12. Autonomous Systems:
• Drones and Robotics: AI-powered drones and robots are increasingly used for tasks
such as surveillance, delivery, and exploration.
• Autonomous Vehicles: Ongoing advancements in self-driving car technology have the
potential to transform transportation.
