Unit 1: Introduction to Artificial Intelligence
1. Definition and Scope of Artificial Intelligence
Definition:
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are
designed to think and act like humans. These machines can learn from experience, adjust to new
inputs, and perform tasks that typically require human intelligence such as problem-solving,
decision-making, understanding language, and recognizing patterns.
Scope of AI:
AI covers a broad range of areas, including:
Machine Learning (ML): Systems that learn from data and improve over time.
Natural Language Processing (NLP): Machines understanding and responding in
human language.
Robotics: AI used to control and guide physical robots.
Computer Vision: Enabling machines to interpret and understand visual information.
Expert Systems: Mimicking decision-making ability of a human expert.
Speech Recognition: Understanding spoken language.
Planning and Scheduling: Machines devising and ordering sequences of actions to accomplish complex tasks over time.
2. Historical Overview and Milestones in AI
Key Milestones:
1943: McCulloch and Pitts proposed the first mathematical model of an artificial neuron, laying the groundwork for neural networks.
1950: Alan Turing proposed the Turing Test to evaluate machine intelligence.
1956: The term "Artificial Intelligence" was coined by John McCarthy at the Dartmouth Conference.
1966–1974 (Early AI): Programs like ELIZA (a chatbot) showed early NLP capabilities.
1980s (Expert Systems): Growth of rule-based expert systems for tasks such as medical diagnosis, building on earlier systems like MYCIN.
1997: IBM’s Deep Blue defeated world chess champion Garry Kasparov.
2002: Introduction of AI-powered robotic vacuum cleaners (e.g., Roomba).
2011: IBM Watson won the quiz show "Jeopardy!" against human champions.
2012–Present (Deep Learning Era): Breakthroughs in image and speech recognition
with neural networks.
2016: Google DeepMind’s AlphaGo defeated Go world champion Lee Sedol.
Present day: Widespread use of AI in virtual assistants, self-driving cars, healthcare, and
more.
3. AI Applications in Various Fields
1. Healthcare:
o Disease diagnosis (e.g., cancer, COVID-19)
o Medical imaging analysis
o Personalized treatment plans
o Virtual health assistants
2. Education:
o Intelligent tutoring systems
o Automated grading
o Adaptive learning platforms
o Chatbots for student support
3. Finance:
o Fraud detection
o Algorithmic trading
o Credit scoring and risk analysis
o Chatbots for customer service
4. Agriculture:
o Crop monitoring using drones
o Predictive analytics for yield estimation
o Automated irrigation systems
o Disease detection in plants
5. Transportation:
o Autonomous (self-driving) vehicles
o Traffic management systems
o Route optimization using AI algorithms
6. Retail:
o Personalized recommendations
o Customer behavior analytics
o Inventory management
o Chatbots and virtual shopping assistants
7. Entertainment:
o Content recommendation (e.g., Netflix, Spotify)
o AI-generated art and music
o Game design with AI opponents
8. Security:
o Surveillance and facial recognition
o Cyber security threat detection
o Biometric authentication
1. What are Agents and Environments?
An AI agent is a program or system that interacts with its surroundings, gathers information, and uses that information to act autonomously in pursuit of goals set by humans.
Agent = Perception (input) + Action (output)
An agent is anything that perceives its environment through sensors and acts upon that environment through effectors (actuators).
A human agent has sensory organs such as eyes, ears, nose, tongue, and skin as its sensors, and other organs such as hands, legs, and the mouth as its effectors.
A robotic agent uses cameras and infrared range finders as sensors, and various motors and actuators as effectors.
A software agent takes inputs such as keystrokes, file contents, and received data as percepts, and acts by displaying output, writing files, or sending data.
Examples:
Human Agent: Eyes (sensors), hands/legs (actuators)
Robot Agent: Cameras/sensors, wheels/arms
Software Agent: Keyboard inputs or web APIs (sensors), displays or file writing
(actuators)
2. Structure of an Agent
An AI agent consists of:
Sensors – to gather information from the environment
Actuators – to interact with the environment
Agent Program – the algorithm that maps percepts to actions
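As a minimal, runnable sketch of this structure (in Python, using the classic two-square vacuum world; the class and function names are illustrative, not a standard API), the sensors, agent program, and actuators fit together in a perceive-decide-act loop:

class VacuumEnvironment:
    def __init__(self):
        self.location = "A"
        self.dirty = {"A": True, "B": True}

    def sense(self):                          # sensors: current location and its status
        return (self.location, "Dirty" if self.dirty[self.location] else "Clean")

    def execute(self, action):                # actuators: change the environment
        if action == "Suck":
            self.dirty[self.location] = False
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"

def vacuum_agent_program(percept):            # agent program: maps a percept to an action
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

env = VacuumEnvironment()
for _ in range(4):                            # perceive-decide-act loop
    percept = env.sense()
    action = vacuum_agent_program(percept)
    print(percept, "->", action)
    env.execute(action)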
3. Types of Agents
A. Simple Reflex Agents
Description:
Act only based on current percept.
No memory or history of past states.
Operate using condition-action rules (IF condition THEN action).
Advantages:
Fast and simple to design.
Useful for environments that are fully observable and predictable.
Limitations:
Can’t handle complex or partially observable environments.
No learning or adaptability.
Example:
A thermostat: IF temperature < 22°C THEN turn on heater.
Flow: [Percept] → [Condition-Action Rule] → [Action]
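A minimal sketch of the thermostat example above as a condition-action rule in Python (the 22°C threshold follows the example; the function and action names are illustrative only):

def thermostat_agent(percept):
    temperature = percept                     # the agent's only percept
    if temperature < 22:                      # IF temperature < 22 deg C
        return "turn_on_heater"               # THEN turn on heater
    return "do_nothing"

for temperature in [18, 21, 23, 25]:
    print(temperature, "->", thermostat_agent(temperature))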
B. Model-Based Reflex Agents
Description:
Maintain an internal model of the world to keep track of unobservable parts.
Use both current percept and internal state to decide actions.
Can deal with partially observable environments.
Internal Model: Describes how the world evolves and how the agent’s actions affect the
world.
Advantages:
More powerful than simple reflex agents.
Can work in partially observable environments.
Limitations:
Requires accurate models of the environment.
Complex to design and maintain.
Example:
A cleaning robot that remembers where it has already cleaned.
Flow: [Percept] → [Update Internal Model] → [Condition-Action Rules] → [Action]
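An illustrative sketch of the cleaning-robot example, where the internal model is simply the set of cells the robot already knows to be clean (the cell names and layout are assumptions made for illustration):

class CleaningAgent:
    def __init__(self, cells):
        self.cells = cells                    # known layout of the world (part of the model)
        self.cleaned = set()                  # internal state: cells already known to be clean

    def step(self, percept):
        position, is_dirty = percept          # current percept from the sensors
        if is_dirty:
            return "clean"                    # deal with the current cell first
        self.cleaned.add(position)            # update the internal model
        remaining = [c for c in self.cells if c not in self.cleaned]
        return ("move_to", remaining[0]) if remaining else "stop"

agent = CleaningAgent(cells=["kitchen", "hall", "bedroom"])
print(agent.step(("kitchen", True)))          # -> clean
print(agent.step(("kitchen", False)))         # -> ('move_to', 'hall')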
C. Goal-Based Agents
Description:
Make decisions by considering future goals.
Use search and planning algorithms to find action sequences that achieve goals.
Advantages:
More flexible and intelligent.
Can handle multiple possible goals.
Limitations:
Computationally expensive (search and planning can be slow).
May not be efficient in dynamic environments.
Example:
GPS navigation system: finds a path to the destination.
Flow: [Percept] → [Update Model] → [Goal + Search/Plan] → [Action]
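As an illustrative sketch (the road map and city names are made up), a goal-based agent can use a search algorithm such as breadth-first search to find a sequence of actions that reaches the goal, much like the GPS example above:

from collections import deque

roads = {                                     # hypothetical map: city -> neighbouring cities
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def plan_route(start, goal):
    # Breadth-first search: returns a sequence of cities leading from start to goal.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbour in roads[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None                               # no route found

print(plan_route("A", "E"))                   # -> ['A', 'B', 'D', 'E']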
D. Utility-Based Agents
Description:
Extend goal-based agents by introducing a utility function.
Choose between conflicting goals by selecting the most useful outcome.
Utility = a numerical measure of how desirable an outcome is to the agent (its "happiness" or satisfaction).
Advantages:
Allow for rational decisions in complex situations.
Can compare different outcomes quantitatively.
Limitations:
Designing a good utility function can be difficult.
Requires lots of computational resources.
Example:
An AI stock trading bot that maximizes profit while minimizing risk.
Flow: [Percept] → [Update Model] → [Goals + Utility Function] → [Best Action]
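A minimal sketch of the idea behind the trading-bot example, assuming a made-up set of candidate actions and a simple profit-minus-risk utility function (the numbers and weighting are purely illustrative):

def utility(expected_profit, risk, risk_aversion=0.5):
    # Higher expected profit raises utility; higher risk lowers it.
    return expected_profit - risk_aversion * risk

actions = {
    "buy_stock_x": {"expected_profit": 8.0, "risk": 10.0},
    "buy_bond_y":  {"expected_profit": 3.0, "risk": 1.0},
    "hold_cash":   {"expected_profit": 0.0, "risk": 0.0},
}

# The agent selects the action with the highest utility, not just any action that meets a goal.
best_action = max(actions, key=lambda a: utility(**actions[a]))
print(best_action)                            # -> buy_stock_x with these example numbers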
E. Learning Agents
Description:
Have the ability to learn from past experiences.
Improve performance over time by updating knowledge and behavior.
Consist of four main components:
1. Learning Element – Improves with experience.
2. Performance Element – Chooses external actions.
3. Critic – Gives feedback on performance.
4. Problem Generator – Suggests exploratory actions.
Advantages:
Can adapt to new environments and challenges.
Capable of self-improvement.
Limitations:
May need a lot of data and time to learn effectively.
Risk of learning incorrect behavior.
Example:
Spam email filter that improves as it processes more emails.
Flow: [Environment] → [Sensors] → [Learning Agent] → [Actuators]; inside the agent, the Critic gives feedback on performance, the Learning Element uses that feedback to improve the Performance Element, and the Problem Generator suggests exploratory actions.
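As a hedged sketch of the spam-filter example (the word-scoring scheme is a deliberately simplified illustration, not a real spam-filtering algorithm), the performance element classifies messages, the critic supplies the true label, and the learning element updates the agent's word scores from that feedback:

from collections import defaultdict

class SpamFilterAgent:
    def __init__(self):
        self.spam_score = defaultdict(int)    # knowledge learned so far

    def classify(self, message):              # performance element: choose an action (a label)
        score = sum(self.spam_score[w] for w in message.split())
        return "spam" if score > 0 else "ham"

    def learn(self, message, true_label):     # learning element, driven by the critic's feedback
        feedback = 1 if true_label == "spam" else -1
        for word in message.split():
            self.spam_score[word] += feedback

agent = SpamFilterAgent()
training = [("win free money now", "spam"),
            ("meeting at noon tomorrow", "ham"),
            ("free prize claim now", "spam")]
for message, label in training:               # performance improves with experience
    agent.learn(message, label)

print(agent.classify("free money prize"))         # -> spam
print(agent.classify("project meeting tomorrow")) # -> ham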
Uses of Agents:
Agents are used in a wide range of applications in artificial intelligence, including:
Robotics: Agents can be used to control robots and automate tasks in manufacturing,
transportation, and other industries.
Smart homes and buildings: Agents can be used to control heating, lighting, and other
systems in smart homes and buildings, optimizing energy use and improving comfort.
Transportation systems: Agents can be used to manage traffic flow, optimize routes for
autonomous vehicles, and improve logistics and supply chain management.
Healthcare: Agents can be used to monitor patients, provide personalized treatment
plans, and optimize healthcare resource allocation.
Finance: Agents can be used for automated trading, fraud detection, and risk management
in the financial industry.
Games: Agents can be used to create intelligent opponents in games and simulations,
providing a more challenging and realistic experience for players.