
UNIT I:

Introduction to AI - Intelligent Agents, Problem-Solving Agents,

Searching for Solutions - Breadth-first search, Depth-first search, Hill-climbing search,


Simulated annealing search, Local Search in Continuous Spaces.

Introduction:
 Artificial Intelligence is concerned with the design of intelligence in an
artificial device. The term was coined by John McCarthy in 1956.
 Intelligence is the ability to acquire, understand and apply the knowledge to
achieve goals in the world.
 AI is the study of the mental faculties through the use of computational models
 AI is the study of intellectual/mental processes as computational processes.
 An AI program demonstrates a level of intelligence that equals or exceeds the
intelligence a human needs to perform the same task.
 AI is unique, sharing borders with Mathematics, Computer
Science, Philosophy, Psychology, Biology, Cognitive Science
and many others.
 Although there is no clear definition of AI or even of intelligence, it can be
described as an attempt to build machines that, like humans, can think and
act, and that can learn and use knowledge to solve problems on their own.

Why Artificial Intelligence?


Following are some main reasons to learn about AI:
o With the help of AI, you can create software or devices that solve real-world
problems easily and accurately, in areas such as healthcare, marketing, and traffic
management.
o With the help of AI, you can create your own personal virtual assistant, such as
Cortana, Google Assistant, or Siri.
o With the help of AI, you can build robots that can work in environments where
human survival would be at risk.
o AI opens a path for other new technologies, new devices, and new opportunities.

History of AI
o Throughout history, people have been intrigued by the idea of making non-living
things smart. In ancient times, Greek stories mentioned gods creating clever
machines, and in Egypt, engineers made statues move. Thinkers like Aristotle and
Ramon Llull laid the groundwork for AI by describing how human thinking works
using symbols.
o In the late 1800s and early 1900s, modern computing started to take shape. Charles
Babbage and Ada Lovelace designed machines that could be programmed in the
1830s. In the 1940s, John Von Neumann came up with the idea of storing computer
programs. At the same time, Warren McCulloch and Walter Pitts started building the
basics of neural networks.
o The 1950s brought us modern computers, letting scientists dig into machine
intelligence. Alan Turing's Turing test became a big deal in computer smarts. The
term "artificial intelligence" was first used in a 1956 Dartmouth College
meeting, where they introduced the first AI program, the Logic Theorist.
o The following years had good times and bad times for AI, called "AI Winters." In the
1970s and 1980s, we hit limits with computer power and complexity. But in the late
1990s, things got exciting again. Computers were faster, and there was more
data. IBM's Deep Blue beating chess champion Garry Kasparov in 1997 was a big
moment.
o The 2000s started a new era with machine learning, language processing, and
computer vision. This led to cool new products and services. The 2010s saw AI take
off with things like voice assistants and self-driving cars. Generative AI, which makes
creative stuff, also started getting big.
o In the 2020s, generative AI systems such as OpenAI's ChatGPT and Google's Bard
grabbed everyone's attention. Given a prompt, these models can create all sorts of
new things, like essays or art. But remember, this technology is still new, and there
are things to fix, like making sure it doesn't make things up.

Goals of Artificial Intelligence


Following are the main goals of Artificial Intelligence:
1. Replicate human intelligence
2. Solve Knowledge-intensive tasks
3. An intelligent connection of perception and action
4. Building a machine that can perform tasks requiring human intelligence, such as:
o Proving a theorem
o Playing chess
o Planning a surgical operation
o Driving a car in traffic

5. Creating systems that can exhibit intelligent behavior, learn new things by
themselves, demonstrate, explain, and advise their users.

Types of Artificial Intelligence


Artificial Intelligence can be categorized in several ways, primarily based on two main
criteria: capabilities and functionality.
AI Type 1: Based on Capabilities
1. Weak AI or Narrow AI: Narrow AI, also known as Weak AI, is like a specialist in
the world of Artificial Intelligence. Imagine it as a virtual expert dedicated to
performing one specific task with intelligence. For example, think of Apple's Siri. It's
pretty smart when it comes to voice commands and answering questions, but it doesn't
understand or do much beyond that. Narrow AI operates within strict limits, and if
you ask it to step outside its comfort zone, it might not perform as expected. This type
of AI is everywhere in today's world, from self-driving cars to image recognition on
your smartphone. IBM's Watson is another example of Narrow AI. It's a supercomputer
that combines Expert Systems, Machine Learning, and Natural Language Processing,
but it's still a specialist. It's excellent at crunching data and providing insights but
doesn't venture far beyond its defined tasks.
2. General AI: General AI, often referred to as Strong AI, is like the holy grail of
artificial intelligence. Picture it as a system that could do any intellectual task with the
efficiency of a human. General AI aims to create machines that think and learn like
humans, but here's the catch: there's no such system in existence yet. Researchers
worldwide are working diligently to make it a reality, but it's a complex journey that
will require significant time and effort.
3. Super AI: Super AI takes AI to another level entirely. It's the pinnacle of machine
intelligence, where machines surpass human capabilities in every cognitive aspect.
These machines can think, reason, solve puzzles, make judgments, plan, learn, and
communicate independently. However, it's important to note that Super AI is
currently a hypothetical concept. Achieving such a level of artificial intelligence
would be nothing short of revolutionary, and it's a challenge that's still on the horizon.
AI Type 2: Based on Functionality
1. Reactive Machines: Reactive Machines represent the most basic form of Artificial
Intelligence. These machines live in the present moment and don't have memories or
past experiences to guide their actions. They focus solely on the current scenario and
respond with the best possible action based on their programming. An example of a
reactive machine is IBM's Deep Blue, the chess-playing computer, and Google's
AlphaGo, which excels at the ancient game of Go.
2. Limited Memory: Limited Memory machines can remember some past experiences
or data but only for a short period. They use this stored information to make decisions
and navigate situations. A great example of this type of AI is seen in self-driving cars.
These vehicles store recent data like the speed of nearby cars, distances, and speed
limits to safely navigate the road.
3. Theory of Mind: Theory of Mind AI is still in the realm of research and
development. These AI systems aim to understand human emotions and beliefs and
engage in social interactions much like humans. While this type of AI hasn't fully
materialized yet, researchers are making significant strides toward creating machines
that can understand and interact with humans on a deeper, more emotional level.
4. Self-Awareness: Self-Awareness AI is the future frontier of Artificial Intelligence.
These machines will be extraordinarily intelligent, possessing their own
consciousness, emotions, and self-awareness. They'll be smarter than the human mind
itself. However, it's crucial to note that Self-Awareness AI remains a hypothetical
concept and does not yet exist in reality. Achieving this level of AI would be a
monumental leap in technology and understanding.

Advantages of Artificial Intelligence


Following are some main advantages of Artificial Intelligence:
o High Accuracy with fewer errors: AI systems make fewer errors and achieve high
accuracy because they take decisions based on prior experience and information.
o High-Speed: AI systems can be very fast at decision-making, which is why an AI
system can beat a chess champion at chess.
o High reliability: AI machines are highly reliable and can perform the same action
multiple times with high accuracy.
o Useful for risky areas: AI machines can be helpful in situations such as defusing a
bomb or exploring the ocean floor, where employing a human would be risky.
o Digital Assistant: AI can provide digital assistants to users; for example, AI
technology is currently used by various e-commerce websites to show products
matching customer requirements.
o Useful as a public utility: AI can be very useful for public utilities such as a self-
driving car which can make our journey safer and hassle-free, facial recognition for
security purpose, Natural language processing to communicate with the human in
human-language, etc.
o Enhanced Security: AI can be very helpful in enhancing security, as it can detect
and respond to cyber threats in real time, helping companies protect their data and
systems.
o Aid in Research: AI is very helpful in the research field as it assists researchers by
processing and analyzing large datasets, accelerating discoveries in fields such as
astronomy, genomics, and materials science.

Disadvantages of Artificial Intelligence


Every technology has some disadvantages, and the same goes for Artificial Intelligence.
However advantageous the technology is, it still has some disadvantages that we need to
keep in mind while creating an AI system. Following are the disadvantages of AI:
o High Cost: The hardware and software requirements of AI are very costly, and AI
systems require a lot of maintenance to meet current-world requirements.
o Can't think out of the box: Even though we are making smarter machines with AI,
they still cannot work outside their training: a robot will only do the work for which
it is trained or programmed.
o No feelings and emotions: An AI machine can be an outstanding performer, but it
has no feelings, so it cannot form any kind of emotional attachment with humans,
and it may sometimes be harmful to users if proper care is not taken.
o Increased dependency on machines: As technology advances, people are becoming
more dependent on devices and hence are losing some of their own mental capabilities.
o No Original Creativity: Humans are creative and can imagine new ideas, but AI
machines cannot match this power of human intelligence; they cannot be truly
creative and imaginative.

o Complexity: Building and maintaining AI systems can be very complicated and
requires a lot of knowledge. This can make it hard for some groups or individuals to
use them.
o Job Concerns: As AI improves, it might take away not just basic jobs but also some
skilled ones. This raises worries about job losses in different fields.

Challenges of AI
Artificial Intelligence offers incredible advantages, but it also presents some challenges that
need to be addressed:
o Doing the Right Thing: AI should make sound choices, but sometimes it makes
mistakes or produces unfair outcomes. We need to design and train AI systems to
make better decisions.
o Government and AI: Governments sometimes use AI to monitor people, which can
threaten personal freedom. We need to make sure they use AI responsibly.
o Bias in AI: AI can be unfair, especially in tasks like recognizing people's faces.
This can cause problems, particularly for people outside the majority group.
o AI and Social Media: Much of what you see on social media is selected by AI, and
AI can sometimes promote content that is false or harmful. We need to make sure AI
surfaces accurate, appropriate content.
o Legal and Regulatory Challenges: The rapid evolution of AI has outpaced the
development of comprehensive laws and regulations, leading to uncertainty about
issues like liability and responsibility.
Intelligent Agent

An agent is a computer program or system that is designed to perceive its environment,


make decisions and take actions to achieve a specific goal or set of goals. The agent
operates autonomously, meaning it is not directly controlled by a human operator.
Agents can be classified into different types based on their characteristics, such as whether
they are reactive or proactive, whether they have a fixed or dynamic environment, and
whether they are single or multi-agent systems.
 Reactive agents are those that respond to immediate stimuli from their
environment and take actions based on those stimuli. Proactive agents, on the
other hand, take initiative and plan ahead to achieve their goals. The
environment in which an agent operates can also be fixed or dynamic. Fixed
environments have a static set of rules that do not change, while dynamic
environments are constantly changing and require agents to adapt to new
situations.
 Multi-agent systems involve multiple agents working together to achieve a
common goal. These agents may have to coordinate their actions and
communicate with each other to achieve their objectives. Agents are used in a
variety of applications, including robotics, gaming, and intelligent systems. They
can be implemented using different programming languages and techniques,
including machine learning and natural language processing.
Artificial intelligence is defined as the study of rational agents. A rational agent could be
anything that makes decisions, such as a person, firm, machine, or software. It carries out
an action with the best outcome after considering past and current percepts (the agent's
perceptual inputs at a given instance). An AI system is composed of an agent and its
environment. The agents act in their environment. The environment may contain other
agents.
An agent is anything that can be viewed as:
 Perceiving its environment through sensors and
 Acting upon that environment through actuators
Structure of an AI Agent

To understand the structure of Intelligent Agents, we should be familiar


with Architecture and Agent programs. Architecture is the machinery that the agent
executes on. It is a device with sensors and actuators, for example, a robotic car, a camera,
and a PC. An agent program is an implementation of an agent function. An agent
function is a map from the percept sequence (the history of all that an agent has perceived to
date) to an action.

Agent = Architecture + Agent Program
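As a sketch of the agent-function idea above (a map from percept sequence to action), here is a minimal table-driven agent program in Python; the class name, table entries, and percepts are illustrative assumptions, not part of the text:

```python
# A minimal, illustrative agent program: the agent function maps the
# percept sequence (full history so far) to an action via a lookup table.
class TableDrivenAgent:
    def __init__(self, table):
        self.table = table      # maps tuples of percepts to actions
        self.percepts = []      # percept sequence (history so far)

    def program(self, percept):
        self.percepts.append(percept)
        # Look up the action for the entire percept sequence to date
        return self.table.get(tuple(self.percepts), "NoOp")

# Usage with a toy (hypothetical) vacuum-world table:
agent = TableDrivenAgent({
    ("Dirty",): "Suck",
    ("Dirty", "Clean"): "Right",
})
print(agent.program("Dirty"))   # -> Suck
print(agent.program("Clean"))   # -> Right
```

Table-driven agents are impractical for real problems (the table grows with every possible percept sequence), which motivates the more compact agent types discussed later.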


There are many examples of agents in artificial intelligence. Here are a few:
 Intelligent personal assistants: These are agents that are designed to help users
with various tasks, such as scheduling appointments, sending messages, and
setting reminders. Examples of intelligent personal assistants include Siri,
Alexa, and Google Assistant.
 Autonomous robots: These are agents that are designed to operate
autonomously in the physical world. They can perform tasks such as cleaning,
sorting, and delivering goods. Examples of autonomous robots include the
Roomba vacuum cleaner and the Amazon delivery robot.
 Gaming agents: These are agents that are designed to play games, either against
human opponents or other agents. Examples of gaming agents include chess-
playing agents and poker-playing agents.
 Fraud detection agents: These are agents that are designed to detect fraudulent
behavior in financial transactions. They can analyze patterns of behavior to
identify suspicious activity and alert authorities. Examples of fraud detection
agents include those used by banks and credit card companies.
 Traffic management agents: These are agents that are designed to manage
traffic flow in cities. They can monitor traffic patterns, adjust traffic lights, and
reroute vehicles to minimize congestion. Examples of traffic management agents
include those used in smart cities around the world.
 A software agent has keystrokes, file contents, and received network packets
acting as sensors, and displays on the screen, files, and sent network packets
acting as actuators.
 A Human-agent has eyes, ears, and other organs which act as sensors, and
hands, legs, mouth, and other body parts act as actuators.
 A Robotic agent has Cameras and infrared range finders which act as sensors
and various motors act as actuators.
PEAS Representation
The PEAS system is a critical framework used to categorize these agents based on
their performance, environment, actuators, and sensors. Understanding the PEAS
system is essential for grasping how different AI agents function effectively in diverse
environments. Among these agents, Rational Agents are considered the most efficient,
consistently choosing the optimal path for maximum efficiency.
PEAS stands for Performance measure, Environment, Actuator, Sensor.
PEAS is a framework used to specify the structure of an intelligent agent in AI. It breaks
down the agent's interaction with the environment into four key components:
1. Performance Measure: The criteria that define the success of the agent’s
actions.
2. Environment: The surroundings or the context in which the agent operates.
3. Actuators: The mechanisms through which the agent interacts with the
environment.
4. Sensors: The tools the agent uses to perceive its environment.
By defining these elements, PEAS provides a clear outline for designing and evaluating
intelligent systems, ensuring they are equipped to perform their tasks effectively.
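As an illustration of the four components just listed, a PEAS description can be recorded as a simple data structure; the field names and the self-driving-car entries below are an assumed example for illustration, not a standard API:

```python
from dataclasses import dataclass

# Illustrative sketch: a PEAS description captured as plain data.
@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

# Example PEAS description for a self-driving car (entries assumed)
self_driving_car = PEAS(
    performance_measure=["safety", "travel time", "comfort"],
    environment=["roads", "traffic", "pedestrians", "weather"],
    actuators=["steering wheel", "accelerator", "brake", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer"],
)
```

Writing the description down this way makes each of the four components explicit before any agent logic is designed.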
P: Performance Measure
Performance measure is the unit used to define the success of an agent. Performance varies
across agents based on their different percepts.
Performance measure is a quantitative measure that evaluates the outcomes of an agent’s
actions against a predefined goal. The performance measure is crucial because it guides the
agent’s decision-making process, ensuring that it acts in a way that maximizes its success.
For example, in a self-driving car, the performance measure could include criteria such as
safety (avoiding accidents), efficiency (minimizing travel time), and comfort (ensuring a
smooth ride). The car’s AI will aim to optimize these factors through its actions.
E: Environment
Environment is the surrounding of an agent at every instant. It keeps changing with time if
the agent is set in motion.
There are 5 major types of environments:
1. Fully Observable & Partially Observable
2. Episodic & Sequential
3. Static & Dynamic
4. Discrete & Continuous
5. Deterministic & Stochastic
The environment includes all external factors and conditions that the agent must consider
when making decisions. The environment can vary significantly depending on the type of
agent and its task.
For instance, in the case of a smart thermostat, the environment includes the rooms in the
house, the outside weather conditions, the heating or cooling system, and the presence of
people, all of which the thermostat interacts with to maintain the desired temperature
efficiently.
Understanding the environment is critical for designing AI systems because it affects how
the agent perceives its surroundings and interacts with them.
Fully Observable vs Partially Observable
 When an agent's sensors can sense or access the complete state of the
environment at each point in time, the environment is said to be fully
observable; otherwise it is partially observable.
 A fully observable environment is easy to handle, as there is no need to keep
track of the history of the surroundings.
 An environment is called unobservable when the agent has no sensors at all.
 Examples:
o Chess – the board is fully observable, and so are the
opponent’s moves.
o Driving – the environment is partially observable because
what’s around the corner is not known.

Episodic vs Sequential
 In an Episodic task environment, each of the agent’s actions is divided into
atomic incidents or episodes. There is no dependency between current and
previous incidents. In each incident, an agent receives input from the
environment and then performs the corresponding action.
 Example: Consider a pick-and-place robot used to detect defective parts on a
conveyor belt. Each time, the robot (agent) makes its decision based on the
current part alone, i.e., there is no dependency between current and previous
decisions.
 In a Sequential environment, previous decisions can affect all future
decisions. The agent's next action depends on what action it has taken
previously and what action it is supposed to take in the future.
 Example:
o Checkers- Where the previous move can affect all the
following moves.
Dynamic vs Static
 An environment that keeps changing while the agent is carrying out some
action is said to be dynamic.
 A roller coaster ride is dynamic as it is set in motion and the environment keeps
changing every instant.
 An idle environment with no change in its state is called a static environment.
 An empty house is static as there’s no change in the surroundings when an agent
enters.
Discrete vs Continuous
 If an environment consists of a finite number of actions that can be deliberated
in the environment to obtain the output, it is said to be a discrete environment.
 The game of chess is discrete as it has only a finite number of moves. The
number of moves might vary with every game, but still, it’s finite.
 An environment in which the actions performed cannot be numbered, i.e., is
not discrete, is said to be continuous.
 Self-driving cars are an example of continuous environments, as their actions,
such as driving and parking, cannot be enumerated.
Deterministic vs Stochastic
 When the agent's current state and selected action completely determine the
next state of the environment, the environment is said to be deterministic.
 A stochastic environment is random in nature; the next state is not unique
and cannot be completely determined by the agent.
 Examples:
o Chess – there would be only a few possible moves for a chess
piece at the current state and these moves can be determined.
o Self-Driving Cars – the actions of a self-driving car are not
unique; they vary from time to time.

A: Actuators
An actuator is a part of the agent that delivers the output of action to the environment.
They are responsible for executing the actions decided by the agent based on its perceptions
and decisions. In essence, actuators are the “hands and feet” of the agent, enabling it to
carry out tasks.
The actuators for a smart thermostat include the heating system, cooling system, and fans,
which it controls to adjust the room temperature and maintain the desired comfort level.
The design and choice of actuators are crucial because they directly affect the agent’s
ability to perform its functions in the environment.
S: Sensors
Sensors are the receptive parts of an agent that take in input for the agent.
Sensors collect data from the environment, which is then processed by the agent to make
informed decisions. Sensors are the “eyes and ears” of the agent, providing it with the
necessary information to act intelligently.
The sensors for a smart thermostat include temperature sensors to measure the current room
temperature, humidity sensors to detect moisture levels, and motion sensors to determine if
people are present in the house.
The quality and variety of sensors used in an AI system greatly influence its ability to
perceive and understand its environment.
Importance of PEAS in AI
The PEAS framework is vital for the design and development of AI systems because it
provides a structured approach to defining the agent's interaction with its environment. By
clearly specifying the performance measure, environment, actuators, and sensors,
developers can create AI systems that are more effective and adaptable to their tasks.
Using PEAS helps in:
 Defining clear goals: The performance measure ensures that the agent’s actions
are aligned with the desired outcomes.
 Understanding the operational context: Analyzing the environment allows
developers to anticipate challenges and design solutions that are robust and
effective.
 Designing effective interactions: Selecting the right actuators and sensors
ensures that the agent can perceive and interact with its environment in a
meaningful way.
Exploring Different Types of AI Agents with PEAS Examples

Agent: Hospital Management System
    Performance Measure: Patient's health, Admission process, Payment
    Environment: Hospital, Doctors, Patients
    Actuators: Prescription, Diagnosis, Scan report
    Sensors: Symptoms, Patient's response

Agent: Automated Car Drive
    Performance Measure: Comfortable trip, Safety, Maximum distance
    Environment: Roads, Traffic, Vehicles
    Actuators: Steering wheel, Accelerator, Brake, Mirror
    Sensors: Camera, GPS, Odometer

Agent: Subject Tutoring
    Performance Measure: Maximize scores, Improvement in students
    Environment: Classroom, Desk, Chair, Board, Staff, Students
    Actuators: Smart displays, Corrections
    Sensors: Eyes, Ears, Notebooks

Agent: Part-picking robot
    Performance Measure: Percentage of parts in correct bins
    Environment: Conveyor belt with parts; bins
    Actuators: Jointed arms and hand
    Sensors: Camera, joint angle sensors

Agent: Satellite image analysis system
    Performance Measure: Correct image categorization
    Environment: Downlink from orbiting satellite
    Actuators: Display categorization of scene
    Sensors: Color pixel arrays

Advantages of PEAS in AI
1. Structured Design: Provides a clear framework for designing intelligent agents
by breaking down their components.
2. Versatility: Applicable to various AI systems, from simple bots to complex
autonomous agents.
3. Goal-Oriented: Ensures that agents are designed with specific, measurable
objectives, leading to better performance.
4. Systematic Development: Facilitates organized planning and development,
making the process more efficient.
Disadvantages of PEAS in AI
1. Complexity: Can be complex to implement in dynamic environments with many
variables.
2. Over-Simplification: Might oversimplify real-world scenarios, leading to gaps
in agent behavior.
3. Resource-Intensive: Requires significant resources to accurately define and
implement each PEAS component.
4. Limited Adaptability: May struggle to adapt to unexpected changes if not
designed with enough flexibility.

Types of Agents
Agents can be grouped into the following classes based on their degree of perceived
intelligence and capability:
 Simple Reflex Agents
 Model-Based Reflex Agents
 Goal-Based Agents
 Utility-Based Agents
 Learning Agent
 Multi-agent systems
 Hierarchical agents
Simple Reflex Agents
Simple reflex agents ignore the rest of the percept history and act only on the basis of
the current percept. Percept history is the history of all that an agent has perceived to date.
The agent function is based on the condition-action rule. A condition-action rule is a rule
that maps a state i.e., a condition to an action. If the condition is true, then the action is
taken, else not. This agent function only succeeds when the environment is fully
observable. For simple reflex agents operating in partially observable environments, infinite
loops are often unavoidable. It may be possible to escape from infinite loops if the agent
can randomize its actions.
Problems with Simple reflex agents are :
 Very limited intelligence.
 No knowledge of non-perceptual parts of the state.
 The condition-action rule tables are usually too big to generate and store.
 If there occurs any change in the environment, then the collection of rules needs
to be updated.
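The condition-action rule idea can be sketched for the classic two-square vacuum world; the locations "A" and "B" and the action names are illustrative assumptions:

```python
# Illustrative simple reflex agent for a two-square vacuum world.
# It acts only on the CURRENT percept, never the percept history.
def simple_reflex_vacuum(percept):
    location, status = percept
    if status == "Dirty":    # condition-action rule: dirty square -> Suck
        return "Suck"
    elif location == "A":    # clean at A -> move Right
        return "Right"
    else:                    # clean at B -> move Left
        return "Left"

simple_reflex_vacuum(("A", "Dirty"))   # -> "Suck"
```

Note that the agent has no memory: in a partially observable version of this world it could shuttle between clean squares forever, which is exactly the infinite-loop problem described above.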
Model-Based Reflex Agents
It works by finding a rule whose condition matches the current situation. A model-based
agent can handle partially observable environments by the use of a model about the
world. The agent has to keep track of the internal state which is adjusted by each percept
and that depends on the percept history. The current state is stored inside the agent which
maintains some kind of structure describing the part of the world which cannot be seen.
Updating the state requires information about:
 How the world evolves independently from the agent?
 How do the agent’s actions affect the world?
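As a sketch of the internal-state idea above, the model-based vacuum agent below remembers the status of the square it cannot currently see; the names and the assumption that "Suck" always succeeds are illustrative:

```python
# Illustrative model-based reflex agent for the two-square vacuum world.
# It keeps an internal model of the parts of the world it cannot see.
class ModelBasedVacuum:
    def __init__(self):
        self.model = {"A": "Unknown", "B": "Unknown"}  # internal state

    def program(self, percept):
        location, status = percept
        self.model[location] = status        # update model from percept
        if status == "Dirty":
            self.model[location] = "Clean"   # assume Suck succeeds
            return "Suck"
        other = "B" if location == "A" else "A"
        if self.model[other] != "Clean":     # unseen square may be dirty
            return "Right" if location == "A" else "Left"
        return "NoOp"                        # model says everything is clean
```

Unlike the simple reflex agent, this agent can stop once its model says both squares are clean, even though it never perceives both squares at once.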
Goal-Based Agents
These kinds of agents take decisions based on how far they are currently from
their goal(description of desirable situations). Their every action is intended to reduce their
distance from the goal. This allows the agent a way to choose among multiple possibilities,
selecting the one which reaches a goal state. The knowledge that supports its decisions is
represented explicitly and can be modified, which makes these agents more flexible. They
usually require search and planning. The goal-based agent’s behavior can easily be
changed.
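The "reduce distance to the goal" behavior above can be sketched for grid navigation; the action names and the Manhattan-distance measure are assumptions for illustration:

```python
# Illustrative goal-based agent step: choose the action whose predicted
# successor state is nearest the goal (grid navigation sketch).
def goal_based_step(position, goal):
    x, y = position
    successors = {"Up": (x, y + 1), "Down": (x, y - 1),
                  "Left": (x - 1, y), "Right": (x + 1, y)}

    def distance(p):
        # Manhattan distance to the goal: the agent's notion of "how far"
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # pick the action that most reduces the distance to the goal
    return min(successors, key=lambda a: distance(successors[a]))

goal_based_step((0, 0), (3, 0))   # -> "Right"
```

A full goal-based agent would replace this one-step greedy choice with search or planning over whole action sequences, as the text notes.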
Utility-Based Agents
Agents that are developed with their end uses as building blocks are called utility-based
agents. When there are multiple possible alternatives, utility-based agents are used to
decide which one is best. They choose actions based on a preference (utility) for
each state. Sometimes achieving the desired goal is not enough. We may look for a quicker,
safer, cheaper trip to reach a destination. Agent happiness should be taken into
consideration. Utility describes how “happy” the agent is. Because of the uncertainty in the
world, a utility agent chooses the action that maximizes the expected utility. A utility
function maps a state onto a real number which describes the associated degree of
happiness.
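The expected-utility choice described above can be sketched as follows; the route names, probabilities, and utility numbers are made up for illustration:

```python
# Illustrative utility-based choice under uncertainty: pick the action
# that maximizes EXPECTED utility over its possible outcomes.
def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one action
    return sum(p * u for p, u in outcomes)

routes = {
    "highway":  [(0.9, 10), (0.1, -50)],  # usually fast, small risk of a jam
    "backroad": [(1.0, 6)],               # slower, but a certain outcome
}
best = max(routes, key=lambda r: expected_utility(routes[r]))
# expected utility: highway = 4.0, backroad = 6.0 -> best = "backroad"
```

Here a goal-based agent would see both routes as "reaching the destination"; only the utility function lets the agent prefer the safer, higher-expected-utility option.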
Learning Agent
A learning agent in AI is an agent that can learn from its past experiences, i.e., it has
learning capabilities. It starts to act with basic knowledge and then is able to act and adapt
automatically through learning. A learning agent has mainly four conceptual components,
which are:
1. Learning element: It is responsible for making improvements by learning from
the environment.
2. Critic: The learning element takes feedback from critics which describes how
well the agent is doing with respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external action.
4. Problem Generator: This component is responsible for suggesting actions that
will lead to new and informative experiences.
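A minimal sketch of how the four components above might fit together; the percepts, action names, and reward scheme are hypothetical:

```python
# Illustrative wiring of a learning agent's four conceptual components.
class LearningAgent:
    def __init__(self):
        self.knowledge = {}                  # informs the performance element

    def performance_element(self, percept):
        # selects an external action from current knowledge
        return self.knowledge.get(percept, "explore")

    def critic(self, outcome):
        # compares the outcome against a fixed performance standard
        return 1 if outcome == "success" else -1

    def learning_element(self, percept, action, feedback):
        # improves future behavior using the critic's feedback
        if feedback > 0:
            self.knowledge[percept] = action

    def problem_generator(self):
        # suggests actions that lead to new, informative experiences
        return "try_new_action"

agent = LearningAgent()
agent.learning_element("obstacle ahead", "turn_left", agent.critic("success"))
agent.performance_element("obstacle ahead")   # -> "turn_left"
```

Real learning agents replace the lookup-table "knowledge" with learned models or value functions, but the division of labor among the four components is the same.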
Multi-Agent Systems
These agents interact with other agents to achieve a common goal. They may have to
coordinate their actions and communicate with each other to achieve their objective.
A multi-agent system (MAS) is a system composed of multiple interacting agents that are
designed to work together to achieve a common goal. These agents may be autonomous or
semi-autonomous and are capable of perceiving their environment, making decisions, and
taking action to achieve the common objective.
MAS can be used in a variety of applications, including transportation systems, robotics,
and social networks. They can help improve efficiency, reduce costs, and increase
flexibility in complex systems. MAS can be classified into different types based on their
characteristics, such as whether the agents have the same or different goals, whether the
agents are cooperative or competitive, and whether the agents are homogeneous or
heterogeneous.
 In a homogeneous MAS, all the agents have the same capabilities, goals, and
behaviors.
 In contrast, in a heterogeneous MAS, the agents have different capabilities,
goals, and behaviors.
This can make coordination more challenging but can also lead to more flexible and robust
systems.
Cooperative MAS involves agents working together to achieve a common goal, while
competitive MAS involves agents working against each other to achieve their own goals. In
some cases, MAS can also involve both cooperative and competitive behavior, where
agents must balance their own interests with the interests of the group.
MAS can be implemented using different techniques, such as game theory, machine
learning, and agent-based modeling. Game theory is used to analyze strategic interactions
between agents and predict their behavior. Machine learning is used to train agents to
improve their decision-making capabilities over time. Agent-based modeling is used to
simulate complex systems and study the interactions between agents.
Overall, multi-agent systems are a powerful tool in artificial intelligence that can help solve
complex problems and improve efficiency in a variety of applications.
Uses of Agents
Agents are used in a wide range of applications in artificial intelligence, including:
 Robotics: Agents can be used to control robots and automate tasks in
manufacturing, transportation, and other industries.
 Smart homes and buildings: Agents can be used to control heating, lighting,
and other systems in smart homes and buildings, optimizing energy use and
improving comfort.
 Transportation systems: Agents can be used to manage traffic flow, optimize
routes for autonomous vehicles, and improve logistics and supply chain
management.
 Healthcare: Agents can be used to monitor patients, provide personalized
treatment plans, and optimize healthcare resource allocation.
 Finance: Agents can be used for automated trading, fraud detection, and risk
management in the financial industry.
 Games: Agents can be used to create intelligent opponents in games and
simulations, providing a more challenging and realistic experience for players.
 Natural language processing: Agents can be used for language translation,
question answering, and chatbots that can communicate with users in natural
language.
 Cybersecurity: Agents can be used for intrusion detection, malware analysis,
and network security.
 Environmental monitoring: Agents can be used to monitor and manage natural
resources, track climate change, and improve environmental sustainability.
 Social media: Agents can be used to analyze social media data, identify trends
and patterns, and provide personalized recommendations to users.
Problem Solving Agents
Problem-solving agents are an essential part of artificial intelligence (AI), designed to tackle
complex challenges and achieve specific goals in dynamic environments. These agents work
by defining problems, formulating strategies, and executing solutions, making them
indispensable in areas like robotics, decision-making, and autonomous systems.
Types of Problems in AI
In AI, problems are classified based on their characteristics and how they affect the problem-
solving process. Understanding these types helps in designing effective problem-solving
agents.
Classification Criteria
AI problems can be categorized into three main types:
1. Ignorable Problems: Problems where certain steps can be disregarded without
impacting the outcome.
2. Recoverable Problems: Problems where mistakes can be corrected or undone.
3. Irrecoverable Problems: Problems where actions are permanent, requiring
careful planning.
Each type has unique implications for AI agent design and strategy.
1. Ignorable Problems
Definition: These are problems where certain solution steps can be skipped or ignored
without affecting the overall outcome.
Characteristics:
 Simpler problem structure.
 Lesser computational resources required.
Examples:
 Optimization tasks where some variables have negligible impact, like tuning
hyperparameters in a machine learning model.
2. Recoverable Problems
Definition: Problems where agents can undo or correct their actions, allowing flexibility in
the problem-solving process.
Characteristics:
 Reversible actions.
 Lower risk compared to irrecoverable problems.
Examples:
 Decision-making in game-playing AI, where moves can be adjusted based on
opponent actions.
3. Irrecoverable Problems
Definition: Problems where actions are irreversible, making careful planning critical.
Characteristics:
 High-risk problem-solving.
 Requires precise execution.
Examples:
 Autonomous vehicle navigation, where a wrong action (e.g., crossing a red
light) can lead to permanent consequences.
Steps in Problem Solving in Artificial Intelligence (AI)
Problem-solving in AI involves a systematic process where agents identify a challenge,
develop strategies, and execute solutions to achieve a goal. Below are the key steps:
1. Problem Identification
 What It Is: Recognizing and defining the problem within the environment.
 Example: An AI assistant identifying a user request, such as booking a flight
or finding a nearby restaurant.
2. Formulating the Problem
 What It Is: Structuring the problem in a way the AI agent can understand and
solve, often using state-space representation.
 Example: Representing a chessboard as a state space, where each move
changes the state.
3. Strategy Formulation
 What It Is: Developing a plan to navigate from the initial state to the goal
state. This includes selecting appropriate algorithms or heuristics.
 Example: Using A* search to find the shortest path in a navigation system.
4. Execution and Monitoring
 What It Is: Implementing the chosen strategy while continuously monitoring
its effectiveness.
 Example: A robot vacuum executing a cleaning path and adjusting if it
encounters obstacles.
5. Learning and Adaptation
 What It Is: Learning from experiences to improve future problem-solving
abilities. This often involves reinforcement learning or machine learning
techniques.
 Example: A self-driving car improving its route planning after encountering
unexpected traffic patterns.
Components of Problem Formulation in AI
1. Initial State
 What It Is: The starting point of the problem-solving process.
 Example: In a chess game, the initial state is the standard arrangement of
pieces on the board.
2. Actions
 What It Is: The set of possible actions an AI agent can take to move from one
state to another.
 Example: In a navigation system, actions could include moving left, right, up,
or down.
3. Transition Model
 What It Is: A description of how actions change the current state into a new
state.
 Example: In a game, moving a pawn forward changes the board configuration.
4. Goal Test
 What It Is: The criteria to determine whether the goal has been achieved.
 Example: In a puzzle-solving task, the goal test checks if the puzzle pieces are
arranged correctly.
5. Path Cost
 What It Is: The cumulative cost associated with a sequence of actions leading
to the goal.
 Example: In a delivery system, the path cost could be the total distance
traveled.
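A minimal sketch tying the five components together for a hypothetical one-dimensional grid world, where an agent at position 0 must reach position 3 (all names below are illustrative):

```python
# The five problem-formulation components for a toy 1-D grid world.
INITIAL_STATE = 0
GOAL_STATE = 3

def actions(state):
    """Possible actions in any state: step left or right."""
    return ["left", "right"]

def transition(state, action):
    """Transition model: how an action changes the current state."""
    return state - 1 if action == "left" else state + 1

def goal_test(state):
    """Goal test: has the target position been reached?"""
    return state == GOAL_STATE

def path_cost(plan):
    """Path cost: one unit per action taken."""
    return len(plan)

# Follow the transition model along a three-step plan.
state = INITIAL_STATE
plan = ["right", "right", "right"]
for a in plan:
    state = transition(state, a)
print(state, goal_test(state), path_cost(plan))  # 3 True 3
```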
Techniques for Problem Solving in AI
AI agents use a variety of techniques to solve problems efficiently. These techniques include
search algorithms, constraint satisfaction methods, optimization techniques, and machine
learning approaches. Each is suited to specific problem types.
1. Search Algorithms
a. Uninformed Search
These algorithms explore the problem space without prior knowledge about the goal’s
location.
 Examples:
 Breadth-First Search (BFS): Explores all nodes at one level before
moving to the next.
 Depth-First Search (DFS): Explores as far as possible along a
branch before backtracking.
b. Informed Search
These algorithms use heuristics to guide the search process, making them more efficient.
 Examples:
 A* Search: Combines path cost and heuristic estimates to find the
shortest path.
2. Constraint Satisfaction Problems (CSP)
 Definition: Problems where the solution must satisfy a set of constraints.
 Techniques:
 Backtracking: Systematically exploring possible solutions.
 Constraint Propagation: Narrowing down possibilities by applying
constraints.
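The backtracking technique can be sketched for a small, hypothetical map-coloring CSP, where adjacent regions must receive different colors (the region names and colors are invented for illustration):

```python
# Backtracking search for a tiny CSP: color regions A-B-C so that
# neighboring regions never share a color.
def backtrack(assignment, variables, domains, neighbors):
    if len(assignment) == len(variables):
        return assignment                       # all variables assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Constraint check: no already-assigned neighbor has this color.
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, neighbors)
            if result is not None:
                return result
            del assignment[var]                 # undo and try next value
    return None                                 # triggers backtracking upstream

variables = ["A", "B", "C"]
domains = {v: ["red", "green"] for v in variables}
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(backtrack({}, variables, domains, neighbors))
# {'A': 'red', 'B': 'green', 'C': 'red'}
```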
Searching for Solutions
 A search problem consists of:

o A State Space. Set of all possible states where you can be.
o A Start State. The state from where the search begins.
o A Goal Test. A function that looks at the current state and returns whether or
not it is the goal state.
 The Solution to a search problem is a sequence of actions, called the plan that
transforms the start state to the goal state.
 This plan is achieved through search algorithms.
Types of search algorithms:

Depth First Search or DFS for a Graph
In Depth First Search (or DFS) for a graph, we traverse all adjacent vertices one by one.
When we traverse an adjacent vertex, we completely finish the traversal of all vertices
reachable through that adjacent vertex. This is similar to a tree, where we first completely
traverse the left subtree and then move to the right subtree. The key difference is that,
unlike trees, graphs may contain cycles (a node may be visited more than once). To avoid
processing a node multiple times, we use a boolean visited array.
Example:
Note : There can be multiple DFS traversals of a graph according to the order in which we
pick adjacent vertices. Here we pick vertices as per the insertion order.
Input: adj = [[1, 2], [0, 2], [0, 1, 3, 4], [2], [2]]

Output: 1 0 2 3 4
Explanation: The source vertex s is 1. We visit it first, then we visit an adjacent vertex.
Start at 1: Mark as visited. Output: 1
Move to 0: Mark as visited. Output: 0 (backtrack to 1)
Move to 2: Mark as visited. Output: 2 (backtrack to 0)
Move to 3: Mark as visited. Output: 3 (backtrack to 2)
Move to 4: Mark as visited. Output: 4 (backtrack to 2)
Note that there can be more than one DFS traversal of a graph. For example, after 1, we
may pick adjacent 2 instead of 0 and get a different DFS. Here we pick in the insertion
order.
Input: [[2,3,1], [0], [0,4], [0], [2]]

Output: 0 2 4 3 1
Explanation: DFS Steps:
Start at 0: Mark as visited. Output: 0
Move to 2: Mark as visited. Output: 2
Move to 4: Mark as visited. Output: 4 (backtrack to 2, then backtrack to 0)
Move to 3: Mark as visited. Output: 3 (backtrack to 0)
Move to 1: Mark as visited. Output: 1
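The traversal described above can be sketched as a recursive function over the adjacency list (a hedged illustration; an explicit stack would work equally well):

```python
# Recursive DFS over an adjacency list. The visited array avoids
# reprocessing nodes in graphs that contain cycles.
def dfs(adj, start):
    visited = [False] * len(adj)
    order = []

    def explore(v):
        visited[v] = True
        order.append(v)
        for nbr in adj[v]:          # neighbors in insertion order
            if not visited[nbr]:
                explore(nbr)

    explore(start)
    return order

# First example above: source vertex 1.
adj = [[1, 2], [0, 2], [0, 1, 3, 4], [2], [2]]
print(dfs(adj, 1))  # [1, 0, 2, 3, 4]
```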

Breadth First Search or BFS for a Graph

Given an undirected graph represented by an adjacency list adj, where
each adj[i] represents the list of vertices connected to vertex i. Perform a Breadth First
Traversal (BFS) starting from vertex 0, visiting vertices from left to right according to the
adjacency list, and return a list containing the BFS traversal of the graph.
Examples:
Input: adj = [[2,3,1], [0], [0,4], [0], [2]]
Output: [0, 2, 3, 1, 4]
Explanation: Starting from 0, the BFS traversal will follow these steps:
Visit 0 → Output: 0
Visit 2 (first neighbor of 0) → Output: 0, 2
Visit 3 (next neighbor of 0) → Output: 0, 2, 3
Visit 1 (next neighbor of 0) → Output: 0, 2, 3, 1
Visit 4 (neighbor of 2) → Final Output: 0, 2, 3, 1, 4
Input: adj = [[1, 2], [0, 2], [0, 1, 3, 4], [2], [2]]

Output: [0, 1, 2, 3, 4]
Explanation: Starting from 0, the BFS traversal proceeds as follows:
Visit 0 → Output: 0
Visit 1 (the first neighbor of 0) → Output: 0, 1
Visit 2 (the next neighbor of 0) → Output: 0, 1, 2
Visit 3 (the first neighbor of 2 that hasn’t been visited yet) → Output: 0, 1, 2, 3
Visit 4 (the next neighbor of 2) → Final Output: 0, 1, 2, 3, 4
Input: adj = [[1], [0, 2, 3], [1], [1, 4], [3]]
Output: [0, 1, 2, 3, 4]
Explanation: Starting the BFS from vertex 0:
Visit vertex 0 → Output: [0]
Visit vertex 1 (first neighbor of 0) → Output: [0, 1]
Visit vertex 2 (first unvisited neighbor of 1) → Output: [0, 1, 2]
Visit vertex 3 (next neighbor of 1) → Output: [0, 1, 2, 3]
Visit vertex 4 (neighbor of 3) → Final Output: [0, 1, 2, 3, 4]
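A queue-based sketch reproducing the traversals above: a queue holds discovered-but-unexplored vertices, and each vertex is marked visited when first enqueued.

```python
from collections import deque

# Iterative BFS from a start vertex over an adjacency list.
def bfs(adj, start=0):
    visited = [False] * len(adj)
    order = []
    queue = deque([start])
    visited[start] = True
    while queue:
        v = queue.popleft()
        order.append(v)
        for nbr in adj[v]:          # left-to-right per adjacency list
            if not visited[nbr]:
                visited[nbr] = True
                queue.append(nbr)
    return order

print(bfs([[2, 3, 1], [0], [0, 4], [0], [2]]))  # [0, 2, 3, 1, 4]
```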
Uniform Cost Search (UCS)
Uniform Cost Search is a pathfinding algorithm that expands the least cost node first,
ensuring that the path to the goal node has the minimum cost.
Key Concepts of Uniform Cost Search
1. Priority Queue: UCS uses a priority queue to store nodes. The node with the
lowest cumulative cost is expanded first. This ensures that the search explores
the most promising paths first.
2. Path Cost: The cost associated with reaching a particular node from the start
node. UCS calculates the cumulative cost from the start node to the current node
and prioritizes nodes with lower costs.
3. Exploration: UCS explores nodes by expanding the least costly node first,
continuing this process until the goal node is reached. The path to the goal node
is guaranteed to be the least costly one.
4. Termination: The algorithm terminates when the goal node is expanded,
ensuring that the first time the goal node is reached, the path is the optimal one.
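The four concepts above can be sketched with a priority queue; the graph and edge costs below are invented for illustration:

```python
import heapq

# UCS on a weighted graph given as {node: [(neighbor, edge_cost), ...]}.
# The priority queue always expands the node with the lowest cumulative
# cost, so the first time the goal is popped, the cost is optimal.
def uniform_cost_search(graph, start, goal):
    frontier = [(0, start)]        # (cumulative cost, node)
    explored = set()
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost            # optimal path cost to the goal
        if node in explored:
            continue
        explored.add(node)
        for nbr, step in graph.get(node, []):
            if nbr not in explored:
                heapq.heappush(frontier, (cost + step, nbr))
    return None                    # goal unreachable

graph = {
    "A": [("B", 1), ("C", 5)],
    "B": [("C", 2)],
    "C": [],
}
print(uniform_cost_search(graph, "A", "C"))  # 3 (via A -> B -> C)
```

Note that the direct edge A → C (cost 5) is never returned: the cheaper route through B is expanded first.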
A* Search Algorithm
A* (pronounced "A-star") is a powerful graph traversal and pathfinding algorithm widely
used in artificial intelligence and computer science. It is mainly used to find the shortest path
between two nodes in a graph, given the estimated cost of getting from the current node to the
destination node.
A heuristic function, denoted h(n), estimates the cost of getting from any given node n to the
destination node.
The main idea of A* is to evaluate each node based on two parameters:
1. g(n): the actual cost to get from the initial node to node n, i.e., the sum of the
edge costs along the path taken to reach n.
2. h(n): the heuristic (estimated) cost from node n to the destination node. This
problem-specific heuristic function must be admissible, meaning it never
overestimates the actual cost of reaching the goal. The evaluation function of node n
is defined as f(n) = g(n) + h(n).
A* selects the nodes to be explored based on the lowest value of f(n), preferring the
nodes with the lowest estimated total cost to reach the goal. The A* algorithm works as follows:
1. Create an open list of found but not yet explored nodes.
2. Create a closed list to hold already explored nodes.
3. Add the starting node to the open list with an initial g-value of 0.
4. Repeat the following steps until the open list is empty or the target node is reached:
1. Find the node with the smallest f-value (i.e., the smallest g(n) + h(n)) in
the open list.
2. Move the selected node from the open list to the closed list.
3. Generate all valid successors of the selected node.
4. For each successor, calculate its g-value as the sum of the current node's g-
value and the cost of moving from the current node to the successor.
Update the successor's g-value when a better path is found.
5. If the successor is not in the open list, add it with the calculated g-value and
compute its h-value. If it is already in the open list, update its g-value if the
new path is better.
5. A* terminates when the target node is reached or when the open list empties,
indicating there is no path from the start node to the target node. The A* search
algorithm is widely used in various fields such as robotics, video games, network
routing, and design problems because it is efficient and can find optimal paths in
graphs or networks.
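A compact sketch of these steps, using a dictionary-based graph and a hypothetical admissible heuristic h (both invented for illustration; a fuller implementation would also maintain an explicit closed list):

```python
import heapq

# A* search: the priority queue is ordered by f(n) = g(n) + h(n).
def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}                          # cheapest g found so far
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for nbr, step in graph.get(node, []):
            g2 = g + step
            if g2 < best_g.get(nbr, float("inf")):   # better path found
                best_g[nbr] = g2
                heapq.heappush(open_list, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")                    # no path exists

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 5, "B": 1, "G": 0}   # admissible: never overestimates
path, cost = a_star(graph, h, "S", "G")
print(path, cost)  # ['S', 'B', 'G'] 5
```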

o Greedy Best-First Search:
This informed search algorithm chooses the nodes, taking into consideration the goal's
cost estimate. It favors those nodes that look like they are closest to the goal, but it
does not always guarantee an optimal solution. It is useful when a good heuristic is
found, meaning that we do not always have to solve the problem from scratch.

Hill Climbing
Hill climbing is a widely used optimization algorithm in Artificial Intelligence (AI) that
helps find the best possible solution to a given problem. As part of the local search
algorithms family, it is often applied to optimization problems where the goal is to
identify the optimal solution from a set of potential candidates.

Understanding Hill Climbing in AI
Hill Climbing is a heuristic search algorithm used primarily for mathematical optimization
problems in artificial intelligence (AI). It is a form of local search, which means it focuses
on finding the optimal solution by making incremental changes to an existing solution and
then evaluating whether the new solution is better than the current one. The process is
analogous to climbing a hill where you continually seek to improve your position until you
reach the top, or local maximum, from where no further improvement can be made.
Hill climbing is a fundamental concept in AI because of its simplicity, efficiency, and
effectiveness in certain scenarios, especially when dealing with optimization problems or
finding solutions in large search spaces.

How Does the Hill Climbing Algorithm Work?
In the Hill Climbing algorithm, the process begins with an initial solution, which is then
iteratively improved by making small, incremental changes. These changes are evaluated
by a heuristic function to determine the quality of the solution. The algorithm continues to
make these adjustments until it reaches a local maximum—a point where no further
improvement can be made with the current set of moves.
Basic Concepts of Hill Climbing Algorithms
Hill climbing follows these steps:
1. Initial State: Start with an arbitrary or random solution (initial state).
2. Neighboring States: Identify neighboring states of the current solution by
making small adjustments (mutations or tweaks).
3. Move to Neighbor: If one of the neighboring states offers a better solution
(according to some evaluation function), move to this new state.
4. Termination: Repeat this process until no neighboring state is better than the
current one. At this point, you’ve reached a local maximum or minimum
(depending on whether you’re maximizing or minimizing).
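The four steps above can be sketched for a simple one-dimensional objective. The function f and the integer state space are invented for illustration, and the neighbor choice takes the better of the two moves (a steepest-ascent flavor):

```python
# Hill climbing maximizing f(x) = -(x - 3)^2 over integer states.
def hill_climb(f, start, max_iters=1000):
    current = start
    for _ in range(max_iters):
        neighbors = [current - 1, current + 1]   # small tweaks
        best = max(neighbors, key=f)             # best neighboring state
        if f(best) <= f(current):                # no better neighbor:
            break                                # local maximum reached
        current = best
    return current

f = lambda x: -(x - 3) ** 2
print(hill_climb(f, start=-7))  # 3
```

Because f here has a single peak, the climb always ends at the global maximum x = 3; on a multi-peaked function the same loop could stop at a local maximum instead.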

Hill Climbing as a Heuristic Search in Mathematical Optimization
The Hill Climbing algorithm is often used for solving mathematical optimization problems in
AI. With a good heuristic function and a large set of inputs, Hill Climbing can find a
sufficiently good solution in a reasonable amount of time, although it may not always find
the global optimum.
In mathematical optimization, Hill Climbing is commonly applied to problems that
involve maximizing or minimizing a real function. For example, in the Traveling
Salesman Problem, the objective is to minimize the distance traveled by the salesman
while visiting multiple cities.

What is a Heuristic Function?
A heuristic function is a function that ranks the possible alternatives at any branching step
in a search algorithm based on available information. It helps the algorithm select the best
route among various possible paths, thus guiding the search towards a good solution
efficiently.
Features of the Hill Climbing Algorithm
1. Variant of Generating and Testing Algorithm: Hill Climbing is a specific
variant of the generating and testing algorithms. The process involves:
 Generating possible solutions: The algorithm creates potential
solutions within the search space.
 Testing solutions: Each generated solution is evaluated to determine
if it meets the desired criteria.
 Iteration: If a satisfactory solution is found, the algorithm
terminates; otherwise, it returns to the generation step.
This iterative feedback mechanism allows Hill Climbing to refine its search by
using information from previous evaluations to inform future moves in the
search space.
2. Greedy Approach: The Hill Climbing algorithm employs a greedy approach,
meaning that at each step, it moves in the direction that optimizes the objective
function. This strategy aims to find the optimal solution efficiently by making
the best immediate choice without considering the overall problem context.

Types of Hill Climbing in Artificial Intelligence

1. Simple Hill Climbing Algorithm
Simple Hill Climbing is a straightforward variant of hill climbing where the algorithm
evaluates each neighboring node one by one and selects the first node that offers an
improvement over the current one.
Algorithm for Simple Hill Climbing
1. Evaluate the initial state. If it is a goal state, return success.
2. Make the initial state the current state.
3. Loop until a solution is found or no operators can be applied:
 Select a new state that has not yet been applied to the current state.
 Evaluate the new state.
 If the new state is the goal, return success.
 If the new state improves upon the current state, make it the current
state and continue.
 If it doesn’t improve, continue searching neighboring states.
4. Exit the function if no better state is found.

2. Steepest-Ascent Hill Climbing
Steepest-Ascent Hill Climbing is an enhanced version of simple hill climbing. Instead of
moving to the first neighboring node that improves the state, it evaluates all neighbors and
moves to the one offering the highest improvement (steepest ascent).
Algorithm for Steepest-Ascent Hill Climbing
1. Evaluate the initial state. If it is a goal state, return success.
2. Make the initial state the current state.
3. Repeat until the solution is found or the current state remains unchanged:
 Select a new state that hasn’t been applied to the current state.
 Initialize a ‘best state’ variable and evaluate all neighboring states.
 If a better state is found, update the best state.
 If the best state is the goal, return success.
 If the best state improves upon the current state, make it the new
current state and repeat.
4. Exit the function if no better state is found.

3. Stochastic Hill Climbing
Stochastic Hill Climbing introduces randomness into the search process. Instead of
evaluating all neighbors or selecting the first improvement, it selects a random neighboring
node and decides whether to move based on its improvement over the current state.
Algorithm for Stochastic Hill Climbing:
1. Evaluate the initial state. If it is a goal state, return success.
2. Make the initial state the current state.
3. Repeat until a solution is found or the current state does not change:
 Apply the successor function to the current state and generate all
neighboring states.
 Choose a random neighboring state based on a probability function.
 If the chosen state is better than the current state, make it the new
current state.
 If the selected neighbor is the goal state, return success.
4. Exit the function if no better state is found.
State-Space Diagram in Hill Climbing: Key Concepts and Regions
In the Hill Climbing algorithm, the state-space diagram is a visual representation of all
possible states the search algorithm can reach, plotted against the values of the objective
function (the function we aim to maximize).
In the state-space diagram:
 X-axis: Represents the state space, which includes all the possible states or
configurations that the algorithm can reach.
 Y-axis: Represents the values of the objective function corresponding to each
state.
The optimal solution in the state-space diagram is represented by the state where
the objective function reaches its maximum value, also known as the global maximum.

Key Regions in the State-Space Diagram
1. Local Maximum: A local maximum is a peak state in the landscape that is better than
each of its neighboring states, but another state exists elsewhere in the landscape that is
higher than the local maximum.
Solution: Backtracking can be a solution to the local-maximum problem. Maintain a list of
promising paths so that the algorithm can backtrack in the search space and explore other
paths as well.
2. Plateau: A plateau is a flat area of the search space in which all neighbors of the
current state have the same value, so the algorithm cannot find a best direction to move.
A hill-climbing search might get lost in the plateau area.
Solution: Take bigger steps (or very small steps) while searching, or randomly select a
state far away from the current state, so that the algorithm may land in a non-plateau
region.

3. Ridges: A ridge is a special form of the local maximum. It is an area higher than its
surrounding areas, but it slopes in a direction that the available single moves cannot
follow, so it cannot be climbed one move at a time.
Solution: Using bidirectional search, or moving in several different directions at once,
can mitigate this problem.

4. Current State: The current state refers to the algorithm’s position in the state-space
diagram during its search for the optimal solution.
5. Shoulder: A shoulder is a plateau with an uphill edge, allowing the algorithm to move
toward better solutions if it continues searching beyond the plateau.
Simulated annealing search

A hill-climbing algorithm that never makes a move toward a lower value is guaranteed to be
incomplete, because it can get stuck on a local maximum. If the algorithm instead performs
a random walk, moving to an arbitrary successor, it may be complete but is not efficient.
Simulated Annealing is an algorithm that yields both efficiency and completeness.
In metallurgy, annealing is the process of heating a metal or glass to a high temperature
and then cooling it gradually, which allows the material to reach a low-energy crystalline
state. Simulated annealing uses the same idea: the algorithm picks a random move instead of
the best move. If the random move improves the state, it is accepted. Otherwise, the move
is accepted only with a probability less than 1, which occasionally lets the algorithm
move downhill and escape local optima.
The algorithm is characterized by three main steps: initialization, temperature initialization,
and iteration.
Initialization
The algorithm begins with an initial solution to the optimization problem. This solution can
be generated randomly or through heuristic methods. The quality of the initial solution can
significantly impact the performance of the algorithm.
Temperature Initialization
An initial temperature, denoted as T, is set at the start of the algorithm. This temperature
plays a crucial role in determining the probability of accepting worse solutions initially. As
the algorithm progresses, the temperature gradually decreases, influencing the exploration of
the solution space.
Iteration Phase
During the iteration phase, the algorithm continues until a stopping criterion is met, such as
reaching a maximum number of iterations or when the temperature drops below a certain
threshold. The first step in each iteration typically involves perturbing the current solution to
generate a neighboring solution. This perturbation can include minor adjustments to the
current solution.
The algorithm then calculates the change in energy (∆Energy) between the current and
neighboring solutions. If ∆Energy is negative, indicating that the neighboring solution is
better, it is accepted. If not, the neighboring solution may still be accepted based on a
probability determined by the current temperature. This mechanism allows the algorithm to
escape local minima and explore the solution space more effectively.
Cooling Schedule
The cooling schedule is a critical component of the algorithm, controlling the rate at which
the temperature decreases. It can be linear, exponential, or follow other schemes. The choice
of cooling schedule affects the balance between exploration and exploitation, ultimately
influencing how quickly the algorithm converges to an optimal solution.
Simulated Annealing's ability to initially accept worse solutions with a certain probability
enables it to escape local optima, potentially leading to better solutions.
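The phases above can be sketched as follows. The objective function, starting point, temperatures, and exponential cooling rate are invented for illustration, and the sketch also tracks the best solution seen, a common practical addition:

```python
import math
import random

# Simulated annealing minimizing f(x) = (x - 3)^2 over the reals.
def simulated_annealing(f, x0, T=10.0, cooling=0.98, T_min=1e-3):
    current = best = x0                              # initialization
    while T > T_min:                                 # stopping criterion
        neighbor = current + random.uniform(-1, 1)   # perturb the solution
        delta = f(neighbor) - f(current)             # change in "energy"
        # Accept improvements always; accept worse moves with
        # probability exp(-delta / T), which shrinks as T cools.
        if delta < 0 or random.random() < math.exp(-delta / T):
            current = neighbor
        if f(current) < f(best):
            best = current                           # remember best seen
        T *= cooling                                 # exponential cooling
    return best

random.seed(0)
f = lambda x: (x - 3) ** 2
print(simulated_annealing(f, x0=-20.0))
```

With this cooling schedule, early iterations accept almost any move (exploration), while late, low-temperature iterations accept almost only improvements (exploitation).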

Local Search in Continuous Spaces.

In AI, local search in continuous spaces refers to iteratively optimizing a solution by
making small, local adjustments within a given space, aiming for a locally optimal
solution, rather than a global one. For example, imagine adjusting a set of
parameters to achieve a model's best performance, or finding the shortest path in a
real-world scenario.
Here's a more detailed explanation:
o What is Local Search?
 Local search is a class of optimization algorithms that starts with an initial
solution and iteratively explores its neighborhood to find a better one.
 The key principle is to move to a neighbor state that improves the current
solution, aiming for an optimal or near-optimal state within a local
region.
 Unlike global search algorithms that try to explore the entire solution
space, local search focuses on a limited portion.
o Local Search in Continuous Spaces:
 In continuous spaces, the solution can take any value within a given
range, making the search space much larger than in discrete spaces.
 Algorithms like gradient descent are used to find the local optimum by
iteratively moving in the direction of steepest descent or ascent.
 Example: In machine learning, local search is used in training neural
networks, where the algorithm adjusts the parameters to minimize the loss
function.

 Advantages:
 Can be computationally efficient, as it focuses on a smaller region
of the solution space.

 Easier to implement than global search algorithms.
 Disadvantages:
 Risk of getting stuck in local optima, where there is no better
neighboring solution.
 Not guaranteed to find the global optimum.
o Example: Gradient Descent
 Gradient descent is a widely used local search algorithm for optimizing
continuous functions.
 It starts with an initial value of the parameters and iteratively updates
them in the opposite direction of the gradient (or the direction of steepest
descent).
 The goal is to find the point where the gradient is zero or close to zero,
which corresponds to a local minimum.
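A small sketch of this update rule for the one-dimensional function f(x) = (x - 4)^2, whose gradient is f'(x) = 2(x - 4); the function, learning rate, and tolerance are chosen for illustration:

```python
# Gradient descent: repeatedly step opposite the gradient until the
# update becomes negligible (gradient ~ 0, i.e., a local minimum).
def gradient_descent(grad, x0, lr=0.1, tol=1e-8, max_iters=10_000):
    x = x0
    for _ in range(max_iters):
        step = lr * grad(x)        # lr is the learning rate (step size)
        if abs(step) < tol:        # gradient is ~0: stop at the minimum
            break
        x -= step                  # move in the direction of descent
    return x

x_min = gradient_descent(lambda x: 2 * (x - 4), x0=0.0)
print(round(x_min, 4))  # 4.0
```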
o Other Examples of Local Search Algorithms:
 Hill Climbing: A simple algorithm that starts with an initial solution and
iteratively moves to a neighboring solution with a better value.
 Simulated Annealing: Inspired by the process of annealing in
metallurgy, this algorithm explores the solution space with a probabilistic
approach, allowing for moves to worse solutions with a certain
probability.