Aids QB1
Ans: Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are
programmed to think and act like humans. It involves the development of algorithms and computer
programs that can perform tasks that typically require human intelligence such as visual perception,
speech recognition, decision-making, and language translation.
Advantages:
Automation: AI streamlines tasks, boosting efficiency and productivity.
Cost Reduction: AI optimizes resources, cutting operational expenses.
Decision Making: AI empowers swift, data-driven decisions.
24/7 Availability: AI systems operate tirelessly, ensuring constant availability.
Application:
1) Healthcare:
Artificial Intelligence is widely used in healthcare and medicine. AI algorithms are used to build
precise machines that can detect minor diseases inside the human body. AI also uses a person's
medical history and current condition to predict future diseases, and it can locate vacant beds in a
city's hospitals, saving time for patients in emergency conditions.
2) Agriculture:
Artificial Intelligence is also becoming part of agriculture and farmers' lives. It is used to detect
various soil parameters, such as the amount of water and moisture and the amount of deficient
nutrients. There are also machines that use AI to detect where weeds are growing, where the soil is
infertile, and so on.
3) Human Resource:
Online selection processes are conducted using the voice and camera permissions of the candidate's
device. Here, Artificial Intelligence is used to detect malpractice and, in some cases, to assess a
candidate's personality. This reduces the effort of the hiring team and also enhances the efficiency
of the selection process.
4) Social media:
There are various uses of Artificial Intelligence in social media. Platforms such as Facebook and
Instagram use Artificial Intelligence to show relevant content to users, drawing on each user's
search and viewing history.
5) Chatbots:
A chatbot is a tool that responds to text given to it as input. The customer or user sends a query
according to their need, and the chatbot returns the most appropriate output to provide the best
solution for that input.
2. Compare model-based agents and utility-based agents.
Ans:
Parameter | Model-Based Agents | Utility-Based Agents
Definition | Utilize models of the environment to make decisions. | Make decisions by maximizing expected utility.
Knowledge Requirement | Require accurate models of the environment. | Require a utility function and probabilities of outcomes.
Complexity | Can be complex due to the need for accurate models. | Less complex, as they focus on utility optimization.
Adaptability | May struggle with changes in the environment not in the model. | Flexible in adapting to changes based on utility analysis.
Decision Optimality | Optimal decisions depend on accurate models. | Optimal decisions aim to maximize expected utility.
Common Applications | Robotics and planning systems, where the environment is well known. | Economics and decision theory, where preferences are paramount.
Decision Making | Makes decisions based on the goal and available information. | Makes decisions based on utility and general information.
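As a concrete illustration of the utility-based column above, here is a minimal Python sketch of a utility-based agent's action selection. The `outcomes` and `utility` functions are hypothetical stand-ins for a real environment model, not part of any standard library:

```python
def choose_action(actions, outcomes, utility):
    """Utility-based agent: pick the action with the highest expected utility.

    outcomes(a) yields (probability, resulting_state) pairs for action a,
    and utility(state) scores how desirable a state is.
    """
    def expected_utility(a):
        return sum(p * utility(s) for p, s in outcomes(a))
    return max(actions, key=expected_utility)

# Hypothetical example: a sure payoff of 10 vs a 50/50 gamble on 30 or 0.
def outcomes(a):
    return [(1.0, 10)] if a == "safe" else [(0.5, 30), (0.5, 0)]

print(choose_action(["safe", "risky"], outcomes, lambda s: s))  # "risky" (EU 15 > 10)
```

The agent compares expected utilities (10 vs 15) rather than just checking whether a goal is reached, which is exactly what distinguishes it from a goal-based or model-based agent.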
Environment: This is the setting where the agent operates. It includes everything around the agent
that it can see, hear, touch, or interact with. For example, if the agent is a robot vacuum cleaner, the
environment includes the room it's cleaning, the furniture, and any obstacles in its path.
Actuators: These are the tools or body parts of the agent that allow it to do things in the
environment. They are like the agent's hands, legs, or tools. For example, the actuators for a robot
vacuum cleaner might be its wheels for movement and its vacuum suction for cleaning.
Sensors: These are like the agent's senses. They help the agent understand what's happening in its
environment. Sensors provide information to the agent about things like temperature, light, sound,
or obstacles in its path. For example, sensors on a robot vacuum cleaner might detect walls or
furniture to avoid bumping into them.
Problem Generator: This component is responsible for generating new problems or situations for
the agent to learn from. It helps in exploration and discovery by presenting novel challenges to the
agent.
Actuators: The final step involves executing the chosen action through motors, manipulators, or
other devices capable of affecting the environment.
11. Algorithm for Greedy Best-First Search, and specify its properties
Greedy Best-First Search algorithm:
Step 1: Place the starting node into the OPEN list.
Step 2: If the OPEN list is empty, Stop and return failure.
Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it
in the CLOSED list.
Step 4: Expand the node n and generate its successors.
Step 5: Check each successor of node n to see whether it is a goal node. If any successor is a
goal node, return success and terminate the search; otherwise, proceed to Step 6.
Step 6: For each successor node, the algorithm evaluates f(n) (for Greedy Best-First Search,
f(n) = h(n)) and then checks whether the node is already in the OPEN or CLOSED list. If it is in
neither list, add it to the OPEN list.
Step 7: Return to Step 2.
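The steps above can be sketched in Python. This is a minimal illustration, assuming the graph is given as a hypothetical adjacency dict and h(n) as a precomputed table:

```python
import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Greedy best-first search following the steps above.

    graph: dict mapping a node to its list of successors.
    h: dict mapping a node to its heuristic value h(n).
    """
    open_list = [(h[start], start)]                  # Step 1: OPEN holds start
    closed, parent = set(), {start: None}
    while open_list:                                 # Step 2: fail when OPEN empty
        _, n = heapq.heappop(open_list)              # Step 3: lowest h(n) first
        closed.add(n)
        for succ in graph.get(n, []):                # Step 4: expand n
            if succ == goal:                         # Step 5: goal test
                path = [succ]
                while n is not None:                 # walk parent links back
                    path.append(n)
                    n = parent[n]
                return path[::-1]
            in_open = any(succ == s for _, s in open_list)
            if succ not in closed and not in_open:   # Step 6: add unseen nodes
                parent[succ] = n
                heapq.heappush(open_list, (h[succ], succ))
    return None                                      # Step 2: failure

# Hypothetical graph: C looks best from A (h=1) and leads straight to the goal G.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["G"], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 9, "G": 0}
print(greedy_best_first_search(graph, h, "A", "G"))  # ['A', 'C', 'G']
```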
Properties:
Heuristic-Driven: Relies on heuristic values to guide the search.
Greedy Strategy: Selects nodes based solely on the heuristic, prioritizing the lowest heuristic value.
Incomplete: May get stuck in loops or dead ends, so it is not guaranteed to find a solution.
Non-Optimal: Even with an admissible and consistent heuristic, it does not guarantee an optimal
solution (unlike A*).
Time Complexity: Depends on heuristic quality; can be exponential in the worst case.
Memory Usage: Depends on the OPEN list size; typically implemented with a priority queue.
d. Vacuum Cleaner
Performance: Efficiently remove dust and debris from floors, carpets, and surfaces while minimizing
noise and energy consumption.
Environment: Indoor spaces such as homes, offices, and commercial buildings with various floor types
and furniture.
Actuators: Vacuum suction, Brush rotation, Movement control.
Sensors: Dirt and debris sensors, Floor type sensors, Collision sensors, Navigation sensors.
e. Refinery Plant
Performance: Optimization, Compliance, Cost-effectiveness, Product quality.
Environment: Industrial setting, Processing units, Chemical reactors, Storage tanks.
Actuators: Compressors, Conveyors, Reactor vessels, Distillation columns.
Sensors: pH, Turbidity, Density, Composition, Safety interlocks.
18. Why are informed search techniques called heuristic methods? Write the heuristic methods
and the heuristic functions they use.
Informed search techniques are often referred to as heuristic methods because they employ
heuristic functions to guide the search process towards the most promising paths. Heuristics are
problem-solving strategies or techniques that use rules of thumb, intuition, or domain-specific
knowledge to make educated guesses or estimates about the best course of action.
Here are some common heuristic methods along with the heuristic functions they use:
Greedy Best-First Search:
Heuristic Method: Greedy Best-First Search selects the path that appears to be the most promising
based on a heuristic evaluation function.
Heuristic Function: Typically, the heuristic function used in Greedy Best-First Search estimates the
cost to reach the goal state from the current state. Examples include the straight-line distance
(Euclidean distance) to the goal in pathfinding problems or the estimated cost-to-go in optimization
problems.
A* Search:
Heuristic Method: A* Search combines the benefits of both uniform cost search and greedy best-
first search by using both the cost to reach a node and an estimate of the cost to the goal.
Heuristic Function: A* Search orders nodes by an evaluation function that is the sum of the cost to
reach the current node (g-value) and the estimated cost to reach the goal from the current node
(h-value). This evaluation function is denoted f(n) = g(n) + h(n), where g(n) represents the cost to
reach node n from the start node, and h(n) is the heuristic estimate of the cost to reach the goal
from node n.
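Under the same assumptions as before (a hypothetical weighted adjacency dict and heuristic table), ordering the frontier by f(n) = g(n) + h(n) can be sketched as:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: order the frontier by f(n) = g(n) + h(n).

    graph: dict mapping a node to a list of (neighbor, step_cost) pairs.
    h: dict mapping a node to its heuristic estimate h(n).
    """
    open_list = [(h[start], 0, start, [start])]      # (f, g, node, path)
    best_g = {start: 0}                              # cheapest g(n) found so far
    while open_list:
        f, g, n, path = heapq.heappop(open_list)
        if n == goal:
            return path, g
        for succ, cost in graph.get(n, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):  # found a cheaper route
                best_g[succ] = g2
                heapq.heappush(open_list, (g2 + h[succ], g2, succ, path + [succ]))
    return None, float("inf")                        # no path exists

# Hypothetical graph: the cheapest route A-B-C-G (cost 3) beats the direct edges.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("G", 5)], "C": [("G", 1)]}
h = {"A": 3, "B": 2, "C": 1, "G": 0}
print(a_star(graph, h, "A", "G"))  # (['A', 'B', 'C', 'G'], 3)
```

Unlike the greedy variant, A* keeps the accumulated cost g(n) in the ordering, which is what gives it its optimality guarantee when h is admissible.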
Hill Climbing:
Heuristic Method: Hill Climbing is a local search algorithm that iteratively moves towards the
neighboring state that maximizes or minimizes the heuristic evaluation function.
Heuristic Function: The heuristic function used in Hill Climbing evaluates the quality or "goodness" of
a state based on domain-specific criteria. For example, in optimization problems, the heuristic
function might represent the objective function to be maximized or minimized.
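A minimal sketch of the hill-climbing loop, assuming a hypothetical objective f to maximize and a neighbors function supplied by the problem:

```python
def hill_climb(f, neighbors, state):
    """Simple hill climbing: move to the best neighbor until none improves f."""
    while True:
        best = max(neighbors(state), key=f, default=state)
        if f(best) <= f(state):       # no neighbor improves: local maximum
            return state
        state = best

# Hypothetical example: maximize f(x) = -(x - 3)^2 over the integers.
print(hill_climb(lambda x: -(x - 3) ** 2, lambda x: [x - 1, x + 1], 0))  # 3
```

Because it only ever moves uphill from the current state, the loop can stop at a local maximum rather than the global one, which is the classic limitation of hill climbing.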
19. Construct BFS Traversal
Ans: In the context of artificial intelligence, the environment refers to the external context or
surroundings in which an intelligent agent operates, interacts, and perceives. The environment is a
crucial concept in AI as it defines the domain in which an agent seeks to achieve its objectives. Different
types of environments exist in AI, each characterized by its properties and dynamics.
Fully Observable vs Partially Observable:
Fully Observable: The agent has access to complete information about the state of the environment at
any given time.
Example: Chess game, where the player can see the entire board.
Partially Observable: The agent has limited or incomplete information about the state of the
environment.
Example: Poker game, where players have limited information about each other's cards.
Deterministic vs Stochastic:
Deterministic: The outcome of actions is certain and predictable.
Example: Tic-Tac-Toe game, where the result of each move is determined by the rules of the game.
Stochastic: The outcome of actions is uncertain and subject to randomness.
Example: Backgammon game, where the roll of dice introduces randomness into the game.
Competitive vs Collaborative:
Competitive: Agents have conflicting objectives and may work against each other.
Example: Competitive sports like tennis, where players compete to win points against each other.
Collaborative: Agents have shared objectives and work together to achieve common goals.
Example: Team-based video games like Overwatch, where players collaborate to defeat the opposing
team.
Single-agent vs Multi-agent:
Single-agent: There is only one intelligent agent operating in the environment.
Example: Solitaire card game, where the player plays alone to achieve a specific outcome.
Multi-agent: There are multiple intelligent agents interacting with each other and the environment.
Example: Market trading, where multiple traders buy and sell stocks, influencing prices and each other's
strategies.
Static vs Dynamic:
Static: The environment does not change over time.
Example: Chessboard remains unchanged throughout the game.
Dynamic: The environment changes over time due to agent actions or external factors.
Example: Traffic navigation system, where road conditions dynamically change due to traffic flow and
accidents.
Discrete vs Continuous:
Discrete: The environment consists of a finite or countable set of states and actions.
Example: Gridworld environment, where the agent moves between discrete grid cells.
Continuous: The environment consists of an infinite set of states and actions.
Example: Autonomous driving in the real world, where the vehicle operates in a continuous space of
positions and velocities.
Episodic vs Sequential:
Episodic: Agent's actions have no impact on subsequent episodes.
Example: Playing a single game of Sudoku, where each game is independent of others.
Sequential: Agent's actions affect subsequent states and episodes.
Example: Learning to play a series of levels in a video game, where success in one level affects gameplay
in subsequent levels.
Known vs Unknown:
Known: The agent has complete knowledge of the environment's dynamics and rules.
Example: Traditional board games like Chess, where players know all the rules and possible moves.
Unknown: The agent has incomplete or uncertain knowledge about the environment.
Example: Exploring a new environment, where the agent learns about its dynamics and rules through
interaction.