
Artificial Intelligence and Data Science Question Bank

1. Define AI and give its applications.

Ans: Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are
programmed to think and act like humans. It involves the development of algorithms and computer
programs that can perform tasks that typically require human intelligence such as visual perception,
speech recognition, decision-making, and language translation.

Advantages:
- Automation: AI streamlines tasks, boosting efficiency and productivity.
- Cost Reduction: AI optimizes resources, cutting operational expenses.
- Decision Making: AI empowers swift, data-driven decisions.
- 24/7 Availability: AI systems operate tirelessly, ensuring constant availability.

Application:
1) Healthcare:
Artificial Intelligence is widely used in healthcare and medicine. AI algorithms are used to build precise machines that can detect minor diseases inside the human body. AI also uses a person's medical history and current condition to predict future diseases. It is even used to find currently vacant beds in a city's hospitals, saving time for patients in emergency conditions.
2) Agriculture:
Artificial Intelligence is also becoming part of agriculture and farmers' lives. It is used to detect parameters such as the amount of water and moisture or the deficient nutrients in the soil. There are also AI-driven machines that detect where weeds are growing, where the soil is infertile, and so on.
3) Human Resource:
Online selection processes are conducted using the voice and camera permissions of the candidate's device. Here Artificial Intelligence is used to detect malpractice and other irregular behavior. In some cases it is also used to assess a candidate's personality. This reduces the effort of the hiring team and enhances the efficiency of the selection process.
4) Social media:
There are various uses of Artificial Intelligence in social media. Platforms such as Facebook and Instagram use AI to show relevant content to the user, based on the user's search and viewing history.
5) Chatbots:
A chatbot is a tool that responds to text given to it as input. The customer or user sends a query, and the chatbot returns the most appropriate output to provide the best solution for that input.
2. Compare model-based and utility-based agents.
Ans:
Definition: Model-based agents use an internal model of the environment to make decisions; utility-based agents make decisions by maximizing expected utility.
Knowledge requirement: Model-based agents require an accurate model of the environment; utility-based agents require a utility function and the probabilities of outcomes.
Complexity: Model-based agents can be complex due to the need for accurate models; utility-based agents are less complex, focusing on utility optimization.
Adaptability: Model-based agents may struggle with changes in the environment not captured by the model; utility-based agents adapt flexibly based on utility analysis.
Decision optimality: Model-based agents make optimal decisions only when the model is accurate; utility-based agents aim to maximize expected utility.
Common applications: Model-based agents suit robotics and planning systems where the environment is well known; utility-based agents suit economics and decision theory, where preferences are paramount.
Decision making: Model-based agents decide based on the goal and the available information; utility-based agents decide based on utility and general information.

3. Compare supervised and unsupervised learning. Give example.


Ans:
Example:
Supervised learning:
Suppose there is a basket filled with fresh fruits: apples, bananas, cherries, and grapes. The task is to arrange fruits of the same type together. If one already knows the shape of every fruit in the basket from previous work (or experience), it is easy to group them. Here, that previous work is called training data in data-mining terminology, and the model learns from the training data.
Unsupervised learning:
Again, suppose there is a basket filled with fresh fruits, and the task is to group fruits of the same type together. This time there is no information about the fruits beforehand; they are being seen for the first time. How can similar fruits be grouped without any prior knowledge? First, select some physical characteristic of the fruits, say color, and arrange the fruits by color:
- RED COLOR GROUP: apples and cherries.
- GREEN COLOR GROUP: bananas and grapes.
Now take another physical characteristic, say size, and the groups become:
- RED COLOR AND BIG SIZE: apples.
- RED COLOR AND SMALL SIZE: cherries.
- GREEN COLOR AND BIG SIZE: bananas.
- GREEN COLOR AND SMALL SIZE: grapes.
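The two scenarios can be sketched in Python. This is a toy illustration; the (color, size) feature tuples and the fruit labels are assumptions made for the example, following the grouping in the text above.

```python
from collections import defaultdict

# Supervised: "previous experience" supplies labeled training data,
# so a new fruit's features map straight to a known name.
training_data = {("red", "big"): "apple", ("red", "small"): "cherry",
                 ("green", "big"): "banana", ("green", "small"): "grape"}

def classify(features):
    return training_data[features]        # learned feature -> label mapping

# Unsupervised: no labels exist, so fruits can only be grouped
# by shared physical characteristics (here: color and size).
basket = [("red", "big"), ("green", "small"), ("red", "small"), ("green", "big")]
groups = defaultdict(list)
for color, size in basket:
    groups[(color, size)].append((color, size))

print(classify(("red", "small")))         # cherry
print(sorted(groups.keys()))
```

The supervised half needs the labeled mapping up front; the unsupervised half produces the same four groups from the features alone, but cannot name them.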

4. Explain with example various uninformed search techniques


Ans: Uninformed search algorithms, also known as blind search algorithms, are a class of search
algorithms used in artificial intelligence and computer science to traverse or explore a search space
without employing any domain-specific knowledge or heuristic information.
Depth First Search:
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. The algorithm starts at the root node (selecting some arbitrary node as the root in the case of a graph) and explores as far as possible along each branch before backtracking. It uses a last-in, first-out (LIFO) strategy and hence is implemented using a stack.
Algorithm:
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until STACK is empty
Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
Step 5: Push on the stack all the neighbors of N that are in the ready state (whose STATUS = 1) and
set their STATUS = 2 (waiting state)
[END OF LOOP]
Step 6: EXIT
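The steps above can be sketched in Python; this is a minimal illustration, and the adjacency-list graph is a hypothetical example.

```python
def dfs(graph, start):
    """Depth-first traversal with an explicit stack.
    STATUS 1 = ready, 2 = waiting (on stack), 3 = processed."""
    status = {node: 1 for node in graph}   # Step 1: all nodes ready
    stack = [start]
    status[start] = 2                      # Step 2: push start, mark waiting
    order = []
    while stack:                           # Step 3: loop until stack empty
        n = stack.pop()                    # Step 4: pop the top node
        status[n] = 3                      # ...and mark it processed
        order.append(n)
        for neighbor in graph[n]:          # Step 5: push ready neighbors
            if status[neighbor] == 1:
                stack.append(neighbor)
                status[neighbor] = 2
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dfs(graph, "A"))  # ['A', 'C', 'D', 'B']
```

Note that C is processed before B: the stack pops the most recently pushed neighbor first, which is exactly the LIFO behavior described above.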

Breadth First Search:


Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data structures. It starts at the tree root (or some arbitrary node of a graph) and explores all of the neighbor nodes at the present depth before moving on to the nodes at the next depth level. It uses a first-in, first-out (FIFO) strategy and is implemented using a queue.
Algorithm:
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until QUEUE is empty
Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).
Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and set
their STATUS = 2
(waiting state)
[END OF LOOP]
Step 6: EXIT
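A minimal Python sketch of the queue-based procedure above; the graph is a hypothetical example.

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal with a FIFO queue.
    STATUS 1 = ready, 2 = waiting (enqueued), 3 = processed."""
    status = {node: 1 for node in graph}   # Step 1: all nodes ready
    queue = deque([start])
    status[start] = 2                      # Step 2: enqueue start, mark waiting
    order = []
    while queue:                           # Step 3: loop until queue empty
        n = queue.popleft()                # Step 4: dequeue the front node
        status[n] = 3                      # ...and mark it processed
        order.append(n)
        for neighbor in graph[n]:          # Step 5: enqueue ready neighbors
            if status[neighbor] == 1:
                queue.append(neighbor)
                status[neighbor] = 2
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```

Unlike the DFS sketch, both of A's neighbors are processed before any node at the next depth level.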
Add Example from Qno: 19
Uniform Cost Search:
UCS is different from BFS and DFS because here the costs come into play. In other words, traversing
via different edges might not have the same cost. The goal is to find a path where the cumulative
sum of costs is the least.
Algorithm:
Step 1: Initialize all nodes in the graph to the ready state.
Step 2: Create an empty priority queue.
Step 3: Enqueue the starting node with its cost into the priority queue.
Step 4: Repeat until the priority queue is empty:
a. Dequeue the lowest-cost node and process it.
b. If it is the goal node, return success.
c. Otherwise, expand the node's successors:
- Calculate the cumulative cost to each successor.
- Enqueue the successors with their updated costs into the priority queue.
Step 5: If the priority queue empties before the goal node is found, return failure.
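The UCS steps can be sketched using Python's heapq module as the priority queue. The graph and edge costs are hypothetical.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """UCS: the frontier is a priority queue ordered by cumulative path cost.
    `graph` maps each node to a list of (neighbor, edge_cost) pairs."""
    frontier = [(0, start, [start])]       # (cost so far, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)  # lowest cumulative cost
        if node == goal:                   # goal test on dequeue keeps UCS optimal
            return cost, path
        if node in explored:
            continue                       # stale duplicate entry; skip it
        explored.add(node)
        for neighbor, step_cost in graph[node]:
            if neighbor not in explored:
                heapq.heappush(frontier,
                               (cost + step_cost, neighbor, path + [neighbor]))
    return None                            # goal unreachable

graph = {
    "A": [("B", 1), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}
print(uniform_cost_search(graph, "A", "D"))  # (3, ['A', 'B', 'C', 'D'])
```

Here the direct-looking A-B-D path costs 5, but UCS finds A-B-C-D at cost 3 because it always expands the cheapest cumulative-cost node first.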

5. What are PEAS Descriptors?


Ans: PEAS descriptors are a framework used in artificial intelligence to characterize tasks or problems. PEAS stands for Performance measure, Environment, Actuators, and Sensors.
Performance Measure: This is like a report card for the agent or system. It tells us how well the
agent is doing its job. For example, if the job is to clean a room, the performance measure could be
how clean the room is after the agent has finished.

Environment: This is the setting where the agent operates. It includes everything around the agent
that it can see, hear, touch, or interact with. For example, if the agent is a robot vacuum cleaner, the
environment includes the room it's cleaning, the furniture, and any obstacles in its path.

Actuators: These are the tools or body parts of the agent that allow it to do things in the
environment. They are like the agent's hands, legs, or tools. For example, the actuators for a robot
vacuum cleaner might be its wheels for movement and its vacuum suction for cleaning.

Sensors: These are like the agent's senses. They help the agent understand what's happening in its
environment. Sensors provide information to the agent about things like temperature, light, sound,
or obstacles in its path. For example, sensors on a robot vacuum cleaner might detect walls or
furniture to avoid bumping into them.

6. What are the basic building blocks of Learning agents


Ans:
The basic building blocks of learning agents in artificial intelligence include:
Perception (Sensors): Sensors detect the current state of the environment and produce a percept,
which is a description of what the agent senses.
Performance Element: This component is responsible for selecting actions to achieve the agent's
goals based on the current state and its internal knowledge. It is essentially the decision-making
component of the agent.
Learning Element: This component is responsible for improving the agent's performance over time
through learning from experience. It involves mechanisms for acquiring new knowledge, updating
existing knowledge, or modifying behavior based on feedback from the environment.
Critic: The critic component evaluates the agent's actions and provides feedback on their
effectiveness in achieving the agent's goals. It helps guide the learning process by providing
reinforcement signals or evaluations of performance.

Problem Generator: This component is responsible for generating new problems or situations for
the agent to learn from. It helps in exploration and discovery by presenting novel challenges to the
agent.
Actuators: The final step involves executing the chosen action, which could be carried out by motors, manipulators, or other devices capable of affecting the environment.

7. Compare problem solving and planning agents.


8. Explain in detail knowledge-based agents

9. Algorithm for BFS


The steps involved in the BFS algorithm to explore a graph are given as follows -
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until QUEUE is empty
Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).
Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and set
their STATUS = 2
(waiting state)
[END OF LOOP]
Step 6: EXIT
OR
Initialize Node Status: Set the status of each node in the graph to 1 (ready state), indicating that
they are ready to be explored.
Enqueue Starting Node: Enqueue the starting node (e.g., node A) into a queue and set its status
to 2 (waiting state), indicating that it is waiting to be processed.
Explore Nodes in Queue: Repeat the following steps until the queue is empty:
a. Dequeue a Node: Remove a node, let's call it N, from the front of the queue.
b. Process the Node: Perform any necessary operations or tasks related to the processing of
node N.
c. Update Node Status: Set the status of node N to 3 (processed state), indicating that it has
been explored.
d. Enqueue Neighbors: Enqueue all neighbors of node N that are in the ready state (status = 1)
into the queue. Set their status to 2 (waiting state), indicating that they are now waiting to be
processed.
Exit: Once the queue is empty, exit the algorithm.

10. Algorithm for DFS


Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until STACK is empty
Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
Step 5: Push on the stack all the neighbors of N that are in the ready state (whose STATUS = 1)
and set their STATUS = 2 (waiting state)
[END OF LOOP]
Step 6: EXIT
OR
Initialize Node Status: At the beginning of the exploration, set the status of each location (node)
in the map (graph) to "ready to be explored."
Push Starting Node: Start the exploration from a specific location, let's call it the origin (node A).
Push the origin onto a stack of locations to explore and mark it as "waiting to be explored."
Explore Nodes in Stack: Now, initiate the exploration loop, which repeats until all reachable
locations have been explored:
a. Take a Look at a Place: Pop a location from the top of the stack, let's name it the current
location (node N), and start exploring it.
b. Explore the Place: Conduct exploration activities at the current location, such as data
gathering or analysis.
c. Mark as Explored: After exploration, update the status of the current location to indicate it
has been explored.
d. Look for Nearby Places to Explore: While at the current location, identify neighboring
locations that have not yet been explored. Push these unexplored neighbors onto the stack and
mark them as "waiting to be explored."
Exit: Once the stack becomes empty (indicating all reachable locations have been explored),
terminate the exploration process.

11. Algorithm for Greedy Best-First Search and specify its properties
Best-first search algorithm:
Step 1: Place the starting node into the OPEN list.
Step 2: If the OPEN list is empty, stop and return failure.
Step 3: Remove the node n with the lowest value of h(n) from the OPEN list and place it in the CLOSED list.
Step 4: Expand node n and generate its successors.
Step 5: Check each successor of node n; if any successor is a goal node, return success and terminate the search, else proceed to Step 6.
Step 6: For each successor node, the algorithm evaluates the function f(n) = h(n) and then checks whether the node is already in the OPEN or CLOSED list. If it is in neither list, add it to the OPEN list.
Step 7: Return to Step 2.
Properties:
Heuristic-driven: Relies on heuristic values to guide the search.
Greedy strategy: Selects nodes based solely on the heuristic, prioritizing the lowest heuristic value.
Incomplete: May fail to find a solution if the heuristic is misleading.
Time complexity: Depends on heuristic quality; can be exponential in the worst case.
Memory usage: Depends on the size of the OPEN list; often implemented with a priority queue.
Admissibility and consistency: Optimality guarantees require an admissible and consistent heuristic.
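A minimal sketch of the algorithm above, assuming a hypothetical graph and heuristic table h; the OPEN list is a heap ordered only by h(n).

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: the OPEN list is a priority queue
    ordered only by the heuristic h(n); path costs g(n) are ignored."""
    open_list = [(h[start], start, [start])]   # Step 1: start node in OPEN
    closed = set()
    while open_list:                           # Step 2: fail if OPEN empties
        _, n, path = heapq.heappop(open_list)  # Step 3: lowest h(n)
        if n == goal:                          # Step 5: goal test
            return path
        if n in closed:
            continue                           # duplicate OPEN entry; skip
        closed.add(n)
        for successor in graph[n]:             # Step 4: expand n
            if successor not in closed:        # Step 6 (simplified: duplicates
                heapq.heappush(open_list,      #  in OPEN are tolerated here)
                               (h[successor], successor, path + [successor]))
    return None                                # failure

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 5, "A": 3, "B": 1, "G": 0}
print(greedy_best_first(graph, h, "S", "G"))  # ['S', 'B', 'G']
```

The search heads straight for B (h = 1) rather than A (h = 3), illustrating the greedy strategy: only the heuristic value matters, never the cost paid so far.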

12. Simple Reflex agent

13. Define AI, give its applications


Ans: Already Done
14. Explain various techniques for solving problems by searching
Ans: Pending
15. Define agent , give classification of agents
Ans:
Special Agent:
Definition: A special agent is designed to perform a specific task or set of tasks within a limited domain.
Scope: Special agents are specialized to handle a particular problem or operate within a specific
environment.
Example: An agent programmed to play chess is a special agent because it is designed specifically to play
chess and may not perform well in other tasks or domains.
General Agent:
Definition: A general agent is capable of performing a wide range of tasks across multiple domains.
Scope: General agents have a broader scope of functionality compared to special agents and can adapt
to various environments or tasks.
Example: A virtual assistant like Siri or Google Assistant is a general agent because it can perform tasks
such as answering questions, setting reminders, providing directions, and more, across different
domains.
Universal Agent:
Definition: A universal agent is theoretically capable of performing any task that a human can perform.
Scope: Universal agents have the most extensive scope of functionality and are designed to be highly
versatile and adaptable.
Example: While true universal agents do not exist yet, advanced AI systems like artificial general
intelligence (AGI) aspire to achieve this level of versatility and adaptability, capable of reasoning,
learning, and problem-solving across a wide range of domains.

16. Differentiate between BFS and DFS with example


17. Write the PEAS descriptor for
a. Satellite Image Processing
Performance: Accuracy, Timeliness, Speed, Reliability, Effectiveness, Consistency.
Environment: Orbital space, Earth's atmosphere, Remote sensing, Geospatial context, Atmospheric
conditions.
Actuators: Image processing algorithms, Machine learning models, Computational techniques, Data
analytics tools, Visualization methods.
Sensors: Optical sensors, Infrared sensors, Radar sensors, Thermal sensors, Spectral sensors,
Radiometers, Spectrometers, Lidar sensors, Global Positioning System (GPS) receivers, Magnetometers.

b. AI chatbot for primitive health


Performance: Provide basic health-related information and assistance to users, prioritizing user privacy
and confidentiality.
Environment: Online platform or mobile application where users interact with the chatbot.
Actuators: Text-based responses, providing information, suggestions, and advice.
Sensors: Text inputs from users, possibly integration with health data APIs for more personalized
responses.

c. Automated Self-Driving Car (IMP)


Performance Measure:
Safety: The automated system should be able to drive the car safely without crashing.
Optimum speed: The automated system should maintain an optimal speed depending on the surroundings.
Comfortable journey: The automated system should give a comfortable journey to the end user.
Environment:
Roads: The automated car should be able to drive on any kind of road, from city roads to highways.
Traffic conditions: Different types of roads present different sorts of traffic conditions.
Actuators:
Steering wheel: Used to direct the car in the desired direction.
Accelerator, gear: To increase or decrease the speed of the car.
Sensors: To take input from the environment while driving, e.g. cameras, a sonar system, etc.

d. Vacuum Cleaner
Performance: Efficiently remove dust and debris from floors, carpets, and surfaces while minimizing noise and energy consumption.
Environment: Indoor spaces such as homes, offices, and commercial buildings with various floor types and furniture.
Actuators: Vacuum suction, brush rotation, movement control.
Sensors: Dirt and debris sensors, floor type sensors, collision sensors, navigation sensors.
e. Refinery Plant
Performance: Optimization, Compliance, Cost-effectiveness, Product quality.
Environment: Industrial setting, Processing units, Chemical reactors, Storage tanks.
Actuators: Compressors, Conveyors, Reactor vessels, Distillation columns.
Sensors: pH, Turbidity, Density, Composition, Safety interlocks.

18. Why are informed search techniques called heuristic methods? Write the heuristic methods and the heuristic functions they use.

Informed search techniques are often referred to as heuristic methods because they employ
heuristic functions to guide the search process towards the most promising paths. Heuristics are
problem-solving strategies or techniques that use rules of thumb, intuition, or domain-specific
knowledge to make educated guesses or estimates about the best course of action.

Here are some common heuristic methods along with the heuristic functions they use:

Greedy Best-First Search:

Heuristic Method: Greedy Best-First Search selects the path that appears to be the most promising
based on a heuristic evaluation function.

Heuristic Function: Typically, the heuristic function used in Greedy Best-First Search estimates the
cost to reach the goal state from the current state. Examples include the straight-line distance
(Euclidean distance) to the goal in pathfinding problems or the estimated cost-to-go in optimization
problems.

A* Search:

Heuristic Method: A* Search combines the benefits of both uniform cost search and greedy best-
first search by using both the cost to reach a node and an estimate of the cost to the goal.

Heuristic Function: The heuristic function used in A* Search is the sum of the cost to reach the
current node (g-value) and the estimated cost to reach the goal from the current node (h-value).
This heuristic function is often denoted as f(n) = g(n) + h(n), where g(n) represents the cost to reach
node n from the start node, and h(n) represents the estimated cost to reach the goal from node n.
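A minimal A* sketch using f(n) = g(n) + h(n) as described above; the toy graph, edge costs, and heuristic values are hypothetical.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: the frontier is ordered by f(n) = g(n) + h(n).
    `graph` maps each node to a list of (neighbor, edge_cost) pairs."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                          # cheapest g found per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbor, step_cost in graph[node]:
            new_g = g + step_cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g         # found a cheaper route
                heapq.heappush(frontier,
                               (new_g + h[neighbor], new_g,
                                neighbor, path + [neighbor]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)],
         "B": [("G", 3)], "G": []}
h = {"S": 6, "A": 4, "B": 2, "G": 0}
print(a_star(graph, h, "S", "G"))  # (6, ['S', 'A', 'B', 'G'])
```

With an admissible heuristic (h never overestimates, as here), A* returns the optimal path: S-A-B-G at cost 6 beats both S-B-G and S-A-G, which cost 7.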

Hill Climbing:

Heuristic Method: Hill Climbing is a local search algorithm that iteratively moves towards the
neighboring state that maximizes or minimizes the heuristic evaluation function.

Heuristic Function: The heuristic function used in Hill Climbing evaluates the quality or "goodness" of
a state based on domain-specific criteria. For example, in optimization problems, the heuristic
function might represent the objective function to be maximized or minimized.
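A minimal hill-climbing sketch on a toy one-dimensional objective; the objective and neighbor function are hypothetical examples.

```python
def hill_climb(f, start, neighbors):
    """Simple hill climbing: repeatedly move to the best neighboring
    state while it improves the objective; stop at a local maximum."""
    current = start
    while True:
        best = max(neighbors(current), key=f)  # most promising neighbor
        if f(best) <= f(current):              # no improvement: local optimum
            return current
        current = best

# Toy objective to maximize: f(x) = -(x - 3)^2, peak at x = 3.
objective = lambda x: -(x - 3) ** 2
print(hill_climb(objective, 0, lambda x: [x - 1, x + 1]))  # 3
```

Starting from 0, the climber steps 1, 2, 3 and stops, since both neighbors of 3 score worse. On a multi-peaked objective the same loop can get stuck at a local maximum, which is the classic weakness of hill climbing.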
19. Construct BFS Traversal

20. Construct BFS


21. Construct DFS

22. What is Environment in AI , describe each environment with examples.

Ans: In the context of artificial intelligence, the environment refers to the external context or
surroundings in which an intelligent agent operates, interacts, and perceives. The environment is a
crucial concept in AI as it defines the domain in which an agent seeks to achieve its objectives. Different
types of environments exist in AI, each characterized by its properties and dynamics.

Fully Observable vs Partially Observable:

Fully Observable: The agent has access to complete information about the state of the environment at
any given time.
Example: Chess game, where the player can see the entire board.
Partially Observable: The agent has limited or incomplete information about the state of the
environment.
Example: Poker game, where players have limited information about each other's cards.

Deterministic vs Stochastic:
Deterministic: The outcome of actions is certain and predictable.
Example: Tic-Tac-Toe game, where the result of each move is determined by the rules of the game.
Stochastic: The outcome of actions is uncertain and subject to randomness.
Example: Backgammon game, where the roll of dice introduces randomness into the game.

Competitive vs Collaborative:
Competitive: Agents have conflicting objectives and may work against each other.
Example: Competitive sports like tennis, where players compete to win points against each other.
Collaborative: Agents have shared objectives and work together to achieve common goals.
Example: Team-based video games like Overwatch, where players collaborate to defeat the opposing
team.

Single-agent vs Multi-agent:
Single-agent: There is only one intelligent agent operating in the environment.
Example: Solitaire card game, where the player plays alone to achieve a specific outcome.
Multi-agent: There are multiple intelligent agents interacting with each other and the environment.
Example: Market trading, where multiple traders buy and sell stocks, influencing prices and each other's
strategies.
Static vs Dynamic:
Static: The environment does not change over time.
Example: Chessboard remains unchanged throughout the game.
Dynamic: The environment changes over time due to agent actions or external factors.
Example: Traffic navigation system, where road conditions dynamically change due to traffic flow and
accidents.

Discrete vs Continuous:
Discrete: The environment consists of a finite or countable set of states and actions.
Example: Gridworld environment, where the agent moves between discrete grid cells.
Continuous: The environment consists of an infinite set of states and actions.
Example: Autonomous driving in the real world, where the vehicle operates in a continuous space of
positions and velocities.

Episodic vs Sequential:
Episodic: Agent's actions have no impact on subsequent episodes.
Example: Playing a single game of Sudoku, where each game is independent of others.
Sequential: Agent's actions affect subsequent states and episodes.
Example: Learning to play a series of levels in a video game, where success in one level affects gameplay
in subsequent levels.

Known vs Unknown:
Known: The agent has complete knowledge of the environment's dynamics and rules.
Example: Traditional board games like Chess, where players know all the rules and possible moves.
Unknown: The agent has incomplete or uncertain knowledge about the environment.
Example: Exploring a new environment, where the agent learns about its dynamics and rules through
interaction.

23. Solve the cryptarithmetic puzzle: SEND + MORE = MONEY

24. Solve the cryptarithmetic puzzle: EAT + THAT = APPLE
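Such cryptarithmetic puzzles can be solved by brute force: assign distinct digits to the letters (with no leading zeros) until the arithmetic holds. A minimal Python sketch:

```python
from itertools import permutations

def solve_cryptarithm(words, result):
    """Find a digit assignment making sum(words) == result.
    Each letter gets a distinct digit; leading letters cannot be zero."""
    letters = sorted(set("".join(words) + result))
    if len(letters) > 10:
        return None                       # more letters than digits
    leading = {w[0] for w in words + [result]}
    for perm in permutations(range(10), len(letters)):
        assign = dict(zip(letters, perm))
        if any(assign[ch] == 0 for ch in leading):
            continue                      # reject leading zeros

        def value(word):
            n = 0
            for ch in word:
                n = n * 10 + assign[ch]
            return n

        if sum(value(w) for w in words) == value(result):
            return assign
    return None

solution = solve_cryptarithm(["SEND", "MORE"], "MONEY")
print(solution)  # {'D': 7, 'E': 5, 'M': 1, 'N': 6, 'O': 0, 'R': 8, 'S': 9, 'Y': 2}
```

For SEND + MORE = MONEY this yields 9567 + 1085 = 10652; the same function can be called with (["EAT", "THAT"], "APPLE") for the second puzzle.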
