AI Unit II Notes

The document provides an overview of search strategies in AI: (1) uninformed strategies such as breadth-first search and depth-first search, where breadth-first search explores all nodes at the current depth before moving to the next; (2) heuristic (informed) strategies such as greedy best-first search and A* search, which use partial problem information to guide the search; and (3) local search algorithms, including hill climbing, simulated annealing, local beam search, and genetic algorithms.


SYLLABUS

UNIT – II
 Searching: Searching for solutions,
 Uninformed search strategies – breadth-first search, depth-limited search,
 Search with partial information (heuristic search):
 Greedy best-first search,
 A* search,
 Memory-bounded heuristic search,
 Local search algorithms – hill climbing, simulated annealing search, local beam
search, genetic algorithms.
Introduction to Problem Solving, General problem solving

 Problem solving is a process of generating solutions from observed data.
 A problem is characterized by:
− a set of goals,
− a set of objects, and
− a set of operations.
 These could be ill-defined and may evolve during problem solving.
Searching for Solutions:
 To build a system to solve a problem:
1. Define the problem precisely.
2. Analyze the problem.
3. Isolate and represent the task knowledge that is necessary to solve the problem.
4. Choose the best problem-solving technique and apply it to the particular problem.
Defining the problem as State Space Search:
 The state space representation forms the basis of most of the AI methods.
 Formulate a problem as a state space search by showing the legal problem states, the
legal operators, and the initial and goal states.
 A state is defined by the specification of the values of all attributes of interest in the
world.
 An operator changes one state into another; it has a precondition, which is the value
of certain attributes prior to the application of the operator, and a set of effects, which
are the attributes altered by the operator.
 The initial state is the state from which the search starts.
 The goal state is a partial description of the solution.
Formal Description of the problem:
 Define a state space that contains all the possible configurations of the relevant
objects.
 Specify one or more states within that space that describe possible situations from
which the problem-solving process may start (the initial state).
 Specify one or more states that would be acceptable as solutions to the problem (the
goal states).
 Specify a set of rules that describe the actions (operations) available.
There are many ways to represent nodes, but we will assume that a node is a data
structure with five components:
 STATE: the state in the state space to which the node corresponds;
 PARENT-NODE: the node in the search tree that generated this node;
 ACTION: the action that was applied to the parent to generate the node;
 PATH-COST: the cost, traditionally denoted by g(n), of the path from the initial state to
the node, as indicated by the parent pointers; and
 DEPTH: the number of steps along the path from the initial state.
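To make this concrete, here is a minimal Python sketch of such a node data structure; the class and helper names are illustrative, not from the notes.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """A search-tree node with the five components listed above."""
    state: Any                       # STATE: the state this node corresponds to
    parent: Optional["Node"] = None  # PARENT-NODE: the node that generated this one
    action: Any = None               # ACTION: the action applied to the parent
    path_cost: float = 0.0           # PATH-COST: g(n), cost from the initial state
    depth: int = 0                   # DEPTH: number of steps from the initial state

def child_node(parent: Node, action: Any, state: Any, step_cost: float) -> Node:
    """Build a child node, deriving PATH-COST and DEPTH from the parent."""
    return Node(state, parent, action,
                parent.path_cost + step_cost, parent.depth + 1)
```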
State-Space Problem Formulation:

Example: A problem is defined by four items:

1. Initial state, e.g., "at Arad"
2. Actions or successor function: S(x) = set of action–state pairs,
e.g., S(Arad) = {<Arad → Zerind, Zerind>, …}
3. Goal test (or set of goal states),
e.g., x = "at Bucharest", Checkmate(x)
4. Path cost (additive),
e.g., sum of distances, number of actions executed, etc.
c(x, a, y) is the step cost, assumed to be ≥ 0
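As an illustration, the four items can be packaged into a small Python class; the road distances shown (Arad–Zerind 75, Arad–Sibiu 140, Arad–Timisoara 118) follow the standard Romania map, and all identifiers are illustrative.

```python
class RouteProblem:
    """Four-item problem definition for the Romania route-finding example."""

    def __init__(self):
        self.initial_state = "Arad"                  # 1. initial state
        # Successor data: state -> list of (action, next_state, step_cost)
        self.roads = {"Arad": [("Go(Zerind)", "Zerind", 75),
                               ("Go(Sibiu)", "Sibiu", 140),
                               ("Go(Timisoara)", "Timisoara", 118)]}

    def successors(self, state):
        """2. S(x): the set of <action, state> pairs reachable from x."""
        return [(a, s) for a, s, _ in self.roads.get(state, [])]

    def goal_test(self, state):
        """3. Goal test."""
        return state == "Bucharest"

    def step_cost(self, x, a, y):
        """4. c(x, a, y) >= 0; the path cost is the sum of step costs."""
        return next(c for act, s, c in self.roads[x] if s == y)
```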
Properties of Search Algorithms /Measuring problem-solving performance

 We will evaluate an algorithm's performance in four ways:

COMPLETENESS: Is the algorithm guaranteed to find a solution when there is one?
OPTIMALITY: Does the strategy find the optimal (most desirable or satisfactory) solution?
TIME COMPLEXITY: How long does it take to find a solution?
SPACE COMPLEXITY: How much memory is needed to perform the search?
Difference between Informed and Uninformed Search in AI

 Utilizing knowledge: Informed search (heuristic or intelligent search) uses knowledge during the process of searching; uninformed search (blind, exhaustive, or brute-force search) does not require any knowledge during the process of searching.
 Speed: Informed search finds the solution more quickly; uninformed search is comparatively much slower.
 Cost incurred: Informed search incurs much lower expense; uninformed search is comparatively more expensive.
 Length of implementation: Informed search implementations are shorter; uninformed search implementations are lengthier.
 Complexity: Informed search has less time and space complexity; uninformed search has more.
 Examples: Informed – A* search, heuristic DFS, best-first graph search, greedy search. Uninformed – breadth-first search (BFS) and depth-first search (DFS).
Breadth-first search
 Consider a state space that takes the form of a tree. If we search for the goal
across each level (breadth) of the tree, starting from the root and continuing
down to the greatest depth, we call it breadth-first search.
 Breadth-first search can be implemented by calling TREE-SEARCH with an
empty fringe that is a first-in-first-out (FIFO) queue, ensuring that the
nodes that are visited first will be expanded first.
 The FIFO queue puts all newly generated successors at the end of the
queue, which means that shallow nodes are expanded before deeper
nodes.
BFS Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state.
2. Until a goal state is found or NODE-LIST is empty, do:
a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty,
quit.
b. For each way that each rule can match the state described in E, do:
i. Apply the rule to generate a new state.
ii. If the new state is a goal state, quit and return this state.
iii. Otherwise, add the new state to the end of NODE-LIST.
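A minimal runnable sketch of this algorithm in Python is shown below; it assumes hashable states and caller-supplied is_goal(state) and successors(state) helpers (both hypothetical), and adds a visited set so repeated states are not re-expanded.

```python
from collections import deque

def breadth_first_search(initial_state, is_goal, successors):
    """BFS with a FIFO queue: shallow nodes are expanded before deeper ones."""
    frontier = deque([initial_state])   # NODE-LIST, a first-in-first-out queue
    visited = {initial_state}           # avoid re-expanding repeated states
    while frontier:                     # NODE-LIST empty -> failure
        state = frontier.popleft()      # remove the first element (E)
        if is_goal(state):
            return state
        for child in successors(state): # each rule that matches E
            if child not in visited:
                visited.add(child)
                frontier.append(child)  # new states go to the end of the queue
    return None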
BFS illustrated:

 Step 1: Initially fringe contains only one node corresponding to the source state A.

FRINGE: A
 Step 2: A is removed from fringe. The node is expanded, and its children B and C are
generated. They are placed at the back of fringe.

FRINGE: B C
 Step 3: Node B is removed from fringe and is expanded. Its children D, E
are generated and put at the back of fringe.

FRINGE: C D E
 Step 4: Node C is removed from fringe and is expanded. Its children D and G are added
to the back of fringe.

FRINGE: D E D G
 Step 5: Node D is removed from fringe. Its children C and F are generated and added to
the back of fringe.

FRINGE: E D G C F
 Step 6: Node E is removed from fringe. It has no children.

FRINGE: D G C F
 Step 7: D is expanded; its children B and F are put at the back of fringe.

FRINGE: G C F B F
 Step 8: G is selected for expansion. It is found to be a goal node. So the algorithm
returns the path A C G by following the parent pointers of the node corresponding to G.
The algorithm terminates.
 Breadth first search is:
 One of the simplest search strategies
 Complete. If there is a solution, BFS is guaranteed to find it.
 If there are multiple solutions, then a minimal solution will be found
 The algorithm is optimal (i.e., admissible) if all operators have the same cost.
Otherwise, breadth-first search finds a solution with the shortest path length (fewest steps), which need not be the cheapest.
 Time complexity: O(b^d)
 Space complexity: O(b^d)
 Optimality: Yes
 b – branching factor (the maximum number of successors of any node); d – depth of the shallowest goal
node; m – maximum length of any path in the search space.

 Advantages: Finds the path of minimal length to the goal.

 Disadvantages:
 Requires the generation and storage of a tree whose size is exponential in the depth of the
shallowest goal node.
 The breadth first search algorithm cannot be effectively used unless the search space is quite
small.
Depth- First- Search
 We may sometimes search the goal along the largest depth of the tree, and move up
only when further traversal along the depth is not possible.
 We then attempt to find alternative offspring of the parent of the node (state) last
visited.
 If we visit the nodes of a tree using the above principles to search the goal, the
traversal made is called depth first traversal and consequently the search strategy is
called depth first search.
 The search proceeds immediately to the deepest level of the search tree, where the
nodes have no successors.
 As those nodes are expanded, they are dropped from the fringe, and the search
"backs up" to the next shallowest node that still has unexplored successors.
 This strategy can be implemented by TREE-SEARCH with a last-in-first-out (LIFO)
queue, also known as a stack.
DFS Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state.
2. Until a goal state is found or NODE-LIST is empty, do:
a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty,
quit.
b. For each way that each rule can match the state described in E, do:
i. Apply the rule to generate a new state.
ii. If the new state is a goal state, quit and return this state.
iii. Otherwise, add the new state to the front of NODE-LIST.
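The only change from the BFS sketch above is the data structure: NODE-LIST becomes a LIFO stack. The helper names are again assumptions.

```python
def depth_first_search(initial_state, is_goal, successors):
    """DFS with a LIFO stack: the deepest unexpanded node is expanded first."""
    frontier = [initial_state]          # NODE-LIST, used as a stack
    visited = set()                     # cycle check (see the note on completeness)
    while frontier:
        state = frontier.pop()          # remove the most recently added element
        if is_goal(state):
            return state
        if state in visited:
            continue
        visited.add(state)
        for child in successors(state):
            if child not in visited:
                frontier.append(child)  # new states go to the front of NODE-LIST
    return None
```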
 Depth first search is:
 The algorithm takes exponential time.
 If N is the maximum depth of a node in the search space, in the worst case the algorithm
will take time O(b^N).
 The space taken is linear in the depth of the search tree: O(bN).
 Note that the time taken by the algorithm is related to the maximum depth of the search tree.
 If the search tree has infinite depth, the algorithm may not terminate.
 This can happen if the search space is infinite. It can also happen if the search space contains
cycles.
 The latter case can be handled by checking for cycles in the algorithm. Thus Depth First Search
is not complete.
Depth-limited Search

 A depth-limited search algorithm is similar to depth-first search with a
predetermined depth limit. Depth-limited search can solve the drawback of the infinite
path in depth-first search. In this algorithm, the node at the depth limit is
treated as if it has no further successor nodes.
 Depth-limited search can be terminated with two Conditions of failure:
 Standard failure value: it indicates that the problem does not have any solution.
 Cutoff failure value: it indicates that there is no solution for the problem within the given
depth limit.
Advantages:
 Depth-limited search is Memory efficient.
Disadvantages:
 Depth-limited search also has a disadvantage of incompleteness.
 It may not be optimal if the problem has more than one solution.

Completeness: The DLS algorithm is complete if the solution is within the depth limit.
Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ), where ℓ is the depth limit.
Space Complexity: The space complexity of the DLS algorithm is O(b×ℓ).
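A recursive sketch of depth-limited search, distinguishing the two failure values described above; the helper names are assumptions.

```python
def depth_limited_search(state, is_goal, successors, limit):
    """Returns the goal state, 'cutoff' (cutoff failure), or None (standard failure)."""
    if is_goal(state):
        return state
    if limit == 0:
        return "cutoff"                 # depth limit reached: treat as no successors
    cutoff_occurred = False
    for child in successors(state):
        result = depth_limited_search(child, is_goal, successors, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True      # a solution may exist deeper down
        elif result is not None:
            return result
    return "cutoff" if cutoff_occurred else None
```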
SEARCHING WITH PARTIAL INFORMATION

 So far we have assumed that the agent can calculate exactly which state results from any
sequence of actions and always knows which state it is in, and that
its percepts provide no new information after each action.
 What happens when knowledge of the states or actions is incomplete? We find that
different types of incompleteness lead to three distinct problem types.
1. Sensorless problems (also called conformant problems):
If the agent has no sensors at all, then (as far as it knows) it could be in one of several
possible initial states, and each action might therefore lead to one of several
possible successor states.
2. Contingency problems: If the environment is partially observable or if actions are
uncertain, then the agent's percepts provide new information after each action.
Each possible percept defines a contingency that must be planned for. A problem
is called adversarial if the uncertainty is caused by the actions of another agent.
3. Exploration problems:
 When the states and actions of the environment are unknown, the agent must act to
discover them.
 Exploration problems can be viewed as an extreme case of contingency problems.
In the vacuum-world environment example, the state space has 8 states, as shown in Fig.
3.20. There are three actions – Left, Right, and Suck – and the goal is to clean up all the
dirt (states 7 and 8). If the environment is observable, deterministic, and completely known,
then the problem is trivially solvable by any of the algorithms we have described.
For example, if the initial state is 5, then the action sequence [Right, Suck] will
reach a goal state, 8.
 Sensorless problems:
Suppose that the vacuum agent knows all the effects of its actions, but has no sensors.
Then it knows only that its initial state is one of the set {1,2,3,4,5,6,7,8}.
One might suppose that the agent's predicament is hopeless, but in fact it can do quite
well.
Because it knows what its actions do, it can, for example, calculate that the action Right
will cause it to be in one of the states {2,4,6,8}, and the action sequence [Right,Suck]
will always end up in one of the states {4,8}.
Finally, the sequence [Right,Suck,Left,Suck] is guaranteed to reach goal state 7
no matter what the start state: the agent can coerce the world into state 7.
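This coercion can be checked mechanically by tracking belief states (sets of possible states). The transition table below is a sketch reconstructed from the behaviour described above, with the numbering 1 = agent left, both squares dirty; 2 = agent right, both dirty; 3/4 = dirt in left square only; 5/6 = dirt in right square only; 7/8 = all clean.

```python
# Transition model for the 8-state vacuum world (numbering as in the text).
T = {
    "Right": {1: 2, 2: 2, 3: 4, 4: 4, 5: 6, 6: 6, 7: 8, 8: 8},
    "Left":  {1: 1, 2: 1, 3: 3, 4: 3, 5: 5, 6: 5, 7: 7, 8: 7},
    "Suck":  {1: 5, 2: 4, 3: 7, 4: 4, 5: 5, 6: 8, 7: 7, 8: 8},
}

def predict(belief, action):
    """Successor belief state: apply the action to every possible state."""
    return {T[action][s] for s in belief}

belief = set(range(1, 9))               # no sensors: the agent could be anywhere
for a in ["Right", "Suck", "Left", "Suck"]:
    belief = predict(belief, a)         # {2,4,6,8} -> {4,8} -> {3,7} -> {7}
print(belief)                           # {7}: coerced into the goal state
```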
Greedy best first search
Greedy best-first search tries to expand the node that is closest to the goal, on the
grounds that this is likely to lead to a solution quickly.
Thus, it evaluates nodes by using just the heuristic function; that is,
f(n) = h(n).
 This works for route-finding problems in Romania; we use the straight-line distance
heuristic, which we will call hSLD.
 If the goal is Bucharest, we need to know the straight-line distances to Bucharest,
which are shown in Figure 3.22.
 For example, hSLD (In(Arad)) = 366.
 Notice that the values of hSLD cannot be computed from the problem description
itself.
 Moreover, it takes a certain amount of experience to know that hSLD is correlated
with actual road distances and is, therefore, a useful heuristic.
 Figure 3.23 shows the progress of a greedy best-first search using hSLD to find a path
from Arad to Bucharest.
 The first node to be expanded from Arad will be Sibiu because it is closer to Bucharest
than either Zerind or Timisoara. The next node to be expanded will be Fagaras because
it is closest.
 Fagaras in turn generates Bucharest, which is the goal.
 For this particular problem, greedy best-first search using hSLD finds a solution
without ever expanding a node that is not on the solution path; hence its search cost is minimal. It is not optimal, however.
 Greedy best-first tree search is also incomplete even in a finite state space, much like
depth-first search.
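A compact sketch of greedy best-first search using a priority queue ordered by h(n) alone; states are assumed to be hashable and comparable (e.g., city-name strings), and the helper names are illustrative.

```python
import heapq

def greedy_best_first_search(start, is_goal, successors, h):
    """Expand the node with the lowest heuristic value: f(n) = h(n)."""
    frontier = [(h(start), start)]          # priority queue ordered by h
    parent = {start: None}                  # parent pointers for the path
    while frontier:
        _, state = heapq.heappop(frontier)
        if is_goal(state):
            path = []                       # reconstruct the path
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for child in successors(state):
            if child not in parent:         # first time this state is generated
                parent[child] = state
                heapq.heappush(frontier, (h(child), child))
    return None
```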
A* search: Minimizing the total estimated solution cost
 The most widely known form of best-first search is called A∗ search (pronounced "A-star search").
 It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to
get from the node to the goal:
f(n) = g(n) + h(n)
 Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated
cost of the cheapest path from n to the goal, we have f(n) = estimated cost of the
cheapest solution through n .
 Thus, if we are trying to find the cheapest solution, a reasonable thing to try first is the
node with the lowest value of g(n) + h(n).
 A∗ search is both complete and optimal.
 The algorithm is identical to UNIFORM-COST-SEARCH except that A∗ uses g + h instead of g.
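The same frontier mechanism as in the greedy sketch above, now ordered by f(n) = g(n) + h(n); again a hedged sketch with assumed helper names, where successors(state) yields (step_cost, child) pairs.

```python
import heapq

def a_star_search(start, is_goal, successors, h):
    """A*: expand the node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start)]       # entries are (f, g, state)
    best_g = {start: 0}                     # cheapest known g(n) per state
    parent = {start: None}
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                        # stale queue entry; skip it
        if is_goal(state):
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1], g            # solution path and its cost
        for cost, child in successors(state):
            g2 = g + cost                   # step cost c(x, a, y) assumed >= 0
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                parent[child] = state
                heapq.heappush(frontier, (g2 + h(child), g2, child))
    return None, float("inf")
```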
Conditions for optimality: Admissibility and consistency
 The first condition we require for optimality is that h(n) be an admissible heuristic.
 An admissible heuristic is one that never overestimates the cost to reach the goal.
Because g(n) is the actual cost to reach n along the current path, and
f(n) = g(n) + h(n), we have as an immediate consequence that f(n)
never overestimates the true cost of a solution along the current path through n.
 Admissible heuristics are by nature optimistic because they think the cost of solving
the problem is less than it actually is.
 An obvious example of an admissible heuristic is the straight-line distance hSLD
that we used in getting to Bucharest.
 Straight-line distance is admissible because the shortest path between any two
points is a straight line, so the straight line cannot be an overestimate.
 A second, slightly stronger condition called consistency (or sometimes
monotonicity) is required only for applications of A∗ to graph search.
 A heuristic h(n) is consistent if, for every node n and every successor n’ of n
generated by any action a, the estimated cost of reaching the goal from n is no
greater than the step cost of getting to n’ plus the estimated cost of reaching the
goal from n’
 h(n) ≤ c(n, a, n’ ) + h(n’ ) .
Optimality of A*
 A∗ has the following properties:
 The tree-search version of A∗ is optimal if h(n) is admissible, while the graph-
search version is optimal if h(n) is consistent.
 Among optimal algorithms of this type—algorithms that extend search
paths from the root and use the same heuristic information—A∗ is
optimally efficient for any given consistent heuristic.
Memory-bounded heuristic search
 The simplest way to reduce memory requirements for A∗ is to adapt the idea of iterative
deepening to the heuristic search context, resulting in the iterative-deepening A∗ (IDA∗)
Algorithm.
 Two other memory-bounded algorithms are RBFS and MA∗.
 Recursive best-first search (RBFS) is a simple recursive algorithm that attempts to
mimic the operation of standard best-first search, but using only linear space.
 Its structure is similar to that of a recursive depth-first search, but rather than continuing
indefinitely down the current path, it uses the f_limit variable to keep track of the f-value of the
best alternative path available from any ancestor of the current node.
 If the current node exceeds this limit, the recursion unwinds back to the alternative path. As
the recursion unwinds, RBFS replaces the f-value of each node along the path with a backed-up
value—the best f-value of its children. In this way, RBFS remembers the f-value of the best leaf
in the forgotten subtree.
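A sketch of the tree-search version of RBFS: when a subtree is abandoned, its best f-value is backed up into its root so the search can return to it later. The helper names are assumptions, and successors(state) yields (step_cost, child) pairs.

```python
INF = float("inf")

def recursive_best_first_search(start, is_goal, successors, h):
    """RBFS: best-first behaviour in linear space via backed-up f-values."""

    def rbfs(state, g, f_limit):
        if is_goal(state):
            return state, 0
        succs = []
        for cost, child in successors(state):
            g2 = g + cost
            succs.append([g2 + h(child), g2, child])  # [f, g, state]
        if not succs:
            return None, INF
        while True:
            succs.sort(key=lambda t: t[0])            # lowest f-value first
            best = succs[0]
            if best[0] > f_limit:
                return None, best[0]                  # unwind; report backed-up f
            alternative = succs[1][0] if len(succs) > 1 else INF
            result, best[0] = rbfs(best[2], best[1], min(f_limit, alternative))
            if result is not None:
                return result, best[0]

    return rbfs(start, 0, INF)[0]
```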
Local Search Algorithms: Hill Climbing Algorithm
 Hill climbing algorithm is a local search algorithm which continuously moves in the
direction of increasing elevation/value to find the peak of the mountain or best
solution to the problem. It terminates when it reaches a peak value where no neighbor
has a higher value.
 It is also called greedy local search as it only looks to its good immediate neighbor state
and not beyond that.
 A node of hill climbing algorithm has two components which are state and value.
 In this algorithm, we don't need to maintain and handle the search tree or graph as it
only keeps a single current state.
 We will assume we are trying to maximize a function. That is, we are trying to find a
point in the search space that is better than all the others. And by "better" we mean
that the evaluation is higher. We might also say that the solution is of better quality than
all the others.
The idea behind Hill climbing is as follows.
1. Pick a random point in the search space.
2. Consider all the neighbors of the current state.
3. Choose the neighbor with the best quality and move to that state.
4. Repeat steps 2 and 3 until all the neighboring states are of lower quality.
5. Return the current state as the solution state.
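The loop above translates directly into a short steepest-ascent sketch; random_state(), neighbors(), and value() are assumed, caller-supplied helpers.

```python
def hill_climbing(random_state, neighbors, value):
    """Steepest-ascent hill climbing, maximizing value(state)."""
    current = random_state()                    # 1. pick a random starting point
    while True:
        candidates = list(neighbors(current))   # 2. consider all neighbors
        if not candidates:
            return current
        best = max(candidates, key=value)       # 3. choose the best neighbor
        if value(best) <= value(current):       # 4. every neighbor is lower quality
            return current                      # 5. current state is the solution
        current = best                          # move and repeat
```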
Features of Hill Climbing:
 Following are some main features of Hill Climbing Algorithm:
 Generate and Test variant: Hill climbing is a variant of the Generate and Test method.
The Generate and Test method produces feedback which helps to decide which
direction to move in the search space.
 Greedy approach: Hill-climbing algorithm search moves in the direction which
optimizes the cost.
 No backtracking: It does not backtrack the search space, as it does not remember
the previous states.
State-space Diagram for Hill Climbing:

 The state-space landscape is a graphical representation of the hill-climbing algorithm,
showing a graph between the various states of the algorithm and the objective
function/cost.
 On the Y-axis we have the function, which can be an objective function or a cost
function, and the state space is on the X-axis.
 If the function on the Y-axis is cost, then the goal of the search is to find the global
minimum (or a local minimum). If the function on the Y-axis is an objective function,
then the goal of the search is to find the global maximum (or a local maximum).
Different regions in the state space landscape:
Local Maximum: Local maximum is a state which is better than its neighbor states, but there is
also another state which is higher than it.
Global Maximum: Global maximum is the best possible state of state space landscape. It has the
highest value of objective function.
Current state: It is a state in a landscape diagram where an agent is currently present.
Flat local maximum: It is a flat space in the landscape where all the neighbor states of current
states have the same value.
Shoulder: It is a plateau region which has an uphill edge.
Algorithm for Hill Climbing:

Step 1: Evaluate the initial state; if it is a goal state, then return success and stop.
Step 2: Loop until a solution is found or there is no new operator left to apply.
Step 3: Select and apply an operator to the current state.
Step 4: Check the new state:
a. If it is a goal state, then return success and quit.
b. Else, if it is better than the current state, then assign the new state as the
current state.
c. Else, if it is not better than the current state, then return to step 2.
Step 5: Exit.
Types of Hill Climbing Algorithm:
 Simple hill Climbing:
 Steepest-Ascent hill-climbing:
 Stochastic hill Climbing:
1. Simple Hill Climbing:
 Simple hill climbing is the simplest way to implement a hill climbing algorithm. It
evaluates one neighbor node state at a time and selects the first one that improves on the
current cost, setting it as the current state. It checks only one successor state at a time; if that
successor is better than the current state, it moves there, else it stays in the same state. This algorithm has
the following features:
 Less time consuming
 Less optimal solution and the solution is not guaranteed
2. Steepest-Ascent hill climbing:
 The steepest-Ascent algorithm is a variation of simple hill climbing algorithm. This algorithm
examines all the neighboring nodes of the current state and selects one neighbor node which is
closest to the goal state. This algorithm consumes more time as it searches for multiple
neighbors.

3. Stochastic hill climbing:
 Stochastic hill climbing does not examine all its neighbors before moving. Rather, this search
algorithm selects one neighbor node at random and decides whether to choose it as the current
state or examine another state.
Problems in Hill Climbing Algorithm:
1. Local Maximum: A local maximum is a peak state in the landscape which is better than each of
its neighboring states, but there is another state also present which is higher than the local
maximum.
Solution: The backtracking technique can be a solution to the local maximum problem in the state-space
landscape. Create a list of promising paths so that the algorithm can backtrack through the search
space and explore other paths as well.

2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of the
current state have the same value; because of this, the algorithm cannot find the best direction
to move. A hill-climbing search can get lost in a plateau area.
Solution: The solution for the plateau is to take big steps (or very small steps) while searching.
Randomly select a state which is far away from the current state, so that it is
possible for the algorithm to find a non-plateau region.
3. Ridges: A ridge is a special form of the local maximum. It has an area which is
higher than its surrounding areas, but itself has a slope, and cannot be reached in a
single move.
Solution: With the use of bidirectional search, or by moving in different directions,
we can improve this problem.
Simulated Annealing Search

 A hill-climbing algorithm which never makes a move towards a lower value is
guaranteed to be incomplete, because it can get stuck on a local maximum.
 If the algorithm instead performs a random walk, moving to a successor chosen at
random, it may be complete but is very inefficient. Simulated annealing is an algorithm which yields both
efficiency and completeness.
 In metallurgy, annealing is the process used to temper or harden metals and glass by
heating them to a high temperature and then cooling them gradually, thus allowing the
material to reach a low-energy crystalline state.
 The same idea is used in simulated annealing, in which the algorithm picks a
random move instead of picking the best move.
 To understand simulated annealing, let's switch our point of view from hill
climbing to gradient descent (i.e., minimizing cost) and imagine the task of getting a ping-pong
ball into the deepest crevice in a bumpy surface.
 If we just let the ball roll, it will come to rest at a local minimum. If we shake the surface, we
can bounce the ball out of the local minimum. The trick is to shake just hard enough to
bounce the ball out of local minima, but not hard enough to dislodge it from the global
minimum.
 The simulated annealing solution is to start by shaking hard (i.e., at a high
temperature) and then gradually reduce the intensity of the shaking (i.e., lower the
temperature).
Simulated annealing Search Algorithm
 Figure 4.14 shows the simulated annealing algorithm. It is quite similar to hill climbing;
instead of picking the best move, however, it picks a random move. If the move
improves the situation, it is always accepted. Otherwise, the algorithm accepts the
move with some probability less than 1. The probability decreases exponentially with
the "badness" of the move – the amount by which the evaluation is worsened.
 The probability also decreases as the "temperature" T goes down: bad moves are
more likely to be allowed at the start, when the temperature is high, and they become
more unlikely as T decreases.
 Simulated annealing was first used extensively to solve VLSI layout problems. It has
been applied widely to factory scheduling and other large-scale optimization tasks.
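A hedged sketch of this algorithm: a random move is always accepted if it improves the value, and otherwise accepted with probability e^(Δ/T), where Δ is the (negative) change in value and T the current temperature. The helper names and the cooling schedule are assumptions.

```python
import math
import random

def simulated_annealing(start, neighbors, value, schedule):
    """Simulated annealing, maximizing value(state); schedule(t) gives T."""
    current = start
    t = 0
    while True:
        t += 1
        T = schedule(t)
        if T <= 0:                                     # frozen: stop
            return current
        nxt = random.choice(list(neighbors(current)))  # pick a random move
        delta = value(nxt) - value(current)            # < 0 means a "bad" move
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt                              # accept: always if better,
                                                       # with prob e^(delta/T) if worse

# Example linear cooling schedule (an assumption, not from the notes):
# schedule = lambda t: 1.0 - 0.001 * t   # temperature reaches 0 after 1000 steps
```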
Local Beam Search

 Idea: Keeping only one node in memory is an extreme reaction to memory problems.
 Beam search is an optimization of best-first search that reduces its memory
requirements.
 Keep track of k states instead of one
- Initially: k randomly selected states
- Next: determine all successors of k states
- If any of successors is goal -> Finished
- Else select k best from successors and repeat
 Best-first search is a graph search that orders all partial solutions according to some
heuristic. But in beam search, only a predetermined number of best partial solutions
are kept as candidates. Therefore, it is a greedy algorithm.
 Beam search uses breadth-first search to build its search tree. At each level of the tree,
it generates all successors of the states at the current level, sorting them in increasing
order of heuristic cost. However, it only stores a predetermined number (β) of best
states at each level, called the beam width. Only those states are expanded next.
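A short sketch of the k-state loop described above, minimizing a heuristic cost h; all helper names are assumptions.

```python
def local_beam_search(k, random_state, successors, is_goal, h):
    """Keep the k best states at each step (beam width k)."""
    states = [random_state() for _ in range(k)]  # k randomly selected states
    while True:
        pool = [s2 for s in states for s2 in successors(s)]  # all successors
        for s in pool:
            if is_goal(s):                       # any successor is the goal?
                return s
        if not pool:
            return min(states, key=h)            # dead end: best state so far
        pool.sort(key=h)                         # increasing heuristic cost
        states = pool[:k]                        # select the k best and repeat
```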
Uses of Beam Search:-
 It has been used in many machine translation systems.
 Each part is processed to select the best translation, and many different ways of
translating the words appear.
 The first use of a beam search was in the Harpy Speech Recognition System, CMU
1976.
Genetic algorithms
 A genetic algorithm (or GA) is a variant of stochastic beam search in which successor
states are generated by combining two parent states, rather than by modifying a
single state.
 Like beam search, genetic algorithms (GAs) begin with a set of k randomly generated
states, called the population.
 Each state, or individual, is represented as a string over a finite alphabet (often binary).
 For example, an 8-queens state must specify the positions of 8 queens, each in a
column of 8 squares, and so requires 8 x log2 8 = 24 bits.
 Evaluation function (fitness function):
- A fitness function should return higher values for better states
(the opposite of a heuristic cost function), e.g., the number of non-attacking pairs in 8-queens.
 Produce the next generation of states by “simulated evolution”
- Random selection
- Crossover
- Random mutation
 So, for the 8-queens problem we use the number of non-attacking pairs of queens,
which has a value of 28 for a solution.
 For example, if four individuals have fitness values 24, 23, 20, and 11, their
probabilities of being selected for reproduction are:
 24/(24+23+20+11) = 31%
 23/(24+23+20+11) = 29%
 20/(24+23+20+11) = 26%
 11/(24+23+20+11) = 14%
Advantages of Genetic Algorithm
 Genetic algorithms parallelize well.
 It helps in optimizing various problems such as discrete functions, multi-objective
problems, and continuous functions.
 It provides a solution for a problem that improves over time.
 A genetic algorithm does not need derivative information.
Limitations of Genetic Algorithms
 Genetic algorithms are not efficient for solving simple problems.
 They do not guarantee the quality of the final solution to a problem.
 Repetitive calculation of fitness values may create computational challenges.
THANK YOU
