2 Single-variable Optimization Algorithms

We begin our discussion with optimization algorithms for single-variable and unconstrained functions. Single-variable optimization algorithms are taught in first- or second-year engineering courses, and many students are therefore familiar with them. Since single-variable functions involve only one variable, the optimization procedures are simple and easy to understand. Moreover, these algorithms are used repeatedly as a subtask of many multi-variable optimization methods. Therefore, a clear understanding of these algorithms will help readers learn the more complex algorithms discussed in subsequent chapters. The algorithms described in this chapter can be used to solve minimization problems of the following type:

Minimize f(x),

where f(x) is the objective function and x is a real variable. The purpose of an optimization algorithm is to find a solution x for which the function f(x) is minimum. Two distinct types of algorithms are presented in this chapter: direct search methods use only objective function values to locate the minimum point, whereas gradient-based methods use the first and/or second-order derivatives of the objective function to locate the minimum point. The gradient-based optimization methods presented in this chapter do not really require the function f(x) to be differentiable or continuous; gradients are computed numerically wherever required. Even though the optimization methods described here are for minimization problems, they can also be used to solve a maximization problem by adopting either of the following two procedures. In the first procedure, an equivalent dual problem (minimizing −f(x)) is formulated and minimized; in this case, the algorithms described in this chapter can be used directly to solve a maximization problem. In the second procedure, the if-then-else clauses used in the algorithms have to be modified to solve maximization problems directly. The first procedure is simpler to use and is therefore popular, whereas the second procedure is more elegant but requires a proper understanding of the working of the optimization algorithms. We first present the necessary and sufficient conditions for optimality. Then, we describe two bracketing algorithms, followed by direct and gradient-based search methods.

2.1 Optimality Criteria

Before we present conditions for a point to be an optimal point, we define three different types of optimal points.

(i) Local optimal point: A point or solution x* is said to be a local optimal point if there exists no point in the neighbourhood of x* which is better than x*. In the parlance of minimization problems, a point x* is a locally minimal point if no point in the neighbourhood has a function value smaller than f(x*).

(ii) Global optimal point: A point or solution x** is said to be a global optimal point if there exists no point in the entire search space which is better than the point x**. Similarly, a point x** is a global minimal point if no point in the entire search space has a function value smaller than f(x**).

(iii) Inflection point: A point x* is said to be an inflection point if the function value increases locally as x* increases and decreases locally as x* reduces, or if the function value decreases locally as x* increases and increases locally as x* decreases.
Certain characteristics of the underlying objective function can be exploited to check whether a point is a local minimum, a global minimum, or an inflection point. Assuming that the first and second-order derivatives of the objective function f(x) exist in the chosen search space, we may expand the function in a Taylor series at any point x̄ and impose the condition that any other point in the neighbourhood has a larger function value. It can then be shown that the conditions for a point x̄ to be a minimum point are f'(x̄) = 0 and f''(x̄) > 0, where f' and f'' represent the first and second derivatives of the function. The first condition alone suggests that the point is either a minimum, a maximum, or an inflection point; both conditions together suggest that the point is a minimum. In general, the sufficient conditions of optimality are given as follows.

Suppose at point x* the first derivative is zero and the order of the first nonzero higher-order derivative is denoted by n; then

• If n is odd, x* is an inflection point.
• If n is even, x* is a local optimum.
  (i) If the derivative is positive, x* is a local minimum.
  (ii) If the derivative is negative, x* is a local maximum.

We consider two simple mathematical functions to illustrate a minimum and an inflection point.

EXERCISE 2.1.1

Consider the optimality of the point x = 0 in the function f(x) = x³. The function is shown in Figure 2.1.

Figure 2.1 The function f(x) = x³.

It is clear from the figure that the point x = 0 is an inflection point, since the function value increases for x > 0 and decreases for x < 0 in a small neighbourhood of x = 0. We may use the sufficient conditions for optimality to demonstrate this fact. First of all, the first derivative of the function at x = 0 is f'(0) = 3x²|x=0 = 0. Searching for a nonzero higher-order derivative, we observe that f''(0) = 6x|x=0 = 0 and f'''(0) = 6|x=0 = 6 (a nonzero number). Thus, the first nonzero derivative occurs at n = 3, and since n is odd, the point x = 0 is an inflection point.

EXERCISE 2.1.2

Consider the optimality of the point x = 0 in the function f(x) = x⁴. A plot of this function is shown in Figure 2.2. The point x = 0 is a minimal point, as can be seen from the figure.

Figure 2.2 The function f(x) = x⁴.

Since f'(0) = 4x³|x=0 = 0, we calculate higher-order derivatives in search of a nonzero derivative at x = 0: f''(0) = 0, f'''(0) = 0, and f''''(0) = 24. Since the value of the fourth-order derivative is positive, n = 4, which is an even number. Thus the point x = 0 is a local minimum point.

It is important to note here that even though a point can be tested for local optimality using the above conditions, the global optimality of a point cannot be established with them. The common procedure for obtaining the global optimal point is to find a number of local optimal points using the above conditions and to choose the best of them. In Chapter 6, we discuss more about global optimization. In the following sections, we present a number of optimization techniques for finding a local minimal point of a single-variable function.
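The sufficient conditions above translate directly into a small numerical test. The following sketch is written in Python purely for illustration (the book's own listings, given at the end of this chapter, are in FORTRAN); the function name classify_point and the hard-coded derivative values are assumptions made for this example, not part of the text.

def classify_point(derivs, tol=1e-10):
    """Classify a candidate point from its successive derivatives.

    derivs[0] is f'(x*), derivs[1] is f''(x*), and so on; the order n of the
    first nonzero derivative decides the type, as stated in Section 2.1.
    """
    for i, d in enumerate(derivs):
        n = i + 1                       # order of this derivative
        if abs(d) > tol:
            if n == 1:
                return "not a stationary point"
            if n % 2 == 1:              # n odd
                return "inflection point"
            return "local minimum" if d > 0 else "local maximum"
    return "inconclusive: all supplied derivatives are zero"

# f(x) = x**3 at x = 0: derivatives 0, 0, 6  -> inflection point (Exercise 2.1.1)
print(classify_point([0.0, 0.0, 6.0]))
# f(x) = x**4 at x = 0: derivatives 0, 0, 0, 24 -> local minimum (Exercise 2.1.2)
print(classify_point([0.0, 0.0, 0.0, 24.0]))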
2.2 Bracketing Methods

The minimum of a function is found in two phases. First, a crude technique is used to find a lower and an upper bound on the minimum. Thereafter, a more sophisticated method is used to search within these limits and find the optimal solution with the desired accuracy. In this section we discuss two methods for bracketing the minimum point, and in the next section we discuss a number of methods for finding the minimum point with the desired accuracy.

2.2.1 Exhaustive Search Method

We begin with the exhaustive search method, simply because it is the simplest of all the methods. In the exhaustive search method, the optimum of a function is bracketed by calculating the function values at a number of equally spaced points (Figure 2.3). Usually, the search begins from a lower bound on the variable, and three consecutive function values are compared at a time, based on the assumption of unimodality of the function. Depending on the outcome of the comparison, the search is either terminated or continued by replacing one of the three points by a new point. The search continues until the minimum is bracketed.

Figure 2.3 The exhaustive search method uses equally spaced points.

Algorithm

Step 1 Set x1 = a, Δx = (b − a)/n (n is the number of intermediate points), x2 = x1 + Δx, and x3 = x2 + Δx.

Step 2 If f(x1) ≥ f(x2) ≤ f(x3), the minimum point lies in (x1, x3); Terminate. Else set x1 = x2, x2 = x3, x3 = x2 + Δx, and go to Step 3.

Step 3 Is x3 ≤ b? If yes, go to Step 2; Else no minimum exists in (a, b), or a boundary point (a or b) is the minimum point.

The final interval obtained by using this algorithm is 2(b − a)/n. It can be shown that, on an average, (n/2 + 2) function evaluations are necessary to obtain the desired accuracy.
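The stepwise procedure above can be condensed into a few lines of code. The sketch below is in Python and is only illustrative (the book's own programs, listed at the end of the chapter, are in FORTRAN); the name exhaustive_search and the choice of a slightly positive lower bound (to keep 54/x defined) are assumptions of this example.

def exhaustive_search(f, a, b, n):
    """Bracket the minimum of a unimodal f on (a, b) using n intermediate points."""
    dx = (b - a) / n                          # Step 1: equal spacing
    x1, x2, x3 = a, a + dx, a + 2 * dx
    f1, f2, f3 = f(x1), f(x2), f(x3)
    while x3 <= b:                            # Step 3
        if f1 >= f2 <= f3:                    # Step 2: minimum lies in (x1, x3)
            return (x1, x3)
        x1, x2, x3 = x2, x3, x3 + dx          # slide the three-point window
        f1, f2, f3 = f2, f3, f(x3)
    return None                               # no interior minimum bracketed

f = lambda x: x**2 + 54.0 / x
print(exhaustive_search(f, a=0.001, b=5.0, n=10))   # roughly (2.5, 3.5), cf. Exercise 2.2.1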
To illustrate the working of the above algorithm, we consider the following minimization problem.

EXERCISE 2.2.1

Consider the problem:

Minimize f(x) = x² + 54/x

in the interval (0, 5). A plot of the function is shown in Figure 2.4. The plot shows that the minimum lies at x* = 3.

Figure 2.4 The unimodal, single-variable function used in the exercise problems. The minimum point is at x* = 3.

The corresponding function value at that point is f(x*) = 27. By calculating the first and second derivatives at this point, we observe that f'(3) = 0 and f''(3) = 6. Thus, the point x = 3 is a local minimum point, according to the sufficiency conditions for minimality described earlier. In the following, we try to bracket the minimum of the above function using the exhaustive search method. Let us assume that we would like to bracket the minimum point by evaluating at most 11 different function values. Thus, we are considering only 10 intermediate points, or n = 10.

Step 1 According to the parameters chosen, x1 = a = 0 and b = 5. Since n = 10, the increment is Δx = (5 − 0)/10 or 0.5. We set x2 = 0 + 0.5 or 0.5 and x3 = 0.5 + 0.5 or 1.0.

Step 2 Computing function values at the various points, we have f(0) = ∞, f(0.5) = 108.25, and f(1.0) = 55.00. Comparing these function values, we observe that f(x1) > f(x2) > f(x3). Thus, the minimum does not lie in the interval (0, 1). We set x1 = 0.5, x2 = 1.0, x3 = 1.5, and proceed to Step 3.

Step 3 At this step, we observe that x3 < 5. Therefore, we move to Step 2. This completes one iteration of the exhaustive search method. Since the minimum is not yet bracketed, we continue with the next iteration.

Step 2 At this iteration, we have the points x1 = 0.5, x2 = 1.0, and x3 = 1.5. Since we have already calculated the function values at the first two points, we compute the function value at x3 only: f(x3) = f(1.5) = 38.25. Thus, f(x1) > f(x2) > f(x3), and the minimum does not lie in the interval (0.5, 1.5). Therefore, we set x1 = 1.0, x2 = 1.5, x3 = 2.0, and move to Step 3.

Step 3 Once again, x3 < 5.0. Thus, we move to Step 2.

Step 2 At this step, we need to compute the function value only at x3 = 2.0. The corresponding function value is f(x3) = f(2.0) = 31.00. Since f(x1) > f(x2) > f(x3), we continue with Step 3. We set x1 = 1.5, x2 = 2.0, and x3 = 2.5.

Step 3 At this iteration, x3 < 5.0. Thus, we move to Step 2.

Step 2 The function value at x3 = 2.5 is f(x3) = 27.85. As in the previous iterations, we observe that f(x1) > f(x2) > f(x3), and therefore we go to Step 3. The new set of three points is x1 = 2.0, x2 = 2.5, and x3 = 3.0.

As is evident from the progress of the algorithm so far, if n is taken to be large in order to attain good accuracy in the solution, this method may require a large number of passes through Steps 2 and 3.

Step 3 Once again, x3 < 5.0. Thus, we move to Step 2.

Step 2 Here, f(x3) = f(3.0) = 27.00. Thus, we observe that f(x1) > f(x2) > f(x3). We set x1 = 2.5, x2 = 3.0, and x3 = 3.5.

Step 3 Again, x3 < 5.0, and we move to Step 2.

Step 2 Here, f(x3) = f(3.5) = 27.68. At this iteration we have a different situation: f(x1) > f(x2) < f(x3). This is precisely the condition for termination of the algorithm. Therefore, we have obtained a bound on the minimum: x* ∈ (2.5, 3.5).

As already stated, with n = 10 the accuracy of the solution can only be 2(5 − 0)/10 or 1.0, which is what we have obtained in the final interval. An interesting point to note is that if we require more precision in the obtained solution, we need to increase n or, in other words, we should be prepared to compute more function evaluations. For a desired accuracy in the solution, the parameter n has to be chosen accordingly. For example, if three decimal places of accuracy are desired in the above exercise problem, the required n can be obtained by solving the following equation for n:

(2/n)(b − a) = 0.001.

For a = 0 and b = 5, about n = 10,000 intermediate points are required, and on an average (n/2 + 2) or 5,002 function evaluations are needed. In the above problem, with n = 10,000, the obtained interval is (2.9995, 3.0005).
2.2.2 Bounding Phase Method

The bounding phase method is used to bracket the minimum of a function; it is guaranteed to bracket the minimum of a unimodal function. The algorithm begins with an initial guess and finds a search direction based on two more function evaluations in the vicinity of the initial guess. Thereafter, an exponential search strategy is adopted to reach the optimum. In the following algorithm, an exponent of two is used, but any other value may very well be used.

Algorithm

Step 1 Choose an initial guess x^(0) and an increment Δ. Set k = 0.

Step 2 If f(x^(0) − |Δ|) ≥ f(x^(0)) ≥ f(x^(0) + |Δ|), then Δ is positive; Else if f(x^(0) − |Δ|) ≤ f(x^(0)) ≤ f(x^(0) + |Δ|), then Δ is negative; Else go to Step 1.

Step 3 Set x^(k+1) = x^(k) + 2^k Δ.

Step 4 If f(x^(k+1)) < f(x^(k)), set k = k + 1 and go to Step 3; Else the minimum lies in the interval (x^(k−1), x^(k+1)) and Terminate.

If the chosen Δ is large, the bracketing accuracy of the minimum point is poor, but the bracketing of the minimum is faster. On the other hand, if the chosen Δ is small, the bracketing accuracy is better, but more function evaluations may be necessary to bracket the minimum. This method of bracketing the optimum is usually faster than the exhaustive search method discussed in the previous section. We illustrate the working of this algorithm on the same exercise problem.

EXERCISE 2.2.2

We would like to bracket the minimum of the function f(x) = x² + 54/x using the bounding phase method.

Step 1 We choose an initial guess x^(0) = 0.6 and an increment Δ = 0.5. We also set k = 0.

Step 2 We calculate three function values to proceed with the algorithm: f(x^(0) − |Δ|) = f(0.6 − 0.5) = 540.010, f(x^(0)) = f(0.6) = 90.360, and f(x^(0) + |Δ|) = f(0.6 + 0.5) = 50.301. We observe that f(0.1) > f(0.6) > f(1.1). Thus we set Δ = +0.5.

Step 3 We compute the next guess: x^(1) = x^(0) + 2⁰Δ = 1.1.

Step 4 The function value at x^(1) is 50.301, which is less than that at x^(0). Thus, we set k = 1 and go to Step 3. This completes one iteration of the bounding phase algorithm.

Step 3 The next guess is x^(2) = x^(1) + 2¹Δ = 1.1 + 2(0.5) = 2.1.

Step 4 The function value at x^(2) is 30.124, which is smaller than that at x^(1). Thus we set k = 2 and move to Step 3.

Step 3 We compute x^(3) = x^(2) + 2²Δ = 2.1 + 4(0.5) = 4.1.

Step 4 The function value f(x^(3)) is 29.981, which is smaller than f(x^(2)) = 30.124. We set k = 3.

Step 3 The next guess is x^(4) = x^(3) + 2³Δ = 4.1 + 8(0.5) = 8.1.

Step 4 The function value at this point is f(8.1) = 72.277, which is larger than f(x^(3)) = 29.981. Thus, we terminate with the obtained interval (2.1, 8.1).

With Δ = 0.5, the obtained bracketing is poor, but the number of function evaluations required is only 7. It is found that with x^(0) = 0.6 and Δ = 0.001, the obtained interval is (1.623, 4.695), and the number of function evaluations is 15. The algorithm approaches the optimum exponentially, but the accuracy of the obtained interval may not be very good, whereas in the exhaustive search method the number of iterations required to come near the optimum may be large, but the obtained accuracy is good. An algorithm with a mixed strategy may be more desirable. At the end of this chapter, we present a FORTRAN code implementing this algorithm. A sample simulation result obtained using the code is also presented.
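Before moving on to region-elimination methods, here is a brief Python sketch of the bounding phase procedure, given purely as an illustration alongside the FORTRAN listing at the end of the chapter; the function name bounding_phase and the error handling in Step 2 are choices made for this example, not part of the text.

def bounding_phase(f, x0, delta):
    """Bracket the minimum of a unimodal f starting from the initial guess x0 (Section 2.2.2)."""
    # Step 2: fix the sign of the increment from three function values.
    if f(x0 - abs(delta)) >= f(x0) >= f(x0 + abs(delta)):
        delta = abs(delta)
    elif f(x0 - abs(delta)) <= f(x0) <= f(x0 + abs(delta)):
        delta = -abs(delta)
    else:
        raise ValueError("Step 2 failed: choose another x0 or a smaller delta")
    k = 0
    x_prev, x_cur = x0, x0 + delta                 # Step 3 with k = 0 (2**0 = 1)
    while True:
        x_next = x_cur + 2 ** (k + 1) * delta      # Step 3
        if f(x_next) >= f(x_cur):                  # Step 4: no further improvement
            return tuple(sorted((x_prev, x_next)))
        x_prev, x_cur = x_cur, x_next
        k += 1

f = lambda x: x**2 + 54.0 / x
print(bounding_phase(f, x0=0.6, delta=0.5))        # brackets (2.1, 8.1), as in Exercise 2.2.2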
2.3 Region-Elimination Methods

Once the minimum point is bracketed, a more sophisticated algorithm needs to be used to improve the accuracy of the solution. In this section, we describe three algorithms that work primarily on the principle of region elimination and require comparatively fewer function evaluations. Depending on the function values evaluated at two points, and assuming that the function is unimodal in the chosen search space, it can be concluded that the desired minimum cannot lie in some portion of the search space. The fundamental rule for region-elimination methods is as follows.

Let us consider two points x1 and x2 which lie in the interval (a, b) and satisfy x1 < x2. For unimodal functions (for minimization), we can conclude the following:

• If f(x1) > f(x2), then the minimum does not lie in (a, x1).
• If f(x1) < f(x2), then the minimum does not lie in (x2, b).
• If f(x1) = f(x2), then the minimum does not lie in (a, x1) or in (x2, b).

Consider the unimodal function drawn in Figure 2.5. If the function value at x1 is larger than that at x2, the minimum point x* cannot lie on the left side of x1. Thus, we can eliminate the region (a, x1) from further consideration, and we reduce our interval of interest from (a, b) to (x1, b). Similarly, the second possibility (f(x1) < f(x2)) can be explained. If the third situation occurs, that is, when f(x1) = f(x2) (a rare situation, especially when numerical computations are performed), we can conclude that the regions (a, x1) and (x2, b) can both be eliminated, with the assumption that there exists only one local minimum in the search space (a, b). The following algorithms construct their search by using the above fundamental rule for region elimination.

Figure 2.5 A typical single-variable unimodal function with function values at two distinct points.

2.3.1 Interval Halving Method

In this method, function values at three different points are considered. The three points divide the search space into four regions, and the fundamental region-elimination rule is used to eliminate a portion of the search space based on the function values at the three chosen points. The three points chosen in the interval (a, b) are equidistant from each other and from the boundaries (Figure 2.6).

Figure 2.6 Three points x1, xm, and x2 used in the interval halving method.

Two of the function values are compared at a time and some region is eliminated. There are three scenarios that may occur. If f(x1) < f(xm), then the minimum cannot lie beyond xm; therefore, we reduce the interval from (a, b) to (a, xm). The point xm being the middle of the search space, this elimination reduces the search space to 50 per cent of the original. On the other hand, if f(x1) > f(xm), the minimum cannot lie in the interval (a, x1). The point x1 being at the one-fourth point of the search space, this reduction is only 25 per cent. Thereafter, we compare the function values at xm and x2 to eliminate a further 25 per cent of the search space. This process continues until a small enough interval is found. The complete algorithm is described below. Since in each iteration of the algorithm exactly half of the search space is retained, the algorithm is called the interval halving method.

Algorithm

Step 1 Choose a lower bound a and an upper bound b. Choose also a small number ε. Let xm = (a + b)/2 and L0 = L = b − a. Compute f(xm).

Step 2 Set x1 = a + L/4 and x2 = b − L/4. Compute f(x1) and f(x2).

Step 3 If f(x1) < f(xm), set b = xm and xm = x1; go to Step 5; Else go to Step 4.

Step 4 If f(x2) < f(xm), set a = xm and xm = x2; go to Step 5; Else set a = x1 and b = x2; go to Step 5.

Step 5 Calculate L = b − a. If |L| < ε, Terminate; Else go to Step 2.

At every iteration, two new function evaluations are performed and the interval reduces to half of that at the previous iteration. Thus, the interval reduces to about (0.5)^(n/2) L0 after n function evaluations, and the number of function evaluations required to achieve a desired accuracy ε can be computed by solving the following equation:

(0.5)^(n/2) (b − a) = ε.
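As an illustration only, a compact Python version of the interval halving steps is given below (the book's programs are in FORTRAN); the name interval_halving and the default tolerance are assumptions of this sketch.

def interval_halving(f, a, b, eps=1e-3):
    """Region elimination with three equidistant points (Section 2.3.1)."""
    xm = 0.5 * (a + b)                        # Step 1
    fm = f(xm)
    L = b - a
    while abs(L) > eps:                       # Step 5
        x1, x2 = a + L / 4.0, b - L / 4.0     # Step 2
        f1, f2 = f(x1), f(x2)
        if f1 < fm:                           # Step 3: keep the left half
            b, xm, fm = xm, x1, f1
        elif f2 < fm:                         # Step 4: keep the right half
            a, xm, fm = xm, x2, f2
        else:                                 # keep the middle half
            a, b = x1, x2
        L = b - a
    return a, b

f = lambda x: x**2 + 54.0 / x
print(interval_halving(f, 0.0, 5.0))          # shrinks around x* = 3, cf. Exercise 2.3.1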
EXERCISE 2.3.1

We again consider the unimodal, single-variable function used before:

f(x) = x² + 54/x.

Step 1 We choose a = 0, b = 5, and ε = 10⁻³. The point xm is the mid-point of the search interval; thus, xm = (0 + 5)/2 = 2.5. The initial interval length is L0 = L = 5 − 0 = 5. The function value at xm is f(xm) = 27.85.

Step 2 We set x1 = 0 + 5/4 = 1.25 and x2 = 5 − 5/4 = 3.75. The corresponding function values are f(x1) = 44.76 and f(x2) = 28.46.

Step 3 By comparing these function values, we observe that f(x1) > f(xm). Thus we continue with Step 4.

Step 4 We again observe that f(x2) > f(xm). Thus, we drop the intervals (0.00, 1.25) and (3.75, 5.00). In other words, we set a = 1.25 and b = 3.75. The outcome of this iteration is shown pictorially in Figure 2.7.

Figure 2.7 First two iterations of the interval halving method. The figure shows how exactly half of the search space is eliminated at every iteration.

Step 5 The new interval is L = 3.75 − 1.25 = 2.5, which is exactly half of the original interval (L0 = 5). Since |L| is not small enough, we continue with Step 2. This completes one iteration of the interval halving method.

Step 2 We now compute the new values of x1 and x2: x1 = 1.25 + 2.5/4 = 1.875 and x2 = 3.75 − 2.5/4 = 3.125. The function values are f(x1) = 32.32 and f(x2) = 27.05, respectively. It is important to note that even though three function values are required for the comparisons at Steps 3 and 4, we have to compute function values at the two new points only; the other point (xm, in this case) always carries over from the previous iteration.

Step 3 We observe that f(x1) = 32.32 > f(xm) = 27.85. Thus, we go to Step 4.

Step 4 Here, f(x2) = 27.05 < f(xm) = 27.85. Thus, we eliminate the interval (1.25, 2.5) and set a = 2.5 and xm = 3.125. This step is also depicted in Figure 2.7.

Step 5 At the end of the second iteration, the new interval length is L = 3.75 − 2.5 = 1.25, which is again half of that in the previous iteration. Since this interval is not smaller than ε, we perform another iteration.

Step 2 We compute x1 = 2.8125 and x2 = 3.4375. The corresponding function values are f(x1) = 27.11 and f(x2) = 27.53.

Step 3 We observe that f(x1) = 27.11 > f(xm) = 27.05. So we move to Step 4.

Step 4 Here, f(x2) = 27.53 > f(xm) = 27.05, and we drop the boundary intervals. Thus, a = 2.8125 and b = 3.4375.

Step 5 The new interval is L = 0.625. We continue this process until an L smaller than the specified small value ε is obtained.

We observe that at the end of each iteration the interval is reduced to half of its previous size, and after three iterations the interval is (1/2)³ L0 = 0.625. Since two function evaluations are required per iteration and half of the region is eliminated at each iteration, the effective region elimination per function evaluation is 25 per cent. In the following subsections, we discuss two more algorithms with larger region-elimination capabilities per function evaluation.
2.3.2 Fibonacci Search Method

In this method, the search interval is reduced according to Fibonacci numbers. The property of the Fibonacci numbers is that, given two consecutive numbers Fn−2 and Fn−1, the third number is calculated as

Fn = Fn−1 + Fn−2,   (2.1)

where n = 2, 3, 4, ... The first few Fibonacci numbers are F0 = 1, F1 = 1, F2 = 2, F3 = 3, F4 = 5, F5 = 8, F6 = 13, and so on. This property of the Fibonacci numbers can be used to create a search algorithm that requires only one function evaluation at each iteration. The principle of Fibonacci search is that, of the two points required for the region-elimination rule, one is always the previous point and the other is new; thus, only one function evaluation is required at each iteration. At iteration k, two intermediate points, each L*k away from either end of the current search space, are chosen, where L = b − a is the original interval length. When the region-elimination rule eliminates a portion of the search space depending on the function values at these two points, the remaining search space is Lk. By defining L*k = (Fn−k+1/Fn+1)L and Lk = (Fn−k+2/Fn+1)L, it can be shown that Lk − L*k = L*k+1, which means that one of the two points used in iteration k remains as one of the points in iteration (k + 1). This can be seen from Figure 2.8. If the region (a, x1) is eliminated in the k-th iteration, the point x2 is at a distance (Lk − L*k), or L*k+1, from the point x1 in the (k + 1)-th iteration. Since the first two Fibonacci numbers are the same, the algorithm usually starts with k = 2.

Algorithm

Step 1 Choose a lower bound a and an upper bound b. Set L = b − a. Assume the desired number of function evaluations to be n. Set k = 2.

Step 2 Compute L*k = (Fn−k+1/Fn+1)L. Set x1 = a + L*k and x2 = b − L*k.

Step 3 Compute one of f(x1) or f(x2), whichever was not evaluated earlier. Use the fundamental region-elimination rule to eliminate a region. Set the new a and b.

Step 4 Is k = n? If no, set k = k + 1 and go to Step 2; Else Terminate.

In this algorithm, the interval reduces to (2/Fn+1)L after n function evaluations. Thus, for a desired accuracy ε, the number of required function evaluations n can be calculated from the equation

(2/Fn+1)(b − a) = ε.

As is clear from the algorithm, only one function evaluation is required at each iteration. At iteration k, a proportion Fn−k/Fn−k+2 of the search space of the previous iteration is eliminated. For large values of n, this quantity approaches 38.2 per cent, which is better than that in the interval halving method. (Recall that in the interval halving method this quantity is 25 per cent.) However, one difficulty with this algorithm is that the Fibonacci numbers must be calculated and stored in each iteration. We illustrate the working of this algorithm on the same function used earlier.

EXERCISE 2.3.2

Minimize the function f(x) = x² + 54/x.

Step 1 We choose a = 0 and b = 5. Thus, the initial interval is L = 5. Let us also choose the desired number of function evaluations to be three (n = 3). In practice, a large value of n is usually chosen. We set k = 2.

Step 2 We compute L*2 as follows:

L*2 = (F(3−2+1)/F(3+1))L = (F2/F4)(5) = (2/5)(5) = 2.

Thus, we calculate x1 = 0 + 2 = 2 and x2 = 5 − 2 = 3.

Step 3 We compute the function values: f(x1) = 31 and f(x2) = 27. Since f(x1) > f(x2), we eliminate the region (a, x1) or (0, 2). In other words, we set a = 2 and b = 5. Figure 2.9 shows the function values at these two points and the resulting region after Step 3. The exact minimum of the function is also shown.

Figure 2.9 Two iterations of the Fibonacci search method.

Step 4 Since k = 2 ≠ n = 3, we set k = 3 and go to Step 2. This completes one iteration of the Fibonacci search method.

Step 2 We compute L*3 = (F1/F4)L = (1/5)(5) = 1, x1 = 2 + 1 = 3, and x2 = 5 − 1 = 4.

Step 3 We observe that one of the points (x1 = 3) was evaluated in the previous iteration. It is important to note that this is not an accident; the property of the Fibonacci search method is such that at every iteration only one new point needs to be considered. Thus, we compute the function value only at the point x2 = 4: f(x2) = 29.5. By comparing the function values at x1 = 3 and x2 = 4, we observe that f(x1) < f(x2). Therefore, we set a = 2 and b = x2 = 4, since the fundamental rule suggests that the minimum cannot lie beyond x2 = 4.

Step 4 At this iteration, k = n = 3 and we terminate the algorithm. The final interval is (2, 4).

As already stated, after three function evaluations the interval reduces to (2/F4)L = (2/5)(5) = 2. The progress of the above two iterations is shown in Figure 2.9. For better accuracy, a larger value of n is required and more iterations will be needed to achieve that accuracy.
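A short Python rendering of the Fibonacci search follows, intended only as an illustration of the bookkeeping (the book's programs are in FORTRAN). For brevity this sketch re-evaluates both interior points in every pass, whereas the method as described above reuses one point from the previous iteration; the function name fibonacci_search is an assumption of this example.

def fibonacci_search(f, a, b, n):
    """Fibonacci search (Section 2.3.2) with an evaluation budget n."""
    fib = [1, 1]                                   # F0, F1
    while len(fib) < n + 2:                        # generate up to F(n+1)
        fib.append(fib[-1] + fib[-2])              # Eq. (2.1)
    L = b - a                                      # original interval length
    for k in range(2, n + 1):
        Lk = fib[n - k + 1] / fib[n + 1] * L       # Step 2
        x1, x2 = a + Lk, b - Lk
        f1, f2 = f(x1), f(x2)
        if f1 > f2:
            a = x1                                 # minimum cannot lie in (a, x1)
        elif f1 < f2:
            b = x2                                 # minimum cannot lie in (x2, b)
        else:
            a, b = x1, x2                          # minimum lies between the two points
    return a, b

f = lambda x: x**2 + 54.0 / x
print(fibonacci_search(f, 0.0, 5.0, n=3))          # (2.0, 4.0), as in Exercise 2.3.2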
2.3.3 Golden Section Search Method

One difficulty of the Fibonacci search method is that the Fibonacci numbers have to be calculated and stored. Another problem is that at every iteration the proportion of the eliminated region is not the same. In order to overcome these two problems, and yet require only one new function evaluation per iteration, the golden section search method is used. In this algorithm, the search space (a, b) is first linearly mapped to a unit interval search space (0, 1). Thereafter, two points, each at a distance τ times the current interval from either end, are chosen, so that at every iteration the eliminated region is a fraction (1 − τ) of that in the previous iteration (Figure 2.10). This can be achieved by equating 1 − τ with τ², which yields the golden number τ = 0.618. Figure 2.10 can be used to verify that in each iteration one of the two points x1 and x2 is always a point considered in the previous iteration.

Figure 2.10 The points (x1 and x2) used in the golden section search method.

Algorithm

Step 1 Choose a lower bound a and an upper bound b. Also choose a small number ε. Normalize the variable x by using the equation w = (x − a)/(b − a). Thus, aw = 0, bw = 1, and Lw = 1. Set k = 1.

Step 2 Set w1 = aw + (0.618)Lw and w2 = bw − (0.618)Lw. Compute f(w1) or f(w2), depending on whichever of the two was not evaluated earlier. Use the fundamental region-elimination rule to eliminate a region. Set the new aw and bw.

Step 3 Is |Lw| < ε? If no, set k = k + 1 and go to Step 2; Else Terminate.

In this algorithm, the interval reduces to (0.618)^(n−1) after n function evaluations. Thus, the number of function evaluations n required to achieve a desired accuracy ε is calculated by solving the equation

(0.618)^(n−1) (b − a) = ε.

Like the Fibonacci method, only one function evaluation is required at each iteration, and the effective region elimination per function evaluation is exactly 38.2 per cent, which is higher than that in the interval halving method. This quantity is the same as that in the Fibonacci search for large n; in fact, for large n, the Fibonacci search is equivalent to the golden section search.
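A minimal Python sketch of the golden section search is given below, again purely as an illustration next to the book's FORTRAN listing; the helper name to_x, the tolerance, and the bookkeeping of reused points are choices of this example.

def golden_section(f, a, b, eps=1e-3):
    """Golden section search (Section 2.3.3) on the normalized interval w in (0, 1)."""
    tau = 0.618                                        # golden number: 1 - tau = tau**2
    to_x = lambda w: a + w * (b - a)                   # map w back to the original variable
    aw, bw = 0.0, 1.0                                  # Step 1
    w1, w2 = aw + tau * (bw - aw), bw - tau * (bw - aw)
    f1, f2 = f(to_x(w1)), f(to_x(w2))
    while (bw - aw) > eps:                             # Step 3
        if f1 < f2:                                    # minimum cannot lie in (aw, w2)
            aw, w2, f2 = w2, w1, f1                    # the old w1 is reused as the new w2
            w1 = aw + tau * (bw - aw)
            f1 = f(to_x(w1))
        else:                                          # minimum cannot lie in (w1, bw)
            bw, w1, f1 = w1, w2, f2                    # the old w2 is reused as the new w1
            w2 = bw - tau * (bw - aw)
            f2 = f(to_x(w2))
    return to_x(aw), to_x(bw)

f = lambda x: x**2 + 54.0 / x
print(golden_section(f, 0.0, 5.0))                     # shrinks around x* = 3, cf. Exercise 2.3.3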
EXERCISE 2.3.3

Consider the following function again: f(x) = x² + 54/x.

Step 1 We choose a = 0 and b = 5. The transformation equation becomes w = x/5. Thus, aw = 0, bw = 1, and Lw = 1. Since the golden section method works with the transformed variable w, it is convenient to work with the transformed function

f(w) = 25w² + 54/(5w).

In the w-space, the minimum lies at w* = 3/5 = 0.6. We set the iteration counter to k = 1.

Step 2 We set w1 = 0 + (0.618)(1) = 0.618 and w2 = 1 − (0.618)(1) = 0.382. The corresponding function values are f(w1) = 27.02 and f(w2) = 31.92. Since f(w1) < f(w2), the minimum cannot lie at any point smaller than w = 0.382. Thus, we eliminate the region (aw, w2) or (0, 0.382), so that aw = 0.382 and bw = 1. At this stage, Lw = 1 − 0.382 = 0.618. The region eliminated in this iteration is shown in Figure 2.11. The position of the exact minimum at w = 0.6 is also shown.

Figure 2.11 Region eliminations in the first two iterations of the golden section search algorithm.

Step 3 Since |Lw| is not smaller than ε, we set k = 2 and move to Step 2. This completes one iteration of the golden section search method.

Step 2 For the second iteration, we set w1 = 0.382 + (0.618)(0.618) = 0.764 and w2 = 1 − (0.618)(0.618) = 0.618. We observe that the point w2 was computed in the previous iteration. Thus, we only need to compute the function value at w1: f(w1) = 28.73. Using the fundamental region-elimination rule and observing the relation f(w1) > f(w2), we eliminate the interval (0.764, 1). Thus, the new bounds are aw = 0.382 and bw = 0.764, and the new interval is Lw = 0.764 − 0.382 = 0.382, which is incidentally equal to (0.618)². Figure 2.11 shows the final region after two iterations of this algorithm.

Step 3 Since the obtained interval is not smaller than ε, we continue to Step 2 after incrementing the iteration counter k to 3.

Step 2 Here, we observe that w1 = 0.618 and w2 = 0.528, of which the point w1 was evaluated before. Thus, we compute f(w2) only: f(w2) = 27.42. We also observe that f(w1) < f(w2), and we eliminate the interval (0.382, 0.528). The new interval is (0.528, 0.764) and the new range is Lw = 0.764 − 0.528 = 0.236, which is exactly equal to (0.618)³.

Step 3 Thus, at the end of the third iteration, Lw = 0.236. In this way, Steps 2 and 3 may be continued until the desired accuracy is achieved.

We observe that at each iteration only one new function evaluation is necessary. After three iterations, we have performed only four function evaluations, and the interval reduces to (0.618)³ or 0.236. At the end of this chapter, we present a FORTRAN code implementing this algorithm, together with a simulation run on the above function. For other functions, the subroutine funct may be modified and the code rerun.

2.4 Point-Estimation Method

In the previous search methods, only the relative function values of two points were considered to guide the search, but the magnitude of the function values at the chosen points may also provide information about the location of the minimum in the search space. The successive quadratic estimation method described below represents a class of point-estimation methods which use the magnitude and sign of function values to help guide the search. Typically, the function values are computed at a number of points and a unimodal function is fitted through these points exactly. The minimum of the fitted function is taken as a guess of the minimum of the original objective function.

2.4.1 Successive Quadratic Estimation Method

In this algorithm, the fitted curve is a quadratic polynomial. Since any quadratic function can be defined by three points, the algorithm begins with three initial points. Figure 2.12 shows the original function, three initial points x1, x2, and x3, and the fitted quadratic curve through these three points (plotted with a dashed line). The minimum (x̄) of this curve is used as one of the candidate points for the next iteration. For non-quadratic functions, a number of iterations of this algorithm is necessary, whereas for quadratic objective functions the exact minimum can be found in one iteration.

Figure 2.12 The function f(x) and the interpolated quadratic function q(x).

A general quadratic function passing through two points x1 and x2 can be written as

q(x) = a0 + a1(x − x1) + a2(x − x1)(x − x2).
If (x1, f1), (x2, f2), and (x3, f3) are three points on this function, then the following relationships can be obtained:

a0 = f1,   (2.2)

a1 = (f2 − f1)/(x2 − x1),   (2.3)

a2 = (1/(x3 − x2)) [ (f3 − f1)/(x3 − x1) − (f2 − f1)/(x2 − x1) ].   (2.4)

By differentiating q(x) with respect to x and setting the result to zero, it can be shown that the minimum of the above function is at

x̄ = (x1 + x2)/2 − a1/(2a2).   (2.5)

The above point is an estimate of the minimum point provided q''(x̄) > 0, or a2 > 0, which depends only on the choice of the three basic points. Among the four points (x1, x2, x3, and x̄), the best three points are kept and a new interpolated function q(x) is found again. This procedure continues until two consecutive estimates are close to each other. Based on these results, we present Powell's algorithm (Powell, 1964).

Algorithm

Step 1 Let x1 be an initial point and Δ the step size. Compute x2 = x1 + Δ.

Step 2 Evaluate f(x1) and f(x2).

Step 3 If f(x1) > f(x2), let x3 = x1 + 2Δ; Else let x3 = x1 − Δ. Evaluate f(x3).

Step 4 Determine Fmin = min(f1, f2, f3) and let Xmin be the point xi that corresponds to Fmin.

Step 5 Use the points x1, x2, and x3 to calculate x̄ using Equation (2.5).

Step 6 Are |Fmin − f(x̄)| and |Xmin − x̄| small? If not, go to Step 7; Else the optimum is the best of the current four points and Terminate.

Step 7 Save the best point and the two points bracketing it, if possible; otherwise, save the best three points. Relabel them according to x1 < x2 < x3 and go to Step 4.

In the above algorithm, no check is made to ensure a2 > 0. Such a check can be incorporated in Step 5: if a2 is found to be negative, one of the three points may be replaced by a random point, and the process is continued until the quantity a2 becomes nonnegative.

EXERCISE 2.4.1

We consider again the same unimodal, single-variable function

f(x) = x² + 54/x

to illustrate the working principle of the algorithm.

Step 1 We choose x1 = 1 and Δ = 1. Thus, x2 = 1 + 1 = 2.

Step 2 The corresponding function values are f(x1) = 55 and f(x2) = 31.

Step 3 Since f(x1) > f(x2), we set x3 = 1 + 2(1) = 3, and the function value there is f(x3) = 27.

Step 4 By comparing the function values at these points, we observe that the minimum function value is Fmin = min(55, 31, 27) = 27 and the corresponding point is Xmin = x3 = 3.

Step 5 Using Equations (2.2) to (2.4), we calculate the following parameters:

a0 = 55,
a1 = (31 − 55)/(2 − 1) = −24,
a2 = (1/(3 − 2)) [ (27 − 55)/(3 − 1) − (−24) ] = 10.

Since a2 > 0, the estimated minimum is x̄ = (1 + 2)/2 − (−24)/(2 × 10) = 2.7. The corresponding function value is f(x̄) = 27.29.

Step 6 Let us assume that |27 − 27.29| and |3 − 2.7| are not small enough to terminate. Thus, we proceed to Step 7.

Step 7 The best point is x3 = 3, which is an extreme point. Thus, we consider the best three points: x1 = 2, x2 = 2.7, and x3 = 3. This completes one iteration of the algorithm. To continue with the next iteration, we proceed to Step 4.

Step 4 At this stage, Fmin = min(31, 27.29, 27) = 27 and the corresponding point is Xmin = 3.

Step 5 Using Equations (2.3) and (2.4), we obtain a1 = −5.3 and a2 = 4.33, which is positive. The estimated minimum is x̄ = 2.96, and the corresponding function value is f(x̄) = 27.005.

Step 6 Here, the values |27 − 27.005| and |3 − 2.96| may be assumed to be small. Therefore, we terminate the process and declare that the minimum of the function is the best of the current four points. In this case, the minimum is x* = 3 with f(x*) = 27.

It is observed that for well-behaved unimodal functions this method finds the minimum point faster than the region-elimination methods, but for skewed functions the golden section search is better than Powell's method.
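The estimation step of Equations (2.2) to (2.5) is easy to reproduce. The following Python fragment is only an illustration of that single step (it is not the book's FORTRAN code, and it does not carry out the full Powell iteration); the name quadratic_estimate is an assumption of this example.

def quadratic_estimate(x1, x2, x3, f1, f2, f3):
    """Minimum of the quadratic through (x1, f1), (x2, f2), (x3, f3); Equations (2.2)-(2.5)."""
    a1 = (f2 - f1) / (x2 - x1)                              # Eq. (2.3)
    a2 = ((f3 - f1) / (x3 - x1) - a1) / (x3 - x2)           # Eq. (2.4)
    if a2 <= 0.0:
        raise ValueError("a2 <= 0: the fitted quadratic has no interior minimum")
    return 0.5 * (x1 + x2) - a1 / (2.0 * a2)                # Eq. (2.5)

f = lambda x: x**2 + 54.0 / x
x1, x2, x3 = 1.0, 2.0, 3.0
xbar = quadratic_estimate(x1, x2, x3, f(x1), f(x2), f(x3))
print(xbar, f(xbar))       # about 2.7 and 27.29, the first estimate in Exercise 2.4.1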
2.5 Gradient-based Methods

All the methods described in the earlier sections work with direct function values and not with derivative information. The algorithms discussed in this section require derivative information. In many real-world problems it is difficult to obtain information about derivatives, either because of the nature of the problem or because of the computations involved in calculating them. Despite these difficulties, gradient-based methods are popular and are often found to be effective, although it is recommended to use these algorithms on problems where the derivative information is available or can be calculated easily. The optimality property that at a local or a global optimum the gradient is zero can be used to terminate the search process.

2.5.1 Newton-Raphson Method

The goal of an unconstrained local optimization method is to achieve a point having as small a derivative as possible. In the Newton-Raphson method, a linear approximation to the first derivative of the function is made at a point using the Taylor series expansion. That expression is equated to zero to find the next guess. If the current point at iteration t is x^(t), the point in the next iteration is governed by the following simple equation (obtained by retaining terms up to the linear term in the Taylor series expansion of the derivative):

x^(t+1) = x^(t) − f'(x^(t))/f''(x^(t)).   (2.6)

Algorithm

Step 1 Choose an initial guess x^(1) and a small number ε. Set k = 1. Compute f'(x^(1)).

Step 2 Compute f''(x^(k)).

Step 3 Calculate x^(k+1) = x^(k) − f'(x^(k))/f''(x^(k)). Compute f'(x^(k+1)).

Step 4 If |f'(x^(k+1))| < ε, Terminate; Else set k = k + 1 and go to Step 2.

Convergence of the algorithm depends on the initial point and the nature of the objective function. For mathematical functions the derivative may be easy to compute, but in practice the gradients have to be computed numerically. At a point x^(t), the first and second derivatives are computed using the central difference method (Scarborough, 1966) as follows:

f'(x^(t)) = [f(x^(t) + Δx^(t)) − f(x^(t) − Δx^(t))] / (2Δx^(t)),   (2.7)

f''(x^(t)) = [f(x^(t) + Δx^(t)) − 2f(x^(t)) + f(x^(t) − Δx^(t))] / (Δx^(t))².   (2.8)

The parameter Δx^(t) is usually taken to be a small value. In all our calculations, we assign Δx^(t) to be about 1 per cent of x^(t):

Δx^(t) = 0.01|x^(t)|, if |x^(t)| > 0.01; 0.0001, otherwise.   (2.9)

According to Equations (2.7) and (2.8), the first derivative requires two function evaluations and the second derivative requires three function evaluations.

EXERCISE 2.5.1

Consider the minimization problem f(x) = x² + 54/x.

Step 1 We choose an initial guess x^(1) = 1, a termination factor ε = 10⁻³, and an iteration counter k = 1. We compute the derivative using Equation (2.7). The small increment, as computed using Equation (2.9), is 0.01. The computed derivative is −52.005, whereas the exact derivative at x^(1) is −52. We accept the computed derivative value and proceed to Step 2.

Step 2 The exact second derivative of the function at x^(1) = 1 is 110. The second derivative computed using Equation (2.8) is f''(x^(1)) = 110.011, which is close to the exact value.

Step 3 We compute the next guess,

x^(2) = x^(1) − f'(x^(1))/f''(x^(1)) = 1 − (−52.005)/(110.011) = 1.473.

The derivative computed using Equation (2.7) at this point is f'(x^(2)) = −21.944.

Step 4 Since |f'(x^(2))| is not less than ε, we increment k to 2 and go to Step 2. This completes one iteration of the Newton-Raphson method.

Step 2 We begin the second iteration by computing the second derivative numerically at x^(2): f''(x^(2)) = 35.796.

Step 3 The next guess, computed using Equation (2.6), is x^(3) = 2.086, with f'(x^(3)) = −8.239 computed numerically.

Step 4 Since |f'(x^(3))| is not less than ε, we set k = 3 and move to Step 2. This is the end of the second iteration.

Step 2 The second derivative at the point x^(3) is f''(x^(3)) = 13.899.

Step 3 The new point is calculated as x^(4) = 2.679, and the derivative there is f'(x^(4)) = −2.167. Nine function evaluations were required to obtain this point.

Step 4 Since the absolute value of this derivative is not smaller than ε, the search proceeds to Step 2.

After three more iterations, we find that x^(7) = 3.0001 and the derivative is f'(x^(7)) = −4 × 10⁻⁸, which is small enough to terminate the algorithm. Since at every iteration the first and second-order derivatives are calculated at a new point, a total of three new function values are evaluated at every iteration.
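A small Python sketch of the Newton-Raphson iteration with the central-difference derivatives of Equations (2.7) to (2.9) is given below, purely as an illustration; the function name newton_raphson and the iteration cap are assumptions of this example.

def newton_raphson(f, x, eps=1e-3, max_iter=50):
    """Newton-Raphson with numerically computed derivatives, Equations (2.6)-(2.9)."""
    def step(xv):
        return 0.01 * abs(xv) if abs(xv) > 0.01 else 1e-4        # Eq. (2.9)
    for _ in range(max_iter):
        h = step(x)
        d1 = (f(x + h) - f(x - h)) / (2.0 * h)                   # Eq. (2.7)
        if abs(d1) < eps:                                        # Step 4
            return x
        d2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2           # Eq. (2.8)
        x = x - d1 / d2                                          # Eq. (2.6)
    return x

f = lambda x: x**2 + 54.0 / x
print(newton_raphson(f, x=1.0))        # converges to about 3.0, cf. Exercise 2.5.1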
2.5.2 Bisection Method

The Newton-Raphson method involves computation of the second derivative, the numerical evaluation of which requires three function evaluations. In the bisection method, the computation of the second derivative is avoided; only the first derivative is used. Both the function value and the sign of the derivative at two points are used to eliminate a certain portion of the search space. This method is similar to the region-elimination methods discussed in Section 2.3, but here derivatives are used to make the decision about the region to be eliminated. The algorithm once again assumes unimodality of the function.

Using the derivative information, the minimum is said to be bracketed in the interval (a, b) if the two conditions f'(a) < 0 and f'(b) > 0 are satisfied. Like other region-elimination methods, this algorithm requires two initial boundary points bracketing the minimum; a bracketing algorithm described in Section 2.2 may be used to find them. In the bisection method, derivatives at the two boundary points and at the middle point are calculated and compared. Of these three points, two consecutive points with derivatives having opposite signs are chosen for the next iteration.

Algorithm

Step 1 Choose two points a and b such that f'(a) < 0 and f'(b) > 0. Also choose a small number ε. Set x1 = a and x2 = b.

Step 2 Calculate z = (x1 + x2)/2 and evaluate f'(z).

Step 3 If |f'(z)| ≤ ε, Terminate; Else if f'(z) < 0, set x1 = z and go to Step 2; Else if f'(z) > 0, set x2 = z and go to Step 2.

The sign of the first derivative at the mid-point of the current search region is used to eliminate half of the search region. If the derivative is negative, the minimum cannot lie in the left half of the search region, and if the derivative is positive, the minimum cannot lie in the right half of the search space.

EXERCISE 2.5.2

Consider again the function f(x) = x² + 54/x.

Step 1 We choose two points a = 2 and b = 5 such that f'(a) = −9.501 and f'(b) = 7.841 are of opposite sign. The derivatives are computed numerically using Equation (2.7). We also choose a small number ε = 10⁻³.

Step 2 We calculate the quantity z = (x1 + x2)/2 = 3.5 and compute f'(z) = 2.591.

Step 3 Since f'(z) > 0, the right half of the search space needs to be eliminated. Thus, we set x1 = 2 and x2 = z = 3.5. This completes one iteration of the algorithm. The algorithm works much like the interval halving method described in Section 2.3.1: at each iteration only half of the search region is eliminated, but here the decision about which half to delete depends on the derivative at the mid-point of the interval.

Step 2 We compute z = (2 + 3.5)/2 = 2.750 and f'(z) = −1.641.

Step 3 Since f'(z) < 0, we set x1 = 2.750 and x2 = 3.500.

Step 2 The new point z is the average of the two bounds: z = 3.125. The derivative at this point is f'(z) = 0.720.

Step 3 Since |f'(z)| is not less than ε, we continue with Step 2.

Thus, at the end of 10 function evaluations, we have obtained the interval (2.750, 3.125), bracketing the minimum point x* = 3.0. The guess of the minimum point is the mid-point of the obtained interval, or x = 2.938. This process continues until we find a point with a vanishing derivative. Since at each iteration the gradient is evaluated at only one new point, the bisection method requires two function evaluations per iteration. In this method, exactly half the region is eliminated at every iteration; but by also using the magnitude of the gradient, a faster algorithm can be designed that adaptively eliminates variable portions of the search region, a matter which we discuss in the following subsection.
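A Python sketch of the bisection steps follows, for illustration only. It takes the derivative f' as a ready-made function, whereas the book computes it numerically through Equation (2.7); the name bisection and the iteration cap are assumptions of this example.

def bisection(fprime, a, b, eps=1e-3, max_iter=100):
    """Bisection on the derivative (Section 2.5.2); requires f'(a) < 0 and f'(b) > 0."""
    x1, x2 = a, b                      # Step 1
    z = 0.5 * (x1 + x2)
    for _ in range(max_iter):
        z = 0.5 * (x1 + x2)            # Step 2
        g = fprime(z)
        if abs(g) < eps:               # Step 3: vanishing derivative
            return z
        if g < 0.0:
            x1 = z                     # minimum cannot lie in the left half
        else:
            x2 = z                     # minimum cannot lie in the right half
    return z

fprime = lambda x: 2.0 * x - 54.0 / x**2      # exact derivative of x**2 + 54/x
print(bisection(fprime, 2.0, 5.0))            # approaches x* = 3, cf. Exercise 2.5.2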
In this method, exactly half the region is eliminated at every iteration; but using the magnitude of the gradient, a faster algorithm can be designed to adaptively eliminate variable portions of search region—a matter which we discuss in the following subsection.Single-variable Optimization Algorithms 67 2.5.3 Secant Method Jn the secant method, both magnitude and sign of derivatives are used to create a new point. The derivative of the function is assumed to vary Tmearly between the two chosen boundary points. Since boundary points have derivatives with opposite signs and the derivatives vary linearly between the boundary points, there exists a point between these two points with a zero derivative, Knowing the derivatives at the boundary points, the point with gero derivative can be easily found. If at two points «1 and ga, the quantity (21)f' (22) < 0, the linear approximation of the derivative x and 22 will have © zero derivative at the point = given by polenta nokeif Cs) ocemnibes 7="- Gay fle)/e@— A) (2.10) In this method, in one iteration more than half the search space may be climinated depending on the gradient values at the two chosen points, However, smaller than half the search space may also be eliminated in one iteration. Algorithm ‘The algorithm is the same as the bisection method except that Step 2 is modified as follows: Step 2 Caloulate the new point 2 using Equation (2.10) and evaluate f’(2)- | This algorithm also requizes only one gradient evaluation at every eemation. Thus, only two function values are required per iteration. ISE 2.5.3 ‘once again the function: f(a) =2? + 54/2. points a = 2 and b = 5 having derivatives — 9,501 and f'(b) = 7.841 with opposite signs. We also choose 30-% and set 1, =2 and zz = 5. In any problem, a number of iterations be required to find two points with opposite gradient values. 2 We now calculate a new point using Equation (2.10): 2=5-r6)/(LO-F) 1 We begin with initial 3.644. derivative at this point, computed numerically, is {’(2) = 3.221. The ‘sm of finding the new point is depicted in Figure 2.13. 3 Since f'(z) > 0, we eliminate the right part (the region (z,b)) original scarch region. ‘The amount of eliminated search space is68 Optimization for Engineering Design: Algorithms and Examples Iteration 1 Iteration 2 ° Oy deat Bs Se ore aig Figure 2.13 Two iterations of the region-elimination technique in the secant method. (0 — 2) = 1.356, which is less than half the search space (b— a)/2 = 25. We set 2: = 2 and x2 = 3.644. This completes one iteration of the secant method. Step 2 The next point which is computed using Equation (2.10) is z = 3.228. The derivative at this point is f'(z) = 1.197. Step 3 Since f(z) > 0, we eliminate the right part of the search space, that is, we discard the region (3.228, 3.644). The amount of eliminated search Space is 0.416, which is also smaller than half of the previous search space (3.644 — 2)/2 or 0.822. In both these iterations, the eliminated region is less than half of the search space, but in some iterations, a region more than the half of the search space can also be eliminated. Thus, we set x; = 2 and rq = 3.208. Step 2 The new point, z = 3.101 and f’(z) = 0.586. Step 3. Since |f"(z)| ¢¢, we continue with Step 2. At the end of 10 function evaluations, the guess of the true minimum Point is computed using Equation (2.10): 2 = 3.037. This point is closer to the true minimum point (2* = 3. 0) than that obtained using the bisection method. 
2.5.4 Cubic Search Method

This method is similar to the successive quadratic point-estimation method discussed in Section 2.4.1, except that derivatives are used to reduce the number of required initial points. For example, a cubic function

f̄(x) = a0 + a1(x − x1) + a2(x − x1)(x − x2) + a3(x − x1)²(x − x2)

has four unknowns a0, a1, a2, and a3 and therefore requires at least four points to determine the function. But the function can also be determined exactly by specifying the function value as well as the first derivative at only two points: (x1, f1, f1') and (x2, f2, f2'). Thereafter, by setting the derivative of the above equation to zero, the minimum of the above function can be obtained (Reklaitis et al., 1983):

x̄ = x2,                  if μ < 0,
x̄ = x2 − μ(x2 − x1),     if 0 ≤ μ ≤ 1,   (2.11)
x̄ = x1,                  if μ > 1,

where

z = 3(f1 − f2)/(x2 − x1) + f1' + f2',
w = (z² − f1' f2')^(1/2) (x2 − x1)/|x2 − x1|,
μ = (f2' + w − z)/(f2' − f1' + 2w).

As in Powell's successive quadratic estimation method, the minimum x̄ of the approximating function can be used as an estimate of the true minimum of the objective function. This estimate and the earlier two points (x1 and x2) may be used to find the next estimate of the true minimum point. The two points (x1 and x2) are so chosen that the product of their first derivatives is negative. This procedure may be continued until the desired accuracy is achieved.

Algorithm

Step 1 Choose an initial point x^(0), a step size Δ, and two termination parameters ε1 and ε2. Compute f'(x^(0)). If f'(x^(0)) > 0, set Δ = −Δ. Set k = 0.

Step 2 Compute x^(k+1) = x^(k) + 2^k Δ.

Step 3 Evaluate f'(x^(k+1)). If f'(x^(k+1)) f'(x^(k)) ≤ 0, set x1 = x^(k), x2 = x^(k+1), and go to Step 4; Else set k = k + 1 and go to Step 2.

Step 4 Calculate the point x̄ using Equation (2.11).

Step 5 If f(x̄) < f(x1), go to Step 6; Else set x̄ = x̄ − ½(x̄ − x1) until f(x̄) < f(x1) is achieved.

Step 6 Compute f'(x̄). If |f'(x̄)| ≤ ε1 and |(x̄ − x1)/x̄| ≤ ε2, Terminate; Else if f'(x̄) f'(x1) < 0, set x2 = x̄; Else set x1 = x̄. Go to Step 4.

This method is most effective if exact derivative information is available. The bracketing of the minimum point is achieved in the first three steps, and the bracketing algorithm adopted here is similar to that of the bounding phase method. Except at the first iteration, in all other iterations the function value as well as the first derivative are calculated at only one new point. Thus, at every iteration, only two new function evaluations are required. The first iteration requires repetitive execution of Steps 2 and 3 to obtain two bracketing points. If the new point x̄ is better than the point x1, one of the two points x1 or …
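The interpolation step of Equation (2.11) can be sketched in a few lines. The Python fragment below is illustrative only (it is not the book's FORTRAN code and implements just the single cubic estimate, not the full algorithm); the name cubic_estimate and the clamping of μ outside (0, 1) follow the description above.

def cubic_estimate(x1, x2, f1, f2, g1, g2):
    """One cubic-interpolation estimate from (x1, f1, f1') and (x2, f2, f2'); Equation (2.11)."""
    z = 3.0 * (f1 - f2) / (x2 - x1) + g1 + g2
    w = (z * z - g1 * g2) ** 0.5           # real because the bracket gives g1 * g2 < 0
    if x2 - x1 < 0.0:
        w = -w                             # w carries the sign of (x2 - x1)
    mu = (g2 + w - z) / (g2 - g1 + 2.0 * w)
    if mu < 0.0:
        return x2
    if mu > 1.0:
        return x1
    return x2 - mu * (x2 - x1)

# bracketing points for f(x) = x**2 + 54/x: f'(2) < 0 and f'(5) > 0
f  = lambda x: x**2 + 54.0 / x
fp = lambda x: 2.0 * x - 54.0 / x**2
xbar = cubic_estimate(2.0, 5.0, f(2.0), f(5.0), fp(2.0), fp(5.0))
print(xbar, f(xbar))                       # a first estimate close to x* = 3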
... x2), respectively, and the first derivative f1' at x1.
(i) Find a0, a1, and a2 in terms of x1, x2, f1, f2, and f1'.
(ii) Find the minimum of the approximating function.
(iii) In finding the minimum of f(x) = x³ − 3x + 2, we would like to use the above approximating polynomial q(x) for the following two scenarios:
(a) x1 = 2 and x2 = 3. Find the approximate minimum solution obtained after one iteration of this approach.
(b) x1 = 5 and x2 = 5. Explain why we cannot use these points to successfully find the minimum of f(x).

2-8 Use two iterations of Powell's quadratic estimation method to minimize the following function: f(x) = 2 exp(x) − x³ − 10x.

2-9 Use three iterations of the bisection and the secant methods to minimize the following function: f(x) = exp(0.2x) − (x + 3)² − 0.01x⁴. Compare the algorithms in terms of the interval obtained at the end of three iterations.

2-10 Compare the golden section search and the interval halving method in terms of the obtained interval after 10 function evaluations for the minimization of the function f(x) = x² − exp(0.1x) in the interval (−10, 5).

2-11 Compare the bisection and secant methods in terms of the obtained interval after 10 function evaluations for the minimization of the function f(x) = exp(x) − x³ in the interval (2, 5). How does the outcome change if the interval (−2, 5) is used?

2-12 Find at least one root of each of the following functions:
f(x) = x³ + 5x² − 3.
f(x) = (x + 10)² − 0.01x.
f(x) = exp(x) − x³.
f(x) = (2x − 5)⁴ − (x² − 1)³.
f(x) = ((x + 2)² + 10)² − x⁴.

2-13 Perform two iterations of the cubic search method to minimize the function f(x) = (x² − 1)³ − (2x − 5)⁴.

2-14 In trying to solve the following problem using the feasible direction method, at the current point x = (0.5, 0.5) a direction vector d is obtained:

Minimize (x1 − 1)² + (x2 − 3)²

subject to

x1² + x2 − 1.8 ≤ 0,
2x1 − x2 − 1 ≥ 0,
x1, x2 ≥ 0.

Use … iterations of the interval halving method to find the bracketing points. Find the exact minimum point along d and compare. (The feasible direction method is discussed in Chapter 4.)

COMPUTER PROGRAMS

In this section, we present two FORTRAN codes implementing the bounding phase algorithm and the golden section search algorithm. These codes are written in a step-by-step format as described in Sections 2.2.2 and 2.3.3, and they show how the other algorithms outlined in this chapter can be easily coded. We first present the code for the bounding phase algorithm and then the code for the golden section search method. Sample runs showing the working of the codes are also presented.

Bounding phase method

The bounding phase algorithm is coded in subroutine bphase. The objective function is coded in function funct. In the given code, the objective function f(x) = x² + 54/x is used. The user needs to modify the function funct for a different objective function.

c*************************************************************
c     Developed by Dr. Kalyanmoy Deb
c     Indian Institute of Technology, Kanpur
c     All rights reserved.
c     This listing ...
c*************************************************************
c     Change the function funct() for a new function
c*************************************************************
      implicit real*8 (a-h,o-z)
      nfun = 0
      call bphase(a,b,nfun)
      write(*,5) a,b,nfun
 5    format(2x,'The bounds are (',f10.3,', ',f10.3,')',
     =       /,2x,'Total function evaluations: ',i6)
      stop
      end
c*************************************************************
c     *                                                       *
c     *              BOUNDING PHASE METHOD                    *
c     *                                                       *
c*************************************************************
      subroutine bphase(a,b,nfun)
c.....bounding phase algorithm
c*************************************************************
c     a and b are lower and upper bounds (output)
c     nfun is the number of function evaluations (output)
c*************************************************************
      implicit real*8 (a-h,o-z)
c.....step 1 of the algorithm
 1    write(*,*) 'enter x0, delta'
      read(*,*) x0,delta
      call funct(x0-delta,fn,nfun)