Numerical Methods for Engineers: 17.1 Linear Regression

This document discusses least-squares regression, particularly focusing on linear regression as a method for fitting a straight line to paired observations. It outlines various criteria for determining the 'best' fit, emphasizing the minimization of the sum of the squares of the residuals as the most effective approach. Additionally, it covers the quantification of error, the correlation coefficient, and the importance of visualizing data alongside regression results.

points and the curve. A technique for accomplishing this objective, called least-squares regression, will be discussed in the present chapter.

17.1 LINEAR REGRESSION


The simplest example of a least-squares approximation is fitting a straight line to a set of
paired observations: (x1, y1), (x2, y2), …, (xn, yn). The mathematical expression for the
straight line is
y = a0 + a1x + e    (17.1)
Copyright © 2020. McGraw-Hill US Higher Ed ISE. All rights reserved.

Chapra, Steven, and Raymond Canale. Numerical Methods for Engineers, McGraw-Hill US Higher Ed ISE, 2020. ProQuest Ebook Central,
http://ebookcentral.proquest.com/lib/undip-ebooks/detail.action?docID=6212769.
Created from undip-ebooks on 2025-06-04 18:51:26.

FIGURE 17.1
(a) Data exhibiting significant error. (b) Polynomial fit oscillating beyond the range of the data. (c) More
satisfactory result using the least-squares fit.

where a0 and a1 are coefficients representing the intercept and the slope, respectively, and e is the error, or residual, between the model and the observations, which can be represented by rearranging Eq. (17.1) as

e = y − a0 − a1x
Thus, the error, or residual, is the discrepancy between the true value of y and the
approximate value, a0 + a1x, predicted by the linear equation.
17.1.1 Criteria for a “Best” Fit
One strategy for fitting a “best” line through the data would be to minimize the sum of the
residual errors for all the available data, as in
Σei = Σ(yi − a0 − a1xi)    (17.2)

where n = total number of points. However, this is an inadequate criterion, as illustrated by Fig. 17.2a, which depicts the fit of a straight line to two points. Obviously, the best fit is the line connecting the points. However, any straight line passing through the midpoint of the connecting line (except a perfectly vertical line) results in a minimum value of Eq. (17.2) equal to zero because the errors cancel.
FIGURE 17.2
Examples of some criteria for “best fit” that are inadequate for regression: (a) minimizes the sum of the residuals, (b) minimizes the sum of the absolute values of the residuals, and (c) minimizes the maximum error of any individual point.

Therefore, another logical criterion might be to minimize the sum of the absolute values of the discrepancies, as in

Σ|ei| = Σ|yi − a0 − a1xi|
Figure 17.2b demonstrates why this criterion is also inadequate. For the four points shown,
any straight line falling within the dashed lines will minimize the sum of the absolute values.
Thus, this criterion also does not yield a unique best fit.

A third strategy for fitting a best line is the minimax criterion. In this technique, the line
is chosen that minimizes the maximum distance that an individual point falls from the line.
As depicted in Fig. 17.2c, this strategy is ill-suited for regression because it gives undue
influence to an outlier, that is, a single point with a large error. It should be noted that the
minimax principle is sometimes well-suited for fitting a simple function to a complicated
function (Carnahan, Luther, and Wilkes 1969).
A strategy that overcomes the shortcomings of the aforementioned approaches is to
minimize the sum of the squares of the residuals between the measured y and the y calculated
with the linear model,
Sr = Σei² = Σ(yi,measured − yi,model)² = Σ(yi − a0 − a1xi)²    (17.3)

This criterion has a number of advantages, including the fact that it yields a unique line for a
given set of data. Before discussing these properties, we will present a technique for
determining the values of a0 and a1 that minimize the result of Eq. (17.3).

17.1.2 Least-Squares Fit of a Straight Line


To determine values for a0 and a1, Eq. (17.3) is differentiated with respect to each coefficient:

∂Sr/∂a0 = −2 Σ(yi − a0 − a1xi)
∂Sr/∂a1 = −2 Σ[(yi − a0 − a1xi)xi]
Note that we have simplified the summation symbols; unless otherwise indicated, all
summations are from i = 1 to n. Setting these derivatives equal to zero will result in a minimum Sr. If this is done, the equations can be expressed as

0 = Σyi − Σa0 − Σa1xi
0 = Σxiyi − Σa0xi − Σa1xi²
Now, realizing that Σa0 = na0, we can express the equations as a set of two simultaneous linear equations with two unknowns (a0 and a1):

na0 + (Σxi)a1 = Σyi    (17.4)
(Σxi)a0 + (Σxi²)a1 = Σxiyi    (17.5)

These are called the normal equations. They can be solved simultaneously for

a1 = (nΣxiyi − ΣxiΣyi) / (nΣxi² − (Σxi)²)    (17.6)

This result can then be used in conjunction with Eq. (17.4) to solve for

a0 = ȳ − a1x̄    (17.7)

where ȳ and x̄ are the means of y and x, respectively.
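Equations (17.6) and (17.7) map directly onto code. The following sketch (Python, with made-up illustrative data rather than the data of Table 17.1; the function name fit_line is our own, not from the text) computes the slope and intercept from the accumulated sums:

```python
def fit_line(x, y):
    """Least-squares straight line y = a0 + a1*x via Eqs. (17.6) and (17.7)."""
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(xi * yi for xi, yi in zip(x, y))
    sum_x2 = sum(xi * xi for xi in x)
    a1 = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)  # Eq. (17.6)
    a0 = sum_y / n - a1 * sum_x / n                                # Eq. (17.7)
    return a0, a1

# Hypothetical data lying exactly on y = 2x, so the fit recovers it:
a0, a1 = fit_line([1, 2, 3], [2, 4, 6])
print(a0, a1)  # 0.0 2.0
```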

EXAMPLE 17.1 Linear Regression

Problem Statement. Fit a straight line to the x and y values in the first two columns of
Table 17.1.

TABLE 17.1 Computations for an error analysis of the linear fit.

Solution. The following quantities can be computed:

n = 7    Σxiyi = 119.5    Σxi² = 140
Σxi = 28    x̄ = 28/7 = 4
Σyi = 24.0    ȳ = 24/7 = 3.428571

Using Eqs. (17.6) and (17.7),

a1 = [7(119.5) − 28(24.0)] / [7(140) − (28)²] = 0.8392857
a0 = 3.428571 − 0.8392857(4) = 0.07142857
Therefore, the least-squares fit is

y = 0.07142857 + 0.8392857x
The line, along with the data, is shown in Fig. 17.1c.

17.1.3 Quantification of Error of Linear Regression


Any line other than the one computed in Example 17.1 results in a larger sum of the squares
of the residuals. Thus, the line is unique and in terms of our chosen criterion is a “best” line
through the points. A number of additional properties of this fit can be elucidated by
examining more closely the way in which residuals were computed. Recall that the sum of
the squares is defined as [Eq. (17.3)]
Sr = Σei² = Σ(yi − a0 − a1xi)²    (17.8)

Notice the similarity between Eqs. (PT5.3) and (17.8). In the former case, the square of
the residual represented the square of the discrepancy between the data and a single estimate
of the measure of central tendency—the mean. In Eq. (17.8), the square of the residual
represents the square of the vertical distance between the data and another measure of central
tendency—the straight line (Fig. 17.3).
The analogy can be extended further for cases where (1) the spread of the points around
the line is of similar magnitude along the entire range of the data and (2) the distribution of
these points about the line is normal. It can be demonstrated that if these criteria are met,
least-squares regression will provide the best (that is, the most likely) estimates of a0 and a1
(Draper and Smith 1981). This is called the maximum likelihood principle in statistics. In
addition, if these criteria are met, a “standard deviation” for the regression line can be

determined as [compare with Eq. (PT5.2)]

FIGURE 17.3
The residual in linear regression represents the vertical distance between a data point and the straight
line.

sy/x = √(Sr/(n − 2))    (17.9)

where sy/x is called the standard error of the estimate. The subscript notation “y/x” designates
that the error is for a predicted value of y corresponding to a particular value of x. Also,
notice that we now divide by n − 2 because two data-derived estimates—a0 and a1—were
used to compute Sr; thus, we have lost two degrees of freedom. As in our discussion of the
standard deviation in PT5.2.1, another justification for dividing by n − 2 is that there is no
such thing as the “spread of data” around a straight line connecting two points. Thus, for the
case where n = 2, Eq. (17.9) yields a meaningless result of infinity.
Just as was the case with the standard deviation, the standard error of the estimate quantifies the spread of the data. However, sy/x quantifies the spread around the regression
line, as shown in Fig. 17.4b, in contrast to the original standard deviation sy, which quantified
the spread around the mean (Fig. 17.4a).
The above concepts can be used to quantify the “goodness” of our fit. This is
particularly useful for comparison of several regressions (Fig. 17.5). To do this, we return to
the original data and determine the total sum of the squares around the mean for the
dependent variable (in our case, y). As was the case for Eq. (PT5.3), this quantity is
designated St. This is the magnitude of the residual error associated with the dependent variable prior to regression. After performing the regression, we can compute Sr, the sum of
the squares of the residuals around the regression line. This characterizes the residual error
that remains after the regression. It is, therefore, sometimes called the unexplained sum of the
squares. The difference between the two quantities, St − Sr, quantifies the improvement, or
error reduction, due to describing the data in terms of a straight line rather than as an average
value. Because the magnitude of this quantity is scale-dependent, the difference is
normalized to St to yield

FIGURE 17.4
Regression data showing (a) the spread of the data around the mean of the dependent variable and (b)
the spread of the data around the best-fit line. The reduction in the spread in going from (a) to (b), as
indicated by the bell-shaped curves at the right, represents the improvement due to linear regression.

FIGURE 17.5
Examples of linear regression with (a) small and (b) large residual errors.

r² = (St − Sr)/St    (17.10)

where r² is called the coefficient of determination and r is the correlation coefficient. For a perfect fit, Sr = 0 and r = r² = 1, signifying that the line explains 100 percent of the variability of the data. For r = r² = 0, Sr = St and the fit represents no improvement. An alternative formulation for r that is more convenient for computer implementation is

r = [nΣxiyi − (Σxi)(Σyi)] / (√[nΣxi² − (Σxi)²] √[nΣyi² − (Σyi)²])    (17.11)
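As a sketch (Python, with our own illustrative data), the two routes to r, via St and Sr in Eq. (17.10) and via the direct formula of Eq. (17.11), should agree for a positive slope; note that Eq. (17.10) only recovers the magnitude of r:

```python
import math

def r_two_ways(x, y):
    """Correlation coefficient via Eq. (17.10) and via Eq. (17.11)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    a1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sum(
        (xi - xbar) ** 2 for xi in x)
    a0 = ybar - a1 * xbar
    St = sum((yi - ybar) ** 2 for yi in y)                      # spread about the mean
    Sr = sum((yi - a0 - a1 * xi) ** 2 for xi, yi in zip(x, y))  # spread about the line
    r_from_St_Sr = math.sqrt((St - Sr) / St)                    # Eq. (17.10)
    sx, sy, sxy = sum(x), sum(y), sum(xi * yi for xi, yi in zip(x, y))
    sxx, syy = sum(xi ** 2 for xi in x), sum(yi ** 2 for yi in y)
    r_direct = (n * sxy - sx * sy) / (
        math.sqrt(n * sxx - sx ** 2) * math.sqrt(n * syy - sy ** 2))  # Eq. (17.11)
    return r_from_St_Sr, r_direct

# Hypothetical, nearly linear data:
r1, r2 = r_two_ways([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8])
```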


EXAMPLE 17.2 Estimation of Errors for the Linear Least-Squares Fit

Problem Statement. Compute the total standard deviation, the standard error of the estimate, and the correlation coefficient for the data in Example 17.1.

Solution. The summations are performed and presented in Table 17.1. The standard deviation is [Eq. (PT5.2)]

sy = √(22.7143/(7 − 1)) = 1.9457

and the standard error of the estimate is [Eq. (17.9)]

sy/x = √(2.9911/(7 − 2)) = 0.7735
Thus, because sy/x < sy, the linear regression model has merit. The extent of the improvement is quantified by [Eq. (17.10)]

r² = (22.7143 − 2.9911)/22.7143 = 0.868

or

r = 0.932

These results indicate that 86.8% of the original uncertainty has been explained by the linear model.

Before proceeding to the computer program for linear regression, a word of caution is in
order. Although the correlation coefficient provides a handy measure of goodness-of-fit, you
should be careful not to ascribe more meaning to it than is warranted. Just because r is
“close” to 1 does not mean that the fit is necessarily “good.” For example, it is possible to
obtain a relatively high value of r when the underlying relationship between y and x is not
even linear. Draper and Smith (1981) provide guidance and additional material regarding
assessment of results for linear regression. In addition, at the minimum, you should always
inspect a plot of the data along with your regression curve. As described in the next section,
software packages include such a capability.

17.1.4 Computer Program for Linear Regression


It is a relatively trivial matter to develop pseudocode for linear regression (Fig. 17.6). As
mentioned above, a plotting option is critical to the effective use and interpretation of
regression. Such capabilities are included in popular packages like MATLAB software and Excel. If your computer language has plotting capabilities, we recommend that you expand
your program to include a plot of y versus x, showing both the data and the regression line.
The inclusion of the capability will greatly enhance the utility of the program in problem-
solving contexts.

FIGURE 17.6
Algorithm for linear regression.
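Figure 17.6's pseudocode is not reproduced in this excerpt; the routine below is one plausible Python rendering of such an algorithm (a sketch under that assumption, not the book's listing), accumulating the sums in a single pass and returning the fit along with sy/x and r²:

```python
import math

def linregr(x, y):
    """Linear regression returning intercept a0, slope a1,
    standard error of the estimate syx, and coefficient of determination r2."""
    n = len(x)
    sumx = sumy = sumxy = sumx2 = 0.0
    for xi, yi in zip(x, y):
        sumx += xi
        sumy += yi
        sumxy += xi * yi
        sumx2 += xi * xi
    xm, ym = sumx / n, sumy / n
    a1 = (n * sumxy - sumx * sumy) / (n * sumx2 - sumx * sumx)  # Eq. (17.6)
    a0 = ym - a1 * xm                                           # Eq. (17.7)
    St = Sr = 0.0
    for xi, yi in zip(x, y):
        St += (yi - ym) ** 2             # total spread about the mean
        Sr += (yi - a0 - a1 * xi) ** 2   # residual spread about the line
    syx = math.sqrt(Sr / (n - 2))        # Eq. (17.9); meaningful only for n > 2
    r2 = (St - Sr) / St                  # Eq. (17.10)
    return a0, a1, syx, r2
```

As the text recommends, pairing such a routine with a plot of the data and the fitted line (for instance via matplotlib) greatly aids interpretation.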

EXAMPLE 17.3 Linear Regression Using the Computer

Problem Statement. We can use software based on Fig. 17.6 to solve a hypothesis-testing
problem associated with the falling parachutist discussed in Chap. 1. A theoretical
mathematical model for the velocity of the parachutist was given as the following [Eq. (1.10)]:

υ = (gm/c)(1 − e^(−(c/m)t))

where υ = velocity (m/s), g = gravitational constant (9.8 m/s²), m = mass of the parachutist,

equal to 68.1 kg, and c = drag coefficient of 12.5 kg/s. The model predicts the velocity of
the parachutist as a function of time, as described in Example 1.1.
An alternative empirical model for the velocity of the parachutist is given by
(E17.3.1)

Suppose that you would like to test and compare the adequacy of these two
mathematical models. This might be accomplished by measuring the actual velocity of the
parachutist at known values of time and comparing these results with the predicted
velocities according to each model.
Such an experimental-data-collection program was implemented, and the
results are listed in column (a) of Table 17.2. Computed velocities for each model are listed
in columns (b) and (c).

TABLE 17.2 Measured and calculated velocities for the falling parachutist.

Solution. The adequacy of the models can be tested by plotting the model-calculated
velocity versus the measured velocity. Linear regression can be used to calculate the slope
and the intercept of the plot. This line will have a slope of 1, an intercept of 0, and r2 = 1 if
the model matches the data perfectly. A significant deviation from these values can be used
as an indication of the inadequacy of the model.
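The testing strategy just described can be sketched as a small helper; the readings below are hypothetical stand-ins, not the values of Table 17.2:

```python
def validate_model(measured, predicted):
    """Regress model-predicted values against measurements; a perfect model
    yields slope 1 and intercept 0 (Example 17.3's hypothesis-test criteria)."""
    n = len(measured)
    sx, sy = sum(measured), sum(predicted)
    sxy = sum(m * p for m, p in zip(measured, predicted))
    sxx = sum(m * m for m in measured)
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope, Eq. (17.6)
    a0 = sy / n - a1 * sx / n                       # intercept, Eq. (17.7)
    return a0, a1

# Hypothetical velocities (m/s) from an experiment and from a model:
measured = [10.0, 16.3, 23.0, 27.5, 31.0]
model = [9.7, 16.0, 23.4, 27.8, 30.5]
a0, a1 = validate_model(measured, model)  # slope near 1, intercept near 0
```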

Figure 17.7a and b are plots of the line and data for the regressions of columns (b) and
(c), respectively, versus column (a). For the first model [Eq. (1.10) as depicted in Fig.
17.7a],

and for the second model [Eq. (E17.3.1) as depicted in Fig. 17.7b],

These plots indicate that the linear regression between these data and each of the models is
highly significant. Both models match the data with a correlation coefficient of greater than
0.99.
However, the model described by Eq. (1.10) conforms to our hypothesis test criteria
much better than that described by Eq. (E17.3.1) because the slope and intercept are more
nearly equal to 1 and 0, respectively. Thus, although each plot is well described by a
straight line, Eq. (1.10) appears to be a better model than Eq. (E17.3.1).

FIGURE 17.7
(a) Results using linear regression to compare predictions computed with the theoretical model [Eq.
(1.10)] versus measured values. (b) Results using linear regression to compare predictions computed
with the empirical model [Eq. (E17.3.1)] versus measured values.

Model testing and selection are common and extremely important activities performed
in all fields of engineering. The background material provided in this chapter, together with
your software, should allow you to address many practical problems of this type.

There is one shortcoming with the analysis in Example 17.3. The example was unambiguous because the empirical model [Eq. (E17.3.1)] was clearly inferior to Eq. (1.10). Thus, the slope and intercept for the latter were so much closer to the desired result of 1 and 0 that it was obvious which model was superior.
However, suppose that the slope were 0.85 and the intercept were 2. Obviously this would make the conclusion that the slope and intercept were 1 and 0 open to debate. Clearly, rather than relying on a subjective judgment, it would be preferable to base such a conclusion on a quantitative criterion.
This can be done by computing confidence intervals for the model parameters in the
same way that we developed confidence intervals for the mean in Sec. PT5.2.3. We will
return to this topic later in this chapter.

17.1.5 Linearization of Nonlinear Relationships


Linear regression provides a powerful technique for fitting a best line to data. However, it is
predicated on the fact that the relationship between the dependent and independent variables
is linear. This is not always the case, and the first step in any regression analysis should be to
plot and visually inspect the data to ascertain whether a linear model applies. For example,
Fig. 17.8 shows some data whose plot is obviously curvilinear. In some cases, techniques
such as polynomial regression, which is described in Sec. 17.2, are appropriate. For others,
transformations can be used to express the data in a form that is compatible with linear
regression.
One example is the exponential model,
y = α1e^(β1x)    (17.12)

FIGURE 17.8
(a) Data that are ill-suited for linear least-squares regression. (b) Indication that a parabola is preferable.

where α1 and β1 are constants. This model is used in many fields of engineering to
characterize quantities that increase (positive β1) or decrease (negative β1) at a rate that is
directly proportional to their own magnitude. For example, population growth or radioactive
decay can exhibit such behavior. As depicted in Fig. 17.9a, the equation represents a
nonlinear relationship (for β1 ≠ 0) between y and x.

Another example of a nonlinear model is the simple power equation,

y = α2x^β2    (17.13)
where α2 and β2 are constant coefficients. This model has wide applicability in all fields of
engineering. As depicted in Fig. 17.9b, the graph of this equation (for β2 ≠ 0 or 1) is
nonlinear.

FIGURE 17.9
(a) The exponential equation, (b) the power equation, and (c) the saturation-growth-rate equation. Parts
(d), (e), and (f) are linearized versions of these equations that result from simple transformations.

A third example of a nonlinear model is the saturation-growth-rate equation [recall Eq. (E17.3.1)],

y = α3x/(β3 + x)    (17.14)

where α3 and β3 are constant coefficients. This model, which is particularly well-suited for
characterizing population growth rate under limiting conditions, also represents a nonlinear
relationship between y and x (Fig. 17.9c) that levels off, or “saturates,” as x increases.
Nonlinear regression techniques are available to fit these equations to experimental data
directly. (Note that we will discuss nonlinear regression in Sec. 17.5.) However, a simpler
alternative is to use mathematical manipulations to transform the equations into a linear form.
Then, simple linear regression can be employed to fit the equations to data.
For example, Eq. (17.12) can be linearized by taking its natural logarithm to yield

ln y = ln α1 + β1x ln e

But because ln e = 1,

ln y = ln α1 + β1x    (17.15)
Thus, a plot of ln y versus x will yield a straight line with a slope of β1 and an intercept of ln
α1 (Fig. 17.9d).
Equation (17.13) is linearized by taking its base-10 logarithm to give

log y = β2 log x + log α2    (17.16)
Thus, a plot of log y versus log x will yield a straight line with a slope of β2 and an intercept
of log α2 (Fig. 17.9e).
Equation (17.14) is linearized by inverting it to give

1/y = 1/α3 + (β3/α3)(1/x)    (17.17)

Thus, a plot of 1/y versus 1/x will be linear, with a slope of β3/α3 and an intercept of 1/α3 (Fig.
17.9f).
In their transformed forms, these models allow us to use linear regression to evaluate the
constant coefficients. They could then be transformed back to their original state and used for
predictive purposes. Example 17.4 illustrates this procedure for Eq. (17.13). In addition, Sec.
20.4 provides a more sophisticated engineering example of the same sort of computation.
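As a sketch of the transform-and-fit procedure for the power model (Eq. 17.13), the code below regresses log10 y on log10 x per Eq. (17.16) and back-transforms the intercept. The data are synthetic, generated here from a known power law (an assumption for illustration), so the fit recovers the coefficients exactly:

```python
import math

def fit_power(x, y):
    """Fit y = a2 * x**b2 by linear regression on the log10-transformed data."""
    lx = [math.log10(xi) for xi in x]
    ly = [math.log10(yi) for yi in y]
    n = len(lx)
    sx, sy = sum(lx), sum(ly)
    sxy = sum(u * v for u, v in zip(lx, ly))
    sxx = sum(u * u for u in lx)
    b2 = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope = beta2
    log_a2 = sy / n - b2 * sx / n                   # intercept = log10(alpha2)
    return 10 ** log_a2, b2

# Synthetic, noise-free data from y = 2 * x**1.5:
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0 * xi ** 1.5 for xi in xs]
a2, b2 = fit_power(xs, ys)
```

Once α2 and β2 are recovered, the model can be transformed back to its original form and used for prediction, as the text describes.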

EXAMPLE 17.4 Linearization of a Power Equation

Problem Statement. Fit Eq. (17.13) to the data in Table 17.3 using a logarithmic
transformation of the data.

Solution. Figure 17.10a is a plot of the original data in its untransformed state. Figure
17.10b shows the plot of the transformed data. A linear regression of the log-transformed
data yields the result

log y = 1.75 log x − 0.300
TABLE 17.3 Data to be fit to the power equation.


FIGURE 17.10
(a) Plot of untransformed data with the power equation that fits these data. (b) Plot of transformed data
used to determine the coefficients of the power equation.

Thus, the intercept, log α2, equals −0.300, and therefore, by taking the antilogarithm, we get α2 = 10^−0.300 = 0.5. The slope is β2 = 1.75. Consequently, the power equation is

y = 0.5x^1.75