
NONLINEAR EQUATIONS

Recall: MATHEMATICAL MODELS:

Dependent variables - reflect the state or performance of the system.

Parameters - represent the properties or composition of the system.

• If the parameters are known, the equation can be used to predict the velocity.
• Such computations can be performed directly because v is expressed explicitly as a function of
the model parameters. That is, it is isolated on one side of the equal sign.
• However, suppose that we had to determine the mass that, for a given drag coefficient, attains a prescribed velocity in a set time period. In such cases, m is said to be implicit.

"I need to know (parameter) such that (variable) will behave in a certain way."

To solve the problem using numerical methods, it is conventional to re-express our example by subtracting the dependent variable v from both sides of the equation, giving a function f(m) whose root is the required mass.
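As a minimal sketch, assuming the free-fall model with quadratic drag, v(t) = sqrt(g·m/cd)·tanh(sqrt(g·cd/m)·t) (the model form and the sample values below are assumptions for illustration, not taken from these notes):

```python
import math

# Assumed model: v(t) = sqrt(g*m/cd) * tanh(sqrt(g*cd/m) * t).
# The implicit problem "find m such that v(t) = v_target" becomes the
# root-finding problem f(m) = 0.
def f(m, t=4.0, v_target=36.0, g=9.81, cd=0.25):  # illustrative values
    return math.sqrt(g * m / cd) * math.tanh(math.sqrt(g * cd / m) * t) - v_target
```

Any of the bracketing or open methods below can then be applied to f(m).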

BRACKETING VS OPEN METHODS

Bracketing Method

1. The root is located within an interval prescribed by a lower and an upper bound.

2. Repeated application of these methods always results in closer estimates of the true value of the root.

3. Such methods are said to be "convergent" because they move closer to the truth as the computation
progresses.

INCREMENTAL SEARCH

Incremental search methods take advantage of the observation that f(x) changes sign on opposite sides of a root, by locating an interval where the function changes sign.
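A minimal Python sketch of an incremental search (the function name and test values are illustrative):

```python
import math

def incremental_search(f, x_min, x_max, dx):
    """Step through [x_min, x_max]; record each subinterval where f changes sign."""
    brackets = []
    x = x_min
    while x + dx <= x_max:
        if f(x) * f(x + dx) < 0:   # sign change: a root lies in (x, x + dx)
            brackets.append((x, x + dx))
        x += dx
    return brackets

# Example: locate sign changes of f(x) = sin(10x) + cos(3x) on [3, 6]
print(incremental_search(lambda x: math.sin(10 * x) + math.cos(3 * x), 3.0, 6.0, 0.1))
```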

NOTE, however, that you should be careful with the incremental search method regarding the choice of increment length:

• too small: the search can be very time-consuming
• too great: closely spaced roots may be missed

BISECTION


1. Bisection is a variation of the incremental search method in which the interval is always divided in half.

2. If a function changes sign over an interval, the function value at the midpoint is evaluated; the root must lie in the half of the interval where the sign change occurs.

3. The process is repeated on that half until the root is known to the required precision.

STOPPING CRITERION

Through repeated iterations, the estimates approach the true root and the true error shrinks. Because the true root is generally unknown, we instead stop when the approximate error between successive estimates satisfies the stopping criterion:

E = Xnew - Xold

Iteration stops once |E| (or the corresponding relative error |E / Xnew|) falls below a prescribed tolerance.
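A minimal sketch of bisection using this stopping criterion (names, tolerance, and test function are illustrative):

```python
def bisection(f, xl, xu, tol=1e-6, max_iter=100):
    """Halve the bracket [xl, xu] until successive midpoints agree within tol."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xr_old = xl
    for _ in range(max_iter):
        xr = (xl + xu) / 2.0            # midpoint of the current bracket
        if abs(xr - xr_old) < tol:      # stopping criterion: E = Xnew - Xold
            return xr
        if f(xl) * f(xr) < 0:           # sign change in the lower half
            xu = xr
        else:                           # sign change in the upper half
            xl = xr
        xr_old = xr
    return xr

# Example: root of f(x) = x**3 - x - 2 on [1, 2] (about 1.5214)
print(bisection(lambda x: x**3 - x - 2, 1.0, 2.0))
```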

FALSE POSITION METHOD

Also called the linear interpolation method.

1. Rather than bisecting the interval, it locates the root by joining the points (xl, f(xl)) and (xu, f(xu)) with a straight line.

2. The intersection of this line with the x-axis represents an improved estimate of the root.

3. Using similar triangles, the intersection can be estimated as xr = xu - f(xu)·(xl - xu) / (f(xl) - f(xu)).

4. Replace the bound whose function value has the same sign as f(xr) with the false-position estimate xr, so the root remains bracketed.
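A minimal sketch of false position following these steps (names are illustrative):

```python
def false_position(f, xl, xu, tol=1e-6, max_iter=100):
    """Regula falsi: the line through (xl, f(xl)) and (xu, f(xu)) gives the new estimate."""
    fl, fu = f(xl), f(xu)
    if fl * fu > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xr_old = xl
    for _ in range(max_iter):
        xr = xu - fu * (xl - xu) / (fl - fu)   # x-intercept via similar triangles
        if abs(xr - xr_old) < tol:
            return xr
        fr = f(xr)
        if fl * fr < 0:        # root lies in the lower subinterval
            xu, fu = xr, fr
        else:                  # root lies in the upper subinterval
            xl, fl = xr, fr
        xr_old = xr
    return xr
```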


Open Method

- require only a single starting value, or two starting values that do not necessarily bracket the root.

- as such, they sometimes diverge, or move away from the root, as the computation progresses.

FIXED POINT ITERATION

Also known as the Method of Successive Substitution (MOSS) or one-point iteration.

• Starts by rearranging f(x) = 0 so that x is on the left-hand side of the equation, x = g(x); this transformation can be accomplished by algebraic manipulation.
• This gives an expression for the improved iterative estimate of the root: Xi+1 = g(Xi)
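A minimal sketch (the rearrangement g(x) = e^(-x) for f(x) = e^(-x) - x = 0 is a standard illustrative choice, not taken from these notes):

```python
import math

def fixed_point(g, x0, tol=1e-6, max_iter=100):
    """Iterate Xi+1 = g(Xi) until successive estimates agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = e**(-x) - x = 0 rearranged as x = g(x) = e**(-x)
print(fixed_point(lambda x: math.exp(-x), 0.0))   # converges to about 0.5671
```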
LINEAR CONVERGENCE

• The relative errors of succeeding iterations are roughly proportional to one another. This property is characteristic of the fixed-point iteration method.
• To check whether an initial guess Xo will lead to convergence, evaluate the first derivative of g(x) at Xo (see the sketch after this list).
• Convergence occurs when |g'(x)| < 1.
• A positive derivative means the error keeps the same sign from one iteration to the next.
• A negative derivative means the error alternates in sign, so the estimates oscillate around the root.
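A quick numerical check of this criterion, sketched with a finite-difference derivative (function and values are illustrative):

```python
import math

def fpi_convergence_check(g, x0, h=1e-6):
    """Estimate g'(x0) by central differences; |g'(x0)| < 1 suggests convergence."""
    dg = (g(x0 + h) - g(x0 - h)) / (2 * h)
    return abs(dg) < 1, dg

# For g(x) = e**(-x) at x0 = 0.5: g'(0.5) is about -0.61, so the iteration
# converges, and the negative sign means the estimates oscillate around the root.
print(fpi_convergence_check(lambda x: math.exp(-x), 0.5))
```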

NEWTON-RAPHSON

• The most widely used of all root-locating formulas due to its fast convergence.
• If the initial guess at the root is Xi, a tangent can be extended from the point (Xi, f(Xi)).
• The point where this tangent crosses the x-axis usually represents an improved estimate of the root: Xi+1 = Xi - f(Xi)/f'(Xi)
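A minimal sketch of the formula (the test function f(x) = x² - 2 is illustrative):

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: Xi+1 = Xi - f(Xi)/f'(Xi)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)        # x-intercept of the tangent at (x, f(x))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: sqrt(2) as the root of f(x) = x**2 - 2; the number of correct
# digits roughly doubles each iteration (quadratic convergence).
print(newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))
```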
QUADRATIC CONVERGENCE

• The relative errors from succeeding iterations are roughly proportional to the square of the
previous error.
• In other words, the number of significant figures of accuracy approximately doubles with each
iteration.
• There is no general convergence criterion for Newton-Raphson. Convergence depends on the nature of the function and the accuracy of the initial guess.
REGRESSION
CURVE FITTING

Two types of applications are generally encountered when fitting experimental data: trend analysis and
hypothesis testing.

Trend analysis represents the process of using the pattern of the data to make predictions. For cases
where the data are measured with high precision, you might utilize interpolating polynomials.

Hypothesis testing. Here, an existing mathematical model is compared with measured data. If the
model coefficients are unknown, it may be necessary to determine values that best fit the observed
data.

LINEAR REGRESSION

Linear Least-Squares Regression

One approach to curve fitting is to visually inspect the plotted data and then sketch a "best" line through
the points. BUT THIS WOULD YIELD DIFFERENT LINES FOR DIFFERENT ANALYSTS.

To remove this subjectivity, some criterion must be devised to establish a basis for the fit: we minimize the discrepancy between the data points and the fitted curve.

Criteria for best fit

The least-squares criterion minimizes the sum of the squares of the residuals between the measured y values and those predicted by the line:

Sr = Σ(yi - a0 - a1·xi)²

Solving the resulting normal equations yields the slope and intercept:

a1 = (n·Σxiyi - Σxi·Σyi) / (n·Σxi² - (Σxi)²)
a0 = ȳ - a1·x̄


Coefficient of Correlation

Let St = Σ(yi - ȳ)² be the total spread of the data about the mean, and Sr the spread remaining about the regression line. The coefficient of determination is

r² = (St - Sr) / St

and its square root r is the coefficient of correlation; r² = 1 indicates a perfect fit.
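A minimal sketch implementing these formulas (the sample data are illustrative):

```python
def linear_regression(x, y):
    """Least-squares intercept a0 and slope a1 for the line y = a0 + a1*x."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope from the normal equations
    a0 = sy / n - a1 * sx / n                        # intercept: a0 = ybar - a1*xbar
    return a0, a1

x = [1, 2, 3, 4, 5, 6, 7]
y = [0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5]
print(linear_regression(x, y))   # about (0.071, 0.839)
```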

LINEARIZATION OF NONLINEAR RELATIONSHIPS

Exponential Regression: y = α1·e^(β1·x), linearized by taking natural logs: ln y = ln α1 + β1·x. After fitting the line, back-transform:

α1 = e^(a0)        β1 = a1

Power Regression: y = α2·x^(β2), linearized by taking base-10 logs: log y = log α2 + β2·log x. After fitting the line, back-transform:

α2 = 10^(a0)       β2 = a1
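A minimal sketch of both transformations, reusing the linear_regression routine from the sketch above (names are illustrative):

```python
import math

def exponential_fit(x, y):
    """Fit y = alpha1 * e**(beta1*x) via linear regression on ln(y)."""
    ly = [math.log(yi) for yi in y]       # ln y = ln(alpha1) + beta1*x
    a0, a1 = linear_regression(x, ly)
    return math.exp(a0), a1               # alpha1 = e**a0, beta1 = a1

def power_fit(x, y):
    """Fit y = alpha2 * x**beta2 via linear regression on log10 of both axes."""
    lx = [math.log10(xi) for xi in x]
    ly = [math.log10(yi) for yi in y]     # log y = log(alpha2) + beta2*log x
    a0, a1 = linear_regression(lx, ly)
    return 10 ** a0, a1                   # alpha2 = 10**a0, beta2 = a1
```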

NONLINEAR REGRESSION

Polynomial Regression

The least-squares procedure can also be applied to fit any mth-order polynomial of the form y = a0 + a1·x + a2·x² + … + am·x^m + e.

Standard Error

The standard error of the estimate quantifies the spread of the data about the fitted curve:

s(y/x) = √( Sr / (n - (m + 1)) )

where n is the number of data points and m is the polynomial order (for a straight line, m = 1, leaving n - 2 degrees of freedom).
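A minimal sketch using NumPy's polyfit (the data and polynomial order are illustrative):

```python
import numpy as np

# Fit a 2nd-order polynomial y = a0 + a1*x + a2*x**2 to illustrative data.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 7.7, 13.6, 27.2, 40.9, 61.1])

m = 2                                      # polynomial order
coeffs = np.polyfit(x, y, deg=m)           # returns [a2, a1, a0], highest power first
y_fit = np.polyval(coeffs, x)

Sr = np.sum((y - y_fit) ** 2)              # sum of squared residuals
s_yx = np.sqrt(Sr / (len(x) - (m + 1)))    # standard error of the estimate
print(coeffs, s_yx)
```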
