Nonlinear Regression

The document discusses nonlinear regression, emphasizing its use in fitting data with nonlinear functions to minimize the difference between observed and predicted values. It contrasts linear regression, where the function is linear in coefficients, with nonlinear regression, which is more complex and often used for estimating physical constants. Additionally, it covers methods for estimating uncertainty in coefficients and provides examples related to crack propagation data analysis.


Nonlinear regression

• Regression fits data with a given function (surrogate) that has unknown coefficients, choosing the coefficients that minimize the sum of the squares of the differences between the surrogate and the data.
• In linear regression, the assumed function is linear in the coefficients, for example $\hat{y} = b_1 + b_2 x$.
• Regression is nonlinear when the function is nonlinear in the coefficients (not in $x$), e.g., $\hat{y} = b_1 + \dfrac{b_2}{b_3 + x}$.
• The most common use of nonlinear regression is for finding
physical constants given measurements.
• Example: fitting crack propagation data with the Paris law.
• Fit $a = \left[ N C \left(1 - \dfrac{m}{2}\right) \left(\Delta\sigma\sqrt{\pi}\right)^{m} + a_0^{\,1-\frac{m}{2}} \right]^{\frac{2}{2-m}}$
Review of Linear Regression
• The surrogate is a linear combination of given shape functions: $\hat{y} = \sum_{i=1}^{n_b} b_i \phi_i(x)$
• For a linear approximation, $\phi_1 = 1$ and $\phi_2 = x$.
• The difference (residual) between data and surrogate is $r_j = y_j - \sum_{i=1}^{n_b} b_i \phi_i(x_j)$, or in matrix form $\mathbf{r} = \mathbf{y} - X\mathbf{b}$.
• Minimize the square residual $\mathbf{r}^T\mathbf{r} = (\mathbf{y} - X\mathbf{b})^T(\mathbf{y} - X\mathbf{b})$.
• Differentiate to obtain $X^T X \mathbf{b} = X^T \mathbf{y}$.
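These normal equations are easy to form and solve directly; a minimal sketch in Python with NumPy, using the shape functions $\phi_1 = 1$, $\phi_2 = x$ and the four data points from the example on a later slide:

    import numpy as np

    # Data from the linear vs. nonlinear example: y(1)=20, y(2)=7, y(3)=5, y(4)=4
    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([20.0, 7.0, 5.0, 4.0])

    # Design matrix X with columns phi_1 = 1 and phi_2 = x
    X = np.column_stack([np.ones_like(x), x])

    # Solve the normal equations X^T X b = X^T y
    b = np.linalg.solve(X.T @ X, X.T @ y)

    # Residual vector r = y - X b
    r = y - X @ b
    print(b, r)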
Basic equations
• General form: the surrogate $\hat{y}(x, \mathbf{b})$ may depend nonlinearly on the coefficients $\mathbf{b}$.
• Residuals: $r_i = \hat{y}(x_i, \mathbf{b}) - y_i$
• Rms error: $e_{rms} = \left[\dfrac{1}{n_y}\sum_i r_i^2\right]^{1/2}$

• Finding the coefficients requires the solution of an optimization problem.
• However, minimizing the sum of squares is a specialized problem with specialized algorithms; Matlab's lsqnonlin is very good.
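For readers working in Python instead of Matlab, a minimal sketch of the same idea with SciPy's least_squares; the exponential surrogate and the data here are only illustrative:

    import numpy as np
    from scipy.optimize import least_squares

    # Illustrative data (hypothetical): y decays roughly exponentially with x
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([10.2, 6.1, 3.9, 2.4, 1.4])

    # Surrogate y_hat(x, b) = b1 * exp(-b2 * x), nonlinear in the coefficients
    def residuals(b):
        return b[0] * np.exp(-b[1] * x) - y   # r_i = y_hat(x_i, b) - y_i

    # Minimize the sum of squared residuals from an initial guess
    result = least_squares(residuals, x0=[8.0, 0.5])
    print(result.x)                            # fitted coefficients
    print(np.sqrt(np.mean(result.fun ** 2)))   # rms error at the optimum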
Example – Linear vs. Nonlinear Regression
• y(1) = 20, y(2) = 7, y(3) = 5, and y(4) = 4.
• The data suggest a rational function, $y = b_1 + \dfrac{b_2}{b_3 + x}$.
• Compare to a quadratic polynomial.
• Both use three coefficients.
• The resulting fits are
  $y_{rational} = 1.99 + \dfrac{6.99}{x - 0.612}$
  $y_{quadratic} = 36.5 - 20x + 3x^2$
[Figure: the four data points with the rational-function and polynomial fits plotted over $1 \le x \le 4$.]
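A sketch of how these two fits could be reproduced with SciPy and NumPy; the starting guess for the rational fit is an assumption, and the printed coefficients should come out close to the values above:

    import numpy as np
    from scipy.optimize import least_squares

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([20.0, 7.0, 5.0, 4.0])

    # Rational surrogate y = b1 + b2/(b3 + x), nonlinear in b3
    def resid_rational(b):
        return b[0] + b[1] / (b[2] + x) - y

    fit = least_squares(resid_rational, x0=[2.0, 7.0, -0.5])
    print("rational:", fit.x)      # roughly [1.99, 6.99, -0.612]

    # Quadratic surrogate is linear in its coefficients: solve the normal equations
    X = np.column_stack([np.ones_like(x), x, x ** 2])
    print("quadratic:", np.linalg.solve(X.T @ X, X.T @ y))   # roughly [36.5, -20, 3]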
Estimating uncertainty in coefficients
• Brute-force approach: generate noise in the data and repeat the fit multiple times.
• Alternatively, linearize about the optimum set of coefficients $\mathbf{b}^*$:
  $r_i \approx \hat{y}(x_i, \mathbf{b}^*) - y_i + \sum_{j=1}^{n_b} \dfrac{\partial\hat{y}}{\partial b_j}\,\Delta b_j$
• Now perform linear regression with the $\Delta b_j$ as the unknown coefficients.
  – Provides an improvement to the solution.
  – Provides an estimate of the uncertainty in $\Delta\mathbf{b}$, which is an estimate of the uncertainty in $\mathbf{b}$.
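A sketch of both routes for the rational-function example; the Jacobian returned by least_squares plays the role of the matrix $X$ in the linearization, and the noise level used for the brute-force repetition is taken from the fit itself so the two routes are comparable:

    import numpy as np
    from scipy.optimize import least_squares

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([20.0, 7.0, 5.0, 4.0])

    def resid(b, ydata):
        return b[0] + b[1] / (b[2] + x) - ydata

    fit = least_squares(resid, x0=[2.0, 7.0, -0.5], args=(y,))
    b_star = fit.x

    # Route 1: linearize about b*; columns of J are the derivatives d r_i / d b_j
    J = fit.jac
    sigma2 = fit.fun @ fit.fun / (y.size - b_star.size)
    se_lin = np.sqrt(sigma2 * np.diag(np.linalg.inv(J.T @ J)))

    # Route 2: brute force; perturb the data with noise and repeat the fit
    rng = np.random.default_rng(0)
    reps = [least_squares(resid, b_star,
                          args=(y + rng.normal(0.0, np.sqrt(sigma2), y.size),)).x
            for _ in range(1000)]
    se_mc = np.std(reps, axis=0)
    print(se_lin, se_mc)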
Model-based error for linear regression
• The common assumptions for linear regression
– The surrogate is in the functional form of the true function.
– The data is contaminated with normally distributed
error with the same standard deviation at every point.
– The errors at different points are not correlated.
• Under these assumptions, the noise standard deviation (called the standard error) is estimated as
  $\hat{\sigma}^2 = \dfrac{\mathbf{r}^T\mathbf{r}}{n_y - n_b}$
• Similarly, the standard error in the coefficients is
  $se(b_i) = \hat{\sigma}\sqrt{\left[(X^T X)^{-1}\right]_{ii}}$
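These two formulas in code form; a small helper assuming X is the design matrix and r the residual vector, both as NumPy arrays:

    import numpy as np

    def standard_errors(X, r):
        """Noise standard deviation and coefficient standard errors for a linear fit."""
        ny, nb = X.shape
        sigma2 = r @ r / (ny - nb)        # sigma_hat^2 = r^T r / (n_y - n_b)
        se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
        return np.sqrt(sigma2), se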
Rational function example
• Linearize with respect to b’s
b2 b2 b2*b3
y b1  r b1  * 
b3  x b3  x b*  x 2
3
• Perform the fit by linear regression: $X^T X\,\Delta\mathbf{b} = X^T\mathbf{r}$, which gives
  $\Delta\mathbf{b} = 10^{-7} \times [0.1435,\ 0.3685,\ 0.1230]^T$,
  essentially zero because $\mathbf{b}^*$ already minimizes the sum of squares.
• Finally, perform the error analysis:
  $\hat{\sigma} = \sqrt{\dfrac{\mathbf{r}^T\mathbf{r}}{n_y - n_b}} = 0.102$
  $se(b_i) = \hat{\sigma}\sqrt{\left[(X^T X)^{-1}\right]_{ii}} = [0.20,\ 0.50,\ 0.024]$
• The standard errors range between 4% and 10% of the coefficients $\mathbf{b}^* = (1.99,\ 6.99,\ -0.612)$.
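A sketch of this error analysis in NumPy, with the derivative columns written out analytically; up to rounding of the coefficients it should roughly reproduce the 0.102 and [0.20, 0.50, 0.024] quoted above:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([20.0, 7.0, 5.0, 4.0])
    b1, b2, b3 = 1.99, 6.99, -0.612          # optimum coefficients b*

    r = b1 + b2 / (b3 + x) - y               # residuals at b*
    # Columns of X are the derivatives of y_hat with respect to b1, b2, b3
    X = np.column_stack([np.ones_like(x),
                         1.0 / (b3 + x),
                         -b2 / (b3 + x) ** 2])

    sigma = np.sqrt(r @ r / (y.size - 3))    # estimated noise standard deviation
    se = sigma * np.sqrt(np.diag(np.linalg.inv(X.T @ X)))
    print(sigma, se)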
Application to crack propagation
• Paris law and its solution
  $\dfrac{da}{dN} = C\left(\Delta K\right)^m, \qquad \Delta K = \Delta\sigma\sqrt{\pi a}, \qquad a = \left[ N C \left(1 - \dfrac{m}{2}\right)\left(\Delta\sigma\sqrt{\pi}\right)^{m} + a_0^{\,1-\frac{m}{2}} \right]^{\frac{2}{2-m}}$
• Coppe, A., Haftka, R.T., and Kim, N.H. (2011), "Uncertainty Identification of Damage Growth Parameters Using Nonlinear Regression," AIAA Journal, Vol. 49(12), pp. 2818–2621.
• Properties to be identified from measurements: $\mathbf{b} = \{m,\ a_0,\ b\}^T$, where the measurements carry a bias $b$ and noise $v$:
  $a_i^{meas} = a_i + b + v, \qquad r_i = a_i^{meas} - a^{calc}(N_i, \mathbf{b})$
Example with only m unknown
• Simulation with bias $b = 0$, noise $v \in [-1, 1]$ mm, and $m = 3.8$.
• Excellent agreement between Monte Carlo (1,000
repetitions) simulation and linearization.
[Figure: uncertainty in m versus number of cycles at inspection (log scale), comparing the linearized standard error with the Monte Carlo standard deviation.]
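A sketch of how such a comparison could be generated for one inspection schedule, reusing the illustrative Paris-law values assumed in the previous sketch (with fewer repetitions than the 1,000 on the slide, to keep it quick):

    import numpy as np
    from scipy.optimize import least_squares

    C, dsigma, a0, m_true = 1.5e-10, 75.0, 0.01, 3.8      # illustrative values
    N = np.arange(0, 2501, 100)

    def a_paris(N, m):
        return (N * C * (1 - m / 2) * (dsigma * np.sqrt(np.pi)) ** m
                + a0 ** (1 - m / 2)) ** (2 / (2 - m))

    def identify_m(a_meas):
        # Fit m alone to one set of noisy crack-size measurements
        return least_squares(lambda p: a_meas - a_paris(N, p[0]),
                             x0=[3.5], bounds=([2.5], [5.0]))

    rng = np.random.default_rng(2)
    a_true = a_paris(N, m_true)

    # Linearized standard error from a single noisy realization
    fit = identify_m(a_true + rng.uniform(-0.001, 0.001, N.size))
    sigma2 = fit.fun @ fit.fun / (N.size - 1)
    se_m = np.sqrt(sigma2 * np.linalg.inv(fit.jac.T @ fit.jac)[0, 0])

    # Monte Carlo standard deviation over repeated noise realizations
    m_hats = [identify_m(a_true + rng.uniform(-0.001, 0.001, N.size)).x[0]
              for _ in range(200)]
    print(se_m, np.std(m_hats))   # the two should be of similar size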
All three unknown
• It is difficult to differentiate between the initial crack size $a_0$ and the bias $b$, since $a_i^{meas} = a_i + b + v$.
[Figures: uncertainty in b, m, and a0 (linearized standard error and Monte Carlo standard deviation) versus number of cycles at inspection, log scale.]
Problems
• Using the data for the rational function, repeat the fit and
the uncertainty calculation for an exponential decay
model
• Instead of using the data in Slide 4, generate your own data for 31 uniformly distributed points (1, 1.3, …) from the identified algebraic model, and contaminate the data with normally distributed random noise with zero mean and standard deviation of 1. Compare the standard error from linear regression with the value you get by repeating the process multiple times using different realizations of the noise.
