CCE-311 Question Solution

The document discusses solving systems of equations using various methods like Cramer's rule, Gauss elimination, and Gauss-Jordan. It also discusses numerical integration techniques like Trapezoidal rule and Simpson's 3/8 rule. Key steps of algorithms like bisection method are provided.

Uploaded by

Hasan Ahammad

Session: 2018-2019

1. A. Solve the following system using Cramer's rule:


0.40x1-0.1x2-0.2x3=7.85
0.10x1+7x2-0.3x3=-19.3
0.30x1-0.2x2+10x3=71.4
Soln: Let us write these equations in the form AX=B
. 4 −.1 −.2 𝑥1 7.85
[. 1 7 −.3] [𝑥2 ] = [−19.3]
. 3 −.2 10 𝑥3 71.4
.4 −.1 −.2
D=|A|=|. 1 7 −.3|=28.509
.3 −.2 10
7.85 −.1 −.2
Dx1=|−19.3 7 −.3|=631.059
71.4 −.2 10
. 4 7.85 −.2
Dx2=|. 1 −19.3 −.3|=-79.7745
. 3 71.4 10
. 4 −.1 7.85
𝐷𝑥3 = |. 1 7 −19.3|=183.027
. 3 −.2 71.4
X1= Dx1/D=631.059/28.509≈22.135
X2= Dx2/D=-79.7745/28.509≈-2.79
X3= 𝐷𝑥3 /D=183.027/28.509≈6.41
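The determinants above can be checked with a short Python sketch (illustrative only, not part of the original solution; the 3×3 determinants are expanded by cofactors):

```python
# Cramer's rule for the 3x3 system above (pure Python, no libraries).

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer3(A, b):
    """Solve Ax = b for a 3x3 system by Cramer's rule."""
    D = det3(A)
    xs = []
    for col in range(3):
        # Replace column `col` of A with b to form the numerator determinant.
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][col] = b[r]
        xs.append(det3(Ai) / D)
    return xs

A = [[0.4, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]]
b = [7.85, -19.3, 71.4]
x1, x2, x3 = cramer3(A, b)
print(x1, x2, x3)  # approximately 22.135, -2.798, 6.420
```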
B. Use the Gauss elimination technique to solve the following system:
0.3x1-0.1x2-0.2x3=7.85
0.10x1+7x2-0.3x3=-19.3
x1-0.2x2+10x3=71.4
Soln: We'll set up the augmented matrix [A|B]:

[ 0.3  -0.1  -0.2 |  7.85 ]
[ 0.1   7    -0.3 | -19.3 ]
[ 1    -0.2  10   |  71.4 ]

Now, let's perform row operations to solve the system:
Step 1: Make the first pivot (A[1,1]) equal to 1. Divide the first row by 0.3:

[ 1    -0.3333  -0.6667 | 26.1667 ]
[ 0.1   7       -0.3    | -19.3   ]
[ 1    -0.2     10      |  71.4   ]

Step 2: Eliminate the coefficients below A[1,1]. Subtract 0.1 times the first row from the second row, and subtract the first row from the third row:

[ 1  -0.3333  -0.6667 |  26.1667 ]
[ 0   7.0333  -0.2333 | -21.9167 ]
[ 0   0.1333  10.6667 |  45.2333 ]

Step 3: Make the second pivot (A[2,2]) equal to 1. Divide the second row by 7.0333:

[ 1  -0.3333  -0.6667 |  26.1667 ]
[ 0   1       -0.0332 |  -3.1161 ]
[ 0   0.1333  10.6667 |  45.2333 ]

Step 4: Eliminate the coefficient below A[2,2]. Subtract 0.1333 times the second row from the third row, then divide the third row by the resulting pivot 10.6711:

[ 1  -0.3333  -0.6667 | 26.1667 ]
[ 0   1       -0.0332 | -3.1161 ]
[ 0   0        1      |  4.2778 ]

Now, the augmented matrix is in row-echelon form, and we can back-substitute to find the solutions:
From the third row, we have:
x3 = 4.2778

From the second row, we have:
x2 - 0.0332x3 = -3.1161
x2 = -3.1161 + 0.0332(4.2778) ≈ -2.974

From the first row, we have:
x1 - 0.3333x2 - 0.6667x3 = 26.1667
x1 = 26.1667 + 0.3333(-2.974) + 0.6667(4.2778) ≈ 28.03

So, the solution to the system of equations is:
x1 ≈ 28.03
x2 ≈ -2.97
x3 ≈ 4.28
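The forward-elimination and back-substitution steps above can be sketched in Python (a minimal naive version without pivoting, for checking the hand computation):

```python
# Naive Gauss elimination with back-substitution, applied to the system in part B.

def gauss_eliminate(A, b):
    n = len(A)
    # Build the augmented matrix [A | b].
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    # Forward elimination: zero out entries below each pivot.
    for k in range(n):
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    # Back-substitution, starting from the last row.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

A = [[0.3, -0.1, -0.2], [0.1, 7, -0.3], [1, -0.2, 10]]
b = [7.85, -19.3, 71.4]
sol = gauss_eliminate(A, b)
print(sol)  # approximately [28.027, -2.974, 4.278]
```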
2. A. Apply Cholesky's process to locate the root of the following system:
X1-X2-X3=2
2X1+3X2+5X3=-3
3X1+2X2-3X3=6.
Soln: Rules or technique: AX = B => LUX = B => LY = B, then UX = Y

    [ 1  -1  -1 ]       [x1]       [ 2 ]
A = [ 2   3   5 ] , X = [x2] , B = [-3 ]
    [ 3   2  -3 ]       [x3]       [ 6 ]

A can be written as the product of a lower triangular matrix and an upper triangular matrix:

    [ 1  -1  -1 ]   [ l11  0    0   ] [ u11  u12  u13 ]
A = [ 2   3   5 ] = [ l21  l22  0   ] [ 0    u22  u23 ]
    [ 3   2  -3 ]   [ l31  l32  l33 ] [ 0    0    u33 ]

                    [ 1    0    0 ] [ u11  u12  u13 ]
                  = [ l21  1    0 ] [ 0    u22  u23 ]
                    [ l31  l32  1 ] [ 0    0    u33 ]

                    [ u11     u12              u13                  ]
                  = [ l21u11  l21u12 + u22     l21u13 + u23         ]
                    [ l31u11  l31u12 + l32u22  l31u13 + l32u23 + u33 ]

Comparing the entries:
u11 = 1, u12 = -1, u13 = -1; l21u11 = 2, so l21 = 2; l31u11 = 3, so l31 = 3.
Similarly, we find the remaining unknowns:
u22 = 3 - l21u12 = 3 + 2 = 5, u23 = 5 - l21u13 = 5 + 2 = 7,
l32 = (2 - l31u12)/u22 = (2 + 3)/5 = 1, u33 = -3 - l31u13 - l32u23 = -3 + 3 - 7 = -7.
So, AX = B => LUX = B:

[ 1  0  0 ] [ 1  -1  -1 ]     [ 2 ]
[ 2  1  0 ] [ 0   5   7 ] X = [-3 ]
[ 3  1  1 ] [ 0   0  -7 ]     [ 6 ]

Let UX = Y => LY = B:

[ 1  0  0 ] [y1]   [ 2 ]
[ 2  1  0 ] [y2] = [-3 ]
[ 3  1  1 ] [y3]   [ 6 ]

From the above matrix we find:
✓ y1 = 2
✓ 2y1 + y2 = -3
=> y2 = -7
✓ 3y1 + y2 + y3 = 6 => y3 = 6 - 6 + 7 = 7
Again, UX = Y:

[ 1  -1  -1 ] [x1]   [ 2 ]
[ 0   5   7 ] [x2] = [-7 ]
[ 0   0  -7 ] [x3]   [ 7 ]

From the above matrix we find:
✓ -7x3 = 7
=> x3 = -1
✓ 5x2 + 7x3 = -7
=> x2 = 0
✓ x1 - x2 - x3 = 2
=> x1 = 2 + 0 - 1 = 1
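The factor-then-substitute procedure above can be checked with a small Python sketch (a Doolittle-style LU factorization with unit diagonal in L, matching the form used in the solution; illustrative only):

```python
# LU decomposition (unit lower-triangular L), then forward and back substitution.

def lu_solve(A, b):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        # Row i of U from the comparison equations.
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        # Column i of L below the diagonal.
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    # Forward substitution: L y = b.
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    # Back substitution: U x = y.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[1, -1, -1], [2, 3, 5], [3, 2, -3]]
b = [2, -3, 6]
x = lu_solve(A, b)
print(x)  # [1.0, 0.0, -1.0]
```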
B.
i. How can we eliminate the error in the Trapezoidal rule by applying Simpson's
rule?
To reduce the error of the Trapezoidal rule using Simpson's rule, follow these steps:
▪ Divide the Interval: Divide the given interval into an even number of
subintervals (n). The Trapezoidal rule may use any number of subintervals,
but Simpson's 1/3 rule requires an even number.
▪ Apply the Trapezoidal Rule: Approximate the integral over the subintervals
with straight-line segments. The truncation error of this linear
approximation is O(h²), proportional to the second derivative of f.
▪ Apply Simpson's Rule: Over the same subintervals, fit a parabola through
each group of three consecutive points. The quadratic approximation cancels
the O(h²) error term, leaving a much smaller O(h⁴) error proportional to
the fourth derivative of f.
▪ Combine Estimates: Equivalently, two Trapezoidal estimates with step sizes
h and h/2 can be combined as I ≈ [4I(h/2) - I(h)]/3. This Richardson
extrapolation eliminates the leading error term of the Trapezoidal rule and
reproduces Simpson's 1/3 rule exactly.
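The combination step can be demonstrated numerically. In the sketch below (an assumed example, ∫₀^π sin x dx = 2), combining two trapezoidal estimates reproduces the composite Simpson's 1/3 result:

```python
# Combining trapezoidal estimates T(h) and T(h/2) as (4*T(h/2) - T(h))/3
# cancels the O(h^2) error term and equals composite Simpson's 1/3 rule.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson13(f, a, b, n):  # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd points
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior points
    return h * s / 3

t_coarse = trapezoid(math.sin, 0, math.pi, 4)   # step h
t_fine = trapezoid(math.sin, 0, math.pi, 8)     # step h/2
combined = (4 * t_fine - t_coarse) / 3
print(t_fine, combined)  # trapezoid ~1.974; combined ~2.0003, same as Simpson n=8
```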
ii. Integrate using Simpson's 3/8 rule.
f(x) = 0.2 + 25x - 200x² + 675x³ - 900x⁴ + 400x⁵ from a = 0 to b = 0.8
Soln: A single application of Simpson's 3/8 rule requires four equally spaced points, with step size h = (b - a)/3 = (0.8 - 0)/3 = 0.2667.
➢ Points and function values:
x0 = 0, f(0) = 0.2
x1 = 0 + 0.2667 = 0.2667, f(0.2667) = 1.432724
x2 = 0.2667 + 0.2667 = 0.5333, f(0.5333) = 3.487177
x3 = 0.5333 + 0.2667 = 0.8, f(0.8) = 0.232
➢ Calculation of approximate integral:
Using Simpson's 3/8 rule we find,
I = (b - a) × [f(x0) + 3f(x1) + 3f(x2) + f(x3)]/8
  = 0.8 × [0.2 + 3(1.432724 + 3.487177) + 0.232]/8
  = 1.519170
➢ Error Calculation:
The true integral value is 1.640533.
So, Et = |True Value - Approximate Value|
       = |1.640533 - 1.519170|
       = 0.121363
(Et / True Value) x 100% = (0.121363 / 1.640533) x 100% ≈ 7.4%
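The single-application 3/8 rule above is easy to verify in Python (a minimal sketch of the same computation):

```python
# Single application of Simpson's 3/8 rule on the polynomial from the problem.

def f(x):
    return 0.2 + 25*x - 200*x**2 + 675*x**3 - 900*x**4 + 400*x**5

def simpson38(f, a, b):
    """I = (b - a) * [f(x0) + 3 f(x1) + 3 f(x2) + f(x3)] / 8 with h = (b-a)/3."""
    h = (b - a) / 3
    return (b - a) * (f(a) + 3*f(a + h) + 3*f(a + 2*h) + f(b)) / 8

I = simpson38(f, 0, 0.8)
true_value = 1.640533
print(I, abs(true_value - I))  # ~1.519170, error ~0.121363
```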
3. A. Solve the following system using the Gauss-Jordan Method:
0.7X1-0.1X2-0.2X3=7.85
0.1X1+7X2-0.3X3=-19.3
X1-0.2X2+10X3=71.4
Starting with the augmented matrix:
[ 0.7  -0.1  -0.2 |  7.85 ]
[ 0.1   7    -0.3 | -19.3 ]
[ 1    -0.2  10   |  71.4 ]

Step 1: Divide Row 1 by 0.7:

[ 1    -0.1429  -0.2857 | 11.2143 ]
[ 0.1   7       -0.3    | -19.3   ]
[ 1    -0.2     10      |  71.4   ]

Step 2: Subtract 0.1 times Row 1 from Row 2:

[ 1   -0.1429  -0.2857 |  11.2143 ]
[ 0    7.0143  -0.2714 | -20.4214 ]
[ 1   -0.2     10      |  71.4    ]

Step 3: Subtract Row 1 from Row 3:

[ 1   -0.1429  -0.2857 |  11.2143 ]
[ 0    7.0143  -0.2714 | -20.4214 ]
[ 0   -0.0571  10.2857 |  60.1857 ]

Step 4: Divide Row 2 by 7.0143:

[ 1   -0.1429  -0.2857 | 11.2143 ]
[ 0    1       -0.0387 | -2.9114 ]
[ 0   -0.0571  10.2857 | 60.1857 ]

Step 5: Add 0.0571 times Row 2 to Row 3:

[ 1   -0.1429  -0.2857 | 11.2143 ]
[ 0    1       -0.0387 | -2.9114 ]
[ 0    0       10.2835 | 60.0195 ]

Step 6: Divide Row 3 by 10.2835:

[ 1   -0.1429  -0.2857 | 11.2143 ]
[ 0    1       -0.0387 | -2.9114 ]
[ 0    0        1      |  5.8365 ]

Step 7: Add 0.0387 times Row 3 to Row 2:

[ 1   -0.1429  -0.2857 | 11.2143 ]
[ 0    1        0      | -2.6856 ]
[ 0    0        1      |  5.8365 ]

Step 8: Add 0.2857 times Row 3 to Row 1:

[ 1   -0.1429   0      | 12.8818 ]
[ 0    1        0      | -2.6856 ]
[ 0    0        1      |  5.8365 ]

Step 9: Add 0.1429 times Row 2 to Row 1:

[ 1    0        0      | 12.4980 ]
[ 0    1        0      | -2.6856 ]
[ 0    0        1      |  5.8365 ]

So, the solution to the system of equations is:

X1 = 12.4980
X2 = -2.6856
X3 = 5.8365

Rounded to a reasonable number of decimal places, the solution is:

X1 ≈ 12.5
X2 ≈ -2.69
X3 ≈ 5.84
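The full reduction to [I | x] can be sketched in Python (a minimal Gauss-Jordan routine without pivoting, for checking the result):

```python
# Gauss-Jordan elimination: reduce the augmented matrix [A | b] to [I | x].

def gauss_jordan(A, b):
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        # Normalize the pivot row so the pivot becomes 1.
        pivot = M[k][k]
        M[k] = [v / pivot for v in M[k]]
        # Eliminate the pivot column from every other row (above and below).
        for i in range(n):
            if i != k:
                factor = M[i][k]
                M[i] = [M[i][j] - factor * M[k][j] for j in range(n + 1)]
    return [M[i][n] for i in range(n)]

A = [[0.7, -0.1, -0.2], [0.1, 7, -0.3], [1, -0.2, 10]]
b = [7.85, -19.3, 71.4]
x = gauss_jordan(A, b)
print(x)  # approximately [12.498, -2.686, 5.836]
```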
B.
i. Show scenarios of convergence and divergence in an iterative process.
▪ Convergence:
• Numerical Approximation: When using the Newton-Raphson method to find the
root of a function, a good initial guess yields successive approximations
that move steadily closer to the true root.
• Gradient Descent (Optimization): In machine learning, gradient descent is
often used to minimize a loss function; with a suitable learning rate, the
iterates converge toward a minimum.
• Power Iteration (Eigenvalues): When finding the largest eigenvalue of a
matrix using the power iteration method, the sequence of approximations
will often converge to the largest eigenvalue and its corresponding
eigenvector.
▪ Divergence:
• Numerical Instability: In some cases, numerical instability can lead to
divergence. For instance, when solving a system of linear equations using
an iterative method, if the matrix is ill-conditioned, the iterates may
drift away from the correct answer instead of converging to it.
• Learning Rate Too High (Gradient Descent): In machine learning, if the
learning rate in gradient descent is set too high, it can cause the
optimization process to overshoot the minimum and diverge.
• Chaos Theory: In some complex systems described by chaos theory, even
small changes in initial conditions can lead to divergent behavior.
ii. Write down the algorithm of the Bisection Method.
1. Choose two real numbers a and b such that f(a)·f(b) < 0.
2. Define the midpoint c = (a + b)/2.
3. Find f(c).
4. If f(a)·f(c) < 0,
   then set b = c
   else set a = c.
5. Repeat from step 2 until f(c) ≈ 0 or the interval [a, b] becomes smaller
   than the desired tolerance.
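The steps above can be sketched in Python. The function f(x) = x² - 2 on [1, 2] is an assumed demonstration case (its root is √2), not part of the original question:

```python
# Bisection method following the algorithm above.

def bisection(f, a, b, tol=1e-6, max_iter=100):
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")  # step 1
    for _ in range(max_iter):
        c = (a + b) / 2                      # step 2: midpoint
        if f(c) == 0 or (b - a) / 2 < tol:   # stopping criterion
            return c
        if f(a) * f(c) < 0:                  # step 4: root lies in [a, c]
            b = c
        else:                                # root lies in [c, b]
            a = c
    return (a + b) / 2

root = bisection(lambda x: x * x - 2, 1, 2)
print(root)  # approximately 1.414214
```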
4 [A] Fit a second-order polynomial to the data of the following table. Also find out the
standard error Sy/x.
Xi    Yi
0     2.1
1     7.7
2     13.6
∑xi = 3    ∑yi = 23.4

m = 2, n = 3, ∑xi = 3, ∑xi² = 5, ∑xi³ = 9, ∑xi⁴ = 17,
∑xiyi = 34.9, ∑xi²yi = 62.1, ∑yi = 23.4
x̄ = ∑xi/n = 3/3 = 1
ȳ = ∑yi/n = 23.4/3 = 7.8
Therefore, the simultaneous linear equations are:
[ 3  3  5  ] [a0]   [ 23.4 ]
[ 3  5  9  ] [a1] = [ 34.9 ]
[ 5  9  17 ] [a2]   [ 62.1 ]
Solving these equations through a technique such as Gauss elimination gives a0 = 2.1,
a1 = 5.45, and a2 = 0.15, so the fitted polynomial is y = 2.1 + 5.45x + 0.15x².
Since this parabola passes exactly through all three data points, every residual is zero:
Sr = ∑(yi - a0 - a1xi - a2xi²)² = 0
Sy/x = √(Sr/(n - (m + 1)))
     = √(0/(3 - (2 + 1)))
     = √(0/0)
     = Undefined.
The denominator n - (m + 1) = 0, so there are no degrees of freedom left: three points
determine a quadratic exactly, and the standard error Sy/x cannot be computed for this
data set.
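The normal equations above can be built and solved in a short Python sketch (illustrative only; the 3×3 solver is a naive Gauss elimination):

```python
# Quadratic least-squares fit: build and solve the normal equations for the table above.

def solve3(A, b):
    """Naive Gauss elimination with back-substitution for a 3x3 system."""
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    n = 3
    for k in range(n):
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

xs = [0, 1, 2]
ys = [2.1, 7.7, 13.6]
n = len(xs)
# Normal equations for y = a0 + a1*x + a2*x^2.
A = [[n, sum(xs), sum(x**2 for x in xs)],
     [sum(xs), sum(x**2 for x in xs), sum(x**3 for x in xs)],
     [sum(x**2 for x in xs), sum(x**3 for x in xs), sum(x**4 for x in xs)]]
b = [sum(ys),
     sum(x * y for x, y in zip(xs, ys)),
     sum(x * x * y for x, y in zip(xs, ys))]
a0, a1, a2 = solve3(A, b)
print(a0, a1, a2)  # 2.1, 5.45, 0.15 -- the parabola interpolates all three points
```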

4 B. Derive the equations for linear regression and find a0 and a1.


The simplest example of a least-squares approximation is fitting a straight line to a set of
paired observations: (x1, y1), (x2, y2), . . . , (xn, yn).
The mathematical expression for the straight line is:
y = a0 + a1x + e---------------------------------------------------------17.1
where a0 and a1 are coefficients representing the intercept and the slope, respectively,
and e is the error, or residual, between the model and the observations, which can be
represented by re-arranging Eq. (17.1) as
e = y − a0 − a1x
➢ Criteria for a "best" fit: One strategy for fitting a "best" line through the data would
be to minimize the sum of the residual errors for all the available data, as in
∑ei = ∑(yi − a0 − a1xi)---------------------------------------------------17.2
✓ where n = total number of points
However, positive and negative errors can cancel in this sum. Therefore, another
logical criterion might be to minimize the sum of the absolute values of the
discrepancies, as in
∑|ei| = ∑|yi − a0 − a1xi|
A strategy that overcomes the shortcomings of the aforementioned approaches is to
minimize the sum of the squares of the residuals between the measured y and the y
calculated with the linear model:
Sr = ∑ei² = ∑(yi − a0 − a1xi)²--------------------------------------------17.3
➢ Least-squares fit of a straight line: To determine values for a0 and a1, Eq. (17.3) is
differentiated with respect to each coefficient:
∂Sr/∂a0 = −2∑(yi − a0 − a1xi)
∂Sr/∂a1 = −2∑[(yi − a0 − a1xi)xi]
Note that we have simplified the summation symbols; unless otherwise indicated, all
summations are from i = 1 to n. Setting these derivatives equal to zero will result in a
minimum Sr. If this is done, the equations can be expressed as
0 = ∑yi − ∑a0 − ∑a1xi
0 = ∑xiyi − ∑a0xi − ∑a1xi²
Now, realizing that ∑a0 = na0, we can express the equations as a set of two
simultaneous linear equations with two unknowns (a0 and a1):
na0 + (∑xi)a1 = ∑yi
(∑xi)a0 + (∑xi²)a1 = ∑xiyi
Solving these simultaneously gives
a1 = (n∑xiyi − ∑xi∑yi) / (n∑xi² − (∑xi)²)
a0 = ȳ − a1x̄
where x̄ and ȳ are the means of x and y, respectively.