Nov./Dec. 2023

P-7619                                [Total No. of Pages : 3

[6180]-139
T.E. (Information Technology)
MACHINE LEARNING
(2019 Pattern) (Semester - I) (314443)

Time : 2½ Hours]                                [Max. Marks : 70
Instructions to the candidates:
    3)  Figures to the right side indicate full marks.
    4)  Assume suitable data if necessary.
Q1) a)  Compare univariate and multivariate linear regression. [CO3, L2] [6]

    b)  Describe the tradeoff between bias and variance using the dart example. [CO3, L1] [5]

    c)  Compute the R-Square error of the best fit line "Y = –0.94X + 43.7" for the following data:
        Sr.No.    X     Y
          1       3    40
          2      10    35
          3      11    30
          4      15    32
          5      22    19
          6      22    26
          7      23    24
          8      28    22
          9      28    18
         10      35     6
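For reference, part (c) can be checked with a short Python sketch, assuming the usual definition R² = 1 − SSres/SStot:

```python
# Illustrative sketch for Q1 c): R-Square of the line Y = -0.94X + 43.7.
xs = [3, 10, 11, 15, 22, 22, 23, 28, 28, 35]
ys = [40, 35, 30, 32, 19, 26, 24, 22, 18, 6]

preds = [-0.94 * x + 43.7 for x in xs]
mean_y = sum(ys) / len(ys)

ss_res = sum((y, p) == () or (y - p) ** 2 for y, p in zip(ys, preds))  # residual sum of squares
ss_tot = sum((y - mean_y) ** 2 for y in ys)                           # total sum of squares

r_square = 1 - ss_res / ss_tot
print(round(r_square, 3))   # prints 0.893
```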
OR
Q2) a)  Explain the gradient descent technique for optimization in linear regression with an example. [CO3, L1] [6]

    b)  Explain the cost function used to evaluate the performance of regression. [CO3, L3] [6]

    c)  What is the least square method? Explain the least square method in the context of regression. [CO3, L1] [6]
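A minimal sketch of gradient descent for part (a), on a made-up toy dataset and with mean squared error as the cost (both are illustrative assumptions, not fixed by the question):

```python
# Illustrative sketch for Q2 a): batch gradient descent fitting y = w*x + b.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]          # toy data generated by y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
n = len(xs)
for _ in range(2000):
    # gradients of MSE = (1/n) * sum((w*x + b - y)^2)
    grad_w = (2 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
    grad_b = (2 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))   # prints 2.0 1.0
```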
Q3) a)  Describe Bayesian network in short for learning and inferences. [CO4, L3] [8]

    b)  Explain the naïve Bayes algorithm. For the dataset below, compute the posterior probability of the Banana class using Bayes rule. [CO4, L3] [7]
        Fruit     Yellow   Sweet   Long   Total
        Orange      350     450      0     650
        Banana      400     300    350     400
        Other        50     100     50     150
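The posterior asked for in (b) can be checked with a short sketch, taking the table entries as per-class feature counts and the conditional-independence assumption of naïve Bayes:

```python
# Illustrative sketch for Q3 b): naive Bayes posterior for the Banana class
# given features Yellow, Sweet and Long, using the counts from the table.
counts = {
    "Orange": {"Yellow": 350, "Sweet": 450, "Long": 0,   "Total": 650},
    "Banana": {"Yellow": 400, "Sweet": 300, "Long": 350, "Total": 400},
    "Other":  {"Yellow": 50,  "Sweet": 100, "Long": 50,  "Total": 150},
}
grand_total = sum(c["Total"] for c in counts.values())   # 1200

def score(fruit):
    c = counts[fruit]
    prior = c["Total"] / grand_total
    likelihood = (c["Yellow"] / c["Total"]) * (c["Sweet"] / c["Total"]) * (c["Long"] / c["Total"])
    return prior * likelihood

scores = {f: score(f) for f in counts}
posterior_banana = scores["Banana"] / sum(scores.values())
print(round(posterior_banana, 3))   # prints 0.959
```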
c) Write any four applications of naïve Bayes classifier. [CO4, L1] [2]
OR
Q4) a)  Explain the ID-3 decision tree algorithm in detail with an example. [CO4, L2] [8]

    b)  Explain the following terms:
        i)  Information Gain    ii)  Gini Index    iii)  Entropy
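The three quantities in (b) can be illustrated on a toy binary node (the 9/5 parent counts and the (6,2)/(3,3) split below are made-up figures for illustration only):

```python
# Illustrative sketch for Q4 b): entropy, Gini index and information gain.
import math

def entropy(pos, neg):
    total = pos + neg
    result = 0.0
    for c in (pos, neg):
        if c:
            p = c / total
            result -= p * math.log2(p)
    return result

def gini(pos, neg):
    total = pos + neg
    return 1 - (pos / total) ** 2 - (neg / total) ** 2

parent = entropy(9, 5)                                   # parent node: 9 pos / 5 neg
child = (8 / 14) * entropy(6, 2) + (6 / 14) * entropy(3, 3)  # weighted child entropy
info_gain = parent - child

print(round(parent, 3), round(gini(9, 5), 3), round(info_gain, 3))   # prints 0.94 0.459 0.048
```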
Q5) a)  Explain the K-Nearest Neighbor algorithm with an example. [CO5, L2] [7]

    b)  Suppose we have the following dataset that has various transactions; from this dataset, we need to find the frequent item sets and generate the association rules.

        T1  A, B          T6  B, C
        T2  B, D          T7  A, C
        T3  B, C          T8  A, B, C, E
        T4  A, B, D       T9  A, B, C
        T5  A, C
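The frequent item sets in (b) can be found by simple support counting (brute-force enumeration rather than full Apriori pruning); the minimum support count of 2 below is an assumption, since the question's threshold did not survive extraction:

```python
# Illustrative sketch for Q5 b): frequent item sets by support counting.
from itertools import combinations

transactions = [
    {"A", "B"}, {"B", "D"}, {"B", "C"}, {"A", "B", "D"}, {"A", "C"},
    {"B", "C"}, {"A", "C"}, {"A", "B", "C", "E"}, {"A", "B", "C"},
]
min_support = 2                      # assumed threshold
items = sorted(set().union(*transactions))

frequent = {}
for k in range(1, len(items) + 1):
    for cand in combinations(items, k):
        support = sum(1 for t in transactions if set(cand) <= t)
        if support >= min_support:
            frequent[cand] = support

print(frequent[("A",)], frequent[("A", "B", "C")])   # prints 6 2
```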
    c)  Explain any one of the following terms: [CO5, L2] [3]
        i)  Medoid    OR    ii)  Dendrogram

OR
Q6) a)  Cluster the following eight points (with (x, y) representing locations) into three clusters: A1(2, 10), A2(2, 5), A3(8, 4), A4(5, 8), A5(7, 5), A6(6, 4), A7(1, 2), A8(4, 9). Initial cluster centers are A1(2, 10), A4(5, 8) and A7(1, 2). The distance function between two points a = (x1, y1) and b = (x2, y2) is defined as ρ(a, b) = |x2 – x1| + |y2 – y1|. Use the K-Means Algorithm to find the three cluster centers after the first iteration. [CO5, L3] [9]
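One K-Means iteration for (a) can be sketched directly from the question's data and Manhattan distance:

```python
# Illustrative sketch for Q6 a): one K-Means iteration, Manhattan distance.
points = {"A1": (2, 10), "A2": (2, 5), "A3": (8, 4), "A4": (5, 8),
          "A5": (7, 5), "A6": (6, 4), "A7": (1, 2), "A8": (4, 9)}
centers = [(2, 10), (5, 8), (1, 2)]            # initial centers A1, A4, A7

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

# assignment step: each point joins its nearest center
clusters = [[] for _ in centers]
for p in points.values():
    nearest = min(range(len(centers)), key=lambda i: manhattan(p, centers[i]))
    clusters[nearest].append(p)

# update step: each center moves to its cluster mean
new_centers = [
    (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
    for c in clusters
]
print(new_centers)   # prints [(2.0, 10.0), (6.0, 6.0), (1.5, 3.5)]
```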
    b)  Explain the following terms:
        1)  Support    2)  Confidence    3)  Lift

    c)  Explain any one of the following distance metrics with an example. [CO5, L1] [3]
        1)  Euclidean Distance    2)  Manhattan Distance    3)  Hamming Distance
Q7) a)  Explain the perceptron learning algorithm. Describe briefly how learning takes place in it.
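A minimal sketch of the perceptron learning rule for (a); the AND truth table, learning rate and epoch count are illustrative assumptions:

```python
# Illustrative sketch for Q7 a): perceptron learning rule on the AND table.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 0.1

for _ in range(20):                      # epochs over the training set
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out               # weights change only on mistakes
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)   # prints [0, 0, 0, 1]
```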
    b)  Explain the Sigmoid, Tanh and ReLU activation functions in detail. [CO6, L2] [9]
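The three activation functions in (b) can be written out in plain Python:

```python
# Illustrative sketch for Q7 b): Sigmoid, Tanh and ReLU.
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))    # squashes input to (0, 1)

def tanh(x):
    return math.tanh(x)              # squashes input to (-1, 1), zero-centred

def relu(x):
    return max(0.0, x)               # passes positives, zeroes negatives

print(sigmoid(0), tanh(0), relu(-3), relu(2))   # prints 0.5 0.0 0.0 2
```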
OR
Q8) a)  Explain the simulation of an AND gate using the McCulloch-Pitts neuron. What
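The AND-gate simulation in (a) reduces to a fixed-weight threshold unit; unit weights with threshold 2 is a standard choice (an assumption here, not stated by the question):

```python
# Illustrative sketch for Q8 a): AND gate as a McCulloch-Pitts neuron with
# unit weights; it fires only when both excitatory inputs are active.
def mcp_and(x1, x2, threshold=2):
    return 1 if x1 + x2 >= threshold else 0

table = [(x1, x2, mcp_and(x1, x2)) for x1 in (0, 1) for x2 in (0, 1)]
print(table)   # prints [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
```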
    b)  Explain what Deep Learning is and its different architectures. State the