ML Assignment 5
January 3, 2025
Given a dataset credit.csv with imbalanced class distributions and a high-dimensional feature space,
discuss the challenges and considerations in using decision trees for classification. Propose strategies
for mitigating the impact of class imbalance, and for feature selection, to improve model robustness and
generalisation performance. You are also provided with metadata; please read it before starting the
implementation.
[1]: import pandas as pd

     # Load the provided dataset and inspect the first rows, summary statistics and schema
     url = "credit.csv"   # path to the provided dataset (assumed local copy)
     data = pd.read_csv(url)
     print(data.head())
     print(data.describe())
     print(data.info())
   Time        V1        V2        V3        V4        V5        V6        V7  \
0   0.0 -1.359807 -0.072781  2.536347  1.378155 -0.338321  0.462388  0.239599
1   0.0  1.191857  0.266151  0.166480  0.448154  0.060018 -0.082361 -0.078803
2   1.0 -1.358354 -1.340163  1.773209  0.379780 -0.503198  1.800499  0.791461
3   1.0 -0.966272 -0.185226  1.792993 -0.863291 -0.010309  1.247203  0.237609
4   2.0 -1.158233  0.877737  1.548718  0.403034 -0.407193  0.095921  0.592941

...

        V26       V27       V28  Amount  Class
1  0.125895 -0.008983  0.014724    2.69      0
2 -0.139097 -0.055353 -0.059752  378.66      0
3 -0.221929  0.062723  0.061458  123.50      0
4  0.502292  0.219422  0.215153   69.99      0

[5 rows x 31 columns]
                Time            V1            V2            V3            V4  \
count  284807.000000  2.848070e+05  2.848070e+05  2.848070e+05  2.848070e+05
mean    94813.859575  1.168375e-15  3.416908e-16 -1.379537e-15  2.074095e-15
std     47488.145955  1.958696e+00  1.651309e+00  1.516255e+00  1.415869e+00
min         0.000000 -5.640751e+01 -7.271573e+01 -4.832559e+01 -5.683171e+00
25%     54201.500000 -9.203734e-01 -5.985499e-01 -8.903648e-01 -8.486401e-01
50%     84692.000000  1.810880e-02  6.548556e-02  1.798463e-01 -1.984653e-02
75%    139320.500000  1.315642e+00  8.037239e-01  1.027196e+00  7.433413e-01
max    172792.000000  2.454930e+00  2.205773e+01  9.382558e+00  1.687534e+01

                 V5            V6            V7            V8            V9  \
count  2.848070e+05  2.848070e+05  2.848070e+05  2.848070e+05  2.848070e+05
mean   9.604066e-16  1.487313e-15 -5.556467e-16  1.213481e-16 -2.406331e-15
std    1.380247e+00  1.332271e+00  1.237094e+00  1.194353e+00  1.098632e+00
min   -1.137433e+02 -2.616051e+01 -4.355724e+01 -7.321672e+01 -1.343407e+01
25%   -6.915971e-01 -7.682956e-01 -5.540759e-01 -2.086297e-01 -6.430976e-01
50%   -5.433583e-02 -2.741871e-01  4.010308e-02  2.235804e-02 -5.142873e-02
75%    6.119264e-01  3.985649e-01  5.704361e-01  3.273459e-01  5.971390e-01
max    3.480167e+01  7.330163e+01  1.205895e+02  2.000721e+01  1.559499e+01

...

               Class
count  284807.000000
mean        0.001727
std         0.041527
min         0.000000
25%         0.000000
50%         0.000000
75%         0.000000
max         1.000000

[8 rows x 31 columns]
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 284807 entries, 0 to 284806
Data columns (total 31 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Time 284807 non-null float64
1 V1 284807 non-null float64
2 V2 284807 non-null float64
3 V3 284807 non-null float64
4 V4 284807 non-null float64
5 V5 284807 non-null float64
6 V6 284807 non-null float64
7 V7 284807 non-null float64
8 V8 284807 non-null float64
9 V9 284807 non-null float64
10 V10 284807 non-null float64
11 V11 284807 non-null float64
12 V12 284807 non-null float64
13 V13 284807 non-null float64
14 V14 284807 non-null float64
15 V15 284807 non-null float64
16 V16 284807 non-null float64
17 V17 284807 non-null float64
18 V18 284807 non-null float64
19 V19 284807 non-null float64
20 V20 284807 non-null float64
21 V21 284807 non-null float64
22 V22 284807 non-null float64
23 V23 284807 non-null float64
24 V24 284807 non-null float64
25 V25 284807 non-null float64
26 V26 284807 non-null float64
27 V27 284807 non-null float64
28 V28 284807 non-null float64
29 Amount 284807 non-null float64
30 Class 284807 non-null int64
dtypes: float64(30), int64(1)
memory usage: 67.4 MB
None
[2]: import matplotlib.pyplot as plt
     import seaborn as sns

     # Class distribution: 492 frauds out of 284,807 transactions (about 0.17%)
     print(data['Class'].value_counts())
Class
0 284315
1 492
Name: count, dtype: int64
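matplotlib and seaborn are imported in the cell above, so the class distribution was presumably also
visualised; the plot itself is not reproduced here. A minimal sketch of such a plot (chart style and the
log scale are assumptions):

     import matplotlib.pyplot as plt
     import seaborn as sns

     # Bar chart of legitimate (0) vs fraudulent (1) transactions
     sns.countplot(x='Class', data=data)
     plt.yscale('log')   # log scale, otherwise the 492 fraud cases are barely visible
     plt.title('Class distribution in credit.csv')
     plt.show()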
[3]: from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
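Only the imports from this cell are shown; the split and scaling code itself does not appear. The test-set
support of 85,443 in the classification report further down corresponds to a 70/30 stratified split of the
284,807 rows, so a minimal sketch under that assumption looks as follows (variable names and random_state
are illustrative):

     from sklearn.model_selection import train_test_split
     from sklearn.preprocessing import StandardScaler

     # Separate features and target
     X = data.drop('Class', axis=1)
     y = data['Class']

     # Hold out 30% for testing, stratified so the 492 fraud cases appear in both splits
     X_train, X_test, y_train, y_test = train_test_split(
         X, y, test_size=0.3, stratify=y, random_state=42)

     # Fit the scaler on the training data only, to avoid information leakage into the test set
     scaler = StandardScaler()
     X_train_scaled = scaler.fit_transform(X_train)
     X_test_scaled = scaler.transform(X_test)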
[4]: from imblearn.over_sampling import SMOTE
Class
0 199020
1 199020
Name: count, dtype: int64
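The balanced counts above (199,020 per class, i.e. roughly 0.7 × 284,315 legitimate transactions) are
consistent with applying SMOTE to the 70% training split only. A minimal sketch, reusing the names from the
split sketch above (which are assumptions):

     from imblearn.over_sampling import SMOTE

     # Oversample only the training portion; the test set keeps its natural imbalance
     smote = SMOTE(random_state=42)
     X_train_res, y_train_res = smote.fit_resample(X_train_scaled, y_train)
     print(y_train_res.value_counts())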
print(feature_importance)
V14 0.773470
V4 0.058559
V12 0.020056
V10 0.014436
V8 0.013011
V13 0.010029
V7 0.007680
V1 0.006985
V11 0.006978
Time 0.006907
V23 0.006864
V19 0.006841
V6 0.006788
V17 0.005915
V26 0.005891
V3 0.005694
V21 0.005551
V18 0.005128
V24 0.005046
V9 0.003899
V25 0.003670
V20 0.003646
V16 0.003503
V22 0.003228
V5 0.002949
V15 0.002785
Amount 0.001979
V27 0.001449
V2 0.000643
V28 0.000419
dtype: float64
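The code that produced this ranking, and the `clf` and `X_test_selected` used in the next cell, is not
shown. The dominance of V14 (≈0.77) is typical of a single decision tree fitted on the SMOTE-resampled
training data. A minimal sketch of one way to obtain the ranking and select features, reusing names from
the sketches above; the choice of k = 10 features and max_depth = 10 are assumptions:

     import pandas as pd
     from sklearn.tree import DecisionTreeClassifier

     # Rank features by impurity-based importance from a tree fitted on the resampled training data
     tree_full = DecisionTreeClassifier(random_state=42)
     tree_full.fit(X_train_res, y_train_res)
     feature_importance = pd.Series(tree_full.feature_importances_,
                                    index=X.columns).sort_values(ascending=False)
     print(feature_importance)

     # Keep the top-k features (k = 10 is an assumption) and retrain a depth-limited tree
     top_features = feature_importance.head(10).index
     X_train_selected = pd.DataFrame(X_train_res, columns=X.columns)[top_features]
     X_test_selected = pd.DataFrame(X_test_scaled, columns=X.columns)[top_features]

     clf = DecisionTreeClassifier(max_depth=10, random_state=42)   # depth cap to curb overfitting
     clf.fit(X_train_selected, y_train_res)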
# Make predictions
y_pred = clf.predict(X_test_selected)
Accuracy: 0.9964654799105837
Confusion Matrix:
[[85035 260]
[ 42 106]]
Classification Report:
              precision    recall  f1-score   support

           0       1.00      1.00      1.00     85295
           1       0.29      0.72      0.41       148

    accuracy                           1.00     85443
   macro avg       0.64      0.86      0.71     85443
weighted avg       1.00      1.00      1.00     85443
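The metric printouts above were presumably generated with scikit-learn's standard utilities; a minimal
sketch:

     from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

     print("Accuracy:", accuracy_score(y_test, y_pred))
     print("Confusion Matrix:")
     print(confusion_matrix(y_test, y_pred))
     print("Classification Report:")
     print(classification_report(y_test, y_pred))

Note that despite the 99.6% accuracy, precision on the fraud class is only about 0.29 (260 false positives
against 106 true positives) at a recall of 0.72, which is why the per-class and macro-averaged metrics,
rather than raw accuracy, should guide model selection on this imbalanced dataset.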