EXPERIMENT ML

The document outlines six experiments implementing various regression and classification techniques using Python and Spyder IDE. Each experiment includes code snippets for Linear Regression, Multiple Regression, Logistic Regression, Support Vector Machine (SVM), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA), along with relevant datasets and performance metrics. The experiments demonstrate data handling, model training, predictions, and visualizations for different machine learning algorithms.

Uploaded by sawantlaxmi91

EXPERIMENT NUMBER: 01

Title: To implement Linear Regression.


Tools/Software used: Python, Spyder IDE
Experiment/Program:
import pandas as pd
dataset = pd.read_csv('Salary_Data.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 1].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=0)
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_pred = regressor.predict(X_test)
import matplotlib.pyplot as plt
plt.scatter(X_train, y_train, color='red')
plt.plot(X_train, regressor.predict(X_train), color='blue')
plt.title('Salary vs Experience (Training set)')
plt.xlabel('Years of Experience')
plt.ylabel('Salary')
plt.show()
plt.scatter(X_test, y_test, color='red')
plt.plot(X_train, regressor.predict(X_train), color='blue')
plt.title('Salary vs Experience (Test set)')
plt.xlabel('Years of Experience')
plt.ylabel('Salary')
plt.show()
new_salary_pred = regressor.predict([[15]])
print('The predicted salary of a person with 15 years of experience is ', new_salary_pred)
new_salary_pred = regressor.predict([[20]])
print('The predicted salary of a person with 20 years of experience is ', new_salary_pred)
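The program above depends on Salary_Data.csv. The same fit/predict steps can be sanity-checked without the file on made-up, exactly linear data (all numbers below are hypothetical, not from the dataset):

```python
# Self-contained check of the LinearRegression steps above, with
# hypothetical salary figures instead of Salary_Data.csv.
import numpy as np
from sklearn.linear_model import LinearRegression

years = np.array([[1.0], [3.0], [5.0], [7.0], [9.0]])  # feature matrix, shape (5, 1)
salary = 25000 + 9000 * years.ravel()                  # exactly linear target

reg = LinearRegression()
reg.fit(years, salary)

# On exactly linear data the model recovers the slope and intercept,
# so predict([[15]]) returns 25000 + 9000 * 15 = 160000.
pred_15 = reg.predict([[15.0]])[0]
print(reg.coef_[0], reg.intercept_, pred_15)
```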
Output :
EXPERIMENT NUMBER: 02

Title: To implement Multiple Regression


Tools/Software used: Python, Spyder IDE
Experiment/Program:
import pandas as pd
dataset = pd.read_csv("weight-height.csv")
dataset.info()
dataset.describe()
dataset.isnull().sum()
dataset['Gender'].replace('Female', 0, inplace=True)
dataset['Gender'].replace('Male', 1, inplace=True)
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 2].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X_train, y_train)
lin_pred = lin_reg.predict(X_test)
from sklearn import metrics
print('R square = ', metrics.r2_score(y_test, lin_pred))
print('Mean squared Error = ', metrics.mean_squared_error(y_test, lin_pred))
print('Mean absolute Error = ', metrics.mean_absolute_error(y_test, lin_pred))
my_weight_pred = lin_reg.predict([[1, 67]])
print('My predicted weight = ', my_weight_pred)
my_weight_pred = lin_reg.predict([[0, 67]])
print('My predicted weight = ', my_weight_pred)
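The three metrics printed above can be verified by hand on tiny hypothetical arrays, without weight-height.csv:

```python
# Hand-check of r2_score, mean_squared_error, and mean_absolute_error
# on hypothetical values (not from the dataset).
import numpy as np
from sklearn import metrics

y_true = np.array([3.0, 5.0, 7.0])
y_hat  = np.array([2.0, 5.0, 8.0])

mse = metrics.mean_squared_error(y_true, y_hat)   # mean of squared residuals
mae = metrics.mean_absolute_error(y_true, y_hat)  # mean of |residuals|
r2  = metrics.r2_score(y_true, y_hat)             # 1 - SS_res / SS_tot

# Residuals are [1, 0, -1]: MSE = 2/3, MAE = 2/3.
# SS_res = 2, SS_tot = 8 (mean of y_true is 5), so R^2 = 1 - 2/8 = 0.75.
print(mse, mae, r2)
```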
Output :
EXPERIMENT NUMBER: 03

Title: To implement Logistic Regression


Tools/Software used: Python, Spyder IDE
Experiment/Program:
import pandas as pd
dataset = pd.read_csv("iphone_purchase_records.csv")
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 3].values
from sklearn.preprocessing import LabelEncoder
labelEncoder_gender = LabelEncoder()
X[:, 0] = labelEncoder_gender.fit_transform(X[:, 0])
import numpy as np
X = np.vstack(X[:, :]).astype(np.float64)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state=0, solver="liblinear")
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn import metrics
cm = metrics.confusion_matrix(y_test, y_pred)
print(cm)
accuracy = metrics.accuracy_score(y_test, y_pred)
print("Accuracy score:", accuracy)
precision = metrics.precision_score(y_test, y_pred)
print("Precision score:", precision)
recall = metrics.recall_score(y_test, y_pred)
print("Recall score:", recall)
x1 = sc.transform([[1, 21, 40000]])
x2 = sc.transform([[1, 21, 80000]])
x3 = sc.transform([[0, 21, 40000]])
x4 = sc.transform([[0, 21, 80000]])
x5 = sc.transform([[1, 41, 40000]])
x6 = sc.transform([[1, 41, 80000]])
x7 = sc.transform([[0, 41, 40000]])
x8 = sc.transform([[0, 41, 80000]])
print("Male aged 21 making $40k will buy iPhone:", classifier.predict(x1))
print("Male aged 21 making $80k will buy iPhone:", classifier.predict(x2))
print("Female aged 21 making $40k will buy iPhone:", classifier.predict(x3))
print("Female aged 21 making $80k will buy iPhone:", classifier.predict(x4))
print("Male aged 41 making $40k will buy iPhone:", classifier.predict(x5))
print("Male aged 41 making $80k will buy iPhone:", classifier.predict(x6))
print("Female aged 41 making $40k will buy iPhone:", classifier.predict(x7))
print("Female aged 41 making $80k will buy iPhone:", classifier.predict(x8))
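The confusion matrix, accuracy, precision, and recall used above can be checked by hand on toy labels (hypothetical values, independent of the dataset):

```python
# Hand-check of the classification metrics used in this experiment.
import numpy as np
from sklearn import metrics

y_true = np.array([1, 1, 1, 0, 0, 0])
y_hat  = np.array([1, 1, 0, 0, 0, 1])

# Rows of cm are true classes, columns are predicted classes:
# TN=2, FP=1 (first row), FN=1, TP=2 (second row).
cm = metrics.confusion_matrix(y_true, y_hat)
precision = metrics.precision_score(y_true, y_hat)  # TP / (TP + FP) = 2/3
recall    = metrics.recall_score(y_true, y_hat)     # TP / (TP + FN) = 2/3
accuracy  = metrics.accuracy_score(y_true, y_hat)   # (TP + TN) / total = 4/6
print(cm, precision, recall, accuracy)
```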

Output :
EXPERIMENT NUMBER: 04

Title: To implement SVM

Tools/Software used: Python, Spyder IDE
Experiment/Program:
import pandas as pd
dataset = pd.read_csv("iphone_purchase_records.csv")
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 3].values
from sklearn.preprocessing import LabelEncoder
labelEncoder_gender = LabelEncoder()
X[:, 0] = labelEncoder_gender.fit_transform(X[:, 0])
import numpy as np
X = np.vstack(X[:, :]).astype(np.float64)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
from sklearn.preprocessing import StandardScaler
ss_X = StandardScaler()
X_train = ss_X.fit_transform(X_train)
X_test = ss_X.transform(X_test)
from sklearn.svm import SVC
classifier = SVC(kernel="linear", random_state=0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
from sklearn import metrics
cm = metrics.confusion_matrix(y_test, y_pred)
print(cm)
accuracy = metrics.accuracy_score(y_test, y_pred)
print("Accuracy score:", accuracy)
precision = metrics.precision_score(y_test, y_pred)
print("Precision score:", precision)
recall = metrics.recall_score(y_test, y_pred)
print("Recall score:", recall)
Output :
EXPERIMENT NUMBER: 05

Title: To implement PCA


Tools/Software used: Python, Spyder IDE
Experiment/Program:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('Wine.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state=0)
classifier.fit(X_train, y_train)
from sklearn.metrics import confusion_matrix, accuracy_score
y_pred = classifier.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print(cm)
print(accuracy_score(y_test, y_pred))
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green', 'blue'))(i), label=j)
plt.title('Logistic Regression (Training set)')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.legend()
plt.show()
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green', 'blue'))(i), label=j)
plt.title('Logistic Regression (Test set)')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.legend()
plt.show()
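What PCA(n_components=2) does above can be checked on small synthetic data (hypothetical values, independent of Wine.csv): correlated columns collapse onto a few principal axes, and explained_variance_ratio_ reports how much variance each kept axis captures.

```python
# Sketch of PCA on 3-D points where column 2 is almost exactly 2 * column 1,
# so most variance lies along one direction. Values are hypothetical.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
base = rng.normal(size=(100, 1))
X_toy = np.hstack([base,
                   2 * base + 0.01 * rng.normal(size=(100, 1)),  # ~2x the first column
                   rng.normal(size=(100, 1))])                   # independent noise

pca = PCA(n_components=2)
X_proj = pca.fit_transform(X_toy)   # projected data, shape (100, 2)
print(X_proj.shape, pca.explained_variance_ratio_)
```

The ratios are sorted in decreasing order and sum to at most 1; with only 2 of 3 components kept, the remainder is the variance discarded by the projection.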

Output :
EXPERIMENT NUMBER: 06

Title: To implement LDA


Tools/Software used: Python, Spyder IDE
Experiment/Program:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib.axes._axes import log as matplotlib_axes_logger
matplotlib_axes_logger.setLevel('ERROR')
dataset = pd.read_csv('Wine.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X_train = lda.fit_transform(X_train, y_train)
X_test = lda.transform(X_test)
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state=0)
classifier.fit(X_train, y_train)
from sklearn.metrics import confusion_matrix, accuracy_score
y_pred = classifier.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print(cm)
print(accuracy_score(y_test, y_pred))
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green', 'blue'))(i), label=j)
plt.title('Logistic Regression (Training set)')
plt.xlabel('LD1')
plt.ylabel('LD2')
plt.legend()
plt.show()
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green', 'blue'))(i), label=j)
plt.title('Logistic Regression (Test set)')
plt.xlabel('LD1')
plt.ylabel('LD2')
plt.legend()
plt.show()
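Unlike PCA, LDA is supervised and can produce at most (number of classes - 1) discriminants, which is why n_components=2 suits the 3-class Wine data. A minimal sketch on synthetic 3-class data (all values hypothetical):

```python
# LDA on 3 well-separated hypothetical classes in 4 dimensions:
# with 3 classes, at most 2 linear discriminants exist.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

rng = np.random.default_rng(0)
# 30 points per class, class means at 0, 3, and 6 in every dimension.
X_toy = np.vstack([rng.normal(loc=c, size=(30, 4)) for c in (0.0, 3.0, 6.0)])
y_toy = np.repeat([0, 1, 2], 30)

lda = LDA(n_components=2)              # 3 classes -> at most 2 components
X_ld = lda.fit_transform(X_toy, y_toy) # shape (90, 2)
print(X_ld.shape)
```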
Output :
