Week-7 (SWI)
Solve With Instructor
Uploaded by
Meer Hassan
(Shared)(Term-3) MLP Week-7 SWI.ipynb - Colaboratory

```python
from google.colab import files
uploaded = files.upload()
```

Saving faces_target.npy to faces_target (1).npy

Dataset-1

Face recognition: identifying and verifying people in a photograph by their face.

Congratulations on your new job! You see many new faces in your new office. Sometimes it is disrespectful not to address a person by his or her name. Hence, the data scientist in you plans to build a model that will help you identify all your new colleagues' names from their photographs.

There are ten different images of each of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions (open / closed eyes, smiling / not smiling) and facial details (glasses / no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement). Each image is quantized to 256 grey levels and stored as unsigned 8-bit integers; the loader converts these to floating-point values on the interval [0, 1]. The "target" for this database is an integer from 0 to 39 indicating the identity of the person pictured; note, however, that there are only 10 examples per class.

Image credit: AT&T Laboratories Cambridge

Step-1: Load the files given below in Colab.
Target_data: https://drive.google.com/file/d/1_dZ2-ea3N32DKyr3sZisnandpatFy2BA/view?usp=sharing
Features_data: https://drive.google.com/file/d/1LAjKw0O73XwSeNvmUgrHmwBFclxVit0q/view?usp=sharing
Step-2: Check the shape of your dataset.
Step-3: Write code to plot the first 40 images. Comment on your observation.
Step-4: Write code to plot all images whose index difference is 10 (like 10, 20, 30).
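The grey-level storage described above can be sketched in a couple of lines; this is a toy array standing in for a real face image, showing the uint8-to-[0, 1] conversion the loader performs:

```python
import numpy as np

# Toy 1x4 "image": 256 grey levels stored as unsigned 8-bit integers,
# converted to floating point on the interval [0, 1] by dividing by 255.
raw = np.array([[0, 64, 128, 255]], dtype=np.uint8)
scaled = raw.astype(np.float32) / 255.0

print(scaled.min(), scaled.max())  # 0.0 1.0
```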
Step-5: Use train-test split to split the dataset, keeping random state = 10 and test size = 0.2.
Step-6: Check the shape of X_train, Y_train, X_test, Y_test.
Step-7: Train your model using Logistic regression, Softmax regression and KNN classifier estimators.
Step-8: Create a dataframe which shows all the estimators with their corresponding accuracy.
Step-9: Plot the confusion matrix for the result obtained using the KNN classifier.

Based on the above steps, please answer the following questions:
Que-1) What is the shape of your dataset?
Que-2) How many labels are there in the dataset?
Que-3) How many examples are there in each class?
Que-4) What is the shape of X_train and Y_train?
Que-5) Convert x_train and x_test shape into (a, b) type.
Que-7) What is the accuracy you got with logistic regression and KNN classification?
Que-8) What is the f1_score you got with softmax regression?

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline

import warnings
warnings.filterwarnings('ignore')

?train_test_split
```

```python
from google.colab import files
uploaded = files.upload()
```

Saving faces.npy to faces (1).npy
Saving faces_target.npy to faces_target (1).npy

```python
pics = np.load("faces.npy")
pics.shape
```

(400, 64, 64)
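Que-3 can be answered directly from the labels array. A minimal sketch, using a stand-in array with the same structure (40 subjects, 10 images each) since faces_target.npy has to be downloaded first:

```python
import numpy as np

# Stand-in for labels = np.load("faces_target.npy"): 40 subjects x 10 images.
labels = np.repeat(np.arange(40), 10)

classes, counts = np.unique(labels, return_counts=True)
print(len(classes))                # 40 distinct labels
print(counts.min(), counts.max())  # 10 10 -> every class has exactly 10 examples
```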
```python
pics[::1]
```

array([[[0.3181818 , 0.3553719 , ..., 0.30991736],
        ...,
        [0.45454547, 0.14876033, ..., 0.15289256]],
       ...], dtype=float32)

```python
labels = np.load("faces_target.npy")
labels.shape
```

(400,)

```python
labels
```

array([ 0,  0,  0, ..., 39, 39, 39], dtype=int32)

```python
fig = plt.figure(figsize=(20, 10))
columns = 10
rows = 4
for j, i in enumerate(range(1, 400, 10), start=1):
    img = pics[i-1, :, :]
    fig.add_subplot(rows, columns, j)
    plt.imshow(img, cmap=plt.get_cmap("viridis"))  # colormap possible values = default, viridis
plt.show()
```

```python
fig = plt.figure(figsize=(20, 10))
columns = 8
rows = 5
for i in range(1, columns*rows + 1):
    img = pics[10*(i-1), :, :]
    fig.add_subplot(rows, columns, i)
    plt.imshow(img, cmap=plt.get_cmap('gray'))  # colormap possible values = default, viridis, gray
    plt.title("Colleague-{}".format(i), fontsize=16)
    plt.axis('off')
plt.suptitle("All the 40 people's face data we have plotted here", fontsize=25)
plt.show()
```

[Figure: a 5 x 8 grid of grayscale face images, one per subject, panels titled Colleague-1 through Colleague-40, under the suptitle "All the 40 people's face data we have plotted here".]
```python
X = pics                   # store images in X
Y = labels.reshape(-1, 1)  # store labels in Y

x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=10)
x_train.shape, x_test.shape, y_train.shape, y_test.shape
```

((320, 64, 64), (80, 64, 64), (320, 1), (80, 1))

```python
from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=10)
print("x_train: ", x_train.shape)
print("x_test: ", x_test.shape)
print("y_train: ", y_train.shape)
print("y_test: ", y_test.shape)
```

x_train:  (320, 64, 64)
x_test:  (80, 64, 64)
y_train:  (320, 1)
y_test:  (80, 1)

```python
x_train = x_train.reshape(x_train.shape[0], x_train.shape[1]*x_train.shape[2])
x_test = x_test.reshape(x_test.shape[0], x_test.shape[1]*x_test.shape[2])
print("x_train: ", x_train.shape)
print("x_test: ", x_test.shape)
print("y_train: ", y_train.shape)
print("y_test: ", y_test.shape)
```

x_train:  (320, 4096)
x_test:  (80, 4096)
y_train:  (320, 1)
y_test:  (80, 1)

```python
Estimator_list = []
Accuracy_list = []
```

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

lr = LogisticRegression(max_iter=1000)
lr.fit(x_train, y_train)
model_accuracy = round(lr.score(x_test, y_test)*100, 2)
print("LogReg model accuracy is %", model_accuracy)

Estimator_list.append("Logistic Regression")
Accuracy_list.append(model_accuracy)
```

LogReg model accuracy is % 96.25

```python
from sklearn.svm import SVC

classifier = SVC(kernel='linear', random_state=0)
classifier.fit(x_train, y_train)
model_accuracy = round(classifier.score(x_test, y_test)*100, 2)
print("SVM model accuracy is %", model_accuracy)

Estimator_list.append("Support vector classifier")
Accuracy_list.append(model_accuracy)
```

SVM model accuracy is % 95.0

```python
lr = LogisticRegression(max_iter=1000, multi_class="multinomial", solver="sag")
lr.fit(x_train, y_train)
softmax_accuracy = round(lr.score(x_test, y_test)*100, 2)
print("softmax_accuracy is %", softmax_accuracy)

Estimator_list.append("softmax Regression")
Accuracy_list.append(softmax_accuracy)
```

softmax_accuracy is % 96.25

```python
?LogisticRegression

y_pred = lr.predict(x_test)
ans = f1_score(y_test, y_pred, average='macro')
print("f1 score for softmax regression is:", ans)
```

f1 score for softmax regression is: 0.9626984126984125

```python
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=1)  # n_neighbors=1 gives the best result for this dataset
knn.fit(x_train, y_train)
knn_accuracy = round(knn.score(x_test, y_test)*100, 2)
print("knn_accuracy is %", knn_accuracy)

Estimator_list.append("KNN")
Accuracy_list.append(knn_accuracy)
```

knn_accuracy is % 92.5

```python
from sklearn.model_selection import GridSearchCV

params = {'n_neighbors': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]}
knn = KNeighborsClassifier()
model = GridSearchCV(knn, params, cv=5)
model.fit(x_train, y_train)
model.best_params_
```

{'n_neighbors': 1}

```python
df = pd.DataFrame({'METHOD': Estimator_list, 'ACCURACY (%)': Accuracy_list})
df = df.sort_values(by=['ACCURACY (%)'])
df = df.reset_index(drop=True)
df.head()
```

   METHOD               ACCURACY (%)
0  KNN                  92.50
1  Logistic Regression  96.25
2  softmax Regression   96.25

Dataset-2 (Insurance decisioning)

The insurance industry has been variously described as laggard, sloth-like and conservative in its responses to the digital disruption that is underway. But not anymore. Data scientists like you will help insurers make the best use of new-age technologies to constantly innovate, ramp up customer satisfaction, stay ahead in the race, and provide personalized products and better service. Detecting risks early in the process enables insurers to make better use of underwriters' time and gives them a huge competitive advantage. Use appropriate machine learning algorithms to predict the future medical expenses of individuals, helping your company decide on the insurance premiums to charge, based on data collected from clients.

Step-1: Load the file given below in Colab.
(Link: https://drive.google.com/file/d/11vnsENE26qOH5y2UBEZcFG-AdF3VsKtZt/view?usp=sharing)
Step-2: Check the shape and info of your dataset.
Step-3: Check if any duplicate sample is there in the dataset.
Step-4: Remove duplicate data and perform one-hot encoding for categorical data.
Step-5: Use train-test split to split the dataset, keeping random state = 10 and test size = 0.2.
Step-6: Check the shape of X_train, Y_train, X_test, Y_test.
Step-7: Train your model using the KNN regressor estimator.
Step-8: Create a dataframe which shows each k value with its corresponding error.
Step-9: Plot the error curve and answer which k value will give the best result.
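Step-9 of Dataset-1 asks for the confusion matrix of the KNN classifier, which the notebook never shows. A minimal sketch, using sklearn's bundled digits data as a stand-in since faces.npy has to be downloaded; the flattened x_train/x_test from the face split would be used in its place:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

# Stand-in data: 10-class digit images, flattened to feature vectors.
X, y = load_digits(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=10)

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(x_train, y_train)
cm = confusion_matrix(y_test, knn.predict(x_test))
print(cm.shape)  # (10, 10): one row and column per class
```

sklearn.metrics.ConfusionMatrixDisplay(cm).plot() renders the same matrix graphically.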
```python
from google.colab import files
uploaded = files.upload()
```

Saving medical_expnse_pred.csv to medical_expnse_pred (1).csv

```python
data = pd.read_csv("/content/medical_expnse_pred.csv")
data.head(5)
```

   age     sex   bmi  children smoker     region  expenses
0   19  female  27.9         0    yes  southwest  16884.92
1   18    male  33.8         1     no  southeast   1725.55
2   28    male  33.0         3     no  southeast   4449.46
3   33    male  22.7         0     no  northwest  21984.47
4   32    male  28.9         0     no  northwest   3866.86

```python
data.info()
```

RangeIndex: 1338 entries, 0 to 1337
Data columns (total 7 columns):
 #  Column    Non-Null Count  Dtype
 0  age       1338 non-null   int64
 1  sex       1338 non-null   object
 2  bmi       1338 non-null   float64
 3  children  1338 non-null   int64
 4  smoker    1338 non-null   object
 5  region    1338 non-null   object
 6  expenses  1338 non-null   float64
dtypes: float64(2), int64(2), object(3)
memory usage: 73.3+ KB

```python
data.describe()
```

               age          bmi     children      expenses
count  1338.000000  1338.000000  1338.000000   1338.000000
mean     39.207025    30.665471     1.094918  13270.422414
std      14.049960     6.098382     1.205493  12110.011240
min      18.000000    16.000000     0.000000   1121.870000
25%      27.000000    26.300000     0.000000   4740.287500
50%      39.000000    30.400000     1.000000   9382.030000
75%      51.000000    34.700000     2.000000  16639.915000
max      64.000000    53.100000     5.000000  63770.430000

```python
data['sex'].unique()
```

array(['female', 'male'], dtype=object)

```python
data['smoker'].value_counts()
```

no     1064
yes     274
Name: smoker, dtype: int64

```python
data[data.duplicated()]
```

     age   sex   bmi  children smoker     region  expenses
581   19  male  30.6         0     no  northwest   1639.56

```python
data2 = data.copy()
data2.drop_duplicates(inplace=True)
data2.shape
```

(1337, 7)

```python
cat_cols = data2.select_dtypes(exclude="number")
cat_cols.columns
```

Index(['sex', 'smoker', 'region'], dtype='object')

```python
num_cols = data2.select_dtypes(include="number")
num_cols
```

(1337 rows x 4 columns: age, bmi, children, expenses)

```python
onehot_cat_cols = pd.get_dummies(cat_cols)
onehot_cat_cols.head()
```

(one indicator column per category: sex_female, sex_male, smoker_no, smoker_yes, region_northeast, region_northwest, region_southeast, region_southwest)

```python
df_final = pd.concat([num_cols, onehot_cat_cols], sort=True, axis=1)
df_final.head(2)
```

   age   bmi  children  expenses  sex_female  sex_male  smoker_no  smoker_yes  ...
0   19  27.9         0  16884.92           1         0          0           1  ...
1   18  33.8         1   1725.55           0         1          1           0  ...

```python
X = df_final.drop('expenses', axis=1)
y = df_final['expenses']

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
```

((1002, 11), (1002,), (335, 11), (335,))

```python
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error
from math import sqrt, ceil
import matplotlib.pyplot as plt
%matplotlib inline

sqrt(data2.shape[0])
```

36.565010597564445

```python
l = ceil(sqrt(data2.shape[0]))  # closest int value
l
```

37

```python
rmse = []
for k in range(0, l+1):
    k = k+1
    model = KNeighborsRegressor(n_neighbors=k)
    model.fit(X_train, y_train)
    y_test_pred = model.predict(X_test)
    rmse_error = sqrt(mean_squared_error(y_test, y_test_pred))
    rmse.append(rmse_error)
    print("RMSE value for k=", k, 'is:', rmse_error)
```

RMSE value for k= 1 is: 11786.012159320124
RMSE value for k= 2 is: 10607.706606488884
RMSE value for k= 3 is: 10457.284072294755
RMSE value for k= 4 is: 10231.018956732843
RMSE value for k= 5 is: 10479.92545821663
...
RMSE value for k= 38 is: 11454.323764895762

```python
min(rmse)
```

10231.018956732843

```python
from sklearn.model_selection import GridSearchCV

params = {'n_neighbors': [2, 3, 4, 5, 6, 7, 8, 9]}
knn = KNeighborsRegressor()
model = GridSearchCV(knn, params, cv=5, scoring="neg_mean_squared_error")
model.fit(X_train, y_train)
model.best_params_
```

```python
error_curve = pd.DataFrame(rmse, columns=['error'])
error_curve.head()
```

          error
0  11786.012159
1  10607.706606
2  10457.284072
3  10231.018957
4  10479.925458

```python
error_curve.describe()
```

              error
count     38.000000
mean   11184.898671
std      354.798192
min    10231.018957
25%    11123.808190
50%    11348.867621
75%    11389.302720
max    11786.012159

```python
error_curve.plot()
```

[Figure: the error curve, RMSE against index; the y-axis runs from about 10200 to 11800, dropping to a minimum near index 3 (k = 4) and rising for larger k.]
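Rather than reading the minimum off the error curve, the best k can be computed from the rmse list directly. A sketch with a toy list shaped like the notebook's (index i holds the RMSE for k = i + 1):

```python
import numpy as np

# Toy RMSE values; the notebook's own rmse list would be used instead.
rmse = [11786.0, 10607.7, 10457.3, 10231.0, 10479.9, 10518.9]

best_k = int(np.argmin(rmse)) + 1  # +1 because k starts at 1
print(best_k)  # 4
```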
Conclusion: the RMSE value is minimum at k = 4.

Regression for Large Scale data:
1) Create a large dataset having 100000 samples and 50 features.
2) Split it using train_test_split.
3) Reshape it in such a manner that each chunk contains 100 samples.
4) Import StandardScaler and transform features using the partial_fit method.
5) Import the SGDRegressor estimator and train your model using it.
6) Check the accuracy of your model on the train and test sets.
7) Print the intercept and the coefficient value corresponding to each feature.
8) Check the value of the coefficient and intercept after the 4th iteration.

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split

X, Y = datasets.make_regression(n_samples=100000, n_features=50, noise=10, random_state=10)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, train_size=0.9, random_state=10)
X_train.shape, X_test.shape, Y_train.shape, Y_test.shape
```

((90000, 50), (10000, 50), (90000,), (10000,))

```python
# What we want is to make chunks of 100 samples out of the 90,000 training samples.
X_train, X_test = X_train.reshape(-1, 100, 50), X_test.reshape(-1, 100, 50)
Y_train, Y_test = Y_train.reshape(-1, 100), Y_test.reshape(-1, 100)
X_train.shape, X_test.shape, Y_train.shape, Y_test.shape
```

((900, 100, 50), (100, 100, 50), (900, 100), (100, 100))

```python
X_train.shape[2]
```

50
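The batching above relies on reshape producing 900 chunks of 100 samples; that arithmetic can be checked on its own with a zero-filled array of the same shape:

```python
import numpy as np

X_train = np.zeros((90000, 50))         # same shape as the 90% training split
batches = X_train.reshape(-1, 100, 50)  # 90000 / 100 = 900 chunks of 100 rows

print(batches.shape)  # (900, 100, 50)
```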
```python
import numpy as np

train_data = np.concatenate((X, Y[:, np.newaxis]), axis=1)
a = np.asarray(train_data)
np.savetxt("data_for_large_scale_swi.csv", a, delimiter=",")
```

```python
from sklearn.preprocessing import StandardScaler

## Scaling data
scaler = StandardScaler()
for i in range(X_train.shape[0]):
    X_batch, Y_batch = X_train[i], Y_train[i]
    scaler.partial_fit(X_batch, Y_batch)  ## partially fitting data in batches
```

```python
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error, r2_score

regressor = SGDRegressor()
for i in range(X_train.shape[0]):  ## looping through batches
    X_batch, Y_batch = X_train[i], Y_train[i]
    new_X_batch = scaler.transform(X_batch)
    regressor.partial_fit(new_X_batch, Y_batch)  ## partially fitting data in batches

Y_test_preds = []
for j in range(X_test.shape[0]):  ## looping through test batches for making predictions
    Y_preds = regressor.predict(X_test[j])  # note: test batches are not passed through the scaler here
    Y_test_preds.extend(Y_preds.tolist())

print("Test MSE : {}".format(mean_squared_error(Y_test.reshape(-1), Y_test_preds)))
print("Test R2 Score : {}".format(r2_score(Y_test.reshape(-1), Y_test_preds)))
```

Test MSE : 101.89979220276909
Test R2 Score : 0.9942576140935281

```python
Y_train_preds = []
for j in range(X_train.shape[0]):  ## looping through train batches for making predictions
    Y_preds = regressor.predict(X_train[j])
    Y_train_preds.extend(Y_preds.tolist())

print("Train MSE : {}".format(mean_squared_error(Y_train.reshape(-1), Y_train_preds)))
print("Train R2 Score : {}".format(r2_score(Y_train.reshape(-1), Y_train_preds)))
```

Train MSE : 100.7036064420434
Train R2 Score : 0.994344394140802

```python
regressor.intercept_
```

array([0.00910036])

```python
regressor.coef_
```

array([ 1.22858911e-01,  3.56265998e-02, -1.84264088e-02, ...,
       -1.25112000e-01,  9.27576042e-02])

Coefficient after nth iteration

```python
X_train.shape[0]
```

900

```python
from sklearn.linear_model import SGDRegressor

regressor = SGDRegressor()
for i in range(X_train.shape[0]):  ## looping through batches
    X_batch, Y_batch = X_train[i], Y_train[i]
    scaled_X = scaler.transform(X_batch)
    regressor.partial_fit(scaled_X, Y_batch)  ## partially fitting data in batches
    print(i, regressor.intercept_)
    if i == 3:
        break
```

0 [-2.38014267]
1 [-1.08887003]
2 [-1.61727297]
3 [0.70578256]

```python
regressor.intercept_
```

array([0.70578256])

```python
regressor.coef_
```

array([-1.6020405 , -0.5596251 ,  0.31086477, ..., 39.73803301, -2.95363339])