Assignment 4
DonorsChoose
DonorsChoose.org receives hundreds of thousands of project proposals each year for classroom projects in need of funding. Right now, a large number of volunteers are needed to manually screen each submission before it is approved to be posted on the DonorsChoose.org website.
Next year, DonorsChoose.org expects to receive close to 500,000 project proposals. As a result, there are three main problems they need to solve:
How to scale current manual processes and resources to screen 500,000 projects so that they can be posted as quickly and as efficiently as possible
How to increase the consistency of project vetting across different volunteers to improve the experience for teachers
How to focus volunteer time on the applications that need the most assistance
The goal of the competition is to predict whether or not a DonorsChoose.org project proposal submitted by a teacher will be approved, using the text of project descriptions as well as additional metadata about the project, teacher, and school. DonorsChoose.org can then use this information to identify projects most likely to need further review before approval.
| Feature | Description |
|---|---|
| project_grade_category | Grade level of students for which the project is targeted. One of the following enumerated values: Grades PreK-2, Grades 3-5, Grades 6-8, Grades 9-12 |
| project_subject_categories | One or more (comma-separated) subject categories for the project from the following enumerated list of values: Applied Learning, Care & Hunger, Health & Sports, History & Civics, Literacy & Language, Math & Science, Music & The Arts, Special Needs, Warmth |
| project_subject_subcategories | One or more (comma-separated) subject subcategories. Examples: "Literacy"; "Literature & Writing, Social Sciences" |
| project_submitted_datetime | Datetime when project application was submitted. Example: 2016-04-28 12:43:56.245 |
| teacher_prefix | Title prefix of the teacher's name. One of the following enumerated values: nan, Dr., Mr., Mrs., Ms., Teacher. |
| teacher_number_of_previously_posted_projects | Number of project applications previously submitted by the same teacher. Example: 2 |
* See the section Notes on the Essay Data for more details about these features.
Additionally, the resources.csv data set provides more data about the resources required for each project. Each line in this file represents a resource required by a project:

| Feature | Description |
|---|---|
| id | Unique id of the project application; corresponds to the id column in train.csv |
| description | Description of the resource requested |
| quantity | Quantity of the resource required |
| price | Price of the resource required |

Note: Many projects require multiple resources. The id value corresponds to a project_id in train.csv, so you can use it as a key to retrieve all resources required by a project.
Starting on May 17, 2016, the number of essays was reduced from 4 to 2, and the prompts for the first 2 essays were changed to the following:

__project_essay_1:__ "Describe your students: What makes your students special? Specific details about their background, your neighborhood, and your school are all helpful."

__project_essay_2:__ "About your project: How will these materials make a difference in your students' learning and improve their school lives?"
For all projects with project_submitted_datetime of 2016-05-17 and later, the values of project_essay_3 and project_essay_4 will be NaN.
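Because of this change, the four essay columns have to be merged with NaN handled explicitly; a naive string concatenation turns the missing essays into literal "nan" text (visible at the end of the sample essay printed further below). A minimal sketch of a NaN-safe merge, assuming the loaded project_data frame:

# merge the four essay columns into a single 'essay' column, treating NaN as empty
# (a sketch; the notebook's own merge cell is not shown in this printout)
project_data['essay'] = (project_data['project_essay_1'].fillna('') + ' ' +
                         project_data['project_essay_2'].fillna('') + ' ' +
                         project_data['project_essay_3'].fillna('') + ' ' +
                         project_data['project_essay_4'].fillna(''))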
import sqlite3
import re
# Tutorial about Python regular expressions: https://pymotw.com/2/re/
import string
import pandas as pd
import numpy as np
import nltk
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer
Out[211]: first rows of resources.csv (columns: id, description, quantity, price)
# https://www.geeksforgeeks.org/removing-stop-words-nltk-python/
# https://stackoverflow.com/questions/23669024/how-to-strip-a-specific-word-from-a-string
# https://stackoverflow.com/questions/8270092/remove-all-whitespace-in-a-string-in-python
cat_list = []
for i in catogories:
    temp = ""
    # consider we have text like this: "Math & Science, Warmth, Care & Hunger"
    for j in i.split(','):  # split it into three parts: ["Math & Science", " Warmth", " Care & Hunger"]
        if 'The' in j.split():  # split each category on spaces, e.g. "Music & The Arts" => "Music", "&", "The", "Arts"
            j = j.replace('The', '')  # if we have the word "The", replace it with '' (i.e. remove 'The')
        j = j.replace(' ', '')  # replace all ' ' (spaces) with '' (empty), e.g. "Math & Science" => "Math&Science"
        temp += j.strip() + " "  # " abc ".strip() returns "abc", removing the leading/trailing spaces
    temp = temp.replace('&', '_')  # replace '&' with '_', e.g. "Math&Science" => "Math_Science"
    cat_list.append(temp.strip())
project_data['clean_categories'] = cat_list
project_data.drop(['project_subject_categories'], axis=1, inplace=True)
cat_dict = dict(my_counter)
sorted_cat_dict = dict(sorted(cat_dict.items(), key=lambda kv: kv[1]))
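The my_counter used above is built in a cell that is not shown in this printout; a plausible sketch, assuming it counts how often each cleaned category occurs across projects:

from collections import Counter

# count occurrences of each individual cleaned category across all projects
my_counter = Counter()
for categories in project_data['clean_categories'].values:
    my_counter.update(categories.split())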
# same cleaning as above, applied to the subject subcategories
sub_cat_list = []
for i in sub_catogories:
    temp = ""
    for j in i.split(','):
        if 'The' in j.split():
            j = j.replace('The', '')
        j = j.replace(' ', '')
        temp += j.strip() + " "
    temp = temp.replace('&', '_')
    sub_cat_list.append(temp.strip())
project_data['clean_subcategories'] = sub_cat_list
project_data.drop(['project_subject_subcategories'], axis=1, inplace=True)
sub_cat_dict = dict(my_counter)
sorted_sub_cat_dict = dict(sorted(sub_cat_dict.items(), key=lambda kv: kv[1]))
Out[215]: first rows of project_data (columns: Unnamed: 0, id, teacher_id, teacher_prefix, school_state, project_submitted_datetime, project_grade_category, project_title, ...)
In [218]: # https://stackoverflow.com/a/47091490/4084039
import re

def decontracted(phrase):
    # specific
    phrase = re.sub(r"won't", "will not", phrase)
    phrase = re.sub(r"can\'t", "can not", phrase)
    # general
    phrase = re.sub(r"n\'t", " not", phrase)
    phrase = re.sub(r"\'re", " are", phrase)
    phrase = re.sub(r"\'s", " is", phrase)
    phrase = re.sub(r"\'d", " would", phrase)
    phrase = re.sub(r"\'ll", " will", phrase)
    phrase = re.sub(r"\'t", " not", phrase)
    phrase = re.sub(r"\'ve", " have", phrase)
    phrase = re.sub(r"\'m", " am", phrase)
    return phrase
In [219]: sent = decontracted(project_data['essay'].values[20000])
print(sent)
print("="*50)
My kindergarten students have varied disabilities ranging from speech and language delays, cognitive delays, gross/fine motor delays, to autism. They are eager beavers and always strive to work their hardest working past their limitations. \r\n\r\nThe materials we have are the ones I seek out for my students. I teach in a Title I school where most of the students receive free or reduced price lunch. Despite their disabilities and limitations, my students love coming to school and come eager to learn and explore.Have you ever felt like you had ants in your pants and you needed to groove and move as you were in a meeting? This is how my kids feel all the time. The want to be able to move as they learn or so they say.Wobble chairs are the answer and I love then because they develop their core, which enhances gross motor and in Turn fine motor skills. \r\nThey also want to learn through games, my kids do not want to sit and do worksheets. They want to learn to count by jumping and playing. Physical engagement is the key to our success. The number toss and color and shape mats can make that happen. My students will forget they are doing work and just have the fun a 6 year old deserves.nannan
==================================================
In [222]: # https://gist.github.com/sebleier/554280
# we are removing the words 'no', 'nor', 'not' from the stop words list
stopwords = ['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've",
             "you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself',
             'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their',
             'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those',
             'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does',
             'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of',
             'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after',
             'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further',
             'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more',
             'most', 'other', 'some', 'such', 'only', 'own', 'same', 'so', 'than', 'too', 'very',
             's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're',
             've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn',
             "hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn',
             "mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't",
             'won', "won't", 'wouldn', "wouldn't"]
https://www.appliedaicourse.com/course/applied-ai-course-online/lessons/handling-categorical-and-numerical-features/
In [230]: # We are considering only the words which appeared in at least 10 documents (rows or projects).
# from sklearn.feature_extraction.text import CountVectorizer
# vectorizer = CountVectorizer(min_df=10)
# text_bow = vectorizer.fit_transform(preprocessed_essays)
# print("Shape of matrix after one hot encoding ", text_bow.shape)
In [231]: # you can vectorize the title also
# before you vectorize the title make sure you preprocess it
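A sketch of vectorizing the title the same way, assuming a preprocessed_titles list produced by the same preprocessing as the essays:

from sklearn.feature_extraction.text import CountVectorizer

# vocabulary restricted to words that appear in at least 10 titles
vectorizer_title = CountVectorizer(min_df=10)
title_bow = vectorizer_title.fit_transform(preprocessed_titles)
print("Shape of title matrix after one hot encoding ", title_bow.shape)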
In [233]: '''
# Reading glove vectors in python: https://stackoverflow.com/a/38230349/4084039
def loadGloveModel(gloveFile):
    print("Loading Glove Model")
    f = open(gloveFile, 'r', encoding="utf8")
    model = {}
    for line in tqdm(f):
        splitLine = line.split()
        word = splitLine[0]
        embedding = np.array([float(val) for val in splitLine[1:]])
        model[word] = embedding
    print("Done.", len(model), " words loaded!")
    return model

model = loadGloveModel('glove.42B.300d.txt')

# ============================
Output:
# ============================

words = []
for i in preprocessed_texts:
    words.extend(i.split(' '))
for i in preprocessed_titles:
    words.extend(i.split(' '))
print("all the words in the corpus", len(words))
words = set(words)
print("the unique words in the corpus", len(words))

inter_words = set(model.keys()).intersection(words)
print("The number of words that are present in both glove vectors and our corpus",
      len(inter_words), "(", np.round(len(inter_words)/len(words)*100, 3), "%)")

words_corpus = {}
words_glove = set(model.keys())
for i in words:
    if i in words_glove:
        words_corpus[i] = model[i]
print("word 2 vec length", len(words_corpus))

import pickle
with open('glove_vectors', 'wb') as f:
    pickle.dump(words_corpus, f)
'''
# print(len(avg_w2v_vectors))
# print(len(avg_w2v_vectors[0]))
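The cell that computes avg_w2v_vectors (whose lengths are printed above) is not preserved here; a standard sketch, assuming model is the pickled GloVe dictionary loaded above and glove_words = set(model.keys()):

import numpy as np
from tqdm import tqdm

# average Word2Vec: mean of the GloVe vectors of the words in each essay
avg_w2v_vectors = []
for sentence in tqdm(preprocessed_essays):
    vector = np.zeros(300)  # GloVe 42B vectors are 300-dimensional
    cnt_words = 0           # number of words in this essay with a GloVe vector
    for word in sentence.split():
        if word in glove_words:
            vector += model[word]
            cnt_words += 1
    if cnt_words != 0:
        vector /= cnt_words
    avg_w2v_vectors.append(vector)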
In [236]: # # S = ["abc def pqr", "def def def abc", "pqr pqr def"]
# tfidf_model = TfidfVectorizer()
# tfidf_model.fit(preprocessed_essays)
# # we are building a dictionary with word as the key and the idf as the value
# dictionary = dict(zip(tfidf_model.get_feature_names(), list(tfidf_model.idf_)))
# tfidf_words = set(tfidf_model.get_feature_names())
In [237]: # # TF-IDF weighted Word2Vec
# # compute the tf-idf weighted word2vec for each review.
# tfidf_w2v_vectors = []  # the tfidf-w2v for each sentence/review is stored in this list
# for sentence in tqdm(preprocessed_essays):  # for each review/sentence
#     vector = np.zeros(300)  # word vectors are 300-dimensional
#     tf_idf_weight = 0  # sum of the tf-idf weights of the valid words in the sentence/review
#     for word in sentence.split():  # for each word in a review/sentence
#         if (word in glove_words) and (word in tfidf_words):
#             vec = model[word]  # getting the vector for each word
#             # multiplying the idf value (dictionary[word]) and the tf value (sentence.count(word)/len(sentence.split()))
#             tf_idf = dictionary[word] * (sentence.count(word) / len(sentence.split()))  # tf-idf value for each word
#             vector += (vec * tf_idf)  # calculating tf-idf weighted w2v
#             tf_idf_weight += tf_idf
#     if tf_idf_weight != 0:
#         vector /= tf_idf_weight
#     tfidf_w2v_vectors.append(vector)
# print(len(tfidf_w2v_vectors))
# print(len(tfidf_w2v_vectors[0]))
# # price_standardized = standardScalar.fit(project_data['price'].values)
# # this will raise the error:
# # ValueError: Expected 2D array, got 1D array instead: array=[725.05 213.03 329. ... 399. 287.73 5.5 ].
# # Reshape your data either using array.reshape(-1, 1)
# price_scalar = StandardScaler()
# price_scalar.fit(project_data['price'].values.reshape(-1, 1))  # finding the mean and standard deviation of this data
# print(f"Mean : {price_scalar.mean_[0]}, Standard deviation : {np.sqrt(price_scalar.var_[0])}")
In [241]: # price_standardized
We need to merge all the features, i.e., the categorical, text, and numerical vectors.
In [242]: # print(categories_one_hot.shape)
# print(sub_categories_one_hot.shape)
# print(text_bow.shape)
# print(price_standardized.shape)
In [243]: # merge two sparse matrices: https://stackoverflow.com/a/19710648/4084039
# from scipy.sparse import hstack
# # with the same hstack function we are concatenating a sparse matrix and a dense matrix :)
# X = hstack((categories_one_hot, sub_categories_one_hot, text_bow, price_standardized))
# X.shape
In [244]: # please write all the code with proper documentation, and proper titles for each subsection
# when you plot any graph make sure you use
# a. Title, that describes your plot, this will be very helpful to the reader
# b. Legends if needed
# c. X-axis label
# d. Y-axis label
Find the best hyperparameter which will give the maximum AUC (https://www.appliedaicourse.com/course/applied-ai-course-online/lessons/receiver-operating-characteristic-curve-roc-curve-and-auc-1/) value
Consider a wide range of alpha values for hyperparameter tuning; start as low as 0.00001
Find the best hyperparameter using k-fold cross validation or simple cross validation data
Use GridSearchCV or RandomizedSearchCV, or write your own for loops, to do this task of hyperparameter tuning (a sketch follows this list)
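A minimal sketch of the grid-search route, assuming the sparse train matrix X_tr and labels y_train built later in this notebook:

from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import MultinomialNB

# search alpha on a log-spaced grid, scoring by ROC AUC
params = {'alpha': [0.00001, 0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000]}
gs = GridSearchCV(MultinomialNB(), params, scoring='roc_auc', cv=5, return_train_score=True)
gs.fit(X_tr, y_train)
print("best alpha:", gs.best_params_['alpha'], "best CV AUC:", gs.best_score_)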
3. Feature importance
Find the top 10 features of the positive class and the top 10 features of the negative class for both feature sets (Set 1 and Set 2) using the values of the `feature_log_prob_` attribute of MultinomialNB (https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html) and print their corresponding feature names, as sketched below.
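A sketch of extracting the top features from feature_log_prob_, assuming a fitted clf and a hypothetical feature_names list holding the column names of the corresponding feature set:

import numpy as np

# feature_log_prob_ has shape (n_classes, n_features); row 1 is the positive class
log_probs = clf.feature_log_prob_
top10_pos = np.argsort(log_probs[1])[::-1][:10]  # features with highest log-probability under class 1
top10_neg = np.argsort(log_probs[0])[::-1][:10]  # features with highest log-probability under class 0
print("top positive-class features:", [feature_names[i] for i in top10_pos])
print("top negative-class features:", [feature_names[i] for i in top10_neg])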
4. Representation of results
You need to plot the performance of the model on both train data and cross validation data for each hyperparameter, like shown in the figure. Here on the X-axis you will have alpha values; since they span a wide range, apply the log function to those alpha values before plotting.
Once you have found the best hyperparameter, train your model with it, find the AUC on test data, and plot the ROC curve on both train and test (see the sketch below).
Along with plotting the ROC curve, print the confusion matrix (https://www.appliedaicourse.com/course/applied-ai-course-online/lessons/confusion-matrix-tpr-fpr-fnr-tnr-1/) with predicted and original labels of the test data points. Please visualize your confusion matrices using seaborn heatmaps (https://seaborn.pydata.org/generated/seaborn.heatmap.html).
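A sketch of the train/test ROC plot, assuming the probability outputs y_train_pred and y_test_pred computed in the modelling cells below:

from sklearn.metrics import roc_curve, auc

train_fpr, train_tpr, _ = roc_curve(y_train, y_train_pred[:, 1])
test_fpr, test_tpr, _ = roc_curve(y_test, y_test_pred[:, 1])
plt.plot(train_fpr, train_tpr, label="train AUC = " + str(auc(train_fpr, train_tpr)))
plt.plot(test_fpr, test_tpr, label="test AUC = " + str(auc(test_fpr, test_tpr)))
plt.legend()
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("ROC curve: train vs test")
plt.grid()
plt.show()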
5. Conclusion
Summarize the results at the end of the notebook in table format. To print a table, please refer to the prettytable library (http://zetcode.com/python/prettytable/).
2. Naive Bayes
2.1 Splitting data into Train and cross validation (or test): Stratified Sampling
In [245]: # some preprocessing
from sklearn.model_selection import train_test_split

y = project_data['project_is_approved'].values  # target
project_data.drop(['project_is_approved'], axis=1, inplace=True)  # drop target column from project data
X = project_data

# split data into train and test with 2/3rd as train and 1/3rd as test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, stratify=y)
# split train data into train and cv with 2/3rd as train and 1/3rd as cv
X_train, X_cv, y_train, y_cv = train_test_split(X_train, y_train, test_size=0.33, stratify=y_train)
In [263]: print(X_train['teacher_prefix_cleaned'].nunique())
print(X_cv['teacher_prefix_cleaned'].nunique())
print(X_test['teacher_prefix_cleaned'].nunique())
6
6
6
<class 'pandas.core.frame.DataFrame'>
Int64Index: 109248 entries, 0 to 109247
Data columns (total 21 columns):
Unnamed: 0 109248 non-null int64
id 109248 non-null object
teacher_id 109248 non-null object
school_state 109248 non-null object
project_submitted_datetime 109248 non-null object
project_grade_category 109248 non-null object
project_title 109248 non-null object
project_essay_1 109248 non-null object
project_essay_2 109248 non-null object
project_essay_3 3758 non-null object
project_essay_4 3758 non-null object
project_resource_summary 109248 non-null object
teacher_number_of_previously_posted_projects 109248 non-null int64
clean_categories 109248 non-null object
clean_subcategories 109248 non-null object
essay 109248 non-null object
price 109248 non-null float64
quantity 109248 non-null int64
teacher_prefix_cleaned 109248 non-null object
prep_essay 109248 non-null object
prep_title 109248 non-null object
dtypes: float64(1), int64(3), object(17)
memory usage: 18.3+ MB
# initiate normalizer
norm = Normalizer()
# normalize price
n2_train_norm, n2_cv_norm, n2_test_norm = normalize_numeric(X_train['price'], X_cv['price'], X_test['price'])
# normalize quantity
n3_train_norm, n3_cv_norm, n3_test_norm = normalize_numeric(X_train['quantity'], X_cv['quantity'], X_test['quantity'])

# combine numerical features and create train, cv & test matrices for the numerically encoded features
# (n1_*_norm, the normalized teacher_number_of_previously_posted_projects, comes from a cell not shown here)
X_num_tr = np.hstack((n1_train_norm, n2_train_norm, n3_train_norm))
X_num_cv = np.hstack((n1_cv_norm, n2_cv_norm, n3_cv_norm))
X_num_test = np.hstack((n1_test_norm, n2_test_norm, n3_test_norm))
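normalize_numeric is defined in an earlier cell that is not shown; a plausible sketch, assuming each split's column is treated as one long sample so that Normalizer scales it to unit L2 norm:

from sklearn.preprocessing import Normalizer

def normalize_numeric(train_col, cv_col, test_col):
    # Normalizer is stateless: it scales each row-vector to unit L2 norm,
    # so reshaping a column to (1, -1) normalizes the whole column at once
    norm = Normalizer()
    tr = norm.fit_transform(train_col.values.reshape(1, -1)).reshape(-1, 1)
    cv = norm.fit_transform(cv_col.values.reshape(1, -1)).reshape(-1, 1)
    te = norm.fit_transform(test_col.values.reshape(1, -1)).reshape(-1, 1)
    return tr, cv, te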
cat_dict = dict(my_counter)
sorted_cat_dict = dict(sorted(cat_dict.items(), key=lambda kv: kv[1]))
print('\nShape of school_state')
print('Train:',c2_tr_ohe.shape)
print('CV:',c2_cv_ohe.shape)
print('Test:',c2_test_ohe.shape)
print('\nShape of project_grade_category')
print('Train:',c3_tr_ohe.shape)
print('CV:',c3_cv_ohe.shape)
print('Test:',c3_test_ohe.shape)
print('\nShape of clean_categories')
print('Train:',c4_tr_ohe.shape)
print('CV:',c4_cv_ohe.shape)
print('Test:',c4_test_ohe.shape)
print('\nShape of clean_subcategories')
print('Train:',c5_tr_ohe.shape)
print('CV:',c5_cv_ohe.shape)
print('Test:',c5_test_ohe.shape)
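The helper that builds these one-hot matrices is likewise defined earlier; a plausible sketch using a hypothetical ohe_feature helper based on CountVectorizer, fitted on the train split only so that cv and test reuse the train vocabulary:

from sklearn.feature_extraction.text import CountVectorizer

def ohe_feature(train_col, cv_col, test_col):
    vect = CountVectorizer(binary=True)        # one column per category value
    tr = vect.fit_transform(train_col.values)  # learn the vocabulary from train only
    cv = vect.transform(cv_col.values)
    te = vect.transform(test_col.values)
    return tr, cv, te, vect

c2_tr_ohe, c2_cv_ohe, c2_test_ohe, vect_state = ohe_feature(
    X_train['school_state'], X_cv['school_state'], X_test['school_state'])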
# bow essay
essay_tr_bow, essay_cv_bow, essay_test_bow, bow_vect1 = bow_text(X_train['prep_essay'], X_cv['prep_essay'],
                                                                 X_test['prep_essay'])
# bow title
title_tr_bow, title_cv_bow, title_test_bow, bow_vect2 = bow_text(X_train['prep_title'], X_cv['prep_title'],
                                                                 X_test['prep_title'])
# combine bow
X_bow_tr = hstack((title_tr_bow, essay_tr_bow))
X_bow_cv = hstack((title_cv_bow, essay_cv_bow))
X_bow_test = hstack((title_test_bow, essay_test_bow))
In [269]: type(bow_vect1.get_feature_names())
Out[269]: list
# tfidf essay
essay_tr_tfidf, essay_cv_tfidf, essay_test_tfidf, tfidf_vect1 = tfidf_text(X_train['prep_essay'], X_cv['prep_essay'],
                                                                           X_test['prep_essay'], min_df=10)
# tfidf title
title_tr_tfidf, title_cv_tfidf, title_test_tfidf, tfidf_vect2 = tfidf_text(X_train['prep_title'], X_cv['prep_title'],
                                                                           X_test['prep_title'])
# combine tfidf
X_tfidf_tr = hstack((title_tr_tfidf, essay_tr_tfidf))
X_tfidf_cv = hstack((title_cv_tfidf, essay_cv_tfidf))
X_tfidf_test = hstack((title_test_tfidf, essay_test_tfidf))
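bow_text and tfidf_text are helpers from an earlier cell; both follow the same fit-on-train pattern. A plausible sketch of tfidf_text (bow_text is analogous, with CountVectorizer in place of TfidfVectorizer):

from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_text(train_col, cv_col, test_col, min_df=1):
    vect = TfidfVectorizer(min_df=min_df)
    tr = vect.fit_transform(train_col.values)  # learn vocabulary and idf from train only
    cv = vect.transform(cv_col.values)
    te = vect.transform(test_col.values)
    return tr, cv, te, vect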
# https://stackoverflow.com/questions/35572000/how-can-i-plot-a-confusion-matrix
# Code for drawing seaborn heatmaps
# cm is the confusion matrix computed with sklearn.metrics.confusion_matrix in an earlier cell
class_names = ['0', '1']
df_heatmap = pd.DataFrame(cm, index=class_names, columns=class_names)
fig = plt.figure(figsize=(5, 3))
heatmap = sns.heatmap(df_heatmap, annot=True, fmt="d")
X_tr = hstack((X_num_tr, X_cat_tr, X_bow_tr)).tocsr()  # X_cat_* stacks the one-hot categorical matrices built above
X_cv = hstack((X_num_cv, X_cat_cv, X_bow_cv)).tocsr()
X_te = hstack((X_num_test, X_cat_test, X_bow_test)).tocsr()

train_auc = []
cv_auc = []
a = [1000, 100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]
for i in a:
    clf = MultinomialNB(alpha=i)
    clf.fit(X_tr, y_train)
    y_train_pred = clf.predict_proba(X_tr)
    y_cv_pred = clf.predict_proba(X_cv)
    # roc_auc_score(y_true, y_score): the 2nd parameter should be probability estimates
    # of the positive class, not the predicted outputs
    train_auc.append(roc_auc_score(y_train, y_train_pred[:, 1]))
    cv_auc.append(roc_auc_score(y_cv, y_cv_pred[:, 1]))

# plot train and CV AUC against log10(alpha)
plt.plot(np.log10(a), train_auc, label='Train AUC')
plt.plot(np.log10(a), cv_auc, label='CV AUC')
plt.legend()
plt.xlabel("alpha: hyperparameter (log10 scale)")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
Final data matrix
(49041, 9181) (49041,)
(24155, 9181) (24155,)
(36052, 9181) (36052,)
In [285]: a = 1  # best alpha from the CV plot above

clf = MultinomialNB(alpha=a)
clf.fit(X_tr, y_train)
# roc_auc_score(y_true, y_score): the 2nd parameter should be probability estimates
# of the positive class, not the predicted outputs
y_train_pred = clf.predict_proba(X_tr)
y_test_pred = clf.predict_proba(X_te)
X_tr1 = hstack((X_num_tr, X_cat_tr, X_tfidf_tr)).tocsr()
X_cv1 = hstack((X_num_cv, X_cat_cv, X_tfidf_cv)).tocsr()
X_te1 = hstack((X_num_test, X_cat_test, X_tfidf_test)).tocsr()

train_auc = []
cv_auc = []
a = [1000, 100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]
for i in a:
    clf = MultinomialNB(alpha=i)
    clf.fit(X_tr1, y_train)
    y_train_pred = clf.predict_proba(X_tr1)
    y_cv_pred = clf.predict_proba(X_cv1)
    # roc_auc_score(y_true, y_score): the 2nd parameter should be probability estimates
    # of the positive class, not the predicted outputs
    train_auc.append(roc_auc_score(y_train, y_train_pred[:, 1]))
    cv_auc.append(roc_auc_score(y_cv, y_cv_pred[:, 1]))

# plot train and CV AUC against log10(alpha)
plt.plot(np.log10(a), train_auc, label='Train AUC')
plt.plot(np.log10(a), cv_auc, label='CV AUC')
plt.legend()
plt.xlabel("alpha: hyperparameter (log10 scale)")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
Final data matrix
(49041, 10103) (49041,)
(24155, 10103) (24155,)
(36052, 10103) (36052,)
In [301]: a = 1  # best alpha from the CV plot above

# refit on the TFIDF feature set with the chosen alpha
clf = MultinomialNB(alpha=a)
clf.fit(X_tr1, y_train)
y_train_pred = clf.predict_proba(X_tr1)
y_test_pred = clf.predict_proba(X_te1)
3. Conclusions
In [307]: # Please compare all your models using Prettytable library
from prettytable import PrettyTable
x = PrettyTable()
x.field_names = ["Vectorize","Model","HyperParam(alpha)","AUC"]
x.add_row(['BOW', 'MultinomialNB', 1, 0.695])
x.add_row(['TFIDF', 'MultinomialNB', 1, 0.662])
print(x)
+-----------+---------------+-------------------+-------+
| Vectorize | Model | HyperParam(alpha) | AUC |
+-----------+---------------+-------------------+-------+
| BOW | MultinomialNB | 1 | 0.695 |
| TFIDF | MultinomialNB | 1 | 0.662 |
+-----------+---------------+-------------------+-------+