Worksheet: Classification I
This worksheet covers parts of the Classification I chapter of the online textbook. You should
read this chapter before attempting the worksheet.
Which of the following statements is NOT true of a training data set (in the context of
classification)?
A. A training data set is a collection of observations for which we know the true classes.
C. The training data set is the underlying collection of observations for which we don't know the
true classes.
Assign your answer to an object called answer0.1. Make sure the correct answer is an
uppercase letter. Remember to surround your answer with quotation marks (e.g. "D").
(Adapted from James et al., An Introduction to Statistical Learning, p. 53)
We collect data on 20 similar products. For each product we have recorded whether it was a
success or failure (labelled as such by the Sales team), price charged for the product, marketing
budget, competition price, customer data, and ten other variables.
A. We are interested in comparing the profit margins for products that are a success and
products that are a failure.
B. We are considering launching a new product and wish to know whether it will be a success or
a failure.
C. We wish to group customers based on their preferences and use that knowledge to develop
targeted marketing programs.
Assign your answer to an object called answer0.2. Make sure the correct answer is an
uppercase letter. Remember to surround your answer with quotation marks (e.g. "F").
test_0.2()
Note that the breast cancer data in this worksheet have already been standardized (centred
and scaled) for you. We will implement these steps ourselves in future
worksheets/tutorials, but for now, just know that the data have been standardized.
As a result, the variables are unitless, which is why variables like Radius can take
zero and negative values.
Use the read_csv() function to read the clean-wdbc-data.csv file (found in the data
directory) into the notebook and store it as a data frame. Name it cancer.
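A sketch of one possible call, assuming the tidyverse (which provides read_csv()) is available in the notebook:

```r
library(readr)

# Read the standardized breast cancer data into a data frame named cancer.
cancer <- read_csv("data/clean-wdbc-data.csv")
head(cancer)  # peek at the first six rows
```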
After looking at the first six rows of the cancer data frame, suppose we asked you to predict the
variable "area" for a new observation. Is this a classification problem?
Assign your answer to an object called answer1.1. Make sure the correct answer is written in
lower-case. Remember to surround your answer with quotation marks (e.g. "true" / "false").
test_1.1()
We will be treating Class as a categorical variable, so we should convert it into a factor using
the as_factor() function.
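A minimal sketch of the conversion, assuming cancer has been read in as described above:

```r
library(dplyr)
library(forcats)

# Convert Class to a factor so downstream tools treat it as categorical.
cancer <- cancer |>
  mutate(Class = as_factor(Class))
```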
Create a scatterplot of the data with Symmetry on the x-axis and Radius on the y-axis. Modify
your aesthetics by colouring for Class. As you create this plot, ensure you follow the guidelines
for creating effective visualizations. In particular, note on the plot axes whether the data is
standardized or not.
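One way this plot could be written (the object name cancer_plot is an assumption; check your scaffolding for the expected name):

```r
library(ggplot2)

# Scatterplot of Radius vs. Symmetry, coloured by diagnosis class.
# The axis labels note that both variables are standardized (unitless).
cancer_plot <- ggplot(cancer, aes(x = Symmetry, y = Radius, colour = Class)) +
  geom_point(alpha = 0.6) +
  labs(x = "Symmetry (standardized)",
       y = "Radius (standardized)",
       colour = "Diagnosis")
cancer_plot
```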
test_1.2()
Just by looking at the scatterplot above, how would you classify an observation with Symmetry
= 1 and Radius = 1 (Benign or Malignant)?
Assign your answer to an object called answer1.3. Make sure the correct answer is written
fully. Remember to surround your answer with quotation marks (e.g. "Benign" / "Malignant").
test_1.3()
We will now compute the distance between the first and second observation in the breast
cancer dataset using the explanatory variables/predictors Symmetry and Radius. Recall we can
calculate the distance between two points using the following formula:
Distance = √((a_x − b_x)² + (a_y − b_y)²)
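As a quick numeric check of the formula, using two made-up points a = (0, 3) and b = (4, 0) (not from the cancer data):

```r
# sqrt((0 - 4)^2 + (3 - 0)^2) = sqrt(16 + 9) = sqrt(25) = 5
demo_distance <- sqrt((0 - 4)^2 + (3 - 0)^2)
demo_distance  # 5
```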
First, extract the coordinates for the two observations (point a is row 1 and point b is row 2) and assign them to objects called ax, ay, bx, and by.
test_1.4()
test_1.5()
Question 1.6 {points: 1}
Now we'll do the same thing with 3 explanatory variables/predictors: Symmetry, Radius and
Concavity. Again, use the first two rows in the data set as the points you are calculating the
distance between (point a is row 1, and point b is row 2).
Find the coordinates for the third variable (Concavity) and assign them to objects called az and
bz. Use the scaffolding given in Question 1.4 as a guide.
test_1.6()
Again, calculate the distance between the first and second observation in the breast cancer
dataset using 3 explanatory variables/predictors: Symmetry, Radius and Concavity.
Assign your answer to an object called answer1.7. Use the scaffolding given to calculate
answer1.5 as a guide.
test_1.7()
Create a vector of the coordinates for each point. Name one vector point_a and the other
vector point_b. Within the vector, the order of coordinates should be: Symmetry, Radius,
Concavity.
Fill in the ... in the cell below. Copy and paste your finished answer into the fail().
Here we will use select and as.numeric instead of pull because we need the
numeric values of three columns. pull, which we used previously, only extracts the
numeric values from a single column.
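One possible completion, assuming cancer is loaded: slice() picks the row, select() keeps the three columns, and as.numeric() flattens the one-row result into a plain vector.

```r
library(dplyr)

# Point a is row 1, point b is row 2; keep Symmetry, Radius, Concavity in order.
point_a <- cancer |>
  slice(1) |>
  select(Symmetry, Radius, Concavity) |>
  as.numeric()

point_b <- cancer |>
  slice(2) |>
  select(Symmetry, Radius, Concavity) |>
  as.numeric()
```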
test_1.8()
Compute the squared differences between the two vectors, point_a and point_b. The result
should be a vector of length 3 named dif_square. Hint: ^ is the exponent symbol in R.
test_1.09()
Sum the squared differences between the two vectors, point_a and point_b. The result
should be a single number named dif_sum.
Hint: the sum function in R returns the sum of the elements of a vector
test_1.09.1()
Take the square root of the sum of your squared differences. The result should be a double named
root_dif_sum.
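Putting the last three steps together with made-up vectors (named demo_a and demo_b so they do not clobber your worksheet objects):

```r
# Hypothetical 3-D points standing in for point_a and point_b.
demo_a <- c(0, 3, 12)
demo_b <- c(4, 0, 0)

demo_dif_square   <- (demo_a - demo_b)^2   # element-wise: 16 9 144
demo_dif_sum      <- sum(demo_dif_square)  # 169
demo_root_dif_sum <- sqrt(demo_dif_sum)    # 13
```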
test_1.09.2()
If we have more than a few points, calculating distances as we did in the previous questions is
VERY tedious. Let's use the dist() function to find the distance between the first and second
observation in the breast cancer dataset using Symmetry, Radius and Concavity.
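With the same made-up points, dist() reproduces the manual calculation; it treats each row of its input as one observation:

```r
# rbind stacks the two points as rows; dist() returns pairwise distances.
demo_points <- rbind(c(0, 3, 12), c(4, 0, 0))
dist(demo_points)  # 13, matching the step-by-step calculation above

# Pattern for the worksheet data (assuming cancer is loaded):
# cancer |>
#   slice(1, 2) |>
#   select(Symmetry, Radius, Concavity) |>
#   dist()
```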
Fill in the ... in the cell below. Copy and paste your finished answer into the fail().
test_1.09.3()
Assign your answer to an object called answer1.09.4. Make sure the correct answer is written
in lower-case. Remember to surround your answer with quotation marks (e.g. "true" / "false").
test_1.09.4()
Let's take a random sample of 5 observations from the breast cancer dataset using the
sample_n function. To make this random sample reproducible, we will use set.seed(20).
This means that the random number generator will start at the same point each time we
run the code, so we will always get back the same random sample.
We will focus on the predictors Symmetry and Radius only. Thus, we will need to select the
columns Symmetry and Radius and Class. Save these 5 rows and 3 columns to a data frame
named small_sample.
#set.seed(20)
#... <- sample_n(cancer, 5) |>
# select(...)
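One possible completion of the scaffold above (the seed makes the sample reproducible), assuming cancer is loaded:

```r
library(dplyr)

set.seed(20)
# Sample 5 rows, then keep only the three columns of interest.
small_sample <- sample_n(cancer, 5) |>
  select(Symmetry, Radius, Class)
small_sample
```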
test_2.0.0()
Question 2.0.1 {points: 1}
Finally, create a scatter plot where Symmetry is on the x-axis, and Radius is on the y-axis.
Color the points by Class. Name your plot small_sample_plot.
As you create this plot, ensure you follow the guidelines for creating effective visualizations. In
particular, note on the plot axes whether the data is standardized or not.
test_2.0.1()
Suppose we are interested in classifying a new observation with Symmetry = 0.5 and Radius
= 0, but unknown Class. Using the small_sample data frame, add another row with
Symmetry = 0.5, Radius = 0, and Class = "unknown".
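A sketch using dplyr's add_row(); converting Class to character first is an assumption that sidesteps factor-level issues when appending the new "unknown" label:

```r
library(dplyr)

# Append the unlabelled observation; its Class stays "unknown"
# until we predict it.
newData <- small_sample |>
  mutate(Class = as.character(Class)) |>
  add_row(Symmetry = 0.5, Radius = 0, Class = "unknown")
```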
test_2.1()
Compute the distance between each pair of the 6 observations in the newData dataframe using
the dist() function based on two variables: Symmetry and Radius. Fill in the ... in the
scaffolding provided below.
test_2.2()
In the table above, the row and column numbers reflect the row numbers from the data frame the
dist function was applied to. Thus, numbers 1-5 are the points/observations from rows 1-5
in the small_sample data frame, and row 6 is the new observation whose diagnosis class we
do not know. The values in dist_matrix are the distances between the points of the
corresponding row and column numbers. For example, the distance between point 2 and point 4
is 4.196683, and the distance between point 3 and itself is 0.
Assign your answer to an object called answer2.3. Make sure your answer is a number.
Remember to surround your answer with quotation marks (e.g. "8").
test_2.3()
Use the K-nearest neighbour classification algorithm with K = 1 to classify the new observation
using your answers to Questions 2.2 & 2.3. Is the new data point predicted to be benign or
malignant?
Assign your answer to an object called answer2.4. Make sure the correct answer is written fully
as either "Benign" or "Malignant". Remember to surround your answer with quotation marks.
test_2.4()
Using your answers to Questions 2.2 & 2.3, what are the three closest observations to your new
point?
A. 1, 3, 2
B. 5, 1, 4
C. 5, 2, 4
D. 3, 4, 2
Assign your answer to an object called answer2.5. Make sure the correct answer is an
uppercase letter. Remember to surround your answer with quotation marks (e.g. "F").
test_2.5()
We will now use the K-nearest neighbour classification algorithm with K = 3 to classify the new
observation using your answers to Questions 2.2 & 2.3. Is the new data point predicted to be
benign or malignant?
Assign your answer to an object called answer2.6. Make sure the correct answer is written
fully. Remember to surround your answer with quotation marks (e.g. "Benign" / "Malignant").
test_2.6()
Compare your answers in 2.4 and 2.6. Are they the same?
Assign your answer to an object called answer2.7. Make sure the correct answer is written in
lower-case. Remember to surround your answer with quotation marks (e.g. "yes" / "no").
test_2.7()
We'll again focus on Radius and Symmetry as the two predictors. This time, we would like to
predict the class of a new observation with Symmetry = 1 and Radius = 0. This one is a bit
tricky to do visually from the plot below, and so is a motivating example for us to compute the
prediction using k-nn with the tidymodels package. Let's use K = 7.
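A sketch of the model specification (the name knn_spec matches the next question; weight_func = "rectangular" gives ordinary, unweighted K-nearest neighbours):

```r
library(tidymodels)

# K-nn classifier with K = 7, using the kknn engine.
knn_spec <- nearest_neighbor(weight_func = "rectangular", neighbors = 7) |>
  set_engine("kknn") |>
  set_mode("classification")
```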
test_3.1()
To train the model on the breast cancer dataset, pass knn_spec and the cancer dataset to the
fit() function. Specify Class as your target variable and the Symmetry and Radius variables
as your predictors. Name your fitted model as knn_fit.
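One possible call, assuming knn_spec and cancer from above; the formula Class ~ Symmetry + Radius names the target variable and the two predictors:

```r
# Train the K-nn model on the breast cancer data.
knn_fit <- knn_spec |>
  fit(Class ~ Symmetry + Radius, data = cancer)
```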
test_3.2()
test_3.3()
Let's perform K-nearest neighbour classification again, but with three predictors. Use the
tidymodels package and K = 7 to classify a new observation where we measure Symmetry
= 1, Radius = 0 and Concavity = 1. Use the scaffolding from Questions 3.2 and 3.3 to
help you.
• Pass the same knn_spec from before to fit, but this time specify Symmetry, Radius,
and Concavity as the predictors. Store the output in knn_fit_2.
• Store the new observation values in an object called new_obs_2.
• Store the output of predict in an object called class_prediction_2.
# your code here
fail() # No Answer - remove if you provide an answer
test_3.4()
Finally, we will perform K-nearest neighbour classification again, using the tidymodels
package and K = 7 to classify a new observation where we use all the predictors in our data set
(we give you the values in the code below).
But we first have to do one important thing: we need to remove the ID variable from the analysis
(it's not a numerical measurement that we should use for classification). Thankfully,
tidymodels provides a nice way of combining data preprocessing and training into a single
consistent workflow.
We will first create a recipe to remove the ID variable using the step_rm preprocessing step.
Do so below using the provided scaffolding. Name the recipe object knn_recipe.
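A sketch of the recipe, assuming the identifier column is literally named ID in the cancer data frame:

```r
library(recipes)

# Class is the outcome; every other column starts as a predictor.
# step_rm then removes ID so it is not used in the distance calculation.
knn_recipe <- recipe(Class ~ ., data = cancer) |>
  step_rm(ID)
```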
Hint: If you want to find out about the available preprocessing steps that you can include in a
recipe, visit the tidymodels page.
test_3.5()
You can examine the output of a recipe by using the prep and bake functions. For example, let's
see if step_rm above actually removed the ID variable. Run the below code to see!
Note: You have to pass the cancer data to bake() again, even though we already specified it in
the recipe above. Why? Because tidymodels is flexible enough to let you compute
preprocessing steps using one dataset (prep) and apply those steps to another (bake). For
example, if we apply the step_center preprocessing step (which shifts a variable to have
mean 0), we need to compute the shift value (prep) before subtracting it from each observation
(bake). This will be very useful in the next chapter, when we have to split our dataset into two
subsets and prep using only one of them.
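A sketch of that check, assuming knn_recipe and cancer from above:

```r
library(recipes)

baked_cancer <- knn_recipe |>
  prep() |>       # estimate anything the steps need from the data
  bake(cancer)    # apply the steps to the cancer data

"ID" %in% colnames(baked_cancer)  # FALSE if step_rm removed the column
```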
Create a workflow that includes the new recipe (knn_recipe) and the model specification
(knn_spec) using the scaffolding below. Name the workflow object knn_workflow.
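One possible way to assemble the workflow from the two pieces built above:

```r
library(workflows)

# Bundle the preprocessing recipe and the model specification together.
knn_workflow <- workflow() |>
  add_recipe(knn_recipe) |>
  add_model(knn_spec)
```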
test_3.6()
Finally, fit the workflow and predict the class label for the new observation named
new_obs_all. Name the fit object knn_fit_all, and the class prediction
class_prediction_all.
test_3.7()
In the K-nearest neighbours classification algorithm, we calculate the distance between the new
observation (for which we are trying to predict the class/label/outcome) and each of the
observations in the training data set so that we can:
C. Find outliers
Assign your answer (e.g. "E") to an object called: answer4.0. Make sure your answer is an
uppercase letter and is surrounded with quotation marks.
test_4.0()
In the K-nearest neighbours classification algorithm, we choose the label/class for a new
observation by:
Assign your answer (e.g., "E") to an object called answer4.1. Make sure your answer is an
uppercase letter and is surrounded with quotation marks.
test_4.1()
source('cleanup.R')