
Deep Neural Network - Application v8

Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images
from non-cat images.

You will build two different models:

A 2-layer neural network


An L-layer deep neural network

You will then compare the performance of these models, and also try out different values for L.

Let's look at the two architectures.

3.1 - 2-layer neural network

Figure 2: 2-layer neural network.


The model can be summarized as: ***INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT***.

Detailed Architecture of figure 2:

The input is a (64,64,3) image which is flattened to a vector of size (12288, 1).


The corresponding vector $[x_0, x_1, \dots, x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.

You then add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]}, \dots, a_{n^{[1]}-1}^{[1]}]^T$.

You then repeat the same process.


You multiply the resulting vector by $W^{[2]}$ and add your intercept (bias).
Finally, you take the sigmoid of the result. If it is greater than 0.5, you classify it to be a cat.
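
In equation form (this simply restates the architecture above; $x$ is the flattened input and $\sigma$ denotes the sigmoid), the forward pass of the 2-layer model is:

$$z^{[1]} = W^{[1]} x + b^{[1]}, \qquad a^{[1]} = \mathrm{ReLU}(z^{[1]})$$

$$z^{[2]} = W^{[2]} a^{[1]} + b^{[2]}, \qquad \hat{y} = a^{[2]} = \sigma(z^{[2]}), \qquad \text{predict "cat" if } \hat{y} > 0.5$$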

3.2 - L-layer deep neural network


It is hard to represent an L-layer deep neural network with the above representation, but it has a similar structure: L-1 hidden LINEAR -> RELU layers followed by a LINEAR -> SIGMOID output layer. The model can be summarized as: ***[LINEAR -> RELU] × (L-1) -> LINEAR -> SIGMOID***.
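
To make the repeated structure concrete, here is a minimal sketch of how such an L-layer forward pass could be composed from the linear_activation_forward helper used later in this notebook. The function name l_model_forward_sketch and the parameter-naming convention (W1, b1, ..., WL, bL) are illustrative assumptions, not the notebook's own implementation:

def l_model_forward_sketch(X, parameters):
    # Forward pass for [LINEAR -> RELU] x (L-1) -> LINEAR -> SIGMOID (illustrative sketch)
    caches = []
    A = X
    L = len(parameters) // 2  # number of layers: each layer contributes one W and one b

    # Hidden layers 1 .. L-1: LINEAR -> RELU
    for l in range(1, L):
        A_prev = A
        A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], 'relu')
        caches.append(cache)

    # Output layer L: LINEAR -> SIGMOID
    AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], 'sigmoid')
    caches.append(cache)

    return AL, caches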


4 - Two-layer neural network


Question: Use the helper functions you have implemented in the previous assignment to build a 2-layer
neural network with the following structure: LINEAR -> RELU -> LINEAR -> SIGMOID. The functions you
may need and their inputs are:

def initialize_parameters(n_x, n_h, n_y):
    ...
    return parameters
def linear_activation_forward(A_prev, W, b, activation):
    ...
    return A, cache
def compute_cost(AL, Y):
    ...
    return cost
def linear_activation_backward(dA, cache, activation):
    ...
    return dA_prev, dW, db
def update_parameters(parameters, grads, learning_rate):
    ...
    return parameters


In [6]:

### CONSTANTS DEFINING THE MODEL ####


n_x = 12288 # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)
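
For reference, n_x = 12288 comes from flattening each 64 x 64 x 3 image into a single column vector. A minimal sketch of that flattening and standardization step, assuming train_x_orig holds the raw training images with shape (m, 64, 64, 3) as loaded earlier in the notebook:

# Flatten each (64, 64, 3) image into a column of the (12288, m) data matrix
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T
# Standardize pixel values to the [0, 1] range
train_x = train_x_flatten / 255.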


In [7]:

# GRADED FUNCTION: two_layer_model

def two_layer_model(X, Y, layers_dims, learning_rate=0.0075, num_iterations=3000, print_cost=False):
    """
    Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (n_x, number of examples)
    Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
    layers_dims -- dimensions of the layers (n_x, n_h, n_y)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- If set to True, this will print the cost every 100 iterations

    Returns:
    parameters -- a dictionary containing W1, W2, b1, and b2
    """

    np.random.seed(1)
    grads = {}
    costs = []                         # to keep track of the cost
    m = X.shape[1]                     # number of examples
    (n_x, n_h, n_y) = layers_dims

    # Initialize parameters dictionary, by calling one of the functions you'd previously implemented
    ### START CODE HERE ### (≈ 1 line of code)
    parameters = initialize_parameters(n_x, n_h, n_y)
    ### END CODE HERE ###

    # Get W1, b1, W2 and b2 from the dictionary parameters.
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1". Output: "A1, cache1, A2, cache2".
        ### START CODE HERE ### (≈ 2 lines of code)
        A1, cache1 = linear_activation_forward(X, W1, b1, 'relu')
        A2, cache2 = linear_activation_forward(A1, W2, b2, 'sigmoid')
        ### END CODE HERE ###

        # Compute cost
        ### START CODE HERE ### (≈ 1 line of code)
        cost = compute_cost(A2, Y)
        ### END CODE HERE ###

        # Initializing backward propagation: dA2 is the derivative of the cross-entropy loss with respect to A2
        dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))

        # Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
        ### START CODE HERE ### (≈ 2 lines of code)
        dA1, dW2, db2 = linear_activation_backward(dA2, cache2, 'sigmoid')
        dA0, dW1, db1 = linear_activation_backward(dA1, cache1, 'relu')
        ### END CODE HERE ###

        # Set grads['dW1'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
        grads['dW1'] = dW1
        grads['db1'] = db1
        grads['dW2'] = dW2
        grads['db2'] = db2

        # Update parameters.
        ### START CODE HERE ### (approx. 1 line of code)
        parameters = update_parameters(parameters, grads, learning_rate)
        ### END CODE HERE ###

        # Retrieve W1, b1, W2, b2 from parameters
        W1 = parameters["W1"]
        b1 = parameters["b1"]
        W2 = parameters["W2"]
        b2 = parameters["b2"]

        # Print and record the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters
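
For reference, compute_cost implements the cross-entropy cost, and the dA2 line above is the element-wise derivative of the per-example loss with respect to the output activation (the 1/m averaging is applied later, inside the backward helpers, when dW and db are computed):

$$J = -\frac{1}{m}\sum_{i=1}^{m}\left[\, y^{(i)} \log a^{[2](i)} + \left(1 - y^{(i)}\right)\log\left(1 - a^{[2](i)}\right)\right]$$

$$\frac{\partial \mathcal{L}}{\partial a^{[2]}} = -\left(\frac{y}{a^{[2]}} - \frac{1 - y}{1 - a^{[2]}}\right)$$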

Run the cell below to train your parameters. See if your model runs; the cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check whether the "Cost after iteration 0" matches the expected output below; if not, click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.


In [8]:

parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)

Cost after iteration 0: 0.6930497356599888


Cost after iteration 100: 0.6464320953428849
Cost after iteration 200: 0.6325140647912677
Cost after iteration 300: 0.6015024920354665
Cost after iteration 400: 0.5601966311605747
Cost after iteration 500: 0.515830477276473
Cost after iteration 600: 0.4754901313943325
Cost after iteration 700: 0.4339163151225749
Cost after iteration 800: 0.4007977536203887
Cost after iteration 900: 0.3580705011323798
Cost after iteration 1000: 0.3394281538366412
Cost after iteration 1100: 0.3052753636196264
Cost after iteration 1200: 0.27491377282130164
Cost after iteration 1300: 0.24681768210614846
Cost after iteration 1400: 0.19850735037466116
Cost after iteration 1500: 0.1744831811255664
Cost after iteration 1600: 0.17080762978096148
Cost after iteration 1700: 0.11306524562164734
Cost after iteration 1800: 0.09629426845937152
Cost after iteration 1900: 0.08342617959726863
Cost after iteration 2000: 0.07439078704319081
Cost after iteration 2100: 0.0663074813226793
Cost after iteration 2200: 0.0591932950103817
Cost after iteration 2300: 0.053361403485605585
Cost after iteration 2400: 0.04855478562877016

Expected Output:

**Cost after iteration 0** 0.6930497356599888

**Cost after iteration 100** 0.6464320953428849

**...** ...

**Cost after iteration 2400** 0.048554785628770226


Good thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this.

Now, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the two cells below.

In [9]:

predictions_train = predict(train_x, train_y, parameters)

Accuracy: 1.0

Expected Output:

**Accuracy** 1.0

In [10]:

predictions_test = predict(test_x, test_y, parameters)

Accuracy: 0.72

Expected Output:

**Accuracy** 0.72
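
The predict function is provided by the assignment's utility file. As a rough, hypothetical sketch of what such a helper does for this 2-layer model (predict_two_layer is an illustrative name, not the provided implementation):

def predict_two_layer(X, Y, parameters):
    # Forward pass with the trained parameters
    A1, _ = linear_activation_forward(X, parameters["W1"], parameters["b1"], 'relu')
    A2, _ = linear_activation_forward(A1, parameters["W2"], parameters["b2"], 'sigmoid')
    # Threshold the sigmoid output at 0.5 to obtain 0/1 predictions
    predictions = (A2 > 0.5).astype(int)
    # Accuracy is the fraction of labels predicted correctly
    print("Accuracy: " + str(np.mean(predictions == Y)))
    return predictions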

Note: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test
set. This is called "early stopping" and we will talk about it in the next course. Early stopping is a way to
prevent overfitting.
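
For example, you could re-train with fewer iterations and compare test accuracy (illustrative only; the exact numbers will vary):

parameters_1500 = two_layer_model(train_x, train_y, layers_dims=(n_x, n_h, n_y), num_iterations=1500, print_cost=False)
predictions_test_1500 = predict(test_x, test_y, parameters_1500)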

Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic
regression implementation (70%, assignment week 2). Let's see if you can do even better with an L-layer
model.
