Neural Networks (Part 2)

Ahmed Imam · Published in Analytics Vidhya · Jun 18, 2020

Backpropagation

Tip: You should have a good understanding of what supervised machine learning is and what training and testing data are. (I also recommend reading the first part of this topic.)

  • In supervised machine learning, we have inputs and the actual outputs (labeled data).
  • We start with preliminary random weight values.
  • We therefore need the right weight values to reproduce the actual desired outputs (or come as close to them as possible, with the least error), and finding those values is the role of back-propagation.
Fig. 1: Feed-forward neural network (FFNN) and back-propagation

Feedforward Neural Network

This was discussed in the previous tutorial. The steps are as follows (a minimal code sketch follows the list):

  • 1- Inputs are received.
  • 2- Inputs are combined with the weights, which are usually selected randomly at first.
  • 3- Calculated (predicted) output values are obtained at the output-layer neurons.
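
As a rough sketch (the function and variable names here are illustrative, not from the article), these three steps for a single layer amount to a weighted sum followed by an activation:

import numpy as np

def sigmoid(s):
    # the activation function used throughout this example
    return 1 / (1 + np.exp(-s))

def forward(weights, inputs):
    # step 2: linear combiner (weighted sum); step 3: activation
    return sigmoid(np.dot(weights, inputs))

The worked example later in the article applies exactly this pattern, once for the hidden layer and once for the output layer.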

Back Propagation

4- Compute error terms on the output-layer neurons (δ_k)

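For reference, the output-layer error term for a sigmoid neuron, which is exactly what delta_out_1 computes in the code below, can be written as:

\delta_k = \text{out}_k \, (1 - \text{out}_k) \, (y_k - \text{out}_k)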

5- Compute the derivative of the activation function

Activation Function (sigmoid function)
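
For reference, the sigmoid and its derivative; the out * (1 - out) factors in the code below implement this derivative:

f(s) = \frac{1}{1 + e^{-s}}, \qquad f'(s) = f(s)\,\bigl(1 - f(s)\bigr)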

6- Back-propagate the error to the output layer and adjust its weights so that the error decreases.

7- Back-propagate the error to the hidden layer and adjust its weights so that the error decreases.
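
Both adjustments use the same update rule, written here with η as the learning rate, δ as the error term of the neuron the weight feeds into, and "input" as the signal entering that weight (this is the eta * delta * input pattern in the code below):

w_\text{new} = w_\text{old} + \eta \, \delta \, \text{input}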

When to stop:

Repeat the feed-forward process and compute the error (Mean Square Error):

If E_p ⩽ an acceptable value, then stop; otherwise, do more back-propagation.
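
With a single output neuron, as in the example below, the per-pattern error reduces to the squared difference between the desired and predicted output; this is what delta_1**2 computes in the code:

E_p = (y_1 - \text{out}_1)^2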

Example: a 2-2-1 network with inputs x = [0.9, -0.3], hidden-layer weights [[0.4, -0.2], [1.1, 0.9]], output-layer weights [1.3, -0.4], learning rate η = 0.1, and desired output y1 = 0.5. Run one feed-forward pass and one back-propagation pass, then check that the error decreases.

Solution:

Run FeedForward Propagation

# Let's code it:
import numpy as np
# learning rate (eta):
eta = 0.1
# Actual output
y1 = 0.5
# Activation function (Sigmoid)
def f(s):
    return 1 / (1 + np.exp(-s))
# Inputs
x = np.array([
    [0.9],
    [-0.3]
])
# Hidden-layer weights (one row per hidden neuron)
hidden_weights = np.array([
    [0.4, -0.2],
    [1.1, 0.9]
])
# Output-layer weights (one output neuron)
out_weights = np.array([
    [1.3, -0.4]
])
# Hidden-network linear-combiner:
net_hidden = np.dot(hidden_weights, x)
# Hidden-network output (I):
I = f(net_hidden)
print('Hidden-network output (I): \n', I)

Hidden-network output (I):
[[0.60348325]
[0.67260702]]

# Output-network linear-combiner:
net_output = np.dot(out_weights, I)
# Output-network output (out1):
out1 = f(net_output)
print('Output-network predicted (out1): \n', out1)

Output-network predicted (out1):
[[0.6260915]]

# Calculate Error:
delta_1 = y1 - out1
print(delta_1)
print('Mean Square Error: ', delta_1**2)

[[-0.1260915]]
Mean Square Error: [[0.01589907]]

==========================================================

Run BackPropagation (on output network):

# Output-layer error term: delta_out = out * (1 - out) * (y - out)
delta_out_1 = out1 * (1 - out1) * delta_1
# Weight update: w_new = w_old + eta * delta * input (the input here is the hidden output I)
updated_out_weights = out_weights + (eta * delta_out_1) * np.transpose(I)
print(updated_out_weights)
[[ 1.29821863 -0.40198541]]

==========================================================

Run BackPropagation (on hidden network):

# Hidden-layer error terms: I * (1 - I) * (error back-propagated through the output weights)
delta_hid = I * (1 - I) * np.transpose(delta_1 * out_weights)
print(delta_hid)

[[-0.03922437]
[ 0.01110648]]

# Weight update: w_new = w_old + eta * delta * input (the input here is x)
updated_hid_weights = hidden_weights + (eta * np.transpose(x) * delta_hid)
print(updated_hid_weights)

[[ 0.39646981 -0.19882327]
[ 1.10099958 0.89966681]]

Re-run FeedForward Propagation

# Hidden-network linear-combiner:
net_hidden = np.dot(updated_hid_weights, x)
# Hidden-network output (I):
I = f(net_hidden)
print('Hidden-network output (I): \n', I)

Hidden-network output (I):
[[0.6026382 ]
[0.67282709]]

# Output-network linear-combiner:
net_output = np.dot(out_weights, I)
# Output-network output (out1):
out1 = f(net_output)
print('Output-network predicted (out1): \n', out1)

Output-network predicted (out1):
[[0.62581368]]

# Calculate Error:
delta_1 = y1 - out1
print(delta_1)
print('Mean Square Error: ', delta_1**2)

[[-0.12581368]]
Mean Square Error: [[0.01582908]]
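
One back-propagation pass has reduced the Mean Square Error from about 0.0159 to about 0.0158. To keep going until E_p reaches an acceptable value (the stopping rule above), the same steps can be wrapped in a loop. Below is a minimal sketch under the same setup; the names w_hid, w_out, max_epochs and threshold, and the threshold value of 1e-4, are illustrative assumptions, and both updated weight matrices are applied on every pass:

# Sketch: repeat forward/backward passes until E_p is acceptable.
# Assumes x, y1, eta, f, hidden_weights and out_weights are defined as above.
w_hid = hidden_weights.copy()
w_out = out_weights.copy()
max_epochs = 1000        # illustrative cap on the number of passes
threshold = 1e-4         # illustrative "acceptable value" for E_p
for epoch in range(max_epochs):
    I = f(np.dot(w_hid, x))           # feed-forward: hidden layer
    out1 = f(np.dot(w_out, I))        # feed-forward: output layer
    delta_1 = y1 - out1               # error (desired - predicted)
    E_p = float(delta_1[0, 0] ** 2)   # squared error for this pattern
    if E_p <= threshold:
        break
    # back-propagation, using the same formulas as above
    delta_out_1 = out1 * (1 - out1) * delta_1
    delta_hid = I * (1 - I) * np.transpose(delta_1 * w_out)
    w_out = w_out + (eta * delta_out_1) * np.transpose(I)
    w_hid = w_hid + eta * np.transpose(x) * delta_hid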

Part 1: Neural Network Part 1
