Define mathematical model of artificial neural network. Discuss how Hebbian learning algorithm can be used to train a neural network. Support your answer with an example.


An artificial neural network (ANN) is a computational model based on the structure and functions of biological neural networks. Information flowing through the network affects its structure, because a neural network changes, or learns, based on the inputs and outputs it observes. ANNs are nonlinear statistical data-modeling tools used to model complex relationships between inputs and outputs or to find patterns in data. Mathematically, each artificial neuron computes a weighted sum of its inputs plus a bias and passes it through an activation function f, giving the output y = f( w1x1 + w2x2 + ... + wnxn + b ).

Hebbian Learning Algorithm

The Hebb network was proposed by Donald Hebb in 1949. According to Hebb's rule, a weight increases in proportion to the product of the input and the output. In a Hebb network, if two interconnected neurons are active at the same time, the weight between them is increased by strengthening the synaptic connection.

This network is suited to bipolar data, and the Hebbian learning rule is commonly demonstrated on logic gates.

The weights are updated as:

wi (new) = wi (old) + xi * y

Training Algorithm For Hebbian Learning Rule

The training steps of the algorithm are as follows:

  • Initially, the weights are set to zero, i.e. wi = 0 for all i = 1 to n, where n is the total number of input neurons.
  • For each training pair s : t, the activations of the input units are set to the input vector, i.e. xi = si. The activation function for the inputs is the identity function.
  • The activation of the output unit is set to the target, i.e. y = t.
  • The weights and bias are adjusted as:
    wi (new) = wi (old) + xi * y
    b (new) = b (old) + y
  • Steps 2 to 4 are repeated for each input vector and its target output.
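The steps above can be sketched in Python. This is a minimal illustration, not a library API: `hebb_train` is a hypothetical helper name, and the bias is treated as an extra input component fixed at 1, so its weight is updated by the same rule.

```python
def hebb_train(samples):
    """Hebb rule for one neuron: wi(new) = wi(old) + xi * y, with y = t.

    samples is a list of (input_vector, target) pairs; each input vector
    carries the bias as a trailing component fixed at 1.
    """
    n = len(samples[0][0])
    w = [0] * n                 # step 1: all weights (bias included) start at zero
    for x, t in samples:        # steps 2-4, once per training pair
        y = t                   # identity activation: output is set to the target
        w = [wi + xi * y for wi, xi in zip(w, x)]
    return w
```

Calling it on the four bipolar AND-gate samples reproduces the hand computation that follows.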

Implementing AND Gate : 


Truth table of the AND gate in bipolar form (the third column is the bias input, fixed at 1):

x1   x2   b    t
-1   -1   1   -1
-1    1   1   -1
 1   -1   1   -1
 1    1   1    1

There are 4 training samples, so there will be 4 iterations. The data are represented in bipolar form, so all inputs and targets take values in [-1, 1].

Step 1 : 

Set the weights and bias to zero: w = [ 0 0 0 ]T, where the third component is the bias weight, so b = 0.

Step 2 : 

Set input vector Xi = Si  for i = 1 to 4.

X1 = [ -1 -1 1 ]T

X2 = [ -1 1 1 ]T

X3 = [ 1 -1 1 ]T

X4 = [ 1 1 1 ]T

Step 3 : 

Output value is set to y = t.

Step 4 : 

Modifying weights using Hebbian Rule:

First iteration –

w(new) = w(old) + x1y1 = [ 0 0 0 ]T + [ -1 -1 1 ]T · (-1) = [ 1 1 -1 ]T

For the second iteration, the final weight of the first one will be used and so on.

Second iteration –

w(new) = [ 1 1 -1 ]T + [ -1 1 1 ]T · (-1) = [ 2 0 -2 ]T

Third iteration –

w(new) = [ 2 0 -2 ]T + [ 1 -1 1 ]T · (-1) = [ 1 1 -3 ]T

Fourth iteration –

w(new) = [ 1 1 -3 ]T + [ 1 1 1 ]T · (1) = [ 2 2 -2 ]T

So, the final weight vector is [ 2 2 -2 ]T.
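The four iterations above can be reproduced with a short script. As before, the bias is folded in as a third input component fixed at 1, so the third weight acts as the bias.

```python
# Training samples of the bipolar AND gate: inputs (with bias input 1) and targets.
X = [(-1, -1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, 1)]
T = [-1, -1, -1, 1]

w = [0, 0, 0]
trace = []
for x, t in zip(X, T):
    # Hebb update: w(new) = w(old) + x * y, with y = t.
    w = [wi + xi * t for wi, xi in zip(w, x)]
    trace.append(list(w))

print(trace)  # [[1, 1, -1], [2, 0, -2], [1, 1, -3], [2, 2, -2]]
```

The printed trace matches the hand-computed iterations, ending at the final weights [ 2 2 -2 ]T.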

Testing the network : 


[Figure: The network with the final weights]

For x1 = -1, x2 = -1, b = 1, Y = (-1)(2) + (-1)(2) + (1)(-2) = -6

For x1 = -1, x2 = 1, b = 1, Y = (-1)(2) + (1)(2) + (1)(-2) = -2

For x1 = 1, x2 = -1, b = 1, Y = (1)(2) + (-1)(2) + (1)(-2) = -2

For x1 = 1, x2 = 1, b = 1, Y = (1)(2) + (1)(2) + (1)(-2) = 2

Taking the sign of the net input as the output, all four results agree with the AND truth table: the net input is positive only for x1 = x2 = 1.
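The testing step can be checked in a few lines: evaluate the net input for each bipolar input pair with the trained weights [ 2 2 -2 ] and take its sign as the predicted output.

```python
w = [2, 2, -2]  # trained weights; the third entry is the bias weight
results = []
for x1, x2 in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
    net = w[0] * x1 + w[1] * x2 + w[2] * 1  # third input is the bias, fixed at 1
    y = 1 if net > 0 else -1                # sign of the net input
    results.append((net, y))

print(results)  # [(-6, -1), (-2, -1), (-2, -1), (2, 1)]
```

The net inputs -6, -2, -2, 2 match the hand computation, and the signs reproduce the AND outputs.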

Decision Boundary : 

The net input of the trained network is y = 2x1 + 2x2 - 2b.

Setting y = 0 for the boundary gives 2x1 + 2x2 - 2b = 0.

Since the bias input b = 1, this becomes 2x1 + 2x2 - 2(1) = 0

2( x1 + x2 ) = 2

The final equation: x2 = -x1 + 1
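A quick check of the boundary x2 = -x1 + 1: a point is classified +1 when it lies above the line (x1 + x2 > 1) and -1 otherwise, which should match the AND targets for all four bipolar inputs.

```python
checks = []
for (x1, x2), target in [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]:
    side = 1 if x1 + x2 > 1 else -1  # which side of the line x2 = -x1 + 1
    checks.append(side == target)

print(checks)  # [True, True, True, True]
```

Only the point (1, 1) lies above the line, so the boundary separates the single positive AND sample from the three negative ones.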


[Figure: Decision boundary of the AND function]
