An artificial neural network (ANN) is a computational model based on the structure and functions of biological neural networks. Information that flows through the network affects the structure of the ANN, because a neural network changes, or learns in a sense, based on that input and output. ANNs are considered nonlinear statistical data-modeling tools in which complex relationships between inputs and outputs are modeled or patterns are found. An ANN is also known simply as a neural network.

The **Hebb Network** was proposed by Donald Hebb in 1949. According to Hebb's rule, the weights increase in proportion to the product of input and output. It means that in a Hebb network, if two neurons are interconnected, the weight across the synaptic gap between them is strengthened whenever they activate together.

This network is suitable for bipolar data, and the Hebbian learning rule is commonly demonstrated on logic gates.

The weights are updated as:

**w_{i}(new) = w_{i}(old) + x_{i} · y**

and the bias as **b(new) = b(old) + y**.

**Training Algorithm For Hebbian Learning Rule**

The training steps of the algorithm are as follows:

- Step 1: Initially, the weights are set to zero, i.e. w_{i} = 0 for all i = 1 to n, where n is the total number of input neurons; the bias is also set to b = 0.
- Step 2: For each training pair s : t, set the input activations x_{i} = s_{i}; the activation function for the inputs is generally the identity function.
- Step 3: Set the output activation to the target value, y = t.
- Step 4: Adjust the weights and bias: w_{i}(new) = w_{i}(old) + x_{i} · y and b(new) = b(old) + y.
- Steps 2 to 4 are repeated for each input vector and its target.
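The steps above can be sketched in Python (a minimal illustration; the function name `hebb_train` and the NumPy representation are my own choices, not part of the original algorithm statement):

```python
import numpy as np

def hebb_train(samples, targets):
    """Train a single Hebb neuron. samples: (patterns, inputs); targets: bipolar values."""
    w = np.zeros(samples.shape[1])      # Step 1: weights start at zero
    b = 0.0                             # Step 1: bias starts at zero
    for s, t in zip(samples, targets):  # Steps 2-4 repeated per training pair
        x = s                           # Step 2: identity activation on the inputs
        y = t                           # Step 3: output activation set to the target
        w = w + x * y                   # Step 4: Hebbian weight update
        b = b + y                       # Step 4: bias update
    return w, b
```

For the bipolar AND data used below, this returns the weights [2, 2] and bias -2.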

**Implementing the AND Gate:**

**Step 1:**

Set the weights and bias to zero: w = [ 0 0 0 ]^{T} and b = 0.

**Step 2:**

Set the input vectors X_{i} = S_{i} for i = 1 to 4, where the third component of each vector is the constant bias input 1:

X_{1} = [ -1 -1 1 ]^{T}

X_{2} = [ -1 1 1 ]^{T}

X_{3} = [ 1 -1 1 ]^{T}

X_{4} = [ 1 1 1 ]^{T}

**Step 3:**

Set the output value to the target, y = t. For the bipolar AND function the targets are t_{1} = t_{2} = t_{3} = -1 and t_{4} = 1.

**Step 4:**

Modify the weights using the Hebbian rule:

First iteration –

w(new) = w(old) + x_{1}y_{1} = [ 0 0 0 ]^{T} + [ -1 -1 1 ]^{T} · (-1) = [ 1 1 -1 ]^{T}

For the second iteration, the updated weights from the first iteration are used, and so on.

Second iteration –

w(new) = [ 1 1 -1 ]^{T} + [ -1 1 1 ]^{T} · (-1) = [ 2 0 -2 ]^{T}

Third iteration –

w(new) = [ 2 0 -2 ]^{T} + [ 1 -1 1 ]^{T} · (-1) = [ 1 1 -3 ]^{T}

Fourth iteration –

w(new) = [ 1 1 -3 ]^{T} + [ 1 1 1 ]^{T} · (1) = [ 2 2 -2 ]^{T}

So, the final weight vector is w = [ 2 2 -2 ]^{T}.
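The four iterations can be reproduced with a short script (a sketch; as in the vectors above, the bias is folded in as a constant third input of 1):

```python
import numpy as np

# Bipolar AND training pairs; the third input column is the constant bias input 1.
X = np.array([[-1, -1, 1],
              [-1,  1, 1],
              [ 1, -1, 1],
              [ 1,  1, 1]])
t = np.array([-1, -1, -1, 1])   # bipolar AND targets

w = np.zeros(3)                 # Step 1: w = [0 0 0]^T
for x_i, t_i in zip(X, t):
    w = w + x_i * t_i           # Hebb update, one iteration per training pair
    print(w)                    # traces [1 1 -1], [2 0 -2], [1 1 -3], [2 2 -2]
```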

**Testing the network:**

For x_{1} = -1, x_{2} = -1, bias input = 1: Y = (-1)(2) + (-1)(2) + (1)(-2) = -6

For x_{1} = -1, x_{2} = 1, bias input = 1: Y = (-1)(2) + (1)(2) + (1)(-2) = -2

For x_{1} = 1, x_{2} = -1, bias input = 1: Y = (1)(2) + (-1)(2) + (1)(-2) = -2

For x_{1} = 1, x_{2} = 1, bias input = 1: Y = (1)(2) + (1)(2) + (1)(-2) = 2

Y is positive only when both inputs are 1, so the results are all compatible with the bipolar AND truth table.
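The test pass above can be checked mechanically (a sketch; taking the sign of the net input as the bipolar output):

```python
import numpy as np

w = np.array([2, 2, -2])        # final weights from training (last entry is the bias weight)
X = np.array([[-1, -1, 1],
              [-1,  1, 1],
              [ 1, -1, 1],
              [ 1,  1, 1]])
for x in X:
    net = x @ w                 # net input: w1*x1 + w2*x2 + (bias weight)(1)
    out = 1 if net > 0 else -1  # bipolar output from the sign of the net input
    print(x[:2], net, out)      # only the input (1, 1) yields a positive net input
```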

**Decision Boundary:**

The net input is y = 2x_{1} + 2x_{2} - 2, since the bias input is 1 and its weight is -2.

Setting y = 0 for the boundary: 2x_{1} + 2x_{2} - 2 = 0

2( x_{1} + x_{2} ) = 2

The final equation of the separating line: x_{2} = -x_{1} + 1
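The separating line can be verified against all four bipolar inputs (a small sketch; the truth-table dictionary is my own construction):

```python
# Each bipolar input pair mapped to its AND target.
table = {(-1, -1): -1, (-1, 1): -1, (1, -1): -1, (1, 1): 1}
for (x1, x2), target in table.items():
    # Points above the line x2 = -x1 + 1 (i.e. x1 + x2 - 1 > 0) are classified +1.
    predicted = 1 if x1 + x2 - 1 > 0 else -1
    print((x1, x2), predicted == target)    # every pattern falls on the correct side
```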
