in Artificial Intelligence

In a single perceptron, the weight-vector update rule is given by

  1. $w(n+1) = w(n) + \eta [d(n)-y(n)]$
  2. $w(n+1) = w(n) - \eta [d(n)-y(n)]$
  3. $w(n+1) = w(n) + \eta [d(n)-y(n)]*x(n)$
  4. $w(n+1) = w(n) - \eta [d(n)-y(n)]*x(n)$

1 Answer


The answer is option 3: $w(n+1) = w(n) + \eta [d(n)-y(n)]\,x(n)$ — the update is scaled by the input vector $x(n)$ as well as the error $d(n)-y(n)$.

Steps

  1. Initialize the weights and the threshold. Weights may be initialized to 0 or to a small random value. In the example below, we use 0.
  2. For each example $j$ in our training set $D$, perform the following steps over the input $\mathbf{x}_j$ and desired output $d_j$:
    1. Calculate the actual output:
      $y_j(t) = f[\mathbf{w}(t) \cdot \mathbf{x}_j] = f[w_0(t)x_{j,0} + w_1(t)x_{j,1} + w_2(t)x_{j,2} + \dotsb + w_n(t)x_{j,n}]$
    2. Update the weights:
      $w_i(t+1) = w_i(t) + (d_j - y_j(t))\,x_{j,i}$, for all features $0 \leq i \leq n$

      For more details, refer to https://en.wikipedia.org/wiki/Perceptron
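The steps above can be sketched in plain Python. This is a minimal illustration, not a library implementation: the function names (`train_perceptron`, `predict`) are made up for this example, weights start at 0 as in step 1, and the learning rate is an explicit `eta` (the steps above implicitly use $\eta = 1$).

```python
def step(z):
    """Heaviside step activation: 1 if z >= 0, else 0."""
    return 1 if z >= 0 else 0

def train_perceptron(X, d, epochs=10, eta=1.0):
    """Apply w_i(t+1) = w_i(t) + eta * (d_j - y_j(t)) * x_{j,i}.

    w[0] is the bias weight; the corresponding input x_{j,0} is fixed at 1.
    """
    n = len(X[0])
    w = [0.0] * (n + 1)            # step 1: initialize weights to 0
    for _ in range(epochs):
        for xj, dj in zip(X, d):   # step 2: loop over the training set D
            x = [1.0] + list(xj)   # prepend the bias input x_{j,0} = 1
            # step 2.1: actual output y_j(t) = f[w(t) . x_j]
            y = step(sum(wi * xi for wi, xi in zip(w, x)))
            # step 2.2: update every weight by the scaled error
            for i in range(len(w)):
                w[i] += eta * (dj - y) * x[i]
    return w

def predict(w, xj):
    x = [1.0] + list(xj)
    return step(sum(wi * xi for wi, xi in zip(w, x)))

# Learn the AND function, which is linearly separable, so the rule converges
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
d = [0, 0, 0, 1]
w = train_perceptron(X, d)
```

Note that the update only fires when the prediction is wrong ($d_j - y_j(t) \neq 0$), which is why the rule converges in finitely many steps on linearly separable data.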

1 comment

From which subject is this?