Perceptron

by Uri Wilensky • Written in NetLogo 5.0.3

WHAT IS IT?

Artificial neural networks (ANNs) are computational parallels of biological neurons. The "perceptron" was the first attempt at this particular type of machine learning. It attempts to classify input signals and output a result. It learns by being shown many examples, attempting to classify each one, and having a supervisor report whether the classification was right or wrong. Based on this feedback, the perceptron updates its weights until it classifies all inputs correctly.

For a while it was thought that perceptrons might make good general pattern recognition units. However, it was later discovered that a single perceptron cannot learn some basic tasks, such as 'xor', because they are not linearly separable. This model illustrates that case.

HOW IT WORKS

The two nodes on the left are the input nodes. They can have a value of 1 or -1, and they are how one presents input to the perceptron. The single node on top is the bias node. Its value is always 1, which allows the perceptron to use a constant term in its calculation. The single output node is on the right. The nodes are connected by links, and each link has a weight.

To determine its value, the output node computes the weighted sum of its input nodes: the value of each input node is multiplied by the weight of the link connecting it to the output node, and these weighted values are added up. If the result is above a threshold value, the output node's value is 1; otherwise it is -1. The threshold value for the output node in this model is 0.
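
To make this concrete, here is a minimal standalone sketch of that computation (not part of the model's code; the compute-activation procedure in the code below does the same thing with node and link agents):

to-report example-output [inputs weights]
  ;; the inputs list includes the bias node's constant activation of 1,
  ;; e.g. [1 1 -1] (bias, node 1, node 2) with weights [0.03 -0.02 0.04]
  ;; gives a weighted sum of 0.03 - 0.02 - 0.04 = -0.03
  let weighted-sum sum (map [?1 * ?2] inputs weights)
  ;; threshold of 0: above it report 1, otherwise report -1
  report ifelse-value (weighted-sum > 0) [ 1 ] [ -1 ]
end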

While the network is training, inputs are presented to the perceptron. The value of the output node is compared to the expected value, and the weights of the links are updated so as to move the output toward the correct classification.
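
The update itself follows the classic perceptron training rule. A minimal standalone sketch (the update-weights procedure in the code below applies the same rule to every incoming link):

to-report updated-weight [w x target-value output-value rate]
  ;; target-value and output-value are both 1 or -1, so their difference is
  ;; 0 when the classification was right and +/-2 when it was wrong;
  ;; a wrong answer nudges the weight in the direction that corrects it,
  ;; e.g. updated-weight 0.03 -1 1 -1 0.1 reports 0.03 + 0.1 * 2 * -1 = -0.17
  report w + rate * (target-value - output-value) * x
end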

HOW TO USE IT

SETUP will initialize the model and reset the link weights to small random numbers.

Pressing the TRAIN button will present groups of examples to the perceptron and update the weights accordingly. Press TRAIN ONCE to run just a single epoch of training.

Moving the EXAMPLES-PER-EPOCH slider changes the number of training examples presented to the perceptron during each epoch.

Moving the LEARNING-RATE slider changes the maximum effect that any one example can have on a particular weight. Although the theoretical LEARNING-RATE range extends all the way to 1, the maximum in this model is set to 0.1: high values can make the perceptron oscillate between different solutions, while a low rate encourages convergence.

Pressing TEST will input the values of TEST-INPUT-NODE-1-VALUE and TEST-INPUT-NODE-2-VALUE to the perceptron and compute the output.

If SHOW-WEIGHTS? is on, the thickness of each link will indicate the magnitude of its weight, and the color will indicate the sign: blue for negative weights and red for positive weights.

The TARGET-FUNCTION chooser allows you to decide which function the perceptron is trying to learn.

THINGS TO NOTICE

The perceptron will quickly learn all of the functions except 'xor'; the 'xor' function it will never learn. Moreover, when trying to learn 'xor' it never settles on a particular set of weights, which makes it completely useless as a pattern classifier for non-linearly separable functions. This limitation of perceptrons can be overcome by combining several of them, as is done in multi-layer networks; an example of that can be found in the Artificial Neural Net model.

The RULE LEARNED graph visually demonstrates the line of separation that the perceptron has learned, and presents the current inputs and their classifications. Dots that are green represent points that should be classified positively. Dots that are red represent points that should be classified negatively. Everything on one side of the line will be classified positively and everything on the other side of the line will be classified negatively. As should be obvious from watching this graph, it is impossible to draw a straight line that separates the red and the green dots in the 'xor' function. This is what is meant when it is said that the 'xor' function is not linearly separable.
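
It is easy to verify algebraically that no such line exists for 'xor'. With inputs of 1 and -1 and a threshold of 0, any weights b (bias), w1, and w2 that computed 'xor' would have to satisfy b + w1 + w2 <= 0 and b - w1 - w2 <= 0 (equal inputs must give output -1), as well as b + w1 - w2 > 0 and b - w1 + w2 > 0 (unequal inputs must give output 1). Adding the first pair of inequalities gives b <= 0, while adding the second pair gives b > 0, a contradiction; no such weights exist.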

The ERROR VS. EPOCHS graph displays how the squared error, averaged over each epoch's training examples, changes as training proceeds.
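
Since the targets and outputs are both 1 or -1, each misclassified example contributes (±2)^2 = 4 to the epoch's error sum, so the plotted value (half the average, as computed in the TRAIN procedure below) works out to exactly twice the fraction of examples misclassified during that epoch.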

THINGS TO TRY

Try different learning rates and see how they affect the motion of the line in the RULE LEARNED graph.

Try training the perceptron several times using one of the non-'xor' rules with SHOW-WEIGHTS? turned on. Do the weights ever change after a rule has been learned?

Try modifying the EXAMPLES-PER-EPOCH slider. How does this affect how quickly the rule is learned? How does it affect the ERROR VS. EPOCHS graph?

EXTENDING THE MODEL

Can you come up with a new learning rule to update the edge weights that will always converge even if the function is not linearly separable?

Can you modify the RULE LEARNED graph so it is obvious which side of the line is positive and which side is negative?

NETLOGO FEATURES

This model makes use of several of NetLogo's link features, and it treats each node and link as an individual agent. This is distinct from many other languages, where the whole perceptron would be treated as a single object.

RELATED MODELS

Artificial Neural Net shows how arranging perceptrons in multiple layers can overcome some of the limitations of this model (such as the inability to learn 'xor').

CREDITS AND REFERENCES

Several of the equations in this model are derived from Tom Mitchell's book "Machine Learning" (1997).

Perceptrons were initially proposed in the late 1950s by Frank Rosenblatt.

A standard work on perceptrons is the book Perceptrons by Marvin Minsky and Seymour Papert (1969). The book includes the result that single-layer perceptrons cannot learn XOR. The discovery that multi-layer perceptrons can learn it came later, in the 1980s.

Thanks to Craig Brozefsky for his work in improving this model.

CODE

globals [
  epoch-error   ;; average error in this epoch
  perceptron    ;; a single output-node
  input-node-1  ;; keep the input nodes in globals so we can refer
  input-node-2  ;; to them directly and distinctly
]

;; A perceptron is modeled by input-node and bias-node agents
;; connected to an output-node agent.

;; Connections from input nodes to output nodes
;; in a perceptron.
links-own [ weight ]

;; all nodes have an activation 
;; input nodes have a value of 1 or -1
;; bias-nodes are always 1
turtles-own [activation]

breed [ input-nodes input-node ]

;; bias nodes are input-nodes whose activation
;; is always 1.
breed [ bias-nodes bias-node ]

;; output nodes compute the weighted sum of their
;; inputs and then set their activation to 1 if
;; the sum is greater than their threshold.  An
;; output node can also be the input-node for another
;; perceptron.
breed [ output-nodes output-node ]
output-nodes-own [threshold]

;;
;; Setup Procedures
;;

to setup
  clear-all

  ;; set our background to something more viewable than black
  ask patches [ set pcolor white - 3 ]

  set-default-shape input-nodes "circle"
  set-default-shape bias-nodes "bias-node"
  set-default-shape output-nodes "output-node"

  create-output-nodes 1 [
    set activation random-activation
    set xcor 6
    set size 2
    set threshold 0
    set perceptron self
  ]

  create-bias-nodes 1 [
    set activation 1
    setxy 3 7
    set size 1.5
    my-create-link-to perceptron
  ]

  create-input-nodes 1 [
    setup-input-node
    set label "Node 1"
    setxy -6 5
    set input-node-1 self
  ]

  create-input-nodes 1 [
    setup-input-node
    set label "Node 2"
    setxy -6 0
    set input-node-2 self
  ]

  ask perceptron [ compute-activation ]
  reset-ticks
end 

to setup-input-node
    set activation random-activation
    set size 1.5
    my-create-link-to perceptron
    set label-color magenta
end 

;; links an input or bias node to an output node

to my-create-link-to [ anode ] ;; input or bias node procedure
  create-link-to anode [
    set color red + 1
    ;; links start with a small random weight between -0.05 and 0.05
    set weight random-float 0.1 - 0.05
  ]
end 

;;
;; Runtime Procedures
;;

;; train sets the input nodes to a random input
;; it then computes the output
;; it determines the correct answer and back propagates the weight changes

to train ;; observer procedure
  set epoch-error 0
  repeat examples-per-epoch
  [
    ;; set the input nodes randomly
    ask input-nodes
      [ set activation random-activation ]

    ;; distribute error
    ask perceptron [
      compute-activation
      update-weights target-answer
      recolor
    ]
  ]

  ;; plot stats
  set epoch-error epoch-error / examples-per-epoch
  set epoch-error epoch-error * 0.5
  tick
end 

;; compute activation by summing the inputs * weights 
;; and run through sign function which determines whether
;; the computed value is above or below the threshold

to compute-activation ;; output-node procedure
  set activation sign sum [ [activation] of end1 * weight ] of my-in-links
  recolor
end 

to update-weights [ answer ] ;; output-node procedure
  let output-answer activation

  ;; calculate error for output nodes
  let output-error answer - output-answer

  ;; update the epoch-error: answer and output-answer are both 1 or -1,
  ;; so each example adds 0 if it was classified correctly and 4 if not
  set epoch-error epoch-error + (answer - sign output-answer) ^ 2

  ;; examine input output edges and set their new weight
  ;; increasing or decreasing it by a value determined by the learning-rate
  ask my-in-links [
    set weight weight + learning-rate * output-error * [activation] of end1
  ]
end 

;; computes the sign function given an input value

to-report sign [input]  ;; output-node procedure
  ifelse input > threshold
  [ report 1 ]
  [ report -1 ]
end 

to-report random-activation ;; observer procedure
  ifelse random 2 = 0
  [ report 1 ]
  [ report -1 ]
end 

to-report target-answer ;; observer procedure
  let a [activation] of input-node-1 = 1
  let b [activation] of input-node-2 = 1
  report ifelse-value (runresult (word "my-" target-function " a b")) [1][-1]
end 

to-report my-or [a b];; output-node procedure
  report (a or b)
end 

to-report my-xor [a b] ;; output-node procedure
  report (a xor b)
end 

to-report my-and [a b] ;; output-node procedure
  report (a and b)
end 

to-report my-nor [a b] ;; output-node procedure
  report not (a or b)
end 

to-report my-nand [a b] ;; output-node procedure
  report not (a and b)
end 

;; test runs one instance and computes the output

to test ;; observer procedure
  ask input-node-1 [ set activation test-input-node-1-value ]
  ask input-node-2 [ set activation test-input-node-2-value ]

  ;; compute the correct answer
  let correct-answer target-answer

  ;; color the nodes
  ask perceptron [ compute-activation ]

  ;; compute the answer

  let output-answer [activation] of perceptron

  ;; output the result
  ifelse output-answer = correct-answer
  [
    user-message (word "Output: " output-answer "\nTarget: " correct-answer "\nCorrect Answer!")
  ]
  [
    user-message (word "Output: " output-answer "\nTarget: " correct-answer "\nIncorrect Answer!")
  ]
end 


;; Sets the color of the perceptron's nodes appropriately
;; based on activation

to recolor ;; output, input, or bias node procedure
  ifelse activation = 1
    [ set color white ]
    [ set color black ]
  ask in-link-neighbors [ recolor ]

  ifelse show-weights?
  [ resize-recolor-links ]
  [
    ask my-in-links [
      set thickness 0
      set label ""
      set color red + 1
    ]
  ]
end 

;; resize and recolor the edges
;; resize to indicate weight
;; recolor to indicate positive or negative

to resize-recolor-links
  ask links [
    set label precision weight 4
    set thickness 0.1 + 4 * abs weight
    ifelse weight > 0
    [ set color red + 1 ]
    [ set color blue ]
  ]
end 
