GoGo Mapper


Author: Bryan Head

Written in NetLogo 5.0.3.


WHAT IS IT?

This model allows students to look into the brain of an autonomous vehicle, like the Curiosity rover, the Roomba, or Google's self-driving car, to see one strategy robots might employ to learn both their environment and their own limitations. The student can also take control of the robot to see if she can figure out how to best take advantage of the robot's learning algorithms. The robot's goal is to learn both the layout of its environment and its own physical attributes. Beyond the mapping itself, the robot faces several difficulties. First, it must work with the unreliable data of a single distance sensor of unknown range. Second, it must deal with motors and wheels that can slip, get stuck, get out of sync, and so forth. Finally, it must be able to revise mistaken conclusions it has reached about the environment as it acquires more information about itself. These unpredictabilities are challenges one only encounters when dealing with physical robots, and the difficulty they present is easy to underestimate.

HOW IT WORKS

The view in NetLogo is the robot's conception of the space around it. If a patch is red, the robot suspects there might be an obstacle there. The brighter the red, the more likely the obstacle. The robot is trying to learn the following:

  • Which patches correspond to obstacles in its environment
  • How many degrees it turns when issued a turn command (turn-speed)
  • How far its sensor can sense (sensor-range)
  • How far the sensor sits in front of the robot's center (sensor-position)

Furthermore, it must learn if it has made a mistake and correct for that mistake.

To accomplish these goals, the model uses an agent-based approach to learning. The algorithm is a rough fusion of a particle filter and a genetic algorithm. In the view, you will see many turtles. These turtles are the hypotheses the robot has about the states it might be in: let's call these turtles "possible states". Each possible state has a slightly different position, heading, turn-speed, sensor-range, and sensor-position. The position of each possible state is a position that the robot thinks it might be in, and likewise for the rest of the variables. The line coming out of a possible state represents what the sensor is doing if that state is right, given the value of the distance sensor.
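
In the code, each possible state is a turtle of the robots breed; its position and heading are the turtle's built-in variables, and its remaining estimates are declared in robots-own (paraphrased here from the code listing below):

    robots-own [
      sensor-range     ;; how far it thinks the sensor can see
      confidence       ;; how well its predictions have matched the sensor so far
      turn-speed       ;; how many degrees it thinks one turn command rotates it
      sensor-position  ;; how far in front of its center it thinks the sensor sits
    ]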

These possible states are judged on whether or not they agree with the distance sensor's current reading, given the robot's current conception of its environment. At the same time, each possible state adjusts the likelihood that the patches in front of it are solid, based on what the sensor reading would imply if that possible state were correct.

The algorithm does the following each step:

  1. Each possible state fires an "echo" out in front of it. The distance of the echo depends on the sensor reading as well as what the possible state thinks the sensor-position and sensor-range are.
  2. As the echo travels out, the confidence in the possible state goes up if the behavior of the echo agrees with what the robot already thinks about the world, and down otherwise.
  3. The echoes are pulled back to each possible state. On their way, they adjust the likelihood that the patches they travel over are solid.
  4. The possible states that performed the best are kept and the rest are destroyed.
  5. The remaining possible states reproduce. The children of the possible states have slightly different characteristics than their parents.
  6. The robot decides what behavior to carry out and does it. The possible states try to do the same movement given their turn-speed. Each possible state's movements will vary randomly by a small amount. Thus, if one of the robot's wheels slips, or its motors get slightly out of sync, some of the possible states may have undergone similar variations.
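
These six steps map directly onto the model's go procedure, reproduced here with the steps marked (the full code appears in the code listing below):

    to go
      sensor-check                   ;; steps 1-3: fire echoes, score the possible states, vote on patches
      select-robots                  ;; step 4: keep only the best possible states
      ask robots [reproduce]         ;; step 5: offspring with slightly mutated attributes
      ask robots [set confidence 0]  ;; reset confidence for the next cycle
      if autonomous-mode? = true     ;; step 6: decide on a movement and carry it out
      [autonomous-move-behavior]
    end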

The default behavior of the robot is very simple. It will go forward until it encounters an obstacle. It will then turn and back up slightly, and then continue forward.
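
This default behavior is implemented by autonomous-move-behavior in the code, shown here with comments:

    to autonomous-move-behavior
      ifelse sense-dist > max-dist                    ;; nothing within sensor range
        [robots-fd set previous-hit? false]           ;; keep driving forward
        [robots-rt robots-bk set previous-hit? true]  ;; obstacle: turn right and back up
    end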

HOW TO USE IT

First, try setting up a small environment for the robot out of various objects. Press setup and go, and watch the robot try to learn about its environment.

Think you can do a better job than the robot? Turn off autonomous-mode, arrange things so that you can't see the robot, and have your friends set up the robot's environment. Then, using only what you see in NetLogo, try to get the robot to map out its environment.

THINGS TO TRY

Try changing the mutation rates and initial values of the robot's attributes.

EXTENDING THE MODEL

By default, the robot's behavior is very simple and not very smart. Try implementing a better behavioral strategy that will help the robot learn about itself and its environment more quickly.
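
As a sketch of what such a strategy might look like, here is one hypothetical, untested alternative that reuses the model's existing movement and sensing procedures; the procedure name autonomous-move-behavior-explore and the random-turn idea are suggestions, not part of the model. To try it, call it from go in place of autonomous-move-behavior.

    ;; hypothetical: after hitting an obstacle, back up and turn a random number of steps,
    ;; re-sensing as it goes, so the robot explores the room instead of bouncing back and forth
    to autonomous-move-behavior-explore
      ifelse sense-dist > max-dist
        [robots-fd set previous-hit? false]
        [robots-bk
         repeat (1 + random 5) [robots-rt sensor-check]
         set previous-hit? true]
    end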

CREDITS AND REFERENCES

The algorithm is based loosely on lectures by Sebastian Thrun during Stanford's online Introduction to AI class. If he saw this model, however, he would probably be appalled and deny any similarity between what he was talking about and what is seen here.


;; There are two main problems that the robot must solve
;; 1) Figure out the layout of the room -- that is, what patches are solid, what aren't
;; 2) Calibrate its sensor range and turn speed, as well as allow for imprecise movements
;; This algorithm attempts to solve both problems at once.
;; It works by having many potential robots, each with somewhat different characteristics and positions.
;; Each potential robot checks its accuracy against the sensor data. The confidence in each
;; potential robot is adjusted accordingly.
;; Then, each robot basically votes on which patches are solid and which are vacant, based on the
;; sensor data and the potential robot's attributes.
;; The best potential robots are then selected and the rest are removed.
;; The best robots reproduce with small variations.
;; The real robot and potential robots then move in an ideally similar fashion and the cycle repeats.
;;
;; Hopefully the robot's conceptions of its environment and of itself converge.
;; The nice part of this algorithm is that it makes no assumptions about the sensitivity of the robot's
;; sensors, power of its motors, or the environment. It also requires no deduction, complex math,
;; or anything.
;;
;; It is loosely based on the particle filter algorithm used to orient robots in a known environment
;; while allowing for error in movements. This algorithm doesn't use any actual probability theory though.

extensions [ gogo ]

globals [
  serial-port
  test?
  controlable-srange
  ;; If the sensor reads > max-dist, we say it didn't hit anything
  time-step       ;; how long actions are done for
  max-confidence  ;; A patch can't have solidity > max-confidence; a robot can have confidence > max-confidence, but it only helps in selection, not reproduction
  previous-hit?
]

breed [ robots robot ]
breed [ fauxgos fauxgo ]  ;; fake gogo board robot for testing
breed [ echoes echo ]

robots-own [
  sensor-range ;; The sensor's range, in units of the distance the robot travels in one time-step
  confidence    ;; Represents how confident we are in the bot
  turn-speed  ;; degrees the robot thinks it moves at a time
  sensor-position ;; how far in front of the center of rotation the sensor is
]

fauxgos-own [
  sensor-range
  turn-speed
  sensor-position
]

echoes-own [
  sensor-range
  sensor-position
  parent
  hit-patch
]

patches-own [
  solidity      ;; Represents how likely it is that the patch is solid
  solid?        ;; used for testing
]

to set-auton
  set autonomous-mode? true
end 

to set-auton-off
  set autonomous-mode? false
end 

to setup
  if not gogo:open? [ setup-gogo ]
  setup-brain
  set test? false
end 

to setup-gogo
  ifelse length (gogo:ports) > 0
    [ set serial-port user-one-of "Select a port:" gogo:ports ]
    [ user-message "There is a problem with the connection. Check if the board is on, and if the cable is connected. Otherwise, try to quit NetLogo, power cycle the GoGo Board, and open NetLogo again. For more information on how to fix connection issues, refer to the NetLogo documentation or the info tab of this model"
      stop ]
  gogo:open serial-port
  repeat 5
    [ if not gogo:ping
      [ user-message "There is a problem with the connection. Check if the board is on, and if the cable is connected. Otherwise, try to quit NetLogo, power cycle the GoGo Board, and open NetLogo again. For more information on how to fix connection issues, refer to the NetLogo documentation or the info tab of this model"] ]
  gogo:talk-to-output-ports [ "a" "b" "c" "d"]
end 

to setup-test
  setup-brain
  create-fauxgos 1 [
    set heading 0
    set turn-speed 3
    set sensor-position 3
    set shape "turtle"
    set size 5
    set sensor-range 3
  ]
  ask patches [
    set solid? pxcor = min-pxcor or
               pycor = min-pycor or
               pxcor = max-pxcor or
               pycor = max-pycor or
               (pxcor = 25 and pycor > -1) or
               (pxcor > -1 and pycor = 25) or
               (pxcor < 0 and pycor < 0 and (25 > abs (pxcor * pxcor + pycor * pycor - 625) ))
  ]
  ask patches with [solid?] [set pcolor white]
  set test? true
end 

to setup-brain
  ca
  set time-step 0.05
  set max-confidence 1000
  set previous-hit? false
  create-robots 1 [
    set heading 0
    set size 5
    set confidence max-confidence
    set turn-speed init-turn-speed
    set sensor-range init-sensor-range
    set sensor-position init-sensor-position
  ]
end 

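;; observer procedure
;; One full learning cycle: score the possible states against the sensor, select the best,
;; let them reproduce, then carry out a movement.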
to go
  sensor-check
  select-robots
  ask robots [reproduce]
  ask robots [set confidence 0]
  if autonomous-mode? = true
  [autonomous-move-behavior]
end 

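;; observer procedure
;; Default behavior: drive forward until the sensor detects an obstacle, then turn right and back up.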
to autonomous-move-behavior
  ifelse sense-dist > max-dist [robots-fd set previous-hit? false] [robots-rt robots-bk set previous-hit? true ]
end 

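;; observer procedure
;; Alternative behavior: like the default, but once a hit clears it backs up and sweeps left
;; while re-sensing, before continuing forward.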
to autonomous-move-behavior1
  ifelse previous-hit? = false
  [ifelse sense-dist > max-dist [robots-fd set previous-hit? false] [robots-rt robots-bk set previous-hit? true ]]
  [ifelse sense-dist > max-dist [repeat 10 [robots-bk robots-lt sensor-check] set previous-hit? false] [robots-rt robots-bk set previous-hit? true ]]   
end 
;; observer procedure
;; Adjusts robots' confidences and patches' solidities based on sensor reading.

to sensor-check
  ask robots [set confidence confidence - 5 * [solidity] of patch-here]
  ask robots [fire-echo]
  ask robots [retract-echo]
  ask patches [
    if solidity < 0 [set solidity 0]
    if solidity > max-confidence [set solidity max-confidence]
    patch-recolor
  ]
  cd
end 

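;; Reports the raw distance-sensor reading: simulated with the fauxgo in test mode,
;; otherwise read from sensor 1 on the GoGo board.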
to-report sense-dist
  ifelse test? [
    let dist 0
    ask fauxgos [
      hatch-echoes 1 [
        set parent myself
        fd sensor-position
        while [distance myself < (sensor-range + sensor-position) and not [solid?] of patch-here and can-move? echo-speed] [
          fd echo-speed
        ]
        set dist distance myself - sensor-position
        die
      ]
    ]
    report 1.1 * max-dist * (random-normal 0 .1 + dist) / [sensor-range] of one-of fauxgos
  ] [report gogo:sensor 1]
end 

;; robot procedure
;; Gets the sensor's output in patches based on what this robot thinks the sensor's range is

to-report get-dist
  report ifelse-value learn-sensor-range? [sensor-range][init-sensor-range] * sense-dist / max-dist
end 

;; robot procedure
;; Shoots an echo to the spot where the sensor sensed something (according to the robot)
;; The robot's confidence level is adjusted on the way:
;; - The echo passing over a patch that the robot thinks is solid penalizes the robot
;; - The echo passing over a patch that the robot thinks is vacant rewards the robot slightly
;; - At the end, if the robot correctly predicted whether or not an obstacle was there, the robot is rewarded
;; - At the end, if the robot incorrectly predicted, the robot is penalized
;; All penalties/rewards are adjusted based on the solidity of the patches (as solidity sort of corresponds to confidence)

to fire-echo
  let dist get-dist
  hatch-echoes 1 [
    set size 1
    fd sensor-position
    pd
    set parent myself
  ]
  ask echoes with [parent = myself] [
    while [distance myself < (dist + sensor-position) and can-move? echo-speed] [
      let cur-patch patch-here
      ask parent [set confidence confidence + vacant-patch-reward-factor * (5 - [solidity] of cur-patch) / 10]
      fd echo-speed
    ]
    ifelse dist < sensor-range [
      set hit-patch patch-here
      ask parent [set confidence confidence + obstacle-reward-factor * ([solidity] of [hit-patch] of myself - (max-confidence / 3)) / 10]
    ] [
      set hit-patch nobody
    ]
  ]
end 

;; robot procedure
;; Like fire-echo, but adjusts solidity of the patches as the echo travels back to the robot.
;; The robots are essentially voting on what patches they think are solid

to retract-echo
  ask echoes with [parent = myself] [
    if hit-patch != nobody [ask hit-patch [ set solidity solidity + 5 ]]
    while [distance parent >= sensor-position] [
      face parent
      fd echo-speed
      if patch-here != hit-patch [ ask patch-here [ set solidity solidity - 1] ]
    ]
    die
  ]
end 

;; robot procedure
;; Each echo travels the same number of steps, otherwise it's biased to bigger sensor-ranges

to-report echo-speed
  report [sensor-range] of parent / 20
end 

;; observer procedure
;; Natural selection - best keep-robots robots are kept, rest die

to select-robots
  if count robots > keep-robots [
    ask min-n-of (count robots - keep-robots) robots [confidence] [die]
  ]
end 

;; robot procedure
;; Robots reproduce based on their confidence

to reproduce
  hatch-robots reproduce-amount [
    rt random-normal 0 .1
    if learn-sensor-range? [
      set sensor-range sensor-range + random-normal 0 sensor-range-mut-rate
    ]
    set color 5 + 10 * random 14
    if learn-turn-speed? [
      set turn-speed turn-speed + random-normal 0 turn-speed-mut-rate
    ]
    if learn-sensor-position? [
      set sensor-position sensor-position + random-normal 0 sensor-position-mut-rate
    ]
  ]
end 

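;; observer procedure
;; Drives the real robot (or the fauxgo in test mode) forward for one time-step; each possible
;; state moves forward by about one unit with a little random variation. robots-bk is the mirror image.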
to robots-fd
  ask robots [fd random-normal 1 vary-move]
  ifelse test? [
    ask fauxgos [
      let dist random-normal 1 0.1
      fd dist
      if [solid?] of patch-here [bk dist]
    ]
  ] [
    gogo:talk-to-output-ports ["a" "b"]
    gogo:set-output-port-power 7
    gogo:output-port-on
    wait time-step
    gogo:output-port-off
  ]
end 

to robots-bk
  ask robots [bk random-normal 1 vary-move]
  ifelse test? [
    ask fauxgos [
      let dist random-normal 1 0.1
      bk dist
      if [solid?] of patch-here [fd dist]
    ]
  ] [
    gogo:talk-to-output-ports ["a" "b"]
    gogo:set-output-port-power 7
    gogo:output-port-reverse
    gogo:output-port-on
    wait time-step
    gogo:output-port-off
    gogo:output-port-reverse
  ]
end 

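;; observer procedure
;; Turns the real robot (or the fauxgo) right for one time-step; each possible state turns by its
;; own estimate of turn-speed, plus a little noise. robots-lt is the mirror image.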
to robots-rt
  ask robots [
    fd random-normal 0 0.01
    rt random-normal (ifelse-value learn-turn-speed? [turn-speed] [init-turn-speed]) vary-turn
  ]
  gogo-rt
end 

to gogo-rt
  ifelse test? [
    ask fauxgos [rt turn-speed]
  ] [
    gogo:talk-to-output-ports ["a" "b"]
    gogo:set-output-port-power 7
    gogo:output-port-on
    gogo:talk-to-output-ports ["a"]
    gogo:output-port-reverse
    wait time-step
    gogo:talk-to-output-ports ["b"]
    gogo:output-port-off
    gogo:talk-to-output-ports ["a"]
    gogo:output-port-off
    gogo:output-port-reverse
  ]
end 

to robots-lt
  ask robots [
    fd random-normal 0 0.01
    lt random-normal (ifelse-value learn-turn-speed? [turn-speed] [init-turn-speed]) vary-turn
  ]
  gogo-lt
end 

to gogo-lt
  ifelse test? [
    ask fauxgos [lt turn-speed]
  ][
    gogo:talk-to-output-ports ["a" "b"]
    gogo:set-output-port-power 7
    gogo:output-port-on
    gogo:talk-to-output-ports ["b"]
    gogo:output-port-reverse
    wait time-step
    gogo:talk-to-output-ports ["a"]
    gogo:output-port-off
    gogo:talk-to-output-ports ["b"]
    gogo:output-port-off
    gogo:output-port-reverse
  ]
end 

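;; patch procedure
;; Brighter red means the robot is more confident the patch is solid.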
to patch-recolor
  set pcolor scale-color red solidity 0 max-confidence
end 

to robot-recolor
  set color scale-color color confidence 0 max-confidence
end 
