RealBasic: Robot A.I. – Part 4 – Robot Localization


In this final introductory chapter on robotics we are going to do something very exciting: localization! In all our previous endeavours Robby not only knew the exact map of the house, it also knew its exact x and y position on this map.

But now we are going to take away this knowledge. When we put Robby on the map, the robot does not know whether it is in room A or room B. We are going to have to find a way for Robby to determine its own position with a calculated guess.

Luckily, some great algorithms have been devised to do this very quickly. The one we are going to use is my interpretation of the Monte Carlo Localization (MCL) technique. Sounds very James Bond, doesn’t it 🙂

Before we get into how it works, let’s have a look at how Robby can find its position on the map of our house:

So how does MCL work? Here is a brief description of the principle:

The Monte Carlo approach to localization is based on a collection of samples (which are also known as particles). Each sample consists of a possible location the robot may currently occupy, along with a value which represents the probability that the robot is currently at that location. Strictly speaking these are not probabilities, so they are often referred to as importance weights, and we will use that terminology here.

How many samples should be used? The more you have, the more quickly the program will converge to a correct solution, but at the cost of more computational time and hence slower robot movement.

When the program begins the robot does not know where it is, so the current samples (really the locations the samples contain) are evenly distributed over the range of possible locations, and the importance weights are all the same. Over time the samples near the actual current position should become more likely, and those further away less likely. If we created a line graph which plotted the samples using location versus importance weight, the graph should spike over the actual location. Think of this curve as representing the robot’s belief state about its location. In the beginning it is a flat line (since the robot has no idea where it is), but over time it should show humps over likely locations, with (hopefully) a large spike over the actual location.

The general idea is as follows:

a. Initialize the set of samples (the current samples) so that their locations are evenly distributed and their importance weights are equal.
b. Repeat until done with the current set of samples:
—-1. Move the robot a fixed distance and then take a sensor reading.
—-2. Update the location of each of the samples (using the movement model).
—-3. Assign the importance weights of each sample to the likelihood of that sensor reading given that new location (using the sensor model).
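Steps 1–3 can be sketched in a few lines of Python. This is a minimal illustration, not the RealBasic code from the project: the 1D corridor, the door positions, and the likelihood numbers are all made up for the example.

```python
import random

# Hypothetical 1D corridor of length 100 with doors at known positions.
DOORS = [20.0, 45.0, 80.0]

def sense_likelihood(x, saw_door, door_width=3.0):
    """Sensor model: how likely is this reading, given position x?
    The 0.9/0.1 values are illustrative; real sensors need calibration."""
    near_door = any(abs(x - d) < door_width for d in DOORS)
    if saw_door:
        return 0.9 if near_door else 0.1   # doors are detected reliably, but not perfectly
    return 0.1 if near_door else 0.9

def mcl_step(particles, move, saw_door, move_noise=1.0):
    """One iteration of steps 1-3: move every particle, then reweight it."""
    new_particles, new_weights = [], []
    for x in particles:
        x = x + move + random.gauss(0, move_noise)   # movement model with random error
        new_particles.append(x)
        new_weights.append(sense_likelihood(x, saw_door))
    return new_particles, new_weights

# Step a: spread the particles evenly, all weights equal.
particles = [random.uniform(0, 100) for _ in range(500)]
particles, weights = mcl_step(particles, move=5.0, saw_door=True)
```

After one step, particles that ended up near a door carry the high weight, because the (hypothetical) sensor just reported a door.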

Remember that the actual distance moved may not be the expected distance. So model that by adding (or subtracting) a random error factor. Similarly, if the sensor reading seems to indicate a door, remember that it might be wrong, and take that into consideration. These are called, respectively, the movement and sensor models. One could stop refining the algorithm here – over time the importance weights would tend towards a solution.

However, the samples with low importance weights add little to figuring out the robot’s current location. That fact leads us to the answer of the next question: “Where does the Monte Carlo part come in – doesn’t that name suggest games of chance?” Yes, it does. And this is where the big improvement comes in, making the algorithm converge to a solution more quickly. In the loop above, we now add the following steps:

—-4. Create a new collection of samples by sampling (with replacement) from the current set of samples based on their importance weights.
—-5. Let this new collection become the current set of samples.
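Step 4 is ordinary weighted sampling with replacement. A minimal Python sketch (the particle positions and weights below are invented for the example):

```python
import random

def resample(particles, weights):
    """Step 4: draw a new set of the same size, with replacement,
    so high-weight samples are likely to be picked (possibly several times)."""
    total = sum(weights)
    probs = [w / total for w in weights]
    # random.choices performs weighted sampling with replacement
    return random.choices(particles, weights=probs, k=len(particles))

particles = [10.0, 50.0, 90.0]
weights = [0.05, 0.9, 0.05]     # the particle at 50.0 matches the sensors best
new_particles = resample(particles, weights)
```

Run this a few times: the new set is almost always dominated by copies of 50.0, which is exactly the "survival of the fittest" effect described below.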

In step four, samples with a high importance weight are more likely to be chosen than those with a low one, and some samples may be chosen repeatedly. So we (usually) end up with a set of samples with a higher cumulative importance weight than those we had before. This is similar to the approach genetic algorithms take – survival of the fittest!

It is now possible that more than one sample may be at the same location, and clearly samples will cluster around likely locations. So where is the robot likely to be? At the location (or area) with the most samples.
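One simple way to read off "the location with the most samples" is to bucket the particles and take the densest bucket. A hedged Python sketch for the 1D case (the bin size and sample positions are arbitrary choices for illustration):

```python
from collections import Counter

def estimate_position(particles, bin_size=5.0):
    """The robot's best guess: the centre of the bin (area) holding the most particles."""
    bins = Counter(int(x // bin_size) for x in particles)
    best_bin, _ = bins.most_common(1)[0]
    return (best_bin + 0.5) * bin_size

# Four of five particles cluster near 50, so the estimate lands there too.
cluster = [48.0, 49.5, 49.0, 51.0, 12.0]
estimate = estimate_position(cluster)   # -> 47.5, centre of the densest 5-unit bin
```

In 2D you would bucket on (x, y) pairs instead, but the idea is the same: the answer is an area, not a single exact point.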

This is an extract taken from here

Read it a couple of times if you don’t understand it. You’ll see it is not that difficult and actually quite genius!

As we work with some kind of randomness, the algorithm may fail in some very rare cases. In my interpretation I added one extra step: reset. When it looks like the algorithm is going to fail, I restart the whole thing from step a.

In our algorithms folder you can find an extra map with the MCL localizer:

My version of this algorithm may not follow MCL to the letter, but I’ve found it works very well in RealBasic this way.
The first thing we do is spawn particles all around our map: initially our robot could be at any position:

Private Sub SpawnParticles()
  ' spread ParticleCount+1 particles over random positions on the map
  redim Particles(ParticleCount)
  dim i as Integer
  
  for i = 0 to ParticleCount
    Particles(i) = RandomizePosition
  next
  ' every particle starts with the same weight;
  ' CumulWeight holds the running total that PickParticle searches later
  dim Weight as Double = 1 / (ParticleCount+1)
  for i = 0 to ParticleCount
    Particles(i).Weight = Weight
    Particles(i).CumulWeight = (i+1)/(ParticleCount+1)
  next
End Sub

The rest of the algorithm can be found in the SetNewDelta function.

Initially, we observe our particles (in our case, we look at how far they are from a wall) and give each particle a new weight.

  if isInit then
    for i = 0 to ParticleCount
      Particle = Particles(i)
      
      ParticleObserve(Particle)
      Particle.Weight = Correlation()
      if Particle.Weight < minWeight then
        minWeight = Particle.Weight
      end if
      if Particle.Weight > maxWeight then
        maxWeight = Particle.Weight
      end if
      totalWeight = totalWeight + Particle.Weight
      NewParticles.Append Particle
    next
  else ...

On each subsequent loop we pick a particle, taking its weight into account, with ‘Particle = PickParticle()’. Then we move this particle by a certain random delta using the ‘RandomizeMove()’ function. We observe it again and give it a new weight. We spawn again if it looks like the algorithm is going to fail.

  ...
  else
    for i = 0 to ParticleCount
      Particle = PickParticle()
      RandomizeDelta = RandomizeMove(DeltaX, DeltaY)
      Particle.x = particle.x + RandomizeDelta.x
      Particle.y = particle.y + RandomizeDelta.y
      ClearCounter = 0
      while not IsClearSpace(Particle.x, Particle.y)
        Particle = PickParticle()
        Particle.x = particle.x + RandomizeDelta.x
        Particle.y = particle.y + RandomizeDelta.y
        ClearCounter = ClearCounter + 1
        if ClearCounter > ParticleCount then
          ' we are probably completely wrong, let's restart our measurements
          app.DoEvents
          SpawnParticles
          Return
        end if
      wend
      
      ParticleObserve(Particle)
      Particle.Weight = Correlation()
      if Particle.Weight < minWeight then
        minWeight = Particle.Weight
      end if
      if Particle.Weight > maxWeight then
        maxWeight = Particle.Weight
      end if
      totalWeight = totalWeight + Particle.Weight
      NewParticles.Append Particle
    next
  end if

An interesting function to look at is the PickParticle function. It picks a particle taking its weight into account, but still with a certain amount of randomness.

Private Function PickParticle() As ABPoint
  ' binary search over the cumulative weights: a random number in [0,1)
  ' lands in a particle's weight interval, so heavier particles get picked more often
  dim low, up as integer
  up = UBound(Particles)
  dim ran as Double = rand.Number
  dim PickedIndex as integer
  dim Particle as ABPoint
  
  while true
    PickedIndex = floor((low + up)/2)
    Particle = Particles(PickedIndex)
    if Particle.CumulWeight > ran then
      up = PickedIndex
    else
      low = PickedIndex
    end if
    if up = low then
      Return new ABPoint(Particle.x, Particle.y, 0)
    end if
    if up - low = 1 then
      Return new ABPoint(Particles(up).x, Particles(up).y, 0)
    end if
  wend
End Function
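The same cumulative-weight search can be written compactly in Python using the standard `bisect` module. This is a sketch of the idea, not a translation of the project code; the particles and weights below are invented, and the optional `r` parameter exists only to make the example easy to check:

```python
import bisect
import random

def pick_particle(particles, cumul_weights, r=None):
    """Weighted pick, in the spirit of PickParticle: binary-search the
    cumulative weights for a random number in [0, 1)."""
    if r is None:
        r = random.random()
    # first index whose cumulative weight exceeds r
    i = bisect.bisect_right(cumul_weights, r)
    return particles[min(i, len(particles) - 1)]

# Hypothetical particles: 'B' owns the interval (0.1, 0.6], half the total weight.
particles = ['A', 'B', 'C']
cumul_weights = [0.1, 0.6, 1.0]
picked = pick_particle(particles, cumul_weights)
```

Each particle "owns" a slice of the 0–1 line proportional to its weight, so a uniform random number falls on heavier particles more often.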

More details can be found in the source code. There is also a compiled version so you can test it out with your own maps. The same syntax can be used as in Part 3.

Let’s see it at work one more time on a different map, like our labyrinth! You’ll see it can find its likely position very fast!

You can download the source code and project here.

This concludes the robotics A.I. (for now). Some of it may have been heavy going for some of you, but don’t let that discourage you from playing around with it.

Next time we’ll do something much easier. A neat image effect that has been going around lately is the Tilt Shift effect. You may know it from apps like Instagram; it produces a miniature effect like this:

I’ll tackle Tilt Shift in our next article in both RealBasic and Basic4Android!

See you next time and happy coding!

Click here to donate if you like my work


About Alwaysbusy

My name is Alain Bailleul and I'm the Senior Software Architect/Engineer at One-Two. I like to experiment with new technologies, Computer Vision and A.I. My projects are programmed in B4X, Xojo, C#, Java, HTML, CSS and JavaScript. View all posts by Alwaysbusy
