And I mean this in a very friendly way, referring to one of their latest campaigns 🙂
I was doing some early spring cleaning on the dusty attic of this blog and came across some old projects I did.
Some of you long-time followers of my blog may remember I wrote several posts programming in Xojo (previously RealStudio, or even RealBasic before that). When I first started this blog, I was a huge fan of RB. The familiar VB6 syntax was what attracted me to it in the first place. I had a VB6 background, and with Microsoft abandoning it, RB was a nice alternative. I really didn’t care much for it being able to cross compile back then, as Windows was my thing. But it was nice that it could.
Back then, many years ago, I used this place to write articles mainly on the Canvas (an 11 episode series!) and vintage games.
But also some frameworks like ABXVision, which had OpenCV-like capabilities (Augmented Reality in its early stages!), and a full-blown physics engine, ABPE.
The robotics series was a very fun project to do. I was doing a course on Artificial Intelligence at the time and I could use this new knowledge in the tutorials.
But because of Xojo’s decision a couple of years ago to start using a new syntax framework, most of these projects won’t work anymore without a major overhaul. So I feel it is time to let them go. I recently noticed Xojo had removed this blog from their resource list too, so they must’ve had the same feeling. Honestly, no hard feelings about that! I’d do the same if a blog no longer wrote anything about me.
But what is there currently to write about Xojo? The Web part hasn’t changed in many years and still looks like it is 1995. iOS still feels only partly finished and is missing too many out-of-the-box features to be useful (will the upcoming Android support suffer from the same problem?). Should I write about the many bugs and workarounds one has to deal with?
It must be said that many of these remarks have to do with Xojo being a small team, and some things (like 64-bit or Apple’s decisions) have been forced upon them. Luckily they have a small but enthusiastic community which is willing to make up for the many shortcomings by writing frameworks like Aloe or iOSKit.
So, time to put those fond memories into a box and leave memory lane. Back to the real harsh world ;-).
I will leave the RB/Xojo projects on this blog until the end of the month (March). Maybe, one day, I will find the time to revamp them in Xojo or put them all up on GitHub and they will reappear.
Until then, this blog will mainly report on B4X and my own framework ABMaterial.
I Wish I Knew How To… Program the Canvas Control with Xojo Desktop is the latest book by Eugene Dakin in his excellent I Wish I Knew How To… series.
If you ever wondered how stuff is done with the canvas control in Xojo, this is the book you need to have on your virtual shelf. In his well-known swift (no pun intended) style, Eugene has written the reference manual for you. Alwaysbusy’s Corner made some humble contributions to the more advanced topics.
The novice Xojo user can get started quickly and learn the basics of graphics. Step by step, you learn more as you move through the more advanced topics. This 400-page volume covers a lot of interesting chapters and includes many useful examples with source code:
Topics included in the book:
Building basic controls
and there are two games with step-by-step code explanations to help you build your own.
So head over to Eugene’s Personal Website and get your copy. Also check out his other books in the series on topics like SQLite, XML, PostgreSQL, Office etc…
DISCONTINUED: Interested parties in purchasing the source code can contact me via email.
This is the first article on the use of the new ABXVision framework. I’ve decided to start with the most fundamental class: ABXVImage.
ABXVImage is the core image object that will be used in every project you create with ABXVision. It’s a new ‘picture’ object in Xojo that is especially written to optimize the algorithms you can use (Color filters, Edge detectors, Blurs, Corner detectors, Shape Detectors, etc…).
To make it clear if we are working with a Xojo picture or an ABXVImage, I’ve split up the terms:
Whenever I use picture, I mean a Xojo picture. Whenever I use Image, I mean an ABXVImage.
So the first thing we need to do is convert a picture to an image and back:
' create an ABXVImage from a Xojo Picture
dim img as new ABXVImage
img.SetPicture(MyPicture) ' MyPicture is the source Xojo picture
Img contains the data of the picture, ready to be processed. We can now make use of the filters. Some basic filters (like grayscaling, a thing you’ll be doing all the time) are also built into the ABXVImage class.
For example, if we want to grayscale the image for further processing, we can use the following code:
You’ll notice the 8Bit part in the method name. A lot of the methods in ABXVision (like binarization) use this 8-bit pointer to perform their work. As a grayscaled image has the same value in its Red, Green and Blue channels, there is no need to run the whole filter over all 3 channels; keeping track of one channel will do.
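To make the idea behind that 8-bit representation concrete, here is a minimal sketch. It is written in Python purely for illustration (it is not the ABXVision code, and the function names are my own): after grayscaling, one byte per pixel carries everything the later filters need.

```python
def grayscale_to_8bit(pixels):
    """Reduce a list of (r, g, b) tuples to one luminance byte per pixel.

    Uses a common integer luma approximation; after this step,
    R == G == B, so a single channel is enough for further filters.
    """
    return [(299 * r + 587 * g + 114 * b) // 1000 for (r, g, b) in pixels]

def gray_8bit_to_32bit(gray):
    """Expand the single channel back to (r, g, b) triples."""
    return [(v, v, v) for v in gray]

# a tiny 2-pixel "image": one white pixel, one black pixel
img = [(255, 255, 255), (0, 0, 0)]
gray = grayscale_to_8bit(img)       # [255, 0]
back = gray_8bit_to_32bit(gray)     # [(255, 255, 255), (0, 0, 0)]
```

Any filter that only needs intensity can now loop over one list of bytes instead of three channels, which is the whole point of the 8Bit variants.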
Once this function is performed, the ABXVImage will have its BitDepth property set to ABXVBITDEPTH.BIT8. Now, all other methods can check this property and see if it is ok to perform the action.
If, for some reason, we want it back as 32-bit (24-bit in many cases where no alpha is used), we can use the following code:
This will also set the BitDepth property of the image to ABXVBITDEPTH.BIT32.
Now, after our hard work, we want to have a Xojo picture with the result. This can be done with the GetPicture() method:
mBuffer = img.GetPicture()
Beware of overusing the GetPicture() method! If you have to run a lot of filters one after another and have no need to see every single step, there is no need to call GetPicture() in between. GetPicture() is mostly called after everything is done, e.g. when the image is analyzed, a glyph is found and the augmented part is drawn on the image.
1. GetPicture() will do a GrayScaled8BitTo32Bit() automatically if it was an 8-bit image.
2. To be complete: most methods can handle 32-bit pictures (R, G, B + Alpha/Mask), but it’s not advised to use this kind of image. Those methods will also perform their action on the alpha channel, and this takes time. The SetPicture() method will set the image to ABXVMASKTYPE.MASK_NONE, so the Alpha channel is ignored. If for some reason you want your Alpha channel, use the Mask parameter of the SetPicture() method. You can use ABXVMASKTYPE.MASK_OLD (the old RealBasic format, where the mask is a complete picture object) or ABXVMASKTYPE.MASK_NEW (the new Xojo picture format, with an extra real Alpha channel; this is experimental and not fully tested).
3. A lot of optimization is done using #Pragmas in the framework. This means it will run a lot faster once compiled, but it will also do fewer error checks. Always make sure you follow the rules of the method’s syntax, e.g. if the method asks for an array with 4 points, don’t give it one with 3 points. Your program will crash!
There are many filters available in ABXVision. I’ve tried to split them up so they make some sense. (Click the titles to jump to the help)

Two-source filters: Provides a number of filters which take two input images to produce the result image.
ABXVAdaptiveBinarization: Provides several adaptive binarization filters, which aim to find the binarization threshold automatically and then apply it to the source image.
ABXVBinarization: Provides different binarization filters, which may be used both in image processing and in some computer vision tasks.
ABXVColorFilters: Provides a number of image processing filters which filter pixels depending on their color values. These filters may be used to keep pixels whose color falls inside or outside a specified range, and fill the rest of the pixels with a specified color.
ABXVConvolution: Provides convolution filters and a set of derived filters, which allow you to perform image convolution with common kernels.
ABXVEdgeDetector: Provides a number of edge detection filters, which may suit different tasks with different performance.
ABXVMorphology: Provides a set of filters from mathematical morphology. All of the filters may be applied either with the default structuring element or with a custom structuring element.
ABXVTransform: Provides transformation methods, which allow you to perform image transformations.
In the attached demo project, some of those filters are used to demonstrate the different techniques. You’ll notice most of the filters have multiple calling methods. They are mostly split up as follows:
1. call(img1). The action is performed on the image itself.
2. img2 = call(img1). The action is performed on img1 and returned as img2, without altering img1.
3. call(img1, rect). The action is performed on the image itself, but only on the part described by the rectangle.
4. img2 = call(img1, rect). The action is performed only on the part described by the rectangle, and only this part is returned as img2, without altering img1.
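The four calling patterns can be illustrated with a tiny stand-in filter. This is a Python sketch of the pattern only; the `invert` filter and the names below are hypothetical and not part of the ABXVision API:

```python
def invert(img, rect=None):
    """In-place variant: invert a grayscale image (or only the rect region).

    img is a list of rows of 0-255 values; rect is (x0, y0, x1, y1).
    """
    x0, y0, x1, y1 = rect if rect else (0, 0, len(img[0]), len(img))
    for y in range(y0, y1):
        for x in range(x0, x1):
            img[y][x] = 255 - img[y][x]

def invert_copy(img, rect=None):
    """Returning variant: leave img untouched, return the processed (sub)image."""
    x0, y0, x1, y1 = rect if rect else (0, 0, len(img[0]), len(img))
    out = [row[x0:x1] for row in img[y0:y1]]  # fresh rows, so img is untouched
    invert(out)
    return out

img1 = [[0, 50], [100, 255]]
img2 = invert_copy(img1)   # pattern 2: img1 is not altered
invert(img1)               # pattern 1: img1 itself is now inverted
```

The rect variants (patterns 3 and 4) just pass a rectangle to restrict the region that gets processed or returned.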
I suggest you read up on the help, play with the ABXVImage object and familiarize yourself with it. The demo project contains some filters, but you can try out others.
This concludes the introduction. I felt it was needed to talk about ABXVImage first before going into other subjects, as this is a core object. The source code and framework can be found on the ABXVision page (see the top of this page).
DISCONTINUED: Interested parties in purchasing the source code can contact me via email.
This is the first public release of ABXVision! I’ve spent a lot of time setting it up. Creating the documentation for the framework also took a fair amount of time, but as this project will grow in the future, it’s best I start documenting it from the beginning.
To centralize everything about ABXVision, I’ve created a new menu item on top of this page. Here you’ll be able to find the latest version, the help and the projects I’ll create with the ABXVision framework in the upcoming articles.
Halfway through writing the documentation, I realized this would take longer than expected, so I stopped and built a tool to help me maintain the documentation: ABXDocumentor. I have a very good feeling about this tool and will probably update it and release it on this blog.
In the upcoming weeks I will try to write some demos using the framework. The first one will cover some basic functionality like ABXVImage, as this object is really important. Later we’ll move on to Augmented Reality, motion detection, maybe something on playing card recognition. Who knows where it takes us.
Let me finish with this note: this is version 1.0. Not all functionality was tested, so I can use your feedback to update it. I hope you join me on this ongoing journey and show us some projects you created with the framework.
Let’s teach Robby to react to an obstacle on its way to the target!
Just like your GPS has to recalculate the route to your destination if a road is blocked, Robby must adapt and recalculate a new path to his target if a door is closed in front of him. You can close or open a doorway by right-clicking on the grey parts of the map.
When Robby starts a walk, his belief state is that all doors are open. If he encounters a closed door, he changes his belief state and recalculates an alternative route. Once the robot arrives at his destination, the belief state is reset:
Robby wants to go from point A to point B. The robot calculates its route and starts the walk (Dark blue path). Suddenly it finds a closed door in its path. Robby recalculates an alternative route to B and continues its walk (Light blue path). Pretty nifty hè! 🙂
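The belief-state loop can be sketched in a few lines. This is a Python illustration of the idea only, not Robby’s actual code: it uses a plain BFS instead of the A* search Robby uses, and all names are mine. The robot plans as if every door is open, and only updates its belief (and replans) when it actually bumps into a closed one.

```python
from collections import deque

def bfs_path(grid, start, goal, believed_closed):
    """Shortest path on a grid of strings, avoiding walls '#' and doors believed closed."""
    queue, came_from = deque([start]), {start: None}
    while queue:
        cur = queue.popleft()
        if cur == goal:                      # reconstruct the route
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] != '#'
                    and nxt not in believed_closed and nxt not in came_from):
                came_from[nxt] = cur
                queue.append(nxt)
    return None

def walk(grid, start, goal, actually_closed):
    """Belief state: assume every door open; replan whenever a closed one is met."""
    believed_closed, pos, steps = set(), start, [start]
    while pos != goal:
        path = bfs_path(grid, pos, goal, believed_closed)
        if path is None:
            return None                      # no route left in the belief state
        nxt = path[1]
        if nxt in actually_closed:           # the 'sensor' reports a closed door
            believed_closed.add(nxt)         # update the belief, then replan
        else:
            pos = nxt
            steps.append(pos)
    return steps
```

When the door at the top is closed, the walk automatically detours along the left side of the map, exactly like the dark blue / light blue paths in the picture.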
Let’s look at it at work on our familiar floor plan:
As you can see I’ve also added the possibility to set Robby at any start position within the map and you can choose between a number of sensors.
Fewer sensors are faster but also less accurate, and the algorithm may fail!
I’ve also been asked to extend the pathfinding example for people who do not have RealBasic, so they can use their own maps. To use your own maps with the ABExplorer.exe file, you can specify a map at the command line:
Syntax: ABExplorer.exe /M=mapfile.png where mapfile.png is a picture file.
The map file specs:
1. must be a picture of 500×500 pixels.
2. may only contain 3 colors:
black RGB(0,0,0) for a wall
white RGB(255,255,255) for empty space
grey RGB(192,192,192) for a doorway that can be opened and closed
3. doorways that can be closed must be either horizontal or vertical, not diagonal.
4. make sure the map is ‘closed’: there is no way Robby can escape from the map.
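The specs above are easy to check before loading a map. Here is a hypothetical validator, sketched in Python (it is not part of ABExplorer; the function name is mine, and I check rule 4 with a simple sufficient condition: the whole border must be wall):

```python
# the three allowed colors from the map specs, as (r, g, b) tuples
WALL, EMPTY, DOOR = (0, 0, 0), (255, 255, 255), (192, 192, 192)

def validate_map(pixels):
    """Check a decoded map against the specs.

    pixels is a row-major grid of (r, g, b) tuples, as produced by
    any PNG reader. Returns "ok" or a description of the first problem.
    """
    # rule 1: exactly 500 x 500 pixels
    if len(pixels) != 500 or any(len(row) != 500 for row in pixels):
        return "map must be 500x500 pixels"
    # rule 2: only black, white and grey
    allowed = {WALL, EMPTY, DOOR}
    if any(p not in allowed for row in pixels for p in row):
        return "map may only contain black, white and grey pixels"
    # rule 4 (simplified): a solid wall border means Robby cannot escape
    border = (pixels[0] + pixels[-1]
              + [row[0] for row in pixels] + [row[-1] for row in pixels])
    if any(p != WALL for p in border):
        return "map must be closed: the border must be all wall"
    return "ok"
```

Rule 3 (doorways only horizontal or vertical) needs a slightly smarter scan of the grey regions and is left out of this sketch.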
In the next part of this series we’re going to teach Robby to find his location within the map. As for now, when we put Robby somewhere on the map, we tell him in which room and at what x,y coordinates he is. But what if he has to find this information for himself?
In this article we’re going to teach Robby efficient pathfinding. The algorithm we’re going to use is spatial A*. I used Christoph Husse’s C# implementation to get me started. His original code can be found here.
Here you can see the algorithm at work on a labyrinth:
An explanation of the algorithm itself can be found here and for beginners here is an excellent tutorial.
In the structure of our project we’ve added a new folder: Algorithms. This will be the place where we will put all the algorithms we’re going to use.

ABSpatialAStar: The main class that can do a search.
ABOpenCloseMap: The grid that contains our known map (made of ABPathNodes).
ABPathNode: A single node that will be checked to see if Robby can walk there or if it is a wall.
ABPriorityQueue: A good A* implementation does not use a standard Array for the open nodes. If a standard Array is used, the algorithm will spend a huge amount of time searching for nodes in that list; instead, a priority queue should be used. The one I used is a modified version of the one BenDi wrote. The original can be found here.
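The point about the open list can be made concrete. Here is a minimal grid A* sketched in Python (not the ABSpatialAStar code itself), where the open list is a binary heap so the cheapest node is popped in O(log n) instead of being searched for in a plain array:

```python
import heapq

def astar(grid, start, goal):
    """A* on a grid of 0 (free) / 1 (wall) cells, 4-connected.

    Nodes are (x, y) tuples; grid is indexed as grid[y][x].
    Returns the path as a list of nodes, or None if unreachable.
    """
    def h(p):  # Manhattan distance heuristic (admissible on a 4-connected grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]       # (f = g + h, g, node)
    came_from, g = {start: None}, {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:                      # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                if nxt not in g or cost + 1 < g[nxt]:
                    g[nxt] = cost + 1
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (g[nxt] + h(nxt), g[nxt], nxt))
    return None
```

ABPriorityQueue plays the role of `heapq` here: without the heap, every "pick the cheapest open node" step becomes a linear scan, which is exactly the slowdown described above.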
I’ve added some ‘dirty’ tricks to speed up the process, e.g. blowing up the map for a faster search. It scales down the map and blows up all the walls so they appear a lot thicker. This makes it easier for the algorithm to find the shortest path.
By using a reasonable blowup factor, this has the side effect that Robby stays a little further away from the walls :-). The result of the blowup can be seen here:
The code for the BlowUpMap function:
Function BlowUpMap(inPic as Picture, OddFactor as Integer, ScaleFactor as Double) As Picture
  Dim tmpPic As Picture
  Dim w, h As Integer
  Dim x, y As Integer
  w = inPic.Width
  h = inPic.Height
  tmpPic = NewPicture(w * ScaleFactor, h * ScaleFactor, 32)
  ' start from an all-white (empty) scaled map
  tmpPic.Graphics.ForeColor = &cFFFFFF
  tmpPic.Graphics.FillRect 0, 0, tmpPic.Width, tmpPic.Height
  Dim inRGB As RGBSurface
  Dim tmpRGB As RGBSurface
  inRGB = inPic.RGBSurface
  tmpRGB = tmpPic.RGBSurface
  Dim FactorDev2 As Integer = Floor(OddFactor / 2)
  Dim FactorDev2Plus1 As Integer = FactorDev2 + 1
  Dim a, b As Integer
  For x = FactorDev2Plus1 To w - FactorDev2Plus1
    For y = FactorDev2Plus1 To h - FactorDev2Plus1
      If inRGB.Pixel(x, y) = &c000000 Then
        ' a wall pixel: paint it as an OddFactor x OddFactor block,
        ' so walls come out thicker on the scaled-down map
        For a = x - FactorDev2 To x + FactorDev2
          For b = y - FactorDev2 To y + FactorDev2
            tmpRGB.Pixel(a * ScaleFactor, b * ScaleFactor) = inRGB.Pixel(x, y)
          Next
        Next
      End If
    Next
  Next
  Return tmpPic
End Function
Implemented on our house plan this is what happens when Robby knows how to find the shortest path to the point we click on (make sure there is a path to the point you click):
The next series of articles will give a brief introduction to Robotic Artificial Intelligence. We’ll build a simulation of a robot with sensors. At first it will have no intelligence at all, but in the upcoming weeks we’ll teach it some tricks like finding the shortest path within a maze or house, handling closed passageways and recalculating a new path to its destination. In another article we’ll teach the robot to find its location within a known map when it is dropped at an unknown location. Some exciting stuff!
But we’ll have to start at the beginning: building our robot Robby 🙂
Robby will have a couple of depth sensors (in reality these can be e.g. Kinect Depth cameras). They are placed on Robby so the robot can ‘feel’ everywhere around itself. They also have a limited range so Robby can only ‘see’ a limited distance. Robby is not a superhero, he cannot see through walls.
Robby changes to a red color when he is close to a wall, and colors green when he is at a safe distance.
Example of Robby: You can see the robot, its sensors and a circle showing the range of the sensors.
In our first program, Robby is just like a remote-controlled toy. We can go forward, turn left and turn right (with the cursor keys).
A framework will be built so we can later expand this program and even create other types of robots, sensors, etc…
Here is the layout of the framework:
ABSpace: This will hold our entry point to the world of robotics. It describes the real world: what the space Robby is walking in looks like, where the robots are, etc…
Sensors: A simulation of sensors that can be attached to a robot. They can do measurements that the Robot A.I. can use.
Robots: A simulation of different types of robots.
Global: Some classes we’ll need like Points, Maps, etc…
ABSpace is built like a movie with several frames. The steps we take are:
1. Initialize our space (give it a map, initialize our robot)
2. Calculate a step (let all the sensors do measurements, calculate the new position of the robot, let the Robot A.I. do its stuff)
3. Draw it so we’ll see the progress of our robots and what they have measured
4. Check if we’ve given some input to our robot (move, turn, etc)
5. Repeat from step 2
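The frame loop above can be sketched in a few lines. This is a Python toy version for illustration only; the class and method names are mine, not the actual framework classes (drawing, step 3, is omitted):

```python
class Robot:
    """A toy stand-in for Robby: a position plus simple 'sensors'."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.readings = []

    def sense(self, space):
        # step 2: let the sensors measure (here: just probe the 4 neighbours)
        self.readings = [space.is_wall(self.x + dx, self.y + dy)
                         for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

class Space:
    """The 'real world': a map plus the robot living in it."""
    def __init__(self, grid, robot):
        self.grid, self.robot = grid, robot   # step 1: initialize our space

    def is_wall(self, x, y):
        # anything outside the map counts as wall, so the robot cannot escape
        return (not (0 <= y < len(self.grid) and 0 <= x < len(self.grid[0]))
                or self.grid[y][x] == 1)

    def step(self, command=None):
        # steps 2 and 4 of the loop: sense, then apply the user's input
        self.robot.sense(self)
        moves = {"forward": (1, 0), "back": (-1, 0)}
        if command in moves:
            dx, dy = moves[command]
            if not self.is_wall(self.robot.x + dx, self.robot.y + dy):
                self.robot.x += dx
                self.robot.y += dy

space = Space([[0, 0, 1]], Robot(0, 0))
space.step("forward")   # Robby moves to x=1
space.step("forward")   # blocked by the wall at x=2, so he stays put
```

Calling `step()` once per frame, forever, gives exactly the movie-like loop described above: sense, move, (draw), repeat from step 2.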
Here is a video to show what Robby Version 1 can do: