It needs to be able to ‘read’ a page where people color in preprinted empty circles and return a text file with a grid representing the user’s selections. The picture can be taken with any device: a flatbed scanner, a webcam, or a mobile phone camera.
It turns out it was not as easy to build as it looks. The lighting and the environment in particular were quite a challenge. With the knowledge I gained from building the augmented reality library for Android, I could reuse some of the same techniques in this program.
A couple of things needed to be done. I wanted to avoid any rotation or deformation of the picture, to preserve as much image quality as possible. That meant finding the hand-colored dots by manipulating the picture in other ways.
After some grayscaling, smoothing, blurring, sharpening, and the other usual image manipulations, I had a workable picture. The next function searched for blobs; I reused some functions from the augmented reality lib, and that worked very well. Next was finding the square arrangement of the found blobs so I could build the matrix for all the hand-colored dots. This looks very simple to a human, but for a computer it is quite a challenge.
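The preprocessing-then-blob-search idea can be sketched in pure Python (the real code uses the augmented reality lib’s routines; the function names, the 0–255 pixel range, and the threshold value here are my illustrative assumptions): threshold the grayscale image so dark marks become 1s, then label connected dark regions with a flood fill and return their centers.

```python
from collections import deque

def to_binary(gray, threshold=128):
    """Threshold a grayscale image (2-D list of 0-255 values): dark pixels -> 1."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]

def find_blobs(binary):
    """Label connected dark regions with a BFS flood fill.
    Returns one (row, col) centroid per blob, in scan order."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # Grow a new blob from this unvisited dark pixel.
                pixels, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # The blob's center is the mean of its pixel coordinates.
                blobs.append((sum(p[0] for p in pixels) / len(pixels),
                              sum(p[1] for p in pixels) / len(pixels)))
    return blobs
```

For example, a page with two filled dots yields two centroids, one per dot, regardless of the dots’ exact shapes.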
This could be solved using a point-cloud technique: it searches for the corners within the ‘cloud’ of blob centers. The last step was aligning the hand-colored dots to the found matrix.
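A minimal sketch of those two steps, under a simplifying assumption of my own (the page is only roughly axis-aligned, not arbitrarily rotated): the four corners of a point cloud fall out of extremizing r+c and r−c over the points, and each dot then snaps to a grid cell by its fractional position between the corners.

```python
def find_corners(points):
    """Corners of a roughly axis-aligned cloud of (row, col) points:
    top-left minimizes r+c, bottom-right maximizes r+c,
    top-right minimizes r-c, bottom-left maximizes r-c."""
    tl = min(points, key=lambda p: p[0] + p[1])
    br = max(points, key=lambda p: p[0] + p[1])
    tr = min(points, key=lambda p: p[0] - p[1])
    bl = max(points, key=lambda p: p[0] - p[1])
    return tl, tr, bl, br

def align_to_grid(points, rows, cols):
    """Snap each point to the nearest cell of a rows x cols matrix
    spanned by the cloud's corners. Returns the 0/1 selection grid."""
    tl, tr, bl, br = find_corners(points)
    grid = [[0] * cols for _ in range(rows)]
    for r, c in points:
        u = (c - tl[1]) / max(tr[1] - tl[1], 1e-9)  # horizontal fraction 0..1
        v = (r - tl[0]) / max(bl[0] - tl[0], 1e-9)  # vertical fraction 0..1
        grid[round(v * (rows - 1))][round(u * (cols - 1))] = 1
    return grid
```

Feeding the blob centroids into `align_to_grid` produces the kind of grid that goes into the output text file; a real implementation would also correct for perspective rather than assume an axis-aligned page.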
We are very satisfied with our preliminary results. It works very well under different light intensities, and even on wrinkled or coffee-stained pages. Here is a little video that shows our results:
This is all done with dirty experimental code, so my next task is rewriting it in a proper framework and optimizing it for speed.
Visit us at http://www.one-two.com to see more of our innovative projects.
Until next time!