A couple of pieces of great news.

This is somewhat outdated news, but I'm happy to announce that our proposal for a track on Retrieval of Objects Captured with Low-Cost Depth-Sensing Cameras has been accepted into the Shape Retrieval Contest (SHREC) 2013!

I will soon publish a link to the webpage with further information about the event and the track.

Also, after we completed the dataset capture two days ago, along with Daniel Magarreiro and Marcus Gomes, I have just finished cropping and editing the meshes into their final form. I've been using both Netfabb and Remesh for this task.

Now I'm only left with centering the models and batch-testing them to make sure nothing is wrong. I also still need to export the meshes to the .ply and .off file formats.
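
Since there are almost two hundred meshes, both remaining steps are easy to script in bulk. Purely as an illustrative sketch (assuming simple indexed triangle meshes; this is not our actual pipeline), centering amounts to translating the vertices by the negated centroid, and PLY and OFF are simple enough ASCII formats to write by hand:

```python
# Illustrative sketch only: recenter a triangle mesh and export it
# to ASCII PLY and OFF. Meshes are plain (vertices, faces) lists.

def center(vertices):
    # Translate so the vertex centroid sits at the origin.
    n = float(len(vertices))
    cx = sum(v[0] for v in vertices) / n
    cy = sum(v[1] for v in vertices) / n
    cz = sum(v[2] for v in vertices) / n
    return [(v[0] - cx, v[1] - cy, v[2] - cz) for v in vertices]

def write_ply(path, vertices, faces):
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write("element vertex %d\n" % len(vertices))
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("element face %d\n" % len(faces))
        f.write("property list uchar int vertex_indices\nend_header\n")
        for x, y, z in vertices:
            f.write("%f %f %f\n" % (x, y, z))
        for face in faces:
            f.write("%d %s\n" % (len(face), " ".join(map(str, face))))

def write_off(path, vertices, faces):
    with open(path, "w") as f:
        f.write("OFF\n%d %d 0\n" % (len(vertices), len(faces)))
        for x, y, z in vertices:
            f.write("%f %f %f\n" % (x, y, z))
        for face in faces:
            f.write("%d %s\n" % (len(face), " ".join(map(str, face))))
```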

But now I can finally start the user tests. The final dataset will comprise 192 objects, after 32 were rejected due to material incompatibility or simply poor model reconstruction.

Offtopic: The Leap

Not exactly new, but it might be interesting to some.

The collection

As I mentioned previously, the theme for the object collection we'll be reconstructing digitally is "household items". The decision was simple enough: since there was no benchmark of 3D objects captured with a low-cost depth-sensing camera, the objects I would use should be low-cost everyday items.

While some tracks submitted to SHREC target small-perturbation contexts, and thus feature captures of objects in different poses with slight changes to their shape, the goal with BeKi is quite different. We intend to define a human-generated ground truth, whose details I'll go into later. For now, suffice it to say that if we want to test our subjects against the full set of objects and return valid results for the queries, every capture must correspond to an individual object and all of them must be available at the same time. For the collection size, we're targeting around 200 individual household items.

We asked friends and family to lend us some items and, as of now, we have collected, indexed, and cataloged 197 objects. We'll try to expand the collection a little beyond the 200 mark, since some items might prove troublesome to capture, especially glass or chrome ones. For the cataloging process, I counted on the collaboration of a fellow student, Marcus Gomes, who helped collect and catalog most of the collection in a backend I developed over the course of a week in Qt and SQLite.

Marcus during the cataloging stage

Interestingly, most of the collected items are old toys which, perhaps unsurprisingly, people are quite willing to share for the purpose we specified.

Main screen of the backend application, with a progress bar
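
As for what the backend pictured above actually stores, it essentially boils down to one catalog table for the items plus their ID photos. Purely as a hypothetical illustration (the actual Qt/SQLite schema and field names differ), it is something along these lines:

```python
# Hypothetical sketch of the kind of catalog table such a backend might
# keep; the real schema and field names in the Qt application differ.
import sqlite3

conn = sqlite3.connect("collection.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS items (
        id         INTEGER PRIMARY KEY AUTOINCREMENT,
        name       TEXT NOT NULL,
        category   TEXT,   -- e.g. 'toy', 'kitchenware'
        material   TEXT,   -- flags glass/chrome items that may fail to scan
        owner      TEXT,   -- who lent the item, so it can be returned
        photo_path TEXT    -- ID photo taken during cataloging
    )
""")
conn.execute(
    "INSERT INTO items (name, category, material, owner, photo_path) "
    "VALUES (?, ?, ?, ?, ?)",
    ("toy car", "toy", "plastic", "a friend", "photos/0001.jpg"),
)
conn.commit()
conn.close()
```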

Another important step in the cataloging process was taking photos to help identify every item in later stages. For that, we built a small setup and took the pictures with a Nikon D300, as can be seen in the following photo:

Photobooth!


Example photo


Stay tuned for scenes from the next episodes!

Offtopic: Nano Quadrotors

And now I have this inexplicable urge to play Space Invaders…

Development over the last few weeks

I should take the time to write here more often. I'll present a summarized view of what I've been doing on this project lately, and then go into a bit of detail about a few problems I met along the way.

After failing to produce any kind of meaningful results with KinFu, I decided to drop that approach. As can be seen in the video I posted a few weeks ago, the capture was still a little sluggish, and the range scan included more information than I hoped for. KinFu is oriented toward large-scale reconstruction, so it didn't make much sense in my scope. Plus, I was losing a considerable amount of time and needed to catch up with the plan.

I then tried different approaches. The first was Skanect, which, although it features a more polished interface, is even more focused on large-area reconstruction. At the opposite end of the spectrum there is Autodesk's 123D Catch, which, although targeted at capturing small objects, seems to struggle often with geometric reconstruction.

Then, through "3D Puppetry: A Kinect-based Interface for 3D Animation", I was directed to ReconstructMe, which offers something of an in-between solution. It also supports large-scale room reconstruction by allowing captures to be merged, but the key difference is that it defines a parameterizable cubic capture volume. This immediately caught my attention, since it solved one of my problems: the background removal I was secretly hoping not to have to deal with directly. ReconstructMe also offers surface texturing and other nice features; I recommend looking into them if you're interested, but they need not be mentioned in the context of my work. I posted an example of ReconstructMe's use here.

Now, I've long feared that the Kinect's low sensitivity would make feature point estimation a tough nut to crack, especially since I won't be capturing range scans with plenty of detail. My target is small household items (you heard it first here!) which, by definition, are small. My fears were soon confirmed: when I cut the volume down to near an object's size, ReconstructMe was unable to maintain the camera track at all. My solution is to isolate the objects to capture inside a larger, noisy capture volume of 80 cm by 80 cm, and later cut each target object from the mesh individually, as sketched below. The isolation within the scene is achieved by exploiting one of the Kinect's weaknesses: its inability to capture chrome or transparent materials. An example can be seen here, and more are to be posted in the near future. Later I will write about the benchmark definition and the cataloging process.
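
Cutting an object out of the merged capture then amounts to an axis-aligned bounding-box crop of the mesh. Here's a rough illustrative sketch (assuming an indexed triangle mesh; this is not the actual tool used for the dataset): keep the faces whose vertices all fall inside the box, then reindex the surviving vertices.

```python
# Illustrative sketch: crop a triangle mesh to an axis-aligned bounding box.
# Keeps only the faces whose three vertices fall inside the box, then
# reindexes the surviving vertices.

def crop_mesh(vertices, faces, box_min, box_max):
    def inside(v):
        return all(lo <= c <= hi for c, lo, hi in zip(v, box_min, box_max))

    keep = [i for i, v in enumerate(vertices) if inside(v)]
    remap = {old: new for new, old in enumerate(keep)}

    new_vertices = [vertices[i] for i in keep]
    new_faces = [
        tuple(remap[i] for i in face)
        for face in faces
        if all(i in remap for i in face)
    ]
    return new_vertices, new_faces

# e.g. cut a 20 cm region around the origin out of the 80 cm capture volume:
# obj_verts, obj_faces = crop_mesh(verts, faces, (-0.1, -0.1, 0.0), (0.1, 0.1, 0.2))
```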

Temporary setup for a capture test