Team 1: Touch sensing, Silhouettes, and “Polygons-of-Uncertainty”

Monday, June 18, 2012 - 9:00am - 9:20am
Izhak (Zachi) Baharav (Corning Incorporated)
Touch interfaces for small consumer devices are becoming ubiquitous and are now penetrating new areas such as large display walls, collaborative surfaces, and more. However, different sensing methods are called for in order to deal with the economics of a touch interface on a very large surface. One such method uses a small number of cameras and multiple light sources, as depicted in the example in Fig. 1 below. When an object is placed on the surface, it blocks the line of sight between some of the light sources and various cameras, thus creating silhouette images. Using the input from all cameras, one can try to reconstruct the shape and location of the object touching the screen.
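To make the sensing geometry concrete, the short Python sketch below simulates one possible setup; it is not the actual system, and the camera positions, the nine-light strip, the touch location, and the touch radius are all illustrative assumptions. It records which light-to-camera lines of sight a disk-shaped touch blocks, i.e., the binary silhouette each camera sees.

import numpy as np

# Illustrative toy setup (assumed, not the system in the abstract): two cameras
# in the bottom corners of a unit-square surface, nine light sources along the
# top edge, and one disk-shaped touch object.
cameras = np.array([[0.0, 0.0], [1.0, 0.0]])
lights = np.array([[x, 1.0] for x in np.linspace(0.0, 1.0, 9)])
touch_center = np.array([0.4, 0.6])   # assumed touch location
touch_radius = 0.05                   # assumed fingertip radius (surface units)

def point_segment_distance(p, a, b):
    """Distance from point p to the line segment a-b."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

# silhouette[c, l] is True when the touch blocks the line of sight
# between light l and camera c.
silhouette = np.array([
    [point_segment_distance(touch_center, light, cam) < touch_radius
     for light in lights]
    for cam in cameras
])
print(silhouette.astype(int))

Each row of the printed matrix is the one-dimensional occlusion pattern that one camera records across the nine lights; these patterns are the raw input to any reconstruction.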

Of course, one can use more cameras, and various light/camera arrangements, to achieve different performance trade-offs. Things get a little more complicated when we consider non-Euclidean surfaces (tracking on a ball?).

There is plenty of current research on Shape From Silhouettes (SFS), especially for reconstructing 3D shapes. Our case might seem simpler, as it is 2D only, but it has many practical requirements to address: a limited number of camera views, minimization of the number of light sources, and so on. Moreover, the specifications we have for the system include (for example) resolution, the minimum detectable object size, and how close two objects can be to each other and still be detected.
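As a hedged back-of-the-envelope example of how such specifications interact with the hardware, the snippet below estimates the minimum detectable object size from a camera's angular resolution; the pixel count, field of view, and distance are assumptions chosen only for illustration.

import math

n_pixels = 1000                   # assumed pixels across the camera's line sensor
fov = math.radians(90)            # assumed field of view
distance = 1.0                    # assumed camera-to-object distance (meters)

angular_resolution = fov / n_pixels              # radians per pixel
min_object_size = distance * angular_resolution  # small-angle approximation
print(f"minimum detectable object ~ {min_object_size * 1000:.1f} mm")

Under these assumed numbers the estimate comes out to roughly 1.6 mm; similar one-line estimates can be written for the two-object separation requirement.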

In this work we will build the tools to analyze, using analytic geometry, various combinations of surface shapes, cameras, light sources, and objects touching the surface (a minimal reconstruction sketch follows the list below).
We will then venture into related aspects, depending on the inclination and composition of the team:

  1. Optimization problem: Given a performance specification, what is the minimal number of cameras and light sources (with an associated cost function) needed to achieve it, and where should they be placed?

  2. Analytic aspects: Can we quantify average performance? Can we quantify the information content of the silhouettes?

  3. Non-Euclidean surfaces: What about flexible surfaces, or balls?

  4. Robustness: How robust is our solution to a malfunctioning light source or camera?
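As a starting point for the analysis tools mentioned above, here is a minimal Python sketch of the "polygon of uncertainty" for a single touch: each camera's occluded lights span an angular wedge, and any object consistent with the silhouettes must lie in the intersection of those wedges. The geometry and the hard-coded occlusion pattern (matching the simulation sketch earlier) are illustrative assumptions, and the grid-based intersection is just one simple way to compute the region.

import numpy as np

# Same assumed toy geometry as in the earlier simulation sketch.
cameras = np.array([[0.0, 0.0], [1.0, 0.0]])
lights = np.array([[x, 1.0] for x in np.linspace(0.0, 1.0, 9)])

# Occlusion pattern per camera (True = that light is blocked), e.g. the output
# of the simulation sketch above or a reading from the real sensors.
silhouette = np.array([
    [False, False, False, False, False, True, True, False, False],
    [True, False, False, False, False, False, False, False, False],
])

def bearing(src, dst):
    """Angle of the direction from src to dst."""
    d = np.asarray(dst) - np.asarray(src)
    return np.arctan2(d[1], d[0])

# Sample the surface on a grid and keep the points whose bearing from every
# camera falls inside that camera's occluded angular span (padded by half the
# light spacing, since the lights sample the edge discretely).
xs, ys = np.meshgrid(np.linspace(0, 1, 400), np.linspace(0, 1, 400))
feasible = np.ones_like(xs, dtype=bool)
for cam, blocked in zip(cameras, silhouette):
    if not blocked.any():
        continue  # a camera that saw nothing does not constrain the region
    light_angles = np.array([bearing(cam, light) for light in lights])
    pad = 0.5 * np.abs(np.diff(light_angles)).mean()
    lo = light_angles[blocked].min() - pad
    hi = light_angles[blocked].max() + pad
    grid_angles = np.arctan2(ys - cam[1], xs - cam[0])
    feasible &= (grid_angles >= lo) & (grid_angles <= hi)

print(f"polygon of uncertainty covers ~{feasible.mean():.4f} of the surface")

The area of this region is one natural performance measure: the optimization question in item 1 asks how to shrink it with as few cameras and light sources as possible, and the robustness question in item 4 asks how much it grows when a light source or camera fails.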


The results have immediate implications for the design, performance, and cost of such systems.

Figure 1: Sample system for touch sensing (see details in the abstract), and a finger (object) on the surface.


Prerequisites:

From all members:
- Analytic geometry
- Some familiarity with computer graphics will also help, as many similarities exist.

At least from some of the group members:
- Ability to simulate using Matlab, Mathematica, or a similar tool.

Bibliography:
A basic reference:
  • “The Visual Hull Concept for Silhouette-Based Image Understanding,” Aldo Laurentini, IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(2), 150–162, 1994.


More recent works:
  • “Towards Removing Ghost-Components from Visual-Hull Estimations,” Michoud, B., Guillou, E., Bouakaz, S., Barnachon, M., and Meyer, Fifth International Conference on Image and Graphics (ICIG '09), 20–23 Sept. 2009, 428–434.

  • “Fast Joint Estimation of Silhouettes and Dense 3D Geometry from Multiple Images,” Kalin Kolev, Thomas Brox, and Daniel Cremers, IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(3), 493–505, 2012.