Cubical Homology Theory and Applications in Image Processing and Computer Vision
Homology theory is a well-known concept in algebraic topology related to the notion of connectivity in multi-dimensional shapes. We will discuss cubical homology theory, which is well suited to the study of topological properties of images, since an image is, in essence, a cubical grid in the plane. We will also report recent progress made in designing algorithms and computer programs that compute the homology of spaces and maps, as well as some examples of applications of homology in Image Processing and Computer Vision.
Image Inpainting and High-Order PDEs in Image Processing
Inpainting, the technique of modifying an image in an undetectable form, is as ancient as art itself. The goals and applications of inpainting are numerous, from the restoration of damaged paintings and photographs to the removal or replacement of selected objects. We introduce a novel algorithm for digital inpainting of still images that attempts to replicate the basic techniques used by professional restorers. The algorithm automatically fills in regions with information from their surroundings. The fill-in is done in such a way that isophote lines arriving at the region boundaries are completed inside. The technique does not require the user to specify where the new information comes from; this is done automatically (and quickly), allowing numerous regions containing completely different structures and surrounding backgrounds to be filled in simultaneously. In addition, no limitations are imposed on the topology of the region to be inpainted. Applications of this technique include the restoration of old photographs and damaged film; the removal of superimposed text such as dates, subtitles, or publicity; and the removal of entire objects from the image, such as microphones or wires, in special effects. This work also shows the importance of moving toward high-order PDEs in image processing and the relations of these with other exciting areas of mathematical physics. Joint work with G. Sapiro, V. Caselles, and C. Ballester.
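As a much simpler illustration of the fill-in idea (not the isophote-transport algorithm of the talk, which continues edges into the hole), the sketch below fills a masked region by repeated neighbor averaging, i.e. solving the heat equation with the surrounding pixels as boundary data. The function name and parameters are mine, for illustration only.

```python
import numpy as np

def harmonic_inpaint(image, mask, iters=2000):
    """Fill masked pixels by repeated 4-neighbor averaging (discrete
    harmonic interpolation).  A simplified stand-in for isophote-driven
    inpainting: information diffuses inward, but edges are blurred
    rather than continued.  `mask` is True on the pixels to fill."""
    out = image.astype(float).copy()
    out[mask] = out[~mask].mean()          # crude initialization of the hole
    for _ in range(iters):
        avg = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                      np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = avg[mask]              # update only the hole; rest is fixed
    return out
```

Because only masked pixels are updated, the known part of the image is preserved exactly, and the maximum principle keeps the filled values within the range of the surrounding data.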
A Level Set Framework for Active Contours and Mumford-Shah Segmentation
(co-author: Luminita A. Vese, UCLA).
In this talk, I will present a common framework for active contours and Mumford-Shah segmentation, based on the level set method of S. Osher and J. Sethian. First I will introduce an active contour model "without" edges, based on segmentation and level sets. With this model, we can detect objects whose boundaries are not necessarily defined by gradients, as well as interior contours, automatically. Then I will show how this level set model can be generalized in order to minimize the Mumford-Shah energy for segmentation, for piecewise-constant and piecewise-smooth approximations. We represent the set of edges via one or more level set functions, and we propose a new multiphase level set representation, which has some advantages: we use only $n$ level set functions to represent $2^n$ phases, and in addition, we do not have the problems of vacuum and overlap that naturally arise in multiphase problems. Also, we will see that triple junctions can be detected and represented. Finally, I will show numerical results on various images, in order to validate the algorithm.
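The $2^n$-phases-from-$n$-functions idea can be made concrete with a tiny sketch: the sign pattern of two level set functions partitions the plane into four phases, with every pixel assigned exactly one label, so no vacuum or overlap can occur. The helper name is mine; the example just decodes the sign combinations.

```python
import numpy as np

def phase_labels(phi1, phi2):
    """Decode the four phases encoded by the signs of two level set
    functions: n = 2 functions distinguish 2^n = 4 regions, and every
    point falls in exactly one of them (no vacuum, no overlap)."""
    return 2 * (phi1 > 0).astype(int) + (phi2 > 0).astype(int)

# toy example: two overlapping circles partition a grid into 4 phases
y, x = np.mgrid[0:64, 0:64]
phi1 = 20.0 - np.hypot(x - 24, y - 32)   # signed distance to circle 1
phi2 = 20.0 - np.hypot(x - 40, y - 32)   # signed distance to circle 2
labels = phase_labels(phi1, phi2)
```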
Tree Approximation and Image Compression
In joint work with A. Cohen, W. Dahmen, and R. DeVore, we studied tree approximation and used the results to build a practical coder whose performance is similar to that of existing state-of-the-art coders. We derive certain optimality results for our concrete tree approximation algorithms. On the other hand, because the tree structure is known not to be optimal for images, this also points to shortcomings of state-of-the-art coders, which use a similar tree 'philosophy'. In addition, we discuss some adaptivity properties of our tree coder, useful when parts of very large images need to be examined.
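A minimal sketch of the tree 'philosophy' referred to above: keep the largest wavelet coefficients subject to the constraint that a coefficient may be kept only if its parent in the tree is kept. This is my own greedy illustration of tree-constrained thresholding, not the paper's algorithm or its optimality analysis.

```python
import heapq

def greedy_tree_approx(coeffs, n_keep):
    """Greedy tree-structured thresholding on a heap-indexed binary tree
    (root at index 0, children of i at 2i+1 and 2i+2): grow the kept set
    from the root, always adding the largest-magnitude coefficient whose
    parent is already kept.  Returns the sorted indices of kept nodes."""
    kept = {0}
    frontier = []                       # (-|coeff|, index), parent already kept
    def push_children(i):
        for c in (2 * i + 1, 2 * i + 2):
            if c < len(coeffs):
                heapq.heappush(frontier, (-abs(coeffs[c]), c))
    push_children(0)
    while frontier and len(kept) < n_keep:
        _, i = heapq.heappop(frontier)
        kept.add(i)
        push_children(i)
    return sorted(kept)
```

The kept set always forms a subtree containing the root, which is exactly the structural side information a tree coder can encode cheaply.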
Harmonic Analysis Perspective on Geometric Diffusions and Low-Level Vision
In recent years, two very important trends have emerged which are of compelling interest to the mathematically-trained who are thinking of applications in image processing. On the one hand, there is the widespread use of PDEs to process images, for example with the use of geometry-driven diffusions to remove noise from images and perform segmentation. On the other hand, there are intensive studies of computational vision, and interesting speculations and investigations about mathematical structures which might be involved in biological vision.
In my talk, I will take a completely different discipline -- Harmonic Analysis -- and consider some recent developments in this field. With constructions such as wavelets, time-frequency analysis, and other more exotic schemes, there is a wealth of ideas which can be compared and contrasted with recent developments in both geometry-driven diffusions and in computational vision.
In my talk I will focus on two topics:
1. Existing geometry-driven diffusions go in the right direction -- smoothing anisotropically in the vicinity of edges. But this is only qualitatively correct. Do they really do the quantitatively correct thing? Does it matter?
2. Many existing studies relating phenomena in natural images to computational structures that might be relevant to the visual cortex and computational analogs take Fourier and Gabor analysis, and more recently wavelet analysis, as models for the possible underlying structures best adapted for image analysis. Are these the right ideas? How do other developments in harmonic analysis (e.g. brushlets, beamlets) compare?
I hope to convey both the spirit and some of the specific mathematical ideas of recent developments in applied harmonic analysis.
Parts of my talk will describe joint work with Emmanuel Candes (Caltech) and with Drs. Georgina Flesia and Arne Stoschek (Stanford).
Dynamic Shapes of Arbitrary Dimension: The Vector Distance Functions
We present a novel method for representing and evolving objects of arbitrary dimension. The method, called the Vector Distance Function (VDF) method, uses the vector that connects any point in space to its closest point on the object. It can deal with smooth manifolds with and without boundaries, and with shapes of different dimensions. It can be used to evolve such objects according to a variety of motions, including mean curvature. If discontinuous velocity fields are allowed, the dimension of the objects can change. The evolution method that we propose guarantees that we stay in the class of VDFs, and therefore that the intrinsic properties of the underlying shapes, such as their dimension and curvatures, can be read off easily from the VDF and its spatial derivatives at each time instant. The main disadvantage of the method is its redundancy: the size of the representation is always that of the ambient space, even though the object we are representing may be of a much lower dimension. This disadvantage is also one of its strengths, since it buys us flexibility.
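The defining construction is easy to sketch: for each grid point, store the vector to the closest point on the object. The brute-force version below (my own illustration, on a point-cloud sampling of the object) shows how the usual distance function and the direction to the object are both read off directly from the VDF.

```python
import numpy as np

def vector_distance_function(points, grid):
    """Vector distance function sampled on a grid: for each grid location,
    the vector to its closest point on the object (here a point cloud).
    Brute force: grid is (N, d), points is (M, d); returns (N, d).
    The norm of each row is the distance function; the zero set of the
    norm recovers the object itself."""
    diff = points[None, :, :] - grid[:, None, :]   # (N, M, d) offsets
    d2 = (diff ** 2).sum(-1)                       # squared distances
    nearest = d2.argmin(1)                         # index of closest point
    return diff[np.arange(len(grid)), nearest]
```

Note that the representation lives on the full ambient grid regardless of the object's dimension, which is exactly the redundancy (and flexibility) mentioned above.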
Non-distorting Flattening for Virtual Colonoscopy
In this talk, we consider a novel 3D visualization technique based on conformal surface flattening for virtual colonoscopy. Such visualization methods could be important in virtual colonoscopy since they have the potential for non-invasively determining the presence of polyps and other pathologies. Further, we demonstrate a method which presents a surface scan of the entire colon as a cine, and affords a viewer the opportunity to examine each point on the surface without distortion. From a triangulated surface representation of the colon, we indicate how the flattening procedure may be implemented using a finite element technique. We give a simple example of how the flattening map can be composed with other maps to enhance certain mapping properties. Finally, we show how the use of curvature based colorization and shading maps can be used to aid in the inspection process.
Geometrical Image Representations with Bandelets
Following the "2nd generation image coding" dream, improving current image transform codes will require representing images with features that are meaningful for scene analysis. Such an approach would allow us to build compact representations that can also be used for searching large databases of images. To achieve this goal, the first issue is to extract and efficiently represent the image geometry. An approach based on foveal wavelets and bandelets is presented, with compression examples.
Donald E. McClure (Division of Applied Mathematics, Brown University) Donald.McClure@Brown.edu
Restoration and Reformatting of Motion Images
Postproduction processing of film and video is a source of a wide variety of image processing and analysis problems for motion images. I shall describe the formulation of problems in the areas of digital repair of damage to film, conversion between different digital video formats, and compression. Approaches to these problems used in a current workstation-based system will be described and illustrated with examples. Other contributors to the design of this system are D. Geman, S. Geman, K. Manbeck and C. Yang.
Modeling the Full Statistics of Local Image Patches
I will discuss the work of my group, Ann Lee and Jinggang Huang, and that of Ulf Grenander on seeking a full description of the joint probability model for all pixels in small image patches, e.g. 2x2 to 8x8. I will also compare the statistics of optical images with those of range images.
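As a first step toward such a joint model, one can collect every overlapping k x k patch of an image as a k*k-dimensional vector and estimate its second-order statistics; the full joint distribution studied in the talk goes well beyond this Gaussian summary. The helper below is my own illustrative sketch.

```python
import numpy as np

def patch_covariance(image, k=2):
    """Gather all overlapping k x k patches of a 2-D image and return the
    empirical mean and covariance of the flattened k*k pixel vector --
    the second-order slice of the joint patch statistics."""
    H, W = image.shape
    # column p of `patches` holds pixel (i, j) of every patch, row-major
    patches = np.stack([image[i:H - k + 1 + i, j:W - k + 1 + j].ravel()
                        for i in range(k) for j in range(k)], axis=1)
    return patches.mean(0), np.cov(patches, rowvar=False)
```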
Level Set/PDE Based Algorithms for Image Restoration, Surface Interpolation, and PDEs on General Manifolds
We shall present new, fast level set based algorithms for image restoration and for interpolating unorganized points, curves, and surface patches in 3D. We shall also present a new framework for computing the solution of PDEs and variational problems on general manifolds (in particular, 3D surfaces) and apply this to image processing problems. This is joint work with many people, including P. Burchard, L.-T. Cheng, M. Bertalmio, R. Fedkiw, M. Kang, B. Merriman, H.-K. Zhao, and A. Marquina.
Some New Results on Optimality and Complexity of PDE-Based Segmentation Algorithms
We will present a very simple nonlinear diffusion equation and show its utility for image segmentation. We will show that it may be interpreted both as a variant of the Perona-Malik equation and as the steepest descent equation for the total variation. Its analysis in 1-D will reveal it to be an exact solver of certain maximum likelihood detection/estimation problems. The major advantage over other methods for solving these problems is O(N log N) computational complexity in one spatial dimension. Finally, we will show our method to be a robust estimator (in the spirit of H-infinity estimation) for a restricted class of 1-D problems. Experiments suggest that the 2-D version of our algorithm retains the robustness properties of the 1-D version. A remaining challenge is to extend our 1-D theoretical results to 2-D, in order to design fast and optimal image segmentation algorithms.
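To fix ideas, here is a sketch of the kind of equation in question: explicit steepest descent on a regularized 1-D total variation, u_t = (u_x / sqrt(u_x^2 + eps^2))_x, with zero-flux boundaries. This is a generic illustration I wrote (the talk's exact O(N log N) solver is a different algorithm); the parameters are chosen so the explicit scheme is monotone.

```python
import numpy as np

def tv_diffuse_1d(u, steps=300, dt=0.02, eps=0.1):
    """Explicit gradient descent on the regularized total variation
    sum sqrt(u_x^2 + eps^2), i.e. u_t = d/dx( u_x / sqrt(u_x^2 + eps^2) ),
    with zero flux at the boundaries.  dt <= eps/2 keeps the scheme
    monotone, so mass is conserved and no new extrema are created."""
    u = u.astype(float).copy()
    for _ in range(steps):
        ux = np.diff(u)                         # forward differences
        flux = ux / np.sqrt(ux ** 2 + eps ** 2)   # regularized sign(u_x)
        # divergence of the flux, with zero flux outside the domain
        div = np.concatenate(([flux[0]], np.diff(flux), [-flux[-1]]))
        u += dt * div
    return u
```

On a noisy step signal, this flattens the noise while keeping the jump sharp, which is the segmentation-friendly behavior the talk exploits.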
Accommodation as a Low-Level Visual Cue
I call accommodation cues those measurable properties of images of a given scene that are associated with a change in the geometry of the imaging device. For instance, in the human eye the shape of the lens is controlled so as to bring the scene into focus at the fovea; in a video camera the lens translates for the same purpose.
Is accommodation an unambiguous cue? (i.e. is it possible to distinguish two arbitrary shapes solely from accommodation cues?) Can a surface be reconstructed uniquely from accommodation? Such conditions clearly depend upon geometry (the shape of the surface) as well as photometry (its radiance distribution). Is it possible to characterize the set of "sufficiently exciting" distributions?
In this talk I will present some preliminary results and partial answers to the above questions, as well as describe two optimal algorithms (in the sense of L2 and Information-divergence) to reconstruct shape from accommodation cues, under certain assumptions. This amounts to solving a blind deconvolution problem with some special features. I will discuss some open problems and potential applications of accommodation in visualization and endoscopic surgery.
Front-End Vision and Multiscale Image Analysis (a new tutorial book).
Both linear and nonlinear scale-space theory have made much progress and shown good performance in many computer vision algorithms. The field, however, sees relatively small growth compared to, e.g., neural networks and wavelets. A new, upcoming tutorial book on linear and nonlinear scale-space theory is presented, written entirely in Mathematica 4. Computer algebra software has now reached the level where, even for large datasets such as multidimensional images, complex mathematics can be done and presented in a fast-prototyping way. Every topic is illustrated with (typically very short) code to carry out and modify all experiments. The common thread through the book is the mathematics of apertures. A thorough treatment of the front-end visual system is included. Demos will be presented of differential invariants to 4th order, gauge coordinate systems, multiscale optic flow, steerable filters, time-scale, color differential invariants, deep structure, geometry-driven diffusion equations, winding numbers, edge focusing, etc. The book and CD-ROM will appear at the end of this year with Kluwer Academic Publishers.
Multiresolution Stochastic Models and Their Use in Modeling and Analysis of Random Processes and Fields
In this talk we describe a body of research concerned with the building and exploitation of multiresolution statistical models of random phenomena and imagery. We begin with a discussion of linear stochastic models on multiresolution trees, in particular discussing the very efficient algorithms these models admit, the realization of random phenomena using such models, some relationships to wavelet decompositions of signals and images, and applications of this formalism, and some of its critical limitations. Motivated by two of these limitations, we describe two research directions that use this basic formalism as a point of departure. The first of these is a class of nonlinear models we refer to as wavelet cascades, which maintain much of the exploitable structure of our linear-tree models but allow us to capture distinctive nonlinear characteristics of natural imagery. The second is the examination of stochastic models on graphical structures other than trees, a topic of considerable interest in a number of quite different domains.
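The basic linear tree model admits a very compact synthesis sketch: states propagate coarse-to-fine down a dyadic tree, each child equal to a scaled copy of its parent plus independent noise. The function and parameter names below are mine, for illustration of the model class only.

```python
import numpy as np

def sample_tree_process(depth, a=0.9, sigma=0.5, seed=0):
    """Sample a linear stochastic model on a dyadic multiresolution tree:
    each node's state is x_child = a * x_parent + w, with w ~ N(0, sigma^2)
    independent across nodes.  Returns the 2**depth leaf values, i.e. the
    finest-scale signal synthesized coarse to fine."""
    rng = np.random.default_rng(seed)
    level = np.array([rng.normal(0.0, sigma)])       # root state
    for _ in range(depth):
        parents = np.repeat(level, 2)                # two children per node
        level = a * parents + rng.normal(0.0, sigma, parents.size)
    return level
```

Leaves sharing a recent common ancestor are strongly correlated, which is what makes estimation on these trees so efficient; it is also the source of the blocky cross-scale artifacts that motivate the nonlinear and non-tree extensions mentioned above.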
Anthony Yezzi (Georgia Institute of Technology) email@example.com
Variational Methods for Image Segmentation, Smoothing, Interpolation, Magnification, Stereo Matching, and Shape from Shading
Partial Differential Equations have been used extensively to derive geometric active contour models for the purpose of image segmentation and to derive anisotropic diffusion models for image smoothing. They have also been employed in low level vision problems of inferring 3D structure from one or more 2D images (e.g. stereo-matching and shape-from-shading).
In the first part of this talk, we present a class of statistically driven active contour models based upon deterministic energy functionals designed to maximally separate the values of selected statistics inside and outside each evolving contour. We follow this with a less restrictive model, based upon the Mumford-Shah functional, which simultaneously diffuses the image while evolving a set of active contours towards the boundaries of objects. A straightforward generalization of this model allows us to treat images with regions of missing data and to create a unified framework for simultaneous image segmentation, smoothing, and magnification. In the second part of the talk, we will present a novel approach to multiframe shape-from-shading which is strongly motivated by the multiframe stereo-matching work of Faugeras and Keriven.
Adaptive ENO-wavelets for Image Compression
We have designed an adaptive ENO-wavelet transform for approximating discontinuous functions without oscillations near the discontinuities. Our approach is to apply the main idea from Essentially Non-Oscillatory (ENO) schemes for numerical shock capturing to standard wavelet transforms. The crucial point is that the wavelet coefficients are computed without differencing function values across jumps. However, we accomplish this in a different way than in the standard ENO schemes: whereas in the standard ENO schemes the stencils are adaptively chosen, in the ENO-wavelet transform we adaptively change the function and use the same uniform stencils. The ENO-wavelet transform retains the essential properties and advantages of standard wavelet transforms, such as concentrating energy in the low frequencies, obtaining arbitrarily high order accuracy uniformly, and having a multiresolution framework and fast algorithms, all without any edge artifacts. We have obtained a rigorous approximation error bound which shows that the error in the ENO-wavelet approximation depends only on the size of the derivative of the function away from the discontinuities. We will show some numerical examples to illustrate this error estimate.
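The problem being solved is easy to exhibit with one level of the standard (orthonormal) Haar transform, sketched below: when a filter pair straddles a jump, the detail coefficient is large, and thresholding it produces edge artifacts. This sketch shows only the standard transform and the offending coefficient; the ENO-wavelet modification of the function near the jump is not reproduced here.

```python
import numpy as np

def haar_level(u):
    """One level of the orthonormal Haar wavelet transform: low-pass
    (pairwise averages) and high-pass (pairwise differences).  Standard
    wavelets difference function values across jumps, so the detail
    coefficient whose stencil straddles a discontinuity is O(jump)."""
    s = (u[0::2] + u[1::2]) / np.sqrt(2)   # low-pass / averages
    d = (u[0::2] - u[1::2]) / np.sqrt(2)   # high-pass / details
    return s, d

u = np.where(np.arange(16) < 7, 0.0, 1.0)   # jump inside the pair (6, 7)
s, d = haar_level(u)
# exactly one detail coefficient is nonzero: the one straddling the jump
```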