Institute for Mathematics and its Applications
University of Minnesota
400 Lind Hall
207 Church Street SE
Minneapolis, MN 55455
The IMA is delighted to announce that Daniel Bates (Notre Dame), Jason Gower (Cincinnati), Milena Hering (Michigan), Benjamin Howard (Maryland), Anton Leykin (University of Illinois at Chicago), Hannah Markwig (Kaiserslautern), Jiawang Nie (Berkeley), and John Voight (Sydney) have been appointed as IMA postdoctoral fellows in conjunction with the upcoming program on Applications of Algebraic Geometry. Laura Lurati (Brown) has been appointed as an IMA/Boeing industrial postdoc; she will spend her first year at Boeing, and her second year at the IMA.
This summer the IMA will offer four exciting programs. One is intended for midcareer mathematicians interested in exploring a new research direction, and two are exclusively for graduate students.
The New Directions Short Course Biophysical Fluid Dynamics, June 19–30, will be taught by Michael Shelley (Courant Institute) and Raymond Goldstein (University of Arizona). This intensive short course will provide the basic knowledge required for research in biological fluid dynamics. Participation will be limited to 25 midcareer researchers in the mathematical sciences. The application deadline is April 1.
The 2006 IMA Summer Program Symmetries and Overdetermined Systems of Partial Differential Equations, July 17–August 4, 2006, is organized by Thomas Branson (Iowa), Michael Eastwood (Adelaide), and Willard Miller Jr. (Minnesota). The first week will be devoted to expository/overview sessions; the following two weeks will focus on more specialized research talks, intermixed with discussions pointing out the most promising areas for further research.
The IMA is still accepting nominees for the PI Summer Program for Graduate Students Topology and its Applications, July 10–28, Mississippi State University. This program is open only to graduate students from IMA Participating Institutions. The program topics and instructors are "Topological Approximation and Surface Reconstruction" (Gunnar Carlsson, Stanford), "Applications to Molecular Biology" (John Harer, Duke), and "Applications to Dynamical Systems" (Konstantin Mischaikow, Georgia Tech). If you know of a graduate student at a Participating Institution who would benefit from the program, please encourage them to contact their department chair.
Mathematical Modeling in Industry X—A Workshop for Graduate Students, August 9–18, 2006, offers graduate students and qualified advanced undergraduates firsthand experience in industrial research. Teams of up to six students will work under the guidance of a mentor from industry on the modeling and analysis of real-world industrial problems. This year's mentors are Douglas C. Allan (Corning), Thomas Grandine (Boeing), SuPing Lyu (Medtronic), Rick Mifflin (ExxonMobil), Brendt Wohlberg (Los Alamos National Lab), and Chai Wah Wu (IBM Research). The application deadline is April 15.
During the 1980s and '90s, computer vision focused on (chrono-)geometrical issues (the when and where questions). The what questions have been neglected and will be an important focus of computer vision/image processing/computer graphics in the coming decades. Important topics include estimation of the light field, segmentation of natural objects, classification of materials on the basis of (illumination-dependent) texture, (estimated) BRDF and color, "color constancy", and classification of the setting ("landscape", "cityscape", "office environment", etc.).
|1:25p-2:25p||Imaging methods for surveillance||Vassilios Morellas (Honeywell)||Room 20 Vincent Hall||IPS|
|8:15a-9:00a||Registration and coffee||EE/CS 3-180||W3.6-10.06|
|9:00a-9:10a||Welcome to the IMA||Douglas N. Arnold (University of Minnesota)||EE/CS 3-180||W3.6-10.06|
|9:10a-9:20a||Opening Remarks||Jan J. Koenderink (University of Utrecht), Jitendra Malik (University of California - Berkeley)|
|9:20a-10:20a||TBA||Jitendra Malik (University of California - Berkeley)||EE/CS 3-180||W3.6-10.06|
|10:45a-11:45a||Stereo for curves and surfaces||Steven W. Zucker (Yale University)||EE/CS 3-180||W3.6-10.06|
|1:30p-2:30p||Natural images, multiscale manifold models, and compressive imaging||Richard Baraniuk (Rice University)||EE/CS 3-180||W3.6-10.06|
|2:30p-3:00p||Second Chances||EE/CS 3-180||W3.6-10.06|
|3:15p-4:30p||Poster Session/Reception||Lind Hall 400||W3.6-10.06|
|Inpainting of binary images using the Cahn-Hilliard equation||Andrea L. Bertozzi (University of California - Los Angeles)|
|Branch voxels and junctions in 3D skeletons of confocal microscope images of human brain tissue||Gisela Klette (University of Auckland)|
|Texture mixing via universal simulation||Guillermo R. Sapiro (University of Minnesota)|
|Physics-motivated features for distinguishing photographic images and computer graphics||Mao-Pei Tsui (University of Toledo)|
|A variational approach to image and video super-resolution||Todd Wittman (University of Minnesota)|
|9:00a-10:00a||How many categories can you recognize?||Pietro Perona (California Institute of Technology)||EE/CS 3-180||W3.6-10.06|
|10:30a-11:30a||Natural images, natural percepts and primary visual cortex||Daniel Kersten (University of Minnesota)||EE/CS 3-180||W3.6-10.06|
|1:30p-2:30p||Image statistics and surface perception||Edward H. Adelson (Massachusetts Institute of Technology)||EE/CS 3-180||W3.6-10.06|
|2:30p-3:00p||Second Chances||EE/CS 3-180||W3.6-10.06|
|9:00a-10:00a||Perception and classification of surface texture||Mike J. Chantler (Heriot-Watt University)||EE/CS 3-180||W3.6-10.06|
|10:30a-11:30a||Surface color perception in three-dimensional scenes: Estimating, representing and discounting the illuminant||Laurence T. Maloney (New York University)||EE/CS 3-180||W3.6-10.06|
|1:30p-2:30p||Image texture and the "flow of light"||Jan J. Koenderink (University of Utrecht)||EE/CS 3-180||W3.6-10.06|
|2:30p-3:00p||Second Chances||EE/CS 3-180||W3.6-10.06|
|9:00a-10:00a||Characterizing local features of illuminated objects||James Damon (University of North Carolina)||EE/CS 3-180||W3.6-10.06|
|10:30a-11:30a||Panoramic imaging and laser range finders for 3D scene visualization||Reinhard Klette (University of Auckland)||EE/CS 3-180||W3.6-10.06|
|1:30p-2:30p||Removing photographic blur caused by camera motion: How can you identify when an image looks blurred?||William T. Freeman (Massachusetts Institute of Technology)||EE/CS 3-180||W3.6-10.06|
|2:30p-3:00p||Second Chances||EE/CS 3-180||W3.6-10.06|
|6:00p-7:30p||Workshop dinner||Loring Pasta Bar||W3.6-10.06|
|9:00a-10:00a||Natural image contours||James Elder (York University)||EE/CS 3-180||W3.6-10.06|
|11:30a-12:00p||Second Chances||EE/CS 3-180||W3.6-10.06|
|11:15a-12:15p||TBA||Arnd Scheel (University of Minnesota)||Lind Hall 409||PS|
|1:25p-2:25p||TBA||Peter Massopust||Room 20 Vincent Hall||IPS|
|1:25p-2:25p||Algorithms for digital color cameras||John Hamilton (Eastman Kodak Company)||Room 20 Vincent Hall||IPS|
|Edward H. Adelson (Massachusetts Institute of Technology)||Image statistics and surface perception|
|Abstract: It is mathematically impossible to tell whether a surface is white, gray, or black, by looking at it in isolation, since the luminance is the product of two unknown variables, illumination and reflectance (albedo). Nonetheless people can do it pretty well, proving that the human visual system is smarter than the people who study it. Real surfaces, such as paper, cloth, or stucco, have visual textures that depend on interreflections and specular reflections, and some of the resultant image statistics are correlated with surface properties such as albedo and gloss. By manipulating these statistics, we can make the surface look lighter or darker (and duller or shinier) without changing the mean luminance. In a related project, we are exploring how local statistics can be used to separate shading and albedo in natural images. Working in the derivative domain (as in Retinex), we train on images with ground truth "intrinsic images" of shading and albedo, and learn to estimate the derivatives based on local image patches. We then do a pseudoinverse to retrieve the images. The results are good: we can separate an image into its shading and albedo components better than previous methods, including our own previous methods that relied on classification rather than estimation.|
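Adelson's derivative-domain decomposition builds on the classic Retinex observation that log-luminance is the sum of log-illumination and log-reflectance. A toy one-dimensional sketch of that idea follows; the hard threshold split (attributing large log-derivatives to albedo edges, small ones to slowly varying illumination) and the threshold value are illustrative assumptions, not the trained patch-based estimator described in the abstract.

```python
import numpy as np

def retinex_1d(log_lum, thresh=0.2):
    """Toy 1-D Retinex-style split: luminance = illumination * reflectance,
    so log L = log E + log R. Large log-derivatives are attributed to
    reflectance (albedo edges), small ones to slowly varying illumination.
    The threshold value is illustrative."""
    d = np.diff(log_lum)
    d_refl = np.where(np.abs(d) > thresh, d, 0.0)
    # integrate the reflectance derivatives back (pseudoinverse of diff,
    # up to an additive constant)
    log_r = np.concatenate([[0.0], np.cumsum(d_refl)])
    log_e = log_lum - log_r   # the remainder is the illumination estimate
    return log_e, log_r
```

Applied to a smooth ramp plus a sharp step, the step lands in the reflectance estimate while the ramp stays in the illumination estimate.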
|Richard Baraniuk (Rice University)||Natural images, multiscale manifold models, and compressive imaging|
|Abstract: The images generated by varying the underlying articulation parameters of an object (pose, attitude, light source position, and so on) can be viewed as points on a low-dimensional "image appearance manifold" (IAM) in a high-dimensional ambient space. In this talk, we will expand on the observation that typical IAMs are not differentiable, in particular if the images contain sharp edges. However, all is not lost, since IAMs have an intrinsic multiscale geometric structure. In fact, each IAM has a family of approximate tangent spaces, each one good at a certain resolution. In the first part of the talk, we will focus on the particular inverse problem of estimating, from a given image on or near an IAM, the underlying parameters that produced it. Putting the multiscale structural aspect to work, we develop a new algorithm for high-accuracy parameter estimation based on a coarse-to-fine Newton iteration through the family of approximate tangent spaces. This algorithm is reminiscent of recently proposed algorithms for multiscale image registration and super-resolution. In the second part of the talk, we will explore IAMs in the context of "Compressive Imaging" (CI), where we attempt to recover an image from a small number of (potentially random) projections. To date, CI has focused on sparsity-based image models; we will discuss how IAM models could offer better performance for geometry-rich images.
This is joint work with Michael Wakin, Hyeokho Choi, and David Donoho.
|Andrea L. Bertozzi (University of California - Los Angeles)||Inpainting of binary images using the Cahn-Hilliard equation|
|Abstract: Joint work with Selim Esedoglu (Univ. of Michigan, Department of Mathematics) and Alan Gillette (UCLA, Department of Mathematics). Image inpainting is the filling in of missing or damaged regions of images using information from surrounding areas. We outline here the use of a model for inpainting based on the Cahn-Hilliard equation, which allows for fast, efficient inpainting of degraded text, as well as super-resolution of high contrast images.|
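The Cahn-Hilliard inpainting model can be sketched with a minimal explicit time-stepping scheme on a periodic grid; the double-well potential, fidelity weighting, and all parameter values below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def laplacian(u):
    # 5-point Laplacian with periodic boundaries (unit grid spacing)
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0)
            + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)

def cahn_hilliard_inpaint(f, mask, eps=1.0, lam=1e3, dt=1e-4, steps=2000):
    """Inpaint a binary image f where mask == 0 (the damaged region).
    Explicit Euler on  u_t = -lap(eps*lap(u) - W'(u)/eps) + lam*mask*(f - u),
    with the double-well potential W(u) = u^2 (1 - u)^2."""
    u = f.copy()
    fid = lam * mask
    for _ in range(steps):
        w_prime = 2 * u * (1 - u) * (1 - 2 * u)    # W'(u)
        mu = eps * laplacian(u) - w_prime / eps     # chemical potential
        u = u + dt * (-laplacian(mu) + fid * (f - u))
    return u
```

The fidelity term pins u to the known data outside the damaged region, while the Cahn-Hilliard dynamics continue level lines across it.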
|Mike J. Chantler (Heriot-Watt University)||Perception and classification of surface texture|
|Abstract: I will present a simple first-order model of how variation in illumination affects the output of Filter Response Filters (FRF).
FRF are of interest because:
I'll show how naïve classifiers built using these simple features can fail, and how the model can be used to produce a classifier that is robust to illumination variation. This will show that single still images are often not sufficient for the purposes of surface classification, for either human or automated systems. I'll conclude by describing some of our recent research investigating our perceptions of surface texture.
|James Damon (University of North Carolina)||Characterizing local features of illuminated objects|
|Abstract: The perceived shapes of objects in images result from a combination of visual clues. These clues follow from the interplay of geometric features, such as edges and corners delineating curves on object surfaces, and illumination features such as shadow curves and specularities. The viewer gains such information not just from static images but also from perceived change in viewing direction.
In this talk, we explain how it is possible to determine a catalog of possible local models for the generic interplay between geometric features and shadow curves. This catalog can be expanded to include the expected changes in such models under movement in viewing direction. Such a catalog is constructed through the use of singularity theory, a mathematical theory that allows the construction of such classifications based on stability and possible perturbations. We explain the general features of the classification and indicate how it is obtained.
This is the result of joint work carried out with Peter Giblin and Gareth Haslinger.
|James Elder (York University)||Natural image contours|
|Abstract: The important role of contours in visual perception has been recognized for many years (e.g., Wertheimer 1923/1938). While early Gestalt insights derive from observation of highly idealized images, decades of computer vision research have demonstrated the computational complexity of inferring and exploiting contours in natural images. Physiological data, while generating some intriguing clues, are often too local (single-unit recording) or too global (imaging) to provide the data needed to constrain existing models or inspire new ones.
In this talk I will discuss recent work that attempts to bring together psychophysical, computational and physiological approaches to understanding contour processing in natural images. A unifying foundation for this effort is a continuing project to measure and model the statistics of natural image contours. These ecological results lead to new computer vision algorithms for natural contour grouping, normative models for contour processing that may be evaluated psychophysically, and to new models for neural selectivity to natural image contours that may be tested against physiological data.
|William T. Freeman (Massachusetts Institute of Technology)||Removing photographic blur caused by camera motion: How can you identify when an image looks blurred?|
|Abstract: Camera shake during exposure leads to objectionable image blur and ruins many photographs. Conventional blind deconvolution methods typically assume frequency-domain constraints on images, or overly simplified parametric forms for the motion path during camera shake. Real camera motions can follow convoluted paths, and a spatial-domain prior can better maintain visually salient image characteristics. We introduce a multi-scale method to remove the effects of camera shake from seriously blurred images, by estimating the most probable blur and original image using a variational approximation to the posterior probability, and assuming a heavy-tailed distribution for bandpassed image statistics. Our method assumes a uniform camera blur over the image, negligible in-plane camera rotation, and no blur caused by moving objects in the scene. The operator specifies an image region without saturation effects within which to estimate the blur kernel. I'll discuss issues in this blind deconvolution problem, and show results for a variety of digital photographs.
Invitation to submit examples: I invite audience members to submit examples of motion-blurred photographs to me a few days ahead of time. I'll show the images you submit, and the result of our algorithm applied to them. If you have a favorite blind deconvolution or restoration algorithm, please apply it to your image and send it and I'll show that, too.
Joint work with: Rob Fergus, Barun Singh, both from MIT CSAIL, and Aaron Hertzman and Sam Roweis, both from the University of Toronto.
|John Hamilton (Eastman Kodak Company)||Algorithms for digital color cameras|
|Abstract: While digital color imaging has many problems in common with conventional silver halide imaging, it also has its own particular problems not faced in the analog world. Two of these problems, and two corresponding algorithmic solutions, are illustrated by example and discussed in detail. In addition, a mathematical perspective is presented to explain how these algorithms work.
The first problem is that of color interpolation (also called demosaicking). The pixels of most silicon sensors capture a single color measurement, usually a red, green, or blue color value. Because a fully processed color image requires all three color values at each pixel, two additional color values must be provided at each pixel. An algorithm is presented that addresses this problem for sensors using the Bayer color filter pattern.
The second problem is that of color aliasing. While the problem of aliasing is always present in a discrete imaging system, it is compounded for color sensors because the different color channels can alias in different ways. The resulting interference patterns have distinctive color components which constitute an obvious and annoying imaging artifact. An algorithm is presented that addresses this problem for sensors using the Bayer color filter pattern.
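Kodak's specific algorithms are not reproduced here, but the color interpolation problem itself can be illustrated with the textbook bilinear baseline for a Bayer mosaic; the RGGB pattern layout and kernel choices below are standard assumptions, not the algorithm of the talk.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Bilinear demosaicking of a Bayer RGGB mosaic (H, W) -> (H, W, 3).
    Each missing color value is the average of its nearest same-color
    neighbors. A textbook baseline, not Kodak's method."""
    h, w = raw.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r_mask = ((yy % 2 == 0) & (xx % 2 == 0)).astype(float)   # R at even/even
    b_mask = ((yy % 2 == 1) & (xx % 2 == 1)).astype(float)   # B at odd/odd
    g_mask = 1.0 - r_mask - b_mask                           # G elsewhere
    # green has 4-connected same-color neighbors; red/blue also need diagonals
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    out = np.empty((h, w, 3))
    for i, m in enumerate([r_mask, g_mask, b_mask]):
        k = k_g if i == 1 else k_rb
        out[..., i] = convolve(raw * m, k, mode='mirror')
    return out
```

A quick sanity check: a constant gray mosaic must reconstruct to the same constant in all three channels.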
|Daniel Kersten (University of Minnesota)||Natural images, natural percepts and primary visual cortex|
|Abstract: The traditional model of primary visual cortex (V1) is in terms of a retinotopically organized set of spatio-temporal filters. This model has been extraordinarily fruitful, providing explanations of a considerable body of psychophysical and neurophysiological results. It has also produced compelling linkages between natural image statistics, efficient coding theory, and neural responses. However, there is increasing evidence that V1 is doing a whole lot more. We can get insight into early cortical processing by studying not only the relationship between image input and neural activity, but also between human visual percepts and early cortical activity. Natural percepts (in the sense of tapping into natural modes of processing) are as important as understanding natural images when trying to find out what primary visual cortex is doing. I will describe several results from functional magnetic resonance imaging (fMRI) studies which show that the human V1 blood oxygenation level dependent (BOLD) response to patterns perceived as well-organized is lower than to patterns perceived as less organized, that V1 response to natural image contrast is correlated with perceived contrast, and that apparent size modulates the spatial extent of V1 activity.|
|Gisela Klette (University of Auckland)||Branch voxels and junctions in 3D skeletons of confocal microscope images of human brain tissue|
|Abstract: Concepts for describing curve points in a continuous space have long been well known in mathematics. We apply those concepts to the discrete space with the aim of analysing curve-like structures in digital images. For the characterization of 3D skeletons we distinguish between different types of voxels. We discuss approaches to defining those elements of skeletons and their properties. We use the distribution and complexity of junctions to extract features for 3D medical images.|
|Reinhard Klette (University of Auckland)||Panoramic imaging and laser range finders for 3D scene visualization|
|Abstract: Joint work with Karsten Scheibe, DLR, Berlin-Adlershof.
The talk gives a general overview of new architectures of panoramic cameras (as designed and produced at DLR, the German Aerospace Center, in Berlin), their use for stereo imaging based on studies at CITR, and the combination of those high-resolution images (about 350 megapixels each) with range data generated by a laser range finder. Results are illustrated for different objects, such as the castle "Neuschwanstein" in Bavaria, Germany.
|Jan J. Koenderink (University of Utrecht)||Image texture and the "flow of light"|
|Abstract: The "Shading Cue" is conventionally framed in the context of perfectly smooth surfaces. "Shading" has ancient roots in the visual arts, and became canonized in the late 20th century as the "Shape From Shading (SFS) Problem". I reconsider the problem as conventionally posed, presenting a novel analysis of its "observational basis". When rough surfaces are considered, the image structure is augmented (from a mere contrast gradient in the smooth case) with the image illuminance flow structure revealed by texture. The direction and two differential invariants of this flow can be estimated robustly via the structure tensor. This changes the nature of the "shading cue" qualitatively. Shading alone does not specify surface curvature orthogonal to the illumination direction, a lack of data that has to be made up for by the surface integrability conditions. Hence conventional SFS algorithms are based on partial differential equations with global boundary conditions. Allowing illuminance flow as an additional observable alleviates this problem, and purely local, algebraic approaches to SFS become feasible. Algorithms can be shown to exist that derive surface curvature from shading and flow observations through a linear operator applied to the observables, the operator being a function of surface attitude and beam direction. Such an approach neatly reveals the remaining group of ambiguity transformations in an intuitive way. I propose novel ways to deal with the intrinsic ambiguities of photomorphometrics. Instead of attempting to find the full class of equivalent solutions, I look for specific solutions given certain a priori guesses. Such methods are much more similar to likely mechanisms of human psychogenesis, in particular visual perception, than the conventional "Marrian" approach. I present methods that boil down to linear, local computation, and are thus very robust and possibly implementable in neural wetware.|
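The structure-tensor orientation estimate mentioned in the abstract can be sketched as follows; this is the generic formulation (smoothed outer products of image gradients), not Koenderink's exact estimator of the illuminance flow and its differential invariants, and the derivative filters and smoothing scale are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def dominant_orientation(img, sigma=3.0):
    """Estimate the locally dominant gradient orientation via the structure
    tensor J = G_sigma * [Ix^2, Ix*Iy; Ix*Iy, Iy^2]. Returns an angle field
    in radians (mod pi); the texture flow direction is orthogonal to it."""
    ix = sobel(img, axis=1)                  # horizontal derivative
    iy = sobel(img, axis=0)                  # vertical derivative
    jxx = gaussian_filter(ix * ix, sigma)    # smoothed tensor entries
    jxy = gaussian_filter(ix * iy, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    # orientation of the eigenvector with the larger eigenvalue
    return 0.5 * np.arctan2(2 * jxy, jxx - jyy)
```

For a texture of vertical stripes the dominant gradient orientation is horizontal (angle 0), so the flow direction is vertical.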
|Laurence T. Maloney (New York University)||Surface color perception in three-dimensional scenes: Estimating, representing and discounting the illuminant|
|Abstract: Joint work with Katja Doerschner, Huseyin Boyaci.
Researchers studying surface color perception have typically used stimuli that consist of a small number of matte patches (real or simulated) embedded in a plane perpendicular to the line of sight (a Mondrian; Land & McCann, 1971). Reliable estimation of surface properties analogous to color is a difficult if not impossible computational problem in such limited scenes (Maloney, 1999). In more realistic, three-dimensional scenes the problem is not intractable, in part because considerable information about the spatial and spectral distribution of the illumination is usually available. We describe a series of experiments that (1) explore how the human visual system discounts the spatial and spectral distribution of the illumination (SSDI) in judging matte surface color and (2) examine what cues the visual system uses in estimating the SSDI in a scene. We find that the human visual system uses information from cast shadows and specular reflections in estimating the SSDI and, when more than one cue type is present, combines these cues effectively. The SSDI can be very complex in scenes with many different light sources. We examine (3) the limits of human visual representation of the SSDI, reporting an experiment intended to test these limits. Our results indicate that the human visual representation of the SSDI in a scene is well-matched to the task of perceiving matte surface color.
Land, E. H. & McCann, J. J. (1971). Lightness and retinex theory. Journal of the Optical Society of America, 61, 1-11.
Maloney, L. T. (1999). Physics-based approaches to modeling surface color perception. In Gegenfurtner, K. R. & Sharpe, L. T. (Eds.), Color Vision: From Genes to Perception (pp. 387-422). Cambridge, UK: Cambridge University Press.
|Pietro Perona (California Institute of Technology)||How many categories can you recognize?|
|Abstract: How many categories can you recognize? Currently the best estimate is due to Irv Biederman: 3000 entry-level categories and perhaps 3×10^4 categories overall. This estimate was obtained indirectly, by counting words in a dictionary. I will present a method to obtain a direct estimate. Alongside the estimate one gets frequencies of objects and categories for free. I will discuss the implications for visual recognition and other visual problems.|
|Guillermo R. Sapiro (University of Minnesota)||Texture mixing via universal simulation|
|Abstract: Joint work with G. Brown (University of Minnesota and HP Labs) and G. Seroussi (MSRI).
A framework for studying texture in general, and for texture mixing in particular, is presented in this work. The work follows concepts from universal type classes and universal simulation. Based on the well-known Lempel and Ziv (LZ) universal compression scheme, the universal type class of a one dimensional sequence is defined as the set of possible sequences of the same length which produce the same dictionary (or parsing tree) with the classical LZ incremental parsing algorithm. Universal simulation is realized by sampling uniformly from the universal type class, which can be efficiently implemented. Starting with a source texture image, we use universal simulation to synthesize new textures that have, asymptotically, the same statistics of any order as the source texture, yet have as much uncertainty as possible, in the sense that they are sampled from the broadest pool of possible sequences that comply with the statistical constraint. When considering two or more textures, a parsing tree is constructed for each one, and samples from the trees are randomly interleaved according to pre-defined proportions, thus obtaining a mixed texture. As with single texture synthesis, the k-th order statistics of this mixture, for any k, asymptotically approach the weighted mixture of the k-th order statistics of each individual texture used in the mixing. We present the underlying principles of universal types, universal simulation, and their extensions and application to mixing two or more textures with pre-defined proportions.
|Mao-Pei Tsui (University of Toledo)||Physics-motivated features for distinguishing photographic images and computer graphics|
|Abstract: Joint work with Tian-Tsong Ng, Shih-Fu Chang, Jessie Hsu, and Lexing Xie (Columbia University).
The increasing photorealism of computer graphics has made it a convincing form of image forgery. Classifying photographic images versus photorealistic computer graphics has therefore become an important problem for image forgery detection. We propose a new geometry-based image model, motivated by the physical image generation process, to tackle this problem. The proposed model reveals certain physical differences between the two image categories, such as the gamma correction in photographic images and the sharp structures in computer graphics. For the problem of image forgery detection, we propose two levels of image authenticity definition, i.e., imaging-process authenticity and scene authenticity, and analyze our technique against these definitions. Such definitions are important for making the concept of image authenticity computable. Apart from offering physical insights, our technique, with a classification accuracy of 83.5%, outperforms those in the prior work, i.e., wavelet features at 80.3% and cartoon features at 71.0%. We also consider a recapturing attack scenario and propose a counter-attack measure.
|Todd Wittman (University of Minnesota)||A variational approach to image and video super-resolution|
|Abstract: Super-resolution seeks to produce a high-resolution image from a set of low-resolution, possibly noisy, images such as in a video sequence. We present a method for combining data from multiple images using the Total Variation (TV) and Mumford-Shah functionals. We discuss the problem of sub-pixel image registration and its effect on the final result.|
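As a minimal illustration of the Total Variation machinery, here is smoothed-TV denoising by explicit gradient descent; in super-resolution the fidelity term would couple u to several registered, downsampled low-resolution frames rather than a single image f. All parameter values are illustrative, and this is a generic building block, not the authors' combined TV/Mumford-Shah method.

```python
import numpy as np

def tv_gradient_descent(f, lam=0.1, eps=0.1, dt=0.01, steps=300):
    """Smoothed total-variation denoising: gradient descent on
    TV_eps(u) + (lam/2)||u - f||^2, with TV_eps using the regularized
    gradient magnitude sqrt(|grad u|^2 + eps^2). Periodic boundaries."""
    u = f.copy()
    for _ in range(steps):
        ux = np.roll(u, -1, 1) - u                 # forward differences
        uy = np.roll(u, -1, 0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag                # normalized gradient field
        div = (px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0))
        u = u + dt * (div - lam * (u - f))         # descent step
    return u
```

The flow leaves constant images unchanged and strictly reduces the total variation of noisy input.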
|Edward H. Adelson||Massachusetts Institute of Technology||3/5/2006 - 3/10/2006|
|Jung-Ha An||University of Minnesota||9/1/2005 - 8/31/2007|
|Pablo Arias||Universidad de la Republica||3/5/2006 - 3/11/2006|
|D. Gregory Arnold||Air Force Research Laboratory||3/6/2006 - 3/8/2006|
|Douglas N. Arnold||University of Minnesota||7/15/2001 - 8/31/2006|
|Donald G. Aronson||University of Minnesota||9/1/2002 - 8/31/2006|
|Richard Baraniuk||Rice University||3/5/2006 - 3/10/2006|
|Evgeniy Bart||University of Minnesota||9/1/2005 - 8/31/2007|
|Andrea L. Bertozzi||University of California - Los Angeles||3/5/2006 - 3/10/2006|
|Francisco Blanco-Silva||Purdue University||9/1/2005 - 6/30/2006|
|Edward Howard Bosch||National Geospatial Intelligence Agency||3/5/2006 - 3/10/2006|
|Mike J. Chantler||Heriot-Watt University||3/5/2006 - 3/12/2006|
|Qianyong Chen||University of Minnesota||9/1/2004 - 8/31/2006|
|Li Cheng||University of Alberta||3/5/2006 - 3/10/2006|
|Steven Benjamin Damelin||Georgia Southern University||8/9/2005 - 6/30/2006|
|James Damon||University of North Carolina||2/27/2006 - 6/30/2006|
|Brian DiDonna||University of Minnesota||9/1/2004 - 8/31/2006|
|Marco F. Duarte||Rice University||3/5/2006 - 3/11/2006|
|James Elder||York University||3/5/2006 - 3/10/2006|
|David Field||Cornell University||3/5/2006 - 3/10/2006|
|William T. Freeman||Massachusetts Institute of Technology||3/5/2006 - 3/10/2006|
|Donald Geman||Johns Hopkins University||3/5/2006 - 3/10/2006|
|Stuart Geman||Brown University||3/5/2006 - 3/10/2006|
|Raffaele Grompone||CMLA, ENS of Cachan||3/5/2006 - 4/7/2006|
|Changfeng Gui||University of Connecticut||9/12/2005 - 6/30/2006|
|Jooyoung Hahn||KAIST||8/26/2005 - 7/31/2006|
|Hazem Hamdan||University of Minnesota||3/6/2006 - 3/10/2006|
|John Hamilton||Eastman Kodak Company||3/30/2006 - 3/31/2006|
|Gloria Haro Ortega||University of Minnesota||9/1/2005 - 8/31/2007|
|Xiang Huang||University of Connecticut||9/1/2005 - 6/30/2006|
|Victoria Interrante||University of Minnesota||3/6/2006 - 3/10/2006|
|Jeremie Jakubowicz||CMLA, ENS of Cachan||3/2/2006 - 4/8/2006|
|Sookyung Joo||University of Minnesota||9/1/2004 - 8/31/2006|
|Sung Ha Kang||University of Kentucky||1/1/2006 - 5/31/2006|
|Chiu Yen Kao||University of Minnesota||9/1/2004 - 8/31/2006|
|Daniel Kersten||University of Minnesota||3/5/2006 - 3/10/2006|
|Seongjai Kim||Mississippi State University||3/5/2006 - 3/10/2006|
|Gisela Klette||University of Auckland||2/4/2006 - 5/31/2006|
|Reinhard Klette||University of Auckland||2/4/2006 - 5/31/2006|
|Jan J. Koenderink||University of Utrecht||3/1/2006 - 3/22/2006|
|Matthias Kurzke||University of Minnesota||9/1/2004 - 8/31/2006|
|Song-Hwa Kwon||University of Minnesota||8/30/2005 - 8/31/2007|
|Triet Le||University of California - Los Angeles||3/5/2006 - 3/10/2006|
|Chang-Ock Lee||KAIST||8/1/2005 - 7/31/2006|
|Stacey E. Levine||Duquesne University||12/30/2005 - 6/30/2006|
|Mike Lewicki||Carnegie Mellon University||3/5/2006 - 3/10/2006|
|Debra Lewis||University of Minnesota||7/15/2004 - 8/31/2006|
|Hstau Liao||University of Minnesota||9/2/2005 - 8/31/2007|
|Bradley J. Lucier||Purdue University||8/15/2005 - 6/30/2006|
|Alison Malcolm||University of Minnesota||9/1/2005 - 8/31/2006|
|Jitendra Malik||University of California - Berkeley||3/5/2006 - 3/9/2006|
|Laurence T. Maloney||New York University||3/5/2006 - 3/10/2006|
|Peter R. Massopust||Tuboscope Pipeline Services||3/24/2006 - 3/25/2006|
|Andrea Carlo Giuseppe Mennucci||Scuola Normale Superiore||3/23/2006 - 4/11/2006|
|James Money||University of Kentucky||3/5/2006 - 3/11/2006|
|Vassilios Morellas||Honeywell||3/3/2006 - 3/3/2006|
|Frank Natterer||Universitaet Muenster||3/4/2006 - 3/11/2006|
|Tian-Tsong Ng||Columbia University||3/5/2006 - 3/11/2006|
|Peter J. Olver||University of Minnesota||9/1/2005 - 6/30/2006|
|Pietro Perona||California Institute of Technology||3/5/2006 - 3/10/2006|
|Arlie O. Petters||Duke University||3/21/2006 - 3/23/2006|
|Peter Philip||University of Minnesota||8/22/2004 - 8/31/2006|
|Gregory J. Randall||Universidad de la Republica||8/18/2005 - 7/31/2006|
|Walter Richardson, Jr.||University of Texas - San Antonio||9/1/2005 - 6/30/2006|
|Alessandro Rizzi||University of Milan||3/4/2006 - 3/11/2006|
|Fadil Santosa||University of Minnesota||9/1/2005 - 6/30/2006|
|Guillermo R. Sapiro||University of Minnesota||9/1/2005 - 6/30/2006|
|P. Coleman Saunders||University of Minnesota||3/6/2006 - 3/10/2006|
|Arnd Scheel||University of Minnesota||7/15/2004 - 8/31/2006|
|Jin Keun Seo||Yonsei University||1/5/2006 - 6/5/2006|
|Tatiana Soleski||University of Minnesota||9/1/2005 - 8/31/2007|
|Vladimir Sverak||University of Minnesota||9/1/2005 - 6/30/2006|
|Carl Toews||University of Minnesota||9/1/2005 - 8/31/2007|
|Mao-Pei Tsui||University of Toledo||3/5/2006 - 3/11/2006|
|Alejandra Umana-Diaz||University of Puerto Rico at Mayaguez||3/5/2006 - 3/11/2006|
|Miguel Velez-Reyes||University of Puerto Rico at Mayaguez||3/5/2006 - 3/10/2006|
|Luminita Aura Vese||University of California - Los Angeles||3/5/2006 - 3/7/2006|
|Kevin Vixie||Los Alamos National Laboratory||3/2/2006 - 4/9/2006|
|Michael Wakin||Rice University||3/5/2006 - 3/10/2006|
|Jingyue Wang||Purdue University||9/1/2005 - 6/30/2006|
|Xiaoqiang Wang||University of Minnesota||9/1/2005 - 8/31/2007|
|Martin Welk||University of the Saarland||12/5/2005 - 3/10/2006|
|Todd Wittman||University of Minnesota||3/6/2006 - 3/10/2006|
|Boli Yarkulov||Samarkand State Architecture-Construction Institute||3/5/2006 - 3/11/2006|
|Ofer Zeitouni||University of Minnesota||9/1/2005 - 6/30/2006|
|Steven W. Zucker||Yale University||3/5/2006 - 3/10/2006|