Institute for Mathematics and its Applications, University of Minnesota, 114 Lind Hall, 207 Church Street SE, Minneapolis, MN 55455
2008-2009 Program
See http://www.ima.umn.edu/2008-2009 for a full description of the 2008-2009 program on Mathematics and Chemistry.
Professor Hans Weinberger is Professor Emeritus in the Department of Mathematics at the University of Minnesota. He obtained his Sc.D. (http://en.wikipedia.org/wiki/Sc.D.) with the thesis Fourier Transforms of Moebius Series, advised by Richard Duffin.
Professor Ingrid Daubechies of Princeton University will give the IMA Math Matters public lecture on October 29, 2008: http://www.ima.umn.edu/2008-2009/PUB10.29.08/. Professor Daubechies obtained her Ph.D. from the Vrije Universiteit Brussel.
The Institute for Mathematics and its Applications (IMA), in conjunction with the National Science Foundation Division of Mathematical Sciences, is organizing a one-day workshop on the new initiative called SOLAR: http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=503298. The goal of the workshop is to bring together a mixed audience of mathematicians, chemists, and materials scientists to discuss scientific challenges in high-efficiency harvesting, conversion, and storage of solar energy. The workshop aims to plant the seeds for the interdisciplinary collaborations necessary to address the grand challenges in solar energy conversion and storage, to educate attendees about the issues, and to provide a framework for detailed discussions of what needs to be done to overcome those challenges. The workshop will begin with four talks by speakers who are experts in the fundamental science of solar energy conversion and will culminate in the afternoon with panel discussions. For details, see http://www.ima.umn.edu/2008-2009/SW11.1.08/.
2:00pm DFT Math Session 2:00pm Algorithms Session
Jianzhong Su (University of Texas), Chair, Morning Session
Roger Lui (Worcester Polytechnic Institute) Chair, Afternoon Session
1. The representation of high-level information and low-level
data
2. The symbiotic linkage between information and data
3. The need to transform qualitative information into
quantitative data sets and vice versa
4. The need to think beyond learning for classification.
5. How mathematics can be useful to the aforementioned domains
of interest in conjunction with information integration and
data fusion.
2) What have been successful applications of manifold
clustering?
3) What is the role of topology, geometry, and statistics in manifold learning?
4) What are the open problems in manifold clustering?
IMA Annual Program Year Workshop
Mathematical and Algorithmic Challenges in Electronic Structure Theory
September 29 - October 3, 2008
Organizers: Eric Cances (CERMICS), Anna Krylov (University of Southern California), Juan C. Meza (Lawrence Berkeley Laboratory), John P. Perdew (Tulane University)
IMA Workshop
Differential Equations: Analysis, Applications & Computation
October 4, 2008
Organizers: Roger Lui (Worcester Polytechnic Institute), Willard Miller Jr. (University of Minnesota Twin Cities), Chehrzad Shakiban (University of Minnesota Twin Cities)
Board of Governors Meeting
October 12-13, 2008
IMA Workshop
Multi-Manifold Data Modeling and Applications
October 27-30, 2008
Organizers: Ronald DeVore (Texas A & M University), Tamara G. Kolda (Sandia National Laboratories), Gilad Lerman (University of Minnesota Twin Cities), Guillermo R. Sapiro (University of Minnesota Twin Cities)
Wednesday, October 1
All Day 9:00am-12:05pm Density Functional Theory for Physics and Chemistry Session (continued)
Chair: Heinz Siedentop (Ludwig-Maximilians-Universität München)
W9.29-10.3.08
8:30am-9:00am Coffee EE/CS 3-176
W9.29-10.3.08
9:00am-9:50am TBA Eberhard K. U. Gross (Freie Universität Berlin) EE/CS 3-180
W9.29-10.3.08
9:50am-10:20am Coffee EE/CS 3-176
W9.29-10.3.08
10:20am-11:10am Van der Waals density functional: theory, implementations, and applications David Langreth (Rutgers University) EE/CS 3-180
W9.29-10.3.08
11:15am-12:05pm New density functionals with broad applicability for
thermochemistry, thermochemical kinetics, noncovalent
interactions, transition metals, and spectroscopy Donald G. Truhlar (University of Minnesota) EE/CS 3-180
W9.29-10.3.08
12:05pm-2:00pm Lunch
W9.29-10.3.08
2:00pm-2:50pm Open mathematical issues in quantum chemistry: a personal
perspective Claude Le Bris (CERMICS) EE/CS 3-180
W9.29-10.3.08
2:50pm-3:20pm Coffee EE/CS 3-176
W9.29-10.3.08
3:20pm-4:10pm Exact embedding of local defects in crystals Mathieu Lewin (Université de Cergy-Pontoise) EE/CS 3-180
W9.29-10.3.08
Thursday, October 2
All Day
9:00am-11:55am DFT Math Session (continued)
Chair: Heinz Siedentop (Ludwig-Maximilians-Universität München)
Chair: François Gygi (University of California, Davis)
W9.29-10.3.08
8:30am-9:00am Coffee EE/CS 3-176
W9.29-10.3.08
9:00am-9:50am A linear scaling subspace iteration algorithm with
optimally localized non-orthogonal wave functions for Kohn-Sham
density functional theory Carlos J. Garcia-Cervera (University of California, Santa Barbara) EE/CS 3-180
W9.29-10.3.08
9:50am-10:20am Coffee EE/CS 3-176
W9.29-10.3.08
10:20am-11:10am Construction of exponentially localized Wannier functions Gianluca Panati (Università di Roma "La Sapienza") EE/CS 3-180
W9.29-10.3.08
11:15am-11:55am Second chances: The chair of the day will deliver a 30-minute overview of the field, followed by a discussion. Heinz Siedentop (Ludwig-Maximilians-Universität München) EE/CS 3-180
W9.29-10.3.08
11:55am-2:00pm Lunch
W9.29-10.3.08
2:00pm-2:50pm A direct constrained minimization algorithm for
solving the Kohn-Sham equations Chao Yang (Lawrence Berkeley National Laboratory) EE/CS 3-180
W9.29-10.3.08
2:50pm-3:20pm Coffee EE/CS 3-176
W9.29-10.3.08
3:20pm-4:10pm Augmented basis sets in finite cluster DFT James W. Davenport (Brookhaven National Laboratory) EE/CS 3-180
W9.29-10.3.08
6:30pm-8:30pm Workshop dinner at Caspian Bistro
2418 University Ave SE
Minneapolis, MN 55414
612-623-1133 W9.29-10.3.08
Friday, October 3
All Day Algorithms Session (continued)
Chair: François Gygi (University of California, Davis)
W9.29-10.3.08
8:30am-9:00am Coffee EE/CS 3-176
W9.29-10.3.08
9:00am-9:50am First-principles molecular dynamics for petascale computers François Gygi (University of California, Davis) EE/CS 3-180
W9.29-10.3.08
9:50am-10:20am Coffee EE/CS 3-176
W9.29-10.3.08
10:20am-11:10am Modern optimization tools and electronic structure calculations José Mario Martínez (State University of Campinas (UNICAMP)) EE/CS 3-180
W9.29-10.3.08
11:15am-12:05pm Partition-of-unity finite-element approach for large, accurate ab initio electronic structure calculations John E. Pask (Lawrence Livermore National Laboratory) EE/CS 3-180
W9.29-10.3.08
12:05pm-1:45pm Lunch
W9.29-10.3.08
1:25pm-2:25pm Dealing with stiffness in low-Mach number flows Caroline Gatti-Bono (Lawrence Livermore National Laboratory) Vincent Hall 570
IPS
1:45pm-2:35pm Mathematical and algorithmic challenges in the simulation of electronic structure and dynamics on quantum computers Alán Aspuru-Guzik (Harvard University) EE/CS 3-180
W9.29-10.3.08
2:35pm-3:05pm Coffee EE/CS 3-176
W9.29-10.3.08
3:05pm-3:45pm Second chances: The chair of the day will deliver a 30-minute overview of the field, followed by a discussion. François Gygi (University of California, Davis) EE/CS 3-180
W9.29-10.3.08
3:45pm-3:55pm Closing remark EE/CS 3-180
W9.29-10.3.08
Saturday, October 4
All Day A SYMPOSIUM IN HONOR OF HANS WEINBERGER'S 80th BIRTHDAY: DIFFERENTIAL EQUATIONS: ANALYSIS, APPLICATIONS & COMPUTATION
SW10.4.08
9:00am-9:30am Registration and Coffee EE/CS 3-176
SW10.4.08
9:30am-9:40am Welcome Peter J. Olver (University of Minnesota) EE/CS 3-180
SW10.4.08
9:40am-10:30am Population spread and the dynamics of biological invasions Mark Lewis (University of Alberta) EE/CS 3-180
SW10.4.08
10:30am-10:40am Group photo
SW10.4.08
10:40am-11:10am Coffee EE/CS 3-176
SW10.4.08
11:10am-12:00pm Spectral properties, regularity and optimal bounds for
solutions of elliptic boundary value problems Howard A. Levine (Iowa State University) EE/CS 3-180
SW10.4.08
12:00pm-2:00pm Lunch
SW10.4.08
2:00pm-2:50pm Numerical work of Hans F. Weinberger John E. Osborn (University of Maryland) EE/CS 3-180
SW10.4.08
2:50pm-3:20pm Coffee EE/CS 3-176
SW10.4.08
3:20pm-4:10pm Speed selection for traveling waves Donald G. Aronson (University of Minnesota) EE/CS 3-180
SW10.4.08
6:00pm-9:30pm Banquet at the Carlson Private Dining Room (Carlson School of Management)
321 19th Avenue South
Minneapolis, MN 55455
SW10.4.08
Monday, October 6
10:45am-11:15am Coffee break
Lind Hall 400
2:30pm-3:30pm Math 8994: Topics in classical and
quantum mechanics
Electronic structure calculations and molecular
simulation: A mathematical initiation
Eric Cances (CERMICS)
Claude Le Bris (CERMICS) Lind Hall 305
Tuesday, October 7
10:45am-11:15am Coffee break
Lind Hall 400
11:15am-12:15pm The Mathematical basis for molecular van der Waals
forces Ridgway Scott (University of Chicago) Lind Hall 305
PS
3:00pm-4:00pm Reading group for Professor Ridgway Scott's book "Digital Biology" Ridgway Scott (University of Chicago) Lind Hall 401
Wednesday, October 8
10:00am-11:00am A view of outstanding problems in density functional
theory Steven M. Valone (Los Alamos National Laboratory) Lind Hall 409
SMC
10:45am-11:15am Coffee break
Lind Hall 400
2:30pm-3:30pm Math 8994: Topics in classical and
quantum mechanics
Electronic structure calculations and molecular
simulation: A mathematical initiation
Eric Cances (CERMICS)
Claude Le Bris (CERMICS) Lind Hall 305
Thursday, October 9
10:45am-11:15am Coffee break Lind Hall 400
11:15am-12:15pm Panagiotis Stinis, University of Minnesota
TBA Vincent Hall 570
AMS
Friday, October 10
10:45am-11:35am Coffee break Lind Hall 400
Monday, October 13
10:45am-11:15am Coffee break Lind Hall 400
2:30pm-3:30pm Math 8994: Topics in classical and
quantum mechanics
Electronic structure calculations and molecular
simulation: A mathematical initiation
Eric Cances (CERMICS)
Claude Le Bris (CERMICS) Lind Hall 305
Tuesday, October 14
10:45am-11:15am Coffee break Lind Hall 400
11:15am-12:15pm Model reference control in the biological systems Yongfeng Li (University of Minnesota) Lind Hall 409
PS
12:15pm-1:30pm Postdoc lunch meeting Donald G. Aronson (University of Minnesota) Lind Hall 409
3:00pm-4:00pm Reading group for Professor Ridgway Scott's book "Digital Biology." Chapter 3 discussion Ridgway Scott (University of Chicago) Lind Hall 401
Wednesday, October 15
10:45am-11:15am Coffee break Lind Hall 400
4:00pm-5:00pm New efficient algorithms for a general blood tissue transport-metabolism model and stiff differential equations Dexuan Xie (University of Wisconsin) Lind Hall 409
SMC
Thursday, October 16
10:45am-11:15am Coffee break Lind Hall 400
11:15am-12:15pm Modeling dispersive fluid flow Ridgway Scott (University of Chicago) Vincent Hall 570
AMS
Friday, October 17
10:45am-11:15am Coffee break Lind Hall 400
1:25pm-2:25pm Virtual prototyping of hearing aids using numerical modeling and supercomputing Thomas H. Burns (Starkey Laboratories) Vincent Hall 570
IPS
3:45pm-4:45pm Seminar: Quantum vacuum energy in rectangular geometries
and the problems of moving beyond Stephen Fulling (Texas A & M University) Lind Hall 409
Monday, October 20
10:45am-11:15am Coffee break Lind Hall 400
Tuesday, October 21
10:45am-11:15am Coffee break Lind Hall 400
11:15am-12:15pm Born-Oppenheimer corrections near a Renner-Teller crossing Mark S. Herman (University of Minnesota) Lind Hall 305
PS
3:00pm-4:00pm Reading group for Professor Ridgway Scott's book "Digital Biology" Ridgway Scott (University of Chicago) Lind Hall 401
Wednesday, October 22
10:45am-11:15am Coffee break Lind Hall 400
Thursday, October 23
10:45am-11:15am Coffee break Lind Hall 400
2:30pm-3:30pm Waves and mixing Roman Schubert (University of Bristol) Vincent Hall 570
DSS
Friday, October 24
10:45am-11:15am Coffee break Lind Hall 400
Monday, October 27
All Day Morning Session Chair: Irina Rish (IBM)
Afternoon Session Chair: Richard Souvenir (University of North Carolina - Charlotte)
SW10.27-30.08
8:00am-8:45am Registration and coffee EE/CS 3-176
SW10.27-30.08
8:45am-9:00am Welcome to the IMA Fadil Santosa (University of Minnesota) EE/CS 3-180
SW10.27-30.08
9:00am-9:50am The best low-rank Tucker approximation of a tensor
Lars Eldén (Linköping University) EE/CS 3-180
SW10.27-30.08
9:55am-10:45am Detecting mixed dimensionality and density in noisy point clouds Gloria Haro Ortega (Universitat Politecnica de Catalunya) EE/CS 3-180
SW10.27-30.08
10:45am-11:15am Coffee EE/CS 3-176
SW10.27-30.08
11:15am-12:05pm A Geometric perspective on machine Learning Partha Niyogi (University of Chicago) EE/CS 3-180
SW10.27-30.08
12:05pm-2:00pm Lunch
SW10.27-30.08
2:00pm-2:50pm Manifold models for signal acquisition, compression, and
processing
Richard G. Baraniuk (Rice University) EE/CS 3-180
SW10.27-30.08
2:55pm-3:45pm Harmonic and multiscale analysis on low-dimensional data sets in high-dimensions Mauro Maggioni (Duke University) EE/CS 3-180
SW10.27-30.08
3:30pm-4:30pm Full-dimensional potential energy surfaces for small molecules Bastiaan J. Braams (Emory University) 283 Kolthoff Hall
3:45pm-4:00pm Group Photo
SW10.27-30.08
4:00pm-4:30pm Coffee EE/CS 3-176
SW10.27-30.08
4:30pm-5:30pm Large group discussion on What have we learned about manifold learning — what are
its implications for machine learning and numerical analysis? What
are open questions? What are successes? Where should we be
optimistic and where should we be pessimistic?
Partha Niyogi (University of Chicago) EE/CS 3-180
SW10.27-30.08
5:30pm-7:00pm Poster Session and Reception
Poster submissions welcome from all participants. Lind Hall 400
SW10.27-30.08
Compressive sampling reconstruction by subspace
refinement (poster)
Bradley K. Alpert (National Institute of Standards and Technology)
Analysis of scalar fields over point cloud data (poster) Frédéric Chazal (INRIA Saclay - Île-de-France )
Joint manifold models for collaborative inference (poster) Mark Andrew Davenport (Rice University)
The smashed filter for compressive classification (poster) Marco F. Duarte (Rice University)
3-D motion segmentation via robust subspace separation (poster) Ehsan Elhamifar (Johns Hopkins University)
Iteratively re-weighted least squares and vector valued data
restoration from lower dimensional samples (poster) Massimo Fornasier (Johann Radon Institute for Computational and Applied Mathematics )
Clustering on Riemannian manifolds (poster) Alvina Goh (Johns Hopkins University)
René Vidal (Johns Hopkins University)
Random projections for manifold learning (poster)
Chinmay Hegde (Rice University)
Representing and manipulating implicitly defined manifolds (poster) Michael E. Henderson (IBM)
Fast multiscale clustering and manifold identification (poster) Dan Kushnir (Yale University)
Supervised dictionary learning (poster) Julien Mairal (INRIA )
A supervised dimensionality reduction framework for
exponential-family variables (poster) Irina Rish (IBM)
Tensor approximation - structure and methods (poster)
Berkant Savas (Linköping University)
Structure determination of proteins using cryo-electron
microscopy (poster) Yoel Shkolnisky (Yale University)
Amit Singer (Princeton University)
High order consistency relations for classification and
de-noising of Cryo-EM images (poster) Yoel Shkolnisky (Yale University)
Amit Singer (Princeton University)
k-planes for classification (poster) Arthur Szlam (University of California, Los Angeles)
Manifold models for single- and multi-signal recovery (poster) Michael Wakin (Colorado School of Mines)
Using persistent homology to recover spatial information from
encounter traces (poster) Brenton Walker (Laboratory for Telecommunications Sciences)
Mixed data segmentation via lossy data compression (poster) John Wright (University of Illinois at Urbana-Champaign)
Orthant-wise gradient projection method for sparse reconstruction (poster) Qiu Wu (University of Texas)
High-dimensional multi-model estimation – its algebra,
statistics, and sparse representation (poster)
Allen Yang Yang (University of California, Berkeley)
Approximate nearest subspace search with
applications to pattern recognition (poster)
Lihi Zelnik-Manor (Technion-Israel Institute of Technology)
Tuesday, October 28
All Day Morning Session Chair: Lihi Zelnik-Manor (Technion-Israel Institute of Technology)
Afternoon Session Chair: Michael E. Henderson (IBM)
SW10.27-30.08
8:30am-9:00am Coffee EE/CS 3-176
SW10.27-30.08
9:00am-9:50am Multilinear (tensor) manifold data modeling M. Alex O. Vasilescu (SUNY) EE/CS 3-180
SW10.27-30.08
9:55am-10:45am Recovering sparsity in high dimensions Ronald DeVore (Texas A & M University) EE/CS 3-180
SW10.27-30.08
10:45am-11:15am Coffee EE/CS 3-176
SW10.27-30.08
11:15am-12:05pm Clustering linear and nonlinear manifolds René Vidal (Johns Hopkins University) EE/CS 3-180
SW10.27-30.08
12:05pm-2:00pm Lunch
SW10.27-30.08
2:00pm-2:50pm Instance optimal adaptive regression in high dimensions
Wolfgang Dahmen (RWTH Aachen) EE/CS 3-180
SW10.27-30.08
2:55pm-3:45pm Spectral and geometric methods in learning Mikhail Belkin (Ohio State University) EE/CS 3-180
SW10.27-30.08
3:45pm-4:15pm Coffee EE/CS 3-176
SW10.27-30.08
4:15pm-5:15pm Large group discussion on:
Tristan Nguyen (Office of Naval Research) EE/CS 3-180
SW10.27-30.08
6:30pm-8:30pm Workshop dinner Kikugawa at Riverplace
43 Main Street SE Minneapolis MN 55414
612-378-3006 SW10.27-30.08
Wednesday, October 29
All Day Morning Session Chair: Stacey E. Levine (Duquesne University)
Afternoon Session Chair: Ramesh Natarajan (IBM Research Division)
SW10.27-30.08
8:30am-9:00am Coffee EE/CS 3-176
SW10.27-30.08
9:00am-9:50am Interpolation of functions on R^n Charles L. Fefferman (Princeton University) EE/CS 3-180
SW10.27-30.08
9:55am-10:45am Multi-manifold data modeling via spectral curvature clustering
Gilad Lerman (University of Minnesota) EE/CS 3-180
SW10.27-30.08
10:45am-11:15am Coffee EE/CS 3-176
SW10.27-30.08
11:15am-12:05pm Visualization & matching for graphs and data Tony Jebara (Columbia University) EE/CS 3-180
SW10.27-30.08
12:05pm-2:00pm Lunch
SW10.27-30.08
2:00pm-2:50pm Topology and data Gunnar Carlsson (Stanford University) EE/CS 3-180
SW10.27-30.08
2:55pm-3:45pm Dense error correction via L1 minimization Yi Ma (University of Illinois at Urbana-Champaign) EE/CS 3-180
SW10.27-30.08
3:45pm-4:15pm Coffee EE/CS 3-176
SW10.27-30.08
4:15pm-5:15pm Large group discussion on Manifold Clustering
1) What have been recent advances on manifold clustering?
a) Algebraic approaches
b) Spectral approaches
c) Probabilistic approaches
a) clustering based on the dimensions of the manifolds
b) clustering based on geometry
c) clustering based on statistics
René Vidal (Johns Hopkins University) EE/CS 3-180
SW10.27-30.08
5:15pm-6:30pm Math matters public lecture reception Lind Hall 400
SW10.27-30.08
7:00pm-8:15pm Math matters public lecture: Surfing with wavelets Ingrid Daubechies (Princeton University) Willey Hall 125
SW10.27-30.08
Thursday, October 30
All Day Chair: Lek-Heng Lim (University of California, Berkeley)
SW10.27-30.08
8:30am-9:00am Coffee EE/CS 3-176
SW10.27-30.08
9:00am-9:50am CPOPT: Optimization for fitting CANDECOMP/PARAFAC models Tamara G. Kolda (Sandia National Laboratories) EE/CS 3-180
SW10.27-30.08
9:55am-10:45am Semi-supervised learning by multi-manifold separation Xiaojin Zhu (University of Wisconsin) EE/CS 3-180
SW10.27-30.08
10:45am-11:15am Coffee EE/CS 3-176
SW10.27-30.08
11:15am-12:05pm Mathematical problems suggested by Analog-to-Digital conversion Ingrid Daubechies (Princeton University) EE/CS 3-180
SW10.27-30.08
12:05pm-12:10pm Closing remark EE/CS 3-180
SW10.27-30.08
12:10pm-2:00pm Conference lunch at Loring Pasta Bar in Dinkytown
SW10.27-30.08
2:30pm-3:30pm Waves and mixing (part II) Roman Schubert (University of Bristol) Vincent Hall 570
DSS
Friday, October 31
10:45am-11:15am Coffee break Lind Hall 400
4:00pm-5:00pm Reading group for Professor Ridgway Scott's book "Digital Biology" Ridgway Scott (University of Chicago) Lind Hall 401
Event Legend:
AMS Applied Mathematics Seminar
DSS Dynamical Systems Seminar
IPS Industrial Problems Seminar
PS IMA Postdoc Seminar
SMC IMA Seminar on Mathematics and Chemistry
SW10.27-30.08 Multi-Manifold Data Modeling and Applications
SW10.4.08 Differential Equations: Analysis, Applications & Computation
W9.29-10.3.08 Mathematical and Algorithmic Challenges in Electronic Structure Theory
Bradley K. Alpert (National Institute of Standards and Technology)
Compressive sampling reconstruction by subspace
refinement (poster)
Abstract: Spurred by recent progress in compressive sampling methods, we develop a new reconstruction algorithm for the Fourier problem of recovering from noisy samples a linear combination of unknown frequencies embedded in a very large dimensional ambient space. The approach differs from both L1-norm minimization and orthogonal matching pursuit (and its variants) in that no basis for the ambient space is chosen a priori. The result is improved computational complexity. We provide numerical examples that demonstrate the method's robustness and efficiency.
Joint work with Yu Chen.
Donald G. Aronson (University of Minnesota)
Speed selection for traveling waves
Abstract: No Abstract
Alán Aspuru-Guzik (Harvard University)
Mathematical and algorithmic challenges in the simulation of electronic structure and dynamics on quantum computers
Abstract: The exact simulation of quantum mechanical systems on classical computers generally scales exponentially with the size of the system N. Using quantum computers, the computational resources required to carry out the simulation are polynomial. Our group has been working in the development and characterization of quantum computational algorithms for the simulation of chemical systems. We will give a tutorial on our algorithms for the simulation of molecular electronic structure, molecular properties and quantum dynamics, and will discuss the opportunities, open questions and challenges in the field of simulation of physical systems using quantum computers or dedicated quantum devices.
Richard G. Baraniuk (Rice University)
Manifold models for signal acquisition, compression, and
processing
Abstract: Joint work with Mark Davenport, Marco Duarte, Chinmay Hegde,
and Michael Wakin.
Compressive sensing is a new approach to data acquisition in which
sparse or compressible signals are digitized for processing not via
uniform sampling but via measurements using more general, even random,
test functions. In contrast with conventional wisdom, the new theory
asserts that one can combine "low-rate sampling" (dimensionality
reduction) with an optimization-based reconstruction process for
efficient and stable signal acquisition. While the growing
compressive sensing literature has focused on sparse or compressible
signal models, in this talk, we will explore the use of manifold
signal models. We will show that for signals that lie on a smooth
manifold, the number of measurements required for a stable
representation is proportional to the dimensionality of the manifold,
and only logarithmic in the ambient dimension — just as for sparse
signals. As an application, we consider learning and inference from
manifold-modeled data, such as detecting tumors in medical images,
classifying the type of vehicle in airborne surveillance, or
estimating the trajectory of an object in a video sequence. Specific
techniques we will overview include compressive approaches to the
matched filter (dubbed the "smashed filter"), intrinsic dimension
estimation for point clouds, and manifold learning algorithms. We
will also present a new approach based on the joint articulation
manifold (JAM) for compressive distributed learning, estimation, and
classification.
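As a rough illustration of the measurement model described above (and not the speakers' own code), the sketch below measures a sparse vector with a random Gaussian matrix and recovers it with orthogonal matching pursuit; the matrix sizes, sparsity level, and the choice of OMP in place of an optimization-based solver are all illustrative assumptions.
```python
# A minimal compressive-sensing sketch: a k-sparse signal in R^n is measured
# with a random Gaussian matrix and recovered greedily (a simple stand-in for
# the optimization-based reconstruction mentioned in the abstract).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                      # ambient dim, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal
Phi = rng.standard_normal((m, n)) / np.sqrt(m)                # random test functions
y = Phi @ x                                                    # low-rate measurements

def omp(Phi, y, k):
    """Greedy recovery: pick the column most correlated with the residual."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```
For manifold-modeled signals, the same random measurement matrix would be applied to samples lying on the manifold, with the sparse recovery step replaced by a manifold-aware estimator.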
Mikhail Belkin (Ohio State University)
Spectral and geometric methods in learning
Abstract: In recent years a variety of spectral and geometry-based methods have become popular for various tasks of machine learning,
such as dimensionality reduction, clustering and semi-supervised learning. These methods use a model of data as a probability distribution on a manifold, or,
more generally a mixture of manifolds. In the talk I will discuss some of these methods and recent theoretical results on their convergence.
I will also talk about how spectral methods can be used to learn mixtures of Gaussian distributions, which may be considered the simplest case
of multi-manifold data modeling.
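As a toy sketch of one such spectral method (Laplacian eigenmaps), and not code from the talk, the snippet below builds a Gaussian-weighted nearest-neighbor graph and embeds the data with the low eigenvectors of the normalized graph Laplacian; the kernel width and neighborhood size are illustrative choices.
```python
# A minimal Laplacian-eigenmaps sketch for nonlinear dimensionality reduction.
import numpy as np

def laplacian_eigenmaps(X, n_components=2, n_neighbors=10, sigma=1.0):
    """Embed rows of X using eigenvectors of the normalized graph Laplacian."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)                               # drop self-similarity
    # keep only the k nearest neighbors, symmetrized
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    mask = np.zeros_like(W, dtype=bool)
    rows = np.repeat(np.arange(n), n_neighbors)
    mask[rows, idx.ravel()] = True
    W = np.where(mask | mask.T, W, 0.0)
    deg = W.sum(axis=1)
    L = np.eye(n) - (W / np.sqrt(deg)[:, None]) / np.sqrt(deg)[None, :]
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:n_components + 1]      # skip the trivial eigenvector

# toy usage: points on a noisy circle embedded in R^5
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.c_[np.cos(t), np.sin(t), rng.normal(scale=0.05, size=(200, 3))]
Y = laplacian_eigenmaps(X)
print(Y.shape)   # (200, 2)
```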
Bastiaan J. Braams (Emory University)
Full-dimensional potential energy surfaces for small molecules
Abstract: (This will be an informal talk in Donald Truhlar's group meeting.)
Studies of molecular dynamics and molecular spectroscopy generally
start from the Born-Oppenheimer approximation and require some form of
analytical potential energy surface fitted to ab initio electronic
structure calculations. We have used computational invariant theory
and the MAGMA computer algebra system as an aid to develop
representations for the potential energy and dipole moment surfaces
that are fully invariant under permutations of like nuclei. We
express the potential energy surface in terms of internuclear
distances using basis functions that are manifestly invariant. A
dipole moment is represented with use of effective charges at
positions of the nuclei, which must transform as a covariant, rather
than as an invariant, under permutations of like nuclei.
Malonaldehyde (CHOHCHCHO) provides an illustrative application. The
associated molecular permutational symmetry group is of order 288
(4!3!2!) and the use of full permutational symmetry makes it possible
to obtain a compact representation for the surface.
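The following toy sketch only hints at the construction described above (the actual work uses computational invariant theory and the MAGMA system): it builds permutation-invariant polynomial features in Morse variables of the internuclear distances by averaging monomials over permutations of like nuclei. The molecule, grouping of like nuclei, and parameters are hypothetical.
```python
# Permutationally invariant features for a hypothetical A2B molecule.
import itertools
import numpy as np

def morse_variables(coords, alpha=1.0):
    """y_ij = exp(-alpha * r_ij) for all pairs i < j of nuclei."""
    n = len(coords)
    return {(i, j): np.exp(-alpha * np.linalg.norm(coords[i] - coords[j]))
            for i in range(n) for j in range(i + 1, n)}

def invariant_features(coords, like_groups, max_degree=2, alpha=1.0):
    """Average monomials in the Morse variables over permutations of like nuclei."""
    n = len(coords)
    y = morse_variables(coords, alpha)
    pairs = sorted(y)
    # all permutations that only shuffle nuclei within the same group
    perms = []
    for pieces in itertools.product(*(itertools.permutations(g) for g in like_groups)):
        p = list(range(n))
        for group, perm in zip(like_groups, pieces):
            for a, b in zip(group, perm):
                p[a] = b
        perms.append(p)
    feats = []
    for degree in range(1, max_degree + 1):
        for combo in itertools.combinations_with_replacement(pairs, degree):
            # symmetrize the monomial prod_{(i,j) in combo} y_ij over the group
            total = sum(np.prod([y[tuple(sorted((p[i], p[j])))] for i, j in combo])
                        for p in perms)
            feats.append(total / len(perms))
    return np.array(feats)

# toy geometry: two like nuclei (0, 1) and one distinct nucleus (2)
coords = np.array([[0.96, 0.0, 0.0], [-0.24, 0.93, 0.0], [0.0, 0.0, 0.0]])
print(invariant_features(coords, like_groups=[[0, 1], [2]]))
```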
Thomas H. Burns (Starkey Laboratories)
Virtual prototyping of hearing aids using numerical modeling and supercomputing
Abstract: In an effort to efficiently manufacture quality products, numerical models and empirical measurements are used to predict (virtually) the performance of a hearing aid. Finite element analysis is used to study multi-physics processes such as thermo-mechanically induced stress due to heat flow from soldering, acoustic and structural interactions due to transducer vibration, and mechanical shock failure due to drop testing. Following a synopsis of hearing-aid anatomy, the presentation will show numerous animations depicting results from the virtual prototypes.
Dr. Burns received a Ph.D. in engineering acoustics from Penn State, specializing in signal processing of acoustical holography measurements. He joined Starkey Labs in November of 1999, following periods as a consultant in concert hall acoustics at Kirkegaard Associates, and a senior design engineer of condenser microphones at Shure. Currently, he is the Director of Starkey’s Applied Technology and Research Group, and serves on the Hearing Aid Measurement Standards committee for ANSI Bioacoustics (S3/WG48). By day, he directs an advanced development team of engineers at Starkey. By night, he changes diapers and lulls his kids to sleep by playing Chopin Nocturnes on his concert grand.
Eric Cances (CERMICS), Claude Le Bris (CERMICS)
Math 8994: Topics in classical and
quantum mechanics
Electronic structure calculations and molecular
simulation: A mathematical initiation
Abstract: Meeting time: Mondays and Wednesdays 2:30 ‐ 3:30 pm Room 305 Lind Hall.
The course will present the basics of the quantum theory commonly used in computational chemistry for electronic structure calculations, and the basics of molecular dynamics simulations. The perspective is definitely mathematical. After the presentation of the models, their mathematical properties will be examined, and the state of the art of the mathematical knowledge will be mentioned. Numerical analysis and scientific computing questions will also be thoroughly investigated.
The course is intended for students and researchers with a solid background in mathematical analysis and numerical analysis. Familiarity with the models in molecular simulation in the broad sense is not needed. The purpose of the course is to introduce the audience to the field.
This is a 1-3 credit course offered through the School of Mathematics. Non-student participants are welcome to audit without registering. Note that no particular knowledge of quantum mechanics or classical mechanics will be necessary: the basic elements will be presented.
For additional information and course registration, please contact Markus Keel (keel@math.umn.edu).
Gunnar Carlsson (Stanford University)
Topology and data
Abstract: The nature and quantity of data arising out of scientific applications requires novel methods, both for exploratory analysis as well as analysis of significance and validation. One set of new methods relies on ideas and methods from topology. The study of data sets requires an extension of standard methods, which we refer to as persistent topology. We will survey this area, and show that algebraic methods can be applied both to the exploratory and the validation side of investigations, and show some examples.
Frédéric Chazal (INRIA Saclay - Île-de-France )
Analysis of scalar fields over point cloud data (poster)
Abstract: (Joint work with L. Guibas, S. Oudot and P. Skraba - to appear in proc.
SODA'09).
Given a real-valued function f defined over some metric space
X, is it possible to recover some structural information about
f from the sole information of its values at a finite subset L
of sample points, whose pairwise distances in
X are given? We provide a positive answer to this question. More
precisely, taking advantage of recent advances on the front of
stability for persistence diagrams, we introduce a novel algebraic
construction, based on a pair of nested families of simplicial
complexes built on top of the point cloud L, from which the
persistence diagram of f can be faithfully approximated. We derive
from this construction a series of algorithms for the analysis of
scalar fields from point cloud data. These algorithms are simple and
easy to implement, have reasonable complexities, and come
with theoretical guarantees. To illustrate the generality and
practicality of the approach, we also present experimental
results obtained in various applications, ranging from clustering to
sensor networks.
Wolfgang Dahmen (RWTH Aachen)
Instance optimal adaptive regression in high dimensions
Abstract: Joint work with Peter Binev, Ron DeVore,
and Philipp Lamby.
This talk addresses the recovery of functions of a large number of variables from point clouds in the context of supervised learning. Our estimator is based on two conceptual pillars.
First, the notion of sparse occupancy
trees is shown to warrant efficient computations even for a very large number of variables. Second, a properly adjusted adaptive tree-approximation scheme is shown to ensure instance optimal performance.
By this we mean the rapid decay (with increasing sample size) of
the probability that the estimator deviates from
the regression function (in a certain natural norm) by more than
the error of best n-term approximation in the sparse tree setting.
Ingrid Daubechies (Princeton University)
Math matters public lecture: Surfing with wavelets
Abstract: Wavelets are used in the analysis of sounds and images,
as well as in many other applications. The wavelet transform provides a
mathematical analog to a music score: just as the score tells a musician
which notes to play when, the wavelet analysis of a sound takes things
apart into elementary units with a well defined frequency (which note?)
and at a well defined time (when?). For images wavelets allow you to
first describe the coarse features with a broad brush, and then later to
fill in details. This is similar to zooming in with a camera: first you
can see that the scene is one of shrubs in a garden, then you
concentrate on one shrub and see that it bears berries, then, by zooming
in on one branch, you find that this is a raspberry bush. Because
wavelets allow you to do a similar thing in more mathematical terms, the
wavelet transform is sometimes called a "mathematical microscope."
Wavelets are used by many scientists for many different applications.
Outside science as well, wavelets are finding their uses: wavelet
transforms are an integral part of the image compression standard
JPEG2000.
The talk will start by explaining the basic principles of wavelets,
which are very simple. Then they will be illustrated with some examples,
including an explanation of image compression.
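A minimal numerical illustration of the "broad brush, then details" idea, using one level of the Haar wavelet transform (the simplest wavelet, chosen here purely for illustration and not taken from the lecture):
```python
# One level of the Haar transform: coarse averages plus detail coefficients.
import numpy as np

def haar_step(x):
    """Split an even-length signal into local averages and local differences."""
    x = np.asarray(x, dtype=float)
    coarse = (x[0::2] + x[1::2]) / np.sqrt(2)   # the broad brush
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # the fine detail
    return coarse, detail

def haar_step_inverse(coarse, detail):
    x = np.empty(2 * len(coarse))
    x[0::2] = (coarse + detail) / np.sqrt(2)
    x[1::2] = (coarse - detail) / np.sqrt(2)
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
c, d = haar_step(signal)
print("coarse:", c)
print("detail:", d)
print("reconstruction ok:", np.allclose(haar_step_inverse(c, d), signal))
```
Repeating the step on the coarse part yields the multiscale decomposition; discarding small detail coefficients is the basic mechanism behind wavelet image compression.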
Ingrid Daubechies (Princeton University)
Mathematical problems suggested by Analog-to-Digital conversion
Abstract: In Analog-to-Digital conversion, continuously varying functions (e.g.
the output of a microphone) are transformed into digital sequences from
which one then hopes to be able to reconstruct a close approximation to
the original function. The functions under consideration are typically
band-limited (i.e. their Fourier transform is zero for frequencies
higher than some given value, called the bandwidth); such functions
are completely determined by samples taken at a rate determined
by their bandwidth. These samples then have to be approximated by
a finite binary representation. Surprisingly, in many practical applications
one does not just replace each sample by a truncated binary expansion;
for various technical reasons, it is more attractive to sample much more
often and to replace each sample by just 1 or -1, chosen judiciously.
In this talk, we shall see what the attractions are of this quantization
scheme, and discuss several interesting mathematical questions suggested
by this approach.
This will be a review of work by many others as well as myself. It is also a case
study of how continuous interaction with engineers helped to shape and
change the problems as we tried to make them more precise.
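A minimal sketch of the "replace each sample by just 1 or -1" idea, using a first-order sigma-delta quantizer; the oversampling rate, test signal, and reconstruction filter are illustrative assumptions rather than anything discussed in the talk.
```python
# First-order sigma-delta: the running quantization error is fed back so that a
# low-pass average of the one-bit stream tracks the original samples.
import numpy as np

def sigma_delta(samples):
    """One bit per (oversampled) input sample."""
    bits, u = np.empty_like(samples), 0.0
    for i, x in enumerate(samples):
        bits[i] = 1.0 if u + x >= 0 else -1.0   # quantize the accumulated state
        u = u + x - bits[i]                      # carry the error forward
    return bits

# oversampled band-limited signal with amplitude below 1
t = np.linspace(0, 1, 4000, endpoint=False)
x = 0.5 * np.sin(2 * np.pi * 5 * t)
bits = sigma_delta(x)

# crude reconstruction: a moving average (low-pass filter) of the bit stream
kernel = np.ones(64) / 64
x_hat = np.convolve(bits, kernel, mode="same")
print("max reconstruction error:", np.max(np.abs(x - x_hat)[100:-100]))
```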
James W. Davenport (Brookhaven National Laboratory)
Augmented basis sets in finite cluster DFT
Abstract: Density functional theory provides a systematic approach to the electronic structure of atoms, molecules and solids. It requires the repeated solution of single-particle Schrodinger equations in a self-consistent loop. Most techniques involve some sort of basis set, the most common ones being plane waves or Gaussians. In crystalline materials the most accurate solutions involve augmented basis sets. These combine numerical solutions of the Schrodinger equation in regions near the atomic nuclei with so-called ‘tail functions’ in more distant regions. In the linear augmented plane wave (LAPW) method the tail functions are plane waves. This formulation has been incorporated into the WIEN2k code. With the current interest in nanoscale clusters, biomolecules, and other finite systems it is desirable to have a comparably accurate method for these. While it is always possible to build supercells, it is often convenient to have completely localized functions which eliminate interaction between periodic images. We recently proposed a finite cluster version of the linear augmented Slater-type orbital (LASTO) method [1]. STO’s have the correct behavior at large distances and possess an addition theorem – they can be re-expanded about other sites with analytic coefficients. We solve the Poisson equation by replacing the spherical part of the density near the nuclei with a smooth pseudo-density. The full potential, including the non-spherical piece, is then solved on a grid. Examples of small clusters and comparison with the Gaussian based program NWChem will be given.
[1] K. S. Kang, J. W. Davenport, J. Glimm, D. E. Keyes, and M. McGuigan, submitted to J. Computational Chemistry.
Mark Andrew Davenport (Rice University)
Joint manifold models for collaborative inference (poster)
Abstract: We introduce a new framework for collaborative inference and
efficient fusion of manifold-modeled data. We formulate the
notion of a joint manifold model for signal ensembles, and
using this framework we demonstrate the superior performance of
joint processing techniques for a range of tasks including
detection, classification, parameter estimation, and manifold
learning. We then exploit recent results concerning random
projections of low-dimensional manifolds to develop an
efficient framework for distributed data fusion.
As an example, we extend the smashed filter – a maximum
likelihood, model-based estimation and classification algorithm
that exploits random measurements – to a distributed setting.
Bounds for the robustness of this scheme against
measurement noise are derived. We demonstrate the utility of
our framework in a variety of settings, including large scale
camera networks, networks of acoustic sensors, and multi-modal
sensors.
This is joint work with Richard Baraniuk, Marco Duarte, and
Chinmay Hegde.
Ronald DeVore (Texas A & M University)
Recovering sparsity in high dimensions
Abstract: We assume that we are in $R^N$ with $N$ large. The first problem we consider is that there is a function $f$ defined on $\Omega:=[0,1]^N$ which is a function of just $k$ of the coordinate variables: $f(x_1,\dots,x_N)=g(x_{j_1},\dots,x_{j_k})$ where $j_1,\dots,j_k$ are not known to us. We want to approximate $f$ from some of its point values. We first assume that we are allowed to choose a collection of points in $\Omega$ and ask for the values of $f$ at these points. We are interested in what error we can achieve in terms of the number of points when we assume some smoothness of $g$ in the form of Lipschitz or higher smoothness conditions.
We shall consider two settings: adaptive and non-adaptive. In the adaptive setting, we are allowed to ask for a value of $f$ and then on the basis of the answer we get we can ask for another value. In the non-adaptive setting, we must prescribe the $m$ points in advance.
A second problem we shall consider is when $f$ is not necessarily only a function of $k$ variables but it can be approximated to some tolerance $\epsilon$ by such a function. We seek again sets of points where the knowledge of the values of $f$ at such points will allow us to approximate $f$ well.
Our main consideration is to derive results which are not severely affected by the size of $N$, i.e., do not fall victim to the curse of dimensionality. We shall see that this is possible.
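A naive illustration of the query model (deliberately much cruder than the results described above, and not the talk's method): identify which coordinates a black-box function actually depends on by comparing its values at point pairs that differ in a single coordinate.
```python
# Detecting the k active coordinates of f(x_1, ..., x_N) from point queries.
import numpy as np

def active_coordinates(f, N, n_probes=20, tol=1e-8, rng=None):
    """Return indices j where changing x_j changes f, using 2*N*n_probes-ish queries."""
    rng = rng or np.random.default_rng(0)
    active = set()
    for _ in range(n_probes):
        base = rng.random(N)
        f_base = f(base)
        for j in range(N):
            probe = base.copy()
            probe[j] = rng.random()          # re-draw only coordinate j
            if abs(f(probe) - f_base) > tol:
                active.add(j)
    return sorted(active)

# hidden example: f depends only on coordinates 3 and 17 out of N = 50
f = lambda x: np.sin(x[3]) + x[17] ** 2
print(active_coordinates(f, N=50))          # expected: [3, 17]
```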
Marco F. Duarte (Rice University)
The smashed filter for compressive classification (poster)
Abstract: We propose a framework for exploiting the same measurement techniques used in emerging compressive sensing systems in the new setting of compressive classification. The first part of the framework maps traditional maximum likelihood hypothesis testing into the compressive domain; we find that the number of measurements required for a given classification performance level does not depend on the sparsity or compressibility of the signal but only on the noise level. The second part of the framework applies the generalized maximum likelihood method to deal with unknown transformations such as the translation, scale, or viewing angle of a target object. Such a set of transformed signals forms a low-dimensional, nonlinear manifold in the high-dimensional image space. We exploit recent results that show that random projections of a smooth manifold result in a stable embedding of the manifold in the lower-dimensional space. Non-differentiable manifolds, prevalent in imaging applications, are handled through the use of multiscale random projections that perform implicit regularization. We find that the number of measurements required for a given classification performance level grows linearly in the dimensionality of the manifold but only logarithmically in the number of pixels/samples and image classes.
This is joint work with Mark Davenport, Michael Wakin and Richard Baraniuk.
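A toy numerical sketch of the compressive classification idea (not the authors' code): the same random measurement matrix is applied to the unknown signal and to each candidate template, and the class is chosen by the nearest template in the measurement domain. The templates and sizes below are invented for illustration.
```python
# "Smashed filter"-style classification from random compressive measurements.
import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 40                                   # signal length, measurements

# hypothetical class templates: shifted pulses
def template(shift, width=50):
    x = np.zeros(n)
    x[shift:shift + width] = 1.0
    return x

templates = [template(s) for s in (100, 400, 700)]

Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # shared random measurement matrix
x_true = templates[1] + 0.1 * rng.standard_normal(n)
y = Phi @ x_true                                  # compressive measurements of the scene

# generalized maximum likelihood over classes: nearest template in measurement space
dists = [np.linalg.norm(y - Phi @ t) for t in templates]
print("predicted class:", int(np.argmin(dists)))  # expected: 1
```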
Lars Eldén (Linköping University)
The best low-rank Tucker approximation of a tensor
Abstract: The problem of computing the best multilinear low-rank approximation of a tensor can be formulated as an opimization problem on a product of Grassmann manifolds (by multilinear low-rank approximation we understand an approximation in the sense of the Tucker model). In the Grassmann approach we want to find (bases of) subspaces that represent the low-rank approximation. We have recently derived a Newton algorithm for this problem, where a quadratic model on the tangent space of the manifold is used. From the Grassmann Hessian we derive conditions for a local optimum. We also discuss the sensitivity of the subspaces to perturbations of the tensor elements.
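For orientation, the snippet below computes a truncated higher-order SVD, a standard starting point for the multilinear low-rank (Tucker) approximation discussed above; the Grassmann-Newton refinement itself is not shown, and the tensor sizes and ranks are illustrative.
```python
# Truncated HOSVD: factor matrices, core tensor, and low multilinear-rank reconstruction.
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_approx(T, ranks):
    U = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        U.append(u[:, :r])                         # leading left singular vectors
    core = T
    for mode, u in enumerate(U):                   # project onto the subspaces
        core = np.moveaxis(np.tensordot(u.T, core, axes=(1, mode)), 0, mode)
    approx = core
    for mode, u in enumerate(U):                   # map back to the original space
        approx = np.moveaxis(np.tensordot(u, approx, axes=(1, mode)), 0, mode)
    return U, core, approx

rng = np.random.default_rng(0)
T = rng.standard_normal((10, 12, 14))
U, core, T_hat = hosvd_approx(T, ranks=(3, 3, 3))
print("relative error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```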
Ehsan Elhamifar (Johns Hopkins University)
3-D motion segmentation via robust subspace separation (poster)
Abstract: We consider the problem of segmenting multiple rigid-body motions in a video sequence from tracked feature point trajectories. Using the affine camera model, this motion segmentation problem can be cast as the problem of segmenting samples drawn from a union of linear subspaces of dimension two, three or four. However, due to limitations of the tracker, occlusions and the presence of nonrigid objects in the scene, the obtained motion trajectories may contain grossly mistracked features, missing entries, or not correspond to any valid motion model.
In this poster, we present a combination of robust subspace separation schemes that can deal with all of these practical issues in a unified framework. For complete uncorrupted trajectories, we examine approaches that try to harness the subspace structure of the data either globally (Generalized PCA) or by minimizing a lossy coding length (Agglomerative Lossy Coding). For incomplete or corrupted trajectories, we develop methods based on PowerFactorization or L1-minimization. The former method fills in missing entries using a linear projection onto a low-dimensional space. The latter method draws strong connections between lossy compression, rank minimization, and sparse representation. We compare our approach to other methods on a database of 167 motion sequences with full motions, independent motions, degenerate motions, partially dependent motions, missing data, outliers, etc. Our results are on par with state-of-the-art results, and in many cases exceed them.
Charles L. Fefferman (Princeton University)
Interpolation of functions on R^n
Abstract: The talk explains joint work with Bo'az Klartag, solving the following problem and several variants.
Let f be a real-valued function on an N-point set E in R^n. Compute efficiently a function F on the whole of R^n that agrees with f on E and has C^m norm close to least possible.
Massimo Fornasier (Johann Radon Institute for Computational and Applied Mathematics )
Iteratively re-weighted least squares and vector valued data
restoration from lower dimensional samples (poster)
Abstract: We present the analysis of a superlinear convergent algorithm for
L1-minimization based on an iterative reweighted least squares. We show
improved performances in compressed sensing.
A similar algorithm is then applied for the efficient solution of a
system of singular PDEs for image recolorization in a relevant real-life
problem of art restoration.
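A bare-bones IRLS sketch for L1 minimization with linear equality constraints, included only to fix ideas; the weight smoothing and its update schedule below are illustrative choices, not the rule whose superlinear convergence is analyzed in the poster.
```python
# Iteratively re-weighted least squares for min ||x||_1 subject to A x = b.
import numpy as np

def irls_l1(A, b, n_iter=50, eps=1.0):
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # least-squares initialization
    for _ in range(n_iter):
        w = 1.0 / np.sqrt(x**2 + eps**2)           # weights roughly 1/|x_i|
        D = np.diag(1.0 / w)
        # closed-form minimizer of the weighted quadratic subject to A x = b
        x = D @ A.T @ np.linalg.solve(A @ D @ A.T, b)
        eps = max(eps * 0.7, 1e-10)                # slowly tighten the smoothing
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 60, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true
x_hat = irls_l1(A, b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```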
Carlos J. Garcia-Cervera (University of California, Santa Barbara)
A linear scaling subspace iteration algorithm with
optimally localized non-orthogonal wave functions for Kohn-Sham
density functional theory
Abstract: We present a new linear scaling method for electronic structure computations in the context of Kohn-Sham density functional theory (DFT). The method is based on a subspace iteration, and takes advantage of the non-orthogonal formulation of the Kohn-Sham functional, and the improved localization properties of non-orthogonal wave functions. We demonstrate the efficiency of the algorithm for practical applications by performing fully three-dimensional computations of the electronic density of alkane chains.
This is joint work with Jianfeng Lu, Yulin Xuan, and Weinan E, at Princeton University.
Caroline Gatti-Bono (Lawrence Livermore National Laboratory)
Dealing with stiffness in low-Mach number flows
Abstract: Numerical simulation of low-Mach number flows presents challenges because of the stiffness introduced by the disparity of time scales between acoustic and convective motions. In particular, the acoustic, high-speed modes often contain little energy but determine the time step for explicit schemes through the CFL condition. A natural idea is therefore to separate the acoustic modes from the rest of the solution and to treat them implicitly, while the advective motions are treated explicitly or semi-implicitly.
In this talk, we present a numerical allspeed algorithm that respects low-Mach number asymptotics but is suitable for any Mach number. We use a splitting method based on a Hodge/Helmholtz decomposition of the velocities to separate the fast acoustic dynamics from the slower anelastic dynamics. The acoustic waves are treated implicitly, while the advection is treated semi-implicitly. The splitting mechanism is demonstrated on two applications. The first application is a combustive flow, where Euler equations are completed by an enthalpy evolution equation. Then, we present a stratified atmospheric flow where the presence of gravity waves adds one more degree of complexity. Benchmark results are presented that compare well with the literature.
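A toy FFT-based Hodge/Helmholtz split on a doubly periodic grid, illustrating the kind of velocity decomposition mentioned above (the actual algorithm works in far more general settings); the test field and grid are invented for the example.
```python
# Split a periodic 2D velocity field into divergence-free and curl-free parts.
import numpy as np

def helmholtz_split(u, v, dx=1.0, dy=1.0):
    ny, nx = u.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)                 # KX varies along columns, KY along rows
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                               # avoid 0/0 for the mean mode
    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    div_hat = 1j * (KX * u_hat + KY * v_hat)     # Fourier transform of div(u, v)
    # curl-free part = gradient of the potential solving Laplace(phi) = div(u, v)
    uc = np.real(np.fft.ifft2(-1j * KX * div_hat / k2))
    vc = np.real(np.fft.ifft2(-1j * KY * div_hat / k2))
    return u - uc, v - vc, uc, vc                # (solenoidal u, v, potential u, v)

# invented test field on [0, 2*pi)^2: a vortex plus a gradient (compressive) field
n = 64
h = 2 * np.pi / n
x = np.arange(n) * h
X, Y = np.meshgrid(x, x)
u = -np.sin(Y) + np.cos(X)
v = np.sin(X) + np.cos(Y)
ud, vd, uc, vc = helmholtz_split(u, v, dx=h, dy=h)
energy = np.sum(u**2 + v**2)
print("energy fractions (div-free, curl-free):",
      np.sum(ud**2 + vd**2) / energy, np.sum(uc**2 + vc**2) / energy)
```
In an all-speed scheme the curl-free (acoustic) piece would then be advanced implicitly while the solenoidal piece is treated semi-implicitly.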
Alvina Goh (Johns Hopkins University), René Vidal (Johns Hopkins University)
Clustering on Riemannian manifolds (poster)
Abstract: Over the past few years, various techniques have been developed for learning a low-dimensional representation of a nonlinear manifold embedded in a high-dimensional space. Unfortunately, most of these techniques are limited to the analysis of a single connected nonlinear submanifold of a Euclidean space and suffer from degeneracies when applied to linear manifolds (subspaces).
This work proposes a novel algorithm for clustering data sampled from multiple submanifolds of a Riemannian manifold. First, we learn a representation of the data using generalizations of local nonlinear dimensionality reduction algorithms from Euclidean to Riemannian spaces. Such generalizations exploit geometric properties of the Riemannian space, particularly its Riemannian metric. Then, assuming that the data points from different groups are separated, we show that the null space of a matrix built from the local representation gives the segmentation of the data. However, this method can fail when the data points are drawn from a union of linear manifolds, because M contains additional vectors in its null space. In this case, we propose an alternative method for computing M, which avoids the aforementioned degeneracies, thereby resulting in the correct segmentation. The final result is a simple linear algebraic algorithm for simultaneous nonlinear dimensionality reduction and clustering of data lying in multiple linear and nonlinear manifolds.
We present several applications of our algorithm to computer vision problems such as texture clustering, segmentation of rigid body motions, segmentation of dynamic textures, segmentation of diffusion MRI. Our experiments show that our algorithm performs on par with state-of-the-art techniques that are specifically designed for such segmentation problems.
Eberhard K. U. Gross (Freie Universität Berlin)
TBA
Abstract: No Abstract
François Gygi (University of California, Davis)
First-principles molecular dynamics for petascale computers
Abstract: First-principles molecular dynamics (FPMD) is a simulation method that combines molecular dynamics with the accuracy of a quantum mechanical description of electronic structure. It is increasingly used to address problems of structure determination, statistical mechanics, and electronic structure of solids, liquids and nanoparticles. The high computational cost of this approach makes it a good candidate for use on large-scale computers. In order to achieve high performance on terascale and petascale computers, current FPMD algorithms have to be reexamined and redesigned. We present new, large-scale parallel algorithms developed for FPMD simulations on computers including O(10^3) to O(10^4) CPUs. Examples include the problem of simultaneous diagonalization of symmetric matrices used in the calculation of Maximally Localized Wannier Functions (MLWFs), and the Orthogonal Procrustes problem that arises in the context of Born-Oppenheimer molecular dynamics simulations.
Supported by NSF-OCI PetaApps through grant 0749217.
François Gygi (University of California, Davis)
Second chances: The chair of the day will deliver a 30 minutes overview of the field followed by a discussion.
Abstract: No Abstract
Gloria Haro Ortega (Universitat Politecnica de Catalunya)
Detecting mixed dimensionality and density in noisy point clouds
Abstract: We present a statistical model to learn mixed dimensionalities and densities present in stratifications, that is, mixtures of manifolds representing different characteristics and complexities in the data set. The basic idea relies on modeling the high dimensional sample points as a process of translated Poisson mixtures, with regularizing restrictions, leading to a model which includes the presence of noise. The translated Poisson distribution is useful to model a noisy counting process, and it is derived from the noise-induced translation of a regular Poisson distribution. By maximizing the log-likelihood of the process counting the points falling into a local ball, we estimate the local dimension and density. We show that the sequence of all possible local countings in a point cloud formed by samples of a stratification can be modeled by a mixture of different translated Poisson distributions, thus allowing the presence of mixed dimensionality and densities in the same data set. A partition of the points in different classes according to both dimensionality and density is obtained, together with an estimation of these quantities for each class.
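A much simpler relative of this idea (not the translated-Poisson model of the abstract): estimate a local dimension at each point from how the neighbor count inside a ball grows with its radius, since on a d-dimensional manifold the count scales roughly like r^d. The radii and test data below are illustrative.
```python
# Local dimension estimates from ball counts in a mixed-dimensional point cloud.
import numpy as np

def local_dimension(X, radii):
    """Least-squares slope of log N(r) versus log r at each sample point."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    logs = np.log(radii)
    dims = []
    for i in range(X.shape[0]):
        counts = np.array([(d2[i] <= r**2).sum() - 1 for r in radii])  # exclude the point itself
        slope = np.polyfit(logs, np.log(np.maximum(counts, 1)), 1)[0]
        dims.append(slope)
    return np.array(dims)

rng = np.random.default_rng(0)
# mixed-dimensional point cloud in R^3: a curve (d = 1) plus a plane patch (d = 2)
t = rng.random(300)
curve = np.c_[t, np.sin(2 * np.pi * t), np.zeros(300)]
plane = np.c_[rng.random((300, 2)), np.ones(300)]
X = np.vstack([curve, plane])
dims = local_dimension(X, radii=[0.05, 0.1, 0.2])
print("mean estimate on curve:", dims[:300].mean(), " on plane:", dims[300:].mean())
```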
Chinmay Hegde (Rice University)
Random projections for manifold learning (poster)
Abstract: We propose a novel method for linear dimensionality reduction of manifold-modeled data. First, we show that given only a small number random projections of
sample points in R^N belonging to an unknown K-dimensional Euclidean manifold, the intrinsic dimension (ID) of the sample set can be estimated to high accuracy.
Second, we prove that using only this set of random projections, we can estimate the structure of the underlying manifold. In both cases, the number of random projections (M) required is linear in K and logarithmic in N, meaning that K < M << N. To handle practical situations, we develop a greedy algorithm to estimate the smallest size of the projection space required to perform manifold learning. Our method is particularly relevant in distributed sensing systems and leads to significant potential savings in data acquisition, storage and transmission costs.
(Joint work with Michael Wakin and Richard Baraniuk.)
Michael E. Henderson (IBM)
Representing and manipulating implicitly defined manifolds (poster)
Abstract: This poster illustrates an algorithm which was developed to compute implicitly defined manifolds in engineering applications, where the manifold is of low (1-4) dimension, but is embedded in a very high dimensional space (100 and up).
Though the computational details (finding a point and the tangent space of the manifold) are different than in manifold learning, and the embedding is explicit instead of one of the unknowns, there are significant issues in common when computing any manifold.
The representation used closely follows the definition of a manifold. A set of spherical balls (with differing radii) serves as the chart domains, with the embedding mapping limited to the embedded center and the tangent space. In addition, polyhedra are associated with each chart so that overlapping charts correspond to faces of the polyhedra (common to the polyhedra of the overlapping charts). Boundary charts are represented in the same manner, beginning with a polyhedral cone, and adding faces for each overlapping chart.
The polyhedra of interior charts are Laguerre-Voronoi cells, and so it is very easy to locate points on the boundary of a partially represented manifold (if all vertices are closer to the origin than the radius of the ball the chart is completely surrounded by other charts). This provides a basic "continuation" or "extension" algorithm for creating a set of covering charts on a manifold.
In terms of data structures, the atlas of charts is a simple list. The polyhedra are in the same simple list, but also form cell complex whose dual is a Laguerre-Delaunay "triangulation" of the manifold. Interestingly, the fine structure is a cell complex, but so is the gross structure of the manifold. Manifolds with boundary are represented by another cell complex. In this one the faces are manifolds which share boundaries which are the boundary cells of the face.
So far this approach has been applied to three different kinds of manifolds which are common in dynamical systems. I hope to find similar applications in manifold learning.
Mark S. Herman (University of Minnesota)
Born-Oppenheimer corrections near a Renner-Teller crossing
Abstract: We perform a rigorous mathematical analysis of the bending modes of a linear triatomic molecule that exhibits the Renner-Teller effect. Assuming the potentials are smooth, we prove that the wave functions and energy levels have asymptotic expansions in powers of epsilon, where the fourth power of epsilon is the ratio of an electron mass to the mass of a nucleus. To prove the validity of the expansion, we must prove various properties of the leading order equations and their solutions. The leading order eigenvalue problem is analyzed in terms of a parameter b, which is equivalent to the parameter originally used by Renner. For 0 < b < 1, we prove self-adjointness of the leading order Hamiltonian, that it has purely discrete spectrum, and that its eigenfunctions and their derivatives decay exponentially. Perturbation theory and finite difference calculations suggest that the ground bending vibrational state is involved in a level crossing near b = 0.925. We also discuss the degeneracy of the eigenvalues. Because of the crossing, the ground state is degenerate for 0 < b < 0.925 and non-degenerate for 0.925 < b < 1.
Tony Jebara (Columbia University)
Visualization & matching for graphs and data
Abstract: Given a graph between N high-dimensional nodes, can we faithfully visualize it in just a few dimensions? We present an algorithm that improves the state-of-the-art in dimensionality reduction by extending the Maximum Variance Unfolding method. Visualizations are shown for social networks, species trees, image datasets and human activity.
If we are only given a dataset of N samples, how should we link the samples to build a graph? The space to explore is daunting with 2^(N^2) choices but two interesting subfamilies are tractable: matchings and b-matchings. We place distributions over these families and recover the optimal graph or perform Bayesian inference over graphs efficiently using belief propagation algorithms. Higher order distributions over matchings can also be handled efficiently via fast Fourier algorithms. Applications are shown in tracking, network reconstruction, classification, and clustering.
Biography
Tony Jebara is Associate Professor of Computer Science at Columbia University and director of the Columbia Machine Learning Laboratory. His research intersects computer science and statistics to develop new frameworks for learning from data with applications in vision, networks, spatio-temporal data, and text. Jebara is also co-founder and head of the advisory board at Sense Networks. He has published over 50 peer-reviewed papers in conferences and journals including NIPS, ICML, UAI, COLT, JMLR, CVPR, ICCV, and AISTAT. He is the author of the book Machine Learning: Discriminative and Generative (Kluwer). Jebara is the recipient of the Career award from the National Science Foundation and has also received honors for his papers from the International Conference on Machine Learning and from the Pattern Recognition Society. He has served as chair and program committee member for many learning conferences. Jebara's research has been featured on television (ABC, BBC, New York One, TechTV, etc.) as well as in the popular press (New York Times, Slash Dot, Wired, Scientific American, Newsweek, etc.). He obtained his PhD in 2002 from MIT. Jebara's lab is supported in part by the Central Intelligence Agency, the National Science Foundation, the Office of Naval Research, the National Security Agency, and Microsoft.
Tamara G. Kolda (Sandia National Laboratories)
CPOPT: Optimization for fitting CANDECOMP/PARAFAC models
Abstract: Joint work with Evrim Acar and Daniel M. Dunlavy (Sandia National Laboratories).
Tensor decompositions (i.e., higher-order analogues of matrix decompositions) are powerful tools for data analysis. In particular, the CANDECOMP/PARAFAC (CP) model has proved useful in many applications such as chemometrics, signal processing, and web analysis. The problem of computing the CP decomposition is typically solved using an alternating least squares (ALS) approach. We discuss the use of optimization-based algorithms for CP, including how to efficiently compute the derivatives necessary for the optimization methods. Numerical studies highlight the positive features of our CPOPT algorithms, as compared with ALS and Gauss-Newton approaches.
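For readers unfamiliar with the CP model and the ALS baseline mentioned above, the following is a minimal sketch (not the authors' CPOPT code) of fitting a rank-R CP model to a third-order tensor with plain NumPy; the mode-wise least-squares updates are the standard ALS steps, and all sizes and the rank are illustrative choices.

```python
import numpy as np

def cp_als(X, rank, n_iter=100, seed=0):
    """Minimal CP (CANDECOMP/PARAFAC) fit of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # Each update solves a linear least-squares problem with the other two factors fixed.
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Build a rank-2 tensor plus noise and try to recover its factors.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (5, 6, 7))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 0.01 * rng.standard_normal((5, 6, 7))
A, B, C = cp_als(X, rank=2)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```

An optimization-based approach such as the one discussed in the talk instead treats all factor matrices jointly and uses derivative information, rather than cycling through the factors one at a time.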
Dan Kushnir (Yale University)
Fast multiscale clustering and manifold identification (poster)
Abstract: We present a novel multiscale clustering algorithm inspired by algebraic
multigrid techniques. Our method begins with assembling data points according
to local similarities. It uses an aggregation process to obtain reliable
scale-dependent global properties, which arise from the local similarities.
As the aggregation process proceeds, these global properties affect the formation
of coherent clusters. The global features that can be utilized include, for
example, density, shape, intrinsic dimensionality, and orientation. The last
three features are part of the manifold identification process, which is performed
in parallel with the clustering process. The algorithm detects clusters
that are distinguished by their multiscale nature, separates between clusters
with different densities, and identifies and resolves intersections between
clusters. The algorithm is tested on synthetic and real datasets; its running
time complexity is linear in the size of the dataset.
David Langreth (Rutgers University)
Van der Waals density functional: theory, implementations, and applications
Abstract: The van der Waals density functional of Dion, Rydberg, Schroder, Langreth,
and Lundqvist [Phys. Rev. Lett. 92, 246401 (2004)] will be reviewed,
and implementations and applications by our group and others will be discussed.
New results relevant to hydrogen storage in metal-organic framework (MOF)
materials, as well as to the intercalation of drug molecules in DNA, will
be presented.
Claude Le Bris (CERMICS)
Open mathematical issues in quantum chemistry: a personal
perspective
Abstract: I will give an overview of some open mathematical questions related to the
models and techniques of computational quantum chemistry. The talk is based
upon a recent article coauthored with E. Cancès and P.-L. Lions and
published in Nonlinearity, volume 21, T165-T176, 2008.
Gilad Lerman (University of Minnesota)
Multi-manifold data modeling via spectral curvature clustering
Abstract: We propose a fast multi-way spectral clustering algorithm for multi-manifold data modeling. We describe the supporting theory as well as the practical choices guided by it. We emphasize the case of hybrid linear modeling, i.e., when the manifolds are affine subspaces in a Euclidean space, and then extend this setting to more general manifolds and other embedding metric spaces. We exemplify applications of the algorithm to several real-world problems while comparing it with other methods.
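The multi-way, curvature-based affinities of spectral curvature clustering are beyond a short snippet, but the pairwise spectral clustering pipeline that such methods build on can be sketched in a few lines. This is only a generic illustration with assumed parameters (Gaussian affinities of width sigma), not the SCC algorithm of the talk.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.cluster.vq import kmeans2

def spectral_clustering(X, k, sigma=1.0):
    """Standard pairwise spectral clustering: normalized Laplacian eigenvectors + k-means."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)          # squared pairwise distances
    W = np.exp(-d2 / (2 * sigma ** 2))                            # Gaussian affinities
    np.fill_diagonal(W, 0.0)
    d = W.sum(1)
    L = np.eye(len(X)) - (W / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]   # normalized Laplacian
    _, U = eigh(L, subset_by_index=[0, k - 1])                    # k smallest eigenvectors
    U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12         # row-normalize
    _, labels = kmeans2(U, k, minit='++')
    return labels

# Two noisy one-dimensional affine subspaces (lines) in the plane, a toy hybrid-linear dataset.
rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, 100)
pts = np.vstack([np.c_[t[:50], 0.05 * rng.standard_normal(50)],
                 np.c_[0.05 * rng.standard_normal(50), t[50:]]])
print(spectral_clustering(pts, k=2, sigma=0.3))
```

Multi-way methods replace the Gaussian pairwise affinities above with affinities computed from the curvature of small groups of points, which is what lets them handle intersecting subspaces.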
Howard A. Levine (Iowa State University)
Spectral properties, regularity and optimal bounds for
solutions of elliptic boundary value problems
Abstract: No Abstract
Mathieu Lewin (Université de Cergy-Pontoise)
Exact embedding of local defects in crystals
Abstract: By means of rigorous thermodynamic limit arguments, we derive a new
variational model providing exact embedding of local defects in insulating or
semiconducting crystals. A natural way to obtain variational discretizations
of this model is to expand the perturbation of the periodic density matrix
generated by the defect in a basis of precomputed maximally localized Wannier
functions of the host crystal. This approach can be used within any
semi-empirical or Density Functional Theory framework. This is joint work
with Eric Cancès and Amélie Deleurence (Ecole Nationale des Ponts et
Chaussées, France).
Mark Lewis (University of Alberta)
Population spread and the dynamics of biological invasions
Abstract: Classical models for the growth and spread of introduced species
track the front of an expanding wave of population density. Models
are typically parabolic partial differential equations and related integral
formulations. One method to infer the speed of the expanding wave is
to equate the speed of spread of the nonlinear system with the speed of
spread of a related linear system. When these two speeds coincide we say that the
spread rate is linearly predictable. While many spread rates are linearly
predictable, some notable cases are not, such as those involving competition between
multiple species.
Hans Weinberger's work has impacted the theory of linear predictability,
both for single-species and for multi-species models. I will review
some of this theory, from the perspective of a mathematical ecologist
interested in applying the theory to biology. In my talk I will apply some of the results
to real biological problems, including species competition, spread of disease and
population dynamics of stream ecosystems.
Yongfeng Li (University of Minnesota)
Model reference control in biological systems
Abstract: Motivated by techniques from engineering control, model reference control (MRC) is introduced for controlling biological systems. A mathematical framework for MRC is provided, and numerical simulations of controlling SIRS disease models and the BZ oscillatory reaction are used to test its validity.
Yi Ma (University of Illinois at Urbana-Champaign)
Dense error correction via L1 minimization
Abstract: It is known that face images of different people lie on multiple low-dimensional subspaces. In this talk,
we will show that these subspaces are tightly bundled together as a "bouquet". Precisely because of this
unique structure, extremely robust reconstruction and recognition of faces is possible despite severe
corruption or occlusion. We will show that if the image resolution and the size of the face database
grow in proportion to infinity, a computer can correctly and efficiently recover or recognize a face image
with almost 100% of its pixels randomly and arbitrarily corrupted, a truly remarkable ability of L1-minimization.
This is joint work with John Wright of UIUC.
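As a rough illustration of the kind of L1-minimization involved (not the "bouquet" analysis of the talk), the sketch below recovers a sparse coefficient vector in the presence of sparse gross corruptions by solving min ||x||_1 + ||e||_1 subject to Ax + e = y, recast as a linear program; the sizes and corruption level are illustrative, and the near-100%-corruption regime described above additionally relies on the special structure of face data.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 100, 60                                     # measurements, dictionary size (assumed)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)       # sparse coefficients
e = np.zeros(m)
e[rng.choice(m, 25, replace=False)] = 5.0 * rng.standard_normal(25)    # gross corruptions
y = A @ x_true + e

# Solve  min ||x||_1 + ||e||_1  s.t.  A x + e = y  as a linear program,
# splitting x = xp - xn and e = ep - en into nonnegative parts.
A_eq = np.hstack([A, -A, np.eye(m), -np.eye(m)])
res = linprog(np.ones(2 * n + 2 * m), A_eq=A_eq, b_eq=y, bounds=(0, None), method='highs')
z = res.x
x_hat = z[:n] - z[n:2 * n]
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```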
Mauro Maggioni (Duke University)
Harmonic and multiscale analysis on low-dimensional data sets in high dimensions
Abstract: We discuss recent advances in harmonic analysis ideas and algorithms for analyzing data sets in high-dimensional spaces which
are assumed to have lower-dimensional geometric structure. They enable the analysis of both the geometry of the data and of functions on the data, and they can be broadly subdivided into local, global and multiscale techniques, roughly corresponding to PDE techniques, Fourier analysis, and wavelet analysis ideas in low-dimensional Euclidean signal processing. We discuss applications to machine learning tasks and image processing, and we outline current research directions.
Julien Mairal (INRIA)
Supervised dictionary learning (poster)
Abstract: It is now well established that sparse signal models are well suited to restoration tasks and can effectively be learned from audio, image, and video data. Recent research has been aimed at learning discriminative sparse models instead of purely reconstructive ones. This work proposes a new step in that direction, with a novel sparse representation for signals belonging to different classes in terms of a shared dictionary and multiple class-decision functions. The linear variant of the proposed model admits a simple probabilistic interpretation, while its most general variant admits an interpretation in terms of kernels. An optimization framework for learning all the components of the proposed model is presented, along with experimental results on standard handwritten digit and texture classification tasks. This is joint work with F. Bach (INRIA), J. Ponce (Ecole Normale Supérieure), G. Sapiro (University of Minnesota) and A. Zisserman (Oxford University).
José Mario Martínez (State University of Campinas (UNICAMP))
Modern optimization tools and electronic structure calculations
Abstract: Optimization concepts will be reviewed with an eye toward their proven or
potential application in Electronic Structure Calculations
and other Chemical Physics problems. We will discuss the role of
trust-region schemes, line searches, linearly and nonlinearly
constrained optimization, Inexact Restoration and SQP methods and the
type of convergence theories that may be useful in
order to explain the practical behavior of the methods. Emphasis will
be given to general principles rather than algorithmic details.
Tristan Nguyen (Office of Naval Research)
Large group discussion on:
Abstract: No Abstract
Partha Niyogi (University of Chicago)
Large group discussion on: What have we learned about manifold learning, and what are
its implications for machine learning and numerical analysis? What
are the open questions? What are the successes? Where should we be
optimistic and where should we be pessimistic?
Abstract: No Abstract
Partha Niyogi (University of Chicago)
A geometric perspective on machine learning
Abstract: Increasingly, we face machine learning problems in very high
dimensional spaces. We proceed with the intuition that although
natural data lives in very high dimensions, it has relatively few
degrees of freedom. One way to formalize this intuition is to model
the data as lying on or near a low dimensional manifold embedded in
the high dimensional space. This point of view leads to a new class of
algorithms that are "manifold motivated" and a new set of theoretical
questions that surround their analysis. A central construction in
these algorithms is a graph or simplicial complex that is data-derived
and we will relate the geometry of these to the geometry of the
underlying manifold. Applications to data analysis, machine
learning, and numerical computation will be considered.
John E. Osborn (University of Maryland)
Numerical work of Hans F. Weinberger
Abstract: In this talk we will survey several papers (listed below) by Hans Weinberger dealing with numerical and approximation issues. We have divided them into three categories: (i) approximation of eigenvalues; (ii) approximation theory issues; and (iii) error bounds for iterative methods for matrix inversion.
The seven papers listed are only a small part of Hans' work, but they were very influential. We, of course, cannot discuss any of these papers in detail, but will instead concentrate on those results that are especially insightful and elegant.
Gianluca Panati (Università di Roma "La Sapienza")
Construction of exponentially localized Wannier functions
Abstract: The exponential localization of Wannier functions in two or three dimensions is proven for all insulators that display time-reversal symmetry, settling a long-standing conjecture. The proof makes use of geometric techniques, which also imply that Chern insulators cannot display exponentially localized Wannier functions.
Finally, a new algorithm to explicitly construct the exponentially localized Wannier functions is exhibited.
John E. Pask (Lawrence Livermore National Laboratory)
Partition-of-unity finite-element approach for large, accurate ab initio electronic structure calculations
Abstract: Principal collaborator: Natarajan Sukumar (University of California, Davis).
Over the past few decades, the planewave (PW) pseudopotential method has established itself as the dominant method for large, accurate, density-functional calculations in condensed matter. However, due to its global Fourier basis, the PW method suffers from substantial inefficiencies in parallelization and applications involving highly localized states, such as those involving 1st-row or transition-metal atoms, or other atoms at extreme conditions. Modern real-space approaches, such as finite-difference (FD) and finite-element (FE) methods, can address these deficiencies without sacrificing rigorous, systematic improvability but have until now required much larger bases to attain the required accuracy. Here, we present a new real-space FE based method which employs modern partition-of-unity FE techniques to substantially reduce the number of basis functions required, by building known atomic physics into the Hilbert space basis, without sacrificing locality or systematic improvability. We discuss pseudopotential as well as all-electron applications. Initial results show order-of-magnitude improvements relative to current state-of-the-art PW and adaptive-mesh FE methods for systems involving localized states such as d- and f-electron metals and/or other atoms at extreme conditions.
This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Irina Rish (IBM)
A supervised dimensionality reduction framework for
exponential-family variables (poster)
Abstract: Dimensionality reduction (DR) is a popular data-processing technique that serves the
following two main purposes: it helps to provide a meaningful interpretation and
visualization of the data, and it also helps to prevent overfitting when the number of
dimensions greatly exceeds the number of samples, thus working as a form of regularization.
However, dimensionality reduction should be driven by a particular data analysis goal.
When our goal is prediction rather than an (unsupervised) exploratory data analysis,
supervised dimensionality reduction (SDR) that combines DR with learning a predictor can
significantly outperform a simple combination of unsupervised DR with a subsequent learning
of a predictor on the resulting low-dimensional representation.
The problem of supervised dimensionality reduction can be also viewed as finding
a predictive low-dimensional representation, which captures the information about the class label
contained in the high-dimensional feature vector while ignoring the high-dimensional noise.
We propose a general SDR framework that views both features and class labels as exponential-family
random variables (PCA-like dimensionality reduction is included as a particular case of Gaussian data).
We learn data- and class-appropriate generalized linear models (GLMs), thus handling both
classification and regression, with both discrete and real-valued data. Our SDR-GLM optimization
problem can be viewed as discriminative learning based on minimization of conditional probability
of class given the hidden variables, while using as a regularizer the conditional probability of the
features given the low-dimensional (hidden-variable) representation.
The main advantage of our approach, besides being more general, is that it uses simple closed-form
update rules in its alternating minimization procedure. This method yields short
Matlab code and fast performance, and it is guaranteed to converge. The convergence property, as well as the
closed-form update rules, results from using appropriate auxiliary functions bounding each part of
the objective function (i.e., the reconstruction and prediction losses). We exploit the additive property
of auxiliary functions in order to combine bounds on multiple loss functions.
Promising empirical results are demonstrated on a variety of high-dimensional datasets. In particular,
our results on simulated datasets convincingly demonstrate that the SDR-GLM approach can discover the underlying
low-dimensional structure in high-dimensional noisy data, while outperforming SVM and SVDM, often by far,
and practically always beating unsupervised DR followed by learning a predictor. On real-life datasets,
SDR approaches outperform unsupervised DR by far, while matching or sometimes outperforming SVM and SVDM.
Berkant Savas (Linköping University)
Tensor approximation - structure and methods (poster)
Abstract: We consider the tensor approximation problem. Given a tensor, we want to approximate it by another tensor of lower multilinear rank. It turns out this problem is defined on a product of Grassmann manifolds. We describe the structure of various parts of the approximation problem and present convergence plots for Newton and quasi-Newton methods (with BFGS and limited-memory BFGS updates) that solve the problem. All algorithms are applicable to both general and symmetric tensors and incorporate the Grassmann manifold structure of the problem. The benefits of these algorithms compared to traditional alternating least squares approaches are higher convergence rates and the existence of rigorous theory establishing convergence to a stationary point.
This is joint work with Lars Eldén (Linköping University) and Lek-Heng Lim (University of California, Berkeley).
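The Grassmann Newton and quasi-Newton methods of the poster are not reproduced here; as a point of reference, the truncated higher-order SVD below computes the standard quasi-optimal low multilinear-rank approximation that such iterative methods typically refine. This is a plain NumPy sketch with assumed sizes and ranks.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def truncated_hosvd(X, ranks):
    """Truncated higher-order SVD: a quasi-optimal low multilinear-rank approximation."""
    Us = [np.linalg.svd(unfold(X, m), full_matrices=False)[0][:, :r]
          for m, r in enumerate(ranks)]
    core = np.einsum('ijk,ia,jb,kc->abc', X, *Us)        # project onto the factor subspaces
    return core, Us

rng = np.random.default_rng(0)
# Build a tensor of multilinear rank (3, 3, 3) plus a little noise.
G = rng.standard_normal((3, 3, 3))
U1, U2, U3 = (np.linalg.qr(rng.standard_normal((n, 3)))[0] for n in (8, 9, 10))
X = np.einsum('abc,ia,jb,kc->ijk', G, U1, U2, U3) + 0.01 * rng.standard_normal((8, 9, 10))

core, Us = truncated_hosvd(X, ranks=(3, 3, 3))
X_hat = np.einsum('abc,ia,jb,kc->ijk', core, *Us)        # reconstruct the approximation
print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```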
Roman Schubert (University of Bristol)
Waves and mixing
Abstract: Part I: Wave equations have the property that in the limit of short wavelength the propagation of waves is driven by an underlying dynamical system. Two standard examples are quantum mechanics, which in the semiclassical limit is governed by classical mechanics, and the theory of light, which for short wavelength is accurately described by the rays of geometric optics. A natural question is how the properties of the underlying dynamical system are reflected in the propagation of waves and in the possible wave patterns that can emerge. In this talk we will focus on the case that the dynamical system is chaotic, in particular mixing, and discuss the classical conjectures and some rigorous results on the consequences for wave propagation and the behavior of eigenfunctions.
Roman Schubert (University of Bristol)
Waves and mixing (part II)
Abstract: Part II: In the second part we will focus on the major open problem in the field and the current approaches to deal with it. In many applications one is interested in wave propagation for small wavelength and large times and this poses serious problems. Currently we understand the theory only for times up to the Ehrenfest time, which is related to the Liapunov exponents of the underlying dynamical system, and which is unfortunately rather short. We will discuss the Ehrenfest time and its relation to a number of important open problems, and then present a recent approach to explore larger times.
Ridgway Scott (University of Chicago)
The mathematical basis for molecular van der Waals forces
Abstract: We show how van der Waals forces can be explained based on induced polarization of molecules. We derive an exact expression for the limiting behavior in the case of two induced dipoles that is faster than the usual Lennard-Jones potential.
Yoel Shkolnisky (Yale University), Amit Singer (Princeton University)
Structure determination of proteins using cryo-electron
microscopy (poster)
Abstract: The goal in Cryo-EM structure determination is to reconstruct 3D
macromolecular structures from their noisy projections taken at unknown
random orientations by an electron microscope. Resolving the Cryo-EM problem
is of great scientific importance, as the method is applicable to
essentially all macromolecules, as opposed to other existing methods such as
crystallography. Since almost all large proteins have not yet been
crystallized for 3D X-ray crystallography, Cryo-EM seems the most promising
alternative, once its associated mathematical challenges are solved. We
present an extremely efficient and robust solver for the Cryo-EM problem
that successfully recovers the projection angles in a globally consistent
manner. The simple algorithm combines ideas and principles from spectral
graph theory, nonlinear dimensionality reduction, geometry and computed
tomography. The heart of the algorithm is a unique construction of a sparse
graph followed by a fast computation of its eigenvectors.
Joint work with Ronald Coifman and Fred Sigworth.
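The cryo-EM graph construction itself is specific to the projection geometry, but the final step, computing a few eigenvectors of a large sparse graph matrix, is generic. The sketch below builds an assumed k-nearest-neighbor graph on toy points and extracts leading eigenvectors with scipy.sparse.linalg.eigsh; it is only meant to show the kind of sparse eigenvector computation involved.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Build a sparse k-nearest-neighbor graph on toy points and compute a few
# eigenvectors of its symmetrically normalized adjacency matrix.
rng = np.random.default_rng(0)
n, k = 500, 10
pts = rng.standard_normal((n, 3))
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
rows = np.repeat(np.arange(n), k)
cols = np.argsort(d2, axis=1)[:, 1:k + 1].ravel()           # k nearest neighbors, skipping self
W = sp.coo_matrix((np.ones(n * k), (rows, cols)), shape=(n, n)).tocsr()
W = W + W.T
W.data = np.ones_like(W.data)                                # unweighted, symmetric adjacency
d = np.asarray(W.sum(axis=1)).ravel()
D_inv_sqrt = sp.diags(1.0 / np.sqrt(d))
A = D_inv_sqrt @ W @ D_inv_sqrt
vals, vecs = eigsh(A, k=3, which='LA')                       # three largest eigenpairs
print(vals)
```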
Yoel Shkolnisky (Yale University), Amit Singer (Princeton University)
High order consistency relations for classification and
de-noising of Cryo-EM images (poster)
Abstract: In order for biologists to exploit the full potential embodied in the
Cryo-EM method, two major challenges must be overcome. The first challenge
is the extremely low signal-to-noise ratio of the projection images.
Improving the signal-to-noise by averaging same-view projections is an
essential preprocessing step for all algorithms. The second challenge is the
heterogeneity problem, in which the observed projections belong to two or
more different molecules or different conformations. Traditional methods
assume identical particles, therefore failing to distinguish the different
particle species. This results in an inaccurate reconstruction of the
molecular structure.
For the first problem, we present two different high order consistency
relations between triplets of images. The inclusion of such high order
information leads to improvement in the classification and the de-noising of
the noisy images compared to existing methods that use only pairwise
similarities. We also present a generalization of Laplacian eigenmaps to
utilize such higher order affinities in a data set. This generalization is
different from current tensor decomposition methods. For the second
challenge, we describe a spectral method to establish two or more ab initio
reconstructions from a single set of images.
Joint work with Ronald Coifman and Fred Sigworth.
Arthur Szlam (University of California, Los Angeles)
k-planes for classification (poster)
Abstract: The k-planes method is the generalization of k-means where the
representatives of each cluster are affine linear sets. We
will describe some possible modifications of this method for
discriminative learning problems.
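A minimal version of the (unsupervised) k-planes iteration described above, before any of the discriminative modifications in the poster, might look as follows; the cluster dimension and the restart rule for tiny clusters are illustrative choices.

```python
import numpy as np

def k_planes(X, k, dim, n_iter=30, seed=0):
    """Minimal k-planes clustering: alternately assign points to the nearest
    affine subspace and refit each subspace by PCA on its assigned points."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=len(X))
    for _ in range(n_iter):
        mus, bases = [], []
        for j in range(k):
            Xj = X[labels == j]
            if len(Xj) <= dim:                                  # guard against empty/tiny clusters
                Xj = X[rng.choice(len(X), dim + 1, replace=False)]
            mu = Xj.mean(0)
            _, _, Vt = np.linalg.svd(Xj - mu, full_matrices=False)
            mus.append(mu)
            bases.append(Vt[:dim])                              # top principal directions
        # distance from each point to each affine subspace
        dists = np.stack([np.linalg.norm((X - mu) - (X - mu) @ V.T @ V, axis=1)
                          for mu, V in zip(mus, bases)], axis=1)
        labels = dists.argmin(1)
    return labels

# two noisy lines in the plane
rng = np.random.default_rng(1)
t = rng.uniform(-1, 1, 200)
pts = np.vstack([np.c_[t[:100], 0.02 * rng.standard_normal(100)],
                 np.c_[0.02 * rng.standard_normal(100), t[100:]]])
print(np.bincount(k_planes(pts, k=2, dim=1)))
```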
Donald G. Truhlar (University of Minnesota)
New density functionals with broad applicability for
thermochemistry, thermochemical kinetics, noncovalent
interactions, transition metals, and spectroscopy
Abstract: This lecture reports on work carried out in collaboration with
Yan Zhao.
We have developed a suite of density functionals. All four
functionals are accurate for noncovalent interactions and
medium-range correlation energy. The functional with broadest
capability, M06, is uniquely well suited for good performance
on both transition-metal and main-group chemistry; it also
gives good results for barrier heights. Another functional,
M06-L, has no Hartree-Fock exchange, which allows for very fast
calculations on large systems, and it is especially good for
transition-metal chemistry and NMR chemical shieldings. M08-2X
and an earlier version, M06-2X, have the very best performance
for main-group thermochemistry, barrier heights, and
noncovalent interactions. M06-HF has no one-electron
self-interaction error and is the best functional for charge
transfer spectroscopy. A general characteristic of the whole
suite is the optimized inclusion of kinetic energy density and
higher separate accuracy of medium-range exchange and
correlation contributions with less cancellation of errors than
previous functionals [1-4]; for example, the functionals are
compatible with a range of Hartree-Fock exchange and, although
one or another of them may be more highly recommended for one
or another property or application, all four are better on
average than the very popular B3LYP functional. A few sample
applications, including catalytic systems [5,6] and
nanomaterials [7], will also be discussed. Recent work on
lattice constants, band gaps, and an improved version of M06-2X
will be summarized if time permits.
[1] "Design of Density Functionals by Combining the Method of
Constraint Satisfaction with Parametrization for
Thermochemistry, Thermochemical Kinetics, and Noncovalent
Interactions," Zhao, Y. ; Schultz, N. E.; Truhlar, D. G.; J.
Chem. Theory Comput. 2006, 2, 364-382.
[2] "A New Local Density Functional for Main Group
Thermochemistry, Transition Metal Bonding, Thermochemical
Kinetics, and Noncovalent Interactions," Zhao, Y.; Truhlar, D.
G. J. Chem. Phys. 2006, 125, 194101/1-18.
[3] "The M06 Suite of Density Functionals for Main Group
Thermochemistry, Thermochemical Kinetics, Noncovalent
Interactions, Excited States, and Transition Elements: Two New
Functionals and Systematic Testing of Four M06-Class
Functionals," Zhao, Y.; Truhlar, D. G.
Theor. Chem. Acc. 2008, 120, 215-241.
[4] "Density Functionals with Broad Applicability in
Chemistry," Zhao, Y.; Truhlar, D. G. Acc. Chem. Res. 2008, 41,
157-167.
[5] "Attractive Noncovalent Interactions in Grubbs
Second-Generation Ru Catalysts for Olefin Metathesis," Zhao,
Y.; Truhlar, D. G. Org. Lett. 2007, 9, 1967-1970.
[6] "Benchmark Data for Interactions in Zeolite Model Complexes
and Their Use for Assessment and Validation of Electronic
Structure Methods," Zhao, Y.; Truhlar, D. G. J. Phys. Chem. C
2008, 112, 6860-6868.
[7] "Size-Selective Supramolecular Chemistry in a Hydrocarbon
Nanoring," Zhao, Y.; Truhlar, D. G. J. Am. Chem. Soc. 2007, 129,
8440-8442.
Steven M. Valone (Los Alamos National Laboratory)
A view of outstanding problems in density functional
theory
Abstract: Constrained-search density functional theory (DFT) pioneered by Levy [1] poses the problem of the theory as one of searching over subsets of Hilbert space. As such it provides a hypothetical means of constructing density-based energy functionals for use in electronic structure applications. I will illustrate the constrained-search form with simple examples [2]. Early results on continuity of the energy functional [3] and the advent of "open-system" DFT [4] will be reviewed. The construction of energy functionals will be discussed in the context of the Colle-Salvetti functionals [5] that played a subtle, but important, role in the 1998 Nobel Prize in Chemistry [6]. Alternative constructions based on constrained-search DFT will be discussed. Finally, topics pertaining to excitations in homogeneous electron gases and to the introduction of other constraints into DFT calculations [7,8] will be entertained.
[1] M Levy, Proc Natl Acad Sci USA 76, 6062 (1979).
[2] SM Valone, "Vignette on Constrained-Search Density Functional Theory," private communication, August (2008).
[3] SM Valone, J Chem Phys 73, 1344 (1980).
[4] JP Perdew, RG Parr, M Levy, and JL Balduz, Jr, Phys Rev Lett 49, 1691 (1982).
[5] R Colle and O Salvetti, Theoret Chim Acta (Berl) 37, 329 (1975).
[6] C Lee, W Yang, and RG Parr, Phys Rev B 37, 785 (1988).
[7] X-Y Pan, V Sahni, and L Massa, Phys Rev Lett 93, 130401 (2004).
[8] Q Wu and T van Voorhis, Phys Rev A 72, 024502 (2005); J Behler, B Delley, K Reuter, and M Scheffler, Phys Rev B 75, 115409 (2007).
M. Alex O. Vasilescu (SUNY)
Multilinear (tensor) manifold data modeling
Abstract: Most observable data such as images, videos, human motion capture
data, and speech are the result of multiple factors (hidden
variables) that are not directly measurable, but which are of
interest in data analysis. In the context of computer vision and
graphics, we deal with natural images, which are the consequence
of multiple factors related to scene structure, illumination, and
imaging. Multilinear algebra offers a potent mathematical
framework for extracting and explicitly representing the
multifactor structure of image datasets.
I will present two multilinear models for learning (nonlinear)
manifold representations of image ensembles in which the multiple
constituent factors (or modes) are disentangled and analyzed
explicitly. Our nonlinear models are computed via a tensor
decomposition, known as the M-mode SVD, which is an extension to
tensors of the conventional matrix singular value
decomposition (SVD), or through a generalization of
conventional (linear) ICA called Multilinear Independent
Components Analysis (MICA).
I will demonstrate the potency of our novel statistical learning
approach in the context of facial image biometrics, where the
relevant factors include different facial geometries,
expressions, lighting conditions, and viewpoints. When applied to
the difficult problem of automated face recognition, our
multilinear representations, called TensorFaces (M-mode PCA) and
Independent TensorFaces (MICA), yield significantly improved
recognition rates relative to the standard PCA and ICA
approaches. Recognition is achieved with a novel Multilinear
Projection Operator.
Bio:
M. Alex O. Vasilescu is an Assistant Professor of Computer
Science at Stony Brook University (SUNY). She received her
education at MIT and the University of Toronto. She was a
research scientist at the MIT Media Lab from 2005-07 and at New
York University's Courant Institute from 2001-05. She has also
done research at IBM, Intel, Compaq, and Schlumberger
corporations, and at the MIT Artificial Intelligence Lab. She has
published papers in computer vision and computer graphics,
particularly in the areas of face recognition, human motion
analysis/synthesis, image-based rendering, and physics-based
modeling (deformable models). She has given several invited talks
about her work and has several patents pending. Her face
recognition research, known as TensorFaces, has been funded by
the TSWG, the Department of Defense's Combating Terrorism Support
Program. She was named by MIT's Technology Review Magazine to
their 2003 TR100 List of Top 100 Young Innovators.
http://www.cs.sunysb.edu/~maov
René Vidal (Johns Hopkins University)
Clustering linear and nonlinear manifolds
Abstract: Over the past few years, various techniques have been developed for learning a low-dimensional representation of data lying in a nonlinear manifold embedded in a high-dimensional space. Unfortunately, most of these techniques are limited to the analysis of a single submanifold of a Euclidean space and suffer from degeneracies when applied to linear manifolds (subspaces). The simultaneous segmentation and estimation of a collection of submanifolds from sample data points is a challenging problem that is often thought of as "chicken-and-egg". Therefore, this problem is usually solved in two stages: (1) data clustering and (2) model fitting, or else iteratively using, e.g., the Expectation Maximization (EM) algorithm.
The first part of this talk will show that for a wide class of segmentation problems (mixtures of subspaces, mixtures of fundamental matrices/trifocal tensors, mixtures of linear dynamical models), the "chicken-and-egg" dilemma can be tackled using an algebraic geometric technique called Generalized Principal Component Analysis (GPCA). In fact, it is possible to eliminate the data segmentation step algebraically and then use all the data to recover all the models without first segmenting the data. The solution can be obtained using linear algebraic techniques, and is a natural extension of classical PCA from one to multiple subspaces.
The second part of this talk will present a novel algorithm for clustering data sampled from multiple submanifolds of a Riemannian manifold, e.g. the space of probability density functions. The algorithm, called Locally Linear Manifold Clustering (LLMC) is based on clustering a low-dimensional representation of the data learned using generalizations of local nonlinear dimensionality reduction algorithms from Euclidean to Riemannian spaces.
The talk will also present a few motivating applications of manifold clustering to computer vision problems such as texture clustering, segmentation of rigid body motions, segmentation of dynamic textures, and segmentation of diffusion MRI.
Michael Wakin (Colorado School of Mines)
Manifold models for single- and multi-signal recovery (poster)
Abstract: The emerging theory of Compressive Sensing states that a signal obeying a sparse model can be reconstructed from a small number of random linear measurements. In this poster, we will explore manifold-based models as a generalization of sparse representations, and we will discuss the theory and applications for use of these models in single- and multi-signal recovery from compressive measurements.
Brenton Walker (Laboratory for Telecommunications Sciences)
Using persistent homology to recover spatial information from
encounter traces (poster)
Abstract: In order to better understand human and animal mobility and its potential effects on Mobile Ad-Hoc networks and Delay-Tolerant Networks, many researchers have conducted experiments which collect encounter data. Most analyses of these data have focused on isolated statistical properties such as the distribution of node inter-encounter times and the degree distribution of the connectivity graph.
On the other hand, new developments in computational topology, in particular persistent homology, have made it possible to compute topological invariants from noisy data. These homological methods provide a natural way to draw conclusions about global structure based on collections of local information.
We use persistent homology techniques to show that in some cases encounter traces can be used to deduce information about the topology of the physical space the experiment was conducted in, and to detect certain changes in the space. We also show that one can distinguish simulated encounter traces generated on a bounded rectangular grid from traces generated on a grid with opposite edges wrapped (a toroidal grid). Finally, we have found that nontrivial topological features also appear in real experimental encounter traces.
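Full Vietoris-Rips persistence in degree one (the loops needed to distinguish a toroidal grid) requires a dedicated library, but the degree-zero case can be computed directly from a distance matrix with a union-find over sorted edges. The sketch below, on assumed toy data rather than encounter traces, is only meant to give a feel for how birth and death scales are extracted.

```python
import numpy as np

def h0_persistence(D):
    """Degree-zero Vietoris-Rips persistence (component merge scales) from a distance
    matrix, computed with a union-find over edges sorted by length (Kruskal's MST)."""
    n = len(D)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    deaths = []
    edges = sorted((D[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(w)          # two components merge at scale w
    return np.array(deaths)           # n-1 finite death times (all components are born at 0)

# two well-separated clusters: one merge scale is much larger than the rest,
# reflecting the fact that the data has two connected components at small scales
rng = np.random.default_rng(0)
pts = np.vstack([rng.standard_normal((30, 2)), rng.standard_normal((30, 2)) + 8.0])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print("largest merge scales:", np.sort(h0_persistence(D))[-3:])
```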
John Wright (University of Illinois at Urbana-Champaign)
Mixed data segmentation via lossy data compression (poster)
Abstract: We consider the problem of clustering mixed data sampled from distributions of (possibly) varying intrinsic dimensionality, e.g., degenerate Gaussians or linear subspaces. We approach this problem from the perspective of lossy data compression, seeking a segmentation that minimizes the number of bits needed to code the data up to a prespecified distortion. The coding length is minimized via an agglomerative algorithm that merges the pair of groups such that the resulting coding length is minimal. This simple algorithm converges globally for a wide range of simulated examples. It also produces state-of-the-art experimental results on applications in image segmentation from texture features and motion segmentation from tracked point features.
Qiu Wu (University of Texas)
Orthant-wise gradient projection method for sparse reconstruction (poster)
Abstract: Many problems in signal processing and statistics involve
finding sparse solutions by solving the l1-regularized least-squares problem.
The orthant-wise method exploits the fact that the l1 term is a linear function over any given orthant. Hence, the objective function is differentiable orthant-wise.
We implement two variants of the orthant-wise gradient projection method:
one is based on a steepest-descent search direction and the other on a quasi-Newton search direction. Experimental results with compressive
sampling problems demonstrate that the performance of our method compares favorably with that of alternative methods in the literature.
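The orthant-wise projection steps themselves are not reproduced here; for orientation, the sketch below solves the same l1-regularized least-squares problem with ISTA, a standard proximal-gradient baseline, on a compressive-sampling style test with assumed sizes and regularization weight.

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Proximal-gradient (ISTA) baseline for  min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)                        # gradient of the smooth term
        x = x - step * g
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)   # soft threshold
    return x

# compressive-sampling style test: recover a sparse vector from few random measurements
rng = np.random.default_rng(0)
n, m, k = 200, 80, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = ista(A, b, lam=0.02)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```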
Dexuan Xie (University of Wisconsin)
New efficient algorithms for a general blood tissue transport-metabolism model and stiff differential equations
Abstract: Fast algorithms for simulating mathematical models of coupled blood-tissue
transport and metabolism are critical for the analysis of data on transport
and reaction in tissue. This talk will introduce a general blood tissue
transport-metabolism model governed by a large system of one-dimensional
hyperbolic partial differential equations, and then present a new parallel
algorithm for solving it. The key part of the new algorithm is to
approximate the model as a group of independent ordinary differential
equation (ODE) systems such that each ODE system has the same size as the
model and can be integrated independently. The accuracy of the algorithm is
demonstrated for solving a simple blood-tissue transport model with an
analytical solution. Numerical experiments were made for a large-scale
coupled blood tissue transport-metabolism model on a distributed-memory
parallel computer and a shared-memory parallel computer, showing the high
parallel efficiency of the algorithm.
In the second part of this talk, a well-known implicit Runge-Kutta
algorithm called the Radau IIA method will be discussed, which is a
favorite stiff ODE solver for the new parallel algorithm. The most time
consuming part of the Radau IIA method is to solve a large scale nonlinear
algebraic system of stage values. Currently, the widely used nonlinear
solver is still a simplified Newton method proposed by Liniger &
Willoughby in 1970. In practice, it may suffer from poor convergence,
forcing the Radau IIA method to select very small step sizes in order to
guarantee convergence. To provide the Radau IIA method with a robust
nonlinear solver, this talk will present a new simplified Newton algorithm
and discuss its convergence and performance. Numerical results confirm that
the new algorithm can have better convergence properties than the current
one and can significantly improve the performance of the Radau IIA method.
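For readers who want to experiment with a Radau IIA integrator (which uses a simplified-Newton inner solver of the kind discussed above), SciPy exposes one through solve_ivp. The snippet below applies it to a stiff van der Pol oscillator, a standard stiff test problem unrelated to the blood-tissue model of the talk; the parameter mu and tolerances are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Van der Pol oscillator with large mu: a classic stiff test problem.
mu = 1000.0

def vdp(t, y):
    return [y[1], mu * (1.0 - y[0] ** 2) * y[1] - y[0]]

sol = solve_ivp(vdp, (0.0, 3000.0), [2.0, 0.0], method='Radau', rtol=1e-6, atol=1e-8)
print(sol.success, sol.t.size, sol.y[:, -1])
```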
Allen Yang Yang (University of California, Berkeley)
High-dimensional multi-model estimation – its algebra,
statistics, and sparse representation (poster)
Abstract: Recent advances in information technologies have led to unprecedented large amounts of high-dimensional data from many emerging applications. The need for more advanced techniques to analyze such complex data calls for shifting research paradigms. In this presentation, I will overview and highlight several results in the area of estimation of mixture models in high-dimensional data spaces. Applications will be presented in problems such as motion segmentation, image segmentation, face recognition, and human action categorization. Through this presentation, I intend to emphasize the confluence of algebra and statistics that may lead to more advanced solutions in analyzing complex singular data structures such as mixture linear subspaces and nonlinear manifolds.
Chao Yang (Lawrence Berkeley National Laboratory)
A direct constrained minimization algorithm for
solving the Kohn-Sham equations
Abstract: I will present a direct constrained minimization (DCM) algorithm
for solving the Kohn-Sham equations. The key ingredients of this
algorithm involve projecting the Kohn-Sham total energy functional
into a sequence of subspaces of small dimension and seeking the
minimizer of the total energy functional within each subspace. The
minimizer of a subspace energy functional not only provides a
search direction along which the KS total energy functional decreases
but also gives an optimal "step length" to move along this search
direction. I will provide some numerical examples to demonstrate
the efficiency and accuracy of this approach and compare it
with the widely used method of self-consistent field (SCF) iteration.
I will also discuss a few other numerical issues in algorithms
designed to solve the Kohn-Sham equations.
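The nonlinear Kohn-Sham functional is beyond a short snippet, but the basic idea of projecting into a small subspace and minimizing there can be illustrated on a fixed (linear) symmetric matrix, where the subspace minimization reduces to a Rayleigh-Ritz step. This is only a linear analogue (close in spirit to LOBPCG-type iterations), not the DCM algorithm of the talk; the matrix and subspace choice are assumptions for illustration.

```python
import numpy as np

# Linear analogue of a subspace-minimization step: for a fixed symmetric "Hamiltonian" H,
# minimizing trace(X^T H X) over orthonormal X restricted to a small subspace (here
# span{X, H X}) is a Rayleigh-Ritz problem, and the restricted minimizer can only
# lower the energy. The nonlinear Kohn-Sham case replaces trace(X^T H X) by the
# total energy functional.
rng = np.random.default_rng(0)
n, k = 200, 4
R = rng.standard_normal((n, n))
H = (R + R.T) / 2
X, _ = np.linalg.qr(rng.standard_normal((n, k)))         # current orthonormal "orbitals"
for it in range(30):
    S, _ = np.linalg.qr(np.hstack([X, H @ X]))            # orthonormal basis of the small subspace
    w, V = np.linalg.eigh(S.T @ H @ S)                    # Rayleigh-Ritz in the subspace
    X = S @ V[:, :k]                                      # restricted minimizer of the trace
    if it % 10 == 0:
        print("energy:", np.trace(X.T @ H @ X))
print("reference minimum:", np.linalg.eigvalsh(H)[:k].sum())
```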
Lihi Zelnik-Manor (Technion-Israel Institute of Technology)
Approximate nearest subspace search with
applications to pattern recognition (poster)
Abstract: Linear and affine subspaces are commonly used to describe the
appearance of objects under different lighting, viewpoint,
articulation, and even identity. A natural problem arising from their
use is -- given a query image portion represented as a point in some
high dimensional space -- find a subspace near to the query. This talk
presents an efficient solution to the approximate nearest subspace
problem for both linear and affine subspaces. Our method is based on a
simple reduction to the problem of nearest point search, and can thus
employ tree-based search or locality-sensitive hashing to find a near
subspace. Further performance improvement is achieved by using random
projections to lower the dimensionality of the problem. We provide
theoretical proofs of correctness and error bounds of our construction,
and demonstrate its capabilities on synthetic and real data. Our
experiments demonstrate that an approximate nearest subspace can be
located significantly faster than the exact nearest subspace, while at
the same time it can find better matches compared to a similar search
on points, in the presence of variations due to viewpoint, lighting,
and so forth.
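As a baseline for what the approximate search accelerates, the exact point-to-affine-subspace distance and a brute-force nearest-subspace scan are easy to write down; the reduction to nearest-point search and the hashing/tree structures of the talk are not reproduced here, and the sizes below are illustrative.

```python
import numpy as np

def dist_to_affine_subspace(q, mu, U):
    """Distance from query q to the affine subspace {mu + U c}; U has orthonormal columns."""
    r = q - mu
    return np.linalg.norm(r - U @ (U.T @ r))

# Brute-force nearest-subspace search over a small database of random affine subspaces.
rng = np.random.default_rng(0)
D, d, n_sub = 50, 5, 200
subspaces = []
for _ in range(n_sub):
    U, _ = np.linalg.qr(rng.standard_normal((D, d)))      # orthonormal basis of a d-dim subspace
    subspaces.append((rng.standard_normal(D), U))
q = rng.standard_normal(D)
dists = [dist_to_affine_subspace(q, mu, U) for mu, U in subspaces]
print("nearest subspace:", int(np.argmin(dists)), "distance:", min(dists))
```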
Xiaojin Zhu (University of Wisconsin)
Semi-supervised learning by multi-manifold separation
Abstract: Semi-supervised learning on a single manifold has been the subject of intense study. We consider the setting of multiple manifolds, in which it is assumed that the target function is smooth within each manifold, yet the manifolds can intersect and partly overlap. We discuss our recent work to separate these manifolds from unlabeled data, and perform a 'mild' form of semi-supervised learning which is hopefully robust to the model assumption.
Rigoberto Advincula
University of Houston
10/31/2008 - 11/2/2008
Iman Aganj
University of Minnesota
10/27/2008 - 10/30/2008
Alina Alexeenko
Purdue University
10/31/2008 - 11/2/2008
Wesley D. Allen
University of Georgia
9/28/2008 - 10/1/2008
Bradley K. Alpert
National Institute of Standards and Technology
10/26/2008 - 10/30/2008
Arnaud Natesh Anantharaman
Ecole Nationale des Ponts et Chaussees
10/6/2008 - 10/12/2008
Ery Arias-Castro
University of California, San Diego
10/26/2008 - 10/31/2008
Donald G. Aronson
University of Minnesota
9/1/2002 - 8/31/2009
Donald G. Aronson
University of Minnesota
10/4/2008 - 10/4/2008
Alán Aspuru-Guzik
Harvard University
10/2/2008 - 10/3/2008
Alán Aspuru-Guzik
Harvard University
10/31/2008 - 11/2/2008
Gregory L Baker
Michigan State University
10/31/2008 - 11/2/2008
Amartya Sankar Banerjee
University of Minnesota
9/26/2008 - 10/3/2008
Gang Bao
Michigan State University
10/31/2008 - 11/2/2008
Leah Bar
University of Minnesota
10/27/2008 - 10/30/2008
Richard G. Baraniuk
Rice University
10/26/2008 - 10/27/2008
Rodney J. Bartlett
University of Florida
9/28/2008 - 10/1/2008
Axel D. Becke
Dalhousie University
9/28/2008 - 10/3/2008
Mikhail Belkin
Ohio State University
10/26/2008 - 10/30/2008
Peter Binev
University of South Carolina
10/25/2008 - 10/30/2008
Francisco Blanco-Silva
University of South Carolina
10/26/2008 - 10/30/2008
Edward Howard Bosch
National Geospatial Intelligence Agency
10/26/2008 - 10/31/2008
Khalid Boushaba
Iowa State University
10/3/2008 - 10/5/2008
Bastiaan J. Braams
Emory University
9/28/2008 - 11/8/2008
James Joseph Brannick
Pennsylvania State University
10/30/2008 - 11/2/2008
Michael P. Brenner
Harvard University
10/11/2008 - 10/13/2008
Maila Brucal
University of Kansas
10/3/2008 - 10/5/2008
Peter Brune
University of Chicago
9/8/2008 - 6/30/2009
Felipe Alfonso Bulat
Duke University
9/28/2008 - 10/4/2008
Kieron J. Burke
University of California, Irvine
9/29/2008 - 10/2/2008
Thomas H. Burns
Starkey Laboratories
10/17/2008 - 10/17/2008
Sun-Sig Byun
University of Iowa
9/26/2008 - 10/4/2008
Wei Cai
University of North Carolina - Charlotte
10/31/2008 - 11/2/2008
Maria-Carme T. Calderer
University of Minnesota
9/1/2008 - 6/30/2009
Hannah Callender
University of Minnesota
9/1/2007 - 8/31/2009
Eric Cances
CERMICS
9/1/2008 - 12/23/2008
Steve Cantrell
University of Miami
10/3/2008 - 10/5/2008
Gunnar Carlsson
Stanford University
10/28/2008 - 10/30/2008
Pete George Casazza
University of Missouri
10/26/2008 - 10/31/2008
William Austin Casey
Pacific Northwest National Laboratory
10/26/2008 - 10/30/2008
Isabelle Catto
Université de Paris IX (Paris-Dauphine)
9/26/2008 - 10/3/2008
Alessandro Cembran
University of Minnesota
9/26/2008 - 10/3/2008
Arindam Chakraborty
Pennsylvania State University
9/28/2008 - 10/3/2008
Frédéric Chazal
INRIA Saclay - Île-de-France
10/25/2008 - 10/30/2008
Guangliang Chen
University of Minnesota
10/27/2008 - 10/30/2008
Xianjin Chen
University of Minnesota
9/1/2008 - 8/31/2010
Xianjin Chen
University of Minnesota
10/4/2008 - 10/4/2008
Daniel M. Chipman
University of Notre Dame
9/14/2008 - 12/13/2008
Hi Jun Choe
University of Iowa
9/28/2008 - 10/4/2008
Matteo Cococcioni
University of Minnesota
9/29/2008 - 10/3/2008
Aron J. Cohen
Duke University
9/28/2008 - 10/3/2008
Chris Cosner
University of Miami College of Arts and Sciences
10/3/2008 - 10/5/2008
Ludovica Cecilia Cotta-Ramusino
University of Minnesota
10/1/2007 - 8/30/2009
Nathan R. M. Crawford
University of California, Irvine
9/27/2008 - 10/4/2008
Wolfgang Dahmen
RWTH Aachen
10/26/2008 - 10/29/2008
Steven Benjamin Damelin
Georgia Southern University
10/26/2008 - 10/30/2008
Ingrid Daubechies
Princeton University
10/29/2008 - 10/30/2008
James W. Davenport
Brookhaven National Laboratory
9/29/2008 - 10/3/2008
Mark Andrew Davenport
Rice University
10/26/2008 - 10/30/2008
Ajitha Devarajan
Iowa State University
9/28/2008 - 10/3/2008
Ronald DeVore
Texas A & M University
10/26/2008 - 10/29/2008
Kadir Diri
University of Southern California
9/28/2008 - 10/3/2008
David C. Dobson
University of Utah
10/31/2008 - 11/2/2008
Charles Doering
University of Michigan
10/11/2008 - 10/13/2008
Dan Dougherty
North Carolina State University
10/31/2008 - 11/2/2008
Qiang Du
Pennsylvania State University
10/31/2008 - 11/5/2008
Julio Duarte
Eastman Kodak Company
10/27/2008 - 10/30/2008
Marco F. Duarte
Rice University
10/26/2008 - 10/30/2008
Olivier Dubois
University of Minnesota
9/3/2007 - 8/31/2009
Olivier Dubois
University of Minnesota
10/4/2008 - 10/4/2008
Phillip Duxbury
Michigan State University
10/31/2008 - 11/2/2008
Weinan E
Princeton University
9/28/2008 - 10/2/2008
Weinan E
Princeton University
10/31/2008 - 11/6/2008
Lars Eldén
Linköping University
10/26/2008 - 10/30/2008
Ehsan Elhamifar
Johns Hopkins University
10/26/2008 - 10/30/2008
Ahmel El-Mawas
University of Minnesota
10/27/2008 - 10/30/2008
Maria Esteban
Université de Paris IX (Paris-Dauphine)
9/27/2008 - 11/15/2008
Kai Fan
North Carolina State University
9/25/2008 - 10/4/2008
Charles L. Fefferman
Princeton University
10/26/2008 - 10/29/2008
Jay P. Fillmore
University of San Diego
10/4/2008 - 10/4/2008
Heather Lyn Finotti
University of Tennessee
10/31/2008 - 11/2/2008
Daniel Flath
Macalester College
8/27/2008 - 12/20/2008
Andrea Floris
Freie Universität Berlin
9/28/2008 - 10/3/2008
Massimo Fornasier
Johann Radon Institute for Computational and Applied Mathematics
10/26/2008 - 10/30/2008
Roger Fosdick
University of Minnesota
10/4/2008 - 10/4/2008
Stephen Foster
Mississippi State University
10/31/2008 - 11/2/2008
Christopher Fraser
University of Chicago
8/27/2008 - 6/30/2009
Christopher Ray Fredregill
University of Minnesota
10/4/2008 - 10/4/2008
Mituhiro Fukuda
Tokyo Institute of Technology
9/25/2008 - 10/4/2008
Stephen Fulling
Texas A & M University
10/1/2008 - 10/30/2008
Alexander Gaenko
Iowa State University
9/28/2008 - 10/3/2008
Irene M. Gamba
University of Texas
10/31/2008 - 11/2/2008
Weiguo Gao
Fudan University
9/27/2008 - 12/13/2008
Carlos J. Garcia-Cervera
University of California, Santa Barbara
9/2/2008 - 12/12/2008
Caroline Gatti-Bono
Lawrence Livermore National Laboratory
10/2/2008 - 10/4/2008
Anna Gilbert
University of Michigan
10/11/2008 - 10/13/2008
Peter M.W. Gill
Australian National University
9/28/2008 - 10/3/2008
Benjamin David Goddard
University of Warwick
9/29/2008 - 10/10/2008
Alvina Goh
Johns Hopkins University
10/26/2008 - 10/30/2008
Guillermo Hugo Goldsztein
Georgia Institute of Technology
10/31/2008 - 11/2/2008
Jay Gopalakrishnan
University of Florida
9/1/2008 - 2/28/2009
Jay Gopalakrishnan
University of Florida
10/4/2008 - 10/4/2008
Andreas Görling
Friedrich-Alexander-Universität Erlangen-Nürnberg
9/28/2008 - 10/3/2008
John Greer
National Geospatial Intelligence Agency
10/26/2008 - 10/30/2008
Bella Grigorenko
M.V. Lomonosov Moscow State University
9/28/2008 - 10/3/2008
Eberhard K. U. Gross
Freie Universität Berlin
9/28/2008 - 10/3/2008
Yujin Guo
University of Minnesota
10/4/2008 - 10/4/2008
François Gygi
University of California, Davis
9/30/2008 - 10/3/2008
George A. Hagedorn
Virginia Polytechnic Institute and State University
9/28/2008 - 10/3/2008
Gloria Haro Ortega
Universitat Politecnica de Catalunya
10/25/2008 - 11/1/2008
Theodore Hatcher
Andrews University
10/3/2008 - 10/5/2008
Timothy F. Havel
Massachusetts Institute of Technology
9/28/2008 - 10/3/2008
Timothy F. Havel
Massachusetts Institute of Technology
10/31/2008 - 12/12/2008
Martin Head-Gordon
University of California, Berkeley
9/29/2008 - 10/3/2008
Chinmay Hegde
Rice University
10/26/2008 - 10/31/2008
Michael E. Henderson
IBM
10/26/2008 - 10/31/2008
William Henry
Mississippi State University
10/31/2008 - 11/2/2008
Mark S. Herman
University of Minnesota
9/1/2008 - 8/31/2010
Lotfi Hermi
University of Arizona
10/3/2008 - 10/5/2008
Gaston Hernandez
University of Connecticut
10/3/2008 - 10/5/2008
Jan S. Hesthaven
Brown University
10/31/2008 - 11/2/2008
Masahiro Higashi
University of Minnesota
9/26/2008 - 10/3/2008
Peter Hinow
University of Minnesota
9/1/2007 - 8/31/2009
Peter Hinow
University of Minnesota
10/4/2008 - 10/4/2008
Mark R. Hoffmann
University of North Dakota
9/28/2008 - 10/3/2008
Mary Ann Horn
Vanderbilt University
10/12/2008 - 10/14/2008
Dirk Hundertmark
University of Illinois at Urbana-Champaign
9/28/2008 - 10/10/2008
Yunkyong Hyon
University of Minnesota
9/1/2008 - 8/31/2010
Yunkyong Hyon
University of Minnesota
10/4/2008 - 10/4/2008
Olexandr Isayev
Jackson State University
9/28/2008 - 10/4/2008
Mark Iwen
University of Minnesota
9/1/2008 - 8/31/2010
Alexander Izzo
Bowling Green State University
9/1/2008 - 6/30/2009
Naresh Jain
University of Minnesota
10/4/2008 - 10/4/2008
Tony Jebara
Columbia University
10/27/2008 - 10/30/2008
Samson A. Jenekhe
University of Washington
10/31/2008 - 11/2/2008
Srividhya Jeyaraman
University of Minnesota
10/4/2008 - 10/4/2008
Srividhya Jeyaraman
University of Minnesota
9/1/2008 - 8/31/2010
Lijian Jiang
University of Minnesota
9/1/2008 - 8/31/2010
Shi Jin
University of Wisconsin
10/31/2008 - 11/1/2008
Max A. Jodeit
University of Minnesota
10/3/2008 - 10/4/2008
Erin R. Johnson
Duke University
9/28/2008 - 10/3/2008
Daniel D. Joseph
University of Minnesota
10/4/2008 - 10/4/2008
Ajay Joshi
University of Minnesota
10/27/2008 - 10/30/2008
Markos A. Katsoulakis
University of Massachusetts
10/31/2008 - 11/2/2008
Markus Keel
University of Minnesota
7/21/2008 - 6/30/2009
John Kemper
University of St. Thomas
10/4/2008 - 10/4/2008
Harvey B Keynes
University of Minnesota
10/4/2008 - 10/4/2008
Yongho Kim
University of Minnesota
9/26/2008 - 10/3/2008
Rollin A. King
Bethel University
9/29/2008 - 10/3/2008
Robert V. Kohn
New York University
10/11/2008 - 10/13/2008
Robert V. Kohn
New York University
10/31/2008 - 11/13/2008
Tamara G. Kolda
Sandia National Laboratories
10/26/2008 - 10/30/2008
Mario Koppen
TU München
9/28/2008 - 10/3/2008
Karol Kowalski
Pacific Northwest National Laboratory
9/28/2008 - 10/3/2008
Aliaksandr Krukau
Rice University
9/28/2008 - 10/3/2008
Anna Krylov
University of Southern California
9/25/2008 - 12/25/2008
Dan Kushnir
Yale University
10/26/2008 - 10/30/2008
Diane Lambert
Google Inc.
10/11/2008 - 10/13/2008
Arie Landau
University of Southern California
10/12/2008 - 10/28/2008
David Langreth
Rutgers University
9/29/2008 - 10/2/2008
Triet Minh Le
Yale University
10/26/2008 - 10/30/2008
Claude Le Bris
CERMICS
9/11/2008 - 5/30/2009
Federico Lecumberry
University of the Republic
10/27/2008 - 10/30/2008
Chiun-Chang Lee
National Taiwan University
10/4/2008 - 10/4/2008
Chiun-Chang Lee
National Taiwan University
8/26/2008 - 7/31/2009
Hijin Lee
Korea Advanced Institute of Science and Technology (KAIST)
10/4/2008 - 10/4/2008
Hijin Lee
Korea Advanced Institute of Science and Technology (KAIST)
9/29/2008 - 10/3/2008
Hijin Lee
Korea Advanced Institute of Science and Technology (KAIST)
10/27/2008 - 10/30/2008
Long Lee
University of Wyoming
10/31/2008 - 11/2/2008
Gilad Lerman
University of Minnesota
10/26/2008 - 10/30/2008
Howard A. Levine
Iowa State University
10/3/2008 - 10/5/2008
Stacey E. Levine
Duquesne University
10/25/2008 - 10/31/2008
Melvyn P. Levy
Duke University
9/28/2008 - 10/4/2008
Mathieu Lewin
Université de Cergy-Pontoise
9/26/2008 - 10/25/2008
Mark Lewis
University of Alberta
10/3/2008 - 10/4/2008
Bingtuan Li
University of Louisville
10/3/2008 - 10/5/2008
Jichun Li
University of Nevada
10/31/2008 - 11/2/2008
Tianjiang Li
Pennsylvania State University
10/26/2008 - 10/30/2008
Yongfeng Li
University of Minnesota
9/1/2008 - 8/31/2010
Yongfeng Li
University of Minnesota
10/4/2008 - 10/4/2008
Hstau Y Liao
Columbia University
10/26/2008 - 10/30/2008
Lek-Heng Lim
University of California, Berkeley
10/26/2008 - 10/30/2008
Florence J. Lin
University of Southern California
9/30/2008 - 10/2/2008
Tai-Chia Lin
National Taiwan University
8/23/2008 - 7/31/2009
Roland Lindh
Lund University
9/28/2008 - 10/3/2008
Walter Littman
University of Minnesota
10/4/2008 - 10/4/2008
Chun Liu
University of Minnesota
9/1/2008 - 8/31/2010
Chun Liu
University of Minnesota
10/4/2008 - 10/4/2008
Di Liu
Michigan State University
10/31/2008 - 11/2/2008
Kevin Long
Sandia National Laboratories
10/31/2008 - 11/2/2008
Carlos Silva Lopez
University of Minnesota
9/26/2008 - 10/3/2008
Gang Lu
California State University
9/28/2008 - 10/4/2008
Gang Lu
California State University
10/31/2008 - 11/2/2008
Jianfeng Lu
Princeton University
9/25/2008 - 10/4/2008
Roger Lui
Worcester Polytechnic Institute
10/3/2008 - 10/5/2008
Russell Luke
University of Delaware
9/28/2008 - 10/3/2008
Mitchell Luskin
University of Minnesota
9/1/2008 - 6/30/2009
Yi Ma
University of Illinois at Urbana-Champaign
10/26/2008 - 10/30/2008
Taylor Joseph Mach
Bethel University
9/29/2008 - 10/3/2008
Mauro Maggioni
Duke University
10/26/2008 - 10/30/2008
Julien Mairal
INRIA
10/26/2008 - 11/2/2008
Albert Marden
University of Minnesota
10/1/2008 - 10/1/2008
Alex Marker
Schott North America, Inc.
10/31/2008 - 11/2/2008
Laurence D. Marks
Northwestern University
9/28/2008 - 10/3/2008
Vasileios Maroulas
University of Minnesota
9/1/2008 - 8/31/2010
José Mario Martínez
State University of Campinas (UNICAMP)
9/28/2008 - 10/3/2008
Hiroshi Matano
University of Tokyo
10/4/2008 - 10/4/2008
James McCusker
Michigan State University
10/31/2008 - 11/2/2008
Juan C. Meza
Lawrence Berkeley National Laboratory
9/25/2008 - 10/4/2008
Steven L. Mielke
University of Minnesota
9/26/2008 - 10/3/2008
Willard Miller Jr.
University of Minnesota
10/1/2008 - 10/1/2008
Willard Miller Jr.
University of Minnesota
10/27/2008 - 10/30/2008
Washington Mio
Florida State University
10/26/2008 - 10/30/2008
Peter Monk
University of Delaware
10/31/2008 - 11/1/2008
Yoichiro Mori
University of Minnesota
10/4/2008 - 10/4/2008
Paula Mori-Sánchez
Duke University
9/28/2008 - 10/3/2008
Dmitriy Morozov
Duke University
10/26/2008 - 10/30/2008
Zuhair Nashed
University of Central Florida
10/31/2008 - 11/2/2008
Ramesh Natarajan
IBM Research Division
10/26/2008 - 10/30/2008
Junalyn Navarra-Madsen
Texas Woman's University
9/25/2008 - 10/3/2008
Alexander V. Nemukhin
Moscow State University
9/25/2008 - 10/3/2008
Tristan Nguyen
Office of Naval Research
10/26/2008 - 10/30/2008
Olalla Nieto Faza
University of Minnesota
9/26/2008 - 10/3/2008
Yasunori Nishimori
National Institute of Advanced Industrial Science and Technology
10/26/2008 - 10/30/2008
Partha Niyogi
University of Chicago
10/26/2008 - 10/30/2008
Arthur J. Nozik
Department of Energy
10/31/2008 - 11/2/2008
Duane Nykamp
University of Minnesota
10/4/2008 - 10/4/2008
Andrew Odlyzko
University of Minnesota
10/4/2008 - 10/4/2008
Peter J. Olver
University of Minnesota
10/4/2008 - 10/4/2008
Peter J. Olver
University of Minnesota
10/12/2008 - 10/12/2008
John E. Osborn
University of Maryland
10/3/2008 - 10/5/2008
Biao Ou
University of Toledo
10/3/2008 - 10/5/2008
Miao-Jung Yvonne Ou
Oak Ridge National Laboratory
9/25/2008 - 10/3/2008
Victor Padron
Normandale Community College
10/4/2008 - 10/4/2008
Igor Pak
University of Minnesota
10/27/2008 - 10/30/2008
Gianluca Panati
Università di Roma "La Sapienza"
9/24/2008 - 10/4/2008
Stephen D Pankavich
Indiana University
10/31/2008 - 11/7/2008
John E. Pask
Lawrence Livermore National Laboratory
9/30/2008 - 10/4/2008
George Pau
Lawrence Berkeley National Laboratory
9/28/2008 - 10/3/2008
Larry Payne
Cornell University
10/1/2008 - 10/5/2008
John P. Perdew
Tulane University
9/26/2008 - 10/3/2008
Arlie O. Petters
Duke University
10/11/2008 - 10/13/2008
Peter Polacik
University of Minnesota
10/4/2008 - 10/4/2008
Craig T. Poling
Lockheed Martin
10/11/2008 - 10/13/2008
Matej Praprotnik
Max Planck Institute for Polymer Research
10/8/2008 - 11/8/2008
Oleg Prezhdo
University of Washington
10/31/2008 - 11/2/2008
Emil Prodan
Yeshiva University
9/28/2008 - 10/10/2008
Emil Prodan
Yeshiva University
10/31/2008 - 11/2/2008
Keith Promislow
Michigan State University
10/31/2008 - 11/6/2008
Ignacio Ramirez
University of Minnesota
10/27/2008 - 10/28/2008
Shankar Rao
University of Illinois at Urbana-Champaign
10/27/2008 - 10/28/2008
Peter Rejto
University of Minnesota
10/4/2008 - 10/4/2008
Donald Richards
Pennsylvania State University
10/11/2008 - 10/13/2008
Christian Ringhofer
Arizona State University
10/31/2008 - 11/2/2008
Irina Rish
IBM
10/26/2008 - 10/28/2008
Marielba Rojas
Technical University of Denmark
9/28/2008 - 10/4/2008
Adrienn Ruzsinszky
Tulane University
9/26/2008 - 10/3/2008
Murti Salapaka
University of Minnesota
10/27/2008 - 10/30/2008
Paul E. Sacks
Iowa State University
10/3/2008 - 10/5/2008
Fadil Santosa
University of Minnesota
7/1/2008 - 6/30/2010
Fadil Santosa
University of Minnesota
10/4/2008 - 10/4/2008
Guillermo R. Sapiro
University of Minnesota
10/26/2008 - 10/30/2008
Duane Sather
University of Colorado
10/3/2008 - 10/5/2008
Berkant Savas
Linköping University
10/26/2008 - 10/30/2008
Andreas Savin
Université de Paris VI (Pierre et Marie Curie)
10/8/2008 - 11/7/2008
Arnd Scheel
University of Minnesota
9/1/2008 - 6/30/2009
Arnd Scheel
University of Minnesota
10/4/2008 - 10/4/2008
Roman Schubert
University of Bristol
10/5/2008 - 11/8/2008
Ridgway Scott
University of Chicago
9/1/2008 - 6/30/2009
Gustavo E. Scuseria
Rice University
9/29/2008 - 10/1/2008
George R. Sell
University of Minnesota
10/4/2008 - 10/4/2008
Tsvetanka Sendova
University of Minnesota
9/1/2008 - 8/31/2010
Tsvetanka Sendova
University of Minnesota
9/1/2008 - 10/31/2008
James Serrin
University of Minnesota
10/4/2008 - 10/4/2008
Chehrzad Shakiban
University of Minnesota
10/4/2008 - 10/4/2008
Chehrzad Shakiban
University of Minnesota
10/12/2008 - 10/12/2008
Yuk Sham
University of Minnesota
9/1/2008 - 6/30/2009
David H. Sharp
Los Alamos National Laboratory
10/10/2008 - 10/14/2008
David C. Sherrill
Georgia Institute of Technology
9/29/2008 - 10/1/2008
Stephen Shipman
Louisiana State University
10/31/2008 - 11/2/2008
Yoel Shkolnisky
Yale University
10/26/2008 - 10/30/2008
Chi-Wang Shu
Brown University
10/31/2008 - 11/2/2008
Heinz Siedentop
Ludwig-Maximilians-Universität München
9/22/2008 - 12/19/2008
Amit Singer
Princeton University
10/26/2008 - 10/30/2008
Ravishankar Sivalingam
University of Minnesota
10/27/2008 - 10/28/2008
Lyudmila V. Slipchenko
Iowa State University
9/25/2008 - 10/2/2008
Richard Souvenir
University of North Carolina - Charlotte
10/26/2008 - 10/30/2008
Andrew M. Stein
University of Minnesota
9/1/2007 - 8/31/2009
Andrew M. Stein
University of Minnesota
10/4/2008 - 10/4/2008
Marvin Stein
University of Minnesota
10/4/2008 - 10/4/2008
Panagiotis Stinis
University of Minnesota
10/27/2008 - 10/30/2008
Gabriel Stoltz
École Nationale des Ponts-et-Chaussées (ENPC)
9/23/2008 - 10/2/2008
Bernd Sturmfels
University of California, Berkeley
10/12/2008 - 10/13/2008
Jianzhong Su
University of Texas
10/3/2008 - 10/5/2008
Jianwei Sun
Tulane University
9/28/2008 - 10/4/2008
Qiyu Sun
University of Central Florida
10/31/2008 - 11/2/2008
Vladimir Sverak
University of Minnesota
10/11/2008 - 10/13/2008
Arthur Szlam
University of California, Los Angeles
10/26/2008 - 10/30/2008
Jianmin Tao
Los Alamos National Laboratory
9/28/2008 - 10/3/2008
P. Craig Taylor
Colorado School of Mines
10/31/2008 - 11/1/2008
William Toczyski
University of Minnesota
10/27/2008 - 10/30/2008
Carl Toews
Duquesne University
10/26/2008 - 10/30/2008
David J. Tozer
University of Durham
9/27/2008 - 10/4/2008
Sergei Tretiak
Los Alamos National Laboratory
10/31/2008 - 11/1/2008
Donald G. Truhlar
University of Minnesota
9/1/2008 - 6/30/2009
Birkan Tunc
Istanbul Technical University
10/25/2008 - 11/1/2008
Erkan Tüzel
University of Minnesota
9/1/2007 - 8/31/2009
George Vacek
Hewlett Packard
9/28/2008 - 10/3/2008
Rosendo Valero
University of Minnesota
9/26/2008 - 10/3/2008
Steven M. Valone
Los Alamos National Laboratory
9/8/2008 - 11/30/2008
Steven M. Valone
Los Alamos National Laboratory
10/4/2008 - 10/4/2008
Mark van Schilfgaarde
Arizona State University
10/31/2008 - 11/2/2008
M. Alex O. Vasilescu
SUNY
10/26/2008 - 10/30/2008
René Vidal
Johns Hopkins University
10/26/2008 - 10/30/2008
Oleg A. Vydrov
Massachusetts Institute of Technology
9/28/2008 - 10/4/2008
Michael Wakin
Colorado School of Mines
10/26/2008 - 10/29/2008
Brenton Walker
Laboratory for Telecommunications Sciences
10/26/2008 - 11/1/2008
Homer Walker
Worcester Polytechnic Institute
9/28/2008 - 10/3/2008
Hong Wang
University of South Carolina
10/31/2008 - 11/2/2008
Lin-Wang Wang
Lawrence Berkeley National Laboratory
10/31/2008 - 11/2/2008
Qi Wang
University of South Carolina
10/31/2008 - 11/2/2008
Yi Wang
University of Minnesota
10/27/2008 - 10/30/2008
Zhian Wang
University of Minnesota
9/1/2007 - 8/31/2009
Zhian Wang
University of Minnesota
10/4/2008 - 10/4/2008
Henry A. Warchall
National Science Foundation
9/29/2008 - 10/1/2008
Henry A. Warchall
National Science Foundation
10/31/2008 - 11/2/2008
Hans Weinberger
University of Minnesota
10/4/2008 - 10/4/2008
Jonathan Tyler Whitehouse
University of Minnesota
10/27/2008 - 10/30/2008
Colin Wolden
Colorado School of Mines
10/31/2008 - 11/2/2008
John Wright
University of Illinois at Urbana-Champaign
10/26/2008 - 10/30/2008
Margaret H. Wright
New York University
10/11/2008 - 10/13/2008
Qiu Wu
University of Texas
10/26/2008 - 10/31/2008
Dexuan Xie
University of Wisconsin
9/4/2008 - 12/15/2008
Wei Xiong
University of Minnesota
9/1/2008 - 8/31/2010
Wei Xiong
University of Minnesota
10/4/2008 - 10/4/2008
Zhenli Xu
University of North Carolina - Charlotte
9/25/2008 - 10/3/2008
Jue Yan
Iowa State University
10/31/2008 - 11/1/2008
Allen Yang Yang
University of California, Berkeley
10/26/2008 - 10/30/2008
Chao Yang
Lawrence Berkeley National Laboratory
9/8/2008 - 11/8/2008
Fei Yang
University of Minnesota
10/27/2008 - 10/30/2008
Ke Yang
University of Minnesota
9/26/2008 - 10/3/2008
Weitao Yang
Duke University
9/28/2008 - 10/1/2008
Xingzhou Yang
Mississippi State University
10/31/2008 - 11/2/2008
Ahmad S. Yasamin
Indiana University
10/26/2008 - 10/30/2008
Luping Yu
University of Chicago
10/31/2008 - 11/1/2008
Ofer Zeitouni
University of Minnesota
10/1/2008 - 10/1/2008
Lihi Zelnik-Manor
Technion-Israel Institute of Technology
10/26/2008 - 10/30/2008
Teng Zhang
University of Minnesota
10/27/2008 - 10/30/2008
Weigang Zhong
University of Minnesota
9/1/2008 - 8/31/2010
Xiaojin Zhu
University of Wisconsin
10/26/2008 - 10/30/2008
Yu Zhuang
Texas Tech University
10/30/2008 - 11/2/2008