
Second Chances and Closing Remarks 
Abstract: No Abstract 
Saugata Basu (Georgia Institute of Technology) 
On the number of homotopy types of fibres of a definable map 
Abstract: I will describe some results giving a single exponential upper bound on
the number of possible homotopy types of the fibres of a Pfaffian map, in terms
of the format of its graph.
In particular,
we show that if a semialgebraic set S ⊂ ℝ^{m+n}
is defined by a Boolean formula with s polynomials of degrees less than d, and
π: ℝ^{m+n} → ℝ^{n} is the projection
onto a subspace, then the number of different homotopy types of fibres of
π does not exceed (2^{m} snd)^{O(nm)}.
All previously known bounds were doubly exponential.
As applications of our main results we prove single exponential bounds on the
number of homotopy types of semialgebraic sets defined by
polynomials having a fixed number of monomials in their support
(we need to fix only the number of monomials, not the support set
itself), as well as by polynomials with bounded additive complexity.
We also prove single exponential
upper bounds on the radii of balls guaranteeing local contractibility
for semialgebraic sets defined by polynomials with integer coefficients.
(Joint work with N. Vorobjov). 
Daniel J. Bernstein (University of Illinois) 
The tangent FFT 
Abstract: My goal in this talk is to advertise an algorithm found by
James Van Buskirk, the first improvement in more than thirty years in
the exact complexity of the discrete Fourier transform over the reals.
The previous speed record was held by the split-radix FFT, announced
by Yavne in 1968 and widely understood since the early 1980s. The
split-radix FFT uses 4n lg n - 6n + 8 operations over the reals for a
size-n complex DFT when n is a large power of 2, and therefore
(12+o(1)) n lg n operations for a complex cyclic convolution of
length n. Bruun's real-factor FFT also uses (12+o(1)) n lg n
operations. An analysis by Johnson and Frigo shows that Van Buskirk's
new algorithm uses only (34/3+o(1)) n lg n operations. 
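For a quick sense of the gain, the two leading constants for cyclic convolution quoted above can be compared directly; a minimal sketch (only the n lg n terms are compared, lower-order terms are ignored):

```python
from fractions import Fraction

# leading coefficients of n lg n for a length-n complex cyclic convolution
split_radix = Fraction(12)        # Yavne's 1968 record
tangent_fft = Fraction(34, 3)     # Van Buskirk, per the Johnson-Frigo analysis

savings = 1 - tangent_fft / split_radix
print(savings)  # 1/18: about a 5.6% reduction in the leading term
```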
Sara Billey (University of Washington) 
Grassmannians and Schubert varieties 
Abstract: The study of Schubert varieties has grown out of questions in enumerative
geometry from the 19th century. This field has flourished over the
past fifty years and now has applications in algebraic geometry,
representation theory, combinatorics, physics, computer graphics, and
economics. We will define Schubert varieties in the context of
Grassmannians and flag varieties. These varieties have many
interesting properties determined by combinatorial data like
partitions, permutations, posets, and graphs. We present five fun
facts on Schubert varieties and some open problems in the field. 
Stanislav Bulygin (TU Kaiserslautern) 
Decoding linear codes via solving systems of
polynomial equations 
Abstract: This is a joint work of the presenter and Ruud Pellikaan (Technical University of Eindhoven, Netherlands). We
investigate the question of decoding linear codes, in particular up to half the minimum distance. We propose a method based on solving systems of polynomial equations over a finite field. Such a method was already applied quite successfully to cyclic codes; we move on to the general case. We concentrate on solving the systems that emerge with the use of Groebner bases. The poster reflects the theoretical framework, the main result (the system we want to solve has a unique solution, etc.), and also some results on complexity estimation as well as experimental results. 
Philip Robert Busse (University of Kentucky) 
List decoding BCH codes 
Abstract: List decoding Reed-Solomon codes via an interpolation and root-finding algorithm was pioneered by Madhu Sudan in the mid to late 1990's. In 2006, Kwankyu Lee and Michael O'Sullivan developed a Gröbner basis based implementation of the interpolation algorithm, which they showed was an efficient generalization of the Berlekamp-Massey algorithm.
We propose to apply these ideas to list decoding BCH codes by
presenting BCH codes as subfield subcodes of generalized Reed-Solomon
codes and to explore the resulting algorithm. In particular, we seek ways to optimize the new algorithm and to compare it with the Berlekamp-Massey algorithm. 
Peter Bürgisser (Universität-GHS Paderborn) 
The probability that a slight perturbation of a numerical analysis problem is difficult 
Abstract: The running time of many iterative numerical algorithms is
dominated by the condition number of the input,
a quantity measuring the sensitivity of the solution with
regard to small perturbations of the input.
Examples are iterative methods of linear algebra,
interior-point methods of linear and convex optimization,
as well as homotopy methods for solving systems of polynomial
equations.
Spielman and Teng introduced in 2001 the seminal concept of
smoothed analysis, which
arguably blends the best of both worst-case and average-case
analysis of algorithms.
This led to a much better understanding of the success of the
simplex method in practice.
We present a general theorem providing smoothed analysis
estimates for conic condition numbers of problems
of numerical analysis. Our probability estimates depend only on
geometric invariants of the corresponding sets
of ill-posed inputs. Several applications to linear and
polynomial equation solving show that the estimates obtained
in this way are easy to derive and quite accurate. The main
theorem is based on a volume estimate of ε-tubular
neighborhoods around a real algebraic subvariety of a sphere,
intersected with a disk of radius σ. Besides
ε and σ, this bound depends only on the dimension of the
sphere and on the degree of the defining equations.
This is joint work with Felipe Cucker and Martin Lotz. 
Jennifer Chayes (Microsoft Research) 
Math matters - IMA public lecture: Epidemics in technological and social networks: The downside of six degrees of separation 
Abstract: During the past decade, complex networks have become increasingly important in communication and information technology. Vast, self-engineered networks, like the Internet, the World Wide Web, and Instant Messaging Networks, have facilitated the flow of information, and served as media for social and economic interaction. In social networks, the ease of information flow goes by many names: the "small world" phenomenon, the "Kevin Bacon phenomenon," and "six degrees of separation"—the claim that any two people on earth can be connected through a chain of acquaintances with at most five intermediaries. Unfortunately, many of the properties that facilitate information transmission also facilitate the spread of viruses in both technological and social networks. Dr. Chayes uses simple mathematical models to explain these epidemics and to examine strategies for their containment. 
Iwan Duursma (University of Illinois at Urbana-Champaign) 
Strongly self-orthogonal codes for secure computation 
Abstract: The well-known Shamir secret sharing scheme uses polynomial
interpolation to recover a shared secret. The scheme and its
application to secure computation generalize to algebraic
curve based schemes (Chen-Cramer 2006). For secure computation
against an active adversary a scheme needs to be strongly
multiplicative. We show that this can be achieved by using
what we call strongly self-orthogonal codes.
(Joint work with various authors) 
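As background, Shamir's scheme can be sketched in a few lines over a prime field (a toy illustration only, not the algebraic-curve generalization discussed in the talk; the modulus P and the parameters are arbitrary choices):

```python
import random

P = 2**31 - 1  # a prime modulus; toy parameter choice

def share(secret, k, n):
    # random polynomial of degree k-1 with constant term = secret;
    # share i is the evaluation at x = i
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = share(12345, k=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 12345
```

Any k shares determine the degree-(k-1) polynomial, hence the secret; fewer than k reveal nothing.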
Heide GluesingLuerssen (University of Kentucky) 
The weight adjacency matrix of a convolutional code 
Abstract: Convolutional codes can be described by linear input-state-output systems. This gives rise to a state transition graph and an associated weight adjacency matrix. The latter counts in a detailed way the weights occurring in the code. After discussing some uniqueness issues we will present a MacWilliams identity theorem for convolutional codes and their duals in terms of the weight adjacency matrix. Furthermore, we will discuss isometries for convolutional codes and their effect on the weight adjacency matrix. It will be shown that for a particular class of codes the weight adjacency matrix forms a complete invariant under monomial equivalence, that is, under permutation and rescaling of the codeword coordinates. 
Venkat Guruswami (University of Washington) 
List decoding with optimal data rate 
Abstract: The fundamental tradeoff in coding theory is the one
between the rate of the code (a measure of amount of redundancy
introduced) and the amount of errors that can be corrected. In this
talk, I will describe an explicit construction of codes that achieves
the optimal tradeoff between these parameters, for a worst-case noise model where the channel can corrupt the codeword arbitrarily subject to a bound on the total number of errors.
Formally, for every 0 < R < 1 and eps > 0, we present an explicit
construction of codes of rate R (which encode Rn symbols into n
symbols) that can be list decoded in polynomial time from up to a
fraction (1 - R - eps) of errors. Clearly, one needs at least Rn correct
symbols in the received word to have any hope of identifying the Rn
message symbols, so recovering from (R+eps)n correct symbols is
information-theoretically best possible. Our codes are simple to
describe: they are certain *folded* Reed-Solomon codes, which are just
Reed-Solomon codes, but viewed as a code over a larger alphabet by a
careful bundling of codeword symbols.
The talk will introduce the problem of list decoding, and
then give a peek into the algebraic ideas and constructions that lead
to the above result. Time permitting, I will also describe some challenging questions concerning algebraicgeometric codes that, if resolved, could improve the decoding complexity as one approaches the optimal tradeoff.
Based on joint work with Atri Rudra (UW).
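As a numeric illustration of this tradeoff, a trivial sketch; the comparison with the classical unique-decoding radius (1 - R)/2 is my addition, not from the abstract:

```python
def list_decoding_radius(R, eps):
    # folded Reed-Solomon codes: a 1 - R - eps fraction of errors is correctable
    return 1 - R - eps

def unique_decoding_radius(R):
    # classical half-the-minimum-distance decoding for MDS codes: (1 - R)/2
    return (1 - R) / 2

R, eps = 0.5, 0.01
print(list_decoding_radius(R, eps))  # 0.49
print(unique_decoding_radius(R))     # 0.25
```

At rate 1/2 the list decoder tolerates nearly twice the error fraction of unique decoding.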

Leonid Gurvits (Los Alamos National Laboratory) 
Polynomial time algorithms to approximate mixed volumes within a simply exponential factor 
Abstract: Let K = (K_{1}, …, K_{n}) be an n-tuple of
convex compact subsets in the Euclidean space
R^{n}, and let V (·) be the Euclidean volume in
R^{n}. The Minkowski polynomial
V_{K} is defined as V_{K}(λ_{1},
…, λ_{n}) = V (λ_{1}K_{1}
+…+λ_{n}K_{n}) and the mixed volume
V(K_{1}, …, K_{n}) as
V(K_{1}, …, K_{n}) =
∂^{n} / ∂λ_{1} … ∂λ_{n}
V_{K}(λ_{1}, …, λ_{n}).
The mixed volume is one of the backbones of convexity theory.
Since the BKH theorem, the mixed volume (and its generalizations)
has become crucially important in computational algebraic geometry.
We present in this talk randomized and deterministic algorithms to
approximate the mixed volume of well-presented convex compact sets. Our main result is
a poly-time randomized algorithm which approximates
V (K_{1}, …, K_{n}) with multiplicative
error e^{n}, and with better rates if the affine dimensions of most of the sets
K_{i} are small.
Because of the famous Barany-Furedi lower bound, even such a rate is not achievable by a poly-time deterministic oracle algorithm.
Our approach is based on a particular convex relaxation, via geometric
programming, of log(V (K_{1}, …,
K_{n})). We prove the mixed volume analogues of the Van der
Waerden and the Schrijver/Valiant conjectures on the permanent. These results, interesting
on their own, allow us to "justify" the above-mentioned convex relaxation,
which is solved using the ellipsoid method and a randomized poly-time
algorithm for the approximation of the volume of a convex set. 
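For axis-parallel boxes the Minkowski polynomial is a product of linear forms, and the mixed volume (in the normalization of the abstract, i.e. the coefficient of λ_1⋯λ_n) is exactly a permanent of the matrix of side lengths; this is one reason permanent conjectures enter the picture. A small sketch checking the identity via the polarization (inclusion-exclusion) formula:

```python
import itertools, math, random

def volume_poly(A, lam):
    # K_i = box with side lengths A[i][0..n-1]; V_K(lam) = prod_j sum_i lam_i A[i][j]
    n = len(A)
    return math.prod(sum(lam[i] * A[i][j] for i in range(n)) for j in range(n))

def mixed_volume(A):
    # coefficient of lam_1*...*lam_n, extracted by inclusion-exclusion over 0/1 values
    n = len(A)
    return sum((-1) ** (n - sum(S)) * volume_poly(A, S)
               for S in itertools.product((0, 1), repeat=n))

def permanent(A):
    n = len(A)
    return sum(math.prod(A[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

A = [[random.randint(1, 5) for _ in range(3)] for _ in range(3)]
print(mixed_volume(A) == permanent(A))  # True for boxes
```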
Herwig Hauser (Leopold-Franzens-Universität Innsbruck) 
Resolution of singularities 
Abstract: This is a spectacular subject, both by its many applications
and its intricate proof. We describe in the talk the main ideas
(after Zariski, Hironaka, Abhyankar and various other people)
how to use blowups for the resolution of singular varieties,
and how to build up an induction procedure to choose at each
step of the resolution process the center of the next blowup and
to show that the singularities have improved after blowup. The
key constructions and phenomena are illustrated by
visualizations of surface singularities.
As the talk will give mostly the intuitive ideas, we refer
the interested reader to three notes we have written on the
subject for details. Quite complete lists of other references
can also be found there.
The three references to my talk are on my website
www.hh.hauser.cc 
Pascal Koiran (École Normale Supérieure de Lyon) 
Decision versus evaluation in algebraic complexity theory 
Abstract: Two main categories of problems are studied in algebraic complexity theory:
evaluation problems and decision problems. A typical example of an evaluation
problem is the evaluation of the permanent of a matrix. Such problems can be
studied within Valiant's framework.
Deciding whether a multivariate polynomial has a real root is a typical example
of a decision problem. This problem is NP-complete in the Blum-Shub-Smale model
of computation over the real numbers.
In this talk I will present a transfer theorem which shows that if certain
evaluation problems are easy, then other decision problems (including the
above-mentioned NP-complete problem) are easy too.
Therefore, to show that P is different from NP over the set of real
numbers, one should first be able to show that certain evaluation problems are
hard. 
Tamara Gibson Kolda (Sandia National Laboratories) 
IMA/MCIM Industrial problems seminar: Tensor decompositions, the MATLAB tensor toolbox, and applications to data analysis 
Abstract: Tensors (also known as multidimensional arrays or N-way arrays) provide powerful tools for data representation and analysis. Consequently, they have been used in a variety of sciences ranging from chemometrics to psychometrics and most recently to data mining. In this talk, I'll provide a brief tutorial on tensors and their decompositions, assuming only a background in linear algebra. I will then describe the MATLAB Tensor Toolbox for working with dense, sparse, and structured tensors. I'll conclude with examples from several data mining contexts including web mining and cross-language information retrieval. This is joint work with Brett Bader, Sandia National Labs. 
V. Lakshmibai (Northeastern University) 
Ubiquity of Schubert varieties 
Abstract: Hodge gave a basis for the homogeneous coordinate rings of the Grassmannian
and its Schubert varieties in terms of "standard monomials" in the Plücker
coordinates. This classical work of Hodge was extended to the generalized flag variety
and its Schubert varieties by Lakshmibai-Littelmann-Musili-Seshadri. We shall give
a survey talk on this generalization and will also relate many important algebraic
varieties to Schubert varieties. 
Joseph M. Landsberg (Texas A & M University) 
Geometry and the complexity of matrix multiplication 
Abstract: I'll give geometric formulations
and interpretations of some of the main results regarding the
complexity of matrix multiplication. I'll also explain the
relevance of this geometry for areas such as
algebraic statistics and quantum computing. 
Grégoire Lecerf (Université de Versailles Saint-Quentin-en-Yvelines) 
New recombination techniques for polynomial factorization algorithms based on Hensel lifting 
Abstract: I will present new deterministic and probabilistic
recombination
algorithms to compute the irreducible factorization of
bivariate
polynomials via the classical Hensel lifting technique, and for
the
dense representation model. In bidegree (d_{x},
d_{y}) with d_{y} ≤
d_{x}, and whatever the characteristic of the base field is,
these algorithms only require the precision of the lifting to be
d_{x}+1. The cost of the deterministic recombination algorithm
is subquadratic in d_{x} d_{y}, and the probabilistic version is
faster by
a factor of d_{y}. At the end, I will explain how these
algorithms can
be adapted to the computation of the absolute factorization,
and how
they extend to more than 2 variables. 
Kwankyu Lee (Korea Institute for Advanced Study (KIAS)) 
A complexity-reduced interpolation algorithm for
soft-decision decoding of Reed-Solomon codes 
Abstract: Soon after Lee and O'Sullivan proposed a new interpolation algorithm for
algebraic soft-decision decoding of Reed-Solomon codes, there have been some
attempts to apply a coordinate transformation technique to the new
algorithm, with a remarkable complexity reducing effect. We propose a
conceptually simple way of applying the transformation technique to the
interpolation algorithm. 
Anton Leykin (University of Minnesota Twin Cities) 
IMA postdoc seminar: F...? 
Abstract: Algorithms for computing Groebner bases are at the core of all computer algebra systems. One of the fastest implementations of a Groebner basis method is F4, produced by Faugère, which makes use of fast linear algebra. Its modification, F5, has been claimed to crack certain examples intractable by other methods. But what is behind F4 and F5? 
Diane Maclagan (Rutgers University) 
Toric geometry 
Abstract: This talk gives a brief introduction to the theory of toric
varieties. In the first half of the talk I give seven different
definitions of a toric variety and compare them briefly. In the second
half I describe some of the uses of toric varieties. The first of these
is as a testing ground for algebraic geometric conjectures, as many
geometric questions about toric varieties can be turned into combinatorial
questions. A second use is as a good ambient variety in which to embed
and study other varieties, using the fact that smooth toric varieties are
natural generalizations of projective space. 
Gregorio Malajovich (Federal University of Rio de Janeiro) 
Algebraic geometry and applications seminar: On sparse polynomial systems, mixed volumes and condition numbers 
Abstract: The abstract is posted as a PDF file at
http://www.ima.umn.edu/20062007/seminars/malajovich/talkima.pdf. 
Susan Margulies (University of California) 
NP, coNP and the Nullstellensatz: lower bounds for
stable set and graph coloring Nullstellensätze 
Abstract: Systems of polynomial equations over the complex numbers can be used
to characterize NP-complete graph-theoretic decision problems. From
the point of view of computer algebra and symbolic computation, these
are interesting polynomial systems because they are provably hard:
solving them is as hard as solving the underlying NP-complete problem.
Furthermore, unless NP = coNP, there must exist infinitely many instances of
these infeasible systems whose Hilbert Nullstellensatz certificates
grow with respect to the underlying graphs. 
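One standard encoding of this kind expresses 3-colorability: vertex equations x_i^3 = 1 restrict colors to cube roots of unity, and edge equations x_i^2 + x_i x_j + x_j^2 = 0 force adjacent colors to differ. (This is a representative example of such systems; the poster's exact systems may differ.) A quick numeric check on a triangle:

```python
import cmath, math

# triangle K_3: vertices 0, 1, 2, all edges present
edges = [(0, 1), (1, 2), (0, 2)]
w = cmath.exp(2j * math.pi / 3)
x = [1, w, w ** 2]  # a proper 3-coloring encoded as cube roots of unity

# vertex equations x_i^3 - 1 = 0; edge equations x_i^2 + x_i x_j + x_j^2 = 0
vertex_ok = all(abs(x[i] ** 3 - 1) < 1e-9 for i in range(3))
edge_ok = all(abs(x[i] ** 2 + x[i] * x[j] + x[j] ** 2) < 1e-9 for i, j in edges)
print(vertex_ok and edge_ok)  # True
```

The system has a common solution exactly when the graph is 3-colorable, which is what makes it provably hard.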
Klaus Meer (Syddansk Universitet (University of Southern Denmark)) 
The word problem for a
class of groups with infinite presentation: A computationally universal problem in the BSS model 
Abstract: Joint work with Martin Ziegler.
The word problem for discrete groups is well-known to be undecidable
by a Turing Machine; more precisely, it is reducible both to and from
and thus equivalent to the discrete Halting Problem.
The present work introduces and studies a real extension of
the word problem for a certain class of groups which are presented
as quotients of a free group by a normal subgroup.
The main result establishes the word problem for such groups to be
computationally equivalent to the Halting Problem for BSS machines
over the reals.
It thus provides the first nontrivial example of a concrete
problem that is computationally universal for the BSS model. 
Uwe Nagel (University of Kentucky) 
IMA postdoc seminar: Hilbert functions of standard graded algebras 
Abstract: Hilbert functions have been introduced more than 100 years ago and they
have been intensely studied ever since. Indeed, they continue to attract a
lot of interest because numerous problems in various areas of mathematics
can be reinterpreted as questions about Hilbert functions. In this talk we
will discuss some of the fundamental results about Hilbert functions as
well as several open problems.
The minimum number of set-theoretic defining equations of
algebraic varieties. 
Luis Miguel Pardo (University of Cantabria) 
On Smale's 17th problem: a probabilistic solution in average polynomial time 
Abstract: In this talk I will discuss several conceptual aspects leading to a probabilistic positive solution to the following problem proposed by S. Smale: "Can a zero of n complex polynomial equations in n unknowns be found approximately, on the average, in polynomial time with a uniform algorithm?" 
Ragni Piene (University of Oslo) 
Curves and surfaces 
Abstract: This talk is intended as a brief introduction to certain aspects of the
theory of algebraic curves and surfaces.
To a projective algebraic variety one associates its arithmetic genus,
using the constant term of the Hilbert polynomial. Since a complex
projective algebraic curve can be viewed as a compact Riemann surface, it
also has a topological genus, equal to the number of holes in the Riemann
surface. Hirzebruch's Riemann-Roch theorem says that the topological genus
is equal to the arithmetic genus. I sketch a proof of this fundamental
result, using algebra, geometry, and topology. For algebraic surfaces
there is a corresponding result, Noether's formula, expressing the
arithmetic genus in terms of topological invariants, which can be proved
in a similar way. The last part of the talk discusses the theory of curves
on surfaces, in particular the links to classical enumerative geometry and
to modern string theory in theoretical physics. 
Joel Roberts (University of Minnesota Twin Cities) 
Real algebraic geometry tutorial: Real and complex varieties: comparisons, contrasts and examples 
Abstract: Properties of a complex variety often can be
illustrated by a picture of its real locus. Conversely, we can
often understand a real variety by studying the corresponding
complex variety. At the same time, there are cases where the
properties of real varieties and complex varieties are quite
different. This can include basic properties like
connectedness, compactness, and local dimension near singular
points.
In this talk we will exhibit pictures of some curves and
surfaces that illustrate some of these similarities and
differences. These figures will include several types of
surfaces in R^{3}.
Many of the surface pictures are interactive, in that they are
posted on a webpage, where the viewer – using nothing more
than a Java-enabled browser – can rotate the figure
continuously by dragging it with the mouse. 
J. Maurice Rojas (Texas A & M University) 
Algebraic geometry and applications seminar: Random polynomial systems and balanced metrics on toric varieties 
Abstract: Suppose c_{0},...,c_{d} are independent identically distributed
real Gaussians with mean 0 and variance 1. Around the 1940s,
Kac and Rice
proved that the expected number of real roots of the
polynomial
c_{0} + c_{1} x + ... + c_{d}
x^{d} is asymptotically (2/π) log d. Kostlan observed that if instead
each c_{i} has variance the binomial coefficient (d choose i), then the
expected
number of real roots is EXACTLY the square root of d. Aside
from the cute
square root phenomenon, Kostlan also observed that the
distribution function
of the real roots is constant with respect to the usual
metric on the real projective line.
The question then arises of what a "natural" probability
measure for general
multivariate polynomials should be. We exhibit two
(equivalent)
combinatorial constructions that conjecturally yield such a
measure. We
show how our conjecture is true in certain interesting
special cases, thus
recovering earlier work of Shub, Smale, and McLennan. We
also relate our
conjecture to earlier asymptotic results of Shiffman and
Zelditch on random sections of holomorphic line bundles.
This talk will deal concretely with polynomials and Newton
polytopes, so no background on probability or algebraic
geometry is assumed.
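The square-root phenomenon is easy to probe by simulation. A Monte Carlo sketch of the Kostlan ensemble, where c_i is drawn with variance the binomial coefficient C(d,i) (the standard setup for the exact sqrt(d) count; the tolerance for declaring a root real is an ad hoc numerical choice):

```python
import math
import numpy as np

def avg_real_roots_kostlan(d, trials=2000, seed=0):
    # c_i ~ N(0, C(d, i)): the Kostlan/Shub-Smale ensemble
    rng = np.random.default_rng(seed)
    sigmas = np.sqrt([math.comb(d, i) for i in range(d + 1)])
    count = 0
    for _ in range(trials):
        c = rng.standard_normal(d + 1) * sigmas  # c[i] multiplies x^i
        roots = np.roots(c[::-1])                # np.roots wants highest degree first
        count += int(np.sum(np.abs(roots.imag) < 1e-6))
    return count / trials

d = 9
print(avg_real_roots_kostlan(d), math.sqrt(d))  # empirical mean should be near 3
```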

Diego Ruano (Universität Kaiserslautern) 
Metric structure of linear codes 
Abstract: We use the study of bilinear forms over a finite field to give a decomposition of linear codes similar to the one in [D. Ruano: On the structure of generalized toric codes, arXiv:cs.IT/0611010, 2006] for generalized toric codes. Such a decomposition, called the geometric decomposition of a linear code, can be obtained in a constructive way; it allows one to express the dual of a linear code easily and gives a new paradigm for linear codes. The proofs for characteristic 2 are different, but they are developed in parallel. 
Claus Scheiderer (Universität Konstanz) 
Real algebraic geometry 
Abstract: While algebraic geometry is traditionally done over the complex
field, most problems from real life are modelled on real numbers and
ask for real, not for complex, solutions. Thus real algebraic
geometry — the study of algebraic varieties defined over the real
numbers, and of their real points — is important. While most
standard techniques from general algebraic geometry remain important
in the real setting, there are some key concepts that are fundamental
to real algebraic geometry and have no counterpart in complex
algebraic geometry.
In its first part, this talk gives an informal introduction to a few
such key concepts, like real root counting, orderings of fields, or
semialgebraic sets and Tarski-Seidenberg elimination. We also sketch
typical applications. In the second part, relations between
positivity of polynomials and sums of squares are discussed, as one
example for a currently active and expanding direction. Such
questions have already been among the historic roots of the field.
New techniques and ideas have much advanced the understanding in
recent years. Besides, these ideas are now successfully applied to
polynomial optimization.
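Real root counting, the first key concept mentioned, is completely effective: Sturm's theorem counts the real roots in an interval from sign changes along a polynomial remainder sequence. A minimal floating-point sketch (a robust version would use exact rational arithmetic):

```python
def poly_rem(a, b):
    # remainder of a / b; coefficient lists, highest degree first
    a = a[:]
    while len(a) >= len(b):
        f = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= f * b[i]
        a.pop(0)
        while a and abs(a[0]) < 1e-12:
            a.pop(0)
    return a

def sturm_chain(p):
    dp = [(len(p) - 1 - i) * c for i, c in enumerate(p[:-1])]  # derivative
    chain = [p, dp]
    while len(chain[-1]) > 1:
        r = [-c for c in poly_rem(chain[-2], chain[-1])]
        if not r:
            break  # non-squarefree input; stop at the gcd
        chain.append(r)
    return chain

def sign_changes(chain, x):
    vals = [sum(c * x ** (len(p) - 1 - i) for i, c in enumerate(p)) for p in chain]
    signs = [v for v in vals if abs(v) > 1e-9]
    return sum(1 for u, v in zip(signs, signs[1:]) if u * v < 0)

# x^3 - x has the 3 real roots -1, 0, 1; Sturm's theorem recovers the count
chain = sturm_chain([1.0, 0.0, -1.0, 0.0])
print(sign_changes(chain, -10) - sign_changes(chain, 10))  # 3
```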

Jessica Sidman (Mount Holyoke College) 
Intersection theory 
Abstract: Intersection theory is a big subject that has played an important role in
algebraic geometry, and any attempt at a comprehensive introduction in 90
minutes would surely fail. With this in mind, I have decided instead to
attempt to convey some of the beauty and flavor of intersection theory by
way of discussing a few concrete classical examples of intersection theory
on surfaces.
The material in the talk is covered in almost any text in algebraic
geometry. There are many approaches to finding 27 lines on a cubic
surface. If one wishes to see a more rigorous version of the presentation
given in this talk, then please see Hartshorne's treatment in Chapter V.4
of the book cited in the references. Chapter V of Hartshorne's book is a
very nice introduction to intersection theory on surfaces. For a more
general orientation in the subject with good historical context, one might
wish to read Fulton's "Introduction to intersection theory in algebraic
geometry." Fulton's other book cited in the references is the standard in
the subject, and the other texts listed offer additional points of view. 
Meera Sitharam (University of Florida) 
Nonextendibility of mutually unbiased bases 
Abstract: Two orthonormal bases of a finite dimensional Hilbert space are
mutually unbiased if for every pair of vectors one from each basis,
the modulus of the standard inner product is the same.
The problem of determining bounds on the maximum number of
such bases (MUBs) for a fixed dimension is an important problem
that has been open for 20 years and has been shown to arise in
different guises in many areas of mathematics.
The most common application of MUBs is found in quantum cryptography
where MUBs are the quantum states used in many protocols.
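The defining condition (every cross inner product has the same modulus, which must then be 1/sqrt(d)) can be checked directly; a small sketch verifying that the computational and Fourier bases are mutually unbiased (for d = 2 the Fourier basis is the Hadamard basis used in many quantum protocols):

```python
import itertools, math, cmath

d = 3
# computational basis e_i and Fourier basis f_i with entries w^{ij}/sqrt(d), w = e^{2πi/d}
std = [[complex(i == j) for j in range(d)] for i in range(d)]
w = cmath.exp(2j * math.pi / d)
four = [[w ** (i * j) / math.sqrt(d) for j in range(d)] for i in range(d)]

def mutually_unbiased(B1, B2):
    # every cross inner product must have modulus 1/sqrt(d)
    return all(
        abs(abs(sum(u[k] * v[k].conjugate() for k in range(d))) - 1 / math.sqrt(d)) < 1e-9
        for u, v in itertools.product(B1, B2)
    )

print(mutually_unbiased(std, four))  # True
```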
Here we consider 2 questions and provide preliminary results
and conjectures:
- When is a given set of MUBs nonextendible? I.e.,
characterize mutually unbiased sets
M1, M2, ..., Mk for which there is no
basis M unbiased to M1, ..., Mk?
- Characterize families of bases where nonextendibility
of a set of MUBs implies maximality, and determine whether there are
natural families of bases with this property. I.e., within such a family of
bases, a greedy method for constructing MUBs would guarantee
a maximal-cardinality MUB collection.
This is joint work with Oscar Boykin and Mohamad Tarifi
at the University of Florida. 
Milind Sohoni (Indian Institute of Technology) 
Geometric complexity theory (GCT) 
Abstract: We will outline a possible approach to outstanding computational
complexity theory (CCT) questions. The approach relies on a faithful
mapping of these questions to questions in geometric invariant
theory (GIT) and representation theory. We begin with a construction
of Valiant and show its connection to the Orbit Closure and
Membership problems in GIT. Proofs of nonexistence of algorithms
translate to the existence of obstructions. This immediately leads
to the important notion of stability in invariant theory. For stable
points, we show that obstructions are easily constructed.
We then outline proofs of the stability of forms such as the
determinant and the permanent, which are important in CCT.
We next define our notion of partially stable points. We pose our
problems as (i) to understand the orbit closure for stable points
with distinctive stabilizers, and (ii) extension of these results
to partially stable points. We finish with an outline of our current
attempts at these two, and also relate them to the work of other
researchers.
This is joint work with Ketan Mulmuley. 
Frank Sottile (Texas A & M University) 
New fewnomial upper bounds 
Abstract: In 1980, Askold Khovanskii established his fewnomial bound
for the number of real solutions to a system of polynomials,
showing that the complexity of the set of real solutions to
a system of polynomials depends upon the number of monomials
and not on the degree. This bound, while a fundamental finiteness result
in real algebraic geometry, is believed to be unrealistically large.
I will describe joint work with Frederic Bihan on a new
fewnomial bound which is substantially lower than Khovanskii's
bound and asymptotically optimal. This bound is obtained by
first reducing a given system to a Gale system, and then
bounding the number of solutions to a Gale system. Like
Khovanskii's bound, this bound is the product of an exponential
function and a polynomial in the dimension, with the exponents
in both terms depending upon the number of monomials. In our
bound, the exponents are smaller than in Khovanskii's. 
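For concreteness, the two bounds can be compared numerically. The formulas below are the forms usually quoted for n polynomials in n variables with n+l+1 monomials (an assumption on my part; the abstract states no formulas):

```python
import math

def khovanskii(n, l):
    # assumed form of the 1980 bound: 2^{C(n+l,2)} (n+1)^{n+l}
    return 2 ** math.comb(n + l, 2) * (n + 1) ** (n + l)

def bihan_sottile(n, l):
    # assumed form of the new bound: (e^2 + 3)/4 * 2^{C(l,2)} * n^l
    return (math.e ** 2 + 3) / 4 * 2 ** math.comb(l, 2) * n ** l

n, l = 3, 2
print(khovanskii(n, l))      # 1048576
print(bihan_sottile(n, l))   # about 46.8
```

Even for three trinomial-like equations in three variables the gap between the two bounds is enormous.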
John Voight (University of Minnesota Twin Cities) 
Computing zeta functions of sparse, nondegenerate hypersurfaces 
Abstract: We discuss the efficient computation of the zeta function of a
hypersurface X defined over a finite field k. The zeta function
Z(X,T) of X is the exponential generating series for the number of
points on X defined over k and its finite extensions; Z(X,T) is a
rational function in T. Zeta functions arise naturally in coding
theory, arithmetic geometry, complexity theory, and many other areas.
We consider the situation when X is affine, toric, or projective, when
its defining equation f is nondegenerate (a condition which implies
that X is smooth and which holds outside of a codimension 1 subset),
and when f is sparse (consisting of few monomials). In this
situation, one can use Dwork cohomology to give a very efficient
method of computing Z(X,T). We exhibit this method and provide
several examples. 
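As a toy instance of the rationality statement, for X = P^1 over F_q one has N_k = q^k + 1 points over F_{q^k} and Z(X,T) = 1/((1-T)(1-qT)); the identity is easy to confirm on power-series coefficients (a sketch with exact rational arithmetic; the example is mine, not from the talk):

```python
from fractions import Fraction

q, K = 5, 8

# N_k = #X(F_{q^k}) for X = P^1
N = [q ** k + 1 for k in range(1, K + 1)]

def series_exp(a, K):
    # exp of A(T) = sum_{k>=1} a[k] T^k, truncated at degree K
    e = [Fraction(0)] * (K + 1)
    e[0] = Fraction(1)
    for n in range(1, K + 1):
        # from E' = A' E: n e_n = sum_{k=1}^{n} k a_k e_{n-k}
        e[n] = sum(k * a[k] * e[n - k] for k in range(1, n + 1)) / Fraction(n)
    return e

a = [Fraction(0)] + [Fraction(N[k - 1], k) for k in range(1, K + 1)]
Z = series_exp(a, K)  # Z(X,T) = exp(sum N_k T^k / k)

# coefficients of 1/((1 - T)(1 - q T)) are 1 + q + ... + q^n
target = [Fraction(sum(q ** i for i in range(n + 1))) for n in range(K + 1)]
print(Z == target)  # True
```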
Judy L. Walker (University of Nebraska) 
Pseudocodeword connections 
Abstract: The term "pseudocodeword" is used in relationship to decoding
linear block codes in at least three different ways. In linear
programming decoding (introduced by Feldman), any vertex of the
fundamental polytope is called a pseudocodeword. In a rigorous
formulation (as first proposed by Wiberg) of min-sum or sum-product
decoding, any valid configuration of any computation tree is called a
pseudocodeword. And, using a more intuitive interpretation of iterative
messagepassing decoding, any valid configuration on any finite cover of
the Tanner graph corresponds to a pseudocodeword (this is made precise
through Graph Cover Decoding, proposed by Vontobel and Koetter). Though
some justification of the triple use of the term has been given, these
three notions of pseudocodeword are indeed distinct. In this talk, we
will describe the connections and disconnections involved. 
Josephine Yu (University of California) 
IMA postdoc seminar: Tropical implicitization and elimination 
Abstract: Implicitization is the problem of computing the defining ideal of the image of a polynomial map, and elimination is the problem of computing the defining ideal of the projection of an algebraic variety. In this talk, I will show how to apply tropical methods to these problems. For generic cases, we can use polyhedral computations to construct the tropical varieties of the unknown ideals without computing the ideal. 