We discuss two recent developments in linear algebra algorithms for high performance machines. The first development is SuperLU, a new sparse Gaussian elimination routine for workstations and SMPs. A variety of innovations, including some closely tied to the cache and superscalar architectures of modern workstations, let serial SuperLU attain up to 50% of peak machine performance, depending on the sparse matrix. Parallel SuperLU attains up to 6- to 7-fold speedup on an 8-CPU AlphaServer 8400, an 8-CPU C90, or an 8-CPU SGI Power Challenge, again depending on the sparse matrix. This is joint work with X. Li, J. Gilbert, S. Eisenstat and J. Liu.
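SuperLU later became widely available as the sparse direct solver behind SciPy's `scipy.sparse.linalg.splu`. The following minimal sketch (the matrix and right-hand side are illustrative, not from the talk) shows a sparse LU factorization and solve through that interface:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Small sparse test matrix in compressed sparse column format,
# the layout SuperLU's factorization expects.
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 1.0],
                         [0.0, 1.0, 2.0]]))
b = np.array([1.0, 2.0, 3.0])

# splu performs a sparse LU factorization (SuperLU under the hood),
# choosing column permutations to limit fill-in.
lu = splu(A)
x = lu.solve(b)

print(np.allclose(A @ x, b))  # True
```

The factorization object can be reused to solve against many right-hand sides, which is where a direct method like SuperLU pays off over iterative solvers.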
The second development is a parallel algorithm, due to B. Parlett and I. Dhillon, for the symmetric tridiagonal eigenproblem. While still under development, this algorithm, which is similar to inverse iteration, promises to be highly accurate, guaranteed O(n²) in cost, and embarrassingly parallel. It could become the algorithm of choice on serial as well as parallel machines.
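The Parlett–Dhillon algorithm itself is considerably more refined than plain inverse iteration (it achieves O(n²) cost and avoids reorthogonalization), but the basic idea it builds on can be sketched as classical inverse iteration: given a shift near an eigenvalue, repeatedly solve a shifted system to amplify the corresponding eigenvector. The dense solve and the test matrix below are illustrative assumptions, not the algorithm from the talk:

```python
import numpy as np

def inverse_iteration(d, e, shift, iters=50):
    """Classical inverse iteration for one eigenvector of the symmetric
    tridiagonal matrix with diagonal d and off-diagonal e, given an
    approximate eigenvalue `shift`.  A simplified sketch only: a real
    implementation would exploit the tridiagonal structure instead of
    forming and solving a dense system."""
    n = len(d)
    T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
    M = T - shift * np.eye(n)
    v = np.random.default_rng(0).standard_normal(n)
    for _ in range(iters):
        v = np.linalg.solve(M, v)   # amplify component nearest the shift
        v /= np.linalg.norm(v)      # renormalize to avoid overflow
    return v

# Demo: tridiag(-1, 2, -1) of order 3 has eigenvalues 2 - sqrt(2), 2, 2 + sqrt(2).
d = np.full(3, 2.0)
e = np.full(2, -1.0)
v = inverse_iteration(d, e, shift=1.95)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
print(v @ T @ v)  # Rayleigh quotient, ≈ 2.0
```

Each eigenvector can be computed independently from its own shift, which is the sense in which such methods are embarrassingly parallel.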