July 1-26, 2002

University of Kentucky

Parallel Computing and Visualization

Craig C. Douglas (Departments of Computer Science and Mechanical Engineering, University of Kentucky, douglas@ccs.uky.edu) and Jun Zhang (Department of Computer Science, University of Kentucky, jzhang@cs.uky.edu)

A comprehensive introduction to parallel computing for scientific computing will be given. No knowledge of parallel computing is assumed, but students should already know a programming language (e.g., C, C++, or Fortran). Differences between single-processor and multiple-processor algorithms and strategies will be discussed. Parallel programming models (MPI and OpenMP) and visualization techniques will be described. Students will have ample access to the largest Hewlett-Packard supercomputer on the planet.
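As a flavor of the message-passing style the course covers, here is a minimal C/MPI sketch (illustrative only, not course material): each process learns its rank and all processes combine a local value into a global sum.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    double local, total;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime     */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id         */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* number of processes       */

    local = (double)(rank + 1);              /* some per-process quantity */

    /* combine the local contributions on every process */
    MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    printf("process %d of %d: global sum = %g\n", rank, size, total);

    MPI_Finalize();
    return 0;
}

Such a program is typically compiled with mpicc and launched with a command along the lines of mpirun -np 4 ./a.out.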

Reading List:

J. Dongarra, I. Duff, D. Sorensen, and H. van der Vorst, Numerical Linear Algebra for High-Performance Computers, SIAM, Philadelphia, 1998.

Numerical Methods for Partial Differential Equations

Jerome Jaffré (INRIA, Jerome.Jaffre@inria.fr) and Jean Roberts (INRIA, Jean.Roberts@inria.fr)

Partial differential equations model a wide variety of physical and socio-economic phenomena. Practical applications require numerical solutions for these equations. The choice of numerical method for an equation is crucial and depends strongly on the particular problem. In this course we will study several methods for different problems and will be concerned with questions of stability, precision, and conservation, with an emphasis on criteria for the choice of a suitable method.
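To make the stability question concrete, the following C sketch (an illustration, not part of the course syllabus) advances the one-dimensional heat equation u_t = u_xx with the simplest explicit finite-difference scheme; the time step is chosen to respect the classical constraint dt <= dx^2/2, beyond which this scheme blows up.

#include <stdio.h>

#define N 101                      /* grid points on [0,1]                 */

int main(void)
{
    double u[N], unew[N];
    double dx = 1.0 / (N - 1);
    double dt = 0.5 * dx * dx;     /* explicit scheme is stable only for
                                      dt <= dx*dx/2 (for u_t = u_xx)       */
    int i, step;

    /* initial condition: a hat function, zero at the boundaries */
    for (i = 0; i < N; i++)
        u[i] = (i * dx < 0.5) ? i * dx : 1.0 - i * dx;

    for (step = 0; step < 1000; step++) {
        for (i = 1; i < N - 1; i++)
            unew[i] = u[i] + dt / (dx * dx) * (u[i-1] - 2.0 * u[i] + u[i+1]);
        unew[0] = unew[N-1] = 0.0;           /* Dirichlet boundary values  */
        for (i = 0; i < N; i++)
            u[i] = unew[i];
    }

    printf("u at x = 0.5 after 1000 steps: %g\n", u[N/2]);
    return 0;
}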

Sparse Matrix Methods

Iain Duff (Rutherford Appleton Laboratory, I.Duff@rl.ac.uk)

Slides:
Frontal Methods (pdf, postscript)
Multifrontal Methods (pdf, postscript)
Sparse direct methods and software for systems of equations: Introduction (pdf, postscript)
Multifrontal methods ... distributed memory (pdf, postscript)

Sparse matrices underlie the solution of most problems in science and engineering, whether the problem formulation is linear or nonlinear. We will focus on how to solve sparse matrix problems, concentrating on direct methods, although some mention will be made of how they can be used to precondition iterative methods. The lectures can be grouped into the three parts detailed below.

Direct Methods I

We commence by illustrating the diversity of problems in which sparse matrices play a crucial role and by showing the quite different characteristics of sparse matrices from a number of application areas. We then discuss basic issues for direct methods, including pivoting for sparsity preservation and for stability. We describe how these can be combined in sparse direct software and show the effect of the resulting algorithms using HSL codes on realistic examples.
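For readers new to the area, one common way sparse matrices are represented in software is compressed sparse row (CSR) storage, which keeps only the nonzero entries. The small C sketch below (illustrative only, not taken from the HSL codes) stores a 4x4 example in CSR form and forms a matrix-vector product with it.

#include <stdio.h>

/* CSR storage of the 4x4 matrix
 *   [ 4 0 1 0 ]
 *   [ 0 3 0 2 ]
 *   [ 1 0 5 0 ]
 *   [ 0 2 0 6 ]
 * Only the 8 nonzeros are stored.
 */
int    row_ptr[5] = { 0, 2, 4, 6, 8 };            /* start of each row    */
int    col_idx[8] = { 0, 2, 1, 3, 0, 2, 1, 3 };   /* column of each entry */
double val[8]     = { 4, 1, 3, 2, 1, 5, 2, 6 };   /* the nonzero values   */

int main(void)
{
    double x[4] = { 1, 1, 1, 1 }, y[4];
    int i, k;

    for (i = 0; i < 4; i++) {          /* y = A*x using only the nonzeros */
        y[i] = 0.0;
        for (k = row_ptr[i]; k < row_ptr[i+1]; k++)
            y[i] += val[k] * x[col_idx[k]];
    }

    for (i = 0; i < 4; i++)
        printf("y[%d] = %g\n", i, y[i]);
    return 0;
}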

Direct Methods II

One of the most efficient kernels on any computer, whether a standard workstation or a supercomputer, is the GEMM Level 3 BLAS for matrix-matrix multiplication. In this lecture, we show how this dense-matrix kernel can be used in a sparse direct method. In particular, we study frontal methods, both for finite-element and non-finite-element problems. Again we illustrate our points by examining the performance of actual codes on a range of test problems from various application areas. These runs will also be used to illustrate the limitations of frontal methods, which we address by generalizing the scheme to use many fronts, resulting in a multifrontal method.
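The following C sketch is a minimal illustration of how that dense kernel enters a frontal elimination: once a pivot block of the frontal matrix has been factorized, the update of the trailing block, C := C - A*B, is a single GEMM call. It uses the CBLAS interface cblas_dgemm; the header name and link flags depend on the local BLAS installation, and the matrices here are tiny and purely illustrative.

#include <stdio.h>
#include <cblas.h>

int main(void)
{
    /* C := C - A*B, the dense update that dominates a frontal elimination:
       A plays the role of the eliminated columns, B the eliminated rows,
       and C the trailing block of the frontal matrix.                    */
    double A[2 * 2] = { 1, 2,
                        3, 4 };          /* 2x2, row-major                 */
    double B[2 * 2] = { 5, 6,
                        7, 8 };          /* 2x2, row-major                 */
    double C[2 * 2] = { 100, 100,
                        100, 100 };      /* trailing block to be updated   */

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,                 /* m, n, k                        */
                -1.0, A, 2,              /* alpha = -1 gives a subtraction */
                B, 2,
                1.0, C, 2);              /* beta = 1 keeps the old C       */

    printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);
    return 0;
}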

Direct Methods III

In this lecture we will develop the multifrontal method further and indicate the rich variety of possible multifrontal approaches and their applicability to a wide range of problems and matrix types. We will also discuss at some length more recent work on designing sparse direct codes for distributed memory computers, in particular a parallel multifrontal code developed as part of an EU LTR Programme.

Reading List:

Iain S. Duff, Albert M. Erisman, and John K. Reid, Direct Methods for Sparse Matrices, Oxford University Press, Oxford, England, 1986. ISBN 0-19-853408-6.

Jack J. Dongarra, Iain S. Duff, Danny C. Sorensen, and Henk A. van der Vorst, Numerical Linear Algebra for High-Performance Computers, SIAM, Philadelphia, 1998.

RAL reports, where most of my recent work (including several review articles) can be obtained: http://www.numerical.rl.ac.uk/reports/reports.html

ACTS Workshop

Tony Drummond (Lawrence Berkeley National Laboratory, LADrummond@lbl.gov) and Osni Marques (Lawrence Berkeley National Laboratory, osni@nersc.gov)

Students will receive hands-on experience with a number of the software tools in the Department of Energy's ACTS Toolkit for parallel computers. There will be tutorials and discussion sessions focused on solving specific computational needs of the participants. See http://acts.nersc.gov for information about the ACTS Toolkit.
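As a taste of the kind of library collected in such toolkits, the sketch below solves a 2x2 linear system with LAPACK's dgesv routine (LAPACK and ScaLAPACK appear in the reading list that follows), called from C through the conventional Fortran interface. The trailing-underscore symbol name, the column-major data layout, and the link line are assumptions about the local installation.

#include <stdio.h>

/* Fortran LAPACK routine: solve A*x = b by LU factorization with pivoting.
   The trailing underscore is the usual, but compiler-dependent, name
   mangling for Fortran symbols.                                          */
extern void dgesv_(int *n, int *nrhs, double *a, int *lda,
                   int *ipiv, double *b, int *ldb, int *info);

int main(void)
{
    /* A is stored column by column (Fortran order):
         [ 3 1 ]
         [ 1 2 ]                                                          */
    double a[4] = { 3, 1,   1, 2 };
    double b[2] = { 9, 8 };          /* right-hand side, overwritten by x */
    int n = 2, nrhs = 1, lda = 2, ldb = 2, info;
    int ipiv[2];

    dgesv_(&n, &nrhs, a, &lda, ipiv, b, &ldb, &info);

    if (info == 0)
        printf("x = (%g, %g)\n", b[0], b[1]);
    else
        printf("dgesv failed, info = %d\n", info);
    return 0;
}

On many systems this compiles with something like cc example.c -llapack -lblas.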

Reading List:

(1) LAPACK, On-line: http://www.netlib.org/lapack/lug/lapack_lug.html
or:
LAPACK Users' Guide, Third Edition, E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen, SIAM, Philadelphia, 1999.

(2) ScaLAPACK
On-line: http://www.netlib.org/scalapack/slug/index.html
or:
ScaLAPACK Users' Guide, L. S. Blackford, J. Choi, A. Cleary, E. D'Azevedo, J. Demmel, I. Dhillon, J. Dongarra, S. Hammarling, G. Henry, A. Petitet, K. Stanley, D. Walker, and R. C. Whaley, SIAM Publications, Philadelphia, 1997.

(3) On-line:
http://netlib2.cs.utk.edu/linalg/html_templates/Templates.html
or
R. Barrett, M. Berry, T. F. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo, C. Romine, and H. van der Vorst, Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd Edition, SIAM, Philadelphia, PA, 1994.

(4) On-line: http://www.cs.utk.edu/%7Edongarra/etemplates/index.html
or:
Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide, Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst, SIAM, Philadelphia, 2000.

(5) Numerical Linear Algebra for High-Performance Computers, Jack J. Dongarra, Iain S. Duff, Danny C. Sorensen, and Henk A. van der Vorst, SIAM Publication, Philadelphia, 1998.

(6) Computational Science Educational Project Homepage http://csep1.phy.ornl.gov/csep.html

(7) The ACTS Toolkit Information Center: http://acts.nersc.gov

Bioinformatics and Its Relation to Scientific Computing

Toni Kazic (University of Missouri - Columbia, toni@athe.cecs.missouri.edu)

A model of cellular metabolism involves hundreds of uniquely defined pieces, and changing one can affect many others. The state of the art in designing effective alternatives is largely a trial-and-error process. Predicting the metabolic fate of a compound, or the metabolic changes produced by an altered enzyme, requires the ability to identify which enzymes will react with the compound and its successors, and to determine the extent to which other competing processes will bypass or contribute to the desired effect. The focus here will be on the models and the computational algorithms for the rational design of cellular metabolism.

IMA 2002 Summer Program for Graduate Students in Scientific Computing
