During July 1-26, 2002, the University of Kentucky Center for Computational Sciences will host the Institute for Mathematics and its Applications (IMA) summer graduate program in mathematics. The course will concentrate on scientific computing, with an emphasis on applications, including a week on biomathematical computing. An additional feature will be a two-day workshop at the end of week 3 on how to use the Department of Energy's software for parallel computers.
This program is open to graduate students from IMA Participating Institutions. Department heads at participating institutions nominate graduate students from their institution by e-mail to visit@ima.umn.edu, giving the students' names and e-mail addresses. Places are guaranteed for two graduate students from each participating institution, with additional students accommodated as space allows. Note that registration and the selection of qualified students have closed. Students who have been nominated can fill out the online application form.
Topics and Speakers:
There will be one topic per week unless noted otherwise. The ACTS workshop will follow Iain Duff's lectures at the end of the third week.
Parallel Computing and Visualization: Craig C. Douglas (University of Kentucky) and Jun Zhang (University of Kentucky), July 1-2
A comprehensive introduction to parallel computing for scientific computing will be given. No knowledge of parallel computing will be assumed, but students need to know some programming language (e.g., C, C++, or Fortran). Differences between single-processor and multiple-processor algorithms and strategies will be covered. Communication methods (MPI and OpenMP) and visualization techniques will be described. Students will have ample access to the largest Hewlett-Packard supercomputer on the planet.
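To give a flavor of the message-passing style the lectures will cover, the minimal C sketch below (purely illustrative, not course material) has each process contribute one value, which MPI_Reduce combines on rank 0.

/* Minimal MPI sketch: each process computes one partial value and
   MPI_Reduce sums them on rank 0.  Compile with mpicc and run with
   mpirun; the data are invented for illustration. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process contributes (rank + 1); the exact total is
       size*(size+1)/2, which rank 0 prints as a quick check. */
    double local = rank + 1.0, total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d processes = %g\n", size, total);

    MPI_Finalize();
    return 0;
}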
Reading List:
Numerical Linear Algebra for High Performance Computers by J. Dongarra, I. Duff, D. Sorensen, and H. van der Vorst published by SIAM in 1998.
Numerical Methods for Partial Differential Equations: Jerome Jaffré (INRIA) and Jean Roberts (INRIA), July 3 & 5
Partial differential equations model a wide variety of physical and socio-economic phenomena. Practical applications require numerical solutions for these equations. The choice of numerical method for an equation is crucial and depends strongly on the particular problem. In this course we will study several methods for different problems and will be concerned with questions of stability, precision, and conservation, with an emphasis on criteria for the choice of a suitable method.
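To make the stability question concrete, the short C sketch below (our illustration, not part of the course) advances the one-dimensional heat equation u_t = u_xx with an explicit finite-difference scheme; the grid size and number of steps are invented, and the explicit scheme is stable only when the time step satisfies dt <= dx*dx/2.

/* Explicit finite differences for u_t = u_xx on [0,1] with u = 0 at
   both ends.  The explicit scheme is stable only if dt <= dx*dx/2;
   grid size and step count are illustrative. */
#include <stdio.h>
#include <math.h>

#define N 51                            /* number of grid points */

int main(void)
{
    const double pi = 3.14159265358979;
    double u[N], unew[N];
    double dx = 1.0 / (N - 1);
    double dt = 0.4 * dx * dx;          /* satisfies the stability bound */
    int i, step;

    for (i = 0; i < N; i++)             /* initial condition: a sine bump */
        u[i] = sin(pi * i * dx);

    for (step = 0; step < 1000; step++) {
        for (i = 1; i < N - 1; i++)     /* update the interior points */
            unew[i] = u[i] + dt / (dx * dx) * (u[i-1] - 2.0 * u[i] + u[i+1]);
        unew[0] = unew[N-1] = 0.0;      /* Dirichlet boundary values */
        for (i = 0; i < N; i++)
            u[i] = unew[i];
    }

    printf("value at the midpoint after 1000 steps: %g\n", u[N/2]);
    return 0;
}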
Sparse Matrix Methods: Iain Duff (Rutherford Appleton Laboratory), July 15-17
Underlying the solution of most problems in science and engineering are sparse matrices, used in either a linear or nonlinear problem formulation. We will focus on how to solve sparse matrix problems and will concentrate on examining the use of direct methods, although some mention will be made of how they can be used to precondition iterative methods. The lectures can be grouped into three parts, which we detail below.
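As background for readers who have not met sparse storage formats (our illustration, not part of the lectures), the C fragment below stores a small invented matrix in compressed sparse row (CSR) form, keeping only the nonzeros, and applies it to a vector.

/* Compressed sparse row (CSR) sketch: store only the nonzeros of a
   small 4x4 matrix and form y = A*x.  Matrix and vector are invented
   for illustration. */
#include <stdio.h>

int main(void)
{
    /* A = [ 4 0 0 1 ; 0 3 0 0 ; 0 2 5 0 ; 1 0 0 6 ] */
    double val[]    = {4.0, 1.0, 3.0, 2.0, 5.0, 1.0, 6.0};
    int    colind[] = {0,   3,   1,   1,   2,   0,   3  };
    int    rowptr[] = {0, 2, 3, 5, 7};  /* row i occupies entries rowptr[i]..rowptr[i+1]-1 */

    double x[4] = {1.0, 1.0, 1.0, 1.0};
    double y[4];
    int i, k;

    for (i = 0; i < 4; i++) {           /* sparse matrix-vector product */
        y[i] = 0.0;
        for (k = rowptr[i]; k < rowptr[i+1]; k++)
            y[i] += val[k] * x[colind[k]];
    }

    for (i = 0; i < 4; i++)
        printf("y[%d] = %g\n", i, y[i]);
    return 0;
}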
Direct Methods I
We commence by illustrating the diversity of problems in which sparse matrices play a crucial role and the quite different characteristics of sparse matrices arising in a number of application areas. We then discuss basic issues for direct methods, including pivoting for sparsity preservation and stability. We describe how these can be combined in sparse direct software and show the effect of the resulting algorithms by running HSL codes on realistic examples.
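To make the trade-off between sparsity preservation and stability concrete, the C sketch below (our illustration, not Duff's code) selects a first pivot by the classical Markowitz count (r_i - 1)(c_j - 1), subject to a threshold stability test; the small matrix and the threshold tau are invented.

/* Markowitz pivot selection sketch: for each nonzero a(i,j) of a small
   matrix stored densely, the Markowitz cost is (r_i - 1)*(c_j - 1),
   where r_i and c_j are the nonzero counts of row i and column j.
   A threshold test |a(i,j)| >= tau * max_k |a(k,j)| guards stability. */
#include <stdio.h>
#include <math.h>

#define N 4

int main(void)
{
    double a[N][N] = {{ 4.0, 0.0, 0.0, 1.0 },
                      { 0.0, 3.0, 0.0, 0.0 },
                      { 0.0, 2.0, 5.0, 0.0 },
                      { 1.0, 0.0, 0.0, 6.0 }};
    double tau = 0.1;                   /* relative stability threshold */
    int r[N] = {0}, c[N] = {0};
    int i, j, bi = -1, bj = -1, best = N * N;

    for (i = 0; i < N; i++)             /* row and column nonzero counts */
        for (j = 0; j < N; j++)
            if (a[i][j] != 0.0) { r[i]++; c[j]++; }

    for (j = 0; j < N; j++) {
        double colmax = 0.0;
        for (i = 0; i < N; i++)
            if (fabs(a[i][j]) > colmax) colmax = fabs(a[i][j]);
        for (i = 0; i < N; i++) {       /* keep the cheapest stable candidate */
            if (a[i][j] == 0.0 || fabs(a[i][j]) < tau * colmax) continue;
            if ((r[i] - 1) * (c[j] - 1) < best) {
                best = (r[i] - 1) * (c[j] - 1);
                bi = i;  bj = j;
            }
        }
    }

    printf("first pivot: a(%d,%d) with Markowitz cost %d\n", bi, bj, best);
    return 0;
}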
Direct Methods II
One of the most efficient kernels on any computer, whether a standard workstation or a supercomputer, is the GEMM Level 3 BLAS kernel for matrix-matrix multiplication. In this lecture, we show how this dense-matrix kernel can be used in a sparse direct method. In particular, we study frontal methods, for both finite-element and non-finite-element problems. Again we illustrate our points by examining the performance of actual codes on a range of test problems from various application areas. These runs will also be used to illustrate the limitations of frontal methods, which we address by generalizing the scheme to use many fronts, resulting in a multifrontal method.
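For readers who have not used the kernel mentioned above, the small C sketch below (ours, not lecture material) calls the standard CBLAS interface to DGEMM to form C = alpha*A*B + beta*C for invented dense matrices; a BLAS library must be linked.

/* Dense GEMM sketch: C = alpha*A*B + beta*C via the Level 3 BLAS
   routine cblas_dgemm.  Sizes and entries are illustrative; link
   against a BLAS library (e.g. -lcblas -lblas). */
#include <stdio.h>
#include <cblas.h>

int main(void)
{
    /* A is 2x3, B is 3x2, C is 2x2, all in row-major storage */
    double A[] = { 1,  2,  3,
                   4,  5,  6 };
    double B[] = { 7,  8,
                   9, 10,
                  11, 12 };
    double C[] = { 0,  0,
                   0,  0 };

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 3,                /* M, N, K       */
                1.0, A, 3,              /* alpha, A, lda */
                B, 2,                   /* B, ldb        */
                0.0, C, 2);             /* beta, C, ldc  */

    printf("C = [ %g %g ; %g %g ]\n", C[0], C[1], C[2], C[3]);
    return 0;
}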
Direct Methods III
In this lecture we will develop the multifrontal method further and indicate the rich variety of possible multifrontal approaches and their applicability to a wide range of problems and matrix types. We will also discuss at some length more recent work on designing sparse direct codes for distributed memory computers, in particular a parallel multifrontal code developed as part of an EU LTR Programme.
Reading List:
Iain S. Duff, Albert M. Erisman, and John K. Reid, "Direct Methods for Sparse Matrices," Oxford University Press, Oxford, England, 1986, xiii + 341 pages, ISBN 0-19-853408-6 (hardcover).
Jack J. Dongarra and Iain S. Duff and Danny C. Sorensen and Henk A. van der Vorst, "Numerical Linear Algebra for High-Performance Computers," SIAM Press, Philadelphia, 1998.
RAL reports, where most of my recent work (including several review articles) can be obtained: http://www.numerical.rl.ac.uk/reports/reports.html
Slides
Frontal Methods pdf
Multifrontal Methods pdf
Sparse direct methods & software for systems of equations: Introduction pdf
Multifront methods ... distributed memory pdf
ACTS Workshop: Tony Drummond (Lawrence Berkeley National Laboratory) and Osni Marques (Lawrence Berkeley National Laboratory), July 18-19
Students will receive hands-on experience using a number of tools from the Department of Energy's ACTS Toolkit software for parallel computers. There will be tutorials and discussion sessions focused on solving specific computational needs of the participants. See http://acts.nersc.gov for information about the ACTS Toolkit.
Reading List:
(1) LAPACK, On-line: http://www.netlib.org/lapack/lug/lapack_lug.html
or:
LAPACK Users' Guide, Third Edition, E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen, SIAM Publication, 1999.
(2) ScaLAPACK
On-line: http://www.netlib.org/scalapack/slug/index.html
or:
ScaLAPACK Users' Guide, L. S. Blackford, J. Choi, A. Cleary, E. D'Azevedo, J. Demmel, I. Dhillon, J. Dongarra, S. Hammarling, G. Henry, A. Petitet, K. Stanley, D. Walker, and R. C. Whaley, SIAM Publications, Philadelphia, 1997.
(3) On-line:
http://netlib2.cs.utk.edu/linalg/html_templates/Templates.html
or
R. Barrett, M. Berry, T. F. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo, C. Romine, and H. van der Vorst, Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd Edition, SIAM, Philadelphia, PA, 1994.
(4) On-line: http://www.cs.utk.edu/%7Edongarra/etemplates/index.html
or:
Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide, Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst, SIAM Publication, 2000.
(5) Numerical Linear Algebra for High-Performance Computers, Jack J. Dongarra, Iain S. Duff, Danny C. Sorensen, and Henk A. van der Vorst, SIAM Publication, Philadelphia, 1998.
(6) Computational Science Educational Project Homepage http://csep1.phy.ornl.gov/csep.html
(7) The ACTS Toolkit Information Center: http://acts.nersc.gov
Bioinformatics and Its Relation to Scientific Computing: Toni Kazic (University of Missouri - Columbia), July 22-26
A model of cellular metabolism involves hundreds of uniquely defined pieces, and changing one can affect many others. The state of the art for designing effective alternatives remains largely a trial-and-error process. Predicting the metabolic fate of a compound, or the metabolic changes produced by an altered enzyme, requires the ability to identify which enzymes will react with the compound and its successors, and to determine the extent to which other competing processes will bypass or contribute to the desired effect. The focus here will be on the models and the computational algorithms for the rational design of cellular metabolism.
Daily Schedule
We will meet Monday through Friday. On the first day we will meet at 9:00 to get acquainted. All lectures will be in 327 McVey Hall unless announced otherwise. Students are expected to attend all of the lectures.
9:30-10:30 | First lecture
10:30-11:00 | Break
11:00-12:00 | Second lecture
12:00-2:00 | Lunch
2:00-4:00 | Informal study sessions
In addition, there will be an informal get-together on Monday evenings.
Facilities
The university has a 224-processor HP Superdome cluster. Each processor is rated at about 3 gigaflops (peak) and has 2 gigabytes of main memory. There are three 64-processor SMPs and one 32-processor SMP, with five terabytes of attached disk storage. In addition, there are workstations and PCs. Students should bring a laptop computer if possible. There are apartments, cafeterias, restaurants, bookstores, and libraries on or near campus. Lexington is a city of 275,000 people and has many other facilities. Its airport is served by six major airlines. Cincinnati is about 75 miles from campus.