The study of the running times of algorithms in computer science can be broken down into two broad types: worst-case analyses and average-case analyses. For many problems this distinction is important, since the running times (as functions of some measure N of problem size) may differ in order of magnitude between the two cases, and comparing them yields useful information about the merits of the algorithm.
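As an illustration of this distinction (not taken from the work described here), quicksort with a naive first-element pivot is a standard example: on already-sorted input it performs on the order of N²/2 comparisons, while on a random permutation it performs on the order of N log N. The sketch below counts comparisons directly; the pivot rule and the choice of inputs are illustrative assumptions.

```python
import random

def quicksort_comparisons(a):
    """Sort a copy of `a` by first-element-pivot quicksort and
    return the number of element comparisons performed."""
    count = 0

    def qs(lst):
        nonlocal count
        if len(lst) <= 1:
            return lst
        pivot, rest = lst[0], lst[1:]
        count += len(rest)  # one comparison per remaining element
        less = [x for x in rest if x < pivot]
        more = [x for x in rest if x >= pivot]
        return qs(less) + [pivot] + qs(more)

    qs(list(a))
    return count

n = 500
already_sorted = list(range(n))   # worst case for this pivot rule
shuffled = already_sorted[:]
random.shuffle(shuffled)

worst = quicksort_comparisons(already_sorted)  # exactly n(n-1)/2
typical = quicksort_comparisons(shuffled)      # roughly proportional to n log n
```

Averaging the comparison count over many random permutations is what an average-case analysis formalizes; the worst-case figure above comes from a single adversarial input.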
A similar situation arises for certain real number algorithms used in widely available numerical software, although in the worst case these algorithms may fail completely. Analysis of real number algorithms from this perspective was initiated by Smale in his pioneering study of Newton's method. Subsequent work in linear algebra was undertaken by Demmel and has been elaborated by many others. This work on Newton's method and linear algebra will be reviewed; then recent work by the author and Harbir Lamba concerning a similar analysis of commonly used ODE software packages (such as MATLAB) will be described.
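The contrast between typical success and worst-case failure of Newton's method can be seen in a small sketch. The polynomial and starting points below are illustrative assumptions, not examples from the work being reviewed: for f(x) = x³ − 2x + 2, the iteration converges rapidly from x₀ = −2, but from x₀ = 0 it falls into the period-2 cycle 0 → 1 → 0 → … and never converges.

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Plain Newton iteration; returns (approximation, iterations, converged)."""
    x = x0
    for k in range(1, max_iter + 1):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, k, True
    return x, max_iter, False

f  = lambda x: x**3 - 2*x + 2
df = lambda x: 3*x**2 - 2

# Typical behaviour: quadratic convergence from a good starting point.
root, its, ok = newton(f, df, -2.0)

# Worst case: x0 = 0 gives the cycle 0 -> 1 -> 0 -> ..., so the
# iteration exhausts max_iter without converging.
_, _, bad_ok = newton(f, df, 0.0)
```

An average-case analysis in Smale's spirit asks how the method behaves for a starting point (or problem) drawn at random, rather than for the worst one.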
A valid criticism of the average-case analyses of theoretical computer science is that they depend heavily on the measure placed on the space of problems. This viewpoint will be discussed in the context of real number algorithms.