Automatic speech recognition systems work by using sophisticated pattern-matching software to find the best fit between a speech waveform and a string of words. The pattern matcher is driven by two data components: acoustic models, which provide probabilistic representations of speech sounds and their contexts, and the language model, which combines a dictionary of words and pronunciations with a stochastic grammar encoding the likelihood of word contexts. In this talk, I will describe a dictation application of speech recognition. The application is interesting because, beyond its commercial value, it poses challenges to language modeling that have not been well studied in the technical community. I will briefly outline the main technical issues and then discuss a novel approach to language modeling that Linguistic Technologies has developed to make this application feasible.
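The interplay between the two data components can be sketched in miniature. This is an illustrative toy, not Linguistic Technologies' system: the decoder scores each candidate word string W with an acoustic score (here a made-up log P(A|W)) plus a language-model score log P(W) from a toy bigram grammar, and picks the best-scoring hypothesis. All probabilities and word strings below are invented for the example.

```python
import math

# Toy bigram grammar: log P(w_i | w_{i-1}); "<s>" marks sentence start.
# These values are invented for illustration only.
BIGRAM_LOGPROB = {
    ("<s>", "recognize"): math.log(0.6),
    ("recognize", "speech"): math.log(0.7),
    ("<s>", "wreck"): math.log(0.1),
    ("wreck", "a"): math.log(0.5),
    ("a", "nice"): math.log(0.4),
    ("nice", "beach"): math.log(0.3),
}

def lm_score(words, unseen_logprob=math.log(1e-4)):
    """Sum log P(w_i | w_{i-1}) over the word string (crude floor for unseen pairs)."""
    score = 0.0
    prev = "<s>"
    for w in words:
        score += BIGRAM_LOGPROB.get((prev, w), unseen_logprob)
        prev = w
    return score

def decode(candidates):
    """candidates: list of (word_list, acoustic_logprob) pairs.

    Returns the word list maximizing acoustic score + language-model score,
    i.e. the usual noisy-channel combination argmax_W P(A|W) P(W).
    """
    return max(candidates, key=lambda c: c[1] + lm_score(c[0]))[0]

# Two acoustically similar hypotheses; the language model breaks the tie
# in favor of the more probable word sequence.
hyps = [
    (["recognize", "speech"], -10.0),
    (["wreck", "a", "nice", "beach"], -9.5),
]
print(decode(hyps))  # -> ['recognize', 'speech']
```

The point of the sketch is the division of labor: the acoustic scores alone slightly favor the second hypothesis, but the bigram grammar's preference for the first word sequence outweighs that margin, which is exactly the role the stochastic grammar plays in the full system.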