Fifty years ago it was more or less taken for granted that automatic control meant feedback control. Other fields, such as biology and economics, adopted the word feedback and, through these more popular disciplines, the word came into widespread use. However, as time passed, control theory itself largely ignored the idea of feedback, concentrating instead on technical issues associated with optimal control, estimation, modeling, etc. In fact, a quick examination of popular books on control fails to turn up any significant discussion of feedback or other conceptual issues associated with the implementation of control actions. This is in contrast with the literature in psychology and neural science. In these fields there is a growing interest in the relationship between structure and function, and this has motivated researchers to worry a great deal about the distinction between feedback and open-loop implementations. For example, this distinction is particularly important when it comes to describing the role of learning in finding better ways to execute movements. In this talk I will lay out a general mathematical formulation of the problem of implementing control policies, leading to a new class of optimization problems in which the complexity of implementation is traded off against the quality of the trajectories. From a mathematical point of view the resulting theory is a field theory, as opposed to a particle theory. One might say that it stands in relation to optimal control in somewhat the same way that Maxwell's equations stand in relation to the equations of motion for a charged particle. The resulting framework, although technically more difficult, permits one to give a crisp definition of some slippery terms such as attention and practice.
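The trade-off between implementation complexity and trajectory quality can be sketched concretely. The functional below is one plausible instance of such an "attention" cost, offered here as an assumption for illustration rather than the exact formulation of the talk. The control is treated as a field u(x, t) over state and time, and the cost penalizes how rapidly the policy must vary in each argument:

```latex
% An illustrative "attention" functional (an assumption, not necessarily
% the formulation presented in the talk). The policy is a field u(x,t)
% over state x and time t, with the state evolving as \dot{x} = f(x, u).
\eta(u) \;=\; \int_{0}^{T}\!\!\int_{X}
  \left\| \frac{\partial u}{\partial t} \right\|^{2}
  \;+\;
  \left\| \frac{\partial u}{\partial x} \right\|^{2}
  \, dx \, dt
```

Under this reading, a policy with ∂u/∂x ≡ 0 is open loop (no dependence on the state), while ∂u/∂t ≡ 0 gives a time-invariant feedback law; minimizing η subject to an acceptable trajectory cost trades these implementation burdens against performance. Because the unknown is a function of both x and t rather than a single trajectory, the optimality conditions become partial differential equations, which is one sense in which the resulting theory is a field theory rather than a particle theory.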