Simulation Based Approximation of Value Function for Process Control
Thursday, December 5, 2002 - 12:40pm - 1:10pm
Jay Lee (Georgia Institute of Technology)
Although model predictive control (MPC) has firmly etched itself in process control practice, its large on-line computational demand and its inability to rigorously account for information feedback under uncertainty limit its use in complex systems, which are characterized by multi-scale, nonlinear, hybrid dynamics and significant uncertainty. In this talk, we propose an alternative approach based on the infinite-horizon cost-to-go (the 'value function'). The key issue lies in obtaining an accurate approximation of the value function over the relevant regions of the state space. We propose to build this approximation from simulation data and to improve it iteratively through policy or value iteration combined with additional simulation. We demonstrate the efficacy of the approach on two different bioreactor optimal control problems. Along the way, we also point out some critical issues and outstanding theoretical problems.
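The iterative scheme the abstract describes — approximate the cost-to-go from simulated states, then improve it by value iteration — can be sketched in highly simplified form as fitted value iteration on a toy one-dimensional control problem. The dynamics, cost, discount factor, and nearest-neighbour approximator below are illustrative assumptions for exposition only, not the bioreactor models or the approximation architecture from the talk:

```python
GAMMA = 0.9                      # discount factor (assumed for illustration)
ACTIONS = [-0.1, 0.0, 0.1]       # small finite control set

def step(x, u):
    """Toy scalar dynamics and stage cost: drive the state toward 0.5."""
    x_next = min(1.0, max(0.0, x + u))
    cost = (x - 0.5) ** 2 + 0.01 * u ** 2
    return x_next, cost

# States visited "in simulation" -- here a uniform grid stands in for
# trajectories sampled from a process simulator.
states = [i / 20 for i in range(21)]
V = {x: 0.0 for x in states}     # tabular value estimates at sampled states

def v_hat(x):
    """Nearest-neighbour approximation of the value function off-sample."""
    nearest = min(states, key=lambda s: abs(s - x))
    return V[nearest]

def backup(x):
    """One Bellman backup: minimize cost-to-go over the control set."""
    return min(cost + GAMMA * v_hat(x_next)
               for x_next, cost in (step(x, u) for u in ACTIONS))

# Fitted value iteration: repeatedly back up all sampled states.
# (v_hat reads the old V while the new table is being built.)
for _ in range(200):
    V = {x: backup(x) for x in states}

def greedy(x):
    """Control implied by the approximate value function."""
    return min(ACTIONS,
               key=lambda u: step(x, u)[1] + GAMMA * v_hat(step(x, u)[0]))
```

With these toy ingredients, the fixed point leaves the target state 0.5 at zero cost-to-go, and the greedy policy pushes boundary states toward it. In the actual approach, the sampled states would come from closed-loop simulations, and the approximator would generalize the cost-to-go to unvisited regions of the state space.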