Using POMDPs to Understand and Support Human Sequential Decision Making with Uncertainty
Friday, December 2, 2011 - 1:25pm - 2:25pm
Brian Stankiewicz (3M)
Humans possess the remarkable ability to make thousands of decisions every day under conditions of incredible uncertainty. Furthermore, the outcomes of a decision may not be felt for hours, days, weeks, or even years, and many decisions may be made before any cost or reward is generated. Developing a robust decision-making system for these conditions remains a computational challenge, due to the combinatoric nature of these problems and the need to dynamically formulate an estimate of the system's state. However, you and I do this every day, hundreds if not thousands of times; thus we remain an existence proof that such a computational system can exist. To better understand how humans accomplish this task, we leverage work on Partially Observable Markov Decision Processes (POMDPs), which provides a framework for deriving optimal decision policies when making sequential decisions under uncertainty. By comparing human performance to the optimal policy, we have developed methods to dissect and identify which aspects of human cognition are optimal and which are sub-optimal. By identifying the computational strengths and limits of the human mind, we can then identify the computations needed to support and improve the human decision-making process.
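As an illustration of the kind of computation a POMDP framework prescribes, consider how an agent that cannot observe the true state directly must maintain a belief (a probability distribution over states) and update it by Bayes' rule after each noisy observation. The sketch below uses the classic "tiger" problem as an assumed example; it is not drawn from the talk itself, and the 0.85 observation accuracy is an arbitrary illustrative value.

```python
# Minimal sketch: Bayesian belief updating in a two-state POMDP
# (the classic "tiger" problem, used here only as an illustration).
# The agent never sees which door hides the tiger; it listens and
# receives a noisy observation that is correct 85% of the time.

def belief_update(belief, obs, obs_model):
    """Bayes-filter update: P(s | o) is proportional to P(o | s) * P(s)."""
    unnormalized = {s: obs_model[s][obs] * belief[s] for s in belief}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

# Observation model: P(observation | state).
obs_model = {
    "tiger-left":  {"hear-left": 0.85, "hear-right": 0.15},
    "tiger-right": {"hear-left": 0.15, "hear-right": 0.85},
}

belief = {"tiger-left": 0.5, "tiger-right": 0.5}  # uniform prior
for obs in ["hear-left", "hear-left"]:  # two consistent noisy observations
    belief = belief_update(belief, obs, obs_model)

# Two consistent observations sharpen the belief from 0.5 to about 0.97.
print(round(belief["tiger-left"], 3))
```

An optimal POMDP policy maps such beliefs to actions (e.g., keep listening until the belief is confident enough to justify opening a door), which is the benchmark against which human sequential decisions can be compared.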