We build models of two navigation tasks in the water maze: the standard reference memory task, in which the goal (a hidden platform) stays at a fixed location, and the delayed match-to-place task, in which the platform is moved each day. These tasks differ experimentally in the demands they make on hippocampal plasticity.
The models are based on a reinforcement learning method called temporal difference learning, which is a general way of learning near-optimal behavior in complicated environments. We show that a place cell representation of position is ideal for learning to solve the reference memory task, and show how to augment it to solve the delayed match-to-place task.
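To make the core mechanism concrete, here is a minimal sketch (not the model itself) of temporal difference learning of a value function over a place cell representation: Gaussian place cells tile a one-dimensional track, and TD(0) learns weights so that value rises toward a rewarded goal location. All details (track length, number of cells, tuning width, learning rate, the fixed rightward policy) are illustrative assumptions, not parameters from the work described.

```python
import numpy as np

# Illustrative sketch: TD(0) value learning with Gaussian place-cell
# features on a 1D track, reward at the rightmost position. Parameters
# are arbitrary choices for demonstration, not from the actual model.

n_pos = 20                                   # discrete positions 0..19
n_cells = 20                                 # number of place cells
centers = np.linspace(0, n_pos - 1, n_cells) # place-field centers
width = 2.0                                  # place-field width (sigma)

def features(x):
    """Gaussian place-cell activities for position x."""
    return np.exp(-0.5 * ((x - centers) / width) ** 2)

w = np.zeros(n_cells)                        # value weights: V(x) = w . f(x)
alpha, gamma = 0.1, 0.9                      # learning rate, discount

for episode in range(200):
    x = 0
    while x < n_pos - 1:
        x_next = x + 1                       # fixed rightward policy
        r = 1.0 if x_next == n_pos - 1 else 0.0
        v = w @ features(x)
        v_next = 0.0 if x_next == n_pos - 1 else w @ features(x_next)
        delta = r + gamma * v_next - v       # TD error
        w += alpha * delta * features(x)     # credit the active place cells
        x = x_next

V = np.array([w @ features(x) for x in range(n_pos)])
print(bool(V[0] < V[10] < V[18]))            # value should rise toward the goal
```

After training, the learned value function increases smoothly with proximity to the goal, which is what makes the place cell basis well suited to the fixed-goal reference memory task; the match-to-place variant requires the additional machinery the abstract alludes to, since the goal changes between days.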
This is joint work with David Foster and Richard Morris.