Abstract
Behaviour in spatial navigation is often organised into map-based (place-driven) versus map-free (cue-driven) strategies; behaviour in operant conditioning research is often organised into goal-directed versus habitual strategies. Here we attempt to unify the two. We review one powerful theory for distinct forms of learning during instrumental conditioning, namely model-based (maintaining a representation of the world) and model-free (reacting to immediate stimuli) learning algorithms. We extend these lines of argument to propose an alternative taxonomy for spatial navigation, showing how various previously identified strategies can be distinguished as "model-based" or "model-free" depending on the use of information, not on the type of information (e.g. cue vs place). We argue that identifying "model-free" learning with the dorsolateral striatum and "model-based" learning with the dorsomedial striatum could reconcile numerous conflicting results in the spatial navigation literature. From this perspective, we further propose that the ventral striatum plays key roles in the model-building process. We propose that the core of the ventral striatum is positioned to learn the probability of action selection for every transition between states of the world. We further review suggestions that the ventral striatal core and shell are positioned to act as "critics" contributing to the computation of a reward prediction error for the model-free and model-based systems, respectively. © 2012 Khamassi and Humphries.
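The "critic" role mentioned above centres on computing a reward prediction error. A minimal sketch of such a model-free critic is a temporal-difference (TD) error, δ = r + γV(s′) − V(s); the toy chain, parameter values, and function name below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a model-free "critic": a temporal-difference (TD)
# reward prediction error. All names and values here are illustrative.

gamma = 0.9   # discount factor (assumed)
alpha = 0.1   # learning rate (assumed)

# State values for a toy 3-state chain; reward arrives on entering state 2.
V = [0.0, 0.0, 0.0]

def td_update(V, s, r, s_next):
    """Compute the reward prediction error and nudge V(s) toward the target."""
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta

# One pass through the chain 0 -> 1 -> 2, reward only at the final step.
for s, r, s_next in [(0, 0.0, 1), (1, 1.0, 2)]:
    td_update(V, s, r, s_next)

print(V)  # V[1] has moved toward the reward; V[0] catches up on later passes
```

A model-based system would instead learn the transition and reward structure explicitly and derive values by planning over that model; the contrast is the point of the taxonomy the abstract proposes.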
| Original language | English |
|---|---|
| Journal | Frontiers in behavioral neuroscience |
| Issue number | 2012 |
| DOIs | |
| Publication status | Published - 29 Oct 2012 |
Keywords
- Action-outcome
- Basal ganglia
- Nucleus accumbens
- Reinforcement learning
- Stimulus-response