The role of dopamine in operant learning

Date

2020

Abstract

In order to ensure survival, animals must learn to repeat actions that produce beneficial outcomes. When an action leads to a desired outcome, animals learn to associate the action with the outcome. These associations can be studied as changes in the probability of an action based upon its outcome. To better understand how these changes in behavior arise, it is important to understand how specific brain circuits change these probabilities. Self-stimulation paradigms have been essential in investigating the contribution of different brain circuits to repeated behavior. Out of this research has emerged a role for dopamine (DA) in increasing the probability that an action will be repeated (Wise, 1978). The ability of DA to change behavior may arise from the resulting period of increased synaptic plasticity (Quinlan et al., 2018; Yagishita et al., 2014; Harley et al., 1989). Any synaptic inputs that mediate beneficial actions during this plasticity window will be potentiated, leading to an increased probability of that action in the future. Based on the role of dopamine in self-stimulation, I sought to further investigate how the function of DA differs across brain regions. Specifically, I examined different dopaminergic nuclei and their downstream targets in an effort to assess differences in learning strategies based upon the brain regions recruited.
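To make the idea of a dopamine-gated plasticity window concrete, the sketch below (which is illustrative and not taken from the dissertation) implements a minimal three-factor-style update in Python: synaptic inputs leave a decaying eligibility trace, and only inputs whose traces overlap the dopamine window are potentiated. All variable names and parameter values are hypothetical.

```python
# Illustrative sketch (not from the dissertation): inputs active near a
# dopamine-opened plasticity window are potentiated; others are not.
import numpy as np

rng = np.random.default_rng(0)

n_synapses = 100
weights = np.ones(n_synapses)        # synaptic weights
eligibility = np.zeros(n_synapses)   # decaying trace of recent presynaptic activity

tau_elig = 20.0              # eligibility decay time constant (timesteps), hypothetical
lr = 0.05                    # learning rate applied while dopamine is elevated
da_window = range(50, 60)    # timesteps during which dopamine is present

for t in range(200):
    pre_active = rng.random(n_synapses) < 0.1          # which inputs fired this step
    eligibility = eligibility * np.exp(-1.0 / tau_elig) + pre_active
    if t in da_window:
        # Only inputs with eligibility overlapping the dopamine window gain weight.
        weights += lr * eligibility

print("mean weight change:", weights.mean() - 1.0)
```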

In the first set of experiments, I examine the ventral tegmental area (VTA), a midbrain dopaminergic nucleus with projections to the striatum. To investigate this brain region, I developed a novel closed-loop self-stimulation paradigm in an open field. A key feature of this task is that it allowed us to investigate how dopamine stimulation reinforces naturalistic behaviors. Further, the task was designed for within-animal comparisons by including baseline, stimulation, and recovery epochs. Specifically, I investigate speed-contingent stimulation: upon reaching a speed threshold, animals received optogenetic activation of dopamine neurons in the VTA. Using this paradigm, I show that freely moving animals will increase their speed to receive stimulation. These animals often develop stereotyped trajectories while attempting to reach the speed threshold (Figure 6b,c). Similarly high degrees of stereotypy can also be observed in the movement patterns (i.e., circular trajectories) of animals during a fixed-ratio self-stimulation task (Figure 9b,d), even though movement away from the lever is not required. These results suggest that VTA dopamine may generate a long window of plasticity, resulting in the learning of unreinforced behaviors.
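The logic of the speed-contingent closed loop can be summarized in a short Python sketch. This is purely illustrative and is not the dissertation's acquisition code; the tracking and laser interfaces (get_position, laser_pulse), the threshold, and the epoch durations are hypothetical placeholders.

```python
# Illustrative sketch of a speed-contingent closed-loop trigger (hypothetical values).
import time

SPEED_THRESHOLD = 8.0   # cm/s, hypothetical
REFRACTORY_S = 1.0      # minimum interval between stimulations, hypothetical
EPOCHS = [("baseline", 600), ("stimulation", 1200), ("recovery", 600)]  # seconds

def run_session(get_position, laser_pulse, dt=0.05):
    """Poll position, estimate speed, and trigger stimulation only during the
    stimulation epoch when speed exceeds the threshold."""
    last_stim = -REFRACTORY_S
    prev_xy, t = get_position(), 0.0
    for epoch_name, duration in EPOCHS:
        epoch_end = t + duration
        while t < epoch_end:
            time.sleep(dt)
            t += dt
            xy = get_position()
            speed = ((xy[0] - prev_xy[0]) ** 2 + (xy[1] - prev_xy[1]) ** 2) ** 0.5 / dt
            prev_xy = xy
            if (epoch_name == "stimulation"
                    and speed >= SPEED_THRESHOLD
                    and t - last_stim >= REFRACTORY_S):
                laser_pulse()        # deliver optogenetic stimulation
                last_stim = t
```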

In the second set of experiments, I used the same open-field and operant self-stimulation paradigms with animals that had channelrhodopsin expressed in D1+ neurons of the dentate gyrus (DG). Optogenetic activation of these neurons was sufficient to generate repeated behavior, suggesting that this novel population of neurons underlies self-stimulation behavior in the hippocampus (Ursin et al., 1996). Further, I find that the VTA does not project to the dorsal DG, but the locus coeruleus (LC) does. LC projections to the hippocampus are sufficient to generate repeated behavior, suggesting that hippocampal dopamine comes from the LC. Response magnitude and stereotypy were decreased in the DG animals, suggesting that there may be a functional difference in how this behavior is repeated.

In the third set of experiments, I sought to investigate the physiology of D1+ neurons in the hippocampus. D1+ medium spiny neurons in the striatum fire in response to learned actions (Cui et al., 2013); however, less is known about D1+ neurons in the hippocampus. If these neurons are sufficient to generate repeated behavior, it is possible that they encode aspects of operant learning during the pursuit of natural rewards. Using calcium imaging, I found that a subset of D1+ neurons responded to lever pressing. This proportion was higher than that of neurons responding to passive reward delivery, suggesting a preferential role in operant, as opposed to associative, learning.
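As a rough illustration of how such a comparison could be made, the sketch below computes the fraction of neurons whose event-aligned calcium response exceeds a simple pre-event baseline criterion, separately for lever presses and passive reward deliveries. This is not the dissertation's analysis pipeline; the criterion, window sizes, and simulated data are all hypothetical.

```python
# Illustrative sketch: fraction of neurons responsive to two event types,
# using a simple pre- vs post-event criterion on simulated dF/F traces.
import numpy as np

rng = np.random.default_rng(1)

def responsive_fraction(traces, event_frames, pre=10, post=10, z_thresh=2.0):
    """traces: (n_neurons, n_frames) dF/F. A neuron counts as responsive if its
    mean post-event activity exceeds its pre-event mean by z_thresh pre-event SDs."""
    responsive = np.zeros(traces.shape[0], dtype=bool)
    for i in range(traces.shape[0]):
        pre_vals = np.concatenate([traces[i, f - pre:f] for f in event_frames])
        post_vals = np.concatenate([traces[i, f:f + post] for f in event_frames])
        responsive[i] = post_vals.mean() > pre_vals.mean() + z_thresh * pre_vals.std()
    return responsive.mean()

# Simulated data: 50 neurons, 3000 frames, events at arbitrary frames.
traces = rng.normal(0, 1, (50, 3000))
lever_frames = np.arange(200, 2800, 200)
reward_frames = np.arange(300, 2800, 200)
print("lever-press responsive fraction:", responsive_fraction(traces, lever_frames))
print("reward responsive fraction:", responsive_fraction(traces, reward_frames))
```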

Together, these experiments begin to paint a picture of how dopamine functions in different brain regions. Comparisons across these regions demonstrate that dopamine and dopamine-binding neurons are sufficient to generate repeated behavior regardless of projection target. However, there are clear differences between brain regions, suggesting that the target neurons play a role in the strategy used to repeat the behavior.

I find that VTA stimulation differs from stimulation of D1+ neurons in the DG in both magnitude and stereotypy (Figure 9). A potential explanation for these differences is that activation of the VTA causes compulsive or habitual behavior (Taha et al., 1982), whereas activation of D1+ neurons in the hippocampus results in more flexible behavior (Packard & White, 1991).

Further, the finding that a population of D1+ DG neurons is modulated by lever pressing suggests a role for the hippocampus in operant learning, rather than just spatial coding. Overall, I have laid the groundwork for studying how operant learning strategies differ depending on the target of dopaminergic projections.

Subjects

Neurosciences, Dopamine, Operant

Citation

Petter, Elijah (2020). The role of dopamine in operant learning. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/20997.
