Introduction

To thrive, organisms must choose which rewards to pursue and learn from the results of those choices. Difficulties with these reward-related functions are present in psychiatric disorders including depression, schizophrenia, and addiction [1,2,3]. Although mesocortical dopamine (DA) is a key neurobiological substrate for reward-based decision-making and learning, it remains unclear how these functions are affected by DAergic medications.

Medications that increase DA enhance both willingness to exert effort for reward [4, 5] and reward learning [6, 7], while DA depletion decreases exertion of effort for reward [8, 9] and impairs reward learning [6, 7, 10, 11]. However, preclinical studies suggest that these processes depend on different DA signals. Specifically, local, non-bursting release of DA at nucleus accumbens synapses appears to influence effort for rewards, while burst firing of DA neurons encodes reward learning [12, 13]. Moreover, effort may be more closely tied to activity at more sensitive D2 receptors, while learning may depend on less sensitive D1 receptors [14,15,16,17]. This suggests that the same DAergic manipulation could exert differing effects on effort-related decision-making and reward learning.

Consistent with this idea, overexpression of nucleus accumbens D2 receptors and enhancement of extracellular DA via DA transporter knockdown increase willingness to exert effort for reward [18, 19], but not reward learning, in mice [18]. While no within-subject comparisons have been conducted in humans to date, similar drugs and doses across studies have affected one function [5, 20] but not the other [21,22,23]. The dose-response curve for DA and reward functioning is believed to follow an inverted U, such that there is an “optimal” level of DA, with both overly low and overly high levels impairing reward functioning [24, 25]. One study found that a moderate dose of d-amphetamine increased the willingness of rats to exert effort for reward, while a high dose decreased it [26]. This work suggests that different reward-related processes may have different dose-response relationships with DA and different “optimal” levels of DA.

In the current work, we examined the within-subject effects of two doses (10 and 20 mg) of d-amphetamine on an effort-related decision-making task, in which participants choose how much effort to exert for rewards with a given expected value, and on a reward learning task, which examines the development of a response bias toward more highly rewarded options. We selected d-amphetamine because, although it is not DA-specific, it has been shown to improve reward learning on a probabilistic reward task in rodents [27] and to affect effort-based decision-making in humans [4] and rodents [26]. Further, tyrosine depletion studies suggest that although d-amphetamine affects other neurotransmitter systems, its effects on reward functioning appear to be dopamine-mediated [28].

In addition to testing multiple doses of d-amphetamine, we also examined individual differences that may interact with d-amphetamine to affect effort-based decision-making and reward learning. First, based on evidence from rodent studies suggesting that baseline willingness to exert effort moderates the effects of stimulants on effort [29], we examined whether baseline willingness to exert effort moderated the effects of d-amphetamine. Furthermore, enhancing dopamine has been shown to differentially affect reward learning according to working memory capacity [22, 30,31,32]. Therefore, we also evaluated whether working memory moderates the effects of d-amphetamine.

Materials and methods

Participants

Thirty healthy adults were recruited via flyers, internet ads, and a database of participants in previous studies. Eligibility screening consisted of a physical exam, an electrocardiogram, the Structured Clinical Interview for DSM-5 administered by Master’s-level trained counselors, and a drug use history questionnaire. Potential participants were excluded for: 1. Contraindication to amphetamine (high blood pressure, abnormal ECG, pregnancy, or breastfeeding); 2. Conditions requiring regular medication, or regular use of a supplement with hazardous interactions with d-amphetamine (e.g., St. John’s wort); 3. Previous adverse reaction to d-amphetamine; 4. No prior experience with psychoactive substances (this addresses human subjects concerns with administering psychoactive substances to completely substance-naïve participants; psychoactive substance was broadly defined [e.g., alcohol], and prior experience with stimulants or illicit drugs was not required); 5. Current DSM-5 diagnosis other than mild Substance Use Disorder (≤3 symptoms); 6. Lifetime history of moderate to severe Substance Use Disorder (≥4 symptoms), mania, or psychosis; 7. BMI below 19 or above 29; 8. Less than a high-school education or lack of fluency in English; 9. Smoking more than 10 cigarettes per week.

Participants were instructed to abstain from drugs for 48 h prior to each session (confirmed via Reditest urine drug screen for cocaine, amphetamines, methamphetamine, tetrahydrocannabinol, opioids, and benzodiazepines), abstain from alcohol for 24 h prior to each session (confirmed via breath testing), get adequate sleep, fast for 9 h prior to each session, and maintain their typical consumption of nicotine and caffeine (verified by self-report upon arrival). Because menstrual phase can affect responses to d-amphetamine [33], women were scheduled during the follicular phase, except for women on hormonal birth control or with extreme cycle irregularity. The University of Texas Health Science Center at Houston IRB approved this study, and all participants gave written informed consent in accordance with the Declaration of Helsinki. See Table 1 for sample characteristics.

Table 1 Sample characteristics.

Measures

Manipulation checks

We used the Elation subscale of the Profile of Mood States (POMS) to assess typical mood effects of the drug [34, 35]. Participants also completed the Drug Effects Questionnaire (DEQ) [36], which contains an item assessing the extent to which participants felt a drug effect. Mean arterial pressure (MAP) was used to track cardiovascular effects of the drug.

The effort expenditure for rewards task (EEfRT)

The EEfRT is a measure of effort-based decision-making that has been described thoroughly elsewhere [37]. Briefly, each trial presents the participant with a choice between an “easy” keypress task worth $1.00 and a “hard” keypress task worth a variable amount of reward ($1.24–$4.21). Participants are shown the amount the hard task is worth and the probability of winning (88%, 50%, and 12%) before making each choice. The primary outcome variable was choice of the hard task. Key press speed was measured to control for psychomotor effects of d-amphetamine. Two “win” trials were randomly chosen for payout.

The probabilistic reward task (PRT)

The PRT was selected because it has been widely used in human studies, has translational value, and has been shown in prior work with rodents and humans to be sensitive to dopamine manipulations [27]. On each trial of the PRT, participants are presented with a cartoon face and must select the length (short or long) of a feature. The face is presented for only 100 ms, making the decision difficult. Multiple versions with different features (mouth vs. nose) were used in counterbalanced order to avoid practice effects. Following some correct responses, participants receive a 5-cent monetary reward; correct identification of one length (the “rich” stimulus) is rewarded more frequently than correct identification of the other length (the “lean” stimulus). Healthy adults typically develop a response bias toward the more frequently rewarded category. Reward learning was measured via a signal detection approach [38]. The primary outcome was response bias (log b), the propensity to select the more rewarded response:

$$\log b = \frac{1}{2}\log\left(\frac{\mathrm{Rich}_{\mathrm{correct}} \times \mathrm{Lean}_{\mathrm{incorrect}}}{\mathrm{Rich}_{\mathrm{incorrect}} \times \mathrm{Lean}_{\mathrm{correct}}}\right)$$

Discriminability (log d), the participant’s ability to differentiate the two stimuli, was also measured to control for possible perceptual/attentional improvements due to d-amphetamine:

$$\log d = \frac{1}{2}\log\left(\frac{\mathrm{Rich}_{\mathrm{correct}} \times \mathrm{Lean}_{\mathrm{correct}}}{\mathrm{Rich}_{\mathrm{incorrect}} \times \mathrm{Lean}_{\mathrm{incorrect}}}\right)$$
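As a concrete illustration, the sketch below computes both indices from the four per-block trial counts. It is written in R (the software used for the analyses reported below); the variable names, the base-10 logarithm, and the optional 0.5 correction for empty cells are assumptions for illustration rather than the authors’ exact procedure.

```r
# Minimal sketch: response bias (log b) and discriminability (log d) from
# per-block trial counts. Names, log base, and the 0.5 correction are
# illustrative assumptions, not the authors' exact procedure.
prt_sdt <- function(rich_correct, rich_incorrect,
                    lean_correct, lean_incorrect,
                    correction = 0.5) {
  rc <- rich_correct   + correction  # adding 0.5 to each cell avoids log(0)
  ri <- rich_incorrect + correction
  lc <- lean_correct   + correction
  li <- lean_incorrect + correction

  log_b <- 0.5 * log10((rc * li) / (ri * lc))  # bias toward the rich response
  log_d <- 0.5 * log10((rc * lc) / (ri * li))  # ability to tell the stimuli apart

  c(response_bias = log_b, discriminability = log_d)
}

# Example with made-up counts from one block:
prt_sdt(rich_correct = 38, rich_incorrect = 7,
        lean_correct = 30, lean_incorrect = 15)
```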

Working memory task

Participants completed a brief validated working memory battery at baseline only that included items measuring operation span, reading span, and symmetry span [39]. The primary outcome was the “partial” working memory score, which counts all correct identifications, even in partially recalled strings.

Procedures

Participants first attended an orientation session in which they practiced the study tasks and completed the baseline working memory task and a baseline measure of the EEfRT. All subsequent drug sessions began at 9 am. After completing measures of mood, subjective drug effects, and blood pressure, participants were administered the drug or placebo at 9:30 am. While waiting for drug effects to peak, participants were allowed to watch a movie or read a book. Manipulation checks were repeated at 10 am and 11 am. Participants completed the study tasks ~1.5 h after drug administration, to coincide with peak drug effect; the order of tasks was randomized. Manipulation checks were then completed every hour until at least 1 pm and until drug effects returned to baseline, at which point participants completed an End of Session Questionnaire and left the lab (~1:30 pm for most participants). Sessions were separated by at least 72 h.

Data analytic plan

A series of mixed-effects models assessed the effects of d-amphetamine on the manipulation checks and on EEfRT/PRT performance. All analyses were performed in R [40] using the lme4 and lmerTest packages, with the Satterthwaite method for degrees of freedom [41, 42]. All continuous variables were mean-centered and categorical variables were contrast-coded. We established our random-effects structures by generating a maximal model and iteratively reducing it following Bates, Kliegl, Vasishth, and Baayen, using the RePsychLing package [41, 43, 44]. Follow-up tests of significant main effects or interactions were conducted using the emmeans package [45].
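For readers who wish to reproduce this general workflow, the sketch below shows one way to set it up with lme4, lmerTest, and emmeans, using a hypothetical continuous outcome (e.g., a manipulation check such as mean arterial pressure). The data frame, column names, and the particular random-effects structures are placeholders rather than the authors’ actual script; the reported models followed the iterative reduction procedure described above.

```r
# Illustrative R workflow (not the authors' exact script). Assumes a long-format
# data frame `dat` with columns subject, drug (placebo/10 mg/20 mg), time, and a
# continuous outcome.
library(lme4)      # mixed-effects model fitting
library(lmerTest)  # Satterthwaite degrees of freedom for t/F tests
library(emmeans)   # follow-up tests of significant effects

dat$drug <- factor(dat$drug, levels = c("placebo", "10mg", "20mg"))
dat$time <- factor(dat$time)
contrasts(dat$drug) <- contr.poly(3)                  # linear/quadratic dose contrasts
contrasts(dat$time) <- contr.poly(nlevels(dat$time))

# Start from a rich random-effects structure and simplify if it fails to
# converge or is degenerate (cf. Bates et al.; the RePsychLing package offers
# diagnostics of the random-effects covariance matrix).
m_max <- lmer(outcome ~ drug * time + (1 + drug | subject), data = dat)
m_red <- lmer(outcome ~ drug * time + (1 | subject), data = dat)  # reduced fallback

anova(m_red)                            # Satterthwaite-based F tests (lmerTest)
emmeans(m_red, pairwise ~ drug | time)  # pairwise follow-ups of significant effects
```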

Manipulation checks

Subjective effects (POMS Elation and DEQ “Feel Drug”) and cardiovascular effects (MAP) were each modeled using a linear mixed-effects model (LMM), with fixed effects for Drug (placebo, 10 mg, or 20 mg d-amphetamine), Time (pre-capsule and 30, 90, 180, and 240 min post-capsule), and their interaction.

EEfRT

Choices were modeled using a generalized linear mixed-effects model (GLMM) with a logit link function for the binomial (hard/easy) outcome. Fixed effects were Drug (placebo, 10 mg, or 20 mg), Probability (12%, 50%, or 88%), Amount ($1.24–$4.21), and their interactions (note that the Probability × Amount interaction captures the expected value of a reward). We also included fixed effects for Trial Number (0–50) and Session (1, 2, or 3) to account for effects of fatigue and practice, consistent with prior analyses [4, 46]. For the control analyses of psychomotor speed, we first used an LMM to model keypress speed as a function of Drug (placebo, 10 mg, or 20 mg) and type of task chosen (hard vs. easy). Individual estimates of the linear effect of drug on tapping speed were then entered as a between-subjects covariate in the final choice model.
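As a minimal sketch, one way to specify this choice model with lme4 is shown below; the trial-level data frame and its column names are hypothetical, and the random-effects structure is only a starting point prior to the reduction procedure described above.

```r
# Illustrative GLMM for hard/easy choices with a logit link. `eefrt` is assumed
# to be a trial-level data frame; column names are placeholders.
library(lme4)

eefrt$drug        <- factor(eefrt$drug)         # placebo, 10 mg, 20 mg
eefrt$probability <- factor(eefrt$probability)  # 12%, 50%, 88%
contrasts(eefrt$drug)        <- contr.poly(3)   # linear/quadratic dose contrasts
contrasts(eefrt$probability) <- contr.poly(3)

# Mean-center continuous predictors
eefrt$amount_c  <- eefrt$amount  - mean(eefrt$amount)
eefrt$trial_c   <- eefrt$trial   - mean(eefrt$trial)
eefrt$session_c <- eefrt$session - mean(eefrt$session)

choice_model <- glmer(
  hard_choice ~ drug * probability * amount_c + trial_c + session_c +
    (1 + probability | subject),   # starting random-effects structure, reduced as needed
  data = eefrt, family = binomial(link = "logit")
)
summary(choice_model)
```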

Significant effects of d-amphetamine on the EEfRT were followed up by fitting a series of computational models described in Cooper et al. (2019) (see Supplementary Materials for full description). Briefly, the subjective value (SV) model uses the reward (R), effort (E), and probability (P) of each option to estimate the subjective value of each option on each trial:

$$SV = R \cdot P^{h} - k \cdot E$$

Free parameter h modifies subjective value according to the probability that the reward will be received and can be interpreted as sensitivity to probability, while free parameter k reduces subjective value based on the amount of effort required, independent of probability of reward receipt. The fit of the SV model was compared with the fit of a simple model that does not use trial-wise information to guide choice (ΔBIC; see Supplementary Table S1 and Supplementary Figure S1). ΔBIC and best-fitting parameters k and h were examined using repeated-measures ANOVA with d-amphetamine dose as a within-subjects factor. First session (placebo, 10 mg, 20 mg) was included as a between-subjects factor to control for potential order effects. Due to non-normal distributions, free parameters were log-transformed prior to analysis. Greenhouse–Geisser corrected statistics are reported where sphericity was violated. The goal of these analyses was to test whether d-amphetamine affected the degree to which participants use available information to guide choice, or best-fitting model parameters (i.e. effort discounting, sensitivity to probability).
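To make the SV model concrete, the sketch below implements one plausible reading of it with a softmax choice rule and fits it by maximum likelihood for a single participant and dose. The effort coding (hard = 1, easy = 0), the inverse-temperature parameter, the parameter bounds, and the intercept-only comparison model are assumptions for illustration, not a reproduction of Cooper et al.’s code.

```r
# Illustrative maximum-likelihood fit of the subjective-value (SV) model,
# SV = R * P^h - k * E, for one participant x dose. A softmax with inverse
# temperature beta maps the hard-minus-easy SV difference onto the probability
# of choosing the hard task. Data columns and bounds are illustrative.

sv <- function(R, P, E, h, k) R * P^h - k * E

nll_sv <- function(par, d) {
  h <- par[1]; k <- par[2]; beta <- par[3]
  sv_hard <- sv(d$amount_hard, d$prob, 1, h, k)  # hard option: variable reward, effort coded 1
  sv_easy <- sv(1.00,          d$prob, 0, h, k)  # easy option: $1.00, effort coded 0
  p_hard  <- plogis(beta * (sv_hard - sv_easy))  # two-option softmax = logistic
  -sum(dbinom(d$hard_choice, size = 1, prob = p_hard, log = TRUE))
}

fit_sv <- function(d) {
  fit <- optim(par = c(h = 1, k = 0.1, beta = 1), fn = nll_sv, d = d,
               method = "L-BFGS-B",
               lower = c(0.01, 0.001, 0.01), upper = c(5, 5, 20))
  list(h = fit$par[["h"]], k = fit$par[["k"]], beta = fit$par[["beta"]],
       BIC = 2 * fit$value + length(fit$par) * log(nrow(d)))
}

# Simple comparison model: a single constant probability of choosing hard,
# ignoring trial-wise reward and probability information.
fit_simple <- function(d) {
  nll <- function(p) -sum(dbinom(d$hard_choice, 1, p, log = TRUE))
  fit <- optimize(nll, interval = c(0.001, 0.999))
  list(BIC = 2 * fit$objective + 1 * log(nrow(d)))
}

# A positive delta-BIC (simple minus SV) favors the SV model for this participant:
# delta_bic <- fit_simple(d)$BIC - fit_sv(d)$BIC
```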

Finally, we examined whether baseline EEfRT performance or working memory moderated the effects of d-amphetamine on choice by including percent of hard task choices at baseline and composite working memory scores in the final choice model as between-subjects, mean-centered continuous fixed effects [47].
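A brief sketch of how such moderators can be entered is shown below, assuming subject-level, mean-centered columns (hypothetical names) merged into the trial-level data frame used in the choice-model sketch above.

```r
# Moderator sketch: subject-level, mean-centered covariates entered as
# between-subjects fixed effects that interact with the within-subject terms.
# `baseline_hard_c` and `wm_c` are placeholder column names.
mod_effort <- update(choice_model, . ~ . + baseline_hard_c * drug * probability * amount_c)
mod_wm     <- update(choice_model, . ~ . + wm_c * drug * probability * amount_c)
summary(mod_effort)
summary(mod_wm)
```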

Two participants were excluded from all analyses involving the EEfRT, leaving twenty-eight participants: one chose the hard task on every trial across all sessions, and the other completed only high-value trials in order to bias payout.

PRT

PRT variables (response bias, discriminability, reaction time, and reinforcement schedule) were modeled using LMMs. For response bias and discriminability, fixed effects were Drug (placebo, 10 mg, or 20 mg), Block (1, 2), and their interaction. For analyses of reaction time and reinforcement schedule, Stimulus (rich, lean) was also included (see Supplementary Material for results). We examined whether baseline EEfRT performance or working memory moderated the effects of d-amphetamine on response bias by including percent of hard task choices at baseline and composite working memory scores as between-subjects, mean-centered fixed effects in two separate models. One participant was excluded from PRT analyses due to an excessive number (>20) of trials with outlier reaction times.
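For completeness, a minimal sketch of the response-bias model follows; the data frame and column names are again placeholders rather than the authors’ script.

```r
# Illustrative LMM for PRT response bias. `prt` is assumed to hold one row per
# participant x session x block with a precomputed log_b column.
library(lmerTest)

prt$drug  <- factor(prt$drug)
prt$block <- factor(prt$block)
contrasts(prt$drug) <- contr.poly(3)

bias_model <- lmer(log_b ~ drug * block + (1 | subject), data = prt)
anova(bias_model)  # Satterthwaite F tests for Drug, Block, and their interaction
```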

Relationship between EEfRT, PRT, and Working Memory

We used Pearson’s correlations to examine the relationship between performance on the EEfRT (% hard choices) and the PRT (response bias) under placebo, as well as the relationship between drug effects on the two tasks. Pearson’s correlations were also used to examine associations between drug effects (20 mg − placebo and 10 mg − placebo) and baseline measures of EEfRT performance and working memory.
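A short sketch of these correlations is shown below, assuming a subject-level data frame with one row per participant and per-dose summary scores (hypothetical column names).

```r
# Illustrative correlation analyses. `subj` is assumed to contain one row per
# participant with per-dose summary scores; column names are placeholders.
subj$eefrt_20_pl <- subj$eefrt_hard_20mg - subj$eefrt_hard_pl  # drug effect: 20 mg minus placebo
subj$prt_20_pl   <- subj$prt_bias_20mg   - subj$prt_bias_pl

cor.test(subj$eefrt_hard_pl, subj$prt_bias_pl)  # placebo-session association across tasks
cor.test(subj$eefrt_20_pl,   subj$prt_20_pl)    # drug-effect association across tasks
cor.test(subj$eefrt_20_pl,   subj$baseline_wm)  # drug effect vs. baseline working memory
```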

Results

Manipulation checks

d-Amphetamine demonstrated typical subjective and cardiovascular effects, peaking at or near the time of the behavioral tasks (see Table S2 and Fig. S3 for full results).

Effort expenditure for rewards task

Effort

As shown in Fig. 1a, d-amphetamine increased choice of the hard task at the 20 mg dose, consistent with our previous results (linear Drug effect, B = 1.90, SE = 0.29, z = 2.24, p = 0.03). However, the effect of the drug was most evident at low to moderate expected values (linear Drug × quadratic Probability × Amount interaction, B = 1.72, SE = 0.50, z = 3.43, p < 0.001; full results in Supplementary Table S3). This likely occurred because near-ceiling levels of effort were exerted in all drug conditions when both probability and reward amount were high. Figure 1b shows the effect of drug at each level of probability, graphed at representative points across the range of possible reward amounts. Including the effect of drug on keypress speed as a covariate did not change these results.

Fig. 1: Overall drug effects on the EEfRT.

a Main effect of d-amphetamine on % hard task choices. Error bars represent standard error of the mean. b Probability × amount interaction.

Computational modeling

In a separate set of analyses using our computational modeling approach, we found that model fit (BIC) for the subjective value model was similar across all doses of d-amphetamine, F(2, 50) = 0.192, p = 0.826. Model fit (ΔBIC) also did not show differences across d-amphetamine doses, F(2, 50) = 0.065, p = 0.937, suggesting that d-amphetamine did not affect the degree to which participants systematically incorporated trial-wise reward and probability information when allocating effort. In a repeated-measures ANOVA model that included Parameter (h and k), representing sensitivity to probability and effort, respectively, and d-amphetamine dose, we observed a significant interaction between parameter (h or k) and d-amphetamine dose, F(1.54, 38.60) = 3.841, p = 0.040, such that parameter k (effort aversion) showed differences across levels of amphetamine, F(1.52, 38.09) = 9.265, p = 0.001, while parameter h did not, F(1.46, 36.37) = 1.529, p = 0.230. Moreover, the effort aversion parameter showed a significant linear contrast, F(1,25) = 12.064, p = 0.002, suggesting that amphetamine exhibited a dose-dependent effect on effort discounting, but did not affect discounting based on probability (Fig. 2). We additionally observed a trend-level effect on the inverse temperature parameter such that choices were more consistent under higher doses of amphetamine (see Supplementary Materials).

Fig. 2: Computational modeling results on the EEfRT.

a Changes in mean values for parameter k (effort sensitivity) across doses of d-amphetamine. b Changes in mean values for parameter h (probability sensitivity) across doses of d-amphetamine.

Moderators

Baseline effort expenditure did not moderate the overall drug effect on choice. However, baseline effort expenditure significantly interacted with drug and probability, as well as with drug and reward amount. d-Amphetamine differentially increased effort in individuals with lower baseline effort at low to moderate probabilities of reward (quadratic Drug × quadratic Probability × Baseline Effort interaction, B = −0.54, SE = 0.26, z = −2.06, p = 0.04). Baseline effort was a continuous measure, but for purposes of interpretation, we graphed and performed post-hoc tests at −1 SD (“Low”) and +1 SD (“High”) levels of baseline effort (Fig. 3a, b). Post-hoc GLMMs revealed that, for low baseline effort, 20 mg of d-amphetamine increased hard task choices at low probabilities of reward (B = 3.11, SE = 1.73, z = 2.03, p = 0.04), and both 10 mg and 20 mg of d-amphetamine increased hard task choices at medium probability (B = 2.59, SE = 0.97, z = 2.20, p = 0.04 and B = 3.22, SE = 1.71, z = 2.54, p = 0.01, respectively). Those with high baseline effort showed no significant effects of d-amphetamine at any probability level. d-Amphetamine also differentially increased effort expenditure in those with lower baseline effort at low amounts of reward (linear Drug × Amount × Baseline Effort interaction, B = 0.84, SE = 0.37, z = 2.29, p = 0.02). We tested the effects of the drug at representative “low” ($1.96) vs. “high” ($3.40) reward values (Fig. 3c, d): 20 mg of d-amphetamine increased choice of the hard task in participants with low baseline effort at low reward values (B = 4.43, SE = 0.68, z = 3.12, p = 0.002), but did not increase hard task choices in participants with high baseline effort at either level of reward. Together, these results suggest that the drug had larger effects in participants with low relative to high baseline effort at the same low to intermediate expected values of reward where its effects were most evident overall (although it should be noted that the full Drug × Probability × Amount × Baseline Effort interaction did not reach significance; Supplementary Table S2).

Fig. 3: Baseline effort moderator analyses on the EEfRT.

Graphs display one standard deviation below and above the mean baseline effort for visual purposes only. In the analysis, baseline effort was used as a continuous moderator. a % hard task choices by probability in low baseline effort. b % hard task choices by probability in high baseline effort. c % hard task choices by probability and amount in low baseline effort. d % hard choices by probability and amount in high baseline effort.

Baseline working memory did not moderate the overall effect of the drug on willingness to exert effort for reward (linear Drug × Baseline Working Memory interaction, B = 0.03, SE = 0.31, z = 0.10, p = 0.91). However, baseline working memory capacity influenced the effect of the drug at low to intermediate values of expected reward (linear Drug × linear Probability × Amount × Baseline Working Memory interaction, B = 1.76, SE = 0.76, z = 2.31, p = 0.02; quadratic Drug × quadratic Probability × Amount × Baseline Working Memory interaction, B = 0.98, SE = 0.47, z = 2.09, p = 0.04). For purposes of interpretation, we graphed and performed post-hoc tests at −1 SD (“Low Working Memory”) and +1 SD (“High Working Memory”) levels of baseline working memory, and at low ($1.96) and high ($3.40) levels of reward (Fig. 4a, b). Both doses of d-amphetamine increased hard task choices in individuals with low working memory across several low to intermediate points on the expected value spectrum, but increased hard task choices in individuals with high working memory only when rewards had low probability and low amounts, or high probability but low amounts. In sum, individuals with lower baseline working memory showed effects of the drug across a greater range of expected reward values (see Supplementary Table S3).

Fig. 4: Working memory moderator analyses on the EEfRT.

Graphs display low and high working memory for visual purposes only. In the analysis, working memory was used as a continuous moderator. a % hard task choices by probability and amount in low working memory. b % hard task choices by probability and amount in high working memory.

Probabilistic reward task

Response bias

Participants developed a bias toward the more highly rewarded stimulus (intercept, B = 0.09, SE = 0.01, t = 6.80, p < 0.001). Response bias increased from block 1 to block 2, consistent with a reward learning process (main effect of Block, B = 0.04, SE = 0.02, t = 2.61, p = 0.01). There were also significant session (practice) effects, such that response bias increased from session 1 to session 2 and decreased again in session 3 (quadratic effect of Session, B = 0.08, SE = 0.03, t = 3.03, p < 0.01). However, d-amphetamine affected neither the overall strength of the response bias nor the rate at which the response bias developed (Supplementary Fig. S4 and Table S4).

Moderators

Response bias was not moderated by baseline performance on the EEfRT or overall working memory (Supplementary Table S4).

Relationships between motivation, learning and baseline measures

There was no relationship between hard task choices on the EEfRT and response bias on the PRT during the placebo session (r = 0.071, p = 0.726). There was also no significant relationship between the effects of 10 mg (r = 0.312, p = 0.113) or 20 mg (r = 0.027, p = 0.895) of d-amphetamine on the EEfRT and the PRT, suggesting at least partially separable processes. There was no significant relationship between baseline effort for reward and baseline working memory (r = 0.10, p = 0.61).

Discussion

The current study aimed to establish a dose-response curve of d-amphetamine on two distinct reward functions and test whether baseline reward motivation and working memory capacity moderated this relationship. Therapeutic doses of d-amphetamine increased overall willingness to exert effort for reward in a dose-dependent manner. This effect was particularly evident when reward magnitudes were small and/or probability of reward was low to moderate. However, d-amphetamine did not significantly affect reward learning, nor did baseline measures of reward motivation and working memory capacity moderate the effect of DA enhancement on reward learning.

Our results are congruent with robust evidence demonstrating that DA enhancement increases willingness to exert effort for reward in rodents [48] and healthy humans [4, 5]. Further, because d-amphetamine increases extracellular DA by inhibiting DA transport, these results extend findings from preclinical research demonstrating that DA transporter inhibitors increase willingness to exert effort in rodent models of effort-based decision-making [49,50,51,52]. These findings also closely replicate prior work in which the effect of 20 mg of d-amphetamine was particularly evident at low levels of reward probability; here, we observed a more complex interaction indicating that d-amphetamine was particularly effective at low to moderate levels of expected value, which incorporates both probability and amount information. The larger sample collected here may have enabled us to detect this more complex interaction [4, 53].

To further investigate the effect of amphetamine on allocation of effort for rewards, we also used a recently developed computational modeling approach. Our analyses revealed that enhancing DA affected effort discounting and not sensitivity to probability. This suggests that the observed interactions between amphetamine and probability may be driven primarily by expected values in the low to moderate probability levels that are more sensitive to individual differences in the effects of amphetamine on effort discounting, rather than a direct effect of amphetamine on probabilistic discounting. In addition, we found that amphetamine did not alter the strategies that individuals employed when making effort-based decisions. Finally, we note that effort discounting measured by this task only captures an effort/reward tradeoff, and additional work will be needed to distinguish decreased sensitivity to effort costs from increased sensitivity to reward (e.g. Westbrook et al. [53]).

Our findings also indicate that d-amphetamine boosted willingness to exert effort for reward more in individuals with lower baseline reward motivation and lower working memory performance. This effect was evident in the conditions where the effect of d-amphetamine was strongest, namely when the reward amount was low, and/or the probability of the reward was low to moderate. While baseline willingness to exert effort and working memory capacity do not exactly correspond to baseline DA functioning [31, 32, 54,55,56,57,58], these findings are consistent with the inverted-U hypothesis of DA and reward functioning, and suggest these baseline measures may be useful for tailoring DAergic treatments to individuals.

In contrast to our results for effort-related decision-making, we did not find any effect of d-amphetamine on response bias, nor did baseline measures moderate the effect of the drug on response bias. This is consistent with the hypothesis that higher levels of DA stimulation might be needed to alter reward learning. Doses that increased effort in rats were in the 0.125 to 0.25 mg/kg range, with 0.5 mg/kg actually decreasing effort [26], while reward learning in rats was increased at a dose of 0.5 mg/kg [27]. This suggests that future research would benefit from investigating the effect of larger doses of d-amphetamine on human reward learning. An alternative explanation might be that drugs that act via autoreceptors have a stronger effect on learning than reuptake inhibitors like d-amphetamine. The drug manipulation studies in healthy adults that have found an effect on reward learning mostly administered D2-selective agents, e.g., cabergoline and haloperidol [30, 59,60,61]. It is possible that the broad-spectrum effects of d-amphetamine are more important when weighing decisions among complex options (e.g., in the EEfRT) because they engage cortical DA signaling, compared with D2 drugs, which may have greater effects in the striatum [30, 62, 63]. Direct comparison of these drugs with d-amphetamine in the same sample would be valuable.

Limitations

The primary limitation of this study is the lack of neurochemical specificity of our DA manipulation and the indirect nature of the relationship between our baseline measures and baseline DA levels. d-Amphetamine also has noradrenergic and serotonergic effects, which have been linked to both reward processing and motivation [64,65,66,67,68]. Studies in rhesus monkeys suggest that norepinephrine is implicated in the valuation of effort costs and choice consistency, rather than willingness to exert effort per se [64, 65]. This is consistent with rodent studies in which administration of norepinephrine transport inhibitors failed to alter effortful responding [32, 49]. Further, while evidence for a role of serotonin in effort-based decision-making is limited, studies in both humans and rodents suggest a critical role for serotonin in reward learning [67,68,69]. Thus, future studies should consider using more specific pharmacological manipulations of DA in combination with PET to examine the role of baseline DA more directly. Second, it is unclear how different DA signaling dynamics may relate to these results. In addition, we were unable to explore the temporal dynamics of the evaluation of rewards and their associated costs. While our results indicate that d-amphetamine affects evaluation of effort costs more than the integration of reward feedback into future choices (i.e., reward learning), it is unclear whether d-amphetamine affects decisions at the time of option evaluation or during the choice itself. Studies that combine imaging with paradigms that sequentially measure multiple aspects of reward processing may be particularly productive for investigating the dynamics of DA in human reward motivation and learning.

Conclusions

In summary, the present study establishes that, with d-amphetamine, effort for reward may be more amenable to intervention than reward learning. This study also provides novel evidence linking individual differences in reward motivation and working memory to DA stimulant effects on effort for reward. This is a crucial step in establishing dose-response curves in reward processing and validating human models of the role of DA in reward. Establishing dose-response curves of therapeutic medications and identifying potential individual differences that may predict response is also critical for understanding and evaluating treatments for neuropsychiatric disorders that involve dysfunctional reward processing.

Funding and disclosure

This work was funded in part by National Institute of Mental Health grants R00MH102355 and R01MH108605 to MTT and F32MH115692 to JAC. It was also supported in part by National Institute on Drug Abuse grants K08DA040006 to MCW and F32DA048542 to HES. Finally, this project was supported in part by a fellowship from “la Caixa” Foundation (ID 100010434), LCF/BQ/DI19/11730047, to PLG. In the past 3 years, MTT has served as a paid consultant to Avanir Pharmaceuticals and BlackThorn Therapeutics. MTT is a co-inventor of the EEfRT, which was used in this study. Emory University and Vanderbilt University licensed this software to BlackThorn Therapeutics. Under the IP policies of both universities, MTT receives licensing fees and royalties from BlackThorn Therapeutics. In addition, MTT has a paid consulting relationship with BlackThorn. The terms of these arrangements have been reviewed and approved by Emory University in accordance with its conflict of interest policies, and no funding from these entities supported the current project. The authors declare no competing interests.