• N&PD Moderators: Skorpio | thegreenhand

Excess DA and NE in the PFC

aced126 (Bluelighter, joined May 18, 2015, 1,047 messages)
http://psychopharmacologyinstitute....te-adhd-mechanism-of-action-and-formulations/

This article states that excess DA and NE release in the PFC causes stress, which results in decreased functioning. At roughly what dosage levels would this effect start to manifest in typical patients on methylphenidate? My feeling is that even at very high stimulant doses, some people don't get "stressed". Could this mean that pushing doses into high ranges like that would improve functioning (let's ignore cardiovascular risks etc. for a moment)?

Also, how relevant/similar is this law to the graph the first link provides? https://en.wikipedia.org/wiki/Yerkes–Dodson_law
 

He's not saying that high levels of catecholamine release cause people to feel stressed. Rather, he is using stressful situations (he probably really means "fight or flight" situations) as examples of times when DA release would be high enough to negatively affect cognition.

It is pretty well established that many of the responses to DA in the PFC are non-monotonic, i.e., with inverted-U-shaped responses. That is probably one of the underlying reasons for the Yerkes-Dodson law.
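The inverted-U idea can be sketched numerically. Here's a toy model (entirely my own illustration, not from the article): performance modeled as a Gaussian in log-dose, so it rises, peaks at some optimum, and then falls as dose keeps increasing. The peak location and width are arbitrary made-up parameters.

```python
import math

def performance(dose, optimal_dose=10.0, width=1.0):
    """Relative task performance: an inverted U peaking at optimal_dose.

    Modeled as a Gaussian in log-dose; all parameters are
    illustrative, in arbitrary units.
    """
    log_offset = math.log(dose) - math.log(optimal_dose)
    return math.exp(-(log_offset ** 2) / (2 * width ** 2))

# Performance climbs toward the optimum, then declines symmetrically
# (in log-dose) past it.
for d in (1, 5, 10, 20, 100):
    print(f"dose={d:>3}: performance={performance(d):.2f}")
```

Doses equidistant from the optimum on a log scale (e.g. 5 and 20 around 10) give the same performance in this sketch, which is the qualitative shape the Yerkes-Dodson curve describes.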
 

What is the mechanism by which excess DA release into the PFC negatively affects cognition?
 

Working memory (WM) is a great example of this, with lots of insights from the work of Pat Goldman-Rakic. WM, which requires the brain to remember details of a sensory stimulus (identity, position, etc.) after its offset, occurs in the PFC. Basically what happens is that a group of cells representing details of the stimulus have to continue to fire after the stimulus disappears. That may sound simple, but it is actually very complex, and it requires the interplay of excitatory and inhibitory neurons to keep the correct cell ensembles firing while silencing neighboring cells. D1 activation facilitates this process by helping to keep the correct cells firing and suppressing the activity of other cells in the network. Basically D1 increases the signal-to-noise ratio. But too much D1 activation inhibits the firing of all cells nonspecifically, including those that represent the stimulus.

You can think of it like this: the PFC receives input from sensory processing regions. If you see a red ball in front of you at a particular location, particular cells in the PFC will fire. If you close your eyes, the stimulation of visual processing regions will cease. But if the cells in the PFC keep firing, that will link back to the cells in the visual and parietal lobes that processed the visual scene and identified the color, texture and size of the object, recognized it as a ball, and identified the location in 3D space. So keeping those PFC cells firing produces a short term record of the sensory event.
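The signal-to-noise point above can be caricatured in a few lines. This is purely my own toy simplification (the drive values and the simple subtractive form of the inhibition are made up): D1-like inhibition subtracts from every cell's drive, so moderate inhibition silences the weakly driven "noise" cells first, while excessive inhibition silences the stimulus-representing cells too.

```python
def firing(drive, d1_inhibition):
    """Toy firing rate: input drive minus nonspecific D1-linked inhibition,
    floored at zero (a cell can't fire negatively)."""
    return max(0.0, drive - d1_inhibition)

# Hypothetical drives: stimulus-representing cells get strong input,
# neighboring "noise" cells get weak input.
SIGNAL_DRIVE, NOISE_DRIVE = 10.0, 3.0

for d1 in (0.0, 3.0, 6.0, 12.0):
    s = firing(SIGNAL_DRIVE, d1)
    n = firing(NOISE_DRIVE, d1)
    print(f"D1 level {d1:>4}: signal={s:>4}, noise={n:>4}")
```

At zero inhibition both populations fire (poor signal-to-noise); at a moderate level only the signal cells survive; at a high level everything is silenced, which is the "too much D1" regime described above.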
 

Interesting...

A while ago on this forum, the topic of why 5-HT releasers have different physiological effects from 5-HT receptor agonists came up, and one proposed reason was that the 5-HT receptors couple differently according to the ligand bound. Does similar stuff happen here, where dopamine would cause one response and a D1 agonist drug could have a different effect?
 
^^Apparently some people call that "ligand-directed" or "biased signaling"; I was curious about that too. Some 5-HT2A agonists activate Gq and then PLC, while others activate other G proteins and then signal through arachidonic acid metabolites. What does normal serotonin activate?
 

Not too sure mate, let's wait until serotonin replies lol
 
Hahaha okay. I got anotha question for either of y'all, is D1 inhibitory or does it have a lot of connections with inhibitory cells? Just wondering how D1 would facilitate the suppression of other cells that were noise and not signal. Or does the D1 ramp up to the point where it is "louder" than the other cells and that's how it increases signal to noise ratio?
 
D1 receptors are neuromodulatory. In this particular case the effect of D1 on working memory is thought to be mediated by enhancement of the response to NMDA receptor activation.

More often than not, GPCRs can exhibit biased signaling. There are some biased D1 ligands; the ones I recall hearing about do not activate beta-arrestin (which is involved in internalization).

5-HT2A is coupled to several G-proteins, PLC through Gq, apparently Gi/o, Rho, beta-arrestin. The activation of PLA2 involves both Gq and Gi/o (at least in certain cell types). Serotonin activates all of those pathways.
 
Enhancement of the response to NMDA receptor activation? Sorry I'm confused, is this some sort of LTP?
 

No, the key with LTP is that it is persistent. The effect of D1 is acute. Neural membranes are dynamic, nonlinear systems, and there are a variety of interactions that can amplify or shunt current flow. For example, there are voltage-gated ion channels that can modify the response to NMDA receptor activation, and some of those channels are targets of second messengers downstream from GPCRs.
 
So when a GPCR like D1 signals downstream to a voltage gated ion channel this will amplify the NMDA signal but somehow keep the signal relevant to whatever information it's encoding?

Sorry I'm still confused as to how D1 is connecting to specifically NMDA via the ion channels (I guess when I think connections between things I think heterodimers). Am I even understanding correctly that the second messenger signaling from D1 is enhancing NMDA currents via the ion channels?
 

Neurons use ion channels to control and shape their excitability. That allows the cells to do things like prioritize which excitatory input they will respond to. Neurons receive input from various classes of cells but the termination pattern is not uniform -- individual cell classes usually target a particular dendritic region. So the postsynaptic cell can selectively shape its responsivity by segregating voltage-gated ion channels into particular dendritic regions.

Regarding D1 and NMDA receptors, the interaction isn't as specific as dimerization. Ligand- and voltage-gated ion channels can be segregated together in membrane microdomains, such as dendritic spines or lipid rafts, which effectively focuses the interaction.
 
Thank you very much. I'm curious about hallucinations with excessive dopamine or hypofunctioning glutamate, and how those two might be opposite sides of the same coin. It seems to me that being able to visualize a red ball (working memory) is somewhat akin to a hallucination, but excuse me if I'm off base drawing that comparison.

(Correct me if I'm wrong.) Normally, visual input would activate D1-NMDA and keep that input alive in what seems to me to be a bit of an amplifying circuit. If that's true, I can understand how excessive dopamine would lead to a big increase in NMDA activity, and hence in that amplifying circuit, to the point where that red ball could be visualized readily with the eyes open. But on the other hand, would NMDA antagonism or hypoglutamatergic function lead not to an aberrant increase in meaningful signal strength, like dopamine might cause, but rather to a decrease in meaningful signal strength resulting in excessive noise? Sorry to be working off the assumption that NMDA antagonism is an okay model for visual hallucinations. I also suppose that excessive D1 activation (from a ligand, for example) would result in tons of signal prolongation that wasn't from any meaningful visual input, so there is the aspect of non-physiological activity to consider with drug models.
 

NMDA channel blockade actually works indirectly to cause widespread increases in glutamatergic, serotonergic, dopaminergic transmission, etc. Those secondary effects are thought to underlie many of the hallucinogenic effects of NMDA receptor antagonists.

Holding the stimulus in WM isn't akin to a hallucination because it isn't necessarily accompanied by a top-down activation of sensory cortices. You don't necessarily "see" the red ball.

But in any event, memories are usually not classified as hallucinations based on their intensity.
 
How is it that NMDA antagonists mediate excitability if not through heterodimers with inhibitory cells?

Also, I have a question regarding endogenous ligands that might be really dumb. It occurred to me that because different ligands have different efficacies/affinities at different receptors, the endogenous ligand might likewise have different efficacies/affinities at different receptors. If that's the case, do we know which serotonin receptor serotonin has the highest or lowest efficacy/affinity for? Or is serotonin a full agonist at all serotonin receptors, activating all the second messengers equally across all receptors? That just seems too perfect. I was thinking that when considering spectrums of efficacy, you would have to compare a ligand to the endogenous ligand's efficacy at that specific receptor. That being said, are there ligands that are more than "full agonists", assuming that full agonist means equivalent to the endogenous ligand?
 

NMDA receptors are a major source of excitatory input to inhibitory interneurons, so NMDA receptor blockade causes disinhibition. I'm not sure what you mean by heterodimers in the context of this discussion.

None of your questions have been dumb...
Full agonism is defined by the response to the endogenous ligand. Super agonists are agonists with efficacy >100%.

The affinity of serotonin at all serotonin receptor subtypes is known. In general, the affinity tends to follow 5-HT1 > 5-HT2. But it also depends on whether you use agonist or antagonist radioligands.

Because receptors can couple to multiple effectors, there is no way to define the absolute efficacy of serotonin at a given receptor, and no way to compare across receptors. The responses also depend on the level of receptor expression.
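To make the full-agonist/superagonist terminology concrete: here's a minimal sketch of how efficacy is conventionally reported with a textbook Emax (Hill slope = 1) model. Nothing here is specific to serotonin; the EC50 and Emax numbers are hypothetical, and responses are normalized to the endogenous ligand's maximum, so a full agonist has efficacy 1.0 and a superagonist exceeds it.

```python
def response(conc, ec50, emax):
    """Simple Emax concentration-response curve (Hill slope = 1).

    emax is expressed relative to the maximal response of the
    endogenous ligand, so emax == 1.0 means "full agonist".
    """
    return emax * conc / (conc + ec50)

ENDOGENOUS_EMAX = 1.0   # reference: the endogenous ligand itself
partial_emax = 0.6      # hypothetical partial agonist (60% efficacy)
super_emax = 1.3        # hypothetical superagonist (>100% efficacy)

high_conc = 1e4         # near-saturating concentration (EC50 = 1 here)
print(response(high_conc, 1.0, partial_emax))  # plateaus near 0.6
print(response(high_conc, 1.0, super_emax))    # plateaus near 1.3
```

The key design point is the normalization: "efficacy" in this scheme only has meaning relative to the endogenous ligand at that same receptor and readout, which is why (as noted above) there's no way to compare absolute efficacies across receptors or across coupled effectors.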
 
Very interesting... Thank you so much for putting up with me, by the way. Are you an MD or neuroscientist by trade, then? How long have you been doing this?

I have a practical question for once: I take an orexin antagonist at night, and I've been pondering what time would be most effective to take it. It has a half-life of 12 hours. So I'm wondering, how long do antagonists bind to and occupy the receptors? Is this a question of the affinity with which the antagonist binds the receptor, because the endogenous ligand will essentially knock it out of place or something? Or is the antagonist more likely to leave the receptor by other means, or does the receptor disappear altogether at some point?
 
Neurons use ion channels to control and shape their excitability. That allows the cells to do things like prioritize which excitatory input they will respond to. Neurons receive input from various classes of cells but the termination pattern is not uniform --

This is exactly like the theory of MAT-specific Nav channels adding to the efficacy of cocaine analogues with their anesthetic bridge intact. I wish I could find a published paper that looks into this specific mechanism to see if there is anything to it.
 

I have a practical question for once, so I take an orexin antagonist at night, and I've been pondering at what time would be most effective to take it. It has a half life of 12 hours. So I'm wondering, how long do antagonists bind to and occupy the receptors? Is this a question of what affinity the antagonist in question binds with to the receptor because the endogenous ligand will essential knock it out of place or something? Or is the antagonist more likely to leave the receptor by other means, or does the receptor disappear altogether at some point?

I'm not sure I understand your question correctly, but unless a covalent bond is formed (irreversible binding), the free drug and the drug-receptor complex are in dynamic equilibrium, meaning the drug attaches and detaches very quickly all the time. The affinity of the antagonist in comparison to the endogenous ligand determines at what concentrations of the drug you can talk about sufficient receptor blockade, so knowing that, you can estimate how long it will take for the concentration to fall below a certain "threshold". The drug you're taking isn't an irreversible antagonist, right? Because in that case the half-life would have little meaning.
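A back-of-the-envelope sketch of that point (all numbers hypothetical, not real pharmacology for any particular orexin antagonist): a reversible antagonist's fractional occupancy tracks its free concentration via the Hill-Langmuir equation, while the concentration itself decays with the elimination half-life. Competition from the endogenous ligand is ignored here for simplicity.

```python
HALF_LIfe_placeholder = None  # (see constants below; all values illustrative)

HALF_LIFE_H = 12.0   # elimination half-life from the question, in hours
KI = 1.0             # hypothetical Ki, in the same units as concentration
C0 = 20.0            # hypothetical peak free concentration (20x Ki)

def concentration(t_hours):
    """Free drug concentration under simple first-order elimination."""
    return C0 * 0.5 ** (t_hours / HALF_LIFE_H)

def occupancy(conc, ki=KI):
    """Fractional receptor occupancy (Hill-Langmuir), ignoring
    competition from the endogenous ligand."""
    return conc / (conc + ki)

for t in (0, 12, 24, 48):
    c = concentration(t)
    print(f"t={t:>2} h: conc={c:6.2f}, occupancy={occupancy(c):.0%}")
```

Notice that when the peak concentration is well above Ki, occupancy falls much more slowly than concentration: one half-life halves the concentration but barely dents the occupancy, which is why a 12-hour half-life can still give meaningful receptor blockade well past 12 hours in this toy picture.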
 