
Wouldn't withdrawal from drugs with dysphoric effects be pleasant?

"I think what you actually mean is that there is feedback whereby consciousness affects neural activity and/or behavior."

Ahhhh okay, this is much cleaner wording, as opposed to something that might give the impression that consciousness is deliberately and directly activating a transcription factor or something.

Do you think this idea qualifies as a scientific theory?

We can follow a line of reasoning like "we are talking about consciousness, so this consciousness must have an effect on behavior", or appeal once again to the coherence between injury and unpleasant sensations, which would require consciousness to have participated (albeit possibly very indirectly) in natural selection by affecting neurology. But does logic alone really make something scientific, or would we need some experimental component?
 
By your own admission, I believe, animals like ants and their colonies don't have our sort of conscious awareness but have still evolved appropriate responses to sustaining things vs. dangerous things. The kind of advantage intelligent consciousness offers for reflecting, analyzing, etc., so that more complex anticipations and adaptive behaviors can be learned or developed, certainly relies on feedback, that is: the system interacting with itself, beyond simple intuitive mechanics. Yet the conscious experience can still be considered a backseat passenger, because it's still possible to consider the cognitive acrobatics a complex calculation in which correlates of previous sensations are fed back by 'simulating' them (like accessing memories) and recombined cognitively to give rise to new mental products. As I've been saying, consciousness in that sense can be a phenomenon emerging from this calculation, and it is neither necessary nor elegant to get it actively involved as a 'separate ingredient'.

At the same time it is tempting, because it should correlate very well with the coherence involved in these cognitive calculations. (Another semantic issue may be whether we are really disagreeing, and whether the 'sidestep' you make in your descriptions is significant at all times.) But I think it is at least partially significant, and demonstrated by conscious awareness lagging some time behind decisions already made (so why not behind all other cognitive calculations?).
I don't think the argument would work to say that after that small lag, it may have special or additional interactivity. There couldn't really be interactivity, since what is experienced seems to correlate entirely with the cognitive calculations made some milliseconds earlier. Do you think that is too big an assumption to make? I don't see why the experimentally demonstrated lag should be an exception.

I'd say the lag seen is the time needed to synthesize the stream of consciousness (coherently :) ) after the actual magic has already happened - the book was written but the movie still had to be made. So backseat passenger then it seems.

It seems to suggest that, apart from the awareness involved with animals having any sensory system whatsoever, sophisticated consciousness may only be synthesized in animals so cognitively complex that they developed behavior from calculation and reflection. Or are there more primitive advantages to a more centrally executed consciousness vs. just a switchboard?

Thanks for clarifying there, it helped me to consider your perspectives better! They are probably not far apart at all, some of this is subtle IMO.

Good point re: Dan Dennett.
 
I'm certainly opposed to the idea of free will, but I don't think that consciousness is 100% in the backseat. Two pieces of evidence for your consideration:

1. I can report my subjective experience.

I can't think of another way the word "consciousness" could have been typed on this keyboard unless my subjective awareness was having some effect. In the same vein, we have the issue of qualia: I can tell you that seeing the color red is a subjectively different experience from seeing the color orange.

If we were just switchboards, we might talk about the difference between red and orange as a matter of "this object is orange" etc., but I don't think the subjective-experience aspect would ever pop up in the convo if consciousness were 100% in the backseat.

2. We still have the issue of coherence between pleasant sensations and things that increase reproduction, and vice versa. I don't feel like I've gotten a satisfying answer here that doesn't involve consciousness interacting with the classical realm.


If consciousness doesn't interact with the classical realm and therefore can't play a role in natural selection, how come humans didn't evolve to feel euphoria when injured and dysphoria when we eat? It would have had zero effect on selection if consciousness didn't affect the classical realm, so it shouldn't have been a disadvantage.

The only other explanation is kinda convergent evolution: that the optimal neural configuration for approach/avoidance and such also leads to a somewhat coherent subjective experience, and that the two evolved in parallel but with no selection pressure from the consciousness end (and that's quite the coincidence).

I hope I'm not getting too stuck in duality-land.

Re: awareness of a decision/free will neuroimaging studies

If we assume that there is a bi-directional bridge between the classical realm and consciousness, when you are imaging the brain you are also somewhat imaging the effect that consciousness has on the brain.

Input -> output is still in play, so I guess I'm not sure why we would declare that consciousness is in the backseat if consciousness and biology are intertwined and thus are essentially surrogate measures of each other.

In other words, when somebody looks at fMRI and predicts that a subject is going to move their left hand a full 5 seconds before the subject is aware of having made the decision, that doesn't necessarily mean consciousness played zero role in the final decision, even if the subject only became aware of it at a later time.

Once again, we don't have free will because of prior causes etc., but if biology feeds into consciousness and consciousness then feeds back into biology, we should still be able to predict decisions given info on either one, if they are both strong enough surrogate measures of each other.


I suppose that also gets at the issue of multiple consciousnesses that are somewhat discrete within one brain (as in, one network could have been aware of the decision around the same time but would have been unable to communicate that verbally, only able to carry out the act of deciding itself).

With insects and such, the issue of "where does consciousness really become significant enough to have an impact on behavior" seems to come up. Are 302 neurons (the entire nervous system of C. elegans) enough? I guess it could be something opposite to additive increase: multiplicative decrease. A butterfly flaps its wings, if you will.
 