Honestly, a humanity willing to give up their humanity and move on to "live" inside super-computers scares me even more than total extinction.
This, basically; Kurzweil et al. and the entire trans-/post-humanist, "Singularity" crowd are nuts, and when the latter takes on apocalyptic (and therefore essentially inevitable) cultic overtones, it actually starts to get scary. Good ol' Tim Leary got involved with some of this stuff towards the end of his life, and McKenna, of course, is one of its close progenitors.
It also has disturbing existential and sociopolitical implications; in practical terms, despite being nonsense, it is a dangerous line of thought: innately elitist, anti-human, anti-spiritual, and perhaps above all nihilistic, lacking grounding in any sociological, political, or philosophical understanding of human existence. From a 2010 piece in the leftist publication CounterPunch on the whole bit of sour porridge, and a sour porridge it is:
As the cultural critic Lewis Mumford warned, technological progress is not a function of any essential quality and inherent driver within technology, but rather reflects a deliberate effort by individuals and institutions to drive technology as a means to achieve certain economic and political ends. As “the hand-mill gives you society with the feudal lord” wrote Karl Marx in the Poverty of Philosophy, “the steam-mill, society with the industrial capitalist.” What kind of society does science and technology directed and controlled by military and corporate interests give us? ...
While there are those interested in developing a politics of technology that interrogates the social and environmental costs of technological change, it’s rarely the critical version offered by Mumford but more often one that ignores the political economy of technological change and instead focuses on a superficial politics of pollution or negative externalities. But the pressing political need made evident by the rise of the singularity movement is the necessity of a politics of technology that considers the social costs of military technologies of human perfectibility.
Technological change transforms society but never in the way its enthusiasts predict. Technological change works to reconstitute the conditions not just of production but of survival in capitalist society in ways that transform the structure of our interests. What kind of society is created, for example, when human genetic engineering is controlled by corporate interests? For whom does that society serve if not those who control the technologies for profit? This is a particularly germane question for the singularity movement given who’s funding and profiting from the technologies whose praises the singularity movement sings.
The silly musings of singularity’s futurists and technologists make their utopian visions sound like episodes of the Jetsons. But the bourgeois dream of class domination and faith in technoscience hide its corporate face with scientific fact, military control with techno-enthusiasm, and ruling class ideology for general human benefit.
So why the lack of skeptical consideration? The silly Techno-capitalist-transcendent language of the singularity movement finds broad social acceptance because of a remarkably underdeveloped politics of technology on the political left. To the easily fooled, The Singularity looks like a) a good idea, b) a baffling idea beyond the ability of radical politics to critique, or c) a silly science-fictiony idea not worth addressing. But it’s none of the above. Instead it represents the highest aspirations of reactionary politics to foreclose the possibility of radical social change.
The "simulation argument" &c. are sophomoric, and you have conferences involving some real movers and shakers of the technological world who may be proficient in their technical fields and in abstruse and abstract mathematics (e.g., computability theory), but who tend not to have a realistic grounding in the humanities; that is, in a more than sophomoric understanding of the existential and philosophical implications of what they are preaching.
Of course, I believe that consciousness, eo ipso, involves self-referentiality (and ultimately, at the bottom, a soul); which, as more technical students of Computer Science, as well as readers of the once very trendy Gödel, Escher, Bach, will instantly understand, is also a significant and essential problem in their field. That is to say, there is no computer program H( f, x ) that can determine whether an arbitrary program f will ever come to a complete halt if given input x. Obviously, you can't do so empirically, simply by running the thing, because that would render H( f, x ) ≡ f ( x ) and therefore the H-function practically useless; we would most likely want to employ it to make sure our computer wouldn't crash for undertaking f, or at least that the latter was well-formed, and if f ( x ) never halts, neither would such an "empirical" H.
This is one of the most famous problems in computer science; it was proven undecidable by Turing back in 1936, and relates, of course, to Gödel's incompleteness theorems, which state (basically) that any consistent formal system rich enough for arithmetic contains true statements it cannot prove; you can't create a mathematical or logical system that doesn't rest on some underlying assumptions it cannot itself justify. There is a striking degree of overlap here with problems of infinite regress involving consciousness (or creation, for that matter), the answers to which are all more or less theological, whether calling themselves so and invoking a primum movens God, or not.
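For the technically inclined, Turing's argument is a short diagonalization, which can be sketched as follows (the `halts` oracle here is of course hypothetical, that being the whole point):

```python
# Sketch of Turing's 1936 diagonalization: assume, for contradiction,
# a total decider `halts(f, x)` that returns True iff program f halts
# when run on input x.

def halts(f, x):
    # Hypothetical oracle; the argument below shows it cannot exist.
    raise NotImplementedError("no such program can exist")

def paradox(f):
    # Do the opposite of whatever the oracle predicts f does on itself.
    if halts(f, f):
        while True:   # loop forever if f(f) supposedly halts
            pass
    else:
        return        # halt if f(f) supposedly loops forever

# Now ask: does paradox(paradox) halt?
# If halts(paradox, paradox) is True,  then paradox(paradox) loops: contradiction.
# If halts(paradox, paradox) is False, then paradox(paradox) halts: contradiction.
# Either way the assumption fails, so no general `halts` can be written.
```

The self-reference (feeding `paradox` to itself) is exactly the kind of strange loop Gödel, Escher, Bach is about.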
The Singularist/transhumanist believes that this can be dispensed with, at least practically, in silico, and this doesn't, if you'll excuse the pun, "compute." Of course I'm oversimplifying things here, but there are profound philosophical and existential questions that need to be answered before we start thinking in these terms (Do androids dream ...?; issues around causality, inevitability, and Free Will). Basically a lot of old wine in new bottles, and stuff that philosophy has been struggling with since there has been such a thing as philosophy.
As for transhumanism and the Singularity in the wider sense, that is, that we will (a) augment ourselves technologically and thereby (b) fundamentally change what it means to be human: this has pretty much already happened. To believe in these ideas in a stronger sense runs into the stronger philosophical problems above as they relate to so-called "artificial intelligence," and so forth. An "artificial intelligence," and that would include you in a simulation/brain-in-a-vat scenario, will never be anything more than n upgrades to something like Siri or Cortana (remember "Ask Jeeves"?), or particularly efficacious applications of the same restricted to limited problem sets: algorithms, some designed around "emergent phenomena" arising from additional inputs from the real world. We interact with this sort of thing all the time, as when using our phones for driving directions or looking at recommended books on Amazon. The idea of a "generalized AI" is absurd.
Since this is the psychedelic forum after all, can you imagine a "GAI" tripping? Tripping is, of course, nothing but a chemical veil, filter, and permutation of sensory inputs leading to a subjectively altered consciousness (and from whence, alterations in behavior; not to self-promote but this is of course what I was writing about in the last draft excerpt I linked to here.) But this isn't actually just tripping (or being on drugs, generally), because you could say the same (veil, filter, and permutation) of neurochemistry itself; we are "ghosts on drugs," as I phrased it, after the idea of the "ghost in the [biological] machine."
But for very fundamental reasons you won't have a ghost in the silicon machine. You may have some pretty impressive impersonation of one (work on this goes back a long way, cf. the ELIZA program of the mid-1960s, an era of great optimism about AI), going down through some of "her" descendants, Siri et al.; and we can expect this to improve as technology, particularly database technology and techniques for programming and self-generating related databases and functions, improves. So we may at some point do well in simulating you-ness, which is the subject of the well-known Turing test (can you, conversing over a teletype, tell man from machine?). A related philosophical question: does hiding a bilingual man in a black box, through which you pass pieces of paper in English and get them out in Mandarin or vice versa, mean the box is fluent in Chinese and English? Does the fact that Google can translate for me, and you, mean the same for Google, or for me or for you?
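To see how thin the impersonation trick really is, here is a toy sketch in the spirit of ELIZA's keyword-and-template method (these particular patterns are invented for illustration, not Weizenbaum's original 1966 script):

```python
import re

# Toy ELIZA-style responder: keyword patterns mapped to canned templates,
# with first/second-person "reflection" so the echo sounds addressed back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"(.*)"),              "Please tell me more."),  # catch-all
]

def reflect(fragment):
    # Swap pronouns word by word: "my body" -> "your body".
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance):
    for pattern, template in RULES:
        m = pattern.match(utterance.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I feel trapped in my body"))  # -> Why do you feel trapped in your body?
```

Nothing here understands anything; it is pure string substitution, yet Weizenbaum's users famously confided in it, which is rather the point about the Turing test and the man in the black box.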
But the idea of simulating I-ness (consciousness) is self-refuting (the "homunculus argument," viz. if I am simulating having-a-consciousness, to whom am I simulating having-a-consciousness, and how does that target of my simulation differ from a consciousness?). So the idea of simulating consciousness is equivalent to creating consciousness; or, more to the point, simulating consciousness is a faulty way of describing creating consciousness, i.e. man playing God and violating the iron-clad laws of fundamental logic, computation, and indeed existence (i.e. the problems of self-reference) that I started this post by mentioning. Transcending that would mean transcending the limits of consciousness, which, again syllogistically, we can't do; so the whole thing is a bunch of hot air. But, as above, by implication, it is scary hot air ... where we are with technology (i.e. already in the "singularity," and, at least by choice when hooked into the internet and so on, at least somehow "posthuman") is already terrifying in its sociopolitical and existential implications!
People who embrace this want to take us down a very dark road.
I won't make this a post based on theology; but note the promise of the serpent in Eden*: you shall become like God. The result was not a good one. The same promise is made by the transhumanists. Their stated goals are philosophically questionable at best, but as for the ultimate outcome, we've heard the story before.
* Not meaning to turn this post into a specifically Christian theological/hamartiological/soteriological discussion, I'll relegate to a footnote a brief attempt to describe the traditional Christian exegesis of the account of the fall of Man in Genesis 3. God had already placed two trees in the garden of Eden: one being the "tree of Life," granting, presumably, immortality, innocence, and harmony; the other the "tree of knowledge of good and evil." No translation really does the latter justice. The promise of the serpent is that by eating therefrom, we shall "become like God, having knowledge of good and evil." God knows good from evil yet remains eternally good; but for man, "having knowledge of good and evil" meant something totally different. Until that day, man had only an inborn good nature; but, man not being an omniscient and omnipotent Deity, "knowledge of evil" meant gaining also an inborn evil nature, staining and intermixing with the good, whereby humankind is cursed with all the vicissitudes of our life and requires redemption.