
Morality Of Future Human Advancement.

8L4YN3

Hello, first off I should explain the thread and its title. I basically want to pose a question to everyone.

Is it right for humanity to advance to such a degree that we are not human anymore? Was humanity meant for perpetual evolution, so that in a sense there is no fixed species per se, but an ever slowly evolving one? That would render sacrificing most of our humanity for advancement not a moral issue at all.

Now let's think of humanity's place in the universe. We have all seen the Hubble pictures, giving us a greater sense of how vast the universe truly is, and we have an idea of how old it is.

Is it a stretch to think that a technology-using species such as humans could evolve to such a level within one or a few million years that we will have implemented 'neuro-hacking', and that brain-computer interfaces, artificial intelligence, or some other intelligence-enhancement technology will transcend the human condition? To the point I am talking about above: not even human. Superhuman...

Technology is a product of intelligence. So when intelligence is enhanced by technology, you've got transhumans who are more effective at creating better transhumans, who are more effective at creating even better transhumans.

Cro-Magnons changed faster than Neanderthals, agricultural society changed faster than hunter-gatherer society, printing-press society changed faster than clay-tablet society, and now we have "Internet time". And yet all the difference between an Internet CEO and a hunter-gatherer is a matter of knowledge and culture, of "software".

Our "hardware", our minds, emotions, our fundamental level of intelligence, are unchanged from fifty thousand years ago. Within a couple of decades, for the first time in human history, we will have the ability to modify the hardware. Is modifying this hardware and transcending what some may refer to as our god given condition moral? Of course it dosn't stop there. The first-stage enhanced humans or artificial minds might only be around for months or even days before creating the next step. Then it happens again. Then again.

To put it another way: as of 2000, computing power had doubled every two years, like clockwork, for the past fifty-five years. This is known as "Moore's Law". However, the computer you're using to read this Web page still has only around one-hundred-millionth the raw power of a human brain - i.e., the brain runs at around a hundred million billion (10^17) operations per second. Estimates of when computers will match the power of a human brain vary widely.
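For the curious, here's the back-of-the-envelope arithmetic behind that comparison as a minimal sketch. The figures are assumptions lifted from the paragraph above (10^17 ops/sec for the brain, a 2000-era computer at one-hundred-millionth of that, and a fixed two-year doubling time), not a real forecast:

```python
import math

# Assumed figures taken from the argument above (circa 2000).
BRAIN_OPS = 1e17      # estimated raw operations per second of a human brain
COMPUTER_OPS = 1e9    # ~one-hundred-millionth of that: a 2000-era desktop
DOUBLING_YEARS = 2    # the Moore's Law doubling time used in the post

# How many doublings are needed to close a 10^8 gap, and how long that takes.
doublings = math.log2(BRAIN_OPS / COMPUTER_OPS)
print(f"doublings needed: {doublings:.1f}")                   # ~26.6
print(f"years from 2000:  {doublings * DOUBLING_YEARS:.0f}")  # ~53
```

So under those naive assumptions alone, hardware parity wouldn't arrive until around the 2050s; change either assumed figure and the date moves a lot, which is why the estimates vary so widely.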

Once computer-based artificial minds (a.k.a. Minds) are powered and programmed to reach human equivalence, time starts doing strange things. Two years after human-equivalent Mind thought is achieved, the speed of the underlying hardware doubles, and with it, the speed of Mind thought. For the Minds, one year of objective time equals two years of subjective time. And since these Minds are human-equivalent, they will be capable of doing the technological research, figuring out how to speed up computing power. One year later, three years total, the Minds' power doubles again - now the Minds are operating at four times human speed. Six months later... three months later...

When computing power doubles every two years, what happens when computers are doing the research? Four years after artificial Minds reach human equivalence, computing power goes to infinity. That's the short version. Reality is more complicated and doesn't follow neat little steps, but it ends up at about the same place in less time - because you can network computers together, for example, or because Minds can improve their own code.
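The "four years" figure is just a geometric series. Here's a minimal sketch of that toy model (it assumes the idealised schedule described above, where each doubling halves the objective time until the next one, and it ignores the real-world complications just mentioned):

```python
# Toy model of the argument above: the first doubling of Mind speed takes
# two objective years, and each later doubling takes half as long as the
# one before it, because the Minds doing the research now run twice as fast.
interval = 2.0   # objective years until the first doubling
elapsed = 0.0

for doubling in range(1, 11):
    elapsed += interval
    print(f"after doubling {doubling:2d}: {2 ** doubling:5d}x human speed, "
          f"{elapsed:.4f} objective years elapsed")
    interval /= 2

# 2 + 1 + 0.5 + 0.25 + ... converges to 4: infinitely many doublings fit
# inside four objective years, which is the "goes to infinity" claim.
```

Run it and the elapsed time crawls up towards 4.0 but never passes it, matching the "six months later... three months later..." cadence in the post.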

From enhanced humans to artificial Minds, the creation of greater-than-human intelligence has a name: Singularity. The term was invented by Vernor Vinge to describe how our model of the future breaks down once greater-than-human intelligence exists.

We're fundamentally unable to predict the actions of anything smarter than we are - after all, if we could do so, we'd be that smart ourselves. Once any race gains the ability to technologically increase the level of intelligence - either by enhancing existing intelligence, or by constructing entirely new minds - a fundamental change in the rules occurs, as basic as the rise to sentience.

Okay, you get the picture. Do you think it would be right for humanity to develop to a level where, say, someone's mind is transferred into a robot and they have superintelligence? Say we're talking 500 million years from now.

How do you feel about humanity's destiny? Are we destined to learn how to live in perfect peace and harmony with each other and nature? Or is it our job to keep doing the impossible, to keep pushing ourselves further and further away from our original human selves and our connections to nature?

I'm going to withhold my own opinion until later, as I'm not 100% solid on what I think yet. Also, I used a bit of material from this link: http://yudkowsky.net/obsolete/tmol-faq.html#orient_singularity . After reading some of it, I got curious what other people think about humanity's future. And of course this is all hypothetically assuming we don't blow ourselves to shit before then.
 
>>Are we destined to learn how to live in perfect peace and harmony with each other and nature? Or is it our job to keep doing the impossible, to keep pushing ourselves further and further away from our original human selves and our connections to nature?>>

I don't see these as mutually exclusive options. I believe there's a big chance we can and will 'accomplish' both.
 
I'm, like you, still unsure about the singularity issue. I don't even know if there will be a singularity (in the 'foom' sense). I do know that Eliezer Yudkowsky is one smart motherfucking son of a bitch and his ideas should not be taken lightly. He is of the opinion, I think, that AIs will probably end up outpacing digital humans and that, therefore, our main focus should be on making sure these AIs are friendly towards humans (or their descendants) before the singularity occurs and we all end up getting turned into paperclips.
 
our conceptions of "natural" and "unnatural" make no sense. take the idea of "corn", e.g.: there is no "natural corn". you can't get corn as it was when we first started farming.

it is in every material entity's (e.g. us, e.g. corn) nature to change, and to have no such "nature" (in a static sense)

as humanity progresses to heights previously unimaginable, there is always resistance from our desire to stay in comfort zones, to keep our world and worldview static, to maintain equilibrium. but we've made it this far, and if we ensure we don't dip down into another dark age, the next couple of decades will be an amazing time to live...

soldiers are already flexing their new hands after their old ones got blown off (and have even gone back into combat), and robots are already fighting our wars for us - both of which were science fiction a few decades ago.
 
A more pertinent point might be not what is natural, which forever begs the question, but rather what is normative. I agree with the OP and most Transhumanists that (if we avoid the existential threats to our species) we are heading for paradigmatic shifts that require the assistance of social scientists, philosophers, and even some theologians as we enter uncharted territory.

Science has elevated human dominion over the entire globe (whether wisely is another question), and Minsky's and Kurzweil's predictions about the info-bio-nano-tech explosions frame the radical change in humanity they seek. But not all is well with Transhumanism: AI's slow progress is telling; let us hope the Blue Brain Project begins to give definitive answers on mind-brain interrelationships. I think Fukuyama has it about right that radical life extension might be the first technology to raise moral issues. For the first time mankind might have the option to radically transform itself, raising moral questions that have never been explored before.

Personally, I would like to see some progress on equality before we offer species-altering solutions.
 
I don't think development itself is either moral or immoral - it depends on what the development is used for.

Since at the moment we are burning resources faster than the planet is replenishing them, the status quo is arguably immoral (or at least impractical).

Either we allow a large percentage of the human population to die off, or we continue our technological advancement to the point where things like fusion power and other carbon-neutral forms of power generation that don't depend on non-renewable resources actually work properly.

Personally I am not too worried about the possibility of 'Minds' as you describe them. Generally I see that people with a better understanding of a given situation are more likely to act in a positive way; unless we treat the AIs we create badly, should they not respond to our example and behave well, only smarter?
 
I have a tangential question. How would this level of superhuman intelligence be manifested?
Would they be socially perfect, having great personalities? Would they be technologically perfect, able to construct systems of seemingly impossible complexity and utility? Would they be able to make decisions regarding public policies? Would they be able to act as CEO of a company? Would they be able to answer the question of the meaning of life and existence?
Would they be able to create meaningful art (of any variety)? Or would they be all of the above and then some?

I personally think it's absolutely fine to manipulate the matter in our universe in any way we are able. If that means manipulating it into a container for consciousness, then so be it. I can't think of why it would be morally wrong to surpass our current hardware.
 
>>I have a tangential question. How would this level of superhuman intelligence be manifested?>>

like any technology, gradually. and it's hard to predict the later stages. but, this process has already begun. technology is simply an extension of human willpower, and its "symbiosis" "with us" (i'd prefer, though, to say we and it are one) has been growing in capacity in more and more obvious ways.

e.g., the printing press enhanced human intelligence. genetic evolution stopped for the time being, but had advanced far enough to allow social evolution (which, incidentally, advanced far enough to allow us to now continue our genetic evolution). then the radio, then the TV, then the computer, the internet... and like in every other scientific and technological field, the interval between breakthroughs is getting shorter and shorter.

>>Would they be socially perfect, having great personalities? Would they be technologically perfect, able to construct systems of seemingly impossible complexity and utility? Would they be able to make decisions regarding public policies? Would they be able to act as CEO of a company? Would they be able to answer the question of the meaning of life and existence?>>

i would hope all of the above, though it depends on the direction we take. one concrete example i can allude to is Dr. Bashir in Star Trek DS9. normally we think of being able to multitask (listen to "950 songs") and sift through data as being the advantage of "synthetic computers", a.k.a. Mr. Data. but DS9 increased the complexity with regard to intelligence... Bashir has a genetically enhanced brain that gives him those same computational capabilities as Mr. Data.

when asked for an estimate of their chance of survival, which was 36.7% according to Bashir, someone told him he's as cold as a Vulcan. his reply was "then how do you explain my boyish smile?" so separating engineering-type intelligence, artistic-type intelligence, personality/social intelligence, emotional intelligence, running a company as a CEO, etc... separating these has IMO been a mistake, and science has been moving toward increasingly interdisciplinary studies over history.

>>Would they be able to create meaningful art (of any variety)?>>

of course. they'd be humans, but modified for the better. or they'd be sophisticated AIs. or they'd be hybrids. or they'd be "post-humans". in any case, their art would surpass normal human art.

>>I personally think it's absolutely fine to manipulate the matter in our universe in any way we are able. If that means manipulating it into a container for consciousness, then so be it. I can't think of why it would be morally wrong to surpass our current hardware.>>

i think it's a moral imperative for our species, if we are to avoid another dark age, avoid extinction, and pursue our natural curiosity and drive to expand.

personally, my tangential question is... wtf does lady gaga's alejandro video mean?
 
So somebody has been reading Ray Kurzweil, eh?

The main focus of transhumanism is to achieve immortality and population reduction. Artificially intelligent "nano-bots" will help you think YOUR OWN thoughts, but at like 100x the speed and intensity.

Interesting stuff, but I don't believe a human soul can be contained, nor copied.
 
The 'Human Soul' (or mind or whatever) is already contained - in the human body and brain. So clearly it can be contained.

Whether or not a technological device can involve itself in this is another question. Given that we already have people benefiting from deep brain electrode stimulation (which controls things like epilepsy in some situations) as well as artificial vision, I would suggest that brain/technology interfacing is already happening, albeit in a simplistic fashion (turning off a bunch of neurons that cause fits, or stimulating the optical centres of the brain to produce vision).

I see nothing in our present world that would suggest this kind of technology will not continue to develop in the future.

Anyone who has ever taken a psychoactive drug (especially one that was artificially synthesised, as opposed to shrooms for example) knows we possess the technology to alter the human mind, even if it's not a particularly precise technology at the moment and involves drugs rather than cybernetic implants or genetic manipulation. This modification was always part of the attraction for me when taking drugs.
 
Researching Transhumanist ethics actually.

Where you lack belief in human replication, the Transhumanist need only demonstrate true AI to persuade others that mind = matter, period! Population reduction is not a stated goal, but is implicit in their vision (though one could make a strong counter-argument). Besides the psychosocial element of accelerated human evolution, Transhumanism raises serious ethical questions of unqualified magnitude. They promote not only radical life extension but cyborgianism, which raises the question of what it is to be a normal human being, and intrudes on our accepted values regarding health and wellbeing.

Transhumanism accepts eugenics (in a liberalised form), ectogenesis, mind-machine interfaces, and off-the-shelf character traits, and eventually wishes to bring about a man-made eschaton, or singularity, or Omega Point: a point in the near future where the exponential shift of mankind becomes so great (possibly infinite) that a new species is derived - human 2.0 - with abilities so advanced as to be indistinguishable from magic (Minsky). Some argue this post-humanogenesis will be like the creation of new gods; those who don't keep up with the pace might be viewed as we now view animals.

Such a process pushes normative concepts such as morality, identity, justice, death and the emotions into new and uncharted territory. These concepts will have to be remodelled, or discarded for new ones.

Bio-ethics and stem-cell research give a flavour of the debates that will ensue.

Personally, I agree with Penrose that mind is non-algorithmic, and as such irreducible to computation, rendering AI impossible. Transhumanism emphatically requires AI, and is naught without it.

I personally think that Transhumanism is the ultimate expression of individualism, if not egoism, and reflects the Hermetic-alchemical impulse, the quest for the elixir vitae: in essence, self-deification.

This may appear to be science fiction, but Transhumanism proactively engages with the sciences and has sold the dream to many financiers, directing billions of dollars into technologies that are the sine qua non of every element of Transhumanism. It is not some benign futurology, but a holistic program for the creation of a new species: homo perfectus.

They are offering up deification; who wouldn't want to buy into that ideology?

I am no techno-Luddite, but the transhumanist agenda leans too far towards proaction rather than precaution.
 