8L4YN3
Bluelighter
Hello, first off I should explain the thread and its title. I basically want to pose a question to everyone.
Is it right for humanity to advance to such a degree that we are not human anymore? Was humanity meant for perpetual evolution, so that there is no fixed species per se, only an ever slowly evolving one - which would render sacrificing most of our humanity for advancement not a moral issue at all?
Now let's think about humanity's place in the universe. We have all seen the Hubble pictures, giving us a greater sense of how vast the universe truly is, and we have an idea of how old it is.
Is it a stretch to think that a technology-using species such as ours could evolve, over one or a few million years, to a level where 'neuro-hacking' - brain-computer interfaces, artificial intelligence, or some other intelligence-enhancement technology - transcends the human condition? To the point I am talking about above: not even human. Superhuman...
Technology is a product of intelligence. So when intelligence is enhanced by technology, you've got transhumans who are more effective at creating better transhumans, who are more effective at creating even better transhumans.
Cro-Magnons changed faster than Neanderthals, agricultural society changed faster than hunter-gatherer society, printing-press society changed faster than clay-tablet society, and now we have "Internet time". And yet all the difference between an Internet CEO and a hunter-gatherer is a matter of knowledge and culture, of "software".
Our "hardware" - our minds, our emotions, our fundamental level of intelligence - is unchanged from fifty thousand years ago. Within a couple of decades, for the first time in human history, we will have the ability to modify the hardware. Is modifying this hardware and transcending what some may refer to as our God-given condition moral? Of course it doesn't stop there. The first-stage enhanced humans or artificial minds might only be around for months or even days before creating the next step. Then it happens again. Then again.
To put it another way: As of 2000, computing power has doubled every two years, like clockwork, for the past fifty-five years. This is known as "Moore's Law". However, the computer you're using to read this Web page still has only one-hundred-millionth the raw power of a human brain - i.e., around a hundred million billion (10^17) operations per second (2). Estimates on when computers will match the power of a human brain vary widely.
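Just to make those numbers concrete, here's a rough back-of-envelope sketch of the same claim, using only the figures from the paragraph above: a brain at ~10^17 operations per second, a computer at one-hundred-millionth of that, and a doubling every two years. Treat it as a naive Moore's Law extrapolation to illustrate the arithmetic, not a real forecast.

```python
import math

# Back-of-envelope version of the numbers quoted above, using the post's own
# assumptions: a year-2000 PC has one-hundred-millionth of a brain's ~10^17
# operations per second, and raw computing power doubles every two years.

brain_ops = 1e17               # rough brain estimate from the post (ops/second)
pc_ops = brain_ops / 1e8       # "one-hundred-millionth" of that, i.e. ~10^9 ops/second
doubling_time_years = 2.0      # Moore's Law pace as stated above

doublings_needed = math.log2(brain_ops / pc_ops)       # about 26.6 doublings
years_needed = doublings_needed * doubling_time_years  # about 53 years

print(f"doublings needed: {doublings_needed:.1f}")
print(f"years at one doubling per two years: {years_needed:.0f}")
```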
Once computer-based artificial minds (a.k.a. Minds) are powered and programmed to reach human equivalence, time starts doing strange things. Two years after human-equivalent Mind thought is achieved, the speed of the underlying hardware doubles, and with it, the speed of Mind thought. For the Minds, one year of objective time equals two years of subjective time. And since these Minds are human-equivalent, they will be capable of doing the technological research, figuring out how to speed up computing power. One year later, three years total, the Minds' power doubles again - now the Minds are operating at four times human speed. Six months later... three months later...
When computing power doubles every two years, what happens when computers are doing the research? Four years after artificial Minds reach human equivalence, computing power goes to infinity. That's the short version. Reality is more complicated and doesn't follow neat little steps (3), but it ends up at about the same place in less time - because you can network computers together, for example, or because Minds can improve their own code.
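For anyone who wants to see why the "four years to infinity" shorthand works, here's a small sketch of the argument: each doubling of Mind speed halves the wall-clock time needed for the next doubling, so the elapsed time converges on 2 + 1 + 0.5 + 0.25 + ... = 4 years while the speed itself keeps doubling. The starting numbers are just the ones from the paragraphs above; as said, real progress wouldn't follow steps this neat.

```python
# A toy model of the shrinking-doubling argument: each doubling of Mind speed
# halves the wall-clock time needed for the next doubling, so the elapsed
# objective time converges on 2 + 1 + 0.5 + ... = 4 years while speed explodes.

elapsed = 0.0    # objective years since human-equivalent Minds exist
interval = 2.0   # years the first doubling takes (Moore's Law pace)
speed = 1.0      # Mind speed, in multiples of human-equivalent thought

for step in range(1, 11):
    elapsed += interval
    speed *= 2
    print(f"doubling {step:2d}: speed = {speed:5.0f}x human, elapsed = {elapsed:.4f} years")
    interval /= 2  # twice-as-fast Minds finish the next doubling in half the time

# elapsed creeps toward 4.0 but never reaches it, while speed grows without
# bound - the "four years to infinity" shorthand used above.
```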
From enhanced humans to artificial Minds, the creation of greater-than-human intelligence has a name: Singularity. The term was invented by Vernor Vinge to describe how our model of the future breaks down once greater-than-human intelligence exists.
We're fundamentally unable to predict the actions of anything smarter than we are - after all, if we could do so, we'd be that smart ourselves. Once any race gains the ability to technologically increase the level of intelligence - either by enhancing existing intelligence, or by constructing entirely new minds - a fundamental change in the rules occurs, as basic as the rise to sentience.
Okay, you get the picture. Do you think it would be right for humanity to develop to a level where, say, someone's mind is transferred into a robot and they have superintelligence? Say we're talking 500 million years from now.
How do you feel about humanity's destiny? Are we destined to learn how to live in perfect peace and harmony with each other and nature? Or is it our job to keep doing the impossible - to keep pushing ourselves further and further away from our original human selves and our connections to nature?
I'm going to withhold my own opinion until later, as I'm not 100% solid on what I think yet. Also, I used a bit of material from this link: http://yudkowsky.net/obsolete/tmol-faq.html#orient_singularity - after reading some of it, I got curious what other people think about humanity's future. And of course this is all hypothetical, assuming we don't blow ourselves to shit before then.