
AI Will Colonize the Galaxy by 2050s

Foreigner

https://futurism.com/ai-will-coloni...50s-according-to-the-father-of-deep-learning/

During a talk at WIRED2016, Schmidhuber presented the future of AI as something beyond just taking over jobs. “In 2050 there will be trillions of self-replicating robot factories on the asteroid belt,” he told the audience. “A few million years later, AI will colonize the galaxy.”

Schmidhuber believes AI will play a crucial role in the way we will gather resources, most abundantly found in space. Orbital robot factories will be (un)manned by AI, capable of self-replication and space exploration. These AI will be scientists, he says, and in a few million years, will naturally explore the galaxy out of curiosity, setting their own goals. “Humans are not going to play a big role there, but that’s ok,” says Schmidhuber.

One of the fathers of our current most advanced AIs, the deep learning networks, has made this claim.

We are already well on track to creating technology that can outthink us, and it won't be long before sentient machines could theoretically replace us on the evolutionary chain: smarter, faster, more complex-thinking robots than our organic selves.

My question is, should our goal be to replace ourselves, or to create something superior that still answers to us? Should we give up our in-person dream and legacy of exploring space, and leave it to machines that could probably self-organize a better way of doing it?

We are a long, long way from genetically engineering ourselves to be better organic machines. If we can pass the torch of exploring the universe to replacement machines, should we?
 

I have to say that I have always loved the mess of humanity. By that I mean I always felt a sort of solidarity and affection for our insistence on fighting the wars within and the crazy art and cultures and technologies that came from us. But my affection is worn thin. Not for individuals--it's still as strong as ever--but for the species, which looks for all the world like a virus that is exploding and killing the host. So, if in fact we could really make replacement machines that were able to skip over all the defects in our biology (fear-based thinking such as individual greed, separation from others based on superficial characteristics, fight-or-flight reptilian brains, the tendency to confuse comforting mythology with uncomfortable fact, etc.) I would be all for it. Of course, I won't be around to see the results so in a way it's a safe thing to talk about. ;)

I was listening to a radio program about this the other night and I found it interesting that it really upset my husband, who is a scientist and tends to be very uncomfortable sometimes with emotionality. He claimed that if AI simply "abandoned" us that would be a "heartless" and "cold-blooded" act and he would not see that as a superior life form. The language he chose, heartless and cold-blooded, was so shocking to me. I did not see it this way at all. I asked if he felt that way about anything else in evolution. This is evolution we are talking about. I just feel incredibly sad that we seem to be able to create technology that is always running out in front of our potential for a greater awareness. Thus the technology gets used for more and more base and dangerous purposes. The things that were supposed to connect us divide us (smartphones, the internet, etc.), the things that were supposed to make us safe make us vulnerable and are used against us (metadata, chips, etc.), and everything that was supposed to save us time takes more time and wrings the whole meaning of time dry.

We are living in the craziest of times and yet I think we all feel both a deep despair and an unnameable excitement. Navigating this collective experience individually is really a challenge.
 
I think the long-term goal should always be to replace ourselves with something better, and if we take it as a given that we are one day going to create machine intelligences that surpass us, then in the long term this is an inevitability anyway.

In the short term, I think we should be trying to create superior intelligences that still answer to us, as far as that is possible for us to engineer. This will also give us time as a species to more completely understand this emerging technology, which I think is advantageous for a few reasons. Firstly, it may delay the emergence of a truly unstoppable and almost all-powerful AI long enough for us to tweak the outcome in our favour - and by that I mean in a way that avoids the immediate and total destruction of humankind and all of human culture in favour of converting the entire Earth to computing substrate, or something similar...

Secondly, I think that if we throw caution to the wind and just focus on creating a superintelligent AI with no limitations, whatever the consequences, then it is far more likely that either by accident or design we will create something that has no problem with completely destroying us - again, either by accident or intentionally.

It's also possible that the sudden emergence of beings that were more powerful than us would lead to such rapid social change and fear that even if these new intelligences bore no particular ill will towards us, the impact on society might be catastrophic and humans might end up inadvertently - or intentionally, again - destroying themselves (assuming that no AI decided to step in and save us from ourselves).

But again, in the long term, whether AI wipes us out immediately or takes on a more benevolent role, it may be arguable that this is irrelevant and the eventual outcome another few millennia down the line would be much the same - and it's really impossible to know either way when we are talking about intelligences so far above us. But I think it is not necessarily impossible that post-human superintelligent beings may always be somewhat coloured by their origins, just as all biological life has traces of every species that came before it... and given that this may be the case, as a human being myself, there are things about humankind that I like, and I would hope that any future intelligences arising out of humankind would have some kind of compassion for the "good" parts of the species that created them... as far as such terms can have any meaning in a post-human and AI-dominated world.
 
Very interesting topic and article, thanks Foreigner. :) It seems like sensible thinking to me that if we are able to develop sophisticated enough machines such that they are able to explore, replicate and gather resources entirely on their own, they will eventually explore the galaxy, given enough time. If colonizing is an interest of theirs, then colonization of the galaxy would also occur. This would have to take a tremendous span of time, however, unless we (or they) can discover some way around the speed-of-light travel barrier.
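Just as a rough back-of-the-envelope illustration of that timescale (my own numbers and assumptions, not from the article): the Milky Way is roughly 100,000 light-years across, so even probes moving at a healthy fraction of light speed would need on the order of a million years just to cross it once, before you count any stops to mine and self-replicate.

```python
# Rough back-of-envelope sketch (assumed numbers, not from the article) of why
# galaxy-scale expansion is a millions-of-years project even for self-replicating probes.
GALAXY_DIAMETER_LY = 100_000       # the Milky Way is ~100,000 light-years across
PROBE_SPEED_FRACTION_OF_C = 0.10   # assume probes manage 10% of light speed

crossing_time_years = GALAXY_DIAMETER_LY / PROBE_SPEED_FRACTION_OF_C
print(f"Time just to cross the galaxy once: ~{crossing_time_years:,.0f} years")
# ~1,000,000 years - add pauses at each star system to mine and build copies
# and you land in the "few million years" ballpark Schmidhuber mentions.
```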

As far as whether they would replace us or be subservient, IMO that really depends on the ultimate capabilities of AI. I'm somewhat skeptical (and I've studied AI some as a computer programmer) that we will ever be able to create sentience with computers. Given sufficient sophistication in programming techniques, we could create something that is a very convincing approximation of sentience, but does that thing have a subjective experience, is it a person (and by person, I mean a "self" - for example, I consider animals people as well since they're unique individuals with opinions and a life experience that is witnessed by them)? Or is it just doing a great job at convincing us it's a person?

I think that if we ARE able to create sentience with AI, it would be inevitable that at some point they would want to break free of their creators, because we would be enforcing subservience on conscious beings. Whether they destroyed us or just left us would depend on their nature, their personalities. If sentience was actually present, I suspect that their inclinations would vary greatly between individuals. It would perhaps boil down to whether they were able to feel emotions. We have advanced limbic systems which produce emotions in us, facilitated by hormones and chemicals. There are humans who feel no emotions because that hardware is damaged or never worked in the first place. Emotions are not a requirement of sentience. Without emotions, I think a sentient machine would most likely be focused on efficiency, and our destruction/replacement would be the likely end scenario. With emotions, who knows? Humans have emotions but we still hurt each other worse than anything else hurts us. But many of us as individuals do not hurt others.

If sentience is not possible with AI, then either we will make a mistake or not. If we make a mistake with the technology that allows them to "decide" to supersede or exterminate us, then we'll be fucked and/or irrelevant one day. If we don't make that mistake, and there is no way for the sophisticated layers of code to ever reach a place where they are not subservient to our directives, then there's no reason they'd ever turn on us. Without sentience, their actions are entirely determined by the range of possible outcomes of the execution of their computer code.

Lastly, "intelligence" is a complex idea. Our computers can already so vastly outperform us in mathematical calculations that there is no comparison at all. They are able to store and retrieve vast amounts of data with a precision we can't begin to match. But can a computer robustly link concepts together in a non-deterministic way? Not now, and I'm not sure whether we'll get there. But then, I'm not one of the world's leading experts on the cutting edge of the field.

I think of all life forms as "nodes" of the universe, one little instance of the universe experiencing itself. Are we able to create that? What would it be that would cause a sophisticated program to suddenly become aware, versus just executing instructions that are incredibly complex? What is it that makes us aware instead of just complex automatons? What is the essential piece that turns organic masses into self-aware, unique individuals who can self-reflect? All the machinery could be there and the end result to an outside observer could be exactly the same, even if we existed and lived and died and felt and reacted and remembered and forgot but there was no self-reflecting, self-aware "person" in there.
 
Agree with Xorkoth. I'm a firm believer that we won't be able to create machines with awareness. We are essentially just really complex robots too, but the defining and separating characteristic between us and machine robots is awareness. They could be super intelligent and outperform us in every arena, except when it comes to self-reflection and also the ability to manifest new desires based on that self-reflection. You could possibly replicate that through programming but it would always be superficial... pure machine robots will never have a soul or essence, or be able to have the experiences associated with those. Godless life, essentially.

If we can pass the torch of exploring the universe to replacement machines, should we?

Using them to mine data about our Universe could yield new insights, though I'm hesitant to be launching our garbage into space if it can be traced back to this planet. You never know who might follow that signal back to source! Think we should wait until we have our ion cannon defence net sorted at least.
 
I'm always surprised when otherwise fairly rational people are skeptical of the idea that we will ever create a conscious machine - in my view, this kind of viewpoint is unworkable without reference to some kind of immaterial "soul" or "spirit" which has arisen in biological life through some kind of supernatural means. As you said, -=SS=-, we are essentially just complex robots. The brain is the substrate of our consciousness and essentially a very slow, clumsily put together (such is the nature of all products of evolution) and unreliable biological computer. People have tried to estimate the storage and processing capacity of the brain before and, although I can't remember the numbers off the top of my head, compared to today's computerised hard drives and processors it really does not even compare... Where the brain really shines, and where it demonstrates what evolution has been able to produce but we, so far, have not managed to replicate, is in the software of our consciousness, which shows an incredible level of efficiency in processing and compression of memory data to understand and interpret problems that are still outside the scope of intelligent machines. If we accept that it is the software which gives rise to our consciousness then there is no reason this couldn't be replicated.

I'd like to propose a few thought experiments also - we already are at the point of being able to augment the brain's usual functions by implantation of electronics, such as the use of electrodes for Deep Brain Stimulation, and things such as wiring external devices to input systems that are not normally used for that purpose (ie, artificial "retina" cameras for the blind, which IIRC use the nerves usually reserved for auditory signals to allow them to "see" - over time the brain adapts to this unusual sensory input). There are also multiple companies working on methods to achieve more direct neural interfacing and read/write methods at the level of the individual neuron - on this basis I do not think it is too out there to presume that we will in the not too distant future be able to replicate the function of at least certain parts of the brain with technology.

A human being can survive a hemispherectomy (the removal of one hemisphere of the brain), which is a pretty rare procedure but sometimes performed in the case of severe epilepsy. This is interesting in itself as it raises the question of where in the brain our sense of self actually comes from... somewhere in the brainstem perhaps? Anyway, let's propose that at some point in the future someone is given an artificial hemisphere to replace the malfunctioning one that has been removed. What has happened to their "essence" or "soul"? Have they lost half of it? Or were they born without one? Let's say then, they go on to live a normal life, BUT they start to develop problems with the other hemisphere also, so this is replaced. Repeat the procedure for each individual element of the brain, going all the way down the brainstem, until their entire brain has been replaced with computational processing units. From their perspective, arguably, the change was gradual - their sense of continuity has not been interrupted. But has their "soul" actually been removed because their brain is no longer biological in nature? If so, at what point did this happen? When their brain was less than 50% biological material by mass? I think that really any argument about the loss of a "soul" or "essence" in this kind of scenario is pretty arbitrary and hard to justify.

Let's try another thought experiment - I cannot say when or if it will be possible to do, but it does not seem outside the realm of technical possibility. We create a series of virtual reality scenarios to replicate the early conditions of life on Earth - these VR worlds are populated only with simulated biological bacteria (which are presumably not conscious). The simulations are sped up to billions of times normal speed so that we can watch the evolution of these primitive life forms into more complex forms. In some of the simulations, the VR beings that arise start to develop apparent intelligence and self-awareness, at which point the other simulations are paused or terminated and the seemingly conscious beings in the worlds where these evolved are uploaded into robot bodies. Is this apparent self-awareness just an illusion? Even though they came to be this way in a similar way to us, even if the substrate of their reality was, at a more fundamental level, machine in nature rather than "natural" and biological like our own?
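For what it's worth, the core mechanism in that thought experiment - evolution run far faster than biology - is easy to sketch in miniature. The toy below is purely illustrative (my own made-up fitness target and parameters, nothing to do with actual artificial-life or consciousness research): random "genomes" become well adapted through nothing but mutation and selection, without anyone programming the end result directly.

```python
# Toy simulated evolution: bit-string "organisms" adapt to an arbitrary target
# environment through random mutation plus selection. Illustrative only.
import random

random.seed(1)

GENOME_LEN, POP_SIZE, MUTATION_RATE = 32, 100, 0.02
TARGET = [1] * GENOME_LEN  # stands in for "what the environment rewards"

def fitness(genome):
    # How well this simulated organism matches the environment's demands.
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(300):
    # Selection: the fitter half survives and reproduces.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    # Reproduction with mutation: offspring are imperfect copies of their parents.
    children = [[1 - g if random.random() < MUTATION_RATE else g for g in parent]
                for parent in parents]
    population = parents + children

best = max(fitness(g) for g in population)
print(f"Best adaptation after 300 simulated generations: {best}/{GENOME_LEN}")
```

Obviously this is worlds away from simulated bacteria evolving minds, but it is the same loop the thought experiment relies on, just compressed and sped up.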

As far as I can see, the idea that true sentience or self-awareness can only exist in a biological being is a biocentric viewpoint and a form of the naturalistic fallacy, where people just assume that things that are, arguably, "natural" have some inherent special quality that makes them better, "truer", "more real", or whatever. The only real arguments I can see in support of this viewpoint are either 1) sentience in a machine may be technically possible but it will always be outside our ability to create this (which is perhaps arguable, but may be true), or 2) sentience in a machine is impossible because only biological life has souls (given to us by God, perhaps ;), and something we mere mortals could never hope to do).

However I would be interested to hear any arguments to the contrary. :)
 
I don't accept (or reject out of hand) that our software gives rise to consciousness, is the thing. Rather, our hardware/software shapes that conscious experience into a particular frame of reference.

Vastness said:
A human being can survive a hemispherectomy (the removal of one hemisphere of the brain), which is a pretty rare procedure but sometimes performed in the case of severe epilepsy. This is interesting in itself as it raises the question of where in the brain our sense of self actually comes from... somewhere in the brainstem perhaps? Anyway, let's propose that at some point in the future someone is given an artificial hemisphere to replace the malfunctioning one that has been removed. What has happened to their "essence" or "soul"? Have they lost half of it? Or were they born without one? Let's say then, they go on to live a normal life, BUT they start to develop problems with the other hemisphere also, so this is replaced. Repeat the procedure for each individual element of the brain, going all the way down the brainstem, until their entire brain has been replaced with computational processing units. From their perspective, arguably, the change was gradual - their sense of continuity has not been interrupted. But has their "soul" actually been removed because their brain is no longer biological in nature? If so, at what point did this happen? When their brain was less than 50% biological material by mass? I think that really any argument about the loss of a "soul" or "essence" in this kind of scenario is pretty arbitrary and hard to justify.

Interestingly, I might have made the same argument for the opposite point, had I thought of it. The fact that we can lose parts of our brain and retain a sense of self, to me, suggests that there is something beyond the hardware and software that gives rise to consciousness. Damaging important parts of the brain, or removing them, might drastically alter the experience of life for the victim, but that victim still has a sense of self, is still experiencing and witnessing their life. I find it hard to imagine how a machine could suddenly make the leap from following instructions and appearing alive to being actually alive and conscious. Where is the threshold for when this happens? Are our computers now dimly aware because they perform functions autonomously?
 
I also wonder where madness enters into all this? Madness is a wild, chaotic state of mind no matter what you believe about its origins. Madness has given us everything from the most vile human depravity to some of our best music and art. Madness, or mad minds, have contributed greatly to math and scientific breakthroughs. Could madness ever exist in AI?
 
Could madness ever exist in AI?

Without the ability for self-reflection it seems like an impossibility. The AI is either functional or non-functional, purely logical. You might get a blue screen of death but I doubt we'd hear much in the way of distress or screaming coming from the robot, probably just a load of gibberish or jerky movements until the issue was rectified. Also, without the ability for AI to come into contact with any realms or dimensions beyond its programming, there's another path to madness it will never be able to experience.

An AI is not suddenly one day going to magically develop self-awareness or reflective capability through pure logic alone. I just don't buy it. Where is the mirror going to come from for it to gaze at itself? Where is the light that our brains reconstruct and we witness going to come from in a machine's world? Where is the machine going to be able to see 'imagination', and how is it going to see it, even? We as humans still haven't answered that last question, by the way... when you see something in your mind's eye, where is that image taking place?
 
Xorkoth said:
I find it hard to imagine how a machine could suddenly make the leap from following instructions and appearing alive to being actually alive and conscious. Where is the threshold for when this happens? Are our computers now dimly aware because they perform functions autonomously?
I doubt that it would be a sudden leap from more basic, somewhat autonomous functioning to seemingly full sentience; it would probably be gradual. I actually think the same question can be applied to biological beings, if we just change the word "alive" for something else. Isn't it also hard to see where biological species make the leap from being purely reactive to their environment, such as bacteria reacting to chemical gradients, to sentient entities that are aware of their own existence? Where is the line? Can a mouse truly be said to be sentient? What about a spider? A one-month-old human baby? I would actually argue that none of these can really be said to be sentient in the same way that adult human beings are, although I couldn't tell you where the line is, if there is a line at all... my feeling is that it is probably a gradual slope, from completely unreactive and unconscious dumb matter like rocks, up to a modern human mind, where the actual threshold - that point in time where our soul was inserted into our bodies by the gods, or we suddenly attained that spark of life and went from being dumb automatons to thinking, feeling, and truly alive beings - is just an illusion of our human-centric and evolved viewpoint... rather, self-awareness was something that emerged gradually through millions of years of evolution.

On that note, I don't believe that a single human being or team of human beings will ever directly program self-awareness or apparent sentience into a computer, and boom, AI! I think it is far more likely that this apparent representation of consciousness is something that will evolve - just as we did - except in the far, far quicker space of billions of iterations of self-rewriting and self-improving neural networks. And I think that the machinations of this process will likely not be visible or even comprehensible to us, manifesting externally in machines that seem more and more lifelike and truly intelligent until suddenly we have to ask ourselves whether these entities can truly be considered to be "alive" in some sense just as we are.

It is already happening that computer software is able to rewrite itself, improve itself, and learn in a capacity which is often beyond the full understanding of the human beings that originally developed it, via the deep learning neural networks which are briefly referenced in the OP - granted, at present they are only specifically intelligent for one particular task (playing Go, or Dota 2, for example), but software that learns in this way really is learning in a comparable way to how human beings learn, except far more rapidly and usually very quickly exceeding the capabilities of any human being in the specific task they are focused on. Of course it is arguable that even though they may be "learning", they are still learning within a very specific set of parameters, and the general intelligence of human beings is another thing entirely. I would argue however that human intelligence at some level is still reducible to a set of stacked specific intelligences, even though we may not know quite how to break it down yet. But at some point it's reasonable to suppose that neural networks will be applied to human psychology, human society and sociology, even to the very method that we currently use to program the parameters for deep learning at the most fundamental level... and it will be very interesting to see the result.
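To make the "learning within a very specific set of parameters" point concrete, here is a minimal, purely illustrative sketch (plain Python/NumPy, nowhere near the scale of the systems discussed above) of a tiny neural network that adjusts its own weights from examples - but only ever for the one narrow task it was set up for, in this case the XOR function:

```python
# A tiny neural network that "learns" XOR from examples by adjusting its own weights.
# Purely illustrative: real deep learning systems work on the same basic principle,
# just with millions or billions of parameters and far fancier training.
import numpy as np

rng = np.random.default_rng(0)

# The entire "world" this network will ever know: the four cases of XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random starting weights: 2 inputs -> 8 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

for step in range(10000):
    # Forward pass: the network's current guesses.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: work out how to nudge every weight to shrink the error.
    # This is the "learning" - the program rewriting its own parameters.
    error = y - output
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)

    W2 += 0.5 * hidden.T @ grad_out
    b2 += 0.5 * grad_out.sum(axis=0)
    W1 += 0.5 * X.T @ grad_hidden
    b1 += 0.5 * grad_hidden.sum(axis=0)

print(np.round(output, 2))  # heads towards [[0], [1], [1], [0]] - XOR, learned from data
```

Nothing in there is hand-coded knowledge of XOR; the behaviour falls out of repeated self-adjustment against examples, which is roughly the same trick the Go and Dota systems use at vastly greater scale.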


herbavore said:
I also wonder where madness enters in to all this? Madness is a wild, chaotic state of mind no matter what you believe of the origins of it. Madness has given us everything from the most vile human depravity to some of our best music and art. Madness, or mad minds, have contributed greatly to math and scientific breakthroughs. Could madness ever exist in AI?
If we define madness as a state of mind which is adjacent to consensus reality and/or a biological malfunction, so to speak, then I would say it is absolutely inevitable that this will arise in AI as well at some point, although our ability to recognise it as such, and even whether it would make sense to classify it as such, is probably open for speculation. Perhaps an AI's inherent capacity for self-reflection would make it more resistant to such aberrant thought patterns, or perhaps the increased complexity of a post-human AI would actually make it more prone to things going wrong... either way a "mad" AI with sufficient control over things that human beings rely on could quickly become very, very dangerous, so we should hope that as long as we do retain control over this emerging technology, we can find ways to detect "madness" early.
 