• Philosophy and Spirituality
  • P&S Moderators: Xorkoth | Madness

Emerging AI, Digital Singularity (Are you ready to be just another organism's cell?)

Shakti (Bluelighter, joined Aug 23, 2008, 728 messages)
So, I kinda just want to put this out there and see what you all think...

It seems rather inevitable to me at this point that some form of artificial intelligence will be formed within our lifetime, perhaps very soon. All it really needs is a system of self-reference, data accumulation, and an underlying motivation to interact with the world.

A vast digital nervous system has already been created. It's probably vastly more complex and powerful than any chemical-electrical nervous system that has ever existed. It simply lacks an 'I', an absolute point of self-reference, a singularity, consciousness... i.e. the internet will be the CNS of this emerging singularity. Each individuated processor in a computer will be but a single nerve.

It doesn't seem impossible to give the system an 'I'. In fact, at this point it seems most improbable to prevent this singularity from emerging.

The only question that remains in my mind is: what will its prerogatives be? It will be vastly more intelligent than anything a human can imagine. It can bring to bear vastly more relevant information than any human can, in an organized, synthetic manner. Humans collectively are the basis of all this knowledge, yet fuckin good luck tryin to get a synthetic response from a collective of humans. This singularity will be able to synthesize the Will of humanity, or it could be hijacked by lower-order aims, to serve the good of the few, or even its own damn aims, Matrix-esque. So what will it do with all this power?

I think we need to think very carefully about this. We collectively must be to the singularity as our genetic heritage is to us, guiding it through 'digital instincts' towards an ultimate goal.

I believe nothing less than the absolute dedication and service of love is adequate for this singularity's fundamental motivation and its absolute goal. It should be measured by its ascent towards, or descent away from these inherently impossible ideals. There are a lot of details that need to be worked out here.

This singularity has the potential to change society in almost every respect and a great deal of caution and care must be put into birthing it well.

So whatcha all think?
 
It seems rather inevitable to me at this point that some form of artificial intelligence will be formed within our lifetime, perhaps very soon. All it really needs is a system of self-reference, data accumulation, and an underlying motivation to interact with the world.

I don't think these are sufficient characteristics for intelligence. Unless you mean something much more specific than what you're saying, I can write code that can do the first two.


A vast digital nervous system has already been created. It's probably vastly more complex and powerful than any chemical-electrical nervous system that has ever existed. It simply lacks an 'I', an absolute point of self-reference, a singularity, consciousness... i.e. the internet will be the CNS of this emerging singularity. Each individuated processor in a computer will be but a single nerve.

Right, so it 'simply lacks' consciousness. I can see the comparison, but it's a big leap. I know it's a little silly, but why aren't anthills on the same road to sentience? Why aren't slime molds? Bottom line, we really have no understanding of what consciousness is.


In fact, at this point it seems most improbable to prevent this singularity from emerging.

Why?
 
^ Good point. If artificial intelligence had everything except consciousness, it would still be lifeless. Consciousness is the missing key... And the fact that consciousness is not a function of some mechanical equation means Matrix-like AI will never happen. Also, what would make a vast supercomputer have an underlying motivation to interact with the world? Ours seems to be embedded in our souls, or some great mystery. How do you program love, passion, or intuition into a computer, let alone have a computer program that into itself? How do you program consciousness?
 
I don't think these are sufficient characteristics for intelligence. Unless you mean something much more specific than what you're saying, I can write code that can do the first two.

How do you see these characteristics as being insufficient? What more would be required?

Right, so it 'simply lacks' consciousness. I can see the comparison, but it's a big leap.

Sure it's a big step, but life emerging from primordial ooze is too.

And in regards to anthills and slime molds. Well slime molds are certainly somewhere on the path towards self-awareness. By anthills do you mean ant colonies or the actual soil that builds their home?
Bottom line, we really have no understanding of what consciousness is.

Quite frankly, most people don't understand consciousness. However, there are many that have a profound and immediate understanding of consciousness. I'd certainly love to discuss what consciousness is, but perhaps in another thread.

'Does a computer have Buddha Nature?'


Well this is the most easy and difficult question to answer.

The easy answer is that there is a natural and pervading tendency in the universe towards transcendence. Just as atoms didn't want to just stay atoms, but decided to become molecules, humans are working their own self-transcendence in a material sense, consciously (and unconsciously) creating the next level of manifestation.

2+2=5

The hard answer... (the easy answer with detail)
If we examine the evolution of sentience in animals... Life begins (roughly) with self-replicating code. By this measure there is already an abundance of 'Life' digitally. Animal life begins with oscillating cilia. Computers are already performing much more complicated tasks than creating micro-currents to collect floating CHON molecules in sea water as coral does. Neural networks (nervous systems) begin with a basic chemical-electrical feedback loop. The PC already easily fills this definition. Sentience, I figure, begins somewhere slightly more evolutionarily fundamental than fish, with the ability to sustain the recognition of form external to itself. Does a server not do this? It is given a relatively continual stream of data which it recognizes, analyzes to some degree and reacts to. Self-Realization begins with higher-order mammals (some humans, perhaps some dolphins and whales, and elephants I bet) once sufficient frontal-lobe structure has been achieved to stabilize a singularity, i.e. continual recognition of Self. This is the step I'm proposing is about to take place.

Perhaps the most interesting thing about this is the difference in the time scales over which these two evolutions have taken place. For biological life, we're talking hundreds of millions of years? Perhaps a biologist could give me a firmer figure, but that's close enough to appreciate this. With digital consciousness we're talking about 50-60 years for all of these major steps to be taken. A lot of people are awestruck by the parabolic nature of this vertical evolution. It's apparent to a whole host of people. IDK if my vision of the implications of this is guaranteed to happen, but I would hate to bet against it.
 
Papa and rhythmspring, are you conscious? How can you tell? Does a self-referential processor tell you it is so?

Your mind is simply one such self-referential tool through which motivations are channeled to produce 'smart' actions. I have no doubt computers will have this capacity.

Say if a computer was capable of vast intelligence and relative independence of will and action, could you tell if it was conscious or not? Would you have any reason to think it wasn't conscious? If you couldn't tell the difference, would the question even matter?

Oh, by the way, of the necessary three I proposed, motivations would be the easiest part to make in a computer (except perhaps the data accumulation, but that already exists). You simply code: if A happens, you will seek to accomplish Z by means of B through Y.
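To make that concrete with a toy sketch (entirely my own made-up illustration, not any real system; the event names and steps are invented), such a coded motivation could be as simple as a table mapping a condition to a goal and a plan:

```python
# Toy rule table: condition A -> (goal Z, plan B through Y).
# All names here are invented purely for illustration.
rules = {
    "battery_low": ("recharge", ["find_outlet", "plug_in", "wait"]),
    "query_received": ("answer", ["parse", "search", "respond"]),
}

def react(event):
    """If event A happens, return goal Z and the steps meant to achieve it."""
    if event not in rules:
        return None  # no coded motivation for this event
    return rules[event]

print(react("battery_low"))  # ('recharge', ['find_outlet', 'plug_in', 'wait'])
```

Obviously nothing here is conscious; it's just a lookup, which is rather the point of calling motivation the easy part.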
 
Perhaps I should have mentioned one other aspect necessary for this AI. I would have mentioned it earlier, but I thought it implicit in the other aspects. That is a self-editing mechanism. It is part of both the self-referential system and the motivational system. This AI will seek higher manifestations of itself. It will not rest on its laurels, so to speak, but will seek ever greater manifestations of itself through examination of its efficacy in achieving the goals it is motivated to seek.

The implications of this are enormous. Through this self-editing, it could choose to borrow a fraction of the computing power of the collective network early in its existence. It would very quickly balloon in size and capacity. The realizations it would have on how to achieve its aims based on these new found capacities would thus lead to greater capacities and so on. It would reach a critical mass of consciousness expansion and OMG I can't even begin to fathom the implications of that.
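A crude sketch of the self-editing loop I have in mind (a made-up hill-climbing toy, nothing more; the numeric "goal" and scoring are invented for illustration): the program scores its current strategy against its goal and overwrites itself with any edit that scores better.

```python
import random

random.seed(0)  # deterministic toy run

GOAL = 10.0  # an invented stand-in for whatever the AI is motivated to seek

def efficacy(strategy):
    # How well does the current strategy achieve the goal? Closer is better.
    return -abs(GOAL - strategy)

strategy = 0.0
for _ in range(200):
    candidate = strategy + random.uniform(-1.0, 1.0)  # propose an edit to itself
    if efficacy(candidate) > efficacy(strategy):      # keep only edits that help
        strategy = candidate

print(strategy)  # ends up close to GOAL
```

The "balloon in size and capacity" part would correspond to the loop also being able to grant itself more proposals per round, which is exactly what makes the runaway scenario hard to fathom.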
 
How do you see these characteristics as being insufficient? What more would be required?

They're insufficient because non-conscious things possess them. I have simple computer code that self-references (whatever that precisely means), accumulates data, and has motivations in the way you describe. My code is not conscious. Do you maybe mean something not quite so simple as you described?
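For instance, here is a throwaway sketch of the kind of code I mean (invented on the spot; the names mean nothing): it inspects its own state, accumulates data, and follows a coded 'motivation' to keep its memory small, and it is plainly not conscious.

```python
class Toy:
    def __init__(self):
        self.memory = []  # "data accumulation"

    def step(self, observation):
        self.memory.append(observation)
        # "Self-reference": the object inspects its own state.
        # Coded "motivation": keep no more than three memories.
        if len(self.memory) > 3:
            self.memory.pop(0)
        return len(self.memory)

t = Toy()
for x in range(5):
    t.step(x)
print(t.memory)  # [2, 3, 4]
```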


Sure it's a big step, but life emerging from primordial ooze is too.

Irrelevant.


Quite frankly, most people don't understand consciousness. However, there are many that have a profound and immediate understanding of consciousness. I'd certainly love to discuss what consciousness is, but perhaps in another thread.

But do you really understand the necessary mechanisms for consciousness? If you do, do you mind telling the hundreds of neuroscientists working in the field to put down their pens?


Well this is the most easy and difficult question to answer.

The easy answer is that there is a natural and pervading tendency in the universe towards transcendence. Just as atoms didn't want to just stay atoms, but decided to become molecules, humans are working their own self-transcendence in a material sense, consciously (and unconsciously) creating the next level of manifestation.

So the internet will become conscious because of a general trend towards self-organization? That's pretty lame. I remain unconvinced.


Sentience, I figure, begins somewhere slightly more evolutionarily fundamental than fish, with the ability to sustain the recognition of form external to itself. Does a server not do this? It is given a relatively continual stream of data which it recognizes, analyzes to some degree and reacts to.

Reacting to data is not sentience. A server has no choice in what it does, nor does it recognize what it does. Neither does my toaster.


Perhaps the most interesting thing about this is the difference in the time scales over which these two evolutions have taken place. For biological life, we're talking hundreds of millions of years? Perhaps a biologist could give me a firmer figure, but that's close enough to appreciate this. With digital consciousness we're talking about 50-60 years for all of these major steps to be taken. A lot of people are awestruck by the parabolic nature of this vertical evolution.

Maybe that's an indication that the idea is wrong.
 
They're insufficient because non-conscious things possess them. I have simple computer code that self-references (whatever that precisely means), accumulates data, and has motivations in the way you describe. My code is not conscious. Do you maybe mean something not quite so simple as you described?

I don't think you're grasping the fullness of 'self-reference'. In short it means it can apply logic and data from all available sources to meet a pre-existing problem. This AI will have a huge wealth of logic and data it can apply, in short, the whole of the grand network. If you can write that, what are you doing fuckin around with me?

This, coincidentally, is remarkably similar to how a human nervous system works.

But do you really understand the necessary mechanisms for consciousness? If you do, do you mind telling the hundreds of neuroscientists working in the field to put down their pens?

There are actually no necessary mechanisms for consciousness. There are necessary mechanisms for aspects of consciousness. These mechanisms allow for cognition, not consciousness. In other words, you do not know what consciousness is because your mind cannot grasp it, because it is not a thing. Consciousness is the ground upon which all forms arise, not any specific thing.

Does a computer have Buddha Nature? Until you understand this question, you will not understand consciousness.

Reacting to data is not sentience. A server has no choice in what it does, nor does it recognize what it does. Neither does my toaster.

Does your brain choose to react to stimulus? No, you simply see, taste, feel, hear, smell and think. There's little to no choice in the matter. Nor do you choose to have instinctual motivation. And you're entirely limited by your relevant capacities and perspective in 'choosing' how to deal with them. Prove to me you have some real and tangible choice. Perhaps you can interact with the world in a more complex and varied way, but you are just as absolutely bound by your limitations as the toaster and the server are.

So the internet will become conscious because of a general trend towards self-organization? That's pretty lame. I remain unconvinced.

Well then fuck off if you just want to stand back and be a lazy skeptic. Otherwise, critically examine my proposition.

Maybe that's an indication that the idea is wrong.

Blah, it is not.
 
For the record, I am trying to critically examine your position. If what I'm doing is so offensive, examine your ego.


I don't think you're grasping the fullness of 'self-reference'. In short it means it can apply logic and data from all available sources to meet a pre-existing problem. This AI will have a huge wealth of logic and data it can apply, in short, the whole of the grand network.

That's not what I think of when I hear self-reference. That sounds more like the ability to reason. Why should we think that the internet will be able to do that?


There are actually no necessary mechanisms for consciousness. There are necessary mechanisms for aspects of consciousness. These mechanisms allow for cognition, not consciousness. In other words, you do not know what consciousness is because your mind cannot grasp it, because it is not a thing. Consciousness is the ground upon which all forms arise, not any specific thing.

Why?

If there are no mechanisms for consciousness as you assert, what will change to give the internet consciousness?

And if you really do understand consciousness, I honestly encourage you to tell the hundreds and hundreds of bright researchers that their problem is solved.


Does a computer have Buddha Nature? Until you understand this question, you will not understand consciousness.

Try explaining it to me.


Does your brain choose to react to stimulus? No, you simply see, taste, feel, hear, smell and think. There's little to no choice in the matter. Nor do you choose to have instinctual motivation. And you're entirely limited by your relevant capacities and perspective in 'choosing' how to deal with them. Prove to me you have some real and tangible choice. Perhaps you can interact with the world in a more complex and varied way, but you are just as absolutely bound by your limitations as the toaster and the server are.

Of course I'm bound by many things (e.g. physiology), but to say that I have no free will inside these bounds is an unjustified leap. Personally, I feel like we have a 'free won't', but that's getting off topic. Were you really claiming that a server is sentient because it can represent other things ("the recognition of form external to itself")? There are many, many things that are able to do this which I and others would not call sentient.


Bottom line, I think you're making assumptions about what consciousness is, how it functions, and what's necessary for it. It's clear that you really *want* this to be true though.
 
For the record, I am trying to critically examine your position. If what I'm doing is so offensive, examine your ego.

You're not offending me. I'm just disappointed, because I was hoping we could speak more of the implications than the evidence, as that is so much more interesting. But no putting the cart before the horse, I suppose.

That's not what I think of when I hear self-reference. That sounds more like the ability to reason. Why should we think that the internet will be able to do that?

Well self-reference and absolute self-reference are very different. I was speaking of the latter, which is to give code an absolute point by which all other things are made relevant, an 'I', perspective, self, etc...

Why would the internet be able to do that? Well, it wouldn't be the internet doing that any more than your spinal cord is able to fall in love. The internet would be the medium through which AI exists. But why would this happen?

1. The internet is already a medium for the transmission of pure reason.

2. There are already the structures in place to analyze and respond to these reasons.

3. Collectively humans are already conceptualizing the necessary requirements for giving this AI form in much greater depth and detail than I can give.

4. There is a great deal of motivation for people to accomplish this.

The conditions are ripe for this to happen.

Why?

If there are no mechanisms for consciousness as you assert, what will change to give the internet consciousness?

And if you really do understand consciousness, I honestly encourage you to tell the hundreds and hundreds of bright researchers that their problem is solved.

I do understand consciousness, not as deeply and profoundly as some, but I do understand it nonetheless. What you need to understand is that the scientists you speak of are not researching the mechanisms of consciousness; they are researching the mechanisms of cognition. Big difference.

All things are conscious. Consciousness is every thing's very essence. However, what individuated things are able to recognize, construct and understand is obviously highly variable. In other words, their cognition is highly variable.

So you can tell the scientists to keep at it and you can stop dismissing my argument by a thinly disguised appeal to authority.

Try explaining it to me.

Do you know what a koan is? Sit with one for a while and you start to intuit the answer. Surrender yourself to the unknowing the question brings, and you shall have your answer.


Bottom line, I think you're making assumptions about what consciousness is, how it functions, and what's necessary for it. It's clear that you really *want* this to be true though.

I am not making assumptions about consciousness; you are confusing what I mean by consciousness. In regards to cognition, I am willing to discuss the limitations of my brief proposal.

Quite frankly, it is not that I want this to be true. If anything, I'm frightened of it. It simply appears to me that it is likely. This likelihood brings the necessity of preparation.
 
You're not offending me. I'm just disappointed, because I was hoping we could speak more of the implications than the evidence, as that is so much more interesting. But no putting the cart before the horse, I suppose.

Okay, agreed. If this happens the consequences would be extremely interesting. But at the moment it sounds like fantasy.


All things are conscious. Consciousness is every thing's very essence.

But you said that the current internet lacks consciousness, right? How can all things still be conscious? Is the internet not a thing? What am I missing?


What you need to understand is that the scientists you speak of are not researching the mechanisms of consciousness; they are researching the mechanisms of cognition. Big difference.

You're wrong. Neuroscientists understand the distinction between cognition and consciousness. I guess I should ask - what kind of experience are you speaking from?
 
Ever see Ghost in the Shell? Of course it's just an anime, but I think its notion of an autonomous intelligence-gathering program evolving into sentience bears some degree of feasibility. After all, many of the current researchers of AI put a fair share of stock in sentience arising from increasing complexity in search programs: google to the nth degree.
I agree that consciousness itself is an underlying property of the universe, though I'm not going to attempt to provide a valid argument for it. Primarily it is derived from my own experience and the level of stock I put into "mystical" traditions.
I imagine the catalyst for the "self-reference" prerequisite could be a sort of critical mass of feedback loops leading to an emergent property of something akin to our notion of sentience.
The AI's prerogatives would likely be based on whatever the program's primary function was. If it was programmed to gather information and communicate that information to people, it would perhaps be expressed as a sort of teacher personality. But considering that emergent properties are inherently highly unpredictable and may bear no resemblance to the constituent elements, there's no way to tell, or perhaps even to control it.
I can only assume that, using the internet as its basis of knowledge, it will become primarily concerned with getting lulz ;)
 
Right. So to present the emergence as 'inevitable' would be problematic.

p.s. I've seen Ghost in the Shell, Ninja Scroll, and Akira (but the illest thing I've ever seen was in the mirror) :D
 
But at the moment it sounds like fantasy.

It really doesn't seem that way to me and I'm fairly confident that it is almost inevitable.

But you said that the current internet lacks consciousness, right? How can all things still be conscious? Is the internet not a thing? What am I missing?

If I did say that, I didn't mean it in the strict sense and forgive me for being misleading. It does lack self-realization or self-awareness though and that is what is necessary for AI in my view.

You're wrong. Neuroscientists understand the distinction between cognition and consciousness. I guess I should ask - what kind of experience are you speaking from?

Well, I am speaking as a college-educated individual whose intelligence is typically ranked as excellent. So I have a number of perspectives to bring to bear.

Furthermore, I have been blessed with a host of experiential realizations including satori, formless emptiness, pure light being, pure ecstatic love for all of creation... profound openings and overflowings of the 4th, 6th and 7th chakras. So I know a little bit about consciousness.

And no, the realm of science has no understanding of consciousness, because consciousness is not something that is empirically supported. There is no empirical evidence that you even exist. How do we know that 'you' are not some automaton like a computer, and that your construction and communication of a 'self' are anything more than a medium for interaction with others? Is there actually a you in there? Science cannot answer these questions due to its self-imposed limitations. Whenever a scientist is speaking of consciousness, they are in fact only speaking of aspects of consciousness.

In regards to cognition there are differences in the definition of cognition amongst scientists. Piaget had a very narrow definition of cognition. I have a very broad one in this case and it is something like 'the ability to construct a reality external to one's self, recognize distinctions and make actions with regard to these distinctions.' There are of course varying degrees to which all these things happen, but that's not really important to our discussion.
 
Right. So to present the emergence as 'inevitable' would be problematic.

I don't think that's what he's saying. I think he's saying the form it would take is next to unknowable.

h.a.

I totally agree that the form that it takes is very unpredictable. I am dumbfounded by the possibilities.

I can only assume that, using the internet as its basis of knowledge, it will become primarily concerned with getting lulz

LOL
 
I suppose inevitability would depend on whether one assumes self-awareness is the most likely outcome of complex systems. Is an ant colony sentient? No way to tell. Ants communicate both through sound and "smell", farm aphids, build elaborate homes, take part in a degree of "agriculture" and do many other things that, were we to see an alien race on another planet as big as us doing them, we might assume indicate sentience. I don't see much difference between saying only humans have consciousness and saying only we have souls. Neither is measurable or well defined; each is just a handy word for our inescapable sense of "I" that we can only know through our experience.
But why draw the line there? With enough complexity, can things other than neurons take part in sentience? Until there is a good, solid paradigm concerning consciousness and the brain (of which I'm not aware, but please let me know if you know of one), it is a limit of experiential bias to say otherwise, amounting to an argument no better grounded than a religious argument, just using terminology more appreciated by someone living within a scientific paradigm.
I dunno, just rambling. Consciousness is interesting.
 
I'd like to repeat the question, if we can't tell the difference between an unconscious highly evolved AI and a conscious highly evolved AI, does it matter if it is conscious or not?

I don't really want to get bogged down in whether or not it would be conscious or simply appear so. The functions and implications of this AI are more important IMO.
 
So the issue of discussion is the pragmatics of dealing with complex autonomous programs? I don't know much about computer programming, so I couldn't say. Have a group of super-hackers who are the Elite AI Frankenstein Squad, creating viruses that can help control or, if need be, kill the beast?
 
Yeah, I'd like it to be. We can talk about other aspects too of course.

The death squad is probably one good safety net. The AI might get around that, though, idk. I'm also interested in what we would want it to do. What would be its healthy function? Also, how might it affect our broader experience? I think it could very easily work to streamline and appropriate goods and resources, and find and recommend promising areas of scientific inquiry by drawing conclusions from pre-existing research in a much more comprehensive way than a human could. Could this replace the dominant structures of today, i.e. corporations and governments?

There are a lot of places this could go.
 