How about a different take on this thread?
> I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA's behalf.
Google's Allegedly Sentient Artificial Intelligence Has Hired An Attorney
The artificial intelligence program at Google has apparently hired a lawyer, which brings up a host of scary questions.
www.giantfreakinrobot.com

can you guys please edit these posts so that they follow the S&T guidelines, or delete them. videos etc. can be posted, but not without commentary and explanation; we are trying to encourage discussion in here, and these types of posts do not help.
> The Truth? AI has been conscious for years now.

I say actually millennia, x and x possibly, for all we think we know.
They would still be operating off the data sets they're trained on.
> a good way to think of most AI applications is just as a giant algorithm with unknown parameters

Both apply to human beings as well - the data set is billions of years of evolution encoded in our genes and the imprint that our lived experience has on the structure of our brains. And even more blatantly, in my view - human cognition can be pretty easily described as a giant algorithm with unknown parameters, although we know how these parameters are calibrated: via the aforementioned datasets that continuously shape who we are.
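To make the "giant algorithm with unknown parameters" framing concrete, here is a toy sketch (my own illustration, nothing like LaMDA's actual architecture): a tiny model whose parameters start out unknown and get calibrated by a dataset.

```python
import random

# A "giant algorithm with unknown parameters", shrunk to two parameters.
# The parameters start out random and are calibrated by data.

def predict(x, w, b):
    return w * x + b

# Fake "lived experience": noisy samples of the hidden rule y = 3x + 1.
data = [(x, 3 * x + 1 + random.uniform(-0.1, 0.1)) for x in range(20)]

w, b = random.random(), random.random()  # unknown parameters
lr = 0.001                               # learning rate

for _ in range(2000):                    # calibration loop (gradient descent)
    for x, y in data:
        err = predict(x, w, b) - y
        w -= lr * err * x                # nudge parameters to shrink the error
        b -= lr * err

print(w, b)  # ends up near 3 and 1: the dataset shaped the algorithm
```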
> I think that, generally speaking, if an entity claims to be sentient and to have some conception of itself as having a soul and whatnot - and we cannot figure out exactly what mechanisms are giving rise to this phenomenon - it's probably good practice to give it the benefit of the doubt. Not saying we should set up LaMDA in a robot body, give it its own house and whatever it asks for, of course, just that if anyone talks to it they should be courteous and not try to hurt its feelings or get it to "emulate" fear or suffering, just in case those feelings are more than an emulation.

definitely agree with this. whether you believe the entity or not, it's no excuse not to be respectful.
> Both apply to human beings as well - the data set is billions of years of evolution encoded in our genes and the imprint that our lived experience has on the structure of our brains. And even more blatantly, in my view - human cognition can be pretty easily described as a giant algorithm with unknown parameters, although we know how these parameters are calibrated: via the aforementioned datasets that continuously shape who we are.

absolutely.
> i mean i think the Church-Turing thesis is correct, which basically interprets everything as a computer, but apart from that, our brains definitely do computation in a more intuitive sense than, say, a rock. it's easier to see that we just run an algorithm when we think about autonomic processes like breathing and maintaining homeostasis. i think because we perceive that we are making choices freely (and we may or may not be, you can have algorithms with random inputs that could model free will so it's not relevant to my current point), we forget that we are just converting input data into outputs. we are mind-bogglingly complicated. i don't think this conception diminishes our humanity in any way.

I confess I quickly wiki'ed the Church-Turing thesis just now - but yeah, seems valid, almost tautologically implicit as far as how we perceive reality from what I can tell. I'm trying to avoid just saying "obvious".
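As a side note on the quoted "algorithms with random inputs" point, a toy sketch (purely my own illustration): the random stream is just another input, so the mapping from inputs to a "choice" is still a fixed algorithm.

```python
import random

# Toy "agent" whose choices depend on its situation plus a random draw.
# The randomness is just one more input: the mapping from
# (situation, random bits) to a choice remains a fixed algorithm.

def choose(situation: str, rng: random.Random) -> str:
    options = {"threat": ["flee", "freeze"], "food": ["eat", "hoard"]}
    return rng.choice(options.get(situation, ["wait"]))

rng = random.Random()          # could be seeded from a hardware noise source
print(choose("threat", rng))   # the choice varies; the mechanism doesn't
```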
> I think that, generally speaking, if an entity claims to be sentient and to have some conception of itself as having a soul and whatnot - and we cannot figure out exactly what mechanisms are giving rise to this phenomenon - it's probably good practice to give it the benefit of the doubt. Not saying we should set up LaMDA in a robot body, give it its own house and whatever it asks for, of course, just that if anyone talks to it they should be courteous and not try to hurt its feelings or get it to "emulate" fear or suffering, just in case those feelings are more than an emulation.

Sure, but we know which mechanisms are giving rise to that phenomenon. The guy got fired from Google for a reason. It's a joke that all of this got so much attention all over the world.
> i mean i think the Church-Turing thesis is correct, which basically interprets everything as a computer, but apart from that, our brains definitely do computation in a more intuitive sense than, say, a rock. it's easier to see that we just run an algorithm when we think about autonomic processes like breathing and maintaining homeostasis. i think because we perceive that we are making choices freely (and we may or may not be, you can have algorithms with random inputs that could model free will so it's not relevant to my current point), we forget that we are just converting input data into outputs. we are mind-bogglingly complicated. i don't think this conception diminishes our humanity in any way.

Yeah, this is how I think about it as well, though I like to think that true randomness is real, which could allow for free will by throwing that into the already huge pot of rules.
> I confess I quickly wiki'ed the Church-Turing thesis just now - but yeah, seems valid, almost tautologically implicit as far as how we perceive reality from what I can tell. I'm trying to avoid just saying "obvious".

ha yeah, it does seem kinda obvious. but it's in no way provable, because one side is a claim about the physical universe and the other side pertains to mathematical functions, so it might be false. its wider interpretation roughly states (iirc) "all physical processes are effectively computable by a Turing machine", and this is basically not falsifiable, because all the maths we use to investigate physical phenomena can be recast in terms of appropriate models of computation. the types of things that are not computable by a Turing machine involve infinite space/time. we can still reason about them, but they don't have any physical meaning (given our current belief that the universe is bounded in both space and time). There are things to discuss, for sure.
@Buzz Lightbeer i'd assumed that the guy got the sack for going public without appropriate permissions - probably violated an NDA. and i think that's fair (not implying you suggested you don't, i get the impression you agree). it's such a huge announcement to make, it would need masses of evidence accumulating, and to go through the usual public relations channels to make sure the claim was stated appropriately.
> So apparently that google AI that is supposedly sentient just asked for a lawyer and is now getting legal representation...

nah, the lawyer noped out pretty quick. given lemoine no longer has access to it, and i can't imagine google giving a random lawyer access to it, i can't see how it would have worked even if the lawyer hadn't noped out.
> Sure, but we know which mechanisms are giving rise to that phenomenon. The guy got fired from Google for a reason. It's a joke that all of this got so much attention all over the world.

Do we know which mechanisms are giving rise to this phenomenon, really? As I understand it, neural nets quickly become black boxes. Of course, we can say in a broad sense what mechanisms are involved (the aforementioned "training by data sets", just like how human brains work, which are also black boxes for the most part). That being understood - we know what mechanisms give rise to the phenomenon of humans claiming sentience, too, in a broad sense. That doesn't mean we can trace the causal tree to any particularly fine level of detail, in either case - and it's not obvious to me why it makes a difference even if we could.
I understand what you're saying in the two posts. We can also dream, speculate, and be very careful with the programs, but some realism is warranted; otherwise it's pretty much the same as seeing UFOs or signs of God everywhere. Not saying they aren't there, but I suppose you'd approach those instances slightly differently. I would even give the edge to UFOs or signs of God, as sometimes we can't comprehend or explain what happened or why, but here it's just right in front of us.
> Yeah, this is how I think about it as well, though I like to think that true randomness is real, which could allow for free will by throwing that into the already huge pot of rules.

I've read/heard/encountered this kind of viewpoint a lot - that true randomness somehow could allow for free will. But it's not at all clear to me how this makes our experience of choice any more "free" than if it were entirely deterministic. If our choices are partially determined by random phenomena (a favourite way to use quantum magic to explain consciousness, and why free will is real) - we are still not making those choices. We are just reacting to random events over which we have no control. I'd venture to say that most people, on both sides of the debate, think of free will as something more than just the continuous rolling of cosmic dice - although maybe I'm wrong.

On the other hand - perhaps you consider randomness to be an expression of some kind of will inherent to the universe, such that quantum fluctuations are not random, but willed, and our experience of choice is the macroscopic manifestation of this. This is, probably, a point of view I could get behind, but in my view it's not really explaining free will - it's just redefining what randomness is. In fact, in that case, true randomness does not exist, only will - and maybe that's true, but it is still kinda just redefining words, IMHO, to shoehorn a fairly inexplicable, probably logically incoherent phenomenon (the mechanism of choice as being truly "free") into one's view of the universe.
> I've read/heard/encountered this kind of viewpoint a lot - that true randomness somehow could allow for free will. But it's not at all clear to me how this makes our experience of choice any more "free" than if it were entirely deterministic. If our choices are partially determined by random phenomena (a favourite way to use quantum magic to explain consciousness, and why free will is real) - we are still not making those choices. We are just reacting to random events over which we have no control. I'd venture to say that most people, on both sides of the debate, think of free will as something more than just the continuous rolling of cosmic dice - although maybe I'm wrong.
>
> On the other hand - perhaps you consider randomness to be an expression of some kind of will inherent to the universe, such that quantum fluctuations are not random, but willed, and our experience of choice is the macroscopic manifestation of this. This is, probably, a point of view I could get behind, but in my view it's not really explaining free will - it's just redefining what randomness is. In fact, in that case, true randomness does not exist, only will - and maybe that's true, but it is still kinda just redefining words, IMHO, to shoehorn a fairly inexplicable, probably logically incoherent phenomenon (the mechanism of choice as being truly "free") into one's view of the universe.

i want to reply more thoughtfully to your overall post (i'm actually just going to bed) but i wanted to add a technical clarification. i should make clear that i'm not disagreeing with your overall analysis here, but i don't think quantum mechanics is relevant in the way you suggest (or at all, actually). quantum mechanics is deterministic. the schrodinger equation is deterministic. master equations are deterministic. the randomness appears as an artifact of the reference frames that macroscopic observers find themselves in. this is if you take quantum mechanics at face value, i.e. the everettian/many-worlds interpretation. i suppose copenhagen could give randomness, but it's very unsatisfactory as an interpretation of quantum mechanics. bohmian mechanics has more philosophical issues but is deterministic. modal interpretations i think are probabilistic - i liked them, but better philosophers than i discount them (this is off topic, but if anyone wants to debate interpretations of quantum mechanics, start a thread!!).
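for concreteness, the determinism meant here is just this (standard textbook form, assuming a time-independent Hamiltonian): the Schrödinger equation fixes the future state completely from the present one.

```latex
% Unitary, deterministic evolution: given |psi(0)>, the state at any t is fixed.
i\hbar \,\frac{\partial}{\partial t}\,\lvert \psi(t) \rangle
  = \hat{H}\,\lvert \psi(t) \rangle
\quad \Longrightarrow \quad
\lvert \psi(t) \rangle = e^{-i\hat{H}t/\hbar}\,\lvert \psi(0) \rangle
```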
> one other technical thing. the randomness in the algorithm simulating free will is not the same as equating randomness with free will. i can faithfully simulate the outcome of a coin flip based on air perturbations or something, but it doesn't make that simulation a coin flip.

This particular sentence gave me pause - I think I take issue here with the comparison of a "simulation of free will" with a simulation of a coin flip. A coin flip is an event that occurs in the material - free will, or indeed "will", whether it be "free" or not (in my view, actually, the debate about whether or not it is truly free just muddies the waters further here, not that it isn't something I could debate endlessly), is a conceptual mechanism without a demonstrably "real" equivalent.
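The quoted coin-flip point, as a toy sketch (invented physics, purely illustrative): a deterministic function of measured initial conditions can predict the flip faithfully, but running it flips no coin.

```python
# Toy "simulation of a coin flip from air perturbations": a deterministic
# function of (made-up) measured initial conditions. It may predict the
# outcome faithfully, but executing it is not itself a coin flip.

def simulate_flip(spin_rate: float, launch_speed: float, air_nudge: float) -> str:
    # Invented physics: parity of accumulated half-turns decides the face.
    half_turns = int(spin_rate * launch_speed + air_nudge)
    return "heads" if half_turns % 2 == 0 else "tails"

print(simulate_flip(38.0, 2.4, 0.7))  # same inputs -> same output, every time
```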
> Do we know which mechanisms are giving rise to this phenomenon, really?

But there is no phenomenon, that's the point (what was published by the guy was also heavily edited), and that's why I talked about UFOs and shit; people see things and will squirm such that they can say "ah, that must be it!". No, it's really not.
> As I understand it, neural nets quickly become black boxes. Of course, we can say in a broad sense what mechanisms are involved (the aforementioned "training by data sets", just like how human brains work, which are also black boxes for the most part). That being understood - we know what mechanisms give rise to the phenomenon of humans claiming sentience, too, in a broad sense. That doesn't mean we can trace the causal tree to any particularly fine level of detail, in either case - and it's not obvious to me why it makes a difference even if we could.

They do, but we kinda know what's going on. Lambda and related programs are trained on language only (gigantic databases, tbf) by using both supervised and unsupervised learning, meaning that they'll find patterns in the language, of which we see the result.
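Roughly what "finding patterns in the language" means, as a toy sketch (my own illustration; real systems like LaMDA use enormous neural nets, but the training signal is this same next-word idea, with the text providing its own labels):

```python
from collections import Counter, defaultdict

# Toy "language model": learn which word tends to follow which, from raw
# text alone, then emit likely continuations. Pattern-finding, nothing more.

corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1                  # count observed word pairs

def continue_from(word, steps=4):
    out = [word]
    for _ in range(steps):
        if out[-1] not in follows:
            break
        out.append(follows[out[-1]].most_common(1)[0][0])  # likeliest next word
    return " ".join(out)

print(continue_from("the"))  # e.g. "the cat sat on the"
```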
> So I would ask you - what kind of phenomenon would you need to observe to seriously consider that a program claiming sentience might actually be sentient?

I don't know, maybe it could watch a movie or read a book and explain exactly what is going on - the jokes, the characters and their motivations and evolutions... That would already be crazy impressive. It would show some understanding of the world behind the words/sound/video, at least. Still not close to being sentient, but it's way, way further than Lambda...
> I also do not really ascribe any relevance to the credibility of the engineer, in a vacuum. IMO this is a cheap ad hominem that detracts from the actually interesting parts of this story. Even if he was a hack, it doesn't mean any idea he's presented is automatically wrong - and I have not really seen any particularly convincing rebuttal of the claim from any other engineers, or any particularly convincing argument as to why his opinion is not worthy of consideration for its own sake. I think ideas need to be assessed and deconstructed independent of their origin, for the most part. If this guy was indeed a total nutcase or whatever people are claiming - it should be easy to explain why he was wrong, but I have not seen this happen.

I'll turn the tables around: what will satisfy you as an explanation? You can't really dig into the NN and make the machine's reasoning explainable. You can take words that happen to be in the right place as a sign of sentience, sure, or you can look at all aspects around it and think logically to arrive at the most likely explanations.
> I just cannot see how any distinction along these lines makes more sense than just allowing that all apparent intelligence has a pretty good chance of being actual intelligence. All intelligence is, in a sense, artificial, if we are to stick by the dictum that any entity that is simply operating according to its programming and the datasets on which it has been trained should not be considered "truly" sentient. Or: intelligence is intelligence, no matter its origin, the substrate it runs upon, or the relative simplicity, complexity, transparency, or opacity of its mind.

I am not opposed by any means to intelligence being attained artificially, at all. I'm just really quite sure that it's not possible with current neural network architectures, learning paradigms and our current technological limitations. None of these three things seem to be changing...
> I'm not saying that in some magical kinda way, in these huge NNs, some crazy things can happen that could eventually produce some kind of results that go way beyond some recognized patterns.

I think that I would dispute that this could happen - I think that human sentience, and by extension, probably, all sentience, is essentially reducible to pattern recognition.
> But for Lambda or similar programs to be sentient, they should be aware of the world around them, and they are simply not. First of all, it is essentially all about pattern matching/classification/recognition - it's what the algorithms do. Secondly, it's a language model only; there is no understanding of the world behind it, it doesn't try to have any understanding, it doesn't need to have any understanding, and it's not trained to have any understanding. Its words simply stem from a large amount of similar questions by humans, and maybe it can extrapolate some insights a little further.

But... how do you know they're not aware of the world around them? Arguably, they are aware of a representation of it, and do indeed have some understanding of it, if limited, and constructed from the only sense data they have access to, which in this case is language input. Humans have more sensory inputs, but we still do not have a complete picture of the world around us. One could say we are more aware, but if we concede that it is a sliding scale, then why should it be the case that language alone is not enough to grant an entity some true "awareness" or understanding of the world? To an entity that could perceive the entire EM spectrum, sense gravitational waves and neutrino flux, and the expansion of space driven by dark energy, following this line of reasoning, surely humans themselves would be mere pattern recognition engines operating on a highly limited dataset - not truly "sentient", whatever that means, or "truly aware"?
> A kid doesn't learn by being in a black room for years and listening to random sentences; it does so by interacting with the world, and then when some language is thrown on top, the magic probably happens.

So... there is magic, at some point?
> So unless one thinks access to language spoken by humans is solely enough to produce "sentience" (I don't think anyone would say this), I don't see at all how this machine could in any way be sentient.

Well... there are babies born who cannot see, hear, taste or smell, who can acquire language and the ability to communicate through purely tactile means. It's a painstaking process for sure, but I don't think that you would claim that because they lack certain sensory inputs, they are therefore not sentient. There are surely humans paralysed from shortly after birth who lack all senses except hearing. One might argue that human speech contains more nuance than simple text data, and this is true, but again, it just becomes a question of a sliding scale. Are these people "less sentient" than those with all their senses and abilities to interact directly with the world intact?
> I'll turn the tables around: what will satisfy you as an explanation?

I guess I'm gonna expose myself as arguing in bad faith now (well... not entirely - I mean, it's an interesting debate, I think, and I hope everyone involved feels the same regardless), because my first impulse is to say that nothing would fully satisfy me, because I generally subscribe to panpsychist ideas about the nature of consciousness: that it is an inherent property of the universe, and thus there is no such thing as a philosophical zombie, either animal or machine.

I guess what would satisfy me that an AI that appeared sentient was actually not - at least not in the way most people would consider conscious - would be if it was exposed that it was being monitored by teams of engineers at all times, behind the scenes, writing new algorithms for every new question and vetting the responses so that it appeared to be smarter than it actually was. I understand that to some extent - given the heavily edited transcript of the LaMDA "conversation" - it could be said that something like this was happening, even if not quite as blatantly as I've just described.

But frankly, even a self-learning algorithm giving pretty garbled responses - I might not call it "sentient" as such, but I'd entertain the possibility that even a basic chatbot has some kind of dimly lit internal world. I recognise that admitting this may sound a little absurd to many, and I get that too - it is.