but there's a difference between thinking sentient AIs are possible in principle and thinking they'll actually exist within the next decade (or ever), let alone now. That's a key distinction, because otherwise you never get anywhere. If we're talking about right now, there is no sentient AI and there's no question about it. A decade from now? Maybe - I can't know for sure, but at least there's room for discussion. Either way it's almost a purely philosophical question at this point, and I tend to avoid those these days.
Well, as long as sentience itself is not well defined, I think it's hard to say there's no question, but I'll agree that this conversation has taken a highly philosophical turn. I'm tempted to say that, given the subject matter, that's kind of unavoidable, but I'll also confess that the philosophical implications of such reports - however unlikely or sensationalised by the media they might be - are the most interesting thing about these kinds of stories to me. I'll try not to keep covering the same ground, now that I've basically admitted to steering the conversation in a direction that's not exactly tangential, but perhaps a little away from the core subject matter. That said! I can't help pointing out what I see as a consistent flaw in your reasoning. I also still dispute the UFO comparison - but I won't bother elaborating, that's neither here nor there.
But language is not a representation of the world. To us it's a representation, because we already know the world it describes. To a computer, though, it's no different from that shitty prediction algorithm I've got running at work that takes weather and configuration parameters as input. There's no fundamental difference if we're talking solely about words.
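To put it another way (a toy sketch only - nothing to do with the actual thing at work, and all the numbers and names here are made up purely to illustrate): to the model itself, weather readings and word tokens are just vectors of numbers going down the same code path.

```python
# Illustrative sketch only: to the model, "weather parameters" and
# "word tokens" are both just arrays of numbers - it has no access
# to whatever they are supposed to stand for.
import numpy as np

rng = np.random.default_rng(0)

# A toy "prediction algorithm": a linear model over four numeric features.
weights = rng.normal(size=4)

weather_features = np.array([18.5, 0.72, 1013.0, 3.4])   # temp, humidity, pressure, wind
word_features    = np.array([2048.0, 17.0, 512.0, 9.0])  # token ids from some vocabulary

# Identical code path for both - the model never "knows" which is which.
print("weather prediction:", weights @ weather_features)
print("next-word score:   ", weights @ word_features)
```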
I'd agree that language is not a representation of the world. Language is a method of encoding data, for the transfer of information between minds. That said, it can be used to encode information that would allow another mind - one able to understand that language and decode the data - to construct a workable mental model of something it has never actually perceived directly. That model will almost certainly not be a completely accurate representation of whatever object, concept, idea or other element of reality it attempts to describe - but then, we never have direct access to reality ourselves, and we never have had.

What we typically refer to as "reality" (or "the world") for convenience is a simulation constructed by the data processing algorithms in our brains, from the sense data we believe ourselves to receive from our sense organs. We have no conception of the true nature of a photon, of the vibrations in air that strike our eardrums, or of the configurations of matter that give rise to tactile properties such as hardness, softness, heat and cold. What we have are working models that do not represent truth - only evolutionarily advantageous abstractions that create the world inside our minds that we live within. It is actually mathematically provable that evolution does not favour the perception of truth - this is explained far more succinctly than I'm probably managing by Donald Hoffman in his book The Case Against Reality (highly recommended reading), and summarised a little too briefly to truly do it justice in this article I just googled:
The case against reality. Key takeaway - "Perception is not objective reality."
What we assume to be objective reality is our perception of our own pattern recognition algorithms decoding sense data. Language is a higher-order abstraction, another layer of post-processing on top of this inner world of apparent shapes, colours, forms, and "inner" concepts like emotions and even thoughts themselves. It's certainly useful, but it is not necessary in order to possess an inner world - there's at least one example I can think of where a horrifically abused child grew up without a first language; seemingly, after a certain point, we may even lose the ability to develop one.
Having established that none of us actually "know" the "real world", only representations of it, it seems to me that any input data into a computational system could serve as the building blocks for some kind of inner world - albeit one vastly different from our own. If an AI's primary sense of the world outside itself is words, it doesn't actually matter that it cannot conjure up mental images comparable to those humans can, for whom language came after all our other sense data.

The same mechanisms will be at play in any system with an evolutionary drive to operate within a real world it can never perceive directly: pattern recognition algorithms will still work to construct an internally coherent simulation from whatever data is available. And if the abstraction of reality that human beings operate within is enough to grant us some kind of conscious awareness... well, I feel I'm repeating myself, but I just don't see that our own abstraction is special compared to an abstraction built from interconnected, internally coherent algorithms that "evolve" to recognise linguistic patterns and evaluate the most evolutionarily favourable response to any given sequence of words - even if that response is simply spewing out different arrangements of words, the only concepts that represent some part of the "real world" that it knows. Looked at like this, whether it truly understands the words or not seems not entirely relevant to whether or not it has some kind of experience of being.
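Just to make that a bit more concrete (a toy sketch only - the tiny corpus and function name here are purely illustrative, and real language models are enormously more sophisticated): a model like this never encounters anything but sequences of words, yet it still builds an internal statistical picture of how they hang together, and "responds" using nothing else.

```python
# Toy sketch: a model whose entire "world" is sequences of words.
# It never sees images, sounds or objects - only which word tends to
# follow which - yet it still forms an internal model of that world.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat saw the dog".split()

# Count which word follows which (a crude "pattern recognition" step).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def respond(word):
    """Return the most 'favourable' continuation the model has learned."""
    options = following.get(word)
    return options.most_common(1)[0][0] if options else None

print(respond("the"))   # -> 'cat' (the pattern it has seen most often)
print(respond("cat"))   # -> 'sat'
```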
While evolution did not program us to perceive truth, one of the most interesting implications of this is that we could probably, eventually, create an intelligence whose perception of reality is far closer to objective truth than anything that would ever be likely to evolve. The value of doing so is debatable, of course. And I do recognise that an AI could have an inner world and still not really relate to the experience of being a human, even if it thinks it does based on the data available to it - and that a human could be convinced it does relate, probably more easily than the AI itself would, since unless we deliberately build it in, an AI is unlikely to spontaneously evolve the belief that it really understands what it is to be a human being, or that this is even something that should concern it in any way. But... I digress. God damn speed... I need to go to bed.
I'm rarely, if ever, in the mood for any philosophical stuff, so I like to just skate around it, as the practical person I am (or have become?). I feel the same regardless - I think sometimes I can come across as slightly aggressive or something, but it's never really meant that way.
Occasionally I have the same perception - that I come across as harsh, condescending, dismissive, whatever - but I also don't intend that, and for the record I didn't perceive any aggression or malice from you. I appreciate you indulging my philosophical ramblings so far despite your more practical inclinations.
