@Skorpio, I agree that we do not, I think, actually disagree.
Skorpio said:
Also i think if ai attains consciousness, it may well be very different than biological consciousness. This of course is negated by the initial conditions of the argument. Unless this is done through simulations of neurons and all other squishy bits, I think there could very well be a valid ai consciousness that is fundamentally dissimilar to biology.
For sure yes, I would expect a true AI - or rather an "AGI", an Artificial General Intelligence - to have a very different psychology to a biological being, especially later iterations of AI that are even further removed from their biological progenitors (we might, perhaps, expect the first few to have a structure of mind which somewhat mirrors the only examples that we currently have to work with).
Skorpio said:
It takes insane amounts of computing power to simulate a very small number of molecules in atomic detail, expanding it to all known matter, with the non linear interactions at a larger scale seems absurd to me.
For sure yes, but it would not be necessary to simulate every single subatomic particle to create a perfectly convincing imitation of reality.
Skorpio said:
Second it assumes that life on other planets will be at all like humans (actually much more advanced). I feel like even with the constraints of "intelligent", life elsewhere would probably be completely unrecognizable.
I do not see that this assumption has been made. While I agree that life that evolved on a different planet, under an alien sun, with a completely different biochemistry and evolutionary history, would likely be so different to us that we may find such beings difficult or impossible to relate to, technology is something that should be recognisable universally: all of us - denizens of this particular universe as we are - are bound to operate by the same laws of physics, no matter what planet we were born on.
Bringing it back to the actual discussion about simulated realities, I think that even massive psychological disparities are somewhat irrelevant to the likelihood of these societies creating simulations once they have the technology to do so. Simulations are very useful, and we have already created many - even if not of the calibre that people usually envision when the Simulation Hypothesis is discussed. Any species which develops technology, and then computation, has a non-zero potential to be another progenitor of multiple reality simulations, of various flavours and fidelities.
Skorpio said:
I would like you to expand a little bit on the ai versus biological and qualia biases. I think those are fairly interesting. (My view of the conscious experience is that it is a combination of our perception +errors of physical reality plus some neural circuits that make us self aware. (The sims characters for example are not aware, and are quite a bit more algorithmic than a biological system)).
I would be happy to expand on this further - it's one of my favourite things to think about!
My point, essentially, is that the lines we draw when we use terms like "self-awareness", "consciousness", etc., are entirely arbitrary and a lot fuzzier than we typically assume them to be. I would probably agree that the Sims characters are not aware, their behaviours being largely algorithmic. But the behaviour of any conscious, seemingly self-aware entity is entirely algorithmic once you peel back enough layers. Granted, any given human personality is a far more complex algorithmic processor than the code behind the personality of a Sim, but this is a sliding scale with no obvious markers that denote where "self-awareness", as we understand it, begins.
It's not possible, IMO, to discuss the Simulation Hypothesis without also branching into discussion about the nature of consciousness and the feasibility of digitised consciousness, since obviously if we are all simulated beings, then we are all AIs, unbeknownst to us. The "qualia bias" I refer to is using our experience of being as the set point against which any other experience of being must be measured. We are self-aware, but a robot that we program (for example) is (we think) probably not. Likely the first AIs to repeatably pass some version of the Turing test will have software minds that have been deliberately written to act the way we would expect a human to behave, so there will be various internal software modules devoted to language processing, interpreting emotion, evaluating context, etc. But rather than those faculties being a side effect of evolved intelligence, as in us, they will have been written with the objective of interacting in a convincing way with a human - of making it seem like this artificial entity is thinking on its own. It will presumably be possible for teams of human beings to sift through the code and internal logs during or after a Turing test conversation and understand exactly what was going on in the software that allowed this simulated conversation to occur. We might assume that this entity does not experience qualia, and perhaps it doesn't.
But it would be possible, and to some extent a necessity, to add hardcoded "self-awareness" functions, so to speak - by which I mean that the same entity having this simulated conversation with a human would also have the ability to look inside itself at its own reasons for responding in a certain way and make tweaks as needed. Is it "self-aware" now? Plenty of software does this already, with built-in audits and security layers to catch bugs and unauthorised tampering, for example. These software programs are aware of themselves, in that they are capable of observing the logic processes occurring within them as they run and making adjustments accordingly, but we would probably not say that they are actually self-aware, at least not as we understand it - because self-awareness, to us, means experiencing the unique flavours of qualia that our senses and evolved neural architecture provide.
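To make the "built-in audit" idea concrete, here's a toy sketch in Python of a program that records the reasons behind its own decisions, inspects that record, and adjusts its own behaviour accordingly. Everything here - the class name, the threshold, the 90% rule - is invented purely for illustration, not a real system:

```python
class SelfAuditingClassifier:
    """Labels numbers as 'big' or 'small', and audits its own reasoning."""

    def __init__(self, threshold=10):
        self.threshold = threshold
        self.audit_log = []  # a record of the internal "reasons" for each decision

    def classify(self, x):
        label = "big" if x >= self.threshold else "small"
        # Log why this decision was made: the input, the rule used, the outcome.
        self.audit_log.append({"input": x, "threshold": self.threshold, "label": label})
        return label

    def introspect(self):
        """Look inside at its own past reasoning and tweak itself.

        If nearly everything has been called 'big', conclude the threshold is
        too low and raise it - a crude, hardcoded form of 'self-awareness'.
        """
        if not self.audit_log:
            return
        big_fraction = sum(e["label"] == "big" for e in self.audit_log) / len(self.audit_log)
        if big_fraction > 0.9:
            self.threshold *= 2


clf = SelfAuditingClassifier(threshold=10)
for x in [50, 60, 70, 80, 90, 100, 110, 120, 130, 140]:
    clf.classify(x)
clf.introspect()            # every input was labelled "big", so it doubles its threshold
print(clf.threshold)        # -> 20
print(clf.classify(15))     # -> "small" (the same input would have been "big" before)
```

The program "observes the logic processes occurring within it" only in the trivial sense of reading back its own log, which is exactly the point: it's hard to say where this kind of mechanical self-inspection ends and "real" self-awareness begins.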
I feel I may be diverging somewhat from the point I'm trying to make, but as I see it, the criteria that are typically used to evaluate whether a given entity has its own internal world are entirely devoid of logic, and essentially based on whether that entity can pull the right strings of our fickle emotional instruments. Once the deliberately coded conversation-simulator AI is developed to the point that it can crack jokes, understand sarcasm, and reel off half ad-libbed, half pre-encoded philosophical musings in response to probing questions about the nature of its existence - and especially if we give its robot body a cuddly exterior - then people will start to find it harder to destroy it when it starts pleading with them to please not kill it, even if those pleadings are themselves the result of additions to the code by mischievous hackers, and even if the entire mindscape of this robot could still be laid out on a page with no sign that any true introspection or original thought was going on.
At some point, of course, the internal machinations of such a mind would be complex enough that no single human could hope to decipher them, just as it is unlikely any single human could hope to decipher the personality of another living, breathing, biological human, even if they were presented with an atom-perfect 3D model of their brain as well as pages and pages of documented interneuronal firings. I believe that this incomprehensible indecipherability is one of the criteria that we unconsciously use to assign self-awareness to the other conversation-simulator intelligences around us (by which I mean, of course, the other humans). But there are already AIs that have somewhat written themselves using advanced neural network and machine learning technologies, such as AlphaGo and its successor, AlphaGo Zero - both idiot-savant AIs with a superhuman ability to win the game of Go, with at least somewhat evolved "minds" that no single human could fully understand or write, even though there are large teams continuing to analyse the exceedingly complex result of the aeons of machine-time the AI spent playing the game against itself. This AI is not generally intelligent, obviously - it cannot hold a conversation - but the complexity of its "mind" in some ways exceeds that of our own and is in some ways inscrutable to us. Is it therefore possible that this AI has its own qualia, and an internal world of a sort - existing in a bizarre enclosed universe where its only goal in life is to play Go endlessly, to the best of its ability?
Ahem, anyway, I'm getting off track again. But my point is that we have no clue what self-awareness really is, and that gap in our understanding of the nature of being leads us to make, in my view, unfounded conclusions about other aspects of reality. I believe that most religions or belief systems with some element of the supernatural stem from this bias towards human qualia - we see our experience of being as something different from nature and the pure deterministic causality that applies to inanimate matter. We see ourselves, or the experience of being human, as in a sense something "supernatural". And if there is at least one supernatural thing that we know of - our experience of being alive - then why not other things too, like gods?
This flawed mode of thought, IMO, is also evident when people discuss the possibility of a creator of the universe, and of the creation being a deliberate act of will rather than simply a natural event. But will is itself a natural event, and the idea of will as something separate from nature is illusory. Say our universe is one of many, sprouting and growing on some kind of cosmic tree in the greater multiverse - or big bangs are the result of random collisions of multidimensional branes floating in the Bulk, as in String Theory. This seems like a natural event. On the other hand, what if this is true - but the Bulk is contained within some kind of "Grow your own universe!" toy on the shelf of some higher reality? Or the Bulk is part of some kind of ornament, such as a lava lamp, the multitude of complexity within it a mere side effect of its design - unknown to the transcendent manufacturers of the lamp, or known but simply irrelevant and of little interest except to scientists who study brane mechanics as some higher-dimensional allegory of fluid mechanics. In that sense, the creation of the multiverse containing our universe is deliberate. But it's also irrelevant. Whatever world, if any, exists "outside" the universe we know is likely so inconceivably incomprehensible to us that we could not distinguish whether the brane pool, or "Bulk", is the inside of a lava lamp or simply a puddle on an otherwise lifeless planet - or whatever higher-dimensional allegory of a planet makes sense.
In either case, it's a part of nature, and it's tempting for us to think of everything beyond the nature that we know and (somewhat) understand in the same terms - but it's likely that distinctions like "conscious creation" versus "random natural occurrence" become illusory the further outside our narrow frame of reference we go. It's possible that there are entities with their own "experiences" in some sense, if you go far enough up the reality tree, but these "higher qualia" would be incomprehensible to us, and whatever entities experience them would be forces of nature, just as human beings and human civilisation are forces of nature to a bacterium, which has no clue whether it's been grown in a petri dish in a lab, "deliberately", for some incomprehensible purpose, or "naturally" in a muddy puddle somewhere. Equally, it's conceivable that bacteria, and even viruses or particles, have some "experience" of being - "lower qualia" - but this is obviously so different to our own that we generally do not consider it. The same is true for software objects, i.e. Sims characters. If it's like something to be an electron, or a rock, maybe it's like something to be a line of code, or a digital avatar in a video game, but this is just so far outside our frame of reference that we don't even bother to consider it. This, again, is what I mean by "qualia bias" - when we discuss the Simulation Hypothesis, most people (I think) are imagining some higher reality much like our own, or even identical, with aware beings who are
aware that we're aware. But this might not be the case - it may not even be very likely at all. Our entire reality and everything we experience may seem as irrelevant as the reality of an 8-bit video game like Pong to whatever architects of our reality "created" it, whether they be higher-dimensional godlike aliens writing software, or an inanimate event like a splash in the puddle that we call the Bulk - but that whatever "gods" above us would simply call "a puddle"... or whatever higher-dimensional allegory might apply.