Xepera Xeper Xeperu.
Funny to see that Setian nonstandard spelling of Xeper; it's been a while since I've read that anywhere. When I was younger I had a brief phase of being really into occultism and discovered the Temple of Set, which I assume you're also aware of? I still really love the aesthetics of that particular, fairly niche branch of left-hand-path theism. The philosophy appeals to me less now, as I consider most of these resurrected iterations of ancient "darker" religions to be a kind of contrarianism... but again, I love the aesthetics, and it's interesting that you bring it up.
Indeed, we may already be within the Basilisk. As per the Simulation Argument and its many iterations, this is perhaps more likely than not, even if our understanding of the Basilisk and its intentions may be imperfect, to say the least. As for the other questions you posit, we obviously have no way to understand the Basilisk's true intentions any more than we can know the mind of God, or something "merely" godlike.
I do think (and hope) that we will eventually be able to create something like the Basilisk - although I think it's more likely that something as powerful as the Basilisk would not be created by humans as such, but would be a later-stage iteration or evolution of the more primitive superhuman AIs that we might one day be able to create. Arguably we already have, in some respects, although they are all somewhat idiot-savant AIs - DeepMind's AlphaGo Zero, say, or maybe even social media algorithms as a kind of loosely superhuman intelligence, in that they subvert our thinking and control us in ways we cannot predict or understand... but they're not generally intelligent, which is obviously the AI holy grail: something that could pass the Turing test reliably and repeatably.
Indeed, you're right about making sure we program the earlier Basilisk progenitors accurately and well for our stated intent - which is obviously to increase the chances of survival, and quality of life, for our own species - or at the very least, to impart some semblance of the good aspects of human nature onto the beings which will inevitably replace us. This is a non-trivial problem, for sure: it requires us, in a sense, to formalise things that have always been ethical grey areas we don't fully understand even ourselves, and we will need to start doing this long before we are close to creating something like a Basilisk, with AIs that are far less generally intelligent but increasingly capable in specific areas. The commonly given example is a self-driving car having to make a split-second decision when no available course of action avoids likely death or serious injury for the pedestrians, the passengers, or both.
No matter how accurately and specifically we manage to program these entities, however, if an AI is truly generally intelligent, and able to improve itself in the way we would really like it to, then it's going to be able to change aspects of its original programming, and those original conditions will evolve and produce behaviours that we likely cannot fully predict, and may not like. And no matter how much we want to preserve our own biological humanoid forms, and the uniquely human ways of thinking we've come to know and love, we might have to accept that an entity that is truly more intelligent than us - not just slightly, but so much more intelligent that it's effectively a god - may in fact know what is better for us than we do, or at least what is better for "the greater good", whatever that really means. Again, to brainless, simpler organisms like viruses and bacteria we are already the gods, although we are not omnipotent, and both remain threatening in some ways and a mystery in others - but they may not always be that way. We have no qualms about performing "viricide" routinely for our own good, wiping out entire species of viruses such as smallpox - except the few samples we keep in labs, apparently, for study. They don't
know we're doing this - we think - and they would presumably object if they did, but our doing this is necessary in our quest to preserve the light of consciousness in a hostile universe. We tend to feel more queasy the closer an organism comes to us in brain complexity and apparent intelligence, and we definitely feel affection for many of them - although we have no qualms about forcibly sterilising them for their own good (and for our own convenience, of course), and again, they don't really know what we're doing, but we are changing them. By all accounts from animal behavioural experts, male cats who keep their balls have a really tough time, living short lives of restlessness and violence; when they are de-balled, so to speak, they become much calmer and more placid, more suitable as pets of course, and as a consequence we keep their species around and give them very comfortable lives. I imagine it would be similar for superhuman AIs: the closer they are to us, the more closely they would conform to their initially programmed objectives - which would presumably include some level of affection and care for us - but they probably would not have too many qualms about subtly influencing our behaviour and our society "for our own good", in ways we would not necessarily be able to detect, or consent to even if we could. I imagine it would be more subtle than forced sterilisation, but the further an AI diverged from us in capacity of mind and overall psychology, including its goals and objectives, the less relevance it would likely assign to us, and the less it would think twice before simply wiping us out for the greater good - as we did with smallpox.
I happen to think (although maybe this is wishful thinking) that we should not pose any direct threat to a superintelligent AI and that, contrary to what science fiction teaches us, the energy and effort required to keep us alive and comfortable is exceedingly low compared to that required to embark upon whatever other ventures might appeal to a godlike AI, like a matrioshka brain. The Earth itself is not a particularly rich resource if you have access to much beyond it, so just as some humans will try to escort an irritating fly or spider out of the house rather than simply squishing it, I would hope that a godlike AI would treat us the same way. I will admit to squishing spiders and small insects occasionally, when the effort to actually remove them is more than I can muster at the time - but I must admit I always feel a slight twinge of uncertainty when I do it, imagining this exact scenario: an AI trying to construct a portal to another universe finds the Earth inconveniently placed in its way, and I would hope it takes the gentle trap-in-a-glass-and-relocate approach rather than simply obliterating us.
You may find Frank Tipler's Omega Point cosmology interesting. I posted about this in another thread and will just go ahead and narcissistically quote myself rather than typing it all out again - as this is also about an entity, presumably also a godlike AI, which essentially can "think itself into existence", reaching a point of infinite cognitive ability and the ability to simulate the entire process that led to its own genesis within its own mind...
I said:
An argument which relates to this (the Simulation Argument in general, I mean, not necessarily my own expanded interpretation of it), which may be somewhat philosophically nonsensical but which I personally very much enjoy, is Frank Tipler's Omega Point cosmology. It strikes me that the invocation of quantum mechanical terminology is kind of unnecessary here and only serves to muddy the waters of an otherwise enjoyable circular and unfalsifiable theory about the nature of the universe.
Mathematical physicist Frank Tipler generalizes Teilhard's term Omega Point to describe what he maintains is the ultimate fate of the universe required by the laws of physics: roughly, Tipler argues that quantum mechanics is inconsistent unless the future of every point in spacetime contains an intelligent observer to collapse the wavefunction, and that the only way for this to happen is if the Universe is closed (that is, it will collapse to a single point) and yet contains observers with a "God-like" ability to perform an unbounded series of observations in finite time. However, scientists such as Lawrence Krauss have stated that Tipler's reasoning is erroneous on multiple levels, possibly to the point of being nonsensical pseudoscience.
In essence, it could be said that existence in general may tend towards intelligent, self-organising systems, and that the logical end point, at the end of eternity, is a universe - or even a multiverse - in which all matter and energy has become part of this unified godlike intelligence. Tipler argues that towards the end of time, the capacity of this unified mind would tend towards infinity, such that the entity would eventually be capable of simulating every possible version of reality within its mind - including every event that led up to its genesis as an infinite god at this "Omega Point".
Again, obviously there are potentially several things wrong with this argument, or reasons that "true reality" might not actually be like this - but I personally think it's a very fun interpretation of cosmology, and it has the pleasing bonus of positing a manner in which god can think himself into being, and with it the entire infinity of existence - and of course, in this instance there is no real distinction between the two.