guys can we please be aware of the S&T forum guidelines.
articles and videos without commentary are not allowed. i'll leave what is already here, but the purpose of this forum is discussion, not posting videos and articles. they are welcome if you include your own comments and a summary, and describe the points of discussion you think they contribute to.
i saw the LaMDA stuff in the news recently. i doubt that our computational architecture and software design is sufficiently complex to yield true intelligence at present. i don't think there is anything special about the substrate offered by our brains in terms of supporting consciousness, but from what i understand about the behaviour of neurons, they are significantly more complex than modern computers' logical processing units. that complexity doesn't actually change computability results, but it probably means that, in conjunction with our sophisticated memories and variety of input data, a supercomputer based on current architecture would need to be huuuuggggeeeee and specifically designed for consciousness to arise.
i'm basing that on intuition, not any actual facts, though. in the case of consciousness, where no one knows what the fuck it is, i think that's valid.
i don't know how we'd even work out if an AI was conscious. super interesting, but given how complicated these models are, and the fact that we can't even predict what current neural nets will learn, i'm not sure how we'd differentiate between a really capable AI and a conscious one.
it would be interesting to look into the tech used for LaMDA, and the compute resources it uses, but i'm supposed to be working so can't do that right now.