Current AI models are not sentient. They are just really elaborate statistics and information-collating machines. They can network many different ideas together and summarize data for us, but they can't tell us the value of it. People should watch more sci-fi. The computers in shows like Star Trek are complex assistants to humans. Calculations that would take us days (or years) can be done by a computer instantly, and they will show us whatever we ask them to show us. But they can't tell us whether the information is valuable, whether it's good or evil, or whether it can or should be used.
For example, there are millions of PhD-level articles out there in the journals... but no human being could ever sit down, read them all, and then tell us what unique things they figured out from reading so many articles. It would take many lifetimes. An AI could, though. It could read an entire publication and tell us what it found. This would be really useful for things like medicine. An AI could read every medical article ever written and network ideas together in ways we never considered, giving us new treatment options.
That's the difference between spider indexing, like what Google does with websites, and an AI. An AI could read all of Bluelight and tell us whatever we want to know about it -- but it wouldn't be able to tell us what is "good" or "bad" about it.
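To make that gap concrete, here's a rough sketch in Python contrasting the two approaches. It assumes the open-source sentence-transformers library, and the post titles and query are made up for illustration:

```python
import string
from sentence_transformers import SentenceTransformer, util

posts = [
    "Harm reduction basics for new members",
    "How long should I wait between doses?",
    "Trip report: a quiet night with an old friend",
]

# Spider indexing: map each exact token to the posts containing it.
# It can only answer "which posts contain this literal word?"
index: dict[str, set[int]] = {}
for i, post in enumerate(posts):
    for word in post.lower().split():
        index.setdefault(word.strip(string.punctuation), set()).add(i)

print(index.get("doses", set()))   # {1} -- literal match
print(index.get("dosage", set()))  # set() -- no literal match, no result

# Model-based search: embed the texts so related meanings land near
# each other, then rank posts by similarity to the question.
model = SentenceTransformer("all-MiniLM-L6-v2")
post_vecs = model.encode(posts)
query_vec = model.encode("advice about dosage spacing")
scores = util.cos_sim(query_vec, post_vecs)  # higher = more related
print(scores)  # post 1 should rank highest despite sharing no exact words
```

The index only ever returns literal matches, while the model ranks posts by meaning -- but notice that nothing in either step says whether the advice it surfaces is good, safe, or worth following.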
AI could work to our advantage if we could train it to be fair. But we are already seeing major biases being built into these models, like Google's piece-of-shit Gemini AI, and corporations are already looking at how to game AI.
If you're not a critical thinker, then AI will just lead to way, way worse echo chambers... because AI is only ever going to tell you what you ask it. If you don't know how to word your questions fairly, you're going to get biased answers. And in the case of garbage AI like Gemini, the answers will be skewed no matter what you do.