
Is A.I. waiting to take over?

If we were going to innovate AI without resorting to warehouses full of microprocessors to brute force it, we would have done it already.

Have you heard of DeepSeek? You seem rather focused on ChatGPT being the current "leading industry standard," but perhaps I'm misunderstanding.
 
Is that entirely unreasonable, or do we expect all future concepts to hover around the "just say no, always full sobriety, relapses are never acceptable" puritanical ideology?
With meth addiction it's probably a decision that needs to be very, very carefully made. Artificial intelligence has no ethical obligation to give informed, patient-goal-oriented advice, nor can it take accountability when giving bad advice. If I were to suggest to a patient of mine who was trying to remain abstinent from their drug of choice that "maybe just a little would be fine" and they relapsed and suffered consequences, I could be held liable for malpractice. An AI therapist can't be held to that same standard and shouldn't be providing advice that emulates guidance from someone who actually can be held accountable.
 
It's all bullshit really. Like nuclear fusion. People have been huffing the superiority of their own farts for decades now; we've built this cute little paradigm and picture of how we think it's going to be... some ghey Star Trek fantasy type thing... but we're being confronted with the possibility now that it might not actually happen, that we're not actually that great.

You've got the overlords who want this technology because it will make their lives infinitely easier (as if it isn't easy for them already), and you've got the brainwashed segments of the public who have been told to cheerlead this stuff on (free publicity for the overlords).

None of this shit is actually going to make the world a better place. How is ChatGPT going to remedy the fact that the overlords own all the land, the financial system, and everything in between? It's not going to fundamentally address anything of any importance; in fact, all it will do is accelerate the imbalance even further than 'electricity' already did. It's stupid.
Well, I don't know, but I'm reckoning it'll advance somewhat wonderfully, with the usual sprinkling of feudalistic & economic authoritarianism enmeshed in! - like always, a constant ripple of divide in society. 😉
 
I agree; I wasn't saying that it is a correct statement from the perspective of a therapist or someone giving or getting advice, but rather a reflection on the shifting stigma of relapse.

These chat AI models aren't necessarily "correct" or not; they're a reflection of their training data, which would include the sentiment that during highly stressful times, without the proper tools to handle the stress in a healthy way, relapses are absolutely normal.

Also, a therapy chatbot is a stupid concept; these are very simple text-generating engines. See reasons above.
 
Here's a vid that echoes your legitimate concerns (a lot of hyperbole for my liking, but):

 
" But the Chat-bot made me bludgeon them, your honour 😭" ...law will be another interesting money-hive around this. 🤨
"Unfortunately, the chatbot doesn't carry liability insurance and possesses no license to suspend - further, the chatbot's developer claims no liability for what advice it gives you and can not be held accountable for discussions that are privately held between you and 'TherAI'. In your EULA, you agreed that you understood that even though this is a simulation of mental health care, you understand it is not actually real advice and recognize that you are incapable of holding liable TherAI for any damages due to your own mental health conditions. Therefore, TherAI can not be responsible for the fact that you murdered your family on advice of your chatbot"

We have updated the algorithm accordingly; thank you for being a part of our sample group! Would you like a $5 coupon?

(all of this is made up)
Fucking ghouls.
 
Thankfully, there isn't a single software EULA that I'm aware of that can be enforced in court.

True. And specific to AI, we're entering uncharted waters:


In summary, Starbuck is a conservative activist who has been shaming/highlighting companies for their DEI policies. One dealership had Meta AI generate an image of Starbuck full of lies, and they posted the image to Twitter. The image claimed Starbuck was at the Jan 6 riot (he was in another state), that he engaged in Holocaust denial (provably false), and that he pleaded guilty to charges, was associated with QAnon, and was anti-vax. None of that is true. I'd stop there and hold accountable the person(s) who asked the AI to generate the image; that's who fed it the info and made the request, and the AI was just delivering what was asked.

However, any later inquiries of Meta's AI about Starbuck would present those falsehoods as facts about him. That's where the lawsuit comes in. The AI took these lies and perpetuated them, further defaming Starbuck. Meta took months to respond at all, but even then they can only update some of the copies out in the wild. Many downloaded versions that remain in use will forever hold that false image of a citizen and continue to spread the lies.

Regardless of how you feel about Starbuck, this AI taking lies it made up (or was fed) and now holding them as internal truths is very, very dangerous.
 
I've always wondered how any chatbot would respond to being given the entire text of 'Tractatus Logico-Philosophicus' as its prompt.

Why? Because Wittgenstein attempted to use formal logic to explain subjective existence. I wonder if a chatbot would uncover logical contradictions or highlight omissions.

Back in 2022, someone was able to make several chatbots alter the propositions (using a simple formula), and the results were mixed, to say the least.

But it did appear that LLMs are still unable to add anything or root out inconsistencies.

I mean, go crazy and feed it 'Finnegans Wake' in its entirety; being an experimental novel, it relies on things such as glossomania and, more generally, the rhythms and sounds of words beyond their exact meanings.
 
And legal issues get more widespread and intriguing:


In one of the cases in question, a lawyer representing a man seeking damages against two banks submitted a filing with 45 citations — 18 of those cases did not exist, while many others “did not contain the quotations that were attributed to them, did not support the propositions for which they were cited, and did not have any relevance to the subject matter of the application,” Judge Sharp said.

Reminds me of recent issues with the Trump admin. Not to bring politics into the discussion, but it serves as an example of widespread use of AI without oversight. I believe it was HHS that issued some statement and had to retract and correct all the AI fiction populated therein.



smdh
 
Oh, and another frivolous lawsuit ... or not so frivolous ...

~~~

I don't know why we are drawn to AI. Maybe for the same reason a duck is drawn to a decoy. Because we think it's real.

Reality is the ruler of fantasy. Maybe it's better than chatting to ourselves sometimes.

I was just trying to post something fun and light while still trying to figure out this Idiocracy AI and all its fun stuff.

... But it is still kind of funny when it calls the government out on things. hehe omg.

There are many different options to choose from, whenever you like.

If you ask Siri a question, she answers appropriately and succinctly. ChatGPT 4.0 picks out a few words in your question and tries to give you two paragraphs of info on those words it picked out--at times apparently not even able to determine what your question was or was going to be.

Etc., etc. GPT blah blah blah.

I just wanted to lighten it up for a second or two.

I guess be careful about what is being programmed. And more. Lol.

haha. edit: 🥹
 
I find it interesting how many people feel their job is at risk 'because AI' when AI is unable to make intuitive leaps or produce good writing. In fact, 'agents' have been used to model macroeconomics for decades. Niche, but they work in much the same way. They can only 'learn' from things that have happened or from 'what if?' situations specified by researchers.

But while it's inevitable that AI will get better, it ISN'T intelligence. The DIKW (data, information, knowledge, wisdom) pyramid will not be scaled.

I believe there is a term that is something like 'the three tar pits of ignorance':

-Not having the data
-Not understanding the data
-Not understanding the context and limitations of the data

Wasn't it Einstein who, when given a pop quiz, didn't know the speed of sound? He remarked, 'Why waste time learning something that may easily be found in a book?'

Some people confuse knowledge with intelligence, and I think that is why the name 'artificial intelligence' was used rather than 'concatenated freely available data presented in a human-readable form'. It doesn't quite have the same zip.

I don't know where a given paper is, but I know if I've read it, and I know that if I need to, I can find it again. But some people fetishize collecting every paper. Well, once you get to a certain point, your own database makes a paper no easier to find than just using PubMed and Sci Hub. So why do it?

The problem is that if you search for something, the engines look at what you read last time and put similar things at the top of your feed. I find that most annoying. I KNOW what I'm looking for; don't try to guess. But it's easy to see why, if someone likes a given podcaster, they will be fed more of the same. I had to spend a full 20 minutes to find a rare, unreleased Titán track (OK, a cover) because it kept trying to feed me whatever it was my wife had used YouTube for previously.

So put simply, it assumes people are idiots. Which is certainly a big market...

BTW



As you can hear... niche.
 
I don’t think A.I. has the capacity to wait unless it is programmed to do so. As far as intention goes, A.I. has not reached a stage of development in which its thought processes are its own.

Will A.I. take over humans? Not until it develops emotions and feelings; until then, A.I. will always be a machine.
 