
Is A.I. waiting to take over?

Again "So far". Look at where we've been decade by decade, and the prorgress that occurs over time. Speculation of the where we'll be in the next decade is all over the place.
But machine learning and AI aren't even new, they've been worked on for a very long time. The most recent stuff with the LLMs has already peaked, you can't just throw more hardware at it because fundamentally it's the algorithms that determine the advancement, not the hardware. There is nothing they can really add to what has already been created, so it's back to the drawing board to try and figure out a new algorithmic approach altogether.
I was going to go with some manual labor positions that would be protected from AI, but it's really damn hard to see any of them remaining untouched. My first thought was a welder, but there are already robotic welders (they follow a pre-programmed path across fixtured pieces held together). So what about the quality inspection of those welds? Better cameras, perhaps x-ray, and the robot welder could take that over as well. Same for loading the parts for welding, a lot of that is automatic already. All of the welder's work can be replaced...but have you actually saved anything or invested more than was warranted? Your point stands on spending x10 to automate. It's not always the right choice.
I mean we already have automated welders and laser cutters, they're great for assembly lines where there isn't any real deviation and things are being done from scratch. But as soon as you introduce any variability, that's not something AI is going to tackle. I mean you could perhaps have cameras and monitors to show the human welder stuff he can't perceive, but like a lot of this it's going to come down to business principles.. only the mega corporations would even consider it, and even then it might not be worth it. Perhaps for super high-end stuff like military manufacturing where money is no object, but commercial stuff.. they're always looking to the bottom line.
 
^ on which specific implementations of ai are you basing your "It's just a clever gimmick" opinion?
The LLMs, like GPT. I mean obviously.. that is what all the recent AI hype is about..

It is a gimmick, because it isn't approximating intelligence at all.. there's no reasoning or any logical deduction going on under the hood, just a probability calculation about what the next token should be. This is why it hallucinates, why it can't count accurately, and why it gives out other erroneous responses. It's convincing in the same way a politician can reel off word salad. It's about the probability of words based on training data, there's no intelligence in there whatsoever. Hence, a gimmick.
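To make that concrete, here's a toy sketch of the mechanism (my own illustration, not how GPT is actually built.. real models use neural networks over subword tokens instead of word counts, but the "pick the next token by probability" principle is the same):

from collections import Counter, defaultdict
import random

# Toy "language model": count which word follows which in the training
# text, then sample the next word from those observed frequencies.
training_text = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    follows[current_word][next_word] += 1

def next_token(word):
    counts = follows[word]
    # Pick the next word in proportion to how often it followed in training.
    return random.choices(list(counts), weights=list(counts.values()))[0]

print(next_token("the"))  # "cat" is twice as likely as "mat" or "fish"

There is no concept of a cat anywhere in there, just counts. Scale that idea up by billions of parameters and you have the shape of my objection.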
 
The LLMs, like GPT. I mean obviously.. that is what all the recent AI hype is about..

it's not obvious which is why i asked.

before i start, i'm not an ai fanboy and i'm not being contrary. i see some real garbage from ai.

but, even with gpt, there's a difference between 4o mini and 4o full. as with most things, these are tools and they're right for some jobs and not for others. as they evolve - improvements in reasoning models, etc. - the speed and quality of the change can be pretty impressive.

some of the implementations - e.g. gpt o1, claude sonnet - have the advantage that they can not only write, for example, code but they can execute it too. i read an article where a guy prompted "create an interactive tool that visually shows me how correlation works, and why correlation alone is not a great descriptor of the underlying data in many cases. make it accessible to non-math people and highly interactive and engaging" and the result was pretty impressive.
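as an aside, for anyone wondering what that prompt is getting at, here's a rough sketch of the correlation point (mine, not from the article): two datasets can report a similarly strong correlation while looking nothing alike.

import numpy as np

rng = np.random.default_rng(0)

# dataset 1: a plain linear relationship with a little noise
x1 = np.linspace(0, 10, 200)
y1 = 2 * x1 + rng.normal(0, 2, 200)

# dataset 2: a parabola - clearly nonlinear, sampled on its rising branch
x2 = np.linspace(0, 10, 200)
y2 = x2 ** 2 + rng.normal(0, 2, 200)

# both print a strong positive pearson correlation (roughly 0.9+)...
print(np.corrcoef(x1, y1)[0, 1])
print(np.corrcoef(x2, y2)[0, 1])
# ...yet the relationships have completely different shapes, which is
# what anscombe's quartet famously demonstrates.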

anyhoo, with the tech changing as fast as it is, it's going to be interesting to see where we are in even just 5 years.

oh, when i said i've seen some garbage, i saw this just today:



it sounds almost... human :)

alasdair
 
it's not obvious which is why i asked.

before i start, i'm not an ai fanboy and i'm not being contrary. i see some real garbage from ai.

but, even with gpt, there's a difference between 4o mini and 4o full. as with most things, these are tools and they're right for some jobs and not for others. as they evolve - improvements in reasoning models, etc. - the speed and quality of the change can be pretty impressive.

some of the implementations - e.g. gpt o1, claude sonnet - have the advantage that they can not only write, for example, code but they can execute it too. i read an article where a guy prompted "create an interactive tool that visually shows me how correlation works, and why correlation alone is not a great descriptor of the underlying data in many cases. make it accessible to non-math people and highly interactive and engaging" and the result was pretty impressive.
Well I mean the recent hype over the past few years is specifically about LLMs, about "generative AI". A few years prior it was all about machine learning. Tomorrow, when AI runs out of road, it will probably be some quantum bullshit.

The part about reasoning models: this is the fundamental misunderstanding though, there is zero reasoning in these systems. None. It's statistical probability calculations about what the next likely token/word should be.. there is absolutely no reasoning in there, no intelligence in there, at all. If you asked GPT to describe something to you, it may give you a correct description of that object, but it has absolutely no conception about anything it has just relayed to you.. it's all a convincing hallucination that it statistically determined would most likely be the acceptable answer, based on the training data.

If you track what they've done, they've just thrown more hardware at the problem. There is no real innovation going on at a fundamental level, at the level of the architecture itself. This idea that it will exponentially, or even linearly, increase in sophistication is not based in reality at all but in media and investor hype. The amount of money riding on this means the hype has completely dwarfed the reality of the situation.

The coding stuff. I mean yeah it can spit out code, again based on what it has seen other coders write before, but there's no logical deduction in there.. which is dangerous if you're building anything more complicated than a todo-list. Coding is all about logic, and these LLMs don't understand what they are actually doing.. it's just clever regurgitation. And it's not much use having these things chuck out 1 million lines of code if there are bugs buried in it, and coding large projects is about mental maps and architecture.. something it doesn't understand.. which means some poor soul will have to spend hours trying to build a mental map of code which might be fundamentally flawed (it may have been quicker just to write it yourself).

In terms of coding, it's like what Microsoft Excel was to the manual tabulation it replaced. It bumped up the bar of productivity, but then quickly just became the baseline. At the end of the day it is absolutely nowhere near replacing people, and for the companies that are doing so, it's more about cutting over-hiring and executives who don't understand the technology.
 
Ah, but how do you GET capable workers? You train them.
Sure, then we might have some workers in a decade or two. Too bad that Trump already burned bridges before your country was able to make a neat switch. Also, do you really want American sweatshops like in Asia? Or Americans working in the fields? Everything will get more expensive. This is a crapshoot. China is not your enemy. They can deliver high-quality goods for 1/3rd of the local price. Imagine a young company that just started out buying from China and saving about 2/3rds of its investment cost. That is a bargain. And China is certainly not a danger; for that to be the case they would need a sustainable population, which they don't have. In 10-20 years China will deflate like a balloon.
 
No, A.I. will not take over. Stop watching the Terminator movies. Okay, everything after T2.
For real though.

I think it's quite telling how they're trying to jam it into all those systems, but something like the banking sector is still predominantly based on COBOL, which is over 60 years old. I guess they don't really give a shit if your Tesla drives you off a bridge, but hell will freeze over before they endanger the repository of their wealth.
It's true that AI is popping up in a lot of new tech, and yeah, a surprising amount of core banking infrastructure still runs on older systems like COBOL. However, comparing them directly like that is a bit misleading because the situations are really different. Replacing those core banking systems is incredibly complex and risky. We're talking about systems that handle trillions of dollars reliably. A mistake there could be catastrophic—financially, culturally, and globally—which is why they are so cautious and heavily regulated. It's not just that they don't want to upgrade.

Also, banks use a ton of AI already, just maybe not for ripping out that absolute core processing yet. Think fraud detection, risk analysis, customer service chatbots – they're definitely adopting modern tech where it makes sense without destabilizing the foundation. The type of risk is different, too. A failure in a core bank system could cause systemic financial meltdown, whereas a failure in a car's AI, while potentially tragic for those involved, doesn't carry the same kind of global financial risk.

Saying they don't care about safety isn't quite fair either. While companies obviously care about profit, huge safety failures are also terrible for car companies – think lawsuits, recalls, reputation and regulations. The reasons for the different paces of adoption are more complex than just 'protecting money vs. endangering people'.

It is a gimmick, because it isn't approximating intelligence at all.. there's no reasoning or any logical deduction going on under the hood, just a probability calculation about what the next token should be.
I disagree. Maybe it isn't intelligence, but it damn sure approximates it. That probability calculation can be surprisingly effective, not to mention it passes the Turing Test.

If intelligence is defined strictly as human-like consciousness & reasoning, then sure, current LLMs don't qualify. But if intelligence includes the ability to learn complex patterns, process information, solve certain problems, and generate sophisticated outputs based on input, then LLMs demonstrate aspects of intelligent behavior, even if the underlying mechanism is statistical pattern-matching rather than human-like cognition.

This is why it hallucinates, why it can't count accurately
You're right about the core mechanism – at a basic level, it's predicting the next token based on probability. And yes, that's why it struggles with things like precise counting or can 'hallucinate' facts, because it doesn't have true understanding or a fact-checking module built in. Because the model is generating statistically likely sequences rather than accessing a factual database or performing symbolic logic, it can confidently generate plausible-sounding falsehoods ("hallucinations") or fail at tasks requiring precise, non-linguistic reasoning (like counting or complex math). Its "knowledge" is embedded in the patterns of its training data, including any biases or inaccuracies present there. These shortcomings do not negate its overall value and utility as a tool to augment human intelligence and capability.
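To make that mechanism concrete, here's a minimal sketch (invented numbers, nothing from a real model): scores become probabilities, probabilities get sampled, and nothing in the loop checks whether the chosen token is true.

import numpy as np

# Hypothetical scores ("logits") a model might assign to candidate next
# tokens after the prompt "The capital of Australia is" - the numbers
# are made up purely for illustration.
tokens = ["Canberra", "Sydney", "Melbourne"]
logits = np.array([2.0, 1.6, 0.5])

# Softmax turns the scores into a probability distribution.
probs = np.exp(logits) / np.exp(logits).sum()
print(dict(zip(tokens, probs.round(2))))  # roughly Canberra 0.53, Sydney 0.35, Melbourne 0.12

# Generation samples from that distribution, so "Sydney" comes out
# roughly a third of the time, stated just as confidently as "Canberra".
print(np.random.choice(tokens, p=probs))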

You're dismissing a couple of important things here: 1. Scale – these models have billions or trillions of parameters, trained on internet-scale datasets. 2. Emergent Abilities – the complex patterns learned during training lead to capabilities that weren't explicitly programmed, such as translation or code generation. The ability to calculate the correct probabilities to produce coherent, useful text requires learning intricate patterns of language, grammar, semantics, and world knowledge (as represented in the text).

It's convincing in the same way a politician can reel off word salad.
This is a dismissive analogy. While LLMs can produce vague or nonsensical output, they can also produce highly structured, informative, and useful text, code, or analysis when prompted correctly. The quality varies greatly depending on the model, the prompt, and the task.

there's no intelligence in there whatsoever. Hence, a gimmick.
It's definitely not a gimmick. A "gimmick" suggests something superficial with little real utility. LLMs, despite their flaws, have shown significant utility in, e.g.: drafting emails, writing reports, translation, summarization, creative writing, brainstorming, coding assistance and code generation, information retrieval (though fact-checking is clutch), and idea inspiration. They are tools that can augment human capabilities, even if they aren't "thinking" machines.
 
It's definitely not a gimmick. A "gimmick" suggests something superficial with little real utility. LLMs, despite their flaws, have shown significant utility in, e.g.: drafting emails, writing reports, translation, summarization, creative writing, brainstorming, coding assistance and code generation, information retrieval (though fact-checking is clutch), and idea inspiration. They are tools that can augment human capabilities, even if they aren't "thinking" machines.
It definitely is a gimmick. The amount of raw power and hardware it takes.. just to draft a fucking email? Are you kidding me 😂 This is not much more than a sophisticated auto-complete system, there isn't any actual intelligence in there. Which is why it can't deal with stuff it has no training data for, and why it can't get the number of fingers on a hand correct. It does not understand numeracy, or anything else.
If intelligence is defined strictly as human-like consciousness & reasoning, then sure, current LLMs don't qualify. But if intelligence includes the ability to learn complex patterns, process information, solve certain problems, and generate sophisticated outputs based on input, then LLMs demonstrate aspects of intelligent behavior, even if the underlying mechanism is statistical pattern-matching rather than human-like cognition.
This is sophistry. It doesn't learn anything, hence why it makes glaring mistakes even after having been fed gargantuan amounts of training data on that specific subject. It doesn't understand anything it has been fed. It can't solve problems. I think this quote of yours encapsulates the delusion people have about LLMs, because your own language is not accurately describing what the LLMs are actually doing but what you'd like them to be doing..

LLMs are not demonstrating aspects of intelligent behaviour, they are just mimicking language in a way that has convinced people to project onto it what it doesn't have. People see the output and mistake it for intelligence, they are convinced by the language appearing on the screen.
You're dismissing a couple of important things here: 1. Scale – these models have billions or trillions of parameters, trained on internet-scale datasets. 2. Emergent Abilities – the complex patterns learned during training lead to capabilities that weren't explicitly programmed, such as translation or code generation. The ability to calculate the correct probabilities to produce coherent, useful text requires learning intricate patterns of language, grammar, semantics, and world knowledge (as represented in the text).
It's a very clever gimmick, sure. But we can already see the cracks in it. It has already pilfered the majority of the available training data, i.e. stealing other people's content across the internet. It can't "scale" when there is nothing new left to feed it, and it absolutely cannot feed on its own output.. that has been proven already, it just degenerates rapidly into nonsense.. which shows in a demonstrable way why LLMs have zero intelligence.
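You don't need a datacenter to see the shape of that degeneration.. here's a crude toy sketch (obviously nothing like training a real LLM, but the feedback loop is the same idea): resample a "model" from its own output and watch the variety drain away.

import random

random.seed(0)

# Toy "model output": 50 samples drawn from 10 distinct phrases.
pool = [f"phrase_{i}" for i in range(10)]
samples = [random.choice(pool) for _ in range(50)]

for generation in range(20):
    # Each generation "trains" on the previous generation's own output:
    # its distribution is just whatever it happened to emit last time.
    samples = [random.choice(samples) for _ in range(50)]
    print(f"generation {generation}: {len(set(samples))} distinct phrases left")

The distinct count can never go back up, because resampling your own output can't invent anything new.. rare phrases just vanish one by one.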

They can't scale the hardware or power either, and the whole thing is operating at a massive financial loss already due to that. And for what? So people can template emails? Create some lame meme artwork? The actual output is nowhere near matching the inputs, not even close. No business-critical applications are ever going to trust 'probable best guess' AI, which is what it is. It might work for replacing journalists who write useless trash, templating emails, or chatbot systems.. but that's it.

It's a hype train. It's so obviously a massive hype train. The tech industry is the principal focus of cultural-financial investment right now, they have to sell something!
 
Programmers know that already, it's an inside joke.. where you spend x10 the time trying to automate a task rather than just doing the fucking thing in the first place.

I work in software and use LLMs every day to help generate code with real business value.

A year ago, what you say would have been more true. Today, it's not.

Progress in this field can be measured in weeks, not months or years.

Sorry, but I don't see the point of talking about it speculatively when better models than the one you're thinking of are already in common use.

Btw I think the hype phase has already calmed down and now we're on the sober second look where actual impact is being measured and not just taken for granted.
 
This is not much more than a sophisticated auto-complete system, there isn't any actual intelligence in there.
You're using too narrow of a definition for "intelligence". AI is—by definition—a form of intelligence. It's capable of performing tasks that typically require human intelligence, such as reasoning, learning, and problem-solving. While it doesn't possess the same consciousness or emotional intelligence as humans, AI systems can be very intelligent in their specific domains. You're looking for the wrong thing from LLMs. So long as you know the limitations, you can put this tool to good use.

It's a hype train.
So is literally every corporate idea. Speculative investing alone is predicated on riding various hype trains to profits. In crapitalism, hype is necessary to make sales, as it's a part of marketing and branding. This is not the big reveal you think it is; it's just cynicism without warrant.

The tech industry is the principal focus of cultural-financial investment right now, they have to sell something!
Basic economics, my guy. This does not automatically mean what's being sold is devoid of value.

The cynical view of AI is super reductive, jumping to extremes, and missing the point entirely. There's no sophistry, and why are you so cynical? It's too dismissive to call it a "gimmick", a simplistic trick to fool rubes and a rip-off of Grammarly, and that's all it's good for, right? … 🤣

Personally, I think it's crazy useful as a guide to help developers write better and/or terser code, parse out minified scripts and explain them, point out redundant and unused styles, reverse engineer code tricks, identify gotchas, help with smoke-testing/dog-fooding application code, plan sprints, and many other useful things. No, it cannot be relied on 100% alone, nor should it be. But put to the right use, if nothing else, AI can seriously speed up one's workflow. It's been very useful in a number of significant ways with some of the work I do as a freelance creative technologist and consultant. It's another useful tool in the tool belt of modern technology. Sorry if you don't, won't, or can't see that perspective.

The proof is in the proverbial pudding. Res ipsa loquitur.
 
@unodelacosa and @thujone said what i was going to say (and a whole lot more) better than i could say it. nice posts.

The amount of money riding on this means the hype has completely dwarfed the reality of the situation.

i think the hype is ahead of the reality but i don't think it dwarfs it any more.

If you asked GPT to describe something to you, it may give you a correct description of that object, but it has absolutely no conception about anything it has just relayed to you.

what does that matter? what's the difference?

if you asked me to describe a sunset and i asked chatgpt to do it, and i then told you:

"the sun slips gently behind the horizon, a golden coin sinking into the velvet purse of night. sky burns with the hush of farewell - crimson, amber, and blush bleeding into each other like a secret whispered across the clouds.

shadows stretch long and slow, as if time itself is pausing to watch. the world holds its breath - bathed in the final sigh of warmth before stars dare to speak.
"

would you know that it was me, or some poet, or a.i.?

and, why would it matter?

alasdair
 
 
You're looking for the wrong thing from LLMs. So long as you know the limitations, you can put this tool to good use.

I think the AI label has been misleading in the sense that it conditions people to think of it more as a competitor to humans than a tool for humans.

So it makes sense that it's hard to use for anyone who views using it as training their own replacement.
 
You're using too narrow of a definition for "intelligence". AI is—by definition—a form of intelligence.
AI is first and foremost a buzzword. True AI has never been achieved. People just started calling LLMs AI. And no, there arguably is no intelligence here. It just appears intelligent.
There is no mind, no understanding, and no consciousness. There is this famous thought experiment, the Chinese room:

"In the thought experiment, Searle imagines a person who does not understand Chinese isolated in a room with a book containing detailed instructions for manipulating Chinese symbols. When Chinese text is passed into the room, the person follows the book's instructions to produce Chinese symbols that, to fluent Chinese speakers outside the room, appear to be appropriate responses. According to Searle, the person is just following syntactic rules without semantic comprehension, and neither the human nor the room as a whole understands Chinese. He contends that when computers execute programs, they are similarly just applying syntactic rules without any real understanding or thinking."
 
if the desired outcome is achieved, what does that matter?

would critics be happier if it was called pattern simulation? or machine-fueled approximation?

alasdair
The desired outcome is not always achieved. It can make the stupidest mistakes while sounding completely plausible. They all also hallucinate on occasion, because you are not interacting with an intelligent being that comes up with thought processes but with a piece of software that is good at pretending and has zero awareness. It doesn't "think". Its "intelligence" IMHO is closer to that of an insect than a human being.
 
i acknowledge and understand that.

but, in the event that it is, why does it matter?

alasdair
The previous question was whether present LLMs are intelligent or not, and what intelligence means in this context. And IMHO they are not intelligent. That's all. I'm all for reliable LLMs where this question doesn't matter, but it is questionable whether we are getting closer to that with present approaches.
 