No, A.I. will not take over. Stop watching the Terminator movies. Okay, everything after T2.
For real though.
I think it's quite telling how they're trying to jam it into all those systems, yet something like the banking sector still predominantly runs on COBOL, a language that's over 60 years old. I guess they don't really give a shit if your Tesla drives you off a bridge, but hell will freeze over before they endanger the repository of their wealth.
It's true that AI is popping up in a lot of new tech, and yeah, a surprising amount of core banking infrastructure still runs on older systems like COBOL. However, comparing them directly like that is a bit misleading because the situations are really different. Replacing those core banking systems is incredibly complex and risky. We're talking about systems that handle trillions of dollars reliably. A mistake there could be catastrophic—financially, culturally, and globally—which is why they are so cautious and heavily regulated. It's not just that they don't want to upgrade.
Also, banks use a ton of AI already, just maybe not for ripping out that absolute core processing yet. Think fraud detection, risk analysis, customer service chatbots – they're definitely adopting modern tech where it makes sense without destabilizing the foundation. The type of risk is different, too. A failure in a core bank system could cause systemic financial meltdown, whereas a failure in a car's AI, while potentially tragic for those involved, doesn't carry the same kind of global financial risk.
Saying they don't care about safety isn't quite fair either. While companies obviously care about profit, huge safety failures are also terrible for car companies – think lawsuits, recalls, reputational damage, and regulatory crackdowns. The reasons for the different paces of adoption are more complex than just 'protecting money vs. endangering people'.
It is a gimmick, because it isn't approximating intelligence at all... there's no reasoning or any logical deduction going on under the hood, just a probability calculation about what the next token should be.
I disagree. Maybe it isn't intelligence, but it damn sure approximates it. That probability calculation can be surprisingly effective, not to mention it passes the Turing Test.
If intelligence is defined strictly as human-like consciousness & reasoning, then sure, current LLMs don't qualify. But if intelligence includes the ability to learn complex patterns, process information, solve certain problems, and generate sophisticated outputs based on input, then LLMs demonstrate aspects of intelligent behavior, even if the underlying mechanism is statistical pattern-matching rather than human-like cognition.
This is why it hallucinates, and why it can't count accurately.
You're right about the core mechanism – at a basic level, it's predicting the next token based on probability. And yes, that's exactly why it struggles with precise counting and why it can 'hallucinate' facts: there's no true understanding or built-in fact-checking module. Because the model is generating statistically likely sequences rather than consulting a factual database or performing symbolic logic, it can confidently produce plausible-sounding falsehoods ("hallucinations") or fail at tasks requiring precise, non-linguistic reasoning (like counting or complex math). Its "knowledge" is embedded in the patterns of its training data, including any biases or inaccuracies present there. These shortcomings don't negate its overall value and utility as a tool to augment human intelligence and capability.
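To make that concrete, here's a minimal sketch of what "just predicting the next token" looks like in practice. The probabilities are invented for illustration (no real model exposes a tidy four-entry table like this), but the structural point holds: nothing in the loop ever asks whether the output is true.

```python
# Minimal sketch (the probabilities here are made up, not taken from any real
# model) of what "just predicting the next token" means in practice.
import random

# Hypothetical distribution over continuations of "The capital of Australia is"
next_token_probs = {
    "Canberra": 0.55,    # correct, and common in the training data
    "Sydney": 0.35,      # wrong, but appears in lots of similar sentences
    "Melbourne": 0.08,
    "Perth": 0.02,
}

def sample_next_token(probs):
    """Pick one token at random, weighted by its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The capital of Australia is"
print(prompt, sample_next_token(next_token_probs))
# Roughly one run in three prints a fluent, confident, wrong answer --
# nothing in this loop ever consults a fact store or checks the claim.
```

A real model does the same kind of weighted pick over a vocabulary of tens of thousands of tokens, with probabilities produced by a learned network instead of a hand-written table, which is exactly why fluency and factual accuracy can come apart.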
You're dismissing a couple of important things here: 1. Scale – these models have billions or trillions of parameters, trained on internet-scale datasets. 2. Emergent Abilities – the complex patterns learned during training lead to capabilities that weren't explicitly programmed, such as translation or code generation. The ability to calculate the correct probabilities to produce coherent, useful text requires learning intricate patterns of language, grammar, semantics, and world knowledge (as represented in the text).
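For a feel of what "calculating the correct probabilities" requires, here's the dumbest possible version of that idea: a bigram counter over a toy sentence. A real LLM is doing something vastly richer (billions of parameters, internet-scale text, long-range context), but even this toy has to extract statistical patterns from its training data before it can assign sensible next-word probabilities.

```python
# Toy bigram "language model": learn next-word probabilities by counting.
# Nothing like a real LLM's scale, but it shows that the probabilities
# themselves come from patterns in the training text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word_probs(word):
    """Relative frequency of each word seen after `word` in the corpus."""
    counts = bigram_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(next_word_probs("sat"))  # {'on': 1.0}
```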
It's convincing in the same way a politician can reel off word salad.
This is a dismissive analogy. While LLMs can produce vague or nonsensical output, they can also produce highly structured, informative, and useful text, code, or analysis when prompted correctly. The quality varies greatly depending on the model, the prompt, and the task.
There's no intelligence in there whatsoever. Hence, a gimmick.
It's definitely not a gimmick. A "gimmick" suggests something superficial with little real utility. LLMs, despite their flaws, have shown significant utility in things like drafting emails, writing reports, translation, summarization, creative writing, brainstorming, coding assistance and code generation, information retrieval (though fact-checking is essential), and idea inspiration. They are tools that can augment human capabilities, even if they aren't "thinking" machines.