
Is A.I. waiting to take over?

"Please tell me what item #39 is in the array I provided".

"No, this is #39; "bla bla". You gave me #89. Please give me #40". Proceeds to give me #92.

How the hell is business supposed to trust something that can't even count properly.
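For what it's worth, when the answer actually matters, a few lines of ordinary code never miscount. A minimal sketch with made-up data (hypothetical, just to show the contrast with asking a model to count):

```python
# A deterministic lookup never miscounts: "item #39" is just index 38.
# The array below is invented; the point is that indexing is exact,
# unlike asking a language model to count for you.

items = [f"item {n}" for n in range(1, 101)]  # a 100-element array

print(items[39 - 1])  # item #39, counting from 1 -> "item 39"
print(items[40 - 1])  # item #40 -> "item 40"
```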
 
Just saw this news story about Croydon Council, a borough in South London that is bankrupt. It has implemented customer-facing AI to handle Council Tax enquiries, which they admit is only correct 80% of the time lol. Personally I think the most interesting thing is that they're bankrupt but somehow got this AI bollocks; clearly the government wants to push this crap into every corner of our lives, "to save money" of course. Hell, they might even make a little money if the AI gives out incorrect information and people get fined for getting their tax payments wrong haha.

Council begins trial with AI system which is only 80% accurate
CROYDON IN CRISIS: Without any advance notice, the borough’s bureaucrats have unleashed on the public an imperfect system to handle Council Tax enquiries.
 
You know AI is sort of a buzzword for how an algorithm is 'trained'.

They need not be trained in a human language. Most are not. It's just the allure of what Turing described as an 'Oracle' and even then, I don't think Turing was assuming a human language was always appropriate.

AI does much better on specific tasks than on attempting to navigate the logic of a human language to solve every possible question.
 
for sure. a.i. is not magic. it's a tool made by humans and it has applications and limitations. further, different ai implementations have different goals and methods.

like any task, you need to first choose the correct tool for the job and then you need to use that tool properly.

we can't comment on the specifics of ss' case without some sample data but it feels a bit like a self-fulfilling prophecy. we don't know. if one actively anticipates, or even hopes for, failure, who knows how that affects the input and the outcome.

alasdair
 
I posted that Adam Curtis video because he explains that even back in the 1980s business was using computers to model patterns of human activity. Financial activity to begin with, but they found lots of other cases. The catch was that the ones that modelled finance just modelled finance, and so on.

I'm reminded that there still remain games with a fixed set of rules that, as things stand, AI cannot beat. Infinite chess is an example. Maybe that one has been tackled by now, but the point is that, like thermodynamics, AI only seems to work within a contained system. And since it exists within a contained system, it isn't possible for it to be entirely consistent; self-examination would be an example of something not possible (AFAIK).
 
A friend who understands how large language model AI chatbots work informs me that it's now possible to freely download such LLMs.

Whenever something is free, I ask 'Cui bono?'. Clearly giving away a huge training set suggests that somewhere down the line the creators see a profit in doing so. I don't know what that might be. I don't use much modern technology so am not in a position to know but it does strike me that if one introduces lies into a model and people begin to rely on AI answering questions, said lies may well benefit the creators.

I still have an extensive paper library on the basis that nobody can offer an update in which the facts have changed. Sure, I will always endeavour to have three sources as is good practice, but if it's ever less than three, I will flag that. If it's only one I will specifically state that confirmation was unavailable.
 
A friend who understands how large language model AI chatbots work informs me that it's now possible to freely download such LLMs.
You can, but they're nothing like GPT and the other massive models. Consumers simply don't have the hardware required to operate those; you need datacentres for it.
I don't use much modern technology so am not in a position to know but it does strike me that if one introduces lies into a model and people begin to rely on AI answering questions, said lies may well benefit the creators.
From a social engineering standpoint, LLMs are much like social media: they are a gateway to intellectual laziness and a decline in cognitive ability. The evidence is already there for that, unequivocally. If you want your population to be intellectually handicapped for political purposes it's a no-brainer. The general mass of people never really read beyond a headline, so with GPT, if they ask a question are they going to bother verifying anything? Nope.

There are also the inherent biases, because these systems are only as transparent as the human architects make them. You only need to look to Google's search algorithm to see how this is going to pan out. It's no different to television back in the day: it is currently the most powerful mind-shaping technology on the planet, and political powers are evidently going to exploit it.

I'm fortunate that the stuff that really interests me, like psychology and philosophy, are two domains AI can't ever touch. Even its regurgitations will never be worth a damn in comparison to just reading the source material of men who thought.
 
You can, but they're nothing like GPT and the other massive models. Consumers simply don't have the hardware required to operate those; you need datacentres for it.

From a social engineering standpoint, LLMs are much like social media: they are a gateway to intellectual laziness and a decline in cognitive ability. The evidence is already there for that, unequivocally. If you want your population to be intellectually handicapped for political purposes it's a no-brainer. The general mass of people never really read beyond a headline, so with GPT, if they ask a question are they going to bother verifying anything? Nope.

There are also the inherent biases, because these systems are only as transparent as the human architects make them. You only need to look to Google's search algorithm to see how this is going to pan out. It's no different to television back in the day: it is currently the most powerful mind-shaping technology on the planet, and political powers are evidently going to exploit it.

I'm fortunate that the stuff that really interests me, like psychology and philosophy, are two domains AI can't ever touch. Even its regurgitations will never be worth a damn in comparison to just reading the source material of men who thought.

I believe developers often refer to the 'black box' problem, i.e. they don't actually understand exactly how these AI systems work.

My friend (who has 2 x 6TB HDs) was able to download some of those LLMs. He pointed to the fact that as yet, nobody appears to have used academic papers as their chief source of training text. I pointed to the fact that while Sci-Hub is still going strong, the few publishers who control almost 100% of journal articles are exceedingly litigious. Authors are not paid if their academic article is published in one of those journals. Reviewers who check articles are not paid. Yet not only do the publishers demand thousands of dollars per annum for institutions to subscribe to each journal they publish, but they then charge anywhere from $12 to $60 for researchers to access a specific article.

Is that why some of those LLMs were inventing references? I don't know, but it would at least make sense.

All I can say is that AI may currently be of value to people who only seek a basic understanding of a topic. But I'm old enough to have noted that even books can disagree on a fact, hence my using three sources. That stems from a family of journalists. You cannot rely on a single data-point EVER. That was driven into me. After all, get a fact wrong as a journalist and someone might sue, so there is a benefit in performing the checking.

But we are going to see generations assuming AI is correct without needing to check. It's clearly of benefit to some parties to alter those facts. Hence my preference for paper references. Less convenient, but not subject to unflagged revision.

BTW the funniest example is the wife of our ex-PM, whose birth certificate clearly names her 'Cherry Booth' (now magically 'Cherie Blair'). Copies of that birth certificate still hang on the walls of many newspaper offices.
 
But we are going to see generations assuming AI is correct without needing to check.
See, I think we're only going to see some initial early adopters place too much confidence in AI and get burned from doing so. Between getting burned, and the zealots who LOVE to take every opportunity to proselytize and rail against the supposed wretched evils of sloth they think AI will inevitably bring forth, I think people are wising up to the fact that there are limits to what AI can currently do. And all the while the technology is steadily getting better and better…

And also like you stated, use at least three sources for any disputable facts, so AI would de facto need to be checked anyway. Sure, the size of your LLM matters, but it's not how big it is, but how you use it. (And yeah, that's what she said 😂).
 
"A.I." is essentially just a statistical model being ran on a computer. There's really not much that is intelligent about it as it only updates its parameters based on the data it's given. That being said thanks to silicon valley it is now a buzz word and all sorts of businesses think it will magically improve things in applications even though if you know anything about "A.I." you will know it will make things worse.

All that being said, it does have legitimate uses and will have a profoundly positive effect in some applications.

In my opinion most problems that "A.I." solves can be solved much better by using other mathematical models, and in a lot of cases even better by just having a human do it :p .
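To make the "statistical model updating its parameters" point concrete, here's a minimal sketch (toy data I made up, plain Python, nothing specific to any real product). Fitting a line by gradient descent is exactly "updating parameters based on the data it's given"; an LLM is the same loop with billions of parameters:

```python
# Minimal sketch: "learning" = repeatedly updating parameters to fit data.
# Toy linear regression by gradient descent on invented (x, y) pairs.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]

w, b = 0.0, 0.0   # the model's parameters, starting at arbitrary values
lr = 0.01         # learning rate: how big each update step is

for step in range(5000):
    # gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w  # the "update": nudge parameters to reduce error
    b -= lr * grad_b

print(f"fitted: y = {w:.2f} * x + {b:.2f}")  # roughly y = 2x
```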
 
That being said, thanks to Silicon Valley it is now a buzzword, and all sorts of businesses think it will magically improve their applications, even though if you know anything about "A.I." you will know it will make things worse.
The only thing it has been able to magically improve is pumping up those sweet sweet stock market numbers. Once the hype train loses its momentum it could be quite the bubble pop indeed.
In my opinion most problems that "A.I." solves can be solved much better by using other mathematical models, and in a lot of cases even better by just having a human do it :p .
One of the fundamental issues, going back almost 100 years now, is the false equivalency between 'information' and 'intelligence'. Computing, genetics, and brain research, all assumed that paradigm between them - "the brain is an information processor"/"computers are information processors"/"DNA is a storehouse of information".

This leads to where we are now, where we call gargantuan information processing (LLMs) "AI".. when in reality there is zero intelligence in there! It also leads to this belief that AGI is even possible based on pure raw logic (information processing), when intelligence has so many non-logical qualities beyond just information e.g. intuition.

It's a great representation of our scientific paradigm as a whole. Just make the thing bigger and bigger, pump more and more energy into it, and it's sure to yield better results!
 
"A.I." is essentially just a statistical model being ran on a computer.

I worked in software development for decades and was there on the spot when the 'Dot com boom' occurred. Essentially EVERY investment company felt that they had a visceral need to be 'in the game', although it quickly became clear that they didn't really know what that meant.

It seems that just like 'Dot Com', 'A.I.' is merely a fairly non-specific term that businesses hoping for investors will slap into their portfolios so venture capitalists will think well of them.

Is a software house that develops computer games strictly a 'dot com' business? I mean, back then games came on disc, CD or cartridge. Sure, they all had websites, but it wasn't a core part of their business. Yet they were vastly overvalued by investors.

So I put it to you that we may well be witnessing all manner of businesses who are in some minor way using AI for SOMETHING being considered part of the AI boom. I bet it's possible to set up an entirely fictitious business with AI in its name and you will be sure to see venture capitalists who don't CARE what the business does investing.

bluelight.ai could be a HUGE earner. Just use all the rubbish spoken here as the training set and off you go...
 
I just remembered reading, YEARS before the term A.I. became a hot-button topic, that the team behind Google Translate began with an algorithmic translation method, then added scraped data that existed in two or more languages to supplement the algorithmic method, until one day (history says) whoever headed the team simply asked what the quality was like if the algorithm was rejected.

In my memory this story occurred before AI was such a big deal, so I cannot vouch for the veracity of the statement, but I can well see someone who has been scraping data for years simply wondering if the algorithm can be discarded. As a programmer, I can well see someone performing such a test on the basis that whatever the outcome(s), it would likely provide useful information.
 
I just remembered reading, YEARS before the term A.I. became a hot-button topic, that the team behind Google Translate began with an algorithmic translation method, then added scraped data that existed in two or more languages to supplement the algorithmic method, until one day (history says) whoever headed the team simply asked what the quality was like if the algorithm was rejected.

In my memory this story occurred before AI was such a big deal, so I cannot vouch for the veracity of the statement, but I can well see someone who has been scraping data for years simply wondering if the algorithm can be discarded. As a programmer, I can well see someone performing such a test on the basis that whatever the outcome(s), it would likely provide useful information.
I think for the task of translating languages they still used a statistical model. It just wasn't something that would be called "A.I." (https://en.wikipedia.org/wiki/Statistical_machine_translation). The link I just gave doesn't give a whole lot of info on how they did it, just that it was based on Bayes' theorem :LOL:. I believe later on they replaced it with an "A.I." model that was some sort of encoder/decoder architecture (around 2015, 2016 I think).
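For anyone curious, the Bayes' theorem trick there is the "noisy channel" decision rule: pick the English sentence e maximising P(e|f), which by Bayes' rule is proportional to P(f|e) * P(e), i.e. a translation model score times a language model score. A toy sketch with candidates and probabilities I invented (not real SMT code):

```python
# Toy "noisy channel" scoring, the Bayes-rule idea behind statistical MT:
#   best e = argmax over e of P(f | e) * P(e)
#            (translation model)  (language model)
# All candidates and numbers below are invented for illustration.

f = "der Hund"  # foreign sentence to translate

candidates = {
    "the dog":   {"tm": 0.70, "lm": 0.050},    # P(f|e), P(e)
    "the hound": {"tm": 0.60, "lm": 0.004},
    "dog the":   {"tm": 0.70, "lm": 0.0001},   # bad English -> low P(e)
}

best = max(candidates, key=lambda e: candidates[e]["tm"] * candidates[e]["lm"])
print(best)  # -> "the dog"
```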
 
I think for the task of translating languages they still used a statistical model. It just wasn't something that would be called "A.I." (https://en.wikipedia.org/wiki/Statistical_machine_translation). The link I just gave doesn't give a whole lot of info on how they did it, just that it was based on Bayes' theorem :LOL:. I believe later on they replaced it with an "A.I." model that was some sort of encoder/decoder architecture (around 2015, 2016 I think).

I stand corrected. I just recall the fact that initially they used mixed methods and then tried each in isolation and quickly realized which one performed better.

I have to ask: did they REALLY use AI, or did they simply append the term 'AI' to their existing method? It's just my experience that we didn't change our business model one bit, but as soon as we had a website (built by someone else I might add), we were suddenly a 'dot com' business and the share price went crazy.

In short: how much of the AI bubble is businesses whose business model is AI, and how much is what I might term 'AI adjacent'? Because we couldn't work out WHY we were suddenly worth four times as much... but happily took the shares we were given.
 
Now while I'm no expert on this, haven't researchers found simple ways for 'locked in' patients to communicate on a very basic level? As I understand it, the locked-in patient is asked to either think about moving around the rooms of their home OR think that they are on the touch-line, preparing to serve in a game of tennis. So a simple binary option.

I seem to recall specialists in the UK using what on its surface appears a really odd method, but it did turn out to work. Now it appears to be the usual way 'locked in' patients are able to confirm that they are conscious.

So it's not as if we haven't had the technologies to do this for some time.

I'm certainly not knocking anyone who refines the system, but I'm uncertain whether they can claim to have worked from a clean sheet or looked at previous work (which would seem a logical thing to do).
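The clever part, as I understand it, is that those two imagery tasks produce clearly distinguishable activity patterns, so the scanner output reduces to a one-bit yes/no channel. A minimal sketch of just the decoding logic, with invented readings (hypothetical, not a real pipeline):

```python
# Hypothetical one-bit decoder for the imagery paradigm described above:
# "imagine tennis" and "imagine walking around your home" activate
# different brain regions, so comparing two activity readings gives
# a yes/no answer. All numbers here are invented for illustration.

def decode_answer(tennis_activity: float, navigation_activity: float) -> str:
    """Map the stronger of the two activation readings to yes/no."""
    return "yes" if tennis_activity > navigation_activity else "no"

# The patient is told: "imagine tennis for YES, your home for NO."
print(decode_answer(tennis_activity=0.82, navigation_activity=0.31))  # yes
print(decode_answer(tennis_activity=0.12, navigation_activity=0.77))  # no
```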
 