• S&T Moderators: Skorpio | VerbalTruist

Is A.I. waiting to take over ?

TL;DR below… I think people are underestimating the power of human language and how it's one of the clutch differences between humans and the rest of the animal world. Our language is the key to our intelligence. It's how we pass along knowledge from generation to generation, and it's why humans are insanely intelligent today compared to our ancestors from yesteryear going back into time. It's also why human childhood lasts significantly longer than any other species' childhood. Development of our complex frontal lobe-based intelligence takes time, nutrition (esp. protein), and strategic education.

It's interesting to me that the Niponese do not share the same fear of AI in their culture, and are generally optimistic / positive about its integration into quotidian human life, having already experienced many of the benefits and convenience this technology has to offer. Also, side note, and speaking of Nippon ☞ I recommend the anime movies and series, Ghost In The Shell if you're unfamiliar. Some of these philosophical concepts are explored in a sci-fi / cyberpunk future Tokyo… and in a cool, entertaining, anime style…

The previous question was if presently LLM's are intelligent or not and what intelligence means in this context.
I thought the actual question was: "Is A.I. waiting to take over?" I think, at least, one thing we probably all agree on is that: no, AI is not "waiting to take over." That would indicate desire, a human emotion (unless it's executing some iRobot-type shit I guess). I think that's one of the other key things, besides language, that makes us human ☞ our emotions, which are built into the core of our beings, our minds, and it's in our DNA. And perhaps at the heart of it is the fact that we are all mortal and will die one day. That realization comes when we are young and haunts us our entire lives.

I don't know whether AI will ever get to the point where it can experience consciousness, pain, joy, nostalgia, homesickness, gratitude, rage, wonderment, sadness, euphoria, depression, guilt, regret, forgiveness, redemption, etc., but if we're going to continue to use the term "smart phone", I think calling AI "intelligent" is not that difficult to accept. Emotional IQ almost seems like a different technology, so to speak. Surely, emotional IQ counts toward overall intelligence, which AI is still lacking, but I was never expecting that from AI in the first place. (But maybe I should've? After all, I have seen Bladerunner a bunch of times, and read the works of William Gibson, Neal Stephenson, and Phillip K. Dick, et al.).

TL;DR ☞ LLMs probably aren't going away anytime soon, and not to sound like a technocrat here, but it seems wiser to explore what they can do for you before wholly dismissing them as not intelligent and/or devoid of value, a gimmick.

I am reminded of those people in the mid 90s who said: "that internet thing is just a fad; no one will be using that in another 5 or 10 years, watch." Hell, before that, there were people who thought no one would bother using a telephone when you can just send a perfectly good telegram via delivery person. “CandyGram for Mongo! CandyGram for Mongo!” 🤣
 
Last edited:
The desired outcome is not always achieved. It can make the stupidest mistakes while sounding completely plausible. They all also hallucinate on occasion because you are not interacting with an intelligent being that comes up with thought processes but with a piece of software that is good at pretending but has zero awareness. It doesn't "think". Its "intelligence" IMHO is closer to that of an insect rather than a human being.
Thank you! Jesus, why is it so hard for some people to just acknowledge this basic understanding of what we're talking about.
i acknowledge and understand that. but, in the event that it is, why does it matter?
Is this even a serious question? We're not talking about LLM's being used to proof read some useless half-arsed essay by a teenager in school, something with no consequence. We're talking in the context of LLM's being shoved and jammed into business applications and business products, being pushed front and centre in a world that is now dominated by technology.

Here's a practical example. Facial recognition technology. This is currently being deployed in the UK and already we've had people being detained at length after being falsely identified by these systems.. systems which are built on very niche AI functionality for that specific task.

Do you really want AI in systems that is going to be right only 80% of the time? Where its authoritative answer is taken as gospel until you can proof otherwise? That invoice for a bill that is wrong.. that prescription for a medication that is wrong.. and let's not pretend it won't happen even with people supposedly 'proof checking' the AI output, because people are lazy enough as it is and AI will just exacerbate that more.
 
I work in software and use LLMs every day to help generate code with real business value.
There's more than enough people online discussing this topic at length, and sorry but your anecdotal perspective does not weigh against the very detailed accounts I've read by others. Yes it can be used to generate boilerplate, unit tests, and what have you, but it still hallucinates.. it still can't solve anything.. it can only repeat what it has already seen in other peoples code. That is just auto-complete.

Give it a new framework or system design architecture it hasn't seen before in its training data and watch it fall flat on its fucking face. It can't reason, it can't think laterally, it is absolutely useless beyond its ability to churn out "code" it has already seen.

And while that is a productivity boost, that's all it is.. a boost. It's not a revolution. It will raise the bar, just like Excel did, and then that bar will become the baseline and people will just build on top of that plateau.
Sorry but I don't see the point talking about it speculatively when better models than the one you're thinking of are already in common use.
What are you thinking of then. Cursor? Copilot?

It was a ball ache just to get GPT to output some dummy JSON data for my code work. It couldn't iterate properly, it ended up repeating itself where generative text was required, going off on a tangent when given repeated instructions to stay on a specific topic area. If you're using it to produce production code you best be checking every single line of it lmao..
what does that matter? what's the difference?

if you asked me to describe a sunset and i asked chatgpt to do it, if i then told you:..
Because you're effectively talking to a politician, someone who will tell you with confidence what they're saying is correct when in fact it could be seriously misleading or just flat out wrong. If these systems can not even do the most basic reasoning and logic deduction then you will have to remain cynical of all its output, because it's going to hallucinate on you, and that is a serious waste of your own bandwidth.

And in a job or product scenario.. that is going to lead to intellectual laziness and mistakes compounding upon each other. This isn't rocket science. We're seeing it already with young people who are relying on AI to do their work for them, to do their thinking for them, it is absolutely rotting them - accelerating an already brain rotting ecosystem. You put that in the hands of the majority of people who are already lazy and overstressed by our society.. it's not going to be a positive outcome.

Mix that with bureaucratic systems and you're asking for real trouble, because you'll end up in scenario where you'll be arguing against 'the algorithm' that is wrong and a human being who 'proof read' the output and thought it was correct because they're now intellectually incompetent as well! What a fucking ball ache that will be.
 
You're using too narrow of a definition for "intelligence". AI is—by definition—a form of intelligence. It's capable of performing tasks that typically require human intelligence, such as reasoning, learning, and problem-solving. While it doesn't possess the same consciousness or emotional intelligence as humans, AI systems can be very intelligent in their specific domains. You're looking for the wrong thing from LLMs. So long as you know the limitations, you can put this tool to good use.
But it does not actually do any of the underlined. Do you not understand how these LLM's work? It has no reasoning capability, no learning capability, no problem-solving capability. None of those capacities are part of the underlying architecture, that's not how the system works! It can produce output that may convince you it can do those things, but that is not the same thing.

You give it anything it has never seen before in its training data and it can't "solve" it, because it requires by default that data on which to build its neural network patterns, because it does not actually understand the data itself.

Give it a new coding framework not in its repository of training data, and hand it the documentation for the new framework. It can't connect the two, because it doesn't even understand language itself. And if it can't understand language at the root level, then there is nothing to build on it (reason, learning, problem-solving).

This is why it is an imitation of intelligence, a very clever one granted, but why it is still a gimmick.
Basic economics, my guy. This does not automatically mean what's being sold is devoid of value.

The cynical view of AI is super reductive, jumping to extremes, and missing the point entirely. There's no sophistry, and why are you so cynical? It's too dismissive to call it a "gimmick", a simplistic trick to fool rubes and rip-off Grammerly and that's all it's good for, right? … 🤣

Personally, I think it's crazy useful as a guide to help developers write better and/or terser code, parse out minified scripts and explain them, point out redundant and unused styles, reverse engineer code tricks, identify gotchas, help with smoke-testing/dog-fooding application code, plan sprints, and many other useful things. No, it cannot be relied on 100% alone, nor should it be. But put to the right use, if nothing else, AI can seriously increase one's workflow. It's been very useful in a number of significant ways with some of the work I do as a freelance creative technologist and consultant. It's another useful tool in the tool belt of modern technology. Sorry if you don't, won't, or can't see that perspective.
I never said it wasn't useful - you guys aren't using GPT to summarize my responses are you?..

I did say repeatedly throughout my points in this thread that these LLM's are basically the new Microsoft Excel. They are productivity boosters, but that's all they are. But even then Excel was more of a revolution because it is far more logical in its I/O than these LLM's are.. Excel does not hallucinate 99.9% of the time (when there are no bugs).

The cynicism is absolutely warranted if only for the basic point of the fact that all of these AI systems are operating at a net financial loss, due to the massive power and hardware requirements. Which I find hilarious given a lot of the tech-scene bros are "so concerned" about climate change and the environment. So these systems shave off a little bit of time in your job, so what? Is that really worth the trade-off at the other end of the equation?

Given that these systems are not revolutionary but just minor productivity boosters.. productivity which in itself will become the norm in a short time and then any financial benefits folded back up the economic structure.. where is the actual net benefit to anyone? How do these system actually enhance our lives in any meaningful way? The evidence is there already that these systems are making people intellectually lazy, which is a psychological cost that precisely ZERO of these tech-bros have even considered.. I mean why would they when they've given the world FaceFart, InstaShite, and all the other brain rotting apps?

How can you not be cynical. Strip away the hype and take an objective look at these things. I think people are so desperate for novelty, for the star-trek future fantasy, that we're willing to overlook things and swallow all the tripe that the media should be filtering out but aren't because they're all bought out and sucking off the tech industry.. because that's where all the money is, where we have decided to bet our futures on.

I mean christ, enough people on here bash Elon for his antics.. but he is just one example of these tech-bro leaders who have been talking absolute shite for years. The exact same faults can be leveraged at all of them, including the ones pushing AI. Hyper-loop? Metaverse? Web3? There's already a graveyard of these failed hype wagons rotting at the side of the road.. it's really not a stretch to see how AI could fall by the way side as well.
 
These guys break down one issue with AI coding, related to its ability to seemingly solve issues by just bypassing the test suites without actually solving the problem. This highlights a theme I've seen by multiple seniors, where in practical terms you're really not saving much time if at all because you're forced to review very carefully all the output.. whereas you could have built up a working relationship with the juniors and you know where their knowledge base is at (and what the likely issues are in their code).

 
Anyone who works in software and entirely relies on LLMs to generate code with minimal work on their part is using the tools wrong. These tools have increased my velocity 10x+ and I'm learning new technologies faster because of how I use the tools

Almost everyone in my lab uses LLMs in some fashion from code or text generation to using them as part of the research project
 
Which is fine, that's a productivity boost.. but in reality all that is is just a more sophisticated google and stackoverflow search mechanism. The moment your "code" goes beyond just being simple snippets of problems already solved by other people to the point where you actually have to understand how the technology you're using works.. then you're in trouble. This is literally what has happened with the "vibe coding" thing recently, where people have built entire apps and gone live, made money, and then they have no idea how to scale it or debug inevitable issues.. and neither does the LLM. That's an extreme (and stupid) example, but it shows the underlying issue.

As a learning aid it has an inherent flaw too, because it doesn't know what it presents is true but will claim that it is. And if you don't know either then you can internalize something which is wrong. Something not pointed out in this thread yet is prompt injection.. where the underlying models are tarnished by injecting false information into it. If you're learning something, you want your source to be credible and trustworthy.. not right 80% of the time. You don't want your teacher hallucinating on you.

Like it's great for simple shell scripts, snippets, or what have you.. though personally I wouldn't touch it for anything interfacing with system architecture.. and its useful as a web search facility - is that just because Google has become so fucking awful over the past decade?

Read around. It's being used by a lot of people, obviously, and the complaints are all valid. From the human level of depreciating the creative aspect of coding, to utilizing it in complex testing workflows which ultimately take the same amount of time as before because you have to triple-check it hasn't bypassed the test suites.

These tools have increased my velocity 10x+ and I'm learning new technologies faster because of how I use the tools
But at what cost. Are you really learning, learning deeply.. beyond its ability to gather information for you better than Google (taking into account errors)? Your x10 productivity.. will just become the expected, given that's precisely what the past 50 years have shown.. how is that of any net benefit to you?

I would point to the detrimental affect on young people already as evidence that the psychological impact hasn't really been factored into this. It will be a short lived 'productivity boost'.. but again, at what cost.
 
ZIt's interesting to me that the Niponese do not share the same fear of AI in their culture, and are generally optimistic
So are the Chinese. They put Deepseek into a nuclear power plant already despite LLM's not being reliable. In Germany too politicians who have no idea what LLM's are think they can resurrect the nuclear industry by replacing workers that are in retirement now. It seems most people in charge ignore it's limitations.

I thought the actual question was: "Is A.I. waiting to take over?" I think, at least, one thing we probably all agree on is that: no, AI is not "waiting to take over."
Well it's not that I made claims about it's intelligence.

Back to topic:
Ai won't take over by just throwing more power at it at an exponential rate but should AGI and the singularity become a thing then this might lead to super intelligences which might very well completely outclass us intellectually. We might be creating our own gods.

We probably also will go into further intellectual decline since AI does nearly everything for us. We already can see a decline thanks to computers and mobiles even a 5 year old can operate without any qualifications.

This would make us hopelessly dependent. And who knows if the problem solving abilities of such super intelligent AI doesn't also include manipulation of humans in order to best achieve its goals. We likely wouldn't know since we can't inspect how AI come to it's conclusions.
 
But at what cost. Are you really learning, learning deeply.. beyond its ability to gather information for you better than Google (taking into account errors)? Your x10 productivity.. will just become the expected, given that's precisely what the past 50 years have shown.. how is that of any net benefit to you?

I would point to the detrimental affect on young people already as evidence that the psychological impact hasn't really been factored into this. It will be a short lived 'productivity boost'.. but again, at what cost.
Yes, I am actually learning. LLMs have not only augmented the way I previously learned new technologies but also having things like autocomplete in writing code is immensely beneficial.

Previously after outlining requirements I would essentially do a series of internet searches which resulted in me looking briefly at the features and docs of libraries, services, or other programs. Sometimes I would need to scour several internet forums to find caveats or advice on some weird error in implementation or configuration etc. I would need to go through documentation, searching manually, or find code/config snippets that other people had posted from their own implementations or elsewhere and adapt it to my use case. All of this takes a rather significant amount of time, and it is essentially repetitive manual labor that I have been doing for over 20 years.

LLMs excel at generating text. That is what they do at the end of the day. Whether that text is an appropriate response to the input, that is a more complicated question and varies wildly on the type of input and expected output and for many inputs, when crafted carefully, the outputs have been improving in usefulness for me. These text generation engines have been trained on a huge amount of text data available freely on the web and I know there are issues but I'm not getting into the copyright stuff here because that is only tangentially related to the point of this post. There are monstrous amounts of text related to programming, computer science, and mathematics available on the internet including all of the documentation available on websites, forums where users discuss error messages and troubleshooting, stackoverflow, listserv archives, and much more.

In this use case I am augmenting the work I mentioned previously. Instead of a series of internet searches I can now paste an error that I haven't seen before and the code for context and depending on the type of issue it gets solved far faster and I learn about multiple possible causes of the error along with various ways to fix it. This shrinks multiple work hours to days worth of searching and testing to minutes to hours. Just like the internet didn't replace journals or books, we still use them but we have the information available much faster now. This isn't to say I have fully replace what I did before, rather I augmented it. I still do the same research as before, especially since LLMs can make mistakes and there is a date cutoff for the training data and the internet searching features are currently far too inefficient for my purposes. I do often check for updated docs for new versions of software released after the cutoff and sometimes I do need to correct small syntax errors for specific libraries such as polars (chatgpt often writes groupby() instead of group_by() or withcolumn() instead of with_columns() [iirc] because of similarities with pandas) but again these are incredibly minor and far less work that the previous method without LLMs.

Well, you may say something like "but most of the learning was happening in the previous method with all of the research and synthesis!" and I would agree that there is a lot of learning that happens in synthesis and application. This learning still happens but has just shifted, when I learn about methods or functions with the help of an LLM not only do I type it in myself but I also work to implement those new methods elsewhere in new contexts. The synthesis is still happening but much more quickly due to faster access to specific implementation details and essentially combined historical documentation and troubleshooting all in one.

Now when I'm programming, I have an extension in my text editor which offers autocomplete. I know this doesn't sound useful if you come from the context of autocomplete for English language conversations because that has been absolute dog shit in my experience. For autocomplete, the LLM still has access to the same historical documentation and troubleshooting information and as I type it takes the context of the document to try and predict what will come next. if I'm writing a repetitive switch statement which uses a series of previously defined variables for example, in the past I would have probably written a case, return with some basic boilerplate of what I wanted, copy pasted it a bunch of times, then filled in the variable names. Now, as soon as I finish the first case the rest are autocompleted with variables all filled in as soon as I hit tab. This shortens the amount of time I spend typing significantly helping to protect my wrists from carpal tunnel and it helps make my code clearer, more maintainable, and easier to read and follow. I no longer desire short variable names to make typing less annoying and I can use clear, descriptive variable names so anyone can easily follow the flow of data.

I'm not saying everyone uses them the way I do, but this is what I have found to be optimal in the current situation. The situation may change in the future and I will adapt as that happens as I have always done.

LLMs have been nothing but a boon for anyone who programs and if you think otherwise you either don't program or you're missing the boat and will be left in the dust as everyone else has 4-10x your productivity
 
So are the Chinese. They put Deepseek into a nuclear power plant already despite LLM's not being reliable.
No fucking way 😂 Simpsons going to be become reality..

season 7 keyboard GIF

Ai won't take over by just throwing more power at it at an exponential rate but should AGI and the singularity become a thing then this might lead to super intelligences which might very well completely outclass us intellectually. We might be creating our own gods.
We're nowhere close to AGI. LLM's can't, and there's nothing in the technological pipeline elsewhere as far as I can see. Despite what Sam Altman might claim, it ain't gonna happen.
We probably also will go into further intellectual decline since AI does nearly everything for us. We already can see a decline thanks to computers and mobiles even a 5 year old can operate without any qualifications.

This would make us hopelessly dependent.
This is the real cost of all of this I feel. The psychological damage. The further enshitification of our way of life, because everything increasingly revolves around digital technology. It's the spiral of mediocrity. Big growth numbers must go up, regardless of the human cost.

You look at the vocabulary, diction, and depth of argument of people being interviewed from around 1950.. before TV and consumerism had really begun to rot peoples brains from early childhood. They are like a foreign species compared to people today. You look at Gen Z and the damage smartphones have done to them. AI is just going to compound all of that in the worst way, because it is taking away the creative fields and creativity, and making people intellectually lazy too.

Dependency is probably one of the end games of this. I'd bet money on that. Supplanting peoples own ability to think.. gee I wonder who could possibly benefit from that arrangement.
 
Yes, I am actually learning. LLMs have not only augmented the way I previously learned new technologies but also having things like autocomplete in writing code is immensely beneficial.
Yes, this is where it is beneficial.. which is what I have been saying. Excel 2.0. It is very useful for doing those simple single line auto-completes in certain scenarios.. because that is precisely how the LLM fits best.. next token prediction. As soon as you extend out from that, then you run into problems. Multi-line auto-complete is a complete hit or miss, and personally I'd rather spend the 5 seconds thinking about how to quickly do a multi-line writing using VSCodes own features. You can do an awful lot just by knowing the keyboard shortcuts!
Previously after outlining requirements I would essentially do a series of internet searches which resulted in me looking briefly at the features and docs of libraries, services, or other programs. Sometimes I would need to scour several internet forums to find caveats or advice on some weird error in implementation or configuration etc. I would need to go through documentation, searching manually, or find code/config snippets that other people had posted from their own implementations or elsewhere and adapt it to my use case. All of this takes a rather significant amount of time, and it is essentially repetitive manual labor that I have been doing for over 20 years.
Again, this is beneficial.. but it really is just a slightly more advanced version of Google's search functionality. I'd also raise the point again that Googles search has been degraded massively over the past 10 years. If it hadn't become so shit then the gap between it and the LLMs might be narrower than you think it might be. And for code specifically, stackoverflow; sometimes you can learn more just by reading every comment on the thread rather than looking just looking the accepted answer - the LLM's inevitably were going to be cherry-picking the top answers, even if they are massively outdated too.
LLMs have been nothing but a boon for anyone who programs and if you think otherwise you either don't program or you're missing the boat and will be left in the dust as everyone else has 4-10x your productivity
For now. Linting and basic one-line auto-complete is all I use AI for in my coding environment; actually being able to understand thoroughly from experience what is going on in the code as a result of that 'luddite' approach will be far more valuable in 5 years than some kid who is "10x more productive" than me now but doesn't have a clue what is really going on. Just wait. When the failures start mounting up, and the inevitable big company failures too, you will see a big pivot again as managers scramble for people who actually know what the fuck they're doing.

It's no good having the majority "x10" when they have not developed any of the actual skills required to be a good software engineer, skills you only get by doing (and failing/debugging). This is no different to juniors who hand in portfolios of 'apps' they 'made' by copy/pasting along from tutorials. Like you don't know shit, you learn nothing doing that. And if you're using AI to supplant all your thinking for you, then you do not learn even half of what someone who grinded it out has.

It might be profitable in the short-term, but the chickens will come home to roost.

EDIT: Just thinking about it. You try and get someone who has abused AI in their learning journey to setup a mono-repository with all the respective configuration, linting, etc. Watch them crash and burn 😂 I know because I've done that, and while AI's could probably get you started with summaries.. you're not going to resolve the inevitable clashes of packages and configurations. That kind of problem solving, deep-diving, lateral thinking, is something you only pick up by doing it.. it's not something AI can bypass for you.. and that problem-solving ability is ultimately what you are really getting paid for, not for being a code-monkey.
 
Last edited:
They can't scale the hardware or power either, and the whole thing is operating at massive financial loss already due to that.

My company supplies some of the hardware, and it is scaling. Massively, and at speeds we (suppliers) simply can't keep up with. nVidia continues to kick everyone's butt in the market, but others aren't too far behind. Scaling is both in speed (jumping from 112Gb/s to 224) and in terms of how many GPU per rack for computations. They are creating server racks that need forklifts to move or to even assemble.

Progress in this field can be measured in weeks not in months or years.

Now, we're being asked to build half racks so while parts of hardware try to catch up they can continue to work on the models and programming to be used on these newer generation servers. Measurement in 'weeks' is spot on. Hardware is struggling to keep up, but models are already waiting to run on that hardware once it is available.
 
Now, we're being asked to build half racks so while parts of hardware try to catch up they can continue to work on the models and programming to be used on these newer generation servers. Measurement in 'weeks' is spot on. Hardware is struggling to keep up, but models are already waiting to run on that hardware once it is available.
The human mind doesn't need football fields full of server racks to develop its mental abilities.Throwing more hardware at it is arguably cheating. The way I understand it is that new AI's got specifically trained to cover up flaws where previous LLM's would break. That doesn't mean progress though because other weak spots remain to break it. They (so far) were unsuccessful avoiding such flaws altogether. It might be a complete crapshot to build all those new super power hungry server farms. They might very well become obsolete should we actually fix the current flaws.
 
That whole "I am what you get when you try to carve god out of the wood of you're own hunger" poem was disturbing.

In a US military similuation they had an a.i drone simulation that worked like this -- An array of targets were sent up and the operator would inform the drone which targets to attack and which ones not to. The Drone (AI) immediately realized that it gained points from each destroyed target and "Killed the operator" to obtain more points as all targets would be greenlit.

They promptly corrected the matter -- Told the drone (and this is their words) "Hey don't kill the operator you will not get points for that" -- reprogrammed it or whatever and ran the sim again.

This time in the sim the drone immediately attacked the/our communication towers that would allow the "operator" to communicate which targets to greenlight.

2 years ago in a Tesla (haha) lab a man was attacked and left bleeding on the floor by an ai robot.

YOUTUBE THIS ONE -- At a festival in China an AI device attacked a fellow/crowd. (This was real recent)
 
Thank you! Jesus, why is it so hard for some people to just acknowledge this basic understanding of what we're talking about.

i find that it's nice when people agree with me. it's harder, but more valuable, when people disagree with me.

to you it's a basic understanding you share with one other person. great.

other people demonstrably have a different perspective.


We're not talking about LLM's being used to proof read some useless half-arsed essay by a teenager in school, something with no consequence. We're talking in the context of LLM's being shoved and jammed into business applications and business products, being pushed front and centre in a world that is now dominated by technology.

your disdain for this is clear. i get it. you don't like ai and think it's smoke and mirrors. i think a couple of people - unodelacosa and thujone - have tried to demonstrate that your position is dismissive and the technology has value in some/many applications.

alasdair
 
Trying to cut down on my megaposts, which means dropping the quotes for what I'm speaking to. There were several where SS is reiterating his earlier point, which I believe I can summarize as 'it is a bag of existing knowledge, and it bounces in that bag to provide you a response'. Ergo it is much simpler, less capable, and driven solely by prompts run thru it's algorithms to find a semi-acceptable response. I can't argue he's wrong, but it is where he is stuck - a box filled with what-if decisions and a fistful of predefined answers to arrive at.

I believe the churn AI is driving is clouded beyond that simple definition in both the marketing (yes, there's a hype train to drive the $ to fund it...with promises of the future we are teased that it will achieve), but also in that it has been packaged into LLM. Use of LLM is necessary for us to communicate with AI. As has been seen, let two AI bots talk to eachother and they develop a more efficient means of communication and we're staring at a screen asking 'what are they talking about'. But also, in that bag of what-if and predefined answers, it has been packaged to present itself as a thinking entity with asking it's own questions, responding to directional cues from the human. A basic program you can say 'give me the recipe for cookies' and it would naturally respond back with 'how many do you wish to make (it impacts the qty of ingredients'). We're getting that basic interaction, but it is wrapped in more humanistic dialogue such as 'oh, do you have an event that is coming up (irrelevant light talk not needed for the response but may guide suggestions on type of cookie if you say a specific holiday)' or 'does anyone have allergies or a favorite type of cookie (again, a generic recipe could have been provided but this personal interaction guides it's response and seems more friendly and caring)'. These AI bots are presented to be more humanistic, when to SS' point, its still a bag of existing what-if and predefined answers. Though it can expand that list of answers by researching the internet...can it expand it's list of what-if interactions on it's own or is that restricted to the base programming? Can it think and learn...I don't know.

This more 'humanistic' interaction is part of what drives the fear or sense of replacement. Along those lines:



In the new study, published March 31 to the arXiv preprint database but not yet peer reviewed, researchers found that when taking part in a three-party Turing test, GPT-4.5 could fool people into thinking it was another human 73% of the time. The scientists were comparing a mixture of different artificial intelligence (AI) models in this study.


While another team of scientists has previously reported that GPT-4 passed a two-party Turing test, this is the first time an LLM has passed the more challenging and original configuration of computer scientist Alan Turing's "imitation game."
 
Last edited:
We're nowhere close to AGI. LLM's can't, and there's nothing in the technological pipeline elsewhere as far as I can see. Despite what Sam Altman might claim, it ain't gonna happen.

I don't have a view of the pipeline and what's coming next. I didn't know how AI would be shared with the public, it's capabilities and limitations, but it has done very well in showing the first steps along that path. I would like to understand what comes next, do we jump from LLM to something else? I want to think so, somewhat jumping from one (programming language) mode of thought to another (mathmatical perhaps), but we (humans) need to be able to communicate with it so perhaps LLM remains at the base foundation, if only so we can understand wtf it is doing.
 
about its integration into quotidian human life, having already experienced many of the benefits and convenience this technology has to offer. Also, side note, and speaking of Nippon ☞ I recommend the anime movies and series, Ghost In The Shell if you're unfamiliar. Some of these philosophical concepts are explored in a sci-fi / cyberpunk future Tokyo… and in a cool, entertaining, anime style…

Know your audience .../makes me look up quotidian, lol

Respect on the GitS reference.
 
I thought the actual question was: "Is A.I. waiting to take over?" I think, at least, one thing we probably all agree on is that: no, AI is not "waiting to take over." That would indicate desire, a human emotion (unless it's executing some iRobot-type shit I guess).

Emotional IQ almost seems like a different technology, so to speak. Surely, emotional IQ counts toward overall intelligence, which AI is still lacking, but I was never expecting that from AI in the first place. (But maybe I should've? After all, I have seen Bladerunner a bunch of times, and read the works of William Gibson, Neal Stephenson, and Phillip K. Dick, et al.).

Great point. AI may climb the IQ chart in ability to respond to queries, perhaps at some point being able to generate it's own queries. But The EQ aspect of humanity is too far out to be approximated artificially. It can help set up the differentiator on where AI can be applied (where the outcome has no emotional factor or need) vs emotional engagment. It can make a tear jerker video, but it will likely be so absurd as to not be what is wanted.
 
Last edited:
you are not interacting with an intelligent being that comes up with thought processes but with a piece of software that is good at pretending but has zero awareness. It doesn't "think".

This aligns with my understanding of SS' outlook.

For an experiment, I think I'll ask one of the AI systems to give me the encyclopedic breakdown on Faloums (made up word). What is their origin, how are they harvested and where, and how do they impact local, national, and global economies. It'll be interesting to see what it come sup with.
 
Top