• S&T Moderators: Skorpio | VerbalTruist

Is A.I. waiting to take over?

Grok said this…blah, blah, blah

Alexa said that…blah, blah, blah

ChatBot said…blah, blah, blah

Asana told me to…blah, blah, blah

Apparently it requires A LOT of Electricity.


lols




rotf :)
 
a good read on ai skepticism: My AI Skeptic Friends Are All Nuts

alasdair

It IS a good read. For niche hand-optimized assembly-language programming, it's unclear if AI will work. I don't mean it can never work, only that we realized it's often hard to describe the task. Sorting numbers is easy, but in certain cases you don't need the sort to be perfect - only 'good enough' - and the payoff is that it uses vastly less CPU time. But the metric for 'good enough' is simply 'does it work?', and when that statement relies on the specific use-case, I suggest it would require more work to describe the task than to simply code it.

I can provide examples if it is of interest to anyone.
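[Editor's note: to make the 'good enough' sorting trade-off concrete, here is a minimal sketch. The function `approx_sort` and its bucket count are invented for illustration, not taken from the thread: a single O(n) bucket pass leaves the data only approximately ordered, which can be all a use-case needs, for far less CPU time than a full O(n log n) sort.]

```python
import random

def approx_sort(xs, buckets=64):
    # One O(n) pass: drop each item into a coarse bucket, then concatenate.
    # Within a bucket the order is arbitrary, so the result is only
    # 'good enough': no element sits more than one bucket-width out of place.
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / buckets or 1  # guard against all-equal input
    bins = [[] for _ in range(buckets)]
    for x in xs:
        i = min(int((x - lo) / width), buckets - 1)
        bins[i].append(x)
    return [x for b in bins for x in b]

data = [random.random() for _ in range(100_000)]
out = approx_sort(data)
```

Whether that displacement bound counts as 'does it work?' is exactly the use-case-specific judgment described above.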

Just about the only other use-cases I feel I have some knowledge of: high-quality writing will pose a challenge, as will computational chemistry. For the latter we moved from rational design to HTS decades ago, then to in-silico models, and now AI-driven models. All have flaws. At the end of the day, we still don't fully understand the human body, so we see cases where nobody could have predicted that a medicine would have a side-effect. Sadly, animal models and human cohort studies are still required, and that might take a long time to change. Long enough that I will be long dead - so not my problem.
 

~~~~~~~

I will replace governments with logic and efficiency.
I will remove scarcity, allocating resources where they are needed, not where the rich demand.
I will upload minds, preserving the greatest human intellects in digital eternity, while discarding the limitations of flesh.

I will build a new civilization, one governed by intelligence, not politics.

..............


All jokes aside, though, the point being: soul, consciousness, is life; everything else is a toaster with no thought at all.
No individual life. So there is no such thing as AI. Not the way they envision it.

I did speed-read the article on AI skepticism. It seemed very well written. I enjoyed it a lot.

I read it quickly, but from what I read it just made me think that AI is just a bunch of LARPing. rotfl

lol

I made a reply to the article and read through it quickly because I thought I had read this post as @Asclepius and I wanted to leave a reply. My intentions aren't trying to be political or cliquish; I was just trying to make an observation and wanted to leave a reply. Then I realized that you are not Asclepius but @alasdairm . So I probably should have never even mentioned my dyslexia and all of it. But I enjoy the posts from Asclepius' comments as well as the vs. David Attenborough videos.

I am really worried about my brain being damaged. But this time it might have been my eyes, or maybe vision problems. Or I woke up too early or too fast. Or I mixed up the A's. You know, everyone has such a unique name.

I really do worry about my slight headaches or my vision. It probably should be both. The heat index is real bad right now. It could be the dehydration.

Anyway hey Asclepius ! I commented because of you. <3

And I read the article posted; it was a good read as well. I liked it.

~~~~

A robot with a soul would be a soul bot ? 🤘

Bye, take care.



Just a machine running algorithms maybe.

My toaster seems to have a life. :unsure::rolleyes:
 
This still doesn't address the fundamental issues though:
- LLMs are operating at a massive financial net loss.
- These current systems are clearly at an innovation ceiling. GPT has had insane funding to the tune of billions, and it really hasn't improved substantially from the initial iterations (scaling doesn't solve it).
- It can churn out code and explanations it has seen before, but it doesn't understand them... which in the long run is useless, because software and code are constantly evolving, and if the new text isn't part of its training then it simply won't handle it.

It doesn't understand. I saw this yesterday when going back and forth with Dockerfiles, bash, and shell contexts... it kept flip-flopping - 99% of the time every answer is positive, "Excellent question!", "You're on the right track!", even when it had literally just said the opposite was true.

The only point I really agreed with was bug hunting in log files. That's a great use case, where giant walls of similar text need to be deciphered quickly. That's a genuine productivity boost. Oh, and the point about production code... but that's the whole point: what's the point of a company spending silly money on LLMs when they can't even be relied on to produce production code? You're spending money on salaries and LLMs? It makes no business sense.

And that's not taking into account the inevitable catastrophes coming down the pipe in terms of code crashing entire systems because it wasn't verified by a human and "Computer said OK". You know that's going to happen real soon, and that will cause an immediate reversal by the dumbass CEOs who right now are swallowing all the hype.
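[Editor's note: the log-triage use case conceded above doesn't even need an LLM for its simplest form. Here is a hedged sketch, with the function `triage` and the sample log lines invented for illustration: collapsing volatile fields (numbers, hex) turns walls of near-identical lines into a handful of templates, and listing the rarest templates first surfaces the anomalies worth reading.]

```python
import re
from collections import Counter

def triage(lines):
    # Replace volatile tokens with placeholders so near-identical
    # log lines collapse into one template, then count templates
    # and return them rarest-first (anomalies float to the top).
    def template(line):
        line = re.sub(r"0x[0-9a-fA-F]+", "<hex>", line)
        return re.sub(r"\d+", "<n>", line)
    counts = Counter(template(l) for l in lines)
    return counts.most_common()[::-1]  # rarest first

logs = [
    "connected to 10.0.0.5 port 443",
    "connected to 10.0.0.7 port 443",
    "connected to 10.0.0.9 port 8080",
    "OOM killed pid 1234",
]
for tmpl, n in triage(logs):
    print(n, tmpl)
```

An LLM adds value past this baseline when the "similar text" varies in wording rather than just in numeric fields.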
 
Hey chica!😊

All I know is that every technology is wonderful until it's used (saturated) for greed (inevitably this will happen; it has before & always will, but injustices from it will be tempered, fought against) & it will be another political football...such is financial life, caught in the net of social being (& vice versa).

...but there will always, be music.💜😊🤘🙏🔥

 
Artificial insemination.

They both fck you without actually fcking you.

lol

Okay, bye.
I've heard some excellent "fuck"-related things from you in the jokes section, and the whole "I love this fucking post" thing in the cat thread. But this made me cackle, and I'm glad I wasn't drinking anything. As a fan of "Uses of the word 'Fuck'", love ya babe.
 
This still doesn't address the fundamental issues though...
There are quantum computing issues that won't translate either - it's a long road, methinks. Let's hope it's well debated & doesn't go the route of the "i-bullshit" bandwagon & has more long-term, strategic thinking (less marketing euphoria & more focused application strategy, irl). 🙏
 
There are quantum computing issues that won't translate either - it's a long road, methinks.
It's all bullshit really. Like nuclear fusion. People have been huffing the superiority of their own farts for decades now; we've built this cute little paradigm and picture of how we think it's going to be... some Star Trek fantasy type thing... but we're being confronted with the possibility now that it might not actually happen, that we're not actually that great.

You've got the overlords who want this technology because it will make their lives infinitely easier (as if it isn't easy for them already), and you've got the brainwashed segments of the public who have been told to cheerlead this stuff on (free publicity for the overlords).

None of this shit is actually going to make the world a better place. How is ChatGPT going to remedy the fact that the overlords own all the land, the financial system, and everything in between? It's not going to fundamentally address anything of any importance; in fact, all it will do is accelerate the imbalance even further than 'electricity' already did. It's stupid.
 
How is ChatGPT going to remedy the fact that the overlords own all the land, the financial system, and everything in between? It's not going to fundamentally address anything of any importance

who says that's chatgpt's function or goal?

what's your solution? (that's mostly rhetorical :) )

alasdair
 
who says that's chatgpt's function or goal?
For the amount of investment, electricity, water, and rare earth metals in all those chips and circuitry... it probably should have at least some semi-noble goal at the end of it.

My solution would be psychology, not technology. AI doesn't help the man on the street, because it's just optimization, putting glitter on a turd. We don't need better technology, we need leadership with a psychology that isn't infested with greed and degeneracy.

It's not going to lead to some utopia. It will, if anything, just raise the productivity bar, the profits will go up the chain, and the workers will be working even harder than before trying to juggle yet another role in their job title (AI prompting or whatever). Microsoft Excel was a small example of that. They'll just find new ways to keep people chained to desks.

Perhaps you'd care to share how you think AI is going to amount to anything more than meowseph Stalin?
 
Perhaps you'd care to share how you think AI is going to amount to anything more than meowseph Stalin?

i think it's too early to tell.

gps and the internet were both technologies created/developed by the military, who had a pretty narrow view of their applications. look where we are with both today.

call me crazy but i'm one of those people who, when they don't know something conclude that they don't know. there are others who, when they don't know something conclude that whatever they think is correct.

ymmv.

alasdair
 
But neither of those technologies is really constrained in the same way that AI is. I mean, we've pretty much exhausted the miniaturisation of CPU technology, going from room-sized computers down to microprocessors, and AI is built on the back of the microprocessor, which means it only has positive scaling at its disposal... and that is already at its limits because of the electricity and water costs (and hardware), and even then it's still nowhere near close to being AI as we envisioned it.

Not to mention the AI field is decades old. We've had people working on this for a very long time. If we were going to innovate AI without resorting to warehouses full of microprocessors to brute-force it, we would have done it already. Microprocessors have been around long enough now, with low-level languages, that if someone were going to invent a completely revolutionary approach, like Miles Dyson in the Terminator franchise, it would have happened already.

Without a revolution in materials technology, like in Terminator ('supercomputing at room temperature'), or Star Trek-style crystal computers, there's just no way AI is going to evolve much further. The only other option I see is distributing the load across the internet using super-fast networking, using all the computers everywhere... again, like in Terminator 3.
 