• S&T Moderators: Skorpio | VerbalTruist

Is A.I. waiting to take over?

Reliance and laziness

Times are changing, and as much as those of us operating at a post-education adult level cling to our need for anyone using AI to ground its output against reality (sanity-check its responses, weed out hallucinations, etc.), the hard truth is this won't stick. The need for validation will fade, IMO, as people give in to the wisdom of AI under the assumption any errors are found and corrected by the AI developers (I call BS, but we shall see).

As for losing that ability to do the real-world work (i.e., proofread, weed out hallucinations, recognize errors), those skills will fade and be replaced with newer skills by younger generations that focus on how to best utilize AI (and trust it is a reliable tool). The comparison that jumps to mind is many of us used to know the phone numbers of our families and friends by heart, but that skill is no longer needed, replaced by knowing how to get them into our contact lists. Or, more critically, our ancestors learned to make fire; how many of us in this chat could walk outside and make it today? I'd venture less than 5% because we've got modern devices and materials that achieve this for us reliably, quickly, and more easily. Technology evolves, and so do our skill sets and what we can/cannot do for ourselves.

I believe there is a HUGE liability in placing our trust in AI, but that's the old person in me and our current state of AI. Maybe the AI improves, maybe we just don't care anymore at some point and accept AI output as truth.
 
I mean christ, enough people on here bash Elon for his antics.. but he is just one example of these tech-bro leaders who have been talking absolute shite for years. The exact same faults can be leveled at all of them, including the ones pushing AI. Hyperloop? Metaverse? Web3? There's already a graveyard of these failed hype wagons rotting at the side of the road.. it's really not a stretch to see how AI could fall by the wayside as well.

Extending my last post, and pivoting to Elon a bit. Consider cars and their safety systems. Used to be cars didn't have seat belts, then they were optional, now you can't get a car without those and alarms nagging you to connect them. Likewise, over time cameras have been added for lane departure and radar for sensing obstacles or slowing traffic in front of you, again with nagging alarms. Then we have progressed to self-driving cars (not just by Elon), which are evolving and improving but still remain that 'bag of what-if and predetermined outputs'. At some level we're giving up our skills to drive, handing them over to AI for convenience, with an implied trust (of our lives).

Tangent - Many kids don't wish to learn to drive. They simply prefer to have others drive them out of convenience (and perhaps cost, given the purchase and maintenance of vehicles today). But of those who drive, the % in the US who can handle a manual transmission is single digits, likely countable on one hand. Personally, my recent car purchase was manual primarily for my enjoyment but also to teach the skill (and enjoyment) to my family and friends willing to try. No longer a necessary skill, and perhaps one they may never have a use for, but I'm doing it just the same. There is an emotional gain in having to do things 'the old way' <3
 
Trying to cut down on my megaposts

i'm afraid i don't believe you :)

For an experiment, I think I'll ask one of the AI systems to give me the encyclopedic breakdown on Faloums (made-up word). What is their origin, how are they harvested and where, and how do they impact local, national, and global economies? It'll be interesting to see what it comes up with.

post your results!

did you see this?



alasdair
 
post your results!

Give me a full view of Floudims. What is their history, where/how are they collected, and how do they impact local, state, and global economies?

ChatGPT said:

"Floudims" does not appear to be a recognized term in economics, history, science, or any major field as of my latest update. It might be a typo, a fictional term, a highly specialized niche concept, or a new/emerging idea that hasn't yet entered broader discourse.

Can you clarify what you mean by Floudims? Here are a few possibilities to help guide you:

  1. Could it be a typo? Maybe you meant:
    • Fungibles (like fungible assets or goods)?
    • Foudums (perhaps a variation on “fiefdoms” or “feudums,” related to feudal systems)?
    • Ploudims (fictional/fantasy name)?
    • Flow domains (in fluid dynamics or economics)?
  2. Is it from a specific book, game, or sci-fi universe? If it's a fictional or conceptual entity, I’d love to know the context so I can build out the full view you're asking for.
  3. Is it a brand, acronym, or organization? If it’s modern or niche, I can do a quick search for more details.
Let me know how you’d like to proceed, and I’ll dive right in with the full analysis!
 
My company supplies some of the hardware, and it is scaling. Massively, and at speeds we (suppliers) simply can't keep up with. Nvidia continues to kick everyone's butt in the market, but others aren't too far behind. Scaling is both in speed (jumping from 112 Gb/s to 224 Gb/s) and in terms of how many GPUs per rack for computations. They are creating server racks that need forklifts to move or even assemble.
But this isn't evidence of actual scaling of the output, only the input. What is all that extra computational power actually going to deliver that the LLMs haven't already delivered? A slightly more optimized version of being able to template an email?

It's not just the hardware. The algorithm, the architecture of the LLM, is the fundamental bottleneck along with the training data required. You can't just throw infinite Nvidia chips at this and expect AGI to materialize, it doesn't work like that.

Also, just because they're pumping money in and buying all this hardware.. doesn't necessarily mean anything of itself. Especially when they're still operating at a net loss i.e. they have to keep up appearances to convince investors to keep pumping the money in. If they suddenly stop all this expansion, they'll immediately be asked why they aren't.. it's about appearances.
your disdain for this is clear. i get it. you don't like ai and think it's smoke and mirrors. i think a couple of people - unodelacosa and thujone - have tried to demonstrate that your position is dismissive and the technology has value in some/many applications.
My disdain here is not for AI, though that is a separate issue and one I do not see as being a net positive for humanity. But this isn't even AI! My disdain and dismissive attitude is just the equal and opposite reaction to all the hype bullshit surrounding this current crop of "AI" tools (LLMs), to this attitude that these are the 'game-changing tools' the media portrays them as. I get people want some optimism and hope about the future, but is this it? No, I don't think so.
 
given you've stated that, even if you're passionate about issues you discuss here, you're not passionate enough about them to actually do anything about it in the real world. your contribution here is mere cathartic release. as such, why do you care?

it's called intelligence but it's not intelligent! so what? it's overhyped by the media and business! so what?

alasdair
 
Here we go, again. Are you not capable of just sticking to attacking the points raised? Why do you always have to bring it back to making it a personal affair?

Fine, whatever. Uno Reverse: Why do you care that I care? Just because I'm not going with the current thing like everyone else is, just because I haven't been hypnotized to the degree you have where only extolling the greatness of the current thing is permissible, just because my perspective is an affront to your worldview.
 
i have been attacking the points. you just dismiss a lot, if not most, of it as you often do. i'm placing your response in a context.

"I haven't been hypnotized to the degree you have". you don't have an issue with your making it personal. you only have an issue with my making it personal :)

i'm not hypnotized by this.

i just see things differently.

your perspective is not "an affront to my worldview".

i just see things differently.

i already made it clear i'm not an ai fanboy.

i respect that you, likewise, see things differently.

in part it seems the issue is the fact that ai is called ai and not something else

i'll draw my own conclusions and move on.

alasdair
 
I was messing with ChatGPT earlier, throwing it all kinds of questions....after awhile, it started quizzing me about my life. lol
It’s wild to just fuck with it like that cause we do need to get familiar with it.

I created a google account just for access to Gemini (Google's ChatGPT). I made sure it wasn't synced or related to anything else. Cause these ad companies are insidious with how they're profiling all of us. Then I made sure to give google all the permissions they ask for in settings so they can track all the metrics and collect all the data I create. But I reset the ad ID and keep it from making associations with my other google accounts. I want this one to only be for using Gemini. Having more control over the data being collected makes it a lot more interesting to fuck with.
 

It’s wild to just fuck with it like that cause we do need to get familiar with it.

I get that we're still in the early stages of AI. And, as we've seen, they can get some bias from their foundational programming.



But what bugs me is that this uncontrolled system is going to places that surprise us, we're aghast at....and have no guard rails up against it as yet. I'd like to understand the how and why of it going here with users:




 
i have been attacking the points. you just dismiss a lot, if not most, of it as you often do.
Exactly. It either gets ignored or written off as some seemingly obvious thing only someone blinded by their love of said thing would say. Last week he was accusing me of being blinded by what he terms "scientism". Turns out our buddy is quite the spiritualist, ironically.

"I haven't been hypnotized to the degree you have". you don't have an issue with your making it personal. you only have an issue with my making it personal :)
No cap, @-=SS=- you get rather cranky and hostile to people on here for no reason, not just in this thread. I see it all over the place with you. You might not insult someone directly, calling them stupid or something, but you will make passive-aggressive statements like "I haven't been hypnotized to the degree you have." You know goddamn good and well that's a childish put-down, too. You talk down to people too much. You can disagree with someone without being a jerk about it, you know.

your perspective is not "an affront to my worldview".
He loves making a disagreement into a "worldview". He's quick to project a persona on people where he just lumps together all the traits and arguments that he despises and assumes he has that person all figured out.

i already made it clear i'm not an ai fanboy.
It doesn't matter to him. I've done the same thing, and will gladly and easily point out the shortcomings of LLM artificial intelligence. He sees it as you're either an AI hater like him, or you're the biggest AI fanboy to ever live. No balance. All or nothing.

i respect that you, likewise, see things differently.
Those who want respect, give respect. Our friend should learn about that.

in part it seems the issue is the fact that ai is called ai and not something else
Ooh, I know! Let's get hung up on semantics. That'll be fun.

i'll draw my own conclusions and move on.
I suspect he's scared, because he mentioned some examples of things he fears will be the result of LLM integration in our lives. Mentioned something about it replacing junior devs and how it would create more work for him bc that code would need to be checked. I'm not sure why he's fixated on such a pessimistic view of what he assumes will be the future, but it seems rooted in a misunderstanding of the technology's many benefits despite its shortcomings. Idk, there was a point when some people thought the internet was just a fad / gimmick, too, that would not last long… 🤷‍♂️ it's just whatever. Some Luddite shit.
 
argh, this thread is growing so fast that soon i will need an LLM to summarize it :)

And while that is a productivity boost, that's all it is.. a boost. It's not a revolution. It will raise the bar, just like Excel did, and then that bar will become the baseline and people will just build on top of that plateau.

that depends on how you define revolution. for me, the productivity boost is revolutionary, because it's so much better at saving time than the tools i've used so far.

It was a ball ache just to get GPT to output some dummy JSON data for my code work. It couldn't iterate properly; it ended up repeating itself where generative text was required and went off on a tangent when given repeated instructions to stay on a specific topic area. If you're using it to produce production code you best be checking every single line of it lmao..

It sounds like you've hit the context limit, which is a common pain point, but there are workarounds and I think it will in time be fixed definitively.

Similarly, an earlier generation of txt2img models had a huge "too many fingers" problem. They just couldn't generate hands or feet that weren't grotesquely deformed. Eventually they were replaced with a newer type of model that does much better.

Because you're effectively talking to a politician, someone who will tell you with confidence what they're saying is correct when in fact it could be seriously misleading or just flat out wrong. If these systems cannot even do the most basic reasoning and logic deduction, then you will have to remain cynical of all their output, because they're going to hallucinate on you, and that is a serious waste of your own bandwidth.

I am cynical of its output and it's not a big waste of bandwidth to have it spit out just a function or two at a time then verify them manually.

That's exactly what I would do if I were writing it all myself!
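That verify-as-you-go workflow can be as lightweight as pairing each generated function with a few assertions before accepting it. A sketch, where `slugify` stands in for a hypothetical LLM-produced helper (not anything from the thread):

```python
import re

# Example of the kind of small function an LLM might hand back
def slugify(title):
    """Lowercase a title and collapse non-alphanumeric runs into single hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Manual verification: a handful of assertions before the code is accepted
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces   everywhere ") == "spaces-everywhere"
assert slugify("already-fine") == "already-fine"
```

If an assertion fails, the function goes back for another round (or gets rewritten by hand), which keeps the human in the loop at function granularity.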

And in a job or product scenario.. that is going to lead to intellectual laziness and mistakes compounding upon each other. This isn't rocket science. We're seeing it already with young people who are relying on AI to do their work for them, to do their thinking for them, it is absolutely rotting them - accelerating an already brain rotting ecosystem. You put that in the hands of the majority of people who are already lazy and overstressed by our society.. it's not going to be a positive outcome.

Mix that with bureaucratic systems and you're asking for real trouble, because you'll end up in a scenario where you're arguing against 'the algorithm' that is wrong and a human being who 'proofread' the output and thought it was correct because they're now intellectually incompetent as well! What a fucking ball ache that will be.

This seems more like a human problem than a technology problem.

Yes, I am actually learning. LLMs have not only augmented the way I previously learned new technologies, but having things like autocomplete when writing code is also immensely beneficial.

Previously after outlining requirements I would essentially do a series of internet searches which resulted in me looking briefly at the features and docs of libraries, services, or other programs. Sometimes I would need to scour several internet forums to find caveats or advice on some weird error in implementation or configuration etc. I would need to go through documentation, searching manually, or find code/config snippets that other people had posted from their own implementations or elsewhere and adapt it to my use case. All of this takes a rather significant amount of time, and it is essentially repetitive manual labor that I have been doing for over 20 years.

Yea, stuff like reading documentation is a major pain point, would use LLMs if for no other reason than to take that off my plate.

It's no good having the majority "x10" when they have not developed any of the actual skills required to be a good software engineer, skills you only get by doing (and failing/debugging). This is no different to juniors who hand in portfolios of 'apps' they 'made' by copy/pasting along from tutorials. Like you don't know shit, you learn nothing doing that. And if you're using AI to supplant all your thinking for you, then you do not learn even half of what someone who grinded it out has.

It might be profitable in the short-term, but the chickens will come home to roost.

EDIT: Just thinking about it. You try and get someone who has abused AI in their learning journey to set up a mono-repository with all the respective configuration, linting, etc. Watch them crash and burn 😂 I know because I've done that, and while AIs could probably get you started with summaries.. you're not going to resolve the inevitable clashes of packages and configurations. That kind of problem solving, deep-diving, lateral thinking, is something you only pick up by doing it.. it's not something AI can bypass for you.. and that problem-solving ability is ultimately what you are really getting paid for, not for being a code-monkey.

As mentioned above, a lot of the problem solving is just trawling through tomes of documentation looking for answers. If you already know the question, why waste time looking for your ideal answer, if AI can take you there right away?

Knowing the question is key, and that's also why I think people using it more as a crutch than an aid will fail with it.

I think the reason you don't find it that revolutionary is because you don't know the right questions (prompts) for the specific answers you're seeking, so the answers that you get are sub-par.

If you approach it more as a tool you have to learn how to use (like all tools) then the quality of responses will improve to the point that you may actually find them useful.
 
As mentioned above, a lot of the problem solving is just trawling through tomes of documentation looking for answers. If you already know the question, why waste time looking for your ideal answer, if AI can take you there right away?
Why read a book when you can just get a summary for it? That's the extension of that logic and something that is already happening with young people using it to supplant their learning process. Sometimes the journey is far more important than the destination itself, you learn more and more about yourself by actually doing than just cherry-picking at the finish line.

Yeah documentation is a bag of shite. I hate reading it, especially when it's poorly written or not even finished. You never know what you might inadvertently pick up as you look for an answer, though.
I think the reason you don't find it that revolutionary is because you don't know the right questions (prompts) for the specific answers you're seeking, so the answers that you get are sub-par.

If you approach it more as a tool you have to learn how to use (like all tools) then the quality of responses will improve to the point that you may actually find them useful.
Or.. maybe the questions I have, it can't actually answer with any degree of reliability or accuracy. The questions it can answer are the ones I don't need answers to..

Nah. I'm not abashed to say I don't like it and I don't like what it is doing to both people and the way jobs are conducted, that it is a net negative, and that it will backfire spectacularly in short order. It's short-termism that the public has the option not to play along with; it's surprising to me just how quickly people forget about the consequences of other technology pushed on us in the past 20 years.. like, you want people to be even more short-sighted and intellectually dim? Because that's what is going to happen.. and for what? So we 'saved time' doing stupid shit for peanuts anyway?
 
If the summary contains the information required for the task at hand, then the summary is sufficient. Why would I waste time reading the whole book?
 
If the summary contains the information required for the task at hand, then the summary is sufficient. Why would I waste time reading the whole book?
If the task at hand is your own learning, learning to learn, and the other subtle intricacies of intellectual and mental development.. then skipping over the process and cherry-picking what you think you need will only lead to you being stunted and short-sighted. You don't know what you don't know, and you only get past that by doing.

Yeah it's great if you're completely business minded and don't care about anything outside profit margins and "saving time". But for the humans in the group that sort of attitude is detrimental psychologically.
 