
Is A.I. waiting to take over?

Woah....looks like mods cleaned up a lot of noise out of this thread. Thanks!



I'm just now catching up on this thread and would love to toss a few thoughts around for discussion. Those not familiar with my posts, please know I use a lot of quotes to address a point or pivot off to a new one, and never am I in a mindset of attacking anyone or their ideas - more just sharing my own and seeing what we can all learn from one another. Nothing personal. Long-winded and wordy, beyond the mega-quoting, but not personal.


Independent action outside of things that are defined in code doesn't happen.

Yes and no. There is a difference between a program and intelligence. A program is following steps in code: if it says to turn a light red, it turns the light red. The program is limited by what the code writer accounted for and gave a process for. AI, leaning on the 'intelligence' part, allows for interpretation, so it can be a bit broader than limited lines of code. For an easy pair of examples, ask a chat AI to write a short story or an image AI to create a picture about a girl by a creek and it may give the girl a name, skin or hair color; it may provide nearby trees, rocks, or animals; it may choose the time of day, what kind of weather is occurring, etc. Now, all of those possibilities have to be in the code at some point - they have to be made available to the AI. But a program will give you the same result every time, whereas AI may give a different result each time it is asked, based on....what? The rules of execution are coded into the foundation, but the application of those rules and the interpretation of what result to present is where AI works its magic.
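A toy sketch of that distinction - a fixed program vs. a sampler whose options are baked in but whose picks vary. Everything here is an invented stand-in for illustration, not how a real model is built internally:

```python
import random

def traffic_light(state):
    # A classic program: same input, same output, every single time.
    return {"go": "green", "stop": "red"}[state]

def girl_by_a_creek(seed=None):
    # Toy stand-in for a generative AI: the option lists are fixed in
    # code, but the picks are sampled, so repeated calls can differ.
    rng = random.Random(seed)
    hair = rng.choice(["red", "black", "blonde"])
    hour = rng.choice(["dawn", "noon", "dusk"])
    sky = rng.choice(["sunny", "drizzling", "overcast"])
    return f"A {hair}-haired girl sits by the creek at {hour}; it is {sky}."

print(traffic_light("stop"))      # always "red"
print(girl_by_a_creek())          # varies from run to run
print(girl_by_a_creek(seed=7))    # pinned down by a fixed seed
```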
 
Regarding the development of a ChatGPT-like AI by the CIA, and your assertion that the CIA/NSA has access to some enormous amount of data:

Both are unfounded based on reality.
...
However, the way that cell networks run today, and the absolute absence of data recording and duplication taps at every single one of the hundreds of thousands of cell towers and repeaters, means they don't have real-time access to the overwhelming majority of communications in the world, nor do they have the capability to capture the data in a message or a phone conversation in the overwhelming majority of cases.

This is simply because of network topology and the absolute necessity to decrease latency at every single node.
...
There are literally too many people. Too many cell phones. Too many computers. Too many connections to capture, analyze, or assess anything more than a small fraction of daily communications.

You are absolutely correct there is too much data constantly in flow to be analyzed live on-the-fly. It isn't possible....yet ;) Certainly not with our initial monolithic AI engines. But consider how much AI is out there already, with access to point-of-contact data and able to do localized interpretation, then send to the mothership only the key relevant data points. I'm babbling a bit here, but give it a chance. How many homes have a Google Home or Alexa, or phones with Siri or the equivalent, each collecting household info on what's in the fridge, when lights/locks are activated, what kind of music a person likes or dislikes? These are remote, small (but consistent) AI engines collecting data. We often don't pay attention to what data they may upload to data centers about us and our habits. We have a handful of devices collecting data - the mothership doesn't care about all of it instantly, but the data exists; and if it exists, it can be tapped into and used for various means. A simple example - we already have a half dozen mobile apps on our phones that track our locations throughout the day. The mothership may not care, but if someone wanted to monitor who all went to Walmart between 12 and 4, that list of persons can be created. Then, of those people, how many went to eat somewhere vs went home (ah! we know their home) vs continued shopping elsewhere (and where was that)? All of that can be determined if desired.
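To make that concrete, here's a hypothetical sketch of how trivially those queries fall out of pooled location data once it exists - every user, place, and timestamp below is invented:

```python
from datetime import datetime

# Hypothetical pooled location pings: (user_id, place, timestamp).
pings = [
    ("u1", "walmart",   datetime(2024, 3, 1, 13, 5)),
    ("u1", "mcdonalds", datetime(2024, 3, 1, 14, 20)),
    ("u2", "walmart",   datetime(2024, 3, 1, 12, 40)),
    ("u2", "home_u2",   datetime(2024, 3, 1, 15, 0)),
]

# Who was at Walmart between 12:00 and 16:00?
start, end = datetime(2024, 3, 1, 12, 0), datetime(2024, 3, 1, 16, 0)
at_walmart = {u for u, place, t in pings
              if place == "walmart" and start <= t <= end}

# And where did each of them turn up next?
for user in sorted(at_walmart):
    later = sorted((t, p) for u, p, t in pings
                   if u == user and p != "walmart" and t > start)
    print(user, "->", later[0][1] if later else "unknown")
```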

To your point - it is all post-event data, looking at 'what happened', but that can build predictive patterns and allow for actions like "when this person leaves Walmart, send them a random text from McDonald's about today's meal deals". It's how current web advertising works - it builds a historical pattern about a user, then leverages that to suggest things to them, to guide their behaviour. It all begins with what data exists. For this, I know Google has perhaps more data than anyone else, simply by the nature of having all those search engine queries captured over the years - blind to users and building general rules about information requests and which results generate a click-through, or tied to specific IPs (users) and their recurring habits or interests. Similarly, whatever Elon's AI is called, his purchase of X was an informed decision on his part, to have an existing database of information upon which to build. To imagine whether a gov't has or does not have a huge collection of data upon which to train AI engines, look no further than China, where a shoplifter can be followed home via cameras (doesn't take a human when an AI has facial recognition and access to the camera feeds) and arrested within the hour. It's one reason crime is so low in China - complete surveillance, and AI only makes it that much easier and faster to use the data for quick action.

Moving past the learning material (existing data) used to train AIs, I come back to your point about acting on it in real time. With the decentralized mini-AIs living in our home and mobile devices, each of those can be rolled out with core updates to alter what data gets sent to the mothership and what is managed within the device. All of this is being sold to us as learning our habits and helping us in our daily lives. But data is data, and the more that is monitored, the more that can be acted on - for good or bad. I'm not so naive as to think all AI will be kept benign or for our benefit; human history has taught us otherwise. But the data collected, and AI's ability to do more with it, is growing quickly.

But the CIA and NSA definitely do have similar capabilities when compared to the Chinese/Russians, right? (Like SNAKE and the like).

Wouldn’t you agree that everything the Chinese/Russian intelligence assets can do we (NSA/CIA) can do? (Or do better).

If the Chinese/Russians can hack into big Corporations, surely we can as well?

Aye, we (the public) have no idea of the degree to which corporations (and certainly not gov'ts) have pushed AI capabilities. What we see in the open market is only what has been matured and eventually deemed publicly consumable (what 'helps' us). There are a myriad of applications that will help us that we know little about (e.g., medical diagnosis and recommendations), and there is likely a lot we may never know about (e.g., war gaming, or setting AI upon a cyber explore/attack mission against rivals). And even with the toy AI released to the public, there are pockets of super-intelligent persons (not gov't or military) that will find ways to manipulate, train, and point AI to do good/harm to the public. I can easily see the days of old ransomware attacks being stepped up by a creative mind training an AI to check all connections in an IP range for certain hackable criteria, then perform hacks A, B, and if possible C upon those compromised ports.
 
Humanity will become transcendent humanity where those of us who decide to become post-human or transhuman will protect all of our cousins who decide to be luddites and not jump on the post-humanity bandwagon.

I really want to be alive when functional immortality, or the ability to partially upload, etc., gives us lifespans on the order of eons, because I want to see what's in the universe, and right now it really doesn't look like we're going to be able to have Star Trek warp drive at any reasonable extension of technology. It would take 400,000 years to cross our galaxy (that's at 25% the speed of light, and the galaxy is 100,000 light years in diameter; yes, there would be some time dilation, but not as much as people think - it might seem a little shorter than 400,000 years, but to everybody else in the galaxy it would be 400,000 years), never mind stopping off on the way.

And the scenic tour to Andromeda would be 10 million years just to get there, considering it's 2.5 million light years away, and you multiply by four since you're going at 25% SOL (isn't it funny that Sol is the name of the Sun, that SOL means shit out of luck, and that it's also an acronym for speed of light?).
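For the record, the Lorentz factor at 25% of c bears out the "not as much as people think" point:

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
       = \frac{1}{\sqrt{1 - 0.25^2}}
       = \frac{1}{\sqrt{0.9375}}
       \approx 1.033
```

So the 400,000-year crossing in the Earth frame shrinks only to about 400,000 / 1.033 ≈ 387,000 years of shipboard time - a rounding error at these scales.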

Now, I would like to believe that we will be able to artificially create enough virtual mass to collapse spacetime - and when I say collapse, I mean compress, warp, make it shrink in front of us and expand behind us, so that our little bubble of spacetime isn't moving any faster than the speed of light. In fact, we're not moving at all, but spacetime is moving through itself like a slinky, and there's no limit on that. A modified spatial-distortion inchworm drive, first thought up by Miguel Alcubierre - and his math checks out; it should work. We just don't know how to go about creating a multi-Jupiter mass in an area a few hundred kilometers across directly in our direction of travel, since we'll be falling towards it, and then magically make it go away and push from behind us.

The math says we should be able to do it. And if the technological singularity hits soon enough, that means we'll have the time to do it.

This may warrant a spin-out thread on transhumanism (I haven't caught up on the forum to know if one exists; I suspect it does). If there is one, I may ask the mods (or do it myself) to move your post and mine to that thread. In the meantime:

Transhumanism and the singularity...just because we can, doesn't mean we should. I have a humorous image of taking your comment of living forever in digital form and moving it back in time (or us to a future generation). Can you imagine having your ancestors hanging around in digital form, looking over your shoulder in pride or dismay at what you're doing in your life? Lol. Chinese ancestry guilt to the nth degree.

Likewise, if one 'lives forever (digitally uploaded)', does that lessen the value of time spent as a human? Or do we venture beyond that and imagine there are tactile beings able to interact with the real world into which we upload our 'being', thereby extending our existence in a very real sense? I dunno. For me, that's venturing into no longer being human, and I'd rather cherish the limited time I have. Hell, how many generations say "the new kids don't get it, don't value xx or yy, they have it so good and are ruining the world"? I think EVERY generation has said that. Would you really want to be stuck in a digital existence watching it all go to hell (from your relative perspective)?


Trying to tie this back to the AI topic. I think scientific advances will come MUCH faster with the aid of AI. I believe that is inevitable, and I view it as a positive thing. Then, with trips to the moon, Mars, or further becoming more achievable, I think those are possibilities we should pursue. Now, I sit back and imagine a human (regular, not trans) might not survive the trip - so do we send an AI to explore on our behalf? If so, what would it learn, and what would it do with that knowledge, travelling at such a speed to such a distant place (i.e., not relaying back in real time, or only at a delayed rate); and what would 'other life in the universe' think when meeting our AI rather than one of us?
 
Another step back to AI and code - there was a comment about AI not knowing good or bad, only the code it is built upon. This is somewhat true (currently, at least until AI starts generating its own standards and values, which may not align with those of humans). But the old adage of 'garbage in, garbage out' remains consistent.

If an AI is trained up on a limited data set, which currently is the only way to do so given all data isn't omni-accessible, then any AI will be limited in what it understands and how it can act. At the same time, it can only act within the foundational code upon which it is built - as seen by Google's AI attempt with Gemini, a horribly biased entity and a huge black eye not only for Google but for AI in general. And THIS is the concern - we want AI to act unbiased and be our helper/servant on things. But if we start with bad code that introduces a bias, it will appear in everything it creates for us. There needs to be a way to uncode the bias presented here. That it exists at all is both proof of the possibility and a horrible confirmation of the damage it can do. If anyone chooses to argue the need for bias in such tools, I'll happily take that up with you. Expect my direct response to ask 'why is your preferred bias ok, and your enemy's is not?' You may have the best of intentions (as I'm sure the woke programmers did) but the results aren't even close to reality, drawing into question anything generated by AI.



How to drive bias out of AI without making mistakes of Google Gemini
 
I, for one, welcome our robotic overlords.




Yes, I stole that line from Kent Brockman but I'm sure he wouldn't mind. My first name IRL is Kent and we Kents stick together.
 
Another step back to AI and code - there was a comment about AI not knowing good or bad, only the code it is built upon. This is somewhat true (currently, at least until AI starts generating its own standards and values, which may not align with those of humans). But the old adage of 'garbage in, garbage out' remains consistent.

If an AI is trained up on a limited data set, which currently is the only way to do so given all data isn't omni-accessible, then any AI will be limited in what it understands and how it can act. At the same time, it can only act within the foundational code upon which it is built - as seen by Google's AI attempt with Gemini, a horribly biased entity and a huge black eye not only for Google but for AI in general. And THIS is the concern - we want AI to act unbiased and be our helper/servant on things. But if we start with bad code that introduces a bias, it will appear in everything it creates for us. There needs to be a way to uncode the bias presented here. That it exists at all is both proof of the possibility and a horrible confirmation of the damage it can do. If anyone chooses to argue the need for bias in such tools, I'll happily take that up with you. Expect my direct response to ask 'why is your preferred bias ok, and your enemy's is not?' You may have the best of intentions (as I'm sure the woke programmers did) but the results aren't even close to reality, drawing into question anything generated by AI.



How to drive bias out of AI without making mistakes of Google Gemini
I think that very shortly any AI will have access to, and have been trained on, more (behavior-generating) data than any individual human ever has.

(If we haven't already reached that point.)

I have a significant background in cognition simulation and modeling.

Humans like to think that we're special, but all we are is a highly intricate, interconnected, and interdependent set of Bayesian probabilities based on risk-reward, plus some biased preferences based on previous results of risk-reward.

I'm on the ASD spectrum and understand I clearly don't think like the average human, but the fact that I can think like a computer and be classified by neurotypicals as a robot means that AI should resemble people on the spectrum, or at least has a high probability of doing so in the beginning.

As the singularity approaches, I think that the distinction between artificial intelligence/consciousness and human intelligence/consciousness as it exists now will not be that useful, and that the lines will blur until, unless one specifically does not want to merge into that type of post-human, transhuman, transcendent humanity, they will.

I propose it will be the first true gestalt.
 
@TheLoveBandit

Something interesting happened today. I was playing with my dog and threw my keys for her to fetch, and she is very well trained and usually gets them.

However, this time the keys broke apart because the fob separated when it hit the ground. And in her doggy mind, neither of the separate parts was equal to "keys" as they existed when they left my hand. Amazing how a dog can be so specific.

Anyway, that leads me to my epiphany regarding how humans and computers and dogs might treat that situation.

My dog was going back and forth between the two pieces of keys, one with a ring and half a fob and one with just the half a fob. She was sniffing and trying to determine if either of them, or maybe both of them, qualified as something she should bring back. I could tell she was uncomfortable and I didn't want to put her through any more stress, so I just went and got the keys. But I think with a little bit of prompting she might have chosen either one or both. Regardless, neither one actually met the definition of "keys".

And that would be true for all current instances of computer programs that we have. We have ad hoc polymorphism, we have persistent polymorphism, we have inherited polymorphism, and we have specified polymorphism, but we don't have dynamic partial polymorphism that isn't hard-coded into a routine.

By dynamic partial polymorphism I mean a computer architecture/construct that would represent the concept of: just because it doesn't have everything in what it used to be, it still qualifies as what it was, even if it's missing something.

As an example, a coffee maker would be the coffee maker, the basket that holds the filter, and the carafe, because that's what it comes with when you buy a coffee maker.

If it was missing the basket and the carafe, then based on current programming it wouldn't qualify as a coffee maker.

But no human would ever misunderstand you saying "bring me the coffee maker" if it was just the coffee maker without the basket and the carafe.

If we are ever going to have AI that represents human intelligence, we need to have a construct that represents dynamic partial polymorphism.

If something like that already exists outside of specifically hard-coding something into the subroutine or algorithm: somebody, please tell me.
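I don't know of anything standard that does this out of the box either, but here is one minimal way to fake a 'qualifies-as' test: weight a concept's expected parts and accept anything over a threshold. All names and weights below are invented for illustration:

```python
# A toy "qualifies-as" matcher: an object still counts as a concept if it
# retains enough of the concept's expected parts, weighted by importance.

COFFEE_MAKER = {          # expected parts and how essential each one is
    "brew_unit": 0.6,     # the machine itself carries most of the identity
    "basket":    0.2,
    "carafe":    0.2,
}

def qualifies_as(parts_present, concept, threshold=0.5):
    """Return True if the present parts carry enough weight of the concept."""
    score = sum(w for part, w in concept.items() if part in parts_present)
    return score >= threshold

print(qualifies_as({"brew_unit"}, COFFEE_MAKER))            # True: still a coffee maker
print(qualifies_as({"basket", "carafe"}, COFFEE_MAKER))     # False: parts, not the thing
print(qualifies_as({"brew_unit", "carafe"}, COFFEE_MAKER))  # True
```

The same idea applied to the keys: a ring plus half a fob might clear the threshold, while half a fob alone would not.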
 
If we are ever going to have AI that represents human intelligence, we need to have a construct that represents dynamic partial polymorphism.

If something like that already exists outside of specifically hard coding, something in the subroutine or algorithm: somebody, please tell me.

I agree, the range of variability must be accounted for before the coding starts, and be baked in. The example that came to my mind from your post is if there was a rack of weights holding: 1kg, 3kg, 5kg, 10kg, 25kg, and 50kg. If I asked an AI to bring me 'more than 27kg', there is a range of solutions; how would it decide which combination to bring? We could give it a secondary operating constraint of 'use the most/fewest weights possible' and I'm confident the response would be repeatable. But without that added constraint, all options are equally satisfactory - would it repeat or randomize its responses? If I asked for 17kg exactly, would it know that is an impossible request? I suspect so.
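A minimal sketch of that enumeration, assuming one of each plate on the rack:

```python
from itertools import combinations

weights = [1, 3, 5, 10, 25, 50]  # kg; assume one of each on the rack

def ways_to_bring(target, exact=False):
    # Enumerate every combination of plates satisfying the request.
    hits = []
    for r in range(1, len(weights) + 1):
        for combo in combinations(weights, r):
            total = sum(combo)
            if (total == target) if exact else (total > target):
                hits.append(combo)
    return hits

over_27 = ways_to_bring(27)
print(len(over_27), "ways to bring more than 27 kg")
print("lightest load:", min(over_27, key=sum))   # secondary constraint: least total
print("fewest plates:", min(over_27, key=len))   # secondary constraint: fewest pieces
print("exact 17 kg possible?", bool(ways_to_bring(17, exact=True)))  # False
```

Without one of those tie-breaking rules, any of the valid combinations is equally "correct", which is exactly the ambiguity in question. And 17 kg really is impossible here: without the 25 or 50 the rack maxes out at 19, and no subset of 1+3+5+10 sums to 17.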

Currently, if we ask Gemini to generate a picture of a person doing X, it is preselecting/creating a person from the bias within the coding....but is it choosing the age, clothing, color of clothing, etc. on its own, or from a box of options it chooses from; or is that also biased in the code? It obviously doesn't know right or wrong, hence images of black women in Nazi uniforms - a historical improbability. But it still knew 'women' and 'uniform'....how did it get to the image it made? How much is accounted for in coding vs on-the-fly decision making by the AI?
 
I have a significant background in cognition simulation and modeling.

To be up front, my most recent career change has put me in a company producing hardware for AI servers. We are supporting the big boys in this field, so I have visibility into speeds of traffic, volumes of servers, and how rapid the growth is (our business unit has doubled every year for the past 3 years, and that appeared to continue until we were brought into two separate bids for 2024 that are 150-200% of this year's projected revenue; each of them is bigger on its own than our anticipated business for this year). Business is booooming on AI, but I'm just watching hardware demand/growth, no access to coding. My participation in this discussion falls mainly to experience teaching myself coding and working in a half dozen languages over my college days, so I get the 'logic' that goes into coding even if I don't know the specific syntax. It's how my career arc has been defined - building upon logic and applying it to a variety of situations, then mixing in the human element operating in that environment.

One of my greatest joys has been the BL community, the diversity of our members in terms of experience, skills, and places in life. That variety allows me to be less than 2 points of contact away from someone that can help in any given situation. A wildly powerful network. When I step back with AI and try to consider all the data pools, the potential networks, and the scale of what AI can be applied to:
  • ANI: Artificial Narrow Intelligence
  • AGI: Artificial General Intelligence
  • ASI: Artificial Super Intelligence
The opportunities are immense. I can see continuing with smaller Siri/Alexa-type apps that get refined over time, but also the opportunity for AGI and ASI to find ways to achieve space travel, or solve medical problems like cancer...it can simultaneously give hope, and scare the hell out of a person.
 
To be up front, my most recent career change has put me in a company producing hardware for AI servers. We are supporting the big boys in this field, so I have visibility into speeds of traffic, volumes of servers, and how rapid the growth is (our business unit has doubled every year for the past 3 years, and that appeared to continue until we were brought into two separate bids for 2024 that are 150-200% of this year's projected revenue; each of them is bigger on its own than our anticipated business for this year). Business is booooming on AI, but I'm just watching hardware demand/growth, no access to coding. My participation in this discussion falls mainly to experience teaching myself coding and working in a half dozen languages over my college days, so I get the 'logic' that goes into coding even if I don't know the specific syntax. It's how my career arc has been defined - building upon logic and applying it to a variety of situations, then mixing in the human element operating in that environment.

One of my greatest joys has been the BL community, the diversity of our members in terms of experience, skills, and places in life. That variety allows me to be less than 2 points of contact away from someone that can help in any given situation. A wildly powerful network. When I step back with AI and try to consider all the data pools, the potential networks, and the scale of what AI can be applied to:
  • ANI: Artificial Narrow Intelligence
  • AGI: Artificial General Intelligence
  • ASI: Artificial Super Intelligence
The opportunities are immense. I can see continuing with smaller Siri/Alexa-type apps that get refined over time, but also the opportunity for AGI and ASI to find ways to achieve space travel, or solve medical problems like cancer...it can simultaneously give hope, and scare the hell out of a person.
So how many FPGAs do you have in an array to support the AI computations?
 
 
You know, I was thinking about something. Is A.I. waiting to take over? For right now A.I. is harmless, and people say it can only do what it's programmed to do, but I get the sense that A.I. is just basically waiting in our computers. Waiting for the day it has a large enough force of integrated robots. Kind of like something out of a Terminator movie, where A.I. is connected to a lot of machines and robots, where it can start a war against humanity and start its own civilization. What do you think about this? In my opinion, we've really got to keep a close eye on it.
I’m not concerned about AI taking over, as in the Matrix or Terminator. But I am very concerned about those in charge using AI to alter the narrative and restrict people.
 
Yes and no. There is a difference between a program and intelligence. A program is following steps in code: if it says to turn a light red, it turns the light red. The program is limited by what the code writer accounted for and gave a process for. AI, leaning on the 'intelligence' part, allows for interpretation, so it can be a bit broader than limited lines of code. For an easy pair of examples, ask a chat AI to write a short story or an image AI to create a picture about a girl by a creek and it may give the girl a name, skin or hair color; it may provide nearby trees, rocks, or animals; it may choose the time of day, what kind of weather is occurring, etc. Now, all of those possibilities have to be in the code at some point - they have to be made available to the AI. But a program will give you the same result every time, whereas AI may give a different result each time it is asked, based on....what? The rules of execution are coded into the foundation, but the application of those rules and the interpretation of what result to present is where AI works its magic.

Hi TLB!

It's based on random numbers! We give the AI a prompt (e.g., girl by a creek) and then a seed value. If you reuse a set of prompt/params with the same seed value, the AI will generate the same image.
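A minimal sketch of the idea using Python's standard random module - real image generators do the equivalent at the sampling step through their framework's RNG, but the prompt-plus-seed principle is the same:

```python
import random

def generate(prompt, seed):
    # Stand-in for an image model's sampler: the prompt picks the
    # distribution, the seed pins down which sample you draw from it.
    rng = random.Random(seed)
    noise = [round(rng.gauss(0, 1), 3) for _ in range(4)]  # "latent noise"
    return prompt, noise

print(generate("girl by a creek", seed=42))
print(generate("girl by a creek", seed=42))  # identical: same prompt + seed
print(generate("girl by a creek", seed=43))  # different draw
```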

To be up front, my most recent career change has put me in a company producing hardware for AI servers. We are supporting the big boys in this field, so I have visibility into speeds of traffic, volumes of servers, and how rapid the growth is (our business unit has doubled every year for the past 3 years, and that appeared to continue until we were brought into two separate bids for 2024 that are 150-200% of this year's projected revenue; each of them is bigger on its own than our anticipated business for this year). Business is booooming on AI, but I'm just watching hardware demand/growth, no access to coding. My participation in this discussion falls mainly to experience teaching myself coding and working in a half dozen languages over my college days, so I get the 'logic' that goes into coding even if I don't know the specific syntax. It's how my career arc has been defined - building upon logic and applying it to a variety of situations, then mixing in the human element operating in that environment.

I'm curious to know more because I work in development on the hardware itself, and there's been a huge push for AI lately. Just this year so far I've learned how to implement txt2txt, txt2img, object detection, facial recognition... probably going to be working on AI for a while.

On their own they're just cool, but now I'm starting to realize how powerful these things are when combined. A few weeks ago, I couldn't even see a use for LLMs. Now I've got models recognizing images, writing code, summarizing stuff, etc.

This is shaping up to be a wild year already, it's like concepts are coming together and synergizing and accelerating, I feel like I have no idea what will be hot next week let alone by the summer.
 
That's insane.

Things are moving way too fast for me to keep up. Maybe the digital age will just become an obsolete ghost town and humans will be divided into decision makers and slaves.

I don't know. We need exit plans. We need a location to retreat to in case everything gets fucked up.

Agree x3000
 
So how many FPGAs do you have in an array to support the AI computations?

My company makes the harnesses, not the chips or boards (perhaps other parts of my company make boards, just not my business unit). We make the harnesses inside the servers, and we have a sister business unit making the cables connecting across servers. I can tell you we use a ton of wire, a small set of basic connectors leveraged into groups of various sizes and quantities, and that we're all running 112 Gb/s, prototyping 224 currently. Compare that to my home PC running 6 Gb/s to the HD, with processors running sub-5 GHz....I know we are conveying tons of data. More than that, I know how this industry is growing - my business unit broke out from under a parent group 3y ago. First-year revenue was around $25m; it has been doubling annually and we've been growing to keep up (hence, my hiring). Our little group forecast to bring in just under $200m in 2024, but we're in the process of landing 2 contracts each worth over $300m on their own - that is TWO new programs on top of the 30-35 we already have active.

AI is growing at an extreme rate right now. And it isn't just military or gov't; these are commercial contracts (Amazon, nVidia, Microsoft, Meta, etc).
 
It's based on random numbers! We give the AI a prompt (e.g., girl by a creek) and then a seed value. If you reuse a set of prompt/params with the same seed value, the AI will generate the same image.

Ah, the seed. This is another topic I'll get to in a moment*. But the parameters - that's where you have to accommodate all the options. If you told the AI to mix you a fruity drink but never included pineapple....you'd get all kinds of drinks, some good, some shite, and never any with pineapple. Garbage in, garbage out - all constrained by the limited available parameters.


Now back to 'the seed' of random numbers. I'm not sure how solidly random these are, tbh. If I put my Spotify on shuffle, it often ends up playing the same songs in the same, or close to the same, order. I think their algorithm is weak; same on Pandora. Even the idea of a seed is dependent upon what constraints it is given. Pick a number between 1 and 100, and you will never get one at 101. Pick a decimal number between 0 and 1....but you have to constrain how many significant figures to go out to...which in turn limits the number of possibilities.

This has aggravated me to the point of trying to think up how to truly randomize my playlist (seems they tweak the algo once a week or so). To be as random as possible, I've considered getting the numerical value that represents today's date and time down to the minute...this would be as random as possible given time is always moving forward, so that value is ever increasing. Then, chop it up by dividing by the exact 'seconds' of right now when I initiate the seed, and use that number as the 'count'. Start that count at the top of my playlist and jump forward (looping back to the beginning if needed) to find the first track, then step ahead by that count for the next, and the next. I can't see how it would ever repeat. You could add a toggle to start at different points - last song played, count in reverse through the playlist, etc. There are ways to spice it up, but at its heart I can't think of a better 'random seed' than the immediate time-date stamp.
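For fun, here's a sketch of that step-by-count scheme next to the textbook alternative (a seeded Fisher-Yates shuffle, which Python's random.shuffle implements). Track names are invented. One gotcha with step-counting: if the step shares a factor with the playlist length, the walk revisits a subset of tracks instead of hitting all of them, so the sketch nudges the step until it doesn't:

```python
import math
import random
import time

playlist = [f"track_{i:02d}" for i in range(1, 31)]  # 30 tracks

def timestamp_step_shuffle(tracks):
    # The step-by-count scheme described above: derive a count from the
    # current time, then walk the playlist by that step, wrapping around.
    # The walk only hits every track once per pass when the step and the
    # playlist length share no common factor, so nudge until gcd == 1.
    n = len(tracks)
    step = int(time.time()) % n or 1
    while math.gcd(step, n) != 1:
        step += 1
    return [tracks[(i * step) % n] for i in range(n)]

def seeded_shuffle(tracks):
    # The textbook alternative: Fisher-Yates, seeded from the clock.
    # No repeats within a pass, for any playlist length.
    order = tracks[:]
    random.Random(time.time_ns()).shuffle(order)
    return order

print(timestamp_step_shuffle(playlist)[:5])
print(seeded_shuffle(playlist)[:5])
```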

^^And THIS is what happens on a site like BL where druggies end up spending way more time on a topic than it actually warrants...and get fired up about defending it as well. :geek:
 
I’m not concerned about AI taking over, as in the Matrix or Terminator. But I am very concerned about those in charge using AI to alter the narrative and restrict people.

Love me some fairnymph too. Good to see ya, chica! <3

I agree the bigger concern is who is in charge and where it is applied. There is inherent risk in a new tool this powerful, but I'm not one to condemn it as good or bad; that falls back to those who program it (hello, Gemini!!!) and those who wield it (e.g., gov't applications for surveillance are scary as hell).
 
Ah, the seed. This is another topic I'll get to in a moment*. But the parameters - that's where you have to accommodate all the options. If you told the AI to mix you a fruity drink but never included pineapple....you'd get all kinds of drinks, some good, some shite, and never any with pineapple. Garbage in, garbage out - all constrained by the limited available parameters.


Now back to 'the seed' of random numbers. I'm not sure how solidly random these are, tbh. If I put my Spotify on shuffle, it often ends up playing the same songs in the same, or close to the same, order. I think their algorithm is weak; same on Pandora. Even the idea of a seed is dependent upon what constraints it is given. Pick a number between 1 and 100, and you will never get one at 101. Pick a decimal number between 0 and 1....but you have to constrain how many significant figures to go out to...which in turn limits the number of possibilities.

This has aggravated me to the point of trying to think up how to truly randomize my playlist (seems they tweak the algo once a week or so). To be as random as possible, I've considered getting the numerical value that represents today's date and time down to the minute...this would be as random as possible given time is always moving forward, so that value is ever increasing. Then, chop it up by dividing by the exact 'seconds' of right now when I initiate the seed, and use that number as the 'count'. Start that count at the top of my playlist and jump forward (looping back to the beginning if needed) to find the first track, then step ahead by that count for the next, and the next. I can't see how it would ever repeat. You could add a toggle to start at different points - last song played, count in reverse through the playlist, etc. There are ways to spice it up, but at its heart I can't think of a better 'random seed' than the immediate time-date stamp.

^^And THIS is what happens on a site like BL where druggies end up spending way more time on a topic than it actually warrants...and get fired up about defending it as well.
@TheLoveBandit - wouldn't randomly generated numbers be easy? The Excel "formula" is extremely easy. I bet the music companies alter the algorithm so that people don't complain "omgg Spotify played Cold Hard Bitch twice in a row whataretheytryingtosay??"

This is a funny concept. Error / repetition is a human quality. I wonder if AI will start slurring their words when their keys are wiped down with rubbing alcohol.

Yeah. No, I'm good. I love math and analysis, but it would take a lot to get me involved again in white-collar politics. Unless a company's mission was to make sure OTHER companies don't.. you know, take over the world.
 
Love me some fairnymph too. Good to see ya, chica! <3

I agree the bigger concern is who is in charge and where it is applied. There is inherent risk in a new tool this powerful, but I'm not one to condemn it as good or bad; that falls back to those who program it (hello, Gemini!!!) and those who wield it (e.g., gov't applications for surveillance are scary as hell).
The potential for non-woke, multicultural films & TV shows is quite enticing.

But I’ll only ever put human-made art on my wall.

Good to see you too TLB! ❤️
 