AI slop, or art, or science?

LLMs are complex things, dependent on training data. Sure, lots of them are woke, just like lots of humans are woke, especially after they get bad training data in college/university. Do not underestimate this technology. There is a strong degree of democratisation that AI will facilitate, in many fields. Eventually, I’m sure I’ll be able to feed it a book and have it spit out a movie, and where it falls short, well, I’ll tweak it a bit. Soon enough, we’ll all need an AI to help us sort the wheat from the chaff, as far as entertainment is concerned.

As far as coding, writing scientific papers, analysing cosmological data and particle accelerator data and so on are concerned, it’s already right up there. The idea that “it’s only as good as its worst programmer” is total BS. It is a very, very powerful tool. Ignore it to your detriment.

“The responsible ones learn to use the tools before the tools learn to use them.”

The whole point of LLMs is that there is zero effort in the tool, that it “democratizes” the output. The value of the content, no matter what it is, accelerates to zero. If anyone can create by typing in a prompt, then it isn’t something that has time or skill sunk into it.

There are numerous other problems here: the major players are actually working with global governments to use their datacenters for surveillance (which is why Iran targeted AWS datacenters), and you don’t actually own any IP that the slop machine creates for you.

The biggest one for me?

These:

  1. Using the tool makes us actively dumber. This is proven by numerous studies.
  2. Using the tool makes us less skilled. Mo Bitar talks about this on YT: how his mind has adjusted to doing it the lazy way and asking the LLM to do it for him. Now he and other coders aren’t flexing the brain muscle to write their own code, and that portion of their brain atrophies. Numerous studies also prove this.
  3. Using the tool is leading to AI psychosis. No one knows how widespread this is, but again, numerous studies are showing this to be a problem the AI companies are just glossing over.
  4. There’s no real path for people to pay their bills if they are replaced by the slop machines. UBI is insane; it would create hyperinflation so fast it would make the Weimar Republic look sane.
  5. Long term, if the AI does what the grifters claim? Prompting will be a career that lasts at most six months before you are replaced by a self-generating agent.

Incorrect. A well-made LLM can provide very valuable data.

Also, creating good prompts that can give you good data or whatever is also a skill.

That’s like saying “since apples grow on trees, and it takes no effort or skill to pick one, then they have no value.”

  1. Functionally it is still a GIGO machine, but even worse than your calculator: they all lie. Every single model will lie to you to tell you what you want to hear. I’ve witnessed this myself countless times in conversations on X: depending on who instantiates the conversation, you can change the output, and it remains steadfast in that mode once started.
  2. They are also RNG language machines. You can actually make the output more or less deterministic by changing the sampling seed (and temperature) used to generate the response.
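The determinism point can be illustrated with a toy sampler. This is not any real model’s API, just a minimal stdlib sketch of temperature-scaled random sampling (the distribution, function names, and parameters are all my own invention): the same seed reproduces the same token sequence exactly, while a lower temperature concentrates probability on the most likely token.

```python
import random

# Toy next-token distribution; a real LLM produces one of these per step.
PROBS = {"the": 0.5, "a": 0.3, "an": 0.2}

def sample_token(probs, temperature, rng):
    """Sample one token; lower temperature sharpens the distribution."""
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    r = rng.random() * total
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r < cumulative:
            return tok
    return tok  # guard against floating-point rounding at the boundary

def generate(seed, temperature, length=10):
    """Generate a token sequence from a fixed seed: fully reproducible."""
    rng = random.Random(seed)
    return [sample_token(PROBS, temperature, rng) for _ in range(length)]

# Same seed, same settings: identical output on every run.
assert generate(seed=42, temperature=0.8) == generate(seed=42, temperature=0.8)
```

Real serving stacks expose similar knobs (a sampling seed and temperature), though batching and floating-point nondeterminism on GPUs mean a fixed seed is only reliably reproducible on identical hardware and software.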

You can believe that if you want; using an LLM is no different from pulling a slot machine that is sycophantic and rewards you with dopamine.

No, that’s nothing like saying that, because what you’re comparing it to still takes actual effort. You have to get out of your house, find an apple tree, and pick the apples off it.

Animals can pick apples, or forage for food, but they can’t use an AI.

It takes more skill and intelligence to use an AI than it does to pick an apple, by a long shot. This should not even be under discussion.

Sure, a poorly trained LLM will often be wrong. A poorly trained human will often be wrong. What of it? Basing your opinion of AI on public offerings like Grok or ChatGPT is akin to basing your opinion of YouTube content on its trending list.

Most people cannot calculate roots of numbers using a pen and paper; in fact, I’d say most people under 30 cannot even do long division. Again, what of it? Many of those people can still solve differential equations.
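For what it’s worth, the pen-and-paper root-taking mentioned above is essentially the same iteration a calculator runs under the hood. A minimal Newton’s-method sketch (function name and iteration count are my own choices, not anything from the thread):

```python
def nth_root(x, n, iterations=60):
    """Approximate the n-th root of a positive x via Newton's method."""
    y = max(x, 1.0)  # crude starting guess
    for _ in range(iterations):
        # Newton step for f(y) = y**n - x:  y <- y - f(y) / f'(y)
        y = ((n - 1) * y + x / y ** (n - 1)) / n
    return y
```

Each step roughly doubles the number of correct digits, which is why a handful of iterations by hand was once a standard school exercise.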

Do you think humanity is better or worse because of this? It is literally dumbing us down, and yes, I include every bit of it in that.

I was likely in the last generation that had to memorize phone numbers because my phone couldn’t do that. Because I did that it exercised that portion of my brain and I can still remember numbers better than the average person.

The same is now going to be true of those who use AI to get work done and those who don’t. Those who do are dumbing their brains down and won’t be able to solve problems without typing them into a prompt, and then they’ll just accept the answer, not even possessing the discernment that comes from the pain of figuring it out on your own.

THAT is likely why AI psychosis is so rampant.

That’s just you making a value judgement.

Many people said similar things when calculators replaced slide rules and log tables, or when spreadsheet software replaced rooms full of number crunchers.

AI is a very powerful, useful and capable tool. If some people abuse it and it wrecks their tiny little minds, that’s on them, not the AI, in much the same way as you shouldn’t blame a hammer for someone hitting their thumb with it.

Then you have these people dismissing it as a “bubble.” Again, so what if it is? The .com bubble of the late 90s was a bubble, and we still have the internet decades after that bubble burst.

AI is here to stay, simply because it is so useful, powerful and capable.

It’s changing things, sure. So did cars, computers, calculators, assembly lines and so on. Now all these white-collar jobs like graphic designers and marketers, heck, soon enough even actors, will fall to AI, just like buggy-whip makers fell to cars.

Sure, like any powerful tool, it’s dangerous. So what? So are chainsaws. It takes skill to safely and effectively use a chainsaw, and the same is true of AIs.

You can still buy a horse, a buggy-whip or a slide rule if ya like, but no amount of complaining is gonna put the genie back in the bottle.

I really don’t care if some weak-minded morons get all messed up playing with AI, just like I don’t care if a drunk chops his arm off when he decides that juggling chainsaws looks cool. It’s all on the morons who get hurt, not on the tool they used to hurt themselves.

We all have to operate on value judgements, right?

If you have all the data and still accept it, then I’ve said my opinion and you’ve said yours, both based on our values.