AI is only as good as the worst programmer, just as a poorly designed engine will deliver poor performance, or a poorly designed bridge will have a much shorter service life.
No.
It’s only as good as the worst training data it is fed. They’re kind of like humans in that respect.
LLMs and Stable Diffusion use algorithms that work fine when given good training data, although those algorithms are very complex.
For some odd reason, people seem unable to discriminate between the algorithm itself and the training data these algorithms are trained on.
As an analogy, take a normal human. Send him to a liberal arts university, and watch his behaviour after a few years of bad training data. He’ll come out with blue hair, a nose ring, and waving some trans or free-Palestine flag. This is an indictment of the training data, not of all humans, and in exactly the same way, Google’s and other companies’ LLMs are an indictment of the training data used to train them, not of the algorithms themselves.
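To make the algorithm-versus-data distinction concrete, here is a toy sketch in Python (made-up corpora, a deliberately tiny bigram generator, nothing like a real LLM’s internals): the identical training and generation code produces coherent or garbled output depending entirely on what it was fed.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Count word-to-next-word transitions. The 'algorithm' is
    identical no matter what corpus it is handed."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model: dict, start: str, length: int, seed: int = 0) -> str:
    """Walk the transition table, picking each next word at random
    from what followed the current word in the training data."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Same algorithm, two different (made-up) training sets:
good_data = "the engine runs well because the engine was built well"
bad_data  = "the engine engine the runs runs was was because built"

good_model = train_bigram_model(good_data)
bad_model  = train_bigram_model(bad_data)

# Each output mirrors the statistics of its own corpus,
# even though not one line of the code changed.
print(generate(good_model, "the", 5))
print(generate(bad_model, "the", 5))
```

Blaming the bigram code for the second output would miss the point; only the data differs between the two runs.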
If you feed a well-designed engine bad gas, and use olive oil as a lubricant, it will perform poorly. If you take a well-designed bridge, and cheap out on the concrete and rebar, it will fail sooner.
I will also add this:
Take a chess-playing software program. Say it can win 70% of the time, playing against Grandmasters. Sure, people can draw attention to when it loses, and it will sometimes lose in some pretty stupid ways, but using those losses as a pretext to call the software junk is disingenuous.
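The statistical point can be made concrete. Below is a minimal sketch (pure Python, hypothetical numbers) of judging a program by its aggregate record: at a 70% win rate over a reasonable sample, a batch of losses, some of them ugly, is exactly what the estimate predicts, so pointing at individual losses says little.

```python
import math

def win_rate_ci(wins: int, games: int, z: float = 1.96):
    """Point estimate and normal-approximation 95% confidence
    interval for a win rate observed over `games` games."""
    p = wins / games
    half = z * math.sqrt(p * (1 - p) / games)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical record: 70 wins in 100 games against strong opposition.
p, low, high = win_rate_ci(70, 100)
print(f"win rate {p:.0%}, 95% CI [{low:.0%}, {high:.0%}]")
# The 30 losses are fully consistent with a genuinely strong program;
# cherry-picking them does not move the estimate.
```

The honest question is whether the aggregate record holds up, not whether a loss can be found.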
Now, the Amazon case. Coders are being laid off in favour of using AI to increase the productivity of fewer coders. I’m sure the bean-counters at Amazon have done some form of cost-benefit analysis and decided the gamble was worth it. Nobody here, to my knowledge, has done that analysis to compare the cost of this outage to the money saved by using fewer coders. Nobody here can predict the future. Sure, there’s an AI bubble, and 90% of the investors may wind up losing their shirts, kind of like the dot-com bubble some time ago. Those who don’t lose their shirts and come out ahead will go on, and the market will adapt, just like what happened when the dot-com bubble burst. The thing is, if you don’t play, you are guaranteed not to win, and that guarantees that someone else will.
As far as value judgements are concerned, asking things like “is it a net benefit to humanity?” is just so much meaningless noise. Are nukes a net benefit, or guns, cars, or social media or the internet? Opinions, nothing more. These things exist, they have their benefits and drawbacks, and now AI exists as well, and it too has benefits and drawbacks.
Some anti-gun nuts think guns have only drawbacks. Some anti-AI nuts think AI has only drawbacks. These are just opinions based purely on emotion or ideology, disconnected from physical reality.
I spent enough time (almost 25 years) in tool and die shops to understand that if you have a poor designer cranking out poor tool designs, the guys in the shop with experience (the old hands, and myself as the young kid back then) spend lots of time going back to the office, blueprints in hand, to point out why this can’t be built, or why, if it is built that way, it will fail quite spectacularly, assuming it can even be built. The designer’s choices are either to listen, get educated by those with hands-on experience, and fix the problem, or to ignore the shop guys and end up getting fired.
Any problems with AI coding, and there are many, fall back on those who worked on it. The algorithm can’t fix itself. Only those who worked on it can fix the code, or turn the project over to someone else more qualified to make the changes.
If you dump bad gas into a good engine, yes, it probably won’t run unless it was built by Rumely or Kahlenberg, two manufacturers who built reputations for bullet-proof engines designed to run on very low-quality fuel with water mixed in. They were bullet-proof too. MacArthur praised the Kahlenberg for being the only engine to get shot full of holes by the Japanese and still keep on running when evacuating the Philippines. A reliable but slow-running design. The Rumely was a very heavy and quite slow tractor engine that would run on any oil, so long as the oil would flow freely and water was added to the mix. A few Rumely engines did end up with the occasional hole shot into the block by stray ammunition during deer season, and they still ran, oil optional. A more recent example is the Ford i6-300, good for YouTube videos showing owners trying to destroy it and failing at the intended task. Garbage gas? Adjust the carburetor. Oil? Optional. Coolant? What is that?
For an example of a very poorly designed engine, there is the aluminum-block Chevy Vega. All but guaranteed to warp, crack, or burn oil by 50,000 miles, no matter how good the fuel, oil, and coolant were. The heads warped, gaskets blew, cylinders cracked or warped, and pistons and valves would come into conflict with each other. A textbook example of “what were they smoking when they designed it, and why were they not sharing?”
If the AI is poorly written by programmers or is written by programmers following an agenda, you will receive poor AI results or an AI that follows its prescribed agenda. Computers can’t think for themselves. At the end of the day, everything comes down to ones and zeros.
But it is possible to convince enough people that AI is able to “think” and create something out of nothing. The most likely candidates for this are those who have been indoctrinated to not think for themselves, following whatever their handlers tell them they must think, no matter how insane the instructions are.
Now as for Amazon, the question nobody is asking is, “how many coders are there occupying space, not being productive?” If Amazon is anything like the American railroads were back in the 1970s, my answer would be “most of them.” I remember trains with five-man crews doing the job of two-man crews because the union imposed that situation while the truck drivers “stole their freight.” The train crew consisted of the engineer (required to pull the throttle and brake levers), the fireman (what boiler needs firing on a diesel-electric?), the head-end brakeman (not needed since Westinghouse invented air brakes a century before), the rear-end brakeman (Westinghouse again), and the conductor (the man in charge, who does the paperwork, double-checks the line-side signals and switches, and maintains radio communication with dispatch when the engineer is busy). It was known as featherbedding. Finally the unions caved on their rules and allowed two-man crews on most trains. It was that or nobody would have a job.
I really don’t care what the industry is because all companies accumulate dead weight. Some faster than others and in different places. It even happened where I worked in the trucking industry. Truck drivers got paid by the load. They were incentivized to be very productive. No load = no pay. If the load didn’t arrive on time, there had better be a good reason, such as tire failure or weather. Mechanics? They caught hell fast if a repair took too long with no good reason. It was the office where the dead weight accumulated. Too many dispatchers, middle management, and so on.
I got involved once in the middle of one of those mass firings. The dispatcher was shorting loads for drivers she didn’t like because we had the wrong skin color. One day she was told to tell us that we were put on probation for not hauling enough loads and turned over to a different dispatcher. Three months later, the data was collected. The drivers “on probation” were productive. She was gone so fast she didn’t know what happened. That morning I received a text message in the truck, listing all who were fired in that office. Half of that office was gone for assorted reasons.
So for Amazon, AI could just be an excuse to eliminate some dead weight. The larger the company, the easier it is for dead weight to accumulate. The trucking company where I worked was in the top five for number of drivers and loads hauled, definitely very large. Right now, Amazon is the 800-pound gorilla in the room. The bean counters could be saying one thing, but inside the bean-counting office, only they know what the real reason is.
As for the gun grabbers, they function under the delusion that a gun can get up off the table, load itself, and start shooting randomly, for no reason, so let’s blame the gun owners who did nothing. It’s the same thinking as when a drunk driver kills someone while driving home, so let’s take away everyone’s automobiles and blame the automobiles. You can’t argue from reality when confronting a gun grabber.
AI can work once all the bugs are worked out. Are we there yet? No. Will we be there in the future? Perhaps. It depends on cleaning up the code and those working on it. OpenAI is still cranking out Woke Agenda compliant results because those who worked on that project follow the Woke Agenda. So is that AI good for anything? Most of the time, no. It all goes back to those writing the code.
Well, yes and no: the whole point of “AI” (which LLMs are not) is that it “democratizes” the process, making it possible for anyone to generate code, music, etc.
The fact is, LLMs are not reliable; they are RNG language generators, and if you just use the first draft, it has LOADS of errors.
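The “RNG” characterization has a concrete basis: a language model’s decoder literally draws each next token from a probability distribution. A toy sketch (pure Python, made-up token scores, a simplified temperature-sampling step, not any real model’s API) of where the randomness enters:

```python
import math
import random

def sample_next_token(logits: dict, temperature: float,
                      rng: random.Random) -> str:
    """Softmax over the scores, then one random draw: the same
    prompt can yield different tokens on different runs."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    peak = max(scaled.values())
    weights = {tok: math.exp(v - peak) for tok, v in scaled.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for token, weight in weights.items():
        r -= weight
        if r <= 0:
            return token
    return token  # fallback for floating-point rounding

# Made-up scores a model might assign to candidate next tokens:
logits = {"correct_code": 2.0, "plausible_bug": 1.5, "nonsense": 0.2}
rng = random.Random()
draws = [sample_next_token(logits, 0.8, rng) for _ in range(1000)]
# Most draws pick the top-scoring token, but the lower-scoring ones
# come up regularly, which is why a first draft needs review.
```

Lowering the temperature concentrates the draws on the top token but never makes the process deterministic reasoning; it is still a weighted dice roll.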
14,000 people have contributed code to the Linux kernel.
I guess it’s only as good as the worst coder among them; all the same, it works, and generally does what it’s supposed to do.
LLMs and Stable Diffusion work; they do exactly what they are supposed to do, just like a spreadsheet or a sorting algorithm.
I just don’t get all these people with such an issue over AI.
“Artificial flavour.” OK.
“Artificial plants.” OK.
“Artificial limbs.” OK.
“Artificial sunlight.” OK.
“Artificial fireplace.” OK.
“Artificial fish tank.” OK.
“Artificial intelligence.” Reeeeeeeeeeee!
Permit me for a moment…
/* smug mode on */
It occurs to me that many of the stories I have seen in various locations, about who is being laid off or having other “issues” as a result of AI, concern the very same people who were telling coal miners and others a bunch of years ago to “Learn to code…”
Welp, the coal miners and oil drillers are still working…
all to power those great big data centers that run AI models…
Karma.
/* smug mode off */
1st it was “Let them eat cake”, then it was “Learn to code.”, now it is us saying to THEM “Learn to mine coal.” or “the Hollywood actors, writers, and all the graphics designers and urinalists yearn for the mines.”
You can add truck drivers and farmers to the list being told to learn to code. About a decade ago there were massive predictions of a future with self-driving trucks and self-driving tractors and combines. “It is only a few years away.”
Here we are, a decade later. What changed? Trucks are being driven by illegals, and very dangerously. Self-driving trucks still require several vans full of technicians and equipment to monitor and guide them. “But in a few more years it will happen.” Wash. Rinse. Repeat.
Farmers are still using tractors and combines that require constant attention, with one difference: they are now demanding the right to repair when things break, and the right to replace the poorly coded software when it doesn’t work as promised. Yes, open-source software for tractors and combines is something farmers are demanding, to replace the very expensive, poorly supported closed-source software. Perhaps some of those who were laid off could write new open-source software for tractors and combines, and then figure out how to get it installed on the closed computers inside those machines.
For those unfamiliar, tractor corporations are like Apple. You must go to them to make an appointment to send out their technician when something breaks, resulting in downtime you can’t afford because the crops must be planted and harvested when the weather is good and in the proper season. Spare tractors and combines as a general rule don’t exist in most locations and dealers know they can charge a premium in season for that new one sitting in the dealer’s yard.
So instead of complaining about losing their jobs, perhaps those who code should see this as an opportunity to diversify into places where their skills are in demand.
Louis Rossmann has done a fair bit of work fighting John Deere’s unethical practices.
Up here in the Worker’s Paradise of Canada, most of our truck drivers have been from Pakistan or India for a long time.
Over 10 years ago, I worked in a scrapyard for big rigs, and almost all of the customers were from Pakistan or India. I remember reading some trucker trade magazines that we got there, some article about how our “Conservative” gov’t would spend tax $$ to “attract women to the trucking industry.”
Back in 2008, driving across Saskatchewan heading for Grande Prairie, Alberta, I had two try to run me off the Yellowhead Highway. Two days later, another one tried the same north of Kamloops when I was taking a load of grass seed to Salem, Oregon. The last one ended wheels up between the trees. All three were skateboard B-trains.
The company I worked for officially did not refuse to hire them, but all new hires had to pass the company’s road test and yard test before being hired, which was more difficult than the one required by the various state governments. Not sure if that policy is still in place. Now that corporate investors own 49%, there is a possibility that such a policy was thrown out as part of the plan to pay drivers less per mile. I have been gone for almost six years, so I have no clue about that company’s latest hiring practices. But they do put more and more containers on the railroads, which tells me good drivers are in short supply.
The feds did the same down here, during the GW Bush years. Did it work? Of course not. Women want to be home every evening and refuse to work the hours required to get the load delivered on time.
There was one day on a “local” dedicated account where I was assigned for two years. I was in the yard securing the load and inspecting the trailer. In comes a woman driver, and she starts screaming at me for being a white man.
Turns out she just got fired because she did three late loads in a row. As a general rule, unless told otherwise, the first store delivery is at 3 am. I would arrive at 5 pm the day before. That way I would get my 10 hour legal break in and start with a clean log for the day. I found out later that she arrived at 10 am each day to the first store. By then, the unloading crew at the store went home and store management is very upset. By strike three, store chain headquarters was becoming upset. But somehow, this is my fault because I am a white guy?
Then women wonder why single white men want nothing to do with them. But then so do ultra-religious men and women who make it their life’s work to go around condemning to hell any white man who is still single once past age 25. Perhaps there is a connection between the two?
Back to AI:
AI facial recognition is definitely not ready for prime time. Neither is fingerprint-recognition software. Of course, all of her current disaster can be directly traced back to the big failure: government not doing its job properly.
If this is true, lawyers must be lining up to see to it that she gets big money.
AI isn’t the culprit here. Bad cops are.
The problem with those lawyers is they tend to never tell the client about their massive cut if they win. I don’t know how that works in Canada, but based on the ambulance-chasing ads appearing on every local television station, lawyers never disclose their cut if they win. If they did, they would quickly go out of business. I have heard stories where the victim receives a pittance while the lawyer makes off with everything, because the victim wasn’t intelligent enough to read the contract with the lawyer.
Bad cops and a compromised judiciary are a major problem. If AI is completely innocent, a competent judge should have started asking more questions that resulted in more questions because the police didn’t do their homework. But then, trying to find competence with zero corruption in the courts today is like hitting a moose crossing a highway and expecting to drive away with zero damages. Kyle Rittenhouse got lucky with Judge Schroeder.
I don’t know what the story of this case is, but I highly suspect the police in another state found the real fraudster the old fashioned way, following the paper trail, and suddenly, AI isn’t looking so good, along with the keystone cops and the judge.
Sort of like that movie, My Cousin Vinny, where the culprit’s car could not have been a Buick because only the Pontiac Tempest and a Corvette could leave those tire tracks. But the local yokel police and the local witnesses couldn’t tell a Pontiac from a Buick. Only when the police in the next state over report they have someone driving a Pontiac Tempest with evidence in hand did everyone start asking serious questions about possibly having the wrong suspects.
Someday AI will be ready for prime time: when it gets to the point where the low-IQ types are not easily fooled into assuming everything must be so because a computer said so. Until then, I don’t trust it.
But then, I don’t trust organized religion, television preachers, and other con artists.
It’s just a tool. It is no more to be trusted or distrusted than a spreadsheet or a hammer. Just because a bunch of fools don’t know how to use it is not on the tool; it’s entirely on the fools.
No, not at all, because LLMs are not intelligent machines; they are RNG machines. There was another case of this I recently saw, where a casino’s AI facial detector tagged a guy who was six inches taller as a guy they had banned from the casino. The police were called, he was escorted out of the building, and he had to pay to get back to the casino later on to retrieve his car.
The AI made a mistake and the police went with it because the casino’s manager was screaming and checking the facts requires a properly functioning brain. Something that is definitely in short supply among local law enforcement. Especially now with mandatory DEI hires.
You can always count on the local police to never question anything. They just go with whatever the accuser screams. As a retired over-the-road truck driver, been there, done that, many times.
There were different times when I was accused of running a red light, despite video evidence showing I had a green light. Then there was the time I was accused of parking all night at a strip mall, the police ignoring my mandatory electronic log showing I had arrived just five minutes earlier, after traveling from the previous delivery at another store earlier that morning.
We really don’t need AI to help law enforcement to screw up. They do that just fine all by themselves. All it will do is help them screw up faster. AI will probably get most of the bugs knocked out in the future. But today, it is good for amplifying mistakes and pushing the developer’s agenda.
A good example of pushing the agenda is to type in “does the left want to assassinate President Trump.” The AI will produce claims of zero attempts on his life by the left. Yet everyone with functioning eyeballs has seen what happened in Butler, PA. A manual search produces all kinds of articles covering what happened and how the shooter was definitely a leftist following the left’s agenda. The AI was programmed to ignore facts that violated the narrative. Therefore, it is safe to conclude the AI is incapable of producing factual results, because the programmer is following an agenda and can’t be trusted.
