13 Comments
Freedom Fox's avatar

A lot of words building up and then debunking a straw man argument against "AI."

Nobody (or very few) opposed to "AI" denies how efficient it may prove to be, the ability it may have to aggregate datapoints and detect patterns, and to produce useful, valuable output in a fraction of the time it takes current systems. That's a given; no serious person denies it. It's a freakin' computer; that's what computers do.

You get as close as the piece comes to the real peril here: "Much of the AI industry's leadership class is a peculiar ideological cocktail, full of progressive politics and transhumanist philosophy. Some of its most prominent figures speak openly with reckless abandon about accelerating toward a future that nobody elected them to engineer." But then you undercut it with a non sequitur about how the technology to split the atom didn't result in the world nuking itself back to the Stone Age.

No, that's not the correct paradigm or threat to compare "AI" technology to. All those "AI" data centers are being built to process ALL the data currently gathered on us; every last bit that now languishes unexamined due to resource constraints becomes available for real-time processing of everything everybody does, with algorithmic pattern identifiers to "measure" "risk," desirable traits, thoughts, and the propensity for anti-authoritarian, skeptical ideas. Even dystopian "pre-crime" arrests straight out of Minority Report become possible, then instituted, then normalized. Only "AI" possesses the capability to make that happen. And ALL governments around the world desire this, collaboratively, not competitively, unlike splitting the atom. Ironically, "AI" data centers are deemed so important that they require building nuclear energy plants to power them, a technology once relegated to the "too dangerous" ash heap of history; the ability to monitor and control populations is apparently worth the investment and risk.

Worldwide Social Credit Industry - Infrastructure to Support Social Credit Systems Represents a $16.1 Billion Opportunity by 2026

https://www.businesswire.com/news/home/20211223005270/en/Worldwide-Social-Credit-Industry---Infrastructure-to-Support-Social-Credit-Systems-Represents-a-%2416.1-Billion-Opportunity-by-2026---ResearchAndMarkets.com

"The COVID-19 pandemic has facilitated substantial interest in citizen monitoring solutions

Infrastructure to support social credit systems represents a $16.1B global opportunity by 2026

Cameras and other optical equipment for social credit systems will reach $723M globally by 2026

Advanced computing will be used in conjunction with AI to provide nearly flawless identification and tracking

Various forms of biometrics will be used for identity verification as well as verifying the presence/location of people

Starting as tangential to public safety and homeland security, the social credit market becomes mainstream by 2026

Social credit systems represent the ability to identify (mostly people but also some "things") and track activities for purposes of grading behaviors and applying "social credit" scoring. A given grading/scoring methodology depends largely on social credit system objectives and metrics.

However, most systems will have socially acceptable behaviour at their core. This presents both a challenge and an opportunity as a combination of government, companies, and society as a whole must determine "good", "bad", and "marginal" behavior within the social credit market."

Omnipresent AI cameras will ensure good behavior, says Larry Ellison

https://arstechnica.com/information-technology/2024/09/omnipresent-ai-cameras-will-ensure-good-behavior-says-larry-ellison/

“Citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on,” Ellison said, describing what he sees as the benefits from automated oversight from AI and automated alerts for when crime takes place. “We’re going to have supervision,” he continued. “Every police officer is going to be supervised at all times, and if there’s a problem, AI will report the problem and report it to the appropriate person.”

FF - The comparison to the atomic bomb does stand, however, in that both are either a tool for good or a weapon for evil depending solely on the intentions of those whose hands they are in. We are being told EXACTLY what the intentions of those currently holding the technology are: an Orwellian dystopian nightmare where freedom, even of thought and association, is extinguished.

It is foolish techno-optimism to ignore this big, glaring, stated threat to every single aspect of human existence. In the absence of ironclad prohibitions, penalties, and recourse for the violations of personal freedom and liberty that "AI" technology perpetrates in ill-intentioned hands, recourse against both the state/corporate public/private partnerships that direct it and the individuals in positions of authority who choose to abuse it, I, and all sane people who cherish individual liberty and freedom, remain OPPOSED to the development of the technology. The time is NOW to protect, unambiguously and without exception, civil liberties, freedom, and privacy from algorithmic assessments of us as individuals, the most dangerous weapon ever pointed at humanity, far exceeding the threat of nuclear bombs.

Richard's avatar

Like it or not, we are all going to be dealing with it. This is true whether the optimists or the pessimists are right, which only time will tell. I am retired, so I have little use for cutting-edge applications, but I call on all who do to buckle down and make them great. Slop is real, and not just in writing. Lots of places are using AI for customer service, mostly poorly. So fix it. The supermarket has a scanner, camera, and weight sensor connected to AI to prevent theft. The AI generates a lot of false positives for theft, which requires more human intervention.
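The false-positive problem Richard describes has a simple arithmetic core: when actual theft is rare, even an accurate detector produces mostly false alerts. A minimal sketch of that base-rate effect (all the numbers below are invented assumptions, not figures from the comment):

```python
# Hypothetical illustration of the base-rate effect behind false-positive
# theft alerts. Sensitivity, specificity, and theft rate are assumed values.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """Fraction of alerts that are true thefts (Bayes' rule)."""
    true_alerts = sensitivity * base_rate
    false_alerts = (1 - specificity) * (1 - base_rate)
    return true_alerts / (true_alerts + false_alerts)

# Assume the detector catches 95% of thefts, wrongly flags 5% of honest
# checkouts, and only 1 in 200 checkouts involves theft.
ppv = positive_predictive_value(0.95, 0.95, 0.005)
print(f"Share of alerts that are real thefts: {ppv:.1%}")  # roughly 8.7%
```

Under these assumed numbers, more than nine out of ten alerts are false, which is exactly why extra human intervention is needed to sort them out.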

variantk's avatar

Just wanted to mention the progressive ideology being baked into most if not all of the frontier AI models. Elon has claimed Grok is the only "truth-seeking" AI, and Google faced some backlash for their over(t)ly diverse image-generation model a bit ago. I'm not a big fan of regulation, but a growing segment of the general public uses AI systems like ChatGPT as the new Google and treats their output as absolute truth. I think time will educate most users, but the enormity and reach of these tools for spreading an ideology, compared to traditional and even digital media, is still a bit scary.

jtrudel trudelgroup.com's avatar

I had a long career in High Tech.

My view -- New technology can make things better. Or worse.

The outcomes are up to us. And God. www.johntrudel.com

FreedomFighter's avatar

Okay, AI is fast. The calculator was faster (and more accurate) than pencil and paper. The computer has it all over the calculator. So AI will probably accelerate productivity. Look at the creators of AI. They are a very liberal, progressive, wealthy crew. They are cheerleaders for globalism, the WEF (of which they are very much a part), etc. AI has been taught from a very left-wing bias: more Marxist teachers for our children and future. How about AI veracity and accuracy? The mistakes it makes are being brought out on a continuous basis. People thought it would aid medicine; not if you value your life. I just read that AI flubs its Bible quotes 60 or 70% of the time. Realistically, how can anybody put their trust in AI's output? Confronted with its mistakes, AI denies them. You can't even get a "pardon me" out of it.

I see AI as more detrimental than good. It is a highly hyped (propaganda) product that will make rich people richer and be of no particular benefit to mankind.

Nick's avatar

I tend to agree with you if we are just talking about AI as a tool, like generative AI or chatbots. As a software engineer I am on the "front lines," so to speak, and agree wholeheartedly that these are amazing tools that have made me better, not because the tools by themselves are innately good but because they are a multiplier of the skills I already possess.

There are some issues, though. Vibe coding is slop. I still solve problems on my own. That's not true for everyone, though. I see a lot (I mean a lot) of people just outsourcing their thinking and problem solving to these AIs, and I have to imagine that will become more common. A lot of companies are also having trouble distinguishing slop from quality. Maybe this will all work itself out, but only time will tell.

Where I think I diverge from your opinion is artificial general intelligence (AGI). I don't think there is really anything to compare it to; I believe it will be unique. I don't think anyone can actually comprehend what the world will look like with it. The world is going to change overnight, and there is a non-zero chance this turns into an apocalyptic hell. The first few generations might be okay, but our posterity may truly be screwed here.

I do agree that, no matter what, the genie is out of the bottle. There is no turning back; it's happening regardless of what anyone does or says. However, I can't agree with the optimist's view either. I wouldn't say I am pessimistic, necessarily; I am taking a very cautious approach. I am not dismissing AI tech, nor am I refusing to use it. I am embracing it where it makes sense and will continue to do so. But predicting the outcome here is impossible.

Arthur Nimz's avatar

British statistician George Box said: "All models are wrong, but some are useful." I know what Mr. Box meant. I've worked with early AI versions, and I can manipulate statistics from any model to come up with any "useful" answer you like. Just pay me enough money. Think about public-opinion surveys and how they are used to bias the public into adopting various narratives. Who pays for those surveys?

The core of any generative AI (gen AI) system is a Large Language Model (LLM) trained on a massive library of words. But words have meaning.

Jordan wrote: "you do not have to share an inventor's ideology to recognize the value of his invention." But the developers of these LLMs can insert their ideological biases into those models very easily.

Jordan wrote: "Skepticism and suspicion is primarily useful when it remains open to evidence."

Here ya go.

Litigation regarding AI bias has intensified significantly, particularly in hiring and employment, with courts increasingly holding that companies can be held liable for discriminatory outcomes produced by automated tools. Who taught those automated tools to be discriminatory? Did the models just evolve to somehow become self-aware? You betcha!
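The question "who taught those automated tools to be discriminatory?" has a mundane mechanical answer: the training labels. A toy sketch with entirely synthetic data (the groups, rates, and the trivial counting "model" are all invented for illustration; real hiring systems are far more complex):

```python
# Hypothetical sketch: a model fit to biased historical hiring decisions
# reproduces the bias. Nobody has to teach it discrimination explicitly;
# the labels carry the bias in. All data below is synthetic.

from collections import defaultdict

# Synthetic history: equally qualified candidates, but group B was hired
# at half the rate of group A in the past.
history = ([("A", True)] * 60 + [("A", False)] * 40
         + [("B", True)] * 30 + [("B", False)] * 70)

# "Training": the model simply learns the historical hire rate per group.
counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for group, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predicted_hire_rate(group):
    hires, total = counts[group]
    return hires / total

print(predicted_hire_rate("A"))  # 0.6 -- past preference, now automated
print(predicted_hire_rate("B"))  # 0.3 -- past disadvantage, now automated
```

The point of the sketch: no self-awareness is required for a discriminatory outcome, only faithful imitation of a biased record.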

OpenAI developed ChatGPT and is now facing multiple lawsuits that allege everything from copyright infringement to wrongful death claims involving murder and suicide. OpenAI has cited Section 230 of the Communications Decency Act to argue they are not responsible for content or how ChatGPT is used. Why? As of Q2 2025, OpenAI has more than 2.1 million developers actively building on the platform.

Recent research and developments suggest that AI models are beginning to demonstrate early signs of self-awareness and introspective ability, particularly in the more advanced LLMs like Anthropic's Claude 3 Opus and 4.x. Current AI may not demonstrate human consciousness or the ability to learn from subjective experience, but claims are being made that some of these models can recognize when their own internal processing algorithms have been manipulated. Most humans aren’t even intelligent enough to recognize when their thought processes are being manipulated.

I agree that America needs to maintain AI dominance. The question becomes at what cost to individual freedoms?

Jeffery Whitaker's avatar

You have the right idea insofar as using AI as a tool. Like it or not, we have to deal with it. China is reportedly pouring huge amounts of money into fostering AI, and it's doubtful the Chinese would pass on the opportunity to improve their military by using it to its extremes. There looks to be potential in teaming AI with our huge data centers. The effects on our country look to be as influential as, or more influential than, the Industrial Revolution was for mankind.

JD Free's avatar

I'd share Scott Alexander's newly published childish counterpoint (in which he straw-mans everyone who holds this view), but Alexander childishly blocked me.

Jrod's avatar

Good post, been listening to the All In podcast have ya?

Tim Johnson's avatar

Not bad. The broad brush used by skeptics/attackers always makes me chuckle a little bit. They attack the "slop" as an argument against all AI, which says far more about them than about the technology.

Penelope Bullis's avatar

Recently, a story went viral ... a 50-year-old woman was arrested and imprisoned for six months ... on ONE metric. AI, using facial recognition, deemed she had been part of a crime. She was in another state. Per the reporting, she lost her job and home. Absolutely ridiculous/criminal. That is, if true.

Richard's avatar

Was Soviet Aggression the result of US belligerence and greed?