A lot of words building up and then debunking a straw man argument against "AI."
Nobody opposed to "AI" (or very few) denies how efficient it may prove to be: its ability to aggregate data points, detect patterns, and produce useful, valuable output in a fraction of the time current systems take. That's a given; no serious person denies it. It's a freakin' computer; that's what computers do.
You come closest to the real peril in your piece: "Much of the AI industry's leadership class is a peculiar ideological cocktail, full of progressive politics and transhumanist philosophy. Some of its most prominent figures speak openly with reckless abandon about accelerating toward a future that nobody elected them to engineer." But you then pivot to a non-sequitur about how the technology to split the atom didn't result in the world nuking itself back to the Stone Age.
No, that's not the correct paradigm or threat to compare "AI" technology to. All those "AI" data centers are being built to process ALL the data currently gathered on us. Every last bit that now languishes unexamined due to resource constraints becomes available for real-time processing of everything everybody does, with algorithmic pattern identifiers to "measure" "risk," desirable traits, thoughts, and the propensity for anti-authoritarian, skeptical ideas. Even dystopian "pre-crime" arrests straight out of Minority Report become possible, then instituted, then normalized. Only "AI" possesses the capability to make that happen. And ALL governments around the world desire this, collaboratively, not competitively, as splitting the atom was. Ironically, "AI" data centers are deemed so important that they justify building nuclear power plants, once relegated to the "too dangerous" ash heap of history, to run them. The ability to monitor and control populations has been judged worth the investment and the risk.
Worldwide Social Credit Industry - Infrastructure to Support Social Credit Systems Represents a $16.1 Billion Opportunity by 2026
https://www.businesswire.com/news/home/20211223005270/en/Worldwide-Social-Credit-Industry---Infrastructure-to-Support-Social-Credit-Systems-Represents-a-%2416.1-Billion-Opportunity-by-2026---ResearchAndMarkets.com
"The COVID-19 pandemic has facilitated substantial interest in citizen monitoring solutions
Infrastructure to support social credit systems represents a $16.1B global opportunity by 2026
Cameras and other optical equipment for social credit systems will reach $723M globally by 2026
Advanced computing will be used in conjunction with AI to provide nearly flawless identification and tracking
Various forms of biometrics will be used for identity verification as well as verifying the presence/location of people
Starting as tangential to public safety and homeland security, the social credit market becomes mainstream by 2026
Social credit systems represent the ability to identify (mostly people but also some "things") and track activities for purposes of grading behaviors and applying "social credit" scoring. A given grading/scoring methodology depends largely on social credit system objectives and metrics.
However, most systems will have socially acceptable behaviour at their core. This presents both a challenge and an opportunity as a combination of government, companies, and society as a whole must determine "good", "bad", and "marginal" behavior within the social credit market."
Omnipresent AI cameras will ensure good behavior, says Larry Ellison
https://arstechnica.com/information-technology/2024/09/omnipresent-ai-cameras-will-ensure-good-behavior-says-larry-ellison/
“Citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on,” Ellison said, describing what he sees as the benefits from automated oversight from AI and automated alerts for when crime takes place. “We’re going to have supervision,” he continued. “Every police officer is going to be supervised at all times, and if there’s a problem, AI will report the problem and report it to the appropriate person.”
FF - The comparison to the atomic bomb does stand, however, in that both are either a tool for good or a weapon for ill, depending solely on the intentions of those in whose hands they rest. And we are being told EXACTLY what the intentions of those currently holding the technology are: an Orwellian dystopian nightmare where freedom is extinguished.
It is foolish techno-optimism to ignore this big, glaring, stated threat to every single aspect of human existence. In the absence of ironclad prohibitions, penalties, and recourse for violations of personal freedom and liberty perpetrated with "AI" technology (enforceable against both the state/corporate public/private partnerships that direct it and the individuals in positions of authority who choose to abuse it), I, and all sane people who cherish individual liberty and freedom, remain OPPOSED to the development of the technology. The time is NOW to protect, unambiguously and without exception, civil liberties, freedom, and privacy from algorithmic assessments of us as individuals by the most dangerous weapon ever pointed at humanity, one far exceeding the threat of nuclear bombs.
Like it or not, we are all going to be dealing with it. That's true whether the optimists or the pessimists turn out to be right, which only time will tell. I am retired, so I have little use for cutting-edge applications, but I call on all who do to buckle down and make them great. Slop is real, and not just in writing: lots of places are using AI for customer service, mostly poorly. So fix it. My supermarket has a scanner, camera, and weight sensor connected to AI to prevent theft, and the AI generates so many false positives for theft that it requires more human intervention.
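The false-positive complaint above has a simple arithmetic explanation: when the thing being detected is rare, even a fairly accurate detector produces mostly false alarms. A minimal sketch, with illustrative numbers assumed (the theft rate, sensitivity, and false-positive rate are made up for the example, not measured from any real system):

```python
# Base-rate illustration: why a theft detector can flag mostly honest
# shoppers. All numbers below are assumptions for illustration only.

shoppers = 100_000
theft_rate = 0.001          # assume 1 in 1,000 transactions involves theft
sensitivity = 0.95          # assume the detector catches 95% of real theft
false_positive_rate = 0.02  # assume it also flags 2% of honest shoppers

thieves = shoppers * theft_rate
honest = shoppers - thieves

true_alerts = thieves * sensitivity          # genuine catches
false_alerts = honest * false_positive_rate  # honest shoppers flagged

precision = true_alerts / (true_alerts + false_alerts)
print(f"Total alerts: {true_alerts + false_alerts:.0f}")
print(f"Fraction of alerts that are genuine: {precision:.1%}")
```

Under these assumptions, fewer than 1 in 20 alerts is a real theft, which is why every alert ends up needing human review.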
Just wanted to mention the progressive ideology being baked into most, if not all, of the frontier AI models. Elon has claimed Grok is the only “truth-seeking” AI, and Google faced backlash for its over(t)ly diverse image-generation model a while ago. I’m not a big fan of regulation, but a growing segment of the general public uses AI systems like ChatGPT as the new Google and treats their output as absolute truth. I think time will educate most users, but the enormity and reach of these tools for spreading an ideology, compared to traditional and even digital media, is still a bit scary.
I'd share Scott Alexander's newly published childish counterpoint (in which he straw-mans everyone who holds this view), but Alexander childishly blocked me.
Was Soviet aggression the result of US belligerence and greed?
I had a long career in high tech.
My view: new technology can make things better. Or worse.
The outcomes are up to us. And God. www.johntrudel.com