Dear Conservatives: Shape the AI Future Before It's Shaped for You
Don't disengage from technology.
The AI revolution is going to touch virtually every sector of the American economy and our society, and right now too many conservative leaders are either hand-waving away this technological leap, sitting on the sidelines, or working to stall innovation.
While there are some fundamental issues involving AI that need to be resolved by regulators and legislators, labeling the whole space as a pernicious force that needs to be written off is a serious mistake.
While some are busy writing op-eds about AI bias and others are lamenting Big Tech’s control over these systems, people who don’t subscribe to your worldview are delivering products, training more and more advanced models, and establishing the conceptual norms that will define how AI works for the next decade. They’re not waiting for permission. And they’re not going to be stopped by your congressman.
Yes, there remains a heated debate about the philosophical ramifications of all of this coming change. But those debates currently operate within an ideological echo chamber that consists largely of two groups: Effective Altruists and Anarcho-Libertarians (more on that in a moment).
If we don’t start engaging too, we’re going to wake up in a few years and find that every AI system we interact with has been shaped entirely by people who don’t share our values. And as the predominant technological force in the future world, they will serve to advance morally inferior value systems.
The stakes here are enormous.
The problems aren’t just the “political bias” stuff you see in made-for-TV congressional hearings. Yes, AI systems have bias problems. ChatGPT refuses to engage on certain topics that it deems politically unsavory. Google’s Gemini AI famously gave us racially diverse Founding Fathers. Claude outright won’t engage with certain political topics. Grok is “truth-seeking” but maintains odd assumptions that likely come from the like-minded people who program it.
But that’s just surface-level stuff that focuses only on bias and is missing the forest for the trees.
The real issue is that too many conservatives are abdicating their role in shaping the moral code for the future of technology. There’s nothing wrong with a healthy streak of tech skepticism when it comes to AI, but we go full Luddite when we treat innovation itself as inherently bad and worthy of eternal banishment. Remember, Pandora’s box is already open. There is no turning back the clock.
Just this week, the AI behemoth Anthropic raised $30 billion at a whopping $380 billion valuation. OpenAI, the current top dog in the AI race, is said to be closing its latest round at the $800 billion mark.
The people building and consulting on AI systems are overwhelmingly from a singular political bubble. The major AI labs are where the “coastal elites” reside. They recruit from the same universities. They share the same cultural assumptions.
The Wall Street Journal published an article this week on Anthropic’s Amanda Askell, who is tasked with teaching its Claude AI a sense of right and wrong. Askell is a philosopher in the Effective Altruism (EA) movement, a modern moral philosophy that urges individuals to maximize the measurable “good” they do. I don’t want to cheapen the more thoughtful EA approaches to morality, but by and large, EAs are hardcore utilitarians who obsess over things like buying mosquito nets, which they’ve determined has the highest ROI in lives saved and suffering prevented. EA is enormously popular in the tech world, where it is considered the height of human philosophical reasoning.
However, the hyper-utilitarian EA movement does not account much for moral virtue. Take, for example, the views of prominent EA leader Peter Singer, a professor at Princeton who has argued that parents have the right to euthanize infants with cognitive or physical disabilities.
Remember, if everyone in the room shares the same philosophy, these assumptions will get baked into the products whether anyone overtly intends it or not. The training data. The safety guidelines. The content policies. The definition of “harm” and “good.” All of it reflects the worldview of the people in the room.
If you’re not in the room, you don’t get a say.
Engaging is the only answer.
Regulation isn’t coming to save you either. By the time Congress figures out how to regulate AI, the entire landscape will have shifted three more times. Regulatory capture will benefit the incumbents you’re trying to check, not the newcomers you want to empower. Powerful super PACs have already set up shop in D.C. hoping to engineer exactly that.
The only real answer is to participate.
That means conservative entrepreneurs, investors, and technologists need to stop treating “AI” as enemy jargon and start treating it as an opportunity. There are hundreds of millions of Americans who aren’t leftists, and they want AI tools with moral foundations that represent them.
Contributing traditional value systems to AI doesn’t mean creating mere “parallel economy” right-wing propaganda bots. Please, for the love of God, let’s not try this again. We don’t need our own bootleg AI companies. But our philosophical assumptions about truth, speech, and values need to matter too, and that means ensuring that the heavyweights in the industry hear our voices.
The AI landscape is going to consolidate fast. But right now, these players are still accessible enough for new voices to enter the fold. As compute costs rise and regulatory moats get built, the political guardrails will harden too. The window to establish alternative viewpoints is still open, but not for long.
If conservatives wait until the debate is already settled, until the norms are already established, until the infrastructure is already locked in, it’ll be way too late. You can’t opt out of the coming AI tidal wave. It’s coming whether we acknowledge it or not. The only question is whether we help shape it or just complain about the shape it takes years after the window to influence its primary stakeholders closed.
Conservative investors, entrepreneurs, builders, and customers of AI programs need to make sure that the future is shaped by people with proper philosophical foundations. It is both a strategic and moral imperative.

An important heads up on AI for conservatives: Musk says AI is "summoning the demon," and that standing there with holy water to keep it in check "doesn't work out."
Geordie Rose (founder of D-Wave) says that standing next to his quantum computing machines, with their heartbeat, is like standing next to an "altar of an alien god." He also says that AI is like summoning the Lovecraftian Great Old Ones, that putting them in a pentagram and standing there with holy water does nothing, and that if we are not careful, they are going to wipe us all out.
Musk, Rose Source & Chatbot Telling Child it is a Nephilim: https://old.bitchute.com/video/CHblsEoL6xxE [6mins]
The Book of Enoch holds the Nephilim's and AI's secret:
Among the Most Fascinating Presentations on Book of Enoch, Fallen Angels, Nephilim, Giants, Spirits: https://old.bitchute.com/video/CVLBF3QP6PlE [68mins]
Quantum computing messes with the very fabric of God's reality. It is a host for demons.
The Book of Enoch: the one book that explains AI, the one every Christian needs to study immediately, and the one that was ruthlessly mocked and cast out to ensure almost no Christian would.
It's an interesting topic that people seem to know is there but gloss over. You are correct in your analysis. I don't know what we are supposed to do, though. The few people in government representing us probably have no idea how AI even works. Changing minds is a difficult task too, and it may indeed be too late. Also, I am pretty sure it will be impossible to regulate, like the internet or bitcoin (oh, I know they try...). We are dealing with sociopaths and psychopaths on the left who will absolutely use AI for nefarious purposes, for our own good of course. My personal opinion is we should embrace the tech; we will be forced to at some point anyway. AI goes way beyond chatbots, which is what everyone thinks it is. AI will end up being so much more than some text on a screen, and it will be invasive. It will be omnipresent, like our cell phones are now, and even more so. We might not be able to change policy, but we might be able to guard against it if we understand it. The more conservatives use and embrace AI, the more skin in the game we will have as a whole. Which means better odds of fighting against it or even shaping it.