My biggest worry for AI just got bigger.
- Gary Lloyd
- Aug 6
- 1 min read

On a recent podcast, I was asked about my biggest AI concern. My answer: deliberate disinformation. Not the crude kind we saw on social media, but something more subtle: a quiet shifting of what counts as truth, history, or even fairness.
Last week, the U.S. administration issued an “anti‑woke AI” order requiring that any AI models used by federal agencies be “ideologically neutral” and explicitly free of “diversity, equity, and inclusion programming.” In plain terms: AI systems must be trained to avoid embedding diversity, equity, and inclusion, that is, the principles of representing different voices, treating people equally regardless of race, gender, disability, or background, and ensuring fairness.
I struggle to see how any of those can be bad, or how representing different perspectives can be dangerous. When entire generations are already using AI as their first source of information and advice, the quiet framing of what is “neutral” truth matters more than ever. Calling something neutral doesn’t make it value‑free — it simply decides whose values are built in.
AI’s power lies in subtlety. Unlike social media, which amplifies discordant messages, AI can shape the texture of what people take as fact. If systems quietly reframe equality or human rights as “political” rather than foundational, that doesn’t just move the Overton window; it pushes it off the edge of the page, taking with it the shared truths and values that democracy depends upon.
Artificial intelligence helped me collect my thoughts and write what you are reading now. Will I still be able to do so in the future?