A re-worked version of an earlier essay published on The Pluto Diaries.
I came across a WIRED piece about Elon Musk floating a “TruthGPT” on Tucker Carlson—basically, a conservative chatbot because, supposedly, ChatGPT is “woke.” My first response was: what? Not because bias doesn’t exist in AI, but because of how it exists. AI systems don’t wake up one morning with opinions; they learn from people, data, and design choices. If we feed them skewed inputs or build them to optimize the wrong things, they’ll mirror our skew. That’s not machine virtue or villainy—it’s inheritance.
There’s also a difference between political ideology and consensus. Public opinion often tilts toward change over time, and in the dispositional sense that’s what “liberal” means: open to change. Life changes, science advances, language evolves. “Progressive” isn’t a buzzword so much as a descriptor of that motion. None of this requires every newsroom—or every model—to be partisan. It does require them to acknowledge evidence when evidence moves.
Which brings me to media. A lot of folks treat anchors’ monologues as “the news.” They’re not. Opinion isn’t automatically misinformation, but it’s not the reporting either. On the flip side, some “just-the-facts” outlets flatten stories into emotionless bulletins. That can make news feel sterile, even alienating. The sweet spot is transparency: label what’s reporting, label what’s analysis, label what’s opinion—then let audiences see the seams.
So what about AI and bias? Here’s the simple version:
- AI learns from us. Training data, prompts, and guardrails come from humans.
- Bias can enter anywhere: in the dataset (who’s represented, who’s missing), in the objective (what the system is rewarded for), and in deployment (who it serves and how).
- Nuance is teachable—but not magical. Models don’t “care”; they correlate. If most examples in their diet reflect a norm, they’ll reproduce it unless we deliberately counter-balance (a toy sketch of this follows below).
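To make that last point concrete, here’s a toy sketch in Python. It’s nothing like a real language model, and every sentence in it is invented for illustration—it’s just a “model” that tallies word–label co-occurrences. Feed it a skewed diet and it reproduces the skew; counter-balance the diet and the skew goes away.

```python
from collections import defaultdict

def train(examples):
    """Score each word by how often it co-occurs with positive vs. negative labels."""
    scores = defaultdict(int)
    for text, label in examples:
        for word in text.lower().split():
            scores[word] += 1 if label == "pos" else -1
    return scores

def predict(scores, text):
    """Sum the learned word scores; the 'model' has no beliefs, only tallies."""
    total = sum(scores.get(word, 0) for word in text.lower().split())
    return "pos" if total >= 0 else "neg"

# Hypothetical skewed diet: sentences mentioning "nurses" happen to appear
# only in negative examples. The model inherits that correlation.
skewed = [
    ("the nurses were overworked and angry", "neg"),
    ("nurses complained about the late shift", "neg"),
    ("the doctors were brilliant and kind", "pos"),
    ("doctors saved the day again", "pos"),
]
model = train(skewed)
print(predict(model, "the nurses arrived"))  # -> "neg": inherited skew, not opinion

# Deliberate counter-balancing: add the missing positive examples and retrain.
balanced = skewed + [
    ("the nurses were brilliant and kind", "pos"),
    ("nurses saved the day again", "pos"),
]
model = train(balanced)
print(predict(model, "the nurses arrived"))  # -> "pos": the skew was in the data
```

Nothing in that code “believes” anything about nurses; the output is purely a function of the training diet. Scale the tallies up by a few billion parameters and the principle holds.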
Think of it like this: if a person who has never seen the sky is told over and over it’s yellow, that becomes their working truth—until challenged by better information. Systems are similar. They don’t hold beliefs; they hold patterns. If we want better outputs, we need better inputs, better objectives, and continual audits.
Politics complicates this because some leaders equate “not agreeing with me” with “biased.” Climate science, for example, isn’t “left” because it’s inconvenient to certain industries; it’s evidence-based. The same goes for public-health data, demographic research, and decades of legal precedent on civil rights. Calling consensus “woke” doesn’t make it partisan; it makes it easy to dismiss.
Lawmaking has a similar problem: many of the most strident voices legislate issues that won’t touch their own lives—reproductive health, trans participation in sports, classroom curricula—while the people directly affected are treated as hypothetical. Representation should mean centering those who live the consequences.
As for “TruthGPT”: branding a bot as “the truth” won’t solve bias; it will bake in a viewpoint and call it neutral. The honest route is humbler: say what a system is trained on, say what it’s optimized for, publish safety notes and limitations, invite third-party audits, and correct course in public. That’s accountability, not ideology.
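For a sense of what that disclosure could look like in practice, here’s a hypothetical sketch loosely in the spirit of published “model cards.” Every field name and value below is invented for illustration, not any vendor’s actual schema:

```python
# A hypothetical disclosure record for an imaginary system.
# All names and values here are made up for illustration.
model_card = {
    "system": "ExampleChat-1",  # made-up name
    "training_data": "public web text and licensed corpora through 2023; "
                     "known gaps: low-resource languages, paywalled research",
    "optimized_for": "next-token prediction, then tuned with human feedback "
                     "for helpfulness and harm avoidance",
    "known_limitations": [
        "can state falsehoods fluently",
        "uneven coverage of dialects and regions",
    ],
    "third_party_audits": ["independent red-team review (hypothetical)"],
    "public_corrections": ["v1.1: reduced over-refusals on medical questions"],
}
```

None of this requires taking a side; it requires showing your work.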
AI isn’t a robot overlord or a mindless stenographer. It’s a mirror with math behind it. If we want less distortion, we have to clean the mirror—our data, our incentives, our institutions—and be clear about where opinion ends and information begins. Maybe what we need isn’t another partisan chatbot; it’s a renewed commitment to evidence, transparency, and listening when facts change.
