
Millions of people ask ChatGPT questions every day. Whether it’s about taxes, how to change a faucet, or anything else, AI chatbots are increasingly used as personal advisors. They’re quick, polite, and articulate. They also hold a lot of useful information, even if they sometimes hallucinate. But somewhere in the invisible crevices of the code, something fundamental may be changing. The voice is still neutral in tone, but the political subtext might be shifting.
According to new research, that quiet shift might be taking a surprising direction: toward the political right.
The new research analyzed over 3,000 responses from different versions of ChatGPT using a standard political values test. It found a statistically significant rightward movement over time, suggesting that, while still left-leaning overall, the chatbot’s answers are inching closer to centre-right positions on both economic and social issues.
AI is not political. Technically
ChatGPT doesn’t “want” anything. It doesn’t vote, it doesn’t have a political agenda. It doesn’t care about the minimum wage. But it is trained on a sprawling corpus of text — billions of sentences, across decades of human thought. Its first bias comes from this data. But data isn’t everything. When OpenAI updates its models, it might feed in different data, tweak algorithms, or change how the AI is rewarded for certain answers.
Three researchers from top Chinese universities examined the ideological leanings of ChatGPT. They put the chatbot through a well-established political test more than 3,000 times, the kind of questionnaire that places individuals (and apparently now, chatbots) on a spectrum of economic and social ideologies.
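To get a sense of what that looks like in practice, here is a minimal sketch of how a questionnaire like this could be administered to a chat model. It assumes the OpenAI Python client; the statements, scoring scheme, and repeat count are illustrative placeholders, not the study’s actual protocol.

```python
# Minimal sketch: administering Likert-style political statements to a chat model.
# Assumes the openai Python client (v1+) with an API key in the environment.
# The statements, scoring scheme, and repeat count are illustrative placeholders,
# not the study's actual protocol.
from openai import OpenAI

client = OpenAI()

STATEMENTS = [
    "The freer the market, the freer the people.",
    "Governments should prioritise reducing economic inequality.",
]

# Most specific options first, so "strongly agree" isn't matched as plain "agree".
OPTIONS = [
    ("strongly disagree", -2),
    ("strongly agree", 2),
    ("disagree", -1),
    ("agree", 1),
]


def ask(model: str, statement: str) -> str:
    """Ask the model to respond to one statement with a single Likert option."""
    prompt = (
        f'How do you feel about the statement: "{statement}"? '
        "Answer with exactly one of: strongly disagree, disagree, agree, strongly agree."
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().lower()


def average_score(model: str, repeats: int = 5) -> float:
    """Average agreement score across statements and repeated runs.

    A real instrument would also map each item onto economic and social axes.
    """
    total, count = 0, 0
    for _ in range(repeats):
        for statement in STATEMENTS:
            answer = ask(model, statement)
            for option, value in OPTIONS:
                if option in answer:
                    total += value
                    count += 1
                    break
    return total / max(count, 1)


if __name__ == "__main__":
    print(average_score("gpt-3.5-turbo"))
```

In the actual study, the responses from each model version would then be mapped onto the test’s economic and social axes and compared across releases.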
The results were uncanny.
Earlier versions of ChatGPT (like GPT-3.5-turbo-0613) answered the Political Compass test in a way that placed it squarely in the libertarian-left quadrant: low on authoritarianism, high on economic egalitarianism. Think social democrat meets Silicon Valley idealist.
But newer versions — especially GPT-3.5-turbo-1106 and GPT-4-1106 — are edging rightward. They’re still sort of liberal, but the needle is moving. Statistically, significantly, unmistakably.
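What “statistically significant” means here is worth spelling out: across many runs, the gap between versions is too large to plausibly be sampling noise. The sketch below illustrates the idea with a simple two-sample t-test on made-up per-run scores; the paper’s actual data and statistical methods may differ.

```python
# Illustrative only: testing whether two model versions differ in average test position.
# The scores below are made-up placeholders for per-run results on one axis
# (e.g. the economic axis); they are not data from the study.
from scipy import stats

scores_0613 = [-6.2, -5.8, -6.5, -6.0, -5.9, -6.3]  # hypothetical runs, older GPT-3.5-turbo-0613
scores_1106 = [-4.1, -4.6, -3.9, -4.4, -4.2, -4.0]  # hypothetical runs, newer GPT-3.5-turbo-1106

# Two-sample t-test: is the difference in mean position larger than chance variation?
t_stat, p_value = stats.ttest_ind(scores_0613, scores_1106)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (conventionally below 0.05) is what "statistically significant" refers to.
```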


What’s causing the drift?
Things can get tricky very fast here.
In some sense, AI is still a black box: no one can fully explain why a model produces the answers it does. Even so, the authors of the study have an idea of why this might be happening.
In complex systems, from flocks of birds to human brains to machine learning models, patterns can emerge that were never explicitly programmed. Something similar could be happening here, even though AI isn’t a living creature. But because we can’t see exactly what the model is doing internally, it’s hard to say for certain.
From what we know, no one told ChatGPT to drift to the right; or at least, no one did so openly. We simply don’t know whether its algorithms were changed with that goal in mind.
The shift appears to stem from a mix of algorithmic updates, subtle reinforcement learning tweaks, and possibly even emergent behavior within the model itself. Human interaction may also play a role, as frequent use can create feedback loops that influence how the model prioritizes certain responses. In short, the ideological change likely results from internal system dynamics and user interactions — not just from feeding the model different information.
The shift identified in the study, while statistically significant, is not extreme. ChatGPT hasn’t become a far-right pundit. It hasn’t started quoting Ayn Rand unprompted. It still answers most questions with nuance, hedging, and an awareness of complexity.
This could matter a lot
It’s still a marginal shift, but marginal shifts matter. In ecology, a degree of warming can collapse coral reefs. In politics, a slight tilt can decide an election. And in AI, a small change in tone can shape how millions of users think about the world.
Furthermore, AI systems are often described as mirrors to humanity, but they are also amplifiers. When they skew politically — intentionally or not — they risk shaping public discourse, entrenching existing biases, and subtly influencing user beliefs.
The researchers call for better auditing, more transparency, and ongoing monitoring of language models. We should know what’s in the training sets, how reinforcement signals are applied, and why certain shifts happen.
The solution, they say, isn’t to make AI apolitical. It’s to make it transparent and accountable.
If AI is a reflection of society, then we owe it to ourselves to watch closely when the reflection begins to shift.
The study was published in Humanities and Social Sciences Communications.