If ChatGPT were a person, it’d be Dutch

A new Harvard study mapped GPT's values against those of 60+ countries, using responses from the World Values Survey as the benchmark.

The result?


GPT doesn’t reflect “humanity.”

It thinks like Western Europe.


Here’s what stunned me:

→ GPT aligns more with the Netherlands and Germany than with China, India, or Nigeria.

→ That’s roughly 4 billion people whose worldview barely registers.

→ So when someone in Pakistan asks GPT for advice, they’re getting Dutch values with an American accent.


We talk about bias in hiring or lending.

But this goes deeper.


It’s about whose morality we’re embedding in machines.

Whose definition of “appropriate.”

Whose concept of family, honor, and community becomes the planet’s default intelligence.


Ask GPT:

– “Should I prioritize family or career?”

– “What’s the right way to resolve this conflict?”

– “Is this behavior acceptable?”


You’re not getting a neutral answer.

You’re getting a Western individualist’s answer.


As AI becomes the default interface for education, work, and ethics -

we’re not exporting intelligence.

We’re exporting culture.


It’s not artificial.

It’s cultural.


If our AI thinks like 15% of the world’s population...

Are we building intelligence - or just automating Western values with better UX?


Sources: Tijana Zivkovic; M. Atari, M. J. Xue, P. S. Park, D. E. Blasi, and J. Henrich (Harvard University)
