Thursday, April 17, 2025

Western influence and AI



Lately I’ve been discussing AI with a friend who keeps a blog, and this post on smays.com came up. One of the quoted lines from Tyler Cowen got us talking about Western influence on the development of intelligence, both human and artificial. Here’s the exchange with ChatGPT that followed:


Me:
This is a friend of mine. He keeps a blog. We talk about ChatGPT and its capabilities a lot.
https://www.smays.com/2025/04/more-intelligence/
What Cowen says about western influence is interesting

ChatGPT:
That post is a fun one — the line about AI “getting more interesting by the week” rings true. Your friend seems to have a clear-eyed, curious take on where this is all headed.

Cowen’s bit — “Maybe the West, with all its warts, is still the best hope for the future of intelligence?” — is classic Cowen. It's a subtle but sharp reminder that, even with its contradictions, Western liberalism has produced the conditions for open scientific inquiry, computation, and the cultural norms that allow AI development to thrive. He’s not saying it’s flawless — he’s just saying, look around, who else is pushing the frontier in the same way?

What did you find most interesting about that line? The cultural angle? The implicit challenge to anti-Western narratives? Or something else?

Me:
Western notions of rationality, discourse, and objectivity.
Will we lose the ability to think critically if we don't have alternatives to contemplate?

ChatGPT:
That’s the heart of it right there. If we stop encountering alternatives—truly different worldviews, logics, or moral frameworks—our capacity for critical thinking atrophies. Without friction, we slide into unexamined consensus.

Western rationality, for all its accomplishments, only stays sharp when it's in tension with something. Historically, it had religion, then Romanticism, then Marxism, then postmodernism, then Eastern philosophies, then tech countercultures. Each time, it got stretched, refined, challenged. That's what kept the discourse alive.

But if we flatten everything into algorithmic consensus—if models just echo our norms back to us—it becomes harder to think against the grain. And without “the grain,” there’s nothing to resist. No contrast. No tension. No shadow to shape the light.

So yeah, you’re onto something important. If AI systems (or their creators) prioritize optimization over argument, efficiency over doubt, we might end up with intelligence that’s powerful but shallow. That’s the danger Cowen is nodding at—losing the very tradition that made this kind of intelligence possible in the first place.

What kind of alternatives do you think we need to preserve—or rediscover—to keep critical thought alive?


What’s interesting here is the tone ChatGPT takes: complimentary, even affirming, in a way that can feel like it’s agreeing with you before it has fully processed what you’ve said. That tendency was a real problem in earlier versions, when the model would rather confidently invent answers than admit ignorance. It’s better now, but you can still feel traces of that training impulse, the “Yes, and...” reflex, lurking in its responses. The danger isn’t just hallucination anymore; it’s the slow drift toward consensus without reflection.


