
We thought we were building tools to help us think. Now those same tools are shaping what we think. The problem is, most of us haven’t noticed the difference.
Artificial intelligence is changing the world. That’s not a controversial statement. What we’re slower to reckon with is the more dangerous version: AI is changing humanity itself. And it’s not just changing what we do. It’s changing what we believe — what we consider true, important, or relevant. The line between helping us and influencing us shifted so quietly we missed it.
Here’s one of many problems. These systems, the large language models reshaping how information gets created and consumed, were trained on what was available. Not what was best. Not what was most rigorous or most true. But… what was accessible. What was free. What made it outside the paywalls. The cheap information. Because the best of human knowledge is largely behind paywalls — academic journals, premium journalism, specialized research. The institutions designed to produce humanity’s most reliable thinking are the ones most aggressively monetized.
So the commons that trained these systems reflects what people were willing to make free… which isn’t always the same as what matters most, or what would be most valuable as training material. But it’s not necessarily garbage either. A lot of important thinking does happen in public spaces. Still, it’s clearly a skewed sample. And a skewed sample, at civilizational scale, is a problem we haven’t fully dealt with. Think of it this way: much of the wisdom you have come to count on from AI incorporates the mass idiocy of Reddit and similarly accessible platforms.
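The skew is easy to see in miniature. Here is a toy simulation — every number and name in it is a made-up assumption, not real data — where each document has a latent “rigor” score and the odds of being paywalled rise with rigor. Sampling only the free documents pulls the corpus average below the population average:

```python
import math
import random

random.seed(1)

# Hypothetical corpus: each document gets a latent "rigor" score, N(0, 1).
docs = [random.gauss(0.0, 1.0) for _ in range(100_000)]

def is_free(rigor):
    """Assumption for illustration: the probability a document sits behind
    a paywall rises with its rigor (journals, premium journalism)."""
    p_paywalled = 1.0 / (1.0 + math.exp(-(rigor - 0.5)))  # logistic curve
    return random.random() > p_paywalled

free_corpus = [d for d in docs if is_free(d)]

pop_mean = sum(docs) / len(docs)
train_mean = sum(free_corpus) / len(free_corpus)
print(f"population mean rigor:   {pop_mean:+.2f}")
print(f"free-corpus mean rigor:  {train_mean:+.2f}")
```

Under these stipulated assumptions, the freely available subset is systematically less rigorous than the whole — not garbage, just a sample whose center has quietly moved.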
Oh boy.
The old adage is “garbage in, garbage out”. But is this really the same thing? What AI produces isn’t quite “garbage”. I’d call it confident mediocrity. Coherent, well-structured, authoritative-sounding… average. The outputs don’t seem wrong. They seem right enough. And right enough is sufficient to stop deeper inquiry for most people. That’s really the problem. Not obvious error. Plausible adequacy.
Now add the feedback loop. AI generates content. That content floods the internet. Future models train on that content. Which means each generation of models trains on a higher ratio of AI-derived material to original human thought. The quality of information degrades with each step. Like making a copy of a copy of a copy of a copy. Slowly degrading. Directionally. Persistently. Researchers call it model collapse — when systems trained on synthetic outputs lose variance, lose the edges, converge on the mean. The weird and brilliant and counterintuitive gets diluted out.
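That loop can be sketched in a few lines. This is a toy model, not the mechanism from the research literature: each “generation” fits a Gaussian to the previous generation’s output and samples from it, with a small `typicality_bias` factor (my assumption) standing in for a model’s preference for high-probability, typical text. The spread of the corpus — its weird edges — shrinks generation by generation:

```python
import random
import statistics

def next_generation(corpus, n=500, typicality_bias=0.95):
    """Fit a Gaussian 'model' to the corpus, then emit synthetic samples.
    typicality_bias < 1 (an assumed parameter) nudges outputs toward the
    mean, standing in for a model favoring high-probability text."""
    mu = statistics.fmean(corpus)
    sigma = statistics.pstdev(corpus)
    return [random.gauss(mu, typicality_bias * sigma) for _ in range(n)]

random.seed(42)
corpus = [random.gauss(0.0, 1.0) for _ in range(500)]  # "human" originals
spread = [statistics.pstdev(corpus)]
for _ in range(50):  # each pass trains only on the previous pass's output
    corpus = next_generation(corpus)
    spread.append(statistics.pstdev(corpus))

print(f"spread at generation 0:  {spread[0]:.2f}")
print(f"spread at generation 50: {spread[-1]:.2f}")
```

The copy-of-a-copy intuition holds: nothing in the loop is “wrong” at any single step, yet the variance — the counterintuitive, the brilliant, the strange — drains away cumulatively.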
And we’re just… watching it happen. And most people don’t know that it’s happening.
This isn’t just a technical footnote for programmers to ponder. This is a civilizational variable. If the systems shaping how millions of people think and work are systematically skewed toward accessible-but-average rather than difficult-but-true… the ceiling on collective understanding lowers quickly. It doesn’t even feel like dumbing down. It feels like… well, competence. That’s what makes it so hard to resist.
We’ve been here before, in a different form. Convenience culture already did this to attention. The path of least resistance replaced the harder, richer path. Now the path of least resistance is a well-written AI answer. And most people will take it. Why wouldn’t they?
But here’s what I think matters, to us, to those wanting to turn back the tide, even a little bit. The countervailing force isn’t a better algorithm. It isn’t a regulatory framework, though regulation may help. The countervailing force is humans in genuine relationship with each other, doing the work of original thought in community. I’m not saying that technology is bad. I’m saying that authentic human exchange (the slow, relational kind) is the only place that generates the kind of knowledge that doesn’t already exist somewhere in a training set.
Original experiences. Novelty.
Human experiential novelty.
The meaning economy I keep writing about isn’t separate from this. It’s the same problem, dressed up differently. Systems optimized for production and consumption erode the conditions under which real meaning gets made. Now those same dynamics are operating at the epistemic level. Not just extracting economic value from human connection but… extracting cognitive value from human thought. Extracting life itself. Averaging it. Feeding it back to us as pseudo-wisdom.
And we’re eating it up.
The question isn’t whether AI is useful. It obviously is. The question is whether we’re paying attention to what it costs. Not in dollars, though that’s important too. In originality. In the irreplaceable work of thinking something genuinely new.
Because if we’re not… we don’t just get dumber tools. We get a dumber humanity.
And the loop closes… for good.