Seattle Times Op-ed: AI Is Not Human
- Ryan Burns
- Dec 15, 2025
- 3 min read
Ryan has a new op-ed in The Seattle Times. The main idea is that there is an important distinction between human intelligence and machine computation. (He didn't choose the title, and he is excited to have a conversation about it with anyone who is interested.) Access the PDF here.
Don't Be Fooled: AI Is Only as 'Human' as It Is Trained to Be
On a recent episode of a well-known AI policy podcast, the host asked the guest how long it would take artificial intelligence to surpass human intelligence “in every category of intelligence.” They debated the timeline for humans being written out of everything from creative endeavors to companionship. The host suggested it might take years, perhaps decades, for the technology to become good enough, but was confident it would eventually happen. This way of thinking about AI, in which it is only a matter of time before AI is indistinguishable from an everyday person, does an immense injustice to what it means to be human.
The consequence is that we undersell ourselves, disempower creativity and sideline deep discussions of AI ethics, while corporate leaders and venture capitalists sell us their (profitable) bargain-store version of humanness.
Don’t get me wrong: AI is a stockpile of technologies extending far beyond large language models like ChatGPT and Claude, and it would take a Don Quixote-level naiveté to deny the utility of all AI. But our folly, like Don Quixote’s, would be to see a useful tool and mistake its functions for uniquely human activities.
For one, much human activity lies outside the rational-economic ways of thinking baked into large language models. These models supposedly “improve” when engineers adjust the weights assigned to different kinds of training data and outcome measurements. Think of these weights like a set of dials on a stereo: Each turn of a knob changes the sound quality, and you adjust until it sounds “right.”
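To make the dial metaphor concrete, here is a minimal, purely illustrative sketch of one such adjustment: a single toy weight nudged by one gradient-descent step so the output lands closer to a target. The numbers and names are hypothetical and describe no real system.

```python
# A toy illustration of "turning the dials": one gradient-descent step
# nudges a single weight so the model's output lands closer to a target.
# Everything here is hypothetical; no real LLM is this simple.

weight = 0.8            # one "dial" in the model
x, target = 2.0, 3.0    # a training example: input and desired output

prediction = weight * x             # the model's current guess (1.6)
error = prediction - target        # how far off it is (-1.4)
gradient = 2 * error * x           # slope of the squared error w.r.t. the weight

learning_rate = 0.05
weight -= learning_rate * gradient  # turn the dial a little

print(f"adjusted weight: {weight:.3f}")  # 1.080, a step toward the ideal 1.5
```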
Applying this to creativity, critical thinking and decision-making treats humans like the ultimate calculator: Given a certain set of inputs (say, sensory signals or a nearby event), we react according to predictable, rational and explainable probabilities. Just look at Washington, D.C., to see that humans don’t behave this way. We are often irrational or counterintuitive, unpredictable and baffling. Despite LLM engineers’ strong inclinations to cast us as such, we are not homo economicus.
Large language models are not capable of creativity in the deepest sense of the word, because they are models based on existing data; that is, on materials that humans have already created. Given a prompt, an LLM will generate content that is highly predictable within a certain window, with each next step (e.g., the next word, the next “brush stroke”) having a probability derived from its training data. Recent moves toward “reasoning models” only add layers of sophisticated calculation. To put it in oversimplified but relatable terms, LLMs aim for average.
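As a purely illustrative sketch (the tiny corpus and its counts are invented for this example), here is what next-step prediction looks like: a next word drawn according to probabilities estimated from how often each word followed the context in the training data.

```python
# A toy illustration of next-step prediction: given a context, draw the
# next word according to probabilities estimated from training data.
# The tiny "corpus" counts below are invented for this example.
import random

# Hypothetical counts of words that followed "the cat" in a tiny corpus
next_word_counts = {"sat": 6, "ran": 3, "sang": 1}

words = list(next_word_counts)
total = sum(next_word_counts.values())
probabilities = [next_word_counts[w] / total for w in words]  # 0.6, 0.3, 0.1

# Sample one continuation; the statistically typical word dominates
next_word = random.choices(words, weights=probabilities, k=1)[0]
print(f"the cat {next_word}")
```

Run this enough times and the most common continuation wins out, which is the sense in which these models “aim for average.”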
But what exactly is “average” for an LLM? This is where the vapid claims of human qualities are laid bare for what they are. Large language models are trained primarily on web-based data, and as information geographers have long shown, that data is starkly geographically uneven. In short, more online data is produced about wealthy Western nations, and by their citizens, than about anywhere else. This shouldn’t surprise anyone, as scientific and sociological studies have reflected this WEIRD (Western, Educated, Industrialized, Rich, Democratic) bias for decades. The models merely extend those patterns and claim universality. Clouded by these limited information sources, LLMs don’t mimic “humans” (as if there were only one kind of human); they mimic an exceedingly narrow slice of humanity.
The claim that AI is marching toward performing ethical judgments is also rooted in the idea that such judgments are unchanging, steady measurements that engineers should try to get “closer” to. But we know that humans and societies change: What is considered normal and ethical today may have been repugnant in the past, and vice versa. Ethical judgments are contextual, normative and often debatable, and opposing views may be grounded in different, equally acceptable ethical philosophies. “Close” is an uncertain, moving target.
So: useful? Yes. Capable of carrying out some tasks that previously only humans could do? Of course. Able to surpass human intelligence “in every category”? Absolutely not.