Uniquely Human: What Can AI Never Do?
May 7, 2026 · Nick Carrino

Large language models (LLMs) are trained on massive amounts of text. Novels, textbooks, and poems alike teach models a broad range of concepts and skills, helping explain their impressive capabilities. Many people stop there and conclude, "AI cannot learn anything new; it can only repeat back what we teach it." This sounds correct, but it does not hold up in training theory or in practice. LLMs are not limited to verbatim repetition. Like humans, they learn abstract patterns and can recombine concepts across domains. They can produce outputs that are novel, useful, and often beyond what any individual human explicitly taught them.
Model Training Is More Human Than We Think
LLMs are built by consuming vast amounts of human text, but this is only the first of two main training phases. This first phase, known as pre-training, resembles early childhood learning: passively absorbing the world and finding patterns. That alone helps explain how models can generate novel outputs, much as toddlers can produce creative ideas from limited experience. Skeptics may dismiss this as mere pattern-finding rather than anything genuinely new, but pre-training is only the first step.
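To make the analogy concrete, here is a minimal sketch of the objective pre-training is built on: predict the next token given the tokens that came before it. The toy character-level model, data, and hyperparameters below are illustrative placeholders, nothing like a real LLM's architecture or scale.

```python
# A toy sketch of the pre-training objective: next-token prediction.
# Everything here (model size, data, hyperparameters) is a placeholder.
import torch
import torch.nn as nn

text = "novels textbooks and poems alike teach models a broad range of concepts"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}   # character -> integer id
ids = torch.tensor([stoi[c] for c in text])

class TinyLM(nn.Module):
    """Embed a short context of tokens and predict the next one."""
    def __init__(self, vocab_size, context=8, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(context * dim, vocab_size)

    def forward(self, x):                    # x: (batch, context)
        h = self.embed(x).flatten(1)         # (batch, context * dim)
        return self.head(h)                  # logits over the next token

context = 8
# Slide a window over the text to build (context, next-token) training pairs.
X = torch.stack([ids[i:i + context] for i in range(len(ids) - context)])
y = ids[context:]

model = TinyLM(len(chars), context)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits = model(X)
    loss = loss_fn(logits, y)                # penalize wrong next-token guesses
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Scaled up by many orders of magnitude, this same simple objective is what forces a model to internalize grammar, facts, and more abstract patterns, because memorizing the text verbatim is not enough to keep the loss low on everything it reads.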
After pre-training, the model is knowledgeable but not yet reliably useful. Post-training is the phase in which the model learns to be useful. Think of this like a child attending school. In post-training, models must complete a wide range of specialized tasks in the same way a child is trained in math, literacy, science, and history. Much like school, the goal of post-training is not to make a model good at one task, but to instill a general capability. Children learn to read and write not just for the craft, but also for the valuable lessons in critical thinking and empathy that those skills bring. Similarly, coding tasks can strengthen a model's logic and reasoning. The point is simple: if humans can be inspired by school to form novel ideas, why can't AI?
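For a rough sense of what this "schooling" looks like, here is a sketch of instruction-style fine-tuning data: many different kinds of tasks mixed into one instruction-and-response format. The field names and examples below are invented for illustration; real post-training pipelines vary widely and use far more data.

```python
# A sketch of post-training (instruction tuning) data: varied "subjects"
# mixed into one format. Examples and field names are invented placeholders.
examples = [
    {"instruction": "Add the numbers 17 and 25.",
     "response": "42"},
    {"instruction": "Write a Python function that reverses a string.",
     "response": "def reverse(s):\n    return s[::-1]"},
    {"instruction": "Summarize the plot of Romeo and Juliet in one sentence.",
     "response": "Two young lovers from feuding families die tragically."},
]

def format_example(ex):
    """Turn an instruction/response pair into one training string."""
    return f"### Instruction:\n{ex['instruction']}\n### Response:\n{ex['response']}"

# Post-training typically reuses the same next-token objective as pre-training,
# but applies it to text shaped like the tasks we want the model to perform.
training_texts = [format_example(ex) for ex in examples]
for t in training_texts:
    print(t, end="\n\n")
```

The breadth of the mix is the point: just as a school curriculum spans subjects to build general thinkers, post-training spans task types to build general capability rather than narrow skill.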
Super-Human Capability
Chess offers us a useful precedent. Chess engines are not LLMs, but they show how quickly "uniquely human" can become a moving target. Computers first surpassed humans by calculating better moves; later systems like AlphaZero played in ways grandmasters described as dynamic, unconventional, and even beautiful. Today, LLM-powered systems are beginning to show a similar pattern outside games: helping produce new mathematical results, generating stronger algorithms, and finding software vulnerabilities humans missed. The point is not that AI thinks exactly like humans. It is that machines can discover useful patterns humans did not explicitly teach them, and those discoveries can still be novel, valuable, and even elegant.
Understanding "The Human Element"
So what is "uniquely human"? What is something that AI could never do? The only honest answer is "I don't know," maybe followed by a few hopeful guesses around judgment and taste. But that is exactly why this question matters. We should be wary of anyone who talks about AI and humanity with too much certainty. Technologists can flatten the human experience into benchmarks, while AI critics can flatten the technology into "plagiarism machines." Neither view is enough. This is a hard, cross-disciplinary question, and navigating it well will require humility from everyone involved.
Image: Visualising AI by Google DeepMind, artwork by Aurora Mititelu, via Unsplash.
