AI fear exaggerated: Turing Award winner Richard Sutton
At the opening ceremony of the 2025 Inclusion Conference on the Bund in Shanghai on Sept. 11, Turing Award laureate Richard Sutton argued that public anxiety about artificial intelligence is overblown. “Fear of AI is exaggerated and stoked by certain organizations and people that benefit from it,” he told attendees in a keynote that blended science, policy, and philosophy.
Sutton, widely regarded as a founding figure in modern computational reinforcement learning, used the stage to outline where AI stands today and where it ought to go next. He contended that the field’s current path—largely fueled by ingesting vast troves of human-created data—has hit a ceiling, limiting AI’s ability to create genuinely new knowledge or learn continuously over time.
Citing a discussion highlighted by technology podcaster Dwarkesh Patel, Sutton noted that today’s large language models “don’t get better over time the way a human would.” Without mechanisms for continual learning from lived interaction, he argued, such systems stagnate once they have absorbed the available text, images, and code.
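To make that contrast concrete, here is a minimal sketch, not from Sutton’s talk, of why a model frozen after training stagnates while a continual learner keeps pace with a changing world. The drifting-signal task, the step size `alpha`, and every name in it are illustrative assumptions.

```python
import random

random.seed(0)

def drifting_signal(t):
    """A toy 'world' whose underlying value slowly shifts over time."""
    return 1.0 + 0.01 * t + random.gauss(0, 0.1)

# Phase 1: both learners absorb the same initial batch of experience.
train = [drifting_signal(t) for t in range(100)]
static_estimate = sum(train) / len(train)   # frozen after "pretraining"
online_estimate = static_estimate           # keeps updating afterward
alpha = 0.05                                # step size for continual updates

# Phase 2: the world keeps moving; only the online learner adapts.
static_err = online_err = 0.0
for t in range(100, 1100):
    x = drifting_signal(t)
    static_err += abs(x - static_estimate)
    online_err += abs(x - online_estimate)
    online_estimate += alpha * (x - online_estimate)  # learn from new experience

print(f"frozen model, mean error:      {static_err / 1000:.2f}")
print(f"continual learner, mean error: {online_err / 1000:.2f}")
```

The frozen estimate is accurate for the data it was trained on and drifts ever further from reality afterward; the continual learner’s error stays small because each new observation nudges its estimate.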
Toward an Era of Experience
To break through that bottleneck, Sutton called for a shift into what he termed “The Era of Experience,” where AI systems learn the way people and animals do: by acting in the world, receiving feedback, and improving iteratively. “Experience is the focus and foundation of all intelligence,” he said.
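The loop Sutton describes, acting, receiving feedback, and improving, is the core cycle of reinforcement learning, the field he helped found. As a minimal illustrative sketch only (the three-armed bandit task, the exploration rate, and all names are assumptions, not material from the talk), the incremental update below shows how value estimates sharpen purely through interaction:

```python
import random

random.seed(1)

true_values = [0.2, 0.5, 0.8]   # expected payoffs, unknown to the agent

def act(a):
    """Take an action in the world; receive noisy feedback."""
    return true_values[a] + random.gauss(0, 0.5)

q = [0.0, 0.0, 0.0]   # learned value estimates, refined by experience
n = [0, 0, 0]         # times each action has been tried
epsilon = 0.1         # fraction of steps spent exploring

for step in range(5000):
    # Act: usually exploit the best-known action, occasionally explore.
    if random.random() < epsilon:
        a = random.randrange(3)
    else:
        a = max(range(3), key=lambda i: q[i])
    # Feedback, then improvement: the sample-average update
    # Q(a) <- Q(a) + (1/n)(R - Q(a)), as in Sutton and Barto's textbook.
    r = act(a)
    n[a] += 1
    q[a] += (r - q[a]) / n[a]

print("learned values:", [round(v, 2) for v in q])  # converges toward true_values
```

No dataset is involved: every improvement in `q` comes from the agent’s own trials and errors, which is the sense in which experience, rather than recorded human knowledge, drives the learning.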
He cited emblematic cases of experiential learning: AlphaGo’s now-legendary “move 37,” which stunned world champion Lee Sedol with a creative, non-intuitive play, and AlphaProof’s silver-medal performance at the International Mathematical Olympiad. In both cases, Sutton said, breakthroughs came not from static datasets alone but from systems that learned by trying, failing, and refining their strategies.
Politics, perception, and public risk
Sutton acknowledged that fear surrounding AI has real roots: geopolitical rivalry, concerns about bias and fairness, worries over job displacement, and existential risks. But he argued that AI discourse has become deeply politicized, with alarm amplified by stakeholders who benefit from either heightened scrutiny or regulatory capture. The result, he suggested, is a skewed picture—one that can distract from the practical work of building safer, more capable systems.
A cooperative path forward
Rather than rallying around a single “common purpose,” Sutton advocated for what he called “decentralized cooperation” as a more durable and resilient model. “AI and human flourishing both come and could come from decentralized cooperation,” he said, framing the future as an ecosystem of many actors—research groups, companies, regulators, and communities—aligning through shared incentives and interoperable norms, not centralized command.
A long view of intelligence
Placing AI in a broader evolutionary arc, Sutton argued that progress toward machine intelligence is both natural and inevitable. He outlined four tenets for thinking clearly about the future:
- We will not reach a universal global consensus on AI’s direction.
- True intelligence in machines will eventually be created.
- That intelligence will surpass current human capabilities in many domains.
- Power tends to flow toward the most intelligent agents.
None of these points, he suggested, implies doom. Instead, they underscore the need to build systems that learn responsibly from experience, are embedded in cooperative structures, and are directed toward broadly beneficial outcomes.
Embracing the frontier
For Sutton, the right response to AI’s rapid advance is neither panic nor complacency, but pragmatic optimism. If society invests in experiential learning, nurtures decentralized cooperation, and keeps an eye on real-world performance rather than hype cycles, AI can become a force multiplier for human progress.
“We should embrace it with courage, pride, and a sense of adventure,” he said: an invitation to treat the rise of machine intelligence as the next chapter in the story of learning itself, not an epilogue.