Hinton won the 2024 Nobel Prize in Physics for his research on neural networks that sparked the generative AI revolution. Since this win, Hinton has reached millions of people through platforms including Jon Stewart’s The Weekly Show podcast and legacy media such as 60 Minutes. This past November he shared a stage with U.S. Senator Bernie Sanders at Georgetown University, discussing AI’s impact on jobs and inequality.
“AI might be wonderful for health care and education and making most industries more productive,” says Hinton. “But the public must understand the dangers so they can provide a counter pressure on our politicians.”
Gift supports Hinton’s advocacy for AI safety
The gift from the Good Ventures foundation supports Hinton’s work as a high-profile global ambassador for AI safety, enabling him to focus on the global events and conversations most likely to advance the cause.
Good Ventures funds work across a variety of areas, including global health, scientific research, pandemic preparedness, farm animal welfare and helping society prepare for the advent of advanced AI.
“Philanthropy is very important for AI safety right now,” says Hinton. “But the problem is philanthropists are funding most of it; 99 per cent of corporate investment goes to making AI models smarter and one per cent goes to safety.”
The Schwartz Reisman Institute for Technology and Society (SRI) is the university’s home base for Hinton’s AI safety work. Founded in 2019 through a visionary gift from the Schwartz Reisman Foundation, SRI brings together leading scholars in the sciences, social sciences and humanities to confront the profound challenges posed by rapidly advancing technologies. The institute supports foundational research, shapes public conversations and informs policy – always with a focus on ensuring that technology serves the public good.
Sheila McIlraith, a professor in the Department of Computer Science, a Canada CIFAR AI Chair and an associate director and research lead at SRI, works on human-compatible AI – figuring out how to endow models with the ability to consider the impact of their decisions on the welfare and agency of others. Roger Grosse, an associate professor of computer science and a Schwartz Reisman Chair in Technology and Society, also works to advance AI safety, tracing unexpected AI behaviours back to the training data that caused them.
“Geoffrey Hinton’s advocacy efforts have given AI risks a new level of public visibility and appreciation,” says Grosse, who divides his time between Toronto and Silicon Valley as a member of Anthropic’s alignment team.
“Not only is he a transformative AI researcher, but he also has a long track record of interdisciplinary work tying AI to human cognition, which gives his assessments of AI capabilities and motivations even more credibility, making it harder for skeptics to dismiss the risks as just speculation.”
World isn’t ready for future of AI, warns Hinton
Hinton says he doesn’t know exactly when AI will become smarter than us, but it’s likely to happen within the next few decades – and the world isn’t ready, at least not yet. Policymakers, he says, have failed to grasp the urgency of the moment.
Future AI systems, Hinton says, will be “billions of times better at sharing information than we are – not twice as good, billions of times better, and the only thing to take care of a rogue superintelligence is another superintelligence.”
“People think I’m all doom and gloom and I’m not,” Hinton says. “But the future is extremely uncertain and we’re entering a time when we’ve no idea what’s going to happen. We should be cautious.”
By David Goldberg