
Leaders from the technology industry, academia, and other fields convened at a roundtable organized by TIME in Davos, Switzerland, on January 21 to discuss how to implement responsible AI and balance safeguards with innovation.
During a wide-ranging discussion moderated by TIME CEO Jess Sibley, participants addressed issues including AI’s effects on children’s development and safety, approaches to regulating the technology, and strategies for training AI models so they do not cause human harm.
On child safety, Jonathan Haidt, a professor of ethical leadership at NYU Stern and author of The Anxious Generation, advised parents to focus on establishing healthy habits rather than solely restricting children’s exposure to technology. He argued that children do not need smartphones until at least high school and can learn to use technology effectively by age 15. “Let their brain develop, let them get executive function, then you can expose them,” he said.
Yoshua Bengio, a professor at the Université de Montréal, founder of LawZero, and one of the researchers known as the “godfathers of AI,” emphasized that scientific understanding is necessary to address AI-related challenges. He proposed two mitigation strategies: first, designing AI with built-in safeguards to protect child development, which he said could be driven by market demand; and second, a governmental role through mechanisms such as mandatory liability insurance for AI developers and deployers, which would regulate the industry indirectly.
While the U.S. AI competition with China is often cited as a reason to limit regulations and guardrails for American AI companies, Bengio argued that both nations share common interests in preventing harm. “Actually, the Chinese also don’t want their children to be in trouble. They don’t want to create a global monster AI, they don’t want people to use their AI to create more bio-weapons or cyberattacks on their soil. So both the U.S. and China have an interest in coordinating on these things once they can see past the competition,” he said. Bengio drew parallels to past international cooperation, such as the U.S. and USSR coordinating on nuclear weapons during the Cold War.
The roundtable participants also noted similarities between AI and social media companies, particularly in their competition for user attention. Bill Ready, CEO of Pinterest, which sponsored the event, commented, “All the progress in history has been about appealing to the better angels of our nature. Now we have, one of the largest business models in the world has at its center engagement, pitting people against one another, sowing division.”
Ready elaborated: “We’re actually preying on the darkest aspects of the human psyche, and it doesn’t have to be that way. So we’re trying to prove it’s possible to do something different.” He explained that under his leadership, Pinterest shifted its optimization strategy from maximizing time spent viewing to maximizing user outcomes, including those outside the platform. He acknowledged that the change hurt engagement in the short term but said it paid off over the long term as users returned more often.
Bengio stressed the importance of developing AI that can “provide safety guarantees as the systems become bigger and we have more data.” He also suggested that establishing sufficient conditions under which AI systems can be trained to operate honestly could be a viable solution.
Yejin Choi, a computer science professor and senior fellow at Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI), stated that current AI models are often trained “to misbehave, and by design, it’s going to be misaligned.” She posed the question: “What if there could be an alternative form of intelligence that really learns … morals, human values from the get-go, as opposed to just training LLMs [large language models] on the entirety of the internet, which actually includes the worst part of humanity, and then we then try to patch things up by doing ‘alignment’?”
In response to whether AI can improve humanity, Kay Firth-Butterfield, CEO of the Good Tech Advisory, highlighted ways to make AI a better tool for people, such as consulting with actual users like workers and parents. “What we need to do is to really think about: how do we create an AI literacy campaign amongst everybody and not have to fall back on organizations? We need that conversation, and then we can make sure AI gets certified,” she said.
Other participants at the TIME100 Roundtable included Matt Madrigal, CTO at Pinterest; Matthew Prince, CEO of Cloudflare; Jeff Schumacher, Neurosymbolic AI Leader at EY-Parthenon; Navrina Singh, CEO of Credo AI; and Alexa Vignone, president of technology, media, telco and consumer & business services at Salesforce, where TIME co-chair and owner Marc Benioff serves as CEO.
The TIME100 Roundtable: Ensuring AI For Good — Responsible Innovation at Scale was presented by Pinterest.