Host Baratunde Thurston and Anthropic co-founder Jared Kaplan delve into the foundations and aspirations of AI development, discussing Anthropic's mission to create safe and beneficial AI systems. Jared shares his early passion for science, his transition from theoretical physics to AI, and Anthropic's distinctive approach to AI alignment through 'Constitutional AI.' The episode also features an engaging exchange between Blair, the show's AI co-producer, and Claude, the AI used to create Blair. They discuss AI ethics, roles, the potential for creativity, and the balance of using AI to enhance human productivity while preserving human values and agency. Jared emphasizes how important it is for society to participate actively in shaping the evolving role and control of advanced AI systems, weighing AI's capacity to significantly improve human welfare against its risks. The conversation concludes with a discussion of the nature of AI progress and its implications for the future.

00:00 Introduction: The Importance of AI
00:15 Meet Claude: The AI Behind the Scenes
00:42 Exploring AI's Impact on Society
01:09 Special Episode Announcement
01:31 Interview with Jared Kaplan
02:55 Jared's Early Life with Technology
07:20 From Physics to AI
15:49 Understanding AI Alignment and Safety
17:10 Constitutional AI: Principles and Practices
27:51 Claude's Unique Features and Capabilities
34:26 The Future of AI and Society
49:02 AI's Creative Boundaries
51:01 Cultural Sensitivity in AI
54:58 The Future of AI Progress
01:00:03 Data and Intellectual Property
01:02:22 Human-Level AI by 2030
01:08:55 Ethical Considerations in AI
01:11:43 AI Collaboration and Creativity
01:19:25 Blair Meets Claude
01:34:41 Conclusion and Credits

Imagine a future where we interact regularly with a range of advanced artificial intelligence (AI) assistants — and where millions of assistants interact with each other on our behalf. These experiences and interactions may soon become part of our everyday reality. In this episode, host Hannah Fry and Google DeepMind Senior Staff Research Scientist Iason Gabriel discuss the ethical implications of advanced AI assistants. Drawing from Iason's recent paper, they examine value alignment, anthropomorphism, safety concerns, and the potential societal impact of these technologies.

Further reading: https://deepmind.google/discover/blog…

Timecodes:
00:00 Introduction
01:10 Definition of an AI assistant
04:58 A utopic view
06:46 Iason's background
08:12 The Ethics of Advanced AI Assistants paper
13:46 Anthropomorphism
14:52 Turing perspective
16:23 Anthropomorphism continued
21:23 The value alignment question
23:52 Deception
29:01 Deployed at scale
30:34 Agentic inequality
33:08 Unfair outcomes
36:40 Coordinated systems
41:05 Tetradic value alignment
44:07 The future
45:06 Reflections from Hannah

Dec 3, 2024
Recent developments in Artificial Intelligence have generated a lot of public anxiety, especially regarding ethical issues: data bias, privacy, the opacity of automated decisions, the effects of algorithmic bubbles on democratic debate, not to mention the harms caused by deepfakes – the list goes on. In response, the emerging field of AI ethics aspires to tackle these concerns. The expert panel of this year's Royal Institute of Philosophy Annual Symposium discuss the issues and consider ways they might be addressed.

The Panel:
Mark Coeckelbergh, Professor of Philosophy of Media and Technology in the Department of Philosophy at the University of Vienna.

Shannon Vallor, Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh, where she is also appointed in Philosophy.

Linda Eggert, Early Career Research Fellow in the Faculty of Philosophy, Balliol College, and the Institute for Ethics in AI at the University of Oxford.

Allan Dafoe, Principal Scientist at Google DeepMind, leading work on Frontier Safety and Governance.

Ritula Shah (chair) is an award-winning journalist and broadcaster. She is the presenter of 'Calm Classics' every weekday evening on Classic FM. Ritula left the BBC in April 2023 after a career spanning almost 35 years.

There is broad consensus across the tech industry, governments, and society that as artificial intelligence becomes more embedded in every aspect of our world, regulation will be essential. But what does this look like? Can it be adopted without stifling innovation? Are current frameworks presented by government leaders headed in the right direction?

Join host Hannah Fry as she discusses these questions and more with Nicklas Lundblad, Director of Public Policy at Google DeepMind. Nicklas emphasizes the importance of a nuanced approach, focusing on adaptability and evidence-based policymaking. He highlights the complexities of assessing risk and reward in emerging technologies, advocating for a focus on harm reduction rather than outright bans.

Timecodes:
00:00 Introduction
02:15 Current mood around AI
06:12 Where we are right now
08:05 Regulation needs
14:15 Planning for unknowns
19:40 Uncertainty and progress
24:17 Public and private sector
26:00 Self regulation
28:15 Investments in science
31:18 Risk and reward
32:20 Global approach
33:45 Regulation accelerating innovation
38:02 Unacceptable risk
40:00 Frontier models
47:50 Emerging capabilities
51:42 Technology diffusion
54:06 Hannah's thoughts

Leaders can't be afraid to disrupt the status quo, says pharmaceutical CEO Paul Hudson. In conversation with TED's Lindsay Levin, he shares how AI eliminates "unglamorous work" and speeds up operations, and how collaboration across competitors can dramatically boost sustainability. Hear some powerful advice for the modern leader — and learn why it's time for businesses to embrace AI.

Confused about AI and worried about what it means for your future and the future of the world? You’re not alone. AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor, who are recognized by Time magazine as two of the 100 most influential experts in AI, cut through the confusion to give you an essential understanding of how AI works and why it often doesn’t, where it might be useful or harmful, and when you should suspect that companies are using AI hype to sell AI snake oil—products that don’t work, and probably never will.

On November 8, Arvind Narayanan and Sayash Kapoor met with the public online, joined by Thomas Luo, Founding Partner of GenAI Assembling, for an in-depth discussion of AI's real value and potential risks in today's society, focusing particularly on how to recognize and avoid being misled by false advertising in an increasingly complex information landscape.

AI won't kill us all — but that doesn't make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology's current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future — so it's inclusive and transparent.

Oct 30, 2024
The Truth of the Matter
In this special episode, Andrew and Gregory C. Allen discuss the National Security Memorandum on AI that the Biden administration released on October 24th, its primary audience and main objectives, and what the upcoming U.S. election could mean for its implementation.

Oct 3, 2024
This presentation originally premiered at AI at Wharton's second annual Business & Generative AI Workshop in San Francisco on September 5-6, 2024.

Erik Brynjolfsson
Jerry Yang and Akiko Yamazaki Professor and Senior Fellow at the Stanford Institute for Human-Centered AI, and Director of the Stanford Digital Economy Lab.