Dec 3, 2024
Recent developments in Artificial Intelligence have generated a lot of public anxiety, especially regarding ethical issues: data bias, privacy, the opacity of automated decisions, the effects of algorithmic bubbles on democratic debate, not to mention the harms caused by deep fakes – the list goes on. In response, the emerging field of AI ethics aspires to address these issues. The expert panel at this year's Royal Institute of Philosophy Annual Symposium discusses these issues and considers ways they might be addressed.

The Panel:
Mark Coeckelbergh, Professor of Philosophy of Media and Technology in the Department of Philosophy at the University of Vienna.

Shannon Vallor, Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh, where she is also appointed in Philosophy.

Linda Eggert, Early Career Research Fellow in the Faculty of Philosophy, Balliol College, and the Institute for Ethics in AI at the University of Oxford.

Allan Dafoe, Principal Scientist at Google DeepMind, leading work on Frontier Safety and Governance.

Ritula Shah (chair) is an award-winning journalist and broadcaster. She is the presenter of ‘Calm Classics’ every weekday evening on ClassicFM. Ritula left the BBC in April 2023, after a career spanning almost 35 years.

There is broad consensus across the tech industry, governments and society, that as artificial intelligence becomes more embedded in every aspect of our world, regulation will be essential. But what does this look like? Can it be adopted without stifling innovation? Are current frameworks presented by government leaders headed in the right direction?

Join host Hannah Fry as she discusses these questions and more with Nicklas Lundblad, Director of Public Policy at Google DeepMind. Nicklas emphasises the importance of a nuanced approach, focusing on adaptability and evidence-based policymaking. He highlights the complexities of assessing risk and reward in emerging technologies, advocating for a focus on harm reduction rather than outright bans.

Timecodes:
00:00 Introduction
02:15 Current mood around AI
06:12 Where we are right now
08:05 Regulation needs
14:15 Planning for unknowns
19:40 Uncertainty and progress
24:17 Public and private sector
26:00 Self regulation
28:15 Investments in science
31:18 Risk and reward
32:20 Global approach
33:45 Regulation accelerating innovation
38:02 Unacceptable risk
40:00 Frontier models
47:50 Emerging capabilities
51:42 Technology diffusion
54:06 Hannah's thoughts

Leaders can't be afraid to disrupt the status quo, says pharmaceutical CEO Paul Hudson. In conversation with TED's Lindsay Levin, he shares how AI eliminates "unglamorous work" and speeds up operations while collaborations across competitors can dramatically boost sustainability. Hear some powerful advice for the modern leader — and learn why it's time for businesses to embrace AI.

Confused about AI and worried about what it means for your future and the future of the world? You’re not alone. AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor, who are recognized by Time magazine as two of the 100 most influential experts in AI, cut through the confusion to give you an essential understanding of how AI works and why it often doesn’t, where it might be useful or harmful, and when you should suspect that companies are using AI hype to sell AI snake oil—products that don’t work, and probably never will.

On November 8, Arvind Narayanan and Sayash Kapoor met online with the public, joined by Thomas Luo, Founding Partner of GenAI Assembling, for an in-depth discussion on AI’s real value and potential risks in today’s society, focusing particularly on how to recognize and avoid being misled by false advertising in an increasingly complex information landscape.

Oct 30, 2024 The Truth of the Matter In this special episode, Andrew and Gregory C. Allen discuss the National Security Memorandum on AI that the Biden administration released on October 24th, its primary audience and main objectives, and what the upcoming U.S. election could mean for its implementation.

Sep 24, 2024
4:40 to 5:55
Artificial Intelligence: how can we harness Europe’s innovative potential?

The EU’s AI Act came into force on 1 August 2024. Though lagging the United States and China in AI development, the EU is striving to establish itself as a global rule-setter in digital technologies. With concerns running high for some about AI-powered techniques such as profiling, targeting, election manipulation, and disinformation campaigns, eurocrats have celebrated the EU AI Act as the globe’s first comprehensive legal framework around AI. Yet the ability of European businesses to catch up with the pace of AI development and implementation remains very much in doubt.

Can European businesses overcome the challenges posed by this stringent regulatory landscape to remain competitive on the global stage? Mark Zuckerberg and Daniel Ek, CEOs of Meta and Spotify, recently issued a high-profile statement warning that ‘pre-emptive regulation of theoretical harms for nascent technologies such as open-source AI will stifle innovation’ and ‘Europe’s risk-averse, complex regulation could prevent it from capitalising on the big bets that can translate into big rewards’. Others welcome the EU’s attempt to build public confidence in AI and to shape its future direction to maximise public goods and minimise harmful and unforeseen consequences.

In addition to questions over the EU’s risk-averse approach to regulation, industry voices also warn of other drawbacks, including investment shortfalls and scalability issues, that could hinder innovation. As one Silicon Valley entrepreneur has commented, innovation in Europe is held back by an absence of ‘genuine risk-seeking capital at all stages’, pointing to a wider problem of risk aversion. Join us to contribute to the discussion about Europe’s appetite for investment in AI, and whether the EU AI Act is stifling innovation or helping to shape the development and use of AI in a way that maximises its potential to do good.

Keith Teare, Silicon Valley based entrepreneur and CEO of SignalRank Corporation

It is part of the event Unshackling Europe’s Economy: what holds us back?

Nov 5, 2020
In this webinar, experts from the National Institute of Standards and Technology provide an overview of their key artificial intelligence initiatives including responsible, trustworthy, and explainable AI.

Mar 21, 2024
Joint stakeout Linda Thomas-Greenfield, Representative of the United States to the United Nations, joined by representatives from the Bahamas, Japan, The Netherlands, Morocco, Singapore and the United Kingdom on the UN General Assembly resolution on Artificial Intelligence.

US Ambassador Linda Thomas-Greenfield, said today (21 Mar), “All 193 members of the United Nations General Assembly have spoken in one voice and together chosen to govern artificial intelligence, rather than let it govern us.”

The General Assembly adopted this morning the draft resolution “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development.”

The text was introduced by the United States of America and co-sponsored by over 120 Member States.

Addressing the press after the adoption of the resolution, US Ambassador Linda Thomas-Greenfield, said, “The groundbreaking resolution adopted today cements global consensus for safe, secure, trustworthy AI systems, systems that both advance sustainable development and respect fundamental freedoms.”

She also said, “A first of its kind resolution adopted not only by majority, but by consensus, because ultimately, the risks and benefits of AI have the potential to impact all of us. And so, approaching it requires all of us. The resolution we just adopted reflected our shared responsibility and collective fate. It leaves no one behind; it lays the groundwork for AI systems that leave no one behind either.”

She continued, “Why the UN? Why should this body, with so many existential challenges on its plate, take on AI? And the answer is simple. AI is existential. And not only in its threats, of which there are many, but in its opportunities, of which I believe there are many more.”

Answering a question, she said “AI will be able to bring to people in remote villages access to technology, but access to the basic needs that they didn't have access to before. And the whole idea of this is to bridge that gap.”

Moroccan Ambassador Omar Hilale said, “This resolution is a steppingstone for existing and future multilateral initiatives on digital cooperation and on AI in particular. This resolution is not an end in itself, but the beginning of our collective journey to shape safe, secure, and trustworthy artificial intelligence systems for sustainable development that leave no one behind.”

Addressing the General Assembly earlier this morning, the US Ambassador said, “All told, for all of the unknowns surrounding AI, this much is clear: this technology has the potential to unlock more progress, on more issues, for more people. But in order to ensure equitable access to these benefits, and mitigate risks for communities across the globe, we must approach this technology as a global community. Leaving no one behind.”

At Google, we take responsible AI seriously. Discover why responsible AI matters more than ever, and why we care so much about it. Explore Google’s 7 AI Principles, a framework designed to guide users toward a safer and more ethical use of AI.