Sep 26, 2024 KING'S PLACE
In the first lecture of the series, The Alan Turing Institute’s Head of Online Safety and AI for Public Services, Jonathan Bright, will explore how political campaigns are making use of AI and the challenges posed by deepfakes. He'll also talk about fact-checking, content moderation and why it's so important to approach the media we consume in the digital age with a healthy amount of scepticism.

Jonathan will probe AI’s impact on our political world and what it might mean for the future of democracy. This talk aims to leave its audience with a better understanding of the ways in which AI could both support and undermine democracy, and the steps we can take to increase our resilience to AI-generated misinformation.

This series of Turing Lectures is presented by The Alan Turing Institute, in partnership with Knowledge Quarter.

Nov 9, 2023 THE ROYAL INSTITUTION
As generative AI becomes increasingly sophisticated, its potential to revolutionise the way we interact with data is clear. It has already shown its ability to assist with tasks such as image and video synthesis, text and speech generation, and music composition. However, the rapid development of generative AI models has also raised concerns about their misuse, particularly in the context of disinformation campaigns, deepfakes and online harassment.

Are generative AI models moving too fast for regulation to keep up? Will the development of generative AI outpace our ability to ensure its responsible use?

In this lecture, Dr Mhairi Aitken will examine what this means for online and offline safety and discuss how society might be able to mitigate these risks.

Mhairi Aitken is an Ethics Fellow in the Public Policy Programme at The Alan Turing Institute, and an Honorary Senior Fellow at the Australian Centre for Health Engagement, Evidence and Values (ACHEEV) at the University of Wollongong in Australia. She is a sociologist whose research examines the social and ethical dimensions of digital innovation, particularly relating to uses of data and AI. She was included in the 2023 international list of “100 Brilliant Women in AI Ethics”.

This lecture is part of a series of events - How AI broke the internet - that explores the various angles of large language models and generative AI in the public eye.

This series of Turing Lectures is organised in collaboration with The Royal Institution.

https://www.youtube.com/watch?v=UYdx74st9O4

Dec 5, 2023 STANFORD CS25
Douwe Kiela, Contextual AI

Language models have led to amazing progress, but they also have important shortcomings. One solution for many of these shortcomings is retrieval augmentation. I will introduce the topic, survey recent literature on retrieval augmented language models and finish with some of the main open questions.
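To make the idea concrete, here is a minimal Python sketch of the retrieval-augmentation loop. It illustrates the general technique rather than Kiela's system: the embed function is a toy bag-of-words placeholder standing in for a real embedding model, and the assembled prompt would be handed to whatever generator model you use.

    # Minimal sketch of retrieval-augmented generation (RAG), not any
    # specific system: retrieve the passages most similar to the query
    # and prepend them to the prompt. `embed` is a toy placeholder for
    # a real embedding model.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Placeholder embedding: hash words into a bag-of-words vector."""
        vec = np.zeros(256)
        for word in text.lower().split():
            vec[hash(word) % 256] += 1.0
        return vec / (np.linalg.norm(vec) + 1e-8)

    def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
        """Return the k passages most similar to the query."""
        q = embed(query)
        scores = [float(q @ embed(doc)) for doc in corpus]
        top = np.argsort(scores)[::-1][:k]
        return [corpus[i] for i in top]

    corpus = [
        "Retrieval augmentation grounds model outputs in external documents.",
        "Language models can hallucinate facts not present in training data.",
        "The CS25 course at Stanford covers transformer research.",
    ]

    query = "Why do language models need retrieval?"
    context = "\n".join(retrieve(query, corpus))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    print(prompt)  # this prompt would then be passed to a generator LM

The design point is that the retriever grounds the model in documents that can be inspected and updated without retraining; better embeddings and smarter chunking change the quality of this loop, not its shape.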

More about the course can be found here: https://web.stanford.edu/class/cs25/

Mustafa Suleyman is the ultimate AI insider. As co-founder of DeepMind and Inflection AI, he is one of the pioneers of the artificial intelligence revolution, potentially the single greatest accelerant of progress in history. Investors in Inflection AI include the likes of Microsoft, Nvidia and Bill Gates. Suleyman says AI represents nothing less than a step change in human capability and human society, introducing both risks and innovations on an awesome scale. This is what is coming.

In September 2023 Suleyman came to the Intelligence Squared stage to discuss his new book The Coming Wave: Technology, Power and the Twenty-First Century’s Greatest Dilemma. In conversation with Zanny Minton Beddoes, Editor-in-Chief of The Economist, he explained how he believes we are approaching a critical threshold in the history of humankind. Soon we will live surrounded by AIs which will carry out complex tasks – operating businesses, running government services and maintaining infrastructure. This will be a world of DNA printers and quantum computers, robot assistants and abundant energy, as well as human-engineered pathogens and autonomous weapons.

In this episode, Geoffrey Hinton, a renowned computer scientist and a leading expert in deep learning, provides an in-depth exploration of his groundbreaking new learning algorithm - the forward-forward algorithm. Hinton argues this algorithm provides a more plausible model for how the cerebral cortex might learn, and could be the key to unlocking new possibilities in artificial intelligence.

Throughout the episode, Hinton discusses the mechanics of the forward-forward algorithm, including how it differs from traditional deep learning models and what makes it more effective. He also provides insights into the potential applications of this new algorithm, such as enabling machines to perform tasks that were previously thought to be exclusive to human cognition.
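For readers who want the gist in code, below is a toy sketch of the forward-forward idea, not Hinton's exact implementation: each layer gets a purely local objective that raises its "goodness" (the sum of squared activations) on real ("positive") data and lowers it on fake ("negative") data, with no backpropagation between layers.

    # Toy sketch of the forward-forward idea (arXiv:2212.13345), not
    # Hinton's exact implementation. Each layer is trained with a local
    # logistic loss on its goodness; the data here is synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    theta = 2.0   # goodness threshold
    lr = 0.03

    def layer_step(W, x, positive: bool):
        """One local update for a single ReLU layer on one example."""
        h = np.maximum(W @ x, 0.0)        # forward pass
        g = np.sum(h ** 2)                # goodness of this layer
        # Probability the example is "positive" given its goodness.
        p = 1.0 / (1.0 + np.exp(-(g - theta)))
        target = 1.0 if positive else 0.0
        # Hand-derived gradient of the logistic loss w.r.t. W:
        # dL/dg = (p - target), dg/dh = 2h, dh/dW = 1[h > 0] * x
        dW = (p - target) * np.outer(2.0 * h * (h > 0), x)
        W -= lr * dW
        # Normalize the output so the next layer cannot simply read
        # off the goodness of this one.
        return h / (np.linalg.norm(h) + 1e-8)

    # Two layers trained greedily on positive/negative examples.
    W1 = rng.normal(0, 0.1, (16, 8))
    W2 = rng.normal(0, 0.1, (16, 16))
    for _ in range(200):
        pos = rng.normal(1.0, 0.2, 8)    # stand-in for real data
        neg = rng.normal(-1.0, 0.2, 8)   # stand-in for negative data
        h1 = layer_step(W1, pos, positive=True)
        layer_step(W2, h1, positive=True)
        h1 = layer_step(W1, neg, positive=False)
        layer_step(W2, h1, positive=False)

Normalizing each layer's output before passing it on is the detail that forces later layers to learn something new rather than just reading off the goodness of earlier ones.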

Hinton shares his thoughts on the current state of deep learning and its future prospects, particularly in neuroscience. He explores how advances in deep learning may help us gain a better understanding of our own brains and how we can use this knowledge to create more intelligent machines.

Overall, this podcast provides a fascinating glimpse into the latest developments in artificial intelligence and the cutting-edge research being conducted by one of its leading pioneers.

Oct 18, 2024 SUPERC - RWTH HOCHSCHULVERWALTUNG
Speaker: Professor Dirk Helbing (ETH Zürich)
Title: Artificial intelligence - a universal tool?

Center for Artificial Intelligence, RWTH Aachen: https://www.ai.rwth-aachen.de/

Abstract:
Research and development in the field of artificial intelligence is making breathtaking progress. Billions are being invested in the technology worldwide. It seems that we will soon have a universal tool that we can entrust with our world problems. But is that really the case? Or does AI also have limits that need to be taken into account? I will try to answer this question and present interesting examples of complex dynamic systems that urge caution against neglecting alternative approaches.
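As one illustration of the kind of caution such systems demand (an example of ours, not taken from the lecture), consider the logistic map in its chaotic regime: two trajectories that start a billionth apart become completely uncorrelated within a few dozen steps, so no tool, AI included, can forecast far ahead from imperfectly measured initial conditions.

    # Sensitive dependence on initial conditions in the logistic map.
    # The gap between two nearly identical starting points roughly
    # doubles every step, destroying long-range predictability.
    def logistic(x: float, r: float = 4.0) -> float:
        return r * x * (1.0 - x)

    a, b = 0.4, 0.4 + 1e-9
    for step in range(60):
        a, b = logistic(a), logistic(b)
        if step % 10 == 9:
            print(f"step {step + 1:2d}: |difference| = {abs(a - b):.3e}")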
Bio:
Dirk Helbing is Professor of Computational Social Science at the Department of Humanities, Social and Political Sciences at ETH Zurich, where he is also associated with the Department of Computer Science.
His current studies focus on the digital society, smart cities and the challenges and opportunities of digital democracies.
At TU Delft, he coordinated the doctoral program “Engineering Social Technologies for a Responsible Digital Future”.
He is also an elected member of the German Academy of Sciences “Leopoldina” and of the external faculty of the Complexity Science Hub Vienna.

Ricardo Hausmann

Prosperity is associated with technological progress, which is related to advances in scientific and productive knowledge. But where is that knowledge embedded? How is it put to use? And what kind of social structures does it give rise to? This lecture will shed light on why the world is so unequal, why and how some laggard countries catch up and others fall further behind, why inequality within countries is so large, why we need governments and why it is so hard for people to get along. At the core of it all is a view of society as a social brain that has distributed knowledge that must be linked together through complex networks in order to be used. This gives rise to economic complexity and both its convenient and inconvenient implications.

Ricardo Hausmann is the founder and Director of Harvard’s Growth Lab and the Rafik Hariri Professor of the Practice of International Political Economy at Harvard Kennedy School. He earned a PhD in economics at Cornell University. In addition to positions with the government of Venezuela, he served as the first chief economist of the Inter-American Development Bank and as chair of the IMF-World Bank Development Committee. He is a member of SFI's external faculty and was recently named co-chair of the Santa Fe Institute's Science Board, along with Melanie Mitchell. His research interests include issues of growth, macroeconomic stability, international finance, and the social dimensions of development. He brings a networks and statistical mechanics perspective to developmental economics.

Prof. Tim Rocktäschel, AI researcher at UCL and Google DeepMind, talks about open-ended AI systems. These systems aim to keep learning and improving on their own, like evolution does in nature.
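As a caricature of what "open-ended" means in practice, here is a minimal novelty-search-style loop in Python, in the spirit of the POET and Picbreeder work referenced below rather than any specific system: instead of optimizing a fixed objective, it keeps variants that are novel relative to an ever-growing archive, so the system never runs out of things to pursue.

    # Minimal novelty-search-style loop: retain behaviors that are
    # novel relative to the archive, rather than maximizing a fixed
    # objective. The behavior descriptor here is a toy placeholder.
    import random

    def behavior(genome: list[float]) -> tuple:
        """Hypothetical behavior descriptor: a coarse rounding."""
        return tuple(round(g, 1) for g in genome)

    def novelty(b: tuple, archive: list[tuple]) -> float:
        """Mean squared distance to the nearest archived behaviors."""
        if not archive:
            return float("inf")
        dists = sorted(sum((x - y) ** 2 for x, y in zip(b, a)) for a in archive)
        return sum(dists[:5]) / min(5, len(dists))

    archive: list[tuple] = []
    population = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(20)]
    for generation in range(50):
        for genome in population:
            b = behavior(genome)
            if novelty(b, archive) > 0.05:   # novel enough: the archive grows
                archive.append(b)
        # Mutate: the "objective" never changes, only the archive does.
        population = [[g + random.gauss(0, 0.1) for g in genome]
                      for genome in random.choices(population, k=20)]
    print(f"archive size after 50 generations: {len(archive)}")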

TOC:
00:00:00 Introduction to Open-Ended AI and Key Concepts
00:01:37 Tim Rocktäschel's Background and Research Focus
00:06:25 Defining Open-Endedness in AI Systems
00:10:39 Subjective Nature of Interestingness and Learnability
00:16:22 Open-Endedness in Practice: Examples and Limitations
00:17:50 Assessing Novelty in Open-ended AI Systems
00:20:05 Adversarial Attacks and AI Robustness
00:24:05 Rainbow Teaming and LLM Safety
00:25:48 Open-ended Research Approaches in AI
00:29:05 Balancing Long-term Vision and Exploration in AI Research
00:37:25 LLMs in Program Synthesis and Open-Ended Learning
00:37:55 Transition from Human-Based to Novel AI Strategies
00:39:00 Expanding Context Windows and Prompt Evolution
00:40:17 AI Intelligibility and Human-AI Interfaces
00:46:04 Self-Improvement and Evolution in AI Systems

Show notes (New!) https://www.dropbox.com/scl/fi/5avpsy…

REFS:
00:01:47 - UCL DARK Lab (Rocktäschel) - AI research lab focusing on RL and open-ended learning - https://ucldark.com/

00:02:31 - GENIE (Bruce) - Generative interactive environment from unlabelled videos - https://arxiv.org/abs/2402.15391

00:02:42 - Promptbreeder (Fernando) - Self-referential LLM prompt evolution - https://arxiv.org/abs/2309.16797

00:03:05 - Picbreeder (Secretan) - Collaborative online image evolution - https://dl.acm.org/doi/10.1145/135705…

00:03:14 - Why Greatness Cannot Be Planned (Stanley) - Book on open-ended exploration - https://www.amazon.com/Why-Greatness-…

00:04:36 - NetHack Learning Environment (Küttler) - RL research in procedurally generated game - https://arxiv.org/abs/2006.13760

00:07:35 - Open-ended learning (Clune) - AI systems for continual learning and adaptation - https://arxiv.org/abs/1905.10985

00:07:35 - OMNI (Zhang) - LLMs modeling human interestingness for exploration - https://arxiv.org/abs/2306.01711

00:10:42 - Observer theory (Wolfram) - Computationally bounded observers in complex systems - https://writings.stephenwolfram.com/2…

00:15:25 - Human-Timescale Adaptation (Rocktäschel) - RL agent adapting to novel 3D tasks - https://arxiv.org/abs/2301.07608

00:16:15 - Open-Endedness for AGI (Hughes) - Importance of open-ended learning for AGI - https://arxiv.org/abs/2406.04268

00:16:35 - POET algorithm (Wang) - Open-ended approach to generate and solve challenges - https://arxiv.org/abs/1901.01753

00:17:20 - AlphaGo (Silver) - AI mastering the game of Go - https://deepmind.google/technologies/…

00:20:35 - Adversarial Go attacks (Dennis) - Exploiting weaknesses in Go AI systems - https://www.ifaamas.org/Proceedings/a…

00:22:00 - Levels of AGI (Morris) - Framework for categorizing AGI progress - https://arxiv.org/abs/2311.02462

00:24:30 - Rainbow Teaming (Samvelyan) - LLM-based adversarial prompt generation - https://arxiv.org/abs/2402.16822

00:27:45 - AI Debate (Khan) - Improving LLM truthfulness through debate - https://proceedings.mlr.press/v235/kh…

00:29:40 - Gemini (Google DeepMind) - Advanced multimodal AI model - https://deepmind.google/technologies/…

00:30:15 - How to Take Smart Notes (Ahrens) - Effective note-taking methodology - https://www.amazon.com/How-Take-Smart…

00:35:05 - Voyager (Wang) - Open-ended embodied agent using GPT-4 in Minecraft - https://arxiv.org/abs/2305.16291

00:38:00 - AlphaGo Nature paper (Silver) - Deep neural networks and tree search for Go - https://www.nature.com/articles/natur…

00:38:05 - AlphaStar (Vinyals) - AI achieving grandmaster level in StarCraft II - https://www.nature.com/articles/s4158…

00:42:00 - The Beginning of Infinity (Deutsch) - Book on explanations and scientific progress - https://www.amazon.com/Beginning-Infi…

00:43:30 - AI model collapse (Shumailov) - Risks of training on AI-generated content - https://www.nature.com/articles/s4158…

00:48:35 - Chain-of-Thought Prompting (Wei) - Improving LLM reasoning through prompting - https://arxiv.org/abs/2201.11903

00:49:35 - Self-improving neural networks (Schmidhuber) - Early work on self-referential networks

00:54:45 - UCL DARK Lab (UCL Computer Science) - RL and Deep Learning research group - https://www.ucl.ac.uk/computer-scienc…

Aug 14, 2024 Google DeepMind: The Podcast
Professor Hannah Fry is joined by Google DeepMind's senior research director Douglas Eck to explore AI's capacity for true creativity. They delve into the complexities of defining creativity, the challenges of AI-generated content and attribution, and whether AI can help us to connect with each other in new and meaningful ways.

Timecodes
01:22 Defining creativity
02:12 Small c and big C creativity
02:40 Can AI be truly original?
04:12 The social side of art
05:42 How do we measure AI creativity?
07:57 Challenges of continuity
09:25 How does AI learn?
10:37 What’s next for generative AI?
10:42 Multimodality (video, image, music)
12:12 AI and scientific discovery
13:22 Hypotheticals
14:12 The role of AI in art creation
15:12 Conclusion

In this episode, Professor Hannah Fry sits down with Pushmeet Kohli, VP of Science Research at Google DeepMind, to discuss AI’s impact on scientific discovery. They go on a whirlwind tour of scientific projects, touching on recent breakthroughs in AlphaFold, material science, weather forecasting, and mathematics to better understand how AI can enhance our scientific understanding of the world.

Timecodes:
00:00 Intro
01:13 AlphaFold
06:13 AlphaFold Database
08:14 Weather forecasting
11:24 Creating new materials with deep learning
25:10 Imposter syndrome, being a generalist, and Nobel prize winners
31:21 Choosing the right projects
32:07 Root node problems
34:32 Large language models for science
36:06 Function search and algorithmic discovery
42:10 Math olympiad
46:26 What is coming next
48:35 Reflections from Hannah