Mustafa Suleyman: The AI Pioneer Reveals the Future in ‘The Coming Wave’ | Intelligence Squared
Mustafa Suleyman is the ultimate AI insider. As co-founder of DeepMind and Inflection AI, he is one of the pioneers of the artificial intelligence revolution, potentially the single greatest accelerant of progress in history. Investors in Inflection AI include the likes of Microsoft, Nvidia and Bill Gates. Suleyman says AI represents nothing less than a step change in human capability and human society, introducing both risks and innovations on an awesome scale. This is what is coming.
In September 2023 Suleyman came to the Intelligence Squared stage to discuss his new book The Coming Wave: Technology, Power and the Twenty-First Century’s Greatest Dilemma. In conversation with Zanny Minton Beddoes, Editor-in-Chief of The Economist, he explained how he believes we are approaching a critical threshold in the history of humankind. Soon we will live surrounded by AIs which will carry out complex tasks – operating businesses, running government services and maintaining infrastructure. This will be a world of DNA printers and quantum computers, robot assistants and abundant energy, as well as human-engineered pathogens and autonomous weapons.
Geoffrey Hinton Unpacks The Forward-Forward Algorithm
In this episode, Geoffrey Hinton, a renowned computer scientist and a leading expert in deep learning, provides an in-depth exploration of his groundbreaking new learning algorithm – the forward-forward algorithm. Hinton argues this algorithm provides a more plausible model for how the cerebral cortex might learn, and could be the key to unlocking new possibilities in artificial intelligence.
Throughout the episode, Hinton discusses the mechanics of the forward-forward algorithm, including how it differs from traditional deep learning models and what makes it more effective. He also provides insights into the potential applications of this new algorithm, such as enabling machines to perform tasks that were previously thought to be exclusive to human cognition.
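The episode describes the algorithm's mechanics only at a high level. As background, here is a minimal NumPy sketch of the forward-forward idea as published in Hinton's 2022 paper: each layer is trained locally to produce high "goodness" (sum of squared activations) on positive (real) data and low goodness on negative (corrupted) data, with no backward pass of errors between layers. The toy data, layer sizes, and hyperparameters below are illustrative assumptions, not Hinton's actual experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    # Length-normalize activations so a layer's goodness cannot
    # simply be passed along to the next layer.
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

class FFLayer:
    def __init__(self, n_in, n_out, lr=0.05, theta=2.0):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out))
        self.lr, self.theta = lr, theta

    def forward(self, x):
        return np.maximum(x @ self.W, 0.0)           # ReLU activations

    def goodness(self, x):
        return (self.forward(x) ** 2).sum(axis=1)    # sum of squared activity

    def train_step(self, x, sign):
        # sign = +1.0 for positive (real) data, -1.0 for negative data.
        h = self.forward(x)
        g = (h ** 2).sum(axis=1)
        # Probability the layer assigns to "this input is positive-like".
        p = 1.0 / (1.0 + np.exp(-sign * (g - self.theta)))
        # Gradient of -log p w.r.t. h; purely local, no backprop across layers.
        grad_h = (-sign * (1.0 - p))[:, None] * 2.0 * h
        self.W -= self.lr * (x.T @ grad_h) / len(x)
        return normalize(h)                          # input for the next layer

# Toy stand-ins for "real" vs "corrupted" inputs (an assumption for
# illustration; the paper uses e.g. images paired with wrong labels).
x_pos = rng.normal(+0.5, 0.5, (256, 8))
x_neg = rng.normal(-0.5, 0.5, (256, 8))

layers = [FFLayer(8, 16), FFLayer(16, 16)]
for _ in range(300):
    hp, hn = x_pos, x_neg
    for layer in layers:                             # each layer learns greedily
        hp = layer.train_step(hp, +1.0)
        hn = layer.train_step(hn, -1.0)

def total_goodness(x):
    g, h = 0.0, x
    for layer in layers:
        g += layer.goodness(h).mean()
        h = normalize(layer.forward(h))
    return g

print(f"goodness(pos)={total_goodness(x_pos):.2f}  "
      f"goodness(neg)={total_goodness(x_neg):.2f}")
```

After training, positive inputs should score markedly higher total goodness than negative ones, which is the signal a forward-forward network uses in place of a backpropagated loss.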
Hinton shares his thoughts on the current state of deep learning and its future prospects, particularly in neuroscience. He explores how advances in deep learning may help us gain a better understanding of our own brains and how we can use this knowledge to create more intelligent machines.
Overall, this podcast provides a fascinating glimpse into the latest developments in artificial intelligence and the cutting-edge research being conducted by one of its leading pioneers.
AIC: Artificial intelligence – a universal tool?
Oct 18, 2024, SuperC – RWTH University Administration. Speaker: Professor Dirk Helbing (ETH Zürich). Title: Artificial intelligence – a universal tool?
Center for Artificial Intelligence, RWTH Aachen: https://www.ai.rwth-aachen.de/
Abstract: Research and development in the field of artificial intelligence is making breathtaking progress. Billions are being invested in the technology worldwide. It seems that we will soon have a universal tool that we can entrust with our world problems. But is that really the case? Or does AI also have limits that need to be taken into account? I will try to answer this question and present interesting examples of complex dynamic systems that urge caution against neglecting alternative approaches.
Bio: Dirk Helbing is Professor of Computational Social Science at the Department of Humanities, Social and Political Sciences at ETH Zurich, where he is also affiliated with the Department of Computer Science. His current studies focus on the digital society, smart cities, and the challenges and opportunities of digital democracies. At TU Delft, he coordinated the doctoral program “Engineering Social Technologies for a Responsible Digital Future”. He is an elected member of the German Academy of Sciences Leopoldina and of the External Faculty of the Complexity Science Hub Vienna.
Knowledge, Prosperity and Economic Complexity: How are They Connected?
Ricardo Hausmann
Prosperity is associated with technological progress, which in turn reflects advances in scientific and productive knowledge. But where is that knowledge embedded? How is it put to use? And what kind of social structures does it give rise to? This lecture sheds light on why the world is so unequal, why some laggard countries catch up while others fall further behind, why inequality within countries is so large, why we need governments, and why it is so hard for people to get along. At the core is a view of society as a social brain: knowledge is distributed and must be linked together through complex networks in order to be used. This gives rise to economic complexity, with both its convenient and inconvenient implications.
Ricardo Hausmann is the founder and Director of Harvard’s Growth Lab and the Rafik Hariri Professor of the Practice of International Political Economy at Harvard Kennedy School. He earned a PhD in economics at Cornell University. In addition to positions with the government of Venezuela, he served as the first chief economist of the Inter-American Development Bank and as chair of the IMF–World Bank Development Committee. He is SFI external faculty and was recently named co-chair of the Santa Fe Institute’s Science Board, along with Melanie Mitchell. His research interests include growth, macroeconomic stability, international finance, and the social dimensions of development. He brings a networks and statistical-mechanics perspective to development economics.
Open-Ended AI: The Key to Superhuman Intelligence?
Prof. Tim Rocktäschel, AI researcher at UCL and Google DeepMind, talks about open-ended AI systems. These systems aim to keep learning and improving on their own, like evolution does in nature.
TOC:
00:00:00 Introduction to Open-Ended AI and Key Concepts
00:01:37 Tim Rocktäschel’s Background and Research Focus
00:06:25 Defining Open-Endedness in AI Systems
00:10:39 Subjective Nature of Interestingness and Learnability
00:16:22 Open-Endedness in Practice: Examples and Limitations
00:17:50 Assessing Novelty in Open-ended AI Systems
00:20:05 Adversarial Attacks and AI Robustness
00:24:05 Rainbow Teaming and LLM Safety
00:25:48 Open-ended Research Approaches in AI
00:29:05 Balancing Long-term Vision and Exploration in AI Research
00:37:25 LLMs in Program Synthesis and Open-Ended Learning
00:37:55 Transition from Human-Based to Novel AI Strategies
00:39:00 Expanding Context Windows and Prompt Evolution
00:40:17 AI Intelligibility and Human-AI Interfaces
00:46:04 Self-Improvement and Evolution in AI Systems
Aug 14, 2024. Google DeepMind: The Podcast. Professor Hannah Fry is joined by Google DeepMind’s senior research director Douglas Eck to explore AI’s capacity for true creativity. They delve into the complexities of defining creativity, the challenges of AI-generated content and attribution, and whether AI can help us connect with each other in new and meaningful ways.
—
Timecodes:
01:22 Defining creativity
02:12 Small c and big C creativity
02:40 Can AI be truly original?
04:12 The social side of art
05:42 How do we measure AI creativity?
07:57 Challenges of continuity
09:25 How does AI learn?
10:37 What’s next for generative AI?
10:42 Multimodality (video, image, music)
12:12 AI and scientific discovery
13:22 Hypotheticals
14:12 The role of AI in art creation
15:12 Conclusion
AI: Supercharging Scientific Exploration with Pushmeet Kohli
In this episode, Professor Hannah Fry sits down with Pushmeet Kohli, VP of Science Research at Google DeepMind, to discuss AI’s impact on scientific discovery. They go on a whirlwind tour of scientific projects, touching on recent breakthroughs in AlphaFold, materials science, weather forecasting, and mathematics, to better understand how AI can enhance our scientific understanding of the world.
Timecodes:
00:00 Intro
01:13 AlphaFold
06:13 AlphaFold Database
08:14 Weather forecasting
11:24 Creating new materials with deep learning
25:10 Imposter syndrome, being a generalist, and Nobel prize winners
31:21 Choosing the right projects
32:07 Root node problems
34:32 Large language models for science
36:06 Function search and algorithmic discovery
42:10 Math olympiad
46:26 What is coming next
48:35 Reflections from Hannah
#57 – Prof. MELANIE MITCHELL – Why AI is harder than we think
Since its beginning in the 1950s, the field of artificial intelligence has vacillated between periods of optimistic predictions and massive investment and periods of disappointment, loss of confidence, and reduced funding. Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected.
Professor Melanie Mitchell thinks one reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself.
Framing [00:00:00]
Dartmouth AI Summer Workshop [00:07:02]
Letitia Intro to Melanie [00:09:22]
The Googleplex situation with Melanie and Douglas Hofstadter [00:14:58]
Melanie paper [00:21:04]
Note on audio quality [00:25:45]
Main show kick off [00:26:51]
AI hype [00:29:57]
On GPT-3 [00:31:46]
Melanie’s “Why is AI harder than we think” paper [00:36:18]
The 3rd fallacy: Avoiding wishful mnemonics [00:42:23]
Concepts and primitives [00:47:56]
The 4th fallacy [00:51:19]
What can we learn from human intelligence? [00:53:00]
Pure intelligence [01:00:14]
Unrobust features [01:02:34]
The good things of the past in AI research [01:11:30]
Copycat [01:17:56]
Thoughts on the “neuro-symbolic camp” [01:26:49]
Type I or Type II [01:32:06]
Adversarial examples — a fun question [01:35:55]
How much do we want human-like (human-interpretable) features? [01:43:44]
The difficulty of creating intelligence [01:47:49]
Show debrief [01:51:24]
MIT AGI: Building machines that see, learn, and think like people (Josh Tenenbaum)
Feb 8, 2018 This is a talk by Josh Tenenbaum for course 6.S099: Artificial General Intelligence. This class is free and open to everyone. Our goal is to take an engineering approach to exploring possible paths toward building human-level intelligence for a better world.
AI: Grappling with a New Kind of Intelligence
A novel intelligence has roared into the mainstream, sparking euphoric excitement as well as abject fear. Explore the landscape of possible futures in a brave new world of thinking machines, with the very leaders at the vanguard of artificial intelligence.
The Big Ideas Series is supported in part by the John Templeton Foundation.
00:00 – Introduction
07:32 – Yann LeCun introduction
13:35 – Creating the AI (Brian Greene)
20:55 – Should we model AI on human intelligence?
27:55 – Schrödinger’s cat is alive
37:25 – Sébastien Bubeck introduction
44:51 – Asking ChatGPT to write a poem
52:26 – What is happening inside GPT-4?
01:02:56 – How much data is needed to train a language model?
01:11:20 – Tristan Harris introduction
01:17:13 – Is profit motive the best way to go about creating a language model?
01:23:41 – AI and its place in social media
01:29:33 – Is new technology to blame for cultural phenomena?
01:36:34 – Can you have a synthetic version of AI vs the large data set models?
01:44:27 – Where will AI be in 5 to 10 years?
01:54:45 – Credits