Ricardo Hausmann

Prosperity is associated with technological progress, which in turn rests on advances in scientific and productive knowledge. But where is that knowledge embedded? How is it put to use? And what kinds of social structures does it give rise to? This lecture sheds light on why the world is so unequal, why and how some laggard countries catch up while others fall further behind, why inequality within countries is so large, why we need governments, and why it is so hard for people to get along. At the core of it all is a view of society as a social brain whose distributed knowledge must be linked together through complex networks in order to be used. This gives rise to economic complexity, with both its convenient and inconvenient implications.

Ricardo Hausmann is the founder and Director of Harvard’s Growth Lab and the Rafik Hariri Professor of the Practice of International Political Economy at Harvard Kennedy School. He earned a PhD in economics at Cornell University. In addition to positions with the government of Venezuela, he served as the first chief economist of the Inter-American Development Bank and as chair of the IMF-World Bank Development Committee. He is a member of SFI's external faculty and was recently named co-chair of the Santa Fe Institute's Science Board, alongside Melanie Mitchell. His research interests include growth, macroeconomic stability, international finance, and the social dimensions of development. He brings a networks and statistical-mechanics perspective to development economics.
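
As a rough, illustrative sketch of the economic complexity approach mentioned above (not material from the lecture itself): the method of reflections associated with Hausmann and Hidalgo starts from a binary country-product export matrix and computes each country's diversity and each product's ubiquity. The toy matrix and values below are made up purely for illustration.

```python
import numpy as np

# Toy binary country-by-product matrix M: M[c, p] = 1 if country c exports
# product p with revealed comparative advantage. Illustrative values only,
# not real trade data.
M = np.array([
    [1, 1, 1, 1],   # country A: exports many products
    [1, 1, 0, 0],   # country B
    [1, 0, 0, 0],   # country C: exports only the most common product
])

# Method-of-reflections starting point:
diversity = M.sum(axis=1)   # k_c,0: how many products each country exports
ubiquity = M.sum(axis=0)    # k_p,0: how many countries export each product

# One reflection step: average ubiquity of a country's products, and
# average diversity of a product's exporters.
k_c_1 = (M @ ubiquity) / diversity
k_p_1 = (M.T @ diversity) / ubiquity

print("diversity:", diversity)   # [4 2 1]
print("ubiquity:", ubiquity)     # [3 2 1 1]
print("k_c,1:", k_c_1)
print("k_p,1:", k_p_1)
```

Iterating these reflections further is what yields economic complexity rankings; the sketch stops after one step just to show the network structure at work.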

Prof. Tim Rocktäschel, an AI researcher at UCL and Google DeepMind, discusses open-ended AI systems: systems designed to keep learning and improving on their own, much as evolution does in nature.
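
As a rough, hypothetical sketch of the evolutionary flavour of open-endedness discussed in the episode (not code from any of the papers referenced below): a toy novelty-search loop that mutates candidate solutions and keeps those that look new relative to an archive, rather than only those that score best on a fixed objective.

```python
import random

def mutate(genome):
    """Randomly perturb one gene; a stand-in for domain-specific variation."""
    i = random.randrange(len(genome))
    child = list(genome)
    child[i] += random.uniform(-1.0, 1.0)
    return child

def novelty(genome, archive, k=5):
    """Mean distance to the k nearest archived genomes; higher = more novel."""
    if not archive:
        return float("inf")
    dists = sorted(
        sum((a - b) ** 2 for a, b in zip(genome, other)) ** 0.5
        for other in archive
    )
    return sum(dists[:k]) / min(k, len(dists))

# Open-ended-style search: favour novelty over a fixed objective, so the
# system keeps producing new behaviours instead of converging and stopping.
archive = []
population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]

for generation in range(50):
    children = [mutate(random.choice(population)) for _ in range(20)]
    scored = sorted(children, key=lambda g: novelty(g, archive), reverse=True)
    archive.extend(scored[:3])   # remember the most novel offspring
    population = scored[:20]     # novelty, not fitness, drives selection

print(f"archive size after 50 generations: {len(archive)}")
```

Real open-ended systems replace this toy distance with richer, often learned notions of novelty and interestingness, which is one of the themes of the conversation.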

TOC:
00:00:00 Introduction to Open-Ended AI and Key Concepts
00:01:37 Tim Rocktäschel's Background and Research Focus
00:06:25 Defining Open-Endedness in AI Systems
00:10:39 Subjective Nature of Interestingness and Learnability
00:16:22 Open-Endedness in Practice: Examples and Limitations
00:17:50 Assessing Novelty in Open-ended AI Systems
00:20:05 Adversarial Attacks and AI Robustness
00:24:05 Rainbow Teaming and LLM Safety
00:25:48 Open-ended Research Approaches in AI
00:29:05 Balancing Long-term Vision and Exploration in AI Research
00:37:25 LLMs in Program Synthesis and Open-Ended Learning
00:37:55 Transition from Human-Based to Novel AI Strategies
00:39:00 Expanding Context Windows and Prompt Evolution
00:40:17 AI Intelligibility and Human-AI Interfaces
00:46:04 Self-Improvement and Evolution in AI Systems

Show notes (New!) https://www.dropbox.com/scl/fi/5avpsy…

REFS:
00:01:47 - UCL DARK Lab (Rocktäschel) - AI research lab focusing on RL and open-ended learning - https://ucldark.com/

00:02:31 - GENIE (Bruce) - Generative interactive environment from unlabelled videos - https://arxiv.org/abs/2402.15391

00:02:42 - Promptbreeder (Fernando) - Self-referential LLM prompt evolution - https://arxiv.org/abs/2309.16797

00:03:05 - Picbreeder (Secretan) - Collaborative online image evolution - https://dl.acm.org/doi/10.1145/135705…

00:03:14 - Why Greatness Cannot Be Planned (Stanley) - Book on open-ended exploration - https://www.amazon.com/Why-Greatness-…

00:04:36 - NetHack Learning Environment (Küttler) - RL research in procedurally generated game - https://arxiv.org/abs/2006.13760

00:07:35 - Open-ended learning (Clune) - AI systems for continual learning and adaptation - https://arxiv.org/abs/1905.10985

00:07:35 - OMNI (Zhang) - LLMs modeling human interestingness for exploration - https://arxiv.org/abs/2306.01711

00:10:42 - Observer theory (Wolfram) - Computationally bounded observers in complex systems - https://writings.stephenwolfram.com/2…

00:15:25 - Human-Timescale Adaptation (Rocktäschel) - RL agent adapting to novel 3D tasks - https://arxiv.org/abs/2301.07608

00:16:15 - Open-Endedness for AGI (Hughes) - Importance of open-ended learning for AGI - https://arxiv.org/abs/2406.04268

00:16:35 - POET algorithm (Wang) - Open-ended approach to generate and solve challenges - https://arxiv.org/abs/1901.01753

00:17:20 - AlphaGo (Silver) - AI mastering the game of Go - https://deepmind.google/technologies/…

00:20:35 - Adversarial Go attacks (Dennis) - Exploiting weaknesses in Go AI systems - https://www.ifaamas.org/Proceedings/a…

00:22:00 - Levels of AGI (Morris) - Framework for categorizing AGI progress - https://arxiv.org/abs/2311.02462

00:24:30 - Rainbow Teaming (Samvelyan) - LLM-based adversarial prompt generation - https://arxiv.org/abs/2402.16822

00:27:45 - AI Debate (Khan) - Improving LLM truthfulness through debate - https://proceedings.mlr.press/v235/kh…

00:29:40 - Gemini (Google DeepMind) - Advanced multimodal AI model - https://deepmind.google/technologies/…

00:30:15 - How to Take Smart Notes (Ahrens) - Effective note-taking methodology - https://www.amazon.com/How-Take-Smart…

00:35:05 - Voyager (Wang) - Open-ended embodied agent using GPT-4 in Minecraft - https://arxiv.org/abs/2305.16291

00:38:00 - AlphaGo Nature paper (Silver) - Deep neural networks and tree search for Go - https://www.nature.com/articles/natur…

00:38:05 - AlphaStar (Vinyals) - AI achieving grandmaster level in StarCraft II - https://www.nature.com/articles/s4158…

00:42:00 - The Beginning of Infinity (Deutsch) - Book on explanations and scientific progress - https://www.amazon.com/Beginning-Infi…

00:43:30 - AI model collapse (Shumailov) - Risks of training on AI-generated content - https://www.nature.com/articles/s4158…

00:48:35 - Chain-of-Thought Prompting (Wei) - Improving LLM reasoning through prompting - https://arxiv.org/abs/2201.11903

00:49:35 - Self-improving neural networks (Schmidhuber) - Early work on self-referential networks

00:54:45 - UCL DARK Lab (UCL Computer Science) - RL and Deep Learning research group - https://www.ucl.ac.uk/computer-scienc…

Aug 14, 2024 Google DeepMind: The Podcast
Professor Hannah Fry is joined by Google DeepMind's senior research director Douglas Eck to explore AI's capacity for true creativity. They delve into the complexities of defining creativity, the challenges of AI-generated content and attribution, and whether AI can help us to connect with each other in new and meaningful ways.

Timecodes
01:22 Defining creativity
02:12 Small c and big C creativity
02:40 Can AI be truly original?
04:12 The social side of art
05:42 How do we measure AI creativity?
07:57 Challenges of continuity
09:25 How does AI learn?
10:37 What’s next for generative AI?
10:42 Multimodality (video, image, music)
12:12 AI and scientific discovery
13:22 Hypotheticals
14:12 The role of AI in art creation
15:12 Conclusion

In this episode, Professor Hannah Fry sits down with Pushmeet Kohli, VP of Science Research at Google DeepMind, to discuss AI’s impact on scientific discovery. They go on a whirlwind tour of scientific projects, touching on recent breakthroughs in AlphaFold, materials science, weather forecasting, and mathematics to better understand how AI can enhance our scientific understanding of the world.

Timecodes:
00:00 Intro
01:13 AlphaFold
06:13 AlphaFold Database
08:14 Weather forecasting
11:24 Creating new materials with deep learning
25:10 Imposter syndrome, being a generalist, and Nobel prize winners
31:21 Choosing the right projects
32:07 Root node problems
34:32 Large language models for science
36:06 Function search and algorithmic discovery
42:10 Math olympiad
46:26 What is coming next
48:35 Reflections from Hannah

Since its beginning in the 1950s, the field of artificial intelligence has vacillated between periods of optimistic predictions and massive investment and periods of disappointment, loss of confidence, and reduced funding. Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected.

Professor Melanie Mitchell thinks one reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself.

Framing [00:00:00]
Dartmouth AI Summer Workshop [00:07:02]
Letitia Intro to Melanie [00:09:22]
The Googleplex situation with Melanie and Douglas Hofstadter [00:14:58]
Melanie paper [00:21:04]
Note on audio quality [00:25:45]
Main show kick off [00:26:51]
AI hype [00:29:57]
On GPT-3 [00:31:46]
Melanie's "Why is AI harder than we think" paper [00:36:18]
The 3rd fallacy: Avoiding wishful mnemonics [00:42:23]
Concepts and primitives [00:47:56]
The 4th fallacy [00:51:19]
What can we learn from human intelligence? [00:53:00]
Pure intelligence [01:00:14]
Unrobust features [01:02:34]
The good things of the past in AI research [01:11:30]
Copycat [01:17:56]
Thoughts on the "neuro-symbolic camp" [01:26:49]
Type I or Type II [01:32:06]
Adversarial examples -- a fun question. [01:35:55]
How much do we want human-like (human-interpretable) features? [01:43:44]
The difficulty of creating intelligence [01:47:49]
Show debrief [01:51:24]

Feb 8, 2018
This is a talk by Josh Tenenbaum for course 6.S099: Artificial General Intelligence. This class is free and open to everyone. Our goal is to take an engineering approach to exploring possible paths toward building human-level intelligence for a better world.

A novel intelligence has roared into the mainstream, sparking euphoric excitement as well as abject fear. Explore the landscape of possible futures in a brave new world of thinking machines, with the very leaders at the vanguard of artificial intelligence.

The Big Ideas Series is supported in part by the John Templeton Foundation.

Participants:
Sébastien Bubeck
Tristan Harris
Yann LeCun

Moderator:
Brian Greene

00:00 - Introduction
07:32 - Yann LeCun Introduction
13:35 - Creating the AI Brian Greene
20:55 - Should we model AI on human intelligence?
27:55 - Schrödinger's Cat is alive
37:25 - Sébastien Bubeck Introduction
44:51 - Asking ChatGPT to write a poem
52:26 - What is happening inside GPT-4?
01:02:56 - How much data is needed to train a language model?
01:11:20 - Tristan Harris Introduction
01:17:13 - Is profit motive the best way to go about creating a language model?
01:23:41 - AI and its place in social media
01:29:33 - Is new technology to blame for cultural phenomenon?
01:36:34 - Can you have a synthetic version of AI vs the large data set models?
01:44:27 - Where will AI be in 5 to 10 years?
01:54:45 - Credits

The last five years have witnessed a dramatic resurgence of excitement in the goal of creating intelligent machines. Technology companies are now investing billions of dollars in this field, new research laboratories are springing up around the globe, and competition for talent has become intense. In this Discourse Chris Bishop describes some of the recent technology breakthroughs which underpin this enthusiasm, and explores some of the many exciting opportunities which artificial intelligence offers.

Chris Bishop is the Laboratory Director at Microsoft Research Cambridge and a professor of computer science at the University of Edinburgh. He has extensive expertise in artificial intelligence and machine learning.

This Discourse was filmed at the Royal Institution on 28 October 2016.

Large language models, like ChatGPT and Claude, have remarkably coherent communication skills. Yet, what this says about their “intelligence” isn’t clear. Is it possible that they could arrive at the same level of intelligence as humans without taking the same evolutionary or learning path to get there? Or, if they’re not on a path to human-level intelligence, where are they now and where will they end up? In this episode, with guests Tomer Ullman and Murray Shanahan, we look at how large language models function and examine differing views on how sophisticated they are and where they might be going.

Generative AI refers to a type of artificial intelligence that creates new and original data or content. Unlike traditional AI models that are trained to classify inputs or predict outcomes, generative AI models learn the underlying patterns and structure of the data in order to generate novel outputs that mimic human creativity.

ChatGPT is perhaps the most well-known example, but the field is far larger and more varied than text generation. Other applications of generative AI include image and video synthesis, speech generation, music composition, and virtual reality.
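
As a toy illustration of the contrast drawn above (not material from the lecture): a character-level Markov chain that learns which characters tend to follow which in a tiny made-up corpus and then samples novel, corpus-like text, whereas a traditional classifier would only assign labels to existing inputs.

```python
import random
from collections import defaultdict

# Tiny made-up corpus; a real generative model learns from vastly more data.
corpus = "generative models learn the structure of data and generate new data"

# "Training": count which character follows which (an order-1 Markov chain).
transitions = defaultdict(list)
for current_char, next_char in zip(corpus, corpus[1:]):
    transitions[current_char].append(next_char)

# "Generation": sample novel text from the learned transition statistics.
def generate(seed="g", length=40):
    out = [seed]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:          # dead end: restart from a random character
            followers = list(transitions)
        out.append(random.choice(followers))
    return "".join(out)

print(generate())  # prints novel, corpus-like text rather than a class label
```

Modern generative models replace these simple counts with large neural networks trained on huge corpora, but the learn-the-patterns-then-sample idea is the same.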

In this lecture, Professor Mirella Lapata will present an overview of this exciting—sometimes controversial—and rapidly evolving field.

Mirella Lapata is professor of natural language processing in the School of Informatics at the University of Edinburgh. Her research focuses on getting computers to understand, reason with, and generate natural language. She is the first recipient (2009) of the British Computer Society and Information Retrieval Specialist Group (BCS/IRSG) Karen Spärck Jones Award and a Fellow of the Royal Society of Edinburgh, the ACL, and Academia Europaea.

This lecture is part of a series of events - How AI broke the internet - that explores various angles of large language models and generative AI in the public eye.

This series of Turing Lectures is organised in collaboration with The Royal Institution.