Since its beginning in the 1950s, the field of artificial intelligence has vacillated between periods of optimistic predictions and massive investment and periods of disappointment, loss of confidence, and reduced funding. Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected.

Professor Melanie Mitchell thinks one reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself.

Framing [00:00:00]
Dartmouth AI Summer Workshop [00:07:02]
Letitia Intro to Melanie [00:09:22]
The Googleplex situation with Melanie and Douglas Hofstadter [00:14:58]
Melanie's paper [00:21:04]
Note on audio quality [00:25:45]
Main show kick off [00:26:51]
AI hype [00:29:57]
On GPT-3 [00:31:46]
Melanie’s “Why is AI harder than we think” paper [00:36:18]
The 3rd fallacy: Avoiding wishful mnemonics [00:42:23]
Concepts and primitives [00:47:56]
The 4th fallacy [00:51:19]
What can we learn from human intelligence? [00:53:00]
Pure intelligence [01:00:14]
Unrobust features [01:02:34]
The good things of the past in AI research [01:11:30]
Copycat [01:17:56]
Thoughts on the “neuro-symbolic camp” [01:26:49]
Type I or Type II [01:32:06]
Adversarial examples — a fun question. [01:35:55]
How much do we want human-like (human-interpretable) features? [01:43:44]
The difficulty of creating intelligence [01:47:49]
Show debrief [01:51:24]

Feb 8, 2018
This is a talk by Josh Tenenbaum for course 6.S099: Artificial General Intelligence. This class is free and open to everyone. Our goal is to take an engineering approach to exploring possible paths toward building human-level intelligence for a better world.

A novel intelligence has roared into the mainstream, sparking euphoric excitement as well as abject fear. Explore the landscape of possible futures in a brave new world of thinking machines, with the very leaders at the vanguard of artificial intelligence.

The Big Ideas Series is supported in part by the John Templeton Foundation.

Participants:
Sébastien Bubeck
Tristan Harris
Yann LeCun

Moderator:
Brian Greene

00:00 – Introduction
07:32 – Yann LeCun Introduction
13:35 – Creating the AI Brian Greene
20:55 – Should we model AI on human intelligence?
27:55 – Schrödinger’s Cat is alive
37:25 – Sébastien Bubeck Introduction
44:51 – Asking ChatGPT to write a poem
52:26 – What is happening inside GPT-4?
01:02:56 – How much data is needed to train a language model?
01:11:20 – Tristan Harris Introduction
01:17:13 – Is the profit motive the best way to go about creating a language model?
01:23:41 – AI and its place in social media
01:29:33 – Is new technology to blame for cultural phenomena?
01:36:34 – Can you have a synthetic version of AI vs the large data set models?
01:44:27 – Where will AI be in 5 to 10 years?
01:54:45 – Credits

The last five years have witnessed a dramatic resurgence of excitement in the goal of creating intelligent machines. Technology companies are now investing billions of dollars in this field, new research laboratories are springing up around the globe, and competition for talent has become intense. In this Discourse Chris Bishop describes some of the recent technology breakthroughs which underpin this enthusiasm, and explores some of the many exciting opportunities which artificial intelligence offers.

Chris Bishop is the Laboratory Director at Microsoft Research Cambridge and is a professor of computer science at the University of Edinburgh. He has extensive expertise in artificial intelligence and machine learning.

This Discourse was filmed at the Royal Institution on 28 October 2016.

Large language models, like ChatGPT and Claude, have remarkably coherent communication skills. Yet, what this says about their “intelligence” isn’t clear. Is it possible that they could arrive at the same level of intelligence as humans without taking the same evolutionary or learning path to get there? Or, if they’re not on a path to human-level intelligence, where are they now and where will they end up? In this episode, with guests Tomer Ullman and Murray Shanahan, we look at how large language models function and examine differing views on how sophisticated they are and where they might be going.

Generative AI refers to a type of artificial intelligence that creates new and original data or content. Unlike traditional discriminative AI models, which are trained to classify inputs or predict outcomes, generative AI models are designed to learn the underlying patterns and structure of their training data and to produce novel outputs that mimic human creativity.

ChatGPT is perhaps the most well-known example, but the field is far larger and more varied than text generation. Other applications of generative AI include image and video synthesis, speech generation, music composition, and virtual reality.

In this lecture, Professor Mirella Lapata will present an overview of this exciting—sometimes controversial—and rapidly evolving field.

Mirella Lapata is professor of natural language processing in the School of Informatics at the University of Edinburgh. Her research focuses on getting computers to understand, reason with, and generate natural language. She is the first recipient (2009) of the British Computer Society and Information Retrieval Specialist Group (BCS/IRSG) Karen Spärck Jones award and a Fellow of the Royal Society of Edinburgh, the ACL, and Academia Europaea.

This lecture is part of a series of events – How AI broke the internet – that explores the various angles of large-language models and generative AI in the public eye.

This series of Turing Lectures is organised in collaboration with The Royal Institution

Abstract: I will survey a current, heated debate in the AI research community on whether large pre-trained language models can be said to “understand” language—and the physical and social situations language encodes—in any important sense. I will describe arguments that have been made for and against such understanding, and, more generally, will discuss what methods can be used to fairly evaluate understanding and intelligence in AI systems. I will conclude with key questions for the broader sciences of intelligence that have arisen in light of these discussions.

Short Bio: Melanie Mitchell is Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction and analogy-making in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her 2009 book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award, and her 2019 book Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux) was shortlisted for the 2023 Cosmos Prize for Scientific Writing.

https://www.youtube.com/watch?v=O5SLGAWSXMw

With its ability to generate human-like language and complete a variety of tasks, generative AI has the potential to revolutionise the way we communicate, learn and work. But what other doors will this technology open for us, and how can we harness it to make great leaps in technology innovation? Have we finally done it? Have we cracked AI?

Join Professor Michael Wooldridge for a fascinating discussion on the possibilities and challenges of generative AI models, and their potential impact on societies of the future.

Michael Wooldridge is Director of Foundational AI Research and Turing AI World-Leading Researcher Fellow at The Alan Turing Institute. His work focuses on multi-agent systems and on developing techniques for understanding their dynamics. His research draws on ideas from game theory, logic, computational complexity, and agent-based modelling. He has been an AI researcher for more than 30 years and has published over 400 scientific articles on the subject.

This lecture is part of a series of events – How AI broke the internet – that explores the various angles of large-language models and generative AI in the public eye.

This series of Turing Lectures is organised in collaboration with The Royal Institution of Great Britain.

Professor Melanie Mitchell gives the Margaret Boden Lecture for 2023 at the University of Cambridge. The Margaret Boden lectures are held annually by the Leverhulme Centre for the Future of Intelligence at Cambridge.

Abstract: While AI has made dramatic progress over the last decade in areas such as vision, language processing, and robotics, current AI systems still lack key aspects of human intelligence. In this lecture Professor Melanie Mitchell argues that the inability to form conceptual abstractions—and to make abstraction-driven analogies—is a primary source of brittleness and unreliability in state-of-the-art AI systems. She reflects on the role played by abstraction at all levels of intelligence, and on the prospects for developing AI systems with humanlike abilities for abstract reasoning and analogy.

https://www.youtube.com/watch?v=uEN_rOxKkag

Mar 15, 2024
UC Davis College of Engineering Dean’s Distinguished Speaker Melanie Mitchell, Professor at the Santa Fe Institute, presents “The Future of Artificial Intelligence”