So you say you love your computer or smartphone…but can it love you back? As we become more dependent on technology, and our technology becomes more lifelike, where does the line between human and computer lie? And what happens when our relationships become romantic? In this episode of Mind Field, I look into Artificial Intelligence.

Can AI help us answer life’s biggest questions? In this visionary conversation, Google DeepMind cofounder and CEO Demis Hassabis delves into the history and incredible capabilities of AI with head of TED Chris Anderson. Hassabis explains how AI models like AlphaFold — which accurately predicted the shapes of all 200 million proteins known to science in under a year — have already accelerated scientific discovery in ways that will benefit humanity. Next up? Hassabis says AI has the potential to unlock the greatest mysteries surrounding our minds, bodies and the universe.

This Artificial Intelligence full course video covers all the topics you need to know to become a master in AI and ML. It covers the basics of Machine Learning, the different types of Machine Learning, and the various applications of Machine Learning used in different industries. The video will also help you understand the basics of artificial intelligence. We will look at the future of AI and hear from industry experts about what they have to say about AI. Finally, you will learn the Top 10 Artificial Intelligence Technologies in 2024.

What is Artificial Intelligence?
Artificial Intelligence is the practice of making a computer, a computer-controlled robot, or software think intelligently, in a way modeled on the human mind. AI is developed by studying the patterns of the human brain and analyzing the cognitive process; the outcome of these studies is intelligent software and systems.
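For a concrete feel of what "learning patterns from examples" can look like in code, here is a minimal Python sketch of a 1-nearest-neighbour classifier. It is a toy example of my own, not taken from any of the courses or videos listed here: the program labels a new point by finding the most similar example it has already seen.

def predict(train_points, train_labels, query):
    """Return the label of the training point closest to the query."""
    best_label, best_dist = None, float("inf")
    for point, label in zip(train_points, train_labels):
        dist = sum((p - q) ** 2 for p, q in zip(point, query))  # squared Euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy data: two clusters in 2-D, labelled 0 and 1.
X = [(0.0, 0.1), (0.2, 0.0), (1.0, 1.1), (0.9, 1.0)]
y = [0, 0, 1, 1]

print(predict(X, y, (0.1, 0.05)))  # prints 0 (closest to the first cluster)
print(predict(X, y, (1.0, 0.9)))   # prints 1 (closest to the second cluster)

Even this tiny example captures the basic loop of machine learning: store examples, measure similarity, and let the data, rather than hand-written rules, decide the answer.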

➡️ About Post Graduate Program In AI And Machine Learning
This AI ML course is designed to enhance your career in AI and ML by demystifying concepts like machine learning, deep learning, NLP, computer vision, reinforcement learning, and more. You’ll also have access to 4 live sessions, led by industry experts, covering the latest advancements in AI such as generative modeling, ChatGPT, OpenAI, and chatbots.

✅ Skills Covered

ChatGPT
Generative AI
Explainable AI
Generative Modeling
Statistics
Python
Supervised Learning
Unsupervised Learning
NLP
Neural Networks
Computer Vision
And Many More…

Aug 14, 2024 Google DeepMind: The Podcast
It has been a few years since Google DeepMind CEO and co-founder, Demis Hassabis, and Professor Hannah Fry caught up. In that time, the world has caught on to artificial intelligence—in a big way. Listen as they discuss the recent explosion of interest in AI, what Demis means when he describes chatbots as ‘unreasonably effective’, and the unexpected emergence of capabilities like conceptual understanding and abstraction in recent generative models.

Demis and Hannah also explore the need for rigorous AI safety measures, the importance of responsible AI development, and what Demis hopes for as we move closer towards artificial general intelligence.

Timecodes
00:00 Introduction
03:22 Explosion of AI
04:28 Unreasonably effective AI
08:40 Overhyped or underhyped AI
10:58 Gemini and Project Astra
15:40 Benchmarks
18:50 Engine room of Google
26:00 Shifting expectations of this technology
28:16 Open source
33:00 Regulation
38:00 The royal princes of the past
43:10 Avengers assemble
45:34 Peer into the mysteries of the universe
49:58 Conclusion

In this video we will talk about backpropagation – an algorithm powering the entire field of machine learning and try to derive it from first principles.
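To make those ideas concrete before the outline below, here is a minimal Python sketch of my own (not the video's code, and assuming nothing beyond the standard library): it fits a straight line y = w*x + b to data by gradient descent, with the gradients derived by hand via the chain rule; these are exactly the pieces that backpropagation automates for deep networks.

import random

# Fit y = w*x + b to data with mean-squared-error loss and gradient descent.
data = [(x, 2.0 * x + 1.0) for x in [0.0, 1.0, 2.0, 3.0]]  # true w = 2, b = 1
w, b = random.uniform(-1, 1), random.uniform(-1, 1)        # random initial guess
lr = 0.05                                                   # learning rate

for step in range(2000):
    dw = db = 0.0
    for x, y in data:
        y_hat = w * x + b          # forward pass: model prediction
        err = y_hat - y            # residual for this sample
        # Chain rule: with L = (1/N) * sum (y_hat - y)^2, dL/dy_hat = 2*err/N,
        # so dL/dw = dL/dy_hat * x and dL/db = dL/dy_hat * 1.
        dw += 2 * err * x / len(data)
        db += 2 * err / len(data)
    w -= lr * dw                   # gradient-descent update
    b -= lr * db

print(round(w, 2), round(b, 2))    # converges to roughly 2.0 and 1.0

Backpropagation generalizes this hand-derived chain rule to millions of parameters by walking a computational graph backwards, which is the "autodiff" step in the outline.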

OUTLINE:
00:00 Introduction
01:28 Historical background
02:50 Curve Fitting problem
06:26 Random vs guided adjustments
09:43 Derivatives
14:34 Gradient Descent
16:23 Higher dimensions
21:36 Chain Rule Intuition
27:01 Computational Graph and Autodiff
36:24 Summary
38:16 Shortform
39:20 Outro

Jürgen Schmidhuber’s blog on the history of backprop:

https://people.idsia.ch/~juergen/who-invented-backpropagation.html

Aug 28, 2024
Jürgen Schmidhuber, the father of generative AI, shares his groundbreaking work in deep learning and artificial intelligence. In this exclusive interview, he discusses the history of AI, some of his contributions to the field, and his vision for the future of intelligent machines. Schmidhuber offers unique insights into the exponential growth of technology and the potential impact of AI on humanity and the universe.

TOC
00:00:00 Intro
00:03:38 Reasoning
00:13:09 Potential AI Breakthroughs Reducing Computation Needs
00:20:39 Memorization vs. Generalization in AI
00:25:19 Approach to the ARC Challenge
00:29:10 Perceptions of ChatGPT and AGI
00:58:45 Abstract Principles of Jürgen’s Approach
01:04:17 Analogical Reasoning and Compression
01:05:48 Breakthroughs in 1991: the P, the G, and the T in ChatGPT and Generative AI
01:15:50 Use of LSTM in Language Models by Tech Giants
01:21:08 Neural Network Aspect Ratio Theory
01:26:53 Reinforcement Learning Without Explicit Teachers

Refs:
★ “Annotated History of Modern AI and Deep Learning” (2022 survey by Schmidhuber):
★ Chain Rule For Backward Credit Assignment (Leibniz, 1676)
★ First Neural Net / Linear Regression / Shallow Learning (Gauss & Legendre, circa 1800)
★ First 20th Century Pioneer of Practical AI (Quevedo, 1914)
★ First Recurrent NN (RNN) Architecture (Lenz, Ising, 1920-1925)
★ AI Theory: Fundamental Limitations of Computation and Computation-Based AI (Gödel, 1931-34)
★ Unpublished ideas about evolving RNNs (Turing, 1948)
★ Multilayer Feedforward NN Without Deep Learning (Rosenblatt, 1958)
★ First Published Learning RNNs (Amari and others, ~1972)
★ First Deep Learning (Ivakhnenko & Lapa, 1965)
★ Deep Learning by Stochastic Gradient Descent (Amari, 1967-68)
★ ReLUs (Fukushima, 1969)
★ Backpropagation (Linnainmaa, 1970); precursor (Kelley, 1960)
★ Backpropagation for NNs (Werbos, 1982)
★ First Deep Convolutional NN (Fukushima, 1979); later combined with Backprop (Waibel 1987, Zhang 1988).
★ Metalearning or Learning to Learn (Schmidhuber, 1987)
★ Generative Adversarial Networks / Artificial Curiosity / NN Online Planners (Schmidhuber, Feb 1990; see the G in Generative AI and ChatGPT)
★ NNs Learn to Generate Subgoals and Work on Command (Schmidhuber, April 1990)
★ NNs Learn to Program NNs: Unnormalized Linear Transformer (Schmidhuber, March 1991; see the T in ChatGPT)
★ Deep Learning by Self-Supervised Pre-Training. Distilling NNs (Schmidhuber, April 1991; see the P in ChatGPT)
★ Experiments with Pre-Training; Analysis of Vanishing/Exploding Gradients, Roots of Long Short-Term Memory / Highway Nets / ResNets (Hochreiter, June 1991, further developed 1999-2015 with other students of Schmidhuber)
★ LSTM journal paper (1997, most cited AI paper of the 20th century)
★ xLSTM (Hochreiter, 2024)
★ Reinforcement Learning Prompt Engineer for Abstract Reasoning and Planning (Schmidhuber 2015)
★ Mindstorms in Natural Language-Based Societies of Mind (2023 paper by Schmidhuber’s team)
https://arxiv.org/abs/2305.17066
★ Bremermann’s physical limit of computation (1982)

Dec 18, 2019
Can A.I. make music? Can it feel excitement and fear? Is it alive? Will.i.am and Mark Sagar push the limits of what a machine can do. How far is too far, and how much further can we go?

The Age of A.I. is an 8-part documentary series hosted by Robert Downey Jr. covering the ways Artificial Intelligence, Machine Learning and Neural Networks will change the world.

0:00 What Is AI?
3:03 Baby X
7:24 Object Recognition
9:41 The Future Studios
14:26 Human Collaboration
21:45 A New Hand
27:20 Digital Will Meets Real Will
31:40 Free Will

https://www.youtube.com/watch?v=UwsrzCVZAb8

A novel intelligence has roared into the mainstream, sparking euphoric excitement as well as abject fear. Explore the landscape of possible futures in a brave new world of thinking machines, with the very leaders at the vanguard of artificial intelligence.

The Big Ideas Series is supported in part by the John Templeton Foundation.

Participants:
Sébastien Bubeck
Tristan Harris
Yann LeCun

Moderator:
Brian Greene

00:00 – Introduction
07:32 – Yann LeCun Introduction
13:35 – Creating the AI Brian Greene
20:55 – Should we model AI on human intelligence?
27:55 – Schrödinger’s Cat is alive
37:25 – Sébastien Bubeck Introduction
44:51 – Asking ChatGPT to write a poem
52:26 – What is happening inside GPT-4?
01:02:56 – How much data is needed to train a language model?
01:11:20 – Tristan Harris Introduction
01:17:13 – Is profit motive the best way to go about creating a language model?
01:23:41 – AI and its place in social media
01:29:33 – Is new technology to blame for cultural phenomena?
01:36:34 – Can you have a synthetic version of AI vs the large data set models?
01:44:27 – Where will AI be in 5 to 10 years?
01:54:45 – Credits

It’s no secret that AI is controversial today. Judging by some of the chaos it has caused, there’s good reason to think AI ruins everything it touches. But what about the flip side? What good is AI actually doing in the world? It’s a question I don’t hear asked much, so today we’ll find out. Note: reinforcement learning and related techniques are included in this conversation.

Sources and Notes

https://docs.google.com/document/d/1RpPbUyj3jK7n56aT45n1R9jGYl4w9tIR6GxbcvMVoUs/edit?tab=t.0

With Artificial Intelligence evolving so rapidly, will it surpass human intelligence? Could this lead to our replacement—or, worse, our extinction? Top experts provide a clear understanding of the immense benefits and potential dangers of AI.