LangChain became immensely popular when it launched in 2022, but how can it impact your development and application of AI models, Large Language Models (LLMs) in particular? In this video, Martin Keen shares an overview of the features and uses of LangChain.
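
As a taste of the kind of building blocks the video covers, here is a minimal sketch of a LangChain prompt-to-model chain. The langchain-openai package and the model name are assumptions for illustration, and APIs can shift between LangChain versions.

from langchain_openai import ChatOpenAI  # assumed integration package; needs OPENAI_API_KEY
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# A chain: prompt template -> chat model -> plain-string output.
prompt = ChatPromptTemplate.from_template("Summarize {topic} in one sentence.")
llm = ChatOpenAI(model="gpt-4o-mini")  # hypothetical model choice
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"topic": "retrieval augmented generation"}))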

What is Generative AI, and how does it work? What are common applications for Generative AI? Watch this video to learn all about Generative AI, including common applications, model types, and the fundamentals of how to use it.

Learn how to implement RAG (Retrieval-Augmented Generation) from scratch, straight from a LangChain software engineer. This Python course teaches you how to use RAG to combine your own custom data with the power of Large Language Models (LLMs); a minimal end-to-end sketch follows the course contents below.

💻 Code: https://github.com/langchain-ai/rag-from-scratch

⭐️ Course Contents ⭐️
⌨️ (0:00:00) Overview
⌨️ (0:05:53) Indexing
⌨️ (0:10:40) Retrieval
⌨️ (0:15:52) Generation
⌨️ (0:22:14) Query Translation (Multi-Query)
⌨️ (0:28:20) Query Translation (RAG Fusion)
⌨️ (0:33:57) Query Translation (Decomposition)
⌨️ (0:40:31) Query Translation (Step Back)
⌨️ (0:47:24) Query Translation (HyDE)
⌨️ (0:52:07) Routing
⌨️ (0:59:08) Query Construction
⌨️ (1:05:05) Indexing (Multi-Representation)
⌨️ (1:11:39) Indexing (RAPTOR)
⌨️ (1:19:19) Indexing (ColBERT)
⌨️ (1:26:32) CRAG
⌨️ (1:44:09) Adaptive RAG
⌨️ (2:12:02) The future of RAG
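
As a taste of the pipeline the first three chapters build (indexing, retrieval, generation), here is a minimal LangChain sketch. The FAISS vector store, OpenAI models, and toy documents are assumptions for illustration, not the course's exact code, and LangChain APIs may shift between versions.

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# Indexing: embed a few toy documents into a vector store.
docs = ["LangChain provides building blocks for RAG.",
        "FAISS performs fast similarity search over embeddings."]
vectorstore = FAISS.from_texts(docs, OpenAIEmbeddings())
retriever = vectorstore.as_retriever()

def format_docs(docs):
    # Join retrieved documents into one context string for the prompt.
    return "\n\n".join(d.page_content for d in docs)

# Retrieval + generation: fetch relevant text, then answer grounded in it.
prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}")
chain = ({"context": retriever | format_docs, "question": RunnablePassthrough()}
         | prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser())
print(chain.invoke("What does FAISS do?"))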

Demystifying attention, the key mechanism inside transformers and LLMs.
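
For reference, the formula the video builds up to is standard scaled dot-product attention from "Attention Is All You Need", where d_k is the key dimension:

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)V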

To follow along with the course, visit the course website:
https://deepgenerativemodels.github.io/

Stefano Ermon
Associate Professor of Computer Science, Stanford University
https://cs.stanford.edu/~ermon/

https://www.youtube.com/watch?v=XZ0PMRWXBEU

This is the 5th video in a series on using large language models (LLMs) in practice. Here, I discuss how to fine-tune an existing LLM for a particular use case and walk through a concrete example with Python code.
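
The video walks through its own code; purely as a hedged sketch of the general approach, here is parameter-efficient fine-tuning with Hugging Face transformers and peft. The base model, hyperparameters, and the choice of LoRA are illustrative assumptions, not necessarily what the video uses.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "distilgpt2"  # small base model chosen for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA: freeze the base weights and train small low-rank adapter matrices.
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only a tiny fraction of weights will train
# From here, tokenize your task data and train with transformers.Trainer.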

Learn prompt engineering techniques to get better results from ChatGPT and other LLMs.
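
As one quick illustration of such a technique, here is few-shot prompting with the OpenAI Python client; the model name and the toy sentiment examples are assumptions for illustration.

from openai import OpenAI  # assumes the openai package and an OPENAI_API_KEY env var

client = OpenAI()
# Few-shot prompting: show the model the input/output pattern before the real query.
messages = [
    {"role": "system", "content": "Classify sentiment as positive or negative."},
    {"role": "user", "content": "I loved this movie."},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "The plot made no sense."},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "An instant classic."},
]
response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)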

This is the second in a series of three videos where we demystify Transformer models and explain them with visuals and friendly examples.

00:00 Introduction
01:18 Recap: Embeddings and Context
04:46 Similarity
11:09 Attention
20:46 The Keys and Queries Matrices
25:02 The Values Matrix
28:41 Self and Multi-head attention
33:54 Conclusion
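
To make the Keys, Queries, and Values chapters concrete, here is a minimal NumPy sketch of single-head scaled dot-product attention; the dimensions and random data are made up, and it mirrors the standard formula rather than any code from the video.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

# Toy setup: 4 tokens, embedding dimension 8, single attention head.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))  # token embeddings
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))

Q, K, V = X @ W_q, X @ W_k, X @ W_v       # queries, keys, values matrices
scores = Q @ K.T / np.sqrt(K.shape[-1])   # similarity of every query to every key
weights = softmax(scores, axis=-1)        # attention weights; each row sums to 1
output = weights @ V                      # each token becomes a weighted mix of values
print(output.shape)  # (4, 8)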

Attention mechanisms are central to the recent boom in LLMs.
In this video, you'll see a friendly, pictorial explanation of how attention mechanisms work in Large Language Models.
This is the first of a series of three videos on Transformer models.

https://www.youtube.com/watch?v=OxCpWwDCDFQ