LangChain became immensely popular when it launched in 2022, but how can it affect the way you develop and apply AI models, Large Language Models (LLMs) in particular? In this video, Martin Keen shares an overview of the features and uses of LangChain.
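To make the idea concrete, here is a minimal sketch of what a LangChain "chain" looks like: a prompt template piped into a chat model and an output parser. This is an illustrative example, not code from the video; it assumes recent langchain-core and langchain-openai packages (import paths have changed across releases), an OPENAI_API_KEY in the environment, and a placeholder model name.

```python
# Minimal LangChain sketch: prompt template -> chat model -> string parser.
# Assumes recent langchain-core / langchain-openai releases and an
# OPENAI_API_KEY environment variable; the model name is only an example.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
llm = ChatOpenAI(model="gpt-4o-mini")

# LangChain Expression Language (LCEL): compose steps with the | operator.
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"text": "LangChain wires prompts, models, and tools into reusable chains."}))
```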
Introduction to Generative AI
What is Generative AI, and how does it work? What are its common applications? Watch this video to learn all about Generative AI, including common applications, model types, and the fundamentals of how to use it.
RAG vs. Fine Tuning
Learn RAG From Scratch – Python AI Tutorial from a LangChain Engineer
Learn how to implement RAG (Retrieval Augmented Generation) from scratch, straight from a LangChain software engineer. This Python course teaches you how to use RAG to combine your own custom data with the power of Large Language Models (LLMs).
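For orientation, the sketch below shows the bare RAG pattern: retrieve the documents most relevant to a question, then hand them to an LLM as context for answering. It is not the course's code; the word-overlap "retrieval" stands in for a real embedding model and vector store, and call_llm is a hypothetical stub for whatever chat API you use.

```python
# Library-free sketch of Retrieval Augmented Generation (illustrative only).

def embed(text: str) -> set[str]:
    # Stand-in "embedding": a bag of lowercase words. A real system would use
    # a vector embedding model and a vector store instead.
    return set(text.lower().split())

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank documents by word overlap with the question and keep the top k.
    query_words = embed(question)
    ranked = sorted(documents, key=lambda d: len(query_words & embed(d)), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real chat/completion API call.
    return f"[LLM answer generated from a {len(prompt)}-character prompt]"

def rag_answer(question: str, documents: list[str]) -> str:
    # Stuff the retrieved documents into the prompt so the model answers
    # from your own data rather than from its training data alone.
    context = "\n".join(retrieve(question, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

docs = [
    "RAG retrieves relevant documents and adds them to the prompt.",
    "Fine-tuning updates a model's weights on new data.",
    "Prompt engineering shapes model behavior without changing weights.",
]
print(rag_answer("How does RAG use documents?", docs))
```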
Fine-tuning Large Language Models (LLMs) | w/ Example Code
This is the 5th video in a series on using large language models (LLMs) in practice. Here, I discuss how to fine-tune an existing LLM for a particular use case and walk through a concrete example with Python code.
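As a rough illustration of what fine-tuning involves, the sketch below runs one epoch of supervised training on a small pretrained model with the Hugging Face Trainer. It is a generic example, not the video's code; the model name, toy dataset, and hyperparameters are placeholder choices, and the same pattern extends to larger LLMs (often with parameter-efficient methods such as LoRA).

```python
# Generic fine-tuning sketch with Hugging Face transformers (illustrative only).
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"  # placeholder small base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy labeled examples standing in for a real use-case dataset.
train_data = Dataset.from_dict({
    "text": ["great product, works as described", "terrible support, waste of money"],
    "label": [1, 0],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

train_data = train_data.map(tokenize, batched=True)

args = TrainingArguments(output_dir="finetune-out", num_train_epochs=1,
                         per_device_train_batch_size=2)
trainer = Trainer(model=model, args=args, train_dataset=train_data)
trainer.train()  # updates the pretrained weights on the custom data
```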
Prompt Engineering Tutorial – Master ChatGPT and LLM Responses
Learn prompt engineering techniques to get better results from ChatGPT and other LLMs.
The math behind Attention: Keys, Queries, and Values matrices
This is the second in a series of three videos that demystify Transformer models and explain them with visuals and friendly examples.
00:00 Introduction
01:18 Recap: Embeddings and Context
04:46 Similarity
11:09 Attention
20:46 The Keys and Queries Matrices
25:02 The Values Matrix
28:41 Self and Multi-head Attention
33:54 Conclusion
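For quick reference (not taken from the video, whose notation may differ), the chapters above build toward the standard scaled dot-product attention formula, where Q, K, and V are the query, key, and value matrices and d_k is the key dimension:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```

The softmax of the scaled query-key similarities weights the value vectors, which is the sense in which attention mixes information from the most relevant tokens.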
The Attention Mechanism in Large Language Models
Attention mechanisms are central to the recent boom in LLMs. In this video, you’ll see a friendly pictorial explanation of how attention mechanisms work in Large Language Models. This is the first in a series of three videos on Transformer models.