Oct 9, 2024
As LLMs are being integrated into more and more applications, security standards for these integrations have lagged behind. Most security research either focuses 1) on social harms, biases exhibited by LLMs, and other content moderation tasks, or 2) zooms in on the LLM itself and ignores the applications that are built around them. Investigating traditional security properties such as confidentiality, integrity, or availability for the entire integrated application has received less attention, yet in practice, we find that this is where the majority of non-transferable risk lies with LLM applications.

NVIDIA has implemented dozens of LLM powered applications, and the NVIDIA AI Red Team has helped secure all of them. We will present our practical findings around LLM security: what kinds of attacks are most common and most impactful, how to assess LLM integrations most effectively from a security perspective, and how we both think about mitigation and design integrations to be more secure from first principles.

Full Abstract & Presentation Materials:

https://www.blackhat.com/us-24/briefings/schedule/index.html#practical-llm-security-takeaways-from-a-year-in-the-trenches-39468

Prof. Alexander G. Ororbia is a researcher in the field of bio-inspired artificial intelligence, working on on mortal computation and neurobiologically-plausible learning algorithms. Ororbia takes us on a tour of brain-inspired AI, discussing how concepts like predictive coding, forward-only learning, and neural generative coding can lead to more efficient and adaptable AI systems.

He explores the how we might implement these bio-inspired approaches on neuromorphic hardware, and shares his vision for a future where AI systems are more closely aligned with biological intelligence.

TOC:

  1. Foundations of Bio-Inspired AI
    [00:00:00] 1.1 Introduction to Bio-Inspired AI and Mortal Computation
    [00:04:50] 1.2 Principles of Mortal Computation and Biomimetic AI
    [00:17:41] 1.3 Markov Blankets and Free Energy Principle
    [00:24:38] 1.4 MILLS Framework and Biological Systems
  2. Alternative Learning Paradigms
    [00:31:00] 2.1 Challenging Backpropagation: Overview of Alternatives
    [00:31:49] 2.2 Predictive Coding and Free Energy Principle
    [00:41:52] 2.3 Biologically Plausible Credit Assignment Methods
    [00:50:11] 2.4 Taxonomy of Bio-inspired Learning Algorithms
  3. Advanced Bio-Inspired AI Implementations
    [00:59:30] 3.1 Forward-Only Learning and NGC Learn Implementation
    [01:03:25] 3.2 Stability-Plasticity Dilemma and Bio-Inspired Solutions
    [01:09:00] 3.3 Neuromorphic Hardware Landscape and Challenges
    [01:12:58] 3.4 Neural Generative Coding and Predictive Coding Advancements
    [01:20:36] 3.5 Latent Space Predictions in Forward-Only Learning

REFS:
The Levin Lab
https://drmichaellevin.org/

Mortal Computation: A Foundation for Biomimetic Intelligence
https://arxiv.org/pdf/2311.09589

The Forward-Forward Algorithm: Some Preliminary Investigations
https://arxiv.org/pdf/2212.13345

Good regulator
https://en.wikipedia.org/wiki/Good_re…

The free-energy principle: a rough guide to the brain?
https://www.fil.ion.ucl.ac.uk/~karl/T…

Hebbian theory
https://en.wikipedia.org/wiki/Hebbian…

There’s Plenty of Room Right Here
https://www.ncbi.nlm.nih.gov/pmc/arti…

Active Inference: The Free Energy Principle in Mind, Brain, and Behavior
https://direct.mit.edu/books/oa-monog…

Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment
https://arxiv.org/pdf/2312.09257

Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects
https://www.nature.com/articles/nn019…

Hopfield network
https://en.wikipedia.org/wiki/Hopfiel…

A Tutorial on Energy-Based Learning
https://yann.lecun.com/exdb/publis/pd…

A Learning Algorithm for Boltzmann Machines
https://www.cs.toronto.edu/~hinton/ab…

A Review of Neuroscience-Inspired Machine Learning
https://arxiv.org/pdf/2403.18929

Spiking neural predictive coding for continually learning from data streams
https://www.sciencedirect.com/science…

Neuroanatomy, Basal Ganglia
https://www.ncbi.nlm.nih.gov/books/NB…

Intel Loihi 2
https://www.intel.com/content/www/us/…

IBM TrueNorth
https://research.ibm.com/publications…

GC (Generative Coding)
https://www.researchgate.net/publicat…

NeuroEvolution of Augmenting Topologies (NEAT)
https://nn.cs.utexas.edu/downloads/pa…

Contrastive and Non-Contrastive Self-Supervised Learning Recover Global and Local Spectral Embedding Methods
https://arxiv.org/pdf/2205.11508

A Path Towards Autonomous Machine Intelligence (Yann LeCun)
https://openreview.net/pdf?id=BZ5a1r-…

Test-Time Model Adaptation with Only Forward Passes
https://arxiv.org/pdf/2404.01650v2

Sep 19, 2024
The Inaugural lecture for the new Living Well With Technology series from the Digital Futures Institute at King’s College London.

The public lecture AI: The means to an end or a means to the end? was delivered by actor, author, broadcaster and comedian Stephen Fry.

The Institute’s new Thought-Leadership Series – Living Well With Technology is an events programme and associated book series convening thought leaders from across the tech, health, policy, business, education and creative sectors to collaborate on the creation of a better digital future.

https://www.youtube.com/watch?v=-Cw37g39g00

Artificial Intelligence in Healthcare is revolutionizing the medical industry by providing a helping hand. This Edureka session will help you understand the positive impact of Artificial Intelligence in the healthcare domain along with practical implementation in Python. The following topics are covered in this session:

https://www.youtube.com/watch?v=j6EB9HO6acE

https://www.youtube.com/watch?v=MZWOF9EcyHg

Apr 18, 2020 #Ranga_Yogeshwar
Artificial intelligence (AI) is changing our lives. It touches on all aspects of society – private life, business, security — including in the spread of fake news and the challenges posed by the advent of autonomous weapons.

This documentary looks at the rapid change digitalization is causing as it unfolds. In particular, breakthroughs in artificial intelligence are opening completely new horizons. In their film about AI, Tilman Wolff and Ranga Yogeshwar examine the role AI plays in the spread of fake news. They also consider a future with robots and the risks and ethical questions posed by the development of autonomous weapons. To address these issues, they travel the globe to speak with leading experts. AI can generate perfectly forged sound and videos, making it effective for purveying fake news. Discerning the truth from fiction will become increasingly difficult. Technology will streamline work, making some jobs surplus to requirements. Software will pilot self-driving cars and aerial drones. AI is rapidly opening up new vistas, but turning blind corners at speed can be risky. How sensible is this type of progress, and at which point should society step in and set limits to its development?

A documentary by Tilman Wolff und Ranga Yogeshwar

LLaMA3.2 has released a new set of compact models designed for on-device use cases, such as locally running assistants. Here, we show how LangGraph can enable these types of local assistant by building a multi-step RAG agent – this combines ideas from 3 advanced RAG papers (Adaptive RAG, Corrective RAG, and Self-RAG) into a single control flow using LangGraph. But we show LangGraph makes it possible to run a complex agent locally.

Code:

Llama3.2:

https://www.youtube.com/watch?v=bq1Plo2RhYI

Get ready for a showdown between LangChain and LangGraph, two powerful frameworks for building applications with large language models (LLMs.) Master Inventor Martin Keen compares the two, taking a look at their unique features, use cases, and how they can help you create innovative, context-aware solutions.

LangChain became immensely popular when it was launched in 2022, but how can it impact your development and application of AI models, Large Language Models (LLM) in particular. In this video Martin Keen shares an overview of the features and uses of LangChain.

Large language models– or LLMs –are a type of generative pretrained transformer (GPT) that can create human-like text and code. There’s a lot of talk about GPTs and LLMs lately, but they’ve actually been around for years! In this video, Martin Keen briefly explains what a LLM is, how they relate to foundation models, and then covers how they work and how they can be used to address various business problems.