Understanding the LLM Development Cycle: Building, Training, and Finetuning

Title: Understanding the LLM Development Cycle: Building, Training, and Finetuning
Date: June 5, 2024
Duration: 1 HR

SPEAKER
Sebastian Raschka
Staff Research Engineer, Lightning AI

MODERATOR
Marlene Mhangami
Senior Developer Advocate, Microsoft

Registration Link

Foundation Models for Natural Language Processing: Pre-trained Language Models Integrating Media (free Percipio Book for ACM Members with Skills Bundle)
Fine-Tuning LLM’s (free Percipio Video for ACM Members with Skills Bundle)
Getting Started with Large Language Models (LLMs) (free Percipio Course for ACM Members with Skills Bundle)
Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs (free O’Reilly book for ACM Members with Skills Bundle)
AI & Generative AI: Executive Briefing (free Pluralsight Course for ACM Members with Skills Bundle)

2 Likes

As someone who is responsible for Application Security in my org, AI and LLMs are very very new to me, and I would appreciate including some thinking about malicious or abusive use cases–how can we make LLMs resilient both from a code level as well as a functional level. Is there anything we can do short of teaching LLMs to detect malicious questions in a sort of reverse-Turing test?

How would you incorporate this into the testing lifecycle?