Title: Understanding the LLM Development Cycle: Building, Training, and Finetuning
Date: June 5, 2024
Duration: 1 hour
SPEAKER
Sebastian Raschka
Staff Research Engineer, Lightning AI
MODERATOR
Marlene Mhangami
Senior Developer Advocate, Microsoft
Foundation Models for Natural Language Processing: Pre-trained Language Models Integrating Media (free Percipio Book for ACM Members with Skills Bundle)
Fine-Tuning LLMs (free Percipio Video for ACM Members with Skills Bundle)
Getting Started with Large Language Models (LLMs) (free Percipio Course for ACM Members with Skills Bundle)
Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs (free O’Reilly book for ACM Members with Skills Bundle)
AI & Generative AI: Executive Briefing (free Pluralsight Course for ACM Members with Skills Bundle)
mseil
May 20, 2024, 4:11pm
As someone responsible for Application Security in my org, AI and LLMs are very new to me, and I would appreciate some discussion of malicious or abusive use cases: how can we make LLMs resilient at both the code level and the functional level? Is there anything we can do short of teaching LLMs to detect malicious questions in a sort of reverse Turing test?
How would you incorporate this into the testing lifecycle?
When I tried to watch the recording, I got the following message:
Session Capacity Has Been Reached
You cannot join this session or view this recording because it has reached the maximum number of participants allowed.
This is the rewatch link:
I found a link on YouTube; I believe it is the session recording.