CRCD-Hosted NVIDIA Workshop Dec. 3

Domain-Adaptive Pre-Training: Tailoring LLMs for Specialized Applications
Wednesday Dec. 3, 2-4pm ET
Maximum Number of Attendees: 40

While Large Language Models (LLMs) are broadly capable, their general knowledge often falls short of the specialized, domain-specific information required for enterprise applications. This hands-on lab provides a focused, end-to-end approach to building domain-specific LLMs. You'll learn how to curate domain-specific datasets, design and train custom tokenizers, and run the pre-training process that tailors an LLM to a specialized application. You'll gain the practical skills needed to adapt LLMs to your own domain requirements and real-world use cases, following the full journey from initial data preparation to a domain-adapted, fine-tuned model.
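To give a flavor of the "custom tokenizer" step mentioned above, here is a minimal, hypothetical sketch of building a word-level vocabulary from a tiny domain corpus with an out-of-vocabulary fallback. The corpus, function names, and vocabulary size are illustrative assumptions, not the workshop's actual toolchain (which would typically use a subword tokenizer library):

```python
from collections import Counter

def build_vocab(corpus, max_size=50, specials=("<unk>", "<pad>")):
    """Count whitespace tokens in a domain corpus and keep the most frequent."""
    counts = Counter(tok for line in corpus for tok in line.lower().split())
    vocab = list(specials) + [t for t, _ in counts.most_common(max_size - len(specials))]
    return {tok: i for i, tok in enumerate(vocab)}

def encode(text, vocab):
    """Map tokens to ids, falling back to <unk> for out-of-vocabulary words."""
    unk = vocab["<unk>"]
    return [vocab.get(tok, unk) for tok in text.lower().split()]

# Tiny illustrative "domain" corpus (hypothetical medical-text example).
corpus = [
    "myocardial infarction presents with chest pain",
    "troponin levels confirm myocardial injury",
]
vocab = build_vocab(corpus)
ids = encode("myocardial troponin unknownword", vocab)
```

A real domain-adaptive pipeline would train a subword (e.g. BPE) tokenizer on a large curated corpus so that domain terms are not fragmented into many generic pieces, but the vocabulary-building and unknown-token handling shown here are the same in spirit.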

Register at: https://pitt.co1.qualtrics.com/jfe/form/SV_cZVnNvb3ZM7eX7E