Efficient Large Language Model (LLM) Customization


Course Overview

In this course, you'll go beyond using out-of-the-box pretrained LLMs and learn a variety of techniques to efficiently customize pretrained LLMs for your specific use cases, without engaging in the computationally intensive and expensive process of pretraining your own model or fully fine-tuning all of a model's internal weights. Using the open-source NVIDIA NeMo™ framework, you'll learn prompt engineering and various parameter-efficient fine-tuning methods to customize LLM behavior for your organization.
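To give a flavor of the prompt-engineering side of the course, the sketch below builds a few-shot prompt by prepending worked examples to a query. The task, examples, and formatting are illustrative assumptions, not material from the course itself.

```python
# Illustrative only: a minimal few-shot prompt template of the kind covered
# under prompt engineering. The task and examples below are hypothetical.

def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: instruction, worked examples, then the query."""
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great battery life!", "positive"),
     ("Screen cracked after a week.", "negative")],
    "Fast shipping and works as advertised.",
)
print(prompt)
```

The resulting string would be sent to a pretrained LLM as-is; the worked examples steer the model's output format without any change to model weights.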

Who Should Attend

Highly experienced Python developers

Course Objectives

    • Use prompt engineering to improve the performance of pretrained LLMs
    • Apply various fine-tuning techniques with limited data to accomplish tasks specific to your use cases
    • Use a single pretrained model to perform multiple custom tasks
    • Leverage the NeMo framework to customize models like GPT, Llama 2, and Falcon with ease
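The fine-tuning objectives above rest on parameter-efficient methods such as LoRA, which freeze the pretrained weights and train only small low-rank adapters. The NumPy sketch below illustrates the idea conceptually; the dimensions, scaling convention, and variable names are assumptions for illustration and do not reflect NeMo's actual PEFT APIs.

```python
# Conceptual sketch of LoRA, one parameter-efficient fine-tuning (PEFT)
# method: y = Wx + (alpha/r) * B @ A @ x, where W is frozen and only the
# small matrices A and B are trained. Shapes here are illustrative.
import numpy as np

d, k, r = 8, 8, 2                       # layer dims; low-rank bottleneck r << min(d, k)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))         # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection, zero-initialized

def lora_forward(x, alpha=4.0):
    """Adapted layer output; only A and B would be updated during tuning."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(k)
# With B zero-initialized, the adapted layer starts identical to the base layer.
assert np.allclose(lora_forward(x), W @ x)
# The adapter adds r*(k + d) trainable parameters versus d*k for full fine-tuning.
print(A.size + B.size, "trainable params vs", W.size, "in the base layer")
```

Because only A and B are stored per task, a single frozen base model can serve many custom tasks by swapping in lightweight adapters, which is the idea behind the "single pretrained model, multiple custom tasks" objective.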

Course Outline

  • Introduction
  • Engineering Effective Prompts
  • Customized Prompt Learning
  • Parameter-Efficient Fine-Tuning (PEFT) and Supervised Fine-Tuning (SFT)
  • Assessment and Q&A


Class Dates & Times

Class times are listed in Central Time.

This is a 1-day class.

Price: $500.00

No dates are currently listed. Please contact us to get something scheduled.