Model Armor: Securing AI Deployments

Course Overview

This course explains how to use Model Armor to protect AI applications built on large language models (LLMs). The curriculum covers Model Armor's architecture and its role in mitigating threats such as malicious URLs, prompt injection, jailbreaking, sensitive data leaks, and improper output handling. Practical skills include defining floor settings, configuring templates, and enabling the various detection types. You'll also explore sample audit logs to find details about flagged violations.
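As an illustration of the template configuration covered in the course, the sketch below shows roughly what a Model Armor template body looks like when created through the REST API, with responsible AI filters, prompt-injection/jailbreak detection, malicious URI filtering, and Sensitive Data Protection enabled. Field names and enum values are drawn from the public Model Armor API but should be treated as assumptions here; verify them against the current Google Cloud documentation before use.

```json
{
  "filterConfig": {
    "raiSettings": {
      "raiFilters": [
        { "filterType": "DANGEROUS", "confidenceLevel": "MEDIUM_AND_ABOVE" },
        { "filterType": "HATE_SPEECH", "confidenceLevel": "HIGH" }
      ]
    },
    "piAndJailbreakFilterSettings": {
      "filterEnforcement": "ENABLED",
      "confidenceLevel": "MEDIUM_AND_ABOVE"
    },
    "maliciousUriFilterSettings": {
      "filterEnforcement": "ENABLED"
    },
    "sdpSettings": {
      "basicConfig": { "filterEnforcement": "ENABLED" }
    }
  }
}
```

A template like this is then referenced when sanitizing user prompts and model responses, so the same screening policy applies to every interaction with the LLM.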

Who Should Attend

Security Engineers, AI/ML Developers, and Cloud Architects

Course Objectives

    • Explain the purpose of Model Armor in a company’s security portfolio
    • Define the protections that Model Armor applies to all interactions with the LLM
    • Set up the Model Armor API and find flagged violations
    • Identify how Model Armor manages prompts and responses

Course Outline

  • Course Overview
  • Model Armor Overview
  • Customize Model Armor
  • Use Model Armor
  • Put It All Together
  • Course Conclusion

Class Dates & Times

Class times are listed in Eastern Time.

This is a 1-day class.

Price: $900.00

NERCOMP Price: $855.00

Class dates not listed.
Please contact us for available dates and times.