LLM Tutorial

Use Gantry to create, evaluate, deploy, and monitor the performance of an OpenAI Completions LLM.

πŸ“˜ Gantry is currently invite-only. Contact us for access.

This tutorial provides an overview of Gantry features applied to an example grammar-correction application that wraps an OpenAI completion model. It uses a Gantry workflow specific to OpenAI Completions data; if you're working with other data, follow the custom model quickstart instead. Support for other LLM providers and for OpenAI Chat is coming soon.
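For context, the wrapped model call in such a grammar-correction app might look like the following minimal sketch. It uses the legacy `openai.Completion.create` endpoint; the model name, prompt, and parameters are illustrative choices, not part of the tutorial's fixed setup.

```python
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"

def correct_grammar(text):
    """Ask a Completions model to rewrite `text` with correct grammar."""
    response = openai.Completion.create(
        model="text-davinci-003",  # illustrative model choice
        prompt=f"Correct the grammar in the following sentence:\n\n{text}\n\nCorrected:",
        max_tokens=100,
        temperature=0.0,  # deterministic output for a correction task
    )
    return response["choices"][0]["text"].strip()

print(correct_grammar("she dont like apples"))
```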

The tutorial introduces the following Gantry concepts:

  • Logging: Sending data to Gantry (see the first sketch after this list)
    • Performance monitoring is based on inputs, outputs, and feedback across all of your environments
  • Applications: A model and its associated configuration
  • Evaluations: Performance reports on model configurations
    • Easily compare model versions on criteria that you choose
    • Harness an LLM to generate test data, in addition to using your production data (see the second sketch after this list)
  • Analysis: Use a flexible dashboard to slice and dice your data
    • Create interesting visualizations, find underperforming cases, and add those cases to your test datasets
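As a rough sketch of what logging might look like in code, the snippet below records a model call's inputs, outputs, and feedback against a named application. The `gantry.init` and `gantry.log_record` calls are assumptions based on a typical Python SDK pattern; check the Gantry SDK reference for the exact names and signatures.

```python
import gantry

# Assumed initialization call; the exact entry point may differ.
gantry.init(api_key="YOUR_GANTRY_API_KEY")

text = "she dont like apples"
corrected = correct_grammar(text)  # from the earlier sketch

# Hypothetical logging call: inputs, outputs, and (optional) feedback
# are recorded against a named application for monitoring.
gantry.log_record(
    "grammar-corrector",
    inputs={"text": text},
    outputs={"corrected": corrected},
    feedback={"user_accepted": True},  # e.g., collected later from your UI
)
```

Logging against an application name is what ties records to that application and its versions, so the same call can be reused across your development and production environments.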
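For the test-data bullet above, one way to harness an LLM to generate test data is to prompt the same Completions endpoint for deliberately ungrammatical sentences; again, the prompt and model below are illustrative:

```python
import openai

def generate_test_cases(n=5):
    """Synthesize grammatically incorrect sentences to use as test inputs."""
    response = openai.Completion.create(
        model="text-davinci-003",  # illustrative model choice
        prompt=f"Write {n} short English sentences, one per line, "
               "each containing a grammar mistake:",
        max_tokens=200,
        temperature=0.9,  # higher temperature for varied test cases
    )
    lines = response["choices"][0]["text"].strip().splitlines()
    # Drop blank lines and any "1." / "-" list prefixes the model adds.
    return [line.lstrip("0123456789.-) ").strip() for line in lines if line.strip()]

for case in generate_test_cases():
    print(case)
```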

What’s Next

We've successfully created an application and a prompt version in Gantry! Next, let's see if it performs as expected.