Logging LLM Data
The logging API is designed to capture every request sent to OpenAI and every response returned.
There are two primary ways to send data to Gantry:
- Through the UI
- Through the SDK
Both of these workflows build on the concept of versioning. Gantry keeps track of your versions so that you can compare performance over time. Understanding versioning is a prerequisite for this page; learn more on the Manage Model Configuration Versions page.
Via the SDK
The most straightforward way to send completion data programmatically is with the `log_llm_data` function:
```python
import os

import gantry
import openai
from gantry.applications.llm_utils import fill_prompt
from openai.util import convert_to_dict

GANTRY_API_KEY = os.environ.get("GANTRY_API_KEY")
openai.api_key = os.environ.get("OPENAI_API_KEY")

gantry.init(api_key=GANTRY_API_KEY)
my_llm_app = gantry.get_application(GANTRY_APP_NAME)
version = my_llm_app.get_version("test")
config = version.config
prompt = config["prompt"]

def generate(values):
    filled_in_prompt = fill_prompt(prompt, values)

    request = {
        "model": "text-davinci-002",
        "prompt": filled_in_prompt,
    }

    results = openai.Completion.create(**request)

    my_llm_app.log_llm_data(
        api_request=request,
        api_response=convert_to_dict(results),
        request_attributes={"prompt_values": values},
        version=version.version_number,
    )

    return results
```
This pulls a stored version from Gantry, fills in the prompt and its parameters, sends the request to OpenAI, and then logs the result to Gantry. The `request_attributes`, `response_attributes`, and `session_id` parameters can also be used to send additional metadata. Feedback can be added directly to the logging request with the `feedback` parameter or sent after the fact. The Feedback page walks through some examples.
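For illustration, here is a sketch of the optional metadata a single logging call might carry. All of the attribute keys and values below are made-up examples, not required names, and the feedback payload shape is an assumption; check the Feedback page for the exact schema.

```python
# Illustrative metadata for one log_llm_data call. Every key/value here is a
# hypothetical example chosen for this sketch.
request_attributes = {"prompt_values": {"name": "Ada"}, "env": "staging"}
response_attributes = {"latency_ms": 412}
session_id = "session-1234"     # groups related requests into one session
feedback = {"thumbs_up": True}  # hypothetical feedback payload

# The call itself would then look like (inside generate(), where `request`,
# `results`, and `version` are in scope):
#
# my_llm_app.log_llm_data(
#     api_request=request,
#     api_response=convert_to_dict(results),
#     request_attributes=request_attributes,
#     response_attributes=response_attributes,
#     session_id=session_id,
#     feedback=feedback,
#     version=version.version_number,
# )
```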
Requesting multiple answers from OpenAI
OpenAI supports requesting multiple completions from the model using the `n` parameter on the endpoint. For example, if we set `{"n": 3}` in the example POST request body to the OpenAI API, the `choices` field in the response will contain a list of 3 possible completions. To choose option 1 (`index = 1`) as the best choice to present to the user and log it to Gantry, set the `selected_choice_index` field in the logging request body. Note that if OpenAI returns multiple choices and `selected_choice_index` is not set, it defaults to index 0.
```python
my_llm_app.log_llm_data(
    api_request=request,
    api_response=convert_to_dict(results),
    version=version.version_number,
    selected_choice_index=1,
)
```
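How to pick the best of the `n` choices is up to your application. As a toy, self-contained sketch: the dict below only mimics the shape of an OpenAI completion response, and the longest-completion policy is invented for this example.

```python
# Mocked response in the shape the completion endpoint returns for n=3
# (only the fields relevant to choice selection are shown).
results = {
    "choices": [
        {"index": 0, "text": "Paris is the capital of France."},
        {"index": 1, "text": "The capital of France is Paris, home to about 2 million people."},
        {"index": 2, "text": "Paris."},
    ]
}

def pick_choice(choices):
    """Toy selection policy: prefer the longest completion."""
    return max(range(len(choices)), key=lambda i: len(choices[i]["text"]))

selected = pick_choice(results["choices"])  # pass as selected_choice_index
```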
In the UI, you'd be able to see the choices and the choice that was selected:

Streaming data from OpenAI