Predictions

Logging inputs and outputs for predictions is straightforward and takes just a few lines of code.

In the previous section we developed a mental model of what a record in Gantry looks like. Here we will show how to log the first few columns of that record.


We will discuss tags in more detail in a subsequent section, but we include them here as they are logged with predictions.

Streaming

First initialize Gantry:

import gantry

gantry.init(
    api_key="YOUR_API_KEY",  # see above docs
)

🚧

Gantry is global

The Gantry module is initialized globally, once per Python process. That simply means all logging calls in a process share an API key; all other parameters are specified at each logging call site.

Next prepare your data in the required format.

inputs = {
  "prompt": "I read the news today oh boy",
}

outputs = {
  "generation": "About a lucky man who made the grade",
}

tags = {
  "env": "prod",
  "user_type": "professional",
}

📘

View the list of supported data types in Gantry here.

That's it, you're ready to send your data to Gantry:

gantry.log_record(
  "my-awesome-app",
  inputs=inputs,
  outputs=outputs,
  tags=tags,
)

Batch

Just like the streaming case, start by initializing Gantry:

import gantry

gantry.init(
    api_key="YOUR_API_KEY",  # see above docs
)

Again, prepare your data, noting that Gantry relies on the inputs and outputs being in corresponding order:

import pandas as pd

inputs: pd.DataFrame = ...   # Contains the inputs to your model
outputs: pd.DataFrame = ...  # Contains your model's predictions
tags: dict = ...             # Any values (env, model version, etc.) to tag records with
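
To make the "corresponding order" requirement concrete, here is a minimal sketch (the data and the column names prompt and generation are purely illustrative) in which row i of outputs is the prediction for row i of inputs:

```python
import pandas as pd

# Hypothetical example data: row i of `outputs` must be the
# prediction your model produced for row i of `inputs`.
inputs = pd.DataFrame({
    "prompt": [
        "I read the news today oh boy",
        "Picture yourself in a boat on a river",
    ],
})
outputs = pd.DataFrame({
    "generation": [
        "About a lucky man who made the grade",
        "With tangerine trees and marmalade skies",
    ],
})

# Tags are a plain dict and apply to every record in the batch.
tags = {"env": "prod", "user_type": "professional"}

# Sanity check before logging: the rows must line up one-to-one.
assert len(inputs) == len(outputs)
```

The assertion is just a defensive check: if the two frames ever drift out of alignment (for example, after filtering one but not the other), the logged predictions would be paired with the wrong inputs.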

Finally, log the records:

gantry.log_records(
  "my-awesome-app",
  inputs=inputs,
  outputs=outputs,
  tags=tags,
  as_batch=True,
)

Note that in the log_records call we included the as_batch parameter to log the records synchronously.


What’s Next

We mentioned tags without explaining them; let's briefly dive into those next.