Feedback

Feedback is any data, such as ground-truth labels or other signals, that helps you assess model performance.

Feedback can occur long after the model made its original prediction.

Gantry takes care of tying your feedback back to the prediction it corresponds to. All you have to do is pick a unique identifier for each piece of data and send feedback with that same identifier.

If you have a unique identifier handy, you can provide it alongside both the original prediction and its feedback, tying them together with the join_key parameter.

gantry.log_record(
  "my-awesome-app",
  inputs=inputs,
  outputs=outputs,
  join_key=some_prediction_id,
)

Later, in another process, you can log feedback with the same join_key:

gantry.log_record(
  "my-awesome-app",
  feedback=feedback,
  join_key=some_prediction_id,
)
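To make the join concrete, here is a minimal in-memory stand-in (not the real Gantry client, whose join happens server-side) showing how records logged at different times end up matched on join_key:

```python
# Hypothetical in-memory stand-in for illustration only; the real Gantry
# backend performs this join server-side.
records = {}

def log_record(application, inputs=None, outputs=None, feedback=None, join_key=None):
    """Merge whatever fields arrive for (application, join_key) into one record."""
    record = records.setdefault((application, join_key), {})
    if inputs is not None:
        record["inputs"] = inputs
    if outputs is not None:
        record["outputs"] = outputs
    if feedback is not None:
        record["feedback"] = feedback
    return join_key

# At prediction time:
log_record(
    "my-awesome-app",
    inputs={"loan_amount": 1000.0},
    outputs={"loan_repaid_pred": True},
    join_key="loan-42",
)

# Later, in another process, feedback lands on the same record:
log_record("my-awesome-app", feedback={"loan_repaid": False}, join_key="loan-42")

record = records[("my-awesome-app", "loan-42")]
```

After both calls, the single record holds the inputs, outputs, and feedback together, which is the behavior the join_key buys you.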

If you don't have a join_key handy, Gantry will create one for you and return it from the log_record call:

join_key = gantry.log_record(
  "my-awesome-app",
  inputs=inputs,
  outputs=outputs,
)

If you want to tie feedback to predictions using input values, you can build the join_key from that specific information. For example:

import uuid

inputs = {
  "college_degree": True,
  "loan_amount": 1000.0,
  "job_title": "Software Engineer",
  "loan_id": str(uuid.uuid4()),
}

gantry.log_record(
  "my-awesome-app",
  inputs=inputs,
  outputs=outputs,
  join_key=f"loan_id_{inputs['loan_id']}",
)
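Because the key is derived from an input value, the feedback side can rebuild the identical key from the stored loan_id without exchanging any extra state. A minimal sketch:

```python
import uuid

loan_id = str(uuid.uuid4())

# Prediction side derives the key from the input value:
prediction_join_key = f"loan_id_{loan_id}"

# Feedback side, possibly in another process, rebuilds the same key from the
# same loan_id and passes it to gantry.log_record(..., join_key=...):
feedback_join_key = f"loan_id_{loan_id}"
```

Both sides produce the same string, so the two records join without anyone having to persist a Gantry-generated key.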

Feedback can be delayed for up to six months, or up to your desired data retention interval, whichever is shorter.

📘

Delayed Feedback Ingestion

Delayed feedback is processed in fifteen-minute intervals, so ingested delayed feedback appears on the dashboard up to fifteen minutes after logging.

Example in Flask

from flask import Flask, jsonify, request
import gantry

# model, X_train_mean, and X_train_std are assumed to be provided by your code
from loan_utils import preprocess_features, model, X_train_mean, X_train_std

GANTRY_API_KEY = "YOUR_API_KEY"  # see Getting Your API Key
GANTRY_APPLICATION_NAME = "my-awesome-app"  # name your application
GANTRY_APPLICATION_VERSION = "1.0"  # name your version

app = Flask(__name__)

gantry.init(
    api_key=GANTRY_API_KEY,
    environment="production",
)


# Every time this function is called, log inputs & predictions under "loan_pred"
def _predict(
    loan_id: str,  # id of loan, important metadata but not used in predicting
    college_degree: bool,  # if applicant has college degree
    loan_amount: float,  # in $1,000s
):
    preprocessed_features = preprocess_features(
        loan_id, college_degree, loan_amount, X_mean=X_train_mean, X_std=X_train_std
    )
    prediction = bool(model.predict(preprocessed_features.reshape(1, -1))[0])
    inputs = {"college_degree": college_degree, "loan_amount": loan_amount}
    outputs = {"loan_repaid_pred": prediction}
    gantry.log_record(
        GANTRY_APPLICATION_NAME,
        version=GANTRY_APPLICATION_VERSION,
        inputs=inputs,
        outputs=outputs,
        join_key=loan_id,
    )
    return prediction


@app.route("/loans/<loan_id>/predict", methods=["POST"])
def predict(loan_id):
    """Call this endpoint to make a prediction on whether a given loan
    will be repaid.
    """
    data = request.get_json(force=True)
    return jsonify({"result": _predict(loan_id, **data)})


@app.route("/loans/<loan_id>/repay", methods=["POST"])
def repay_status(loan_id):
    """Call this endpoint to update the actual loan repay status"""
    data = request.get_json(force=True)

    gantry.log_record(
        GANTRY_APPLICATION_NAME,
        feedback={"loan_repaid": data["repaid"]},
        join_key=loan_id,
    )
    return {"response": "ok"}
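Assuming the app is running locally on Flask's default port 5000, the two endpoints can be exercised as below; the loan id in the URL doubles as the join_key, so the second call's feedback is joined to the first call's prediction:

```shell
# Log a prediction for loan "abc-123":
curl -X POST http://localhost:5000/loans/abc-123/predict \
  -H "Content-Type: application/json" \
  -d '{"college_degree": true, "loan_amount": 1000.0}'

# Later, once the outcome is known, log feedback for the same loan id:
curl -X POST http://localhost:5000/loans/abc-123/repay \
  -H "Content-Type: application/json" \
  -d '{"repaid": true}'
```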