Agent Run Reporting

To effectively monitor an AI agent within IcarusAI, it's crucial that we receive data from the agent's runs. This is accomplished through a straightforward API endpoint designed for reporting these runs.

Reporting Runs via API

We have a simple API endpoint that you can use to report the runs of your agents. The endpoint is fully documented and available for reference and testing at this link.

Structure of an Agent Run Report

When reporting a run, the data should be structured as follows:

```json
{
  "access_token": "string",
  "run_start": "1970-01-01T00:00:00.000Z",
  "run_end": "1970-01-01T00:00:00.000Z",
  "success": false,
  "sample_skipped": 0,
  "client_reference": "AAAAAA",
  "inputs": {},
  "outputs": 0,
  "labels": {}
}
```

Field Descriptions

  • access_token: A string token used to authenticate the request. This field is mandatory.
  • run_start: The timestamp marking the start of the run. This field is mandatory and should be in ISO 8601 format.
  • run_end: The timestamp indicating the end of the run. This field is also mandatory and should be in ISO 8601 format.
  • success: A boolean value indicating whether the run was completed successfully. This field is optional and defaults to true if not provided.
  • sample_skipped: An integer representing the number of runs that were skipped before this report. This field is optional and defaults to 0 (no runs skipped).
  • client_reference: A string for any client-specific identifier or reference associated with the run. This field is optional.
  • inputs: A dictionary containing the input features for the run. Each feature can be a number or a string (categorical value). This field is optional.
  • outputs: The output generated by the agent during the run. The type of this field varies based on the agent type (detailed below). This field is optional.
  • labels: A dictionary for any additional metadata related to the run, such as agent ID, version, geographic region, etc. This field is optional.
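Putting these fields together, a complete report payload might be assembled in Python like this (a minimal sketch: the access token, input features, output value, and label values are all illustrative placeholders):

```python
from datetime import datetime, timezone

# Timestamps for the run, in the required ISO 8601 format.
start = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
end = datetime(2024, 1, 1, 12, 0, 3, tzinfo=timezone.utc)

report = {
    # Mandatory fields.
    "access_token": "icrs-a-YOUR-ACCESS-TOKEN",  # placeholder token
    "run_start": start.isoformat(),
    "run_end": end.isoformat(),
    # Optional fields.
    "success": True,
    "sample_skipped": 0,
    "client_reference": "order-1234",            # illustrative identifier
    "inputs": {"age": 42, "country": "DE"},      # numeric and categorical features
    "outputs": 0.87,                             # e.g. a regression score
    "labels": {"agent_id": "pricing-v2", "region": "eu-west"},
}
```

Omitting any of the optional fields simply leaves them at their documented defaults.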

Output Field Based on Agent Type

The outputs field is dependent on the agent type, which determines the expected format:

  • Regression: A single numerical value (e.g., 42).
  • Classification: A categorical value as a string (e.g., "Dog").
  • Detection: A list of bounding boxes (e.g., [[1, 2, 10, 20]]).
  • Generic: A dictionary with one or more numeric or categorical output values (e.g., {"num": 42, "class": "4"}).
  • LLM (Large Language Model): A conversation represented as a list of queries and responses (e.g., ["How are you", "Great!"]).
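To make the expected shapes concrete, the hypothetical Python snippet below pairs each agent type with a well-formed `outputs` value and a rough shape check (the type names and the `looks_valid` helper are our own illustration, not part of the API):

```python
# Illustrative examples of the `outputs` field per agent type.
EXAMPLE_OUTPUTS = {
    "regression": 42,                      # single numerical value
    "classification": "Dog",               # categorical value as a string
    "detection": [[1, 2, 10, 20]],         # list of bounding boxes
    "generic": {"num": 42, "class": "4"},  # dict of numeric/categorical values
    "llm": ["How are you", "Great!"],      # list of queries and responses
}


def looks_valid(agent_type, outputs):
    """Rough shape check for an `outputs` value (hypothetical helper)."""
    if agent_type == "regression":
        return isinstance(outputs, (int, float))
    if agent_type == "classification":
        return isinstance(outputs, str)
    if agent_type == "detection":
        return isinstance(outputs, list) and all(isinstance(b, list) for b in outputs)
    if agent_type == "generic":
        return isinstance(outputs, dict)
    if agent_type == "llm":
        return isinstance(outputs, list) and all(isinstance(m, str) for m in outputs)
    return False
```

Each example value passes the check for its own agent type, while e.g. a string would fail the regression check.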

Example: Python Code for Reporting a Run

Reporting an agent run to IcarusAI is straightforward and can be done with a simple API call. Here’s an example using Python:

```python
from datetime import datetime, timezone

import requests

start = datetime.now(timezone.utc)
# ... run the agent ...
end = datetime.now(timezone.utc)

response = requests.put(
    "http://api.icrs.ai/api/data/run",
    json={
        "access_token": "icrs-a-YOUR-ACCESS-TOKEN",
        "run_start": start.isoformat(),
        "run_end": end.isoformat(),
        "outputs": 0.42,
    },
)
```

You can find additional code snippets for other programming languages in the app.

Sampling Runs

It’s not necessary to report every single run to IcarusAI—just a representative sample can often provide enough data for effective monitoring. If you choose to report only a subset of runs, you can indicate how many runs were skipped using the sample_skipped field.

By default, IcarusAI assumes that no runs were skipped (sample_skipped = 0). The frequency of sampling is entirely up to you, allowing you to balance data granularity with reporting overhead.
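For example, reporting only every tenth run could be sketched as follows (the `SampledReporter` class and the sampling rate are illustrative; the `send` callable would wrap the PUT request shown earlier):

```python
class SampledReporter:
    """Report only every Nth run, tracking skips via `sample_skipped` (illustrative sketch)."""

    def __init__(self, send, sample_rate=10):
        self.send = send                # callable performing the actual API request
        self.sample_rate = sample_rate  # report 1 in every `sample_rate` runs
        self.skipped = 0
        self.seen = 0

    def report(self, payload):
        if self.seen % self.sample_rate == 0:
            # Tell IcarusAI how many runs were dropped since the last report.
            payload["sample_skipped"] = self.skipped
            self.send(payload)
            self.skipped = 0
        else:
            self.skipped += 1
        self.seen += 1
```

Because each report carries its own `sample_skipped` count, IcarusAI can reconstruct the true run volume even though only a fraction of runs is sent.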