POST /classifications/universal
import os
from isaacus import Isaacus

client = Isaacus(
    api_key=os.environ.get("ISAACUS_API_KEY"),  # This is the default and can be omitted
)
universal_classification = client.classifications.universal.create(
    model="kanon-universal-classifier",
    query="This is a confidentiality clause.",
    texts=["I agree not to tell anyone about the document."],
)
print(universal_classification.classifications)
{
  "classifications": [
    {
      "index": 0,
      "score": 0.8825573934438159,
      "chunks": [
        {
          "index": 0,
          "start": 0,
          "end": 45,
          "score": 0.8825573934438159,
          "text": "I agree not to tell anyone about the document."
        }
      ]
    }
  ],
  "usage": {
    "input_tokens": 19
  }
}

Authorizations

Authorization
string
header
required

An Isaacus-issued API key passed as a bearer token via the Authorization header in the format Authorization: Bearer YOUR_API_KEY.
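
For reference, the endpoint can also be called over raw HTTP. The sketch below uses the requests library; the base URL shown is an assumption and should be checked against the SDK's configuration.

# A minimal sketch of calling the endpoint directly over HTTP instead of via the SDK.
# NOTE: the base URL below is an assumption; confirm it before relying on it.
import os

import requests

response = requests.post(
    "https://api.isaacus.com/v1/classifications/universal",  # assumed base URL
    headers={
        "Authorization": f"Bearer {os.environ['ISAACUS_API_KEY']}",  # bearer token format
        "Content-Type": "application/json",
    },
    json={
        "model": "kanon-universal-classifier",
        "query": "This is a confidentiality clause.",
        "texts": ["I agree not to tell anyone about the document."],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["classifications"])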

Body

application/json

A request to classify the relevance of legal documents to a query with an Isaacus universal legal AI classifier.

model
enum<string>
required

The ID of the model to use for universal classification.

Available options:
kanon-universal-classifier,
kanon-universal-classifier-mini
Example:

"kanon-universal-classifier"

query
string
required

The Isaacus Query Language (IQL) query or, if IQL is disabled, the statement to evaluate the texts against.

The query must contain at least one non-whitespace character.

Unlike the texts being classified, the query cannot be so long that it exceeds the maximum input length of the universal classifier.

Required string length: 1 - 5000
Example:

"This is a confidentiality clause."

texts
string[]
required

The texts to classify.

The texts must contain at least one non-whitespace character.

Example:
[
  "I agree not to tell anyone about the document."
]
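
Multiple texts can be classified against the same query in a single request. The sketch below reuses the client from the example above; the clause texts are illustrative, and the attribute names assume the SDK mirrors the response schema.

# Classify several candidate clauses against one query in a single request.
# The clause texts are illustrative only.
multi = client.classifications.universal.create(
    model="kanon-universal-classifier",
    query="This is a confidentiality clause.",
    texts=[
        "I agree not to tell anyone about the document.",
        "This agreement is governed by the laws of England and Wales.",
        "Either party may terminate this agreement with 30 days' notice.",
    ],
)
# Classifications are returned in order from highest to lowest relevance score.
for classification in multi.classifications:
    print(classification.index, classification.score)
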
is_iql
boolean
default:true

Whether the query should be interpreted as an IQL query or as a plain statement.

Example:

true
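
For instance, IQL can be disabled so that the query is evaluated as a plain natural-language statement. A sketch reusing the client from the example above, assuming the SDK exposes is_iql as a keyword argument:

# Evaluate the query as a plain statement rather than as an IQL query.
statement_result = client.classifications.universal.create(
    model="kanon-universal-classifier",
    query="This is a confidentiality clause.",
    texts=["I agree not to tell anyone about the document."],
    is_iql=False,  # assumed to mirror the is_iql body field
)
print(statement_result.classifications[0].score)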

scoring_method
enum<string>
default:auto

The method to use for producing an overall confidence score.

auto is the default scoring method and is recommended for most use cases. Currently, it is equivalent to chunk_max. In the future, it will automatically select the best method based on the model and inputs.

chunk_max uses the highest confidence score of all of the texts' chunks.

chunk_avg averages the confidence scores of all of the texts' chunks.

chunk_min uses the lowest confidence score of all of the texts' chunks.

Available options:
auto,
chunk_max,
chunk_avg,
chunk_min
Example:

"auto"

chunking_options
object | null

Settings for how the texts should be chunked into smaller segments, using semchunk, before classification.

If null, the texts will not be chunked; any text found to exceed the maximum input length of the model (less overhead) will instead be truncated to that limit.

Example:
{
  "size": 512,
  "overlap_ratio": null,
  "overlap_tokens": null
}
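
A sketch of overriding the default chunk size, reusing the client from the example above and assuming the SDK accepts chunking_options as a dictionary mirroring the JSON body:

# Chunk each text into segments of roughly 256 tokens before classification.
# overlap_ratio and overlap_tokens could be set the same way; the values here are illustrative.
chunked = client.classifications.universal.create(
    model="kanon-universal-classifier",
    query="This is a confidentiality clause.",
    texts=["I agree not to tell anyone about the document."],
    chunking_options={"size": 256},  # assumed to be accepted as a plain dict
)
print(chunked.classifications[0].chunks)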

Response

200
application/json
The documents have been successfully classified.

Classifications of the relevance of legal documents to a query produced by an Isaacus universal legal AI classifier.

classifications
object[]
required

The classifications of the texts, by relevance to the query, in order from highest to lowest relevance score.

Example:
[
  {
    "index": 0,
    "score": 0.8825573934438159,
    "chunks": [
      {
        "index": 0,
        "start": 0,
        "end": 45,
        "score": 0.8825573934438159,
        "text": "I agree not to tell anyone about the document."
      }
    ]
  }
]
usage
object
required

Statistics about the usage of resources in the process of classifying the texts.

Example:
{ "input_tokens": 19 }