POST /classifications/universal

import Isaacus from 'isaacus';

const client = new Isaacus({
  apiKey: process.env['ISAACUS_API_KEY'], // This is the default and can be omitted
});

async function main() {
  const universalClassification = await client.classifications.universal.create({
    model: 'kanon-universal-classifier',
    query: 'This is a confidentiality clause.',
    text: 'I agree not to tell anyone about the document.',
  });

  console.log(universalClassification.chunks);
}

main();
{
  "chunks": [
    {
      "end": 46,
      "score": 0.7481262778280844,
      "start": 0,
      "text": "I agree not to tell anyone about the document."
    }
  ],
  "score": 0.7481262778280844,
  "usage": {
    "input_tokens": 19
  }
}

Authorizations

Authorization
string
header
required

An Isaacus-issued API key passed as a bearer token via the Authorization header in the format Authorization: Bearer YOUR_API_KEY.

Body

application/json

A request to classify the relevance of a legal document to a query using an Isaacus universal legal AI classifier.

model
enum<string>
required

The ID of the model to use for universal classification.

Available options:
kanon-universal-classifier,
kanon-universal-classifier-mini
Example:

"kanon-universal-classifier"

query
string
required

The Isaacus Query Language (IQL) query or, if IQL is disabled, the statement to evaluate the text against.

The query must contain at least one non-whitespace character.

Unlike the text being classified, the query must not exceed the maximum input length of the universal classifier.

Required string length: 1 - 5000
Example:

"This is a confidentiality clause."

text
string
required

The text to classify.

The text must contain at least one non-whitespace character.

Required string length: 1 - 10000000
Example:

"I agree not to tell anyone about the document."

chunking_options
object | null

Settings for how the text should be chunked into smaller segments by semchunk before classification.

If null, the text will not be chunked; instead, if it exceeds the model's maximum input length (less overhead), it will be truncated to that limit.

Example:
{
  "overlap_ratio": null,
  "size": 512,
  "overlap_tokens": null
}
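As a rough sketch of how these settings are supplied, the request below reuses the call from the example at the top of this page and adds the chunking options shown above. The parameter names follow the body fields documented here (check the SDK's types for the exact shape), and longContractText is a placeholder for your own document.

import Isaacus from 'isaacus';

const client = new Isaacus({ apiKey: process.env['ISAACUS_API_KEY'] });

async function classifyLongText(longContractText: string) {
  // Chunk the text into ~512-token segments with no overlap before classification.
  return client.classifications.universal.create({
    model: 'kanon-universal-classifier',
    query: 'This is a confidentiality clause.',
    text: longContractText,
    chunking_options: {
      size: 512,            // target chunk size in tokens
      overlap_ratio: null,  // no proportional overlap between adjacent chunks
      overlap_tokens: null, // no fixed-token overlap between adjacent chunks
    },
  });
}
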
is_iql
boolean
default:
true

Whether the query should be interpreted as an Isaacus Query Language (IQL) query or as a plain statement.

Example:

true
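For instance, a request that treats the query as a plain statement rather than IQL might look like the following sketch (the wording of the statement is illustrative only; the parameter name follows the body field documented here):

import Isaacus from 'isaacus';

const client = new Isaacus({ apiKey: process.env['ISAACUS_API_KEY'] });

async function classifyWithStatement() {
  return client.classifications.universal.create({
    model: 'kanon-universal-classifier-mini',
    query: 'The parties agree to keep the terms of this agreement confidential.',
    text: 'I agree not to tell anyone about the document.',
    is_iql: false, // interpret `query` as a statement, not as IQL
  });
}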

scoring_method
enum<string>
default:
auto

The method to use for producing an overall confidence score.

auto is the default scoring method and is recommended for most use cases. Currently, it is equivalent to chunk_max. In the future, it will automatically select the best method based on the model and input.

chunk_max uses the highest confidence score of all of the text's chunks.

chunk_avg averages the confidence scores of all of the text's chunks.

chunk_min uses the lowest confidence score of all of the text's chunks.

Available options:
auto,
chunk_max,
chunk_avg,
chunk_min
Example:

"auto"

Response

200
application/json
The classification of the text.

A classification of the relevance of a legal document to a query produced by an Isaacus universal legal AI classifier.

chunks
object[] | null
required

The text as broken into chunks by semchunk, each chunk with its own confidence score.

If no chunking occurred, this will be null.

A chunk of a text that has been classified by an Isaacus universal legal AI classifier.

Example:
[
  {
    "end": 46,
    "score": 0.7481262778280844,
    "start": 0,
    "text": "I agree not to tell anyone about the document."
  }
]
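As a sketch of how the chunk-level results might be used, the snippet below picks the highest-scoring chunk as the most relevant passage, assuming a response shaped like the example above and remembering that chunks is null when no chunking occurred.

import Isaacus from 'isaacus';

const client = new Isaacus({ apiKey: process.env['ISAACUS_API_KEY'] });

async function mostRelevantPassage(query: string, text: string) {
  const result = await client.classifications.universal.create({
    model: 'kanon-universal-classifier',
    query,
    text,
  });
  if (!result.chunks) {
    // No chunking occurred, so the whole text is the only candidate passage.
    return { passage: text, score: result.score };
  }
  // Each chunk carries its own confidence score plus start/end offsets into the
  // original text; take the chunk the classifier scored highest.
  const best = result.chunks.reduce((a, b) => (b.score > a.score ? b : a));
  return { passage: best.text, score: best.score };
}
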
score
number
required

A score indicating the likelihood that the query expressed about the text is supported by the text.

A score greater than 0.5 indicates that the text supports the query, while a score less than 0.5 indicates that the text does not support the query.

Required range: 0 <= x <= 1
Example:

0.7481262778280844
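A minimal sketch of applying the 0.5 threshold described above:

// Sketch: interpreting the overall confidence score against the 0.5 threshold.
function supportsQuery(score: number): boolean {
  return score > 0.5;
}

console.log(supportsQuery(0.7481262778280844)); // true: the text supports the query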

usage
object
required

Statistics about the usage of resources in the process of classifying the text.

Example:
{ "input_tokens": 19 }