POST /extractions/qa
Python
import os
from isaacus import Isaacus

client = Isaacus(
    api_key=os.environ.get("ISAACUS_API_KEY"),  # This is the default and can be omitted
)
answer_extraction = client.extractions.qa.create(
    model="kanon-answer-extractor",
    query="What is the punishment for murder in Victoria?",
    texts=["The standard sentence for murder in the State of Victoria is 30 years if the person murdered was a police officer and 25 years in any other case."],
)
print(answer_extraction.extractions)
{
  "extractions": [
    {
      "index": 0,
      "answers": [
        {
          "text": "30 years if the person murdered was a police officer and 25 years in any other case",
          "start": 61,
          "end": 144,
          "score": 0.11460486645671249
        }
      ],
      "inextractability_score": 0.0027424068182309302
    }
  ],
  "usage": {
    "input_tokens": 43
  }
}
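
The `start` and `end` values in the response appear to be character offsets into the submitted text, forming a half-open range (an assumption consistent with the example above, not a documented guarantee). Under that assumption, an answer span can be recovered client-side with an ordinary slice:

```python
# Recover an answer span from the submitted text using the start/end
# offsets from the example response. The half-open character-offset
# interpretation is an assumption based on the example values.
text = (
    "The standard sentence for murder in the State of Victoria is 30 years "
    "if the person murdered was a police officer and 25 years in any other case."
)
start, end = 61, 144  # offsets from the example response above
answer = text[start:end]
print(answer)  # 30 years if the person murdered was a police officer and 25 years in any other case
```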

Authorizations

Authorization
string
header
required

An Isaacus-issued API key passed as a bearer token via the Authorization header in the format Authorization: Bearer YOUR_API_KEY.

Body

application/json

A request to extract answers from legal documents with an Isaacus legal AI extractive question answering model.

model
enum<string>
required

The ID of the model to use for extractive question answering.

Available options:
kanon-answer-extractor,
kanon-answer-extractor-mini
Examples:

"kanon-answer-extractor"

query
string
required

The query to extract the answer to.

The query must contain at least one non-whitespace character.

Unlike the texts from which the answer will be extracted, the query cannot be so long that it exceeds the maximum input length of the model.

Required string length: 1 - 5000
Examples:

"What is the punishment for murder in Victoria?"

texts
string[]
required

The texts to search for the answer in and extract the answer from.

There must be at least one text.

Each text must contain at least one non-whitespace character.

Examples:
[
  "The standard sentence for murder in the State of Victoria is 30 years if the person murdered was a police officer and 25 years in any other case."
]

ignore_inextractability
boolean
default:false

Whether to still return the highest scoring answers for a text even when the model's inextractability score for that text (its estimate of the likelihood that no answer can be extracted) exceeds the score of every candidate answer.

If you have already determined that the texts answer the query, for example, by using one of our classification or reranker models, then you should set this to true.

Examples:

false

top_k
integer
default:1

The number of highest scoring answers to return.

If set to null, all answers will be returned.

Required range: x >= 1
Examples:

1
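
Conceptually, `top_k` keeps only the k highest-scoring answers per text. The API does this server-side; the sketch below, with invented scores, only illustrates the selection semantics:

```python
# Illustrative only: client-side equivalent of top_k selection.
# These answers and scores are made up for demonstration.
answers = [
    {"text": "30 years", "score": 0.61},
    {"text": "25 years", "score": 0.22},
    {"text": "life imprisonment", "score": 0.05},
]
top_k = 2
top_answers = sorted(answers, key=lambda a: a["score"], reverse=True)[:top_k]
print([a["text"] for a in top_answers])  # ['30 years', '25 years']
```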

chunking_options
object | null

Settings for how texts should be chunked into smaller segments by semchunk before extraction.

If null, the texts will not be chunked; instead, any text found to exceed the maximum input length of the model (less overhead) will be truncated to that limit.

Examples:
{ "size": 512, "overlap_ratio": 0.1 }
{ "size": 512, "overlap_tokens": 10 }
{ "size": 512 }
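
A request body using `chunking_options` might be assembled as below. The field names follow the schema above; the values are examples only, and the payload is a sketch rather than a verified request:

```python
# Hypothetical request payload illustrating chunking_options.
# Field names mirror the schema documented above; values are examples.
payload = {
    "model": "kanon-answer-extractor",
    "query": "What is the punishment for murder in Victoria?",
    "texts": [
        "The standard sentence for murder in the State of Victoria is 30 years "
        "if the person murdered was a police officer and 25 years in any other case."
    ],
    "chunking_options": {"size": 512, "overlap_ratio": 0.1},
}
print(sorted(payload))  # ['chunking_options', 'model', 'query', 'texts']
```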

Response

The documents have been successfully processed.

The results of extracting answers from texts.

extractions
Answer extraction · object[]
required

The results of extracting answers from the texts, ordered from highest to lowest answer confidence score (or else lowest to highest inextractability score if there are no answers for a text).

Examples:
[
  {
    "index": 0,
    "answers": [
      {
        "text": "30 years if the person murdered was a police officer and 25 years in any other case",
        "start": 61,
        "end": 144,
        "score": 0.11460486645671249
      }
    ],
    "inextractability_score": 0.0027424068182309302
  }
]
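
The ordering rule described above can be sketched as a client-side sort. This is illustrative only: the API returns extractions already ordered, the scores below are invented, and placing answerless texts after answered ones is one reading of the rule:

```python
# Illustrative sort matching the documented ordering: extractions with
# answers ranked by highest answer score (descending); extractions with
# no answers ranked by inextractability_score (ascending). Data is made up.
extractions = [
    {"index": 0, "answers": [], "inextractability_score": 0.9},
    {"index": 1, "answers": [{"text": "a", "score": 0.4}], "inextractability_score": 0.1},
    {"index": 2, "answers": [{"text": "b", "score": 0.7}], "inextractability_score": 0.05},
]

def sort_key(extraction):
    if extraction["answers"]:
        # Answered texts first, highest answer score first.
        return (0, -max(a["score"] for a in extraction["answers"]))
    # Answerless texts after, lowest inextractability score first.
    return (1, extraction["inextractability_score"])

ordered = sorted(extractions, key=sort_key)
print([e["index"] for e in ordered])  # [2, 1, 0]
```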
usage
object
required

Statistics about the usage of resources in the process of extracting answers from the texts.

Examples:
{ "input_tokens": 43 }