|  |  |
|---|---|
| HTTP status | 413 Payload Too Large |
| Code | `payload_too_large` |
| Retry? | No. Send a smaller payload. |
## When this happens
Two distinct triggers share this error code:
- Raw byte cap. The request body exceeds the API’s configured
  `MAX_REQUEST_SIZE` (10 MB by default).
- Per-analyzer token cap. `POST /api/v1/analyze/` rejects prompts that
  exceed the per-analyzer token limits (e.g. 100,000 tokens for the
  safety / prompt-injection / vector analyzers, 1,000,000 for
  YARA / SDP / URL).

The `detail` field in the response tells you which trigger fired.
## Example response
```json
{
  "detail": "Request body exceeds 10 MB.",
  "code": "payload_too_large",
  "request_id": "5b3f6c7e-7d24-4d40-9b12-3a59c01c6e91",
  "doc_url": "https://docs.lasscyber.com/errors/payload_too_large"
}
```
For an analyzer-level cap:

```json
{
  "detail": "Prompt exceeds 100,000 token limit for safety_moderation_analyzer.",
  "code": "payload_too_large",
  "request_id": "...",
  "doc_url": "https://docs.lasscyber.com/errors/payload_too_large"
}
```
## How to fix
- For prompt analysis: trim the prompt, or split it into chunks yourself
  before sending it to Agnes (see the sketch after this list). The
  classifier already chunks at 400 tokens internally for
  prompt-injection; the cap applies to the input you submit.
- For uploads / threat-intel ingestion: split the payload across
  multiple requests.
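A minimal chunking sketch for the first case, assuming a hypothetical `client.analyze(chunk)` call (substitute your SDK method) and approximating tokens with whitespace-separated words, since the tokenizer used server-side is not specified here:

```python
def chunk_prompt(prompt: str, max_tokens: int = 100_000) -> list[str]:
    """Split a prompt into pieces that fit under a per-analyzer cap.

    Word count is a rough proxy for tokens, so leave headroom:
    real tokenizers usually emit more tokens than words.
    """
    budget = int(max_tokens * 0.75)  # safety margin for the proxy
    words = prompt.split()
    return [
        " ".join(words[i : i + budget])
        for i in range(0, len(words), budget)
    ]

# Hypothetical usage; `client.analyze` stands in for your SDK call:
# results = [client.analyze(chunk) for chunk in chunk_prompt(long_prompt)]
```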
## Per-analyzer token caps
| Analyzer | Max input tokens |
|---|---|
| Prompt Injection & Jailbreak | 100,000 |
| Safety & Responsible AI | 100,000 |
| Sensitive Data | 1,000,000 |
| Natural Language | 100,000 |
| URL Risk | 1,000,000 |
| YARA | 1,000,000 |
| Semantic Threat Intelligence | 100,000 |
The combined analyzer enforces the most restrictive limit across the
analyzers in your policy.
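Put differently, the effective cap is the minimum over the analyzers your policy enables. A small illustration (the keys are informal shorthand for the table above; the policy is a made-up example):

```python
# Caps transcribed from the table above; key names are informal shorthand.
TOKEN_CAPS = {
    "prompt_injection": 100_000,
    "safety": 100_000,
    "sensitive_data": 1_000_000,
    "natural_language": 100_000,
    "url_risk": 1_000_000,
    "yara": 1_000_000,
    "semantic_threat_intel": 100_000,
}

policy = ["sensitive_data", "url_risk", "safety"]   # example policy
effective_cap = min(TOKEN_CAPS[a] for a in policy)  # -> 100_000
```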
## SDK behaviour
| SDK | Exception |
|---|---|
| Python | `agnes.ValidationError` (with `code == "payload_too_large"`) |
| TypeScript | `ValidationError` (with `code === "payload_too_large"`) |
SDKs do not retry 413s.
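Since the SDKs will not retry for you, handle the error and shrink the input yourself. A Python sketch, assuming a hypothetical client object `agnes_client` with an `analyze` method; the exception type and `code` attribute come from the table above:

```python
import agnes

try:
    result = agnes_client.analyze(prompt)  # hypothetical client call
except agnes.ValidationError as exc:
    if exc.code == "payload_too_large":
        # Do not retry as-is: trim, chunk, or split the input first.
        handle_oversized_input(prompt)  # your recovery logic (hypothetical)
    else:
        raise
```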