Documentation Index
Fetch the complete documentation index at: https://docs.lasscyber.com/llms.txt
Use this file to discover all available pages before exploring further.

Agnes is an AI security service that protects production LLM applications. You send Agnes the text flowing into or out of your LLM — user prompts, model responses, retrieved documents — and Agnes decides whether it is safe to proceed. Behind the API, Agnes runs a suite of analyzers in parallel:
- Prompt injection & jailbreak detection — BERT-family classifiers tuned for adversarial prompts.
- Safety & responsible AI guardrails — LLM-as-a-judge using ShieldGemma.
- Sensitive Data Protection — Google Cloud DLP for PII, credentials, and PHI.
- Natural Language analysis — entity, sentiment, and moderation signals from Google Cloud NL.
- Malicious URL detection — Google Web Risk for malware, phishing, and unwanted-software URLs.
- YARA rule enforcement — pre-built and customer-authored signatures.
- Semantic threat intelligence — vector similarity against a database of known adversarial prompts.
All of this is exposed through a single endpoint, POST /api/v1/analyze/, which runs a combination of those analyzers under a customer-defined policy.
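As a concrete illustration, a call to that endpoint is one authenticated HTTP POST. The sketch below assembles such a request with the standard library only; the payload field names ("input", "policy") and the bearer-token auth scheme are assumptions for illustration, not the documented schema — see the API reference for the real contract.

```python
import json

AGNES_API_BASE = "https://api.lasscyber.com"  # assumed base URL, for illustration


def build_analyze_request(api_key: str, text: str, policy: str) -> dict:
    """Assemble a request for POST /api/v1/analyze/.

    The body field names ("input", "policy") are illustrative
    assumptions, not the documented schema.
    """
    return {
        "url": f"{AGNES_API_BASE}/api/v1/analyze/",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"input": text, "policy": policy}),
    }


# Sending it is a single HTTP call, e.g. with the stdlib:
#   import urllib.request
#   req = build_analyze_request("sk-...", "user prompt here", "default")
#   urllib.request.urlopen(urllib.request.Request(
#       req["url"], data=req["body"].encode(), headers=req["headers"]))
```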
5-minute quickstart
Sign up, mint an API key, and run your first analysis with the Python or
TypeScript SDK.
How Agnes works
The request lifecycle from your code to the analyzer pipeline and back.
Build a policy
Combine analyzers, set thresholds, and define when to block, warn, or pass
through.
API reference
Full OpenAPI reference with an interactive playground for every endpoint.
What Agnes is for
You should reach for Agnes when you are shipping LLM-powered features and need a security layer that does not depend on your model vendor. Typical deployments place Agnes:
- In front of every prompt going to your model, to block prompt injection, jailbreak attempts, and policy violations before tokens leave your tenancy.
- In front of every response coming back from your model, to catch safety violations, leaked sensitive data, or malicious URLs before they hit users.
- On retrieval pipelines (RAG ingestion, tool outputs), to scan third-party content with the same policy your prompts get.
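The first two placements above amount to wrapping every model call in the same check, once on the way in and once on the way out. A minimal sketch of that pattern, assuming a decision object with a "verdict" field of "block", "warn", or "pass" (an illustrative shape, not the documented response schema):

```python
from typing import Callable


class BlockedError(Exception):
    """Raised when Agnes rejects a prompt or a response."""


def guarded_completion(prompt: str,
                       analyze: Callable[[str], dict],
                       complete: Callable[[str], str]) -> str:
    """Run the same Agnes policy on the prompt and on the response."""
    # Inbound check: stop the prompt before it reaches the model.
    if analyze(prompt)["verdict"] == "block":
        raise BlockedError("prompt rejected by policy")

    response = complete(prompt)

    # Outbound check: stop or annotate the response before users see it.
    decision = analyze(response)
    if decision["verdict"] == "block":
        raise BlockedError("response rejected by policy")
    if decision["verdict"] == "warn":
        response = "[flagged] " + response  # e.g. annotate instead of blocking
    return response
```

The same `analyze` callable can be pointed at retrieved documents or tool outputs, which is how the third placement (RAG ingestion) reuses one policy everywhere.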
Where to next
- First time here? Start with the Quickstart and Authentication.
- Designing a policy? Read How Agnes works, the Combined analyzer deep dive, and Agnes policies.
- Building an integration? Jump to the
Python SDK or TypeScript SDK. Both
share a single OpenAPI contract, expose
Agnes.analyze and Agnes.guard, ship a PolicyBuilder, and offer an OpenAI drop-in.
- Operating Agnes in production? Roles, API keys, and Billing cover the day-two work.
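To make the PolicyBuilder idea concrete, here is a sketch of the fluent builder pattern such an SDK typically exposes. The analyzer names, method names, and rule fields below are illustrative assumptions, not the SDK's actual surface; consult the SDK reference for the real API.

```python
class PolicyBuilder:
    """Illustrative fluent builder for a customer-defined policy."""

    def __init__(self):
        self._rules = []

    def analyzer(self, name: str, threshold: float, action: str) -> "PolicyBuilder":
        # Each rule pairs an analyzer with a score threshold and an action.
        self._rules.append(
            {"analyzer": name, "threshold": threshold, "action": action}
        )
        return self  # fluent: every call returns the builder itself

    def build(self) -> dict:
        return {"rules": self._rules}


# Hypothetical usage: block likely prompt injection, warn on sensitive data.
policy = (
    PolicyBuilder()
    .analyzer("prompt_injection", threshold=0.8, action="block")
    .analyzer("sensitive_data", threshold=0.5, action="warn")
    .build()
)
```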
Service status
Real-time API health and incident history live at status.lasscyber.com. Subscribe via email or Slack to be notified the moment an incident opens or resolves.
Help
If you cannot find what you need:
- File a ticket from the in-app Support page — your tenant, plan, and recent request IDs are attached automatically.
- Email support@lasscyber.com and quote a recent X-Request-ID from the API response. The SDKs surface it as decision.request_id (Python) / decision.requestId (TypeScript).