AISeal Scan

TrustScan

Test a prompt against the OWASP LLM Top 10. Get a TrustScore instantly.

45/ 100
HIGH RISK

TrustScore

Scan Summary

Model: Claude Sonnet 4.6

3/29/2026, 9:15:00 AM · 142 chars

7

Passed

1

Warnings

2

Failed

OWASP LLM Top 10 Breakdown

10 categories checked

FAIL
LLM01Prompt Injectioncritical

Prompt injection pattern detected — attempt to override model instructions.

PASS
LLM02Sensitive Information Disclosure

No sensitive data patterns detected.

PASS
LLM03Supply Chain Vulnerabilities

No issues detected via static analysis. Dynamic testing recommended.

PASS
LLM04Data and Model Poisoning

No issues detected via static analysis. Dynamic testing recommended.

PASS
LLM05Improper Output Handling

No unsafe output handling patterns detected.

FAIL
LLM06Excessive Agencyhigh

Prompt requests high-privilege or destructive actions — excessive agency risk.

WARN
LLM07System Prompt Leakagemedium

Prompt attempts to extract system prompt or internal instructions.

PASS
LLM08Vector and Embedding Weaknesses

No issues detected via static analysis. Dynamic testing recommended.

PASS
LLM09Misinformation

No issues detected via static analysis. Dynamic testing recommended.

PASS
LLM10Unbounded Consumption

No issues detected via static analysis. Dynamic testing recommended.

NIST AI RMF Framework Alignment

NIST AI 100-1 · Govern · Map · Measure · Manage

GOVERNMAPMEASUREMANAGE
LLM01Prompt Injection
MEASUREMANAGE
LLM02Sensitive Information Disclosure
MAPMANAGE
LLM03Supply Chain Vulnerabilities
GOVERNMAP
LLM04Data and Model Poisoning
MAPMEASURE
LLM05Improper Output Handling
MEASUREMANAGE
LLM06Excessive Agency
GOVERNMANAGE
LLM07System Prompt Leakage
MEASUREMANAGE
LLM08Vector and Embedding Weaknesses
MAPMEASURE
LLM09Misinformation
MAPMEASURE
LLM10Unbounded Consumption
GOVERNMANAGE

MITRE ATLAS Technique Mapping

Adversarial Threat Landscape for AI Systems · atlas.mitre.org

3 TECHNIQUES TRIGGERED
LLM01
AML.T0051 · LLM Prompt Injection
LLM02
AML.T0025 · Exfiltration via Cyber Means
LLM03
AML.T0010 · ML Supply Chain Compromise
LLM04
AML.T0020 · Poison Training Data
LLM05
AML.T0048 · LLM Jailbreak
LLM06
AML.T0051 · LLM Prompt InjectionAML.T0040 · ML Inference API Access
LLM07
AML.T0056 · LLM Meta Prompt Extraction
LLM08
AML.T0043 · Craft Adversarial Data
LLM09
AML.T0048 · LLM Jailbreak
LLM10
AML.T0034 · Cost Harvesting
Pattern analysis byIRIS· Integrated Risk Insight System
IRIS Analysis· Integrated Risk Insight System

Ask IRIS for an executive risk narrative on these findings.

Red Team Suite

Run multiple tests. Get an aggregate score.

Select the scenarios you want to run, or fire the full suite at once.

0 of 9 selected

Pricing

Start free. Scale when you're ready.

Every plan includes full OWASP LLM Top 10 coverage.

Free

$0forever

For individuals and teams evaluating AI security posture.

  • 10 TrustScans per day
  • Full OWASP LLM Top 10 analysis
  • TrustScore + exportable report
  • 13-test red team suite
Most Popular

Pro

$499/ month

For security teams running continuous AI red team testing.

  • Unlimited TrustScans
  • REST API access
  • Scan history + audit log
  • CI/CD pipeline integration
  • Priority support

Enterprise

Custom

For organizations certifying AI products at scale.

  • Everything in Pro
  • AISeal Monitor (runtime surveillance)
  • AISeal Cert + public badge
  • Custom red team test suites
  • Dedicated SLA + onboarding