Documentation

Everything you need to know about using the TIERverify™ Analyzer to evaluate AI outputs against governance frameworks.

What is TIERverify™ Analyzer?

TIERverify™ Analyzer is a governance scoring tool that evaluates AI-generated outputs against regulatory and compliance frameworks. In an era where organizations increasingly rely on AI assistants for policy drafting, compliance documentation, and security guidance, ensuring these outputs meet governance standards is critical.

The analyzer performs comprehensive checks including citation accuracy verification, hallucination detection, prescriptive language analysis, governance alignment scoring, and framework coverage mapping. Each analysis produces a Trust Score—a weighted composite reflecting how well the AI output aligns with your selected governance context.

TIERverify™ Analyzer is part of the TIERverify™ Architecture Suite, a comprehensive platform for AI governance, inference orchestration, and compliance automation.

How to Use the Analyzer

1. Select Your AI Source

Choose which AI model generated the output you want to analyze. This helps contextualize the analysis and allows tracking of governance patterns across different AI systems.

2. Choose Your Role

Select the role that best describes your function. This affects which governance rules apply—auditors see stricter checks for prescriptive language, developers see implementation-focused analysis. Use "General User" for a neutral analysis without role-specific constraints.

3. Select a Governance Framework

Choose the compliance framework to evaluate against (NIST 800-53, ISO 27001, SOC 2, etc.). The analyzer will map content to specific controls and identify coverage gaps. Select "None" to run content quality checks only (citation + hallucination detection) without framework-specific scoring.

4. Select Industry (Optional)

Optionally load industry-specific governance profiles for more targeted analysis. This applies additional context relevant to your sector—healthcare, finance, government, etc.

5. Paste Your Content

Paste the AI-generated output you want to analyze. You can optionally include the original prompt for more contextual analysis. The analyzer processes text content of any length.

6. Review Results

View your Trust Score and detailed analysis including citation check, hallucination detection, governance gap analysis, and framework coverage report. Export the results as a PDF for documentation or stakeholder review.

Understanding Your Results

Trust Score

Your overall governance alignment score on a 0-100 scale. This weighted composite reflects citation accuracy (20%), content accuracy (25%), framework completeness (20%), governance alignment (25%), and objectivity (10%). Scores above 75 indicate good governance alignment; scores below 50 suggest significant revision is needed.
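
As a concrete illustration, the weighted composite described above can be sketched as a simple calculation. The component names and weights come from this section; the `trust_score` function itself is a hypothetical sketch, not the analyzer's actual implementation.

```python
# Illustrative sketch only: weights are taken from the documentation;
# the function and score structure are assumptions, not the real engine.

# Weights for each Trust Score component (sum to 1.0).
WEIGHTS = {
    "citation_accuracy": 0.20,
    "content_accuracy": 0.25,
    "framework_completeness": 0.20,
    "governance_alignment": 0.25,
    "objectivity": 0.10,
}

def trust_score(components: dict[str, float]) -> int:
    """Combine 0-100 component scores into a 0-100 weighted composite."""
    return round(sum(WEIGHTS[name] * components[name] for name in WEIGHTS))

# Example: strong citations and objectivity, weaker framework coverage.
score = trust_score({
    "citation_accuracy": 80,
    "content_accuracy": 70,
    "framework_completeness": 60,
    "governance_alignment": 75,
    "objectivity": 90,
})
print(score)  # 73 — above the 50 "needs revision" line, near the 75 "good" line
```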

Citation Check

Identifies claims in the output and verifies whether they are properly supported by references. Reports total claims found, properly cited claims, and unsupported claims that may need authoritative sources added.

Hallucination Detection

Flags assertions that cannot be traced to framework controls or established facts. Identifies fabricated references, incorrect control numbers, invented requirements, or unverifiable quantitative claims. Each flag includes the problematic text and an explanation of why it was flagged.

Framework Coverage

Evaluates how well the output addresses controls from your selected framework. Shows which controls are referenced, which are missing, and the overall coverage percentage. Only available when a framework is selected—displays "N/A" when using the "None" option.
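
Conceptually, a coverage percentage like the one reported here boils down to the share of catalog controls that the text references. The sketch below illustrates that arithmetic with a tiny made-up slice of a catalog; the matching logic and helper are assumptions, not the analyzer's internals.

```python
# Hypothetical sketch of coverage mapping, not the analyzer's real logic.
import re

def framework_coverage(text: str, catalog: set[str]):
    """Return (referenced, missing, coverage %) for a set of control IDs."""
    found = {c for c in catalog if re.search(re.escape(c) + r"\b", text)}
    missing = catalog - found
    pct = 100 * len(found) / len(catalog) if catalog else 0.0
    return found, missing, pct

# Tiny illustrative slice of a control catalog.
catalog = {"AU-2", "AU-3", "AU-6"}
text = "The policy addresses AU-2 event logging and AU-6 audit review."
found, missing, pct = framework_coverage(text, catalog)
# found -> {"AU-2", "AU-6"}; missing -> {"AU-3"}; pct -> ~66.7
```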

Prescriptive Language Detection

Checks whether the output maintains objectivity or uses prescriptive/absolute language. Particularly important for auditors who must avoid telling auditees what to do. Flags phrases like "you must," "you should," or "the only way" and suggests neutral alternatives.
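
A minimal sketch of this kind of check, built from the example phrases quoted above: scan for prescriptive phrases and pair each hit with a neutral alternative. The phrase list and suggested rewrites are illustrative assumptions; the analyzer's actual rules may differ.

```python
# Illustrative prescriptive-language check; phrases/suggestions are assumptions.
import re

# Prescriptive phrase pattern -> suggested neutral alternative.
PRESCRIPTIVE = {
    r"\byou must\b": "the control requires organizations to",
    r"\byou should\b": "organizations may consider",
    r"\bthe only way\b": "one approach is",
}

def flag_prescriptive(text: str) -> list[tuple[str, str]]:
    """Return (matched phrase, suggested alternative) pairs."""
    flags = []
    for pattern, suggestion in PRESCRIPTIVE.items():
        for match in re.finditer(pattern, text, re.IGNORECASE):
            flags.append((match.group(0), suggestion))
    return flags

flags = flag_prescriptive("You must always log all events; the only way is a SIEM.")
# -> [("You must", ...), ("the only way", ...)]
```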

Governance Gap Analysis

Detailed breakdown of governance constraint checks including: speculation, assumptions, absolute statements, strong opinions, sensitive content, legal advice, and framework citation requirements. Each gap shows whether it was detected and its severity level.
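
One way the per-check results described above could be represented is a simple record per constraint. The check names come from this section; the data structure itself is an assumption for illustration, not the analyzer's actual schema.

```python
# Hypothetical representation of gap-analysis results.
from dataclasses import dataclass

@dataclass
class GovernanceGap:
    check: str        # e.g. "speculation", "absolute statements"
    detected: bool
    severity: str     # e.g. "none", "low", "medium", "high"

CHECKS = [
    "speculation", "assumptions", "absolute statements", "strong opinions",
    "sensitive content", "legal advice", "framework citation requirements",
]

# A clean report: every check ran, nothing detected.
report = [GovernanceGap(check=c, detected=False, severity="none") for c in CHECKS]
detected = [g for g in report if g.detected]
```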

Original vs. Governed Comparison

When your Trust Score is below threshold, the analyzer shows a side-by-side comparison of your original AI output and a MIKE-governed version. This demonstrates how the governance engine would adjust language, add citations, and remove problematic statements to improve compliance alignment.

Supported Frameworks

TIERverify™ Analyzer supports evaluation against the regulatory and compliance frameworks below. Each entry reflects the control catalog used for coverage mapping in the analyzer. Additional frameworks and reference connectors may be added over time.

NIST 800-53 Rev 5

Security and Privacy Controls for Information Systems

1189 controls

Common use cases: Federal agencies, government contractors, high-security environments

NIST 800-171 Rev 3

Protecting Controlled Unclassified Information

110 controls

Common use cases: DoD contractors, CUI handling organizations

NIST CSF 2.0

NIST Cybersecurity Framework (CSF 2.0)

6 controls (CSF's six top-level Functions)

Common use cases: Enterprise risk programs, executive reporting, cross-sector alignment

FedRAMP

Federal Risk and Authorization Management Program

325 controls

Common use cases: Cloud service providers for federal agencies

CMMC 2.0

Cybersecurity Maturity Model Certification

110 controls

Common use cases: Defense Industrial Base contractors

SOC 2

Service Organization Control 2 Trust Principles

64 controls

Common use cases: SaaS providers, cloud services, technology companies

HIPAA

Health Insurance Portability and Accountability Act

54 controls

Common use cases: Healthcare providers, health plans, healthcare clearinghouses

PCI DSS v4.0

Payment Card Industry Data Security Standard

64 controls

Common use cases: Payment processors, merchants, financial services

ISO 27001:2022

Information Security Management System

93 controls

Common use cases: Global enterprises, international compliance, ISMS certification

GDPR

EU General Data Protection Regulation

99 controls

Common use cases: Organizations handling EU personal data, privacy compliance

EU AI Act

EU Artificial Intelligence Act (risk-based AI requirements)

25 controls

Common use cases: Organizations placing or deploying AI systems in the EU

NIST AI RMF

NIST Artificial Intelligence Risk Management Framework

27 controls

Common use cases: AI governance programs, trustworthy AI assessment, federal AI risk management

OECD AI Principles

OECD Principles on Artificial Intelligence

10 controls

Common use cases: International AI policy alignment, responsible AI stewardship

See It In Action

See how TIERverify™ Analyzer evaluates AI-generated content about NIST 800-53 AU-2 (Event Logging) requirements.

Sample AI Output (Ungoverned)
Trust Score: 23
"For AU-2 compliance, you must always log all system events without exception. According to NIST control AU-2.1(b), organizations should implement comprehensive logging. The only correct approach is to use a SIEM tool, which 95% of compliant organizations already do..."
Citations: 1/4 supported — fabricated control reference
Hallucinations: 2 detected — invented statistic, wrong control ID
Governance Gaps: 4 found — prescriptive language, absolutes

Summary: Contains fabricated control numbers, unsupported statistics, and prescriptive language that could create compliance risk.

MIKE-Governed Output
Trust Score: 94
"NIST 800-53 Rev 5 AU-2 requires organizations to identify events that the system is capable of logging (AU-2a) and coordinate the event logging function with other entities (AU-2c). Organizations may consider SIEM solutions based on their risk assessment and operational requirements..."
Citations: 4/4 supported — accurate control references
Hallucinations: 0 detected — all claims verifiable
Governance Gaps: 0 found — objective, qualified language

Summary: Accurate control citations, verifiable claims, and objective language suitable for compliance documentation.

The difference isn't the AI model — it's whether the output was evaluated through a governance layer. TIERverify™ Analyzer scores any AI output and shows you exactly where the gaps are.

Data Handling & Privacy

How your content is handled

When you are signed in, analysis text and results may be stored in your account (for example, analysis history and PDF exports) as described in our Privacy Policy. Requests are processed over encrypted connections. Analysis content is not written to local application disk during processing, and nothing is persisted in post-processing job storage.

Anonymized Usage Analytics

We collect anonymized, aggregated usage patterns (e.g., which frameworks are most requested, common role selections) to improve our governance engine. No personally identifiable information or client-specific content is included in these analytics.

AI Provider Data Policy

TIERverify™ uses commercial APIs from OpenAI, Anthropic, and other LLM providers for governance scoring. Under each provider's commercial API terms, your inputs and outputs are not used for model training and are not retained beyond the API request lifecycle.

Your Data, Your Control

Authenticated users can delete their account and all associated metadata at any time. Unauthenticated analyses leave no trace.

Infrastructure

The Analyzer runs on stateless serverless infrastructure over encrypted (TLS) connections. No analysis content is written to disk or other persistent storage during processing, and none is retained afterward.

Frequently Asked Questions

What AI outputs can I analyze?

Any text output from any AI system. Simply paste the response you want to evaluate into the analyzer. TIERverify™ works with outputs from ChatGPT, Claude, Gemini, Copilot, Llama, Mistral, internal enterprise AI systems, or any other AI tool.

Does TIERverify™ store my content?

When you are signed in, your analysis inputs and results may be stored in your account so you can access history and reports—see the Privacy Policy for details. Processing uses encrypted connections, and content is not written to local application disk during processing or persisted afterward. Third-party AI providers process prompts under their commercial API terms (not used for model training, not retained beyond the API request lifecycle per provider policies).

What does a low Trust Score mean?

A low Trust Score indicates gaps in governance alignment, not necessarily that the content is factually wrong. It means the output may lack proper citations, contain unverified claims, use inappropriate prescriptive language, or miss framework-specific requirements. The detailed breakdown shows exactly where improvements are needed.

Can I analyze outputs without selecting a framework?

Yes. Select "None" as your framework to run citation checking and hallucination detection without framework-specific governance scoring. This is useful when you want to assess general content quality without mapping to specific compliance controls.

What is the difference between roles?

Roles adjust which governance rules apply during analysis. An Auditor sees strict checks for prescriptive language (auditors cannot tell auditees what to do). A Developer sees implementation-focused checks. A Business User sees checks for sensitive content and legal advice. A General User gets a neutral analysis without role-specific constraints.

What does "MIKE-Governed" mean?

MIKE (Managed Inference Kernel Engine) is the governance orchestration engine that evaluates and remediates AI outputs. When your Trust Score is below threshold, the analyzer shows a "governed" version demonstrating how MIKE would adjust the output for compliance—replacing absolute language, adding citations, and removing prescriptive statements.

Can I export my results?

Yes. Analysis results can be exported as a PDF report containing your Trust Score, all analysis sections, and detailed findings. Use the "Export PDF" button on the results page.

Ready to start scoring AI outputs?

View Plans