Frameworks & Controls

Automated AI governance aligned to global standards.

Map live AI activity, DLP, Shadow AI, and policy posture directly to industry frameworks with automated scoring and evidence.

Automated scoring · Live evidence · Framework library · Shadow AI aware
[Screenshot: Frameworks overview dashboard]

What Are Frameworks & Controls?

Frameworks bundle governance and compliance requirements; controls are the specific obligations Kairro measures automatically.

Frameworks

Structured requirements

NIST AI RMF, ISO 42001, OECD, EU AI Act mappings, and the Kairro Baseline Framework.

Controls

Evidence-backed obligations

Control IDs, status, score, mapped signals (policies, events, Shadow AI, incidents, tooling), and evidence.

Automated mapping

Live operational signals

Policies, events, DLP detections, Shadow AI signals, incidents, and tool coverage drive control scores automatically.

Framework Library

Curated, system-managed frameworks plus custom frameworks tailored to your program.

System frameworks

  • NIST AI Risk Management Framework
  • ISO 42001
  • OECD AI Principles
  • EU AI Act mappings
  • Kairro Baseline Framework

Pre-loaded, protected, auto-updated, and aligned to automated scoring.

Custom frameworks

Add org-specific frameworks and controls, with editable fields and manual scoring where permitted.

  • Create and edit controls
  • Map to policies and signals
  • Blend manual and automated evidence

Governance guardrails

System items are read-only for mapped areas; custom items remain flexible within RBAC.

  • Cannot delete system frameworks/controls
  • Protected mappings for auto scoring
  • Continuous updates as standards evolve

Controls

Each control carries IDs, mappings, scoring, evidence, incidents, and coverage indicators.

Control metadata
  • Control ID and description
  • Status, score, and coverage
  • Incident counts and mappings
Mappings & evidence
  • Policies, events, Shadow AI filters
  • Automated evidence plus manual (where allowed)
  • Tooling coverage and endpoint footprint
System vs custom
  • System controls: auto-scored & protected
  • Custom controls: editable and manually scoreable
  • Audit events for transparency
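The control attributes above can be sketched as a simple record type. This is an illustrative shape only, assuming the fields described in this section; the names are not Kairro's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ControlRecord:
    """Illustrative control record; field names are assumptions."""
    control_id: str                 # e.g. a framework-specific identifier
    description: str
    status: str                     # e.g. "IMPLEMENTED", "IN PROGRESS"
    score: int                      # 0-100 automated score
    is_system: bool                 # system controls are protected and auto-scored
    mapped_policies: list[str] = field(default_factory=list)
    incident_count: int = 0
    evidence_count: int = 0

# A custom control starts with no mappings or evidence until configured.
example = ControlRecord(
    control_id="CUSTOM-01",
    description="Review AI vendor contracts quarterly",
    status="NOT STARTED",
    score=0,
    is_system=False,
)
```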

Browsing Frameworks & Controls

Purpose-built pages for frameworks, controls, and detailed evidence views.

Frameworks Page (/frameworks)

Lists framework name, category, control count, percentage complete, and current governance posture. Selecting a framework opens its controls.

Controls Page (/frameworks/:id/controls)

Shows control IDs, implementation status, score, evidence counts, and incident/shadow signals, with a permissions-aware "Add Control" action and back navigation.

Control Detail (/controls/:controlId)

Description, automated and manual evidence, mapped policies, event linkage, Shadow AI detections, incident impacts, tooling coverage, score breakdown, and audit events.

[Screenshot: Controls listing view]
[Screenshot: Control detail view]

How Scoring Works

Each control receives a 0–100 score from weighted signal categories; status is derived from score thresholds.

Signal weights (max)
  • Policies: 35
  • Enforcement: 25
  • Incidents: 15
  • Shadow AI: 15
  • Tooling: 8
  • Risk Threshold: 2
Status levels
  • ≥ 90 → IMPLEMENTED
  • ≥ 70 → PARTIALLY IMPLEMENTED
  • ≥ 40 → IN PROGRESS
  • < 40 → NOT STARTED

System controls use automated scoring; manual adjustments apply only to custom controls.
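The weights and thresholds above can be sketched as a small scoring function. This is a minimal illustration assuming each signal category contributes a 0.0–1.0 coverage fraction; the function names and input shape are assumptions, not Kairro's actual implementation.

```python
# Documented maximum contribution of each signal category (sums to 100).
SIGNAL_WEIGHTS = {
    "policies": 35,
    "enforcement": 25,
    "incidents": 15,
    "shadow_ai": 15,
    "tooling": 8,
    "risk_threshold": 2,
}

def control_score(signal_fractions: dict[str, float]) -> int:
    """Combine per-signal coverage fractions (0.0-1.0) into a 0-100 score."""
    total = sum(
        SIGNAL_WEIGHTS[name] * signal_fractions.get(name, 0.0)
        for name in SIGNAL_WEIGHTS
    )
    return round(total)

def status_for(score: int) -> str:
    """Map a score onto the documented status thresholds."""
    if score >= 90:
        return "IMPLEMENTED"
    if score >= 70:
        return "PARTIALLY IMPLEMENTED"
    if score >= 40:
        return "IN PROGRESS"
    return "NOT STARTED"
```

For example, full policy coverage with partial enforcement and no other signals would land a control in the IN PROGRESS band.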

Score drivers

Weighted by policies, enforcement outcomes, incidents, Shadow AI signals, tooling, and org risk thresholds.

Signal Inputs

Controls map to policy coverage, events, Shadow AI, incidents, and tooling coverage.

Policy coverage

Active policies aligned to the control, including IDs, names, defaults, enforcement strength, and relevance.

Events & DLP

DLP severities, block/warn/mask actions, high-risk interactions, and policy outcomes; event types are inferred when missing.

Shadow AI

Unapproved hosts, discovered tools, and high-severity Shadow AI events, when mapped to the control.

Incidents

Linked incidents apply severity-based penalties; recurring issues reduce scores until resolved.

Tooling coverage

Approved vs unapproved tools, endpoint footprint, licensing, and risk threshold alignment.

Automated Evidence

Evidence regenerates on scoring recompute, stays immutable, and blends with manual inputs where allowed.

Automated
Live signal rollups

Policy coverage summaries, enforcement insights, Shadow AI detections, incidents, tooling and endpoint coverage.

  • Recompute: Regenerated with each scoring run
  • Context: Policies, events, DLP, shadow, incidents, tooling
Integrity
Immutable audit trail

Automated evidence is immutable; manual evidence is allowed only for custom controls and within role permissions.

  • Immutable: System-generated evidence locked
  • Manual: Allowed for custom controls when permitted

RBAC & Integrity Controls

System frameworks remain protected; custom items stay flexible within permissions with full auditability.

Guardrails
System protections
  • Viewers are read-only
  • System frameworks/controls cannot be deleted; mappings locked
  • System scoring is backend-managed and read-only
Permissions
Custom flexibility
  • Custom controls editable within RBAC
  • Manual evidence restricted to allowed roles
  • Audit events ensure transparency

Recompute & Data Freshness

Scores refresh on schedule and on-demand; the UI always shows fresh data.

Admin recompute endpoint
/v1/admin/frameworks/recompute

Triggers scoring recompute; automated evidence updates with each run.
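A recompute trigger might look like the following sketch. Only the endpoint path comes from this page; the base URL, bearer-token auth scheme, and POST method are assumptions about the deployment.

```python
import urllib.request

def build_recompute_request(base_url: str, token: str) -> urllib.request.Request:
    """Build a request to the admin recompute endpoint.

    The base URL and Bearer auth are illustrative assumptions;
    only the /v1/admin/frameworks/recompute path is documented.
    """
    return urllib.request.Request(
        f"{base_url}/v1/admin/frameworks/recompute",
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )

def trigger_recompute(base_url: str, token: str) -> int:
    """Send the request and return the HTTP status of the response."""
    req = build_recompute_request(base_url, token)
    with urllib.request.urlopen(req) as resp:
        return resp.status
```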

Freshness

Page loads pull updated scores, signal interpretations, evidence, incidents, and shadow counts.

Why This Matters

A live governance dashboard with automated evidence tied to AI signals; accurate scoring across policies, DLP, Shadow AI, and incidents; and clear visibility into the gaps to close against major AI standards. Governance becomes an operational capability, not a point-in-time exercise.