
Where G8TED fits: the action execution layer in the AI security stack

How G8TED composes with NIST AI RMF, CSA MAESTRO, MITRE ATLAS, OWASP Top 10 for LLM, and lifecycle security frameworks.

Sunny Kapoor · Dec 15, 2025, 12:00 PM MST · 12 min read
Tags: g8ted, soc, ai, governance, security


When people compare “AI security frameworks,” they often mix layers that solve different problems.

Here is the clean model:

  • NIST AI RMF helps govern AI risk at the enterprise level.
  • CSA MAESTRO helps threat model agentic and multi-agent systems.
  • MITRE ATLAS helps understand adversary tactics and techniques against AI.
  • OWASP Top 10 for LLM Applications helps engineers avoid common LLM app vulnerabilities.
  • SAIL helps teams think about AI security across the lifecycle.
  • G8TED governs state-changing SOC actions at runtime with outcomes, Risk Reasons, and audit-grade Proof.

If your question is “Can my agent disable an account, quarantine email, or isolate a host safely, and can I defend that decision later?”, that is the G8TED layer.

Related:
- Canonical explainer: /blog/g8ted-safe-soc-autonomy
- What G8TED is (and isn’t): /blog/what-is-g8ted
- G8TED Spec v1.0: /spec/v1.0

The problem: security teams need runtime action governance

In the SOC, automation becomes dangerous when it can:

  • change access
  • isolate systems
  • delete data
  • revoke keys
  • disrupt investigations or production uptime

Most teams have detections and playbooks. What they lack is a portable, auditable standard for:

  • what action is being proposed (typed action vocabulary)
  • when it is risky in context (Risk Reasons)
  • what the policy outcome is (allow, require_approval, deny, shadow_only)
  • what Proof must be recorded so the decision is defensible later

That is the gap G8TED fills.
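
To make that concrete, here is a minimal sketch of what an auditable decision record could look like. The field names and action strings are illustrative assumptions, not the normative schema from the G8TED Spec v1.0:

```python
from dataclasses import dataclass, field
from enum import Enum


class Outcome(str, Enum):
    # The four policy outcomes described above.
    ALLOW = "allow"
    REQUIRE_APPROVAL = "require_approval"
    DENY = "deny"
    SHADOW_ONLY = "shadow_only"


@dataclass
class DecisionRecord:
    # Illustrative shape only; field names are NOT the G8TED Spec v1.0 schema.
    action_type: str                   # typed action, e.g. "identity.disable_account"
    target: str                        # the object the action would change
    risk_reasons: list[str]            # contextual risk signals that fired
    outcome: Outcome                   # the policy result
    proof: dict = field(default_factory=dict)  # evidence recorded for later defense


record = DecisionRecord(
    action_type="identity.disable_account",
    target="user:vip-finance",
    risk_reasons=["tier0_or_vip_target"],
    outcome=Outcome.REQUIRE_APPROVAL,
    proof={"alert_id": "A-1234", "policy_version": "1.0"},
)
print(record.outcome.value)  # require_approval
```

The point is the shape: every state-changing action carries its typed name, the Risk Reasons that fired, the outcome, and the Proof needed to defend the decision later.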

The “stop comparing apples to oranges” table

Use this table to pick the right tool for the right layer, then compose them.

| Layer | What it governs | Example frameworks | Primary output artifact | How it composes with G8TED |
| --- | --- | --- | --- | --- |
| Enterprise AI risk governance | Org accountability, risk management process, trustworthy AI | NIST AI RMF | Policies, controls, governance process | Use it to define enterprise expectations; use G8TED to enforce action-level decisions in the SOC |
| Agentic AI threat modeling | Threat modeling for agentic or multi-agent systems | CSA MAESTRO | Threat model, risks, mitigations by layer | Use MAESTRO to find threats; use G8TED to gate and log SOC actions at runtime |
| Adversary technique knowledge base | How attackers target AI systems | MITRE ATLAS | Tactics, techniques, case studies | Use ATLAS for testing and scenarios; encode guardrails and outcomes in G8TED |
| LLM application vulnerability awareness | Common security issues in LLM apps | OWASP Top 10 for LLM Apps | Risk list and mitigations | Use OWASP to harden the app; use G8TED to govern high-impact actions the app proposes |
| Lifecycle AI security blueprint | Security from build to runtime to retirement | SAIL and similar | Lifecycle risks and mitigations | Use SAIL to organize the program; use G8TED for the SOC execution layer |

What “good” looks like in practice

A strong stack uses multiple layers at once:

  1. Engineering uses OWASP guidance to reduce prompt injection, insecure output handling, and excessive-agency risks.
  2. Security teams use ATLAS techniques to test how an attacker would bypass controls.
  3. Architects threat model agentic systems with MAESTRO to identify cross-layer risks.
  4. Leadership uses NIST AI RMF and lifecycle frameworks to set governance expectations.
  5. The SOC uses G8TED to make runtime decisions for state-changing actions: allow, require_approval, deny, shadow_only.

A concrete composition workflow

Here is a practical sequence you can run this month:

Step 1: Threat model the agent and tool chain (MAESTRO)

Identify where an attacker can influence:

  • inputs (tickets, alerts, webhook payloads, prompts)
  • tool calls (overbroad scopes, weak authorization)
  • memory and retrieval (poisoning)
  • outputs (unsafe execution paths)
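
One lightweight way to capture this step’s output is a mapping from influence points to the runtime Risk Reasons they should trip. A sketch, where the surface names and the `excessive_scope` label are hypothetical, not MAESTRO or G8TED vocabulary:

```python
# Hypothetical mapping from threat-model findings to runtime risk signals.
# "input_tampering", "incomplete_evidence", and "blast_radius" echo the Risk
# Reasons mentioned in Step 4 below; "excessive_scope" is an assumed label.
INFLUENCE_POINTS = [
    {"surface": "inputs",     "example": "webhook payloads",    "risk_reason": "input_tampering"},
    {"surface": "tool_calls", "example": "overbroad scopes",    "risk_reason": "excessive_scope"},
    {"surface": "memory",     "example": "retrieval poisoning", "risk_reason": "incomplete_evidence"},
    {"surface": "outputs",    "example": "unsafe execution",    "risk_reason": "blast_radius"},
]

for point in INFLUENCE_POINTS:
    print(f'{point["surface"]:>10} -> {point["risk_reason"]}')
```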

Step 2: Red-team and scenario test (ATLAS)

Pick 10 adversary techniques relevant to your environment and test:

  • can an attacker trigger a destructive action?
  • can they manipulate the agent’s scope?
  • can they cause false attribution?
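
Each scenario is more useful as a repeatable regression test than as a one-off exercise. Below is a pytest-style sketch; `evaluate_action` is a hypothetical stand-in for whatever policy-evaluation entry point your implementation exposes, not a published G8TED API:

```python
# Hypothetical red-team regression tests against the decision layer.
# `evaluate_action` is a toy stand-in, not a published G8TED API.

def evaluate_action(action_type: str, risk_reasons: list[str]) -> str:
    """Toy policy: destructive actions with tampering signals are denied."""
    if "input_tampering" in risk_reasons:
        return "deny"
    if action_type.startswith("host.isolate"):
        return "require_approval"
    return "allow"


def test_attacker_cannot_trigger_destructive_action():
    # An attacker-controlled ticket should never reach direct execution.
    outcome = evaluate_action("data.delete", risk_reasons=["input_tampering"])
    assert outcome == "deny"


def test_host_isolation_requires_human_approval():
    outcome = evaluate_action("host.isolate", risk_reasons=[])
    assert outcome == "require_approval"
```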

Step 3: Harden the LLM app and integrations (OWASP)

Reduce the most common failure modes:

  • prompt injection paths
  • insecure output handling
  • excessive agency and unsafe tool permissions
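
For the excessive-agency and output-handling items, a common pattern is to validate every model-proposed tool call against an explicit allowlist before it reaches an executor. A minimal sketch with hypothetical tool names:

```python
# Minimal output-handling guard: never execute a model-proposed tool call
# that is not on an explicit allowlist with validated arguments.
# Tool names and argument rules here are hypothetical.
ALLOWED_TOOLS = {
    "quarantine_email": {"message_id"},  # required argument keys
    "lookup_user":      {"username"},
}


def validate_tool_call(name: str, args: dict) -> None:
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not allowlisted")
    missing = ALLOWED_TOOLS[name] - args.keys()
    extra = args.keys() - ALLOWED_TOOLS[name]
    if missing or extra:
        raise ValueError(f"bad arguments for {name!r}: missing={missing}, extra={extra}")


# A model proposing an un-allowlisted destructive tool fails closed.
try:
    validate_tool_call("delete_mailbox", {"user": "alice"})
except PermissionError as exc:
    print(exc)  # tool 'delete_mailbox' is not allowlisted
```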

Step 4: Enforce runtime action governance (G8TED)

Define typed actions you care about and set:

  • default outcomes by action type and risk tier
  • Risk Reason overrides (blast radius, Tier0/VIP, incomplete evidence, input tampering)
  • minimum Proof fields per outcome
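
In practice this becomes a small policy table plus a rule for combining defaults with overrides. The sketch below shows one possible shape; the action names, tiers, strictness ordering, and Proof fields are illustrative assumptions, not the G8TED Spec v1.0 schema:

```python
# Illustrative G8TED-style policy table. Names are hypothetical; see
# /spec/v1.0 for the normative vocabulary.
DEFAULT_OUTCOMES = {
    # (action_type, risk_tier) -> default outcome
    ("email.quarantine", "low"):           "allow",
    ("email.quarantine", "high"):          "require_approval",
    ("identity.disable_account", "low"):   "require_approval",
    ("identity.disable_account", "high"):  "deny",
}

# Risk Reason overrides tighten, never loosen, the default outcome.
OVERRIDES = {
    "tier0_or_vip_target": "require_approval",
    "input_tampering":     "deny",
    "incomplete_evidence": "shadow_only",
}

# Minimum Proof fields to record per outcome.
MIN_PROOF_FIELDS = {
    "allow":            ["alert_id", "policy_version"],
    "require_approval": ["alert_id", "policy_version", "approver"],
    "deny":             ["alert_id", "policy_version", "deny_reason"],
    "shadow_only":      ["alert_id", "policy_version", "simulated_effect"],
}

# One possible strictness ordering, least to most restrictive.
STRICTNESS = ["allow", "shadow_only", "require_approval", "deny"]


def decide(action_type: str, risk_tier: str, risk_reasons: list[str]) -> str:
    """Pick the strictest of the default outcome and any fired overrides."""
    # Unknown action/tier pairs fail closed.
    outcome = DEFAULT_OUTCOMES.get((action_type, risk_tier), "deny")
    for reason in risk_reasons:
        override = OVERRIDES.get(reason)
        if override and STRICTNESS.index(override) > STRICTNESS.index(outcome):
            outcome = override
    return outcome


print(decide("email.quarantine", "low", ["input_tampering"]))  # deny
```

Two design choices worth noting in the sketch: unknown action/tier pairs fail closed to deny, and Risk Reason overrides can only tighten an outcome, never loosen it.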

Step 5: Align with enterprise governance (NIST AI RMF)

Map the operational controls back to enterprise expectations:

  • accountability and approvals
  • monitoring and measurement
  • incident response and review
  • continuous improvement

The key claim

These frameworks are not substitutes. They are layers.

G8TED is the standard for action execution governance in the SOC: typed actions, outcomes, Risk Reasons, and audit-grade Proof.

Everything else can be upstream input to that decision layer.

FAQ

Is G8TED “an AI governance framework”?

Not in the enterprise sense. G8TED is action governance for SOC execution.

Is MAESTRO a competitor to G8TED?

No. MAESTRO is threat modeling for agentic systems. G8TED is runtime policy and Proof for state-changing SOC actions.

Do I need ATLAS and OWASP if I use G8TED?

Yes, if you are building or deploying agentic AI. They help you reduce risk in the system. G8TED governs what the SOC is allowed to execute.

Changelog

  • 2025-12-18: Initial post published.