G8TED Framework: Safe SOC Autonomy for Security Automation
A practical framework for governing AI agents and automation in the SOC with typed actions, guardrails, approvals, and Proof you can defend in review or audit.
TL;DR
G8TED is the decision layer for state-changing SOC automation.
It gives teams a shared, portable way to govern actions like identity.disable_user or email.purge_messages using:
- Typed actions (stable vocabulary)
- Policy outcomes (allow, require_approval, deny, shadow_only)
- Risk Reasons (compact “why risky right now” taxonomy)
- Proof (auditable record designed for review, replay, and audit)
If you are automating response actions (or letting an agent propose them), G8TED helps you move faster without losing control.
At a glance
- Defines typed actions, autonomy modes, guardrails, and auditable Proof for state-changing SOC automation.
- Introduces a standard Risk Reason taxonomy to drive approvals and guardrails.
- Shows how to adopt G8TED safely with shadow mode, risk-tier rollout, and explicit approvals.
G8TED (pronounced “gated”) is not an acronym. It refers to gated autonomy for state-changing SOC actions.
You are already letting AI and automation touch your SOC
The real question is not if, but under what guardrails.
AI agents and workflows can already disable users, isolate hosts, purge email, rotate keys, revoke sessions, and close tickets. Most teams are doing this without a shared standard for what “safe” means, how approvals should work, and what evidence must exist after the fact.
Over the last year, I have watched the pattern repeat:
- Huge upside in productivity
- Huge anxiety around loss of control, blast radius, and explainability
- Every team reinventing policies, runbooks, and “guardrails” from scratch
We have shared languages for threats and controls (MITRE ATT&CK, NIST CSF). What we do not have is a shared language for safe autonomy in the SOC.
This post is the canonical explainer on g8ted.org, and it will stay updated as the framework evolves.
What G8TED is (and is not)
What G8TED is
G8TED is a practical framework for governing state-changing actions in the SOC.
It answers three questions for any AI agent or automation:
- What is it allowed to do? Typed actions, scope, and risk tier.
- Under what conditions is it allowed to do it? Policies, approvals, and context.
- How do we prove it behaved safely? Proof records designed for review, audit, and replay.
What G8TED is not
G8TED is not a detection framework, not a SIEM replacement, and not a threat intel product.
It is the decision layer for safe execution.
The G8TED model in one page
1) Typed actions
A typed action is a normalized description of a state change, for example:
- identity.disable_user
- endpoint.isolate_host
- email.purge_messages
- access.revoke_sessions
Typed actions make policy portable. Instead of writing one-off rules for each tool, you write policy against a stable vocabulary.
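To make that concrete, here is a minimal sketch of how a typed action could be represented in code. The field names are my illustration, not part of the framework specification.

```python
from dataclasses import dataclass, field

@dataclass
class TypedAction:
    """A normalized, tool-agnostic description of a proposed state change."""
    action: str                                # stable action name, e.g. "identity.disable_user"
    target: dict                               # identifiers for what the action touches
    scope: dict = field(default_factory=dict)  # bounds on how far the action is allowed to reach

# The same vocabulary applies regardless of which IdP or email platform executes it.
disable_user = TypedAction(
    action="identity.disable_user",
    target={"user_id": "u-12345"},
)
purge_campaign = TypedAction(
    action="email.purge_messages",
    target={"message_query": "subject matches known phishing lure"},
    scope={"max_mailboxes": 500},
)
```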
2) Policy outcomes
For any proposed action, policy returns one of these outcomes:
- allow
- require_approval
- deny
- shadow_only (evaluate and record Proof, but do not execute)
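A minimal sketch of the four outcomes and how a caller might branch on them. The handle function is illustrative, not a defined API.

```python
from enum import Enum

class PolicyOutcome(Enum):
    ALLOW = "allow"
    REQUIRE_APPROVAL = "require_approval"
    DENY = "deny"
    SHADOW_ONLY = "shadow_only"  # evaluate and record Proof, but do not execute

def handle(outcome: PolicyOutcome) -> None:
    """Illustrative dispatch on the four outcomes."""
    if outcome is PolicyOutcome.ALLOW:
        pass  # execute, then record an execution receipt in Proof
    elif outcome is PolicyOutcome.REQUIRE_APPROVAL:
        pass  # route to a named approver before any execution
    elif outcome is PolicyOutcome.SHADOW_ONLY:
        pass  # record Proof only; never execute
    else:  # PolicyOutcome.DENY
        pass  # block and record the deny reason
```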
3) Autonomy modes
Autonomy mode describes how far automation is allowed to go in an environment:
- Shadow (no execution)
- Assist (human executes)
- Autopilot (automation executes when allowed)
- Deny (automation is blocked for this action/context)
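Building on the PolicyOutcome enum above, here is one plausible way to model autonomy modes and let the environment's mode tighten (never loosen) the policy outcome. The cap logic is my assumption, not something the framework mandates.

```python
from enum import Enum

class AutonomyMode(Enum):
    SHADOW = "shadow"        # no execution
    ASSIST = "assist"        # human executes
    AUTOPILOT = "autopilot"  # automation executes when allowed
    DENY = "deny"            # automation is blocked for this action/context

def cap_outcome(mode: AutonomyMode, outcome: PolicyOutcome) -> PolicyOutcome:
    """Assumed interaction: the environment's mode can only tighten the policy outcome."""
    if mode is AutonomyMode.DENY:
        return PolicyOutcome.DENY
    if mode is AutonomyMode.SHADOW:
        return PolicyOutcome.SHADOW_ONLY
    if mode is AutonomyMode.ASSIST and outcome is PolicyOutcome.ALLOW:
        return PolicyOutcome.REQUIRE_APPROVAL  # a human still performs the change
    return outcome
```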
4) Proof
Proof is the auditable record of what was proposed, what context was used, why the decision was made, who approved (if required), and what actually changed.
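A sketch of what a Proof record could carry, with fields chosen to mirror the Minimum Proof bar later in this post. The exact schema is an assumption.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Proof:
    """Auditable record of one decision: what was proposed, why, and what changed."""
    proposed_action: dict                     # typed action, targets, and scope as proposed
    evidence_refs: list[str]                  # stable IDs for alerts, queries, timestamps
    risk_reasons: list[str]                   # Risk Reasons evaluated (may be empty)
    policy_bundle: str                        # policy bundle ID + version
    outcome: str                              # allow / require_approval / deny / shadow_only
    rule_reason: str                          # which rule produced the outcome, and why
    approver: Optional[dict] = None           # identity, role, timestamp, rationale (if required)
    execution_receipt: Optional[dict] = None  # what actually changed, plus a rollback pointer
    executed: bool = False                    # explicit "not executed" marker when False
```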
A standard Risk Reason taxonomy
Most SOC automation failures are not caused by bad intentions.
They come from missing guardrails around why an action is risky in this context.
G8TED includes a compact, standardized Risk Reason taxonomy that plugs directly into policy decisions and Proof. It gives teams a shared vocabulary for the decision layer’s “this is risky right now” signals, so policies stay consistent across tools and implementations.
What a Risk Reason is
A Risk Reason is a normalized label attached to an action proposal that captures why this action may require stricter controls in the current context.
Think of it as the decision layer’s “why this is risky” vocabulary.
Risk Reason list (v1)
Below is the v1 taxonomy. It is intentionally small so teams can actually adopt it.
| Risk Reason | What it means (plain language) |
|---|---|
| high_blast_radius | The action impacts many identities, hosts, mailboxes, or systems |
| irreversible_or_costly_to_reverse | Rollback is hard, slow, or incomplete |
| tier0_or_vip_impact | Could affect privileged identities, executives, or critical assets |
| weak_attribution | Confidence is low on who or what is actually responsible |
| incomplete_evidence | Required evidence is missing (logs, timestamps, chain-of-custody, etc.) |
| active_incident_war_room | Action conflicts with active incident command process |
| potential_data_exfiltration | Context suggests risk of data loss or extortion leverage |
| privilege_escalation_suspected | Indicators suggest access is being expanded or abused |
| lateral_movement_suspected | Indicators suggest spread across systems or identities |
| automation_input_tampering | Inputs to the automation may be manipulated or untrusted |
| tool_scope_mismatch | Proposed scope exceeds what the case warrants |
| policy_exception_required | Requires an explicit exception to baseline policy |
| compliance_or_legal_hold | Action may violate retention, legal hold, or regulated process |
| safety_model_uncertainty | The agent is uncertain, contradictory, or cannot justify steps |
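If you are wiring the taxonomy into tooling, the v1 labels can live as a small, validated vocabulary. A minimal sketch:

```python
# v1 Risk Reason taxonomy as stable string labels
RISK_REASONS_V1 = frozenset({
    "high_blast_radius",
    "irreversible_or_costly_to_reverse",
    "tier0_or_vip_impact",
    "weak_attribution",
    "incomplete_evidence",
    "active_incident_war_room",
    "potential_data_exfiltration",
    "privilege_escalation_suspected",
    "lateral_movement_suspected",
    "automation_input_tampering",
    "tool_scope_mismatch",
    "policy_exception_required",
    "compliance_or_legal_hold",
    "safety_model_uncertainty",
})

def validate_reasons(reasons: set[str]) -> None:
    """Reject labels outside the shared vocabulary so Proof stays comparable across tools."""
    unknown = reasons - RISK_REASONS_V1
    if unknown:
        raise ValueError(f"Unknown Risk Reasons: {sorted(unknown)}")
```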
Risk Reasons → Outcomes: a starter policy you can defend
This is a deliberately simple starting point. Teams can refine it by action type and risk tier.
| Risk Reason (or condition) | Default outcome | Why |
|---|---|---|
| incomplete_evidence | shadow_only | If you cannot defend the decision later, you are not safe, even if it worked. |
| tier0_or_vip_impact | require_approval | Mistakes on privileged or VIP targets are disproportionately costly. |
| high_blast_radius | require_approval | Bulk or wide-scope actions need explicit human accountability. |
| automation_input_tampering | deny | Untrusted inputs invalidate automation safety assumptions. |
| compliance_or_legal_hold | deny (or require_approval + explicit exception) | Some actions are not “riskier,” they are non-compliant. |
| tool_scope_mismatch | require_approval (or deny for forbidden scopes) | Prevents overreach and accidental outages. |
| safety_model_uncertainty | shadow_only | If the system cannot justify itself, it should not execute. |
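Here is the starter table as executable policy, in sketch form. The precedence order (deny over shadow_only over require_approval over allow) is my assumption for resolving multiple triggered reasons; tune it to your environment.

```python
# Starter mapping from the table above; anything not listed defaults to allow.
DEFAULT_OUTCOME_BY_REASON = {
    "incomplete_evidence": "shadow_only",
    "safety_model_uncertainty": "shadow_only",
    "tier0_or_vip_impact": "require_approval",
    "high_blast_radius": "require_approval",
    "tool_scope_mismatch": "require_approval",
    "automation_input_tampering": "deny",
    "compliance_or_legal_hold": "deny",
}

# Assumed precedence when several reasons fire at once: the most restrictive outcome wins.
PRECEDENCE = ["deny", "shadow_only", "require_approval", "allow"]

def starter_outcome(risk_reasons: set[str]) -> str:
    triggered = {DEFAULT_OUTCOME_BY_REASON.get(r, "allow") for r in risk_reasons}
    for outcome in PRECEDENCE:
        if outcome in triggered:
            return outcome
    return "allow"  # no Risk Reasons fired
```

Note that this sketch only covers Risk Reasons; forbidden actions and scopes still need their own explicit deny rules.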
Minimum Proof bar (v1)
Proof is not “logs exist somewhere.” Proof is the record you can hand to an incident review, audit, or postmortem and defend the decision.
For allow
Minimum Proof should include:
- Proposed typed action + target identifiers + scope
- Evidence references (alerts, queries, timestamps) with stable IDs
- Risk Reasons evaluated (including “none” if applicable)
- Policy bundle ID + version, outcome, and rule reason
- Execution receipt (what changed), plus rollback pointer if reversible
For require_approval
Everything in allow, plus:
- Approver identity + role, approval timestamp
- Approval rationale (short, structured)
- Any scope edits made during approval (before vs after)
- “Break-glass” indicator if used
For shadow_only
Minimum Proof should include:
- Proposed action + scope
- What evidence was missing (explicit list)
- Risk Reasons that forced shadow_only
- Policy bundle ID + version, outcome, and rule reason
- “Not executed” marker
For deny
Minimum Proof should include:
- Proposed action + scope
- The deny reason (Risk Reason or forbidden action/scope rule)
- Policy bundle ID + version
- “Not executed” marker
- If deny is due to input integrity issues, record the input source and failed integrity check
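To make the bar concrete, here is an illustrative allow-time Proof record as plain data. Every identifier below is invented.

```python
# Every identifier below is invented for illustration.
proof_allow_example = {
    "proposed_action": {
        "action": "identity.disable_user",
        "target": {"user_id": "u-12345"},
        "scope": {"sessions": "all_active"},
    },
    "evidence_refs": ["alert:SIEM-2041", "query:signin-logs-88f2", "ts:2025-12-18T10:42:00Z"],
    "risk_reasons": [],                    # "none" is recorded explicitly
    "policy_bundle": "soc-baseline@1.4.0",
    "outcome": "allow",
    "rule_reason": "evidence threshold met; scope limited to one non-Tier-0 identity",
    "execution_receipt": {
        "changed": "account disabled, active sessions revoked",
        "rollback": "re-enable account via change ticket CH-7781",
    },
    "executed": True,
}
```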
Two concrete examples
Example 1: Disable a user
Proposed action: identity.disable_user
Context: suspicious sign-in, confirmed credential stuffing, user is not Tier 0
Risk Reasons: weak_attribution (initially), then cleared after confirmation
Possible outcomes:
- shadow_only if incomplete_evidence
- require_approval if tier0_or_vip_impact
- allow when evidence threshold is met and scope is minimal
What Proof should capture:
- exact identity targeted
- evidence used (alerts, sign-in logs, timestamps)
- policy rule that allowed it
- who approved it (if needed)
- final state change + rollback steps if reversal is required
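Using the starter_outcome sketch from earlier, this is roughly how the same proposal flips outcomes as Risk Reasons change; the values are illustrative.

```python
# Initial triage: attribution is still weak and the evidence list is incomplete.
print(starter_outcome({"weak_attribution", "incomplete_evidence"}))  # shadow_only

# Same proposal, but the target turns out to be a Tier 0 or VIP identity.
print(starter_outcome({"tier0_or_vip_impact"}))                      # require_approval

# Credential stuffing confirmed, non-Tier-0 user, minimal scope: no reasons remain.
print(starter_outcome(set()))                                        # allow
```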
Example 2: Purge email at scale
Proposed action: email.purge_messages
Context: phishing campaign, identical IOC across many mailboxes
Risk Reasons: high_blast_radius, compliance_or_legal_hold
Likely outcome:
require_approval by default for bulk purge, with stricter evidence requirements
Proof must be strong here:
- exact query criteria for purge
- mailbox count and affected message count
- retention considerations and approvals
- change receipt and audit export
Risk Reasons in practice (next)
I’m publishing a follow-up post with 12 concrete scenarios where the same typed action flips outcomes based on Risk Reasons and Proof quality.
If you are implementing G8TED, this is where it gets operational.
How to adopt G8TED without breaking production
Start in shadow mode
Run actions through evaluation, log Proof, execute nothing.
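A minimal sketch of “evaluate, record Proof, execute nothing,” assuming the starter_outcome function above and a hypothetical assess_risk_reasons helper you would supply; names are placeholders.

```python
def run_in_shadow(action: dict, context: dict, proof_store: list) -> None:
    """Shadow mode: full evaluation and Proof, zero execution."""
    risk_reasons = assess_risk_reasons(action, context)  # hypothetical helper you supply
    would_have_been = starter_outcome(risk_reasons)
    proof_store.append({
        "proposed_action": action,
        "risk_reasons": sorted(risk_reasons),
        "outcome": "shadow_only",       # shadow mode overrides whatever policy returned
        "would_have_been": would_have_been,
        "executed": False,
    })
    # Intentionally no call into the IdP, EDR, or email platform here.
```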
Roll out by risk tier
Autopilot low-risk, reversible actions first.
Keep high-blast-radius and Tier 0 actions gated.
Make approvals explicit
Define who can approve what, and what “good enough evidence” looks like.
Treat Proof as a first-class output
If you cannot defend the decision later, you are not safe, even if it worked.
Help shape v1
G8TED v1 is a starting point. We will evolve it in public based on real deployments.
If you want to help, I want your pushback on:
- What Risk Reasons are missing?
- Which actions should never be eligible for autopilot in your environment?
- What mappings (NIST, MITRE, internal policy controls) would help you adopt faster?
Explore the framework at g8ted.org.
Optional: enforcement in production
I’m also building Neodyne Gateway, a safety and assurance gateway that enforces G8TED in front of your existing tools and AI agents. If you are exploring pilots for SOC automation or agentic response, I’m interested in design partners who want strong guardrails and Proof from day one.
Changelog
- 2025-12-18: Added Risk Reasons → Outcomes starter table and Minimum Proof bar (v1). Tightened TL;DR and clarified non-goals.
- 2025-12-13: Refined Risk Reason taxonomy section language and clarified “canonical explainer” framing.
- 2025-12-10: Initial canonical explainer published.