This guide helps audit teams master policy as code for compliance, covering OPA, ISO 27001 evidence workflows, and practical adoption.

Two weeks before an ISO audit, the same pattern shows up in organization after organization. The compliance lead has a spreadsheet open, engineering is digging through tickets, someone is asking for screenshots from a system that changed last month, and nobody is fully sure whether the evidence still reflects the current state.
The stress does not come from the audit alone. It comes from the gap between how controls are written and how systems run. Policies live in PDFs, procedures live in shared drives, and proof of enforcement lives in scattered logs, screenshots, and tribal knowledge. When an auditor asks, “How do you know this control operated consistently?” the answer is often a bundle of documents assembled under pressure.
Policy as code becomes useful in precisely this context. Not as a developer trend. Not as another platform project. As a practical way to make governance testable, repeatable, and easier to prove.
A familiar audit cycle starts with confidence and ends with a scramble.
At the beginning of the year, teams approve policies, update procedures, and assign control owners. By the time the audit window approaches, reality has drifted. Cloud resources changed. New services were deployed. Exceptions were granted in chat threads. Screenshots were captured months ago. The written policy still says one thing, but the environment may be doing another.
A compliance manager asks engineering for proof that storage is encrypted by default. Engineering sends a current configuration screenshot. The auditor then asks whether that control was operating throughout the period under review. Now the team needs historical evidence, not a point-in-time image.
A quality lead in a regulated environment asks for proof that only approved deployment patterns are allowed in production. The platform team points to a documented standard. The auditor asks for evidence that the standard is enforced. The room goes quiet for a moment because documentation is not enforcement.
Manual compliance breaks down in predictable ways.
Auditors are not being difficult when they ask for timestamps, approvals, logs, and change history. They are trying to answer a basic question. Was the control designed well, and did it operate as intended?
Good compliance evidence is not just documentation. It is proof that a requirement was enforced, reviewed, and traceable over time.
If your current process depends on screenshots, email chains, and manual attestations, the work multiplies as systems change faster. The problem is not effort alone. It is reliability. Human-driven evidence collection tends to be inconsistent, and inconsistency is exactly what audits expose.
Policy as code turns written requirements into machine-readable rules that systems can evaluate automatically.
Consider it akin to building codes for software delivery and cloud operations. A building code says what must be true before a structure is considered safe. Policy as code does the same for digital systems. It defines the conditions that must be true before infrastructure is deployed, workloads are admitted, or changes are approved.

Most organizations start with policies in documents. That is normal. ISO clauses, SOPs, security standards, and internal procedures are written for humans to read.
The problem is that static documents depend on humans to remember, interpret, and apply them every time a change happens. Policy as code moves some of those requirements into executable logic. A rule such as “new storage must be encrypted” stops being a sentence in a PDF and becomes a test that runs before a deployment goes live.
That shift matters because it changes governance from advisory to enforceable.
Compliance teams sometimes hear “as code” and assume this is only for developers. That is too narrow.
Policy as code is not about replacing policy owners, auditors, or quality reviewers. It is not a claim that every requirement can be reduced to a binary rule. It is a method for codifying the parts of governance that are objective enough to test consistently.
Examples include requiring encryption on new storage, blocking public exposure of resources, mandating identifying tags, and enforcing workload resource limits. It works best where the rule is specific and the evidence can be collected directly from the system.
Policy as code gained prominence around 2018 with the maturation of DevSecOps practices, and Open Policy Agent emerged around 2016 as a leading open-source engine. By 2025, Styra’s explanation of policy as code and OPA described adoption by more than 50% of Fortune 500 companies for cloud-native policy enforcement.
That adoption happened because manual review does not scale well. A policy review process that works for a handful of teams becomes a bottleneck when changes are happening across many teams and environments.
Policy as code usually has three moving parts:
The rule
The requirement written in a policy language such as Rego, YAML, or a vendor-specific format.
The input
The thing being evaluated, such as a Terraform plan, a Kubernetes manifest, or a runtime configuration.
The decision
The output from a policy engine that says allow, deny, or warn.
For compliance teams, the key point is simple. Policy as code makes controls operational. Instead of asking whether engineers remembered the rule, the system checks the rule every time.
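Those three parts can be sketched in a few lines of Rego, OPA's policy language. This is a hypothetical illustration, not a production policy: the rule requires an `owner` tag, and the input shape is an assumption made for the example.

```rego
package example.tags

import rego.v1

# The rule: every resource in the input must carry an "owner" tag.
deny contains msg if {
    some resource in input.resources
    not resource.tags.owner
    msg := sprintf("resource %q is missing the required 'owner' tag", [resource.name])
}

# The input (supplied by the caller) might be:
#   {"resources": [{"name": "logs-bucket", "tags": {"env": "prod"}}]}
#
# The decision: evaluating "deny" yields a set of violation messages.
# An empty set means the change is allowed.
```

The engine evaluates the input against the rule and returns the decision; everything else in a policy-as-code toolchain is plumbing around those three parts.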
The most useful way to view policy as code is as a control enforcement layer. It sits between intent and deployment and verifies that required conditions are met.
The biggest value of policy as code for compliance teams is not developer convenience. It is evidence quality.

When a control is enforced automatically, the system creates proof as part of normal operations. You are no longer trying to reconstruct what happened before an audit. You are preserving it as the work happens.
The financial gap between reactive and disciplined governance is large. The average total cost of non-compliance reaches $14.82 million, compared to roughly $5.47 million for maintaining compliance, according to Platform Engineering’s analysis of policy as code and compliance economics.
That number should not be read as “buy a tool and the problem disappears.” It should be read as a reminder that weak control enforcement is expensive. The more your environment changes, the more costly manual verification becomes.
Teams exploring approaches to automate regulatory compliance usually discover the same thing quickly. Payoff comes when controls are enforced during delivery, not inspected after the fact.
A screenshot is easy to produce and easy to challenge.
A version-controlled policy, tied to a pull request, approved by named reviewers, executed in a pipeline, and logged with a timestamp, is much harder to dispute. It shows design, approval, enforcement, and outcome in one chain.
That gives compliance teams several practical advantages:
A single source of truth
The current policy is the one in version control, not an outdated attachment in someone’s inbox.
Continuous control checking
Evidence is generated every time the relevant change occurs, not only during audit prep.
Clear change history
You can show when a rule changed, who approved it, and what systems it affected.
Better exception handling
Teams can document when a policy was bypassed, by whom, and under what approval process.
Engineering teams often describe policy as code as “shift-left.” Compliance teams should translate that into their own language: finding and stopping non-compliant changes before they become audit findings.
That changes the audit conversation. Instead of proving you found issues after deployment, you can prove your control design prevented bad configurations from entering production.
A short technical walkthrough helps make that concrete: a rule lives in version control, a proposed change triggers a pipeline run, the policy engine evaluates the change against the rule, and the allow-or-deny decision is logged with a timestamp.
The operational difference is straightforward.
Before policy as code, compliance teams ask for evidence after work is complete. With policy as code, the environment itself produces enforcement records. That means fewer ad hoc requests to engineering, fewer one-off screenshots, and less debate over whether evidence is current.
If your audit process depends on asking engineers to “grab the latest screenshot,” your control evidence is weaker than you think.
It also improves governance discussions. Once a policy is executable, stakeholders are forced to clarify vague requirements. “Secure configuration” is too fuzzy to automate. “Block public object storage and require encryption” is specific enough to review, test, and audit.
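That second requirement is specific enough to write down as code. As a hedged sketch, assuming an AWS-style Terraform plan as input (the attribute names are illustrative and vary by provider version):

```rego
package storage.public_access

import rego.v1

# Deny S3 buckets whose ACL grants public access.
deny contains msg if {
    some rc in input.resource_changes
    rc.type == "aws_s3_bucket"
    rc.change.after.acl in {"public-read", "public-read-write"}
    msg := sprintf("bucket %q must not use a public ACL", [rc.address])
}
```

Notice what writing the rule forced: a definition of "public" (two specific ACL values) that stakeholders can now review, test, and audit.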
The tooling can look intimidating at first, but the architecture is simpler than most compliance teams expect. Policy as code relies on a small set of roles that repeat across products.
Policy language
This is how the rule is written. Rego is the best-known example in the open-source ecosystem. Other platforms use their own formats.
Policy engine
This is the decision-maker. It evaluates the input against the rule and returns allow, deny, or similar outcomes.
Input data
This is what the engine reads. It might be a Terraform plan, a Kubernetes manifest, or a cloud configuration snapshot.
Enforcement point
This is where the decision takes effect. Common enforcement points are CI/CD pipelines, admission controllers in Kubernetes, and runtime monitoring layers.
For compliance teams, the enforcement point is especially important because it determines what kind of evidence you can collect. A rule that runs only in a design review meeting gives you weak evidence. A rule that runs automatically before deployment gives you stronger evidence.
Open Policy Agent (OPA) is one of the most common policy engines. It uses Rego, a declarative language designed for expressing rules about structured data.
A practical example comes from Wiz’s policy as code guide with OPA and Rego. It shows a policy that denies an unencrypted S3 bucket in a Terraform plan. Executed through Conftest in a CI/CD pipeline, this kind of rule can automatically block non-compliant infrastructure changes and reduce human error significantly.
That matters because the rule is not buried in prose. It is executable. If a developer proposes an unencrypted bucket, the pipeline can stop the change before deployment.
| Tool | Governing Language | Primary Use Case | Best For |
|---|---|---|---|
| Open Policy Agent | Rego | Cross-platform policy decisions for infrastructure, Kubernetes, APIs, and pipelines | Teams that want flexibility and broad ecosystem support |
| Conftest | Rego via OPA policies | Testing configuration files and IaC in CI/CD | Teams that want simple pipeline checks without building a full platform |
| HashiCorp Sentinel | Sentinel | Policy enforcement in HashiCorp workflows | Organizations invested in Terraform and HashiCorp products |
| AWS Config Rules | Native AWS rule mechanisms | Detecting and evaluating AWS resource compliance | Teams that need cloud-native checks inside AWS |
| OPA Gatekeeper | Rego on Kubernetes | Admission control and policy enforcement in clusters | Platform teams governing Kubernetes workloads |
Different organizations use policy as code at different moments in the lifecycle.
Pre-deployment evaluation is the cleanest place to start. Tools evaluate infrastructure definitions or application manifests before changes are applied. The output is usually a pass or fail in the pipeline.
Compliance teams often prefer this model because evidence is easier to interpret. The rule exists, the proposed change is visible, and the enforcement decision is logged.
In Kubernetes, policy engines can evaluate requests when workloads are submitted to the cluster. This is useful when teams need guardrails close to runtime without relying only on pre-deployment checks.
Some controls need continuous evaluation because environments drift. Runtime checks help identify when a configuration that was once compliant is no longer compliant.
A compliance manager does not need to write Rego to participate effectively. Ask these questions instead: Where does the rule run, and what can bypass it? What record does each decision leave behind? Who reviews and approves changes to the rule? How are exceptions requested, approved, and time-limited?
Those questions move the conversation away from tooling hype and toward auditability.
The best early uses of policy as code are concrete, narrow, and tied to a clear control objective. Start where the requirement is objective and the enforcement path is visible.

A common first pattern is an infrastructure rule that rejects storage resources created without encryption.
In plain English, the logic says: if a proposed storage resource does not declare required encryption settings, fail the pipeline and require the change to be corrected before deployment.
This is a good compliance starting point because the requirement is clear, the decision is binary, and the evidence is strong. The policy file shows the rule. The pipeline log shows it ran. The failed build shows the control operated.
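A minimal sketch of that rule, written for Conftest-style evaluation of a Terraform plan. The attribute path is an assumption; it depends on your provider version and plan JSON shape.

```rego
package terraform.storage

import rego.v1

# Fail the pipeline when a new S3 bucket omits server-side encryption.
deny contains msg if {
    some rc in input.resource_changes
    rc.type == "aws_s3_bucket"
    "create" in rc.change.actions
    not rc.change.after.server_side_encryption_configuration
    msg := sprintf("storage resource %q must declare encryption", [rc.address])
}
```

The rule lives in version control, so the policy file, its approval history, and the pipeline log that shows it ran together form the evidence chain described above.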
That pattern also creates a more useful conversation with engineering. Instead of asking, “Are we encrypting storage?” compliance can ask, “Which storage classes are blocked if encryption is missing, and where is that policy versioned?”
A second pattern focuses on CI/CD. The goal is to stop releases that violate a technical standard before they reach production.
The exact rule varies by organization. Some teams enforce approved deployment settings. Others require specific metadata, artifact standards, or image provenance checks. What matters for GRC is that the gate is automated and that the output is logged with enough detail to support later review.
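As one hedged illustration of a metadata gate, a Conftest-style rule that blocks Deployment manifests missing a version label. The label key shown is the Kubernetes recommended label, but treat the specific requirement as an assumption, not a standard your organization necessarily uses:

```rego
package main

import rego.v1

# Block deployment manifests that lack release provenance metadata.
deny contains msg if {
    input.kind == "Deployment"
    not input.metadata.labels["app.kubernetes.io/version"]
    msg := "Deployment must carry an app.kubernetes.io/version label"
}
```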
This is where software for compliance programs and evidence workflows becomes relevant to the broader operating model. The policy engine handles enforcement; the compliance system needs to capture, organize, and map the output.
Kubernetes environments benefit from admission policies that check workloads when teams try to deploy them.
A classic example is requiring resource limits. In plain English, the rule says: do not allow a workload into the cluster unless it declares required operating boundaries.
From a compliance perspective, the direct value is not just stability. It is the existence of a preventive control that operates every time a deployment request is submitted.
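A sketch of that admission rule, written against the AdmissionReview input that OPA receives from the Kubernetes API server. Gatekeeper wraps this input differently, so treat the shape as illustrative:

```rego
package kubernetes.admission

import rego.v1

# Deny Pods whose containers omit CPU or memory limits.
deny contains msg if {
    input.request.kind.kind == "Pod"
    some container in input.request.object.spec.containers
    not container.resources.limits.cpu
    msg := sprintf("container %q must declare a CPU limit", [container.name])
}

deny contains msg if {
    input.request.kind.kind == "Pod"
    some container in input.request.object.spec.containers
    not container.resources.limits.memory
    msg := sprintf("container %q must declare a memory limit", [container.name])
}
```

Because the rule runs on every deployment request, each denial is itself a dated record that the preventive control operated.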
Start with controls that have an unambiguous pass or fail outcome. Encryption, public exposure, required tags, and resource boundaries are usually better early targets than judgment-heavy process rules.
A practical mistake I see often is copying raw policy code into a control matrix without explanation. That helps nobody.
A better format looks like this:
| Control need | Policy logic in plain English | Enforcement outcome | Evidence generated |
|---|---|---|---|
| Encrypt new storage | Reject new storage resources if encryption is not declared | Pipeline fails before deployment | Policy file, pipeline log, reviewer approval record |
| Enforce approved deployment settings | Reject deployment definitions that violate required standards | Release blocked until corrected | Build output, commit history, exception record if bypassed |
| Require workload limits | Deny cluster admission when workload boundaries are missing | Workload not admitted | Admission log, policy version, remediation ticket |
What works: summarizing the policy logic in plain English, naming the enforcement outcome, and linking each rule to the evidence it generates.
What does not: pasting raw Rego into a control matrix, assuming auditors will read code, and treating enforcement alone as the complete evidence story.
The last point is important. Enforcement is necessary, but auditors also want traceability. You still need a way to connect the rule, the requirement, the approval, and the output.
Many programs stall at this point. The technical team has policy checks running, but the compliance team still struggles to assemble audit-ready evidence.

The gap is real. A key challenge in policy as code adoption is connecting real-time enforcement to the manual evidence expectations of frameworks such as ISO 27001 and ISO 13485. Recent surveys indicate 68% of compliance professionals struggle with evidence traceability in automated tools, as noted in Harness’s discussion of policy as code and evidence traceability.
A pipeline log is useful, but only if someone can explain what it proves.
For audit purposes, a policy event becomes strong evidence when it can answer four questions: what rule applied, what input was evaluated, what decision was made, and who approved the rule that was in force at the time.
That is why raw automation is not enough. You need an evidence model.
Several artifacts from policy as code can be mapped directly into audit support if handled properly.
A policy stored in version control can support evidence of formal review and change management. The useful record is not just the final file. It is the commit history, pull request discussion, named approvers, and timestamps.
For a compliance team, that can support claims about policy maintenance, review cadence, and approval discipline.
A failed deployment caused by a policy violation is strong proof that a preventive control operated. It is often more persuasive than a screenshot because it shows the system rejected a non-compliant change at the moment it mattered.
Mature programs do not pretend controls are never bypassed. They document exceptions. If a policy can be overridden, the override path should record business justification, approver identity, and duration.
That is not a weakness. It is part of a defensible control environment.
Compliance teams need a practical structure for linking these artifacts to framework controls.
| Evidence artifact | What it proves | Typical framework relevance |
|---|---|---|
| Policy file in version control | Control design and current enforcement logic | Technical and procedural control design |
| Pull request and approvals | Review and authorization of control changes | Change management, governance, accountability |
| Pipeline failure log | Preventive control operated on a specific event | Operational effectiveness |
| Exception approval record | Managed deviation from standard process | Risk acceptance and documented override handling |
| Runtime evaluation output | Ongoing monitoring after deployment | Continuous compliance and drift detection |
This is also where evidence management software for audit workflows becomes important. Without a system for organizing and linking these artifacts, teams still end up rebuilding the evidence package manually.
The bridge from policy as code to audit readiness is not the policy engine alone. It is the mapping layer that explains why a technical event counts as compliance evidence.
ISO programs often rely on document-heavy evidence. Procedures, records, forms, training logs, approvals, and traceability matrices are embedded in the audit process. Policy as code generates excellent technical evidence, but it does not automatically explain how that evidence aligns with clause language, control narratives, or internal procedures.
That is why many organizations need a hybrid workflow. The policy engine enforces objective technical rules. The compliance function links those outcomes to the documented management system.
For ISO 13485 and similar environments, this distinction matters even more. Technical enforcement can prove a system rule operated. It does not by itself replace documented rationale, validation thinking, or regulated review processes.
Two weeks before an ISO surveillance audit is a bad time to discover that every team defines the same control differently. One group treats encryption as a deployment standard, another treats it as a ticket review item, and compliance is left reconciling screenshots, approvals, and exception emails into a single narrative. Policy as code adoption goes off track the same way. Teams start too broad, automate too early, and create more disagreement than evidence.
A phased rollout works better. Start with controls that are easy to define, easy to test, and easy to defend during an audit. Expand only after the review path, ownership model, and evidence output are stable.
Pick one requirement that already has cross-functional agreement and a clear technical outcome.
Good starting points include encryption by default, prevention of public storage exposure, or mandatory deployment metadata. These are strong candidates because they produce a clean pass or fail result. They also map well to audit discussions, where compliance teams need to show how a stated requirement was enforced and what record proves it.
The first milestone is confidence, not coverage.
Compliance needs to see that a rule can move from requirement to code to evidence without creating a parallel process that nobody maintains.
Once the first rule is in place, formalize the operating pattern around it. At this stage, many programs either become sustainable or turn into isolated engineering work with no audit value.
Use a simple policy lifecycle: propose, review, approve, enforce, monitor, and retire, with each transition recorded in version control.
Ownership needs to be explicit. Engineering usually owns implementation and maintenance. Compliance owns interpretation, control mapping, and evidence expectations. Internal audit or quality may need a defined role as well, especially in ISO 13485 environments where validation, approval, and traceability expectations are stricter.
After the workflow is repeatable, extend it across a related control family instead of adding disconnected rules. If the first policy covers encryption for a cloud asset, related policies might cover logging, public access restrictions, and approved configuration settings for that same asset class.
That approach makes audit preparation easier. It gives the compliance team a control story that is coherent enough to map to ISO clauses, internal procedures, and evidence requests without rebuilding the rationale each time.
It also exposes trade-offs earlier. A tightly enforced rule set can reduce drift and improve consistency, but it can also create friction if the underlying standard is still immature or full of exceptions. That is why mature programs codify stable requirements first and leave context-heavy judgments in a managed review process.
Policy as code works best for objective technical checks. It struggles when the requirement depends on intent, business context, or a quality judgment that is only partially documented.
For compliance teams, that means avoiding a common failure mode. Do not begin with requirements such as “appropriate access,” “adequate review,” or “risk assessed as acceptable” unless your organization has already translated those phrases into decision logic, thresholds, and approval criteria. Machines can evaluate explicit conditions. They cannot resolve vague control language that different reviewers interpret differently.
The practical test is simple. If an auditor asks why a policy fired, the team should be able to answer with the rule, the threshold, the owner, and the retained record. If that explanation still depends on a long meeting, the control is not ready to codify.
The fastest way to lose support for policy as code is to automate a rule that still needs human interpretation to be applied fairly.
For ISO 27001 and ISO 13485 programs, this discipline matters more than the tooling choice. A narrow, well-governed start produces evidence that audit teams can use. A broad, poorly defined rollout produces policy results that look technical but do little to reduce the pre-audit scramble.
Policy as code changes the posture of compliance work. It moves teams away from retrospective proof gathering and toward preventive, operational governance.
That shift matters because environments now change too quickly for manual control verification alone. Compliance programs need more than good documentation. They need controls that operate consistently and leave behind evidence that can stand up to review.
For audit teams, the promise is simple. Fewer screenshots. Fewer last-minute evidence chases. Better traceability from requirement to enforcement to outcome.
For engineering teams, the benefit is also clear. Requirements become explicit, testable, and reviewable instead of arriving late as audit findings or one-off requests.
The strongest programs will use policy as code where rules are objective, then connect those outputs to documented evidence workflows so auditors can verify them without guesswork. That is how governance becomes both automated and auditable.
If your team is trying to connect policy enforcement, ISO documents, and audit-ready evidence without drowning in manual cross-referencing, AI Gap Analysis is built for that gap. It reads uploaded compliance documents, returns evidence-linked answers with citations to exact pages, and helps teams move from scattered PDFs to verifiable findings faster while keeping human judgment in the loop.
© 2026 AI Gap Analysis - Built by Tooling Studio with expert partners for human validation when needed.