
Policy as Code and the Open Policy Agent

Policy as code is the natural evolution of infrastructure as code. Once you start to manage your infrastructure as code, you soon realize that the process needs solid governance and enforcement of policies at scale. The traditional approach of defining and enforcing policies with manual processes or cumbersome GUIs won’t cut it.

Policy as code means that you define policies declaratively using text files that are checked into source control and can be reviewed and audited. Then, a policy engine is responsible for enforcing the policies.

In this article, we will focus on how policy as code plays out in Kubernetes. We’ll look at what comes built-in with Kubernetes and how the Open Policy Agent—with Gatekeeper—helps Kubernetes operators take policy enforcement further.

The policy as code model fits Kubernetes like a glove, since Kubernetes is about declarative definitions at every level. An administrator or developer defines resources, typically as YAML files, and Kubernetes stores them in its state store, etcd. Then, the Kubernetes controllers watch these resources and reconcile the state of the world with these resources.

For example, if you create a Deployment resource with an image and a certain number of replicas, then Kubernetes will create pods that run the image and ensure the correct number of replicas is running.
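A minimal Deployment illustrating this might look as follows (the names and image are illustrative):

```yaml
# Kubernetes will reconcile the cluster until 3 replicas
# of this pod are running, restarting or rescheduling as needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```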

Policies are just another type of resource

Kubernetes itself defines some policies such as pod security policies and network policies. However, the extensible nature of Kubernetes enables the use of third-party policies and policy engines.

Admission controllers as policy enforcers

In Kubernetes, the lifecycle of a request goes through authentication, authorization, and admission. When a request comes in, Kubernetes first authenticates it to see who is making the request. Then, the request goes through authorization to see if the requesting entity is authorized to make this request. The request might be rejected at this point. If the request is authorized, then it goes through admission, which checks more dynamic conditions. An authorized request may also be rejected at this stage.

Admission is the stage where policy enforcement comes in. Policy engines can be implemented as admission controllers that will flag, alert, and/or reject requests that violate policies.
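Under the hood, a policy engine typically registers itself with the API server as a validating admission webhook. Here is a sketch of what that registration looks like (all names and paths below are illustrative, not those of any particular engine):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: policy-webhook
webhooks:
  - name: validate.policy.example.com
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: policy-engine       # the engine's in-cluster service
        namespace: policy-system
        path: /validate
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail           # reject requests if the webhook is down
```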

Pod security policies

Kubernetes had pod security policies, but they were deprecated in Kubernetes 1.21 and removed in 1.25 in favor of the built-in Pod Security Admission controller. Enforcing security settings on pods is an important concern that can't be addressed by existing mechanisms like authentication, authorization, or the pod's own security context. A good example of such a policy is forbidding pods in a certain namespace from running with root privileges.

Network policies

Controlling network ingress and egress is another critical capability. Kubernetes provides built-in network policies. The network policy operates at the pod level (using label selectors) and can control access (ingress and egress) at the namespace, pod, or IP block level. Note that Kubernetes doesn’t provide a controller to enforce the policy. To use network policies, the cluster must have a network plugin that supports network policies.
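For instance, a minimal default-deny policy that blocks all ingress traffic to every pod in a namespace looks like this (the namespace name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: prod
spec:
  podSelector: {}     # an empty selector matches all pods in the namespace
  policyTypes:
    - Ingress         # no ingress rules are listed, so all ingress is denied
```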

While Kubernetes’ built-in policies are a good start, they’re insufficient for many organizations with advanced governance requirements. This is where third-party policy engines come in. The Open Policy Agent, which is a CNCF project, is a cloud-native policy engine that can enforce policies for many different targets, including Kubernetes, Envoy, Terraform, HTTP APIs, SQL databases, Kafka, custom applications, and more.

In this article, we discuss Gatekeeper, a Kubernetes admission controller that evaluates OPA policies defined via Kubernetes custom resource definitions (CRDs).

Gatekeeper components

Gatekeeper consists of three primary components:

  1. The controller, which watches ConstraintTemplate resources and creates the corresponding Constraint CRDs.
  2. The auditor, which scans the cluster and detects policy violations.
  3. The validating webhook, which is responsible for denying requests that violate policies.

Gatekeeper also has a CLI called Gator that can help with testing constraints and constraint templates locally. 
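For example, assuming hypothetical file names, you could evaluate a template and constraint against a manifest locally, without touching a cluster:

```shell
# template.yaml, constraint.yaml, and deployment.yaml are illustrative names;
# gator test reports any violations the constraint finds in the manifests.
gator test -f template.yaml -f constraint.yaml -f deployment.yaml
```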

The policy library

You can define your own policies, but Gatekeeper already comes with a substantial library of policies. The library has a large section of general policies that cover many topics. Some examples include:

  • Requiring specified labels on objects
  • Restricting container images to allowed repositories
  • Requiring containers to declare resource limits
  • Disallowing the `latest` image tag

One of the reasons Kubernetes deprecated its original PodSecurityPolicy is that the same effect can be achieved through Gatekeeper constraints from the policy library.

Gatekeeper constraint templates

A constraint template is a CRD that defines the schema and the definition of the constraint in the Rego language. The template can be customized by an administrator to create concrete constraints to be enforced later.

Here is a snippet from the requiredprobes constraint template:

kind: ConstraintTemplate
metadata:
  name: k8srequiredprobes
  annotations:
    description: Requires Pods to have readiness and/or liveness probes.
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredProbes
      validation:
        openAPIV3Schema:
          type: object
          properties:
            probes:
              description: "A list of probes that are required (ex: `readinessProbe`)"
              type: array
              items:
                type: string
            probeTypes:
              description: "The probe must define a field listed in `probeType` in order to satisfy the constraint (ex. `tcpSocket` satisfies `['tcpSocket', 'exec']`)"
              type: array
              items:
                type: string

Gatekeeper constraints

Once a constraint template is installed on the cluster, you can define constraints that use the templates. Gatekeeper enforces policies specified by constraints.

This is an example of a constraint based on the requiredprobes template:

kind: K8sRequiredProbes
metadata:
  name: must-have-probes
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    probes: ["readinessProbe", "livenessProbe"]
    probeTypes: ["tcpSocket", "httpGet", "exec"]

This constraint requires every pod to have readiness and liveness probes of the specified probe types.
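For illustration, a pod like the following would satisfy the constraint, since each required probe defines one of the accepted probe types; a container missing either probe would be rejected (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.25
      readinessProbe:
        httpGet:          # httpGet is one of the accepted probeTypes
          path: /healthz
          port: 80
      livenessProbe:
        tcpSocket:        # tcpSocket is also accepted
          port: 80
```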

Rego: the declarative policy language

The Rego language used to define the policy builds on Datalog, a declarative query language, extending it to support structured documents such as JSON. Rego is very powerful, and its declarative nature makes it a great match for policy management. Here is an example from the requiredprobes constraint template:

    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredprobes

        probe_type_set = probe_types {
          probe_types := {type | type := input.parameters.probeTypes[_]}
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          probe := input.parameters.probes[_]
          probe_is_missing(container, probe)
          msg := get_violation_message(container, input.review, probe)
        }

        probe_is_missing(ctr, probe) = true {
          not ctr[probe]
        }

        probe_is_missing(ctr, probe) = true {
          probe_field_empty(ctr, probe)
        }

        probe_field_empty(ctr, probe) = true {
          probe_fields := {field | ctr[probe][field]}
          diff_fields := probe_type_set - probe_fields
          count(diff_fields) == count(probe_type_set)
        }

        get_violation_message(container, review, probe) = msg {
          msg := sprintf("Container <%v> in your <%v> <%v> has no <%v>", [container.name, review.kind.kind, review.object.metadata.name, probe])
        }
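For readers new to Rego, the same checks can be sketched in plain Python. This is an illustration of the logic only, not Gatekeeper code; the dict shapes loosely mirror a pod spec:

```python
# Plain-Python sketch of the requiredprobes checks: a probe violates the
# policy if it is absent, or if it defines none of the accepted probe types.

def probe_is_missing(container, probe, probe_types):
    spec = container.get(probe)
    if spec is None:
        return True  # probe not defined at all
    # probe defined, but with none of the accepted fields (tcpSocket, etc.)
    return not any(t in spec for t in probe_types)

def violations(pod, probes, probe_types):
    msgs = []
    for container in pod["spec"]["containers"]:
        for probe in probes:
            if probe_is_missing(container, probe, probe_types):
                msgs.append(f"Container <{container['name']}> has no <{probe}>")
    return msgs

pod = {"spec": {"containers": [
    {"name": "web", "readinessProbe": {"httpGet": {"path": "/", "port": 80}}},
]}}
print(violations(pod, ["readinessProbe", "livenessProbe"],
                 ["tcpSocket", "httpGet", "exec"]))
# → ['Container <web> has no <livenessProbe>']
```

The `web` container defines a readinessProbe but no livenessProbe, so exactly one violation is reported, matching what the Rego `violation` rule would emit.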

OPA/Gatekeeper is not the only game in town. There are other Kubernetes-specific policy engines with shallower learning curves: 

Kyverno is a Kubernetes-native policy engine. Policies are defined as YAML using Kubernetes CRDs. There is no special language like Rego. Kyverno has policies to generate configuration, mutate existing resources, and validate resources.
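As a taste of Kyverno's YAML-only approach, here is a sketch of a validation policy that requires a `team` label on pods (the policy name and label are illustrative):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Enforce   # reject non-compliant resources
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "The label `team` is required."
        pattern:
          metadata:
            labels:
              team: "?*"             # any non-empty value
```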

K-Rail is another open-source policy engine. It is also Kubernetes-specific and comes with many built-in policies. New policies are defined in Go and must be added to the engine.

Policy as code is an important best practice for large systems. Kubernetes is the go-to platform for large distributed systems. There are several solid solutions for policy as code on Kubernetes. OPA/Gatekeeper has a powerful policy definition language. Kyverno and K-Rail are Kubernetes-specific and may be simpler to use. Evaluate your needs and choose the right solution for your use case.

We’d love to hear what you think. Ask a question or leave a comment below.
And stay connected with Cisco DevNet on social!

LinkedIn | Twitter @CiscoDevNet | Facebook Developer Video Channel

