Configuring HashiCorp Vault In AWS For High Availability In Kubernetes

Michael Levan - May 9 '23 - - Dev Community

Regardless of what your Kubernetes environment looks like, whether it’s one cluster or fifty clusters, at some point you will have a secret, password, or API key that you need to store in an encrypted fashion for one of your containerized workloads.

Because Kubernetes Secrets are stored in Etcd (the Kubernetes data store) as Base64-encoded plain text, not encrypted, you’ll see a lot of engineers going the third-party route.
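To see what that means in practice, you can decode a Secret value the same way anyone with read access to the Secret (or to Etcd) could; the value below is just an example:

```shell
# Encode a value the way Kubernetes stores Secret data
encoded=$(printf 'my-db-password' | base64)
echo "$encoded"

# Decoding requires no key at all -- Base64 is encoding, not encryption
printf '%s' "$encoded" | base64 --decode
```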

In this blog post, you’ll learn about one of the most popular third-party routes, Vault.

Prerequisites

To follow along with this blog post in a hands-on fashion, you should have the following:

  • An EKS cluster running.
  • A KMS key.
  • Access to create IAM users.

If you don’t have these prerequisites, that’s okay! You can still read, follow along, and understand how it all works.
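If you still need the KMS key, you can create one with the AWS CLI. The description and the alias name below are just example values:

```shell
# Create a symmetric KMS key for Vault auto-unseal and print its ID
aws kms create-key \
    --description "Vault auto-unseal key" \
    --query KeyMetadata.KeyId \
    --output text

# Optionally attach a friendly alias (example name) to the key ID from above
aws kms create-alias \
    --alias-name alias/vault-kms-unseal \
    --target-key-id <key-id-from-above>
```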

What Is Vault

Before diving into the Vault implementation, let’s discuss what Vault is.

Vault is a secret manager from HashiCorp. Beyond its popularity in the Kubernetes space, HashiCorp Vault has long been the go-to secret store for all types of workloads. Whether you’re running containers, VMs, or even bare metal, you can store your secrets in Vault and utilize them within your environment.

When you’re thinking of a “secret”, it’s anything that you want to be encrypted at rest and in transit. You don’t want the contents of the secret to ever be in plain text (other than when the endpoint is reading the value).

A secret could be anything from an API key to a password to even a username. Regardless of what the content of the secret is, it’s essentially anything that you wish to have as an encrypted value.

AWS KMS and Auto Unseal

One of the key aspects of HA is that several servers, or in the case of Kubernetes, Pods, will be running Vault. Because of that, you rarely want to manually unseal each Vault instance one by one.

What’s the unseal process?

When Vault is started, it starts sealed. This means Vault knows how to reach the storage that holds the secrets, but it doesn’t know how to decrypt it. Think of it like a bank vault: you need a specific key to open it. When you unseal Vault, you’re providing that “key” so Vault can decrypt its own data and you can begin to store and utilize secrets.

Why does this matter? Because if you have five Pods running Vault, you’d have to perform the unseal process manually on every one of them.

Instead, you can use Auto Unseal. With Auto Unseal, you only need to manually unseal the first Vault Pod; after that, the rest of the Pods are unsealed automatically.

In AWS, the best way to Auto Unseal is with AWS KMS (the KMS key from the prerequisites of this blog post).

To set up the unseal process, you’ll need a KMS key and an IAM user that Vault can use to authenticate to AWS.
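For reference, that IAM user only needs KMS permissions on the one unseal key. A minimal policy sketch (the key ARN below is a placeholder you’d swap for your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:DescribeKey"
      ],
      "Resource": "arn:aws:kms:us-east-1:111122223333:key/<your-key-id>"
    }
  ]
}
```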

First, create the Vault Namespace.

kubectl create namespace vault

Next, create the Kubernetes Secret with the IAM user's access key and secret key to authenticate to AWS.

kubectl create secret generic -n vault eks-creds \
    --from-literal=AWS_ACCESS_KEY_ID="" \
    --from-literal=AWS_SECRET_ACCESS_KEY=""
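Before moving on, it’s worth confirming the Secret exists with the exact key names the Helm values will reference later (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY):

```shell
# Lists the Secret's keys and their sizes without printing the values
kubectl describe secret eks-creds -n vault
```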

Once complete, you can set up the Vault configuration.

Vault Helm Config

Now that the Namespace and eks-creds Kubernetes Secret are created, let’s learn how to implement Vault in an HA fashion.

For Kubernetes, the best way to go about this implementation is with Helm. Because the values.yaml for the Vault Helm chart is so large, let’s break it down into chunks below.

First, set the global configuration and enable the injector so the Vault Agent can be injected into Pods as a sidecar container.

# Vault Helm Chart Value Overrides
global:
  enabled: true

injector:
  enabled: true
  # Use the Vault K8s Image https://github.com/hashicorp/vault-k8s/
  image:
    repository: "hashicorp/vault-k8s"
    tag: "latest"

Next, create the persistent storage for the Raft (integrated storage) backend.

server:
  # This configures the Vault Statefulset to create a PVC for data
  # storage when using the file or raft backend storage engines.
  # See https://www.vaultproject.io/docs/configuration/storage/index.html to know more
  dataStorage:
    enabled: true
    # Size of the PVC created
    size: 20Gi
    # Location where the PVC will be mounted.
    mountPath: "/vault/data"
    # Name of the storage class to use.  If null it will use the
    # configured default Storage Class.
    storageClass: null
    # Access Mode of the storage device being used for the PVC
    accessMode: ReadWriteOnce
    # Annotations to apply to the PVC
    annotations: {}

Set the resource requests and limits, along with the readiness and liveness probes, to ensure the Vault Pods get the CPU and memory they need and to confirm they’re running as expected.

  # These Resource Limits are in line with node requirements in the
  # Vault Reference Architecture for a Small Cluster
  resources:
    requests:
      memory: 8Gi
      cpu: 2000m
    limits:
      memory: 16Gi
      cpu: 2000m

  # For HA configuration and because we need to manually init the vault,
  # we need to define custom readiness/liveness Probe settings
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60

Create the audit storage, disable standalone mode, and add environment variables referencing the Kubernetes Secret you created in the previous section so Vault can authenticate to AWS and use KMS for the unsealing process.

  # This configures the Vault Statefulset to create a PVC for audit logs.
  # See https://www.vaultproject.io/docs/audit/index.html to know more
  auditStorage:
    enabled: true

  standalone:
    enabled: false

  # Authentication to AWS for auto unseal
  extraSecretEnvironmentVars:
    - envName: AWS_ACCESS_KEY_ID
      secretName: eks-creds
      secretKey: AWS_ACCESS_KEY_ID
    - envName: AWS_SECRET_ACCESS_KEY
      secretName: eks-creds
      secretKey: AWS_SECRET_ACCESS_KEY

Next, create the HA configuration for Vault.

Notice in the seal block that the AWS KMS key ID (kms_key_id) is blank. You’ll have to input this for your environment.

  # Run Vault in "HA" mode.
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: false

      config: |
        ui = true

        listener "tcp" {
          tls_disable = 1
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }

        seal "awskms" {
          region     = "us-east-1"
          kms_key_id = ""
        }

        storage "raft" {
          path = "/vault/data"

          retry_join {
          leader_api_addr = "http://vault-0.vault-internal:8200"
          }
          retry_join {
          leader_api_addr = "http://vault-1.vault-internal:8200"
          }
          retry_join {
          leader_api_addr = "http://vault-2.vault-internal:8200"
          }
        }

        service_registration "kubernetes" {}

Lastly, enable the Vault UI so you can access the Vault dashboard.

# Vault UI
ui:
  enabled: true
  serviceType: "LoadBalancer"
  serviceNodePort: null
  externalPort: 8200

All together, the override-values.yaml Helm config should look like the following.

# Vault Helm Chart Value Overrides
global:
  enabled: true

injector:
  enabled: true
  # Use the Vault K8s Image https://github.com/hashicorp/vault-k8s/
  image:
    repository: "hashicorp/vault-k8s"
    tag: "latest"

  resources:
      requests:
        memory: 256Mi
        cpu: 250m
      limits:
        memory: 256Mi
        cpu: 250m

server:
  # This configures the Vault Statefulset to create a PVC for data
  # storage when using the file or raft backend storage engines.
  # See https://www.vaultproject.io/docs/configuration/storage/index.html to know more
  dataStorage:
    enabled: true
    # Size of the PVC created
    size: 20Gi
    # Location where the PVC will be mounted.
    mountPath: "/vault/data"
    # Name of the storage class to use.  If null it will use the
    # configured default Storage Class.
    storageClass: null
    # Access Mode of the storage device being used for the PVC
    accessMode: ReadWriteOnce
    # Annotations to apply to the PVC
    annotations: {}

  # Use the Vault OSS image
  image:
    repository: "hashicorp/vault"
    tag: "latest"

  # These Resource Limits are in line with node requirements in the
  # Vault Reference Architecture for a Small Cluster
  resources:
    requests:
      memory: 8Gi
      cpu: 2000m
    limits:
      memory: 16Gi
      cpu: 2000m

  # For HA configuration and because we need to manually init the vault,
  # we need to define custom readiness/liveness Probe settings
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60

  # This configures the Vault Statefulset to create a PVC for audit logs.
  # See https://www.vaultproject.io/docs/audit/index.html to know more
  auditStorage:
    enabled: true

  standalone:
    enabled: false

  # Authentication to AWS for auto unseal
  extraSecretEnvironmentVars:
    - envName: AWS_ACCESS_KEY_ID
      secretName: eks-creds
      secretKey: AWS_ACCESS_KEY_ID
    - envName: AWS_SECRET_ACCESS_KEY
      secretName: eks-creds
      secretKey: AWS_SECRET_ACCESS_KEY

  # Run Vault in "HA" mode.
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: false

      config: |
        ui = true

        listener "tcp" {
          tls_disable = 1
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }

        seal "awskms" {
          region     = "us-east-1"
          kms_key_id = ""
        }

        storage "raft" {
          path = "/vault/data"

          retry_join {
          leader_api_addr = "http://vault-0.vault-internal:8200"
          }
          retry_join {
          leader_api_addr = "http://vault-1.vault-internal:8200"
          }
          retry_join {
          leader_api_addr = "http://vault-2.vault-internal:8200"
          }
        }

        service_registration "kubernetes" {}

# Vault UI
ui:
  enabled: true
  serviceType: "LoadBalancer"
  serviceNodePort: null
  externalPort: 8200

Once you save the override-values.yaml file, run the helm installation with the following.

helm install vault hashicorp/vault \
    -f ./override-values.yaml \
    --namespace vault
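If the HashiCorp chart repository isn’t configured on your machine yet, add it first; hashicorp is the conventional repo name:

```shell
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
```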

Vault Configuration

Now that Vault is running, you’ll have to take two steps:

  • Initialize Vault
  • Unseal one Vault Pod

Run the following command to initialize Vault.

kubectl exec --stdin=true --tty=true vault-0 -n vault -- vault operator init

Once the command runs, you’ll see five unseal keys printed to the terminal (by default, any three of the five are required to unseal).
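If you’d rather capture the keys in a machine-readable form, init also supports JSON output (using jq to parse it is an assumption about your tooling):

```shell
kubectl exec -n vault vault-0 -- vault operator init -format=json > init-keys.json

# The five key shares and the initial root token
jq -r '.unseal_keys_b64[]' init-keys.json
jq -r '.root_token' init-keys.json
```

Store init-keys.json somewhere secure and delete it from local disk once the keys are safely distributed.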

Run the following command THREE (3) times, entering a different unseal key each time. For example, the command above outputs five keys, so you could use keys 1, 2, and 3.

kubectl exec --stdin=true --tty=true vault-0 -n vault -- vault operator unseal

Once complete, Vault will be unsealed and the other Pods will be auto-unsealed with KMS.
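To confirm the auto-unseal actually propagated, check the seal status of every replica; each should report Sealed false:

```shell
for i in 0 1 2; do
  kubectl exec -n vault "vault-$i" -- vault status | grep -E 'Sealed|HA Mode'
done
```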
