Are You Doing Kubernetes Secrets Wrong? Here’s How the Pros Handle It
Managing secrets in Kubernetes is hard.
You either...
- Hard code secrets insecurely
- Add a manual process, whether that's scripting or running commands one at a time
- Deploy and maintain a secret management service(s)
Updating services and rotating secrets now takes an eternity for something that should take 5 minutes.
We’re going to take you from a hard-coder to a pro coder in no time.
If you want to handle secrets in Kubernetes like a pro, follow along.
First, we’re going to start with the hard, painful way. Then, we’ll bring it up a notch with some automation. Finally, we'll handle it like a pro with a full GitOps-driven approach.
Bad Example
This is how most people are managing their secrets in Kubernetes.
Say we have a container that uses the secret demo in the default namespace.
Option A: Hardcoding Secrets
We hard code it. No fluff — throw it into Git and be done with it.
apiVersion: v1
kind: Secret
metadata:
  name: demo
  namespace: default
type: Opaque
stringData:
  user: "you"
  pass: "password"
Mix this with ArgoCD, or your favorite GitOps tool, and you can easily update secrets through Git.
Insecurely...
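How insecure? Base64 is encoding, not encryption, so anyone with read access to the Secret can recover the plaintext in one command (run against the demo Secret above):

# Decode the "pass" value straight out of the cluster
kubectl get secret demo -n default -o jsonpath='{.data.pass}' | base64 --decode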
✅ Pros:
- 🟩 Simple & familiar: Everyone understands YAML.
- 🟩 Kubernetes-native: Works with existing tools like kubectl.
🟥 Cons:
- 🟥 Exposed secrets: Base64-encoded, not encrypted.
- 🟥 Manual rotation: Must redeploy to rotate.
- 🟥 Breaks GitOps: You can’t push secrets to Git securely.
Option B: Using kubectl CLI Commands
We use the kubectl CLI to create the secret.
kubectl create secret generic demo \
--namespace=default \
--from-literal=user=you \
--from-literal=pass=password
Pure manual labor. To update, you’ll delete the secret and recreate it.
No version control.
No rolling back.
Changes gone forever!
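For example, rotating just the password means repeating the whole dance by hand (the new value here is hypothetical):

# Rotation by hand: delete the old Secret, then recreate it with the new value
kubectl delete secret demo --namespace=default
kubectl create secret generic demo \
  --namespace=default \
  --from-literal=user=you \
  --from-literal=pass=new-password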
✅ Pros:
- 🟩 Simpler than hardcoding YAML.
- 🟩 No secrets in Git.
🟥 Cons:
- 🟥 Manual process: Hard to automate.
- 🟥 Breaks GitOps: Commands aren't declarative.
- 🟥 Base64 encoding: Same problem as before.
Turning up the Heat
Let's take it up a notch.
You can use scripts to manage credentials. To make it powerful, you’ll need a secrets provider. In this video, I’m going to use HashiCorp Vault.
I have a guide for deploying a single-node Docker instance you can follow here.
I’m using my own at https://vault.svc.spakl:8200, and I’ll be using SSL validation.
I have Vault OIDC auth set up with my ZITADEL auth provider, so I’ll use that instead of root token authentication.
Scripting Approach
We can make this process much easier using scripts to securely get secrets.
#!/usr/bin/env bash
export VAULT_ADDR="https://vault.svc.spakl:8200"

# Authenticate with Vault via OIDC
echo "Authenticating with Vault..."
vault login -method=oidc

# Get credentials from Vault as JSON
CREDS=$(vault kv get -format=json platform/k3s/tempo)

# Delete and recreate the Kubernetes secret (--ignore-not-found keeps the first run from erroring)
kubectl delete secret -n monitoring loki-storage-secrets --ignore-not-found
kubectl create secret generic loki-storage-secrets \
  --from-literal=AWS_ACCESS_KEY_ID="$(echo "$CREDS" | jq -r '.data.data.AWS_ACCESS_KEY_ID')" \
  --from-literal=AWS_SECRET_ACCESS_KEY="$(echo "$CREDS" | jq -r '.data.data.AWS_SECRET_ACCESS_KEY')" \
  -n monitoring
We store the secret we want in a CREDS variable. -format=json puts a JSON struct with the secret data into the variable, and jq is used to read the secret values out of CREDS.
We can combine this with the kubectl CLI and run these scripts manually (gross) or use them in a CI/CD pipeline.
This is nice, but it's hard to control the deployment order, and every CI run will make a lot of unnecessary updates.
We can, however, store these scripts in Git securely and share them.
✅ Pros:
- 🟩 Automated with scripts.
- 🟩 Can be version-controlled securely.
🟥 Cons:
- 🟥 Still manual if not paired with CI/CD.
- 🟥 Process logs can expose secrets if not handled properly (see the note after this list).
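A minimal guard for the script above, assuming a bash-based CI runner:

# Keep secret material out of the CI job log
set +x   # make sure command tracing is off around secret handling
CREDS=$(vault kv get -format=json platform/k3s/tempo)
# Pass values straight to kubectl with --from-literal; never echo or tee "$CREDS" into the log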
We can do better!!!
Like the Pros!
To make this declarative, we’re going to do something incredible — use External Secrets Operator (ESO)!
This will require some setup:
- Service account for token reviews
- ClusterRole and RoleBinding
- Long-lived ServiceAccount token
- Vault Kubernetes auth method
- Vault Kubernetes auth roles
- Helm install ESO
- Add resources
This sounds like a lot, but in reality, you’re going to save time in the long run.
Let’s not waste any time and jump in.
Service Account
Let's make a Token Reviewer SA.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: token-reviewer
  namespace: vault
  annotations:
    kubernetes.io/service-account.name: token-reviewer
    kubernetes.io/service-account-token: "true" # Required for token generation
    vault.hashicorp.com/alias-metadata-name: token-reviewer
    vault.hashicorp.com/alias-metadata-namespace: vault
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: vault-token-reviewer # ClusterRoles are cluster-scoped, so no namespace here
rules:
  - apiGroups: [""]
    resources: ["serviceaccounts/token"]
    verbs: ["*"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["*"]
  - apiGroups:
      - authentication.k8s.io
      - authorization.k8s.io
    resources:
      - tokenreviews
      - subjectaccessreviews
    verbs:
      - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-token-reviewer-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: vault-token-reviewer
subjects:
  - kind: ServiceAccount
    name: token-reviewer
    namespace: vault
This creates the token reviewer ServiceAccount with the roles and bindings it needs for cluster-wide access.
This means you can configure ESO in the vault namespace and then use it to authenticate with Vault.
The token review will call the Kubernetes API and validate tokens using this ServiceAccount's JWT.
To make sure you set this up correctly, this command should return yes:
kubectl auth can-i create tokenreviews --as=system:serviceaccount:vault:token-reviewer
SA Token
This creates a long-lived token that Vault can use to authenticate with the Kubernetes API.
apiVersion: v1
kind: Secret
metadata:
  name: token-reviewer-token
  namespace: vault
  annotations:
    kubernetes.io/service-account.name: token-reviewer
    kubernetes.io/service-account-token: "true"
type: kubernetes.io/service-account-token
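Before moving on, it's worth checking that the token controller actually populated the Secret (a non-zero byte count means it's ready):

# Confirm the long-lived token was generated (it can take a few seconds to appear)
kubectl -n vault get secret token-reviewer-token -o jsonpath='{.data.token}' | wc -c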
Deploy ESO with Helm
I'm using ArgoCD to manage this resource, but the plain Helm commands are below if you prefer.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: eso-system
  namespace: argocd
spec:
  project: secrets
  sources:
    - repoURL: 'https://gitlab.com/spakl/platform/cluster-administration/secrets/eso.git'
      targetRevision: master
      ref: values
    - repoURL: "https://charts.external-secrets.io" # The External Secrets Operator chart repo. Adjust if you mirror it elsewhere.
      chart: "external-secrets"
      targetRevision: "0.11.0"
      helm:
        releaseName: "external-secrets"
        valueFiles:
          - $values/system/values.yml # Update this to point to your custom values file if you have one.
  destination:
    namespace: vault
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true # Allows ArgoCD to create the namespace if it doesn't exist.
OR
helm repo add external-secrets https://charts.external-secrets.io
helm install eso external-secrets/external-secrets -n external-secrets --create-namespace
This will use my values.yml and deploy the helm chart to the vault namespace.
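Either way, a quick sanity check that the operator is up (the namespace depends on which install path you took: vault via ArgoCD here, or external-secrets via the plain Helm command):

# ESO pods land in whichever namespace you installed into
kubectl get pods -n vault
# The operator also registers its CRDs cluster-wide
kubectl get crds | grep external-secrets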
I made a secrets ArgoCD project, but you can change that to default.
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: secrets
spec:
  sourceRepos:
    - "https://helm.releases.hashicorp.com"
    - "https://charts.external-secrets.io"
    - 'https://gitlab.com/spakl/platform/cluster-administration/secrets/**'
  destinations:
    - namespace: '*'
      server: 'https://kubernetes.default.svc'
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
Including a values.yml is optional; I just added this for future use:
bitwarden-sdk-server:
  enabled: true
Configure Vault
I'm using Terraform to manage Vault resources.
terraform {
  required_providers {
    vault = {
      source  = "hashicorp/vault"
      version = "4.5.0"
    }
  }
}

provider "vault" {
  # Configuration options
  address            = "https://vault.svc.spakl:8200"
  add_address_to_env = true
  skip_child_token   = true
}
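The provider picks up credentials from the environment. Since I'm on OIDC auth, one way to hand it a token is (just one option, not a requirement):

# Log in via OIDC and export the resulting token for the Terraform Vault provider
export VAULT_ADDR="https://vault.svc.spakl:8200"
export VAULT_TOKEN="$(vault login -method=oidc -token-only)"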
Ready K8s Public CA
Get your Kubernetes public CA from ~/.kube/config and base64-decode it into a file, files/ca.crt, in the root of the Terraform project.
mkdir -p files
cat ~/.kube/config | grep certificate-authority-data | awk -F " " '{print $2}' | base64 --decode > files/ca.crt
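A quick sanity check that the decoded file is a real certificate:

# Should print the cluster CA subject and its expiry date
openssl x509 -in files/ca.crt -noout -subject -enddate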
Setup vars
Extract token from token-reviewer service account
kubectl -n vault get secret token-reviewer-token -o json | jq -r .data.token | base64 --decode > token-reviewer-token
Paste token into var...
platform.auto.tfvars
token_reviewer_jwt = ""
Create k8s_auth_backend module
Here is a module for creating an access policy and a Kubernetes auth mount.
You can reference it like this, or clone it into a modules/k8s_auth_backend folder.
source = "git::https://gitlab.com/D3vbd/tf-modules/k8s_auth_backend.git?ref=v1.0.0"
platform.auth.tf
variable "token_reviewer_jwt" {
type = string
sensitive = true
}
module "platform_auth_k8s_backend" {
source = "git::https://gitlab.com/D3vbd/tf-modules/k8s_auth_backend.git?ref=v1.0.0"
kubernetes_host = "https://pltfrm.c0.clstr.spakl:6443"
kubernetes_ca_cert_type = "inline"
kubernetes_ca_cert = file("${path.root}/files/ca.crt")
# token_reviewer_jwt = var.token_reviewer_jwt
}
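If you don't want to pull the module, this is roughly what it sets up, expressed as Vault CLI calls (an assumption about the module's internals; the access policies it also creates aren't shown, and the platform mount path is inferred from the ClusterSecretStore later on):

# Enable a Kubernetes auth method and point it at the cluster API with the reviewer JWT
vault auth enable -path=platform kubernetes
vault write auth/platform/config \
  kubernetes_host="https://pltfrm.c0.clstr.spakl:6443" \
  kubernetes_ca_cert=@files/ca.crt \
  token_reviewer_jwt=@token-reviewer-token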
KV Mount and Policies
I also made a kv_mount module that handles creating a mount and some default policies.
https://gitlab.com/D3vbd/tf-modules/kv_mount
platform.mounts.tf
module "pltfrm_mount_pltfrm" {
source = "git::https://gitlab.com/D3vbd/tf-modules/kv_mount.git?ref=v1.0.0"
path = "pltfrm"
mount_name = "pltfrm"
type = "kv-v2"
vrsn = "v0.0.0"
}
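For reference, the CLI equivalent of the mount itself (the module also generates the read/write policies for it):

# Create a KV v2 secrets engine at the pltfrm path
vault secrets enable -path=pltfrm kv-v2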
Create Auth Role
platform.auth.tf
variable "token_reviewer_jwt" {
type = string
sensitive = true
}
module "platform_auth_k8s_backend" {
source = "./modules/k8s_auth_backend"
kubernetes_host = "https://pltfrm.c0.clstr.spakl:6443"
kubernetes_ca_cert_type = "inline"
kubernetes_ca_cert = file("${path.root}/files/ca.crt")
# token_reviewer_jwt = var.token_reviewer_jwt
}
resource "vault_kubernetes_auth_backend_role" "read_only" {
role_name = "read-only"
alias_name_source = "serviceaccount_name"
backend = module.platform_auth_k8s_backend.backend.path
bound_service_account_names = ["*"]
bound_service_account_namespaces = ["*"]
token_policies = [
"default",
module.platform_auth_k8s_backend.access_policy.name,
module.pltfrm_mount_pltfrm.read_policy.name
]
}
Apply the terraform
terraform apply -auto-approve
# OR
tofu apply -auto-approve
Vault is now set up!!! And we have some reusable modules that handle policy creation per mount and access per auth backend. Very nice!!!
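To double-check from the CLI (this assumes the module mounted the Kubernetes auth method at platform, which is what the ClusterSecretStore below expects):

# The role should exist and carry the read-only policies
vault read auth/platform/role/read-only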
ESO Secret Store
ESO supports KV mounts only, so we'll configure a secret store on the KV mount we made earlier.
First, let's add the Vault CA to the cluster as a Secret:
apiVersion: v1
kind: Secret
metadata:
  name: vault-ca-secret
  namespace: vault
type: Opaque
data:
  ca.crt: LS0tLS1C......F3N1NIenRjMDdvTkdnbGdqVi90VjZSczBNo=
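Rather than pasting base64 by hand, you can build the same Secret straight from your Vault CA file (the ./vault-ca.crt path is a placeholder for wherever your CA lives on disk):

# Create the CA secret from a local copy of the Vault server's CA certificate
kubectl -n vault create secret generic vault-ca-secret --from-file=ca.crt=./vault-ca.crt

Then point a ClusterSecretStore at the pltfrm mount, authenticating through the Kubernetes auth method: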
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: demo # ClusterSecretStore is cluster-scoped, so it takes no namespace
spec:
  provider:
    vault:
      server: "https://vault.svc.spakl:8200"
      path: "pltfrm"
      version: "v2"
      caProvider:
        type: "Secret"
        # namespace is mandatory for ClusterSecretStore and not relevant for SecretStore
        namespace: "vault"
        name: "vault-ca-secret"
        key: "ca.crt"
      auth:
        kubernetes:
          mountPath: "platform"
          role: "read-only"
          serviceAccountRef:
            name: "token-reviewer"
            namespace: vault
          # secretRef:
          #   name: "token-reviewer-token"
          #   key: "token"
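Apply it, then check that ESO could reach Vault and validate the store:

# The store should report a Valid/Ready status once it can talk to Vault
kubectl get clustersecretstore demo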
External Secret
Let's go into the Vault UI and make a secret on the pltfrm mount under the path demo, with keys user and pass (the properties the ExternalSecret below references).
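If you prefer the terminal, the CLI equivalent is (the values are placeholders):

# Write a demo secret with the two keys the ExternalSecret maps
vault kv put pltfrm/demo user=you pass=password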
Now let's make and apply this resource:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: demo
  namespace: vault
spec:
  refreshInterval: "60s" # refresh rate
  secretStoreRef:
    name: demo
    kind: ClusterSecretStore
  target:
    name: demo # name of the Secret it will create
    creationPolicy: Owner
  data:
    - secretKey: username # key in the generated Secret
      remoteRef:
        key: demo # path to the secret from the mount on the secret store
        property: user # key in Vault
    - secretKey: password
      remoteRef:
        key: demo
        property: pass
Verify the secret was fetched successfully. The ExternalSecret lives in the vault namespace, so that's where the generated Secret lands:
kubectl get secret -n vault demo -o yaml
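You can also check the ExternalSecret itself; its status shows whether the last sync succeeded:

# Look for a Ready condition / SecretSynced status
kubectl -n vault get externalsecret demo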
You now have a working secrets management system
You are now starting to manage your secrets like a pro!
- 🟢 GitOps Friendly: YAML definition with no secrets.
- 🟢 Vault Integration: Vault encrypts secrets.
- 🟢 Automated Syncing: External Secrets Operator watches Vault.