GitOps: A Simple Approach to Using AWS Secrets Manager with Kubernetes
Introduction
One of the most common questions about Kubernetes (K8s) is how to manage secrets. In most organizations, engineers immediately think about security and Defense-in-Depth strategies: the Principle of Least Privilege, secure developer tools, and so on.
But how do we put this into code? Merely abiding by the principles above, locking down access for every secret to a specific application, is already intimidating.
This article aims to show a starting point for secrets management in K8s that uses simpler ideas and minimal tooling, so the code stays easy to write and reason about.
Iterative Development
We take on too much when we try to apply every security principle from the start. Instead of doing it all, we can deliver “good enough” secret pipelines first and improve them through iterative development.
Let’s take a look at apps retrieving secrets. The Principle of Least Privilege dictates that every app must have its own individual secret access path.
Team A
App 1 maps to /path/app1/only/secret
App 2 maps to /path/app2/only/secret
Team B
App 3 maps to /path/app3/only/secret
Given the pattern above, we have to consider many components: managing K8s Service Accounts, AWS IAM Roles and Policies, and deciding which team member handles which secrets. This friction causes developers to push back in favor of delivering business value despite the risks. They may even hard-code secrets into the source code repository out of convenience 😱.
It would be phenomenal to reach a fully locked-down state, but we have to start somewhere easier.
A Simpler Approach
The Idea
A better starting point is a shallower one. Teams typically have their own namespaces to work in. We can simply put secrets under namespaced paths, where only apps within the corresponding K8s namespace can read them.
Namespace 1 maps to /namespace1/*
Namespace 2 maps to /namespace2/*
WARNING: This already breaks the Principle of Least Privilege. Any application can pull any secret within the namespace. Yet, it is easier to reason about for developers who mostly care about business logic.
As for Service Accounts, every K8s Namespace comes with one called default. To avoid sprawl and favor simplicity, we will just use it.
The Tools
Terraform
We leverage the power of open-source Terraform modules. For example, the AWS modules give us the ability to easily provision networks and EKS clusters, and even perform Helm releases. They are continuously evolving to make our code leaner.
AWS Modules: https://registry.terraform.io/namespaces/terraform-aws-modules
AWS Secrets Manager and Config Provider for Secret Store CSI Driver
The provider is well documented here: https://github.com/aws/secrets-store-csi-driver-provider-aws
The key points are:
Kubernetes Service Accounts must map to AWS IAM roles.
AWS IAM Roles and Policies restrict access to secret paths.
The driver mounts K8s Volumes with secrets retrieved from AWS Secrets Manager. K8s Secrets are not created (though there is an option to do so).
Show Me The Code
Find my code here: https://github.com/xamroc/terraform-k8s-aws-secrets-manager
1. EKS Cluster
To start work on our plumbing, we need a cluster. The Terraform AWS modules collection includes an EKS module we can use to provision one with very little code.
# main.tf
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.23.0"

  cluster_name    = var.cluster_name
  cluster_version = var.cluster_version

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_group_defaults = {
    ami_type       = "AL2_x86_64"
    disk_size      = 50
    instance_types = ["t3.medium"]
  }

  eks_managed_node_groups = {
    eks_nodes = {
      min_size     = 1
      max_size     = 3
      desired_size = 2
    }
  }

  enable_irsa = true
}
The key configuration here is enable_irsa. This provides us with an OpenID Connect (OIDC) provider that allows Pod identities to map to IAM Roles.
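The cluster references module.vpc, which is not shown in this article. A minimal sketch of what it could look like with the companion VPC module (the version, CIDR ranges, and availability zones below are assumptions, not values from the repository):
# main.tf (sketch)
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.14.0"

  name = "${var.cluster_name}-vpc"
  cidr = "10.0.0.0/16" # assumed CIDR

  azs             = ["us-east-1a", "us-east-1b"] # assumed AZs
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]

  # Nodes live in private subnets and need NAT for egress
  enable_nat_gateway = true
}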
2. Helm - Deploy CSI Secrets Store Provider for AWS
Helm helps us manage Kubernetes applications. Complex applications like CSI Secrets Store can be easily deployed using this block:
# main.tf
resource "helm_release" "csi_secrets_store_provider_aws" {
  name       = "csi-secrets-store-provider-aws"
  repository = "https://aws.github.io/eks-charts"
  chart      = "csi-secrets-store-provider-aws"
  version    = "0.0.3"
  namespace  = "kube-system"

  depends_on = [
    module.eks
  ]
}
3. Mapping IAM Roles and Service Accounts
With our setup complete, we must establish a mapping between the two sides: AWS and our cluster.
Starting from the AWS side, IAM Policies and Roles are the ingredients we use to provide access to specific secrets.
Define IAM Policies
Let’s define our policy:
# secrets-csi-aws-setup.tf
resource "aws_iam_policy" "secrets_manager" {
for_each = toset(var.namespaces)
name = "${var.cluster_name}-${each.key}-secrets-manager-policy"
description = "AWS Secrets Manager Policy"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret"
]
Effect = "Allow"
Resource = "arn:aws:secretsmanager:*:${local.account_id}:secret:${var.cluster_name}/${each.key}/*"
},
]
})
}
This resource creates one policy per namespace, as shown in the for_each line.
In the Statement block, the Resource key defines which secret paths the policy can access. Notice how ${each.key} is the namespace name and “*” follows right after.
"arn:aws:secretsmanager:*:${local.account_id}:secret:${var.cluster_name}/${each.key}/*"
This definition allows any Role carrying the policy to access secrets under that namespace’s path.
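For example, with cluster_name set to cluster-aws-secrets-manager and the team-red namespace, the pattern expands to the following (the account ID is a placeholder):
arn:aws:secretsmanager:*:123456789012:secret:cluster-aws-secrets-manager/team-red/*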
Associate IAM Roles
Next, we need to associate these Policies to their respective Roles.
# secrets-csi-aws-setup.tf
module "aws_secrets_manager_role" {
  for_each = toset(var.namespaces)

  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  version = "5.1.0"

  role_name = "${var.cluster_name}-${each.key}-secrets-manager-role"

  oidc_providers = {
    cluster = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["${each.key}:default"]
    }
  }

  role_policy_arns = {
    secrets_manager = "arn:aws:iam::${local.account_id}:policy/${var.cluster_name}-${each.key}-secrets-manager-policy"
  }

  depends_on = [
    aws_iam_policy.secrets_manager
  ]
}
Notice how it also leverages the namespaces variable, where ${each.key} is used to map each Role to the Policy we created above.
role_policy_arns = {
  secrets_manager = "arn:aws:iam::${local.account_id}:policy/${var.cluster_name}-${each.key}-secrets-manager-policy"
}
Additionally, the OIDC provider from our EKS module comes into play here. We plug in the provider_arn so the Roles know which cluster’s Service Accounts are trusted to assume them.
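The namespaces variable driving all of these for_each loops is just a list of strings. A minimal declaration (a sketch; the description and default are assumptions) looks like:
# variables.tf (sketch)
variable "namespaces" {
  description = "K8s Namespaces to provision with secrets plumbing"
  type        = list(string)
  default     = []
}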
Annotate K8s Service Accounts
Finally, we move on to the cluster side. Our last step is to annotate each Namespace’s default Service Account with the appropriate IAM Role ARN.
# secrets-csi-aws-setup.tf
resource "kubernetes_annotations" "serviceaccount_default" {
  for_each = toset(var.namespaces)

  api_version = "v1"
  kind        = "ServiceAccount"

  metadata {
    name      = "default"
    namespace = each.key
  }

  annotations = {
    "eks.amazonaws.com/role-arn" = "arn:aws:iam::${local.account_id}:role/${var.cluster_name}-${each.key}-secrets-manager-role"
  }

  depends_on = [
    kubernetes_namespace.namespaces,
    module.aws_secrets_manager_role
  ]
}
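The depends_on above references kubernetes_namespace.namespaces, which is not shown in this article. A minimal sketch of that resource, assuming the Namespaces are created straight from the same variable:
# namespaces.tf (sketch)
resource "kubernetes_namespace" "namespaces" {
  for_each = toset(var.namespaces)

  metadata {
    name = each.key
  }
}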
Phew! That’s a lot of work, but the payoff is great. We can now create namespaces with the proper secrets plumbing just by defining variables.
# terraform.tfvars
namespaces = ["team-red", "team-blue"]
4. Testing It Out
Happy Path
Go to AWS Secrets Manager to create a secret named: cluster-aws-secrets-manager/team-red/secret
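If you prefer the CLI over the console, the equivalent command (the secret value is just a placeholder) is:
$ aws secretsmanager create-secret \
    --name "cluster-aws-secrets-manager/team-red/secret" \
    --secret-string "my-placeholder-value"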
Let’s create the SecretProviderClass, which tells the AWS provider which secrets to mount in the Pod. The parameters block shows that we are looking for the secret we just created.
# examples/k8s-manifests/secrets-provider-class.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: team-red-secret
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "cluster-aws-secrets-manager/team-red/secret"
        objectType: "secretsmanager"
Apply it to the team-red namespace.
$ kubectl apply -f ./examples/k8s-manifests/secrets-provider-class.yaml -n team-red
Next, we create a Deployment whose volumes block points to our SecretProviderClass:
# examples/k8s-manifests/deployment.yaml
...
volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "team-red-secret"
...
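The container spec must also mount that volume for the secret file to appear. A corresponding volumeMounts sketch (the container name and image are assumptions based on the nginx Deployment used in this example):
# examples/k8s-manifests/deployment.yaml (sketch)
...
containers:
  - name: nginx
    image: nginx
    volumeMounts:
      - name: secrets-store-inline
        mountPath: "/mnt/secrets-store"
        readOnly: true
...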
Create the Deployment.
$ kubectl apply -f ./examples/k8s-manifests/deployment.yaml -n team-red
You should see your secret displayed.
$ kubectl exec -it -n team-red $(kubectl get pods -n team-red | awk '/nginx-deployment/{print $1}' | head -1) -- cat /mnt/secrets-store/cluster-aws-secrets-manager_team-red_secret; echo
Unhappy Path
So what happens if we try to run the same commands for team-blue?
$ kubectl apply -f ./examples/k8s-manifests/secrets-provider-class.yaml -n team-blue
$ kubectl apply -f ./examples/k8s-manifests/deployment.yaml -n team-blue
You will notice that the Pods never reach a Running state. When we run kubectl describe pod <POD_NAME> -n team-blue, we get an error under the Events section that looks like this:
Events:
  Type     Reason       Age  From     Message
  ----     ------       ---  ----     -------
  Warning  FailedMount  39s  kubelet  MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod team-blue/nginx-deployment-8567c79cf6-qgb6d, err: rpc error: code = Unknown desc = Failed fetching secret cluster-aws-secrets-manager/team-red/secret: AccessDeniedException: User: arn:aws:sts::xxx:assumed-role/cluster-aws-secrets-manager-team-blue-secrets-manager-role/secrets-store-csi-driver-provider-aws is not authorized to perform: secretsmanager:GetSecretValue on resource: cluster-aws-secrets-manager/team-red/secret because no identity-based policy allows the secretsmanager:GetSecretValue action
  status code: 400, request id: xxx
The key statement being:
AccessDeniedException: User: ... is not authorized to perform: secretsmanager:GetSecretValue on resource: cluster-aws-secrets-manager/team-red/secret ...
Conclusion
And there you have it! Your very own secrets-to-namespaces starter kit. We can achieve a lot by starting simple. From here, the code can be tweaked to improve our security posture, and the idea can be applied with tools other than AWS Secrets Manager.
I like Akuity’s post showcasing the secret managers out there. Read it here: https://akuity.io/blog/how-to-manage-kubernetes-secrets-gitops/.
This approach may work with them too. What do you think?