RBAC (Role-Based Access Control) allows our Kubernetes clusters to provide the development team better visibility into, and access to, the development, staging, and production environments than it has ever had in the past. Using the command-line tool kubectl, developers can explore the network topology of running microservices, tail live server logs, proxy local ports directly to services, or even execute shells into running pods.
Kubernetes and GitlabCI are the central components of our DevOps toolchain and have increased our productivity by many multiples over the traditional approaches of the past.
If you don’t already have a Kubernetes cluster up and running, I highly suggest you read my article Production Hobby Cluster to get up and running with a custom, vendor-neutral, production-capable cluster.
TLDR: If you are reading this article because you received a token and the URL of a cluster from your Kubernetes administrator, you can skip ahead to the section Accessing a Remote Kubernetes Cluster.
Our Clusters (Overview)
Development and staging environments share a cluster across many clients and projects. On the production front, more extensive projects and clients get a dedicated cluster, and smaller projects might share a cluster.
We use Kubernetes namespaces to separate clients. All of our security revolves around namespaces; when RBAC is set up correctly, the context granted to a developer can only operate and view within the assigned namespace.
We grant developers read-level access to one common developer account per namespace on the development cluster. We tighten access to some resources on production.
We create a separate deployment account for our continuous integration/deployment solution. See my article, A Microservices Workflow with Golang and Gitlab CI, for a high-level view of our toolchain and how it integrates cleanly with the Kubernetes RBAC security model.
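As a hypothetical sketch of how such a deployment account gets used (the job layout, the kubectl image, and the KUBE_SERVER / KUBE_TOKEN variable names are assumptions for illustration, not our exact pipeline), a GitlabCI deploy job might configure kubectl with the deployment ServiceAccount token like this:

```yaml
# .gitlab-ci.yml (hypothetical fragment)
deploy:
  stage: deploy
  image: lachlanevenson/k8s-kubectl  # any image containing kubectl works
  script:
    # KUBE_SERVER and KUBE_TOKEN are assumed CI/CD variables holding the
    # cluster API endpoint and the deployment ServiceAccount token.
    - kubectl config set-cluster dev --server="$KUBE_SERVER" --insecure-skip-tls-verify=true
    - kubectl config set-credentials deployer --token="$KUBE_TOKEN"
    - kubectl config set-context ci --cluster=dev --user=deployer --namespace=the-project
    - kubectl config use-context ci
    - kubectl apply -f k8s/
```

Because the token belongs to a namespaced ServiceAccount, a compromised runner can only touch the one namespace its Role allows.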
These cluster setup steps assume you are a cluster administrator. If you are only looking to set up kubectl access on your local workstation to an existing cluster, skip ahead to the section Accessing a Remote Kubernetes Cluster; otherwise, I suggest you set up a Production Hobby Cluster to help you follow along.
To demonstrate team access control, we need some pods running in the namespace the-project. The example below uses a pre-built Docker container designed specifically for testing called txn2/ok. If you are curious about how to automate subsequent deployments using the free and open-source GitlabCI, check out A Microservices Workflow with Golang and Gitlab CI.
We don’t automate the initial configuration, so the following steps are part of the setup stage of any new project. It is easy to automate this process with the Kubernetes package manager Helm, but that would be overkill for most of our projects and would abstract away some welcome verbosity. Helm is a great tool, but better used adjacent to our initial development process. In fact, we use Helm to create charts from our initial development configurations, but that is outside the scope of this article.
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: the-project
  labels:
    client: mk.imti.co
    env: dev
```
The filename is not important. I use a numeric value to represent a suggested order in which to apply the configurations, followed by the kind of the Kubernetes object to create; however, this is not a rigid rule across all projects. The labels section is optional and only used to give additional capabilities to command-line and automated tools.
Create the Namespace Kubernetes object:

```shell
kubectl create -f 00-namespace.yml
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ok
  namespace: the-project
  labels:
    app: ok
    client: mk.imti.co
    env: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ok
  template:
    metadata:
      namespace: the-project
      labels:
        app: ok
        client: mk.imti.co
        env: dev
    spec:
      containers:
        - name: ok
          image: txn2/ok
          imagePullPolicy: Always
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SERVICE_ACCOUNT
              valueFrom:
                fieldRef:
                  fieldPath: spec.serviceAccountName
          ports:
            - name: ok-port
              containerPort: 8080
```
Create the Kubernetes Deployment object:
```shell
kubectl create -f 10-deployment-ok.yml
```
Kubernetes Deployments manage Pods. Consider Pods ephemeral; they can be moved from one node to another, destroyed, or re-created at any time. Services are persistent and give us a point to attach our network ingress. If you don’t already have ingress set up, you might want to read my article Ingress on Custom Kubernetes to get started. Ingress needs a service to attach to; however, you don’t need ingress to set up a service.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: ok
  namespace: the-project
  labels:
    app: ok
    client: mk.imti.co
    env: dev
spec:
  selector:
    app: ok
  ports:
    - protocol: "TCP"
      port: 8080
      targetPort: 8080
  type: ClusterIP
```
Create the Kubernetes Service object:
```shell
kubectl create -f 50-service-ok.yml
```
For the sake of completeness, we set up ingress for the txn2/ok service. We configure ingress to send HTTP requests for the domain ok.d4ldev.txn2.com to the txn2/ok service running in the namespace the-project.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ok
  namespace: the-project
  labels:
    app: ok
    client: mk.imti.co
    env: dev
spec:
  rules:
    - host: ok.d4ldev.txn2.com
      http:
        paths:
          - backend:
              serviceName: ok
              servicePort: 8080
            path: /
```
Create the ingress:
```shell
kubectl create -f 80-ingress-ok.yml
```
Adding TLS support is easy but out of the scope of this article. If you are interested in setting up free, automated TLS certificates using Let’s Encrypt on Kubernetes, check out my article: Let’s Encrypt, Kubernetes. Setting up Let’s Encrypt on Kubernetes should only need to be done once and takes about twenty minutes.
Setup Remote Access
If you skipped setting up the Example Microservice, note that this example limits access exclusively to the namespace the-project. You could do the same with the default namespace by simply not providing a namespace key in the configuration, or by setting it to default.
Namespaces are the principal delimiter for our security model. We create deployer and developer ServiceAccounts for each namespace, along with the Role and RoleBinding objects they use. The deployer account has write access and is used by kubectl executed from a Docker container running as a GitlabCI runner. The developer account has read access to the namespace.
```shell
# create the deployer ServiceAccount
kubectl create serviceaccount sa-deployer -n the-project

# create the developer ServiceAccount
kubectl create serviceaccount sa-developer -n the-project
```
You should see three service accounts, default, sa-deployer and sa-developer. Kubernetes automatically created the default ServiceAccount when we created the namespace.
```shell
kubectl get serviceaccounts -n the-project

NAME           SECRETS   AGE
default        1         30m
sa-deployer    1         3m
sa-developer   1         2m
```
You now have three secrets, assuming this is a new namespace:
```shell
kubectl get secrets -n the-project

NAME                       TYPE                                  DATA      AGE
default-token-8t2z6        kubernetes.io/service-account-token   3         30m
sa-deployer-token-qxfsq    kubernetes.io/service-account-token   3         3m
sa-developer-token-pg7m7   kubernetes.io/service-account-token   3         3m
```
You need the tokens generated for sa-deployer-token-qxfsq and sa-developer-token-pg7m7; of course, your secrets end in different random characters.
```shell
kubectl describe secret sa-deployer-token-qxfsq -n the-project

Name:         sa-deployer-token-qxfsq
Namespace:    the-project
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=sa-deployer
              kubernetes.io/service-account.uid=c2bb12cc-84b5-11e8-9c96-00163ec25389

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyYhbfEf7XaMa...REDACTED...
```
In the result above, I truncated the token for security and brevity; the unedited token is much longer. Copy this token someplace safe, as it is needed later; however, you can always fetch it again with kubectl describe secret. Retrieve the tokens from both sa-deployer-token-… and sa-developer-token-…
Role and RoleBinding
```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  namespace: the-project
  name: deployer
rules:
  - apiGroups: ["apps", "extensions"]
    resources: ["deployments", "configmaps", "pods", "secrets", "ingresses"]
    verbs: ["create", "get", "delete", "list", "update", "edit", "watch", "exec", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  namespace: the-project
  name: developer
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["get", "describe", "list", "watch", "exec"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer
  namespace: the-project
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    namespace: the-project
    name: sa-deployer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer
  namespace: the-project
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    namespace: the-project
    name: sa-developer
```
If you want to give your team even more access, you can enable the ability to port-forward a service or pod directly to their local workstation. Add the following to the end of the developer role.
```yaml
  - apiGroups:
      - '*'
    resources:
      - 'pods/exec'
      - 'pods/portforward'
      - 'services/portforward'
    verbs:
      - create
```
For example, a developer can then use kubectl port-forward to connect to an elasticsearch service running in the-project:

```shell
kubectl port-forward svc/elasticsearch 9200:9200 -n the-project
```
Create the Roles and RoleBindings:
```shell
kubectl create -f 90-RBAC.yml
```
Accessing a Remote Kubernetes Cluster
kubectl is a command-line tool for interacting with Kubernetes. If you work on macOS and use Homebrew, issue the following command:
```shell
brew install kubernetes-cli
```
Once installed, you have the command kubectl. If you are on another platform, follow the official documentation, Install and Set Up kubectl.
Configuring remote access requires a token. If you followed along with the Example Microservice or Setup Remote Access for an existing Kubernetes namespace, you can find the token by running kubectl describe on the Secret associated with an appropriate ServiceAccount.
You also need the IP address of a server running Kubernetes, and the ability to communicate to that IP. Some networks require remote users to first connect to a VPN.
Next, you create a context. A kubectl context is an association between a user and a cluster. Start by setting up the user and cluster in the steps below.
Add a Cluster to the kubectl Config
Give your cluster a descriptive name; this cluster configuration is available for use in multiple contexts, so it’s best to name it after its purpose. In this fictional example world, the IP 126.96.36.199 is a development cluster, so dev is a pretty good name.
```shell
kubectl config set-cluster dev --server=https://188.8.131.52:6443 --insecure-skip-tls-verify=true
```
Add User (credentials) to the kubectl Config
The user configuration holds the token and therefore should have a name descriptive of the access provided by the token. In the case of the instructions above in Setup Remote Access, this token is for a namespace on dev called the-project, so a good descriptive name is the-project-dev. In the example below, replace THETOKEN with the one retrieved in the example above or received from your administrator.
```shell
kubectl config set-credentials the-project-dev --token=THETOKEN
```
Add a Context to the kubectl Config
Finally, you tie the new dev cluster to the new the-project-dev user (credentials). In the case of our example, the token is joined to a namespace, specifically the-project, so it makes sense here to use the --namespace flag to tell kubectl to always use that namespace with this new context. However, providing a namespace is optional.
Name the context for the cluster and the access it provides. I find it useful to give it the same name as the user (credentials).
```shell
kubectl config set-context the-project-dev --cluster=dev --user=the-project-dev --namespace the-project
```
To use the new context, issue the command:
```shell
kubectl config use-context the-project-dev
```
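For reference, after the set-cluster, set-credentials, and set-context commands above, the relevant portion of your ~/.kube/config should look roughly like the following sketch (field order and any surrounding entries will vary):

```yaml
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://188.8.131.52:6443
  name: dev
contexts:
- context:
    cluster: dev
    namespace: the-project
    user: the-project-dev
  name: the-project-dev
current-context: the-project-dev
users:
- name: the-project-dev
  user:
    token: THETOKEN
```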
Get a list of all your configured contexts along with an indicator of the current context you are in:
```shell
kubectl config get-contexts
```
You now have access to the-project namespace on the dev cluster. Check out the kubectl Cheat Sheet for a quick list of useful commands.
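To sanity-check the new context and confirm the RBAC confinement described earlier (assuming the the-project-dev context and a developer token from the sections above), a few quick commands:

```shell
# Confirm which context kubectl is currently using.
kubectl config current-context

# List pods; the context supplies the the-project namespace automatically.
kubectl get pods

# Ask the API server what the current credentials are allowed to do:
# reads inside the assigned namespace should be allowed...
kubectl auth can-i list pods --namespace the-project
# ...while the same read outside the namespace should be denied.
kubectl auth can-i list pods --namespace kube-system
```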
Port Forwarding / Local Development
Check out kubefwd for a simple command line utility that bulk forwards services of one or more namespaces to your local workstation.
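A minimal sketch of that workflow, assuming kubefwd is installed and you are in the the-project context from above:

```shell
# kubefwd needs root privileges to add entries to /etc/hosts so that
# forwarded services are reachable by their in-cluster service names.
sudo kubefwd services -n the-project
```

While kubefwd runs, a request from your workstation to a name such as http://ok:8080 should reach the corresponding service in the cluster.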
- kubectl is the essential Kubernetes command-line administration utility.
- kubectl Cheat Sheet
- RBAC on Kubernetes for role-based access control
- Setup a Production Hobby Cluster, a production-capable Kubernetes on the cheap.
- A Microservices Workflow with Golang and Gitlab CI
- GitlabCI for Continuous Integration & Deployment
Kubernetes Team Access - RBAC for developers and QA: Role Based Access Control by Craig Johnston is licensed under a Creative Commons Attribution 4.0 International License.