Ingress on Custom Kubernetes

Setting up ingress-nginx on a custom cluster.

Posted by Craig Johnston on Tuesday, May 15, 2018

There are more than a handful of ways to set up port 80 and 443 web ingress on a custom Kubernetes cluster, specifically a bare-metal cluster. If you are looking to experiment or learn on a non-production cluster, but something truer to production than Minikube, I suggest you check out my previous article, Production Hobby Cluster, a step-by-step guide to setting up a custom, production-capable Kubernetes cluster.

This article builds on the Production Hobby Cluster guide. It closely follows the official ingress-nginx Installation Guide, with a few adjustments suitable for the Production Hobby Cluster, specifically the use of a DaemonSet rather than a Deployment, and leveraging hostNetwork and hostPort for the Pods in our DaemonSet. There are quite a few ingress-nginx examples in the official repository if you are looking for a more specific implementation.

By now you may be managing multiple clusters. kubectl is a great tool to use on your local workstation to manage remote clusters, and with little effort you can quickly point it to a new cluster and switch between them all day. Check out my article kubectl Context Multiple Clusters for a quick tutorial.


Set up a new namespace called ingress-nginx

Create using the configuration:

kubectl create -f
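The manifest behind this step amounts to a single Namespace object; a minimal sketch:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
```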

Default Backend

Next, create a Deployment and a Service for the default HTTP backend, which serves a 404 response for any request the ingress controller cannot match to a route.

Create using the configuration:

kubectl create -f
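A condensed sketch of what that configuration contains, assuming the resource names used by the official guide (the defaultbackend image tag here is an assumption; use whatever the official manifest pins):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
      - name: default-http-backend
        # image tag is an assumption; check the official manifest
        image: gcr.io/google_containers/defaultbackend:1.4
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
```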

Ingress Nginx ConfigMap

Create an empty ConfigMap for ingress-nginx.

Create using the configuration:

kubectl create -f
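The ConfigMap starts out empty; the controller reads nginx tuning options from it later. A minimal sketch, assuming the `nginx-configuration` name used by the official guide:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
```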

TCP Services ConfigMap

Create an empty ConfigMap for ingress-nginx TCP Services.

Create using the configuration:

kubectl create -f
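A minimal sketch, assuming the `tcp-services` name from the official guide. Entries added to this ConfigMap later map an external port to a `namespace/service:port` target:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
# data entries take the form:
#   "9000": "default/example-service:8080"
```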

UDP Services ConfigMap

Create an empty ConfigMap for ingress-nginx UDP Services.

Create using the configuration:

kubectl create -f
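The UDP ConfigMap mirrors the TCP one; a minimal sketch, assuming the `udp-services` name from the official guide:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
```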

RBAC - Ingress Roles and Permissions

Here we set up a ServiceAccount named nginx-ingress-serviceaccount, a ClusterRole named nginx-ingress-clusterrole, a Role named nginx-ingress-role, a RoleBinding named nginx-ingress-role-nisa-binding, and a ClusterRoleBinding named nginx-ingress-clusterrole-nisa-binding:

Create using the configuration:

kubectl create -f
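An abbreviated sketch of the RBAC objects, using the names listed above (the rule list is trimmed here; the official manifest grants additional permissions on endpoints, events, ingresses, and ingress status):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
- apiGroups: [""]
  resources: ["configmaps", "endpoints", "nodes", "pods", "secrets"]
  verbs: ["list", "watch"]
# ...additional rules elided; see the official manifest
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
```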


Ingress Nginx DaemonSet

Creating a DaemonSet ensures that we have one Ingress Nginx controller Pod running on each node. Having an Ingress controller on each node is crucial since we are using the host network and assigning the host ports 80 and 443 for HTTP and HTTPS ingress on each node. When a new node is added to the cluster, the DaemonSet ensures it gets an Ingress Nginx controller Pod.

Create using the configuration:

kubectl create -f
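The key adjustment from the official guide is converting the controller Deployment into a DaemonSet with hostNetwork and hostPort. A sketch of the important parts (the controller image version is an assumption; pin whatever release is current for you):

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      # use the node's network namespace so ports 80/443 bind on the host
      hostNetwork: true
      containers:
      - name: nginx-ingress-controller
        # version tag is an assumption
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
```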


Ingress Nginx Service

Add an ingress-nginx Service.

Create using the configuration:

kubectl create -f
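A minimal sketch of the Service, selecting the controller Pods labeled above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
```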


Make sure the default-http-backend Pod and the nginx-ingress-controller Pods are running; the nginx-ingress-controller should be running on every node.

kubectl get pods -n ingress-nginx -o wide

# example output
NAME                                   READY     STATUS    RESTARTS   AGE   NODE
default-http-backend-5c6d95c48-wbvw9   1/1       Running   0          1d    la2
nginx-ingress-controller-v44xz         1/1       Running   0          1d    la2
nginx-ingress-controller-wbb52         1/1       Running   0          1d    la3
nginx-ingress-controller-wjhcf         1/1       Running   7          1d    la1

Test each node by issuing a simple curl call:

# Example call
curl -v
*   Trying
* Connected to ( port 80 (#0)
> GET / HTTP/1.1
> Host:
> User-Agent: curl/7.54.0
> Accept: */*
< HTTP/1.1 404 Not Found
< Server: nginx/1.13.12
< Date: Thu, 17 May 2018 20:50:32 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 21
< Connection: keep-alive
* Connection #0 to host left intact
default backend - 404

In this case, the nginx-ingress-controller Pod running on that node responded correctly by passing the unknown route to the default-http-backend, which returned a basic 404 page. Issue a curl call to each of your nodes (or browse to them in a web browser) to test them.

Add an Ingress

We are finally at a spot where we can start routing ingress to services. If you don’t already have a service to route to, I recommend using the txn2 ok service. ok is specifically designed to give useful information when testing Pod deployments.

ok Deployment

Here we add a Deployment of ok with one replica.

Use the following command to add the Deployment:

kubectl create -f
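A sketch of a single-replica Deployment of the txn2/ok container (the namespace is left to you; port 8080 matches the servicePort used in the Ingress below):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ok
  labels:
    app: ok
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ok
  template:
    metadata:
      labels:
        app: ok
    spec:
      containers:
      - name: ok
        image: txn2/ok
        ports:
        - containerPort: 8080
```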

ok Service

Create an ok Service to front the new ok Deployment above.

Use the following command to add the Service:

kubectl create -f
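A minimal sketch of the Service, selecting the ok Pods by label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ok
  labels:
    app: ok
spec:
  selector:
    app: ok
  ports:
  - name: http
    port: 8080
    targetPort: 8080
```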

ok Ingress

Finally, we have the easy task of creating an Ingress route. The following is a minimal template, since you will need to point a domain name of your own at your cluster:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ok
  labels:
    app: ok
    system: test
spec:
  rules:
  - host: # your domain name here
    http:
      paths:
      - backend:
          serviceName: ok
          servicePort: 8080
        path: /

I will go over HTTPS and managing certificates in future articles. For now, you may want to check out the other ingress-nginx examples.

Port Forwarding / Local Development

Check out kubefwd for a simple command line utility that bulk forwards services of one or more namespaces to your local workstation.

If in a few days you find yourself setting up a cluster in Japan or Germany on Linode, and another two in Australia and France on Vultr, then you may have just joined the PHC (Performance Hobby Clusters) club. Some people tinker late at night on their truck; we benchmark and test the resilience of node failures on our overseas, budget Kubernetes clusters. It’s all about going big, on the cheap.



Ingress on Custom Kubernetes: Setting up ingress-nginx on a custom cluster. by Craig Johnston is licensed under a Creative Commons Attribution 4.0 International License.