How to Discover Non-Kubernetes Applications Using Kubernetes Service Discovery
How to use Ingress resources to discover non-Kubernetes services using NGINX and BIND DNS
Kubernetes provides excellent built-in service discovery through Kubernetes Services and uses CoreDNS to resolve them within the cluster. Your applications can run anywhere in the cluster, and Kubernetes Services take care of routing traffic to the correct IP. However, most organisations do not yet run 100% on Kubernetes; they have mixed workloads, with services running on both K8S and VMs. Some are on a migration path from VMs to K8S and do not want a big-bang migration. While discovering services within K8S is easy, services running on VMs are not discovered natively by Kubernetes, especially for East-West communication.
There are many ways to foster service discovery between Kubernetes resources and VM services and vice versa. Some might use a tool like Consul to achieve this, and that is a great way to do it. This article, however, describes how to use a BIND DNS server, native Kubernetes Ingress resources, and containerised NGINX Load Balancers to expose traffic to backend services running on VMs. The VMs can in turn discover Kubernetes services through the same BIND DNS server, making service discovery fully dynamic.

Prerequisites
You will need an Ingress controller running within the Kubernetes cluster. The exact configuration can vary based on your setup.
For details on an on-premise or bare-metal setup, see https://github.com/kubernetes/ingress-nginx
The following paragraphs describe an example setup that exposes the NGINX Ingress controller via a NodePort, with an NGINX Load Balancer in front of the NodePort to provide a single load-balanced endpoint.
It also exposes the DNS server as a NodePort, and the NGINX Load Balancer described in the nginx.conf file caters to the DNS configuration as well. (I will come back to this when I describe the BIND DNS setup.)
If you are using a cloud-provided Kubernetes setup, you can expose the Ingress controller and BIND DNS server as LoadBalancer services instead of NodePort services.
If an Ingress controller is provided natively, such as in GKE, you do not need the Ingress setup; just substitute the Load Balancer IP of your Ingress controller.
Either way, you should end up with a BIND DNS server Load Balancer IP (described later) and an Ingress controller Load Balancer IP.
If you want to use the example setup, run the following:
git clone https://github.com/bharatmicrosystems/kubernetes-nginx-service-discovery.git
kubectl apply -f kubernetes-nginx-service-discovery/ingress/mandatory.yaml
kubectl apply -f kubernetes-nginx-service-discovery/ingress/service-nodeport.yaml
You will then need to spin up a new VM, install NGINX on it, and copy the nginx.conf file to /etc/nginx/nginx.conf. The nginx.conf file assumes a three-node cluster with node01, node02, and node03 as the worker node hostnames; modify it according to your setup.
Steps for installing NGINX are described at https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-open-source/
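For orientation, a minimal nginx.conf for the Load Balancer VM could look like the sketch below. The NodePort numbers (32080 for Ingress HTTP and 32053 for DNS over UDP) are assumptions for illustration; use the NodePorts your services actually report, and adjust the node hostnames to your cluster.

```nginx
# Sketch of /etc/nginx/nginx.conf for the Load Balancer VM.
# Assumptions: worker nodes node01-03 (as in this article) and
# hypothetical NodePorts 32080 (Ingress HTTP) and 32053 (BIND, UDP).
events {}

http {
    upstream ingress_nodes {
        server node01:32080;
        server node02:32080;
        server node03:32080;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://ingress_nodes;
            proxy_set_header Host $host;
        }
    }
}

stream {
    upstream dns_nodes {
        server node01:32053;
        server node02:32053;
        server node03:32053;
    }
    server {
        listen 53 udp;
        proxy_pass dns_nodes;
    }
}
```

The stream block, which load-balances UDP/53 to the DNS NodePort, requires NGINX built with the ngx_stream_core_module.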
Setting up the Bind DNS Server
BIND is one of the most popular open-source DNS servers, and I have containerised it so that it can run within Kubernetes and provide service discovery for both K8S and non-K8S services.
On-Premise setup
git clone https://github.com/bharatmicrosystems/kubernetes-nginx-service-discovery.git
sed -i "s/example.com/<YOUR DOMAIN>/g" kubernetes-nginx-service-discovery/bind-dns-server/dns-server.yaml
sed -i "s/dns_loadbalancer_ip_value/<YOUR DNS LOAD BALANCER IP>/g" kubernetes-nginx-service-discovery/bind-dns-server/dns-server.yaml
sed -i "s/ingress_loadbalancer_ip_value/<YOUR INGRESS LOAD BALANCER IP>/g" kubernetes-nginx-service-discovery/bind-dns-server/dns-server.yaml
kubectl apply -f kubernetes-nginx-service-discovery/bind-dns-server/dns-server.yaml
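Conceptually, what dns-server.yaml sets up boils down to a zone file like the sketch below (an assumption about the shape, not the repo's literal records): a wildcard A record sends every name under your domain to the Ingress controller, which then routes by Host header.

```
; Hedged sketch of the zone served by BIND for <YOUR DOMAIN>.
; The wildcard record is what routes *.<YOUR DOMAIN> to the Ingress controller.
$TTL 60
@    IN  SOA  ns.example.com. admin.example.com. ( 1 3600 600 86400 60 )
@    IN  NS   ns.example.com.
ns   IN  A    <YOUR DNS LOAD BALANCER IP>
*    IN  A    <YOUR INGRESS LOAD BALANCER IP>
```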
Cloud setup
git clone https://github.com/bharatmicrosystems/kubernetes-nginx-service-discovery.git
Edit the YAML file to include an internal Load Balancer based on your cloud provider. Below is an example for Google Kubernetes Engine.
$ vim kubernetes-nginx-service-discovery/bind-dns-server/dns-server-lb-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: dns-server
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: dns-server
spec:
  type: LoadBalancer
  ports:
  - port: 53
    targetPort: 53
    protocol: UDP
  selector:
    app: dns-server
Apply the configuration:
kubectl apply -f kubernetes-nginx-service-discovery/bind-dns-server/dns-server-lb-service.yaml
Wait for the Load Balancer service to come up and show the Load Balancer IP. Once you have obtained the Load Balancer IP, make a note of it, as we will need it in subsequent steps.
sed -i "s/example.com/<YOUR DOMAIN>/g" kubernetes-nginx-service-discovery/bind-dns-server/dns-server-deployment.yaml
sed -i "s/dns_loadbalancer_ip_value/<YOUR DNS LOAD BALANCER IP>/g" kubernetes-nginx-service-discovery/bind-dns-server/dns-server-deployment.yaml
sed -i "s/ingress_loadbalancer_ip_value/<YOUR INGRESS LOAD BALANCER IP>/g" kubernetes-nginx-service-discovery/bind-dns-server/dns-server-deployment.yaml
kubectl apply -f kubernetes-nginx-service-discovery/bind-dns-server/dns-server-deployment.yaml
Add an entry to the CoreDNS configuration for the BIND DNS server, so that DNS queries from services running within K8S are also forwarded to the BIND DNS server.
kubectl edit -n kube-system cm/coredns

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 172.16.0.1
        cache 30
        loop
        reload
        loadbalance
    }
+   <YOUR DOMAIN HERE>:53 {
+       errors
+       cache 30
+       forward . <YOUR DNS LOAD BALANCER IP>
+   }
On the VM where you are running your service, run the following to update /etc/resolv.conf:
sed -i "1 i\nameserver <YOUR DNS LOAD BALANCER IP>\noptions timeout:1" /etc/resolv.conf
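If you want to see what that sed does before touching the real /etc/resolv.conf, you can rehearse it on a scratch copy (10.0.0.53 is a made-up DNS Load Balancer IP; GNU sed assumed):

```shell
# Rehearse the resolv.conf edit on a scratch file with one existing entry.
printf 'nameserver 8.8.8.8\n' > /tmp/resolv.conf.demo

# Same sed as above, with the placeholder replaced by a made-up IP.
sed -i "1 i\nameserver 10.0.0.53\noptions timeout:1" /tmp/resolv.conf.demo

# The new nameserver and options lines are prepended above the original entry.
cat /tmp/resolv.conf.demo
```

Because the new nameserver is first, the VM queries the BIND server before any other resolver, while keeping its existing entries as a fallback.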
Setting up the NGINX Load Balancer
sed -i "s/example.com/<YOUR DOMAIN>/g" kubernetes-nginx-service-discovery/nginx-load-balancer/nginx.yaml
kubectl apply -f kubernetes-nginx-service-discovery/nginx-load-balancer/nginx.yaml
The above commands spin up NGINX Load Balancers that read the Ingress resources and configure themselves to point to the backend endpoints specified in the backend-host and backend-port labels. Ingress resources like the example YAML below are generated automatically by the Pod when agents running on the backend machines talk to the REST API running within the Pod.
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-backend-ingress
  annotations:
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
  labels:
    backend-host: "nginx-backend-host1,nginx-backend-host2"
    backend-port: "80"
spec:
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-load-balancer
          servicePort: 80
Installing the Agent
Agents run on the backend servers; each one watches a configuration directory and tells the nginx-load-balancer resources that the services described in its JSON configuration files are available on the configured host and port. The agent enables dynamic service discovery: you are free to move your workloads to any VM that has an agent running, as long as the configuration JSON is present in the correct directory on that VM.
Choose a configuration directory and run the following:
mkdir -p <configuration_dir>
./kubernetes-nginx-service-discovery/agent/setup.sh <configuration_dir> <domain>
The above commands start the agent as a systemd service. You are then ready to create your configuration JSON files within the configuration directory.
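For reference, the unit that setup.sh installs will look roughly like the following. This is a hypothetical sketch: the binary path, unit name, and flags below are illustrative, and the real unit is generated by the repo's setup.sh.

```
[Unit]
Description=Kubernetes NGINX service discovery agent
After=network-online.target

[Service]
# Illustrative path and flags; setup.sh generates the actual unit.
ExecStart=/usr/local/bin/nginx-sd-agent --config-dir <configuration_dir> --domain <domain>
Restart=on-failure

[Install]
WantedBy=multi-user.target
```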
Below is an example configuration:
$ vim <configuration_dir>/example.json
[
  {
    "name": "nginx-backend-ingress-example",
    "ingress_host": "nginx.example.com",
    "server_host": "nginx-backend",
    "port": "80"
  }
]
You can have multiple configuration files, and multiple entries within a configuration file. The recommendation is one configuration file per service, and one entry per server_host/port combination. The server_host must be reachable from the Kubernetes cluster worker nodes.
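Since a malformed JSON file can silently break discovery for a service, it is worth validating each file before dropping it into the configuration directory. A simple sketch using python3's built-in json.tool (the file below mirrors the example above; names are illustrative):

```shell
# Write a candidate config to a scratch location first.
cat > /tmp/example.json <<'EOF'
[
  {
    "name": "nginx-backend-ingress-example",
    "ingress_host": "nginx.example.com",
    "server_host": "nginx-backend",
    "port": "80"
  }
]
EOF

# json.tool exits non-zero on invalid JSON, so this only prints on success.
python3 -m json.tool /tmp/example.json > /dev/null && echo "valid JSON"
```

Only after the file validates would you move it into the agent's configuration directory.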
Testing the setup
Install NGINX within the backend VM (see https://www.nginx.com/resources/wiki/start/topics/tutorials/install/) and create a configuration JSON file.
$ vim <configuration_dir>/example.json
[
{
"name": "nginx-backend-ingress-example",
"ingress_host": "nginx.<YOUR DOMAIN>",
"server_host": "<HOST NAME OF THE SERVER>",
"port": "80"
}
]
Run the following on the backend VM to check that it resolves the DNS name and reaches itself through the Kubernetes route:
$ curl http://nginx.<YOUR DOMAIN>
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Try running the same command from within a K8S container using kubectl exec (for example, kubectl exec -it <POD NAME> -- curl http://nginx.<YOUR DOMAIN>). You should get the same output, which proves that service discovery works both within the K8S cluster and beyond it.
Further Reading
Thank you for reading! I hope you enjoyed the story. If you are interested in learning more, check out the following stories, as they might be of interest to you: