Welcome to the first post in the new “quick how-to” format!

In this article, we are going to deploy our first NGINX instance inside a Kubernetes cluster, to start getting confident with basic resources and their usage.

What do we need?

First of all, what we need is to list our bill of materials:

  • A working Kubernetes cluster; it doesn’t matter if it’s a real cluster, a KinD deployment, or minikube.
  • A text editor
  • Your favourite console
  • A cup of coffee

How do we create a deployment of NGINX in our kubernetes cluster?

Let’s start from the beginning and sketch a logical path to follow.

We need to:

Deploy the application –> Verify it is locally working –> Expose it to the world

For step one, we need to create the manifest for our resource. We are going to use a Deployment; let’s name the file my-nginx-deployment.yml:

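The original manifest is not reproduced here, but a minimal sketch of what my-nginx-deployment.yml could look like follows (the image tag and the app: my-nginx label are assumptions, not taken from the original):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx-deployment
  labels:
    app: my-nginx        # assumed label, reused later by the Service selector
spec:
  replicas: 1            # a single pod will serve our application
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest   # assumed tag; pin a version in real setups
          ports:
            - containerPort: 80
```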

If you are curious enough, you can see among the labels and metadata that we are deploying nothing more than a container image in a single-replica “fashion”: a single pod will be serving our application.

Now we can jump into our cluster and start by creating a namespace for our wonderful application:
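A namespace takes a single kubectl command (the namespace name my-nginx is an assumption; pick one that suits you):

```shell
# Create a dedicated namespace for the application
kubectl create namespace my-nginx
```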

Then just apply the kubernetes deployment resource that you created for NGINX:
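For example, assuming the my-nginx namespace created above:

```shell
# Apply the Deployment manifest in our namespace
kubectl apply -f my-nginx-deployment.yml -n my-nginx
```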

And verify that the corresponding pod is up and running:
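Something along these lines, again assuming the my-nginx namespace:

```shell
# The pod should show STATUS=Running and READY=1/1
kubectl get pods -n my-nginx
```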

As you can see, the pod is working correctly! Let’s verify that the NGINX pod can serve requests, and we do it by executing a simple curl inside the pod!

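The check could look like this (namespace and resource names are assumptions from the earlier steps; if curl is not available in your image, wget works as well):

```shell
# Run curl inside the pod backing our Deployment;
# a default NGINX welcome page in the output means it is serving correctly
kubectl exec -n my-nginx deploy/my-nginx-deployment -- curl -s http://localhost:80
```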

As a result, we can easily confirm that it actually can serve them!

Let’s expose our NGINX deployment!

Now we have our instance up and running, but at this point it is only accessible via the pod IP, which, in local setups, is visible only to the cluster machines.

To this end, we are going to use another resource, the Service: specifically, a NodePort Service that makes our application accessible through our nodes’ IPs, which are reachable from our network!

In future articles, we will also talk about Ingresses, which link our services to a hostname so we can access them without knowing the IPs of our nodes!

So, here’s the definition of our my-nginx-service.yml:

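The original file is not reproduced here, but a minimal sketch of my-nginx-service.yml, assuming the app: my-nginx label on the pods and the node port 30080 used later in this article, would be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-service
spec:
  type: NodePort
  selector:
    app: my-nginx       # must match the labels on the NGINX pods
  ports:
    - port: 80          # port the Service listens on
      targetPort: 80    # port exposed by the pods
      nodePort: 30080   # port opened on every node
```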

There are some important details you need to know about this:

  • the NodePort type is translated on your nodes into iptables rules that route the traffic coming to the specified port of the node (via DNAT in the PREROUTING chain) to the pod IP(s)
  • “port” and “targetPort” represent, respectively, the port the Service listens on and the port exposed by the pods
  • the “selector” tag is the crucial part, as it ‘selects’, using the labels, the pods that will be linked to the Service and whose IPs are added as its endpoints

Let’s apply the service!
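Assuming the file and namespace names used above:

```shell
# Apply the NodePort Service in the same namespace as the Deployment
kubectl apply -f my-nginx-service.yml -n my-nginx
```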

Verify that it is correctly bound to our NGINX pod IP:
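One way to check, assuming the names used above, is to list the Service endpoints and compare them with the pod IP:

```shell
# The ENDPOINTS column should list the pod IP on port 80
kubectl get endpoints my-nginx-service -n my-nginx
# Compare with the IP column of the pod itself
kubectl get pods -n my-nginx -o wide
```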

To recap, we have now:

  • A working pod serving nginx
  • A service that is correctly bound to port 30080 of our worker nodes
  • A service that serves the endpoint for our pod

What we need to do now is retrieve the IP of our nodes and check if it is working!
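The node IPs can be listed with a single command (on a local network, the INTERNAL-IP column is the one to use):

```shell
# Show the nodes along with their IP addresses
kubectl get nodes -o wide
```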

Does it work?

Now we can finally verify if our nginx deployment is reachable from outside the cluster:
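For example, from a machine on the same network as the nodes (replace <node-ip> with one of the IPs retrieved above):

```shell
# Hitting the NodePort should return the NGINX welcome page
curl http://<node-ip>:30080
```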

Congrats, now you can access your nginx instance and play around with it.

If you want to check out the basic resources that we used together, you can clone the repository that contains them.

If you liked the article, feel free to share it with your colleagues and friends, and leave your feedback in the comments!