
How to Deploy an Application on Your Kubernetes Cluster
Now that we've set up our Kubernetes cluster using Play with Kubernetes, it's time to deploy the application and make it accessible over the internet.
Understanding Imperative vs. Declarative Approaches in Kubernetes
Before we proceed, it's essential to grasp the two primary methods for managing resources in Kubernetes: Imperative and Declarative.
Imperative Approach
In the imperative approach, you directly issue commands to the Kubernetes API to create or modify resources. Each command specifies the desired action, and Kubernetes executes it immediately.
Imagine telling someone, "Turn on the light." You're giving a direct command, and the action happens right away. Similarly, with imperative commands, you instruct Kubernetes step-by-step on what to do.
Example:
To create a Pod running an NGINX container, run the command below in the terminal of the master node:
kubectl run nginx-pod --image=nginx
Now wait a few seconds and run the command below to check the status of the pod:
kubectl get pods
You should get a response similar to this:
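(Illustrative output; exact values such as AGE will differ on your cluster.)
NAME        READY   STATUS    RESTARTS   AGE
nginx-pod   1/1     Running   0          15s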
Now let's expose our Pod to the internet by creating a Service. Run the command below to expose the Pod:
kubectl expose pod nginx-pod --type=NodePort --port=80
To get the Service's cluster IP address so we can access our Pod, run the command below:
kubectl get svc
The command displays the IP address from which we can access our service. You should get an output similar to this:
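(Illustrative output; your CLUSTER-IP, NodePort, and AGE values will differ.)
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        15m
nginx-pod    NodePort    10.98.108.173   <none>        80:31166/TCP   45s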
Now, copy the IP address of the nginx-pod service and run the command below to make a request to your Pod:
curl <YOUR-SERVICE-IP-ADDRESS>
Replace the <YOUR-SERVICE-IP-ADDRESS> placeholder with the IP address of your nginx-pod service. In my case, it's 10.98.108.173.
You should get a response from your nginx-pod Pod:
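The response is the HTML of the default NGINX welcome page, which includes lines like these:
<title>Welcome to nginx!</title>
...
<h1>Welcome to nginx!</h1>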
Note that we can't access the Pod from the internet (that is, from our browser), because our cluster isn't connected to a cloud provider like AWS or Google Cloud that could provide us with an external load balancer.
Now let's try doing the same thing, but using the declarative method.
Declarative Approach
So far, we used the imperative approach, where we typed commands like kubectl run or kubectl expose directly into the terminal to make Kubernetes do something immediately.
But Kubernetes has another (and often better) way to do things: the declarative approach.
What Is the Declarative Approach?
Instead of giving Kubernetes instructions step-by-step like a chef in a kitchen, you give it a full recipe: a file that describes exactly what you want (for example, what app to run, how many copies of it, how to expose it, and so on).
This recipe is written in a file called a manifest.
What's a Manifest?
A manifest is a file (usually written in YAML format) that describes a Kubernetes object, like a Pod, a Deployment, or a Service.
It's like writing down what you want, handing it over to Kubernetes, and saying: "Hey, please make sure this exists exactly how I described it."
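For example, here is a minimal sketch of a manifest describing a single Pod, roughly equivalent to the kubectl run command we used earlier (the name my-pod is just an illustration):
apiVersion: v1
kind: Pod
metadata:
  name: my-pod   # example name, not used later in this guide
spec:
  containers:
    - name: nginx
      image: nginx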
We'll use two manifests:
- One to deploy our application
- Another to expose it to the internet
Let's walk through it!
Step 1: Clone the GitHub Repo
We already have a GitHub repo that contains the two manifest files we need. Let's clone it into our Kubernetes environment.
Run this in the terminal (on your master node):
git clone https://github.com/onukwilip/simple-kubernetes-app
Now, let's go into the folder:
cd simple-kubernetes-app
You should see two files:
- deployment.yaml
- service.yaml
Step 2: Understanding the Deployment Manifest (deployment.yaml)
This manifest will tell Kubernetes to deploy our app and ensure it's always running.
Here's what's inside:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
Now, let's break this down:
- apiVersion: apps/v1: This tells Kubernetes which version of the API we're using to define this object.
- kind: Deployment: This means we're creating a Deployment (a controller that manages Pods).
- metadata.name: We're giving our Deployment a name: nginx-deployment.
- spec.replicas: 3: We're telling Kubernetes: "Please run 3 copies (replicas) of this app."
- selector.matchLabels: Kubernetes will use this label to find which Pods this Deployment is managing.
- template.metadata.labels & spec.containers: This section describes the Pods that the Deployment should create: each Pod will run a container using the official nginx image.
In plain terms: We're asking Kubernetes to create and maintain 3 copies of an app that runs NGINX, and automatically restart them if any fails.
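Once the Deployment is running (we'll apply it in Step 5), you can see this self-healing for yourself by deleting one of its Pods and watching Kubernetes replace it. Substitute one of your actual Pod names for the placeholder:
kubectl delete pod <ONE-OF-THE-NGINX-POD-NAMES>
kubectl get pods
You'll briefly see a new Pod being created so that 3 replicas keep running.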
Step 3: Understanding the Service Manifest (service.yaml)
This file tells Kubernetes to expose our NGINX app to the outside world using a Service.
Here's the file; let's break this down, too:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
- apiVersion: v1: We're using version 1 of the Kubernetes API.
- kind: Service: We're creating a Service object.
- metadata.name: nginx-service: Giving it a name.
- spec.type: NodePort: We're exposing it through a port on the node (so we can access it via the node's IP address).
- selector.app: nginx: This tells Kubernetes to connect this Service to Pods with the label app: nginx.
- ports.port and targetPort: The Service will listen on port 80 and forward traffic to port 80 on the Pod.
In plain terms: This file says, "Expose our NGINX app through the cluster's network so we can access it from the outside world."
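Once both manifests are applied, one way to confirm that the Service's selector actually matched our Pods is to list its endpoints:
kubectl get endpoints nginx-service
kubectl describe service nginx-service
The first command shows the Pod IPs the Service forwards traffic to; the second shows the full Service configuration, including the selector and NodePort.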
Step 4: Clean Up Previous Resources
If you're still running the Pod and Service we created using the imperative approach, let's delete them to avoid conflicts:
kubectl delete pod nginx-pod
kubectl delete service nginx-pod
Step 5: Apply the Manifests
Now let's deploy the NGINX app and expose it, this time using the declarative way.
From inside the simple-kubernetes-app folder, run:
kubectl apply -f deployment.yaml
Then:
kubectl apply -f service.yaml
This will create the Deployment and the Service described in the files.
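As a shortcut, kubectl apply can also take several files, or a whole folder, at once:
kubectl apply -f deployment.yaml -f service.yaml
kubectl apply -f .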
Step 6: Check That It's Running
Let's see if the Pods were created:
kubectl get pods
You should see 3 Pods running!
And let's check the Service:
kubectl get svc
Look for the nginx-service entry. You'll see something like:
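(Illustrative output; your CLUSTER-IP and NodePort will differ.)
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-service   NodePort   10.105.44.12   <none>        80:30001/TCP   1m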
Note the NodePort (for example, 30001), as we'll use it to access the app.
Step 7: Access the App
You can now send a request to your app like this:
curl http://<YOUR-NODE-IP>:<NODE-PORT>
Note: Replace <YOUR-NODE-IP> with the IP of your master node (you'll usually find this in Play With Kubernetes at the top of your terminal), and <NODE-PORT> with the NodePort shown in the kubectl get svc output.
You should see the HTML content of the NGINX welcome page printed out.
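If you'd rather not copy the NodePort by hand, one way to grab it is with kubectl's jsonpath output (you still need to fill in your node's IP):
NODE_PORT=$(kubectl get svc nginx-service -o jsonpath='{.spec.ports[0].nodePort}')
curl http://<YOUR-NODE-IP>:$NODE_PORT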
Why Declarative Is Better (In Most Cases)
- Reusable: You can use the same files again and again.
- Version-controlled: You can push these files to GitHub and track changes over time.
- Fixes mistakes easily: Want to change 3 replicas to 5? Just update the file and re-apply (see the example after this list).
- Easier to maintain: Especially when you have many resources to manage.
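For instance, scaling from 3 to 5 replicas is just an edit and a re-apply:
# In deployment.yaml, change "replicas: 3" to "replicas: 5", then:
kubectl apply -f deployment.yaml
kubectl get pods
You should now see 5 Pods instead of 3.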