
How to Deploy an Application on Your Kubernetes Cluster
Now that we've set up our Kubernetes cluster using Play with Kubernetes, it's time to deploy the application and make it accessible over the internet.
🧠 Understanding Imperative vs. Declarative Approaches in Kubernetes
Before we proceed, it's essential to grasp the two primary methods for managing resources in Kubernetes: Imperative and Declarative.
🖋️ Imperative Approach
In the imperative approach, you directly issue commands to the Kubernetes API to create or modify resources. Each command specifies the desired action, and Kubernetes executes it immediately.
Imagine telling someone, "Turn on the light." You're giving a direct command, and the action happens right away. Similarly, with imperative commands, you instruct Kubernetes step-by-step on what to do.
Example:
To create a pod running an NGINX container, run the below command in the terminal of the master node:
kubectl run nginx-pod --image=nginx
Now wait a few seconds and run the command below to check the status of the pod:
kubectl get pods
You should get a response similar to this:

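Roughly speaking, the output will look like this (the Pod may briefly show a ContainerCreating status before it reaches Running, and the age will differ):
NAME        READY   STATUS    RESTARTS   AGE
nginx-pod   1/1     Running   0          20s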
Now let’s expose our Pod to the internet by creating a Service. Run the command below to expose the Pod:
kubectl expose pod nginx-pod --type=NodePort --port=80
To get the IP address of the Cluster so we can access our Pod, run the command below:
kubectl get svc
The command displays the IP address from which we can access our service. You should get an output similar to this:

Now, copy the IP address for the nginx-pod service and run the command below to make a request to your Pod:
curl <YOUR-SERVICE-IP-ADDRESS>
Replace the <YOUR-SERVICE-IP-ADDRESS> placeholder with the IP address of your nginx-pod service. In my case, it’s 10.98.108.173.
You should get a response from your nginx-pod Pod:

We couldn’t access the Pod from the internet (that is, from our browser) because our cluster isn’t connected to a cloud provider like AWS or Google Cloud that could provide us with an external load balancer.
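For comparison, if this cluster were running on a cloud provider, we could expose the same Pod externally with a LoadBalancer Service instead of a NodePort one. The command below is just a sketch of what that would look like - on Play with Kubernetes the external IP would simply stay pending, because there is no cloud load balancer to provision:
kubectl expose pod nginx-pod --type=LoadBalancer --port=80
On AWS or Google Cloud, the provider would then assign a public IP that shows up under the EXTERNAL-IP column of kubectl get svc.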
Now let’s try doing the same thing but using the Declarative method.
🚀 Declarative Approach
So far, we used the imperative approach, where we typed commands like kubectl run or kubectl expose directly into the terminal to make Kubernetes do something immediately.
But Kubernetes has another (and often better) way to do things: the declarative approach.
🧾 What Is the Declarative Approach?
Instead of giving Kubernetes instructions step-by-step like a chef in a kitchen, you give it a full recipe - a file that describes exactly what you want (for example, what app to run, how many copies of it, how to expose it, and so on).
This recipe is written in a file called a manifest.
📘 What’s a Manifest?
A manifest is a file (usually written in YAML format) that describes a Kubernetes object - like a Pod, a Deployment, or a Service.
It’s like writing down what you want, handing it over to Kubernetes, and saying: “Hey, please make sure this exists exactly how I described it.”
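For instance, a minimal manifest for the same NGINX Pod we created imperatively earlier could look like the sketch below. We won’t apply this particular file - it’s only here to show the idea before we move on to the manifests we’ll actually use:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
    - name: nginx
      image: nginx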
We’ll use two manifests:
- One to deploy our application
- Another to expose it to the internet
Let’s walk through it!
📁 Step 1: Clone the GitHub Repo
We already have a GitHub repo that contains the two manifest files we need. Let’s clone it into our Kubernetes environment.
Run this in the terminal (on your master node):
git clone https://github.com/onukwilip/simple-kubernetes-app
Now, let’s go into the folder:
cd simple-kubernetes-app
You should see two files:
- deployment.yaml
- service.yaml
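If you want to double-check, list the folder’s contents:
ls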
📦 Step 2: Understanding the Deployment Manifest (deployment.yaml)
This manifest will tell Kubernetes to deploy our app and ensure it’s always running.
Here’s what’s inside:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
Now, let’s break this down:
- apiVersion: apps/v1: This tells Kubernetes which version of the API we’re using to define this object.
- kind: Deployment: This means we’re creating a Deployment (a controller that manages Pods).
- metadata.name: We’re giving our Deployment a name: nginx-deployment.
- spec.replicas: 3: We’re telling Kubernetes: “Please run 3 copies (replicas) of this app.”
- selector.matchLabels: Kubernetes will use this label to find which Pods this Deployment is managing.
- template.metadata.labels & spec.containers: This section describes the Pods that the Deployment should create - each Pod will run a container using the official nginx image.
✅ In plain terms: We're asking Kubernetes to create and maintain 3 copies of an app that runs NGINX, and automatically restart them if any fails.
🌐 Step 3: Understanding the Service Manifest (service.yaml)
This file tells Kubernetes to expose our NGINX app to the outside world using a Service.
Here’s the file - let’s break this down, too:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
- apiVersion: v1: We’re using version 1 of the Kubernetes API.
- kind: Service: We’re creating a Service object.
- metadata.name: nginx-service: Giving it a name.
- spec.type: NodePort: We’re exposing it through a port on the node (so we can access it via the node’s IP address).
- selector.app: nginx: This tells Kubernetes to connect this Service to Pods with the label app: nginx.
- ports.port and targetPort: The Service will listen on port 80 and forward traffic to port 80 on the Pod.
✅ In plain terms: This file says, “Expose our NGINX app through the cluster’s network so we can access it from the outside world.”
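One more detail: because we didn’t specify a nodePort in the manifest, Kubernetes will pick a random port for us from the 30000–32767 range. If you ever want a fixed, predictable port, you could pin it yourself in the ports section (30080 below is just an example value; any free port in that range works):
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080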
🧹 Step 4: Clean Up Previous Resources
If you’re still running the Pod and Service we created using the imperative approach, let’s delete them to avoid conflicts:
kubectl delete pod nginx-pod
kubectl delete service nginx-pod
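You can confirm they’re gone by listing the Pods and Services again - only the default kubernetes Service should be left:
kubectl get pods
kubectl get svc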
📥 Step 5: Apply the Manifests
Now let’s deploy the NGINX app and expose it - this time using the declarative way.
From inside the simple-kubernetes-app folder, run:
kubectl apply -f deployment.yaml
Then:
kubectl apply -f service.yaml
This will create the Deployment and the Service described in the files. 🎉
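By the way, since both manifests live in the same folder, you could also apply them in a single step by pointing kubectl at the whole directory:
kubectl apply -f .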
🔍 Step 6: Check That It’s Running
Let’s see if the Pods were created:
kubectl get pods
You should see 3 Pods running!
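You can also check the Deployment itself - it should report that all 3 replicas are ready:
kubectl get deployment nginx-deployment
And if you’re curious about the self-healing part, try deleting one of the Pods (use any Pod name from the kubectl get pods output) and list the Pods again - the Deployment will immediately create a replacement:
kubectl delete pod <ONE-OF-THE-NGINX-POD-NAMES>
kubectl get pods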
And let’s check the service:
kubectl get svc
Look for the nginx-service. You’ll see something like:

Note the NodePort (for example, 30001) as we’ll use it to access the app.
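If you’d rather not scan the table for it, a jsonpath query will print just the NodePort (purely a convenience - it’s the same value shown above):
kubectl get svc nginx-service -o jsonpath='{.spec.ports[0].nodePort}'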
🌍 Step 7: Access the App
You can now send a request to your app like this:
curl http://<YOUR-NODE-IP>:<NODE-PORT>
Note
Replace <YOUR-NODE-IP> with the IP of your master node (you’ll usually find this in Play With Kubernetes at the top of your terminal), and <NODE-PORT> with the NodePort shown in the kubectl get svc command.

You should see the HTML content of the NGINX welcome page printed out.

🆚 Why Declarative Is Better (In Most Cases)
- 🔁 Reusable: You can use the same files again and again.
- 📦 Version-controlled: You can push these files to GitHub and track changes over time.
- 🛠️ Fixes mistakes easily: Want to change 3 replicas to 5? Just update the file and re-apply (see the example after this list)!
- 🧠 Easier to maintain: Especially when you have many resources to manage.
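For example, scaling from 3 replicas to 5 is just a one-line change in deployment.yaml:
spec:
  replicas: 5
Then re-apply the file and watch Kubernetes create the two extra Pods:
kubectl apply -f deployment.yaml
kubectl get pods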