Kubernetes has become the de facto standard for container orchestration, and Google Kubernetes Engine (GKE) offers a powerful managed service for deploying and managing containerized applications. This guide will walk you through the essential steps of deploying a basic application Pod in GKE and making it accessible to the internet using a LoadBalancer Service, all through declarative YAML manifests.
Understanding the Kubernetes YAML Foundation
Every resource you interact with in Kubernetes is defined using a YAML manifest. These files describe the desired state of your cluster. Here are the core top-level fields you’ll encounter in almost every manifest:
apiVersion: # Specifies the Kubernetes API version (e.g., v1, apps/v1)
kind: # Defines the type of resource (e.g., Pod, Service, Deployment)
metadata: # Holds descriptive data like name, labels, and namespace
spec: # Contains the desired configuration and state of the resource
Think of it as Kubernetes’ blueprint language:
* apiVersion tells Kubernetes which API endpoint to use.
* kind declares what specific resource you’re creating.
* metadata provides unique identification and organizational details.
* spec outlines the detailed configuration and behavior.
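If you’re ever unsure what one of these fields expects, the API server itself can tell you. As a quick sketch (these are standard kubectl subcommands, run against any reachable cluster):

kubectl explain pod                    # documented top-level fields for kind: Pod
kubectl explain pod.spec.containers    # drill down into nested fields
kubectl api-resources                  # every kind plus the apiVersion group it lives in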
Step 1: Crafting Your First Pod Definition
Let’s begin by defining a simple Pod that will run an Nginx web server. First, create a directory for your manifest files and navigate into it:
mkdir kube-manifests
cd kube-manifests
Now, create a file named 01-pod-definition.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: stacksimplify/kubenginx:1.0.0
    ports:
    - containerPort: 80
In this definition:
* We’ve named our Pod myapp-pod.
* A crucial app: myapp label is applied, which will be used later for service discovery.
* The container myapp uses the stacksimplify/kubenginx:1.0.0 image and exposes port 80.
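Before applying, you can optionally ask Kubernetes to validate this manifest without creating anything. A minimal sketch using kubectl’s dry-run flags (available in kubectl 1.18 and later):

kubectl apply -f 01-pod-definition.yaml --dry-run=client   # client-side schema check only
kubectl apply -f 01-pod-definition.yaml --dry-run=server   # full API-server validation, nothing persisted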
To deploy this Pod to your GKE cluster, execute:
kubectl apply -f 01-pod-definition.yaml
kubectl get pods
You should see your myapp-pod in a Running state.
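A few optional checks are useful at this point, all standard kubectl subcommands; the label filter is the same one the Service will use shortly:

kubectl get pods -l app=myapp --show-labels   # confirm the app: myapp label is attached
kubectl describe pod myapp-pod                # events help diagnose ImagePullBackOff and similar issues
kubectl logs myapp-pod                        # Nginx container logs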
Step 2: Exposing Your Pod with a LoadBalancer Service
While our Pod is running, it’s not yet accessible from outside the cluster. This is where a Kubernetes Service comes in. We’ll use a LoadBalancer type Service, which will provision an external Google Cloud Load Balancer to route internet traffic to our application.
Create a new file named 02-pod-LoadBalancer-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: myapp-pod-loadbalancer-service
spec:
  type: LoadBalancer
  selector:
    app: myapp        # Matches the label on our Pod
  ports:
  - name: http
    port: 80          # The port the Service itself listens on
    targetPort: 80    # The port on the Pod to which traffic is forwarded
Here’s how this Service works:
* type: LoadBalancer instructs GKE to create a cloud load balancer.
* The selector app: myapp is vital; it tells the Service to forward traffic only to Pods with this specific label. This is how the Service discovers our myapp-pod.
* port: 80 is the external port exposed by the LoadBalancer, and targetPort: 80 directs that traffic to port 80 of our container.
The traffic flow will be: [Internet User] → [GCP LoadBalancer] → [K8s Service Port 80] → [Pod Container Port 80].
Apply the Service manifest:
kubectl apply -f 02-pod-LoadBalancer-service.yaml
kubectl get svc
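While the external load balancer is provisioning, you can already confirm that the Service’s selector matched our Pod by listing its endpoints (on newer clusters, EndpointSlices back the same data):

kubectl get endpoints myapp-pod-loadbalancer-service   # should list the Pod’s IP on port 80
kubectl get pod myapp-pod -o wide                      # compare against the Pod IP shown here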
After a short while, check the output of kubectl get svc. You should see an EXTERNAL-IP assigned to myapp-pod-loadbalancer-service. This is the public IP address of your GCP Load Balancer.
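Provisioning typically takes a minute or two. You can watch the Service until the address appears, or extract just the IP with a JSONPath query (standard kubectl output options) for use in the curl test below:

kubectl get svc myapp-pod-loadbalancer-service -w   # re-polls until EXTERNAL-IP leaves <pending>
kubectl get svc myapp-pod-loadbalancer-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'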
Test your deployment by opening a web browser to this EXTERNAL-IP or using curl:
curl http://<Load-Balancer-External-IP>
You should be greeted by the Nginx welcome page, confirming your application is successfully exposed!
Step 3: Cleaning Up Your Resources
Once you’ve finished experimenting, it’s crucial to clean up your Kubernetes resources to avoid incurring unnecessary costs. Delete the Pod and Service using the same manifests you applied:
kubectl delete -f 01-pod-definition.yaml
kubectl delete -f 02-pod-LoadBalancer-service.yaml
This will remove the Pod, the LoadBalancer Service, and the underlying GCP Load Balancer.
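To double-check that nothing is left running (or billing), you can verify on both sides; the gcloud command is a sketch that assumes the Cloud SDK is configured for the same project:

kubectl get pods,svc                   # only the default kubernetes Service should remain
gcloud compute forwarding-rules list   # the load balancer’s forwarding rule should be gone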
Summary
This guide demonstrated the fundamental process of deploying a containerized application within Google Kubernetes Engine:
* Kubernetes Pods serve as the smallest deployable units for your application containers.
* Services act as stable network endpoints, providing a consistent way to access your Pods.
* A LoadBalancer Service type in GKE seamlessly integrates with Google Cloud’s infrastructure to provision an external load balancer, making your application publicly accessible.
This straightforward method is a cornerstone for exposing your applications in a GKE environment.
Latchu | Senior DevOps & Cloud Engineer