Deploying and managing microservices on Kubernetes can quickly become a labyrinth of YAML files, leading to configuration drift, repetitive tasks, and scalability headaches. For DevOps engineers striving for efficiency and consistency, this complexity often hinders agility. This is precisely where custom Helm charts emerge as a powerful solution, transforming a chaotic collection of manifests into a streamlined, reusable, and scalable deployment framework.

In this comprehensive guide, we’ll explore how to construct a robust microservices architecture on AWS EKS, provisioned with Terraform, and critically, how to modularize your deployments using your own custom Helm charts. We’ll walk through the entire process, from foundational infrastructure to automated TLS-secured ingress, culminating in a highly maintainable and extensible system.

Why Custom Helm Charts Are a Game Changer for Microservices

Imagine a world where adding a new microservice doesn’t mean duplicating dozens of lines of YAML. That’s the promise of custom Helm charts. They offer:

  • Unparalleled Reusability: Define your service patterns once and reuse them across multiple environments or similar services, eliminating redundant YAML.
  • Effortless Scalability: Easily scale your application by adding new service instances or entirely new microservices without significant overhead.
  • Dynamic Flexibility: Utilize values.yaml files to manage environment-specific configurations, making it simple to switch between development, staging, and production setups.
  • Deployment Consistency: Enforce a standard structure for deployments, services, ingress rules, and database configurations, ensuring uniformity across your entire application stack.

This approach shifts from one-off deployments to a strategic framework, empowering you to grow your application with confidence and minimal friction.

Laying the Foundation: Terraform for AWS EKS Infrastructure

Before we deploy any microservices, we need a solid foundation. Terraform is our tool of choice for provisioning the underlying AWS infrastructure. For this setup, we’ll provision:

  • A Virtual Private Cloud (VPC) with appropriate subnets.
  • An Amazon Elastic Kubernetes Service (EKS) cluster.
  • Necessary IAM roles and networking configurations.
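The article doesn't reproduce the Terraform code, but a minimal sketch using the community terraform-aws-modules could look like the following. The region, CIDR ranges, cluster name, and module versions here are illustrative assumptions, not the project's actual configuration:

```hcl
# infra/main.tf — illustrative sketch, not the project's actual code
provider "aws" {
  region = "us-east-1" # assumed region
}

# VPC with public and private subnets across two AZs
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name            = "eks-vpc"
  cidr            = "10.0.0.0/16"
  azs             = ["us-east-1a", "us-east-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]
}

# EKS cluster placed in the private subnets; the module also
# creates the IAM roles the cluster and node groups need
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "shopping-cluster" # hypothetical name
  cluster_version = "1.29"
  vpc_id          = module.vpc.vpc_id
  subnet_ids      = module.vpc.private_subnets
}
```

Using these maintained modules keeps the IAM and networking boilerplate out of your own code, which is the same reuse principle the Helm charts apply at the Kubernetes layer.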

With Terraform, managing your cluster lifecycle becomes incredibly simple:

cd infra
terraform init
terraform apply --auto-approve
terraform destroy --auto-approve

This ensures that spinning up or tearing down your entire environment is a repeatable and automated process.

Building Blocks: Crafting Custom Helm Charts for Each Component

Instead of bundling all microservices into a single, monolithic manifest, the power of Helm lies in breaking down your application into discrete, manageable charts. Our structure will look something like this:

manifests/
├── charts/
│   ├── ingress/       
│   ├── redis/         
│   └── shopping-ms/   
├── values/
├── helmfile.yaml
└── issuer.yaml

Here, ingress, redis, and shopping-ms each represent a distinct Helm chart. To create a new chart, you simply use the helm create command:

helm create <chart-name>

For instance, creating a chart for our shopping microservice would look like:

helm create shopping-ms

This command generates a default chart directory structure:

shopping-ms/
├── charts/
├── templates/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── _helpers.tpl
├── values.yaml
├── Chart.yaml
└── .helmignore

Within the templates/ directory, you’ll define your Kubernetes resources (like deployment.yaml and service.yaml) using Go templating. The values.yaml file in the chart’s root directory holds default configuration values, which can then be overridden by environment-specific values.yaml files located higher up in your project structure (e.g., manifests/values/shopping-ms.yaml). This hierarchical approach provides immense flexibility and separation of concerns.
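To make the layering concrete, here is a trimmed-down sketch of what a templated deployment.yaml and its environment-specific override might look like. The exact values keys (replicaCount, image.repository, and so on) follow Helm's default scaffold conventions and are assumptions about this project's charts:

```yaml
# templates/deployment.yaml (excerpt) — hedged sketch of a scaffolded template
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "shopping-ms.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.service.port }}
---
# manifests/values/shopping-ms.yaml — illustrative environment override,
# passed at install time with: helm install -f manifests/values/shopping-ms.yaml ...
replicaCount: 3
image:
  repository: myrepo/shopping-ms # placeholder registry path
  tag: "1.2.0"
service:
  port: 80
```

The template never changes between environments; only the values file does, which is what makes the same chart reusable across dev, staging, and production.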

Securing Your Application: Ingress with Automated TLS

Exposing your microservices securely to the internet is paramount. We’ll achieve this using an Nginx Ingress Controller combined with Cert-Manager for automatic TLS certificate provisioning.

  1. Install Nginx Ingress Controller: This controller acts as the entry point for external traffic, creating an AWS Load Balancer and routing requests to the correct services based on your ingress rules.

     # Add the Helm repo for ingress-nginx
     helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
     helm repo update

     # Install ingress-nginx into its own namespace
     kubectl create namespace ingress-nginx
     helm install ingress-nginx ingress-nginx/ingress-nginx \
       --namespace ingress-nginx \
       --set controller.publishService.enabled=true

     # Verify the Ingress Controller pods
     kubectl get pods -n ingress-nginx

     # Check the LoadBalancer service to get its IP/hostname
     kubectl get svc -n ingress-nginx

Once the Load Balancer is provisioned, you’ll need to point your custom domain at it — typically a CNAME record targeting the Load Balancer’s hostname (or an A/alias record if you have an IP).

  2. Install Cert-Manager: This essential tool automates the management, issuance, and renewal of TLS certificates from issuing sources such as Let’s Encrypt.

     kubectl create namespace cert-manager

     helm repo add jetstack https://charts.jetstack.io
     helm repo update

     helm install cert-manager jetstack/cert-manager \
       --namespace cert-manager \
       --set installCRDs=true

     This installs cert-manager and its Custom Resource Definitions (CRDs), which are crucial for issuers to function.

     # Verify the installation
     kubectl get pods -n cert-manager

  3. Define a ClusterIssuer: Cert-Manager needs to know which Certificate Authority (CA) to use. We define this with a ClusterIssuer (or an Issuer for namespace-specific certificates). This issuer.yaml tells Cert-Manager to use Let’s Encrypt for certificate requests. After creating the issuer.yaml file, apply it:

     kubectl apply -f issuer.yaml

     Crucially, your Ingress resource must include an annotation like cert-manager.io/cluster-issuer: letsencrypt-prod to trigger certificate requests.
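The guide doesn't reproduce issuer.yaml itself; a typical Let's Encrypt ClusterIssuer using the HTTP-01 challenge looks roughly like this (the email address is a placeholder you'd replace):

```yaml
# issuer.yaml — typical Let's Encrypt ClusterIssuer sketch
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Production ACME endpoint; use the Let's Encrypt staging URL
    # while testing to avoid production rate limits.
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-prod # secret storing the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx # solved via the nginx ingress controller
```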

Key Considerations for Ingress and TLS:
* The Ingress Controller and Cert-Manager are typically cluster-wide resources, meaning they serve all namespaces.
* For certificates applicable across multiple namespaces, always opt for ClusterIssuer over Issuer.
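Tying the pieces together, the Ingress resource rendered by the ingress chart would carry the issuer annotation plus a tls block along these lines. The host, secret, and service names are placeholders standing in for this project's actual values:

```yaml
# Rendered Ingress sketch — hostnames and names are illustrative
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shopping-ingress
  annotations:
    # Triggers cert-manager to request a certificate via the ClusterIssuer
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - shop.example.com        # placeholder domain
      secretName: shopping-tls    # cert-manager stores the cert here
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: shopping-ms # backend microservice Service
                port:
                  number: 80
```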

Streamlined Deployments with Helmfile

Managing individual Helm charts can still be cumbersome, especially as your application grows. Helmfile simplifies this by allowing you to define all your Helm releases in a single file and manage them with one command:

helmfile apply
helmfile destroy

Helmfile reads your helmfile.yaml and executes the necessary helm install, helm upgrade, or helm uninstall commands for all your defined charts, making deployments, updates, and cleanups incredibly efficient.
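Given the directory layout shown earlier, a helmfile.yaml might look roughly like the following. The release names and namespace mirror the charts/ and values/ directories, but the exact contents are an assumption:

```yaml
# manifests/helmfile.yaml — illustrative sketch of the release definitions
releases:
  - name: redis
    namespace: shopping
    chart: ./charts/redis
    values:
      - ./values/redis.yaml

  - name: shopping-ms
    namespace: shopping
    chart: ./charts/shopping-ms
    values:
      - ./values/shopping-ms.yaml

  - name: ingress
    namespace: shopping
    chart: ./charts/ingress
    values:
      - ./values/ingress.yaml
```

With this in place, `helmfile apply` reconciles all three releases in one pass instead of three separate helm invocations.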

Accessing Your Application

Once all components are successfully deployed, your shopping application will be accessible via your configured domain:

https://<your-domain>

Troubleshooting Common Deployment Hurdles

  • HTTPS Issues: Ensure Cert-Manager is fully installed and healthy before applying your ClusterIssuer.
  • Let’s Encrypt Rate Limits: When testing, start with the Let’s Encrypt staging issuer to avoid hitting rate limits on the production API.
  • Services Not Accessible: Meticulously check your Ingress resource annotations and rules within your Helm charts.
  • Kubernetes Basics: If you’re new to Kubernetes, consider starting with simple, single-file deployments before moving to Helm charts to build a foundational understanding.

Lessons Learned and Future Enhancements

This journey into custom Helm charts revealed several critical insights:

  • Proactive Charting: Don’t wait for YAML sprawl; begin building charts early in your project lifecycle.
  • Helmfile is Indispensable: A unified command for managing multiple charts is a massive time-saver.
  • Portability is Key: Well-crafted charts for common components like Redis or ingress can be easily reused across different clusters and projects.

Looking ahead, the next step for this setup is to extend these custom charts to include monitoring solutions like Prometheus and Grafana, evolving it into a truly production-ready, observable stack.

The full codebase for this project can be found here: GitHub repo
