Kubernetes, while incredibly powerful, often presents a steep learning curve, even for seasoned DevOps professionals. Managing and orchestrating containers at scale demands robust tooling and a clear strategy. This guide walks you through deploying an Amazon Elastic Kubernetes Service (EKS) cluster on Amazon Web Services (AWS) and connecting worker nodes to it using Terraform, an infrastructure-as-code (IaC) tool that simplifies cloud resource provisioning.
Getting Started: AWS CLI Configuration
Before diving into Terraform, ensure your AWS Command Line Interface (CLI) is properly configured. This setup grants you the necessary permissions to interact with your AWS account and allows Terraform to provision infrastructure on your behalf. If you haven’t done this already, consult the official AWS documentation for detailed instructions on configuring your CLI.
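As a quick sanity check that your credentials are in place, the following commands (a minimal sketch) configure the CLI interactively and confirm which identity Terraform will use:
# Set your access key, secret key, default region, and output format
aws configure
# Verify which account and identity your credentials resolve to
aws sts get-caller-identity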
Terraform Project Structure
Our Terraform project will primarily consist of two files: main.tf and variables.tf. You can create these files with the following command:
touch main.tf variables.tf
main.tf: Core Infrastructure Definition
This file will house the definitions for our AWS provider, EKS cluster, and worker nodes.
Setting up the Terraform Block and AWS Provider
First, we define the required Terraform providers and configure the AWS provider, specifying the region for our deployment.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 6.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
Creating the EKS Cluster IAM Role
EKS requires an IAM role to interact with other AWS services on your behalf. We’ll create a role named eks-cluster-role and attach the AmazonEKSClusterPolicy to it, granting the necessary permissions.
resource "aws_iam_role" "eks-cluster-role" {
name = "eeks-cluster-role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"sts:AssumeRole",
"sts:TagSession"
]
Effect = "Allow"
Principal = {
Service = "eks.amazonaws.com"
}
},
]
})
}
resource "aws_iam_role_policy_attachment" "cluster_AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.eks-cluster-role.name
}
Defining the EKS Cluster Resource
Now, we define the EKS cluster itself, named my-eks. We attach the eks-cluster-role we just created, specify the Kubernetes version, and configure the VPC subnets where our cluster will operate. The access_config block enables API authentication and grants the cluster creator admin permissions. Crucially, bootstrap_self_managed_addons is set to true, which automatically installs essential Kubernetes add-ons (the VPC CNI, kube-proxy, and CoreDNS), saving significant manual effort.
resource "aws_eks_cluster" "my-eks" {
name = "my-eks"
access_config {
authentication_mode = "API"
}
role_arn = aws_iam_role.eks-cluster-role.arn
version = "1.31"
vpc_config {
subnet_ids = [
var.subnet1,
var.subnet2,
]
}
access_config {
authentication_mode = "API"
bootstrap_cluster_creator_admin_permissions = true
}
bootstrap_self_managed_addons = true
depends_on = [
aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy,
]
}
Setting up Worker Nodes with Node Groups
After defining the EKS control plane, the next step is to create the worker nodes that will run your containerized applications. We achieve this using EKS node groups.
Creating the Worker Node IAM Role
Worker nodes also require an IAM role with specific permissions to interact with EKS and other AWS services. We’ll create node-group-role and attach three essential policies: AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly.
resource "aws_iam_role" "node-group-role" {
name = "eks-node-group"
assume_role_policy = jsonencode({
Statement = [{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
}]
Version = "2012-10-17"
})
}
resource "aws_iam_role_policy_attachment" "node-AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.node-group-role.name
}
resource "aws_iam_role_policy_attachment" "node-AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.node-group-role.name
}
resource "aws_iam_role_policy_attachment" "node-AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.node-group-role.name
}
Creating the EKS Node Group
With the worker node role in place, we can define our node group, my-node-group. This resource links to our EKS cluster and specifies the node role, subnets, and scaling configuration (desired, max, and min size). Optionally, remote_access can be configured for SSH access to the nodes, including source security groups and an EC2 SSH key; note that the key pair referenced here (kube-demo) must already exist in your AWS account.
resource "aws_eks_node_group" "node-group" {
cluster_name = aws_eks_cluster.my-eks.name
node_group_name = "my-node-group"
node_role_arn = aws_iam_role.node-group-role.arn
subnet_ids = [var.subnet1, var.subnet2]
scaling_config {
desired_size = 1
max_size = 2
min_size = 1
}
update_config {
max_unavailable = 1
}
remote_access {
source_security_group_ids = [var.security_group_id]
ec2_ssh_key = "kube-demo"
}
depends_on = [
aws_iam_role_policy_attachment.node-AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.node-AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.node-AmazonEC2ContainerRegistryReadOnly,
]
}
variables.tf: Parameterizing Your Deployment
Remember to define all the variables used (e.g., subnet1, subnet2, security_group_id) in your variables.tf file. This allows for flexible and reusable configurations.
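Here is a minimal sketch of what variables.tf might look like; the descriptions are assumptions, and you would supply your own subnet and security group IDs via defaults, a .tfvars file, or the command line:
# IDs of the two subnets used by both the cluster and the node group
variable "subnet1" {
  description = "ID of the first subnet for the EKS cluster and node group"
  type        = string
}

variable "subnet2" {
  description = "ID of the second subnet for the EKS cluster and node group"
  type        = string
}

# Security group permitted SSH access to the worker nodes via remote_access
variable "security_group_id" {
  description = "Security group allowed SSH access to the worker nodes"
  type        = string
}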
Deploying Your EKS Cluster
Once your main.tf and variables.tf files are complete, navigate to your project directory in the terminal and run the following Terraform commands:
terraform init
terraform plan
terraform apply
The terraform apply command will provision all the defined resources in your AWS account. This process can take several minutes, so grab a coffee while Terraform works its magic.
Connecting to Your EKS Cluster
After a successful deployment, you can connect to your EKS control plane. Update your kubeconfig file to allow kubectl to interact with your new cluster:
aws eks update-kubeconfig --name my-eks --region us-east-1
Replace my-eks with your cluster name if it’s different.
Finally, verify that your worker nodes are registered and ready:
kubectl get nodes
You should see your worker nodes listed, indicating a successful EKS cluster setup with Terraform. You are now ready to deploy your applications to your new Kubernetes environment!
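As a quick smoke test (a minimal example; the deployment name hello-eks is arbitrary), you can launch a simple workload and watch it get scheduled onto your new nodes:
# Create a test deployment running the public nginx image
kubectl create deployment hello-eks --image=nginx
# Confirm the pod is scheduled onto one of the worker nodes
kubectl get pods -o wide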