Mastering Kubernetes: Provisioning an Amazon EKS cluster using kubectl and eksctl.

This article is the second in a series dedicated to mastering Kubernetes. In the first article, we explored how to set up a Kubernetes cluster using kubeadm and deployed the Weave Sock Shop application on it.

You can check out the article here.

At the conclusion of this tutorial, you will have a fully running Amazon EKS cluster that you can deploy applications on.

Prerequisites

  • An AWS account with an admin user

  • Access to the AWS CLI

Before you can start using eksctl, you have to install the AWS CLI.

This tool is required to authenticate your requests to your Amazon Web Services account.

You can check out the official documentation on how to install the AWS CLI here.

After installing the AWS CLI, you should run:

aws --version
aws-cli/2.8.12 Python/3.9.11 Linux/4.4.0-18362-Microsoft exe/x86_64.ubuntu.20 prompt/off

You should see the above if the installation was successful.

Configuring the CLI

The next step is configuring the CLI with your AWS credentials so we can provision your cluster on your AWS account.

For us to do this, we would need the following:

  1. AWS Access Key ID

  2. AWS Secret Access Key

  3. Default region name

  4. Default output format

I will assume you already know how to create an IAM user on AWS. The user needs certain privileges to be able to create the cluster on AWS.

You can follow this official documentation by AWS to create an IAM user. Ensure you grant the AdministratorAccess permission because, when provisioning Kubernetes resources, the user will need to interact with a lot of AWS infrastructure: CloudFormation templates, network ACLs, subnet mappings, security groups, and the provisioning and configuration of EC2 instances.

Then navigate to that user in the console, create an access key, and download a CSV export of it.

Now that we have our access keys and have ensured that the AWS CLI is installed, we proceed to configure our AWS account on the CLI by running aws configure in the terminal:

aws configure
AWS Access Key ID [****************ABCD]: 
AWS Secret Access Key [****************DEFG]: 
Default region name [us-west-2]: 
Default output format [yaml]:

A list of available AWS regions can be found here.

The AWS CLI simply lets us interact with our AWS account without using the AWS web interface directly.
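To double-check that the CLI is talking to the right account, you can ask AWS who it thinks you are. (The account ID and ARN below are placeholders; yours will differ.)

```shell
# Verify the configured credentials by asking AWS for the caller's identity.
aws sts get-caller-identity
# {
#     "UserId": "AIDAEXAMPLEID",
#     "Account": "111122223333",
#     "Arn": "arn:aws:iam::111122223333:user/eks-admin"
# }
```

If this returns an error instead of your user's ARN, fix your credentials before going any further.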

After configuring the CLI, you can try listing all your EKS clusters with

aws eks list-clusters
{
    "clusters": []
}

As you can see, it returned an empty list, as expected: we don't have any clusters running at the moment.

Now let's proceed to use eksctl to set up our Kubernetes cluster.

What is eksctl?

eksctl is a simple command-line utility for creating and managing Kubernetes clusters on Amazon EKS. It is written in Go, uses CloudFormation under the hood, and was created by Weaveworks.

The AWS CLI itself has a command to create an EKS cluster, aws eks create-cluster, so you don't strictly need eksctl.

That means simply going to the terminal where we configured the CLI and running aws eks create-cluster would provision a cluster. The caveat is that this command only creates the control plane: it does not create any worker nodes, set up authentication, or configure permissions.
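To give a feel for how much the raw command leaves to you, here is a sketch of a bare aws eks create-cluster call. The IAM role ARN and subnet IDs are placeholders for resources you would have to create yourself first:

```shell
# Creates the control plane only: no worker nodes, no kubeconfig entry,
# no add-ons. The role and subnets below are placeholders.
aws eks create-cluster \
  --name my-cluster \
  --role-arn arn:aws:iam::111122223333:role/eks-cluster-role \
  --resources-vpc-config subnetIds=subnet-0aaa,subnet-0bbb
```

Everything else (nodes, networking glue, authentication) would be on you to wire up by hand.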

eksctl, on the other hand, is essentially aws eks on steroids.

With just a single command, you are provisioned with a fully functioning cluster and all the necessary resources it needs.

Installing eksctl

But before we proceed, we have to install the eksctl binary. You can follow the official eksctl guide to get it installed.

In my case, I will install Homebrew first:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

Then add the necessary tap to HomeBrew:

brew update
brew tap weaveworks/tap

Followed by installing eksctl:

brew install weaveworks/tap/eksctl

To check if it was installed correctly:

eksctl version
0.118.0

We then proceed to run the command below to spin up our cluster:

eksctl create cluster

Now, by default this makes a few assumptions about the cluster we want:

  • It creates a control plane (managed by AWS).

  • It joins two worker nodes.

  • It selects m5.large as the instance type.

  • It creates the cluster in your configured default region.

We can always tailor the cluster to our needs. To do this, we can provision a new cluster with:

eksctl create cluster \
  --name uthys-cluster \
  --node-type t2.micro \
  --nodes 3 \
  --nodes-min 3 \
  --nodes-max 5 \
  --region eu-west-1

The command creates a cluster with the following properties:

  • In the region eu-west-1 (I am using eu-west-1 because it's closer to where I live).

  • With "uthys-cluster" as the cluster name.

  • Using t2.micro instances.

  • The worker nodes will autoscale based on load (from 3 to a maximum of 5 nodes).
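As a side note, you don't have to get the sizing perfect up front: eksctl can also resize an existing node group after the fact. Assuming the auto-generated node group is called ng-1 (eksctl prints the actual name during creation), something like this would work:

```shell
# List the node groups eksctl created for the cluster.
eksctl get nodegroup --cluster uthys-cluster

# Scale the node group to 4 nodes (ng-1 is a placeholder name;
# substitute the name printed by the previous command).
eksctl scale nodegroup --cluster uthys-cluster --name ng-1 --nodes 4
```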

You will have to be a bit patient here, as provisioning a cluster takes 15–30 minutes on average.

Installation of kubectl

While we wait for the cluster to be provisioned, you should download kubectl — the command-line tool to connect and manage the Kubernetes cluster.

Kubectl can be downloaded from here.

You can check that the binary is installed successfully with:

kubectl version --client

If it was successfully installed, you should get a version as output.

Once your cluster is ready, we should see this:

[output truncated]
[✔]  EKS cluster "uthys-cluster" in "eu-west-1" region is ready

You can verify that the cluster is running by using:

eksctl get cluster --name uthys-cluster --region eu-west-1

Yayyyyy, it worked!
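eksctl writes the cluster's credentials into your kubeconfig automatically, but if kubectl ever can't see the cluster (for example, from another machine with the same AWS credentials), you can pull them down again with the AWS CLI:

```shell
# Merge the cluster's credentials into ~/.kube/config.
aws eks update-kubeconfig --name uthys-cluster

# Confirm kubectl now points at the right cluster.
kubectl config current-context
```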

Let's list all the Pods in the cluster:

kubectl get pods --all-namespaces
NAMESPACE     NAME                      READY   STATUS    RESTARTS   AGE
kube-system   aws-node-j4x9b            1/1     Running   0          4m11s
kube-system   aws-node-lzqkd            1/1     Running   0          4m12s
kube-system   aws-node-vj7rh            1/1     Running   0          4m12s
kube-system   coredns-5fdf64ff8-7zb4g   1/1     Running   0          10m
kube-system   coredns-5fdf64ff8-x5dfd   1/1     Running   0          10m
kube-system   kube-proxy-67967          1/1     Running   0          4m12s
kube-system   kube-proxy-765wd          1/1     Running   0          4m12s
kube-system   kube-proxy-gxjdn          1/1     Running   0          4m11s

You can see from the kube-system namespace that Kubernetes created the mandatory pods needed to run the cluster.
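You can also confirm that all three worker nodes joined the cluster. (The node names and ages below are illustrative; yours will differ.)

```shell
kubectl get nodes
# NAME                                          STATUS   ROLES    AGE   VERSION
# ip-192-168-12-34.eu-west-1.compute.internal   Ready    <none>   5m    v1.24.x
# ip-192-168-56-78.eu-west-1.compute.internal   Ready    <none>   5m    v1.24.x
# ip-192-168-90-12.eu-west-1.compute.internal   Ready    <none>   5m    v1.24.x
```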

With this, you have successfully created and connected to a fully functional Kubernetes cluster.

But eksctl offers an even better way to create clusters: specifying your requirements in a YAML file. However, the tool has its limitations, which we will get to soon.

When the cluster configuration is included in a single file, you can:

  • Publish to Git or another version control system.

  • Share it with your coworkers and friends.

  • Remember what the cluster's last configuration was.

Let's destroy the cluster we just created before learning more about the YAML configuration for eksctl:

eksctl delete cluster --name uthys-cluster --region eu-west-1

We wait for the command to run its full course, as terminating it early might leave a few resources running that you would still get billed for.

Upon completion, we should get the following output:

[output truncated]
[✔]  all cluster resources were deleted
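If you suspect the deletion did not complete cleanly, one way to check is through CloudFormation: eksctl provisions everything as CloudFormation stacks, so leftover resources usually show up as stacks that failed to delete.

```shell
# List any CloudFormation stacks stuck in a failed-delete state.
aws cloudformation list-stacks --stack-status-filter DELETE_FAILED
```

An empty list here is a good sign that nothing is still running (and billing) behind your back.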

You can describe your cluster and node specifications in YAML format and provide that file to eksctl so that it can build the resources.

Let's take a look at how it works. Create a cluster.yaml file with the following content:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: uthys-cluster
  region: eu-west-1
nodeGroups:
  - name: worker-group
    instanceType: t2.micro
    desiredCapacity: 3
    minSize: 3
    maxSize: 5

You can probably already guess what the cluster defined in the YAML file above will look like after deployment.

The cluster is identical to the one we initially built with command-line options, but this time, all of the prerequisites are kept in YAML.

  • eu-west-1 is our region.

  • "uthys-cluster" is the cluster's name.

  • t2.micro instances are used.

  • Depending on the load, the worker nodes will automatically scale (from 3 to a maximum of 5 nodes).

With the above YAML configuration in place, we can create our cluster with the following command:

eksctl create cluster -f cluster.yaml

As we observed earlier, creating a cluster takes 15–30 minutes on average.

We can then proceed to check if our cluster was successfully created.

eksctl get cluster --name uthys-cluster --region eu-west-1
NAME             VERSION    STATUS
uthys-cluster    1.24       ACTIVE

Hurrayyyy!

We have successfully created our cluster using eksctl and our YAML file, and with that, we have reached the end of this guide.
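Since the cluster definition now lives in a file, tearing it down can reference the same file. When you are done experimenting, you can clean up with:

```shell
# Delete the cluster described in cluster.yaml.
eksctl delete cluster -f cluster.yaml
```

This keeps creation and deletion symmetric: the file is the single source of truth for what exists.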

Resources and Additional Reading

The eksctl tool

Kubectl Install

Kubernetes Objects and kubectl Command Cheatsheet

Now imagine that after provisioning your cluster, you want to scale your resources up or down. Let's say you want to change your instance type from the t2.micro we initially provisioned to t3.medium.

How do we go about this?

There isn't currently a command to read the YAML file and update the cluster with the latest modifications (the feature is currently in the works).

Since the create command only creates the cluster initially, you can't use it to change the specification afterwards.
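The usual workaround today is to migrate rather than update in place: add a second node group with the new instance type, then delete the old one once workloads have moved. A sketch, assuming the node group names from our cluster.yaml (worker-group) and a made-up name for the new one:

```shell
# Add a new node group with the bigger instance type.
eksctl create nodegroup --cluster uthys-cluster \
  --name worker-group-t3 --node-type t3.medium \
  --nodes 3 --nodes-min 3 --nodes-max 5

# Remove the old node group; eksctl drains its nodes so pods
# reschedule onto the new t3.medium nodes.
eksctl delete nodegroup --cluster uthys-cluster --name worker-group
```

It works, but it is imperative surgery rather than a declarative update, which is exactly the gap Terraform fills.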

This is where provisioning our cluster with Terraform comes in. Terraform allows us to provision resources and subsequently modify them based on our needs.

In the next article in this series, we will be provisioning our cluster using Terraform and deploying a simple application on it.

Stay tuned.