"Mastering AWS EKS: How I Successfully Deployed and Scaled a Web Application in the Cloud"

"Mastering AWS EKS: How I Successfully Deployed and Scaled a Web Application in the Cloud"

"A Step-by-Step Guide to Harness the Power of Elastic Kubernetes Service for Seamless Cloud Deployment and Growth"

AWS-EKS

EKS stands for "Elastic Kubernetes Service". EKS is a fully managed AWS service and one of the best places to run K8S applications because of its security, reliability, and scalability.

• EKS can be integrated with other AWS services such as ELB, CloudWatch, Autoscaling, IAM and VPC

• EKS makes it easy to run K8S on AWS without installing, operating, or maintaining your own K8S control plane.

• Amazon EKS runs the K8S Control Plane across three availability zones to ensure high availability and it automatically detects and replaces unhealthy masters.

• AWS retains complete control over the Control Plane; as users, we do not manage it.

• We need to create Worker Nodes and attach them to the Control Plane.

Note: We will create a Worker Node Group backed by an Auto Scaling Group (ASG)

Pricing: Control Plane charges + Worker Node charges (based on instance type & number of instances)
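To make the pricing line concrete, here is a back-of-the-envelope sketch. The per-hour figures are assumptions for illustration only; always check the current AWS pricing pages for EKS and EC2.

```shell
# Rough monthly bill sketch. Prices below are ASSUMPTIONS for illustration;
# look up current numbers on the AWS pricing pages for EKS and EC2.
CONTROL_PLANE_HOURLY=0.10   # assumed USD per cluster-hour
NODE_HOURLY=0.0928          # assumed USD per hour for one t2.large
NODE_COUNT=2
HOURS_PER_MONTH=730
ESTIMATE=$(awk -v c="$CONTROL_PLANE_HOURLY" -v n="$NODE_HOURLY" \
               -v k="$NODE_COUNT" -v h="$HOURS_PER_MONTH" \
               'BEGIN { printf "%.2f", (c + n * k) * h }')
echo "Estimated monthly cost: \$${ESTIMATE}"
```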

Pre-Requisites

• AWS account with admin privileges

• An instance to manage/access the EKS cluster using kubectl (the K8S command-line client)

• AWS CLI access, which the kubectl utility uses to authenticate to the cluster

Steps to Create EKS Cluster in AWS

Step-1) Create VPC using CloudFormation (with the below S3 URL)

(https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml)

Stack name: EKSVPCCloudFormation
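For readers who prefer the CLI, the same stack can be created with `aws cloudformation create-stack`. The sketch below only prints the command so it can be reviewed before being run with configured credentials; the stack name and template URL are the ones used in this guide.

```shell
# Sketch only: build and print the CloudFormation command used in Step-1.
# Run the printed command yourself once AWS credentials are configured.
TEMPLATE_URL="https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml"
STACK_NAME="EKSVPCCloudFormation"
CMD="aws cloudformation create-stack --stack-name $STACK_NAME --template-url $TEMPLATE_URL"
echo "$CMD"
```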

Step-2) Create an IAM role in AWS

• Entity Type: AWS Service

• Select use case as 'EKS' ==> EKS Cluster

• Role Name: EKSClusterRole (you can give any name for the role)
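The same role can be sketched from the CLI: create the role with a trust policy that lets the EKS service assume it, then attach the AmazonEKSClusterPolicy managed policy. The commands below are printed rather than executed so they can be reviewed first.

```shell
# Sketch only: print the IAM commands for the cluster role from Step-2.
ROLE_NAME="EKSClusterRole"
# Trust policy allowing the EKS service to assume this role.
TRUST_POLICY='{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"eks.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
echo "aws iam create-role --role-name $ROLE_NAME --assume-role-policy-document '$TRUST_POLICY'"
echo "aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
```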

Step-3) Create EKS Cluster using Created VPC and IAM Role

• Cluster endpoint access: Public & Private

Step-4) Create a RedHat EC2 instance (K8S_Client_Machine)

• Connect to the K8S_Client_Machine using PuTTY

Install kubectl with the below commands
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

$ kubectl version --client

Install the AWS CLI on the K8S_Client_Machine with the below commands

$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

$ sudo yum install unzip

$ unzip awscliv2.zip

$ sudo ./aws/install
Configure the AWS CLI with credentials
$ aws configure

Access Key ID: AKIA4MGQ5UW73R***

Secret Access Key: Itv2YlqYGWpV0BvedwB4e5weExwITz***

Note: We can use the root user's access key and secret access key

$ aws eks list-clusters

Update the kubeconfig file on the client machine from the cluster using the below command

$ aws eks update-kubeconfig --name <cluster-name> --region <region>

Ex: $ aws eks update-kubeconfig --name ashokit_eks --region ap-south-1

Step-5) Create an IAM role for EKS worker nodes (use case as EC2) with the below policies

a) AmazonEKSWorkerNodePolicy

b) AmazonEKS_CNI_Policy

c) AmazonEC2ContainerRegistryReadOnly
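A hedged CLI sketch of the same step: attach the three managed policies in a loop. The role name `EKSWorkerNodeRole` is a hypothetical choice; the commands are echoed for review rather than executed.

```shell
# Sketch only: echo an attach-role-policy command for each managed policy.
NODE_ROLE="EKSWorkerNodeRole"   # hypothetical role name for Step-5
for POLICY in AmazonEKSWorkerNodePolicy AmazonEKS_CNI_Policy AmazonEC2ContainerRegistryReadOnly; do
  echo "aws iam attach-role-policy --role-name $NODE_ROLE --policy-arn arn:aws:iam::aws:policy/$POLICY"
done
```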

Step-6) Create Worker Node Group

• Go to the cluster > Compute > Add Node Group

• Select the Role we have created for WorkerNodes

• Instance type: t2.large

• Min size: 2, Max size: 5
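The equivalent `aws eks create-nodegroup` call can be sketched as below; the node group name is hypothetical, and the role ARN and subnet placeholders must be filled in from your own account. The command is printed, not executed.

```shell
# Sketch only: print the create-nodegroup command matching the console choices
# above (t2.large, min 2, max 5). The role ARN and subnet IDs are placeholders.
CLUSTER_NAME="ashokit_eks"
CMD="aws eks create-nodegroup --cluster-name $CLUSTER_NAME \
  --nodegroup-name webapp-nodes \
  --scaling-config minSize=2,maxSize=5,desiredSize=2 \
  --instance-types t2.large \
  --node-role <worker-node-role-arn> \
  --subnets <subnet-id-1> <subnet-id-2>"
echo "$CMD"
```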

Step-7) Once the Node Group is added, check the nodes from the K8S_Client_Machine

$ kubectl get nodes

$ kubectl get pods --all-namespaces

Step-8) Create POD and Expose the POD using LoadBalancer

# manifest file

Save the file name as *****.yaml

Eg: webapp-deployment.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webappdeployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webappcontainer
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webappsvc
spec:
  type: LoadBalancer
  selector:
    app: webapp
  ports:
  - port: 80
    targetPort: 80
...

Save the file and apply the configuration to your Kubernetes cluster:

$ kubectl apply -f webapp-deployment.yaml

Let's break down the explanation of the YAML code for both the Deployment and Service objects:

  1. webapp-deployment.yaml:

This YAML file defines a Kubernetes Deployment that will manage the deployment and scaling of the web application using Nginx as the container image.

  • apiVersion: Specifies the Kubernetes API version used for this manifest. In this case, it's apps/v1, which corresponds to the "apps" API group and version 1.

  • kind: Specifies the type of Kubernetes resource, which is a Deployment in this case.

  • metadata: Contains metadata information for the Deployment.

    • name: Specifies the name of the Deployment, which is set to "webappdeployment."
  • spec: Specifies the desired state for the Deployment.

    • replicas: Defines the number of replicas (instances) of the application to be deployed. In this case, it's set to 1, meaning one instance of the Nginx container will be running.

    • selector: Specifies the label selector used to identify the Pods controlled by this Deployment. The label selector is set to "app: webapp," which will match the label "app: webapp" defined in the Pod template.

    • template: Specifies the Pod template used to create new Pods when scaling the Deployment.

      • metadata: Contains metadata information for the Pod template.

        • labels: Specifies labels to be applied to the Pods. In this case, the "app: webapp" label is applied.
      • spec: Specifies the specification for the Pods created by the Deployment.

        • containers: Defines the containers that will run in the Pods.

          • name: Specifies the name of the container, which is set to "webappcontainer."

          • image: Specifies the container image to be used, which is "nginx:latest" in this case, meaning the latest version of the Nginx container image.

          • ports: Specifies the ports the container listens on.

            • containerPort: Defines the port on which the container listens for incoming traffic, which is set to 80 for Nginx.
  2. webapp-service.yaml:

This YAML file defines a Kubernetes Service that will expose the Nginx Pods as a LoadBalancer service, allowing external access to the web application.

  • apiVersion: Specifies the Kubernetes API version used for this manifest, which is v1.

  • kind: Specifies the type of Kubernetes resource, which is a Service in this case.

  • metadata: Contains metadata information for the Service.

    • name: Specifies the name of the Service, which is set to "webappsvc."
  • spec: Specifies the desired state for the Service.

    • type: Defines the type of Service, which is set to LoadBalancer. This means that the Service will be exposed with an external IP address, allowing external traffic to be load-balanced to the Nginx Pods.

    • selector: Specifies the label selector used to target the Pods for the Service. The selector is set to "app: webapp," which matches the label "app: webapp" defined in the Pod template of the Deployment.

    • ports: Specifies the ports that the Service should listen on and forward traffic to.

      • port: Defines the port that the Service listens on, which is set to 80 for HTTP traffic.

      • targetPort: Specifies the port to which the Service forwards traffic, which is set to 80 to match the containerPort of the Nginx container in the Deployment.

Finally, the webapp-deployment.yaml sets up a Deployment to manage the Nginx container, while the webapp-service.yaml defines a Service with a LoadBalancer to expose the Nginx Pods externally on port 80. Together, these YAML files allow you to deploy and expose the Nginx-based web application in your Kubernetes cluster.
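The "scaling" part of the title then comes down to one command once the Deployment is live. A sketch (commands are echoed for review; run them against your own cluster):

```shell
# Sketch only: commands are echoed so they can be reviewed, then run manually.
SCALE_CMD="kubectl scale deployment webappdeployment --replicas=3"
SVC_CMD="kubectl get svc webappsvc -o wide"
echo "$SCALE_CMD"   # grow the Deployment from 1 to 3 replicas
echo "$SVC_CMD"     # the EXTERNAL-IP column shows the LoadBalancer DNS name
```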

With the LB DNS (a1896aee9200247868e65d6255a094c4-1001342374.ap-south-1.elb.amazonaws.com), you can access this web application deployed on the AWS EKS cluster.

Summary of AWS-EKS Work:

In this blog post, I covered the process of deploying and scaling a web application on AWS Elastic Kubernetes Service (EKS). EKS, a fully managed AWS service, offers exceptional security, reliability, and scalability, making it the ideal platform for running Kubernetes applications. It seamlessly integrates with other AWS services such as ELB, CloudWatch, Autoscaling, IAM, and VPC, providing a comprehensive ecosystem for deploying and managing containerized applications.

The deployment process began with creating a VPC using CloudFormation, ensuring a secure network environment for the EKS cluster. Next, an IAM role for the EKS cluster was established, granting appropriate access permissions to various AWS services. By creating the EKS cluster using the designated VPC and IAM role, the Kubernetes control plane was effortlessly set up across three availability zones, ensuring high availability and automatic detection of unhealthy masters.

Worker nodes, responsible for handling application workloads, were created using an Auto Scaling Group (ASG) to enable scaling capabilities. A RedHat EC2 instance (K8S_Client_Machine) was used to manage the EKS cluster through the kubectl utility. The AWS CLI was installed and configured on the K8S_Client_Machine, allowing easy access to AWS services with the appropriate credentials.

The blog post further highlighted the process of creating an IAM role for EKS worker nodes, providing necessary permissions for tasks such as accessing the Elastic Container Registry (ECR). Finally, a Kubernetes Deployment was defined using YAML to manage the Nginx container, and a Service with a LoadBalancer was established to expose the Nginx Pods externally on port 80.

As a result, the web application was successfully deployed on the AWS EKS cluster, with a Load Balancer DNS providing external access. This demonstration showcased proficiency in AWS EKS and Kubernetes, exemplifying the ability to utilize AWS services for hosting and managing containerized applications effectively.

Did you find this article valuable?

Support @CNTKR's blog by becoming a sponsor. Any amount is appreciated!