How to deploy a Sample Application in a Managed EKS Environment in AWS

A step-by-step guide

We previously looked at creating a managed EKS cluster and node group in AWS. This blog post assumes that you already have such a Kubernetes environment where you can follow along and deploy a sample application.

Let's deploy a sample application in the managed EKS environment so we can test it out live. While following a few EKS tutorials from the EKS Workshop, I came across a sample Go web server app written by Michael S. Fischer. The application is written in Go and reports the EC2 instance, Kubernetes pod, client IP address and Availability Zone serving each request. This app can come in handy to highlight the differences when we deploy an NLB or ALB, and can also be used to show the effect of enabling the Proxy Protocol v2 on Network Load Balancers.

We will clone the repository locally, build a Docker image and push it to Amazon ECR (Elastic Container Registry) for use within EKS.

Prepare Docker Image

Clone Repository

git clone https://github.com/aws-samples/amazon-eks-sample-http-service
cd amazon-eks-sample-http-service/app

Build the Docker Image

docker build -t sample-eks-app .

Tag the image

docker tag sample-eks-app:latest ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/sample-eks-app:latest

Replace the ${AWS_ACCOUNT_ID} and ${AWS_REGION} placeholders with the AWS account ID and AWS Region in use.
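
If you prefer to keep the placeholders as shell variables, a quick way to populate them (assuming the AWS CLI is configured with your credentials) is along these lines,

# Look up the account ID from the current CLI credentials
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
# Set the region you are working in, for example ap-southeast-2
export AWS_REGION=ap-southeast-2

With those set (or the values substituted directly), verify you see the new image using the command,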

docker images

Assuming you have logged into the AWS account using CLI credentials, create an ECR Repository to hold container images,

aws ecr create-repository \
    --repository-name sample-eks-app \
    --image-scanning-configuration scanOnPush=true \
    --region ${AWS_REGION}
aws ecr get-login-password | docker login -u AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com

Finally, push the newly built docker image to AWS ECR,

docker push ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/sample-eks-app:latest

Optionally, verify the image is present in ECR using the command,

aws ecr list-images --repository-name sample-eks-app

As seen from ECR Repository Console,

[Screenshot: ECR repository console showing the sample-eks-app image]

Note the URI of the image; we will use it in the Kubernetes deployment manifest.

One important aspect to call out here is the IAM policy that allows fetching the Docker image from ECR. Notice the IAM policies attached to the managed node group's IAM role from my previous article,

  PrasKubeNodeGroupRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
        - Effect: Allow
          Principal:
            Service:
            - ec2.amazonaws.com
          Action:
          - sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

The managed policy AmazonEC2ContainerRegistryReadOnly is what allows the worker nodes to reach out to ECR and pull the custom Docker image we created earlier.

Similarly, as Michael S. Fischer mentions the need for an IAM policy to read network interfaces from the pod, that access is granted here by the AmazonEKS_CNI_Policy shown above.

But of course, a better way to implement this would be to,

  • Create an IAM OIDC provider for your cluster
  • Create an IAM role with ec2:DescribeNetworkInterfaces permissions and add a condition that the role can be assumed by a particular namespace and service account.
  • Create a service account for the application pod to use.

This ensures a least-privilege model for our application, as it is then bound to only the ec2:DescribeNetworkInterfaces API. A guide from AWS on IAM roles for service accounts can be found here.
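
As a rough sketch of that approach, assuming eksctl is installed and using placeholder names for the cluster and policy, the steps could look something like,

# Associate an IAM OIDC provider with the cluster (one-off per cluster)
eksctl utils associate-iam-oidc-provider --cluster my-eks-cluster --approve

# Create an IAM policy that only allows describing network interfaces
aws iam create-policy \
    --policy-name sample-eks-app-describe-eni \
    --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"ec2:DescribeNetworkInterfaces","Resource":"*"}]}'

# Create an IAM role and a Kubernetes service account bound to it
eksctl create iamserviceaccount \
    --cluster my-eks-cluster \
    --namespace boltdynamics \
    --name sample-eks-app \
    --attach-policy-arn arn:aws:iam::${AWS_ACCOUNT_ID}:policy/sample-eks-app-describe-eni \
    --approve

The deployment would then reference the service account by setting serviceAccountName: sample-eks-app in its pod spec, rather than relying on the node group's instance role.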

Kubernetes App Deployment

A quick run of the command kubectl get all -A shows that the cluster is up and running.

[Screenshot: output of kubectl get all -A]

Let's create a namespace for our application using the manifest below; change the namespace name as suited.

apiVersion: v1
kind: Namespace
metadata:
  name: boltdynamics
  labels:
    name: boltdynamics

Run kubectl apply -f namespace.yaml to create the namespace.

Now that we have an application namespace, it's time to deploy the sample application in the form of Kubernetes Deployment,

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-eks-app-deployment
  namespace: boltdynamics
  labels:
    app: sample-eks-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-eks-app
  template:
    metadata:
      labels:
        app: sample-eks-app
    spec:
      securityContext:
        fsGroup: 101
      containers:
        - name: sample-eks-app
          image: ${AWS_ACCOUNT_ID}.dkr.ecr.ap-southeast-2.amazonaws.com/sample-eks-app
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: APP_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app']
          ports:
            - name: http
              containerPort: 8080
          securityContext:
              readOnlyRootFilesystem: true
              allowPrivilegeEscalation: false
              runAsNonRoot: true
              runAsUser: 101
          resources:
            requests:
              cpu: 100m
              memory: 128Mi

Change the metadata and labels as required, and replace the value of ${AWS_ACCOUNT_ID} with the AWS Account ID in use. This is the same URI noted from the ECR image.
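
If you would rather keep the ${AWS_ACCOUNT_ID} placeholder in the manifest, one option (assuming the variable is exported in your shell and envsubst from the gettext package is available) is to substitute it at apply time,

# Render the manifest with the current shell value and apply it
envsubst < deployment.yaml | kubectl apply -f -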

You will also notice the security context defined for the container. It's always good to think about security when building applications, and Michael S. Fischer has done a good job of including a USER instruction in the Dockerfile, which maps to the user the container runs as. We have also set runAsNonRoot so the container cannot run as root, backed by allowPrivilegeEscalation set to false.

[Screenshot: Dockerfile showing the USER instruction]

We will create only one replica for this lab. The container port is set to 8080, which is where the container listens for incoming traffic.

Run kubectl apply -f deployment.yaml and check the deployment status using kubectl get deployment sample-eks-app-deployment -n boltdynamics.

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
sample-eks-app-deployment   1/1     1            1           13s

Run kubectl get pods -n boltdynamics to see the pod running,

NAME                                         READY   STATUS    RESTARTS   AGE
sample-eks-app-deployment-7b5d547ccc-bwsmq   1/1     Running   0          2m38s

We now need to expose the application to the outside world using Kubernetes Service and Ingress objects. But first, we need to deploy an ingress controller into our cluster, which will be responsible for handling incoming requests from users.

We will deploy the NGINX ingress controller in this lab. As described in the docs, run kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/aws/deploy.yaml to deploy the controller.

Then, run kubectl get all -n ingress-nginx to view all components of namespace ingress-nginx.

$ kubectl get all -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-k4wfk        0/1     Completed   0          53s
pod/ingress-nginx-admission-patch-z6dsr         0/1     Completed   1          53s
pod/ingress-nginx-controller-65c4f84996-lvmjt   1/1     Running     0          54s

NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP                                                                          PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   172.20.28.86     a06c53cec53d2418f9136d4f399c0def-3914554eaa8632e1.elb.ap-southeast-2.amazonaws.com   80:30391/TCP,443:31966/TCP   54s
service/ingress-nginx-controller-admission   ClusterIP      172.20.184.252   <none>                                                                               443/TCP                      54s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           54s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-65c4f84996   1         1         1       54s

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           5s         53s
job.batch/ingress-nginx-admission-patch    1/1           6s         53s

We can see that a Network Load Balancer with an external DNS name has been created for us, which we will use later to access the sample EKS application.
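
If you prefer to confirm this from the CLI rather than the console, a command along these lines lists the load balancers in the account (the name and DNS will differ in your environment),

# List load balancers with their DNS names and types
aws elbv2 describe-load-balancers \
    --query 'LoadBalancers[].{Name:LoadBalancerName,DNS:DNSName,Type:Type}' \
    --output table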

Next, we will create a ClusterIP service to expose our sample application inside the cluster. Note the label app.kubernetes.io/name: ingress-nginx; this is so the NGINX controller can forward traffic to this service.

apiVersion: v1
kind: Service
metadata:
  name: sample-eks-app-service
  namespace: boltdynamics
  labels:
    app.kubernetes.io/name: ingress-nginx
    app: sample-eks-app
spec:
  type: ClusterIP
  ports:
    - name: http-port
      targetPort: 8080
      port: 80
  selector:
    app: sample-eks-app

In the configuration above, we have specified targetPort to be 8080 which is the container port specified in the deployment configuration earlier. The ClusterIP service itself listens on port 80.

Run kubectl apply -f service.yaml to apply service configuration.

$ kubectl get svc sample-eks-app-service -n boltdynamics         

NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
sample-eks-app-service   ClusterIP   172.20.214.130   <none>        80/TCP    18s
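
Before wiring up the ingress, you can optionally sanity-check the service with a port-forward from your workstation (a quick local test only, not how users will reach the app),

# Forward local port 8080 to the service's port 80
kubectl port-forward -n boltdynamics svc/sample-eks-app-service 8080:80

# In another terminal, the app should respond locally
curl http://localhost:8080/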

Let's create an Ingress object to let the NGINX controller know that it can forward traffic to the sample EKS app ClusterIP service we created earlier.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-eks-app-ingress
  namespace: boltdynamics
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sample-eks-app-service
            port:
              number: 80

Run kubectl apply -f ingress.yaml to apply the ingress configuration.

$ kubectl get ingress sample-eks-app-ingress -n boltdynamics

NAME                     CLASS    HOSTS   ADDRESS                                                                              PORTS   AGE
sample-eks-app-ingress   <none>   *       a06c53cec53d2418f9136d4f399c0def-3914554eaa8632e1.elb.ap-southeast-2.amazonaws.com   80      64m

The DNS name shown above belongs to the ingress-nginx-controller service, which directs traffic to the application service and eventually to the pod.

We can verify from the AWS EC2 console that an NLB has been deployed with the DNS name assigned to the ingress controller.

[Screenshot: EC2 console showing the Network Load Balancer created for the ingress controller]

If we navigate to the NLB's public DNS from a web browser,

[Screenshot: sample application response in a web browser]

We can reach the application over the web 🎉
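
The same check works from the command line if you prefer, substituting the NLB DNS name from your own cluster,

curl http://a06c53cec53d2418f9136d4f399c0def-3914554eaa8632e1.elb.ap-southeast-2.amazonaws.com/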

As mentioned before, the application reflects the pod and namespace information, Availability Zone, client IP, proxy protocol and instance ID, which can be super handy when learning Kubernetes.

Run kubectl logs <nginx-pod-name> -n ingress-nginx to see GET requests in the NGINX container logs,

[Screenshot: NGINX controller logs showing the GET requests]

I hope you enjoyed this blog post on how to deploy a sample application to EKS. Next, we will see how to configure a Public Hosted Zone in Route53, an AWS ACM certificate and external DNS in Kubernetes so that Kubernetes resources can be discovered over the internet using a custom domain name.

 