K8s with AWS - EKS

Container Services on AWS

Container Orchestration

  • Scenario
    • Microservice scenario
    • Containers scale quickly
    • Need an environment to deploy containers
    • Containers are deployed on AWS EC2 Instances
      • How do you manage all the containers?
        • How many resources are available?
        • Have containers crashed?
        • Where to schedule the next container?
        • How to set up load balancing?
        • How to compare actual and desired state?
      • All these are features of an orchestration tool
  • Container Orchestration Tool: Managing, scaling and deploying containers
  • E.g. of tools
    • Docker Swarm
    • K8s
    • Mesos
    • Nomad
    • AWS Elastic Container Service (ECS)

ECS

  • Orchestration service
  • Manages the whole container lifecycle
  • How it works
    • You'll create an ECS cluster
    • The ECS cluster contains all the services to manage containers
    • ECS cluster represents a Control Plane for all the virtual machines that are running our containers
    • The containers need to run on actual machines: EC2 instances
    • The EC2 instance will be connected to ECS and managed by the Control Plane processes on the ECS Cluster
    • On the EC2 instances
      • Docker Runtime
      • ECS Agents - for Control Plane communication
  • ECS hosted on EC2 Instances
    • Manages the containers
    • You still need to manage the VMs
      • Create EC2 instances
      • Join them to ECS cluster
      • Check whether you have enough resources before scheduling the next container
      • Manage server OS
      • Docker runtime and ECS Agent
  • Upside
    • Full access and control of your infrastructure
      • You delegate only container management to ECS (see the CLI sketch below)
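  • A minimal CLI sketch of this setup (cluster, service and task-definition names are placeholders; the console flow is equivalent):
# Create the ECS cluster (the Control Plane managed by AWS)
aws ecs create-cluster --cluster-name my-cluster

# EC2 instances join the cluster via the ECS Agent (ECS_CLUSTER=my-cluster in /etc/ecs/ecs.config);
# list the instances that have registered with the cluster
aws ecs list-container-instances --cluster my-cluster

# Run containers as a service on those instances
aws ecs create-service \
  --cluster my-cluster \
  --service-name my-service \
  --task-definition my-task-def \
  --desired-count 2 \
  --launch-type EC2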

ECS with Fargate

  • You want AWS to manage the container orchestration as well as the hosting infrastructure
  • Alternative to EC2 instances
  • Create a Fargate instance and connect it to the ECS cluster
  • Fargate
    • A Serverless way to launch containers
      • Serverless: No server in your AWS account
    • No need to provision and manage servers
    • Launch container with Fargate
    • Fargate analyzes how many resources your container needs
    • Fargate then provisions server resources for that container
    • Provisions a server on demand
  • Advantages
    • No need to provision and manage servers
    • On-demand
    • Only the infrastructure resources needed to run containers
    • Pay for what you use
    • Easily scales up and down without fixed resources defined beforehand
  • Infrastructure is managed by AWS
  • Containers also managed by AWS
  • Need to manage only your application
  • Disadvantages
    • Not as flexible as using EC2 instances
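  • A hedged sketch of running the same task definition on Fargate instead of EC2 (subnet and security-group IDs are placeholders; the task definition is assumed to use the awsvpc network mode, which Fargate requires):
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition my-task-def \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc1234],securityGroups=[sg-0abc1234],assignPublicIp=ENABLED}"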

EKS

  • Elastic Kubernetes Service (EKS)
  • Alternative to ECS
  • Both EKS and ECS manage the Control Plane
  • EKS vs ECS
    • EKS: open-source (Kubernetes); easier to migrate to another platform (migration can still be difficult when other AWS services are used); Control Plane is not free
    • ECS: specific to AWS; migration is difficult; Control Plane is free

How EKS works

  • Create Cluster that represents the Control Plane
    • AWS provisions K8s Master Nodes with K8s master services installed
    • High availability - Master Nodes are replicated across Availability Zones
    • AWS takes care of managing etcd.
      • Creates backups and replicates it across AZs
    • Create EC2 instances (Compute Fleet) and connect it to the cluster
    • K8s processes will be installed on the worker nodes
    • EKS with EC2 instances
      • You need to manage the infrastructure for Worker Nodes (self-managed)
    • EKS with Nodegroup (semi-managed)
      • Creates and deletes EC2 instances for you but you need to configure it
      • Still have to manage other things like Auto-scaling
    • Good practice to use NodeGroups
    • EKS with Fargate (fully-managed)
    • Can use both EC2 and Fargate at the same time

Steps to Create EKS Cluster

  • Provision an EKS cluster
    • Master Nodes
  • Create Nodegroup of EC2 instances
    • Worker Nodes
    • Can use Fargate alternative
  • Connect Nodegroup to EKS cluster
  • Connect to cluster with kubectl and deploy your containerized apps

Elastic Container Registry (ECR)

  • Repository for Docker images
    • Store, manage and deploy Docker containers
  • Alternative to
    • Docker hub and Nexus
  • Advantages
    • Integrates well with other AWS Services
      • e.g. with EKS
        • Can get notified when new image is pushed
        • Pull new images into EKS
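  • Typical ECR workflow as a hedged CLI sketch (repository name and account ID are placeholders):
# Create a private repository
aws ecr create-repository --repository-name my-app

# Authenticate Docker against the registry
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin <aws-account-id>.dkr.ecr.us-east-1.amazonaws.com

# Tag and push an image
docker tag my-app:1.0 <aws-account-id>.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0
docker push <aws-account-id>.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0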

3 AWS Container Services

  • ECR
    • Private Docker Repo
  • ECS
    • Container Orchestration Service
  • EKS
    • Container Orchestration Service
    • Managed K8s service

Create EKS Cluster Manually

  • Steps
    • Create EKS IAM Role
    • Create VPC for Worker Nodes
    • Create EKS cluster (Master Nodes)
    • Connect kubectl with EKS cluster
    • Create EC2 IAM Role for Node Group
    • Create Node Group and attach EKS Cluster
    • Configure auto-scaling
    • Deploy our application to our EKS cluster

Create EKS IAM Role

  • Create IAM Role in our account
    • IAM components are global
  • Assign Role to EKS cluster managed by AWS to allow AWS to create and manage components on our behalf
    • Select EKS - Cluster as the use case (a CLI equivalent is sketched below)
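  • A CLI sketch of the same role creation (role name is a placeholder; AmazonEKSClusterPolicy is the AWS managed policy for the cluster role):
# Trust policy that lets the EKS service assume the role
cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name eks-cluster-role \
  --assume-role-policy-document file://eks-trust-policy.json

aws iam attach-role-policy \
  --role-name eks-cluster-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy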

Create VPC for Worker Nodes

  • Why do we need another VPC?
    • EKS cluster needs a specific networking configuration
    • K8s specific and AWS specific networking rules
    • Default VPC not optimized for it
    • The VPC is meant for the Worker Nodes
  • Examples of specific config for EKS cluster
    • Each VPC has subnets
    • Each subnet can have its own firewall rules and these are configured with NACL (Network Access Control Lists)
    • EC2 instances in the subnets can also have their own firewall rules defined by Security Groups
    • Worker Nodes need specific Firewall configurations
      • Best practice: configure Public Subnet and Private Subnet
      • When you use a LoadBalancer Service, for example, AWS creates a cloud-native load balancer in the public subnet, while the Worker Nodes it routes to sit in the private subnet
    • For Master and Worker Node communication
    • Master Nodes
      • In AWS managed account
      • In another VPC outside our account
      • Control Plane
  • Through IAM Role you give K8s permission to change VPC configs
  • AWS provides a preconfigured CloudFormation template for such a VPC (see the sketch below)
  • Need to specify the VPC when creating the Worker Nodes
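  • A hedged sketch of creating the VPC from the CloudFormation template referenced in the AWS EKS docs (stack name is a placeholder; check the docs for the current template URL):
aws cloudformation create-stack \
  --stack-name eks-vpc-stack \
  --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml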

Create EKS Cluster (Control Plane)

  • Will be charged for Master Nodes and Worker Nodes
  • Secret encryption
    • Have to install additional tools for secret encryption
    • AWS has a service for secret encryption - AWS Key Management Service (KMS)
  • Cluster endpoint access
    • API Server is the entry point to the cluster
    • API Server will be deployed and running inside the AWS managed VPC
    • Public
      • If we want to access the Control Plane from outside the VPC eg. using kubectl, we have to make its access Public
    • Private
      • Allowing worker Nodes to connect to cluster endpoint within the VPC
      • Will enable master nodes and worker nodes to talk to each other through our VPC
    • Choose Public and Private (see the CLI sketch below)
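  • A CLI sketch of the same cluster creation (account ID, subnet IDs and version are placeholders; the cluster name matches the one used later in these notes):
aws eks create-cluster \
  --name eks-cluster-test \
  --kubernetes-version 1.25 \
  --role-arn arn:aws:iam::<aws-account-id>:role/eks-cluster-role \
  --resources-vpc-config subnetIds=subnet-aaa,subnet-bbb,subnet-ccc,subnet-ddd,endpointPublicAccess=true,endpointPrivateAccess=true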

Connect to EKS Cluster Locally with kubectl

  • We create a kubeconfig file for the EKS cluster
aws eks update-kubeconfig --name eks-cluster-test
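  • To verify the connection (no Worker Nodes exist yet, so only the Control Plane responds):
kubectl cluster-info
kubectl get ns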

Cluster Overview

  • Can use EC2 or Fargate
    • EC2 - self-managed
    • Fargate - managed

Create EC2 IAM Role for Node Group

  • Worker Nodes run Worker Processes
    • Kubelet is the main process
      • Runs and manages the Pods on the node and communicates with other AWS services
  • Need to give Kubelet permissions to execute tasks and make API calls to other services
    • 3 policies (attached in the CLI sketch after this list)
      • AmazonEKSWorkerNodePolicy
      • AmazonEC2ContainerRegistryReadOnly
      • AmazonEKS_CNI_Policy
        • Container Network Interface
        • The internal network in K8s so that pods in different servers inside the cluster can communicate with each other
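  • A CLI sketch of creating the role and attaching the three policies (role name is a placeholder):
# Trust policy that lets EC2 instances assume the role
cat > ec2-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name eks-node-group-role \
  --assume-role-policy-document file://ec2-trust-policy.json

for policy in AmazonEKSWorkerNodePolicy AmazonEC2ContainerRegistryReadOnly AmazonEKS_CNI_Policy; do
  aws iam attach-role-policy \
    --role-name eks-node-group-role \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done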

Add Node Group to EKS Cluster

  • Creates EC2 instances with worker processes installed (a CLI sketch follows this list)
    • Container runtime
    • Kube proxy
    • Kubelet
  • Save infrastructure cost with Auto-scaler
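  • A CLI sketch of creating the Node Group (account ID, subnet IDs and instance type are placeholders):
aws eks create-nodegroup \
  --cluster-name eks-cluster-test \
  --nodegroup-name eks-node-group \
  --node-role arn:aws:iam::<aws-account-id>:role/eks-node-group-role \
  --subnets subnet-aaa subnet-bbb \
  --instance-types t3.small \
  --scaling-config minSize=1,maxSize=3,desiredSize=2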

Configure AutoScaling in EKS Cluster

  • The Auto Scaling Group logically groups a number of EC2 instances together
  • It holds the configuration for the minimum and maximum number of EC2 instances for scaling
  • AWS doesn't automatically autoscale our resources
  • We'll use the K8s Cluster Autoscaler together with the Auto Scaling Group
  • Things we need
    • Auto Scaling Group - automatically created
    • Create custom policy and attach to Node group IAM Role
    • Deploy K8s Autoscaler

Custom Policy

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup",
        "ec2:DescribeLaunchTemplateVersions"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }
  ]
}
  • Attach the policy to the existing Node Group IAM Role (CLI sketch below)
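  • A CLI sketch, assuming the JSON above is saved as autoscale-policy.json (policy and role names are placeholders):
aws iam create-policy \
  --policy-name node-group-autoscale-policy \
  --policy-document file://autoscale-policy.json

aws iam attach-role-policy \
  --role-name eks-node-group-role \
  --policy-arn arn:aws:iam::<aws-account-id>:policy/node-group-autoscale-policy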

Tags

  • Tags let different services or components discover information about each other
  • The K8s Cluster Autoscaler requires these tags to auto-discover Auto Scaling Groups inside the AWS account
  • Uses
    • k8s.io/cluster-autoscaler/<cluster-name>
    • k8s.io/cluster-autoscaler/enabled
    • Have been configured automatically
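  • To double-check that the Node Group's Auto Scaling Group carries these tags, a small query sketch:
aws autoscaling describe-auto-scaling-groups \
  --query "AutoScalingGroups[].{Name:AutoScalingGroupName,TagKeys:Tags[].Key}"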

Deploy Autoscaler Component

kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

Verify autoscaler

kubectl get deployments -n kube-system cluster-autoscaler

Edit deployment

kubectl edit deployment -n kube-system cluster-autoscaler

Changes

  • Update cluster name
  • Update cluster-autoscaler version
  • Add new commands
  • Add annotation
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"cluster-autoscaler"},"name":"cluster-autoscaler","namespace":"kube-system"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"cluster-autoscaler"}},"template":{"metadata":{"annotations":{"prometheus.io/port":"8085","prometheus.io/scrape":"true"},"labels":{"app":"cluster-autoscaler"}},"spec":{"containers":[{"command":["./cluster-autoscaler","--v=4","--stderrthreshold=info","--cloud-provider=aws","--skip-nodes-with-local-storage=false","--expander=least-waste","--node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/\u003cYOUR CLUSTER NAME\u003e"],"image":"registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.2","imagePullPolicy":"Always","name":"cluster-autoscaler","resources":{"limits":{"cpu":"100m","memory":"600Mi"},"requests":{"cpu":"100m","memory":"600Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"readOnlyRootFilesystem":true},"volumeMounts":[{"mountPath":"/etc/ssl/certs/ca-certificates.crt","name":"ssl-certs","readOnly":true}]}],"priorityClassName":"system-cluster-critical","securityContext":{"fsGroup":65534,"runAsNonRoot":true,"runAsUser":65534,"seccompProfile":{"type":"RuntimeDefault"}},"serviceAccountName":"cluster-autoscaler","volumes":[{"hostPath":{"path":"/etc/ssl/certs/ca-bundle.crt"},"name":"ssl-certs"}]}}}}
  creationTimestamp: "2023-03-25T20:15:13Z"
  generation: 1
  labels:
    app: cluster-autoscaler
  name: cluster-autoscaler
  namespace: kube-system
  resourceVersion: "55095"
  uid: e8e212c9-41a6-4cc2-82e7-0ddeb8b36826
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: cluster-autoscaler
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/port: "8085"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app: cluster-autoscaler
    spec:
      containers:
      - command:
        - ./cluster-autoscaler
        - --v=4
        - --stderrthreshold=info
        - --cloud-provider=aws
        - --skip-nodes-with-local-storage=false
        - --expander=least-waste
        - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/eks-cluster-test
        - --balance-similar-node-groups
        - --skip-nodes-with-system-pods=false
        image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.25.0

Error

  • When Pod creation was stuck on Pending, check the events to see why scheduling failed
k get events -n kube-system

  • Auto-scaler is running on one of the pods

  • To check pod

k get pod <pod-name> -n kube-system -o wide
  • Save to logs file
k logs -n kube-system cluster-autoscaler-5bfc6699ff-56k4g > as-logs.txt
  • Auto-Scaling
    • Saves cost
    • Provisioning a new EC2 instance takes time

Deploy nginx application: LoadBalancer

  • We want an app with a UI, so we can see how to access it in the cluster

nginx.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
  • Will have AWS LoadBalancer attached to it
  • Service type of LoadBalancer creates a cloud native LoadBalancer
k apply -f nginx.yaml

Points

  • Port mapping
    • From browser -> LoadBalancer on port 80
    • LoadBalancer 80 -> Node port (eg. 30204)
    • From Node Port -> Service port 80
    • Service port 80 -> Nginx port 80
  • While creating the VPC for the Node groups, we created 2 private and 2 public subnets
    • The LoadBalancers are scheduled in public subnets
    • The IP of the LoadBalancer will be assigned from the public subnet and that's where the LB will be created
      • IP will be from one of the public subnet ranges
    • External components like the LoadBalancer will get both public and private IP addresses
    • In AWS, there is a requirement for LB to have at least 1 EC2 instance in at least 2 availability zones
      • If not, will be flagged as unhealthy
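  • To verify access, a small sketch (the DNS name is a placeholder taken from the EXTERNAL-IP column):
kubectl get service nginx        # EXTERNAL-IP shows the DNS name of the AWS load balancer
kubectl get nodes -o wide        # the nodes the load balancer forwards to
curl http://<loadbalancer-dns-name>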

Test 20 Replicas - Autoscaler in Action

  • Change the nginx config to 20 replicas (or scale directly, as sketched below)
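  • A sketch of the test (either edit replicas in nginx.yaml and re-apply, or scale directly):
kubectl scale deployment nginx --replicas=20
kubectl get pods -o wide     # some Pods stay Pending until new nodes join
kubectl get nodes --watch    # the Cluster Autoscaler adds EC2 instances up to the Node Group max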

Create EKS Cluster with Fargate

  • Serverless - no EC2 instances in our account
  • AWS creates and manages the compute resources in its own account
  • Fargate provisions 1 pod per VM
  • On EC2, we can have multiple pods running
  • Limitations when using Fargate
    • no support for stateful applications
    • no support for DaemonSets
  • We can have Fargate in addition to Node Group attached to our EKS cluster

Create IAM Role For Fargate

  • Pods/Kubelet on servers provisioned by Fargate need permissions
  • Need to create a Fargate Role
    • Role will be attached to EKS
    • Select EKS - Fargate pod as the use case (a CLI equivalent is sketched below)
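  • A CLI sketch of the same role creation (role name is a placeholder; AmazonEKSFargatePodExecutionRolePolicy is the AWS managed policy for the Fargate pod execution role):
cat > fargate-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks-fargate-pods.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name eks-fargate-role \
  --assume-role-policy-document file://fargate-trust-policy.json

aws iam attach-role-policy \
  --role-name eks-fargate-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy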

Create Fargate Profile

  • On EKS dashboard, create Fargate profile
  • Fargate profile creates a rule
    • pod selection rule
    • specifies which Pods should use Fargate when they are launched
  • Why provide our own VPC if AWS will manage our infrastructure for us?
    • Pods scheduled through Fargate get an IP address from the range of our subnets
    • Remove the public subnets from the profile
    • Fargate creates Pods in private subnets only
  • We need selectors to tell Fargate whether a Pod is meant to be scheduled by Fargate
  • The namespace entered in the profile must be used in the deployment YAML file so Fargate knows it's supposed to handle the scheduling of that Pod
  • Can also configure labels (see the CLI sketch below)
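  • A CLI sketch of the profile (profile name, role ARN and private subnet IDs are placeholders):
aws eks create-fargate-profile \
  --cluster-name eks-cluster-test \
  --fargate-profile-name dev-profile \
  --pod-execution-role-arn arn:aws:iam::<aws-account-id>:role/eks-fargate-role \
  --subnets subnet-private-aaa subnet-private-bbb \
  --selectors 'namespace=dev,labels={profile=fargate}'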

nginx.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: dev
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
        profile: fargate
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80

Use Cases - having both Node Group and Fargate

  • Dev and Test environment inside the same cluster
    • Node Group: Test
    • Fargate: Dev
    • Pods with specific selector launched through Fargate
    • Pods with e.g. namespace dev launched through Fargate
  • Fargate limitations
    • Fargate has limitations in terms of which K8s components it can run, e.g. it can't run stateful apps or DaemonSets
    • Can have EC2 Node Group for Stateful Apps
    • Use Fargate for Stateless apps

Deploy Pod Through Fargate

Create dev namespace for Fargate

k create ns dev

Apply deployment

k apply -f nginx-config.yaml
  • For Fargate, each pod runs in its own VM
  • Can create multiple Fargate profiles
    • Each profile can target a specific namespace
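  • To verify, a small sketch:
kubectl get pods -n dev -o wide    # each Fargate Pod runs on its own node/VM
kubectl get nodes                  # Fargate nodes appear with names like fargate-ip-...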

Cleanup Cluster resources

  • Before we can delete the cluster, we need to delete the Node Groups and Fargate profiles
  • Delete IAM roles
  • Delete CloudFormation stack
  • Delete cluster

Create EKS Cluster with eksctl

  • We previously created the EKS cluster manually
    • Difficult to replicate
  • eksctl
    • Command Line Tool for working with EKS clusters that automates many individual tasks
    • Execute just one command
    • Necessary components get created and configured in the background
    • Cluster will be created with default parameters
    • With more CLI options, you can customize your cluster

Install eksctl and Connect eksctl with AWS Account

  • Installation instructions are in the eksctl GitHub repo
  • Configure aws credentials
aws configure list

Create EKS Cluster

eksctl create cluster \
  --name demo-cluster \
  --version 1.25 \
  --region us-east-1 \
  --nodegroup-name demo-nodes \
  --node-type t2.micro \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3
  • Can also customize the cluster by using a config file (a sample sketch follows this list). Just run
eksctl create cluster -f cluster.yaml
  • eksctl is not just for creating the cluster but for managing and configuring the cluster as well
  • After creating the cluster, eksctl configures kubectl to talk to the cluster
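  • A hedged sketch of what such a cluster.yaml could look like, mirroring the CLI flags above (eksctl ClusterConfig schema):
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-east-1
  version: "1.25"
managedNodeGroups:
  - name: demo-nodes
    instanceType: t2.micro
    desiredCapacity: 2
    minSize: 1
    maxSize: 3
EOF

eksctl create cluster -f cluster.yaml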

Delete cluster

eksctl delete cluster --name <cluster-name>

Deploy to EKS cluster from Jenkins

Steps

  • Install kubectl command line tool inside Jenkins container
  • Install aws-iam-authenticator tool inside Jenkins container
    • Authenticate with AWS
    • Authenticate with K8s cluster
  • Create kubeconfig file to connect to EKS cluster
    • Contains all the necessary information for authentication
      • authenticate with AWS account
      • authenticate with K8s cluster
  • Add AWS credentials on Jenkins for AWS account authentication
  • Adjust Jenkinsfile to configure EKS cluster deployment
  • .kube/config file contains all the necessary information for authentication
  • eksctl automatically installs aws-iam-authenticator

Install kubectl on Jenkins Container inside Server

docker exec -u 0 -it 28643b51d6ad bash

Install kubectl
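  • One way to install it inside the (Debian-based) Jenkins container, as a sketch following the upstream kubectl install docs:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
mv kubectl /usr/local/bin/kubectl
kubectl version --client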

Install AWS IAM Authenticator
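  • Download sketch (the release version and asset name are placeholders; check the aws-iam-authenticator GitHub releases page for the current ones):
curl -Lo aws-iam-authenticator https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.6.14/aws-iam-authenticator_0.6.14_linux_amd64
chmod +x ./aws-iam-authenticator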

mv ./aws-iam-authenticator  /usr/local/bin

Create kubeconfig File For Jenkins

Sample

apiVersion: v1
clusters:
- cluster:
    server: https://9918823FE18D162E013EA489B21E32FC.gr7.us-east-1.eks.amazonaws.com
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJek1ETXlPREUxTXpnd05sb1hEVE16TURNeU5URTFNemd3Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTEF2CjFEMXNIUy9ITGE1Y0JRTzkzTjVVd1o0MVVWV210UmNtYU8zRlp5UFR6UlR1SllSUHZ0R1FNdVh6NDExeDQrZU8KdVBBR1VHd0tGbkFzRXRQNFplZlQwbHJDUXpJUys4ak5ha1Zmb2oyaU9WanRzZi84OGJ3OFlVTU1pbno0cS8ydwowVGd1cE5VcElvd3VUd2NYaG5NaFhpM285Qks1UDFkUW5LaC8yU1IrVzE3dHJqTHg0VENOc3BCRGVicEtmbFphCnZIQjU4YVNwMzdyMnh3L01CbXZXTnRWQXFqdHlQcjJGMHBmSHBzMHBMNVNxMDUvWnRCYnFsLzJOWGgrSlJSZUMKYVpkamhCUHBvWEErb3BYNHBNNmw4OGRZM3cvR3o4ZmthbGJKTWx3dEFTK3UzOUFEVGowNXd2M1J2a0Q2VE9yOApXN242cVBOS3o2R3V5OHpoMVJrQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZKK1BJUmtmOUNrSi9QMFVuY3lVQzd3bVkxV3ZNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBR05MTlltRTE4Uk5LSEt0ZGJXbgpwZUJ2cENvKytnOW8wenRHaG5sRVJ0cTRydXBxYlJPUExCaTA1Wk1aeHA4VkpnVXV1Q2JrWFkxTzlDQkUrazlHCjFFOHdMK2QvZUtvTEM4cU1nVTUyeDZkS2hQSUgrQktBbS9TTE51UE83QlB0WXdSbG5wNGlJOWk5YmZCaXlkYlEKYm9OMWpBYUZMTzZrY0orTHpiTTZna1JFZS9kdjZlZnNIQ2RkRlhOSU55WmY1WGdLR2hOVDRLY1BSMUNzUXQ5OApJbUdIRTY4UVZqSnFid1RPZlRxWFBXZjdKbjNndjBxc0RpUjlWRUxDcHg2Mm5odm51cWx2WElTZUVxaEFqSmEvCnExb1ozbmQ0UHVYd2k2dlQwUjZxRVg0WG5VU1JMdmxxc2ZkU2NhUHRUQk1YQWFjaDRuTzNnQ25lYXkxei9MemQKeHZvPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  name: arn:aws:eks:us-east-1:227984707254:cluster/demo-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:227984707254:cluster/demo-cluster
    user: arn:aws:eks:us-east-1:227984707254:cluster/demo-cluster
  name: arn:aws:eks:us-east-1:227984707254:cluster/demo-cluster
current-context: arn:aws:eks:us-east-1:227984707254:cluster/demo-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:227984707254:cluster/demo-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "demo-cluster"
        # - "- --role"
        # - "arn:aws:iam::$account_id:role/my-role"
      # env:
        # - name: "AWS_PROFILE"
        #   value: "aws-profile"
  • Values we need to update
    • K8s cluster name
    • Server endpoint
    • Certificate authority data (copy from local kubeconfig file)

If we can't edit in container

  • Create config file in droplet
  • Enter container as Jenkins user and create kube directory in user's home
docker exec -it 28643b51d6ad bash
cd ~
mkdir .kube
  • Exit container and run...
docker cp config <container-id>:/var/jenkins_home/.kube

Create AWS Credentials

  • Best practice
    • Create AWS IAM User for Jenkins with limited permissions
  • Create credentials in Jenkins
    • Kind: Secret text
    • One credential for Access key. Another for Secret key
    • Copy and paste credentials from local
  • Don't forget to configure aws credentials in the .aws directory (see the sketch below)
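  • A sketch of that credentials file inside the Jenkins container (key values are placeholders):
mkdir -p /var/jenkins_home/.aws
cat > /var/jenkins_home/.aws/credentials <<'EOF'
[default]
aws_access_key_id = <jenkins-user-access-key-id>
aws_secret_access_key = <jenkins-user-secret-access-key>
EOF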

Configure Jenkinsfile to deploy to EKS

Jenkinsfile sample

#!/usr/bin/env groovy

pipeline {
    agent any
    stages {
        stage('build app') {
            steps {
               script {
                   echo "building the application..."
               }
            }
        }
        stage('build image') {
            steps {
                script {
                    echo "building the docker image..."
                }
            }
        }
        stage('deploy') {
            environment {
               AWS_ACCESS_KEY_ID = credentials('jenkins_aws_access_key_id')
               AWS_SECRET_ACCESS_KEY = credentials('jenkins_aws_secret_access_key')
            }
            steps {
                script {
                   echo 'deploying docker image...'
                   sh 'kubectl create deployment nginx-deployment --image=nginx'
                }
            }
        }
    }
}

Deploy to LKE cluster from Jenkins

  • No specific platform authentication necessary
  • Create K8s cluster on Linode Kubernetes Engine (LKE)
  • Use kubeconfig file to connect and authenticate
  • Steps
    • Kubectl command line tool available inside Jenkins container
    • Install Kubernetes CLI Jenkins plugin
      • Execute kubectl with kubeconfig credentials
    • Configure Jenkinsfile to deploy to LKE cluster

Create Kubernetes Cluster on Linode and Connect to it

  • Download kubeconfig file
  • Point to config file
export KUBECONFIG=jenkins-k8s-kubeconfig.yaml

Add LKE credentials on Jenkins

  • Use type of Secret file
  • Upload file

Install Kubernetes CLI Plugin on Jenkins

  • Find Kubernetes CLI

Configure Jenkinsfile To Deploy to LKE cluster

Jenkinsfile

#!/usr/bin/env groovy

pipeline {
    agent any
    stages {
        stage('build app') {
            steps {
               script {
                   echo "building the application..."
               }
            }
        }
        stage('build image') {
            steps {
                script {
                    echo "building the docker image..."
                }
            }
        }
        stage('deploy') {
            steps {
                script {
                   echo 'deploying docker image...'
                   withKubeConfig([credentialsId: 'lke-credentials', serverUrl: 'https://79fa9228-1d11-47ec-870b-33106d53122b.eu-central-2.linodelke.net']) {
                       sh 'kubectl create deployment nginx-deployment --image=nginx'
                   }
                }
            }
        }
    }
}
  • Steps depend on the platform you use

Jenkins Credentials Note

  • On EC2, can create a Jenkins user with its own credentials like SSH keys and use those credentials on Jenkins
  • For LKE kubeconfig, create Jenkins service account and give it only the permissions Jenkins needs
    • Then we'd create credentials for Jenkins service account with the user token

Complete CI/CD Pipeline with EKS and Dockerhub

  • Pipeline
    • Increment version
    • Build app
    • Build and push Docker repo
    • Deploy to K8s cluster
    • Commit version update
  • Branch
    • jenkins-eks-hub

Create Deployment and Service and Adjust Jenkinsfile

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: $APP_NAME
  labels:
    app: $APP_NAME
spec:
  replicas: 1
  selector:
    matchLabels:
      app: $APP_NAME
  template:
    metadata:
      labels:
        app: $APP_NAME
    spec:
      imagePullSecrets:
        - name: my-registry-key
      containers:
        - name: $APP_NAME
          image: alfredasare/devops-demo-app:$IMAGE_NAME
          imagePullPolicy: Always
          ports:
            - containerPort: 8080

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: $APP_NAME
spec:
  selector:
    app: $APP_NAME
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  • We'll use a command-line tool called envsubst (environment substitution)
  • Need to install it inside Jenkins container
sh 'envsubst < kubernetes/service.yaml | kubectl apply -f -'

Install "gettext-base" tool on Jenkins

  • SSH into droplet
  • Enter container as root user
apt-get update
apt-get install gettext-base

Create Secret for DockerHub Credentials

  • K8s must be allowed to fetch the new image from the private repository
  • The credentials must be available in the cluster
  • Create a Secret of type docker-registry containing the registry credentials
  • We need to create the Secret just once
    • We won't put it in the pipeline because it would run on every build
    • Secrets would be hosted in one repository
    • One Secret per namespace
      • Can't use Secret from another namespace
k create secret docker-registry my-registry-key \
--docker-server=docker.io \
--docker-username=alfredasare \
--docker-password=****
  • k get secret

Complete Pipeline

def gv

pipeline {
    agent any
    tools {
        maven 'maven-3.9'
    }
    stages {
        stage("init") {
            steps {
                script {
                    gv = load "script.groovy"
                }
            }
        }
        stage("increment version") {
            steps {
                script {
                    echo "incrementing app version..."
                    sh "mvn build-helper:parse-version versions:set -DnewVersion=\\\${parsedVersion.majorVersion}.\\\${parsedVersion.minorVersion}.\\\${parsedVersion.nextIncrementalVersion} versions:commit"
                    def matcher = readFile('pom.xml') =~ '<version>(.+)</version>'
                    def version = matcher[0][1]
                    env.IMAGE_NAME = "$version-$BUILD_NUMBER"
                }
            }
        }
        stage("build jar") {
            steps {
                script {
                    gv.buildJar()
                }
            }
        }
        stage("build image") {
            steps {
                script {
                    gv.buildImage()
                }
            }
        }
        stage('deploy') {
            environment {
               AWS_ACCESS_KEY_ID = credentials('jenkins_aws_access_key_id')
               AWS_SECRET_ACCESS_KEY = credentials('jenkins_aws_secret_access_key_id')
               APP_NAME = 'java-maven-app'
            }
            steps {
                script {
                   echo 'deploying docker image...'
                   sh 'envsubst < k8s/deployment.yaml | kubectl apply -f -'
                   sh 'envsubst < k8s/service.yaml | kubectl apply -f -'

                }
            }
        }
        stage("commit version update") {
            steps {
                script {
                    sshagent(credentials: ['github-credentials']) {
                        // Need to set this once
                        // Can also ssh into Jenkins server to set it
                        // sh 'git config --global user.email "jenkins@example.com"'
                        // sh 'git config --global user.name "jenkins"'

                        // sh "git status"
                        // sh "git branch"
                        // sh "git config --list"

                        sh "git remote set-url origin git@github.com:alfredasare/java-maven-app.git"
                        sh "git add ."
                        sh 'git commit -m "ci: version bump"'
                        sh "git push origin HEAD:${BRANCH_NAME}"
                    }
                }
            }
        }
    }
}

Complete CI/CD pipeline with EKS and ECR

  • We'll replace Docker Hub with AWS ECR
  • Steps
    • Create ECR repo
      • supports many private repos
      • one repository per app
    • Create Credentials in Jenkins
    • Adjust building and tagging
    • Create Secret for ECR
    • Update Jenkinsfile
  • Branch: jenkins-eks-ecr

Create Credentials In Jenkins

  • Create username and password credentials
    • Username: AWS
    • Password: the output of aws ecr get-login-password --region us-east-1
  • docker.io = default server
  • For other Docker repos, you have to specify the server URL

Create Secret for AWS ECR

k create secret docker-registry aws-registry-key \
--docker-server=227984707254.dkr.ecr.us-east-1.amazonaws.com \
--docker-username=AWS \
--docker-password=<password - aws ecr get-login-password>

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: $APP_NAME
  labels:
    app: $APP_NAME
spec:
  replicas: 1
  selector:
    matchLabels:
      app: $APP_NAME
  template:
    metadata:
      labels:
        app: $APP_NAME
    spec:
      imagePullSecrets:
        - name: aws-registry-key
      containers:
        - name: $APP_NAME
          image: $DOCKER_REPO:$IMAGE_NAME
          imagePullPolicy: Always
          ports:
            - containerPort: 8080

Update Jenkinsfile

Jenkinsfile

def gv

pipeline {
    agent any
    tools {
        maven 'maven-3.9'
    }
    environment {
        DOCKER_REPO_SERVER = '227984707254.dkr.ecr.us-east-1.amazonaws.com'
        DOCKER_REPO = "${DOCKER_REPO_SERVER}/java-maven-app"
    }
    stages {
        stage("init") {
            steps {
                script {
                    gv = load "script.groovy"
                }
            }
        }
        stage("increment version") {
            steps {
                script {
                    echo "incrementing app version..."
                    sh "mvn build-helper:parse-version versions:set -DnewVersion=\\\${parsedVersion.majorVersion}.\\\${parsedVersion.minorVersion}.\\\${parsedVersion.nextIncrementalVersion} versions:commit"
                    def matcher = readFile('pom.xml') =~ '<version>(.+)</version>'
                    def version = matcher[0][1]
                    env.IMAGE_NAME = "$version-$BUILD_NUMBER"
                }
            }
        }
        stage("build jar") {
            steps {
                script {
                    gv.buildJar()
                }
            }
        }
        stage('build image') {
            steps {
                script {
                    echo "building the docker image..."
                    withCredentials([usernamePassword(credentialsId: 'ecr-credentials', passwordVariable: 'PASS', usernameVariable: 'USER')]) {
                        sh "docker build -t ${DOCKER_REPO}:${IMAGE_NAME} ."
                        sh "echo $PASS | docker login -u $USER --password-stdin ${DOCKER_REPO_SERVER}"
                        sh "docker push ${DOCKER_REPO}:${IMAGE_NAME}"
                    }
                }
            }
        }
        stage('deploy') {
            environment {
               AWS_ACCESS_KEY_ID = credentials('jenkins_aws_access_key_id')
               AWS_SECRET_ACCESS_KEY = credentials('jenkins_aws_secret_access_key_id')
               APP_NAME = 'java-maven-app'
            }
            steps {
                script {
                   echo 'deploying docker image...'
                   sh 'envsubst < k8s/deployment.yaml | kubectl apply -f -'
                   sh 'envsubst < k8s/service.yaml | kubectl apply -f -'

                }
            }
        }
        stage("commit version update") {
            steps {
                script {
                    sshagent(credentials: ['github-credentials']) {
                        // Need to set this once
                        // Can also ssh into Jenkins server to set it
                        // sh 'git config --global user.email "jenkins@example.com"'
                        // sh 'git config --global user.name "jenkins"'

                        // sh "git status"
                        // sh "git branch"
                        // sh "git config --list"

                        sh "git remote set-url origin git@github.com:alfredasare/java-maven-app.git"
                        sh "git add ."
                        sh 'git commit -m "ci: version bump"'
                        sh "git push origin HEAD:${BRANCH_NAME}"
                    }
                }
            }
        }
    }
}
  • This is just one way of doing it