Introduction to AWS
- Services we'll use
- EC2
- S3
- VPC
- IAM
- ECR
- EKS
- Scopes of services
- Global
- Regions
- AZs
- Different resources will be created in one of these scopes
- Global - IAM, Billing, Route53
- Region - S3, VPC, DynamoDB
- AZ - EC2, EBS, RDS
IAM: Manage users, roles and permissions
- Root user is created by default
- Has unlimited privileges
- Create admin user with fewer privileges
- Can have system users
- e.g. Jenkins deploys Docker containers on AWS
- Jenkins pushes Docker images to AWS Docker Repo
- Users
- Human users
- System users
- Groups
- Granting access to multiple IAM users
- Group users with same permissions
- Roles
- Granting AWS services access to other AWS services
- IAM user vs IAM role
- Users
- Human or system users
- Roles
- Policies can't be assigned directly to services
- Policies and permissions are assigned to roles and those roles are assigned to services
- Services act as a user
- E.g. For creating EC2 instances
    - User
        - Create IAM User
        - Assign Policy to IAM User
    - Role
        - We want EKS or ECS to create an EC2 instance for us
        - Create IAM Role
        - Assign Policy to IAM Role
        - Create a role for each Service
        - Create a policy specific to the Service
- Whenever we create a role, we have 2 steps
    - Assign Role to AWS Service
    - Attach Policies to that Role
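A sketch of those two steps with the AWS CLI (the role name and EC2 as the trusted service are illustrative, not from these notes):
# 1. Create a role that a given AWS service (here EC2) may assume
aws iam create-role --role-name my-service-role --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
# 2. Attach a policy (the permissions) to that role
aws iam attach-role-policy --role-name my-service-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess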
Regions and Availability Zones
- Cloud providers have physical data centres distributed all over the world, grouped into regions
- Region: Physical location where data centres are clustered
- Whenever you're creating resources, you can select which region to create that resource in
- Availability Zone: One or more discrete data centres
- AWS can have 2 or 3 data centres in the same region for replication
- If something happens to one data centre, your data and infrastructure will not be lost
- Create virtual servers and services closer to your customers
- If users are distributed, you can replicate your app in multiple regions
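The regions and the availability zones within one can be listed with standard AWS CLI commands (the region value is just an example):
aws ec2 describe-regions --query "Regions[].RegionName" --output text
aws ec2 describe-availability-zones --region us-east-1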
Virtual Private Cloud (VPC)
- In each region, you'll have a VPC
- Is your own isolated network in the cloud
- VPC spans all the AZs (and their Subnets) in that region
- Private network isolated from others
- Bigger companies can use multiple VPCs in different regions
What VPCs include
- Is a virtual representation of network infrastructure
- Setup of servers, network configuration moved to the cloud
- Your components have to run in a VPC
Subnets
- Are subnetworks of the VPC
- Subnets span individual availability zones
    - For each zone, you'll have a subnet
- Public and Private subnets
    - In the config, you can't explicitly request a private or public subnet
    - But you can configure the networking or firewall rules for the Subnet that make it into either a private or a public subnet
    - When you block all outside communication to the Subnet, you're making it a Private Subnet
        - Other applications inside the VPC will still be able to access services within it
- Typical use case
    - Private Subnet
        - Database can be running here because you don't want your database to be accessible directly from the outside
    - Public Subnet
        - Allows external traffic into the Subnet on the ports you open
        - You can open port 80 or 8000 to your app, which will allow access to your app from the outside
        - That app can now communicate with your database in the Private Subnet from within the VPC
- Whenever you create a virtual server, it has to have an IP
    - In the VPC, we have a range of private or internal IP addresses
    - The range is defined on the VPC level
    - Whenever you create an EC2 instance, an IP address will be assigned from that range
    - That internal IP is not for outside web traffic but rather for traffic inside the VPC
    - Each subnet also gets its own IP address range from that total range
- For an EC2 instance, we also require a public IP address besides the private one that is assigned in the VPC
    - This is also configured in the VPC
    - After creation, the instance will get both a private and a public IP address
- For allowing internet connectivity with the VPC, we also have Internet Gateways
    - An Internet Gateway connects the VPC to the outside internet
- In addition to all the network configuration, we want to secure our components
    - Control access to the VPC
    - Control access to your individual server instances
    - Can control access on multiple levels
- Security
    - Can control access on a Subnet level or individual instance level
    - Can control access on the Subnet level using Network Access Control Lists (NACLs)
    - Can control access on the instance level with Security Groups
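Both levels can be inspected with the CLI; a small sketch (the VPC id is the one that appears later in these notes, purely illustrative here):
# Subnet level: network ACLs of a given VPC
aws ec2 describe-network-acls --filters "Name=vpc-id,Values=vpc-00dc1341f0c31b0b1"
# Instance level: security groups in the same VPC
aws ec2 describe-security-groups --filters "Name=vpc-id,Values=vpc-00dc1341f0c31b0b1"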
Classless Inter-domain Routing (CIDR)
- CIDR block
- It is a range of IP addresses
- How to choose a CIDR block
- Calculate on MX Toolbox
- For 172.31.0.0/16, for instance
    - Its range is 172.31.0.0 - 172.31.255.255
    - Depending on how many IPs you want, you can set the range
    - The /16 means the first 16 bits are fixed (172.31)
- From the VPC CIDR block, you can give your subnet sub CIDR blocks
- Calculate Sub CIDR block for Subnet
- Subnet calculator
- For 4 Subnets, you can divide the VPC range into 4 smaller blocks to get your subnet CIDR blocks
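For example, dividing a /16 into four equal blocks yields four /18s; a hedged sketch of creating them with the CLI (VPC id and availability zones are illustrative):
# 172.31.0.0/16 split into four /18 subnets, one per AZ
aws ec2 create-subnet --vpc-id vpc-00dc1341f0c31b0b1 --cidr-block 172.31.0.0/18 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-00dc1341f0c31b0b1 --cidr-block 172.31.64.0/18 --availability-zone us-east-1b
aws ec2 create-subnet --vpc-id vpc-00dc1341f0c31b0b1 --cidr-block 172.31.128.0/18 --availability-zone us-east-1c
aws ec2 create-subnet --vpc-id vpc-00dc1341f0c31b0b1 --cidr-block 172.31.192.0/18 --availability-zone us-east-1d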
Introduction To EC2
- Elastic Compute Cloud
- Virtual server in AWS cloud
- Provides compute capacity
- Demo: Deploy a Web App on EC2
- Create an EC2 instance on AWS
- Connect to EC2 instance with SSH
- Install Docker on remote EC2 instance
- Run Docker Container (docker login, pull, run) from private repo
- Configure EC2 firewall to access app externally from browser
- Each availability zone has its own Subnet
- Tags are for adding metadata
SSH
- After downloading the pem file, we have to make its permissions stricter
chmod 400 .ssh/devops-ec2.pem
ssh -i ~/.ssh/devops-ec2.pem ec2-user@44.199.228.221
- After ssh-ing, the terminal prompt shows ec2-user with the instance's private IP address
Install Docker on EC2
Update your package manager tool for up-to-date repositories
sudo yum update
Install docker
sudo yum install docker
Start docker daemon
sudo service docker start
- We want to add our current user to the docker group so we can run docker commands without sudo
sudo usermod -aG docker $USER
- We have to log out and log in again for the changes to take effect
Check user groups
groups
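Logging out and back in can be done by simply ending the SSH session and reconnecting; docker should then work without sudo:
exit                                                  # leave the SSH session
ssh -i ~/.ssh/devops-ec2.pem ec2-user@44.199.228.221  # reconnect
docker ps                                             # now works without sudo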
Run Web Application on EC2
- Pull docker image
- Run docker container
- App starts on port 3080 inside the container, so we have to bind that port to a server port
- We need to log in before we can fetch from the private repository
Check docker host
docker login [host]
- If host is omitted, it assumes we're using Docker Hub
- Creates a hidden file .docker/config.json
    - Will hold credentials or auth tokens for the docker repo
docker pull alfredasare/devops-demo-app:1.0
docker run -d -p 3000:3080 alfredasare/devops-demo-app:1.0
- Add new inbound rule to Security Group to allow traffic to port 3000
- Can now connect using the public IP or the DNS name with the port number
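The same inbound rule can also be added from the CLI instead of the console (security group id is illustrative; 0.0.0.0/0 opens the port to everyone):
aws ec2 authorize-security-group-ingress --group-id sg-033edff3d7b8b1283 --protocol tcp --port 3000 --cidr 0.0.0.0/0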
Note
Building on M1
docker buildx build --platform linux/amd64,linux/arm64 --push -t <tag_to_push> .
- The built image has to be pushed right after building, since a multi-platform image can't be loaded into the local Docker daemon
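If the default builder doesn't support multi-platform builds, a buildx builder usually has to be created and selected first (the builder name is arbitrary):
docker buildx create --name multiarch --use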
Deploy to EC2 server from Jenkins Pipeline - 1
- Previously, we built our docker image and deployed to Docker Hub using Jenkins
- Now we want to deploy the application to our EC2 instance
- Deployment
- Connect to EC2 server instance from Jenkins server via ssh (ssh agent)
- Execute docker run on EC2 instance
Install SSH Agent Plugin and Create SSH credentials type
- Install SSH Agent from plugins
- Need to create Jenkins credentials of the SSH Username with private key type, using the EC2 pem key
Pipeline Syntax
- In snippet generator, select ssh agent
- Click Generate Pipeline syntax for the snippet you can use in your Jenkinsfile
Jenkinsfile: Connect to EC2 and run Docker Command
- Should have already run docker login on the remote server
- We need to give Jenkins IP permission to connect to EC2
- Add Jenkins IP to Security Group
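As a sketch, the rule for the Jenkins server could be added like this (group id from earlier; 203.0.113.10 is a placeholder for the Jenkins server's IP):
aws ec2 authorize-security-group-ingress --group-id sg-033edff3d7b8b1283 --protocol tcp --port 22 --cidr 203.0.113.10/32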
Jenkinsfile
def deployApp() {
    echo "deploying the application to EC2"
    def dockerCmd = 'docker run -p 3000:3080 -d alfredasare/devops-demo-app:1.1'
    // 'ec2-server-key' is the id of the SSH credentials created in Jenkins
    sshagent(['ec2-server-key']) {
        sh "ssh -o StrictHostKeyChecking=no ec2-user@44.199.228.221 ${dockerCmd}"
    }
}
Notes
- Deploying using SSH
- Applicable for all servers or cloud providers
- This is a simple use case
- Connect to server and start 1 Docker container
- For smaller projects
- For complex setups
- They have tens or hundreds of containers
- Use container orchestration tool
Using Docker Compose for Deployment
- A more advanced use case
- Small app which starts through Docker Compose
- Copy docker-compose file from Git Repo and execute inside sshAgent
- Execute docker-compose command on the EC2 server
Executing the complete pipeline
Jenkinsfile
#!/usr/bin/env groovy
// @Library('jenkins-shared-library')_
library identifier: 'jenkins-shared-library@main', retriever: modernSCM(
[
$class: 'GitSCMSource',
remote: 'git@github.com:alfredasare/jenkins-shared-library.git',
credentialsId: 'github-credentials'
]
)
def gv
pipeline {
agent any
tools {
maven 'maven-3.9'
}
environment {
IMAGE_NAME = 'alfredasare/devops-demo-app:java-maven-1.0'
}
stages {
// stage("init") {
// steps {
// script {
// gv = load "script.groovy"
// }
// }
// }
stage("build jar") {
steps {
script {
buildJar()
}
}
}
stage("build and push image") {
steps {
script {
buildImage(env.IMAGE_NAME)
dockerLogin()
dockerPush(env.IMAGE_NAME)
}
}
}
stage("deploy") {
steps {
script {
echo "Deploying the app..."
echo "deploying the application to EC2"
def dockerCmd = "docker run -p 8080:8080 -d ${IMAGE_NAME}"
sshagent(['ec2-server-key']) {
sh "ssh -o StrictHostKeyChecking=no ec2-user@44.199.228.221 ${dockerCmd}"
}
}
}
}
}
}
Deploy To EC2 Server From Jenkins Pipeline 2
- A small application with multiple services would start with Docker Compose
docker-compose -f docker-compose.yaml up
- Steps
    - Install docker-compose on EC2
    - Create docker-compose.yaml file
    - Adjust Jenkinsfile to execute docker-compose command on EC2 instance
Install Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
- Next, set the correct permissions so that the docker-compose command is executable:
sudo chmod +x /usr/local/bin/docker-compose
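To confirm the install, the version can be checked (standard flag):
docker-compose --version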
Create Docker Compose File
docker-compose.yaml
version: '3.8'
services:
java-maven-app:
image: alfredasare/devops-demo-app:java-maven-1.0
ports:
- "8080:8080"
postgres:
image: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=my-pwd
Make Jenkinsfile Adjustments
- We need the docker-compose file on the EC2 instance
Jenkinsfile
stage("deploy") {
steps {
script {
echo "Deploying the app..."
echo "deploying the application to EC2"
def dockerComposeCmd = "docker-compose -f docker-compose.yaml up --detach"
sshagent(['ec2-server-key']) {
sh "scp docker-compose.yaml ec2-user@44.199.228.221:/home/ec2-user"
sh "ssh -o StrictHostKeyChecking=no ec2-user@44.199.228.221 ${dockerComposeCmd}"
}
}
}
}
Improvement: Extract to Shell Script
server-cmds.sh
#!/usr/bin/env bash
docker-compose -f docker-compose.yaml up --detach
echo "success"
export TEST=testvalue
Jenkinsfile
stage("deploy") {
steps {
script {
echo "Deploying the app..."
echo "deploying the application to EC2"
def shellCmd = "bash ./server-cmds.sh"
sshagent(['ec2-server-key']) {
sh "scp server-cmds.sh ec2-user@44.199.228.221:/home/ec2-user"
sh "scp docker-compose.yaml ec2-user@44.199.228.221:/home/ec2-user"
sh "ssh -o StrictHostKeyChecking=no ec2-user@44.199.228.221 ${shellCmd}"
}
}
}
}
Improvement: Replace Docker Image with Newly Built Version
- Image in the docker-compose.yaml file is hardcoded
- We have to replace it with the newly built image
docker-compose.yaml
version: '3.8'
services:
java-maven-app:
image: ${IMAGE}
ports:
- "8080:8080"
postgres:
image: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=my-pwd
server-cmds.sh
#!/usr/bin/env bash
export IMAGE=$1
docker-compose -f docker-compose.yaml up --detach
echo "success"
Jenkinsfile
environment {
IMAGE_NAME = 'alfredasare/devops-demo-app:java-maven-2.0'
}
stage("deploy") {
steps {
script {
echo "Deploying the app..."
echo "deploying the application to EC2"
def shellCmd = "bash ./server-cmds.sh ${IMAGE_NAME}"
sshagent(['ec2-server-key']) {
sh "scp server-cmds.sh ec2-user@44.199.228.221:/home/ec2-user"
sh "scp docker-compose.yaml ec2-user@44.199.228.221:/home/ec2-user"
sh "ssh -o StrictHostKeyChecking=no ec2-user@44.199.228.221 ${shellCmd}"
}
}
}
}
More Minor Optimizations
- The repo stays the same
- We'll be dynamically changing tags based on versioning logic and recommitting
docker-compose.yaml
version: '3.8'
services:
java-maven-app:
image: "alfredasare/devops-demo-app:${TAG}"
ports:
- "8080:8080"
postgres:
image: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=my-pwd
Jenkinsfile
stage("deploy") {
steps {
script {
echo "Deploying the app..."
echo "deploying the application to EC2"
def shellCmd = "bash ./server-cmds.sh ${IMAGE_NAME}"
def ec2Instance = "ec2-user@44.199.228.221"
sshagent(['ec2-server-key']) {
sh "scp server-cmds.sh ${ec2Instance}:/home/ec2-user"
sh "scp docker-compose.yaml ${ec2Instance}:/home/ec2-user"
sh "ssh -o StrictHostKeyChecking=no ${ec2Instance} ${shellCmd}"
}
}
}
}
Deploy to EC2 server from Jenkins Pipeline 3
- Set image name dynamically
- Process
- Increment version
- Build App with Maven
- Build and push image
- Deploy to EC2
- Commit version bump
- In maven app shared lib
Jenkinsfile
#!/usr/bin/env groovy
// @Library('jenkins-shared-library')_
library identifier: 'jenkins-shared-library@main', retriever: modernSCM(
[
$class: 'GitSCMSource',
remote: 'git@github.com:alfredasare/jenkins-shared-library.git',
credentialsId: 'github-credentials'
]
)
def gv
pipeline {
agent any
tools {
maven 'maven-3.9'
}
stages {
// stage("init") {
// steps {
// script {
// gv = load "script.groovy"
// }
// }
// }
stage("increment version") {
steps {
script {
echo "incrementing app version..."
sh "mvn build-helper:parse-version versions:set -DnewVersion=\\\${parsedVersion.majorVersion}.\\\${parsedVersion.minorVersion}.\\\${parsedVersion.nextIncrementalVersion} versions:commit"
def matcher = readFile('pom.xml') =~ '<version>(.+)</version>' // parse the bumped version from pom.xml
def version = matcher[0][1]
env.IMAGE_NAME = "alfredasare/devops-demo-app:java-maven-$version-$BUILD_NUMBER" // unique tag per build
}
}
}
stage("build jar") {
steps {
script {
buildJar()
}
}
}
stage("build and push image") {
steps {
script {
buildImage(env.IMAGE_NAME)
dockerLogin()
dockerPush(env.IMAGE_NAME)
}
}
}
stage("deploy") {
steps {
script {
echo "Deploying the app..."
echo "deploying the application to EC2"
def shellCmd = "bash ./server-cmds.sh ${IMAGE_NAME}"
def ec2Instance = "ec2-user@44.199.228.221"
sshagent(['ec2-server-key']) {
sh "scp server-cmds.sh ${ec2Instance}:/home/ec2-user"
sh "scp docker-compose.yaml ${ec2Instance}:/home/ec2-user"
sh "ssh -o StrictHostKeyChecking=no ${ec2Instance} ${shellCmd}"
}
}
}
}
stage("commit version update") {
steps {
script {
sshagent(credentials: ['github-credentials']) {
sh "git remote set-url origin git@github.com:alfredasare/java-maven-app.git"
sh "git add ."
sh 'git commit -m "ci: version bump"'
sh "git push origin HEAD:${BRANCH_NAME}"
}
}
}
}
}
}
AWS CLI
- Includes commands for every AWS service and resource
- We'll
- Create a new EC2 instance
- Create a new SG
- Create new SSH key pair
Install and configure CLI
brew update
brew install awscli
aws --version
Connect AWS CLI with AWS user
aws configure
- Location: ~/.aws
    - Has config and credentials files
Command Structure
aws <command> <subcommand> [options and parameters]
- aws - the base call to the aws program
- command - the AWS service
- subcommand - specifies which operation to perform
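For example, with s3 or ec2 as the command (the service) and an operation as the subcommand:
aws s3 ls                  # list S3 buckets
aws ec2 describe-instances # list EC2 instances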
EC2
- Create SG
- Create key-pair
- Create instance
Create a Security Group
- Need a VPC id
# List VPCs
aws ec2 describe-vpcs
# List SG
aws ec2 describe-security-groups
# Create SG
aws ec2 create-security-group --group-name my-sg --description "My Security Group" --vpc-id vpc-00dc1341f0c31b0b1
Response
{
"GroupId": "sg-033edff3d7b8b1283"
}
Get SG info
aws ec2 describe-security-groups --group-ids sg-033edff3d7b8b1283
Configure Firewall Rule to Receive Port 22 Traffic
aws ec2 authorize-security-group-ingress \
--group-id sg-033edff3d7b8b1283 \
--protocol tcp \
--port 22 \
--cidr 197.251.183.152/32
Create Key-Pair
aws ec2 create-key-pair \
    --key-name devops-ec22 \
    --query 'KeyMaterial' \
    --output text > devops-ec22.pem
Create EC2 Instance
# Subnet Id
aws ec2 describe-subnets
aws ec2 run-instances \
    --count 1 \
    --instance-type t2.micro \
    --key-name devops-ec22 \
    --security-group-ids sg-033edff3d7b8b1283 \
    --subnet-id subnet-068bd85aebdf98574 \
    --image-id ami-005f9685cb30f234b
aws ec2 describe-instances
- Change permission of pem file for SSH
chmod 400 ~/.ssh/devops-ec22.pem
ssh -i ~/.ssh/devops-ec22.pem ec2-user@18.212.163.247
Tutorial commands
## List all available security-group ids
aws ec2 describe-security-groups
## create new security group
aws ec2 describe-vpcs
aws ec2 create-security-group --group-name my-sg --description "My security group" --vpc-id vpc-1a2b3c4d
## this will give output of created my-sg with its id, so we can do:
aws ec2 describe-security-groups --group-ids sg-903004f8
## add firewall rule to the group for port 22
aws ec2 authorize-security-group-ingress --group-id sg-903004f8 --protocol tcp --port 22 --cidr 203.0.113.0/24
aws ec2 describe-security-groups --group-ids sg-903004f8
# Use an existing key pair or, if you want, create and use a new key pair. 'KeyMaterial' gives us an unencrypted PEM encoded RSA private key.
aws ec2 create-key-pair --key-name MyKeyPair --query 'KeyMaterial' --output text > MyKeyPair.pem
# launch ec2 instance in the specified subnet of a VPC
aws ec2 describe-subnets
aws ec2 describe-instances   # will give us the ami image id; we will use the same one
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --count 1 \
    --instance-type t2.micro \
    --key-name MyKeyPair \
    --security-group-ids sg-903004f8 \
    --subnet-id subnet-6e7f829e
# ssh into the ec2 instance with the new key pem after creating it - public IP will be returned as json, so query it
aws ec2 describe-instances --instance-ids {instance-id}
chmod 400 MyKeyPair.pem
ssh -i MyKeyPair.pem ec2-user@public-ip
# check UI for all the components that got created
# describe-instances - with filter and query
# --filter is for picking some instances; --query is for picking certain info about those instances
Filter and Query: describe command option
- Can add filters for describe commands
- Filter picks components
- Query picks specific attributes of component
# Show instances whose type is t2.micro
aws ec2 describe-instances --filters "Name=instance-type,Values=t2.micro" --query "Reservations[].Instances[].InstanceId"
# Instance whose tag type is "Web Server with Docker"
aws ec2 describe-instances --filters "Name=tag:Type,Values=Web Server with Docker"
# Multiple values
aws ec2 describe-instances --filters "Name=image-id,Values=ami-x0123456,ami-y0123456,ami-z0123456"
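Filters and queries combine, and --output controls how results are rendered (all standard CLI options; the JMESPath expression is an illustrative example):
aws ec2 describe-instances \
    --filters "Name=instance-type,Values=t2.micro" \
    --query "Reservations[].Instances[].{ID:InstanceId,State:State.Name}" \
    --output table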
Using IAM command
Create User, Group and Assign Permissions
- Arn - Amazon Resource Name
- To uniquely identify a resource in AWS
- We need a policy ARN before we can attach it to a user or group
- Policy - Group of permissions
- After attaching policies, create credentials for new User
- User needs to have permission to change password
- We need a JSON file to create our own policy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "iam:GetAccountPasswordPolicy",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "iam:ChangePassword",
"Resource": "arn:aws:iam::227984707254:user/${aws:username}"
}
]
}
Create Access Keys for New User
- We've added password access
- Now we want to add programmatic access
aws iam create-access-key --user-name MyUser
Response
{
"AccessKey": {
"UserName": "abramov",
"AccessKeyId": "AKIATKFHSR23BRUB5RKV",
"Status": "Active",
"SecretAccessKey": "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",
"CreateDate": "2023-03-12T19:58:23+00:00"
}
}
Useful Commands
# same way as ec2 had a bunch of commands for components relevant for ec2 instances, iam does too
aws iam create-group --group-name MyIamGroup
aws iam create-user --user-name MyUser
aws iam add-user-to-group --user-name MyUser --group-name MyIamGroup
# verify that my-group contains the my-user
aws iam get-group --group-name MyIamGroup
# attach policy to group
## this is the command so we need the policy-ARN - how can we get that?
aws iam attach-user-policy --user-name MyUser --policy-arn {policy-arn}     # attach policy to user directly
aws iam attach-group-policy --group-name MyGroup --policy-arn {policy-arn}  # attach policy to group
## let's go and check on UI AmazonEC2FullAccess policy ARN
## OR if you know the name of the policy 'AmazonEC2FullAccess', list them
aws iam list-policies --query 'Policies[?PolicyName==`AmazonEC2FullAccess`].{ARN:Arn}' --output text
aws iam attach-group-policy --group-name MyGroup --policy-arn {policy-arn}
# validate policy attached to group or user
aws iam list-attached-group-policies --group-name MyGroup
aws iam list-attached-user-policies --user-name MyUser
# Now that user needs access to the command line and UI, but we didn't give it any credentials. So let's do that as well!
## UI access
aws iam create-login-profile --user-name MyUser --password MyUser1Login8P@ssword --password-reset-required
# -> user will have to update password on UI or programmatically with command:
aws iam update-login-profile --user-name MyUser --password My!User1ADifferentP@ssword
# Create test policy
aws iam create-policy --policy-name bla --policy-document file://changePwdPolicy.json
## cli access
aws iam create-access-key --user-name MyUser
# -> you will see the access keys
Change AWS User for executing commands
- Use aws configure to change the default User
For only credentials
$ aws configure set aws_access_key_id default_access_key
$ aws configure set aws_secret_access_key default_secret_key
To change credentials temporarily, set environment variables
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
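A quick way to check which identity the CLI is now acting as (a standard STS command, not from the original notes):
aws sts get-caller-identity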
Useful Commands Continued
## Now let's ssh into the EC2 instance with this user
# 'aws configure' with new user creds
$ aws configure set aws_access_key_id default_access_key
$ aws configure set aws_secret_access_key default_secret_key
export AWS_ACCESS_KEY_ID=AKIAIO7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
## Now let's login with this user on UI and see what got created!
### NOTES at the end
# Revert to admin user credentials
# Delete all the stuff
AWS Terraform Preview
- Had many commands to execute
- No overview
- We don't know the resources we created and which ones to clean up
- There are automation tools that make working with AWS much more effective
- These automation tools are called Infrastructure as Code (IaC) tools
- Provision infrastructure
- Create and manage resources
- e.g. Terraform
Container services on AWS (Preview)
- We used AWS to run our dockerized app
- Containers have become popular due to microservices
- Creating EC2 instances yourself and installing docker on each instance to run and manage your containers is daunting
- Due to this, AWS has services for containers
- e.g. EKS