CloudFront Blue/Green Deployment Using GitLab Where Origin Is ALB+EKS Service

Vinayak Pandey
10 min read · Feb 7, 2024

Prerequisites:

1- OIDC connectivity between GitLab and AWS is required. You may refer to https://docs.gitlab.com/ee/ci/cloud_services/aws/ to set this up (a rough sketch of the role's trust policy follows after these prerequisites).

2- Make sure your GitLab IAM role has the necessary permissions. For this demo, I gave it Administrator access, but in real-world scenarios access should be restricted.
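For reference, the trust policy on the GitLab role usually looks roughly like the sketch below. This is only an illustration: the account ID, group, and project are placeholders, and the exact sub claim format should be taken from the GitLab documentation linked above.

cat > gitlab-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/gitlab.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "gitlab.com:sub": "project_path:<group>/<project>:*"
        }
      }
    }
  ]
}
EOF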

Step 1: Launch an EC2 instance with the Amazon Linux 2 AMI. This instance will act as a command center :).

Connect to it and install kubectl, eksctl, helm, Docker, and git using the following commands:

sudo yum install -y git docker
sudo service docker start
sudo curl --silent --location -o /usr/local/bin/kubectl \
https://s3.us-west-2.amazonaws.com/amazon-eks/1.28.5/2024-01-04/bin/linux/amd64/kubectl
sudo chmod +x /usr/local/bin/kubectl

curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv -v /tmp/eksctl /usr/local/bin
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
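Optionally, run a quick sanity check to confirm everything is installed and on the PATH:

kubectl version --client
eksctl version
helm version
docker --version
git --version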

Step 2: Follow https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html#getting-started-install-instructions to upgrade the AWS CLI to v2.
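For reference, on Amazon Linux 2 the upgrade typically boils down to the commands below; the linked guide remains the authoritative source.

sudo yum install -y unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
aws --version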

Step 3: Now create a file named eks-cluster-config.yaml with the following content:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-east-1
  version: '1.28'
iam:
  withOIDC: true
availabilityZones: ['us-east-1a','us-east-1b','us-east-1c']
fargateProfiles:
  - name: defaultfp
    selectors:
      - namespace: default
      - namespace: kube-system
cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

Now execute the following command to create our EKS cluster:

eksctl create cluster -f eks-cluster-config.yaml

It will take around 20 minutes before the cluster is ready.

Step 4: Verify you can connect to your cluster using the following commands:

aws eks update-kubeconfig --region us-east-1 --name my-cluster
kubectl get nodes

Step 5: Since we are using the AWS Load Balancer Controller, follow https://repost.aws/knowledge-center/eks-alb-ingress-controller-fargate to install it. Make sure you specify the VPC created by eksctl.
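The linked guide covers creating the IAM policy and service account; the controller itself is then installed with Helm, roughly like this (the cluster name, region, and VPC ID below must match the cluster eksctl created):

helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=us-east-1 \
  --set vpcId=<vpc-id>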

Also install the Metrics Server using the following command:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.0/components.yaml

Step 6: Create 2 target groups named nginxblue and nginxgreen with target type IP.

Then create 2 internet-facing ALBs: nginxblue, which forwards to the nginxblue target group, and nginxgreen, which forwards to the nginxgreen target group. We'll use the default security group and allow all traffic from any IP.

Also allow all traffic from any IP in the EKS cluster security group.
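If you prefer the AWS CLI over the console, the same resources can be created roughly like this (the VPC ID, subnets, security group, and ARNs below are placeholders for the values from your own account):

aws elbv2 create-target-group --name nginxblue --protocol HTTP --port 80 \
  --vpc-id <vpc-id> --target-type ip
aws elbv2 create-target-group --name nginxgreen --protocol HTTP --port 80 \
  --vpc-id <vpc-id> --target-type ip

aws elbv2 create-load-balancer --name nginxblue --scheme internet-facing \
  --subnets <subnet-1> <subnet-2> <subnet-3> --security-groups <default-sg-id>
aws elbv2 create-listener --load-balancer-arn <nginxblue-alb-arn> \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<nginxblue-target-group-arn>

Repeat the load balancer and listener commands for nginxgreen with its own target group.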

Step 7: Now go back to your EC2 instance and create a Helm chart using the following command:

helm create nginx-chart

In the nginx-chart directory, change the content of Chart.yaml to this:


apiVersion: v2
name: nginx-chart
description: A Helm chart for deploying NGINX
version: 0.1.0

and templates/deployment.yaml to this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "nginx-chart.fullname" . }}
  labels:
    #app: {{ include "nginx-chart.name" . }}
    app: {{ tpl .Release.Name . }}
spec:
  selector:
    matchLabels:
      app: {{ tpl .Release.Name . }}
      #app: {{ include "nginx-chart.name" . }}
  template:
    metadata:
      labels:
        app: {{ tpl .Release.Name . }}
        #app: {{ include "nginx-chart.name" . }}
    spec:
      containers:
        - name: nginx
          image: {{ .Values.image }}
          ports:
            - containerPort: 80

and templates/service.yaml to this:

apiVersion: v1
kind: Service
metadata:
  name: {{ include "nginx-chart.fullname" . }}
  labels:
    #app: {{ include "nginx-chart.name" . }}
    app: {{ tpl .Release.Name . }}
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    #app: {{ include "nginx-chart.name" . }}
    app: {{ tpl .Release.Name . }}

and templates/target-group-binding.yaml to this:

apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: {{ include "nginx-chart.fullname" . }}
spec:
  serviceRef:
    name: {{ include "nginx-chart.fullname" . }}
    port: 80
  targetGroupARN: {{ .Values.targetGroupBinding.arn }}

and templates/hpa.yaml to this:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "nginx-chart.fullname" . }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "nginx-chart.fullname" . }}
  minReplicas: 2
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50

Remove the other files in the templates directory, as well as the default values.yaml:

rm -rf templates/ingress.yaml
rm -rf templates/NOTES.txt
rm -rf templates/tests/
rm -rf templates/serviceaccount.yaml
rm -rf values.yaml
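Before installing anything, you can render the chart locally to confirm the templates are valid. Run this from the directory that contains nginx-chart; the ARN and image below are dummy values used only for rendering:

helm lint ./nginx-chart --set targetGroupBinding.arn=dummy --set image=nginx:latest
helm template my-nginx-blue ./nginx-chart \
  --set targetGroupBinding.arn=dummy --set image=nginx:latest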

Step 8: Move out of the nginx-chart directory and create an index.html file with the following content:

<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Docker Nginx</title>
  </head>
  <body>
    <h2>Hello from Nginx container v1</h2>
  </body>
</html>

and a Dockerfile with the following content:

FROM nginx:latest
COPY ./index.html /usr/share/nginx/html/index.html

Build and push the image to Docker Hub.

sudo docker build -t vinycoolguy/mypyapp:v1 . 
sudo docker login
sudo docker push vinycoolguy/mypyapp:v1
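You can also smoke test the image locally before wiring it into the cluster:

sudo docker run -d --name nginx-test -p 8080:80 vinycoolguy/mypyapp:v1
curl http://localhost:8080
sudo docker rm -f nginx-test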

Step 9: Install 2 Helm releases named my-nginx-blue and my-nginx-green. Specify the target group ARNs and the image accordingly.

helm install my-nginx-blue ./nginx-chart --set targetGroupBinding.arn=<nginxblue target group arn> --set image=<image>
helm install my-nginx-green ./nginx-chart --set targetGroupBinding.arn=<nginxgreen target group arn> --set image=<image>

Step 10: Verify that the Helm releases were installed properly, the pods and services were created, each target group has targets registered, and both ALBs return a proper response.
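A few commands that cover these checks (the target group ARNs and ALB DNS names are the ones from Step 6):

helm list
kubectl get pods,svc,hpa,targetgroupbindings
aws elbv2 describe-target-health --target-group-arn <nginxblue target group arn>
aws elbv2 describe-target-health --target-group-arn <nginxgreen target group arn>
curl http://<nginxblue-alb-dns-name>
curl http://<nginxgreen-alb-dns-name>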

Step 11: Create a CloudFront Blue/Green environment which uses the ALBs as origins. You may refer to https://aws.amazon.com/blogs/networking-and-content-delivery/use-cloudfront-continuous-deployment-to-safely-validate-cdn-changes/ for more details about CloudFront Blue/Green deployments.

Check whether you can access the primary as well as the staging distribution.
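If you configured the continuous deployment policy with header-based routing, the staging distribution is reached through the primary domain by sending the configured header. A quick sketch, assuming the header name used in the AWS blog post (aws-cf-cd-staging):

curl https://<primary-distribution-domain>/
curl -H "aws-cf-cd-staging: true" https://<primary-distribution-domain>/

The first request is served by the primary distribution, the second is routed to the staging distribution.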

Step 12: Now we need to grant the GitLab role access to our EKS cluster. For that, go to EKS and select your cluster -> Access -> Create access entry. Provide your GitLab role ARN and attach the AmazonEKSClusterAdminPolicy access policy.
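The same access entry can also be created from the CLI; a sketch, with your GitLab role ARN as a placeholder:

aws eks create-access-entry --cluster-name my-cluster \
  --principal-arn <GITLAB_ROLE_ARN>
aws eks associate-access-policy --cluster-name my-cluster \
  --principal-arn <GITLAB_ROLE_ARN> \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster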

You can create a simple .gitlab-ci.yml file to test EKS connectivity. Change <GITLAB_ROLE_ARN> to the ARN of the GitLab role.

---
stages:
  - test

.setup-script:
  before_script:
    - mkdir -p ~/.aws
    - echo "${MY_OIDC_TOKEN}" > /tmp/web_identity_token
    - echo -e "[default]\nrole_arn=${ROLE_ARN}\nweb_identity_token_file=/tmp/web_identity_token" > ~/.aws/config

test:
  stage: test
  variables:
    ROLE_ARN: <GITLAB_ROLE_ARN>
    EKS_CLUSTER: "my-cluster"
    REGION: "us-east-1"
  image:
    name: matshareyourscript/aws-helm-kubectl
    entrypoint:
      - ""
  id_tokens:
    MY_OIDC_TOKEN:
      aud: https://gitlab.com
  before_script: !reference [.setup-script, before_script]
  script:
    - aws eks update-kubeconfig --name $EKS_CLUSTER --region $REGION
    - kubectl get nodes

Once the pipeline has executed, you can see that connectivity is working fine.

Step 13: Now push index.html, the Dockerfile, and the nginx-chart directory to your GitLab repo. Change .gitlab-ci.yml to this:

---
stages:
  - image_build_push
  - staging_cloudfront_deployment

.setup-script:
  before_script:
    - mkdir -p ~/.aws
    - echo "${MY_OIDC_TOKEN}" > /tmp/web_identity_token
    - echo -e "[default]\nrole_arn=${ROLE_ARN}\nweb_identity_token_file=/tmp/web_identity_token" > ~/.aws/config

image_build_push:
  image: docker:latest
  stage: image_build_push
  variables:
    CI_REGISTRY_PASSWORD: <DOCKERHUB_PASSWORD>
    CI_REGISTRY_USER: <DOCKERHUB_USER>
    CI_REPO: <DOCKER_REPONAME>
  services:
    - docker:dind
  before_script:
    - echo "$CI_REGISTRY_USER $CI_REGISTRY_PASSWORD"
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD"
  script:
    - docker build -t $CI_REGISTRY_USER/$CI_REPO:$CI_COMMIT_SHA . --no-cache
    - docker push $CI_REGISTRY_USER/$CI_REPO:$CI_COMMIT_SHA
  only:
    - main

staging_cloudfront_deployment:
  image: matshareyourscript/aws-helm-kubectl
  stage: staging_cloudfront_deployment
  needs:
    - image_build_push
  variables:
    ROLE_ARN: <GITLAB_ROLE>
    EKS_CLUSTER: "my-cluster"
    CI_REGISTRY_USER: <DOCKERHUB_USER>
    CI_REPO: <DOCKER_REPONAME>
    ORIGIN_NAME: default
    STAGING_CLOUDFRONT_ID: <STAGING_CLOUDFRONT_ID>
    PRIMARY_CLOUDFRONT_ID: <PRIMARY_CLOUDFRONT_ID>
    GREEN_TARGET_GROUP: <GREEN_TARGET_GROUP>
    BLUE_TARGET_GROUP: <BLUE_TARGET_GROUP>
    GREEN_HELM_RELEASE_NAME: my-nginx-green
    BLUE_HELM_RELEASE_NAME: my-nginx-blue
  id_tokens:
    MY_OIDC_TOKEN:
      aud: https://gitlab.com
  before_script: !reference [.setup-script, before_script]
  script:
    - export AWS_DEFAULT_REGION="us-east-1"
    - STAGING_ALB_NAME=$(aws cloudfront get-distribution --id $STAGING_CLOUDFRONT_ID --output text --query 'Distribution.DistributionConfig.Origins.Items[?Id==`'${ORIGIN_NAME}'`].DomainName' | cut -d "." -f1 | cut -d "-" -f1)
    - PRIMARY_ALB_NAME=$(aws cloudfront get-distribution --id $PRIMARY_CLOUDFRONT_ID --output text --query 'Distribution.DistributionConfig.Origins.Items[?Id==`'${ORIGIN_NAME}'`].DomainName' | cut -d "." -f1 | cut -d "-" -f1)
    - |
      if [ "$STAGING_ALB_NAME" == "$PRIMARY_ALB_NAME" ]; then
        echo "Staging and Primary CloudFront are using the same ALB. Aborting deployment"
        exit 1
      fi
    - echo "Staging CloudFront origin is using the following ALB - $STAGING_ALB_NAME"
    - STAGING_ALB_ARN=$(aws elbv2 describe-load-balancers --names $STAGING_ALB_NAME --output text --query 'LoadBalancers[*].LoadBalancerArn')
    - STAGING_TARGET_GROUP_ARN=$(aws elbv2 describe-listeners --load-balancer-arn $STAGING_ALB_ARN --output text --query 'Listeners[*].DefaultActions[*].TargetGroupArn')
    - echo "$STAGING_ALB_NAME is using the following target group - $STAGING_TARGET_GROUP_ARN"
    - |
      if [ "$STAGING_TARGET_GROUP_ARN" == "$GREEN_TARGET_GROUP" ]; then
        HELM_RELEASE=$GREEN_HELM_RELEASE_NAME
      elif [ "$STAGING_TARGET_GROUP_ARN" == "$BLUE_TARGET_GROUP" ]; then
        HELM_RELEASE=$BLUE_HELM_RELEASE_NAME
      else
        echo "$STAGING_ALB_NAME is using a different target group. Aborting deployment"
        exit 1
      fi
    - echo "$HELM_RELEASE will be deployed"
    - aws eks update-kubeconfig --name $EKS_CLUSTER
    - helm upgrade $HELM_RELEASE ./nginx-chart --set image=$CI_REGISTRY_USER/$CI_REPO:$CI_COMMIT_SHA --reuse-values
    - echo "Invalidate CloudFront"
    - aws cloudfront create-invalidation --distribution-id $STAGING_CLOUDFRONT_ID --paths "/*"
    - |
      distribution_status=""
      max_retries=30
      while [[ $distribution_status != "Deployed" && $max_retries -gt 0 ]]; do
        sleep 15
        distribution_status=$(aws cloudfront get-distribution --id $STAGING_CLOUDFRONT_ID --output text --query 'Distribution.Status')
        echo "Retries left $max_retries | Distribution status $distribution_status"
        max_retries=$((max_retries-1))
      done

      if [[ $distribution_status != "Deployed" ]]; then
        echo "Distribution failed to complete within maximum retries with status $distribution_status"
        echo "Please check the CloudFront distribution on the AWS console directly for status updates"
        exit 1
      fi
  environment:
    name: staging

and edit the content of index.html. Once the pipeline has executed, access the staging CloudFront distribution and you'll see the updated content.

Step 14: Next we'll modify the pipeline to add stages that promote the staging distribution to primary. The final pipeline will look like this:

---
stages:
  - image_build_push
  - staging_cloudfront_deployment
  - enable_staging_cloudfront_distribution
  - promote_staging_cloudfront

.setup-script:
  before_script:
    - mkdir -p ~/.aws
    - echo "${MY_OIDC_TOKEN}" > /tmp/web_identity_token
    - echo -e "[default]\nrole_arn=${ROLE_ARN}\nweb_identity_token_file=/tmp/web_identity_token" > ~/.aws/config

image_build_push:
  image: docker:latest
  stage: image_build_push
  variables:
    CI_REGISTRY_PASSWORD: <DOCKERHUB_PASSWORD>
    CI_REGISTRY_USER: <DOCKERHUB_USER>
    CI_REPO: <DOCKER_REPONAME>
  services:
    - docker:dind
  before_script:
    - echo "$CI_REGISTRY_USER $CI_REGISTRY_PASSWORD"
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD"
  script:
    - docker build -t $CI_REGISTRY_USER/$CI_REPO:$CI_COMMIT_SHA . --no-cache
    - docker push $CI_REGISTRY_USER/$CI_REPO:$CI_COMMIT_SHA
  only:
    - main

staging_cloudfront_deployment:
  image: matshareyourscript/aws-helm-kubectl
  stage: staging_cloudfront_deployment
  needs:
    - image_build_push
  variables:
    ROLE_ARN: <GITLAB_ROLE>
    EKS_CLUSTER: "my-cluster"
    CI_REGISTRY_USER: <DOCKERHUB_USER>
    CI_REPO: <DOCKER_REPONAME>
    ORIGIN_NAME: default
    STAGING_CLOUDFRONT_ID: <STAGING_CLOUDFRONT_ID>
    PRIMARY_CLOUDFRONT_ID: <PRIMARY_CLOUDFRONT_ID>
    GREEN_TARGET_GROUP: <GREEN_TARGET_GROUP>
    BLUE_TARGET_GROUP: <BLUE_TARGET_GROUP>
    GREEN_HELM_RELEASE_NAME: my-nginx-green
    BLUE_HELM_RELEASE_NAME: my-nginx-blue
  id_tokens:
    MY_OIDC_TOKEN:
      aud: https://gitlab.com
  before_script: !reference [.setup-script, before_script]
  script:
    - export AWS_DEFAULT_REGION="us-east-1"
    - STAGING_ALB_NAME=$(aws cloudfront get-distribution --id $STAGING_CLOUDFRONT_ID --output text --query 'Distribution.DistributionConfig.Origins.Items[?Id==`'${ORIGIN_NAME}'`].DomainName' | cut -d "." -f1 | cut -d "-" -f1)
    - PRIMARY_ALB_NAME=$(aws cloudfront get-distribution --id $PRIMARY_CLOUDFRONT_ID --output text --query 'Distribution.DistributionConfig.Origins.Items[?Id==`'${ORIGIN_NAME}'`].DomainName' | cut -d "." -f1 | cut -d "-" -f1)
    - |
      if [ "$STAGING_ALB_NAME" == "$PRIMARY_ALB_NAME" ]; then
        echo "Staging and Primary CloudFront are using the same ALB. Aborting deployment"
        exit 1
      fi
    - echo "Staging CloudFront origin is using the following ALB - $STAGING_ALB_NAME"
    - STAGING_ALB_ARN=$(aws elbv2 describe-load-balancers --names $STAGING_ALB_NAME --output text --query 'LoadBalancers[*].LoadBalancerArn')
    - STAGING_TARGET_GROUP_ARN=$(aws elbv2 describe-listeners --load-balancer-arn $STAGING_ALB_ARN --output text --query 'Listeners[*].DefaultActions[*].TargetGroupArn')
    - echo "$STAGING_ALB_NAME is using the following target group - $STAGING_TARGET_GROUP_ARN"
    - |
      if [ "$STAGING_TARGET_GROUP_ARN" == "$GREEN_TARGET_GROUP" ]; then
        HELM_RELEASE=$GREEN_HELM_RELEASE_NAME
      elif [ "$STAGING_TARGET_GROUP_ARN" == "$BLUE_TARGET_GROUP" ]; then
        HELM_RELEASE=$BLUE_HELM_RELEASE_NAME
      else
        echo "$STAGING_ALB_NAME is using a different target group. Aborting deployment"
        exit 1
      fi
    - echo "$HELM_RELEASE will be deployed"
    - aws eks update-kubeconfig --name $EKS_CLUSTER
    - helm upgrade $HELM_RELEASE ./nginx-chart --set image=$CI_REGISTRY_USER/$CI_REPO:$CI_COMMIT_SHA --reuse-values
    - echo "Invalidate CloudFront"
    - aws cloudfront create-invalidation --distribution-id $STAGING_CLOUDFRONT_ID --paths "/*"
    - |
      distribution_status=""
      max_retries=30
      while [[ $distribution_status != "Deployed" && $max_retries -gt 0 ]]; do
        sleep 15
        distribution_status=$(aws cloudfront get-distribution --id $STAGING_CLOUDFRONT_ID --output text --query 'Distribution.Status')
        echo "Retries left $max_retries | Distribution status $distribution_status"
        max_retries=$((max_retries-1))
      done

      if [[ $distribution_status != "Deployed" ]]; then
        echo "Distribution failed to complete within maximum retries with status $distribution_status"
        echo "Please check the CloudFront distribution on the AWS console directly for status updates"
        exit 1
      fi
  environment:
    name: staging

enable_staging_cloudfront_distribution:
  stage: enable_staging_cloudfront_distribution
  needs:
    - staging_cloudfront_deployment
  variables:
    ROLE_ARN: <GITLAB_ROLE>
    PRIMARY_CLOUDFRONT_ID: <PRIMARY_CLOUDFRONT_ID>
  image:
    name: amazon/aws-cli:latest
    entrypoint:
      - ""
  id_tokens:
    MY_OIDC_TOKEN:
      aud: https://gitlab.com
  before_script: !reference [.setup-script, before_script]
  script:
    - export AWS_DEFAULT_REGION="us-east-1"
    - DEPLOYMENT_POLICY_ID=`aws cloudfront get-distribution-config --id $PRIMARY_CLOUDFRONT_ID --output text --query 'DistributionConfig.ContinuousDeploymentPolicyId'`
    - aws cloudfront get-continuous-deployment-policy-config --id $DEPLOYMENT_POLICY_ID > POLICY_CONFIG.txt
    - sed -i "s/false/true/" POLICY_CONFIG.txt
    - sed -i '/ETag/d' POLICY_CONFIG.txt
    - sed -i '/ContinuousDeploymentPolicyConfig/d' POLICY_CONFIG.txt
    - sed -i '$ d' POLICY_CONFIG.txt
    - ETAG=`aws cloudfront get-continuous-deployment-policy-config --id $DEPLOYMENT_POLICY_ID --query 'ETag' --output text`
    - aws cloudfront update-continuous-deployment-policy --continuous-deployment-policy-config "file://POLICY_CONFIG.txt" --id $DEPLOYMENT_POLICY_ID --if-match "$ETAG"

promote_staging_cloudfront:
  stage: promote_staging_cloudfront
  when: manual
  needs:
    - enable_staging_cloudfront_distribution
  variables:
    ROLE_ARN: <GITLAB_ROLE>
    PRIMARY_CLOUDFRONT_ID: <PRIMARY_CLOUDFRONT_ID>
    STAGING_CLOUDFRONT_ID: <STAGING_CLOUDFRONT_ID>
    ORIGIN_NAME: default
  image:
    name: amazon/aws-cli:latest
    entrypoint:
      - ""
  id_tokens:
    MY_OIDC_TOKEN:
      aud: https://gitlab.com
  before_script: !reference [.setup-script, before_script]
  script:
    - export AWS_DEFAULT_REGION="us-east-1"
    - STAGING_ALB_NAME=$(aws cloudfront get-distribution --id $STAGING_CLOUDFRONT_ID --output text --query 'Distribution.DistributionConfig.Origins.Items[?Id==`'${ORIGIN_NAME}'`].DomainName')
    - PRIMARY_ALB_NAME=$(aws cloudfront get-distribution --id $PRIMARY_CLOUDFRONT_ID --output text --query 'Distribution.DistributionConfig.Origins.Items[?Id==`'${ORIGIN_NAME}'`].DomainName')
    - echo ""
    - |
      if [ "$STAGING_ALB_NAME" == "$PRIMARY_ALB_NAME" ]; then
        echo "Staging and Primary CloudFront are using the same ALB. Aborting promotion"
        exit 1
      fi
    - PRIMARY_ETAG=`aws cloudfront get-distribution-config --id $PRIMARY_CLOUDFRONT_ID --query 'ETag' --output text`
    - STAGING_ETAG=`aws cloudfront get-distribution-config --id $STAGING_CLOUDFRONT_ID --query 'ETag' --output text`
    - echo $PRIMARY_ETAG $STAGING_ETAG
    - echo "Promoting Staging CloudFront distribution"
    - aws cloudfront update-distribution-with-staging-config --id $PRIMARY_CLOUDFRONT_ID --staging-distribution-id $STAGING_CLOUDFRONT_ID --if-match "$PRIMARY_ETAG,$STAGING_ETAG"
    - |
      distribution_status=""
      max_retries=30
      while [[ $distribution_status != "Deployed" && $max_retries -gt 0 ]]; do
        sleep 15
        distribution_status=$(aws cloudfront get-distribution --id $PRIMARY_CLOUDFRONT_ID --output text --query 'Distribution.Status')
        echo "Retries left $max_retries | Distribution status $distribution_status"
        max_retries=$((max_retries-1))
      done

      if [[ $distribution_status != "Deployed" ]]; then
        echo "Distribution failed to complete within maximum retries with status $distribution_status"
        echo "Please check the CloudFront distribution on the AWS console directly for status updates"
        exit 1
      fi
    - aws cloudfront get-distribution-config --id $STAGING_CLOUDFRONT_ID --query 'DistributionConfig' > STAGING_CONFIG.txt
    - sed -i "s/$STAGING_ALB_NAME/$PRIMARY_ALB_NAME/g" STAGING_CONFIG.txt
    - STAGING_ETAG=`aws cloudfront get-distribution-config --id $STAGING_CLOUDFRONT_ID --query 'ETag' --output text`
    - aws cloudfront update-distribution --id $STAGING_CLOUDFRONT_ID --distribution-config "file://STAGING_CONFIG.txt" --if-match $STAGING_ETAG
    - aws cloudfront create-invalidation --distribution-id $STAGING_CLOUDFRONT_ID --paths "/*"
    - |
      distribution_status=""
      max_retries=30
      while [[ $distribution_status != "Deployed" && $max_retries -gt 0 ]]; do
        sleep 15
        distribution_status=$(aws cloudfront get-distribution --id $STAGING_CLOUDFRONT_ID --output text --query 'Distribution.Status')
        echo "Retries left $max_retries | Distribution status $distribution_status"
        max_retries=$((max_retries-1))
      done

      if [[ $distribution_status != "Deployed" ]]; then
        echo "Distribution failed to complete within maximum retries with status $distribution_status"
        echo "Please check the CloudFront distribution on the AWS console directly for status updates"
        exit 1
      fi

Change the content of index.html and trigger a deployment. Once the deployment is complete, check the primary CloudFront URL and you'll see the updated content, while the staging CloudFront distribution will serve the outdated content (earlier served by the primary distribution).
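A quick way to confirm the promotion, assuming the same header-based staging routing used earlier (the header name is the one configured in your continuous deployment policy):

curl https://<primary-distribution-domain>/
curl -H "aws-cf-cd-staging: true" https://<primary-distribution-domain>/

The first request should return the updated content; the second now hits the staging distribution, which fronts the previously live ALB and therefore returns the older content.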
