Complete Guide to Setting Up Jenkins Pipeline with ECR, Kubernetes, and Ingress Controller
In today’s dynamic software development environment, continuous integration and continuous deployment (CI/CD) have become essential practices. This project demonstrates a comprehensive CI/CD pipeline using Jenkins, Docker, Kubernetes, and Ingress Controller on AWS. By leveraging these powerful tools, we ensure that every code change is automatically built, tested, and deployed, minimizing manual intervention and increasing deployment efficiency.
This tutorial will guide you through setting up a Jenkins pipeline on an AWS EC2 instance to deploy a Dockerized application to an AWS EKS cluster. Our source code and Kubernetes manifests will be stored in GitHub repositories, and the pipeline will be triggered on any commit to the repository. The pipeline will build a Docker image, push it to the AWS Elastic Container Registry (ECR), and deploy it to the EKS cluster. Additionally, the Ingress Controller will handle routing and access management, ensuring that your application is properly exposed and accessible.
Prerequisites
- AWS Account with IAM permissions to create and manage EC2, ECR, and EKS resources.
- Basic knowledge of Docker, Kubernetes, and Jenkins.
- GitHub account to store the application code and Kubernetes manifests.
1. Set Up AWS EC2 Instance for Jenkins
1.1 Create an Ubuntu Instance
Launch an Ubuntu EC2 instance in the us-west-2 region (you can use any region, but use the same region for all resources) using the AWS Management Console. Choose a t2.micro instance to stay within the free-tier limits and allocate 15 GiB of storage.
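If you prefer the CLI, a roughly equivalent launch command looks like the sketch below; the AMI ID, key pair, and security group are placeholders you would replace with your own values:
$ aws ec2 run-instances \
    --image-id <ubuntu-ami-id> \
    --instance-type t2.micro \
    --key-name <your-key-pair> \
    --security-group-ids <sg-allowing-ssh-and-8080> \
    --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=15}' \
    --region us-west-2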
1.2 SSH to the Instance
Use the following command to SSH into the instance from your local machine. Replace <your-key-pair> with your key pair filename and <your-ec2-instance-public-ip> with the public IP of your EC2 instance.
$ ssh -i "rija_oregon.pem" ubuntu@ec2-34-219-162-197.us-west-2.compute.amazonaws.com
2. Install Necessary Tools
2.1 Install AWS CLI v2
1. Download and install AWS CLI v2:
$ sudo su
$ apt update
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ sudo apt install unzip
$ unzip awscliv2.zip
$ sudo ./aws/install -i /usr/local/aws-cli -b /usr/local/bin --update
2. Configure AWS CLI with your credentials:
$ aws configure
- AWS Access Key ID: Enter your AWS Access Key ID.
- AWS Secret Access Key: Enter your AWS Secret Access Key.
- Default region name: Enter us-west-2.
- Default output format: Enter json.
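You can confirm the installation and that the credentials work:
$ aws --version
$ aws sts get-caller-identity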
2.2 Install Docker
1. Update package information and install Docker:
$ sudo apt-get update
$ sudo apt install docker.io
2. Verify Docker installation
$ docker --version
3. Add your user to the Docker group to run Docker commands without sudo
$ sudo usermod -aG docker $USER
4. Log Out and Log Back In
- You need to log out and log back in for the group change to take effect: close the SSH session completely and reconnect (or run newgrp docker in the current shell) before continuing.
5. Verify Docker Socket Permissions
- Check the permissions of the Docker socket to ensure it is accessible by the Docker group:
$ ls -l /var/run/docker.sock
The output should look something like this:
srw-rw---- 1 root docker 0 Jul 30 12:34 /var/run/docker.sock
- The socket should be owned by user root and group docker, and the srw-rw---- permissions give the docker group read and write access.
6. Restart Docker Service
Sometimes, restarting the Docker service can help in applying permission changes:
$ sudo systemctl restart docker
7. Check Docker Service Status
Make sure Docker is running:
$ sudo systemctl status docker
If Docker is not running, start it with:
$ sudo systemctl start docker
8. Try Running Docker Commands
After completing the above steps, try running the Docker commands again:
$ docker ps
2.3 Install kubectl
- Download and install kubectl (this pins an older 1.19 build; ideally use a kubectl release within one minor version of your EKS cluster's Kubernetes version):
$ curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin
$ kubectl version --short --client
2.4 Install eksctl
- Download and install eksctl:
$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin
$ eksctl version
3. Set Up EKS Cluster
3.1 Create an EKS Cluster
- Use eksctl to create an EKS cluster. This command creates a cluster named my-eks-cluster with two managed nodes in the us-west-2 region:
$ eksctl create cluster \
--name my-eks-cluster \
--region us-west-2 \
--nodegroup-name standard-workers \
--node-type t2.medium \
--nodes 2 \
--nodes-min 2 \
--nodes-max 2 \
--managed
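Cluster creation typically takes 15-20 minutes. When it finishes, eksctl writes the new cluster's context into ~/.kube/config, so you can verify the cluster and its worker nodes right away:
$ eksctl get cluster --region us-west-2
$ kubectl get nodes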
4. Install and Configure Jenkins
4.1 Install Jenkins
1. Install Jenkins on your EC2 instance:
$ sudo apt update
Recent Jenkins LTS releases require Java 17, and the package repository is signed with the 2023 key, so the keyring-based setup below replaces the older apt-key method:
$ sudo apt install openjdk-17-jdk
$ sudo wget -O /usr/share/keyrings/jenkins-keyring.asc https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
$ echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
$ sudo apt update
$ sudo apt install jenkins
2. Start Jenkins and enable it to start on boot:
$ sudo systemctl start jenkins
$ sudo systemctl enable jenkins
3. Add the Jenkins user to the Docker group:
$ sudo usermod -aG docker jenkins
4. Restart Jenkins to apply the group membership changes:
$ sudo systemctl restart jenkins
4.2 Access Jenkins
- Open port 8080 in the instance's security group.
- Open your browser and navigate to http://<your-ec2-instance-ip>:8080.
- Unlock Jenkins using the initial admin password stored in /var/lib/jenkins/secrets/initialAdminPassword (the command to print it is shown right after this list).
- Install the suggested plugins during setup.
- Create an admin user when prompted.
- On the Instance Configuration step, click Save and Finish.
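To print the initial admin password on the instance:
$ sudo cat /var/lib/jenkins/secrets/initialAdminPassword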
5. Configure Jenkins for the Pipeline
5.1 Install Jenkins Plugins
- Navigate to Manage Jenkins -> Manage Plugins.
- Install the following plugins:
- Docker Pipeline
- GitHub Integration Plugin
- Kubernetes CLI Plugin
- Pipeline: AWS Steps Plugin
5.2 Add Credentials
- Go to Manage Jenkins -> Manage Credentials.
- Add the following credentials:
- AWS Credentials: add your AWS Access Key ID and Secret Access Key as two credentials with the IDs the pipeline expects:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
- Kubernetes Config File: add your kubeconfig as a Secret file credential (the pipeline binds it with withCredentials). To generate the kubeconfig, run:
$ aws eks update-kubeconfig --region us-west-2 --name my-eks-cluster
Then upload the resulting ~/.kube/config file to Jenkins under the credential ID:
kubeconfig-credentials-id
6. Update Your kubeconfig File:
Use the following AWS CLI command on the EC2 instance to update your kubeconfig file:
$ aws eks update-kubeconfig --region us-west-2 --name my-eks-cluster
This command fetches the necessary credentials and configuration details and updates your ~/.kube/config file to include the EKS cluster context.
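To confirm the context was added and the cluster is reachable:
$ kubectl config current-context
$ kubectl get nodes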
7. Install and Configure Ingress Controller
7.1 Deploy Nginx Ingress Controller
1. Install the Nginx Ingress Controller:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
Verify the installation:
$ kubectl get pods --namespace ingress-nginx
$ kubectl get services --namespace ingress-nginx
2. Configure Ingress Resource for Your Application:
Create an Ingress resource YAML file (e.g., apache-ingress.yaml) to manage external access to your application:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apache-ingress
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: aws.digitxel.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apache-service
                port:
                  number: 80
Apply the Ingress resource:
$ kubectl apply -f apache-ingress.yaml
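The Ingress above routes traffic to a Service named apache-service in the ingress-nginx namespace, which this guide does not define elsewhere; it is expected to come from the manifests repository. For reference, a minimal matching Deployment and Service could look like the sketch below (the httpd image and labels are illustrative placeholders, not the project's actual manifests):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-deployment
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: apache
          image: httpd:2.4   # placeholder; the pipeline later swaps in the ECR image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: apache-service
  namespace: ingress-nginx
spec:
  selector:
    app: apache
  ports:
    - port: 80
      targetPort: 80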
8. Create an Amazon ECR Repository
8.1 Create ECR Repository:
$ aws ecr create-repository --repository-name python-flask-app --region us-west-2
- Note: Replace “python-flask-app” with your desired repository name.
8.2 Verify ECR Repository Creation:
$ aws ecr describe-repositories --region us-west-2
9. Create Jenkins Pipeline
9.1 Pipeline Script
Create a new pipeline job in Jenkins and use the following script:
Point the job at the GitHub repository that contains your application code, then use the following pipeline script:
Pipeline Script:
pipeline {
    agent any

    environment {
        AWS_ACCESS_KEY_ID = credentials('AWS_ACCESS_KEY_ID')
        AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')
        AWS_DEFAULT_REGION = 'us-west-2'
        AWS_ACCOUNT_ID = '891377137882'
        AWS_ECR_URI = '891377137882.dkr.ecr.us-west-2.amazonaws.com'
        IMAGE_NAME = 'python-flask-app'
        KUBE_NAMESPACE = 'ingress-nginx'
        IMAGE_TAG = "${BUILD_NUMBER}-${env.BUILD_ID}" // Unique image tag
        MANIFEST_REPO = 'https://github.com/TheAbdullahChaudhary/aws-eks-jekins-pipline-manifest-files.git'
        KUBECONFIG_CREDENTIALS_ID = 'kubeconfig-credentials-id'
    }

    stages {
        stage('Logging into AWS ECR') {
            steps {
                script {
                    def ecrLogin = sh(script: "aws ecr get-login-password --region ${AWS_DEFAULT_REGION}", returnStdout: true).trim()
                    def loginCmd = "docker login --username AWS --password-stdin ${AWS_ECR_URI}"
                    sh "echo '${ecrLogin}' | ${loginCmd}"
                }
            }
        }
        stage('Checkout Code') {
            steps {
                git branch: 'main', url: 'https://github.com/TheAbdullahChaudhary/python-flask-app.git'
            }
        }
        stage('Building Docker Image') {
            steps {
                script {
                    dockerImage = docker.build("${IMAGE_NAME}:${IMAGE_TAG}")
                }
            }
        }
        stage('Pushing Docker Image to ECR') {
            steps {
                script {
                    sh "docker tag ${IMAGE_NAME}:${IMAGE_TAG} ${AWS_ECR_URI}/${IMAGE_NAME}:${IMAGE_TAG}"
                    sh "docker push ${AWS_ECR_URI}/${IMAGE_NAME}:${IMAGE_TAG}"
                }
            }
        }
        stage('Create Namespace on EKS') {
            steps {
                withCredentials([file(credentialsId: "${KUBECONFIG_CREDENTIALS_ID}", variable: 'KUBECONFIG')]) {
                    sh "kubectl create namespace ${KUBE_NAMESPACE} || echo 'Namespace already exists'"
                }
            }
        }
        stage('Checkout Kubernetes Manifests') {
            steps {
                git branch: 'main', url: "${MANIFEST_REPO}"
            }
        }
        stage('Update Image Tag in Manifests') {
            steps {
                script {
                    sh """
                        # Update image tag in the manifest files
                        find /var/lib/jenkins/workspace/Deploy-to-EKS/ -type f -name '*.yaml' -exec sed -i 's|image: ${AWS_ECR_URI}/${IMAGE_NAME}:.*|image: ${AWS_ECR_URI}/${IMAGE_NAME}:${IMAGE_TAG}|g' {} +
                    """
                }
            }
        }
        stage('Apply Kubernetes Manifests') {
            steps {
                withCredentials([file(credentialsId: "${KUBECONFIG_CREDENTIALS_ID}", variable: 'KUBECONFIG')]) {
                    sh "kubectl apply -f /var/lib/jenkins/workspace/Deploy-to-EKS/ --namespace=${KUBE_NAMESPACE}"
                }
            }
        }
    }

    post {
        success {
            echo 'Pipeline completed successfully! Application deployed.'
        }
        failure {
            echo 'Pipeline failed! Check logs for details.'
        }
    }
}
Detailed Breakdown:
1. Environment Variables:
These variables are used throughout the pipeline for AWS credentials, ECR URI, image name, Kubernetes namespace, and the GitHub repository URLs.
environment {
    AWS_ACCESS_KEY_ID = credentials('AWS_ACCESS_KEY_ID')
    AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')
    AWS_DEFAULT_REGION = 'us-west-2'
    AWS_ACCOUNT_ID = '891377137882'
    AWS_ECR_URI = '891377137882.dkr.ecr.us-west-2.amazonaws.com'
    IMAGE_NAME = 'python-flask-app'
    KUBE_NAMESPACE = 'ingress-nginx'
    IMAGE_TAG = "${BUILD_NUMBER}-${env.BUILD_ID}" // Unique image tag
    MANIFEST_REPO = 'https://github.com/TheAbdullahChaudhary/aws-eks-jekins-pipline-manifest-files.git'
    KUBECONFIG_CREDENTIALS_ID = 'kubeconfig-credentials-id'
}
2. Stages:
a. Logging into AWS ECR:
This stage logs into the AWS ECR registry using the AWS CLI.
stage('Logging into AWS ECR') {
    steps {
        script {
            def ecrLogin = sh(script: "aws ecr get-login-password --region ${AWS_DEFAULT_REGION}", returnStdout: true).trim()
            def loginCmd = "docker login --username AWS --password-stdin ${AWS_ECR_URI}"
            sh "echo '${ecrLogin}' | ${loginCmd}"
        }
    }
}
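Note that interpolating the token into the echo command means it can surface in the build log; an equivalent variant that keeps the password inside a single shell pipeline (same result, assuming the AWS CLI is installed on the agent) would be:
stage('Logging into AWS ECR') {
    steps {
        // The token never appears in the interpolated command, only in the pipe
        sh "aws ecr get-login-password --region ${AWS_DEFAULT_REGION} | docker login --username AWS --password-stdin ${AWS_ECR_URI}"
    }
}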
b. Checkout Code:
This stage checks out the application code from the specified GitHub repository.
stage('Checkout Code') {
    steps {
        git branch: 'main', url: 'https://github.com/TheAbdullahChaudhary/python-flask-app.git'
    }
}
c. Building Docker Image:
This stage builds a Docker image from the checked-out code.
stage('Building Docker Image') {
    steps {
        script {
            dockerImage = docker.build("${IMAGE_NAME}:${IMAGE_TAG}")
        }
    }
}
d. Pushing Docker Image to ECR:
This stage tags and pushes the Docker image to AWS ECR.
stage('Pushing Docker Image to ECR') {
    steps {
        script {
            sh "docker tag ${IMAGE_NAME}:${IMAGE_TAG} ${AWS_ECR_URI}/${IMAGE_NAME}:${IMAGE_TAG}"
            sh "docker push ${AWS_ECR_URI}/${IMAGE_NAME}:${IMAGE_TAG}"
        }
    }
}
e. Create Namespace on EKS:
This stage creates a namespace on the EKS cluster if it does not already exist.
stage('Create Namespace on EKS') {
    steps {
        withCredentials([file(credentialsId: "${KUBECONFIG_CREDENTIALS_ID}", variable: 'KUBECONFIG')]) {
            sh "kubectl create namespace ${KUBE_NAMESPACE} || echo 'Namespace already exists'"
        }
    }
}
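The || echo keeps the stage green when the namespace already exists, but it also swallows other errors; a common alternative is the idempotent dry-run/apply idiom, which could replace the sh line above:
sh "kubectl create namespace ${KUBE_NAMESPACE} --dry-run=client -o yaml | kubectl apply -f -"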
f. Checkout Kubernetes Manifests:
Check out the Kubernetes manifests from the specified GitHub repository.
stage('Checkout Kubernetes Manifests') {
    steps {
        git branch: 'main', url: "${MANIFEST_REPO}"
    }
}
g. Update Image Tag in Manifests:
Updates the image tag in the Kubernetes manifest files to use the newly built Docker image.
stage('Update Image Tag in Manifests') {
    steps {
        script {
            sh """
                # Update image tag in the manifest files
                find /var/lib/jenkins/workspace/Deploy-to-EKS/ -type f -name '*.yaml' -exec sed -i 's|image: ${AWS_ECR_URI}/${IMAGE_NAME}:.*|image: ${AWS_ECR_URI}/${IMAGE_NAME}:${IMAGE_TAG}|g' {} +
            """
        }
    }
}
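The hard-coded /var/lib/jenkins/workspace/Deploy-to-EKS/ path assumes the Jenkins job is literally named Deploy-to-EKS; a sketch using Jenkins' built-in WORKSPACE variable avoids that coupling (same sed expression):
sh """
    # Operate on the current job's workspace instead of a fixed path
    find "${env.WORKSPACE}" -type f -name '*.yaml' -exec sed -i 's|image: ${AWS_ECR_URI}/${IMAGE_NAME}:.*|image: ${AWS_ECR_URI}/${IMAGE_NAME}:${IMAGE_TAG}|g' {} +
"""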
h. Apply Kubernetes Manifests:
Applies the updated Kubernetes manifests to the EKS cluster to deploy the application.
stage('Apply Kubernetes Manifests') {
    steps {
        withCredentials([file(credentialsId: "${KUBECONFIG_CREDENTIALS_ID}", variable: 'KUBECONFIG')]) {
            sh "kubectl apply -f /var/lib/jenkins/workspace/Deploy-to-EKS/ --namespace=${KUBE_NAMESPACE}"
        }
    }
}
3. Post Actions:
These actions are executed after the pipeline stages have been completed. They provide notifications on the success or failure of the pipeline.
post {
    success {
        echo 'Pipeline completed successfully! Application deployed.'
    }
    failure {
        echo 'Pipeline failed! Check logs for details.'
    }
}
10. Setting Up GitHub Webhook
10.1 Configure Webhook in GitHub
- Navigate to the GitHub repository: https://github.com/TheAbdullahChaudhary/python-flask-app.git
- Go to Settings -> Webhooks -> Add webhook.
- Enter your Jenkins URL followed by /github-webhook/ as the Payload URL, e.g., http://<your-ec2-instance-ip>:8080/github-webhook/.
- Set the Content type to application/json.
- Select Just the push event.
- Click Add webhook. In the Jenkins job configuration, also enable the "GitHub hook trigger for GITScm polling" option under Build Triggers so that pushes to the repository actually start the pipeline.
11. Set the DNS Records for the Domain
To expose the application (aws.digitxel.com) to the public internet through the external hostname of the ingress-nginx controller's load balancer (in this setup, a3228509e1f244e43a475647adc73688-1453384094.us-west-2.elb.amazonaws.com), you need to create a DNS record that points to that load balancer. Here is the record to create:
DNS Records
1. CNAME Record for aws.digitxel.com:
- Name: aws
- Type: CNAME
- Value: a3228509e1f244e43a475647adc73688-1453384094.us-west-2.elb.amazonaws.com
You can get the load balancer's external hostname for the ingress-nginx controller with the following command:
$ kubectl get all -n ingress-nginx
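If you only want the hostname, you can also query the controller Service directly (the Service name ingress-nginx-controller comes from the official deploy manifest applied earlier):
$ kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'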
12. Map the Load Balancer to the Hostname Locally (/etc/hosts)
1. Edit the /etc/hosts File:
Use a text editor with administrative privileges (like sudo) to edit the /etc/hosts file.
sudo vi /etc/hosts
- You can use any text editor you prefer (vi, nano, gedit, etc.). Just replace vi with your preferred editor.
2. Add the Entry:
/etc/hosts maps IP addresses to hostnames, so first resolve the load balancer's DNS name to an IP address (for example with nslookup, as shown below) and then add a line in the form <resolved-IP> aws.digitxel.com, for example (placeholder IP):
203.0.113.10 aws.digitxel.com
Keep in mind that the load balancer's IPs can change over time, so this is only suitable for quick local testing; for anything persistent, rely on the CNAME record from the previous section.
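A quick way to look up a current IP for the load balancer (any of the returned addresses will do):
$ nslookup a3228509e1f244e43a475647adc73688-1453384094.us-west-2.elb.amazonaws.com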
Testing the Pipeline Execution
Now, I am going to make a commit to the GitHub repository to trigger the Jenkins pipeline. This commit will initiate the automated process we have configured: Jenkins will fetch the latest code, build and test it, create a Docker image, push the image to Amazon ECR, and apply the Kubernetes manifests to deploy the application on the EKS cluster. By observing the pipeline execution, we can verify that each stage runs smoothly, ultimately deploying the application and ensuring it is accessible via the configured Nginx Ingress controller. This test will validate the end-to-end functionality of our CI/CD pipeline.
Currently, our application looks like this:
Let’s make a commit:
Pipeline triggered:
The pipeline completed successfully:
Application deployed:
Conclusion
With the Jenkins pipeline configured and integrated with AWS ECR and EKS, you are well-equipped to handle the deployment of your Python Flask application efficiently. This automated pipeline not only simplifies the deployment process but also enhances consistency and reliability in delivering updates to your application. By committing code changes to your GitHub repository, you can trigger the pipeline to build and deploy new Docker images seamlessly. The detailed stages, from logging into ECR to applying Kubernetes manifests, ensure that each step of the deployment process is handled effectively. As you test the pipeline and observe the automated deployment in action, you’ll gain valuable insights into its robustness and reliability. This setup paves the way for scalable, automated deployments, allowing you to focus on developing and improving your application while Jenkins and EKS manage the deployment lifecycle.
GitHub repo links:
App repo: https://github.com/TheAbdullahChaudhary/python-flask-app/
Manifests repo: https://github.com/TheAbdullahChaudhary/aws-eks-jekins-pipline-manifest-files/
As a seasoned DevOps engineer with more than three years of hands-on experience, I thrive on tackling complex infrastructure challenges and optimizing workflows. Whether you’re looking to streamline your CI/CD pipelines, implement robust cloud solutions, or enhance security through automation, I’m here to help.
Feel free to contact me for expert advice, consultation services, or project collaborations:
LinkedIn: Muhammad Abdullah
WhatsApp/Call: +92 (337) 4840 228
Email: devopswithabdullah@gmail.com