How To Build Docker Image Using Kaniko | Jenkins | Kubernetes

In this demo, we will use Kaniko to build container images and Jenkins to push them to AWS ECR (Elastic Container Registry).

What is Kaniko?

Kaniko is a tool to build container images from a Dockerfile. Kaniko doesn’t depend on a Docker daemon and executes each command within a Dockerfile completely in userspace. This enables building container images in environments that can’t easily or securely run a Docker daemon, such as a standard Kubernetes cluster.

Why Kaniko?

Building images from a standard Dockerfile typically relies on interactive access to a Docker daemon, which requires root access on your machine. This makes it difficult to build container images in environments that can’t easily or securely expose a Docker daemon, such as Kubernetes clusters.

How does Kaniko work?

  1. The Kaniko executor image is responsible for building an image from a Dockerfile and pushing it to a registry.
  2. The Kaniko debug image (gcr.io/kaniko-project/executor:debug) is recommended because it includes a shell, and a shell is required when the image is used in CI/CD.
  3. Kaniko accepts three arguments: a Dockerfile, a build context, and a remote Docker registry (a minimal invocation is sketched after this list).
  4. Kaniko extracts the filesystem of the base image (the FROM image in the Dockerfile).
  5. It then executes the commands in the Dockerfile, snapshotting the filesystem in userspace after each one.
  6. After each command, it appends a layer of changed files to the base image (if there are any) and updates the image metadata.
  7. Finally, it pushes the image to the given registry.
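
For reference, a minimal executor invocation with those three arguments looks something like the sketch below; the registry, repository, and tag are placeholders:

# Sketch of the three Kaniko arguments (registry/repo/tag are placeholders).
/kaniko/executor \
  --dockerfile=Dockerfile \
  --context=dir:///workspace \
  --destination=<registry>/<repo>:<tag>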

Kaniko Build Contexts

You will need to store your build context in a place that Kaniko can access. Right now, Kaniko supports these storage solutions (example --context flags follow the list):

  • GCS Bucket
  • S3 Bucket
  • Azure Blob Storage
  • Local Directory
  • Local Tar
  • Standard Input
  • Git Repository
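
For example, the build context is passed with the executor’s --context flag; the bucket names, paths, and repository below are placeholders:

--context dir:///workspace                          # local directory
--context tar:///path/to/context.tar.gz             # local tar
--context s3://your-bucket/path/to/context.tar.gz   # S3 bucket
--context gs://your-bucket/path/to/context.tar.gz   # GCS bucket
--context git://github.com/your-org/your-repo.git   # Git repository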

Let’s Get Started….

Prerequisites

  1. A Kubernetes cluster
  2. Jenkins deployed in the Kubernetes cluster
  3. Access to an AWS ECR repository

Create an ECR repository. Here I have already created an ECR repo named “demo”.
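
If you prefer the CLI over the console, the repository can also be created with the AWS CLI; the region below is just an example:

# Create the "demo" ECR repository (pick your own region).
aws ecr create-repository --repository-name demo --region us-east-1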

To access the ECR repo from Kubernetes, you need to provide valid AWS credentials (an ACCESS_KEY and SECRET_KEY) or attach an AWS IAM role to the nodes of the desired Kubernetes cluster.

For this demo, I used k3s.io to set up a Kubernetes cluster on EC2 and attached an IAM role with ECR permissions for pushing images to ECR.
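
For reference, k3s can be installed on the EC2 instance with its official one-line install script:

# Install k3s (lightweight Kubernetes) on the EC2 instance.
curl -sfL https://get.k3s.io | sh -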

ECR Permission Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ],
      "Resource": "*"
    }
  ]
}
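
One way to wire this up from the CLI, assuming the JSON above is saved as ecr-push-policy.json and the EC2 instance role is named k3s-node-role (both names are just placeholders for illustration):

# Create the policy and attach it to the instance role (names are placeholders).
aws iam create-policy --policy-name ecr-push-policy --policy-document file://ecr-push-policy.json
aws iam attach-role-policy --role-name k3s-node-role \
  --policy-arn arn:aws:iam::<account-id>:policy/ecr-push-policy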

I deployed Jenkins in the Kubernetes cluster using the jenkinsci Helm chart.
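
A typical install with that chart looks like the following; the release name and namespace are assumptions, so adjust them to your setup:

# Add the Jenkins chart repository and install Jenkins into its own namespace.
helm repo add jenkins https://charts.jenkins.io
helm repo update
helm install jenkins jenkins/jenkins --namespace jenkins --create-namespace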

When using instance roles, we no longer need a secret, but we still need to configure Kaniko to authenticate to AWS by using a config.json containing just { "credsStore": "ecr-login" }, mounted at /kaniko/.docker/.

Apply the ConfigMap below for the Kaniko ECR configuration.

apiVersion: v1
kind: ConfigMap
metadata:
  name: docker-config
data:
  config.json: |-
    {
      "credsStore": "ecr-login"
    }
kubectl apply -f configmap.yml
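
You can quickly verify that the ConfigMap exists before wiring it into the pipeline:

# Confirm the ConfigMap was created and contains the credsStore entry.
kubectl get configmap docker-config -o yaml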

Now, let’s create the Jenkins pipeline.

For creating a pipeline Jenkinsfile is required. A Jenkinsfile is a text file that contains the definition of a Jenkins Pipeline and is checked into source control.

Jenkinsfile:

// Uses Declarative syntax to run commands inside a container.
pipeline {
  agent {
    kubernetes {
      yaml '''
        kind: Pod
        metadata:
          name: kaniko
          namespace: default
        spec:
          containers:
          - name: shell
            image: gcr.io/kaniko-project/executor:debug
            imagePullPolicy: IfNotPresent
            env:
            - name: container
              value: "docker"
            command:
            - /busybox/cat
            tty: true
            volumeMounts:
            - name: docker-config
              mountPath: /kaniko/.docker
          volumes:
          - name: docker-config
            configMap:
              name: docker-config
        '''
      defaultContainer 'shell'
    }
  }
  stages {
    stage('Build') {
      steps {
        container('shell') {
          sh "/kaniko/executor --dockerfile `pwd`/Dockerfile --context `pwd` --destination=${env.ECR_REPO}:${env.BUILD_ID}"
        }
      }
    }
  }
}

Here, “ECR_REPO” is the ECR repository that I added as an environment variable in Jenkins, and the Jenkins “BUILD_ID” is used to tag the container images.
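
Note that Kaniko’s --destination expects a full image reference, so ECR_REPO should hold the full repository URI; the account ID and region below are placeholders:

# Example ECR_REPO value (account ID and region are placeholders).
ECR_REPO=123456789012.dkr.ecr.us-east-1.amazonaws.com/demo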

To add an environment variable in Jenkins:

Manage Jenkins → Configure System → Global properties → check the Environment variables box.

In this way, you can add any desired environment variables in Jenkins.

To create a pipeline:

New Item → Enter the job name → Choose Pipeline → OK

Then, in the Pipeline section, configure SCM. Configure credentials to authenticate to your repository over SSH/HTTPS and choose that credential from the drop-down. Add the Jenkinsfile path in Script Path and apply.

Now, let’s build the pipeline.

Build success! You can now check the container images in the ECR console. Later, you can add a Kubernetes deployment stage to the pipeline and deploy the image to a Kubernetes cluster.
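
As a sketch of that deployment stage, the step could simply point an existing Deployment at the freshly pushed image with kubectl; the deployment and container names (demo) are assumptions:

# Roll the new image out to an existing Deployment (names are placeholders).
kubectl set image deployment/demo demo=${ECR_REPO}:${BUILD_ID}
kubectl rollout status deployment/demo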

GitHub – raino007/kaniko-jenkins

References

Kaniko: https://github.com/GoogleContainerTools/kaniko

Jenkins Helm Chart: https://github.com/jenkinsci/helm-charts

k3s: https://k3s.io/

AWS ECR: https://aws.amazon.com/ecr/
