Build a complete CI/CD Pipeline and its infrastructure with AWS — Jenkins — Bitbucket — Docker — Terraform → Part 1

Hello guys and welcome to this tutorial, in which I will be guiding you through the creation of a complete CI/CD pipeline governed by Jenkins, with all the infrastructure on AWS.
Let me first lay down a summary of what we are going to build and the different steps we’ll be taking.
Part 1 (current article) → Set up the project by downloading and looking at the Web App which will be used to test our infrastructure and pipeline. Here we also create & test suitable Dockerfiles for our project and upload everything to Bitbucket.
Part 2 (here) → Set up Slack and create a Bot which will be used by Jenkins to send notifications on the progression/status of the pipeline.
Part 3 (here) → Create the first part of the AWS infrastructure with Terraform. Here we will create the EC2 instances / SSH keys and the actual network infrastructure, plus the basis for the IAM roles.
Part 4 (here) → Create the second part of the AWS infrastructure with Terraform. We are going to create the S3 buckets, the ECR repositories and complete the definition of the IAM roles by adding the correct policies.
Part 5 (here) → Complete the configuration of the Jenkins and Web App instances by implementing the correct user data.
Part 6 (here) → Implement the pipeline in a Jenkinsfile, try out the pipeline, see how everything fits together and lay down some final comments.
GOAL
Suppose we are building a Web App and we would like to implement a Continuous Integration / Continuous Deployment (CI/CD) pipeline, so that we can develop and release working parts of the software faster and more reliably, in the spirit of DevOps. The pipeline needs to run on a push to the remote repository (Bitbucket) and go through various stages (specified in a Jenkinsfile) in which unit / integration / load balancing tests are run alongside some security / vulnerability scans. Along the way, artifacts are saved in AWS Elastic Container Registry as Docker images, test reports are uploaded to an S3 bucket (for future use) and Slack notifications provide immediate feedback. We would like to build all the infrastructure with Terraform, implementing Infrastructure as Code (IaC), which allows us to programmatically define everything and treat our infrastructure (and, thanks to the Jenkinsfile and the multibranch pipeline, the pipeline itself) as normal code.
BONUS
As a bonus we will also set up Jenkins completely from the user data of the AWS EC2 instance (namely from Bash at creation time). It will automatically create a Jenkins admin user, confirm the Jenkins URL, download and install all the necessary plugins, create SSH credentials to let Jenkins access Bitbucket and finally set up a multibranch pipeline which will be triggered by a push to the Bitbucket repository.
CONSIDERATIONS
The pipeline we are going to create relies on a single branch. In a more mature and bigger project, you'd want to implement Test, Staging and Production branches to better manage the different phases of the software development process. Still, I believe the techniques we are going to employ are the foundations for more advanced ones, and with relatively few modifications one could easily add more branches to the project.
STYLE
I will present the code I'm running on the console as images taken from my Windows PowerShell terminal. While this might sound unhandy (since one cannot copy-paste commands from images), I believe it serves two purposes: the first is purely aesthetic (I believe it is way cooler that way) and the second is that it pushes readers to actually type these commands instead of simply copy-pasting them (which is better for learning purposes). Having said that, I will leave in each image's description the code that is actually used in it (when there is more than one command, the commands will be separated by a semicolon ; so that one could still, in principle, copy-paste them).
COMPLETED PROJECT
The completed project is available on my GitHub:
However, to be able to implement it correctly you will need to follow the steps provided in this tutorial!
A First Look
The web app will be served from an EC2 instance, in particular from a Docker container whose image will be pulled at boot time from a specific AWS Elastic Container Registry. The Jenkins server will be hosted on its own EC2 instance and will be accessible to the outside world at Jenkins' specific port (8080). These two instances will be supplemented with their own Elastic Network Interfaces under their own Subnets. A Router with a Route Table will allow internal communications and allow the Internet Gateway to correctly let external users communicate through the VPC with the instances. Instances will have only the policies necessary to perform their duties, without access to unneeded AWS services (for obvious security reasons). Since the Jenkins setup scripts will be quite lengthy, we will upload them to an S3 bucket, and in the EC2 user data we will pull them down and run them (this is needed to avoid the 16 KB limit of AWS EC2's user data; for more information you can take a look at this article I've written regarding this workaround). We will also upload the Bitbucket SSH keys to AWS Secrets Manager. Let's see what we are going to build using Terraform:

Regarding the Jenkins pipeline, the stages we are going to create are the following (taken from a correctly completed pipeline in the Blue Ocean plugin of Jenkins):

- Setup → This step initializes the variables needed by the pipeline and logs in the AWS Elastic Container Registry.
- Build Test Image → This step builds and pushes the docker image for the unit / integration tests.
- Run Unit Tests → This step runs the unit tests and produces a report which will be uploaded to an S3 bucket. It also sends a Slack message telling the channel the tests’ results.
- Run Integration Tests → This step runs the integration tests and produces a report which will be uploaded to an S3 bucket. It also sends a Slack Message telling the channel the tests’ results.
- Build Staging Image → This step builds and pushes the staging image, namely a copy of the production one, which will be used for Load Balancing and Security checks.
- Run Load Balancing tests / Security checks → This step runs some load balancing tests and performs security checks on the Staging Image. It saves reports which are uploaded to an S3 bucket and it also sends a Slack message telling the channel that these tests have been run.
- Deploy to Fixed Server → This step builds and pushes the production image and then reboots the EC2 instance hosting the Web App (this instance will be constructed such that it will pull down the new ‘release’ image and run it at each boot).
- Clean Up → Since we have already pushed the images to the AWS ECR in the previous steps, we can (and we must) remove the old images in the local machine to avoid stacking them up and cluttering the storage. The last uploaded images will be kept, while the older ones will be discarded. This step also clears the config.json file (which otherwise would store the credentials for the remote AWS ECR).
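To make the flow concrete, here is a minimal declarative Jenkinsfile skeleton matching these stages. It is only a sketch with placeholder echo steps; the real stage implementations are built in Part 6 of this series:

pipeline {
    agent any
    stages {
        stage('Setup') { steps { echo 'init variables, log in to ECR' } }
        stage('Build Test Image') { steps { echo 'build + push the test image' } }
        stage('Run Unit Tests') { steps { echo 'unit tests, report to S3, Slack message' } }
        stage('Run Integration Tests') { steps { echo 'integration tests, report to S3, Slack message' } }
        stage('Build Staging Image') { steps { echo 'build + push the staging image' } }
        stage('Run Load Balancing Tests / Security Checks') { steps { echo 'loadtest + scans, reports to S3, Slack message' } }
        stage('Deploy to Fixed Server') { steps { echo 'build + push production image, reboot the Web App instance' } }
        stage('Clean Up') { steps { echo 'remove local images, clear config.json' } }
    }
}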

NOTES
There is a huge number of improvements one could perform on this project. Just to mention some:
- The EC2 instance hosting the Web App will be rebooted after a new successful pipeline run has been completed. This means that the site will be down for the whole boot time. For a real world project this could be quite an inconvenience, and to achieve zero downtime of the Web App one could implement a Blue/Green deployment or simply use Elastic Beanstalk (or many other possible solutions).
- The Staging Image will be run on the Jenkins server as a Docker container. Since we will not have a big infrastructure this is fine for us; however, it would be best to create a copy of the required infrastructure, deploy the Staging Image there, run the tests and then destroy that infrastructure.
- As was mentioned at the beginning, the Jenkins pipeline will be triggered by a push to any branch of the remote repository. It would be better to define more branches and adapt the logic of the CI/CD pipeline accordingly.
- Our infrastructure will not be able to handle high levels of web traffic. We could scale vertically, namely replace the instance with a more capable and powerful one when traffic gets high, and/or we could scale horizontally, namely create more instances serving the same web app and redirect a portion of the traffic to those. This could easily be implemented by adding an Auto Scaling Group and a Load Balancer (or by using Elastic Beanstalk). Node.js also offers the possibility of forking child processes to distribute the load to workers.
This tutorial is still going to be quite lengthy even without the features just mentioned. As a 'stylistic' choice, I decided to leave them out so as not to make this project too complicated, while still creating a nice template which can be enhanced at any time in the future (and as a useful exercise).
Set Up
Since this tutorial is not about creating a Web App, we’ll just use a very simple template I’ve created for that purpose. This can be found at
https://github.com/KevinDeNotariis/simple-web-app
In order to get started, let’s git clone this repo and check whether we got the correct folder:

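For copy-paste convenience, the commands in the image above boil down to something like this (the directory listing is my guess for the "check"):

git clone https://github.com/KevinDeNotariis/simple-web-app.git
ls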
Let's remove the .git folder, as if we were starting a project anew.
For Windows folks, in PowerShell type:

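The image isn't copy-paste friendly, so here is what those PowerShell commands presumably are (Remove-Item with -Force handles the read-only files that git creates):

cd ./simple-web-app
Remove-Item -Recurse -Force .git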
While for Unix friends:
cd ./simple-web-app
chmod -R a+w .git
rm -rf .git
These lines are necessary since git makes some files inside .git/objects read-only, so trying to delete them without first changing permissions would produce an error.
Now that we have cleared our project, let's see what it actually contains. I'll use Visual Studio Code for this tutorial, but obviously any other IDE would be fine too. To open the project in VSCode I'll just:

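That is, assuming VSCode's CLI launcher is on the PATH and we are inside the project folder:

code .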
The folder structure of the web app is the following:

Everything is placed inside the server folder, which contains src/ and test/ folders plus some other files (package.json + package-lock.json and configuration files for Webpack, Mocha and Babel).
We have some unit and integration tests which make use of mocha and chai / chai-http. In the package.json, under "scripts", one can see that we have "test:load": "loadtest -n 10000 http://localhost:8000", which will be used to make a simple load test using the npm library loadtest. Babel is also used to employ ES6 features, and Webpack is used to build a more compact version for the staging and production images.
If we want to see what this server provides us, we first need to install the dependencies (if you do not have Node.js installed, you can download it here):

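Since everything lives in the server folder, the installation shown above is presumably:

cd server
npm i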
Now we can start the server:

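Judging from the load-test instructions later on, the server is started with the watch script:

npm run watch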
And by navigating to http://localhost:8000 we should be greeted with:

Now we can navigate to http://localhost:8000/users to see:

Now, this page simulates a call to a database, which in this case is just a .json file in server/src/routes.
To run some tests, let’s type:

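The exact script names live in package.json; given the test:load naming, the unit-test command is presumably along the lines of:

npm run test:unit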
This command will run two unit tests (in server/test/unit/index.js) and will use mochawesome to save and print some nice-looking reports, which will then be available at server/mochawesome-report/mochawesome.html.
We can also run some integration tests (the following will check whether the users are fetched from the ‘database’):

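Again assuming the script naming follows the same pattern:

npm run test:integration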
The results are again saved in server/mochawesome-report/mochawesome.html.
For completeness, let's also try to run the load balancing test. To do that, we first start the server using npm run watch and then, in another terminal, we run:

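This is the test:load script we saw in package.json:

npm run test:load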
All these tests will be automated in the CI / CD pipeline and if for some reason one of these should fail, the pipeline would stop and return an error.
Bitbucket
We can create a free account on Bitbucket, define a project, say SimpleWebApp, and then a repository simple-web-app. The screen should look like the following when creating the repository:

Once created, we will be brought to a page where some tips suggest how to connect our local simple-web-app to this remote repository. Let's hop back to the terminal (if the server is still running, just CTRL-C and confirm to exit) and initialize our local git repository (be sure to be in /simple-web-app):

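The command in the image is simply:

git init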
Once that is done, we need to add and commit our first changes (actually the whole app). But before doing that, let's create a .gitignore to avoid committing node_modules and other files. To create a suitable .gitignore we can employ npx gitignore node as follows:

This should have created a .gitignore in the root directory. Let's open it and add the following line at the end:

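Based on the explanation below, the added line is (the trailing slash is my assumption):

mochawesome-report/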
This new line in the .gitignore will tell git not to commit the folder mochawesome-report to the repository.
We are now ready to add and commit:

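Something along these lines (the commit message is mine):

git add .
git commit -m "First commit"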
Let's also rename the branch to main:

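Presumably with git's branch-move flag:

git branch -M main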
At this point, we are ready to push our repository to Bitbucket, and we can do that by following the instructions kindly provided by Bitbucket (minding that our branch is named main and not master):
git remote add origin https://<YOUR_USERNAME>@bitbucket.org/<YOUR_USERNAME>/simple-web-app.git
git push -u origin main
Upon refreshing the Bitbucket page, we should see our files correctly uploaded:

Docker
Docker is an awesome piece of software which allows us to have consistent builds across multiple platforms. We'll have test, staging and production images which will be stored in an AWS Elastic Container Registry. These images will be tagged with the Git commit hash, so that it will be easy to reference the corresponding artifact image from a given commit.
Let’s start implementing our Dockerfiles, one for the test image and one for the staging/production image. The test image will be raw and will contain all the ‘devDependencies’ without any optimization. The staging/production image, instead, will be created using Webpack and with only the actual needed dependencies to make it lighter and faster.
Test Image
Jumping to VSCode (or your favourite IDE), create a new file in the root directory (simple-web-app/) called Dockerfile.test (note that it is not inside /server):

And put there the following code:
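The embedded snippet does not survive here, but it can be reconstructed from the explanation below; the sha256 digest is elided in the article, so a placeholder stands in for it:

FROM node:lts-alpine@sha256:<digest-of-the-image>
COPY . /opt/app
WORKDIR /opt/app/server
RUN npm i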
Let me briefly explain these steps:
- The first line, FROM node:lts-alpine@sha256:....., will pull down the node:lts-alpine docker image from Docker Hub. This image will contain all the necessary tools for our web app to correctly run. In this tutorial, I'm trying to apply some of the best practices taken from this wonderful article: https://snyk.io/blog/10-best-practices-to-containerize-nodejs-web-applications-with-docker/. Quoting from that:
"The recommendations for building better Docker images are: use small Docker images. This will translate to a smaller software footprint on the Docker image reducing the potential vulnerability vectors, and a smaller size, which will speed up the image build process. Use the Docker image digest, which is the static SHA256 hash of the image. This ensures that you are getting deterministic Docker image builds from the base image."
- The second line, COPY . /opt/app, will just copy everything in the current directory to the /opt/app folder inside the container.
- With the third line, WORKDIR /opt/app/server, we set the working directory to be that of the server, so that we are ready for the dependencies installation.
- Finally, RUN npm i installs all the required modules to run the tests.
Since we do not need to put every file/folder in our project into the Docker image, we can add a .dockerignore in which we specify what files / folders Docker needs to ignore:

With the following content:

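The exact content is only shown in the article's image; based on what we are ignoring in git so far, it is likely at least:

.git
node_modules
mochawesome-report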
Both the .gitignore and the .dockerignore will be updated as we proceed with the tutorial and add new features that we do not need to commit / put in the Docker images.
Sanity Check
Alright, at this point we can try to build our image and run it as a container to see whether everything works fine. If you do not have Docker and you are on Windows, you can download it here:
https://www.docker.com/products/docker-desktop,
while if you are on Linux, you can follow the instructions here:
https://docs.docker.com/engine/install/ubuntu/ (for Ubuntu)
or here:
https://docs.docker.com/engine/install/debian/ (for Debian)
For Windows, check whether the Docker Daemon is running, while on Linux you can run systemctl status docker to see whether it is up and running.
Let's hop back to the terminal and make sure we are in the folder /simple-web-app. Then we can:

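Putting the breakdown below back together, the command is:

docker build -f Dockerfile.test -t hello:world .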
Let’s break down this command:
- docker build → Command to build an image;
- -f Dockerfile.test → Specify the Dockerfile to use for the build (the default would be a file named Dockerfile);
- -t hello:world → Specify the tag as hello:world;
- . → The directory we want to build; with the dot . we specify the current directory.
If everything goes well, we can then run a container based on that image with the following:

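Again, reassembled from the breakdown below:

docker run -d -i --name hello_world hello:world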
- docker run → Specify that we would like to start a container;
- -d → Detached, meaning that the container runs in the background;
- -i → Interactive, allowing the container to remain 'active' in the background (together with -d) without exiting immediately;
- --name hello_world → Name of the container, which will allow us to reference it more easily;
- hello:world → The tag of the image we would like to run.
We can check that the container is up and running with:

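The usual way to list running containers, which is presumably what the image shows:

docker ps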
Since we have specified the interactive mode, we can 'enter' the container, or more precisely, we can get a shell inside the container. Our image node:lts-alpine is a very light one and it does not have /bin/bash, for example, so we'll enter with /bin/sh:

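From the breakdown below, the command is:

docker exec -it hello_world /bin/sh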
- docker exec → Specifies that we would like to execute a command in the container;
- -it → Allows us to interactively communicate with the container via the shell we are going to spawn;
- hello_world → Name of the container;
- /bin/sh → Command that we would like to execute.
Once the command has been sent, we should be prompted with a shell inside the container, already in the directory /opt/app/server as we specified in the Dockerfile.test (in the WORKDIR part):

Cool, now we can try to run the unit / integration tests as before but inside the container:

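Inside the container's shell, presumably the same test scripts as before:

npm run test:unit
npm run test:integration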
And everything seems to work fine.
We can then exit the container, stop it and remove it along with the image:

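Spelled out, the cleanup is presumably:

exit
docker stop hello_world
docker rm hello_world
docker rmi hello:world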
Staging / Production Image
Let's now create the Dockerfile which will build the staging/production image. This will be a bit more elaborate, but nothing too fancy. Let's create a Dockerfile in the root directory (/simple-web-app) of the project:

And inside it we’ll put the following:
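As before, the embedded snippet is missing; reconstructed from the analysis below (the sha256 digest and the exact four mv/rm commands are elided in the article, so placeholders are used):

FROM node:lts-alpine@sha256:<digest-of-the-image>
COPY --chown=node:node . /opt/app
WORKDIR /opt/app/server
# The RUN below also ends with the four mv/rm commands (elided in the article)
# that delete everything except dist, node_modules, package.json and package-lock.json
RUN npm i && \
    chmod 775 -R ./node_modules && \
    npm run build && \
    npm prune --production
ENV NODE_ENV production
EXPOSE 8000
USER node
CMD ["node", "./dist/bundle.js"]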
Let’s analyze it:
- The first line is the same as before: we pull down the node:lts-alpine docker image with a specific sha256 to have consistent builds.
- COPY --chown=node:node . /opt/app → With this command we copy everything in the current directory . to the /opt/app directory inside the container, assigning everything to the user node, which is provided to us for security reasons. We'll run the server as this user, which has low privileges, so that if the server gets pwned the attacker still wouldn't have much power (unless privilege escalation vectors were present);
- WORKDIR /opt/app/server → We set the working directory to /opt/app/server inside the container;
- RUN → Execute the following commands: npm i to install all the dependencies; chmod 775 -R ./node_modules to allow us to prune some modules afterwards; npm run build to let Webpack build our compact staging/production version; npm prune --production to remove all the 'devDependencies'. The other 4 commands (mv ... rm ... mv ... rm) are used to delete everything in the working directory apart from the dist and node_modules folders and the package.json and package-lock.json files. These last four commands, if we had /bin/bash, could be combined into one: /bin/bash -O extglob -c 'rm -r -v !("dist"|"node_modules"|"package.json"|"package-lock.json")';
- ENV NODE_ENV production → Set the environment variable NODE_ENV to production, which comes with a lot of optimizations/good practices for a Node.js project in production (see https://expressjs.com/en/advanced/best-practice-performance.html);
- EXPOSE 8000 → Expose port 8000 to the external world;
- USER node → Set the user to use when running the image;
- CMD ["node", "./dist/bundle.js"] → Start the server, which is Webpacked in the dist folder in the bundle.js file.
Sanity Check
Let's see whether everything works fine with this Dockerfile by building the image, running the container and checking whether it correctly serves the Web App. As before, we build the image (this time we do not need to specify the Dockerfile, since the build defaults to a file indeed called Dockerfile):

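Assuming we reuse the same tag as in the previous sanity check:

docker build -t hello:world .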
We run the container, mapping the inside port 8000 to our port 8000:

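The -p flag does the port mapping; the rest mirrors the earlier run command (the container name is my assumption):

docker run -d -i -p 8000:8000 --name hello_world hello:world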
Navigating to http://localhost:8000 we should see the home page of our Web App.
Cool cool!
We could also enter the container and 'remove + install' loadtest to run some load balancing tests (we need to make sure that loadtest is not in the 'devDependencies', otherwise it will not be installed since we are in 'production' mode):

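Inside the container, something along these lines (the exact commands are my guess):

docker exec -it hello_world /bin/sh
npm uninstall loadtest
npm i loadtest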
and then run the load test:

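Which is again the test:load script:

npm run test:load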
Again, to clear everything, we exit the container, stop it and remove it (plus remove the image):

Let's add Dockerfile to the .dockerignore:

And now we are ready to add, commit and push these changes to our remote repository. Be sure to be in /simple-web-app and:
git add .
git commit -a -m "Added Dockerfiles for building test and staging/production images"
git push
If we check on Bitbucket we should see our new files in the repository.
This first part concludes here! We managed to set up our simple-web-app by downloading the server and creating the Dockerfiles to build the test and staging/production images. We have also set up Bitbucket so we can push our local changes to the remote repository; later on we'll add the webhook that will trigger the Jenkins pipeline.
In the next part we are going to set up Slack; stay tuned and see ya there!
Cheers!
Kevin