INTRODUCTION TO CICD

CICD is a term derived from two acronyms: CI and CD. CI stands for Continuous Integration, and CD stands for Continuous Delivery or Continuous Deployment.
Continuous Integration is the process of frequently building and testing new code changes.
Continuous Delivery is the process of deploying new versions of an application frequently.
CI and CD together empower organizations to design and deliver high-quality software in shorter development life cycles and respond to the needs of the users promptly, resulting in greater customer satisfaction.
Continuous integration, delivery and deployment are practices designed to increase the speed of development and the release of well-tested products. Continuous integration encourages developers to integrate their code into a shared code base frequently and early, and each integration is verified by a build to minimize integration errors. Continuous delivery removes barriers on the way to deployment or release. Continuous deployment goes one step further by automatically deploying every successful build that passes the test suite.

CICD AND DEVOPS

Continuous Integration and Continuous Delivery are often cited as the pillars of successful DevOps. DevOps is a software development approach that bridges the gap between development and operations teams by automating the build, test and deployment of applications. It is implemented using the CICD pipeline. The image below depicts DevOps in a nutshell.

We can say that DevOps is about culture, while CI/CD is mainly about process and tools. CICD lets us make application changes more reliably because there is a consistent, reproducible pipeline for pushing new features to users.

Let us first have a look at the important phases of DevOps lifecycle:

  1. Continuous Development: Software functionality is implemented, and versions of the code are maintained using version control tools like Git, SVN and CVS. Maintaining versions helps developers collaborate on the latest committed code and roll back to previous versions in case of an unstable deployment.
  2. Continuous Testing: It is extremely important to test the code thoroughly before it is released to production, and the most efficient way to do this is automation testing. Tools like Selenium, TestNG and JUnit automate the execution of test cases, saving time and effort, and reports can be generated to analyse and study the cause of bugs. Triggering the entire automated testing process reliably is vital, and this is where Continuous Integration tools come into the picture.
  3. Continuous Integration: Continuous integration is a process for frequently building and testing new code changes. It is arguably the most crucial phase of the cycle, holding it all together. Popular tools are Jenkins, Bamboo and Hudson, with Jenkins the most widely used. These tools orchestrate the cycle and integrate with the tools of the other phases. CI covers configuration management, version control, scheduling test scripts, monitoring performance and more. Generally, code is hosted on a central repository monitored by a CI service; whenever there is an update notification, the CI service pulls and builds the new code.
  4. Continuous Deployment: This is the phase where the code is deployed to production-like environments.
  5. Continuous Monitoring: Monitoring the performance of an application is as vital as its development. Defects that slip past the Testing phase need to be reported as bugs and dealt with, and monitoring tools help minimize the impact of such failures.
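The core of the cycle above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of a CI service's pipeline logic, not how any real CI server is implemented; the stage callables stand in for the build, test and deploy tools named in the phases above.

```python
# Minimal sketch of a CI pipeline pass (hypothetical helper names).
# A real CI server such as Jenkins does this with webhooks and build agents.

def run_pipeline(commit, build, test, deploy):
    """Run one pass of the pipeline for a commit; stop at the first failure."""
    for stage_name, stage in [("build", build), ("test", test), ("deploy", deploy)]:
        if not stage(commit):
            return f"{stage_name} failed for {commit}"
    return f"{commit} deployed"

# Stub stages standing in for tools like Maven, JUnit and Docker.
result = run_pipeline(
    "abc123",
    build=lambda c: True,
    test=lambda c: True,
    deploy=lambda c: True,
)
print(result)  # abc123 deployed
```

The key property this models is fail-fast behaviour: a failed stage short-circuits the run so defects never reach a later stage.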

 

CICD PIPELINE IMPLEMENTATION

CICD enables freshly committed code to be merged into the code repository, facilitates testing at each stage (fulfilling the integration aspect) and ends its run with a deployment of the complete application to end users (fulfilling the delivery aspect).

Let us understand how a CICD Pipeline works and how each phase of DevOps cycle gets mapped to various stages in a CICD pipeline. We will discuss the important tools involved at each step, in parallel.

Our task is to automate the entire software delivery process, and for this we need automation tools. Jenkins is a widely used CI tool that provides interfaces and plugins to automate the entire process, and it plays a vital role in the pipeline implementation. The image below shows the role of Jenkins in the pipeline.

Now, let us have a look at each of the phases in detail:

  • Build Phase:

Suppose we have a Git repository where the development team commits code, which needs to be compiled before execution. Git helps maintain multiple versions of the code with appropriate version tags, and tracking changes or reverting to previous versions at any moment is extremely easy. Jenkins pulls the code from Git and moves it to the commit phase, where code from every branch of the repository is picked up. Jenkins then moves it into the build phase, where the code is compiled; if it is Java code, we use tools like Maven for compilation. Jenkins is a front-end tool in which you define jobs/tasks. This whole process is called the build phase.
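As a small illustration of the build step, the sketch below assembles (but does not execute) the Maven command a CI job would typically run; the flag shown is standard Maven, while the function name and option are illustrative choices.

```python
# Sketch: assemble the shell command a build stage might run to compile a
# Java project with Maven. The command is constructed, not executed.

def maven_build_command(skip_tests=False):
    cmd = ["mvn", "clean", "package"]
    if skip_tests:
        cmd.append("-DskipTests")  # compile only; tests run in a later stage
    return cmd

print(" ".join(maven_build_command(skip_tests=True)))
# mvn clean package -DskipTests
```

In a real Jenkins job this command would be configured as a build step and executed on a build agent.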

  • Testing Phase:

Once the build phase is over, you move on to the Testing phase, where various kinds of tests are run against the committed code. One of them is unit testing, where you test individual chunks/units of the software for correctness and basic sanity. Jenkins orchestrates these test runs as well.
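To make the unit-testing idea concrete, here is a self-contained example. In a Java project this would be a JUnit test, as mentioned above; Python's built-in unittest is used here purely for illustration, and the discount function is a made-up piece of business logic.

```python
import unittest

def apply_discount(price, percent):
    """Business logic under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_basic_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically, the way a CI job would trigger it.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
run_result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all green:", run_result.wasSuccessful())
```

A CI server would run such a suite on every commit and fail the build if any test fails.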

  • Deploy Phase:

When the tests are complete, you move on to the Deploy phase, where you deploy the product to a staging or test server. To deploy, we need a staging server that replicates the production environment, e.g. one provisioned with Docker.

Docker provides a virtual environment in which we can create an entire server in a few steps and deploy the artefacts to be tested. Docker is extremely popular because you can spin up an entire cluster in a few seconds.
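The deploy step can be pictured as a pair of standard Docker CLI calls. The sketch below only assembles the commands so the logic can be checked without a Docker daemon; the image, tag, container name and port are hypothetical values.

```python
# Sketch: build the Docker CLI invocations a deploy stage might issue to
# stand up a staging container. Commands are assembled, not executed.

def docker_deploy_commands(image, tag, container, port):
    full_image = f"{image}:{tag}"
    return [
        # 1. Build the image from the Dockerfile in the current directory.
        ["docker", "build", "-t", full_image, "."],
        # 2. Run it detached, publishing the app's port to the host.
        ["docker", "run", "-d", "--name", container,
         "-p", f"{port}:{port}", full_image],
    ]

for cmd in docker_deploy_commands("myapp", "1.0.3", "myapp-staging", 8080):
    print(" ".join(cmd))
```

In a real pipeline these commands would be executed by the CI agent, with the staging container torn down after the acceptance tests pass.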

  • Auto Test Phase:

Once the code is deployed successfully, you can run another series of acceptance or sanity tests as a final check. If everything passes, the build can be promoted to production.
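A simple post-deploy sanity check might probe the staging server's health endpoint. In this sketch the HTTP call is injected as a callable so the gating logic can be exercised without a live server; the URL and helper names are hypothetical.

```python
# Sketch of a post-deploy sanity check: probe a health endpoint and decide
# whether the build may proceed to production.

def smoke_test(fetch_status, url="http://staging.example.com/health"):
    """Return True when the deployed app answers its health check with 200."""
    try:
        return fetch_status(url) == 200
    except Exception:
        return False  # an unreachable server counts as a failed check

# Stubbed probes standing in for a real HTTP client such as urllib.
print(smoke_test(lambda url: 200))  # True
print(smoke_test(lambda url: 503))  # False
```

A failed check here stops the pipeline and routes feedback to the development team, as described in the next section.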

  • Deploy to Production:

At each step, errors can be reported to the development team via email so that they can be fixed. The developers fix the issues in parallel and push the updated code to the version control system. Once they act on the feedback and commit changes, the process is repeated from the build phase.

CD can mean Continuous Delivery or Continuous Deployment, depending on the model used in the project. If approval is required to move the product from staging to production, it is Continuous Delivery; if the pipeline continues without any human intervention until the product is deployed in production, it is Continuous Deployment. Configuration management and containerization tools that help with Continuous Deployment include Puppet, Ansible, Chef, SaltStack and Docker.
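The delivery-versus-deployment distinction boils down to one decision point, which can be sketched as follows (function and parameter names are illustrative, not from any particular tool):

```python
# Sketch of the delivery-vs-deployment gate: under Continuous Delivery a
# human approval gates the production push; under Continuous Deployment
# every passing build goes straight out.

def next_action(tests_passed, continuous_deployment, approved=False):
    if not tests_passed:
        return "stop: fix and re-run the pipeline"
    if continuous_deployment or approved:
        return "deploy to production"
    return "wait for manual approval"

print(next_action(True, continuous_deployment=True))   # deploy to production
print(next_action(True, continuous_deployment=False))  # wait for manual approval
print(next_action(False, continuous_deployment=True))  # stop: fix and re-run the pipeline
```

The only difference between the two models is whether `approved` must be set by a human before the production step fires.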

  • Measure & Validate:

The pipeline continues until we get a product or application that can be deployed to the production server, where we measure and validate the code. Splunk, the ELK Stack and Nagios are some of the popular monitoring tools.

This is a sequence diagram to show the working of a typical pipeline:

PRINCIPLES OF CI/CD

The three main principles that help build a well-assembled pipeline are:

  • Segregation for Stakeholder Responsibility
  • Risk Reduction
  • Short Feedback Loop
  1. Segregation for Stakeholder Responsibility

For a successful CICD implementation, all stakeholders need to work in collaboration, taking complete ownership of their respective tasks and responsibilities and ensuring the integrity of the application.

  • Developers code the business logic and test it through unit tests.
  • Quality Engineers maintain product quality, review completed features and write end-to-end acceptance tests.
  • Business Analysts and Owners interact with actual users and create relevant use cases. They coordinate and review results from user acceptance tests to validate the functionality and create new test cases, if required, based on feedback.
  • Operations (Ops)/DevOps Engineers ensure that the product/application is available to users at the desired time. They handle scaling and other logistics so that code from developers can move smoothly to the production environment.
  2. Risk Reduction

At each stage in the CI/CD pipeline, stakeholders are responsible for reducing risk, ensuring complete quality control.

  • Developers are accountable for reducing the risk of inaccurate business logic.
  • QEs are responsible for testing user flow integrity to reduce the risk of broken flows/user cases.
  • BAs and POs are involved in user acceptance testing to reduce the risk of creating unusable/unwanted features.
  • Ops/DevOps are involved in CI/CD maintenance, deployment related work and scaling it to reduce the risk of product unavailability.
  3. Short Feedback Loop

Automation is key in CI/CD. Because the entire process is automated, feedback on newly developed features arrives far faster, and functionality is deployed to users more quickly.

CHOOSING THE RIGHT TOOLS: IMPACT ON THE PIPELINE

No single tool will satisfy the needs of every project, but with so many high-quality open-source solutions available, there is a fair possibility that you will find a combination that is a perfect fit for your requirements. Here is a glance at the tools available for each stage of the pipeline:

There are several considerations one must make while choosing the right set of tools:

  • Solutions hosted on your own machines offer a great deal of build-process flexibility, and there is local access to the artefacts.
  • Hosted solutions have no setup overhead and scale better, as no hardware setup is required to use them.
  • Another important consideration is Docker support. Although the majority of tools support Docker, some do it much better than others.
  • The user interface is another aspect. One of the main features of any good CI tool is making the build process easier, and a less complicated UI makes a lot of difference.

It is best to have a tools strategy that adheres to a common set of objectives while providing seamless collaboration and integration between tools. The purpose is to automate everything.

Developers should be able to send new and updated software to deployment and operations effortlessly.

Pipelines, if not implemented properly, can have a negative impact, as they can constrain the choice of tools to be used. We should optimize and substitute components in the pipeline as our business needs evolve.

Some of the highlights of a well-assembled pipeline are:

  • Developers can focus on code and dependencies, without worrying about runtime configuration.
  • Testers can see what tests have been automatically run, and then either automate more, investigate trouble areas, or simply spend their time on the new functionality to validate it against client requirements.
  • Operations can redeploy or roll back to previous versions if some functionality breaks inside an application, and changes are easy to track.
  • Introducing new versions or bug fixes is a seamless process.

 

CONCLUSION

It is rightly said, “It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change.” It has become essential for organizations to embrace the world of DevOps and enable frequent delivery of good-quality software through the CICD pipeline, to set themselves apart and gain a competitive edge.

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software. CICD helps large organizations become lean, agile and innovative. The single most important quality of a solid deployment pipeline is its consistency and reliability.

Continuous Delivery makes it feasible to continuously adapt software in line with user feedback, evolving trends in the industry and changing business strategies, through reliable and low-risk frequent releases.