CI / CD & DevOps Introduction
Software today is more complex than it has ever been. New technologies emerge rapidly and as applications evolve to utilize them, gaps occur. Some gaps result in “technical debt”, an industry term to describe development practices where ideal craftsmanship has not been achieved and additional work needs to be done.
Other gaps result in flaws that lead to security vulnerabilities or performance issues in the application. These can critically impact the safety of the data that the software is leveraging or the user experience of the application. Ultimately, a breach or performance failure during a key time of operation can lead to the destruction of the organization that developed it. Examples of this are described in Nine Digit Defects by Bill Curtis and Lev Lesokhin.
Additionally, the scale of an application continues to increase. The three tier applications (UI, Services, Database) that were once common are becoming micro-service infrastructures with numerous shared components. Being able to validate a single project no longer ensures that the solution is well written or secure. It also means that if a defect is introduced in a shared component or central service, it spreads throughout the application.
A long-standing remediation to these potential failures has been code security and quality scanning during development and software delivery. While the technique has proven to be effective, the process is often manual, time-consuming and done just before the software is released. This often causes a significant delay when defects are found.
The Agile movement begun by Ken Schwaber and Jeff Sutherland has been adopted in many organizations and is becoming common practice. One core tenet of the Agile manifesto is cross-functional teams. A cross-functional team ensures the team itself can address every specialty needed to deliver a working software increment. While this does not preclude the inclusion of specialists for expert advice, the goal is to have the team handle the quality and security aspects of software delivery and build them into their basic Definition of Done (DoD). Treating these as baseline quality criteria naturally leads the team to incorporate them into their software delivery pipeline.
Continuous Delivery (CD) Pipeline
In 2008, Andrew Clay Shafer and Patrick Debois discussed Agile Infrastructure, which evolved into the concept of DevOps. DevOps is the integration of development and operations into shared ownership of everything needed to develop, deliver, and operate software. The DevOps movement is predicated on the Three Ways introduced by Gene Kim, Kevin Behr and George Spafford in “The Phoenix Project”.
The First Way is incorporating Systems Thinking into software delivery. While the concept has many details, the basic premise is to identify the bottlenecks in software delivery and optimize or eliminate them. One of these bottlenecks is software analysis and security checks.
Figure 1 illustrates the development path and the amount of time it takes to resolve potential security or quality issues in a traditional development process and the overall cost savings when those same evaluations are automated and moved earlier in the process.
Figure 1: Continuous Delivery Pipeline Comparison
The Second Way is to amplify feedback loops and move that feedback earlier in the process. The implication for software delivery is obvious: security and quality scanning needs to occur as early in the process as possible so it does not become an afterthought and thus a bottleneck.
Teams often adopt a continuous delivery pipeline to assist with frequent software delivery. This process consists of source control, a continuous integration server to build and test source code, an artifact package repository to hold the tested artifacts and a deployment infrastructure to deploy and configure the target software environment.
The effectiveness of the pipeline is rooted in the fact that the source code is built and tested once, then deployed and configured into each target environment. Many groups choose to perform security analysis during the initial build of the software. This is possible, but presents two challenges. First, analysis takes time. Developers want feedback as quickly as possible, so a typical 5-10-minute build can feel far too long if a scan adds another 5-10 minutes.
Second, most builds represent only a portion of the overall solution. Given the industry shift towards microservice architectures, a build could be a single service that is one of several within an application. Once combined with a database or mainframe back end and a web tier, the entire solution is far more complex than the single service. For this reason, the recommended approach is to integrate contextual software analysis into the deployment pipeline.
Contextual software analysis understands the application at a global level, with the added context of intercomponent dependencies and data flows. Using semantic language analyzers, this analysis decomposes the software and develops a model of the entire system. Contextual software analysis doesn’t just find vulnerabilities in the system; it also uncovers obstacles that prevent system developers from optimally achieving the system’s nonfunctional requirements: performance bottlenecks, scalability problems, and opportunities to improve robustness. You can read more about contextual software analysis in this Cutter Consortium paper.
Integrating contextual software analysis with deployment allows the deployment tooling to inform software analysis tools of all the components involved in the application and what versions of those components need to be validated. Additionally, the analysis can be done in parallel with other longer running test steps in this environment such as integration testing. If the automation is run early (ideally in the first deployed environment), the results of the scan could prevent a build from progressing through the pipeline.
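As a simplified sketch of this integration, the deployment step can assemble a manifest of every component and version present in the environment and hand it to the analysis tool. The manifest shape and the idea of POSTing it to an intake endpoint are illustrative assumptions here, not a specific vendor API:

```python
import json

def build_manifest(release, components):
    """Describe every component/version deployed in this environment.

    `components` maps component name -> deployed version, as inventoried
    by the deployment tooling.
    """
    return {
        "release": release,
        "components": [
            {"name": name, "version": version}
            for name, version in sorted(components.items())
        ],
    }

# Hypothetical inventory produced by the deployment server.
deployed = {
    "web-ui": "4.2.0",
    "order-service": "4.2.0",
    "billing-service": "3.9.1",
}

manifest = build_manifest("4.2.0", deployed)
# This JSON document is what would be submitted to the analysis
# tool so it can validate exactly the versions that were deployed.
print(json.dumps(manifest, indent=2))
```

Because the deployment server already knows the exact versions it installed, the manifest ties the analysis snapshot to the release rather than to any single component's build.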
Figure 2: The Deployment Pipeline Overview
Figure 2 illustrates how the testing and validation responsibilities are split between the CI build and deployment processes. The CI Build stage performs all the steps in the process that do not require significant time to accomplish. This allows a development team member to receive rapid feedback on the initial state of the build. Once this completes, the code is deployed to an automated test environment and integration tests are run to ensure basic components work together correctly. Either simultaneously or afterward, security analysis is triggered in the contextual software analysis suite. The results of these steps determine whether the release can be promoted to a manual testing / acceptance environment. Finally, after any additional validation the organization requires, the release is deployed to production.
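The split of responsibilities described above can be sketched as a simple orchestration: fast checks run in the CI stage, while the slower integration tests and contextual analysis run in parallel after the first deployment and jointly gate promotion. Every function name here is an illustrative placeholder, not real tooling:

```python
from concurrent.futures import ThreadPoolExecutor

def ci_build(release):
    """Fast-feedback stage: compile, unit test, package (placeholder)."""
    return True

def integration_tests(release):
    """Long-running check in the automated test environment (placeholder)."""
    return True

def contextual_analysis(release):
    """Long-running contextual software analysis (placeholder)."""
    return True

def pipeline(release):
    if not ci_build(release):
        return "failed: CI build"
    # After deploying to the automated test environment, run the two
    # slow validations in parallel so neither delays the other.
    with ThreadPoolExecutor() as pool:
        tests = pool.submit(integration_tests, release)
        analysis = pool.submit(contextual_analysis, release)
        if not (tests.result() and analysis.result()):
            return "failed: test environment gate"
    return "promoted to acceptance"

print(pipeline("4.2.0"))  # → promoted to acceptance
```

Running the analysis concurrently with integration testing is what keeps the scan from adding wall-clock time to the pipeline, which is the core argument for attaching it to deployment rather than to the build.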
Continuous Deployment (CD) Process Details
To make this overall process successful, there are some key prerequisites that need to be available at the various stages of the delivery pipeline. These are summarized in Table 1, broken out by pipeline stage. In the following example, CAST Application Intelligence Platform is integrated into the DevOps pipeline to provide the automated contextual software analysis.
Table 1: CI / CD Deployment Prerequisites with Software Intelligence
Once these prerequisites are achieved, the process flow is as follows:
- CI Build: During the build, a copy of the component’s source code is packaged and uploaded using the same release number as the build. This ensures that the component version can be easily identified during deployment for release assembly. Following the steps described in the “Jenkins Whitepaper” and leveraging CAST Workstation allows this to be done in an automated fashion. This should be the last step prior to triggering a deployment into your automated test environment.
- Deployment Process: Once the initial release and deployment are complete and initial integration tests have run, the deployment server inventories the components used, creates a code analysis snapshot matching the deployment version number, and submits it for analysis. Once the analysis is complete, the deployment step retrieves the metrics for both the current and the previous release, then validates that the release meets the desired metrics and that the incremental change does not negatively impact the overall system.
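A minimal version of that final validation step could compare the new release’s metrics against both an absolute floor and the previous release’s scores. The metric names, scales, and thresholds below are illustrative assumptions, not CAST-specific values:

```python
def passes_quality_gate(current, previous, floor=3.0, max_regression=0.05):
    """Gate a release on its analysis metrics.

    `current` and `previous` map metric name -> score (higher is better,
    assumed here to be on a roughly 1-4 scale). The gate fails if any
    metric drops below `floor`, or regresses more than `max_regression`
    (5% by default) relative to the previous release.
    """
    for metric, score in current.items():
        if score < floor:
            return False, f"{metric} below absolute floor"
        baseline = previous.get(metric)
        if baseline is not None and score < baseline * (1 - max_regression):
            return False, f"{metric} regressed vs previous release"
    return True, "all metrics within tolerance"

previous_release = {"security": 3.6, "robustness": 3.4}
current_release = {"security": 3.5, "robustness": 3.5}

ok, reason = passes_quality_gate(current_release, previous_release)
print(ok, reason)  # → True all metrics within tolerance
```

Checking the delta against the previous release, not just an absolute threshold, is what lets the gate catch an incremental change that degrades the overall system even while scores remain nominally acceptable.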
Continuous Delivery and Software Intelligence Outcomes
This methodology has several positive outcomes. First, your software remains in compliance with the development security standards required by industry regulations and many organizational policies. While contextual analysis remains an important step in the process, it becomes a routine compliance check rather than a frantic, last-minute focus before a software release.
Second, it has proven to result in increased software delivery performance for the team. The 2017 State of DevOps report noted that teams who integrate security and quality analysis in their delivery pipelines spend 50% less time remediating security issues than those who don’t. They also noted that those organizations spend 21% less time on unplanned work or rework and 44% more time on new work.
Finally, organizations leveraging this practice will minimize interruptions in the costly stages of development such as manual testing, ultimately resulting in a reduction of technical debt. Discovering issues with software early allows quality assurance team members to focus on the functionality of the application rather than security vulnerabilities and routine testing caused by preventable errors.
- Nine Digit Defects, Bill Curtis and Lev Lesokhin, InfoWorld http://www.infoworld.com/blog/nine-digit-defects
- Mitigate Business Risk and Unlock Software Potential with Contextual Software Analysis, Cutter Consortium, Peter Kaminski, https://www.cutter.com/article/mitigating-business-risk-unlock-software-potential-495881
- The DevOps Handbook, Gene Kim, Jez Humble, Patrick Debois, John Willis, IT Revolution Press, 2016
- 2017 State of DevOps Report – Puppet Labs and DORA, https://puppet.com/resources/whitepaper/state-of-devops-report
Erik Oltmans, an Associate Partner from EY, Netherlands, spoke at the Software Intelligence Forum on how the consulting behemoth uses Software Intelligence in its Transaction Advisory services.
Erik describes the changing landscape of M&A. Besides the financial and commercial aspects, PE firms now place equal value on technical assessments, especially for targets with significant software assets. He goes on to detail how CAST Highlight makes these assessments possible with limited access to the target’s systems, customized quality metrics, and the liability implications of open source components - all three of which are critical for M&A due diligence.