Building Your Software House with Big Data, Reliability, and Technical Debt

Software is like a house. How so?

According to Lev Lesokhin, senior vice president of strategy and analytics at CAST, a house has to be continuously maintained, or else you may start to notice, as its paint starts to peel, that its foundation is not as stable as it once was. The analogy fits big data environments especially well: they are inherently distributed systems, and the probability of issues such as partial failure or unexpected latency only increases with scale.
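To make that scale effect concrete, here is a minimal illustration (not from the original article, and the per-node failure rate below is a made-up placeholder): if each node in a cluster fails independently with probability p in a given window, the chance that at least one node fails is 1 - (1 - p)^n, which climbs quickly as the cluster grows.

```python
# Minimal sketch: why partial-failure risk grows with scale.
# If each node fails independently with probability p in a window,
# the chance that at least one node fails is 1 - (1 - p)^n.

def chance_of_any_failure(p_node: float, n_nodes: int) -> float:
    """Probability that at least one of n independent nodes fails."""
    return 1 - (1 - p_node) ** n_nodes

for n in (10, 100, 1000):
    print(f"{n:>5} nodes: {chance_of_any_failure(0.001, n):.1%}")
# Output:    10 nodes: 1.0%   100 nodes: 9.5%   1000 nodes: 63.2%
```

Even a node that is 99.9% reliable becomes a near-certain source of trouble once you run a thousand of them, which is why maintenance cannot be an afterthought in these systems.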

Constructing a scalable big data system is a formidable software architecture challenge for software engineers and program managers. And while there is a trend for development teams to focus on analytics that go from data to decision, there also needs to be a focus on quality analytics that look directly at the source code. Without that, big data management can quickly become problematic.

Therefore, IT managers should ensure that a quarterly clean-up is on their team's checklist to head off future issues. This is especially prudent if you don't want to end up spending hours answering help desk tickets and customer complaints (hours that could otherwise be spent developing new features).

To keep this from becoming your IT team's reality, and stalling innovation along with it, there are two factors you need to keep an eye on: reliability and technical debt.

U.S. organizations are losing up to $26 billion a year in revenue due to downtime. Much of that loss can be avoided by implementing routine application benchmark testing.

For example, consider the recent discovery at the Washington State Department of Corrections, where more than 3,200 prisoners were accidentally released early over a span of 12 years due to a software glitch. Some officials were aware of the issue but did not handle it adequately, allowing the system to break down and potentially dangerous criminals to go free. It is therefore imperative that IT managers care not only for the output of data in their systems, but also for their structural quality.

By establishing a reliability benchmark, managers gain visibility into their systems' stability and data integrity before it is too late. Uncovering vulnerabilities in critical systems exposes issues that could disrupt services and damage customer satisfaction and the organization's reputation; a reliability benchmark gives you exactly that early warning.
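As an illustration, here is a minimal sketch of one simple form a reliability benchmark could take: probing a service endpoint on a schedule and tracking availability and tail latency over time. The endpoint URL, sample count, and availability target are hypothetical placeholders, not anything prescribed by the original article.

```python
# Sketch of a basic reliability benchmark: repeatedly probe an endpoint,
# then report availability and 95th-percentile latency.

import statistics
import time
import urllib.request

def run_benchmark(url: str, samples: int = 50, timeout: float = 2.0) -> dict:
    latencies, failures = [], 0
    for _ in range(samples):
        start = time.monotonic()
        try:
            urllib.request.urlopen(url, timeout=timeout)
            latencies.append(time.monotonic() - start)
        except Exception:
            failures += 1
    return {
        "availability": (samples - failures) / samples,
        "p95_latency_s": statistics.quantiles(latencies, n=20)[-1]
                         if len(latencies) >= 2 else None,
    }

# Example usage, with a hypothetical internal health endpoint and target:
# result = run_benchmark("https://example.internal/health")
# if result["availability"] < 0.999:
#     print("Below 99.9% availability target:", result)
```

Tracking these numbers release over release is what turns ad hoc monitoring into a benchmark: regressions in stability show up as trends, not as help desk tickets.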

Besides measuring reliability, it is highly important for IT managers to measure the technical debt present in their systems. Technical debt can be defined simply as the accumulated cost and effort needed to fix problems that remain in code after an application's release. The average-sized application carries around $1 million of technical debt. According to Deloitte, CIOs are starting to turn much of their focus toward handling technical debt by building business cases for core renewal projects, preventing business disruption, and prioritizing maintenance work.

However, despite this new awareness, they still struggle to estimate the actual level of technical debt they carry, and in turn struggle to build the case for paying it back. In big data environments, technical debt is made worse by the urgency that often accompanies trying to make sense of scattered information sources. To deal with this, IT managers should enlist their teams to adopt structural quality tools that measure the cost of remediating and improving core systems, taking into account both the code quality and the structural quality of the organization's software.
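To show the shape of the estimate such tools produce, here is a simplified sketch: count rule violations by severity, assume only a fraction will actually be fixed, and multiply by effort and labor cost. All the numbers and rates below are hypothetical placeholders, not CAST's actual model.

```python
# Simplified technical-debt estimate: violations x fix rate x effort x cost.
# Every figure here is a hypothetical placeholder for illustration only.

VIOLATIONS = {"high": 120, "medium": 540, "low": 2300}  # from static analysis
FIX_RATE   = {"high": 0.5, "medium": 0.25, "low": 0.1}  # share expected to be fixed
HOURS_EACH = {"high": 2.5, "medium": 1.0, "low": 0.5}   # effort per fix
COST_PER_HOUR = 75.0                                    # blended developer rate

def technical_debt(violations, fix_rate, hours_each, rate) -> float:
    """Dollar estimate of the remediation effort carried by the codebase."""
    return sum(
        violations[sev] * fix_rate[sev] * hours_each[sev] * rate
        for sev in violations
    )

debt = technical_debt(VIOLATIONS, FIX_RATE, HOURS_EACH, COST_PER_HOUR)
print(f"Estimated technical debt: ${debt:,.0f}")  # $30,000 with these inputs
```

Even a rough model like this gives managers a defensible number to put in front of the business when arguing for remediation work.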

Structural quality metrics help identify code defects that pose a risk to the business when they are not fixed before a system's release; the cost of remediating those defects gets tacked onto future releases. Code quality measures, meanwhile, account for coding practices that make code more complex and harder to change in the future, which is where technical debt manifests itself.
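One example of such a code quality measure is cyclomatic complexity. Below is a rough sketch (my own illustration, not CAST's analyzer) that approximates it by counting branch points in a Python function's syntax tree; real tools use far more sophisticated models.

```python
# Crude cyclomatic-complexity proxy: 1 + number of branch points in the AST.
# Real code-quality tools model many more constructs than this.

import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def approx_complexity(source: str) -> int:
    """Return 1 plus the number of branching constructs in the source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(x):
        if x % 2:
            return "odd"
    return "even or zero"
"""
print(approx_complexity(sample))  # 4: one base path plus three branch points
```

Functions whose score keeps creeping up from release to release are exactly the ones that become expensive to change later, which is technical debt in the code-quality sense described above.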

An adequate estimate of technical debt allows businesses to plan and allocate resources for future projects that pay it down. Business success relies ever more on the 'software house' that IT builds, so visibility into these systems is key to ensuring they are stable and won't pose any serious risk in the future.

To read the full post, visit here.
