The Machine Learning Hype Dampened by Technical Debt

There has been a recent trend of discussing the benefits of machine learning. However, despite its popularity, few large-scale systems actually employ it in production. Much of the talk surrounding machine learning is therefore about its potential rather than its actual application, which means the risks that come with using machine learning are often overlooked.

Actually building a machine learning model is not necessarily a complex process. With new tools, frameworks, and platforms constantly appearing, machine learning has become a hot buzzword, much as cloud and big data were used as selling points by seemingly every company and start-up a few years ago. In a sense, machine learning is the product of both: it is the natural progression in the evolution of analytics and data science. There is a catch, however. Machine learning is susceptible to an issue that, while not as catchy as cloud security or privacy, makes systems that seem simple to build prone to a compounding set of problems. That issue is technical debt.

The issues that stem from this are not immediately recognizable, but they threaten the reputation of machine learning as the next phase beyond basic data analysis. Software frameworks are not assembled from entirely unknown parts; what makes machine learning difficult to operate is therefore not a hardware or software issue but a system-level one. The problem with machine learning is often summarized as the black box, the fact that we cannot see what is happening inside, but ultimately the greater issue is technical debt. Along with technical debt come the code and maintenance issues that any large software project has to grapple with.

Machine learning is enticing because it is relatively fast and cheap to develop; the other side of this is that its maintenance is difficult and expensive. These systems have all the problems of traditional code plus a new set of machine-learning-specific problems. Technical debt, simply put, means that small issues compound over time and become harder to manage the longer they are left alone. Because much of this debt rests at the system level rather than in any single piece of code, it can go undetected for quite some time and is often not apparent until it is too late to tackle quickly.
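As a purely hypothetical illustration of how such debt hides at the system level, consider a feature pipeline in which one model quietly consumes another model's output. None of the names below come from the original post; this is only a minimal Python sketch of the pattern.

```python
# Hypothetical sketch of a hidden data dependency (all names are invented).
# Model B's feature builder silently consumes the score produced by Model A,
# so a "harmless" retraining of Model A changes Model B's inputs without
# anyone touching Model B's code -- a system-level issue no unit test sees.

def model_a_score(user: dict) -> float:
    """Stand-in for an upstream model; imagine it is retrained regularly."""
    return 0.3 * user["age"] / 100 + 0.7 * user["past_defaults"]

def build_features_for_model_b(user: dict) -> list:
    # Undeclared dependency: Model B treats Model A's output as just another
    # input column, with no contract on its scale, meaning, or stability.
    return [user["income"], user["tenure_months"], model_a_score(user)]

if __name__ == "__main__":
    user = {"age": 42, "past_defaults": 1, "income": 55_000, "tenure_months": 18}
    print(build_features_for_model_b(user))
```

In a sketch like this, nothing looks wrong locally; the debt only becomes visible when the upstream model changes and the downstream one silently degrades.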

Traditional technical debt management (as we often discuss here on the blog) is not enough to deal with the system-level debt incurred by machine learning systems. Data dependencies create dependency debt, feedback loops create analysis debt, and these sit alongside comparable obstacles in data testing, process management, and cultural debt. So what does that mean for machine learning technical debt? It means that awareness from the get-go is a must. Because so much of a machine learning system is devoted to the learning itself, rerouting work to pay off debt is not a simple task: with all the glue code, pipeline jungles, and experimental codepaths, shining a light on possible vulnerabilities is hard.
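As a hypothetical sketch (none of these flags or functions appear in the original post, and the model interface is assumed), the Python snippet below shows how experimental codepaths and glue code tend to pile up around a model, making it hard to tell which branches are still live and therefore where the vulnerabilities are.

```python
# Hypothetical glue code around a model, with experimental codepaths that
# nobody dares to delete. Each flag doubles the number of paths to test.

USE_NEW_TOKENIZER = False   # experiment from last quarter, never cleaned up
CLIP_OUTLIERS = True        # added during an incident, original owner has left
LEGACY_SCORE_SCALE = True   # kept "just in case" a downstream job needs it

def predict(raw_record: dict, model) -> float:
    text = raw_record["text"]
    if USE_NEW_TOKENIZER:
        tokens = text.lower().split()   # experimental path
    else:
        tokens = text.split()           # original path, still the default

    features = [len(tokens), len(text)]
    if CLIP_OUTLIERS:
        features = [min(f, 1_000) for f in features]

    score = model.score(features)       # assumed model interface
    return score * 100 if LEGACY_SCORE_SCALE else score

class DummyModel:
    """Stand-in for a trained model; a real system would load an artifact."""
    def score(self, features):
        return sum(features) / (1 + len(features))

if __name__ == "__main__":
    print(predict({"text": "machine learning systems accrue debt"}, DummyModel()))
```

Every flag here is cheap to add and expensive to remove, which is exactly how the compounding described above takes hold.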

All systems are ultimately plagued by similar issues, whether pipelining, job scheduling, or dealing with multiple versions of code. The particular problem machine learning has to grapple with is that there are not many machine learning systems running in full production (unlike HPC sites), so there are few ready-made solutions for this specific problem set. Machine learning is built on many moving parts and glue code, so it is extremely easy to dig yourself deep into debt. As more organizations move past classic big data tools and look to machine learning for intelligent analysis, they should keep an eye out for technical debt.

As with any organization looking to deal with technical debt, this may mean a shift in culture: prioritizing and rewarding efforts to pay back technical debt is an important step for the long-term success of machine learning.

To read the full post, visit here.

Filed in: Technical Debt