There has been a recent trend of discussing the benefits of machine learning; however, despite its popularity, few large-scale systems actually employ it in production. Much of the talk surrounding machine learning is thus about its potential rather than its actual application, which means that the risks that come with using machine learning are often overlooked.
The actual building of a machine learning system is not necessarily a complex process. With new tools, frameworks, and platforms constantly developing, machine learning has become a buzzword, much as cloud and big data were used as selling points by seemingly every company and start-up a few years ago. Machine learning is the product of both: it is the natural progression in the evolution of analytics and data science. However, there is a catch. Machine learning is susceptible to an issue that, while not as catchy as cloud security or privacy, causes systems that seem simple to build to become plagued by a compounding set of issues. This is technical debt.
The issues that stem from this are not immediately recognizable, but they threaten the reputation of machine learning as the next phase beyond basic data analysis. Software frameworks are not assembled from entirely unknown parts; what makes machine learning difficult to use is therefore not a hardware or software issue but a system-level issue. The problem with machine learning is often summarized as the black box, that we can't see what is occurring within, but ultimately the greater issue is technical debt. Along with technical debt come the code and maintenance issues that any large software project has to grapple with.
Developing machine learning is enticing because it is relatively fast and cheap, but the flip side is that maintenance is difficult and expensive. These systems have all the problems of traditional code, plus a new set of machine-learning-specific problems. Technical debt, simply put, means that small issues compound over time and become harder to manage the longer they are left alone. Because this debt rests at the system level, it can go undetected for quite some time and is difficult to tackle quickly; oftentimes it is not apparent until it is too late.
Traditional technical debt management (as we often discuss here on the blog) is not enough to deal with the system-level debt incurred by machine learning systems. Data dependencies create dependency debt, and feedback loops create analysis debt; these forms of debt pose obstacles comparable to those of data testing, process management, and cultural debt. So what does that mean for machine learning technical debt? It means that awareness from the get-go is a must. So much of machine learning is devoted to intelligence that rerouting work to pay off debt is not a simple task: with all the glue code, pipeline jungles, and experimental codepaths, shining a light on possible vulnerabilities is hard.
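To make the feedback-loop form of analysis debt concrete, here is a minimal, hypothetical sketch (every name in it is invented for illustration, not drawn from any real system): a ranking model whose next round of training data comes from clicks on its own rankings, so an initial bias reinforces itself.

```python
# Hypothetical sketch of a direct feedback loop: the model's own output
# shapes the data it is later retrained on, so early bias self-reinforces.

def rank(items, scores):
    """Order items by the model's current scores, highest first."""
    return sorted(items, key=lambda i: scores[i], reverse=True)

def simulate_clicks(ranked):
    """Users mostly click the top result regardless of true quality."""
    return {item: (1.0 if pos == 0 else 0.1) for pos, item in enumerate(ranked)}

def retrain(scores, clicks, lr=0.5):
    """Naive update: nudge each score toward its observed click rate."""
    return {i: s + lr * (clicks[i] - s) for i, s in scores.items()}

items = ["a", "b", "c"]
scores = {"a": 0.6, "b": 0.5, "c": 0.4}  # "a" starts only slightly ahead
for _ in range(5):
    ranked = rank(items, scores)
    clicks = simulate_clicks(ranked)
    scores = retrain(scores, clicks)

# After a few rounds "a" dominates purely because it was ranked first,
# not because it was better -- the loop has amplified the initial bias.
```

The point of the sketch is that nothing in any single function is buggy; the debt only becomes visible at the system level, once the training data depends on earlier predictions.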
All systems are ultimately plagued with similar issues, whether pipelining, job scheduling, or dealing with multiple versions of code. But the particular problem machine learning has to grapple with is that there are not a large number of machine learning systems running in full production (unlike HPC sites), and so there are no ready-made solutions for this specific problem set. Machine learning depends on many moving parts and glue code, so it is extremely easy to dig yourself deep into debt. As more and more organizations move past classic big data tools and look to machine learning for new, intelligent analysis, they should keep an eye out for technical debt.
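One common way to contain the glue-code debt described above is to isolate each third-party component behind a narrow adapter interface, so vendor-specific glue lives in one replaceable place. The sketch below is a hypothetical illustration of that pattern (all class and method names are invented, and the vendor model is faked for the example):

```python
# Hypothetical sketch: confining glue code behind a narrow adapter so the
# rest of the pipeline never touches a vendor library's API directly.

class Model:
    """The one interface the rest of the pipeline is allowed to depend on."""
    def predict(self, features):
        raise NotImplementedError

class VendorModelAdapter(Model):
    """All vendor-specific glue lives here, in one replaceable place."""
    def __init__(self, vendor_model):
        self._vendor = vendor_model

    def predict(self, features):
        # Translate our feature dict into the vendor's expected row format.
        row = [features[k] for k in sorted(features)]
        return self._vendor.score(row)

class FakeVendorModel:
    """Stand-in for a third-party package, purely for the sketch."""
    def score(self, row):
        return sum(row) / len(row)

model = VendorModelAdapter(FakeVendorModel())
result = model.predict({"x": 0.2, "y": 0.8})
```

Swapping vendors then means rewriting one adapter rather than hunting down glue scattered across the system, which is exactly the kind of debt repayment that is cheap early and expensive late.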
For machine learning, as for any organization looking to deal with technical debt, this could mean a shift in culture; prioritizing and rewarding efforts to pay back technical debt is an important step for the long-term success of machine learning.
To read the full post, visit here.
Erik Oltmans, an Associate Partner from EY, Netherlands, spoke at the Software Intelligence Forum on how the consulting behemoth uses Software Intelligence in its Transaction Advisory services.
Erik describes the changing landscape of M&A. Besides the financial and commercial aspects, PE firms now equally value technical assessments, especially for targets with significant software assets. He goes on to detail how CAST Highlight makes these assessments possible with limited access to the target's systems, customized quality metrics, and an analysis of the liability implications of open source components, all three of which are critical for M&A due diligence.