What looks like a single app on a phone conceals deep software complexity. There is nothing singular about it. Each tap in favourite apps like Uber, Twitter or Netflix can activate one of as many as 700 individual computer programs: microservices.
As businesses become increasingly online-only, application code is fielding an unprecedented number of requests. Microservices enable the busiest applications to function effectively by breaking business processes down so they can scale to meet demand from thousands of users.
Microservices also make code easier to maintain, creating savings and reducing workloads once the re-architecting has been accomplished. If one microservice crashes, it doesn't bring the whole application down with it, as a monolithic codebase would. This fragmentation of services reduces IT operational complexity, cutting the time it takes to find and fix faults as they occur.
So, are microservices the perfect solution to the problem of user demand outstripping software supply? Not quite.
Divide & cripple?
Unfortunately, it's too easy for an uninitiated team dabbling in microservices to create a Frankenstein's monster by scaling up prematurely. Inefficient or unstable code will still be inefficient and unstable. Poorly engineered code will crash just as often at peak demand, whether it runs as a monolithic application or as a thousand parallel microservices.
Monolithic applications are easier for business users to understand. These are older, proven applications, thoroughly tested through years of live use, which makes them structurally resilient. It's also harder for IT professionals to orchestrate, control and fix a thousand microservices than a single application. A high-quality monolithic application will outperform an army of floundering, poorly engineered microservices.
In either case, though, applying Software Intelligence to measure code quality, efficiency and reliability before scaling up the quantity of code is just good business sense.
Microservices, though, are not going away, and they demand a different development methodology (mostly agile) and the adoption of new digital architectures in place of legacy ones. Even the most deft object-oriented programmers still create code dependencies within an application, and old habits die hard. Unless there is a clean break between code along functional lines, perhaps even a full replatforming, re-architecting for microservices adds complexity instead of reducing it.
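The kind of hidden dependency that resists a clean break can be sketched in a few lines. This is a hypothetical illustration (the names OrderService and InventoryDB are invented for the example, not from any real system): order logic that reaches straight into another business function's data layer cannot be carved out as an independent microservice until that coupling is broken.

```python
# Hypothetical sketch of a code dependency that blocks clean service extraction.
# All class names here are illustrative assumptions.

class InventoryDB:
    """Shared in-process store used directly by multiple business functions."""
    def __init__(self):
        self.stock = {"widget": 5}

    def reserve(self, item):
        if self.stock.get(item, 0) > 0:
            self.stock[item] -= 1
            return True
        return False

class OrderService:
    # Tight coupling: order logic depends directly on inventory's internals.
    # Splitting these into separate microservices without first breaking this
    # dependency just moves the coupling onto the network.
    def __init__(self, inventory: InventoryDB):
        self.inventory = inventory

    def place_order(self, item):
        return "confirmed" if self.inventory.reserve(item) else "rejected"

db = InventoryDB()
orders = OrderService(db)
print(orders.place_order("widget"))  # confirmed while stock remains
```

In a monolith this coupling is merely untidy; in a microservices migration it becomes a chatty, fragile network dependency unless the functional boundary is redrawn first.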
This issue, known as under-fragmentation, is the awkward halfway house between microservices and a monolithic codebase, slowing down code execution and even causing crashes if the added complexity creates unintended consequences.
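Under-fragmentation can be sketched as services split in name only. In this hypothetical example (the service names are invented), each "service" synchronously calls the next with no isolation or fallback, so one outage cascades through the whole chain, combining microservice overhead with monolithic fragility:

```python
# Hypothetical sketch of under-fragmentation: three "services" that are
# still one tightly coupled call chain. Names are illustrative assumptions.

def pricing_service(item):
    raise TimeoutError("pricing backend unavailable")  # simulated outage

def catalog_service(item):
    # Still directly dependent on pricing: no fallback, no isolation.
    return {"item": item, "price": pricing_service(item)}

def checkout_service(item):
    return catalog_service(item)  # the failure propagates all the way up

try:
    checkout_service("widget")
except TimeoutError as e:
    print(f"whole request chain failed: {e}")
```

A genuinely fragmented design would isolate the pricing outage behind a timeout, cache or default, so checkout could degrade gracefully instead of failing outright.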
Businesses can avoid under-fragmented microservices by clearly separating the teams and business functions operating different features where possible. Putting distance between teams does the same for the microservices they manage. But how can IT leaders know they are moving the dial in a positive direction?
It pays to look at the big picture of software quality. Essential to measuring the progress and performance of microservices is in-depth software analysis, which provides Software Intelligence to impartially measure key aspects of code such as complexity, stability and security.
Quality code is what matters most. The first step in getting there is knowing the current status of Software Intelligence in your organization. Use Software Intelligence to ensure code is as elegant and easy to understand as possible in a monolithic application before modernizing, rather than replicating its structural flaws across microservices. Otherwise, one instance of poor code and one breakage will be replaced by hundreds or thousands of potential red lights, all creating problems at once.
Microservices are key to the application modernization trend: they make code more resilient. But fragmenting and duplicating already-inefficient code just gives you more of the same. So, embrace the transformation with Software Intelligence to gain insight into how an application segments along functional and technology lines, and track your performance as you modernize with microservices on new digital platforms and architectures. Think small: ensure an app's code is already running correctly and efficiently, then go big.
Erik Oltmans, an Associate Partner from EY, Netherlands, spoke at the Software Intelligence Forum on how the consulting behemoth uses Software Intelligence in its Transaction Advisory services.
Erik describes the changing landscape of M&A. Besides the financial and commercial aspects, PE firms now place equal value on technical assessments, especially for targets with significant software assets. He goes on to detail how CAST Highlight makes these assessments possible with limited access to the target's systems, customized quality metrics, and analysis of the liability implications of open source components, all three of which are critical for M&A due diligence.