Application modernization has always been a priority in the IT world. Whatever the driver, applications must be regularly adapted to the next modern environment, both to prevent business disruption and to enable new ways of working. We saw this happen nearly 20 years ago with Y2K. In more recent years, however, mainframe replacement and cost reduction goals have driven most of the transformation narrative. In a nutshell, we are speaking about legacy modernization.
Legacy Modernization Tooling Must Enable System-Wide Transparency
I have contributed to several system migration projects, and each time I saw a similar scenario: tooling was deployed by a dedicated team and set up from a project perspective only. The goals were multiple. It was necessary to analyze the system or application to identify the project boundary, and to organize the project into batches while identifying software risk across the perimeter. Creating batches requires teams to know how components are linked together and which components are needed to build correct test plans. Identifying software risk means knowing which components are incompatible with the target, which have poor quality, and which are classified as dangerous.
Most of the time this tooling was very effective, but at the end of the project it was often abandoned because the team in charge left and the benefit for day-to-day work was not always well perceived. This is a shame, because teams can and should continue to use these tools throughout the application lifecycle and on initiatives beyond legacy application modernization.
Why Legacy Modernization Tools Should Be Used Across Application Portfolios
Tooling used in legacy modernization projects can also benefit the day-to-day work of development teams. In maintenance, it is important to evaluate the impact of an incident on the application and answer questions like: “Which component failed?” and “Where can the problem propagate?” Maintenance teams must be able to understand how the system is organized and whether there are at-risk components that must be fixed or replaced.
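The propagation question above boils down to a walk over the application's dependency graph: a failure in one component can affect everything that depends on it, directly or transitively. Here is a minimal sketch of that idea, assuming a hypothetical call graph; the component names and the `impact_set` helper are illustrative, not taken from any real analysis tool:

```python
from collections import deque

def impact_set(dependencies, failed):
    """Return every component a failure in `failed` can propagate to.

    `dependencies` maps each component to the components it calls;
    impact flows in the reverse direction, toward the callers.
    """
    # Invert the graph: for each component, record who depends on it.
    dependents = {}
    for caller, callees in dependencies.items():
        for callee in callees:
            dependents.setdefault(callee, set()).add(caller)

    # Breadth-first walk from the failed component up through its callers.
    impacted, queue = set(), deque([failed])
    while queue:
        component = queue.popleft()
        for caller in dependents.get(component, ()):
            if caller not in impacted:
                impacted.add(caller)
                queue.append(caller)
    return impacted

# Hypothetical call graph: a batch job calling two programs that share a table.
deps = {
    "BATCH-JOB": ["PAY-CALC", "REPORT"],
    "PAY-CALC":  ["TAX-TABLE"],
    "REPORT":    ["TAX-TABLE"],
}
print(sorted(impact_set(deps, "TAX-TABLE")))  # ['BATCH-JOB', 'PAY-CALC', 'REPORT']
```

A real Software Intelligence platform builds this graph automatically from source analysis, but the traversal logic is the same: invert the dependencies, then walk outward from the incident.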
The same modernization tools are useful for new development as well. Developers must know how the code they write will integrate with the existing software, and they must be sure they do not introduce new software risk into the applications.
I recently met with a company to explain how application analysis tools provide value, and they voiced two primary concerns:
- Their applications were mainly implemented in COBOL and .NET, and only a few people knew how they were structured. Existing documentation was outdated, and it was very challenging for new team members to ramp up quickly.
- There was quite a bit of suspicious code across the application portfolio. They wanted to update and modernize these codebases, but they lacked an understanding of how code changes would impact the overall application. They wanted to use modernization tools but were unable to estimate the cost or time to modernize. Not to mention, they didn’t want code changes to break the app.
Software Intelligence Aids Legacy Modernization Efforts
Software Intelligence helps teams perform analysis at both the system and application level. It’s important to look for a solution that provides an overview of the application and identifies software risk from a global point of view. Once the “heat map” of risky components is identified, teams can drill down into specific violations and manage action plans to remediate critical issues.
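As a rough illustration of how such a heat map can be ranked, the sketch below scores components by weighted violation counts and sorts them hottest-first. The component names, counts, and severity weights are all hypothetical; a real tool would supply its own scoring model:

```python
# Hypothetical scan results: violation counts per component, by severity.
violations = {
    "CUST-UPDATE": {"critical": 3, "major": 5, "minor": 12},
    "INV-REPORT":  {"critical": 0, "major": 2, "minor": 30},
    "PAY-CALC":    {"critical": 1, "major": 8, "minor": 4},
}

# Assumed severity weights (illustrative only).
WEIGHTS = {"critical": 10, "major": 3, "minor": 1}

def risk_score(counts):
    """Collapse a component's violation counts into a single risk score."""
    return sum(WEIGHTS[severity] * n for severity, n in counts.items())

# Rank components hottest-first so remediation starts where risk is highest.
heat_map = sorted(violations, key=lambda c: risk_score(violations[c]), reverse=True)
for name in heat_map:
    print(name, risk_score(violations[name]))
```

Ranking this way keeps the drill-down focused: teams start action plans at the top of the list instead of chasing the most numerous (but least severe) findings.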
Another benefit of Software Intelligence is that it sheds light on how systems and applications are structured and how components are connected. These capabilities are valuable for maintenance, new development and legacy modernization efforts. Not to mention, this insight will help teams prolong the life of modern applications and slow their evolution into the legacy applications of tomorrow.