One defining trend in the evolution of technology is its integration into every imaginable product. Automobiles, for example, were once purely mechanical devices but are now highly complex technology platforms: software controls nearly every function, including engine control, braking, and driver assistance. Recent studies suggest that four out of five new cars will ship with an internet connection. All of this new technology is likely to raise concerns about software quality in these safety-critical machines.
The shift toward safety-critical software in cars is visible in new features such as "auto SOS," which calls for assistance after an accident, and in new efforts to implement automated emergency braking using cameras that were previously used only for parking assistance, turning a consumer-convenience component into part of a safety-critical system.
In any software application, new features are usually built on top of existing legacy code. The problem with building on existing systems is that they usually carry a large amount of technical debt. This debt typically accumulates through continual development without adequate quality controls in place, usually under business pressure to release features to market as quickly as possible.
One way to reduce technical debt is to go into the legacy code and refactor it. But refactoring poses its own problems: developers are often hesitant to touch legacy code for fear of breaking existing functionality in the process. One of the biggest hurdles is that there is often insufficient testing in place, which means the correct behavior of the existing software has never been formally captured. That makes it difficult to refactor the software without the risk of changing how it behaves.
When legacy code has insufficient testing, the best way to deal with it isn't to go in and write the low-level tests that should have existed from the start, but to use automatic test case generation (ATG). ATG will not prove the code correct, but it formalizes what the application does at the moment, creating a baseline of current functionality. With that baseline in place, developers can make incremental changes to legacy code without altering existing behavior.
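The idea behind such a baseline (often called a characterization test) can be sketched as follows. This is a minimal illustration, not the output of any particular ATG tool: `legacy_discount` is a hypothetical legacy function, and the "generated" test simply probes it with representative inputs and pins down whatever it currently returns, so any refactoring can be checked against that recorded behavior.

```python
# Hypothetical legacy function whose exact behavior is undocumented.
def legacy_discount(price, customer_years):
    # Convoluted legacy logic we dare not change blindly.
    if customer_years > 5:
        price = price * 0.9
    if price > 100:
        price = price - 5
    return round(price, 2)

# Characterization (baseline) step: record what the code does *today*
# for a set of representative inputs. An ATG tool automates the choice
# of inputs; here they are picked by hand.
baseline = {(p, y): legacy_discount(p, y)
            for p in (50, 120, 200) for y in (1, 6)}

def matches_baseline(candidate):
    """Check a (possibly refactored) implementation against the baseline."""
    return all(candidate(p, y) == expected
               for (p, y), expected in baseline.items())

# Any refactoring that preserves behavior must pass this check.
assert matches_baseline(legacy_discount)
```

Note that the baseline asserts only that behavior is unchanged, not that it is correct; if the legacy logic contains a bug, the characterization test pins the bug down too, and fixing it becomes a deliberate, visible change to the baseline rather than an accidental one.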
Legacy code finds its way into critical applications, and that is not likely to change; but to ensure high code quality, the technical debt it carries must be dealt with, and baseline testing is a great way to do so. As software makes its way into product functions where failure can mean the difference between life and death, you need to make sure your code is flexible to change and free of systemic failure. Getting rid of technical debt is a great way to start.
Erik Oltmans, an Associate Partner at EY Netherlands, spoke at the Software Intelligence Forum on how the consulting behemoth uses Software Intelligence in its Transaction Advisory services.
Erik describes the changing landscape of M&A. Besides the financial and commercial aspects, PE firms now place equal value on technical assessments, especially for targets with significant software assets. He goes on to detail how CAST Highlight makes these assessments possible even with limited access to the target's systems, providing customized quality metrics and insight into the liability implications of open source components, all three of which are critical for M&A due diligence.