In a recent post I wrote about the craziness of just measuring the things you do to build something, without ever measuring the quality of the thing you build.
Imagine Michelangelo Buonarroti measuring the force and angle of his blows on the stone without any sense of the beauty of the statue emerging from those blows.
Imagine Usain Bolt clocking his sleep time, his training time, his eating time, his PlayStation time, but somehow failing to clock the time it takes him to run 100 meters!
Sure, that's crazy. But it's EXACTLY what happens in software development, enhancement, and maintenance projects.
Now, software projects do suffer from the Michelangelo problem: how do you define and measure the "goodness" of the software you produce?
Usain has it easy in this regard. How can we make the software product measurement problem less like Michelangelo's problem and more like Usain's?
Here's how we at CAST define and measure the quality of the software you produce.
For any given technology/language, CAST goes right down to the smallest unit of that language/technology. All metrics below are captured at that fundamental unit level and then rolled up all the way to the application level. The application-level metrics are also rolled up to the application portfolio level. An application may consist of many different languages and technologies -- that's a major strength of the CAST platform: it provides an end-to-end view of the entire system from database, to application logic, to middleware, to business logic, to the user interface.
There is extensive documentation on how each metric below is defined and how it is computed, but a short explanation is worthwhile here. CAST measures the degree to which an application satisfies a set of rules. At the foundational level, you can think of CAST as three things: a set of language parsers, a knowledge base of rules, and a rules engine.
The language parsers and the knowledge base of rules evolve to keep pace with changing technologies, languages, and software engineering standards and practices. These rules are kept up to date by our engineering team.
Rules in the knowledge base are classified by technology/language. For example, there is a set of rules for Java and another distinct set of rules for C++. There is also one general set of rules that is independent of technology/language.
Each rule in the knowledge base is assigned a series of weights; each weight tracks that rule's contribution to a quality metric (a weight may be zero). A complete assignment of weights to rules is called a Quality Model. The weights of the quality model can easily be tailored to the particular production environment in which a system operates.
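To make the idea concrete, here is a minimal sketch of what a quality model could look like as data: each rule carries one weight per quality metric, a zero weight means the rule does not contribute to that metric, and tailoring is just overriding weights for a specific environment. The rule names, metric names, and function are illustrative assumptions, not CAST's actual model or API.

```python
# Hypothetical quality model: rule id -> {metric: weight}.
# A weight of zero means the rule does not contribute to that metric.
DEFAULT_QUALITY_MODEL = {
    "avoid-sql-in-loop":    {"performance": 9, "robustness": 3, "security": 0},
    "validate-user-input":  {"performance": 0, "robustness": 5, "security": 9},
}

def tailor(model, overrides):
    """Return a copy of the model with site-specific weight overrides,
    e.g. raising security weights for an internet-facing system."""
    tailored = {rule: dict(weights) for rule, weights in model.items()}
    for rule, weights in overrides.items():
        tailored.setdefault(rule, {}).update(weights)
    return tailored
```

For example, `tailor(DEFAULT_QUALITY_MODEL, {"validate-user-input": {"security": 10}})` yields a model where that rule counts more heavily toward security, while the default model is left untouched.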
Once the source code of the software system (multi-tier, multi-platform, multi-language) is parsed, the rules engine goes to work assessing the degree to which each basic component of the system complies with all rules that apply to it. Some rules will apply to sets of components; for example, how a set of UI objects calls a database object.
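The assess-and-roll-up step can be sketched as follows: for each component, compute a weighted compliance score over the rules that apply to it, then average component scores up to the application level. The formulas here are an assumption chosen for illustration; CAST's actual computation is documented separately.

```python
# Illustrative roll-up of rule compliance, not CAST's actual formula.

def component_score(violations, checks, weights_by_rule, metric):
    """Weighted compliance of one component for one quality metric.
    violations/checks: dicts mapping rule id -> counts for this component."""
    total_w = weighted = 0.0
    for rule, n_checks in checks.items():
        w = weights_by_rule.get(rule, {}).get(metric, 0)
        if w == 0 or n_checks == 0:
            continue
        compliance = 1.0 - violations.get(rule, 0) / n_checks
        total_w += w
        weighted += w * compliance
    return weighted / total_w if total_w else 1.0

def application_score(components, weights_by_rule, metric):
    """Average component scores to get the application-level metric.
    components: list of (violations, checks) pairs, one per component."""
    scores = [component_score(v, c, weights_by_rule, metric)
              for v, c in components]
    return sum(scores) / len(scores)
```

The same averaging step can then be repeated once more to roll application-level scores up to the portfolio level, mirroring the unit-to-application-to-portfolio roll-up described above.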
Quality measures are rolled up into two kinds of high-level metrics: the Technical Quality Index (TQI), a gross measure of the quality of the software system, and five Health Factors.
So there you have it. A way to define and measure the quality of the software output you produce.
When it comes to software, be like Usain!
Erik Oltmans, an Associate Partner at EY Netherlands, spoke at the Software Intelligence Forum on how the consulting behemoth uses Software Intelligence in its Transaction Advisory services.
Erik describes the changing landscape of M&A. Beyond the financial and commercial aspects, PE firms now place equal value on technical assessments, especially for targets with significant software assets. He goes on to detail how CAST Highlight makes these assessments possible despite limited access to the target's systems, with customized quality metrics and insight into the liability implications of open source components: all three critical for M&A due diligence.