The Problem with PPM Tools

There are a lot of Project Portfolio Management (PPM) tools out there. And there's a lot of hot air about the distinctions between PPM, ALM, PLM, and who-knows-what-other-M.

Bottom line: they don't work. Certainly not in the way they are advertised to work. Not even close.

Over the last 7 years I've heard large companies complain about three main things:

* Sticker shock -- the up-front cost is too high.

* Data integration is difficult -- doesn't play nice with the systems I have (e.g. defect tracking, resource management).

* The customization conundrum -- I need to change it to fit my processes but somehow that starts a vicious circle of change that never stabilizes.

I'll focus on a narrower problem. The data contained in these systems can never be accurate. To the extent that your decisions depend on the accuracy of this data, you're sunk. The reasons for this are not hugely novel -- they are a priori points that apply in large part to all quantitative models (I'm assuming PPM tools are a subset of the class of quantitative models -- nothing very controversial).

So here we go -- there are some repetitions in the list, and they're entirely deliberate.

Root Causes of Inaccurate Results Generated By PPM Tools

1. Input inaccuracy
   1. data quality is low
      1. lack of completeness
         1. lack of motivation
         2. lack of time
         3. data not available when needed
            1. lack of interfaces/connections
            2. tool not optimized for the environment
         4. data too difficult to get
         5. gaming the system
         6. lack of expertise
         7. no place to enter relevant data (e.g. PTO hours)
      2. lack of timeliness
         1. lack of motivation
         2. lack of time
         3. data not available when needed
            1. lack of interfaces/connections
            2. tool not optimized for the environment
         4. data too difficult to get
         5. gaming the system
         6. lack of expertise
      3. inaccurately entered data
         1. lack of motivation
         2. lack of time
         3. data not available when needed
            1. lack of interfaces/connections
            2. tool not optimized for the environment
         4. data too difficult to get
         5. gaming the system
         6. lack of expertise
         7. sensitivity to the sequence in which data is entered (e.g. one input has to be set before the other but isn't)
         8. sensitivity to interdependencies (e.g. two inputs need to be entered in concert but are typically set independently)
      4. imprecisely entered data
         1. lack of motivation
         2. lack of time
         3. data not available when needed
            1. lack of interfaces/connections
            2. tool not optimized for the environment
         4. data too difficult to get
         5. gaming the system
         6. lack of expertise
         7. sensitivity to the sequence in which data is entered (e.g. one input has to be set before the other but isn't)
         8. sensitivity to interdependencies (e.g. two inputs need to be entered in concert but are typically set independently)

      5. skills in reality are not what they appear to be on paper -- NO tool can solve this problem -- this is a version of input inaccuracy
   2. Misaligned map of resources to tasks (things don't quite fit neatly into pigeonholes when mapping resources to tasks)
   3. Input interdependencies that are hidden -- for example, the tool calculates an output -- e.g. the existence of a particular bottleneck -- based on three inputs. Depending on how sensitive the output is to the existence and accuracy of each input variable, the calculated output can turn out to be useless. This is pernicious because it is hidden from the user of the tool and puts the user in the dangerous position of making bad decisions due to algorithmic subtleties. In other words, the user has no grip on the amount of variance in the result, which makes it dangerous to base decisions on that output (see the sketch after this list).
2. Divergence of actual resource management processes from the processes dictated by the tool
3. When multiple models are used (most PPM tools contain project, portfolio, and resource management modules), the inputs and outputs of each of these sub-models can have unknown interdependencies that drive systematic or unsystematic error.
4. Parameters or constants of the model change and these are not part of the experiential feedback loop for the model (this could include weights of variables or strengths of connections between variables)
   1. internal factors (new project types, new resource types, new business models, new resourcing strategies, ...)
   2. external factors (macroeconomic, microeconomic, exogenous -- e.g. Gladwell: "your algorithm doesn't accommodate the fact that the Russian government decides to default on its loans.")
5. Variables of the model change or new ones need to be added
   1. internal factors (new project types, new resource types, new business models, new resourcing strategies, ...)
   2. external factors (macroeconomic, microeconomic, exogenous)
6. Algorithm changes
   1. internal factors (new project types, new resource types, new business models, new resourcing strategies, ...)
   2. external factors (macroeconomic, microeconomic, exogenous)
7. Model changes (i.e. both variables and algorithm change)
   1. internal factors (new project types, new resource types, new business models, new resourcing strategies, ...)
   2. external factors (macroeconomic, microeconomic, exogenous)
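To make point 1.3 concrete, here's a minimal sketch in Python. Everything in it is made up: a hypothetical bottleneck rule computed from three inputs (utilization, throughput, queue depth), each of which is entered with a modest amount of error. A quick Monte Carlo run propagates those errors and shows how often the calculated answer flips.

```python
import random

def bottleneck_flag(utilization, throughput, queue_depth):
    """Hypothetical PPM-style rule: flag a bottleneck when the team is
    heavily loaded and work arrives faster than it gets finished."""
    return utilization > 0.85 and (queue_depth / throughput) > 2.0

# The "true" state of the world -- which, per the list above, nobody
# actually has clean access to.
true_inputs = dict(utilization=0.88, throughput=10.0, queue_depth=22.0)

def noisy(value, rel_error=0.10):
    """Simulate a value entered with ~10% relative error (late timesheets,
    rounded estimates, stale queue counts, ...)."""
    return value * random.gauss(1.0, rel_error)

random.seed(42)
trials = 10_000
flags = sum(
    bottleneck_flag(
        noisy(true_inputs["utilization"]),
        noisy(true_inputs["throughput"]),
        noisy(true_inputs["queue_depth"]),
    )
    for _ in range(trials)
)

print(f"With perfect inputs, bottleneck = {bottleneck_flag(**true_inputs)}")
print(f"With 10% input error, the tool reports a bottleneck "
      f"{flags / trials:.0%} of the time")
```

The exact numbers don't matter. What matters is that the single answer on the screen ("bottleneck: yes") gives you no hint that, with realistic input error, the opposite answer comes up a large fraction of the time. That's the variance you have no grip on.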

And lastly, what a PPM tool (or risk model) cannot by itself do:

* get you executive support

* figure out the process you should put in place

* ensure that the skill needed is actually the skill listed in the input

* ensure data quality

* make decisions on what actions to take

Whew! Write me if you got through that!
