There is a lot of talk about DevOps these days. I guess you've noticed that too, if you have anything to do with tech and haven't been living in the woods for the last three years.
I spoke on a panel a few weeks ago at the MIT CIO Symposium called "Running IT Like a Factory." One of my co-panelists talked a lot about cloud-native companies, and how Netflix does 3,000 releases per month and Amazon does 11,000 releases per year. He also referenced the robustness of AWS and how companies like this can create a ton of value very quickly.
Netflix and Amazon both have a market cap that's the envy of any large enterprise. Feedback from the media and analysts isn’t helping. By comparison, many enterprise executives are made to feel that the work their IT organization is doing just doesn't measure up to what Silicon Valley startups can build with small teams of pimply teenagers in their garages.
This is unfortunate.
Whether Amazon is releasing apps at a rate of 11 per hour or 11,000 per year, the message to IT pros is the same: Theirs goes to 11. Yours probably only goes up to 3.
Somehow release rate and cycle time seem to have become the vogue metrics for application development. If your cycle time is nice and short, you're like the cloud-native Silicon Valley geniuses. This is what we, in the measurement world, call vanity metrics. But, since Wall Street values cloud-native startups on a different scale than Fortune 100 enterprises that generate revenue, we might as well turn our app dev metrics upside down.
What bothers me is that we are presumably a community of smart people and technologists who are connected digitally (by Twitter, the media, etc.), but for some reason we behave like a herd of sheep and buy into foolish conclusions fanned by mass hysteria. How could the number of releases be a relevant measure for anything? What if I screw something up five times a day and have to re-release it with that same frequency? What if each release is just a tiny little tweak? How can I compare that to a release of a critical application that synchronizes three dozen projects across the enterprise?
The biggest fallacy here is that these cloud-native companies actually have systems that require any significant level of resilience. Nor do they have a truly significant level of software complexity. Netflix and Amazon don’t have anything near the mission-critical systems of, say, a bank. You might object that the Amazon ecommerce site is mission critical. Sure. It's pretty critical. But, if they add a new feature and it makes you lose your order, or send you the wrong item, then Amazon will just make good by giving you that item for free. You'll be happy and Amazon will roll back and fix their bug. And that's worst case. In most cases, the site will just have yet another glitch that we won't even notice due to the level of glitchiness we've come to expect from web apps. And don't even get me started on the mission criticality of Netflix's systems.
If you're a bank and you screw up a transaction, that will be noticed. If you do it enough times, it will be reported by the Wall Street Journal. Your regulators will breathe fire on you, and your customers would rather hide their money in their mattresses. Slightly higher stakes, I think.
I know this is a bit of a rant, but the point is that we can't blindly follow the same metrics in all scenarios, and we can't compare apples to oranges. Cycle time and release rate metrics do tell us something of relevance, but it's far from the whole story - even when considering throughput. And rolling out canary releases while tracking MTTR for your fast fails is nice, but it doesn't fit every business scenario.
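To make the point concrete, here is a minimal sketch (all data hypothetical) of why release count on its own is a vanity metric: the same deploy log that yields an impressive release rate can also reveal a high change failure rate and a nontrivial mean time to restore (MTTR) once you look past the headline number.

```python
from datetime import datetime

# Hypothetical deploy log: (timestamp, failed?, minutes_to_restore)
deploys = [
    (datetime(2024, 6, 1, 9, 0),  False, 0),
    (datetime(2024, 6, 1, 11, 0), True, 45),   # bad release, rolled back
    (datetime(2024, 6, 1, 11, 50), False, 0),  # re-release of the fix
    (datetime(2024, 6, 2, 10, 0), True, 30),   # another bad release
    (datetime(2024, 6, 2, 10, 40), False, 0),  # re-release again
]

release_count = len(deploys)  # the headline "vanity" number: 5 releases
failures = [d for d in deploys if d[1]]
change_failure_rate = len(failures) / release_count
mttr_minutes = sum(d[2] for d in failures) / len(failures)

print(release_count)        # 5 -- looks great in a conference keynote
print(change_failure_rate)  # 0.4 -- 2 of 5 releases failed
print(mttr_minutes)         # 37.5 -- average minutes to restore service
```

Two of the five "releases" here are just re-releases of botched deploys, which inflates the release rate while the quality metrics tell a very different story.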
We've been seeing Netflix and Amazon both showing up at architecture and app dev events, looking for solutions to deal with their growing system complexity. It's nice to be lightweight and greenfield. But eventually – and usually it's when you start to actually generate revenue – your technical debt catches up and things stop being so simple and carefree. This is when the real work of managing your technology landscape begins.
Our cloud-native friends are starting to come full circle. At some point they too will look like legacy. Let's hope they are building robustness and changeability into their code bases. Otherwise, they too will suffer next-gen envy.
Erik Oltmans, an Associate Partner at EY Netherlands, spoke at the Software Intelligence Forum on how the consulting behemoth uses Software Intelligence in its Transaction Advisory services.
Erik described the changing landscape of M&A. Besides the financial and commercial aspects, PE firms now equally value technical assessments, especially for targets with significant software assets. He went on to detail how CAST Highlight makes these assessments possible with limited access to the target's systems, customized quality metrics, and visibility into the liability implications of open source components - all three of which are critical for M&A due diligence.