Category: Industry News
It’s simple physics: a piece of application code gets caught in a logic loop, the CPU spins at full tilt executing the same instructions over and over, the computer pumps more power to the processor and cooling system to keep everything up and running, and your electricity bill goes up.
In a merger, integrating company names is hard enough -- imagine having to integrate massive application portfolios.
As the Justice Department and the FCC evaluate the proposed merger between corporate behemoths Time Warner Cable and Comcast, I wonder if the C-suites at both companies are investing as much time evaluating the health and security of one another’s application portfolios. Historically, technical due diligence has lagged far behind financial due diligence.
Few moments compare to the pressure-filled environments of hackathons, where the best developers from around the globe cram into a rented room with 24 hours to conceive, design, and create an app that wins a chance to present an idea, showcase talent, and gain invaluable exposure.
On April 7, the IT industry was rocked by the announcement that over 60 percent of web servers -- even those running secure SSL connections -- were vulnerable to attack due to a new weakness codenamed Heartbleed. The flaw lives in the OpenSSL cryptographic software library, which encrypts sessions between consumer devices and websites -- specifically in its “heartbeat” extension, which pings keep-alive messages back and forth. Hence the name of the bug.
The current state of outsourced application development is sorry indeed: myriad software quality issues are causing unprecedented glitches and crashes. It’s not that all outsourcers are making terrible software; rather, it’s that governments and organizations have no way of accurately measuring the performance, robustness, security, risk, and structural quality of applications once they’ve been handed the keys.
Pay attention, US financial sector, because the UK is one step ahead of you … sort of. They’re at least willing to admit they have a problem with software risk and IT system resiliency -- and admitting it is the first step on the path to recovery.
When I arrived at Agile 2013, I looked at the program and picked out sessions -- mostly about improving the front end of development. To reach them, I had to pass the lounge area, which had tables, chairs, couches, easels ... and lots and lots of whiteboards. This area was the “open jam”, a collaboration space where anyone could propose anything.
At a time when other conferences are splitting into smaller and smaller regional and micro-tech events, the Agile Conference, with its 1,700 attendees, stands alone.
Alone and overwhelming. The event had sixteen tracks, spanning everything from DevOps, coaching and mentoring, leadership, and lean startup to classic elements like development, testing, and quality assurance.
Not to mention the vendor booths, the Stalwarts Stage (where experts "just" answered questions for 75 minutes), the four-day boot camp for beginners, and the academic track. The 215 sessions brought one word to mind: overwhelming.
Instead of focusing on one track or concept, I spent my time at the conference looking for themes and patterns. What surprised me was where I found those ideas -- to the left, in product design, and to the right, in DevOps, not in the middle, in classic software development.
Ever wonder what reality looks like when your external IT systems crash? Well, here you go. This might be of particular interest to CIOs and business stakeholders who push IT to meet unrealistic deadlines without managing their software risk.
TD Bank's credit and debit card systems went offline for approximately 45 minutes yesterday, reportedly as the result of a system upgrade. Twitter immediately exploded with angry customers.