Category: Risk & Security

It is becoming increasingly obvious that the software risk and complexity facing today's legacy systems are a growing problem for many IT organizations.
Executive Dinner Series: Managing Software Risk within the Insurance Industry

We currently live in a futuristic world that past generations could only dream of. News, weather, and updates from friends all over the world come pouring into our computers and smart devices, and we don't even think twice about the IT risk. Whether we're at home with family, socializing with friends, or even working, technology constantly surrounds us in one way or another.

Our reliance on technology is so heavy, in fact, that we often forget about the science behind it and how much IT risk management goes into supporting it. Beneath the surface of our most frequently used apps, social media accounts, games, and programs, highly complex software and code is constantly operating to keep the user experience satisfying. Even non-tech businesses now realize they could not function in today's world without effective technological resources.

Predicting the Future of IT Risk Management with Melinda Ballou

When the entire Facebook platform -- including mobile, web, and third-party apps -- went down last week, users took to the Twitter hashtag #FacebookDown in a blind panic to lament the social media outage. Though these outages might seem harmless and commonplace, Facebook's reputation rides on its users' ability to log on from anywhere, at any time. And the more often Facebook users have to turn to Twitter or other social networks to have their online voices heard, the harder it will be to get them to log back in.

#FacebookDown is a Trend For Now, But Could Turn Into an IT Risk Management Nightmare

We just finished up the 30-minute webinar in which Dr. Bill Curtis, our Chief Scientist, described some of the findings about to be published by CAST Research Labs. The CRASH (CAST Research on Application Software Health) report for 2014 is chock-full of new data on software risk, code quality, and technical debt. We expect the initial CRASH report to be published within the next month, and based on some of the inquiries we've received so far, we will probably see a number of smaller follow-up studies come out of the 2014 CRASH data.

This year's CRASH data, which Bill presented, is based on 1,316 applications comprising 706 million lines of code -- a pretty large subset of the overall Appmarq repository. That works out to an average of roughly 536 KLOC per application; we're talking big data for BIG apps here. This is by far the biggest repository of enterprise IT code quality and technical debt research data. The findings included correlations between the health factors: we learned that Performance Efficiency is largely uncorrelated with the other health factors, while Security is highly correlated with software Robustness. We also saw how the health-factor scores were distributed across the sample, and how structural code quality differs by outsourcing, offshoring, Agile use, and CMMI level.
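
As a back-of-the-envelope illustration of the kind of analysis involved (a minimal sketch, not CAST's actual methodology), here is how pairwise correlations between health-factor scores might be computed across an application sample. The CSV file and column names below are hypothetical assumptions:

```python
# Hypothetical sketch: pairwise correlation of health-factor scores
# across an application sample. The CSV file and column names are
# illustrative, not the actual CRASH/Appmarq schema.
import pandas as pd

FACTORS = ["robustness", "security", "performance_efficiency",
           "changeability", "transferability"]

# One row per application, one column per health-factor score.
apps = pd.read_csv("crash_sample.csv")

# Pearson correlation matrix over the five health factors; a value near 0
# (e.g., Performance Efficiency vs. the rest) means largely uncorrelated,
# a value near 1 (e.g., Security vs. Robustness) means highly correlated.
corr = apps[FACTORS].corr()
print(corr.round(2))

# Sanity check on the sample scale: 706M LOC over 1,316 apps ~ 536 KLOC each.
print(706_000_000 / 1316 / 1000)  # ~536.5
```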

CRASH Webinar: Code Quality Q & A Discussion

With the cost of U.S. data breaches up nine percent from last year, and Target CEO Gregg Steinhafel announcing his resignation amid the fallout of the retailer's massive credit card breach, every IT organization has software risk management top of mind in 2014.

Launch Party Wrap-Up: Software Risk Management Goes to Broadway

Outsourced application development is in a sorry state, with myriad software quality issues causing unprecedented glitches and crashes. It's not that all outsourcers are making terrible software; rather, governments and organizations have no way of accurately measuring the performance, robustness, security, risk, and structural quality of the applications once they've been handed the keys.

CISQ Aims to Bring Software Quality Sanity Back to Federal Outsourcing

Software architecture is one of the most important artifacts created in the lifecycle of an application. Architectural decisions directly impact the achievement of business goals, as well as functional and quality requirements. Yet once the architecture has been designed, its description is seldom verified or maintained over time. Architecture compliance checking is a sound application risk management strategy: it detects deviations between the intended architecture and the implemented architecture.
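
To make the idea concrete, here is a minimal sketch of automated compliance checking for a hypothetical layered Python codebase; the layer names, dependency rules, and source layout are illustrative assumptions, not any particular tool's approach:

```python
# Minimal sketch of architecture compliance checking: compare the
# implemented import dependencies against the intended layering.
# Layer names, rules, and the "src" layout are hypothetical examples.
import ast
from pathlib import Path

# Intended architecture: each layer may depend only on the layers listed.
ALLOWED = {
    "ui": {"services"},
    "services": {"domain"},
    "domain": set(),  # the core depends on no other layer
}

def layer_of(module: str):
    """Map a module path or dotted import to its top-level layer, if any."""
    top = module.split(".")[0]
    return top if top in ALLOWED else None

violations = []
for path in Path("src").rglob("*.py"):
    src_layer = layer_of(path.relative_to("src").parts[0])
    if src_layer is None:
        continue
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        targets = []
        if isinstance(node, ast.Import):
            targets = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            targets = [node.module]
        for target in targets:
            dst = layer_of(target)
            if dst and dst != src_layer and dst not in ALLOWED[src_layer]:
                violations.append(f"{path}: {src_layer} -> {dst} not allowed")

print("\n".join(violations) or "Implementation conforms to intended layering.")
```

Run as part of the build, a check like this turns the architectural description from shelfware into an enforced contract: any deviation between intent and implementation fails fast instead of accumulating silently.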

Application Risk Management: Good Software Architecture is Good Business

Because the world of software development is so complex and modular, quality assurance and testing for software risk has become costly, time-consuming, and at times inefficient. That's why many organizations are turning toward a risk-based testing model that can identify problem areas in the code before it moves from development to testing. But be careful: hidden risks can still lurk if the model isn't implemented properly throughout your organization.
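
One common way such a model is operationalized (a sketch under stated assumptions, not a prescription) is to score each module on defect likelihood and business impact, then spend test effort on the riskiest modules first. Every input, weight, and module name below is invented for illustration:

```python
# Illustrative risk-based test prioritization: rank modules by a simple
# risk score = likelihood (recent churn, complexity) x impact (criticality).
# All inputs, weights, and module names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    recent_commits: int         # proxy for likelihood of new defects
    cyclomatic_complexity: int  # proxy for defect-proneness
    business_critical: bool     # proxy for impact of failure

def risk_score(m: Module) -> float:
    likelihood = m.recent_commits * (1 + m.cyclomatic_complexity / 10)
    impact = 3.0 if m.business_critical else 1.0
    return likelihood * impact

modules = [
    Module("billing", recent_commits=42, cyclomatic_complexity=35, business_critical=True),
    Module("reporting", recent_commits=5, cyclomatic_complexity=12, business_critical=False),
    Module("auth", recent_commits=18, cyclomatic_complexity=20, business_critical=True),
]

# Allocate test effort to the riskiest modules first.
for m in sorted(modules, key=risk_score, reverse=True):
    print(f"{m.name:10s} risk={risk_score(m):7.1f}")
```

The hidden-risk warning above applies directly here: a score is only as good as its inputs, so a module with stale churn data or an unmeasured criticality flag can quietly fall to the bottom of the queue.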

Software Risk: 3 Things Every IT Manager Must Know About A Risk-Based Testing Model

As technology increasingly becomes the backbone of business, it is also becoming the single most expensive asset within any organization -- which brings the topic straight to the boardroom. What most of us may not appreciate is that the custom software being built for enterprises has passed the threshold of complexity beyond which any one person can understand an entire mission-critical system. And, unlike any other part of our critical infrastructure, no single manager is responsible and accountable for the integrity of a company's software backbone.

Software Risk Goes to Washington

Most organizations have started to realize that code quality is an important root cause of many of their issues, whether that means incident levels or time to value. The growing complexity of development environments in IT -- outsourcing, the required velocity, the introduction of Agile -- has raised the issue of code quality, sometimes to the executive level.

Business applications have always been complex. You can go back to the '70s, even the '60s, and hear about systems with millions of lines of code. But here's the rub: in those days it was millions of lines of COBOL or some other language -- all one language, all one system, all one single application in a nice, neat, tidy package.

Does code quality really help the business?

I had the pleasure of moderating a panel discussion with Bill Martorelli, Principal Analyst at Forrester Research Inc; Dr. Richard Mark Soley, Chairman and CEO of Object Management Group (OMG); Siva Ganesan, VP & Global Head of Assurance Services at Tata Consultancy Services (TCS); and Lev Lesokhin, EVP, Strategy & Market Development at CAST.

Reduce Software Risk through Improved Quality Measures with CAST, TCS and OMG

In my last post we discussed the complementary nature of remediation cost and risk level assessment. As a follow-up, I wanted to dwell on objective risk level assessment. Is it even possible? If not, how close can we get? How valuable is an estimate of the risk level? Could it be the Holy Grail of software analysis and measurement? Or is it even worth the effort?
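
One way to frame an "objective" risk level is as expected loss: the probability that a structural flaw triggers a failure, times the cost of that failure. A toy illustration follows, in which every flaw, probability, and dollar figure is an invented assumption:

```python
# Toy illustration of risk-level estimation as expected loss:
# risk = P(failure from a flaw) x cost of that failure.
# Every flaw, probability, and cost below is an invented assumption.

flaws = [
    # (description, estimated failure probability per year, estimated cost in $)
    ("unvalidated input on payment API", 0.10, 2_000_000),
    ("unbounded query in nightly batch", 0.30, 50_000),
    ("missing index on audit table", 0.60, 5_000),
]

for desc, p_failure, cost in flaws:
    print(f"{desc:35s} expected loss ~ ${p_failure * cost:>10,.0f}/yr")

# Portfolio-level risk as the sum of expected losses.
total = sum(p * c for _, p, c in flaws)
print(f"{'TOTAL':35s} expected loss ~ ${total:>10,.0f}/yr")
```

The hard part, of course, is that the probabilities and costs are exactly the inputs we struggle to measure objectively -- which is the crux of the question this post raises.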

The Holy Grail: Objective risk level estimation
