SODA, Anyone?

It’s no surprise that organizations are moving more and more of their business-critical systems to the cloud because of its availability, speed, and ease of use. But how does this affect an organization’s ability to properly test and maintain the quality of those systems?

The best approach we’ve seen so far is Service-Oriented Development of Applications (SODA), the process of developing applications with Service-Oriented Architecture (SOA) in mind. The idea is to create an overall business service that can adapt to the business’s ever-changing requirements at the lowest cost and with the shortest cycle.

SODA: Challenges and Benefits

Despite SODA’s apparent simplicity -- “wrap every legacy component into Web Services and it’s SOA-enabled” -- it actually demands more development skill and control. The skills used in designing and deploying reusable components in traditional languages and tools are all the more applicable to SODA.

Yet, wrapping components with web services is generally not enough. Indeed, legacy and packaged applications and databases were designed for traditional business transactional processing, so their reuse through web services can require a fair amount of redesign.
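To make the point concrete, here is a minimal sketch of the “wrapping” step, using Python’s standard `http.server` module. All names here (`legacy_compute_balance`, `BalanceService`, the URL shape) are hypothetical stand-ins, not part of any real system -- and note that the translation layer around the legacy call is exactly where the redesign effort lands: input validation, error mapping, and data formats were originally built for transactional use, not open service invocation.

```python
# Hypothetical sketch: exposing a legacy routine as an HTTP/JSON service.
# The wrapper itself is trivial; the hard part is everything the legacy
# code assumes (trusted callers, batch context) that a service cannot.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def legacy_compute_balance(account_id: str) -> float:
    # Stand-in for an existing legacy routine.
    return 100.0 if account_id == "A-1" else 0.0

class BalanceService(BaseHTTPRequestHandler):
    def do_GET(self):
        # Maps GET /balance/<account_id> onto the legacy call.
        account_id = self.path.rstrip("/").split("/")[-1]
        body = json.dumps({"account": account_id,
                           "balance": legacy_compute_balance(account_id)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

# To run the wrapper locally:
# HTTPServer(("localhost", 8080), BalanceService).serve_forever()
```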

SODA also increases complexity: the further abstraction of underlying technology makes dependency analysis even more difficult to perform, and it creates a new challenge in tracking components that are much smaller and far more numerous.

However, SODA presents a great opportunity for increased quality across all applications. If one develops high-quality software components, they will be reused in multiple contexts and bring their intrinsic quality along with them. The opportunity turns into real strength if one can ensure that those components are high-quality ones. Otherwise, it turns into a major weakness, as poor-quality components will automatically bring their frailties to every application they participate in.

Analyzing Multi-Tiered Systems

The level of unpredictability is related to the openness of the exposed service as well as its success as a reusable component. This means extra care has to be taken regarding its:

  • Reliability: it must operate as expected, both in terms of accuracy and in terms of up-time;
  • Security: it must protect data integrity and confidentiality, despite the new ways "in" to the application and the new ways to use each feature;
  • Performance: it must be able to cope with unexpected and unpredictable workloads.

Compared to traditional development contexts, the need to ensure application development quality is even more critical. Adding layers to a system compounds service downtime: if each layer is reliable 80% of the time, a three-layer system would only be reliable ~51% of the time (80% x 80% x 80%). With multiple layers, one cannot be satisfied with a fair quality level in each: a three-layer system that must be reliable 80% of the time requires that each layer be reliable about 93% of the time.
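The arithmetic above can be written out in a few lines. For a serial chain of layers, availabilities multiply, and the per-layer requirement is the n-th root of the system target:

```python
# Serial multi-layer reliability: every layer must be up for the
# system to be up, so per-layer availabilities multiply.
def system_reliability(layer_reliability: float, layers: int) -> float:
    return layer_reliability ** layers

def required_layer_reliability(target: float, layers: int) -> float:
    # Invert: each of n layers needs the n-th root of the target.
    return target ** (1 / layers)

print(round(system_reliability(0.80, 3), 3))         # 0.512 -> ~51%
print(round(required_layer_reliability(0.80, 3), 3)) # 0.928 -> ~93%
```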

The recommended approach to face this challenge is to employ a full life-cycle defect removal model. This model includes source code and architecture inspection for defect tracking from the early stages of the application life cycle and takes into account the entire source code package. Functional and dynamic testing are less likely than ever to cover all the operating use cases.

Being able to understand the actual orchestration patterns is also key to unraveling architectural inconsistencies or missed opportunities. For example, when multiple elementary services access the same or similar resources, this can be an opportunity to create a new service that will handle the whole interaction -- avoiding multiple elementary service invocation and removing functional, and therefore technical, redundancy.
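A toy sketch of that consolidation, with entirely hypothetical names: three elementary services each re-fetch the same customer record, while a composite service loads it once and handles the whole interaction, cutting the invocation count from three to one.

```python
# Hypothetical sketch of service consolidation. The counter stands in
# for remote invocations of a shared resource.
calls = {"count": 0}

def fetch_customer(customer_id: str) -> dict:
    calls["count"] += 1  # one "remote" invocation
    return {"id": customer_id, "name": "Acme", "credit": 500, "orders": 3}

# Elementary services: each one re-fetches the same resource.
def get_name(cid):   return fetch_customer(cid)["name"]
def get_credit(cid): return fetch_customer(cid)["credit"]
def get_orders(cid): return fetch_customer(cid)["orders"]

def customer_summary(cid):
    # Composite service: one fetch serves the whole interaction,
    # removing the functional (and therefore technical) redundancy.
    c = fetch_customer(cid)
    return {"name": c["name"], "credit": c["credit"], "orders": c["orders"]}

get_name("C-1"); get_credit("C-1"); get_orders("C-1")  # 3 invocations
customer_summary("C-1")                                # 1 invocation
```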

I will talk more about how you can eliminate redundancy and ensure quality service-oriented applications in my next blog post, so stay tuned to this space.

Filed in: Technical Debt
Philippe-Emmanuel Douziech Principal Research Scientist
Philippe-Emmanuel Douziech is a Principal Research Scientist at CAST Research Labs and is the Head of the European Science Directorate at CISQ. He has worked in the software industry for more than 20 years and is skilled at assessing software risk and quality.