CI Tools and Best Practices in the Cloud

Continuous Integration


Upstreaming Quality & Visibility in the Application Lifecycle

Early prevention of doomed projects is within reach

  • Finding ways to embed continuous testing and policy enforcement into the development process - while adopting a more holistic view of quality
  • Isolating defects before they are allowed to enter the main source code line - to take a more preventative approach to quality
  • Gaining visibility through frequent builds and tests to uncover trends and conduct thorough analyses - for a more systematic approach to lifecycle quality
We've heard the chants "Test Early, Test Often" in QA, but what about rolling that back, up and over the proverbial wall between development and testing, so that issues can be resolved well in advance of launch day? Lifecycle quality management aims to test early and test often - before an application "leaves" the development phase - to enable continuous testing in a process-aware way, so it's not an anxiety-ridden burden or a neglected last step before release.

Despite years of vendor claims to the contrary, there really hasn't been much progress toward establishing an effective process or technology that prevents defects from propagating down the lifecycle. Despite widespread agreement that defects found late in the lifecycle exact a great cost, it's still the norm to tackle the chore of finding and fixing defects at the end of the lifecycle, in QA. Unfortunately, by the time software gets to QA, there's often little time before the promised ship date and no way for QA to test and fix everything. In fact, testing has traditionally been treated as an "add-on" to development - something so separate that it's handled by a different team. Integration and testing traditionally start late, letting a large backlog of hidden defects accumulate. Because this traditional approach very much divorces testing from development, there's been no consistent way to find and fix issues in the flat (cheap) part of the cost curve (see Figure 1), leading to the classic and pervasive problem of expensive late-stage rework. That rework is expensive and frustrating because it forces development teams to hunt through weeks or months of changes to find the source of a problem, and then to fix not just the source but everything that may have become dependent on the defective component's particular implementation in the meantime.

A Proactive Approach to ALM by Upstreaming Quality
For development teams, taking a proactive approach to quality - adopting key processes and tools to "bake" quality checks and measurements into every stage of the application lifecycle - can help ensure applications are deployed on time, to specification, and with fewer bugs, while reducing the need for rework. So, from a practical standpoint, what do we mean by "taking a proactive approach?" What are some strategies that development teams can undertake to truly change the way they approach quality throughout the development lifecycle and begin "upstreaming" quality?

First, let's discuss what I mean by "upstreaming."

Development is often referred to as a "black box" activity and in most cases the quality testing processes and tools happen outside this box. This means that organizations often have poor visibility into how defects and issues arise and propagate in the box, and no effective way to catch these issues early in the lifecycle before they become expensive problems. From a management perspective, this also means that decision makers have to rely on late, incomplete, or anecdotal information to judge software readiness, estimate schedule risk, and evaluate performance.

Without real-time visibility into testing status, developers and managers alike can't answer pertinent questions such as: Are we on schedule? How confident are we about readiness? How effective are our teams? Organizations simply do not know what the status of their projects is, what the quality or stability of their applications is, how close they are to being done, or how much risk there is for various projects. This lack of knowledge spirals into unpredictable release schedules, lack of confidence in software quality and stability, and lack of objective measures of developer and team quality and productivity. The problem is just multiplied for distributed, outsourced, or offshore teams.

At its simplest, "upstreaming" quality seeks to build visibility, measurement, and a series of quality checks into the "black box," giving development organizations confidence in their estimates, technical predictions, and risk assessments. This can help management identify at-risk projects early on, when it's cheaper to pull the plug or to reallocate resources more effectively. We've seen the ramifications of eleventh-hour testing: the scramble to go live, the resources wasted on projects that wind up getting killed, and the regrets endured by waiting for "the build" to assess success or failure. What are the top techniques we can employ to avoid such catastrophes? In my experience, these are the Top 10 best practices you can implement immediately in your organization to upstream quality and visibility by attacking quality earlier in the application lifecycle:

  1. Build automation into the process early and often
  2. Isolate defects to avoid broken builds
  3. Validate changes against target environments
  4. Automate build validation tests
  5. Check in software changes frequently
  6. Automate build publication
  7. Employ relative code coverage and trending
  8. Foster a culture of open visibility
  9. Monitor to enable continuous process improvement
  10. Don't ignore history! (Capture and mine data for valuable insights)
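Several of these practices - frequent builds and automated validation (1, 4), coverage trending (7), and mining history (10) - rest on recording each build's results somewhere they can be analyzed later. The following is a minimal sketch of such a build-history log in Python, assuming a simple CSV store; the field names are illustrative, not taken from any particular tool:

```python
# Hypothetical sketch: capture per-build metrics so quality trends can
# be charted and history can be mined later. Field names are
# illustrative assumptions, not from any specific ALM product.
import csv
from pathlib import Path


def record_build(log_path, build_id, tests_run, tests_passed):
    """Append one build's results to a CSV history log."""
    log = Path(log_path)
    is_new = not log.exists()
    with log.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            # Write the header row only once, when the log is created.
            writer.writerow(["build_id", "tests_run", "tests_passed"])
        writer.writerow([build_id, tests_run, tests_passed])


def pass_rate_trend(log_path):
    """Return each recorded build's test pass rate, oldest first."""
    with open(log_path, newline="") as f:
        rows = list(csv.DictReader(f))
    return [int(r["tests_passed"]) / int(r["tests_run"]) for r in rows]
```

A rising or falling sequence from `pass_rate_trend` is exactly the kind of objective, per-build signal that lets managers judge readiness from data rather than anecdote.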
Best Practices for Upstreaming Quality
1.  Build automation into the process early and often
Many organizations fail to take advantage of the build process as an opportunity to test earlier and more consistently. Automated build validation tests can help you detect regressions and other problems early, when they're easier to find and fix - not least because the changes in question are recent enough for developers to remember what they were thinking when they wrote that code.

Think of it like going to the dentist: the solution to painful dentist visits isn't going less often - it's going more often. The same is true of painful software integration events. Building frequently - preferably continuously - lets you discover problems early and nip them in the bud before they grow into big, expensive disasters. Building software is also a critical validation step: you don't know whether something's broken until you try putting it together, just like assembling a puzzle with hundreds or thousands of pieces. Building frequently gives you greater confidence that you know what the state of your software development project really is.
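The frequent-build gate described above can be sketched in a few lines of Python. This is a minimal, fail-fast illustration, not any vendor's implementation; the step names and commands are hypothetical stand-ins for a real project's compile and smoke-test steps:

```python
# Minimal sketch of an automated build-validation gate. The steps below
# are hypothetical placeholders; substitute your project's real build
# and test commands.
import subprocess
import sys


def run_step(name, cmd):
    """Run one pipeline step as a subprocess; return True on success."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    ok = result.returncode == 0
    print(f"[{'PASS' if ok else 'FAIL'}] {name}")
    return ok


def validate_build(steps):
    """Fail fast: stop at the first broken step, so the defect is
    isolated to the most recent batch of changes."""
    for name, cmd in steps:
        if not run_step(name, cmd):
            return False
    return True


if __name__ == "__main__":
    steps = [
        ("compile", [sys.executable, "-c", "print('build ok')"]),
        ("smoke test", [sys.executable, "-c", "assert 1 + 1 == 2"]),
    ]
    sys.exit(0 if validate_build(steps) else 1)
```

Run on every check-in (by a CI server or a simple scheduler), a gate like this turns "the build" from a rare, anxiety-ridden event into a routine signal of project health.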

More Stories By Brad Johnson

Brad Johnson, director of product marketing for Lifecycle Quality Management, is responsible for the product strategy and marketing of Borland's Lifecycle Quality management solution. In this role, he is dedicated to improving the project success rate for IT teams with a comprehensive quality management solution that supports quality early in the lifecycle with complete, testable requirements, helps developers build higher quality code, and leverages powerful test automation to improve efficiency and reduce costs. Prior to joining Borland, Brad held senior-level positions in product management and product marketing at Mercury Interactive and Compuware. He earned a BS in business with a specialty in management information systems from the University of Phoenix.
