

Upstreaming Quality & Visibility in the Application Lifecycle

Early prevention of doomed projects is within reach

2.  Isolate defects to avoid broken builds
The way to eliminate the risk of breaking the build is to address issues in a "sandbox" - an isolated space on the server where developers can fix problems without impacting the "live" product. How many times have you (or one of your developers) been stuck waiting for someone to fix the build before you could continue your work? For all the time developers save with visual editors and other coding productivity tools, a disproportionate amount is wasted because of broken builds. Code needs to build successfully before being integrated into the main code line, where it will affect others. Ideally, work in progress should still be checked into version control regularly, but it should be isolated from others until it builds successfully.

Much as a virus is contained, problematic changes can be quarantined in sandboxes - essentially transparently managed branches in the version control repository - and returned to the main code line only when they have been fixed and tested. This is a far more preventative approach and stops the problem from spreading. Management gets quicker feedback, and development teams gain greater productivity, increased confidence, and a sense of impact on the business. It fosters a culture of transparency and accountability.
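
To make the quarantine concrete, here is a minimal sketch of the flow, assuming a Git repository and an illustrative ./build_and_test.sh script (neither is named in this article, and a CI server would normally manage these branches transparently): the change is built and tested on an isolated branch, and only a green result is merged back to the main line.

    import subprocess
    import sys

    def run(*cmd):
        """Echo a command, run it, and stop the script if it fails."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def quarantine_and_integrate(sandbox, mainline="main"):
        # 1. Build and test on the isolated sandbox branch, away from the main line.
        run("git", "checkout", sandbox)
        if subprocess.run(["./build_and_test.sh"]).returncode != 0:
            # Broken: the change stays quarantined and nobody else is blocked.
            sys.exit(f"Sandbox '{sandbox}' is broken; fix it before integrating.")

        # 2. Only a change that builds and tests cleanly reaches the main code line.
        run("git", "checkout", mainline)
        run("git", "merge", "--no-ff", sandbox)

    if __name__ == "__main__":
        quarantine_and_integrate(sys.argv[1])

In practice the same logic would live in the build server's job configuration rather than in a script a developer runs by hand, which is what makes the branches "transparently managed."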

3.  Validate changes against target environments
No one is happy to hear "well, it worked on my machine!" It's dangerous to assume that the developer's environment is an acceptable proxy for the "real world" and yet it's impractical to expect that developers will have all the needed tools and environments on their desktop to validate their changes. Even if your developers are all good process-abiding citizens, there are multi-platform build and test issues, test environments that require external resources, and even licensing restrictions. Changes need to be validated against all the appropriate target environments, preferably by an automated build system.

The way to solve this is to target server-based builds - that is, employ a testing solution that resides on the server side and is integrated with version control. Work stays out of the way in the "sandbox," where you can "pollute" it with different versions of the environment, different versions of application servers, and different platforms such as Windows or Linux, and, once remedied, return the clean version to development. A related hazard is that various libraries may be installed in an individual's local environment but never propagated to the integration environment; validating on the server side surfaces that mismatch as well.
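
As an illustration of that idea, the sketch below fans a checked-in revision out across a small matrix of target environments. The platform names and the ci-build command are hypothetical placeholders; a real system would dispatch each target to a build agent or container on the server.

    import itertools
    import subprocess

    # Hypothetical target matrix; real systems dispatch to build agents or containers.
    PLATFORMS = ["windows-2019", "linux-x86_64"]
    APP_SERVERS = ["tomcat-9", "tomcat-10"]

    def validate_all(revision):
        """Build and test one checked-in revision against every target combination."""
        results = {}
        for platform, app_server in itertools.product(PLATFORMS, APP_SERVERS):
            proc = subprocess.run(
                ["ci-build", "--revision", revision,
                 "--platform", platform, "--app-server", app_server]
            )
            results[f"{platform}/{app_server}"] = (proc.returncode == 0)
        return results

    if __name__ == "__main__":
        for target, ok in sorted(validate_all("r1234").items()):
            print(f"{target:30s} {'PASS' if ok else 'FAIL'}")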

4.  Automate build validation tests
When software is tested every time changes are made, detected defects are easy to resolve - the developer just made those changes, so the context is still fresh in their head; they don't have to waste much time searching for the source of the problem; and the defect won't have had much time to propagate (or for other components to become dependent on it). Many organizations fail to take advantage of the build process as an opportunity to test earlier and more consistently. Automated build validation tests can help you detect regressions and other problems early, when they're easier to find and fix.
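
A build validation run can be as simple as the sketch below: compile, then run a fast suite in the same pass, failing immediately so the developer still has the change in their head. The make targets are placeholders for whatever your build and test tools actually are.

    import subprocess
    import sys
    import time

    # Placeholder build steps; substitute your real compile and test commands.
    STEPS = [
        ("compile", ["make", "all"]),
        ("unit tests", ["make", "test-unit"]),
        ("smoke tests", ["make", "test-smoke"]),
    ]

    def validate_build():
        for name, cmd in STEPS:
            start = time.time()
            returncode = subprocess.run(cmd).returncode
            print(f"{name:12s} {('ok' if returncode == 0 else 'FAILED'):8s} "
                  f"({time.time() - start:.1f}s)")
            if returncode != 0:
                return returncode  # fail fast, while the developer still has context
        return 0

    if __name__ == "__main__":
        sys.exit(validate_build())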

5.  Check in software changes frequently
Making large, infrequent check-ins is not only a recipe for losing lots of work when you accidentally corrupt your hard drive or your laptop's battery explodes, but it also limits the value of other best practices like frequently executing and testing the build. Build and test results are only useful if they tell you something new, even if it's just that things are still working - that "still" is important! By checking in changes frequently, you can measure the health of your software project more effectively. Plus, by checking in more often, you make it easier to track down problems that the build or test reveals.
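
If your team needs a nudge, something as small as the following sketch can help: it warns when the working copy has drifted too far from the last check-in. Git and the 20-file threshold are assumptions made for illustration, not tools or numbers from the article.

    import subprocess

    MAX_CHANGED_FILES = 20  # illustrative threshold, not a rule from the article

    def changed_files():
        """Count files modified since the last commit in a Git working copy."""
        out = subprocess.run(["git", "status", "--porcelain"],
                             capture_output=True, text=True, check=True)
        return len([line for line in out.stdout.splitlines() if line.strip()])

    if __name__ == "__main__":
        pending = changed_files()
        if pending > MAX_CHANGED_FILES:
            print(f"{pending} files modified since the last check-in - "
                  "consider committing smaller, more frequent changes.")
        else:
            print(f"{pending} files pending; working copy looks healthy.")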

6.  Automate build publication
Collaboration is the key to successful software projects (well, at least those with a development team bigger than one) and working code is the best artifact around with which to collaborate - ask any agile development proponent (or open source developer). Giving other developers and QA engineers easy access to the latest (and earlier) builds helps keep everyone in sync. And make sure your build system can automatically publish successful builds so you don't have to deal with managing distributions manually.
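
A publication step can be a few lines bolted onto the end of a successful build, along the lines of the sketch below; the shared path, the artifact name, and the make target are illustrative only.

    import shutil
    import subprocess
    from pathlib import Path

    ARTIFACT = Path("dist/app.tar.gz")            # illustrative build output
    PUBLISH_ROOT = Path("/shared/builds/myapp")   # hypothetical shared drop location

    def publish_if_green(build_number):
        # Only successful builds are published; failures never reach the shared drop.
        if subprocess.run(["make", "package"]).returncode != 0:
            print("Build failed; nothing published.")
            return
        target_dir = PUBLISH_ROOT / f"build-{build_number:05d}"
        target_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(ARTIFACT, target_dir / ARTIFACT.name)
        # Keep a stable pointer to the newest good build for developers and QA.
        (PUBLISH_ROOT / "LATEST").write_text(f"{target_dir}\n")
        print(f"Published {ARTIFACT} to {target_dir}")

    if __name__ == "__main__":
        publish_if_green(build_number=142)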

7.  Employ relative code coverage and trending
The most frequently cited code coverage-related blunder is probably wasting time trying to get all the way to 100% coverage, or believing that high coverage means your code works. A more fundamental issue, however, is that the only really useful code coverage is relative code coverage. If you can't compare coverage between different projects, you don't know where your test resources are best spent. If you don't know (or can't remember) what your code coverage was last week, last month, or on the previous version of the software, it's hard to evaluate today's number. Code coverage can actually be an extremely useful tool, but only when you can view it in context.
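
One lightweight way to get that context is to record every run's coverage and report the delta against the previous run, as in this sketch (the JSON history file and its layout are assumptions made for illustration):

    import json
    from datetime import date
    from pathlib import Path

    HISTORY = Path("coverage_history.json")  # assumed per-project history file

    def record_and_compare(project, coverage_pct):
        history = json.loads(HISTORY.read_text()) if HISTORY.exists() else {}
        runs = history.setdefault(project, [])
        previous = runs[-1]["coverage"] if runs else None
        runs.append({"date": date.today().isoformat(), "coverage": coverage_pct})
        HISTORY.write_text(json.dumps(history, indent=2))

        if previous is None:
            print(f"{project}: {coverage_pct:.1f}% (first recorded run)")
        else:
            print(f"{project}: {coverage_pct:.1f}% "
                  f"({coverage_pct - previous:+.1f} vs. previous run)")

    if __name__ == "__main__":
        record_and_compare("billing-service", 72.4)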

8.  Foster a culture of open visibility
Today it's very common to see businesses adopt open source software, but it's fairly uncommon to see them adopt even the most basic practices that helped make open source software so successful. One of the core aspects of open source development projects is transparency - everyone can see what everyone else is doing, fixing, or breaking - and this encourages both peer accountability and recognition of individual contributions. One great way to begin "upstreaming" quality and visibility into your development practices is to take some cues from the open source model and infuse these principles into your organization. This is especially important with distributed development teams, where the right hand often doesn't know what the left hand is doing (because it's asleep when the other one's working). Commercial software development organizations would be well served by adopting more transparent practices to improve visibility among developers and, equally important, to give QA and management better and earlier visibility into development.

9. Monitor to enable continuous process improvement
Just as customer requirements change, software is ever-changing, and that means you have to review your build and test procedures regularly and make any necessary adjustments to adapt to these changes. Keeping track of key metrics like build and test times can help you identify trends and make informed decisions when you need to prioritize. With the right coverage data, you can even analyze the effectiveness of your test suites to make sure they're testing what you think they're testing. Just as continuous testing keeps small problems from growing into big ones, so does continuous process improvement. It's all about being informed and addressing any issues immediately (see Figure 2).
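
As a small example of that kind of monitoring, the sketch below logs each build's duration and flags a sustained slowdown; the CSV log, the ten-build window, and the 20% threshold are all illustrative choices rather than anything prescribed here.

    import csv
    from pathlib import Path
    from statistics import mean

    LOG = Path("build_times.csv")  # columns: build_number, seconds

    def record(build_number, seconds):
        """Append one build's duration to the log, writing a header on first use."""
        new_file = not LOG.exists()
        with LOG.open("a", newline="") as f:
            writer = csv.writer(f)
            if new_file:
                writer.writerow(["build_number", "seconds"])
            writer.writerow([build_number, f"{seconds:.1f}"])

    def check_trend(window=10, slowdown=1.2):
        """Warn if the last `window` builds take noticeably longer than the first."""
        times = [float(row["seconds"]) for row in csv.DictReader(LOG.open())]
        if len(times) < 2 * window:
            return
        recent, baseline = mean(times[-window:]), mean(times[:window])
        if recent > slowdown * baseline:
            print(f"Recent builds take {recent / baseline:.1f}x as long as the "
                  "baseline - time to review the build and test procedures.")

    if __name__ == "__main__":
        record(build_number=143, seconds=412.0)
        check_trend()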

10.  Don't ignore history! (Capture and mine data for valuable insights)
If you regularly build and test your software, but don't persist the results in a database, you're missing the chance to gain valuable insights into your development process and software projects. Even relatively simple queries can uncover previously unknown relationships or dependencies - finding out that half of your build failures occurred after someone touched an obscure configuration file would give you a pretty simple way to optimize your build process!
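
Even a single query over persisted results can surface that kind of relationship. The sketch below asks a hypothetical SQLite build-history database which files were touched most often in failed builds; the schema is an assumption made only to keep the query concrete.

    import sqlite3

    # Assumed schema: builds(id, status) and changes(build_id, file_path).
    QUERY = """
    SELECT c.file_path, COUNT(*) AS failures
    FROM builds b
    JOIN changes c ON c.build_id = b.id
    WHERE b.status = 'failed'
    GROUP BY c.file_path
    ORDER BY failures DESC
    LIMIT 10;
    """

    def top_failure_suspects(db_path="build_history.db"):
        """Return the files most frequently touched in failed builds."""
        with sqlite3.connect(db_path) as conn:
            return conn.execute(QUERY).fetchall()

    if __name__ == "__main__":
        for file_path, failures in top_failure_suspects():
            print(f"{failures:4d} failed builds touched {file_path}")

Swap in whatever build-results store you actually use; the point is that once the data is persisted, questions like this are one query away.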

These and other best practices can help ensure higher-quality products at lower cost. Granted, it's a departure from the traditional develop - test - deploy - hold-your-breath methodology, but with the right tools and solutions, early prevention of doomed projects is within reach.

Solutions for Upstreaming Quality - Open, Preventative, and Process-Driven
There is no single tool that can enable successful testing in development. Organizations can, however, embrace an open, continuous build-and-test automation system that integrates testing with existing version control processes. Such a system tests earlier, more frequently, and more consistently; isolates defects before they impact others; and gives organizations better visibility into code quality, stability, and compliance earlier in the lifecycle. To be non-intrusive, the solution must sit on the server side and integrate with version control to build, compile, and report on all of the latest changes that developers have checked in.

A solution for upstreaming quality and visibility must enable continuous integration and continuous testing in development. It should give developers immediate feedback on the impact of their changes by automatically building the software every time they check in. This gives all teams better visibility into the status of the project. In addition, early notification of issues, followed by the ability to isolate problems and solve them in quarantine, keeps problems at bay.

The key to adopting the 10 best practices I've outlined here is to implement a solution in the development phase that lets you embed test and measurement earlier in the application lifecycle. Testing early and often will improve software quality and predictability. Establishing process in the form of proven best practices helps organizations rally around "what works" and build on success. Identifying problems as early as possible saves countless hours of heartache. Finally, being able to report project status in real-time promotes a culture of transparency and provides visibility across development teams, QA, and management.

You can reduce the business risk of releasing applications of unknown quality or reliability with continuous testing. Executive confidence will increase because executives will have an accurate picture of projects, backed by extensive test coverage, and will be able to objectively measure the productivity and quality of internal and outsourced teams. Development will know where it stands and see how it is impacting the end result. Resources will be better allocated because issues will be addressed before they become major problems, so teams can focus on more strategic steps in the development process.

More Stories By Brad Johnson

Brad Johnson, director of product marketing for Lifecycle Quality Management, is responsible for the product strategy and marketing of Borland's Lifecycle Quality management solution. In this role, he is dedicated to improving the project success rate for IT teams with a comprehensive quality management solution that supports quality early in the lifecycle with complete, testable requirements, helps developers build higher quality code, and leverages powerful test automation to improve efficiency and reduce costs. Prior to joining Borland, Brad held senior-level positions in product management and product marketing at Mercury Interactive and Compuware. He earned a BS in business with a specialty in management information systems from the University of Phoenix.
