Continuous Integration with Team Foundation Server 2008

Widely accepted best practices easily incorporated into any development methodology

Keep the Build Fast
If every check-in triggers a build, the build clearly needs to be fast so that builds don't queue up (otherwise the option to accumulate check-ins and build less frequently may need to be considered). Regardless of how often builds are triggered, everyone wants fast feedback, so speed is desirable anyway. You want to keep builds fast so developers get nearly immediate feedback on their check-in. If they break the build, you want them to know before they make a mental context switch to another part of the code. You never want other developers to do a "Get" from version control and receive broken code; they'll probably waste time trying to fix it without realizing the breakage came from someone else's check-in.

TFS provides a lot of tools to keep builds fast. Build agents keep the build workload off the TFS server, so a dedicated build server can easily be established with whatever hardware is required to meet the desired SLA for completing a build. The 2008 version also introduced parallel compilation: setting the build agent's "MaxProcesses" value allows simultaneous compilation on multi-core or multiprocessor servers.
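As a minimal sketch (assuming the agent's settings live in TFSBuildService.exe.config on the build server, which is how a typical TFS 2008 agent is configured - verify the file path and key name on your own agent), enabling parallel compilation might look like this:

    <!-- Sketch: enabling parallel compilation on a TFS 2008 build agent.
         The MaxProcesses key and the TFSBuildService.exe.config location are
         assumptions based on a typical agent install; confirm on your server. -->
    <configuration>
      <appSettings>
        <!-- Allow up to four concurrent MSBuild processes on a quad-core agent -->
        <add key="MaxProcesses" value="4" />
      </appSettings>
    </configuration>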

If long-running builds are caused by ancillary tasks such as integration tests, performance tests, or packaging for deployment, it's possible to establish multiple build types. That way the slow tasks aren't included in the CI-triggered builds but are triggered separately. The usual point of contention here is the database, so if data-dependent tests are a bottleneck (particularly setup and tear-down, let alone the tests themselves), a "nightly build" is a good option for removing those tests from the CI test list.
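A rough sketch of how that separation might look in a TFS 2008 build type (the .vsmdi file and test list names below are placeholders): the CI build's TFSBuild.proj references only the fast test list, while a separate nightly build type lists the integration tests as well.

    <!-- Sketch: the CI build type runs only fast unit tests; a separate
         nightly build type would specify UnitTests;IntegrationTests instead.
         File and list names are illustrative placeholders. -->
    <ItemGroup>
      <MetaDataFile Include="$(SolutionRoot)\MyApp\MyApp.vsmdi">
        <TestList>UnitTests</TestList>
      </MetaDataFile>
    </ItemGroup>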

Test in a Clone of the Production System
This is easily done by letting automated tests run as part of the build. The configuration files for the test project would specify the appropriate resources (database connection strings, etc.) that point to the "production clone" environment.
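For example, a sketch of a test project's app.config pointing the automated tests at the clone environment might look like the following (the server, database, and connection-string names are placeholders, not anything from a real environment):

    <!-- Sketch: test project's app.config pointing automated tests at a
         "production clone" database. All names here are placeholders. -->
    <configuration>
      <connectionStrings>
        <add name="OrdersDb"
             connectionString="Data Source=SQLCLONE01;Initial Catalog=Orders_Clone;Integrated Security=True"
             providerName="System.Data.SqlClient" />
      </connectionStrings>
    </configuration>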

Again, the most common point of contention on this issue is the database. If it isn't practical to have a "clone" of the production database, the tooling in Visual Studio Team Edition for Database Professionals can aid in creating test databases with a controlled set of test data.

Make It Easy for Everyone to Get the Latest Executables
If the build master communicates the drop location for a build type to the team, and permissions are set properly, getting the latest executables couldn't be easier. The folder name will match the build name, which is listed in the Build Explorer in the IDE (or the TFS Web Access application). Since the build report is available to everyone, the entire team can easily get the appropriate build's binaries. (See Figure 3 for specification of the drop location.)

Everyone Can See What's Happening
This is where TFS really shines. The Build Explorer will show all the builds and their status with the familiar "red/green" indicators that Martin Fowler suggests. Double-clicking the build name will show a summary report, and if desired the Build Log can be viewed as well.

Furthermore, the results are incorporated into the TFS data warehouse. This includes not just the expected work item information, but also build and version control data. There is a rich set of reports out-of-the-box, and users can develop custom reports using Microsoft Reporting Services. These reports can be run from within the Visual Studio IDE, the Reporting Services Report Manager website, the Team Project SharePoint portal, or the Team System Web Access application. With all these options, reports can be shared not just with the development team, but with management and end users as well. (If desired, access can be limited using the appropriate authorization mechanism.)

Automate Deployment
Out-of-the-box, this is the one area in which TFS is lacking. TFS doesn't address deployment directly, a consequence of the simple fact that deployment is handled so differently across different types of development groups - small versus large organizations, as well as ISV versus consulting versus corporate versus Microsoft itself. Microsoft is well known for what it terms "dogfooding": using its own products to produce its software, and TFS is a prime example of that.

However, TFS provides plenty of building blocks for automated deployment. One option is to include a setup/deployment project in the build to produce MSIs. Some TFS users create special build types for deployment, with the specific deployment steps (stop an IIS application, copy binaries, restart IIS, etc.) implemented as custom targets and tasks in the MSBuild project.
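A minimal sketch of that approach, assuming a deployment-only build type whose TFSBuild.proj overrides the AfterDropBuild extensibility target (the server name and paths are placeholders, and a real script would add error handling and credentials):

    <!-- Sketch: deployment steps in a deployment-only build type.
         WEBSERVER01 and the share paths are placeholders. -->
    <Target Name="AfterDropBuild">
      <!-- Stop IIS on the target web server -->
      <Exec Command="iisreset WEBSERVER01 /stop" />
      <!-- Copy the dropped binaries to the web server -->
      <ItemGroup>
        <DeployFiles Include="$(DropLocation)\$(BuildNumber)\Release\**\*.*" />
      </ItemGroup>
      <Copy SourceFiles="@(DeployFiles)"
            DestinationFiles="@(DeployFiles->'\\WEBSERVER01\wwwroot\MyApp\%(RecursiveDir)%(Filename)%(Extension)')" />
      <!-- Restart IIS once the new binaries are in place -->
      <Exec Command="iisreset WEBSERVER01 /start" />
    </Target>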

Another option is TFS Deployer, an open source CodePlex application originally developed by the Australian consulting firm Readify. This application listens for TFS events and can trigger a PowerShell script. Typically this is implemented by listening for changes to the "Build Quality" field of a build. Since this field is customizable, a build quality can be created for each environment (QA, staging, production, etc.) and the appropriate deploy script executed for the matching build. A good introduction, along with the source, is available on the project's CodePlex site.
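Purely as a hypothetical illustration of the idea - this is not TFS Deployer's actual configuration schema - the heart of the approach is a mapping from build quality values to deployment scripts, conceptually something like:

    <!-- Hypothetical illustration only: a build quality mapped to a PowerShell
         deploy script per environment. Not TFS Deployer's real schema. -->
    <qualityMappings>
      <mapping buildQuality="Ready for QA"      script="deploy_qa.ps1" />
      <mapping buildQuality="Ready for Staging" script="deploy_staging.ps1" />
      <mapping buildQuality="Released"          script="deploy_production.ps1" />
    </qualityMappings>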

Summary
Regardless of whether a team uses agile or more traditional methodologies, integrating changes into a standard build as quickly as possible will prevent problems. It's a lot easier to fix code you just wrote than code you wrote 30 minutes ago, let alone the previous day or last week! A build that contains only one developer's changes since the previous build makes it obvious who "broke the build."

Clearly, TFS supports all facets of Continuous Integration - most very easily and out-of-the-box. Regardless of the development methodology, development teams of all sizes should consider Team Foundation Server. For users of TFS 2008, Continuous Integration - whether using the "textbook" definition or any variation desired - is within easy reach.

Creating a Team System Build Type
To start the wizard to create a new Build Type, simply right-click "Builds" under the appropriate Team Project and select "New Build Definition."

The first step is to name the new Build Type.

Next, select the Workspace. This specifies the portion of the version control tree that contains the source code files to be built.

Next, select the MSBuild Project File. By default, all Build Types are stored in the "Team Build Types" folder. An existing project file can be selected, or the Create button can be clicked to launch the sub-wizard that creates a new one (see Figure 1).

The first option of the sub-wizard is to select the solution(s) to build. This dialog shows all solutions under the version control location selected in the previous step. As you can see, "Select All" is available, so multiple solutions can be built in a single build type.

Next, Build Configurations can be selected. Again, multiple configurations can be built with the same Build Type.

Figure 2 shows the selection of tests to be run (and the specification of test lists), as well as code analysis, if desired.
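The selections from these sub-wizard steps end up as items and properties in the generated TFSBuild.proj. A rough sketch of the generated fragment follows (the solution path and configuration are placeholders for this example; the exact layout of the generated file may differ):

    <!-- Sketch: the kind of fragment the sub-wizard writes into TFSBuild.proj.
         The solution path and configuration below are placeholders. -->
    <ItemGroup>
      <SolutionToBuild Include="$(BuildProjectFolderPath)/../../MyApp/MyApp.sln" />
    </ItemGroup>
    <ItemGroup>
      <ConfigurationToBuild Include="Release|Any CPU">
        <FlavorToBuild>Release</FlavorToBuild>
        <PlatformToBuild>Any CPU</PlatformToBuild>
      </ConfigurationToBuild>
    </ItemGroup>
    <PropertyGroup>
      <!-- Code analysis as selected in Figure 2 -->
      <RunCodeAnalysis>Default</RunCodeAnalysis>
    </PropertyGroup>

The test selections appear as a MetaDataFile item with a TestList, of the kind sketched earlier in the "Keep the Build Fast" discussion.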

After the MSBuild Project File is specified (or created with the sub-wizard), the retention policy is defined.

Next the Build Agent and Drop Site are specified as shown in Figure 3.

Lastly, the "Trigger" is defined. In this example, we're building no more often than every half-hour (see Figure 4).

The build has run and shows Partial Success.

The Build Report shows some tests failed - more work to do (see Figure 5)! After the changes are checked in, a new build will be triggered automatically (or, in this case, no sooner than a half-hour after the last build).

More Stories By Daniel Sniderman

Daniel Sniderman first learned to program in FORTRAN in high school in the late '70s using a keypunch machine. He has a BA in History from the University of Illinois at Urbana-Champaign, and an MCSD.NET and MCTS in Team Foundation Server. Dan has been a senior consultant with Magenic since 2004.
