
When You Can't Squeeze Your App | @DevOpsSummit #DevOps #Microservices

It’s time to look to the network

Developers are often caught between a rock and a hard place. They aren’t allowed to employ the tricks of the trade that can squeeze more performance out of their code because the consequences (technical debt stemming from impaired maintainability) are generally considered even worse. It’s not appropriate, for example, to use bit-shifting techniques to do simple multiplication because while it might be fractionally faster, it isn’t universally understood and thus can cause issues with long-term maintainability.
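For illustration, here’s the kind of trick in question, sketched in Python: a shift-based multiply that computes the same result as the plain version but is harder to read (and in most managed languages, no faster in practice).

```python
def mul_by_8_shift(x: int) -> int:
    # The "clever" version: multiply by 8 via a left shift.
    # Marginally faster in some compiled languages, opaque to many readers.
    return x << 3

def mul_by_8_plain(x: int) -> int:
    # The readable version: states its intent directly. An optimizing
    # compiler can emit the shift itself where it actually helps.
    return x * 8

print(mul_by_8_shift(7), mul_by_8_plain(7))  # both print 56
```

The two functions are interchangeable; the difference is entirely in what the next maintainer has to decode.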

While we (developers) like to joke that “if it was hard to write, it should be hard to read,” in the real world that’s not feasible. Coding standards exist to ensure some semblance of uniformity across time and developers, and to reduce the time it takes to troubleshoot, modify, and enhance code in the future.

So it is that optimization is often de-prioritized in favor of readability. Because until it matters, optimization doesn’t matter. Newcomer said it best:

Optimization matters only when it matters. When it matters, it matters a lot, but until you know that it matters, don't waste a lot of time doing it. Even if you know it matters, you need to know where it matters. Without performance data, you won't know what to optimize, and you'll probably optimize the wrong thing.

The result will be obscure, hard to write, hard to debug, and hard to maintain code that doesn't solve your problem. Thus it has the dual disadvantage of (a) increasing software development and software maintenance costs, and (b) having no performance effect at all.

-- Dr. Joseph Newcomer, “Optimization: Your Worst Enemy”

There will undoubtedly, then, come a point in the balancing act between performance and maintainability where maintainability is going to win, even if the business demands more performance. That’s because performance has real business impacts.

Consider that “three hundred million dollars was spent to lay a high-speed fiber optic cable from the futures market in Chicago to the exchanges in New Jersey to improve the speed of stock trading (particularly high-frequency trading — HFT) by 3 milliseconds.” (What is worth 100 million per millisecond?)

That’s 100 million dollars per millisecond.

Now most organizations (like yours, probably) work in much bigger slices of time, like 100 milliseconds. A blink of your eye takes about 100 milliseconds. That still doesn’t seem like a lot, but it turns out that 100 milliseconds can actually be worth (many) millions.

And even if you’ve managed to squeeze every last millisecond out of your code, you may still be underperforming. That’s because apps are not islands. They have to be deployed on platforms and operating systems. They have to use the networking stack that’s given to them, whether it’s optimized for their app or not.

And sometimes it isn’t. And sometimes it can’t be, because you don’t have the information you need. A classic example is the use of HTTP compression. In the right situation, over a low-bandwidth (WAN or mobile) connection, it can be instrumental in improving the performance of an app or web site as perceived by the end user. In the wrong situation, over a high-bandwidth (LAN) connection, it can actually degrade (yes, as in ‘make worse’) performance.
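A rough sketch of that trade-off: compression only pays off when the transfer time it saves exceeds the CPU time it costs, and that depends entirely on link speed. The payload and the 5 ms compression cost below are illustrative assumptions, not measurements.

```python
import gzip

def transfer_ms(nbytes: float, mbps: float) -> float:
    # Milliseconds to move nbytes over a link of mbps megabits/second.
    return nbytes * 8 / (mbps * 1_000_000) * 1000

# A repetitive ~54 KB HTML-ish payload; gzip shrinks it dramatically.
payload = b"<html>" + b"lorem ipsum dolor sit amet " * 2000 + b"</html>"
compressed = gzip.compress(payload)

# Hypothetical server-side CPU cost to compress this payload.
compress_cost_ms = 5.0

for label, mbps in (("mobile (~2 Mbps)", 2), ("LAN (10 Gbps)", 10_000)):
    saved = transfer_ms(len(payload), mbps) - transfer_ms(len(compressed), mbps)
    verdict = "worth it" if saved > compress_cost_ms else "not worth it"
    print(f"{label}: compression saves {saved:.3f} ms in transfer -> {verdict}")
```

With these numbers, the mobile link saves on the order of 200 ms (easily worth the CPU cost) while the LAN saves a few hundredths of a millisecond, so the same compression setting makes one group of users faster and the other slower.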

Going all in on either option means necessarily condemning one group of users to delays.

That’s why it’s sometimes important to move upstream, from the app and its assigned network stack to the proxy in front of it, to optimize for performance. The proxy’s networking stack was designed to be tweaked: to be configured on a per-app basis and to provide specific TCP and HTTP optimizations that boost the performance of apps and APIs alike. When it’s a full proxy it can double down on those optimizations by tweaking its two separate networking stacks, one facing clients and one facing servers. It can also leverage additional caching and provide offload capabilities that reduce the load on the apps and APIs themselves, which in turn helps keep performance ideal.
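As a toy illustration of why those separate stacks matter (plain Python sockets, not any particular proxy product): each side of a full proxy has its own socket, so TCP behavior such as Nagle’s algorithm can be tuned independently, favoring low latency toward clients while letting the kernel coalesce writes toward servers on the LAN.

```python
import socket

def proxy_side_socket(nodelay: bool) -> socket.socket:
    """One side of a hypothetical full proxy, with its own TCP tuning."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # TCP_NODELAY disables Nagle's algorithm: good for latency-sensitive
    # small writes, wasteful when the kernel could batch packets instead.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1 if nodelay else 0)
    return s

# Client-facing side: favor low latency for end users.
client_side = proxy_side_socket(nodelay=True)
# Server-facing side (fast LAN): leave Nagle on to coalesce writes.
server_side = proxy_side_socket(nodelay=False)

print(client_side.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY),
      server_side.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
```

An app bound to a single kernel stack gets one setting for everyone; the proxy can hold both settings at once, one per connection.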

Sometimes there’s just nothing more a dev (or a DevOps team) can do to squeeze another 100 milliseconds out of the app, API, or its app infrastructure. When that happens, it’s time to look upstream, to the proxy, to provide the boost the business needs.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.