Don’t Cry Wolf – Know your Performance Contexts
Posted in: Ecommerce   -   August 22, 2012

I recently worked on a project with a performance testing team who reported that the average response time for a critical transaction had grown to just over three seconds of page-complete time, as measured by a transport-level load testing tool. The team immediately got into a huddle to consider whether we could give a GO or NO-GO on this release, under significant pressure to defend the business requirements for conversion (e-commerce) and end-user experience. A NO-GO on this release would be a serious show-stopper.

Here’s a rundown of our observations during the escalation:

  • The test data came from the tool’s web virtual user, which measures at the transport layer; that isn’t accurate for a rich web presentation layer that makes asynchronous calls.
  • The root cause of the performance issue was in-line loading of resources (JavaScript and CSS), which we diagnosed and cross-checked in Chrome, Firebug, and the Charles proxy. The problem was worst on IE7, a browser with well-known rendering limitations.
  • The investigation revealed that we only needed to debug client-side performance for this release, though any load-related scalability issues would make things worse: single-user baselines don’t capture multi-user latency or bottlenecks.
  • We learned that the real response-time measurement was not “page load complete” but a point in the script where a load mask (a grey transparent cover with an animated spinner) was lifted so that an end-user could continue with their shopping, purchase, and checkout.
  • The checkout business process was extremely important to the business (no surprise), so even a slight 500 ms increase in end-user flow through the cart would be something the stakeholders wanted to know about.
  • Development already knew much of the technical root cause of this issue, but they relied on the performance team to provide an official measurement from testing in the performance environment.
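That load-mask moment can be instrumented directly in script with the User Timing API (performance.mark and performance.measure), which works both in browsers and in Node.js. A minimal sketch, with hypothetical mark names of my own choosing rather than anything from the actual project:

```javascript
// Sketch: timing from the start of the checkout render to the moment the
// load mask is lifted, using the User Timing API. The mark and measure
// names here are illustrative examples, not from the project described.
performance.mark('checkout-render-start');

// ... application work happens here: fetch cart contents, render the
// page, then remove the grey load-mask overlay ...

performance.mark('load-mask-lifted');

// Measure the interval between the two marks.
performance.measure('time-to-usable', 'checkout-render-start', 'load-mask-lifted');

const [entry] = performance.getEntriesByName('time-to-usable');
console.log(`Time to usable: ${entry.duration.toFixed(1)} ms`);
```

In a real page, the second mark would be placed in the callback that removes the overlay, so the measurement reflects the moment the user can actually continue shopping rather than the browser’s generic load event.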

All that work was really valuable, we thought.  Of course, when we checked the website analytics we noticed that fewer than 7% of end-users were still on IE7, and even fewer of them were on slow routes to the website.  We had cried wolf about a performance issue without fully understanding its whole context.

The First Lesson Learned: don’t just think of a performance issue technically…expand your thinking to include the whole context for the end-user population.

The Second Lesson Learned: don’t just use your load testing tool like you always have (at the transport level)…you have to include full browser rendering to accurately measure performance.
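One way to get that render-inclusive number is the W3C Navigation Timing API, which browsers expose as window.performance.timing. A sketch of the arithmetic; the helper function name is my own illustration, not from any specific tool:

```javascript
// Sketch: computing a render-inclusive "page complete" time from the
// Navigation Timing API. Both fields are epoch-millisecond timestamps.
function pageCompleteMs(timing) {
  // loadEventEnd fires after the page's load event (parsing, rendering,
  // and synchronous resource loading included); navigationStart is when
  // the navigation began.
  return timing.loadEventEnd - timing.navigationStart;
}

// In a browser you would pass the real object:
//   console.log(pageCompleteMs(window.performance.timing));

// A fabricated example object, just to show the arithmetic:
const sample = { navigationStart: 1345600000000, loadEventEnd: 1345600003120 };
console.log(pageCompleteMs(sample) + ' ms'); // 3120 ms
```

Unlike a transport-level virtual user, this number includes parse and render time in the actual browser, which is exactly the part our tool was missing.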


Mark Tomlinson (@mtomlins) is a performance engineering and software testing consultant. His career began in 1992 with a comprehensive two-year test for a life-critical transportation system, a project that sparked his interest in software testing, quality assurance, and test automation. That first test project sought to prevent trains from running into each other, and Mark has metaphorically been preventing “train wrecks” for his customers for the past 20 years. He has broad experience with real-world scenario testing of large and complex systems and is regarded as a leading expert in software test automation, with a specific emphasis on performance.


  • http://blog.justindorfman.com jdorfman

    Great post, thanks for sharing this in-depth rundown.

  • http://twitter.com/ionutzp Ionut Popa

    There’s not much information in this post; it’s too general, and your conclusions bring no value. Sorry.

    The First Lesson Learned: don’t just think of a performance issue technically…expand your thinking to include the whole context for the end-user population. -> of course nobody will optimize for users they don’t have

    The Second Lesson Learned: don’t just use your load testing tool like you always have (at the transport level)…you have to include full browser rendering to accurately measure performance. -> I think this is just too obvious for anyone reading this; there are three aspects of web performance: network time, application time, and client time. And as Steve Souders pointed out, the most visible impact of optimization is in the browser.