When 80/20 Becomes 20/80
Posted in: Monitoring, Performance Testing - August 2, 2013

You’ve probably heard the 80/20 Rule of web performance: typically, 80% of page load time is on the front-end, and only 20% on the back-end. While that’s generally true, there are some important things to watch out for.

Caveats to the 80/20 Rule

The 80/20 Rule is supported by data gathered mainly from the home pages of top Internet sites. However, there’s a lot of selection bias here since home pages are usually static content. Static content, by definition, involves no significant back-end processing — so of course on pages like this, the back-end is going to account for a very small slice of overall response times.

More importantly, the 80/20 observation is based on normal load conditions. During a traffic spike, back-end response time can play a much bigger role.

Front-End vs. Back-End Performance

Front-end response time tends to be relatively constant regardless of user load. At normal load, client-side resource loading and rendering really does account for 80-90% of the overall response time. However, as load increases, this ratio changes.

The back-end of any web application has many potential bottlenecks. Insufficient memory to work with, thread contention, or hitting the limit on database connections? Slow disks, the wrong kind of database locking, or missing indexes on a table that needs them? A web server with an artificially low thread count or max connections? These are just a handful of the many possibilities.
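To make one of these concrete, here's a minimal, hypothetical simulation (not from the original post) of a fixed-size resource pool, say a database connection pool capped at 10 connections. The pool size and timings are invented; the point is only that once concurrent requests exceed the pool, the extras have to wait in line and average latency climbs.

```python
import concurrent.futures
import time

POOL_SIZE = 10        # hypothetical cap, e.g. max database connections
WORK_SECONDS = 0.05   # back-end work per request when there is no contention

def average_time_in_system(concurrent_requests):
    """Average time from arrival to completion when all requests arrive at once."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
        arrived = time.perf_counter()
        futures = [pool.submit(time.sleep, WORK_SECONDS)
                   for _ in range(concurrent_requests)]
        # Record the clock as each request finishes; anything beyond the pool
        # size has been queuing, so its time in the system is much longer.
        latencies = [time.perf_counter() - arrived
                     for _ in concurrent.futures.as_completed(futures)]
    return sum(latencies) / len(latencies)

if __name__ == "__main__":
    for n in (10, 50, 200):
        print(f"{n:>4} concurrent requests -> ~{average_time_in_system(n) * 1000:.0f} ms average")
```

With 10 workers, 10 requests finish in roughly the 50 ms of work each one needs; at 200 requests, the average balloons to hundreds of milliseconds even though no single request got any "heavier."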

As soon as the load increases such that a single back-end factor becomes a bottleneck, you’ll see a dramatic slowdown in performance that affects all your users.

Back-end performance is a higher stakes game

Web performance optimization, more than anything, is about mitigating risk.

The business risk of poor performance, obviously, is that with every extra millisecond of page load time, there’s an increased chance that your users will abandon you. Many advocates of the 80/20 Rule say that, for this reason, we should focus on the front-end, because that’s normally where the biggest gains are to be made.

But let’s not forget that there are actually two components to risk: the probability of the bad thing happening, and the severity of the bad thing if it does happen.

I would argue that the severity of back-end performance problems under heavy load can be far more crippling than the more predictable performance degradation we see on the front-end.

Serving an uncompressed copy of jQuery or a few unoptimized JPEGs, while annoying, is not going to crash your site. But if a single back-end bottleneck gets saturated, everything grinds to a halt. An unexpected surge of traffic might be all it takes to bring your site down completely.

For us as site owners, high-traffic events like these actually present our biggest opportunity: the most potential sales if you’re a retailer; the most potential exposure if you’re a blogger; and so on. There is more to gain during a high-traffic event than at any other time.

However, that means there’s also a lot more to lose during a high-traffic event if we fail to scale. More users to annoy, more sales to lose, and more bad publicity in the event of a crash.

The severity of a problem caused by high traffic is amplified by the fact that it always comes at the worst possible time!

So why aren’t we doing more optimization for scalability?

Scalability is complicated because it adds another dimension to web performance optimization. It takes what’s mostly an arithmetic problem (the cumulative, additive effect of each part of your page loading in sequence or in parallel) and turns it into something far more complex (queuing, concurrency, and throughput, driven by competition for shared back-end resources).
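As a rough illustration of that difference (my own sketch, not the author's), the snippet below contrasts the two models: a purely additive front-end total versus a textbook M/M/1 queuing approximation for the back-end, where response time grows non-linearly as utilization approaches 100%. The resource times and service time are made-up numbers chosen only to show the shape of the curve.

```python
# Front-end: roughly additive -- resource times simply sum (or max out, in parallel).
resource_times_ms = [120, 80, 60, 40]   # hypothetical CSS, JS, and image downloads
print(f"Front-end total (sequential): {sum(resource_times_ms)} ms, at any traffic level")

# Back-end: queuing -- M/M/1 mean response time = service_time / (1 - utilization).
service_time_ms = 50                    # hypothetical back-end time with no contention
for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    response_ms = service_time_ms / (1 - utilization)
    print(f"Back-end at {utilization:.0%} utilization: ~{response_ms:.0f} ms")
```

The front-end number stays put no matter how many people show up; the back-end number doubles between 50% and 75% utilization and explodes as you approach saturation.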

It takes some mind stretching to factor in this extra dimension of scalability. It also takes planning and measurement. Because back-end performance and scalability problems have the potential to be so severe, it’s that much more important that we plan ahead, test for them whenever we can, and have the tools and visibility at our disposal to react quickly when they do occur.
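In that spirit, here's a bare-bones sketch of what "test for them" can look like: a script that fires concurrent requests at an app you control and reports how median and 95th-percentile latency move as concurrency grows. Real load testing tools do far more than this; the target URL and concurrency levels here are placeholders, not a recommendation.

```python
import concurrent.futures
import statistics
import time
import urllib.request

TARGET_URL = "http://localhost:8080/"   # placeholder -- point this at your own app

def timed_request(_):
    # Time a single request, end to end.
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=30) as response:
        response.read()
    return time.perf_counter() - start

def run_load(concurrency, requests_per_worker=10):
    total = concurrency * requests_per_worker
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_request, range(total)))
    p95 = latencies[int(len(latencies) * 0.95)]
    print(f"concurrency={concurrency}: median={statistics.median(latencies) * 1000:.0f} ms, "
          f"p95={p95 * 1000:.0f} ms")

if __name__ == "__main__":
    for concurrency in (5, 25, 100):
        run_load(concurrency)
```

If the p95 stays flat as concurrency climbs, you have headroom; if it bends sharply upward, you've found the knee of the curve before your users do.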

Steve Souders and many others have done an outstanding job these past few years, evangelizing for front-end performance optimization and sharing best practices with the community. Unfortunately, we talk relatively little about back-end performance and scalability, and the extremely costly problems that can result if the back-end fails to perform well and scale.

And so, to start off the discussion (and only partly in jest), I’d like to propose another 80/20 Rule…

80% of site outages are caused by a failure on the back-end!


Andy Hawkes is the creator of Loadster, a cloud-hybrid solution for load and stress testing web applications and services. A software developer and consultant by trade, performance has been a recurring theme throughout Andy's career. He lives in the Arizona desert during the cool months and escapes to the mountains or the coast during the summer. Time is precious, but Andy can occasionally be found wasting it on Twitter as @azhawkes.
