In this post I’ll go through the brief history of the project to help you understand what it takes to make things like this happen. In addition I’ll discuss the technical side.
How hard can it be to build a portal like this? Apparently not as easy as we envisioned. First of all, how do you measure CDN performance in a meaningful way? Just having a single server ping the services isn’t enough, as that leaves one of the core features of CDNs untested. They are all about global performance, after all.
As we didn’t have the resources to build something sufficiently robust, we decided to contact Pingdom and see if they would be willing to collaborate with us. Pingdom is a Swedish company that specializes in this sort of thing, so it was a natural fit. They agreed to the scheme and gave us access to their service. They also interviewed us for their blog. I don’t think the service would exist without their support.
I was responsible for the technical part. Michael and Dmitriy in particular helped a lot with the design. Not having to work alone makes a big difference: it keeps you motivated and the ball rolling.
Getting There Technically
It took plenty of technical and design effort to pull it off. Before the official launch in early June, quite a few things happened. We ended up with a very utilitarian design that gets straight to the point. The portal itself isn’t very popular at the moment, but at least it gets its job done.
The initial idea was very crude, as illustrated by the first screenshot. The basic idea is there: you can see some filters, a graph, and a legend above it. As the work progressed, we decided to move the legend to the right side and add summaries to it. This is illustrated by the second screenshot.
As there was still some redundancy around and the layout felt a bit messy, we settled on the approach we use currently. We decided to ditch the separate toggles and merge them with the legend, which was moved to the top. In addition, we integrated the help texts into the user interface in a more fluent manner. Now you see help only when you need it, not all the time.
To make it easier to link to the graph you want, we used Backbone’s router component. It’s a nice little touch: if you notice something peculiar, all you need to do is make the graph look right and copy the URL. There are also some resources to help you get started with CDNs, and even a small API should you want to access the latest data.
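The idea behind the linkable graphs can be sketched in plain JavaScript. This is not the actual CDNperf code, and the state fields (`type`, `period`) are made up for illustration; it just shows the serialize/parse round trip that a router like Backbone’s handles for you.

```javascript
// Minimal sketch (not the real implementation): turn the graph state
// into a URL fragment and back, so a copied URL restores the view.
function stateToHash(state) {
  // e.g. {type: 'http', period: '30d'} -> '#type=http&period=30d'
  return '#' + Object.keys(state)
    .map(function (key) {
      return encodeURIComponent(key) + '=' + encodeURIComponent(state[key]);
    })
    .join('&');
}

function hashToState(hash) {
  // Parse '#type=http&period=30d' back into an object
  var state = {};
  hash.replace(/^#/, '').split('&').forEach(function (pair) {
    var parts = pair.split('=');
    state[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1]);
  });
  return state;
}
```

With Backbone, the router listens for hash changes and re-renders the graph from the parsed state, which is what makes the copied URLs work.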
Technically speaking, the project is based on Node.js. As our site is static by nature, we can put an nginx reverse proxy in front of it. In addition, we use CloudFront for extra caching. I have no doubt the site would work adequately without these precautions, but this is all about performance, right?
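A setup like that might look roughly as follows. This is a hypothetical sketch, not our actual configuration; the port, paths, and cache times are assumptions.

```nginx
# Hypothetical sketch -- not the actual CDNperf config.
# nginx terminates requests on port 80, proxies to the Node.js app,
# and caches responses briefly since the content is static.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cdnperf:10m;

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_cache cdnperf;
        proxy_cache_valid 200 10m;
    }
}
```

CloudFront then sits in front of this as yet another cache layer, so very few requests actually reach Node.js.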
The site has been fairly well optimized in other ways too. It follows most of the recommendations provided by Google PageSpeed Tools and the like. There isn’t much more we could do, and it wouldn’t matter much at this point. The site is fast enough as is.
The frontend is the usual jQuery soup combined with Backbone for routing. We settled on Chart.js for drawing the charts. The charts look okay, but let’s say the Chart.js API wasn’t a pleasure to work with. It isn’t particularly extensible and comes with a lot of baggage. This is understandable given it’s still in the early stages of development, and I think the author plans to split the library up.
The frontend relies on a JSON file generated by the backend. It contains the relevant data, which is then formatted for the user interface. It is a nice and simple architecture. Why complicate things unless you really have to? We have set up a cronjob that fetches the data from Pingdom. I wrote a small library known as pingdom-api for this particular purpose.
If we wanted, we could ditch Node.js from the production version apart from this bit. The content is static after all, and the dynamic part of the system is minimal. As I mentioned earlier, it’s all served through a couple of proxies, so we’re fine with the current solution: static content is served through Node.js and then cached by the nginx reverse proxy.
The final part of our stack, hosting, was kindly provided to us by BlueVM. Big thanks!
If you are interested in the technical details, check out our repository on GitHub.
CDNperf is far from a complex site. I hope this post gave you some idea of how to proceed with something like this. And of course I hope you find our service useful! We are willing to grow it further.
If you have questions about the technical side or want to share development ideas with us, comment below!
Juho (@bebraw) is a business-oriented web developer with an artistic twist. He participates in open source development through various projects of his own. One of these, jswiki, led to the birth of jster.net. Currently he is working as a freelancer.