Intermezzo about stability and compliance

When building a large-scale, high-availability, high-performance system you will need to scale horizontally in number of nodes, rather than boosting a single node with higher-performing hardware. You will also need to find components that do not limit the capabilities of the underlying hardware. In a system with 1000 nodes, a 30% increase in performance means you need roughly 230 fewer nodes (1000 / 1.3 ≈ 770), which also reduces data center cost, power consumption and maintenance cost. This adds up to a serious amount of time and money.

If you run a small-scale solution, the 30% quite often means little or nothing. Many will go for the “camera with the most megapixels” regardless of what they are going to use it for or the actual quality of the pictures.

Assuming your system is not overloaded, what decides its perceived quality is that it actually works: that there is no downtime and, of course, that the integrity of the data isn’t compromised.

I wrote in a comment regarding benchmarks that, having worked with computer security for more than 15 years, I have trust issues with closed source components. I also wrote: “Of course there are plenty of excellent products that aren’t open source, and not seldom they are more mature, stable and well written than the open source alternatives, but when they are on par I personally prefer to build on open source when possible since it gives me more control. But this is just my own preference, I’m not saying it applies to anybody else.”

G-WAN

Continue reading Intermezzo about stability and compliance


Benchmarking the benchmarks (part 2)

This is a continuation of the previous post

Scaling the benchmark tool

We were able to improve on the original “ab” benchmark quite a bit, especially for large files, but as the authors of both “weighttp” and “G-WAN” point out, the benchmark tool itself is only running on a single core. Here we leave “ab” behind since it has no multi-core capabilities.

“weighttp” is built to scale over a number of cores using the “-t” option, which specifies the number of threads to run.

“pounce” will (now) by default spawn a process per core, but this can be overridden with a “-d” option similar to the one above.
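A typical multi-threaded run with weighttp then looks something like the sketch below. The URL, request count and concurrency level are placeholders rather than the exact numbers used in the runs; the point is simply that “-t 4” puts one load-generating thread on each of the 4 cores.

```
# Spread load generation over all 4 client cores.
# -n: total number of requests, -c: concurrent connections,
# -t: worker threads, -k: use HTTP keep-alive.
# Host and file are placeholders for the server under test.
weighttp -n 1000000 -c 100 -t 4 -k "http://192.168.0.10/index.html"
```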

Optimization

If we are going to try to push the daemons we likely need to optimize the configuration. I’m going to use a few common recommendations but not dive too deeply into this. Please note that the configurations are based on the fact that we have 4 cores on this system.
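To give an idea of what these recommendations look like, here is a minimal sketch of an nginx tuning for a 4-core machine; the exact values are illustrative, not the precise configuration used in the runs.

```
# nginx: match workers to the 4 cores and raise the connection limit.
worker_processes  4;

events {
    worker_connections  4096;
}

http {
    sendfile      on;   # serve static files via the kernel sendfile() path
    tcp_nopush    on;   # fill packets before sending (with sendfile)
    access_log    off;  # logging skews benchmark numbers
}
```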

Continue reading Benchmarking the benchmarks (part 2)

Benchmarking the benchmarks (part 1)

Why another benchmark

I’ve spent a lot of time benchmarking over the last years while building distributed systems and working with cost/performance optimization: everything from infrastructure, hardware and storage to database and application solutions.

An understanding that gradually comes to you along the way is that benchmarking is actually very difficult. That is to say, benchmarking in itself is easy, but producing valuable data is not. The results seem to vary from “does at least give an indication” at best to “totally useless in real-life scenarios”, with the latter occurring more often.

The reason is of course that there are just too many variables and unknowns, and to add to the difficulty, some of them are quite complicated to simulate realistically. To be able to produce any data at all we make a lot of assumptions, simplify as much as possible and keep maybe one or two free variables, hoping that the result will at least to some degree reflect what we want to see.

Keeping this in mind it is of course obvious that you can’t put a lot of trust in a single benchmark, and even more obvious that you likely can’t trust someone who benchmarks with an agenda at all. Lying with benchmarks is as easy as lying with statistics: you just pick the set of assumptions and fixed variables where you perform at your best and your opposition at their worst. Knowing who has an agenda can be difficult, but someone who is benchmarking their own product, well, maybe has one…

This being said, I spent some time looking at the web proxy Varnish this summer, and since I was curious about the potential performance gain I did some benchmarks and decided to share them. I will actually redo them to make them a bit more up to date, and I will probably skip Varnish itself since it is a somewhat different solution than a pure web server.

So this will be just another benchmark of web distribution of static content. If nothing else it will be an additional, and for a brief period the most recent, indication of the performance of web server daemons running on Linux. Hopefully there will also be a few valuable thoughts along the way.

Software tested

I will benchmark

  • Apache v2.2.21 – The old work horse
  • Nginx v1.1.5 – Probably the most common Linux alternative to Apache
  • Cherokee v1.2.99 – “The fastest free Web Server out there”
  • Lighttpd v1.4.29 – That will “scale several times better with the same hardware than with alternative web-servers.”
  • G-WAN v2.10.6 – According to the vendor, the silver bullet that makes all other software, regardless of purpose, obsolete (and will cure disease and solve the world’s conflicts along the way)

Continue reading Benchmarking the benchmarks (part 1)

First time post

Having worked as chief architect for a large European content distribution network for the last seven years, I’ve mostly used software from the open source community, without having any real time or option to give back or contribute myself. Now the game plan has changed somewhat, so hopefully I’ll be able to dust off some old code or even finish some new projects.