Since I got a new machine (Dell XPS 15) last week, I decided to re-run my Helios load tests along with a few other usage scenarios. I was pleasantly surprised by the improved performance, which is up by over 30%, but I also found some odd behavior in a few of the Web API related load tests. This post is kind of a rambling review of some of the stuff I found – rambling because I’m not quite sure how to account for some of the results.
So, last year I posted a blog entry on ASP.NET raw throughput performance, which aimed to measure basic request throughput for the various ASP.NET frameworks available today. A couple of weeks ago I added Helios to the mix and found that it improved performance quite a bit. Last week I also got a new machine, and as I set it up I re-ran some of the tests I had previously run and found that performance on this new Dell XPS 15 laptop is over 30% faster than on my previous XPS 15 (both of which have quad-core CPUs rated at the same speed, and the same SSD as before).
Here are the numbers I ended up with:
The numbers are not all that different from my original Helios tests – just faster. The spread between technologies stays roughly in the same range. But for a laptop these numbers are pretty amazing. I ran these tests about 20 different times, and while the overall top values varied a bit, the relationship between the different technologies stayed just about constant.
I also updated the parser to show errors – note that there are a bunch of errors in the JSON results, which is actually OK. ab.exe, which I use for the load tests, detects that the responses are not of identical length and flags those requests. This is normal here, as the result contains a JSON date that varies in length depending on the millisecond count.
Anti-Virus Seriously Affects Performance
Even before I upgraded to the new machine, I had by chance turned off Windows Defender and retested performance. I was having some odd issues in another Web application where files were mysteriously disappearing, so I turned off Defender. While I had it off I re-ran the ASP.NET perf tests – performance immediately jumped by 30%! The numbers above reflect that 30% gain from removing the Defender penalty, plus another 30% on top of the numbers I published in the original performance article. This is a significant performance hit to take, and AFAIK Windows Defender is one of the less intrusive/resource-hogging AV applications.
I suppose on a desktop machine this hit is not a serious problem, but on a server it might be worthwhile to see whether the server AV product has a similar effect – and if it does, tweak the AV so it doesn’t affect network traffic so heavily.
For those of you on Twitter, you might have seen some Web API related posts from me over the weekend. While testing on the new machine I mentioned some numbers, and a few people chimed in with Web API questions – how are you returning results, and so on. So I added a few more tests to check returning results in a few different ways:
- Returning a Typed Result
- Returning an HttpResponseMessage created with Request.CreateResponse()
- Returning an HttpResponseMessage created explicitly in the controller with a MediaTypeFormatter
Quick, which do you think is faster?
Here’s what the three different methods look like:
public Person HelloWorldJsonTypedResult()
{
    return new Person();
}

public HttpResponseMessage HelloWorldJsonCreateResponse()
{
    var response = Request.CreateResponse<Person>(HttpStatusCode.OK, new Person());
    return response;
}

public HttpResponseMessage HelloWorldJsonManualResponse()
{
    var response = new HttpResponseMessage(HttpStatusCode.OK);
    response.Content = new ObjectContent<Person>(new Person(), new JsonMediaTypeFormatter());
    return response;
}
If you look at the performance results of the benchmark test shown earlier, you’ll find that the order from fastest to slowest is:
- Manual Response Creation
- Typed Result
- String Result
It’s also surprising that typed results, and string results in particular, are quite a bit slower – 5% to over 10%, actually. I would have thought that string output would be one of the easier things to produce, especially given that there’s no object serialization required.
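For reference, the string result variant mentioned above isn’t shown in the code listing. A minimal sketch of what such an action might look like – the method name, the JsonConvert call, and the use of StringContent here are my assumptions for illustration, not the exact code from the test project:

```csharp
// Hypothetical sketch: return a pre-serialized JSON string so the
// framework only has to write out raw string content.
public HttpResponseMessage HelloWorldJsonStringResult()
{
    // Serialize manually with JSON.NET (assumed serializer choice)
    string json = JsonConvert.SerializeObject(new Person());

    var response = new HttpResponseMessage(HttpStatusCode.OK);
    response.Content = new StringContent(json, Encoding.UTF8, "application/json");
    return response;
}
```

Even though this skips Web API’s content negotiation and formatter pipeline for the body, the measurements above suggest the string path still ends up slower than the manual ObjectContent approach.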
Self Host – Sloooooow
Another surprising thing is that self-hosting is slow using the same controller as the Helios example. It’s very nice that you can reuse pretty much the exact code from the Helios sample with a self-hosted server – only a minor change in the startup class is required. But performance – at least on this Windows 8.1 desktop machine – is roughly half that of the Helios IIS example. IIS is clearly optimized for HTTP operations, but I expected that Http.sys based self-hosting would be at least close to, if not faster than, IIS, since there’s no overhead at all going directly to the Web API or plain OWIN endpoints. Just raw Http.sys.
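For context, the minor startup change mentioned above amounts to hosting the same Web API configuration via the OWIN self-host packages. This is a sketch, not the test project’s actual code – the base URL, route template, and class names are illustrative assumptions:

```csharp
// Minimal OWIN self-host sketch (assumes the
// Microsoft.AspNet.WebApi.OwinSelfHost NuGet package is installed).
using System;
using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{action}/{id}",
            defaults: new { id = RouteParameter.Optional });

        // Plug Web API into the OWIN pipeline, which runs on Http.sys
        app.UseWebApi(config);
    }
}

public class Program
{
    public static void Main()
    {
        // Base address is an illustrative assumption
        using (WebApp.Start<Startup>("http://localhost:5000/"))
        {
            Console.WriteLine("Self-host listening on http://localhost:5000/");
            Console.ReadLine();
        }
    }
}
```

The controller code itself stays unchanged; only this hosting bootstrap differs from the Helios/IIS setup.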
There’s been a lot of talk recently about using self-hosting rather than IIS for efficiency and a leaner platform in general, but that doesn’t necessarily translate into better performance if the host technology is slower. IIS is pretty darn efficient, stable and performant, and it doesn’t sound like a good idea to me to abandon the tried and tested IIS platform unless there is a very good reason to do so (i.e. you need to embed HTTP functionality into a non-Web application).
These tests aren’t meant to be definitive – I basically have this suite set up so I can quickly gauge the performance of the various technologies at a glance. The code for all of this lives on GitHub, so you can install it, test it yourself, and add your own custom scenario tests to the existing ones as needed. I’ve updated the project with the tests described here and also tweaked some of the batch files and the test result parser.