Recent Comments

re: Bitmap types, Binary Resources and Westwind.Globalization
Saturday @ 1:28am | by Rick Strahl

@Frank - yup totally agree. I don't believe it really makes sense to store binary resources, but in this case I have to support it since that's what Resx supports. You can also use this stuff outside of ASP.NET where you might need to have some resource access.
re: Bitmap types, Binary Resources and Westwind.Globalization
Friday @ 10:01pm | by frank

It sounds like if someone needs a localized embedded image for a web app, the easiest thing for them to do is have a string resource that contains the data url. Then the processing concerns go away.
re: Azure VM Blues: Fighting a losing Performance Battle
March 25, 2015 @ 7:27pm | by nom-nom

We recently migrated a medium-ish sized system from two dedicated hosted servers at RackSpace over to Azure. We run a data api that serves 3k/6k (off-peak/peak) requests per minute - about 7M requests a day total to about 70,000 unique clients, some international.

We're having a lot of sporadic issues. In fact, as I write this one of the websites in our group has been returning 503's for 25 minutes for no discernible reason.

We're using about a dozen D-series cores of cloud service for our main API app. It does a fair bit of image generation using System.Drawing, GDI+, 3rd party native libraries, etc. An A2 cloud service runs recurring jobs, with the Scheduler feeding a storage queue to initiate them. Another A2 runs a few data ingestion apps and FTP, and mounts a storage file share that's shared with the recurring job processor and the API. Another A2 cloud service does periodic video encoding and image generation. A handful of websites run in one pool - our public storefront and a site just running ImageResizer off our blob storage, along with some internal tool sites. Plus a 13GB Redis cache.

Our main DB is currently a P3 because Azure has some bug where our database was failing over 10+ times a day and our apps would be unable to connect to our DB for 1-2 full minutes at a time, several times a day. We also use a P1 master and P2 active readable secondary DB.

I can't even begin to enumerate all the many little weird issues we experience, but the end result is that we've barely had a single day go by without all our klaxons blaring at least once. Service downtime is not an exceptional event; it's a matter of course.

There's also no effective way to get questions about these events answered, short of perhaps paying the $1000 a month support plan. Currently we submit tickets and get non-answers after 5-7 days.
re: A jquery-watch Plug-in for watching CSS styles and Attributes
March 25, 2015 @ 10:16am | by Rick Strahl

@Tibi - this was asked before and the behavior is by design. When a property changes you get passed the list of properties with their state so you can decide on what you need to address. There seems to be no need to raise multiple callbacks for each change because you're going to get exactly the same data with each of them.

If you want to handle multiple callbacks, go through the list of props and determine what needs to be done based on the values.
re: Using an alternate JSON Serializer in ASP.NET Web API
March 25, 2015 @ 10:13am | by Rick Strahl

This article refers to a pre-release version, and yes, JSON.NET is the default serializer now. However, this article still serves as a guide for replacing the default serializer with something else.
re: A jquery-watch Plug-in for watching CSS styles and Attributes
March 25, 2015 @ 4:17am | by Tibi Neagu

First off - amazing plugin! Really hits the nail on the head for a lot of us out here.

I've just started using it and noticed that if you add more than one watcher on the same element, only the first callback will be called.

Maybe I'm doing something wrong, or is this a limitation of the MutationObserver API?


P.S. I've also opened an issue on Github:
re: Using an alternate JSON Serializer in ASP.NET Web API
March 25, 2015 @ 12:21am | by Dennis


Have you updated this article lately? Looks like ASP.NET Web API implements JSON.NET already - or am I confused? Yeah, I know it's two years old and a lot has changed.
re: ASP.NET MVC, Localization and Westwind.Globalization for Db Resources
March 24, 2015 @ 11:49am | by Rick Strahl

@KA - The core library can be used completely outside of the ASP.NET context, so if you have a Windows (non Web) app you can use database resources in that as well using the ResourceManager or DbRes.

When running under ASP.NET using the Web package you get two different project options: WebForms or Project. WebForms uses Local/Global Resource folders and naming conventions; Project uses simple resource files and can be used with any .NET project and application.

All the front end stuff in the localization UI depends on the Web package and it assumes that ASP.NET is available, so if you run the Web Admin form HttpContext is always there. All the tooling that is used however is available in classes that you can call directly from your own applications/code.

For a WCF project you would just use the core library that has no dependencies on HttpContext - there are only two support classes in the core library that rely on System.Web - DbRes (which has helpers that return HtmlString) and the various exporters that default paths to the default web root path *if* Http context is available.

If you find other places in the core library that depend on System.Web, please file an issue on GitHub and I'll take a look.

re: ASP.NET MVC, Localization and Westwind.Globalization for Db Resources
March 24, 2015 @ 5:35am | by KA

Hey Rick,

Really cool stuff, I am impressed.
I have one question. I saw that for retrieving resources you use HttpContext.GetGlobalResourceObject or local resources. Is there any other way to retrieve the resources? For example, if you use a WCF service over TCP, HttpContext is null.
I mean, somehow be protocol independent.

re: A dynamic RequireSsl Attribute for ASP.NET MVC
March 22, 2015 @ 1:27pm | by Tim

Hi Rick,

We use this little trick when decorating classes/methods:

#if !DEBUG
[RequireSsl]
#endif
public ActionResult MyAction()

For those not familiar, this uses the DEBUG conditional compilation symbol set in the project's Build tab ('Define DEBUG constant'). You can configure whether the symbol is defined for each build profile.
re: Creating a dynamic, extensible C# Expando Object
March 20, 2015 @ 11:29am | by Chief

I am using the Expando object to facilitate ad hoc data queries in my application, where the returned data is the result of a SQL query built at run time. Not being tied to strong typing is so very VFP-like.

Thank you.
re: ASP.NET MVC, Localization and Westwind.Globalization for Db Resources
March 18, 2015 @ 10:28am | by Douglas Hammon

Good stuff. Looking forward to your post about SPA scenarios
re: Article: A low-level look at the ASP.NET Architecture
March 15, 2015 @ 3:04am | by Veverke

Excellent and a must-read article. I share the philosophy that understanding the inner workings of things can contribute greatly to further creations - besides the fact that seeing the big picture gives lots of satisfaction.

A must read !!!
re: Azure VM Blues: Fighting a losing Performance Battle
March 14, 2015 @ 8:49am | by roger geisert

I had similar issues. Someone here mentioned setting up an affinity group, which I did. Now it's quite snappy.
re: Prefilling an SMS on Mobile Devices with the sms: Uri Scheme
March 14, 2015 @ 3:01am | by Júlio

This post helped me figure out how to get SMS URIs to work.

Here’s a little follow up:
re: Publish Individual Files to your Server in Visual Studio 2012.2
March 13, 2015 @ 2:38am | by Costin

It seems that extra settings in the publishing profile are not taken into account on individual publishing.

In my publishing profile I have added some custom settings to minify CSS and JS files. When I publish the entire project, everything works as expected, but if I want to publish just the Scripts folder, the JS files don't get minified anymore.

Also, I have set up the profile to precompile source files, so almost everything ends up in the bin folder. However, there's no way to publish only this folder.
re: Using FiddlerCore to capture HTTP Requests with .NET
March 12, 2015 @ 9:40am | by Ira

Great job! Thank you!
But I have one problem.
On my local machine Fiddler creates the certificate and the FiddlerCore API works perfectly.

But I also need to build my project on our TeamCity CI server, and there FiddlerCore could not create the certificate.
My code fails with this error message:
"System.IO.FileNotFoundException : Cannot locate: MakeCert.exe. Please move makecert.exe to the Fiddler installation directory.
at Fiddler.DefaultCertificateProvider.CreateCert(String sHostname, Boolean isRoot)
at Fiddler.DefaultCertificateProvider.CreateRootCertificate()".

In my project I have 2 dlls: BCMakeCert.dll and CertMaker.dll.

And here is my method:

void InstallCertificate()
{
    if (!CertMaker.rootCertExists())
    {
        if (!CertMaker.createRootCert())
            throw new Exception("Unable to create certificate for FiddlerCore.");
        if (!CertMaker.trustRootCert())
            throw new Exception("Unable to trust certificate for FiddlerCore.");
    }

    X509Store certStore = new X509Store(StoreName.Root, StoreLocation.LocalMachine);
    // ...
}

Where am I missing something?

Thank you a lot!
re: RequestValidation Changes in ASP.NET 4.0
March 11, 2015 @ 5:04pm | by fib(Littul)

Just encode with JavaScript?! That amounts to writing a massive encoder/decoder that will be broken just about every time the W3C changes things. Just try these examples: ("POST", "page.aspx?val1=abc&val2=<d", false (or true));
or anything "<(letter)"...
or anything that has '&' in it...
The above is with RequestValidation in force.
Examples: if you encode '<d' ... no go!
If you swap '<' or '&' with certain 3-character sequences - and of course, write decoding code galore on the server side... wow... things can be made to work... but the code is horrendous. I don't know - show me what I am missing.
So, hypothetically, <div would become xxx, <p ... zzz, etc. ... which may fail in a porno context! lol
re: Web Browser Control – Specifying the IE Version
March 11, 2015 @ 12:22am | by Govindarajan

Hi Guys,
I'm running a Windows Service to render HTML content in a WebBrowser control, take a screenshot of the output, and save it to a specified local directory. I'm able to achieve this if I'm logged into the server; if I'm logged out of the server I get a script error. I think it still tries to find the registry entry in HKEY_CURRENT_USER.

I have tried all the below options:

HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION



Any idea on this?
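For what it's worth, what usually bites Windows Services here is that FEATURE_BROWSER_EMULATION is read from the registry hive of the account the process runs under, so an HKEY_CURRENT_USER entry written while logged in interactively is never seen by the service. A minimal sketch of the common workaround (the exe name "MyRenderService.exe" and the IE 11 value 11000 are assumptions; use your host's exe name and installed IE version):

```csharp
using Microsoft.Win32;

class BrowserEmulationSetup
{
    static void Main()
    {
        // Write the emulation value machine-wide so it applies no matter
        // which account the service runs under. 11000 = IE 11 edge mode.
        using (var key = Registry.LocalMachine.CreateSubKey(
            @"SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION"))
        {
            key.SetValue("MyRenderService.exe", 11000, RegistryValueKind.DWord);
        }
    }
}
```

Note that a 32-bit process on a 64-bit OS reads the key under SOFTWARE\Wow6432Node\... instead, so you may need to set both locations.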

re: Cordova and Visual Studio CODE Magazine Article
March 06, 2015 @ 3:30am | by Rick Strahl

Thanks Dave. If you've already been doing Cordova development there's probably not much new stuff here, except maybe the focus on Visual Studio, which rather impressed me with how easy it makes the development process, even with iOS.
re: Cordova and Visual Studio CODE Magazine Article
March 05, 2015 @ 8:27pm | by Dave Ward

Nice. I just pulled my first print copy of CODE magazine out of the mailbox this afternoon and thought that would be an interesting article to read since I've been doing a lot of Cordova work lately myself. Didn't realize you wrote it. I will definitely find some time to read it now.
re: <main> HTML5 Tag not working in Internet Explorer 9/10/11
March 05, 2015 @ 12:39pm | by Rick Strahl

@Sean - thanks for checking... and confirming what I suspected that main was treated as an inline tag.
re: <main> HTML5 Tag not working in Internet Explorer 9/10/11
March 05, 2015 @ 8:53am | by Sean W.

I was a bit curious about this, so I hacked up a dirt-simple test case in JSFiddle. You can try it yourself (with jQuery):

Before main. <main>Foo</main> After main.

Chrome prints "block" to the console. Firefox also prints "block". IE10 prints "inline".

The W3C and WHATWG recommend this should be the default style rule:

main { unicode-bidi: isolate; display: block; }

I ran the same test including "unicode-bidi" on a few browsers, and got these results:

Firefox 36: main { unicode-bidi: -moz-isolate; display: block; }
Chrome 40:  main { unicode-bidi: normal; display: block; }
IE 11:      main { unicode-bidi: normal; display: inline; }
IE 10:      main { unicode-bidi: normal; display: inline; }
IE 9:       main { unicode-bidi: normal; display: inline; }
IE 8:       main { unicode-bidi: normal; display: inline; }
IE 7:       main { unicode-bidi: normal; display: inline; }

// For comparison:
IE 10:      gronk { unicode-bidi: normal; display: inline; }

It's interesting that none of the three major browsers get the unicode-bidi property right by default. Firefox comes closest.

But these results suggest to me that the IE team simply forgot to include a stylesheet rule for <main>, since it shows the same properties as "gronk" or any other unknown element.
re: ASP.NET Frameworks and Raw Throughput Performance
March 05, 2015 @ 4:16am | by Rick Strahl

Currently the answer is no. vNext is very slow compared to, say, Web API or current MVC, but it's also not optimized yet. In the tests I ran with beta 2, perf was somewhere in the 75% range of what MVC/Web API produced.

I haven't added vNext to the tests because of the unstable and changing environment and the really crappy perf at the moment. This will get better once they get closer to a fixed set of features and release candidates, I think.

To be honest I don't expect performance to be greatly improved over current tech, but scalability - higher overall request load - might improve if code is properly async optimized.

We'll just have to wait.
re: ASP.NET Frameworks and Raw Throughput Performance
March 05, 2015 @ 2:41am | by Yassine

Thank you for this great article.

I agree with Andy - would vNext be faster? (Because they have merged Web API and MVC, and it's lighter since it doesn't reference System.Web and the heavy 200MB framework? Lighter meaning a smaller memory footprint to handle each request...)

These are good questions to answer :) .

Thanks again
re: Azure VM Blues: Fighting a losing Performance Battle
February 27, 2015 @ 7:15am | by Marty Glynn

We're getting similar results on Dell - which is apparently an Azure implementation.
re: .NET 4.5 is an in-place replacement for .NET 4.0
February 27, 2015 @ 5:54am | by Ole

Thanks for this explanation!

As much as I like doing C# and .NET tech, .NET 4.5 is causing major headaches in our web application projects, mainly MVC.

Some of our backend libs are still 4.0 since they're mainly used in WinForms applications where we don't want to force the clients to do a new download.

As good as it is, Visual Studio 2013 is a pain when using MVC if you do NOT want to use 4.5, especially if you're using NuGet for all those nifty packages like bootstrap, jquery, log4net etc.

I sometimes don't understand why the VS and .NET teams don't have the developers in mind who are actually using their stuff. I would like to keep costs low, but I'm wasting my company's money on such issues.
re: ASP.NET Frameworks and Raw Throughput Performance
February 27, 2015 @ 5:31am | by Andy

I've always referred to this article over the years in my various discussions of performance :) Any chance you might be able to revisit it for the new 2015 technologies? (MS has put a lot of effort into performance since 2012.)
re: Using FontAwesome Fonts for HTML Radio Buttons and Checkboxes
February 27, 2015 @ 3:22am | by Rick Strahl

@Allen - thanks for catching the missing CSS. Fixed.

Not sure why you can't click the checkboxes. Does it work if you take the with-font off? Does the sample form work for you? I can't see a reason that it would work with the keyboard but not the mouse. The mouse target should work for both the label and the actual checkbox. I use the exact code you have in an actual application without the angular bindings, without problems. You might want to double-check the actual HTML that is rendered with dev tools and Inspect Element to ensure that there isn't something getting injected into the middle.

@Pawel - it works down to IE 9 which is the first IE version that partially supports CSS3 which is what makes this work.

re: Using FontAwesome Fonts for HTML Radio Buttons and Checkboxes
February 27, 2015 @ 1:08am | by Allen

Great article.

I believe your consolidated css at the bottom does not contain the css to make the actual checkbox/radio button invisible.

Also when I use the following html:

<input name="rememberMe" type="checkbox" class="with-font" data-ng-model="vm.loginData.useRefreshTokens"><label for="rememberMe"> Remember me</label>

I am unable to click on the font-awesome checkbox, I can use keyboard shortcuts to check and uncheck the checkbox but not the mouse? Any ideas what I'm doing wrong? This is all in Chrome.

re: Using FontAwesome Fonts for HTML Radio Buttons and Checkboxes
February 26, 2015 @ 11:47pm | by Pawel

I wonder if it works well in old IEs?
re: Azure VM Blues: Fighting a losing Performance Battle
February 26, 2015 @ 2:50pm | by Minnesota Steve

You really don't want to use Azure's SQL databases... They've put a lot of effort into these over the past year to make them perform dreadfully slow. A lot of this I wouldn't have a problem with, as much of it is to throttle bad database design and bad queries. Except they don't have good tooling to help you identify your problems. You can't run the SQL performance analyzer. And worse, you can't easily get a copy of your database and bring it down to your local machine to do this kind of analysis. (even if it was comparable)

Hosting your own database on a VM is a preferable option. Your crappy A2 vm experience is easily equivalent to the $900/month P2 level of Azure SQL.
re: Chrome DevTools Debugging Issues
February 24, 2015 @ 12:53pm | by DNewb

I went to settings and then workspace. I had some old workspaces referenced and once I cleared those out and restarted the browser the issues cleared up for me. Seems like it might be related to the source mapping features, as @Pilotbob pointed out.
re: A Localization Handler to serve ASP.NET Resources to JavaScript
February 24, 2015 @ 12:32pm | by Rick Strahl

@Chris - Not sure. I think that should work, but you'll have to make sure the handlers are registered in the <system.web> section instead of <system.webServer>. I think you may have the configuration backwards - the httpHandlers section is for classic mode, the handlers section is for integrated.
re: Back to Basics: UTC and TimeZones in .NET Web Apps
February 24, 2015 @ 6:34am | by MarcelDevG

Hi Rick,

I'm with you on the datetime offset/datetime issue. On the server I only want to deal with UTC dates.
But I wonder why you don't use the JavaScript getTimezoneOffset() method of a Date on the client to get the user's timezone offset?

re: A Localization Handler to serve ASP.NET Resources to JavaScript
February 24, 2015 @ 1:03am | by chris

Hey Rick, I really like your work. I'm trying to implement the JavaScriptResourceHandler using classic mode, but this doesn't work. The requests are answered with a PlatformNotSupportedException telling me that integrated pipeline mode has to be used. I followed the implementation guide in your post, but I was only able to make it work using integrated mode and adding the handler to the handlers section (not the httpHandlers section). Is classic mode not supported anymore?

re: Azure VM Blues: Fighting a losing Performance Battle
February 23, 2015 @ 12:31pm | by Peter Seewald

We would love to use SQL Database (aka SQL Azure) but the feature set isn't there, and running a VM with SQL on it is overly expensive for our shop. Having all of our data and infrastructure in Azure would be way easier to manage and work with, but we ended up keeping an on-premise environment. We tested the performance of our VM versus Azure's VM for a similar setup and saw a pattern similar to what you saw with your older physical machine.

Bottom line is that if you can't move to Azure Websites and SQL Database in Azure then it's not worth moving. The cost/performance of VMs in Azure for smaller companies/individuals is holding back quite a bit of migration to the cloud - at least in the Microsoft world. Bizspark is decent, but when the cost is so high, it's hard to justify that kind of money.
re: <main> HTML5 Tag not working in Internet Explorer 9/10/11
February 23, 2015 @ 1:57am | by Jack

Thank you for this.
I was tearing my hair out wondering why IE was mangling my site.
Adding main{display:block} fixed it!
re: Visual Studio 2013 'Could not evaluate Expression' Debugger Abnormality
February 22, 2015 @ 4:19pm | by RobinHood70

I'm currently getting the same issue with a property in an abstract class. The code is stupidly simple.

public abstract class TestBase<T> : TestBase
{
    protected T Output { get; set; }
}

and then the class using it is a simple:

public class LoginTest : TestBase<LoginOutput>

In this case, Output is not being evaluated in the debugger window when I hover over "this.Output" in the code. When I hover over "this", I get the window for that, and drilling down to Output, I get the "Could not evaluate expression" error with a refresh icon. When I refresh, it tells me that it cannot convert LoginTest to TestBase<LoginOutput>.

If I add in a plain-and-simple backing field, the backing field gets evaluated fine.

So far, none of the solutions presented have helped in the slightest.
re: A Small Utility to Delete Files recursively by Date
February 20, 2015 @ 11:48am | by Rick Strahl

@Marcio - Any more specifics on the error? I run it on 64 bit here and on my server so pretty sure that works, unless there's a problem with Anti-Virus or other system protection software interfering.

If you have more info can you file an issue in the GitHub repo, please? Thanks.
re: A Small Utility to Delete Files recursively by Date
February 20, 2015 @ 10:30am | by Marcio

Hi Rick,

I tried using your utility (the binary) on Windows 8 64-bit and Windows gave me an error about compatibility with 64-bit versions of Windows.

Thanks anyway
re: A WebAPI Basic Authentication MessageHandler
February 20, 2015 @ 5:52am | by Jk

if I add a role in the principal:

var principal = new GenericPrincipal(identity, new string[] { role });

how can I authorize methods of the controller with the role?

Thank You
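In case it helps, a minimal sketch of the usual approach (the controller name and the "Admin" role are made up for illustration): once the GenericPrincipal carries role names, Web API's built-in AuthorizeAttribute can gate controller methods by role:

```csharp
using System.Security.Principal;
using System.Threading;
using System.Web.Http;

// In the message handler, include the role names when building the principal:
//   var identity = new GenericIdentity(username);
//   var principal = new GenericPrincipal(identity, new[] { "Admin" });
//   Thread.CurrentPrincipal = principal;

public class OrdersController : ApiController
{
    // Only principals whose role list contains "Admin" reach this action;
    // everyone else gets a 401 Unauthorized.
    [Authorize(Roles = "Admin")]
    public IHttpActionResult Get()
    {
        return Ok("admin only");
    }
}
```

IsInRole() on the principal is what the attribute checks under the hood, so any roles passed into the GenericPrincipal constructor work here.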

re: Back to Basics: UTC and TimeZones in .NET Web Apps
February 20, 2015 @ 3:23am | by Andrei Ignat

I also wrote about JsonResult in MVC and the corresponding problems with reading it in HTML.
re: ResourceProvider Localization Sample posted
February 20, 2015 @ 2:09am | by Rick Strahl

@Andreas - Resource Managers and Providers are case sensitive so you have to match key names explicitly, since they are based on dictionary lookups. If you turn the provider off you're getting the default values you have in controls I'm guessing but it's not actually using any resources at all. Even Resx resources names are case sensitive.

In the database, key case sensitivity depends on the database collation. If you're using SQL Server you can use a case-insensitive collation, which is the default.
re: ResourceProvider Localization Sample posted
February 20, 2015 @ 1:43am | by Andreas


The key names seem to be case sensitive. Some strings in our application only show the key names, and when we turn off the provider it all works. If we change the resource key name to match what is on the webpage, it works.

Is there a way to turn case sensitivity off?

re: Back to Basics: UTC and TimeZones in .NET Web Apps
February 19, 2015 @ 3:12pm | by Rick Strahl

@James - I think I understand what DateTimeOffset does - it stores the DT *and* a timezone offset of when the data is captured for that particular tz. Makes sense. But for Web apps you NEVER capture time that way. You want to capture the date in UTC and the offset is irrelevant because it represents the server's time not the client's time.

Even if you DID store the offset from the original users timezone (which means you'd have to do the conversion up front because the server's not running that users timezone) you still get *only that timezone*. Not the timezone that a user of the date might want to see at a later time.

I guess I don't see how DTO helps if the time value's offset is fixed to a specific timezone when the application always adjusts to the user's timezone preference which mostly will not be the original DTO offset. You STILL have to do these conversions for EVERY user and if I do that then what does DTO buy me? Nothing except more storage required.

I also don't agree with this:

> The *only* time you need to do any conversion is strictly when displaying

because if you do certain datetime operations - like date queries that group by day or month or a less granular time increment - you have to adjust those queries for that timezone. Otherwise you're going to include the wrong time range. So there are a number of places where this matters - almost every date query with user input in particular, since those are typically done on day ranges.
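As a concrete sketch of the query adjustment just described (the timezone id and dates are only examples): to run a day-range query against UTC-stored dates, convert the user's local day boundaries to UTC first:

```csharp
using System;

class DayRangeQuery
{
    static void Main()
    {
        // User asks for "Feb 18, 2015" in Pacific time; data is stored in UTC,
        // so compute the UTC range that covers that local day.
        var tz = TimeZoneInfo.FindSystemTimeZoneById("Pacific Standard Time");
        var localDayStart = new DateTime(2015, 2, 18, 0, 0, 0, DateTimeKind.Unspecified);

        var utcStart = TimeZoneInfo.ConvertTimeToUtc(localDayStart, tz);
        var utcEnd = utcStart.AddDays(1);

        // Then query: WHERE Entered >= @utcStart AND Entered < @utcEnd
        // Midnight PST (-8) on Feb 18 corresponds to 08:00 UTC that day.
        Console.WriteLine(utcStart);
    }
}
```

Without this adjustment the query boundaries fall at UTC midnight, so up to 8 hours of records land in the wrong day for this user.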
re: Back to Basics: UTC and TimeZones in .NET Web Apps
February 18, 2015 @ 7:45am | by James Manning

@Rick - WRT "Since DTO only supports a single timezone - I first have to convert to that timezone to save" and "have to get the date into the right TimeZone for saving first" - those are not correct (and very unfortunate that those were the reasons you've avoided using it).

The whole point is that since the data type stores the offset with the datetime, then the same column can store any timezone. You don't have to do *any* timezone conversions at any point if you don't want. You get a DateTimeOffset from someone in Hawaii that's got a UTC offset of -10 and you can store it as-is and still happily allow Oregon people to store in -8/-7, EST people to store -5/-4, etc. If SQL Server forced developers/users to convert to a particular timezone for saving the datatype would have no benefit over just saving as datetime2 and telling people to store as UTC (certainly the best practice if you're stuck using datetime/datetime2).

The *only* time you need to do any conversion is strictly when displaying, and only if you want to display it in a different timezone than what it already is (for instance, displaying it in the user's timezone regardless of what the originating timezone was). You can do comparisons, queries, etc all without having to do any conversions.

In your enter-in-Hawaii/display-in-Oregon scenario, with datetime or datetime2, you would typically convert twice, once on the write path to convert to UTC for storage (since your storage has no support to encode the UTC offset, so you would either encode it as a separate column, or more likely, store the version with no offset), then again on the read path to convert the UTC to Oregon.

If you use datetimeoffset instead, you don't need to convert on your write path at all. It comes in as offset -10 from your Hawaii person, -8 half the time from your Oregon people, -7 the rest of the time, etc. and you just store it like that. On your read path, you can display it with the original offset if you want (something that's not really an option if you forced conversion to UTC on the write path), or convert it to local time (just like you would in the datetime/datetime2 scenario)

I think Bart Duncan said it best, IMHO :)

"When should you use datetimeoffset instead of datetime? The answer is: you should almost always use datetimeoffset. I'll make the claim that there is only a single case where datetime is clearly the best data type for the job, and that's when you actually require an ambiguous time."
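A small illustration of the storage behavior described above (the dates and offsets are made up): DateTimeOffset equality and ordering operate on the UTC instant, so values captured with different offsets compare correctly without any conversion, while each value still remembers its original offset:

```csharp
using System;

class DtoDemo
{
    static void Main()
    {
        // The same instant as captured by a Hawaii (-10) and an Oregon (-8) user:
        var hawaii = new DateTimeOffset(2015, 2, 18, 10, 0, 0, TimeSpan.FromHours(-10));
        var oregon = new DateTimeOffset(2015, 2, 18, 12, 0, 0, TimeSpan.FromHours(-8));

        // Comparison is done on the UTC instant, so no conversion is needed:
        Console.WriteLine(hawaii == oregon);   // True

        // The original offset is preserved for display if you want it:
        Console.WriteLine(hawaii.Offset);      // -10:00:00
    }
}
```

This is why the write path needs no conversion: whatever offset arrives can be stored as-is and still sorts and filters correctly against everything else.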
re: Azure VM Blues: Fighting a losing Performance Battle
February 17, 2015 @ 1:16pm | by Rick Strahl

@Mark - yeah I'm using my MSDN account to experiment with this stuff. After all that is what it's supposed to be for. I sure hope they're not throttling MSDN accounts - if they are it's a great way to ensure people won't use Azure because the performance is so terrible :-)

It's interesting to hear responses here that mostly seem to concur on the abysmal performance, but a few here and there seem to suggest that performance is just fine.

Just to clarify: when I installed new VMs more recently I had better luck with performance, and at least the RDP performance is 'usable'. It's better, but in load tests even these newer installs have been very slow compared to other providers I've tried with smaller (and much cheaper) server configurations.
re: Azure VM Blues: Fighting a losing Performance Battle
February 17, 2015 @ 3:53am | by Mark Randle

Hi Rick

I noticed similar results especially the extra slow response when doing anything through RDP.

I did think it was the server setup. However, I tried Azure previously using a standard try-it-for-30-days offer. When I did, response was good - definitely no RDP lag - and comparable to my current dedicated server hires.

I do have a theory though - I, like you, have credit through Visual Studio and this time I am using it. Surely Microsoft wouldn't throttle this back since we're effectively getting free credit, would they???

Azure was in my migration plan to get away from leasing physical servers, but I'm not so sure now - maybe I should try again with a public (not free) account!