Rick Strahl's Weblog  

Wind, waves, code and everything in between...
.NET • C# • Markdown • WPF • All Things Web

My PDC 2008 Wrap up


Ugh, I’m finally back and settled from a long 3 week road trip. Ah, it feels good to sleep in my own bed again :-}. PDC last week was the last stop of the trip and as always it’s been an interesting experience. More than anything, events like these are great for catching up with other developers and meeting people I’ve only seen virtually. I made face contact with a bunch of folks I only know through Twitter or comments on this blog. It’s always great to match a face with the message.

As usual PDC as an event was full of new technology and announcements. To me this PDC felt maybe a bit less revolutionary than previous ones in that a lot of the content covered had been previously announced or is even already available to the public in some sort of pre-release state. And that’s actually a good thing – I think it’s symptomatic of Microsoft’s more open approach to product development, which solicits early feedback and is generally good for all parties involved. For me this also worked out rather well – in the last half year or so I’ve been pretty slow to play with new technology because I’ve been swamped with work and frankly have gotten a little tired of playing with the latest and greatest that’s not quite ‘there’ yet. A lot of stuff that’s come out of Microsoft in the last couple of years has been less than stellar, so for me the last year’s been mostly about getting back to basics and working on core development rather than gaping at the latest and greatest.

Usually when I go to conferences I’m either speaking or otherwise involved in some official capacity, so I rarely have time to attend sessions. This time around though – being purely an attendee – I had lots of time to go see sessions, which was great fun, and even though I spent most days in sessions I wish I could have made it to more. Luckily most of the PDC sessions are actually online, although finding them can be a little tricky. I’ve been making good use of this and catching up on additional content I missed.

PDC Impressions

So here are a few impressions from some of the sessions and keynotes I went to:

Windows 7
The second day’s keynote started off with a preview of Windows 7 and, well, frankly it was a pretty bland presentation and show of new stuff. And it wasn’t just me, going by audience response and feedback. Bland yes, but maybe that’s a good thing. I’m starting to get into the mindset that less is more for Microsoft – the more Microsoft can focus on fixing/optimizing existing features across its products, the better off we as users will be. I’d really like to see Microsoft focus more on fixing what’s there rather than keep building more and more crap nobody really needs.

As such the changes shown were fairly minor. The ones that stood out to me were native support for the VHD file format (used for Virtual PC images), both for directly mounting virtual drives as well as booting from them. The remote desktop stretching over dual screens is useful too, as is the better taskbar management. There will also be multi-touch support, which will be cool if hardware vendors actually implement this stuff. Given how badly the tablet platform has fared over the years (no thanks to Microsoft’s inability to market this platform properly) I don’t have high hopes. I do think though that having touch support in screens would be pretty cool. I recently had a chance to play with an HP TouchSmart machine and it sure is nice to be able to work with the OS via touch in some situations.

Maybe the biggest news is that Microsoft is apparently doing some work to make Windows 7 run on lesser hardware (they showed a sub-notebook with 1 gig of RAM running Windows 7 well) and improving core performance and startup. I don’t really care about anything else but a stable base platform, so if nothing else gets done but making sure that Windows 7 runs fast and stable and maybe has an Explorer that doesn’t crash or copy at a snail’s pace, it’ll be a win.

All that said, I think Microsoft’s goal needs to be polishing the seriously tarnished image of Windows as a resource hog, as well as the public image imposed by, of all things, Apple’s marketing.

I personally don’t have any serious issues with Vista now, but I surely would like the resource usage in general to decrease. There are still too many times when even my dual core 2.4ghz machine is dragging because the OS is performing some task in the background.

Windows Azure

Windows Azure was the big announcement at PDC this year. Azure is meant to be a new hosted platform that provides scalable hosting of Web based applications. Azure can host traditional ASP.NET content and also provides a host of service based components that are managed on the remote platform, such as a blob and data storage mechanism that effectively amounts to a remote file system. It also includes background worker processes that can easily be queued for background tasks, which is an interesting concept.

The big sell of the platform is the scalability – Azure runs on a load balanced platform that makes it easy to add new instances of applications to scale out easily and without having to reinstall and reconfigure anything. Azure effectively provides packaging and replication of an application via what is called the Fabric Controller. This packaging also allows for smooth updates to staging and deployment servers, which lets you push a deployed app to a staging site, test it, and then hot swap it across all configured instances.

A lot of what Azure does is pretty cool in concept. Certainly the packaged scalability is nice and was the main point of discussion I had with several other developers – if you’ve ever built applications that needed to span multiple machines you know what a pain it can be to scale out and make sure everything gets configured properly across machines.

However, in discussing this with various developers, I’m still not sure just how applicable this sort of environment will be. Basically it’s NOT a typical ASP.NET hosting scenario where you can just park existing applications. Services deployed to Azure need to use the Azure data services (SQL Data Services?) for storage rather than a more traditional database, which means existing applications will have to be redesigned. The Azure data storage mechanism isn’t a SQL-like store – rather it’s more like an object database that works by modeling types which get translated into single record entities in the database (like a reverse O/RM modeling wizard). It’s an interesting concept – you model your data entirely as classes and have the platform generate the data structures for you in the storage backend. The data stores are also accessible directly via REST endpoints, so any data stored is both application local and globally accessible, which is also nice. The format currently looks more like an ISAM-type engine than a relational engine – there are no implicit relationships supported, although nothing stops you from referencing foreign keys explicitly. The bottom line is that you have to work with the provided data model.

One concern I see is that servers deployed today often do more than just act as Web application servers. You may have background services running, and code may be interacting with a wide variety of technologies outside of the basic Web application’s scope. None of this looks like it’s going to be supported on Azure at the moment. As it stands it looks like a platform for very specialized Web applications that can buy into the very specific data format supported by Azure. I’m not clear how data access actually works in applications – it looks like the access is all over HTTP and REST endpoints, but I wonder if data access on the live deployed server may be more direct? It’s hard to tell. Certainly I can’t imagine HTTP based data access scaling very well.
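To put that concern in perspective, here's a rough sketch of what purely HTTP/REST based data access amounts to from .NET client code. The URL and payload handling here are entirely hypothetical (this is not an actual Azure endpoint), just an illustration of the per-request overhead involved:

```csharp
using System;
using System.IO;
using System.Net;

// A sketch of REST style entity access: every read is a full HTTP request.
// The endpoint URL is made up for illustration, not a real Azure service.
public class RestReadDemo
{
    public static string FetchEntity(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // The raw XML/JSON payload for the entity comes back as text
            // and still has to be deserialized on the client
            return reader.ReadToEnd();
        }
    }
}
```

Every entity read pays a full HTTP round trip plus serialization on both ends, which is why I have a hard time seeing this scale for chatty data access.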

Overall the concept sounds intriguing, but I’m not sold on the idea. How many IT shops would really want to give up this much control over their platform and hand it to Microsoft, with the potential for Microsoft to control the pricing of the service? You’d forever be a slave to Microsoft’s pricing policies. Also, how many sites actually need massive scalability? With the high end hardware available today it’s much less likely than it once was.

The key thing that popped into my mind as I was watching the keynote and demos was: Lock in. Short of existing Microsoft fan boys I think this platform would be an extremely tough sell for Microsoft given that it’s based on all proprietary black box technology running on the server…

Keynotes in General

The keynotes in general were pretty dry and uninspired. Some of the speakers were painful to watch as they laughed at their own jokes – alone. There were a few highlights, but most of the presenters were just spewing undefined marketing terms one after another. At a developer conference I found this pretty disappointing. Most of the presenters just sounded way too over-rehearsed. Heck, even Scott Gu seemed to be rushing through his presentation at a million miles an hour, and his was probably the highlight of all the keynotes I saw. Don Box and Chris Anderson were entertaining, but I’m not sure this is a good ‘keynote’ presentation format either. It ended up feeling like filler more than anything. Hey, but at least we got to see some code in their presentation (which without more context wasn’t all that useful either).

Honestly, for a developer conference I would have hoped to see more detail on technologies. For all the time that was spent on Azure in the first day’s keynote, I walked out of the room thinking “WTF does this really mean?” It wasn’t until I went to the two follow-up sessions that I had a reasonable idea. Not a good start. You know what they say: “If you can’t define what it does in a few words…”

Luckily the actual sessions were much better. This year I thought the presentation skills of the presenters were much better than in past MS conferences. A lot of presentations I saw were extremely professionally presented, with lots of time for questions at the end, which is in stark contrast to the last PDC where there was lots of good content but often bad presentations.

C# and Dynamic Language Enhancements

Went to Anders’ Future of C# session and, as expected, C#’s main improvements are going to be based around dynamic language support. In the next version of .NET, C# (and VB.NET) will provide services sitting on top of the Dynamic Language Runtime (DLR). For C# the big improvement will be the statically typed ‘dynamic’ keyword. In essence the dynamic keyword allows assigning any type of variable and ‘dynamically’ accessing the members of that type – as long as the members exist – without having to hold a fully cast reference. What this means is that you can take a reference of type object and access its members directly without having to wade through Reflection.

The services that provide this functionality call on the DLR to resolve dynamic members rather than making direct calls to Reflection. The apparent advantages of this approach are better error handling and improved performance through caching of Reflection type information as types are addressed. This functionality will be useful for any type of user provided code (plug-ins and the like) and especially for things like interop with COM. Special attention has been given to the COM interop scenario since dynamic invocation there works a bit differently than for native .NET types. One additional improvement is that the type hierarchy can be dynamically generated, so no COM Interop Assembly needs to be created.
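To illustrate, here's a minimal sketch of what dynamic member access might look like based on what was shown in the session. The Person type and its members are made up for the example:

```csharp
using System;

public class Person
{
    public string Name { get; set; }
}

public class DynamicDemo
{
    public static void Main()
    {
        // The reference is statically typed as 'object', so normally you'd
        // need a cast (or Reflection) to get at any members
        object person = new Person { Name = "Rick" };

        // With 'dynamic' the member access is resolved at runtime by the
        // DLR, with no cast and no explicit Reflection code required
        dynamic dynPerson = person;
        string name = dynPerson.Name;
        Console.WriteLine(name);        // prints "Rick"

        // dynPerson.Rename("Rick");    // would compile, but fail at runtime
                                        // since Person has no Rename member
    }
}
```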

VB folks must be snickering just about now because most of the dynamic keyword functionality has been available for some time by using Option Strict Off, but I still think it’s highly useful to get this capability in C# as well.

There’s also an IDynamicObject interface and an abstract DynamicObject class that can be implemented by developers to create objects that simulate Expando properties in C#. In effect this provides ‘method missing’ functionality: you explicitly implement a class that intercepts all member access – including access to members that don’t exist – and return a result that is effectively conjured up out of thin air. The example Anders used was a dictionary that uses properties instead of indexers to get and set values, which is a little contrived but illustrates the point well. This can be quite useful in some situations.
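Here's a rough approximation of that dictionary example using the DynamicObject class to intercept member access. This is based on the preview bits shown at PDC, so the shipping API may well differ:

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

// Property access is intercepted and routed into a dictionary, so any
// property name works, even ones that were never declared anywhere.
public class PropertyBag : DynamicObject
{
    private readonly Dictionary<string, object> values =
        new Dictionary<string, object>();

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        values[binder.Name] = value;   // accept any property name
        return true;
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return values.TryGetValue(binder.Name, out result);
    }
}

public class ExpandoDemo
{
    public static void Main()
    {
        dynamic bag = new PropertyBag();
        bag.FirstName = "Rick";   // no such property exists - intercepted
        bag.City = "Maui";
        Console.WriteLine(bag.FirstName + " lives in " + bag.City);
    }
}
```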

I’ve talked to quite a few people who took the “So what?” attitude on this stuff. I can see why: the dynamic keyword should be something sparingly used in day to day code that you write in applications. However, if you build framework level code, scripting related code or do a lot of COM interop (all of which are still heavily in my operational domain) having an easy way to invoke non-typed members can be extremely convenient and make code much more readable than having to wade through the related thicket of Reflection commands. For me personally this will make life a lot easier in a few applications and API interfaces.

Another related feature is that C# will finally support optional parameters by allowing default values to be assigned to parameters. C# traditionally accomplishes this by way of method overloads, and combined with named parameters this should make it easier to reduce method clutter. I’ll be curious to see how this works out for Reflection parsing.
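A quick sketch of what optional and named parameters should look like. The method is hypothetical, but the syntax matches what was shown:

```csharp
using System;

public class OptionalParamsDemo
{
    // Default values replace what would otherwise be a pile of overloads
    public static string Connect(string server,
                                 int port = 80,
                                 bool useSsl = false)
    {
        return (useSsl ? "https" : "http") + "://" + server + ":" + port;
    }

    public static void Main()
    {
        // Omitted parameters pick up their defaults
        Console.WriteLine(Connect("west-wind.com"));
        // A named parameter lets you skip 'port' entirely
        Console.WriteLine(Connect("west-wind.com", useSsl: true));
    }
}
```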

Finally there was also a demo of the compiler as a service, which should make it much easier and more transparent to compile and execute code on the fly. It basically brings Eval() type functionality to C# in a simple manner. Anders then proceeded to build a command line interpreter (can you say FoxPro Command Window? <s>) that basically let him build up all sorts of constructs – classes, forms, UI, data access etc. – completely on the fly and execute that code.

Again this is special case technology, but it’s surprising how often it’s useful to be able to dynamically execute a stub of dynamically provided code. Things like metadata driven validation rules or building up scripting code are primary targets for me personally. This feature is also important to me because I have several scripting services built for internal apps, and while I have this working now, the code that does it is pretty messy and very resource intensive because the compiler effectively fires up out of process to compile the code. Having the compiler in .NET and available as a service that can be called should make this process much easier.
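For context, this is roughly what on-the-fly code execution looks like today via CodeDom. Note that the C# compiler effectively spins up out of process behind CompileAssemblyFromSource, which is exactly the overhead the compiler-as-a-service work should eliminate:

```csharp
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

// Compile a string of C# into an in-memory assembly, then invoke the
// generated type through Reflection - the 'Eval()' workaround of today.
public class EvalDemo
{
    public static void Main()
    {
        string source = @"
            public class Script
            {
                public object Run() { return 6 * 7; }
            }";

        var provider = new CSharpCodeProvider();
        var parms = new CompilerParameters { GenerateInMemory = true };
        CompilerResults results = provider.CompileAssemblyFromSource(parms, source);

        object script = results.CompiledAssembly.CreateInstance("Script");
        object result = script.GetType().GetMethod("Run").Invoke(script, null);
        Console.WriteLine(result);   // 42
    }
}
```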

Unfortunately the compiler as a service is planned for C# 5.0 rather than the next release – bummer on that!

All of this dynamic stuff is very nice, but for most folks this is borderline functionality that will be useful only at times. However, I think this stuff becomes more important as you start looking at the whole language abstraction concept that Anders also talked about – abstraction into things like LINQ and Lambda expressions that express intent instead of actual code. In many of those situations strongly typed values can get in the way, because it’s not always clear exactly how results are typed when they return. For example, returning results like a list of anonymous types from a LINQ query gets a heck of a lot easier if you can access those values as dynamic as opposed to having to use Reflection.
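As a concrete example of the scenario I mean: a method that projects into an anonymous type can only hand its results back as plain objects, and dynamic makes consuming them painless. (Note this only works within the same assembly, since anonymous types are internal. The data here is made up for the example.)

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class DynamicLinqDemo
{
    // The query projects into an anonymous type, so the only way to
    // return the results is as a plain enumeration of object
    static IEnumerable<object> GetCustomers()
    {
        var names = new[] { "Rick", "Markus", "Kevin" };
        return names.Select(n => new { Name = n, Length = n.Length })
                    .Cast<object>();
    }

    public static void Main()
    {
        foreach (dynamic customer in GetCustomers())
        {
            // With 'dynamic' the anonymous type's members are directly
            // accessible; without it this would take Reflection calls
            Console.WriteLine(customer.Name + ": " + customer.Length);
        }
    }
}
```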

I also went to Jim Hugunin’s Dynamic Language talk, which followed Anders’ talk, to get a little more background on what exactly goes on in the DLR. I’ve been meaning to dig into the DLR precisely because I’ve had the need to access dynamic content as mentioned earlier, but I just haven’t gotten to it to date. It’s an excellent talk and I encourage folks to check it out online – or whenever you have a chance to see Jim present – for some background on those new features coming to the main .NET languages.


ASP.NET

I also spent a bit of time attending ASP.NET sessions just to keep up with all that’s changing. Nothing terribly new here because the ASP.NET team has been very busy pushing just about everything out to the Web already.

The highlights of ASP.NET features that I’m excited about:

Client ID Improvements
In ASP.NET 4.0 there will be a mechanism to specify a client ID explicitly to avoid the container based ID mangling that currently occurs. If you’re writing JavaScript code today you know that this is a big problem if you have any sort of content containers – like Master Pages – that mangle the ClientIDs when rendering.

In ASP.NET 4.0 the ClientID is a writable property and in combination with the ClientIDMode determines how the control is rendered. The following always renders a client id of txtName even if rendered inside of a naming container:

<asp:TextBox runat="server" ID="txtName"
             ClientID="txtName" ClientIDMode="Static" />

The ClientIDMode supports a few settings including Static, which forces the name you provided; Predictable, which is a fully qualified naming container style name based on the ClientID (i.e. the path is still there but your name is used); Legacy; and Inherit. Inherit appears not to work currently, but I suspect it will let you inherit the ClientIDMode from the nearest container control.
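Apparently the CTP bits also let you set the mode site-wide in web.config rather than per control. Something like this (the attribute placement is as I understand the preview, so take it with a grain of salt):

```xml
<!-- hypothetical, based on the CTP bits: site-wide ClientIDMode default -->
<system.web>
    <pages clientIDMode="Predictable" />
</system.web>
```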

There’s also supposed to be a mechanism to allow for predictable names for template controls, but I haven’t seen an example of this.

This is encouraging, although I still hope that this will get simpler. If I understand this correctly, the current behavior requires 3 attributes to be set in order to get a simple ClientID – ID, ClientID and ClientIDMode. I’d much rather specify the ClientIDMode on a container and have it pushed into the child controls, specifying just an ID and overriding only when necessary (on things like list controls).

ASP.NET Ajax Focus on Client Centric Javascript

Several talks at PDC talked about Javascript related enhancements and there are a few to mention. As you know if you read this blog I’ve been less than enthusiastic about ASP.NET AJAX mainly because of its server heavy focus. That’s changing a bit with the next version with Microsoft investing in a nice set of client side features.

The most prominent and amply demo’d feature is client side templates, which lend themselves to a data centric AJAX approach where you load and render your data primarily on the client, rather than using server rendering to draw the initial page and then updating it with HTML content retrieved from the server – the premise on which most of ASP.NET AJAX has been based.

Client Templates bring a client side binding mechanism along with a client list control that makes it much easier to render content once on the client. The template binding syntax looks a bit like WPF binding semantics so it should be familiar if you’ve played with WPF or Silverlight.

In addition to templates there’s also the concept of Live Bindings which provides real time databinding to data – when the data changes it automatically updates anything that is bound to the data. This stuff relies on some of the object semantics of the MS AJAX client libraries, but regardless the client side stuff is purely client side which is nice.

The new client side features look fairly clean and not overengineered, at least at first glance, which is nice; but they do still rely on the core MS Ajax libraries, so library size will continue to be an issue. Interestingly, the concepts look somewhat similar to the old XmlScript stuff from the Atlas days (now buried in the Futures releases), but with a friendlier interface based on raw text based markup with embedded JavaScript. The scripting language used is JavaScript, so all expressions and operations eval as JavaScript, which seems like a sensible way to get flexibility with a familiar model that doesn’t require learning yet another markup/template language.

For me personally I’m glad to see Microsoft provide this functionality, although I’ve been doing similar stuff for some time with just jQuery and some helpers. I’ve been using jTemplates with jQuery for some time and recently switched over to John Resig’s micro-templating engine, which provides basic templating. No doubt the ASP.NET Ajax templating is richer and supports direct databinding, where with plain templates you have a little more work to do – but not much. The live binding stuff is really cool and useful, and I don’t think that sort of thing can be implemented without the kind of infrastructure (property get/set routines that intercept assignments) that MS Ajax provides, so that part at least would be difficult to do outside of ASP.NET AJAX I suspect.

I also went to Stephen Walther’s jQuery session, which was well presented, although a bit on the basic side, at least on the jQuery end (and not just for me <s>). Stephen’s approach was more about interoperating ASP.NET AJAX and jQuery rather than using jQuery by itself, and a lot of demos used functionality that could easily have been done natively in jQuery – but that is somewhat expected at a Microsoft event. Stephen’s session also highlighted the ASP.NET 4.0 Client Templates extensively, which was nice to see as well. It’ll be interesting to see what Microsoft will do with jQuery in ASP.NET and whether it will be used internally in the framework or just be a supported external component for developers. I suspect the selector engine would be extremely useful to the core engine.

Not sure if this is enough to lure me back to ASP.NET AJAX, but I do think the ASP.NET AJAX team is heading in the right direction here. Obviously a lot can be done with the platform that’s slowly coming together, tying both the client side and server side data services together. But still, I just can’t get excited about the static control approach that the framework uses, where everything has to be declared up front (i.e. controls can be $create()’d once only), and the complexity of the client control model. I’m starting to see where this has some benefits now with the smart binding stuff taking advantage of the notification events, but I’m still not sure if it’s worth the bother. We shall see where it goes from here…

Other stuff

I went to a few other sessions as well, a few of which were a dud for me. I attended the Oslo session, which is about domain language creation, and while that was interesting to some degree I fail to see where this is going to be a good fit except for tool and language vendors. This looks to be a way to map domain specific objects to custom syntax and designers that can then execute and operate on those classes. While that’s interesting in concept, I don’t see it as something I’d use a lot. It’s like Workflow – it looks great on paper until you actually sit down and try to conform yourself to the limited development options that the environment allows you. I suspect there are good use cases, but I also suspect they are aimed at language and tool vendors rather than average developers. The last thing I want to see is new inconsistent languages/syntaxes being created by developers…

But the most productive time of the conference was probably spent in discussions with other developers or some of the devs at Microsoft. These more personal discussions are really the reason I go to these types of conferences – you get multiple points of view and the occasional ‘aha’ moment that escaped me during a session.

If you didn’t go to PDC you missed out on the personal interaction, but all the content for sessions is online and the quality of the presentations (using Silverlight) is excellent. I’ve been busy watching quite a few additional sessions that I missed due to conflicts or simply spaced on. This is great to have online.

The Voices of Reason


John S.
November 02, 2008

# re: My PDC 2008 Wrap up

For setting a ClientID, from my testing with the CTP bits, you don't need to set the ClientID property. Set ID as usual, then set ClientIDMode to Static and you get the desired output. I'm not sure what the ClientID property is for (maybe the Predictable setting, which I haven't tested as much) and I don't remember it from the ASP.NET Futures session (I could have just missed it). You can also set the ClientIDMode in the pages config element in web.config to enable it site-wide.

Chris Kapilla
November 03, 2008

# re: My PDC 2008 Wrap up


thanks for providing this detailed summary of your experiences at PDC; almost feels like cheating to get the benefits of your experience without having had to expend the time and money to go there myself!

Really glad to hear about the coming improved support for ClientID in asp.net.

November 03, 2008

# re: My PDC 2008 Wrap up

Hi Rick, looking forward to seeing you at DevConnection in L.V. next week! What topics will you be presenting there?

November 03, 2008

# re: My PDC 2008 Wrap up

The PDC website now has the session videos integrated with the session calendar and tags:
Multiple download formats and easier to search / find the sessions you missed than Channel 9.

November 04, 2008

# re: My PDC 2008 Wrap up

I didn't attend microsoft pdc but have been catching up with some of the videos. I like the changes announced in asp.net 4.0 web forms. It sounds like they will be fixing bugs and making the experience of building websites in web forms alot easier. I hope they fix as many bugs as possible (e.g. two way databinding with updatepanel and when hiding elements doesn't work) even if it breaks backwards compatibility as long as they document this then i will be happy.

A couple of other particular problems i have with 2 way databinding is i'd like to be able to set the initial value of a text box (or any other form control) instead of having to set it in the code behind file. The second problem is also based around 2 way databinding, i'd like to be able to set a different name for both the property evaluated on the display and the value send to my update method (again without using the code behind). I have had a rant about this before here http://www.flixon.com/2007/11/03/aspnet-2-way-databinding-suggestionsimprovements/ but these problems cost me hours of time each week.

If these changes are not fixed then i will consider switching to asp.net mvc as this will save me more time in the long run.

One change which i believe will be welcomed is reversing the logic in how to enable/disable viewstate. I have encountered various bugs when disabling viewstate across a page and wanted to enable it just for the one control causing the problem but currently to do this you need to disable the veiwstate for every control other than the one you want enabled.

Roger Jennings
November 04, 2008

# re: My PDC 2008 Wrap up


Azure does support background services, which Google App Engine eschews. According to the Windows Azure SDK:

A worker role is a background processing application. A worker role may communicate with storage services and with other Internet-based services. It does not expose any external endpoints. A worker role can read requests from a queue defined in the Queue storage service.

See http://oakleafblog.blogspot.com/2008/11/linq-and-entity-framework-posts-for.html#SSDS

Rick Strahl
November 04, 2008

# re: My PDC 2008 Wrap up

One question that comes to mind is whether this background worker is ALWAYS running, even when the front end app is not. So if you have what normally would be a service, can you count on the worker *always* running, so that the background task is effectively always up?

November 11, 2008

# re: My PDC 2008 Wrap up

I have been using Amazon's EC2 cloud and it is awesome. Not having seen Azure yet, I think EC2 would still be better because you are not locked into a stack like the other. With EC2 you can have Linux and Windows instances, postgre, mysql, sql server.

And the best part, you can RDP (Windows) or SSH (Linux) into the machine and install and do whatever you want. Too cool.

Rick Strahl
November 11, 2008

# re: My PDC 2008 Wrap up

I haven't looked closely at EC2, but it sounds much more like a virtual co-location setup, which is very different than what MS is selling, which is basically a bunch of services. EC2 sounds more like traditional hosting wrapped up in a virtual environment.

But that's a very different focus and addresses different needs. A traditional hosting setup does little to help with scalability and portability, which seems to be the goal of the MS solution. Then again I really wonder how many people need this type of scalability/replication environment. Hardware being what it is, it's tough to outrun even a single box with anything but huge, huge amounts of traffic.

November 11, 2008

# re: My PDC 2008 Wrap up

You are correct in that it looks a lot like a virtualized hosting/co-location facility and that is a different model than Microsoft is showing. But what really differentiates it from traditional services like that is that you can control the creation of instances through web services. So you can literally create and destroy machine instances as needed based on your load and what you are willing to pay. So if you need to scale, you just create (and pay for) another instance of a server.

There are also third party tools that will do the management for you if you don't want to do it yourself. Amazon has a free front end to control all of this from FireFox extension (ElasticFox).

I think the problem with the MS and Google solutions, from what I have seen so far, is that you are locked into their stack. Whereas with the EC2 cloud you can do whatever you want and don't have to re-write code to work in their cloud. What I thought was really cool is that I was literally able to move a sql driven web app over to their cloud and all I had to do was copy files over and change the connection strings.

Having said all that I am going to check out the Azure when it goes to a live beta. Really curious on how they will be pricing this out too.

Adam Kahtava
December 03, 2008

# re: My PDC 2008 Wrap up

Great recap of PDC. Thanks!

West Wind  © Rick Strahl, West Wind Technologies, 2005 - 2024