I did my Script Callbacks and Ajax session yesterday at the Portland User Group, which was a ton of fun and well received. Attendees had lots of questions, both on the technology itself and the various implementations. There seems to be a huge amount of interest in this technology, and I heard a lot of comments at the end that people were glad I showed a variety of examples demonstrating useful things to do with it.
It also got me thinking about the current state of affairs. My session was long (what else is new <g>) as I went over Script Callbacks, showed a custom implementation of my own for 1.1 called wwHoverPanel that’s used primarily for retrieving content from a URL and displaying it as pop-overs, as well as supporting URL based content retrieval with callbacks for 1.1. Finally I showed My.Ajax.Net from Jason Diamond, which I really like a lot because it’s simple and combines Script Callbacks in ASP.NET 2.0 with the best features of Ajax.NET. Frankly I think it’s a much better implementation than Ajax.NET.
But this begs a few questions: three different approaches – which one is best? Brock Allen and Bertrand Leroy (in comments) raise some of these questions in this Blog entry.
As for me I think that right now there are at least two scenarios that are complementary to each other:
- A page pipeline based engine (like Script Callbacks and My Ajax.NET)
- A URL based engine
Interestingly enough the popular Ajax.NET from Michael Schwarz falls in neither of these categories as it avoids the Page Pipeline completely and uses a custom HTTP handler based interface to communicate with the server. But let’s look at the pros and cons of Page Pipeline and Url based engines.
Page pipeline engine
The advantage of this mechanism is that you’re tied into the currently executing ASPX page. Any client callbacks run in the context of that page, which means form variables are POSTed back and you can access controls on the form just like in a full page POSTback.
This is potentially huge as it eliminates much of the need to pass complex parameters from the client to the server. Instead you can simply read the client state, or if necessary even assign additional state to hidden form variables and pass that back. To read state you can just do this.txtFirstName.Text to retrieve the current content of a control, for example.
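To make this concrete, here’s a rough sketch of what the client side of a page pipeline callback sends – this is my own illustration, not code from any of these libraries, and the names (`encodeFormData`, `postToCurrentPage`) are made up. The point is simply that the form’s fields are serialized and POSTed back to the same ASPX page, which is why the server can read `this.txtFirstName.Text` as if it were a regular POSTback:

```javascript
// Serialize name/value pairs the way a browser form POST does, so
// ASP.NET can rebind them to the page's controls during the callback.
function encodeFormData(fields) {
    var pairs = [];
    for (var name in fields)
        pairs.push(encodeURIComponent(name) + "=" + encodeURIComponent(fields[name]));
    return pairs.join("&");
}

// Sketch of the POST itself (defined only; needs a browser's XMLHttpRequest -
// on IE at the moment you'd use new ActiveXObject("Microsoft.XMLHTTP") instead).
function postToCurrentPage(fields, onDone) {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", window.location.href, true);
    xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) onDone(xhr.responseText);
    };
    xhr.send(encodeFormData(fields));
}
```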
Both My.Ajax.Net and Script Callbacks in ASP.NET 2.0 support this (actually ASP.NET beta 2 doesn’t – for some odd reason it sends a GET and querystring values, but this is supposed to change to POST and should be fixed in the latest CTPs (can anybody confirm?))
The disadvantage of this approach is that all that data gets passed back to the server on every hit. If you have a lot of form fields on a form – or worse, lots of ViewState – all that crap is getting sent back whether you need it or not.
Script Callbacks also makes handling client events way more difficult than it should be. For example, if you have multiple client callbacks you have to create multiple controls and stick them on the form and either manage the callback events in the control code or forward the events. While this isn’t difficult, it’s non-obvious and a fairly convoluted design.
Page pipeline callbacks work great for semi-stateful pages like this example, where the data mostly deals with form controls. In that example, POSTing data back to the server makes sense to some degree.
The other problem with page based pipelines is that they HAVE TO post back to the same page. This is not always a good thing as it forces you to manage multiple UI contexts in a single page. The classic example for me here is the InvoiceList that does hover-over popups displaying the invoice detail. Do you really want to generate the HTML display for an individual invoice as part of the invoice listing page? That would mean the listing page actually handles 2 separate UI contexts – both the list creation and the detail item creation – which is not cool at all. If you have a few of those, imagine the bloat that will occur in a single page.
URL based Engine
This is where a URL based engine comes in. There are many scenarios I can think of where you simply want to go to another URL to retrieve the content for the AJAX request, whether it’s HTML created by another page or output that comes from some sort of data service that feeds data to the page. Imagine for example that you have an XML based data service that feeds data requests directly to your client side page – you wouldn’t want to serve that data out of the existing ASPX page, but rather from a URL that’s responsible for the data access and generation. Heck, you can call SQL Server’s SQLXML via IIS directly to feed you the data – you can’t do that with either of the tools above.
The other scenario is HTML pop-overs or HTML fragment generation that gets embedded into an existing page. Again, the most likely scenario is to generate this HTML externally to the page, not within it. You really don’t want to pollute your pages with multiple technically unrelated UI contexts.
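The client side of a URL based engine can be very thin. Here’s a minimal sketch of the idea – my own illustration, not the wwHoverPanel source, and `buildUrl`, `fetchContent` and the `InvoiceDetail.aspx` URL are made-up names: build the target URL with a dynamic querystring, fire an async GET, and hand the result to a callback.

```javascript
// Append querystring parameters to a URL, respecting an existing "?".
function buildUrl(baseUrl, params) {
    var pairs = [];
    for (var key in params)
        pairs.push(encodeURIComponent(key) + "=" + encodeURIComponent(params[key]));
    if (pairs.length === 0) return baseUrl;
    return baseUrl + (baseUrl.indexOf("?") >= 0 ? "&" : "?") + pairs.join("&");
}

// Retrieve content from any URL - another ASPX page, a fragment generator,
// even SQLXML over IIS. (Defined only; needs a browser's XMLHttpRequest.)
function fetchContent(url, onDone) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) onDone(xhr);  // hand back the raw XmlHttp object
    };
    xhr.send(null);
}
```

Usage would look something like `fetchContent(buildUrl("InvoiceDetail.aspx", { id: "1011" }), function (xhr) { ... })` – the detail page is completely separate from the listing page that requests it.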
To address this I use a custom control I created – wwHoverPanel – that is URL based. You pass it a URL and a dynamic querystring and it pulls that HTML down to the client. The control can manage hover windows like the one in the example above automatically – all you do is set a few properties (URL, size of the window) and hook up the event to a callback function, and the rest is taken care of. The control can either automatically display the result in a hover window, or route the content to a script callback handler much like Script Callbacks or the My.Ajax.NET library does. It also supports POST data (optionally – you can turn it on or off) and exposes the XmlHttp object so you can gain easy access to the responseXml property. This is underrated – ironic that most AJAX implementations DON’T actually use XML – since it makes it much easier to consume XML directly without having to resort to XML strings and manually loading browser dependent XML DOM instances.

Note that the POST mechanism actually just ties into the page pipeline if you post back to the same page. There’s really no magic here – simply posting back the form’s POST content is all that is needed, and ASP.NET handles the rest, including assigning the control values.
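The two modes described above – displaying the result directly in a hover window, or routing it to a script callback – boil down to a dispatch along these lines. Again, this is my own sketch of the idea, not the actual wwHoverPanel implementation, and `routeResponse` is a made-up name:

```javascript
// Route a completed XmlHttp response: if the caller supplied a callback,
// hand over the raw object (so responseXML stays available for direct XML
// consumption); otherwise return the text for display in a hover window.
function routeResponse(xhr, options) {
    if (options && options.callback) {
        options.callback(xhr);      // caller can read xhr.responseXML directly
        return null;
    }
    return xhr.responseText;        // default: raw HTML for the hover panel
}
```

Handing over the raw object rather than just a string is what makes the responseXml access cheap – no re-parsing, no browser specific XML DOM loading.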
This approach works extremely well for URL based access. It can be done with practically no code (other than the event hookups) and you can externally create the URLs that feed content.
The downside to a URL based approach is that it has no explicit server side callback support (at least not yet <g>) and, well, it’s a bit simplistic. But that’s really the point – the idea is that if you just want to retrieve URL based content, especially content coming from a different URL, it can be easily accomplished.
Which of these approaches is better? Neither – they are complementary and can be used side by side, as I am doing right now. At the moment I don’t see how they can easily be combined into a single model because the targets are so different. Within the single page model I really like the attribute based approach that My.Ajax.NET uses. If I need to pass raw data or messages this approach works better, especially since it provides type conversion features that make it easy to pass things like DataTables/DataRows.
But on a lot of pages I simply pull plain HTML content into a page and for that the URL approach works much better.