Rick Strahl's Web Log

Wind, waves, code and everything in between...
ASP.NET • C# • HTML5 • JavaScript • AngularJs

Complex Detached Entities in LINQ to SQL - more Nightmares


Ok, so I thought I had the detached entity state in LINQ to SQL figured out, only to have that thrown back into my face. Basically the word from Microsoft is that if you have an entity that was loaded through a LINQ to SQL DataContext at any point it cannot be re-attached to the DataContext.

Note that all the stuff below works fine with 'connected' entities that are attached to a single instance of the DataContext. The following only applies to scenarios where the entity needs to get detached from the data context such as in Web Services or even in middle tier scenarios where the data gets passed around.

So I figured I could safely get away with something like the following, and it does indeed work:

TimeTrakkerContext context = new TimeTrakkerContext();
 
EntryEntity entity = context.EntryEntities.Single(en => en.Pk == 110);
 
context = null;  // kill the context
 
entity.TimeOut = DateTime.Now;
 
context = new TimeTrakkerContext(); // create a new one
context.EntryEntities.Attach(entity, true);
context.SubmitChanges();

 

The code above currently works: you can save the entity in the new context. This is also easy to abstract transparently in a middle tier, and I've done just that in the generic base Save() method of my business layer.

However, last night as I was screwing around with a few more complex objects, I noticed that this only works with a single-level entity. If any child entities are updated it doesn't work properly:

 
TimeTrakkerContext context = new TimeTrakkerContext();
 
EntryEntity entity = context.EntryEntities.Single(en => en.Pk == 110);
 
// *** Kill the old context
context = null;
 
// *** Update single entity
entity.TimeOut = DateTime.Now;
 
// *** Update a related entity
entity.ProjectEntity.CustomerEntity.Company = "Summa LP III";
 
// *** Create a new context
context = new TimeTrakkerContext();
 
// *** re-attach the entity to the new context
context.EntryEntities.Attach(entity, true);
 
// *** Save - no error
context.SubmitChanges();


There are no errors, but when this is run a new Customer and a new Project are created, and the Entry is updated with a new ProjectPk (in other words, the relationship is changed). Ouch!

There's a heck of a hack to work around this, which is basically to force every one of the related entities to be attached as well:

entity.TimeOut = DateTime.Now;
entity.ProjectEntity.Customer.Company = "Summa LP III";
 
context = new TimeTrakkerContext();
 
context.CustomerEntities.Attach(entity.ProjectEntity.Customer, true);
context.ProjectEntities.Attach(entity.ProjectEntity, true);
context.EntryEntities.Attach(entity, true);
 
context.SubmitChanges();

and that actually works.

I posted a message on the MSDN forums last night, and I got a note back that apparently Microsoft is not planning on supporting this scenario at all. According to Matt Warren, if an entity has been attached to one DataContext it cannot be attached to another one, even if the old context is no longer alive.

The problem here is that there is no way to detach an entity.

I'm not sure what the use case is, but it seems the only way this can be done is to create a new instance and then copy the values over. I don't really understand the premise here, though. Entities are POCO objects that hold no state of their own - whatever tracking state there might have been lives on the now-defunct DataContext. How could LINQ to SQL possibly know the difference between a brand-new entity and one that was created off another context?
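The copy-values approach can be sketched with a small reflection helper. To be clear, this is a hypothetical illustration and not part of LINQ to SQL: it copies the scalar properties onto a brand-new instance and deliberately skips association properties (the EntitySet/EntityRef plumbing), since those are what tie an entity to its old context.

```csharp
using System;
using System.Reflection;

// Hypothetical helper: simulate a "Detach" by copying public scalar
// property values onto a fresh instance that no DataContext has seen.
public static class EntityCloner
{
    public static T DetachedCopy<T>(T source) where T : new()
    {
        T copy = new T();
        foreach (PropertyInfo prop in typeof(T).GetProperties(
            BindingFlags.Public | BindingFlags.Instance))
        {
            // Copy only read/write value-type and string properties.
            // Association properties (EntitySet/EntityRef) are skipped -
            // they are what keep the entity bound to its old context.
            if (prop.CanRead && prop.CanWrite &&
                (prop.PropertyType.IsValueType || prop.PropertyType == typeof(string)))
            {
                prop.SetValue(copy, prop.GetValue(source, null), null);
            }
        }
        return copy;
    }
}
```

The fresh copy could then be attached to a new context with Attach(copy, true), as in the examples above - which is exactly the busy work a framework-level .Detach() would make unnecessary.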

In any case, while I'm sure this isn't an easy problem to solve, it is solvable - even if inefficiently - by copying properties where necessary. This isn't rocket science, but it should be handled at the L2S framework level, which knows EXACTLY what needs to be copied, updated, or cleared. How hard could it be to provide a .Detach() method and an .Attach() that can copy the state so it syncs?

If what Matt says is indeed the way it's going to be, that's really beyond lame and makes LINQ to SQL little more than a toy. I really hope I'm misunderstanding, but the ability to easily detach and attach entities is pretty crucial unless you're using a two-tier data scenario with the UI accessing the data directly.

I understand that LINQ to SQL is supposed to be the low-end tool, with the Entity Framework being the high-end tool. OK, but it looks like current builds of EF have EXACTLY the same problems with attaching and detaching. I just don't get how you can build any sort of data access framework and not address this scenario. We've only been building distributed applications for what - nearly 10 years now?

Posted in ADO.NET  LINQ  

The Voices of Reason


 

Ian Cooper
October 01, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

The issue is that for an n-tier application you should really be thinking about exchanging messages across boundaries, not objects. We know that doing distributed objects is hard, and we know that being stateful does not scale, which has led to a shift of emphasis toward the principle that exchanging messages is easier. For example, calling a web service from a rich client does not entail passing an object back and forth, but sending a message to the server telling it to add a new record or update an old one. To effect that you need to load the object in the web service (remember, you are stateless), amend it, and then re-save (this is what people are identifying as re-playing their changes). So I think your n-tier architecture is more suspect than LINQ to SQL, as I suspect you are trying to exchange objects instead of messages.

Garry
October 01, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

I commented about this issue in reply to this post: http://west-wind.com/weblog/posts/135659.aspx

I submitted this as an issue to Microsoft a few weeks ago. They were able to reproduce the issue and indicated it would be fixed before RTM.

The Attach logic was adding the single object to the object cache, but ignored related objects. When SubmitChanges() found these objects, it assumed they were new.

To fix this, they added an "intermediate state" for objects that are related to attached objects. These objects will also be assumed to already exist in the database (just like the attached object), and will not be inserted during SubmitChanges().

Rick Strahl
October 01, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

@Ian, I'm not sure I can agree with that. I've never understood the need - unless appropriate - to peel off messages when in 90% of cases the messages contain the same data that is already present. At that point you're just doing busy work copying properties from one object to another. This is one of the premises that is nice about LINQ to SQL - the entities are basically POCO objects, so there's no additional gunk on them, and according to MS there's no tracking on the actual object. You can't get any closer to pure message objects than that, short of busy work. It entirely depends on your business scenario as well. Most applications are not full-on SOA systems but have some need to share data out in various ways.

In any case, the SOA scenario is not really an issue, because in a Web Service you always get a new instance back. The instance returned in that scenario will not be attached to a DataContext, so presumably that should work anyway.

Rick Strahl
October 01, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

@Garry - Hmmm... that seems odd given the response I got from Matt. I guess we'll have to wait and see what actually works and doesn't, but it's disconcerting to see these mixed messages as to what will and will not work. I sure hope there will be enough time to test with the changes and still provide feedback before going gold.

Garry
October 01, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

Yeah, that is unfortunate, especially since this appears to be a pretty serious issue. Here's a link to the issue I submitted:

https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=295402

The issue status is now Closed(Fixed), so hopefully the fix will make it into the RC/RTM bits.

Rick Strahl
October 01, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

Thanks for posting the reference. That looks good. Question is whether that has changed or not to the point where this won't work due to the 'previously' attached entity that Matt mentions. Wish we could look and play with this ourselves at this point.

It's not clear from the resolution either whether you can modify related properties and get those updated as well, which is the scenario above. In your scenario the read caused the lazy load, but in my scenario above a value is actually changed.

Well, wait and see...

Steve
October 01, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

Have you looked at NHibernate?

I wonder how it handles this scenario.

(I'm very reluctant to use MS's approach; it is their second attempt, so I understand, and they still seem to be having problems creating something for the user (us) that works how we want to use it.)

Your scenario is a common one, I would think.

Ian Cooper
October 02, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

Hi Rick,

I may be making incorrect assumptions about the scenario you are trying to enable. I am assuming, because of the desire to detach from and re-attach to a DataContext, that you are looking at a scenario where you are passing data from the process containing the data context into another process, i.e. you are distributed. OK so far? Without the need to pass data between processes (or threads), I'm not sure what scenario you might be envisaging where you would need a new context.

Your reason for using separate processes is always about how you want to effect physical deployment, and the usual advice is 'don't distribute until you have to'. That is because it is expensive. Note that physical distribution does not have to equate to layers. You can layer within as well as across processes. There do seem to be some architectural concepts around that encourage a model of using RPC (remoting) at every layer boundary. Layers don't have to be physical. Physical distribution is a deployment issue, with concerns around scaling, integration, etc., not around cleanly separating orthogonal concerns. I suspect remoting gets overused by people feeling that they must have physical separation at the layer boundary, and that leads to a desire to amortize the cost of message sending by serializing objects. If you think the cost is too painful, it may be that you need to look at whether you needed to distribute in the first place.

If you are communicating across processes then you are changing context. Implicitly you should not be making assumptions about your caller. From the server you don't know anything about the client, so you define a message based contract to let the client tell you what it wants. Bear in mind that if you are physically distributing you may want to have many different client systems for your server (why else did you distribute?) and in that case it is easier to future proof yourself by exchanging a message instead of a serialized object.

So I think the issue is less the ORM than the architecture you may be proposing for the layers in your system. LINQ to SQL certainly is not a toy. It's lightweight, with a feature set closer to, say, Wilson O/R Mapper than NHibernate, but I have built systems with the former and I am sure I will with LINQ to SQL. The differences from tools like NHibernate are far more to do with support for certain mapping strategies and multiple db types than with issues around how an ORM works.

Ryan Haney
October 02, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

I think the beauty of LINQ in general is the ability to program in a functional way. Combine functional programming with configuration and a provider model, and we can envision extremely scalable and agile applications. IMO, web services are supposed to be used for integration between disparate applications, not as a primary data access strategy.

Rick Strahl
October 02, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

@Ian and @Ryan - I don't necessarily disagree with the premise that it's best to keep things 'connected' when possible. In fact, that's what I've done with my BO's at this point where you can do EITHER connected or disconnected entities. Obviously connected is more efficient and that should be used whenever possible.

But if you're building a business layer it has to be resilient. You shouldn't have to make assumptions about how data is handled. You should be able to pick up an entity and pass it around any way you like, and the front end shouldn't have to know - in fact, it mustn't be required to know - how that entity was generated or whether it belongs to a data context. As it stands currently in LINQ to SQL, I can make that work (even if it's buggy, conceptually it works), but it's now really unclear whether this scenario is going to be supported or not.

Take a look on the MSDN forums and see that this is one of the most asked questions that in one form or another keeps coming up. Obviously people are doing it, politically correct or not.

I know several other developers (including a couple who are much more data-architecture savvy than me) who are currently working with LINQ, and EVERY SINGLE ONE OF THEM has hit this particular issue, precisely because the assumption with any other ORM is that you can take an entity, do as you please, and then re-connect it. What are you saying then - that all these other solutions are architecturally unsound?

There are other problems with a purely connected state. If you're working connected and making changes to a bunch of entities, and at some point you decide you don't need the changes on one or two of them - how do you undo that? You can submit and abort, but you can't individually undo changes, at least not easily AFAIK. And if you're making changes on the middle tier in many business objects, how are you going to hang on to the DataContext, say in a Web application, if you're doing everything through a single context? You're going to track it on HttpContext.Items? Hardly.

If anything the connected behavior encourages people to do crazy stuff like hang on to the data context for an entire application or store it on Session in ASP.NET <g>...

Granted, in a lot of cases all of the above isn't going to matter, and you can get away with creating a context, doing your DAL code, and releasing the context. But there are scenarios where the detach is required, and the day will come when you need it even if you don't run a typically disconnected scenario. Then what? You have to write Reflection code to copy properties from one object to another and walk the relationship hierarchy? This is something that should be baked into the architecture. L2S has methods for it, after all, but they are apparently going to be crippled (but again, we need to see what RC/RTM actually looks like and not take Microsoft's description at face value).

Ryan Haney
October 02, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

When push comes to shove, as long as the application works, it doesn't matter what architecture you choose. Everyone will always have their own opinions. Not every application will have 1 million users. And when it does, I bet you will have to refactor the code anyways. Maintenance. The unavoidable.

Bart Czernicki
October 02, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

I love bloggers that are getting cute with their magazine-like titles to entice the reader (good job there). I am not going to get into specifics, but who cares. This reads similar to your earlier post about how LINQ isn't really dynamic (where that couldn't be further from the truth). I wouldn't call this a "nightmare" yet, but it's good to let Microsoft know.

Rick Strahl
October 02, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

<s> definitely Ryan.

I'm playing devil's advocate. Personally I see myself running connected most of the time, but I'm building a generic business layer, and it needs to have some solution for detached entities.

mycall
October 03, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

Keep up the great posts! Fixed or not in RTM, Microsoft does listen to people like you and there is no reason L2SQL (and L2Entity for that matter) can't bake this disconnected state in.

Ryan Haney
October 03, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

Sorry, I wasn't mocking your code. I just laugh at how much money is spent in corporations building really simple web applications with bloated architectures. So when I speak of web services bloat, I am talking about a single simple web application that speaks to 2 web services and an LDAP store just to login.

Sigh, maybe I am just looking for an "easy" button.

Ben Kloosterman
November 09, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

I agree that for larger systems sending the same objects around is a major pain, both in terms of specific features of the data or comms framework and in terms of managing changes.

Every time you make a change, every assembly needs to be updated. In WCF this is not the case, but WCF is really creating copies of your server objects and using them as messages. That works, but it is a half-baked approach.

The major advantages of using messages are:
- Your business objects have no dependence on the comms layer; feel free to use proper OO and not have a ton of decorations and hacks to overcome some limitation. Once you use WCF, try to change to Remoting or vice versa - not an easy task unless you have used messages.
- Your clients share no code with your servers and hence have simpler deployment and builds. In addition, each client will be simpler for not having the bloat of features it does not use, and can map server data to objects that suit its own domain, e.g. Vehicle may be quite different in different apps for the same company.
- You don't abstract away communication. The developer knows he is sending a comms message, which will be optimized differently. Structures will be flatter, avoiding a lot of issues with lazy loading. We had a requirement recently (2007) to access the same data over 9600 baud.
- You don't have to send all the information, only what clients need to know. This keeps clients simpler. This is more of an issue with larger systems where different clients want to see different things; instead of developing huge objects, each client gets its own view.
- It follows KISS. It adds a layer, but a simple one, since you do not need to use advanced features and workarounds - this saves an incredible amount of time for medium to large projects.
- No use of inheritance, interfaces, and generics, which don't map to XML. So why use a hack to make them work? The alternative is to not support these in your BL, but again, why gimp your BL?
- Messages can match customer use cases and carry unrelated information, i.e. messages can be extended with non-related data. E.g. we had a booking system where the client wanted a referrals count returned; rather than create a separate polling/return mechanism, it could be put in the message. Without some sort of CreateBookingResult message, the original Booking CreateBooking(Booking) method would result in Booking being polluted with non-booking information. People have used multiple server round trips or a number of out parameters here, but that gets just as ugly after a while.
- Messages can trim object trees when converting - just because your categories are linked/lazy loaded, there is no need to send them. Don't use lazy loading? Why gimp your BL?
- Chunky messages are more efficient and provide better performance.

Except for small/trivial systems, the trivial time it takes to convert is well worth these benefits and saves you the pains you would otherwise run into.

Try it.

Regards,

Ben

jdn
December 08, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

Did you ever find an actual work around for this?

If I have an Address and an AddressType, I find that the AddressType gets re-inserted when I create an Address, set the type, and save:

MyDataContext ctx = new MyDataContext();
Address a = new Address();
a.AddressType = AddressType.Get("Billing");
a.City = "Chicago";
ctx.Addresses.InsertOnSubmit(a);
ctx.SubmitChanges();

The 'Get' uses a different context (of course).

This can't possibly be correct. This makes LINQ to SQL almost completely useless for anything, and blahblahblah about messages has nothing to do with it.

I must be doing something wrong, it can't be this bad.

Rick Strahl
December 09, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

Nope, that doesn't work. Microsoft has pretty much stated outright that you cannot mix entities from multiple contexts except for the very few supported scenarios with Table.Attach().

I think the above can be done, however - just add it first and then use the same context in the .Get() method. This is where a business object or some other higher-level wrapper can be handy, holding the DataContext you're working with at a more sharable level so that multiple objects/methods can access it.

But I agree, this really bites. How you can design a data access engine of any sort and not support some sort of integrated ability to attach entities is beyond me...

jdn
December 09, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

In the simple case, I found this works:

MyDataContext ctx = new MyDataContext();
Address a = new Address();
a.AddressType = AddressType.Get("Billing");
a.City = "Chicago";
ctx.AddressTypes.Attach(a.AddressType, false);
ctx.Addresses.InsertOnSubmit(a);
ctx.SubmitChanges();

The attach has to be false. Then, the insert works without inserting a new AddressType.

Outside of the simple case, things get complicated. I am coding a checkout process, and as you can imagine there are many, many levels of objects. What I really hate about the exception message is that it doesn't tell you *which* object you are trying to Add that isn't New, just that you are doing it.

I'll let you know if I can actually get it to work.

Ken Eltringham
December 12, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

Not being able to detach an entity from its collection is a terrible oversight. However, this is how I overcame the problem in my particular application:

1. Create a Delete Stored Procedure (SP)
2. In the VS2008 L2SQL designer, drag and drop the stored procedure onto the design surface (DS)
3. Right click on a class on the DS and select Configure Behaviour.
4. From the Behaviour drop down list; select Delete.
5. Select Customize option button.
6. Below this option button select the Delete sp from the drop down list.

Now… the Delete SP does not do anything, but when it is invoked, the LINQ framework removes the entity from the collection.

Taking this example a little further: if you add a flag property to your entity, the Delete SP can check whether it has been set. If it is set, actually delete the record from the SQL table; if not, bypass the whole delete process.

I know this is not perfect for many reasons, but it does the job.

I hope the ADO Entity Framework has the ability to detach an object.

Jose Luis Chavez del Cid
December 25, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

I have been developing my own layer over Enterprise Library 1/2 for a couple of years now. So I made a DataContext (I call it the gateway manager) detached from the Table (I call it a gateway).

When I need to make a transaction, for making changes to more than one table at a time, I attach the gateways to the Gateway Manager. Based on configuration (app.config or web.config), the Gateway Manager gets the connection string it needs to connect. If the connection string names are the same, they share one connection - because .NET 2.0 has a section for connection strings, it uses the correct provider for the database, as defined by Enterprise Library - but if the connection names are different, it gets another connection. So I can make a transaction across two or more different databases. Pooling is always handled by the .NET providers.

I have already defined basic functions in a GatewayBase class that handle a local table definition, parameters/columns, and execution, focusing on writing the code in the most simple and maintainable way. Some functions pull the definition of the table from the Gateway automatically, so I can define column types and some more stuff and it does the hard work internally. It might be an internally defined query string or it might be a stored procedure.

In recent months I defined an XML Schema describing a structure that helps me "build" the queries and map data to entity classes. It has been a long way, but beyond Insert, Update, Delete, and Select, I have a set of functions like Count, Exists, etc. I can define some simple parameters to structure how I want to do the counting or deleting, etc.

Later I defined a BusinessEntitySet, a way to handle entities and their changes as the DataContext does. This is not tied to how Gateways and GatewayManagers work, so I can use it whenever I need to.

I know it's not the most advanced thing, but I'm working on a designer for Visual Studio so I can edit the XML definition, and the compilers attached to Visual Studio (design time + prebuilt) or ASP.NET generate the code (built on execution).

When I first looked at LINQ to SQL, I thought it was a great tool. Yes, of course it's easier, and with lambda expressions you can build simpler things that are turned into expression trees, and back into code, at run time. The things I dislike are of course "runtime" compiled expression trees, and attached entities: you can't detach them, even if the context has been disposed, when you try to attach the entity to a new DataContext to delete a record from the database.

I received an error when doing the Attach that the DataContext was already disposed, but I'm not sharing the DataContext - it's not even a static field, it's created in the method. And now I get an error like: "System.NotSupportedException: An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported."

I have no way of not depending on the DataContext in LINQ to SQL; I can't recover which DataContext an entity was dependent on, and if it has already been disposed I can't use it anyway. I love simple things: get my data, change my data, store my data. If I need to track changes, I'll use a data context or any other tracking method - keeping it simple, so if it's a small project I can bring my set of puzzle pieces and use as little as I need, and if it's a big project I can use all the pieces. LINQ has been a disappointment in tying me to a DataContext, not offering it as an option.

Christian
December 26, 2007

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

Hi Rick!
Great blog! I've spent a lot of time reading your articles, since I ran into the same problems you did, be it with WCF or LINQ...

I really can't believe what I am reading here. What is MS doing? Do they know that there are distributed apps?

I was really excited about L2S. I thought: "It took MS such a long time to provide an OR mapper, L2S must be a killer tool!" Then the first trouble I ran into, two hours after getting started, was the lack of dynamic queries. I could not believe it. It took me two days to read all the articles and go through all the desperate experiments of people who couldn't believe it either. I settled on the Dynamic.cs file from somewhere in the samples(!!!). No more type checking and a loss of performance, but let's see...

OK, now my WCF services: I have an XBAP client, and it took me days to get XBAP and WCF to work together (a VS bug with "debug as downloaded from...", no way to transmit user credentials... and so on).
And now I can't re-attach an object graph??? "You are trying to attach... THIS IS NOT SUPPORTED"??? What did these guys do?

Sorry, but I will fall back to NHibernate now. Keep your LINQ to SQL for your nice presentations, and drop me a line if one can use it in a real-world scenario!

Jose Marcenaro
January 23, 2008

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

Hi Rick
I share your complaints about not being able to properly detach an entity.
As for your question "Entities are POCO objects - and they hold no state, how is L2SQL even keeping track that an entity was once attached?" ... I guess the answer is in the "EntitySet" and "EntityRef" properties built for each association. Those keep hold of the context, and try to connect when they are accessed for the first time.

I've found this workaround for disconnecting them:
// disconnect OrderDetails property of current 'order' object
EntitySet<OrderDetail> newDetails = new EntitySet<OrderDetail>();  // no datacontext here
newDetails.AddRange(order.OrderDetails.ToArray());  // go fetch them
order.OrderDetails = newDetails;  

Regards,
Jose.

E Rolnicki
April 04, 2008

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

Great blog.

I've had some success with:

string EntityBaseClass.SerializeToXml<T>(T entity)
T EntityBaseClass.DeserializeFromXml<T>(string objectString)

Customer existingCustomer = GetCustomerBasedOnJunk(junk);

//detach customer via serialization...ahh the joy of redundancy
Customer detachedCustomer = EntityBaseClass.DeserializeFromXml(EntityBaseClass.SerializeToXml(existingCustomer));

using(AnnoyingDataContext db = new AnnoyingDataContext())
{
db.Customers.Attach(detachedCustomer);
someNewOrder.Customer = detachedCustomer;
db.Orders.InsertOnSubmit(someNewOrder);
db.SubmitChanges();
}

Rick Strahl
April 04, 2008

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

Yes, I've experimented with that as well, and it works as long as you Attach with the true option to force the object to be marked as changed.

But it's an expensive solution - there's a lot of overhead in serializing to XML and deserializing back.
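For reference, the round trip boils down to something like this. It's a sketch using XmlSerializer; the helper names mirror the EntityBaseClass methods mentioned above but are hypothetical, and XmlSerializer requires a public type with a parameterless constructor (association properties may need to be excluded from serialization):

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Sketch of the serialize/deserialize "detach" trick: round-tripping an
// entity through XML yields a fresh object with no ties to any DataContext.
public static class XmlDetach
{
    public static string SerializeToXml<T>(T entity)
    {
        var serializer = new XmlSerializer(typeof(T));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, entity);
            return writer.ToString();
        }
    }

    public static T DeserializeFromXml<T>(string xml)
    {
        var serializer = new XmlSerializer(typeof(T));
        using (var reader = new StringReader(xml))
        {
            return (T)serializer.Deserialize(reader);
        }
    }
}
```

The deserialized copy would then be attached with Attach(copy, true) so it gets marked as modified - which is where the overhead mentioned above comes from, since every save pays for a full XML round trip.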

Arron
June 21, 2008

# re: Complex Detached Entities in LINQ to SQL - more Nightmares


Here's a site that solves that problem by using Attach and having Timestamps on each table.

http://geekswithblogs.net/michelotti/archive/2007/12/17/117791.aspx

Rick Strahl
June 23, 2008

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

@Arron - this only works for single entities. It won't work if you have related entities; in that case the main entity updates but the related ones don't, which is not all that useful. It's possible to write some manual code to walk the hierarchy of relations, but that's more work than it's worth, frankly (not to mention slow, because Reflection would be required).

Webys
July 04, 2008

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

I have a little problem I'm a beginner in LINQ to SQL and I've follow your tutorial from blog: http://weblogs.asp.net/scottgu/archive/2007/07/16/linq-to-sql-part-5-binding-ui-using-the-asp-linqdatasource-control.aspx

All was OK (I use VS 2008 and C#; I converted the code from VB.NET), but when I moved the code from App_Code into a subdirectory like DAL or BLL (wrapping the generated Northwind.designer.cs LINQ to SQL code in a DAL or BLL namespace, and using partial classes like Product for entity extensibility) and regenerated the .dbml file, I got an error:

1. First, in my BLL directory under App_Code:

public partial class Product
{
    partial void OnValidate(System.Data.Linq.ChangeAction action)
    {
        if (action == ChangeAction.Update && this.Discontinued == true && this.UnitsOnOrder > 0)
        {
            throw new ArgumentException("Record level can't be greater than 0 if Product is Discontinued");
        }
    }
}

and I've included the namespace: using DAL;

...but the error message is still: No defining declaration found for implementing declaration of partial method 'Product.OnValidate(System.Data.Linq.ChangeAction)'

2. If I comment out all the BLL code from my partial Product extensibility class, I get a general error:

Could not load type 'NorthwindDataContext'.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.Web.HttpException: Could not load type 'NorthwindDataContext'.

Source Error:

An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

Stack Trace:


[HttpException (0x80004005): Could not load type 'NorthwindDataContext'.]
System.Web.Compilation.BuildManager.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +565
System.Web.UI.WebControls.LinqDataSourceView.get_ContextType() +68

[InvalidOperationException: Could not find the type specified in the ContextTypeName property of LinqDataSource 'CategoryLinqDS'.]
System.Web.UI.WebControls.LinqDataSourceView.get_ContextType() +193
System.Web.UI.WebControls.LinqDataSourceView.CreateContextAndTable() +458
System.Web.UI.WebControls.LinqDataSourceView.EnsureContextAndTable(Boolean selecting) +39
System.Web.UI.WebControls.LinqDataSourceView.ExecuteSelect(DataSourceSelectArguments arguments) +421
System.Web.UI.WebControls.ListControl.OnDataBinding(EventArgs e) +92
System.Web.UI.WebControls.ListControl.PerformSelect() +31
System.Web.UI.WebControls.BaseDataBoundControl.DataBind() +70
System.Web.UI.WebControls.BaseDataBoundControl.EnsureDataBound() +82
System.Web.UI.WebControls.ListControl.OnPreRender(EventArgs e) +26
System.Web.UI.Control.PreRenderRecursiveInternal() +86
System.Web.UI.Control.PreRenderRecursiveInternal() +170
System.Web.UI.Control.PreRenderRecursiveInternal() +170
System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2041




--------------------------------------------------------------------------------
Version Information: Microsoft .NET Framework Version:2.0.50727.1433; ASP.NET Version:2.0.50727.1433

Thanks in advance guys :)

Michael Foltz
September 15, 2008

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

hey,

I had run into the same problem you've described above, but came across another blog post that elegantly solves it with a few extension methods.

here's the link:

http://blogs.msdn.com/cesardelatorre/archive/2008/09/04/updating-data-using-entity-framework-in-n-tier-and-n-layer-applications-short-lived-ef-contexts.aspx

Rick Strahl
September 15, 2008

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

@Michael - that post talks about the Entity Framework, which handles updates differently than LINQ to SQL.

In either case, though, you end up having to update everything in order for this to work properly, which - grumble - is not the end of the world (and, as the services wonks will tell you, 'just the way it should be anyway'), just 'busy' work.

Michael Foltz
September 16, 2008

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

Fair enough. Guess I misread the post in my attempt to solve the problem I was dealing with. Linq to SQL, Linq to Entities. The two technologies seem so closely related that I've mistaken a solution in one for the other on more than one occasion.

Just curious: have you played with any other ORMs? NHibernate in particular? Any likes/dislikes compared to the MS stack?

Rick Strahl
September 16, 2008

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

@Michael - I'm playing with different ORMs right now, just experimenting around. NHibernate is OK - the biggest issue I have is that the documentation is scattered all over the place. Figuring out the right pieces to use (Castle ActiveRecord) and getting the environment set up is a process if you don't already know where to start. The reflection usage also concerns me, but to be honest I haven't gotten around to testing for perf yet. I'm still experimenting at the moment, but it's slow going (other stuff keeps interfering).

I've been impressed with Frans' LLBLGen Pro product for its flexibility, features and comprehensiveness. Unfortunately, a commercial third-party product is not an option for some of the work I'm doing due to licensing restrictions, so that's problematic.

I've also been looking at SubSonic again recently, but I've had lots of small issues with it right from the get-go that don't give me a warm fuzzy feeling either. I like the approach, though it's more in line with LINQ to SQL's ORM feature set than anything else.

None the wiser at the moment - still poking around. So many choices, and all have their tradeoffs. No 'perfect' solution.

When it comes down to it, I'm leaning towards NHibernate, just because I'm really getting sick of Microsoft fouling up data access over and over again. Going the OSS route, at least there'll be a modest amount of stability, and you still end up working with a tool that's widely known and used.

Gary B
September 21, 2008

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

Hi, I'm a bit of a newbie and was trying to find a solution for dealing with 'detached' DataRows when I came across your blog. It's helped me understand quite a bit - as a matter of fact, I didn't understand what LINQ to SQL was and ended up duplicating quite a bit of what it does... oh well.

After reading your articles and the blog by Cesar de la Torre, it seems there's no magic bullet. Would an acceptable solution be to roll your own LINQ to SQL wizard and add properties to store the original values (basically _Name and _OrigName)? You could then use those (with a bit of reflection, perhaps) to build your SQL and check for concurrency - I think DataRows basically do that using DataRowVersion. The LINQ to SQL wizard is really pretty easy to duplicate: you just read the schema from the db and gen up a bunch of cookie-cutter data classes.

Is this a valid solution? Is the performance of something like that going to be really bad? Or is this really the only solution, and the issue is just that it should have been baked into the framework? Thanks again, and keep up the good work.

Malcolm
March 01, 2009

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

Hi Rick,

Has there been any update to this blog with regard to reattaching a detached object in LINQ to SQL? This is a terrible oversight, in my opinion.

Eric J. Smith
May 04, 2009

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

PLINQO implements Detach functionality on your entities automatically, as well as a ton of other features and enhancements. If you're interested, check out http://www.plinqo.com

Rick Strahl
May 04, 2009

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

@Eric - already looked at it. Looks great. Haven't had time to play with it though unfortunately.

It's not entirely clear - CodeSmith is a requirement, right? That's one thing I'm a little wary of; for a couple of projects that are 'public', that's a big limitation.

Rick O'Shay
May 25, 2009

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

Whether it's the latest Java EE or LINQ to SQL, the data context can be maintained across web requests in whatever strategy you are using to maintain conversational state. Why is that such a problem? The context manages the connection, presumably efficiently.

One alternative is to load the target entity and transfer the incoming changes to the managed entity when ready. The attach method is supposed to automate that but it seems problematic. Assuming it worked with a row version column (which isn't and never was a "time" stamp) there is still the issue of what overall shape it should load. What if the context already has a pending change and you are "attaching" over top of that, notwithstanding the state of the database?

So, there are two dirt-simple, crashingly obvious solutions and one fraught with peril and complexity. I choose the former.
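For what it's worth, the "load and transfer" option is only a few lines (entity and property names here are just placeholders):

```csharp
using (var context = new TimeTrakkerContext())
{
    // re-fetch the managed row, then copy the incoming detached values onto it
    var managed = context.EntryEntities.Single(en => en.Pk == detached.Pk);
    managed.TimeOut = detached.TimeOut;
    managed.Notes = detached.Notes;

    // normal change tracking applies, so this just works
    context.SubmitChanges();
}
```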

Rick Strahl
May 25, 2009

# re: Complex Detached Entities in LINQ to SQL - more Nightmares

@Rick - it's a problem because the Context is very heavy. If you download data and make changes, the context maintains those changes in internal state; it's not a small object. Plus, using Session in .NET for anything beyond simple state settings is not going to scale well, and it's certain to fail once you set Session to run on StateServer or SQL Server, since DataContext can't be serialized.

This means at best you'd have to create your own per user caching mechanism - again that won't scale if you ever use a Web Farm, but even without this issue it's still not a good idea because you'd need to manage lifetime and cleanup.

Really, if you use DataContext you'll want a Unit of Work approach - create, use, and dispose - or at the very least create per request and dispose at the end, to maintain scalability and keep things logical and in line with the Web model. Anything else, IMHO, and you're setting yourself up for a fall.
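In code, the unit of work is nothing more than a tight using scope around each logical operation - a sketch:

```csharp
// create, use, dispose - no DataContext outlives the operation
using (var context = new TimeTrakkerContext())
{
    var entry = context.EntryEntities.Single(en => en.Pk == 110);
    entry.TimeOut = DateTime.Now;
    context.SubmitChanges();
}
```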

If you need persistence across connections, you should use semaphore locking in the database (i.e. special fields that hold lock state) or use holding tables for uncommitted data.
 

West Wind  © Rick Strahl, West Wind Technologies, 2005 - 2019