Friday, September 9, 2011

August 15: The joys of Dependency Injection! (OK, I really do like it, but I can see people's eyes glazing over as I write this.)

I have identified a fairly serious issue with the UnitOfWork model currently used in the FMSC framework when running in non-atomic runtime systems (WCF and HTTP servers are examples of atomic runtime systems, where each request is a self-contained unit).

The current system resolves the DbContext and increments the transaction scope at the start of a method, then decrements the scope at the end of the method; when the scope reaches 0 again, the DbContext is saved. This is how the transaction support works. In WCF or HTTP operations, the DbContext is recreated for every web request, but the same context is used within that request. Each request is therefore isolated from other requests, while the request itself acts as one DbContext operation.
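The scope-counting mechanism can be sketched as follows. This is a minimal Python analogue, not the actual FMSC (.NET) code; the class and method names are illustrative.

```python
# Illustrative sketch: a unit-of-work wrapper that counts nested scopes
# and only saves the context when the outermost scope completes.

class FakeDbContext:
    """Stand-in for an ORM context; records how many times it was saved."""
    def __init__(self):
        self.saved = 0

    def save_changes(self):
        self.saved += 1

class UnitOfWork:
    def __init__(self, context):
        self.context = context
        self.depth = 0                    # current transaction scope depth

    def __enter__(self):                  # start of a wrapped method
        self.depth += 1
        return self.context

    def __exit__(self, exc_type, exc, tb):
        self.depth -= 1
        if self.depth == 0 and exc_type is None:
            self.context.save_changes()   # save only when scope returns to 0

ctx = FakeDbContext()
uow = UnitOfWork(ctx)

def inner():
    with uow:
        pass                              # nested method: no save here

def outer():
    with uow:
        inner()                           # depth is 2 inside inner()

outer()
print(ctx.saved)                          # the context is saved exactly once
```

The nested call does not trigger a save; only the outermost method boundary does, which is what gives multi-method operations transactional behaviour.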

For 'normal' processes there is no per-action 'scope' that can act as a lifetime manager for the context. We cannot use a new instance every time a DbContext is resolved, or we would have no transaction support (we would get a new DbContext for each method called). But if we use a singleton, we can get unexpected outcomes: if we load a list of entities, edit one item, and then call a completely unrelated method that saves the context, the edited item is saved too, because the same context is used for all operations.
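The singleton pitfall can be shown in a few lines. Again this is a hypothetical Python sketch of the behaviour, not real framework code.

```python
# The singleton pitfall: with one shared context, an unrelated save
# persists a pending edit that the caller never intended to commit.

class SingletonContext:
    def __init__(self):
        self.pending = []                 # tracked, unsaved modifications
        self.committed = []

    def track_edit(self, entity):
        self.pending.append(entity)

    def save_changes(self):
        self.committed.extend(self.pending)
        self.pending.clear()

shared = SingletonContext()

# A user edits an item but has not chosen to save yet.
shared.track_edit("edited item")

# A completely unrelated method uses the same singleton context...
def unrelated_method(ctx):
    ctx.save_changes()                    # ...and flushes the unsaved edit too

unrelated_method(shared)
print(shared.committed)                   # the edit was saved unintentionally
```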

I am struggling to find a solution that can work and still maintain the simplicity of the current solution.

  1. One option is to implement a custom lifetime manager that returns a new context if the transaction scope is 0, and the existing context otherwise.

    1. This would resolve the scenario described above, as the loading of entities would be in one scope and the save in a different scope. It would also need to be combined with a per-thread solution, so that each thread has its own lifetime manager and concurrent threads cannot end up sharing a single transaction scope.


    2. Option 1 requires a new LifetimeManager implementation that inspects the current item and returns a new instance if the transaction scope is 0. (Alternatively, we could dispose of the item when the transaction scope returns to 0 after a save; the former creates a new context for each non-transactioned request, while the latter reuses an existing context until a transaction is started.) It is a relatively complex solution to implement, but it has the advantage of requiring no changes to the application architecture: it would be completely isolated to the LifetimeManager implementation.
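A rough sketch of Option 1's lifetime manager, in Python with hypothetical names (the per-thread aspect is omitted for brevity; a real implementation would keep this state in thread-local storage):

```python
# Sketch of Option 1: a lifetime manager that hands out a fresh context
# whenever no transaction scope is open, and the existing one otherwise.

class ScopedLifetimeManager:
    def __init__(self, factory):
        self.factory = factory            # creates new context instances
        self.current = None
        self.scope_depth = 0

    def begin_scope(self):
        self.scope_depth += 1

    def end_scope(self):
        self.scope_depth -= 1

    def resolve(self):
        if self.scope_depth == 0 or self.current is None:
            self.current = self.factory() # no open transaction: new context
        return self.current               # open transaction: reuse context

mgr = ScopedLifetimeManager(factory=object)

a = mgr.resolve()                         # depth 0: fresh context
b = mgr.resolve()                         # still depth 0: another fresh one
mgr.begin_scope()
c = mgr.resolve()                         # inside a transaction: context...
d = mgr.resolve()                         # ...is reused until the scope ends
mgr.end_scope()

print(a is b, c is d)                     # False True
```

Note the subtlety the post hints at: whichever variant is chosen (new-on-resolve versus dispose-after-save) determines whether a leftover context can survive into the next transaction, so the edge cases need care.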


  2. Another option is to create a new DbContext per service instance, and somehow have the UnitOfWork method handler use that context instead of resolving a new instance.

    1. This means each service must be self-contained, however, as crossing service boundaries would involve different contexts and transaction scopes, which could introduce errors.


    2. This provides the most flexibility: you can have as many 'services' as you like, each completely independent of the others; you just need to manage the service instances manually if you want to share a context across operations.


    3. Option 2 would use a standard (new item on each resolve) DbContext manager, but the UnitOfWorkHandler would inspect the calling object for its context instead of resolving one. This would require a new interface exposing the service's context, and an update to the UnitOfWork call handler to get this instance from the object being wrapped. This would be the easiest to implement, and is probably the best solution despite the requirement that services be self-contained.
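Option 2 can be sketched with a decorator standing in for the UnitOfWork call handler. All names here (`IHasContext`, `OrderService`, and so on) are hypothetical illustrations, not the framework's actual types.

```python
# Sketch of Option 2: the call handler asks the wrapped service for its
# own context through a small interface, instead of resolving one from
# the container.

class IHasContext:
    """The new interface exposing the service's context."""
    def get_context(self): ...

class FakeDbContext:
    def __init__(self):
        self.saved = 0

    def save_changes(self):
        self.saved += 1

def unit_of_work(method):
    """Stand-in for the UnitOfWork call handler wrapping service methods."""
    def wrapper(self, *args, **kwargs):
        ctx = self.get_context()          # use the service's own context
        result = method(self, *args, **kwargs)
        ctx.save_changes()                # save on the same context
        return result
    return wrapper

class OrderService(IHasContext):
    def __init__(self):
        self.context = FakeDbContext()    # one context per service instance

    def get_context(self):
        return self.context

    @unit_of_work
    def place_order(self):
        return "placed"

svc = OrderService()
print(svc.place_order(), svc.context.saved)   # placed 1
```

Because each service owns its context, two service instances are fully isolated from one another, which is exactly the self-containment trade-off described above.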


  3. Another possible option is a custom lifetime manager where you create a DbContext manually; it is reused whenever the context is resolved (per thread) and removed when you manually dispose of it.

    1. The problem here is that you would not be able to have multiple contexts open on the same thread at the same time, as the resolver would not know which one to return.


    2. Option 3 would be the most complex to implement, requiring a PerThreadLifetimeManager that can create and dispose contexts on demand, then keep resolving the same item until it is disposed. This might be possible using a standard ExternallyControlledLifetimeManager, but it may not be thread-safe.
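Option 3's create/resolve/dispose lifecycle can be sketched with Python's `threading.local` (the .NET analogue would be thread-local storage inside a PerThreadLifetimeManager; all names are illustrative):

```python
# Sketch of Option 3: a per-thread lifetime manager whose context is
# created and disposed explicitly, and resolved repeatedly in between.
# It also demonstrates the limitation: one context per thread at a time.

import threading

class PerThreadContextManager:
    def __init__(self, factory):
        self.factory = factory
        self.local = threading.local()    # one slot per thread

    def create(self):
        self.local.context = self.factory()
        return self.local.context

    def resolve(self):
        ctx = getattr(self.local, "context", None)
        if ctx is None:
            raise RuntimeError("no context created on this thread")
        return ctx                        # same context until disposed

    def dispose(self):
        self.local.context = None

mgr = PerThreadContextManager(factory=object)
mgr.create()
a = mgr.resolve()
b = mgr.resolve()
print(a is b)                             # True: reused until disposed

results = []
def worker():
    try:
        mgr.resolve()                     # this thread never called create()
    except RuntimeError:
        results.append("isolated")

t = threading.Thread(target=worker)
t.start()
t.join()
print(results[0])                         # isolated: threads don't share
```

The explicit `create`/`dispose` pair is what makes this the most complex option: every long-running process has to remember to manage its own context lifetime.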


I will be trialling Option 2 in the market app: I will have a service application that spawns threads to handle long-running processes (essentially a thread per AI instance, each acting as a 'player' in the game), as well as the standard WCF and MVC interfaces for client applications, and this solution seems the most appropriate.
