Thursday, December 1, 2011

Busy, busy

Wow, it's been over a week since my last blog post, and despite being excited about starting some Expanz work, I only got as far as downloading the tools and creating a sample model.

I was struck by two things immediately: first, that creating a model called System breaks everything (well, duh), and second, that I had no idea how to 'regenerate' the server project after changing the model. I think the latter was caused by enabling the LINQ to SQL model when creating my server, which seemed to make things harder for no appreciable gain. I really did not have more than a few minutes to get familiar with the tools though, so I am not saying these are actually problems.

The last week has been crazy, which is why I haven't really looked at the Expanz platform: a 7th birthday party, multiple BBQs, a sick kitty, and a weekend-only SW:TOR beta invite, followed by a Foo Fighters concert, a non-sleeping baby, and a work 'heightened operation period' during the week. Hopefully the next weekend and week will be a bit more relaxing.

I haven't heard from Mr R&D (@csharpzealot) whether I have a project to work on with Expanz yet, but if not I'll definitely try and get my market game up and running on the Expanz platform over the weekend.

Wednesday, November 23, 2011

Expanz

I was invited to a presentation by Expanz today to go through their development platform and how it can improve our company's development strategies.

From a technical standpoint it is quite an impressive showing, matching rich model driven development with a robust and scalable application platform, and cross-platform rich thin client support.

The three key features that I think will make it a game-changing experience for application development workshops are:


  • Rich modeling experience and customised data types: being able to define an 'email' data type with built-in validation, for example. This improves consistency between applications and provides a robust library of types that can be used for any application. This is also designed to flow through the business layer and UI, so an email field will always behave consistently throughout your application.

  • Excellent designer and UI generation templates: the coding is left for the business rules, while the model and UI can be completed by the designers / analysts.

  • Stable and scalable application server: the developer does not need to worry about the plumbing of authentication, session management, etc. This is probably the largest differentiation factor between Expanz and most application development frameworks. The design allows for load-balancing, sticky session management and multi-tier application hosting, locally or in the cloud, without the developer having to worry about how this all fits together.

Very impressive in all, and I am definitely looking forward to actually putting the theory into practice. I don't know how this will fit into our company due to client expectations, licensing, etc, but if we can get a foothold it could be a game changer for us. It would allow us to build on successive projects to improve future productivity, and at some point allow for internal projects to be delivered quickly and easily enough to position ourselves as solution vendors, not just consultants.


This is the sort of 'dream' that I have been aiming for with the framework development I have been doing on the side for the last 9 months, so getting hold of this will be exciting (as well as disappointing that I won't really be continuing with that).


For more info on Expanz, visit http://www.expanz.com/

Tuesday, November 22, 2011

Ads Redux

According to my statistics for the last couple of weeks I will be earning an estimated 8c per 1000 requests. Given that my traffic is <500 views per month I'm looking at over 2 years to make my first dollar.

This highlights the volume of traffic needed to actually make any money from blogging. While I'm sure there are ways to tweak this, potentially by using other ad services, it still takes a hell of a lot of ad views to make money.

So bye bye ads, it was fun while it lasted.

Moq Testing and Lambda Equivalence

I have been steadily improving my unit tests as my experience with Moq improves, but I have now encountered an issue that has thrown me a little.

To recap the app design, I am using EF Code First for the ORM, accessed via Repository classes that expose (mockable) operations on the repository. I then have a service layer that performs the business operations across multiple repositories.

My unit testing is focusing on the Service Layer methods, with (partially) mocked repositories. My initial tests were pretty simple: set up the mocks, perform the action, assert the response. However, as I became more familiar with the Moq Verify function I was able to improve the way I tested by actually verifying that my repositories were being accessed with the expected parameters.
An example: when my service method GetItem(itemId) is called, I expect my repository's GetSingle method to be called with that same itemId. I can then test this along the following lines.
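A minimal sketch of the test (the repository, service and entity type names here are illustrative, not the real ones):

// arrange: mock the repository and hand it to the service under test
var itemRepositoryMoq = new Mock<IItemRepository>();
itemRepositoryMoq.Setup(x => x.GetSingle(5)).Returns(new Item() { ItemID = 5 });
var service = new ItemService(itemRepositoryMoq.Object);

// act
Item actual = service.GetItem(5);

// assert: GetSingle was called with the itemId we passed in, and only once
itemRepositoryMoq.Verify(x => x.GetSingle(5), Times.Once());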


The test above ensures that calling GetItem on my service calls the GetSingle on my repository with the expected parameter, and calls it only once. It is a very basic test for a simple method, but is a good example.

The issue is that my repository is a bit more complex than I have shown: the Get methods actually accept lambda expressions as predicates, rather than just a simple key value.
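So the calls look more like the first line below than the second (illustrative names again):

// predicate-based repository call: the caller supplies the lambda expression
Item byPredicate = _itemRepository.GetSingle(x => x.ItemID == itemId);

// rather than the simple keyed overload used in the sketch above
Item byKey = _itemRepository.GetSingle(itemId);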


Now this actually works IF we are only using static/const values in the expression. For example, the Item class has a constant ITEM_TYPE_BOX = "Box", and my service method builds its predicate from that constant.
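Something like this:

// in the service: the predicate references only the constant
Item box = _itemRepository.GetSingle(x => x.ItemType == Item.ITEM_TYPE_BOX);

// in the test: an expression built from the same constant matches, so this Verify passes
_itemRepositoryMoq.Verify(x => x.GetSingle(y => y.ItemType == Item.ITEM_TYPE_BOX), Times.Once());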

If we are passing variables to the service method, and that variable is used to create the repository expression, the test fails as the Verify method cannot find a matching execution of the method. My service method accepts a string itemType and builds the predicate from it, and my unit test uses an equivalent-looking Verify.
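Roughly:

// in the service: the predicate closes over the itemType parameter
public Item GetItemByType(string itemType)
{
    return _itemRepository.GetSingle(x => x.ItemType == itemType);
}

// in the test: this lambda closes over a local variable in the test class instead,
// so although it is functionally identical, Verify finds no matching invocation
string itemType = "Box";
_itemRepositoryMoq.Verify(x => x.GetSingle(y => y.ItemType == itemType), Times.Once());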


The test code above fails because while the two lambda expressions are functionally identical, they are not expressively equal, so the Moq Verify comparison fails. Drilling down into the issue, I found that the expression specifically includes the namespace of the method that creates the expression when using local variables, which means creating the lambda expression in one class, and comparing it to an identical expression created in another class, will always fail (you can see this if you create an expression and call ToString() on it). The reason it works for constant comparisons is that there are no local variables to compare to.

I cannot remove the dependency on lambdas in my repository as this forms the core of the repository flexibility, but I have identified one way of keeping my unit tests robust while overcoming this issue.
It is possible to expose methods in the service that create an expression object which can be used by both the unit test Verify and the service method. The issue with this is that it is somewhat cumbersome, since you need to expose a method for each of the expression combinations your service is using. In many cases this will be relatively straightforward, but it is still a fair bit of effort.
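A rough sketch of that approach, with illustrative names (the key is that both the service and the test obtain the expression from the same place):

// in the service: the predicate is built by a public factory method and used internally
public Expression<Func<Item, bool>> ByItemType(string itemType)
{
    return x => x.ItemType == itemType;
}

public Item GetItemByType(string itemType)
{
    return _itemRepository.GetSingle(ByItemType(itemType));
}

// in the test: one way to wire this into Verify is to match on the expression's string form,
// which now originates from the same class (note this matches the shape of the expression,
// not the captured value)
_itemRepositoryMoq.Verify(x => x.GetSingle(
    It.Is<Expression<Func<Item, bool>>>(e => e.ToString() == service.ByItemType("Box").ToString())),
    Times.Once());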

This is pretty disappointing as the unit testing was going quite well up to this point.

Thursday, November 17, 2011

SOA and how much information to disseminate

I was involved in a meeting today that made me think a little bit about how much detail is, and should be, shared when defining services and consumers.

To put this into context, there is Company A that has created application X (used by a number of different organisations) which needs to access common functionality from Company A, B and C. As Company A owns application X, as well as being one of the service providers, they are designing the service interface to be exposed by Company A, B and C.


Company A will create the Service Definition (preferably with input from all parties) as the primary owner of the system, but a burden of responsibility has been placed on the other companies who will be implementing the service to provide details of their implementation in a supporting definition. This 'Consumer Definition' is intended to document the processes and flows that are followed in the implementation of the service interface, focusing on non-logical functionality such as error handling, logging, issue escalation and monitoring.

The two companies raised some concerns about this additional requirement, centered around two questions: why should we provide this information, and as long as we conform to the interface, why does it matter?

Both these concerns are valid, but I propose that providing such information is invaluable to good SOA architecture. While the service definition is the only thing that the services require, providing the additional information simply increases the ability for the users of the service to understand what is happening in each implementation. A contrived example would be if Company B's service is down, the administrators of the consuming application will know that an issue ticket should have been created in Company B's application fault log, and can contact Company B to verify the issue and obtain an ETA.

In a previous post I discussed some of the SOA points made by Steve Yegge from his time at Amazon, and this to me is clearly the sort of thing that provides massive value in SOA designs. In defining clear operational contexts for the services as well as the services themselves you can provide a much more meaningful and robust SOA environment.

Tuesday, November 15, 2011

EF Deleted Items Issue

I noticed an issue in one of my service methods whereby a record I deleted showed up in a subsequent query within a single unit of work.

The example code is

int orderId = order.OrderID;
_orderRepository.Delete(order);
// the deleted order still shows up in this query, within the same unit of work
Order newOrder = _orderRepository.GetSingle(x => x.OrderID == orderId);

The above example is a bit contrived, there's a fair bit more that goes on but this code highlights the issue.

Now that I know what is going on this is relatively straightforward, but it is a bit counterintuitive when starting out.

The problem was in my repository, where I was using the DbContext's DbSet property for each entity directly, instead of the DbSet.Local property. The difference between the two is that the root DbSet contains all elements regardless of their modified state (e.g. it still contains the deleted order, with its state updated to Deleted), while the Local property (an ObservableCollection of the entities the context is currently tracking) reflects the entities in their 'current state', so if you delete an entity from the context it is removed from the Local collection.

I say this is counterintuitive because the only way to identify whether an item is deleted or not is through the root context Entry() method, you cannot base a query on the DbSet to exclude deleted items.
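In other words, the pending delete is only visible through the change tracker, something like:

// the entity is still present in the DbSet; its pending delete only shows up via Entry()
bool isDeleted = ((DbContext)_context).Entry(order).State == System.Data.EntityState.Deleted;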

The solution is however fairly simple. Since I am using a Unit of Work pattern on the context, and my service methods are a single unit of work, I can use the Local DbSet for my repository actions without any issues down the line with disconnected or orphan entities, and I can do this without any modifications to my service.

So where all my repository queries used to use the code below as the base for all repository queries
IQueryable<T> query = _set.AsQueryable(); //_set is the appropriate DbSet<T> for the entity T in the context
I now simply base all my queries off
IQueryable<T> query = _set.Local.AsQueryable();

Now deleted items should not show up in my list of queries. I hope - I haven't had a chance to actually test it just yet.



*edit*

Well, that was short-lived - it seems that the Local collection only contains previously loaded data. The load-then-delete-then-query case works (the deleted item no longer shows up), but any entity that has not already been loaded into the context will not appear in Local at all.



This is incredibly frustrating, as it means I need to know under what scenario I am 'loading' data in order to choose the right source to load from, and it means I need to front-load all the entities I will be working with, then use the Local collection from that point on.



I am seriously thinking of switching to nHibernate over this one.


*edit 2*

I have identified a possible solution, but I am concerned about the performance implications.

When performing a query on my context, I can use LINQ to check the state of each entity in the result set and filter out the deleted ones.



query.Where(whereClause).ToList().Where(x=> ((DbContext)_context).Entry(x).State != System.Data.EntityState.Deleted ).ToList();


The two performance issues with this are:

  • I need to 'ToList()' the query and then apply the state filter (otherwise EF will attempt to apply the filter to the SQL query, which it can't do). This is not ideal, but not critical, and I may be able to force the first Where() to resolve the EF part in another way to avoid the extra list creation (see the sketch below).

  • Queries that return a large number of results will be impacted (potentially severely), since each entity will be inspected individually against the context change manager.
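One way to avoid the intermediate list (untested) would be to switch to LINQ-to-Objects with AsEnumerable() instead of the first ToList():

// AsEnumerable() keeps EF as the query source but applies the state filter in memory
// as results stream back, avoiding the intermediate List
var results = query.Where(whereClause)
                   .AsEnumerable()
                   .Where(x => ((DbContext)_context).Entry(x).State != System.Data.EntityState.Deleted)
                   .ToList();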


So perhaps an nHibernate implementation could wait if this does what I want it to without critical performance implications.

Monday, November 14, 2011

Javascript standards

HTML5 and CSS3 are well on their way to becoming standardised, and you can be fairly sure that when using these technologies your knowledge can be reused time and time again. Why then are there so many different bloody JavaScript libraries, all with their own syntax and base functionality? Every time I start looking to re-learn JavaScript for web app development I feel like I am starting all over again.

Granted, jQuery seems to be leading the pack, but even libraries built on jQuery decide to do their own thing with data sources and other core features more often than not.

Of course other languages are not immune to this, with a plethora of frameworks and tools in .NET alone, but at least the basic syntax is the same, and with .NET you get the benefit of an excellent dev environment to help manage the differences. Each tool and library can usually work regardless of the other tools and libraries you are using as well, whereas in JavaScript if you find a nice calendar control for jQuery and you are using YUI you are SOL.

Ok, rant over. I should go check out some JavaScript libraries to see if I like them enough to learn, since I am lagging a bit in my skills. The Twitterverse is all over #kendoui at the moment, and the demos of #knockoutJS I ran through a month or so ago were nice, so maybe I should start there.

Sunday, November 13, 2011

Free 2 Play

Not long ago I mentioned I was a bit of an MMO lover, but in recent times I haven't been playing much. This means I have never really been around for the whole 'free to play' MMO scene. While the business model seems to work, I don't particularly like the gameplay traps that most of these games seem to fall into.

A lot of the F2P games follow a very 'eastern' progression model (i.e. grinding), which I personally detest, but F2P extends on this by either emphasising the grind unless you pay to access areas that give better rewards or by providing the rewards themselves at a cost.  Even western games that are now F2P have a similar trap, with D&D online and Champions Online allowing you to pay for the better quests.

To be honest I think I prefer the 'unlimited trial' model of WoW and W:AR, where you can pretty much do everything any other character can up until a certain level, though not everyone will agree.

League of Legends is a F2P game that I am really enjoying though, perhaps because it is not an MMO and has no real grind.  You can pay for new characters, skins, and minor abilities, or you can use earned points to purchase these items.  A real bonus to LoL is the continual cycling of the character roster, so every now and then you will get a new list of characters to choose from (plus any you purchase), which means you are not stuck with crappy stock characters even if you play rarely or don't want to pay.  Since most of the 'power up' abilities can only be bought with the 'earned' points, and not cash, you never really feel cheated by not paying either.

But all in all, I have yet to find a F2P MMO that even comes close to interesting me the way subscription ones do, and it seems the majority of the subscription games that have turned F2P are the grind-y ones that I was never really interested in anyway.

Perhaps I am just getting old and MMO-hating rather than F2P-hating; we'll see what ToR does for me :)

Saturday, November 12, 2011

My god! It's full of ads.

So out of curiosity I decided to enable ads on my blog.  I've always wondered how much money ads can actually make for someone.  Obviously with my limited audience I don't expect to make any money, but I thought it would be interesting to see what the payout is like and extrapolate from there based on my page views.

So I'll disable it again in a week or two, and sorry for the inconvenience in the meantime.

Friday, November 11, 2011

The difference between bad code and good code

A colleague asked for some advice today on a project that he inherited (which I am extending with a separate module incidentally).

The issue was related to the usage of Entity Framework in the code that he had to maintain, and he needed some advice on how to proceed.  The problem was that the service layer was calling the repository multiple times, but each repository method was wrapped in a separate unit of work.
e.g.

public void DeleteEntity(int entityID)
{
    using (var context = new EntityContext())
    {
        var entity = context.Entities.SingleOrDefault(p => p.entityID == entityID);
        context.Entities.DeleteObject(entity);
        context.SaveChanges();
    }
}
and
public Entity GetEntity(int entityID)
{
    using (var context = new EntityContext())
    {
        return context.Entities.FirstOrDefault(p => p.entityID == entityID);
    }
}
This caused two problems for the developer, who needed to perform a complex action in his service that referenced multiple repository calls.
  1. He had no control over the transactional scope for the repository methods
  2. Each operation was on a separate EF context, so the service could not load an entity, edit it, and then save the changes (unless the repository was designed for disconnected entities, which it wasn't).
From a maintainability and testability point of view this was also a very poor design, as the repository methods created instances of their dependencies internally (the service methods also created instances of the repositories, making the services inherently untestable).


The version of this design that I implemented for my component follows a similar service/repository/entity pattern, but is implemented in a far more testable and robust manner.

The first improvement over the legacy design is in the dependency management.
My service accepts a context and all required repositories in the constructor, and my repositories accept a context, which allows for improved maintainability (all dependencies are described) and testability (all dependencies can be mocked).  This also allows us to use dependency injection/IoC to create our object instances.

The second improvement was in the Unit of Work design.
Rather than have each repository method as a single unit of work, the service methods are the units of work, so any action within the service uses the same context (as it is passed as a dependency to the repositories that the service uses), and each service call acts as a Unit of Work, calling SaveChanges at the end of the service to ensure that the changes act under a single transaction.
There are limitations to this design (your public service methods become an atomic transaction and you should not call other public methods from within another method) but for simplicity and maintainability it is a pretty good solution.

Below is a simple example of the design I am using, preserving maintainability, testability, and predictability.  I'm not saying it is necessarily the best code around, but it solves a number of issues that I often see in other developers code.

public class HydrantService
{
  private readonly HydrantsSqlServer _context;
  private readonly EFRepository<Hydrant> _hydrantRepository;
  private readonly EFRepository<WorkOrder> _workOrderRepository;
  private readonly EFRepository<HydrantStatus> _hydrantStatusRepository;

  public HydrantService(HydrantsSqlServer context, EFRepository<Hydrant> hydrantRepository, EFRepository<WorkOrder> workOrderRepository, EFRepository<HydrantStatus> hydrantStatusRepository)
  {
    _context = context;
    _hydrantRepository = hydrantRepository;
    _workOrderRepository = workOrderRepository;
    _hydrantStatusRepository = hydrantStatusRepository;
  }

  public void createFaultRecord(WorkOrder order)
  {
    // previously: _context.HydrantStatuses.Where(x => x.StatusCode == "Fault").FirstOrDefault();
    HydrantStatus status = _hydrantStatusRepository.GetSingle<HydrantStatus>(x => x.StatusCode == "Fault");
    order.Hydrant.HydrantStatus = status;
    _workOrderRepository.Add(order);
    _context.SaveChanges(); // the service method is the unit of work
  }
}

public class EFRepository<T> where T : class
{
  private readonly IDbContext _context;

  public EFRepository(IDbContext context)
  {
    _context = context;
  }

  public virtual ICollection<T> GetAll()
  {
    IQueryable<T> query = _context.Set<T>();
    return query.ToList();
  }
}

Thursday, November 10, 2011

Social Communities

This is a bit of an introspective post about my own interaction with social, online and gaming communities.
I have always been fairly anti-social, and aside from a small group of close-knit friends I have never felt comfortable in social situations.
Since getting married and now the birth of my gorgeous baby girl I have become even more reclusive, and I think I need to kick myself into gear and do something about it.
Since I do have a bit of a problem with social interaction however, just getting out and meeting new people isn't really my thing, so I am thinking of expanding my online presence somewhat, which is just a little bit easier.

At a professional level I have started doing this a bit, with an increase in my Twitter and LinkedIn presence, and a marked increase in blogging. I would like to become more involved in PerthDotNet but as my wife works part time retail, Thursdays are out of the question for any sort of meet up.

The other area that I am thinking of using to increase my social interaction is through gaming communities. I have always been an MMO whore, from UO, EQ and DAOC in the early days, to SW:G, WoW, EQ2, DAOC, W:AR, and Eve Online more recently (yep, DAOC is there twice, I've been back to that game more than any other). Ironically though, despite being "MMO" games I have only ever had very limited interaction with the gaming community, with the majority of my time spent solo, or in the company of my RL friends. On the opposite scale, a close friend who has always suffered from social anxiety far worse than I ever did was really dedicated to the community in the MMOs we played.

The Eve community on Ars Technica was really the first time I had ever really tried to be part of a gaming community on my own. Unfortunately playing Eve with limited time commitments is an effort in futility, especially in a large 0.0 guild in a low population timezone. Whether I try and become more involved in Eve or pick up a new game such as SW:TOR, I really need to try and become a functional member of a community in the game otherwise I will end up continuing to be a hermit and end up back to where I am now.

Hopefully being part of both a professional and gaming community will help improve my communication and organisation skills, but mostly will get me back into interacting with people and becoming less of a hermit.

p.s. personal blogging is much harder than technical blogging...

Friday, November 4, 2011

Moq - Multiple Calls

I previously had the assumption that Moq allowed for Ordered setups, which was apparently mistaken. This must have been in TypeMock or another tool I looked at in the past.

So, I wanted to do three 'boundary value' calls to my service and return a different value from my repository for each call. Now I could do this as three 'setups' with fixed parameter values, or three separate setup/execute phases, but I wanted a better way that fits the standard setup/execute/verify testing pattern.

Thanks to this blog I have a nice solution.


_hydrantDbSetMoq.Setup(x => x.GetSingle<Hydrant>(It.IsAny<Expression<Func<Hydrant, bool>>>(), It.IsAny<IEnumerable<string>>(), It.IsAny<bool>())).Returns(
    new Queue<Hydrant>(new[] { new Hydrant() { HydrantID = 100 }, new Hydrant() { HydrantID = 0 }, new Hydrant() { HydrantID = 0 } }).Dequeue
);

Hydrant actual1 = service.GetHydrant(100);
Hydrant actual2 = service.GetHydrant(-1);
Hydrant actual3 = service.GetHydrant(int.MaxValue);



And each successive call to GetHydrant will return the next value in the queue.

Thursday, November 3, 2011

Mocked Repository and Generic Constraints

So, a productive couple of days - three issues resolved.

Reinstated a Repository - Moq cannot mock EF IDbSet so I decided that bringing the repository back would be a good idea. Example unit test below:

[TestMethod]
public void ListInventoryTest()
{
    Mock<IDbContext> context = new Mock<IDbContext>();
    Mock<AssetRepository> assetRepository = new Mock<AssetRepository>(context.Object);
    Mock<StockpileRepository> stockpileRepository = new Mock<StockpileRepository>(context.Object);
    List<Asset> assets = new List<Asset>();
    assets.Add(new Asset() { AssetId = 1 });
    assets.Add(new Asset() { AssetId = 2 });

    Stockpile stockpile = new Stockpile() { StockPileId = 1, Assets = assets };

    //assetRepository.Setup(x => x.GetSingle(It.IsAny<Expression<Func<Asset, bool>>>())).Returns(asset);
    stockpileRepository.Setup(x => x.GetSingle(It.IsAny<Expression<Func<Stockpile, bool>>>(), It.IsAny<IEnumerable<string>>(), It.IsAny<bool>())).Returns(stockpile);
    var inventoryService = new InventoryService(stockpileRepository.Object, assetRepository.Object);
    List<Asset> rv = inventoryService.ListInventory("Hailes", "Jita01");
    Assert.IsNotNull(rv);
    Assert.IsTrue(rv.Count == 2);
}
Then, now that I had a repository, I could add a Null Object pattern solution to my repository, which was a bit trickier than I expected. In order to create a new T in the generic method, I needed to include a generic constraint to ensure T had a parameterless constructor.

public virtual T GetSingle<T>(Expression<Func<T, bool>> whereClause, IEnumerable<string> customIncludes = null, bool overrideDefaultIncludes = false) where T : new()
{
    IQueryable<T> query = (IQueryable<T>)this.ApplyIncludesToSet(customIncludes, overrideDefaultIncludes);
    T val = query.SingleOrDefault(whereClause);
    if (val == null)
    {
        val = new T();
    }
    return val;
}
The final issue I resolved was picking an appropriate lifetime manager for the EF DbContext when running in an 'application' context. Using the PerResolveLifetimeManager ensures that when resolving a class, any common dependency in the entire resolve path is shared - this means that a service with two repository dependencies, which both depend on a DbContext, will both use the same DbContext when the service is resolved - yay. This does exactly what I want: each operation should use a new service instance, which will use a single DbContext across all repository actions within that method.
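The registration itself is tiny (a sketch; this assumes both the service and the repositories take IDbContext in their constructors):

var container = new UnityContainer();

// a new HydrantsSqlServer context is created once per Resolve call and shared by
// every dependency in that object graph (the service and all of its repositories)
container.RegisterType<IDbContext, HydrantsSqlServer>(new PerResolveLifetimeManager());

// each operation resolves a fresh service, which means a fresh context for that unit of work
var service = container.Resolve<HydrantService>();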

So yeah, productive (and thanks to @wolfbyte for the generic constraint tip).

Next job is to flesh out my unit tests and continue on the functionality, as this covered the majority of my architecture issues.

Tuesday, November 1, 2011

Unit Testing, Moq, EF, and Repositories

 
Well, I have just started a small (8-12 week / 1 resource) project using an unfinished version of our in-house framework for some parts of it. In the process I want to ensure that I integrate some key design patterns (null object, repository, and unit of work) and full unit testing on the service implementation. This will hopefully help alleviate the pain of working with DotNetNuke, cross-application dependencies, and webforms.

So my first step was to properly expose services from the dependent application, as this is a major point of failure in other systems that use this application; this was pretty straightforward as the application design is not too bad. As this is a shared dependency on the DotNetNuke instance, I did not need to expose this as a WCF service, but could easily change it in the future if necessary. The new service interface will help prevent changes in the core application from breaking the dependent application, as any changes will be reflected as build failures in the service class, highlighting this to the developers and ensuring they either make the change in a way that does not break the interface, or let all consumers of this service know there is a breaking update and plan appropriate changes. This is a key issue encountered when services and application references are not well defined, and has caused a number of deployment issues at my current client.

The guts of this post however is to discuss my plan for unit testing, and how I had to rethink my previous statement of going ‘repository-less’. I previously discussed the removal of the repository from the framework and using the DbSet functionality in the EF context as the repository pattern. This worked really well, until I decided to do some unit tests.
I decided to use a mocking library in my unit tests specifically to ensure I was performing appropriately isolated tests, and to reduce the impact of managing test data. I had previously looked at Moles (Microsoft stubbing tool), but it always seemed so cumbersome and confusing, so I picked up Moq instead. I really like the Moq usage pattern, and so I thought it would be a good fit.

So, the plan was to use Moq to create mocks of the repository functions that act in predictable and repeatable ways, which means we can run the service and test that the service behaves as we expect.

An example is given below – in this example I created a service to get a list of ‘stations’ from the dependent application. Since I am testing my service, I want to Mock the dependent application service to act predictably, so I can ensure that my service acts the way I want it to (we are not performing end-to-end integration testing, so we don’t want to rely on the dependent application succeeding or failing at this point)


//when we call 'GetStations' with a parameter of 0, our mocked service throws an exception – I know the dependent service reacts in this way, so I can ensure this is integrated in my test
_samsServiceMoq.Setup(x => x.GetStations(0)).Throws<Exception>();
//when we call 'GetStations' with a parameter of -1, our mocked service returns no results
_samsServiceMoq.Setup(x => x.GetStations(-1)).Returns(new List<Unit>());
//when we call 'GetStations' with a parameter of 1, our mocked service returns a list with one item in it
_samsServiceMoq.Setup(x => x.GetStations(1)).Returns(new List<Unit>() { new Unit() { UnitID = "100" } });

//create an instance of my service, and pass in the mocked dependent service
UserService target = new UserService(_samsServiceMoq.Object);
List<Unit> actual1;
List<Unit> actual2;
List<Unit> actual3;
actual1 = target.GetStations(-1); //execute the service method with the specified parameter
actual2 = target.GetStations(1); //execute the service method with the specified parameter
actual3 = target.GetStations(0); //execute the service method with the specified parameter
//check whether the mocked service methods were called in the execution of our tests – this is useful to ensure that your service method is calling the expected mocked method with the expected parameters.
_samsServiceMoq.VerifyAll();
//check the results from the service to ensure they match what you expect (based on the response from the mocked service)
Assert.IsNotNull(actual1);
Assert.IsNotNull(actual2);
Assert.IsNotNull(actual3);
Assert.AreEqual(0, actual1.Count);
Assert.AreEqual(1, actual2.Count);
Assert.AreEqual("100", actual2[0].UnitID);
Assert.AreEqual(0, actual3.Count);

The above example shows how you can configure a test without worrying about the dependent services, so you can test only the functionality in your service. You will also note that the service itself needs to be designed so that all dependencies are passed to the service, instead of created in the service (this is a key point in ensuring testability of components, all dependencies must be passed to the object). If we did not do this, we could never mock the dependent service, which means we would need to set up the test to ensure the dependent service responds appropriately (configure the dependency, and know/configure sample data that the dependency will respond to).
This works really well, I can test my (admittedly very simple) service without caring about configuring the dependent service. However doing the same thing on an EF repository instead of the dependent service does not work so well. The code below should work, but doesn’t due to limitations in EF/C#/Moq.


_hydrantContextMoq.Setup(x => x.Hydrants).Returns(_hydrantDbSetMoq.Object);
_hydrantDbSetMoq.Setup(x => x.ToList()).Returns(new List<Hydrant>() { new Hydrant() });
HydrantService service = new HydrantService(_hydrantContextMoq.Object);
List<Hydrant> actual;
actual = service.GetHydrantList();
_hydrantContextMoq.VerifyAll();
_hydrantDbSetMoq.VerifyAll();
Assert.IsTrue(actual.Count == 1);

Here I am mocking my DbContext to return a mocked IDbSet, and mocking the IDbSet.ToList() to return a list of Hydrants with 1 item. This way I can test my service so that calling getHydrantList on my service returns the single length list. Unfortunately, IDbSet.ToList() is not a mockable method (it is actually an extension method) which means it is not possible to set up a mock for this method. Since my service is using this method, I cannot test my service in isolation of the database.


This is where the Repository comes in. Instead of using the IDbSet.ToList() directly, I would use a Repository GetAll() method which abstracts the call to the underlying DbSet method. As the repository is just another dependency on the service, we can mock this instead of the EF IDbSet, and hence have an appropriately testable service. We will also then have the ability to ensure that the repository supports the null object pattern, so a call to the IDbSet that may return null (such as a find() with an invalid key) can return an appropriate null object to the service, so the service, and all clients, know it will never receive a null as the result of a service operation.
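With a repository in between, the equivalent setup becomes mockable; a sketch (the service constructor shape and GetHydrantList are illustrative here):

// the repository's GetAll() is a virtual method, so Moq can intercept it
var hydrantRepositoryMoq = new Mock<EFRepository<Hydrant>>(_hydrantContextMoq.Object);
hydrantRepositoryMoq.Setup(x => x.GetAll()).Returns(new List<Hydrant>() { new Hydrant() });

// the service now depends on the repository rather than on IDbSet directly
HydrantService service = new HydrantService(hydrantRepositoryMoq.Object);
List<Hydrant> actual = service.GetHydrantList();

hydrantRepositoryMoq.Verify(x => x.GetAll(), Times.Once());
Assert.IsTrue(actual.Count == 1);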

So, big backtrack on the framework repository, and big kudos to Moq for making testing easier (at least for my simple examples so far).

Monday, October 31, 2011

Distractions

Yes, I have been slack lately, but I was going to get back into things, promise. The nudge from a colleague had nothing to do with it.

I have a soft spot for RPGs and Turn-Based Strategy games, and with the cheap Civ5 purchase a little while back, and pulling out my PSP for some Final Fantasy Tactics in the last couple of weeks, I haven't done much of anything for about a month. I like to think of these distractions as a necessary break when working on projects outside of work, but I do get sucked in a bit too much sometimes.

So, I have a handful of things I wanted to sort out with my Market game.

Framework / Architecture


  • Remove the AoP 'Unit of Work' implementation - this is pretty much done, I just need to formalise the new pattern for the UnitOfWork (single EF context/unit of work for each 'public' business method, and a single usage business service)

  • Restore the repository layer - specifically to assist with unit testing (EF/Queryable methods are not mockable, at least using Moq).

  • The Repository has jumped back into my consciousness for two reasons: one, on a new small project I kicked off I plan on doing thorough unit testing, and found that the base EF IDbSet functions cannot be mocked; and two, implementing a null object pattern using EF is not simple, but implementing this logic in the repository is pretty simple.

  • Investigation on a dual nHibernate / EF implementation - see how much effort is involved in creating an nHibernateRepository

  • Investigation into AutoFac for Dependency Injection / IoC - problems with Unity lifetime behaviours and Bootstrapping may be improved with AutoFac.

  • Modify the application actions to use a command pattern, and introduce a server queue for processing.

  • Revisit the timed action services (server thinking / working) to produce a more flexible solution

Short Term Functionality



  • Implement base (atomic) producer AI

  • Implement complex (multiple ingredient) producer AI

  • Implement producer (basic and complex) trading AI - buy (ingredients) and sell (created items) orders, basic market analysis/P&L.

  • Add market transactions

  • Implement a lightswitch asset management application

  • Add ships/capacity and item volume

  • Add ship cargo

  • Add pathfinding

  • Add movement

  • Add Trading AI (buy/move/sell) - include improved market analysis

So yeah, I should get my ass into gear.

Thursday, October 13, 2011

Google, Amazon, Dog Food, and Loyalty

So there's two things I take out of the Steve Yegge Google rant (https://plus.google.com/112678702228711889851/posts/eVeouesvaVX) that I had already been thinking about recently.

The first is the idea of the "Platform" and how the Amazon SOA mandate led to the position they hold today. I had no idea they offered so many services, but you can clearly see how each of their offerings has grown from their internal systems being designed as independent hosted components (even down to their payments system). You can see the "Eat your own Dog Food" approach has clearly paid off, as Amazon can expose these proprietary systems as consumable services, monetizing them instead of simply consuming them as part of their own needs. This is an extreme example that progressed over the course of years, but it does highlight the capabilities that SOA can offer. If you build for enterprise integration and SOA, your components can become much more than the sum of their parts.

The second concept his post highlighted is the idea of company loyalty, and a love for your work. Steve clearly loves Google and has a passion for not only what he does at Google, but what Google does in the broader scheme of things. I think for all the perks that Google offers, this level of loyalty stems from much more than just the money thrown around.

In the past I have worked at a company that I really loved, and while I was paid fairly well, and we had pretty good perks, it was more than this that really made the difference compared to where I am now. We were all treated with respect and acknowledged as key contributors in the company, not just as resources; we were remunerated according to our capabilities; and as a team we all had a passion for what we were doing. This last point is a key item in what made the work environment so outstanding. We felt like we were doing something worthwhile, always pushing each other to improve and grow, and were all happy doing what we were doing.

I miss that high level of motivation from the teams I work with, but I recognise that this was an exceptional workplace and very little will ever compare. Reading the post drove home how great the workplace was.

Wednesday, September 28, 2011

ICT Maturity

When working for a large consulting company you have the opportunity to see a lot of different client sites, which is on the whole a good thing. In the last couple of years though, I have been involved in clients that are seriously lacking in ICT process maturity, and it is causing significant issues when trying to produce the best outcomes for the clients.

My current client site is possibly the worst example of this I have ever seen, with the issues stemming from both the top and bottom of the organisation. From the top there is no ICT governance for project analysis and no architectural analysis for ICT projects and how they fit into the enterprise. From the bottom there is no standard process for source control, development standards, standard development tools/frameworks, or project planning. Then in the middle there is a complete lack of Business Analysis and Project Management to bridge the gap between business expectations and actual delivery.

As a senior developer I have seen these bottom-up flaws to varying extents over all the clients I have worked for.
These issues stem from a lack of technical leadership driving the adoption of better practices, and are common in small development teams with isolated project silos. This lack of "maturity" works fine as long as each developer stays within their own silo, but as the teams grow and the projects become more complex, the lack of maturity begins to show. It is at this point (preferably before) that a technical leader needs to step in and set the standards, guidelines, and processes for the entire development team.

In working towards a technical and enterprise architecture position I have also started to get a much better understanding of some of the failures at the top and mid levels of the client's ICT department.
Again the issues stem from a lack of maturity in the enterprise. While there is a vision for the delivery of individual solutions, there is no review of how individual solutions fit the enterprise as a whole, how solutions can be delivered across the enterprise rather than as stand-alone silos, and how solutions interact with other areas of the organisation.
This leads to the same underlying issues as the developers face at the bottom level, where each project becomes an independent silo, with ad-hoc dependency and management, and communication strategies.
As the enterprise grows, each business unit begins to see the need for shared information and the reduction of process duplication, however it is often too late at this point to implement a strategy for consolidation of existing systems as integration of the multiple silos becomes too complex and time consuming. My current client is at this point now, where they have a number of duplicate systems, data, and processes, and are beginning to see the need to consolidate this duplication, but there are no enterprise or technical guidelines in place to ensure that the projects are designed in a way that will enable the requirements of the enterprise, not just the silo.

I have noted that ICT maturity is often driven by the needs of a growing business outstripping the capabilities of the independent silos of business processes that are the hallmark of immature enterprises. This is an issue that affects these businesses from the bottom up, as well as the top down.

I have concluded that enterprise architecture is a fundamental step in the maturity of both a business and its ICT needs, and the sooner a business implements an effective enterprise architecture, the more agile and less wasteful ICT becomes in delivering business improvement. Unfortunately, business often sees the cost of enterprise and technical architecture as being too high, discounting the later gains in productivity because of the early cost of the architecture. In the long term however, the cost of developing and maintaining the silos far outweighs the cost of a proper enterprise architecture.

At TechEd this year there was a discussion on "Disposable Architecture" and when it is best to use a silo'd approach rather than a full-blown enterprise architecture. While the arguments for this were compelling, it is wise to note the point the presenter made: "if a system needs to communicate with other systems, then use an appropriately architected solution". This is becoming more evident the more experienced I become, because maintaining communications between multiple silos ends up costing far more in the stability and maintainability of applications than implementing an appropriate architecture and building your solutions to meet it, even if the time to introduce and then integrate that architecture exceeds the cost of building the individual silo solution.

So, the more I know, the more I need to learn, le sigh.

Saturday, September 17, 2011

HTML5, XAML, and Windows 8

Ok, so there's another round of "Silverlight is dead" theories going around again. It may well be after BUILD, and especially after the announcement of the "no plugin IE" (in Metro mode), but in reality I still think there needs to be a place for it.

With Windows 8 we get WinRT, the win32 replacement that treats XAML/.NET and HTML5/JS equally, *for desktop apps*. I emphasise the last point, because MS have pretty much given us nothing with respect to Web Apps, except the obvious conclusion that the promotion of HTML5 means MS want us to focus on that.

However, all of the tools that MS has given developers for HTML5 center around Metro desktop applications, and there is very little reuse that can be applied between the Metro HTML5 experience and normal web HTML5 development, since everything is tied in so closely with the WinJS libraries. Conversely, Silverlight and WPF share many of the same controls and libraries and development for both environments is very similar, so Windows developers who currently have the full stack of web (Silverlight) and desktop (WPF) knowledge will now have to stick to desktop only (XAML), or learn HTML5 for web and either HTML5/WinRT or XAML for desktop.

Unless some love is given to Silverlight, Devs are going to have to learn HTML5/JS for web applications, and therefore why not learn HTML5/JS for WinRT at the same time, so what's the point of .NET? There's some hyperbole for you, but I'm sure there are a lot of .NET Devs thinking that very thing.

And the crux is, .Net development is so good because you can be so productive with the tools at your disposal, and XAML development is no exception. HTML5/JS development is far less productive, and far more painful for developers, so for .Net developers moving to HTML5/JS development, there is going to be regression in the productivity and quality of work, which benefits no one.

Friday, September 9, 2011

AoP Transactions Redux

Yesterday I highlighted the issues I was having with "Aspectising" the Unit of Work with regards to concurrency management. After a bit of research I was able to find some resources on the issue that confirmed my suspicions.

I identified that in many cases it was the responsibility of the business logic to determine the actions undertaken in the event of a commit failure (whether it be due to concurrency or some other failure), and that using AoP to wrap the transaction removes any chance for the method itself to perform the appropriate resolution steps.

As it turns out, I am not the only one to come across this issue, as described in this paper: http://lpdwww.epfl.ch/rachid/papers/ECOOP02.pdf. The paper describes the problem I encountered in a much more scientific manner than I possibly could, but it comes to very much the same conclusions I did. Yes it is possible, but a) does it make sense, and b) the solution would be so complex that it would defeat the purpose of cross-cutting implementation, which is to reduce complexity.

The paper also discusses one of the issues I highlighted in previous posts where a solution I proposed to manually manage a transaction for specific business functions would need to be configured in such a way as to only allow execution if not within an existing transaction scope, and similarly AoP managed transaction methods could not be called from within the manually managed transaction method. This is mentioned in chapter 7 when discussing the EJB transaction policies, which is essentially what I was describing in my discussion.

So… my final recommendation is:


  • Remove the AoP Unit Of Work implementation as step 1.

  • Ensure each business logic method manages its own transaction scope

  • Ensure Public methods perform the transaction/concurrency commits, and do not call other public methods

  • Ensure private methods are always within the scope of a public method transaction, and do not handle concurrency failures (as they will only trigger when the outer transaction commits anyway)

This is a fairly generalised solution and might need a better definition, but it should be possible to adhere to these rules to improve consistency of the application and ensure the behaviour is predictable.

Cross-Cutting / AoP Transactions

I previously posted some notes on the Unit of Work implementation using Aspect Oriented Programming and how it relates to the application style (atomic vs persistent).

I have encountered another issue that has added what I think may be the final nail in the coffin for this design. The issue this time is handling Database Concurrency when using the Unit of Work, and the obvious issue is that the DbContext 'save' occurs outside the scope of the method that performed the changes, which means that method cannot define any custom concurrency handlers.


An example is as follows:



My core "Server" needs to update the users' Action Points (AP) every hour.


This process cannot fail due to concurrency issues – it must retry (with newly calculated values) if an error occurs, else the user will either not get their AP or will end up with too many.



  • A user has 20AP

  • The server begins the process to add 10 AP, loading the user as a 20AP user

  • The user 'spends' 3AP – total AP = 17

  • The server 'Adds' 10 AP – total is 20 + 10 = 30AP

In the above scenario, the user has magically performed an action at no cost, since the two processes occurred in parallel on the database.


If we simply throw a DB Concurrency error, the server process would have failed, so the +10AP would never occur and the user misses out on their hourly allocation, also a very bad thing.


What we ideally want to do is to catch the Concurrency error in the Service Method that adds the 10AP, recalculate what the new AP should be, and save the new AP count. However using the Unit of Work method the Service Method cannot handle this, it would have to be handled in the consumer of the service method. For User Actions this may be appropriate (we provide a warning and tell them to try again), but for non-interactive and business critical actions we want to handle this within the service, which means we cannot use the Unit of Work attribute to handle the context saving.

At the moment I do not have a way of solving this using the Unit of Work attribute interception, but there may be a way to instantiate appropriate handler chains for specific concurrency issues.
For example you may be able to specify a handler for concurrency issues on a particular Entity (User for example) and/or a specific field in an entity (AP), which will consistently handle concurrency issues on that type. This way if you encounter a concurrency error on the AP field, you always recalculate and try again, but if you encounter a concurrency error on a less important field you can throw an exception and have the user try again.


You could also potentially build up an Action chain where each service method you call determines whether a concurrency error can just throw the error back to the user, or whether to handle it in a custom manner (and build up the action handlers as appropriate)

I will have to investigate this, the second option might be an impressively complex way to resolve this issue.

Obviously the simple option is to make every service method completely self-contained and handle everything specific to the service method, which will work, but defeats the purpose of having a framework do the manual labour.
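For reference, that self-contained version would look something like this (a hand-rolled retry sketch with illustrative entity/property names, assuming EF throws DbUpdateConcurrencyException on a conflicting save):

public void AddHourlyActionPoints(int userId, int pointsToAdd)
{
    bool saved = false;
    while (!saved)
    {
        try
        {
            // load the current AP inside the retry loop so each attempt uses fresh values
            User user = _userRepository.GetSingle<User>(x => x.UserID == userId);
            user.AP += pointsToAdd;
            _context.SaveChanges();
            saved = true;
        }
        catch (DbUpdateConcurrencyException ex)
        {
            // the user spent AP while we were working: refresh the stale entity and try again
            ex.Entries.Single().Reload();
        }
    }
}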

Too clever for its own good

I have been a big proponent in the past of Unity and Dependency Injection, especially related to aspect oriented programming. Automatic transaction and logging support are two key areas where AoP has provided huge benefits to the simplicity of application development.
With a caveat that this works really well for atomic systems, where a request is self contained and all the necessary processing occurs in the one transaction.

Some limitations of this have arisen during the design of the Market game however, and this is predominantly due to the need to inject 'time' into the equation. Ultimately we are attempting to perform an action that spans a period of time and during that time we may wish to read the progress of that task. When a task is a single transaction however, it is not possible to read that progress from outside the transaction. This leads to the scenario where we need to perform a method with custom transactional scoping. i.e. do it manually.

This introduces room for errors, because we need to ensure that it is not possible for a UnitOfWork method to call a 'manually managed' method, as this will corrupt the UnitOfWork scope, and the same for manually managed methods calling UnitOfWork methods.

We also need to ensure that for a particular service instance we can't call a second method while a "long running" method is in progress, as the DbContext scope will be affected (since the same DbContext is shared across the methods of that service instance). This is actually a more general issue: you cannot reuse a service instance if there is an ongoing action.

Yes this can be worked on, and is not a very difficult change to make, but it does mean that there is the potential to introduce 'unexpected' behaviour, which is one of the reasons AoP and dependency injection is so useful.

Finally, as for my Market Application, I may rethink my concept of Time. This is something I've been looking at for a little while, and it does make sense. Basically it is changing Time to be more like the facebook game model. Essentially everything the user wants to do will take up X Action Points, which can be gained in a number of ways (time being the primary one).

This would resolve the issue we have with "Actions taking time" as every action would be immediate (if you have enough AP), but you just can't do anything else until you have more AP.

“Long Running Processes”, “Asynchronous Communication”, and “Everything In the Database”

This blog post comes from technical design considerations for the Market game, specifically related to interoperability (multiple ways of interacting with the core services), security (more specifically application layer responsibilities), and scalability.

Security/Design Concerns
The problem was initially raised in relation to the execution of long running processes in the Service layer, and specifically in my case the "production process", but morphed into a major design and architecture rethink.

The production process is the process by which an item is created - consuming ingredients, and taking a specified amount of time to complete.
I was working on an AI process that would continually run item production, and initially had a simple method for production

Choose what to create
Determine how long it would take
Start a timer (short)
On timer completion, create the item, start the timer again
Start a timer (long)
On timer completion, stop production and choose something else

The issue here is that the AI process (the client) is responsible for the production process, which would be a very bad thing to leave up to a client. This is a fundamental security issue, not only for games (where we don't want cheaters) but also for general business processes (we don't want the client to run banking transactions).

The obvious fact here is that the service should be performing the production of an item; the client should request to 'produceAnItem' which performs the steps to create an item, including waiting the correct amount of 'build time'. The AI client can then worry about the 'big picture' which is specific to its own processing (choosing what to build). By doing this in our service method we are relying on either a blocking call to the service method, or implementing a callback/event to the client when the action is complete.

Asynchronous Issues
This works fine for 'connected' systems, but asynchronous systems such as WCF or ASP.net will not be able to run a service method designed this way. For example, using WCF to process this request means that our WCF call will either block until complete meaning the service method could timeout; or the WCF call will complete immediately, but when the callback/event fires we have no communication channel to inform the client.
WCF can work around this by using duplex communication, but this is limited to WCF and even further limited to the full .NET framework (i.e. no Silverlight/WP7 support), so this cannot be used (it is also unreliable).

Polling and Feedback
A generally accepted solution then is to start the process in the service method and have the client check back to see if the process is complete. While this can be bandwidth inefficient if your polling frequency is too high, it is a reliable solution. This process can also solve one of the key issues with long-running processes, and that is progress reporting, as each time the client checks for completion, the server can respond with any progress information.

This then brings me to the "Everything in the Database" point. If we have a long-running process, whether triggered by a WCF call, running on a background thread, or held anywhere other than a global static variable, then we cannot (easily) get at that process to determine progress or completion. So while our service could be sitting there creating all these items, how does the client know that their particular run is complete? To support this we need to write a "ProductionRun" record into the database for the requested production; we can then update it from the long-running process, and read it back when the client wants to know progress. Perhaps more importantly, we can recover a production run after an application/server crash, as all the details are persisted.

Ok, so we now have a working solution across any client type:

- Client -> Request Production Run
- Service -> Create production run record (including calculated completion time)
- Service -> Start production timer
- Client -> Check for completion/progress
- Client -> Check for completion/progress
- Service -> On production complete, mark the production run complete and do the item creation etc.
- Client -> Check for completion/progress
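
And a matching sketch of the service side, assuming a hypothetical ProductionRun record persisted to the database (the entity, method names, and stub data access are all illustrative):

using System;

// Hypothetical persisted record for a production run.
public class ProductionRun
{
    public Guid Id { get; set; }
    public int ItemTypeId { get; set; }
    public DateTime StartedUtc { get; set; }
    public DateTime CompletesUtc { get; set; }
    public bool IsComplete { get; set; }
}

public class ProductionService
{
    // Client -> Request Production Run: write the record (with its calculated
    // completion time) to the database and return an id the client can poll with.
    public Guid RequestProductionRun(int itemTypeId, TimeSpan buildTime)
    {
        var run = new ProductionRun
        {
            Id = Guid.NewGuid(),
            ItemTypeId = itemTypeId,
            StartedUtc = DateTime.UtcNow,
            CompletesUtc = DateTime.UtcNow + buildTime,
            IsComplete = false
        };
        SaveRun(run);
        return run.Id;
    }

    // Client -> Check for completion/progress: read the record back and report progress.
    public double CheckProgress(Guid runId)
    {
        var run = LoadRun(runId);
        if (run.IsComplete) return 1.0;
        var total = (run.CompletesUtc - run.StartedUtc).TotalSeconds;
        var elapsed = (DateTime.UtcNow - run.StartedUtc).TotalSeconds;
        return Math.Min(1.0, elapsed / total);
    }

    // Stub persistence - in the real application these would go through EF / the DbContext.
    private void SaveRun(ProductionRun run) { throw new NotImplementedException(); }
    private ProductionRun LoadRun(Guid runId) { throw new NotImplementedException(); }
}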

Worker Processing / Message Bus
The above process can be modified slightly to reduce the responsibility of the "Service" and introduce a "Worker" system that performs the processing of actions. The Service method becomes a broker that writes requests to the database and returns responses to the client. A separate process, running on a separate system, then reads the requests from the database and performs actions on them. This allows for increased scalability and reliability, as we are reducing the responsibility of the client-facing system and can use multiple workers to process these requests. This is essentially a Message Bus architecture, which is a proven and reliable architecture for highly scalable solutions, and starting from the solution described above, implementing a Message Bus would not require a major application redesign.
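
As a rough sketch (re-using the hypothetical ProductionRun type from the previous example), the worker can be as simple as a loop over pending runs; a real system would more likely sit behind a proper message bus or queue:

using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical worker process that handles requests written by the broker service.
public class ProductionWorker
{
    private volatile bool _running = true;

    public void Run()
    {
        while (_running)
        {
            // Read runs that are due but not yet marked complete (stub query below).
            foreach (var run in LoadDueRuns())
            {
                CreateItem(run);       // perform the actual item creation
                run.IsComplete = true;
                SaveRun(run);          // the polling client sees completion on its next check
            }
            Thread.Sleep(TimeSpan.FromSeconds(5));   // worker poll interval is a guess
        }
    }

    public void Stop()
    {
        _running = false;
    }

    // Stub data access - these would query the same database the service writes to.
    private IEnumerable<ProductionRun> LoadDueRuns() { throw new NotImplementedException(); }
    private void CreateItem(ProductionRun run) { throw new NotImplementedException(); }
    private void SaveRun(ProductionRun run) { throw new NotImplementedException(); }
}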

August 15 The joys of Dependency Injection! (Ok, I really do like it, but I can see people's eyes glazing over as I write this)

I have identified a fairly serious issue with the UnitOfWork model currently used in the FMSC framework when running in non-atomic runtime systems (an example of an atomic runtime system is a WCF or HTTP server).

The current system Resolves the DbContext and increments the transaction scope at the start of a method, and decrements the scope at the end of the method; when the scope reaches 0 again, the DbContext is saved. This is how the transaction support works. When working with WCF or HTTP operations, the DbContext is recreated for every web request, but the same context is used within that request. This ensures that each request is isolated from other actions, while the request itself acts as one DbContext operation.

For 'normal' processes there is no 'scope' for an action that can act as a lifetime manager for the context. We cannot use a new instance every time a DbContext is resolved, or we would have no transaction support (we would get a new DbContext for each method that is called), but if we use a singleton then we can get unexpected outcomes: if we load a list of entities, edit one item, and then call a completely unrelated method that saves the context, the edited item will be saved, since the same context is used for all operations.

I am struggling to find a solution that can work and still maintain the simplicity of the current solution.

1. One option is to implement a custom lifetime manager that returns a new context if the transaction scope is 0, and otherwise returns the existing context.
   1. This would resolve the scenario described above, as the loading of entities would be in one scope and the data save in a different scope. It would also need to be merged with a PerThread solution so that each thread has its own lifetime manager, ensuring that calls on different threads don't end up within a single transaction scope.
   2. Option 1 requires the implementation of a new LifetimeManager that inspects the current item and returns a new instance if the transaction scope is 0 (alternatively we could dispose of the item when the transaction scope returns to 0 after a save; the former uses a new scope for each non-transactioned request, while the latter re-uses an existing context until a transaction is started). It is a relatively complex solution to implement, but it has the advantage that there are no changes to the application architecture: it is completely isolated to the LifetimeManager implementation.

2. Another option is to create a new DbContext per Service instance (one context per service), and somehow use that context in the UnitOfWork method handler instead of resolving a new instance.
   1. This means that each service must be self-contained, however, as crossing service boundaries will involve different contexts and transaction scopes, which could introduce errors.
   2. This provides the most scope for flexibility, as you can have as many 'services' as you like, each completely independent of one another; you just need to manually manage the service instances if you want to share a context across operations.
   3. Option 2 would use a standard (new item each resolve) DbContext manager, but the UnitOfWorkHandler would inspect the calling object for its context instead of resolving one. This would require a new interface exposing the context of the service, and an update to the UnitOfWork call handler to get this instance from the object being wrapped. This would be the easiest to implement, and probably the best solution despite the requirement that services be self-contained.

3. Another possible option is to create a custom lifetime manager where you create a DbContext manually, which is reused whenever the context is resolved (per thread), and removed when you manually dispose it.
   1. The problem here is that you cannot have multiple contexts open at the same time, as the resolver would not know which one to resolve.
   2. Option 3 would be the most complex to implement, requiring a PerThreadLifetimeManager that can 'create and dispose contexts' on demand, then continually resolve that same item until it is disposed. This may be possible using a standard ExternallyControlledLifetimeManager, but may or may not be thread safe.


I will be trialling Option 2 in the market app, as I will have a service application that will spawn threads to handle long-running processes (essentially a thread per AI instance, which acts as a 'player' in the game), as well as the standard WCF and MVC interfaces for client applications, and this solution seems to be the most appropriate.
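
A rough sketch of the shape Option 2 might take; the interface, context type, and service below are all made up for illustration, and the call-handler side is only described in comments:

using System;
using System.Data.Entity;

// Hypothetical marker interface so the UnitOfWork call handler can pick up the
// service's own context rather than resolving one from the container.
public interface IHasDbContext
{
    DbContext Context { get; }
}

// Placeholder EF context type for the sketch.
public class MarketContext : DbContext { }

// Each service instance owns one context, so every unit-of-work method on that
// instance shares a single context and transaction scope. Crossing into another
// service means crossing into another context, which is the trade-off noted above.
public class ProductionService : IHasDbContext, IDisposable
{
    private readonly MarketContext _context = new MarketContext();

    public DbContext Context
    {
        get { return _context; }
    }

    public void RunProduction(int itemTypeId)
    {
        // Business logic works against this.Context. The UnitOfWork handler would
        // increment the transaction scope on entry, and call Context.SaveChanges()
        // when the outermost method on this instance completes.
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}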

WP7 List Performance / ObservableMVMCollection

One of the key considerations when working with list data in the MVVM pattern is that the list should be represented by ViewModels, not Domain objects, except where the data to be displayed is very basic and contains no additional functionality.
The ObservableMVMCollection is a helper class that wraps a collection of Domain objects into a collection of ViewModels to assist with binding of list data, and this model works very well for its intended goal.

One thing the Tech Ed application highlighted, however, is that creating fully implemented ViewModels is a non-trivial action when data is being loaded very quickly, and because ViewModel creation occurs on the UI thread, this can make the UI unresponsive. In an effort to resolve this, I placed some of the logic into a background thread to reduce the overhead on the UI thread, which caused some issues of its own.

What I identified was a number of dependencies in the ExtendedViewModelBase that prevent ViewModel objects from being created on a background thread, which ultimately stops us from offloading the viewmodel creation at all. At the time I did not have a chance to delve too deeply into this, but I do have a few ideas about where to start resolving it. One area that looked problematic was the registration of MVVM Messages, but command creation (among other things) could also be a culprit.

The first point I would like to make is that the ExtendedViewModelBase grew out of a need to provide common functionality between pages, and handles a number of common steps in the ViewModel lifecycle, as well as specific bindings and properties for common functionality (such as loading progress animations etc). However, when working with this list data, the actual functionality that these individual viewmodels need to present is severely limited. The most that would be required, in general, is to handle a few button or link events in the viewmodel, or to load some additional data not part of the domain object. All the additional functionality is really not required in these individual list item viewmodels.
In light of this, I think the framework needs at least two base ViewModel classes: ExtendedViewModelBase and LightViewModelBase. The LightViewModelBase implementation would be stripped back to the point where it can be created quickly and efficiently and, more importantly, instantiated fully on a background thread, then added to the appropriate ObservableMVMCollection on the dispatcher with no additional processing.

I believe this would go a long way to improving performance of an MVVM application, especially one with significant amounts of List data.
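
To make that concrete, here is a rough sketch of what the split might look like; the base class, the domain type, and the page viewmodel are all made up for illustration, and the only real point is that nothing in the light viewmodel's construction needs the UI thread:

using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Threading;
using System.Windows;

// Hypothetical stripped-back base class: property change notification only,
// no message registration or command wiring, so instances can be built off the UI thread.
public abstract class LightViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected void RaisePropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

// Illustrative domain object and light item viewmodel.
public class Listing
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ListingItemViewModel : LightViewModelBase
{
    public string Title { get; private set; }
    public string Price { get; private set; }

    public ListingItemViewModel(Listing listing)
    {
        Title = listing.Name;
        Price = listing.Price.ToString("C");
    }
}

// Usage sketch from a page viewmodel: create the item viewmodels on a background
// thread, then add them to the bound collection on the dispatcher in one pass.
public class ListingsPageViewModel
{
    public ObservableCollection<ListingItemViewModel> Items { get; private set; }

    public ListingsPageViewModel()
    {
        Items = new ObservableCollection<ListingItemViewModel>();
    }

    public void LoadItems(IList<Listing> listings)
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            var vms = new List<ListingItemViewModel>();
            foreach (var listing in listings)
                vms.Add(new ListingItemViewModel(listing));

            Deployment.Current.Dispatcher.BeginInvoke(() =>
            {
                foreach (var vm in vms) Items.Add(vm);
            });
        });
    }
}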

WCF - SubClasses and KnownType

As part of the "Market Game" that I am working on I have been trying to expose a significant portion of the functionality through WCF services rather than just working with the "Service Layer" directly. While some functionality will only ever be exposed to internal systems (such as the AI logic), the user-facing functions could be accessed via a number of UIs including WP7 and HTML5, so exposing these functions via wcf is desirable.


As it turns out, testing via the WCF services was a really good thing, because I hit an interesting issue that took me quite a while to track down.

Example
In my example I have the following classes:


- Item – any object that can be bought or sold
- BaseItem:Item – an object that has no "ingredients" (an "elemental object")
- ComplexItem:Item – an object that is created from multiple other Items
- Asset – an ownership record for an item (can be either a BaseItem or ComplexItem)
- StockPile – a list of Assets for a particular user at a particular location

I then have a WCF service that returns a StockPile given a user and location (including the list of assets). This call was failing and I had no idea why, especially considering it had succeeded before I made a few changes to the data model.

Troubleshooting
As we all know, debugging WCF errors can be tricky, so when I first encountered an error I had to try a number of different things to get this working (the error was the trusty generic WCF "the underlying connection was closed" error).


The settings I immediately thought of were the "maxItemsInObjectGraph", "maxReceivedMessageSize", and "maxBufferSize" binding settings, as I had run into these before when working with list data and returning 'complex' data types. The data I was returning wasn't that large, but I knew the default limits could be hit fairly easily.


When that failed I tried to find details on "100 Continue" messages (this popped up in a trace, and I had seen an issue like this before). I ended up forcing this off with "System.Net.ServicePointManager.Expect100Continue = false" but this also did not resolve the issue.



Solution
Finally, I stumbled upon a post mentioning serialisation of abstract types and the need for the KnownTypeAttribute, which immediately triggered an old memory of having to do the same thing back at my old work, but with classic ASMX web services.


Anyway, as it turns out, the serialisation process cannot serialise/deserialise derived types that are returned or accepted via their base type – e.g. "public Item GetItem() { return new BaseItem(); }" will fail. However, if you specify that the "KnownTypes" for Item are BaseItem and ComplexItem, then the serialiser can correctly identify the actual Item type and serialise it appropriately.


"If you use a Base Class as the type identifier for a WCF method (parameter or return value) you must add a KnownTypes declaration to the base class DataContract"
Therefore the fix is to add the following to the Item class definition:


[KnownType(typeof(BaseItem))]
[KnownType(typeof(ComplexItem))]
public class Item

WCF can then magically understand and serialise a List<Item> object graph correctly.
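
For completeness, a rough sketch of the overall shape (the members and the service contract are illustrative only; the KnownType attributes on the base class are the actual fix):

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
[KnownType(typeof(BaseItem))]
[KnownType(typeof(ComplexItem))]
public class Item
{
    [DataMember]
    public string Name { get; set; }
}

[DataContract]
public class BaseItem : Item { }

[DataContract]
public class ComplexItem : Item
{
    [DataMember]
    public List<Item> Ingredients { get; set; }
}

[ServiceContract]
public interface IMarketService
{
    // Declared as returning the base type, but the actual instances are
    // BaseItem/ComplexItem - without the KnownType attributes above this fails
    // with the generic "underlying connection was closed" error.
    [OperationContract]
    List<Item> GetStockPileItems(int userId, int locationId);
}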

Repository Pattern

I have been doing some (a very little bit) 'on-the-side' development using the FMSC framework with an end goal of producing a fairly simple multi-platform trade and production game. If anyone is familiar with Eve-Online, this game is based on the market from that. This is really just an excuse to work with the framework, occupy my brain, and get some ideas going.

Anyway, while doing this I came across some discussions on the "Repository Pattern", EF 4.1 and when it is and is not required. Based on this I am thinking that the "Repository" layer could be removed.



To start with, I think the "repository project" that we have right now is still required at a basic level, as this is where the DbContext lives (which is how we interact with EF). It is the individual Repository objects/interfaces that I think could be removed and replaced with direct calls to the EF repository object.



Why the repository

- Separation of concerns – each repository instance is designed to separate the data access functions from the business logic functions. This is your general DAL/BL/UI separation, where the DAL in this case is the Repository.
- Flexibility – the Repository interface should allow you to swap out the underlying ORM with minimal impact.


Why our repository implementation fails

1.a) The DbContext.Set<T>() interface is itself a repository pattern. Business operations occur on the class instances exposed by the Set operation. E.g. DbContext.Set<Item>().Add(itemInstance) will add an item to the database, exactly the same as ItemRepository.Add(itemInstance), but with a whole class layer removed.

1.b) An intent of the repository layer was to ensure that all database operations were resolved before returning to the Business Layer, which prevented the business layer from essentially creating dynamic SQL statements. However, it became apparent that the repository had to be flexible enough to provide the functionality the BL requires, which took a lot of work (such as implementing a way to specify which children to load for an entity).

By adding this flexibility we then provided the BL with more ability to dictate the SQL that was generated, ultimately negating the purpose of the repository to begin with. The only benefit the repository now provided was that all queries would be resolved as soon as the repository was called, not when the service 'resolved' the query.

Implementing this flexibility was also expensive in itself, especially when EF provides it out of the box.

2) Being able to swap out one ORM for another (e.g. EF to NHibernate) would be a particularly amazing feature and is one of the Holy Grails of the repository pattern. However, as highlighted in a few blogs I read: a) how often does this actually happen (I'll slap whoever mentions CBH), and b) how much effort would it really save?

Due to the additional flexibility we had to add to make the repository an asset rather than a hindrance, I believe we are coupled closely enough to the underlying framework that changing ORMs is possible, but potentially more effort than it is worth. The potential payoff is there (one interface for EF or NH), but if we never get there then the repository is just a waste of effort.



Conclusion

For all future work on my "Market" game I will be using the EF Set<T>() "repository" and will attempt to identify any areas where the repository layer is actually more suitable. If I find anything I'll blog about it.
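
As a rough illustration of what working against the Set directly looks like in a service method (the context, entity, and service below are placeholders, not the actual framework types):

using System.Data.Entity;
using System.Linq;

// Placeholder entity and context for illustration.
public class Item
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class MarketContext : DbContext
{
    public DbSet<Item> Items { get; set; }
}

public class ItemService
{
    private readonly MarketContext _context;

    public ItemService(MarketContext context)
    {
        _context = context;
    }

    // Instead of ItemRepository.Add(item) / ItemRepository.GetByName(name),
    // the service works against the DbContext's built-in "repository".
    public void AddItem(Item item)
    {
        _context.Set<Item>().Add(item);
        _context.SaveChanges();
    }

    public Item GetByName(string name)
    {
        return _context.Set<Item>().FirstOrDefault(i => i.Name == name);
    }
}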

Edit: I do actually think there is one area where the repository can provide a useful function, and thanks to Brian for bringing this up. The Non-Null Pattern (or whatever you want to call it) is a pretty useful pattern, and one the repository can help with tremendously. E.g. calling GetSingle(query) on the repository can call Set<T>().FirstOrDefault(query) and, if the result is null, return a blank T. However, if you are working on the Set directly, you will need to check for nulls in the BL and handle them appropriately. It may be possible to use extension methods to do this (actually, that might be a very good way of handling it, note to self), but the repository pattern does make it easy.
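
A quick sketch of that extension-method idea (GetSingleOrNew is a made-up name):

using System;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

// Hypothetical extension implementing the non-null pattern directly on DbSet,
// so the business layer never has to null-check the result.
public static class DbSetExtensions
{
    public static T GetSingleOrNew<T>(this DbSet<T> set, Expression<Func<T, bool>> query)
        where T : class, new()
    {
        return set.FirstOrDefault(query) ?? new T();
    }
}

// Usage: var item = context.Set<Item>().GetSingleOrNew(i => i.Name == "Ore");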

That is about the only real tangible benefit I can see of the repository however.

Need More Blogs

There has been a recent trend at work to try and get a bit more interaction and knowledge sharing between developers, and one of the ways this is being done is via internal blogs. Unfortunately it is pretty much just me writing them, so I figured I'd throw my posts open to a wider audience and start this blog up again.

So beware the upcoming influx of blogs which may not all have an appropriate context (as they may expect knowledge of internal projects), but I'll attempt to ensure that all future blogs have as much context as they need.