Wednesday, November 23, 2011

Expanz

I was invited to a presentation by Expanz today to go through their development platform and how it can improve our company's development strategies.

From a technical standpoint it is quite an impressive showing, matching rich model driven development with a robust and scalable application platform, and cross-platform rich thin client support.

The three key features that I think will make it a game-changing experience for application development shops are:


  • Rich modeling experience and customised data types: being able to define an 'email' data type with built-in validation, for example. This improves consistency between applications and provides a robust library of types that can be used for any application. The types are also designed to flow through the business layer and UI, so an email field will always behave consistently throughout your application.

  • Excellent designer and UI generation templates: the coding is left for the business rules, while the model and UI can be completed by the designers / analysts.

  • Stable and scalable application server: the developer does not need to worry about the plumbing of authentication, session management, etc. This is probably the largest differentiation factor between Expanz and most application development frameworks. The design allows for load-balancing, sticky session management and multi-tier application hosting, locally or in the cloud, without the developer having to worry about how this all fits together.

Very impressive in all, and I am definitely looking forward to actually putting the theory into practice. I don't know how this will fit into our company due to client expectations, licensing, etc, but if we can get a foothold it could be a game changer for us. It would allow us to build on successive projects to improve future productivity, and at some point deliver internal projects quickly and easily enough to position ourselves as solution vendors, not just consultants.


This is the sort of 'dream' that I have been aiming for with the framework development I have been doing on the side for the last 9 months, so getting hold of this will be exciting (as well as disappointing, since I won't really be continuing with that work).


For more info on Expanz, visit http://www.expanz.com/

Tuesday, November 22, 2011

Ads Redux

According to my statistics for the last couple of weeks, I will be earning an estimated 8c per 1000 ad requests. Given that my traffic is under 500 views per month, I'm looking at over 2 years to make my first dollar.

This highlights the volume of traffic needed to actually make any money from blogging. While I'm sure there are ways to tweak the numbers, potentially by using other ad services, it still takes a hell of a lot of ad views to make money.

So bye bye ads, it was fun while it lasted.

Moq Testing and Lambda Equivalence

I have been steadily improving my unit tests as my experience with Moq improves, but I have now encountered an issue that has thrown me a little.

To recap the app design, I am using EF Code First for the ORM, accessed via Repository classes that expose (mockable) operations on the repository. I then have a service layer that performs the business operations across multiple repositories.

My unit testing is focused on the service layer methods, with (partially) mocked repositories. My initial tests were pretty simple: set up the mocks, perform the action, assert the response. However, as I became more familiar with the Moq Verify function, I was able to improve my tests by actually verifying that my repositories were being accessed with the expected parameters.
An example: when my service method GetItem(itemId) is called, I expect my repository method GetSingle to be called with a value of itemId. I can then test using the following pattern.
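A minimal sketch of such a test, assuming an IItemRepository with a GetSingle(int) method and an ItemService wrapping it (these names are assumptions for illustration):

```csharp
// Arrange: mock the repository so the call is predictable
var repositoryMoq = new Mock<IItemRepository>();
repositoryMoq.Setup(x => x.GetSingle(100)).Returns(new Item { ItemID = 100 });
var service = new ItemService(repositoryMoq.Object);

// Act: call the service method under test
Item result = service.GetItem(100);

// Assert: GetSingle was called exactly once, with the id passed to the service
repositoryMoq.Verify(x => x.GetSingle(100), Times.Once());
```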


The test above ensures that calling GetItem on my service calls GetSingle on my repository with the expected parameter, and calls it only once. It is a very basic test for a simple method, but it is a good example.

The issue is that my repository is a bit more complex than I have shown: the Get methods actually accept lambda expressions rather than a simple key, so the service passes an expression to the repository instead of just the value.

Now this actually works IF we are using static/const values in the expression. For example, the Item class has a const ITEM_TYPE_BOX = "Box", and my service class builds its expression against that constant.
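A sketch of the working case (the type and constant follow the description above; the property name is assumed):

```csharp
public class Item
{
    // in C#, a const is implicitly static
    public const string ITEM_TYPE_BOX = "Box";
    public string ItemType { get; set; }
}

// Service call: the expression only references a constant, so an identical
// expression written in the unit test's Verify call matches successfully
Item box = _itemRepository.GetSingle(x => x.ItemType == Item.ITEM_TYPE_BOX);
```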

If we are passing variables to the service method, and that variable is used to create the repository expression, the test fails: the Verify method cannot find a matching execution of the method. My service method accepts a string itemType and builds the repository expression from it, and my unit test uses an equivalent lambda in its Verify call.

The test code above fails because while the two lambda expressions are functionally identical, they are not expressively equal, so the Moq Verify comparison fails. Drilling down into the issue, I found that when local variables are captured, the expression specifically includes the namespace of the method that created it. This means creating the lambda expression in one class and comparing it to an identical expression created in another class will always fail (you can see this if you create an expression and call ToString() on it). The reason it works for constant comparisons is that there are no local variables to capture.
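This is easy to see in isolation. The standalone snippet below (not from the original post) prints both expression forms; the exact compiler-generated closure class name varies by compiler version:

```csharp
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    // The compiler hoists 'itemType' into a generated closure class whose name
    // includes the declaring type, and that name appears in ToString()
    static Expression<Func<string, bool>> FromVariable(string itemType)
    {
        return x => x == itemType;
    }

    static void Main()
    {
        Expression<Func<string, bool>> fromConstant = x => x == "Box";

        // e.g. x => (x == value(ExpressionDemo+<>c__DisplayClass0_0).itemType)
        Console.WriteLine(FromVariable("Box"));
        // x => (x == "Box")
        Console.WriteLine(fromConstant);
    }
}
```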

I cannot remove the dependency on lambdas in my repository as this forms the core of the repository flexibility, but I have identified one way of keeping my unit tests robust while overcoming this issue.
It is possible to expose methods in the service layer that create an expression object which can be used by both the unit test's Verify and the service method. The downside is that this is somewhat cumbersome, since you need to expose a method for each expression combination your service uses. In many cases this will be relatively straightforward, but it is still a fair bit of effort.
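A sketch of that approach (names are assumptions): the expression is built in a single place, so the service and the test's Verify call both produce it from the same call site:

```csharp
public static class ItemExpressions
{
    // Single factory for the expression - both the service and the test's
    // Verify call go through here, so the expressions are built identically
    public static Expression<Func<Item, bool>> ByItemType(string itemType)
    {
        return x => x.ItemType == itemType;
    }
}

// In the service:
public Item GetItemByType(string itemType)
{
    return _itemRepository.GetSingle(ItemExpressions.ByItemType(itemType));
}

// In the unit test:
repositoryMoq.Verify(r => r.GetSingle(ItemExpressions.ByItemType("Box")), Times.Once());
```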

This is pretty disappointing as the unit testing was going quite well up to this point.

Thursday, November 17, 2011

SOA and how much information to disseminate

I was involved in a meeting today that made me think a little bit about how much detail is, and should be, shared when defining services and consumers.

To put this into context, there is Company A that has created application X (used by a number of different organisations) which needs to access common functionality from Company A, B and C. As Company A owns application X, as well as being one of the service providers, they are designing the service interface to be exposed by Company A, B and C.


Company A will create the Service Definition (preferably with input from all parties) as the primary owner of the system, but a burden of responsibility has been placed on the other companies who will be implementing the service: they must provide details of their implementation in a supporting definition. This 'Consumer Definition' is intended to document the processes and flows that are followed in the implementation of the service interface, focusing on non-logical functionality such as error handling, logging, issue escalation and monitoring.

The two companies raised some concerns about this additional requirement, centred on two questions: why should we provide this information, and as long as we conform to the interface, why does it matter?

Both these concerns are valid, but I propose that providing such information is invaluable to good SOA architecture. While the service definition is the only thing the services require, the additional information increases the ability of the service's users to understand what is happening in each implementation. A contrived example: if Company B's service is down, the administrators of the consuming application will know that an issue ticket should have been created in Company B's application fault log, and can contact Company B to verify the issue and obtain an ETA.

In a previous post I discussed some of the SOA points made by Steve Yegge from his time at Amazon, and this to me is clearly the sort of thing that provides massive value in SOA designs. In defining clear operational contexts for the services as well as the services themselves you can provide a much more meaningful and robust SOA environment.

Tuesday, November 15, 2011

EF Deleted Items Issue

I noticed an issue in one of my service methods whereby a record I deleted showed up in a subsequent query within a single unit of work.

The example code is

int orderId = order.OrderID;
_orderRepository.Delete(order);
Order newOrder = _orderRepository.GetSingle(x => x.OrderID == orderId); // still returns the deleted order!

The above example is a bit contrived, there's a fair bit more that goes on but this code highlights the issue.

Now that I know what is going on, this is relatively straightforward, but it is a bit counterintuitive when starting out.

The problem was in my repository, where I was using the DbContext's DbSet property for each entity directly, instead of the DbSet.Local property. The difference between the two is that the root DbSet property contains all elements in their modified state (e.g. it still contains the deleted order, with an updated state of Deleted), while the Local property (an ObservableCollection view of the entities the context is tracking) holds the entities in their 'current state', so if you delete an entity from the context it is removed from the Local collection.

I say this is counterintuitive because the only way to identify whether an item is deleted is through the context's Entry() method; you cannot base a query on the DbSet itself to exclude deleted items.
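A short illustration of the point (EF Code First; the context and entity names follow the earlier example):

```csharp
// Mark the order deleted; SaveChanges has not been called yet
context.Orders.Remove(order);

// The root DbSet still tracks the entity - the only way to see the
// pending delete is through the Entry() method
EntityState state = context.Entry(order).State; // EntityState.Deleted

// The Local view, by contrast, no longer contains the entity
bool stillTracked = context.Orders.Local.Contains(order); // false
```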

The solution is however fairly simple. Since I am using a Unit of Work pattern on the context, and my service methods are a single unit of work, I can use the Local DbSet for my repository actions without any issues down the line with disconnected or orphan entities, and I can do this without any modifications to my service.

So where all my repository queries used to use the code below as their base
IQueryable<T> query = _set.AsQueryable(); // _set is the appropriate DbSet<T> for the entity T in the context
I now simply base all my queries off
IQueryable<T> query = _set.Local.AsQueryable();

Now deleted items should not show up in my list of queries. I hope - I haven't had a chance to actually test it just yet.



*edit*

Well, that was short lived - it seems that querying the Local collection only works on previously loaded data: Local only contains entities the context is already tracking, so you must do a load first, and only then will a delete followed by a query against Local correctly exclude the deleted item.



This is incredibly frustrating, as it means I need to know under what scenario I am 'loading' data in order to choose the right source to load from, and it means I need to front-load all the entities I will be working with, then use the Local collection from that point on.



I am seriously thinking of switching to nHibernate over this one.


*edit 2*

I have identified a possible solution, but I am concerned by the performance implications.

When performing a query on my context, I can use Linq to query the state of each entity in the resulting query and further filter the results.



query.Where(whereClause)
     .ToList()
     .Where(x => ((DbContext)_context).Entry(x).State != System.Data.EntityState.Deleted)
     .ToList();


The two performance issues with this are

a) I need to ToList() the query and then apply the state filter (otherwise EF will attempt to translate the filter into the SQL query, which it can't do). This is not ideal, but not critical, and I may be able to force the first Where() to resolve the EF part in another way to avoid the extra list creation.
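One way to avoid the intermediate list (an untested sketch) is to switch to LINQ-to-Objects with AsEnumerable() after the EF-translatable part of the query:

```csharp
// Where(whereClause) still runs as SQL; AsEnumerable() then streams the
// results so the state filter runs in memory without building a full list first
List<T> results = query.Where(whereClause)
    .AsEnumerable()
    .Where(x => ((DbContext)_context).Entry(x).State != System.Data.EntityState.Deleted)
    .ToList();
```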


and b) queries that return a large number of results will be impacted (potentially severely) since each entity will be inspected individually against the context change manager.


So perhaps an nHibernate implementation could wait if this does what I want it to without critical performance implications.

Monday, November 14, 2011

Javascript standards

HTML5 and CSS3 are well on their way to becoming standardised, and you can be fairly sure that when using these technologies your knowledge can be reused time and time again. Why then are there so many different bloody JavaScript libraries, all with their own syntax and base functionality? Every time I start looking to re-learn JavaScript for web app development I feel like I am starting all over again.

Granted, jQuery seems to be leading the pack, but even libraries built on jQuery decide to do their own thing with data sources and other core features more often than not.

Of course other languages are not immune to this, with a plethora of frameworks and tools in .NET alone, but at least the basic syntax is the same, and with .NET you get the benefit of an excellent dev environment to help manage the differences. Each tool and library can usually work regardless of the other tools and libraries you are using as well, whereas in JavaScript if you find a nice calendar control for jQuery and you are using YUI you are SOL.

OK, rant over. I should go check out some JavaScript libraries to see if I like them enough to learn, since I am lagging a bit in my skills. The Twitterverse is all over #kendoui at the moment, and the demos of #knockoutJS I ran through a month or so ago were nice, so maybe I should start there.

Sunday, November 13, 2011

Free 2 Play

Not long ago I mentioned I was a bit of an MMO lover, but in recent times I haven't been playing much.  This means I have never really been around for the whole 'free to play' MMO games in recent times.  While the business model seems to work, I don't particularly like the gameplay traps that most of the games seem to fall into.

A lot of the F2P games follow a very 'eastern' progression model (i.e. grinding), which I personally detest, but F2P extends on this by either emphasising the grind unless you pay to access areas that give better rewards or by providing the rewards themselves at a cost.  Even western games that are now F2P have a similar trap, with D&D online and Champions Online allowing you to pay for the better quests.

To be honest I think I prefer the 'unlimited trial' model of WoW and W:AR, where you can pretty much do everything any other character can up until a certain level, though not everyone will agree.

League of Legends is a F2P game that I am really enjoying though, perhaps because it is not an MMO and has no real grind.  You can pay for new characters, skins, and minor abilities, or you can use earned points to purchase these items.  A real bonus to LoL is the continual cycling of the character roster, so every now and then you will get a new list of characters to choose from (plus any you purchase), which means you are not stuck with crappy stock characters even if you play rarely or don't want to pay.  Since most of the 'power up' abilities can only be bought with the earned points, and not cash, you never really feel cheated by not paying either.

But all in all, I have yet to find a F2P MMO that even comes close to interesting me the way subscription ones do, and it seems the majority of the subscription games that have turned F2P are the grind-y ones that I was never really interested in anyway.

Perhaps I am just getting old and MMO-hating rather than F2P-hating; we'll see what ToR does for me :)

Saturday, November 12, 2011

My god! It's full of ads.

So out of curiosity I decided to enable ads on my blog.  I've always wondered how much money ads can actually make for someone.  Obviously with my limited audience I don't expect to make any money, but I thought it would be interesting to see what the payout is like and extrapolate from there based on my page views.

So I'll disable it again in a week or two, and sorry for the inconvenience in the meantime.

Friday, November 11, 2011

The difference between bad code and good code

A colleague asked for some advice today on a project that he inherited (which I am extending with a separate module incidentally).

The issue was related to the usage of Entity Framework in the code that he had to maintain, and he needed some advice on how to proceed.  The problem was that the service layer was calling the repository multiple times, but each repository method was wrapped in a separate unit of work.
e.g.

public void DeleteEntity(int entityID)
{
    using (var context = new EntityContext())
    {
        var entity = context.Entities.SingleOrDefault(p => p.EntityID == entityID);
        context.Entities.DeleteObject(entity);
        context.SaveChanges();
    }
}
and
public Entity GetEntity(int entityID)
{
    using (var context = new EntityContext())
    {
        return context.Entities.FirstOrDefault(p => p.EntityID == entityID);
    }
}
This caused two problems for the developer, who needed to perform a complex action in his service that referenced multiple repository calls.
  1. He had no control over the transactional scope for the repository methods
  2. Each operation was on a separate EF context, so the service could not load an entity, edit it, and then save the changes (unless the repository was designed for disconnected entities, which it wasn't).
From a maintainability and testability point of view this was also a very poor design, as the repository methods created instances of their dependencies (the service methods also created instances of the repositories, making the services inherently untestable).


The version of this design that I implemented for my component follows a similar service/repository/entity pattern, but is implemented in a far more testable and robust manner.

The first improvement over the legacy design is in the dependency management
My service accepts a context and all required repositories in the constructor, and my repositories accept a context. This allows for improved maintainability (all dependencies are declared) and testability (all dependencies can be mocked), and also lets us use dependency injection/IoC to create our object instances.

The second improvement was in the Unit of Work design.
Rather than having each repository method be a single unit of work, the service methods are the units of work: any action within a service call uses the same context (passed as a dependency to the repositories the service uses), and each service call acts as a Unit of Work, calling SaveChanges at the end to ensure the changes are applied in a single transaction.
There are limitations to this design (each public service method becomes an atomic transaction, and you should not call one public service method from within another), but for simplicity and maintainability it is a pretty good solution.

Below is a simple example of the design I am using, preserving maintainability, testability, and predictability.  I'm not saying it is necessarily the best code around, but it solves a number of issues that I often see in other developers' code.

public class HydrantService
{
  private readonly HydrantsSqlServer _context;
  private readonly EFRepository<Hydrant> _hydrantRepository;
  private readonly EFRepository<WorkOrder> _workOrderRepository;
  private readonly EFRepository<HydrantStatus> _hydrantStatusRepository;

  public HydrantService(HydrantsSqlServer context, EFRepository<Hydrant> hydrantRepository, EFRepository<WorkOrder> workOrderRepository, EFRepository<HydrantStatus> hydrantStatusRepository)
  {
    _context = context;
    _hydrantRepository = hydrantRepository;
    _workOrderRepository = workOrderRepository;
    _hydrantStatusRepository = hydrantStatusRepository;
  }

  public void CreateFaultRecord(WorkOrder order)
  {
    HydrantStatus status = _hydrantStatusRepository.GetSingle<HydrantStatus>(x => x.StatusCode == "Fault");
    order.Hydrant.HydrantStatus = status;
    _workOrderRepository.Add(order);
    _context.SaveChanges(); // single commit for the whole unit of work
  }
}

public class EFRepository<T> where T : class
{
  private readonly IDbContext _context;

  public EFRepository(IDbContext context)
  {
    _context = context;
  }

  public virtual ICollection<T> GetAll()
  {
    IQueryable<T> query = _context.Set<T>();
    return query.ToList();
  }
}

Thursday, November 10, 2011

Social Communities

This is a bit of an introspective post about my own interaction with social, online and gaming communities.
I have always been fairly anti-social, and aside from a small group of close-knit friends I have never felt comfortable in social situations.
Since getting married and now the birth of my gorgeous baby girl I have become even more reclusive, and I think I need to kick myself into gear and do something about it.
Since I do have a bit of a problem with social interaction however, just getting out and meeting new people isn't really my thing, so I am thinking of expanding my online presence somewhat, which is just a little bit easier.

At a professional level I have started doing this a bit, with an increase in my Twitter and LinkedIn presence, and a marked increase in blogging. I would like to become more involved in PerthDotNet but as my wife works part time retail, Thursdays are out of the question for any sort of meet up.

The other area that I am thinking of using to increase my social interaction is gaming communities. I have always been an MMO whore, from UO, EQ and DAOC in the early days, to SW:G, WoW, EQ2, DAOC, W:AR, and Eve Online more recently (yep, DAOC is there twice, I've been back to that game more than any other). Ironically though, despite being "MMO" games I have only ever had very limited interaction with the gaming community, with the majority of my time spent solo or in the company of my RL friends. On the opposite end of the scale, a close friend who has always suffered from social anxiety far worse than I ever did was really dedicated to the community in the MMOs we played.

The Eve community on Ars Technica was really the first time I had ever really tried to be part of a gaming community on my own. Unfortunately playing Eve with limited time commitments is an effort in futility, especially in a large 0.0 guild in a low population timezone. Whether I try and become more involved in Eve or pick up a new game such as SW:TOR, I really need to try and become a functional member of a community in the game otherwise I will end up continuing to be a hermit and end up back to where I am now.

Hopefully being part of both a professional and gaming community will help improve my communication and organisation skills, but mostly will get me back into interacting with people and becoming less of a hermit.

p.s. personal blogging is much harder than technical blogging...

Friday, November 4, 2011

Moq - Multiple Calls

I previously had the assumption that Moq allowed for Ordered setups, which was apparently mistaken. This must have been in TypeMock or another tool I looked at in the past.

So, I wanted to do three 'boundary value' calls to my service and return a different value from my repository for each call. I could do this as three Setups with fixed parameter values, or three separate setup/execute phases, but I wanted a better way that fits the standard setup/execute/verify testing pattern.

Thanks to this blog I have a nice solution.


_hydrantDbSetMoq.Setup(x => x.GetSingle<Hydrant>(It.IsAny<Expression<Func<Hydrant, bool>>>(), It.IsAny<IEnumerable<string>>(), It.IsAny<bool>()))
    .Returns(new Queue<Hydrant>(new[]
    {
        new Hydrant() { HydrantID = 100 },
        new Hydrant() { HydrantID = 0 },
        new Hydrant() { HydrantID = 0 }
    }).Dequeue);

Hydrant actual1 = service.GetHydrant(100);
Hydrant actual2 = service.GetHydrant(-1);
Hydrant actual3 = service.GetHydrant(int.MaxValue);



And each successive call to GetHydrant will return the next value in the queue.

Thursday, November 3, 2011

Mocked Repository and Generic Constraints

So, a productive couple of days - three issues resolved.

Reinstated a Repository - Moq cannot mock EF IDbSet so I decided that bringing the repository back would be a good idea. Example unit test below:

[TestMethod]
public void ListInventoryTest()
{
    Mock<IDbContext> context = new Mock<IDbContext>();
    Mock<AssetRepository> assetRepository = new Mock<AssetRepository>(context.Object);
    Mock<StockpileRepository> stockpileRepository = new Mock<StockpileRepository>(context.Object);

    List<Asset> assets = new List<Asset>();
    assets.Add(new Asset() { AssetId = 1 });
    assets.Add(new Asset() { AssetId = 2 });

    Stockpile stockpile = new Stockpile() { StockPileId = 1, Assets = assets };

    stockpileRepository.Setup(x => x.GetSingle(It.IsAny<Expression<Func<Stockpile, bool>>>(), It.IsAny<IEnumerable<string>>(), It.IsAny<bool>())).Returns(stockpile);

    var inventoryService = new InventoryService(stockpileRepository.Object, assetRepository.Object);
    List<Asset> rv = inventoryService.ListInventory("Hailes", "Jita01");

    Assert.IsNotNull(rv);
    Assert.IsTrue(rv.Count == 2);
}
Then, now that I had a repository, I could add a Null Object pattern to it, which was a bit trickier than I expected. In order to create a new T in the generic method, I needed to include a generic constraint to ensure T has a parameterless constructor.

public virtual T GetSingle<T>(Expression<Func<T, bool>> whereClause, IEnumerable<string> customIncludes = null, bool overrideDefaultIncludes = false) where T : new()
{
    IQueryable<T> query = (IQueryable<T>)this.ApplyIncludesToSet(customIncludes, overrideDefaultIncludes);
    T val = query.SingleOrDefault(whereClause);
    if (val == null)
    {
        val = new T(); // null object: callers never receive null
    }
    return val;
}
The final issue I resolved was picking an appropriate lifetime manager for the EF DbContext when running in an 'application' context. Using the PerResolveLifetimeManager ensures that when resolving a class, any common dependency in the entire resolve path is shared. This means a service with two repository dependencies, which both depend on a dbContext, will use the same dbContext instance when the service is resolved - yay. This does exactly what I want: each operation uses a new service instance, which uses a single dbContext across all repository actions within that method.
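PerResolveLifetimeManager is Unity's, so assuming Unity as the container, the registration might look like this (type names follow the earlier examples):

```csharp
var container = new UnityContainer();

// One DbContext per top-level Resolve, shared by everything on that resolve path
container.RegisterType<IDbContext, HydrantsSqlServer>(new PerResolveLifetimeManager());

// The service and both repositories declare their dependencies in their
// constructors, so resolving the service wires them all to the same context
HydrantService service = container.Resolve<HydrantService>();
```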

So yeah, productive (and thanks to @wolfbyte for the generic constraint tip).

Next job is to flesh out my unit tests and continue on the functionality, as this covered the majority of my architecture issues.

Tuesday, November 1, 2011

Unit Testing, Moq, EF, and Repositories

 
Well, I have just started a small (8-12 week / 1 resource) project using an unfinished version of our in-house framework for some parts of it. In the process I want to ensure that I integrate some key design patterns (null object, repository, and unit of work) and full unit testing on the service implementation. This will hopefully help alleviate the pain of working with DotNetNuke, cross-application dependencies, and webforms.

So my first step was to properly expose services from the dependent application, as this is a major point of failure in other systems that use it; this was pretty straightforward as the application design is not too bad. As this is a shared dependency on the DotNetNuke instance, I did not need to expose it as a WCF service, but could easily change that in the future if necessary. The new service interface will help prevent changes in the core application from breaking the dependent application: any changes will show up as build failures in the service class, highlighting the issue to the developers and ensuring they either avoid breaking the interface, or let all consumers of the service know there is a breaking update so they can plan appropriate changes. This is a key issue encountered when services and application references are not well defined, and has caused a number of deployment issues at my current client.

The guts of this post however is to discuss my plan for unit testing, and how I had to rethink my previous statement of going ‘repository-less’. I previously discussed the removal of the repository from the framework and using the DbSet functionality in the EF context as the repository pattern. This worked really well, until I decided to do some unit tests.
I decided to use a mocking library in my unit tests specifically to ensure I was performing appropriately isolated tests, and to reduce the impact of managing test data. I had previously looked at Moles (Microsoft stubbing tool), but it always seemed so cumbersome and confusing, so I picked up Moq instead. I really like the Moq usage pattern, and so I thought it would be a good fit.

So, the plan was to use Moq to create mocks of the repository functions that act in predictable and repeatable ways, which means we can run the service and test that the service behaves as we expect.

An example is given below – in this example I created a service to get a list of ‘stations’ from the dependent application. Since I am testing my service, I want to Mock the dependent application service to act predictably, so I can ensure that my service acts the way I want it to (we are not performing end-to-end integration testing, so we don’t want to rely on the dependent application succeeding or failing at this point)


// when we call 'GetStations' with a parameter of 0, our mocked service throws an exception -
// I know the dependent service reacts this way, so I can ensure this is covered in my test
_samsServiceMoq.Setup(x => x.GetStations(0)).Throws(new Exception());
// when we call 'GetStations' with a parameter of -1, our mocked service returns no results
_samsServiceMoq.Setup(x => x.GetStations(-1)).Returns(new List<Unit>());
// when we call 'GetStations' with a parameter of 1, our mocked service returns a list with one item in it
_samsServiceMoq.Setup(x => x.GetStations(1)).Returns(new List<Unit>() { new Unit() { UnitID = "100" } });

// create an instance of my service, and pass in the mocked dependent service
UserService target = new UserService(_samsServiceMoq.Object);

// execute the service method with each boundary parameter
List<Unit> actual1 = target.GetStations(-1);
List<Unit> actual2 = target.GetStations(1);
List<Unit> actual3 = target.GetStations(0);

// check that the mocked service methods were called during the test -
// this ensures the service calls the expected mocked methods with the expected parameters
_samsServiceMoq.VerifyAll();

// check the results from the service match what we expect (based on the mocked responses)
Assert.IsNotNull(actual1);
Assert.IsNotNull(actual2);
Assert.IsNotNull(actual3);
Assert.AreEqual(0, actual1.Count);
Assert.AreEqual(1, actual2.Count);
Assert.AreEqual("100", actual2[0].UnitID);
Assert.AreEqual(0, actual3.Count);
The above example shows how you can configure a test without worrying about the dependent services, so you can test only the functionality in your service. You will also note that the service itself needs to be designed so that all dependencies are passed to the service, instead of created in the service (this is a key point in ensuring testability of components, all dependencies must be passed to the object). If we did not do this, we could never mock the dependent service, which means we would need to set up the test to ensure the dependent service responds appropriately (configure the dependency, and know/configure sample data that the dependency will respond to).
This works really well; I can test my (admittedly very simple) service without caring about configuring the dependent service. However, doing the same thing against the EF DbSet instead of the dependent service does not work so well. The code below should work, but doesn't, due to limitations in EF/C#/Moq.


_hydrantContextMoq.Setup(x => x.Hydrants).Returns(_hydrantDbSetMoq.Object);
_hydrantDbSetMoq.Setup(x => x.ToList()).Returns(new List<Hydrant>() { new Hydrant() });
HydrantService service = new HydrantService(_hydrantContextMoq.Object);
List<Hydrant> actual = service.GetHydrantList();
_hydrantContextMoq.VerifyAll();
_hydrantDbSetMoq.VerifyAll();
Assert.IsTrue(actual.Count == 1);

Here I am mocking my DbContext to return a mocked IDbSet, and mocking IDbSet.ToList() to return a list of Hydrants with one item. This way I can test that calling GetHydrantList on my service returns the single-item list. Unfortunately, ToList() is not a mockable method (it is actually an extension method), which means it is not possible to set up a mock for it. Since my service uses this method, I cannot test my service in isolation from the database.


This is where the Repository comes in. Instead of using IDbSet.ToList() directly, I use a Repository GetAll() method which abstracts the call to the underlying DbSet. As the repository is just another dependency of the service, we can mock it instead of the EF IDbSet, and hence have an appropriately testable service. We also gain the ability to support the null object pattern in the repository: a call to the IDbSet that may return null (such as a Find() with an invalid key) can return an appropriate null object, so the service, and all its clients, know they will never receive a null as the result of a service operation.
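A sketch of such a repository (assuming an IDbContext abstraction that exposes Set<T>(); names are illustrative):

```csharp
public class EFRepository<T> where T : class, new()
{
    private readonly IDbContext _context;

    public EFRepository(IDbContext context)
    {
        _context = context;
    }

    // Wraps the unmockable extension-method calls behind virtual, mockable members
    public virtual ICollection<T> GetAll()
    {
        return _context.Set<T>().ToList();
    }

    // Null object pattern: callers never receive null
    public virtual T Find(params object[] keys)
    {
        return _context.Set<T>().Find(keys) ?? new T();
    }
}
```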

So, big backtrack on the framework repository, and big kudos to Moq for making testing easier (at least for my simple examples so far).