Wednesday, September 28, 2011

ICT Maturity

When working for a large consulting company you have the opportunity to see a lot of different client sites, which is on the whole a good thing. In the last couple of years, though, I have been working with clients that are seriously lacking in ICT process maturity, and it is causing significant issues when trying to produce the best outcomes for them.

My current client site is possibly the worst example of this I have ever seen, with the issues stemming from both the top and bottom of the organisation. From the top there is no ICT governance for project analysis and no architectural analysis for ICT projects and how they fit into the enterprise. From the bottom there is no standard process for source control, development standards, standard development tools/frameworks, or project planning. Then in the middle there is a complete lack of Business Analysis and Project Management to bridge the gap between business expectations and actual delivery.

As a senior developer I have seen these bottom-up flaws to varying extents across all the clients I have worked for.
These issues stem from a lack of technical leadership driving the adoption of better practices, and they are common in small development teams with isolated project silos. This lack of "maturity" works fine as long as each developer stays within their own silo, but as the teams grow and the projects become more complex, the gaps begin to show. It is at this point (preferably before it) that a technical leader needs to step in and set the standards, guidelines, and processes for the entire development team.

In working towards a technical and enterprise architecture position I have also started to get a much better understanding of some of the failures at the top and mid levels of a client's ICT department.
Again the issues stem from a lack of maturity in the enterprise. While there is a vision for the delivery of individual solutions, there is no review of how individual solutions fit the enterprise as a whole, how solutions can be delivered across the enterprise rather than as stand-alone silos, and how solutions interact with other areas of the organisation.
This leads to the same underlying issues the developers face at the bottom level, where each project becomes an independent silo with ad-hoc dependency management and communication strategies.
As the enterprise grows, each business unit begins to see the need for shared information and the reduction of process duplication, however it is often too late at this point to implement a strategy for consolidation of existing systems as integration of the multiple silos becomes too complex and time consuming. My current client is at this point now, where they have a number of duplicate systems, data, and processes, and are beginning to see the need to consolidate this duplication, but there are no enterprise or technical guidelines in place to ensure that the projects are designed in a way that will enable the requirements of the enterprise, not just the silo.

I have noted that ICT maturity is often driven by the needs of a growing business outstripping the capabilities of the independent silos of business processes that are the hallmark of immature enterprises. This is an issue that affects these businesses from the bottom up as well as from the top down.

I have concluded that enterprise architecture is a fundamental step in the maturity of both a business and its ICT needs, and the sooner a business implements an effective enterprise architecture, the more agile and less wasteful ICT becomes in delivering business improvement. Unfortunately, business often sees the cost of enterprise and technical architecture as being too high, discounting the later gains in productivity because of the early cost of the architecture. In the long term, however, the cost of developing and maintaining the silos far outweighs the cost of a proper enterprise architecture.

At TechEd this year there was a discussion on "Disposable Architecture" and when it is best to use a siloed approach rather than a full-blown enterprise architecture. While the arguments for this were compelling, it is wise to note the presenter's point: "if a system needs to communicate with other systems, then use an appropriately architected solution". This becomes more evident the more experience I gain, because maintaining communication between multiple silos costs far more in the stability and maintainability of applications than implementing an appropriate architecture and building your solutions to meet it, even if the time to introduce and then integrate that architecture exceeds the cost of building the individual silo solution.

So, the more I know, the more I need to learn, le sigh.

Saturday, September 17, 2011

HTML5, XAML, and Windows 8

Ok, so there's another round of "Silverlight is dead" theories going around again. It may well be dead after BUILD, especially after the announcement of the plugin-free IE (in Metro mode), but in reality I still think there needs to be a place for it.

With Windows 8 we get WinRT, the Win32 replacement that treats XAML/.NET and HTML5/JS equally, *for desktop apps*. I emphasise that last point, because MS have given us pretty much nothing with respect to web apps, except the obvious conclusion that the promotion of HTML5 means MS want us to focus on that.

However, all of the tools that MS has given developers for HTML5 centre around Metro desktop applications, and there is very little reuse between the Metro HTML5 experience and normal web HTML5 development, since everything is tied so closely to the WinJS libraries. Conversely, Silverlight and WPF share many of their controls and libraries, and development for both environments is very similar. Windows developers who currently cover the full stack of web (Silverlight) and desktop (WPF) will now have to stick to desktop only (XAML), or learn HTML5 for the web and either HTML5/WinRT or XAML for the desktop.

Unless some love is given to Silverlight, devs are going to have to learn HTML5/JS for web applications, and then why not learn HTML5/JS for WinRT at the same time? So what's the point of .NET? There's some hyperbole for you, but I'm sure there are a lot of .NET devs thinking that very thing.

And the crux is that .NET development is so good because you can be so productive with the tools at your disposal, and XAML development is no exception. HTML5/JS development is far less productive and far more painful for developers, so for .NET developers moving to HTML5/JS there is going to be a regression in productivity and quality of work, which benefits no one.

Friday, September 9, 2011

AoP Transactions Redux

Yesterday I highlighted the issues I was having with "Aspectising" the Unit of Work with regards to concurrency management. After a bit of research I was able to find some resources on the issue that confirmed my suspicions.

I identified that in many cases it was the responsibility of the business logic to determine the actions undertaken in the event of a commit failure (whether it be due to concurrency or some other failure), and that using AoP to wrap the transaction removes any chance for the method itself to perform the appropriate resolution steps.

As it turns out, I am not the only one to come across this issue, as described in this paper: http://lpdwww.epfl.ch/rachid/papers/ECOOP02.pdf. The paper describes the problem I encountered in a much more scientific manner than I have the time or skill to, but it comes to very much the same conclusions I did. Yes, it is possible, but a) does it make sense, and b) the solution would be so complex that it would defeat the purpose of cross-cutting implementation, which is to reduce complexity.

The paper also discusses one of the issues I highlighted in previous posts: the solution I proposed, where a transaction is manually managed for specific business functions, would need to be configured to only allow execution when not already within an existing transaction scope, and similarly, AoP-managed transaction methods could not be called from within the manually managed transaction method. This comes up in chapter 7's discussion of the EJB transaction policies, which is essentially what I was describing.

So… my final recommendation is:


  • Remove the AoP Unit of Work implementation as step 1

  • Ensure each business logic method manages its own transaction scope

  • Ensure public methods perform the transaction/concurrency commits, and do not call other public methods

  • Ensure private methods always execute within the scope of a public method's transaction, and do not handle concurrency failures (as these will only trigger when the outer transaction commits anyway)

This is a fairly generalised solution and might need tighter definition, but adhering to these rules should improve the consistency of the application and make its behaviour predictable.
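To make these rules concrete, here is a minimal sketch of a public business method that owns its transaction scope and concurrency resolution, assuming an EF 4.1-style DbContext; the GameContext and User types are illustrative stand-ins, not real framework classes.

using System;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;

public class User { public int Id { get; set; } public int ActionPoints { get; set; } }
public class GameContext : DbContext { public DbSet<User> Users { get; set; } }

public class UserService
{
    // Public method: owns the transaction scope and the concurrency handling.
    public void AddActionPoints(int userId, int amount)
    {
        const int maxRetries = 3;
        for (int attempt = 0; attempt < maxRetries; attempt++)
        {
            using (var context = new GameContext())
            {
                var user = context.Users.Find(userId);
                ApplyActionPoints(user, amount); // private helper: no commit

                try
                {
                    context.SaveChanges(); // the single commit point
                    return;
                }
                catch (DbUpdateConcurrencyException)
                {
                    // Retry the whole operation with freshly loaded values.
                }
            }
        }
        throw new InvalidOperationException("Could not commit after retries.");
    }

    // Private method: never commits, never handles concurrency failures.
    private void ApplyActionPoints(User user, int amount)
    {
        user.ActionPoints += amount;
    }
}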

Cross-Cutting / AoP Transactions

I previously posted some notes on the Unit of Work implementation using Aspect Oriented Programming and how it relates to the application style (atomic vs persistent).

I have encountered another issue that has added what I think may be the final nail in the coffin for this design. The issue this time is handling Database Concurrency when using the Unit of Work, and the obvious issue is that the DbContext 'save' occurs outside the scope of the method that performed the changes, which means that method cannot define any custom concurrency handlers.


An example is as follows:

My core "Server" needs to update the users' Action Points (AP) every hour. This process cannot fail due to concurrency issues – it must retry (with newly calculated values) if an error occurs, or else the user will either not get their AP or will end up with too many.



  • A user has 20AP

  • The server begins the process to add 10 AP, loading the user as a 20AP user

  • The user 'spends' 3AP – total AP = 17

  • The server 'Adds' 10 AP – total is 20 + 10 = 30AP

In the above scenario, the user has magically performed an action at no cost, since the two processes occurred in parallel on the database.


If we simply throw a DB Concurrency error, the server process would have failed, so the +10AP would never occur and the user misses out on their hourly allocation, also a very bad thing.


What we ideally want to do is catch the concurrency error in the Service Method that adds the 10AP, recalculate what the new AP should be, and save the new AP count. Using the Unit of Work method, however, the Service Method cannot handle this; it would have to be handled in the consumer of the service method. For user actions this may be appropriate (we provide a warning and tell them to try again), but for non-interactive and business-critical actions we want to handle it within the service, which means we cannot use the Unit of Work attribute to handle the context saving.

At the moment I do not have a way of solving this using the Unit of Work attribute interception, but there may be a way to instantiate appropriate handler chains for specific concurrency issues.
For example you may be able to specify a handler for concurrency issues on a particular Entity (User for example) and/or a specific field in an entity (AP), which will consistently handle concurrency issues on that type. This way if you encounter a concurrency error on the AP field, you always recalculate and try again, but if you encounter a concurrency error on a less important field you can throw an exception and have the user try again.
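As a rough illustration of the per-entity handler idea, the Unit of Work wrapper could consult a registry along these lines when a save fails (all names here are hypothetical, not an existing API):

using System.Collections.Generic;
using System.Data.Entity.Infrastructure;

public delegate bool ConcurrencyHandler(DbEntityEntry entry);

public static class ConcurrencyHandlers
{
    private static readonly Dictionary<string, ConcurrencyHandler> handlers =
        new Dictionary<string, ConcurrencyHandler>();

    // Register a resolution strategy for a specific entity type, e.g. "User".
    public static void Register(string entityName, ConcurrencyHandler handler)
    {
        handlers[entityName] = handler;
    }

    // Called by the Unit of Work wrapper when SaveChanges throws.
    // Returns true if every conflict was resolved and the save should be retried.
    public static bool TryResolve(DbUpdateConcurrencyException ex)
    {
        foreach (var entry in ex.Entries)
        {
            ConcurrencyHandler handler;
            var key = entry.Entity.GetType().Name; // could be extended to per-field keys
            if (!handlers.TryGetValue(key, out handler) || !handler(entry))
                return false; // no handler registered, or the handler chose to rethrow
        }
        return true;
    }
}

The AP scenario would then register something like ConcurrencyHandlers.Register("User", entry => { entry.Reload(); return true; }); reloading the database values so the AP can be recalculated before the save is retried, while conflicts on unregistered entities still surface as errors.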


You could also potentially build up an action chain where each service method you call determines whether a concurrency error should just be thrown back to the user or handled in a custom manner (building up the action handlers as appropriate).

I will have to investigate this; the second option might be an impressively complex way to resolve the issue.

Obviously the simple option is to make every service method completely self-contained and handle everything specific to the service method, which will work, but defeats the purpose of having a framework do the manual labour.

Too clever for its own good

I have been a big proponent in the past of Unity and Dependency Injection, especially related to aspect oriented programming. Automatic transaction and logging support are two key areas where AoP has provided huge benefits to the simplicity of application development, with the caveat that this works really well for atomic systems, where a request is self-contained and all the necessary processing occurs in the one transaction.

Some limitations of this have arisen during the design of the Market game however, and this is predominantly due to the need to inject 'time' into the equation. Ultimately we are attempting to perform an action that spans a period of time and during that time we may wish to read the progress of that task. When a task is a single transaction however, it is not possible to read that progress from outside the transaction. This leads to the scenario where we need to perform a method with custom transactional scoping. i.e. do it manually.

This introduces room for error, because we need to ensure that it is not possible for a UnitOfWork method to call a 'manually managed' method, as this would corrupt the UnitOfWork scope, and the same goes for manually managed methods calling UnitOfWork methods.

We also need to ensure that for a particular service instance we can't call a second method while a "long running" method is in progress, as the DbContext scope will be affected (since the same DbContext is shared across the service instance). This is actually a more general issue: you cannot reuse a service instance while there is an ongoing action.

Yes, this can be worked around, and it is not a very difficult change to make, but it does mean there is the potential to introduce 'unexpected' behaviour, and preventing exactly that is one of the reasons AoP and dependency injection are so useful.

Finally, as for my Market application, I may rethink my concept of time. This is something I've been looking at for a little while, and it does make sense. Basically, it means changing time to follow the Facebook game model: everything the user wants to do costs X Action Points (AP), which can be gained in a number of ways (time being the primary one).

This would resolve the issue we have with "Actions taking time" as every action would be immediate (if you have enough AP), but you just can't do anything else until you have more AP.
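In sketch form, the AP model reduces every action to an immediate check-and-spend (the names here are illustrative):

public bool TryPerformAction(User user, int apCost)
{
    if (user.ActionPoints < apCost)
        return false; // not enough AP: the action is simply unavailable right now

    user.ActionPoints -= apCost;
    // ... perform the action immediately; nothing needs to span a long transaction
    return true;
}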

“Long Running Processes”, “Asynchronous Communication”, and “Everything In the Database”

This blog post comes from technical design considerations for the Market game, specifically related to interoperability (multiple ways of interacting with the core services), security (more specifically application layer responsibilities), and scalability.

Security/Design Concerns
The problem was initially raised in relation to the execution of long running processes in the Service layer, and specifically in my case the "production process", but morphed into a major design and architecture rethink.

The production process is the process by which an item is created - consuming ingredients, and taking a specified amount of time to complete.
I was working on an AI process that would continually run item production, and initially had a simple method for production:

  • Choose what to create

  • Determine how long it would take

  • Start a timer (short)

  • On timer completion, create the item, start the timer again

  • Start a timer (long)

  • On timer completion, stop production and choose something else

The issue here is that the AI process (the client) is responsible for the production process, which would be a very bad thing to leave up to a client. This is a fundamental security issue, not only for games (where we don't want cheaters) but also for general business processes (we don't want the client to run banking transactions).

The obvious fact here is that the service should be performing the production of an item; the client should request to 'produceAnItem' which performs the steps to create an item, including waiting the correct amount of 'build time'. The AI client can then worry about the 'big picture' which is specific to its own processing (choosing what to build). By doing this in our service method we are relying on either a blocking call to the service method, or implementing a callback/event to the client when the action is complete.

Asynchronous Issues
This works fine for 'connected' systems, but disconnected request/response systems such as WCF or ASP.NET will not be able to run a service method designed this way. For example, using WCF to process this request means the WCF call will either block until complete, meaning the service method could time out, or complete immediately, leaving no communication channel to inform the client when the callback/event fires.
WCF can work around this using duplex communication, but that is limited to WCF and further limited to the full .NET framework (i.e. no Silverlight/WP7 support), so it cannot be used here (it is also unreliable).

Polling and Feedback
A generally accepted solution then is to start the process in the service method and have the client check back to see if the process is complete. While this can be bandwidth inefficient if your polling frequency is too high, it is a reliable solution. This process can also solve one of the key issues with long-running processes, and that is progress reporting, as each time the client checks for completion, the server can respond with any progress information.

This then brings me to the "Everything in the Database" point. If we have a long running process, triggered by a WCF call, or on a background thread, or anything other than a global static variable, then we cannot (easily) access that process to determine progress or completion. So while our service could be sitting there creating all these items, how does the client know that their particular run is completed? In order to support this we need to actually write a "ProductionRun" item into the database for the requested production, and we can then update that from the long running process, and read it back again when the client wants to know progress. Potentially more importantly, we can recover a production run from an application/server crash as we have all the details persisted.
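One possible shape for that persisted record (the fields are illustrative):

using System;

public class ProductionRun
{
    public Guid Id { get; set; }
    public int UserId { get; set; }
    public int ItemTypeId { get; set; }
    public DateTime StartedUtc { get; set; }
    public DateTime CompletionTimeUtc { get; set; } // calculated up front from the build time
    public double PercentComplete { get; set; }     // updated by the long running process
    public bool IsComplete { get; set; }
}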

Ok, so we now have a working solution across any client type:

Client -> Request production run
Service -> Create production run record (including calculated completion time), start production timer
Client -> Check for completion/progress
Client -> Check for completion/progress
Service -> On production complete, mark the production run complete and do the item creation etc.
Client -> Check for completion/progress
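In WCF terms, the contract could look something like this sketch (the names are illustrative, not the game's actual interfaces):

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IProductionService
{
    // Writes the ProductionRun record (with its calculated completion time) and returns its id.
    [OperationContract]
    Guid StartProductionRun(int userId, int itemTypeId);

    // Reads the record back so any client type can poll for progress.
    [OperationContract]
    ProductionStatus CheckProgress(Guid productionRunId);
}

[DataContract]
public class ProductionStatus
{
    [DataMember] public bool IsComplete { get; set; }
    [DataMember] public double PercentComplete { get; set; }
    [DataMember] public DateTime EstimatedCompletionUtc { get; set; }
}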

Worker Processing / Message Bus
The above process can be modified slightly to reduce the responsibility of the "Service" and introduce a "Worker" system that performs the processing of actions. The Service method becomes a broker that writes requests to the database and returns responses to the client. A separate process running on a separate system then reads the requests from the database and acts on them. This allows for increased scalability and reliability, as we are reducing the responsibility of the client-facing system and can use multiple workers to process the requests. This is essentially a Message Bus architecture, a proven and reliable approach for highly scalable solutions, and starting from the solution described above, implementing a Message Bus would not require a major application redesign.
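A minimal sketch of the worker side, assuming the ProductionRun record above and an illustrative GameContext that exposes a ProductionRuns set:

using System;
using System.Linq;
using System.Threading;

public class ProductionWorker
{
    private volatile bool stopRequested;
    private static readonly TimeSpan PollInterval = TimeSpan.FromSeconds(5);

    public void Run()
    {
        while (!stopRequested)
        {
            using (var context = new GameContext())
            {
                // Pick up any runs whose calculated completion time has passed.
                var now = DateTime.UtcNow;
                var due = context.ProductionRuns
                    .Where(r => !r.IsComplete && r.CompletionTimeUtc <= now)
                    .ToList();

                foreach (var run in due)
                {
                    CreateItemFor(run);    // consume ingredients, create the item
                    run.IsComplete = true; // polling clients now see the completion
                }

                context.SaveChanges();
            }

            Thread.Sleep(PollInterval); // simple polling; a message bus would push instead
        }
    }

    public void Stop() { stopRequested = true; }

    private void CreateItemFor(ProductionRun run)
    {
        // The actual item creation logic lives here, entirely server-side.
    }
}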

Monday, August 15, 2011

The joys of Dependency Injection! (Ok, I really do like it, but I can see people's eyes glazing over as I write this)

I have identified a fairly serious issue with the UnitOfWork model currently used in the FMSC framework when running in non-atomic runtime systems (an example of an atomic runtime system is a WCF or HTTP server).

The current system Resolves the DbContext and increments the transaction scope at the start of a method, and decrements the scope at the end of a method, when the scope reaches 0 again the DbContext is saved. This is how the transaction support works. When working in WCF or HTTP operations, the DbContext is recreated for every web request, but the same context is used within that request. This ensures that each request is isolated from other actions, but the request itself acts as one DbContext operation.
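As a minimal sketch of that scope counting (illustrative, not the framework's actual code):

using System;
using System.Data.Entity;

public class UnitOfWorkScope
{
    [ThreadStatic] private static int depth;

    private readonly DbContext context;

    public UnitOfWorkScope(DbContext context)
    {
        this.context = context;
        depth++; // entering a UnitOfWork-wrapped method
    }

    public void Complete()
    {
        depth--; // leaving the method
        if (depth == 0)
            context.SaveChanges(); // only the outermost scope commits
    }
}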

For 'normal processes' there is no 'scope' for an action that can act as a lifetime manager for the context. We cannot use a new instance every time a DbContext is resolved, or we would have no transaction support (as we would get a new DbContext for each method that is called), but if we use a singleton then we can have unexpected outcomes – if we load a list of entities, edit one item, and then call a completely unrelated method that saves the context, the edited item will be saved, since the same context is used for all operations.

I am struggling to find a solution that can work and still maintain the simplicity of the current solution.




  1. One option is to implement a custom lifetime manager that will return a new context if the transaction scope is 0, and otherwise return the existing context.

    1. This would resolve the scenario described above, as the loading of entities would be in one scope and the data save in a different scope. It would also need to be merged with a per-thread solution so that each thread has its own lifetime manager, ensuring multiple method entries cannot share a single transaction scope.

    2. This option requires the implementation of a new LifetimeManager that inspects the current item and returns a new instance if the transaction scope is 0 (alternatively we could dispose of the item once the transaction scope returns to 0 after a save; the former uses a new scope each time a non-transactioned request is made, while the latter re-uses an existing context until a transaction is started). It is a relatively complex solution to implement, but it has the advantage of requiring no changes to the application architecture; it would be completely isolated to the LifetimeManager implementation.

  2. Another option is to create a new DbContext per instance of a Service, and somehow use that context in the UnitOfWork method handler instead of resolving a new instance.

    1. This means that each service must be self-contained, however, as crossing service boundaries will involve different contexts and transaction scopes, which could introduce errors.

    2. This provides the most flexibility, as you can have as many 'services' as you like, each completely independent of one another; you just need to manually manage the service instances if you want to share a context across operations.

    3. This option would use a standard (new instance per resolve) DbContext manager, but the UnitOfWorkHandler would inspect the calling object for its context instead of resolving one. This requires a new interface exposing the context of the service, and an update to the UnitOfWork call handler to get this instance from the object being wrapped. It would be the easiest to implement, and probably the best solution despite the requirement that services be self-contained.

  3. A third option is to create a custom lifetime manager where you create a DbContext manually, which is then reused whenever the context is resolved (per thread) and removed when you manually dispose of it.

    1. The problem here is that you cannot have multiple contexts open at the same time, as the resolver would not know which one to resolve.

    2. This option would be the most complex to implement, requiring a PerThreadLifetimeManager that can create and dispose contexts on demand, then continually resolve that same item until it is disposed. This may be possible using a standard ExternallyControlledLifetimeManager, but may or may not be thread safe.

I will be trialling Option 2 in the Market app, as I will have a service application that spawns threads to handle long-running processes (essentially a thread per AI instance, each acting as a 'player' in the game) as well as the standard WCF and MVC interfaces for client applications, and this solution seems the most appropriate for that mix.
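A rough sketch of how Option 2 might hang together using Unity's interception; the IContextOwner interface is an assumption of mine, not part of the framework:

using System.Data.Entity;
using Microsoft.Practices.Unity.InterceptionExtension;

// Each service instance owns one DbContext and exposes it to the call handler.
public interface IContextOwner
{
    DbContext Context { get; }
}

public class UnitOfWorkHandler : ICallHandler
{
    public int Order { get; set; }

    public IMethodReturn Invoke(IMethodInvocation input, GetNextHandlerDelegate getNext)
    {
        // Take the context from the wrapped object instead of resolving a new one.
        var owner = input.Target as IContextOwner;

        var result = getNext()(input, getNext);

        // A real implementation would track transaction depth per instance and only
        // save at the outermost call; this commits naively for brevity.
        if (owner != null && result.Exception == null)
            owner.Context.SaveChanges();

        return result;
    }
}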

WP7 List Performance / ObservableMVMCollection

One of the key considerations when working with list data in the MVVM pattern is that the list should be represented by ViewModels, not Domain objects, except where the data to be displayed is very basic and contains no additional functionality.
The ObservableMVMCollection is a helper class that wraps a collection of Domain objects into a collection of ViewModels to assist with binding of list data, and this model works very well for its intended goal.

One of the details highlighted by the Tech Ed application, however, was that the creation of fully implemented ViewModels is a non-trivial action when data is being loaded very quickly, and as the ViewModel creation occurs on the UI thread, this can make the UI unresponsive. In an effort to resolve this, I moved some of the logic onto a background thread to reduce the load on the UI thread, which caused some issues of its own.

What I identified was a number of dependencies in the ExtendedViewModelBase that prevent the creation of ViewModel objects on a background thread, which ultimately stops us from offloading the ViewModel creation. At the time I did not have a chance to delve too deeply into this, but I do have a few ideas about how to resolve the issue. One area that looked to cause problems was the registering of MVVM Messages, but it is possible that the command creation could also be a problem, among other things.

The first point I would like to make is that the ExtendedViewModelBase grew out of a need to provide common functionality between pages, and handles a number of common steps in the ViewModel lifecycle, as well as specific bindings and properties for common functionality (such as loading progress animations etc). However, when working with this list data, the actual functionality that these individual viewmodels need to present is severely limited. The most that would be required, in general, is to handle a few button or link events in the viewmodel, or to load some additional data not part of the domain object. All the additional functionality is really not required in these individual list item viewmodels.
In light of this, I think that we need at least two classes of Base ViewModel objects, ExtendedViewModelBase and LightViewModelBase in the framework, where the LightViewModelBase implementation is stripped back to a point where it can be generated quickly and efficiently, and more importantly instantiated fully on a background thread, where it can then be added to the appropriate ObservableMVMCollection on the dispatcher with no additional processing.
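A stripped-back base might look something like this sketch; the point is that nothing in it touches messaging, command registration, or the dispatcher, so construction is safe on a background thread:

using System.ComponentModel;

public abstract class LightViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected void RaisePropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

ViewModels built on this could then be constructed fully on a background thread and marshalled to the UI thread (e.g. via Deployment.Current.Dispatcher.BeginInvoke) only for the final add to the ObservableMVMCollection.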

I believe this would go a long way to improving performance of an MVVM application, especially one with significant amounts of List data.

WCF - SubClasses and KnownType

As part of the "Market Game" that I am working on I have been trying to expose a significant portion of the functionality through WCF services rather than just working with the "Service Layer" directly. While some functionality will only ever be exposed to internal systems (such as the AI logic), the user-facing functions could be accessed via a number of UIs including WP7 and HTML5, so exposing these functions via wcf is desirable.


As it turns out, testing via the WCF services was a really good thing, because I hit an interesting issue that took me quite a while to identify.

Example
In my example I have the following classes:

Item – any object that can be bought or sold
BaseItem : Item – an object that has no "ingredients" (an "elemental" object)
ComplexItem : Item – an object that is created from multiple other Items
Asset – an ownership record for an Item (can be either a BaseItem or ComplexItem)
StockPile – a list of Assets for a particular user at a particular location

I then have a WCF service that returns a StockPile given a user and location (including the list of assets). This call was failing and I had no idea why, especially considering it did succeed before I made a few changes to the data model.

Troubleshooting
As we all know, debugging WCF errors can be tricky, so when I first encountered the error I had to try a number of different things to get this working (the error was the trusty generic WCF "the underlying connection was closed" error).


The things I immediately thought of were the "maxItemsInObjectGraph", "maxReceivedMessageSize", and "maxBufferSize" binding settings, as these had caused problems before when working with list data and returning 'complex' data types. The data I was returning wasn't that large, but I knew the default limits could be hit fairly easily.


When that failed I tried to find details on "100 Continue" messages (this popped up in a trace, and I had seen an issue like this before). I ended up forcing this off with "System.Net.ServicePointManager.Expect100Continue = false" but this also did not resolve the issue.



Solution
Finally, I stumbled upon a post mentioning serialisation of abstract types and the need for the KnownTypeAttribute which immediately triggered an old memory of having to do this back at my old work but with classic asmx web services.


Anyway, as it turns out, the serialisation process cannot serialise/deserialise derived types that are declared using their base type – e.g. "public Item getItems(){ return new BaseItem(); }" will fail. However, if you specify that the known types for Item are BaseItem and ComplexItem, then the serialiser can correctly identify the actual Item type and serialise it appropriately.


"If you use a Base Class as the type identifier for a WCF method (parameter or return value) you must add a KnownTypes declaration to the base class DataContract"
Therefore the fix is to add the following to the Item class definition


[KnownType(typeof(BaseItem))]
[KnownType(typeof(ComplexItem))]
public class Item
WCF can then magically understand and serialise the list's object graph correctly.

Repository Pattern

I have been doing some (a very little bit) 'on-the-side' development using the FMSC framework with an end goal of producing a fairly simple multi-platform trade and production game. If anyone is familiar with Eve-Online, this game is based on the market from that. This is really just an excuse to work with the framework, occupy my brain, and get some ideas going.

Anyway, while doing this I came across some discussions on the "Repository Pattern", EF 4.1 and when it is and is not required. Based on this I am thinking that the "Repository" layer could be removed.



To start with, I think the "repository project" that we have right now is still required at a basic level, as this is where the DbContext lives, which is how we interact with EF; it is the individual Repository objects/interfaces that I think could be removed and replaced with direct calls to EF's own repository, the DbContext Set.



Why the repository

  • Separation of concerns – each repository instance is designed to separate the data access functions from the business logic functions. This is your general DAL/BL/UI separation, where the DAL in this case is the Repository.

  • Flexibility – the Repository interface should allow you to swap out the underlying ORM with minimal impact.


Why our repository implementation fails

1.a) The DbContext.Set<T>() interface is itself a repository pattern. Business operations occur on the class instances exposed by the Set operation. E.g. DbContext.Set<Item>().Add(itemInstance) will add an item to the database, exactly the same as ItemRepository.Add(itemInstance), but with a whole class layer removed.

1.b) An intent of the repository layer was to ensure that all database operations were resolved before returning to the Business Layer, which prevented the business layer from essentially creating dynamic sql statements. However, it became apparent that the repository had to be flexible enough to provide the functionality that the BL requires, which required a lot of work (such as implementing a way to specify which children to load for an entity).

By adding this flexibility we gave the BL more ability to dictate the SQL that was generated, ultimately negating the purpose of the repository in the first place. The only benefit the repository still provided was that all queries would be resolved as soon as the repository was called, rather than when the service 'resolved' the query.

Implementing this flexibility was also expensive in itself, especially when EF provides it out of the box.

2) Being able to swap out one ORM for another (e.g. EF to NHibernate) would be a particularly amazing feature and is one of the holy grails of the repository pattern. However, as highlighted in a few blogs I read: a) how often does this actually happen (I'll slap whoever mentions CBH), and b) how much effort would it really save?

Due to the additional flexibility we had to include to make the repository an asset rather than a hindrance, I believe we are coupled closely enough to the underlying framework that changing ORMs, while possible, would potentially be more effort than it is worth. The potential payoff is there (one interface for EF or NH), but if we never get there then the repository is just wasted effort.



Conclusion

For all future work on my "Market" game I will be using the EF Set<T>() "repository" and will attempt to identify any areas where the repository layer is actually more suitable. If I find anything I'll blog about it.

Edit: I do actually think there is one area where the repository can provide a useful function, and thanks to Brian for bringing this up. The Non-Null Pattern (or whatever you want to call it) is a pretty useful pattern, and one the repository helps with tremendously. E.g., calling GetSingle(query) on the repository can call Set<T>().FirstOrDefault(query) and, if the result is null, return a blank T. If you are working on the Set directly, however, you will need to check for the nulls in the BL and handle them appropriately. It may be possible to use extension methods to do this (actually, that might be a very good way of handling it, note to self), but the repository pattern does make it easy.
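As a follow-up to that note to self, the extension method version might look something like this sketch (SingleOrNew is a name I have made up):

using System;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

public static class DbSetExtensions
{
    // Returns the first entity matching the predicate, or a new blank T instead
    // of null, so the BL never has to null-check.
    public static T SingleOrNew<T>(this IDbSet<T> set, Expression<Func<T, bool>> predicate)
        where T : class, new()
    {
        return set.FirstOrDefault(predicate) ?? new T();
    }
}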

That is about the only real tangible benefit I can see of the repository however.

Need More Blogs

There has been a recent trend at work to try and get a bit more interaction and knowledge sharing between developers, and one of the ways this is being done is via internal blogs. Unfortunately it is pretty much just me writing them, so I figured I'd throw my posts open to a wider audience and start this blog up again.

So beware the upcoming influx of posts, which may not all have an appropriate context (as they may assume knowledge of internal projects), but I'll attempt to ensure that all future posts have as much context as they need.