Thursday, June 21, 2012

Team Learning

I have been fortunate enough at work to have participated in a couple of trials for online learning/training proposals for our teams, and I thought I would post some opinions.

The two trials were Pluralsight, and Safari Books Online, both of which which focus on self-learning rather than formal training, which is my personal preference.  I would rather determine what I want to focus on than be led down a particular path.

However both of these options have provided a very different experience, and I am not sure which one I prefer.

Pluralsight
Pros:
  • Good quality videos
  • Good range of topics
  • Concise and straight forward guides
  • Offline support
  • Mobile device support

Cons:
  • Does not always go to the 'next level' of complexity
  • Videos arguably require more 'attention' than books.

Safari Books Online
Pros:
  • Massive Library
  • Ok mobile device support (no offline access)
Cons:
  • A lot of crap to wade through to find good resources
  • Books can generally be overly verbose / take more time to digest

Usage Patterns
As an avid reader the poor kindle support of the safari books online is a bit of a letdown, but it does work relatively on modern mobile browsers for reading on the go.  The lack of an offline mode is not too much of an issue as data usage is quite low, though it does mean you need a tethered connection if you are not reading on your phone. 
Pluralsight has very good mobile support (with most major phone platforms supported), as well as offline access for both web and mobile devices.  My key concern is that 'video' requires more investment in attention than reading, as you require both audio and video (arguably you can just listen, making it less of an investment than reading, but i feel you lose too much by doing this).  This makes it more difficult to, for example, sit in the living room while your wife is watching terrible TV.

Content
The sheer breadth and depth of knowledge available on safari books online is outstanding, IF you have the patience to find the right resource, and the patience to actually read a book on a particular topic.  Quite often technical books build on the knowledge of previous chapters to present more complex topics, which can make it difficult to try an pick up the complex topics without having read the rest of the book. 
Pluralsight is definitely a more accessible learning tool to come up to speed with new concepts and tools, but the depth of knowledge cannot compare with that of the safari books online.  Where it excells is providing a very concise information with clear samples that can easily be picked up without having to wade through verbose text or code to understand what is being portrayed.  One area that Pluralsight probably falls down is as a reference resource, as it can be difficult to use to look up a particular detail / syntax, while there are literally hundreds of books you could bookmark and index for this very purpose.
 
Conclusion

For a small dedicated team of expert developers, Safari Books Online is an excellent resource.
For a larger team with a wide range of expertise and commitment levels, Pluralsight is an excellent way to introduce new technologies without a heavy investment of time for the developers.

Personally I like having access to the entire Safari Books Online library, however the content available on Pluralsight is far more accessible and immediately useful than trying to find and wade through entire books on the same topics.

Tuesday, May 8, 2012

TFS - Creating a Branch from Local Source

Well that was frustrating.

The Problem:
Work has been performed on the source Trunk that now needs to be in a separate branch, and not checked into the Trunk.

The Solution:
Create a branch selecting 'current workspace' as the source version
*Ba Bow* - nope, this doesn't do what you think it does, your shiny new branch does not have your local workspace changes.

Create a branch and merge the changes across
*Ba Bow* merges only work on checked in code.

Create a branch, shelve your changes, unshelve to the branch
*ding ding* we are on a winner, only it is a PITA to get it working.

Firstly you need the TFS power tools - here: http://visualstudiogallery.msdn.microsoft.com/c255a1e4-04ba-4f68-8f4e-cd473d6b971f
Next you need to, for some reason, ensure you have no other pending changes in your workspace.  I have no idea why, but even if you have pending changes competely unrelated to the shelve files, you will get warnings and may get errors.  Strange but true.
Next you need to call the "tfpt unshelve" power tool command. But you need to call this from a folder in the workspace you are working with as there is no way to set the workspace/server in the tfpt command.

tfpt unshelve MyShelfsetName /migrate "/source:$/MyPath/My Path with spaces/MyBranch" "/target:$/MyPath/My Path with spaces/MyOtherBranch"
Finally, when you run the command you will need to merge all the shelved files into the new branch, one by one.  You can perform an auto-merge, but this will actually perform a merge on the destination/shelf, when in reality you very likely want to take the shelf version rather than do a merge, which requires you to go through file by file.

The link below is a good source of information on this process.
http://codereferences.blogspot.com.au/2012/02/migrating-shelveset-from-one-branch-to.html

Tuesday, May 1, 2012

Solving Race Conditions



Example

We have 'Tags' that can be applied to a 'Post'. When creating a post, we want to only create new tag entities when the tag does not already exists. A race condition can exist when two posts with the same new tag are created at the same time as the check if the tag exists can be false for both posts.

Solutions

1) Always create a new tag - we don't really care that we have duplicate tags if we always perform operations on the tagName, not the actual instance of a tag - this in ddd would be a 'Value Object'.

Side effects are; a potential drop in performance, and an increased database size

2) Use Database constraints to mark the field as unique, so the second post will attempt to create the tag and fail.

Side effects are; it is difficult to impose unique constraints in ORM, not all underlying providers (object store for example) will support this, and requires 'retry' logic.

3) Use a Transaction to check the existance of the entry before the insert.

Side effects are; a drop in performance from transactional locking and increased solution complexity.

4) Use a messaging service model for processing each post creation as a separate operation so the race conditions wont exist.

Side effects are; an increase in application complexity due to the asynchronous nature of the queue, and reduced performance due to the overhead (albeit with an increased scalability)

Conclusions

Option 1 in this scenario is entirely valid but this is not always the case, often guaranteed uniqueness is important, so using this as a general solution to race conditions is not appropriate.

Options 2 and 3 are very reiliant on the functionality of the underlying data provider.  In most LOB solutions is probably OK, but it is not as flexible and scalable as I would like.

Option 4 seems like a fairly drastic change in application design, but in reality this should not be as large an impact as you would think.

Friday, April 20, 2012

Quick Tip: Enum Descriptions / Labels

If you ever had an enumerable list but wanted to include 'friendly names' with the list, this is how you can do it using Extension methods.

Create an extension Method
public static string Description(this Enum enumValue)

{
var enumType = enumValue.GetType();
var field = enumType.GetField(enumValue.ToString());
var attributes = field.GetCustomAttributes(typeof(DescriptionAttribute), false);
return attributes.Length == 0 ? enumValue.ToString() : ((DescriptionAttribute)attributes[0]).Description;
}
 
Define an Enum
public enum AnEnum
{
[Description("First")]Val1,
[Description("Second")]Val2
}
 
Use the enum
var displayName = AnEnum.Val1.Description();

Quick Tip: C# Strongly Typed Property Name

A quick and easy way to use a type safe (compiler-aware) lambda expresssion

public string GetPropertyName<TSource, TResult>(Expression<Func<TSource, TResult>> propertyExpression)

{

var memberExpression = propertyExpression.Body as MemberExpression;

return memberExpression != null ? memberExpression.Member.Name : null;

}



Then call

GetPropertyName<LocalGovernment, string>(x => x.LocalGovernmentName)





Originally found here:
http://stackoverflow.com/questions/1417383/how-to-get-properties-names-from-object-parameter

Thursday, April 19, 2012

Developer Collaboration / Knowledge Sharing

As a consultant for a large company you are often segregated from your peers, often by physical location, for what can be extended periods of time.

For me this poses a very important problem, and that is the sharing of knowledge within the company. When a developer goes out to a client, they can build a vast range of knowledge that is never then shared back to the broader development team. Sometimes even knowledge within a team at a client site is not shared effectively, leading to a lack of growth within the team, and lack of understanding and standardisation within the delivered products.

Formal training can be used to bring everyone to a certain level of knowledge, but training rarely imparts the knowledge of best practice or the results of experience.
Developer-led workshops are often better at imparting specialised knowledge but they are dependent on attendence, prerequisite knowledge and can suffer from a lack of follow-up material. It can also require significant time investment on the team, where physical segregation and time management has significant issues.
Wiki's and blogs can provide as much information as developer-led workshops, but suffer from lack of accessibility and relevance, and the lack of immediacy of queries and responses, so while it is a useful way to store information, it is not always useful for sharing information.

For me, the above scenarios seem to be the most commonly used way to share information, and do help to at least some degree. However it is also lacking in so many ways to really bridge the gap of knowledge within a large group of developers.

I think the popularity of social media shows how easy it can be to share information between peers if it is easy to share, and easy to search. Stack Overflow is an outstanding example, where the community is answering the questions of other developers and in the process sharing their own knowledge and experience. I would like to see something that is almost the opposite of this as well, where developers can share snippets of code that they have used to solve a particular issue, or that can be extremely useful in a variety of situations. CodeProject is a bit like this, but the 'gamification' and 'accessiblity' of stack overflow makes it a significantly more relevant tool.

So for those of you working in the industry, how do you ensure everyone has access to the knowledge and experience of your most effective developers.

Monday, April 16, 2012

Incomplete HTML5 is Incomplete

Or why HTML5 is not close to being ready for LOB.

I have always given JavaScript a bit of a bad rap, and to be honest a lot of it comes down to not being as proficient with it as I am at my meat and potatoes c# development. That and the lack of toolset support to rival C# development in Visual Studio.

However I have been using it more recently, both in my at work projects and personal projects. This is largely due to my pushing the adoption of ASP.NET MVC at work, which is really helping leverage Javascript rather than the kludges and hacks of the old WebForms way.

While we are currently still targetting HTML4 for our LOB systems, I have been working on various HTML5 personal projects to get a feel for how we can leverage the new features of HTML5 going forward. And it is not as pretty as I had been led to believe.

By "HTML5" I mean the buzzword that it entails, so in reality HTML5, CSS3, JavaScript etc, it is just easier to go with HTML5. And in fact the Actual issue I want to raise here is with the HTML5 specification. While most of the buzz about HTML5 is about fancy CSS3 animations, gradients, or html canvas and media tags, when it comes to LOB data/forms is king.

HTML5 Forms
So HTML5 has a large number of new fieds (email, tel, number, etc) and utilities (autofocus, placeholder). These range from very useful (autofocus, placeholder, range) to simply handy (with validation being pretty standard now, fields for email, telephone, etc are fairly uninspiring).

However trying to get these working consistently across browser version is an excercise in futility for the conceivable future.

For example, I attempted to have a page with a Range element, which is unsupported in most browsers at this stage. OK, I'll use a JQuery Range control then, which worked fine. Except that I was using the same scripting (KnockoutJS) to bind to my form for a JQuery Mobile targeted page too, which requires the use of the HTML5 range element. There was no way I could provide a solution that would work without a fully separate page renderer and script, or a bunch of browser/compatibility checks. I actually needed 3 different browsers to do the testing, a Local IE, Firefox (with User Agent Switching to iOS) and my Android device (which led to further issues of having to host in IIS instead of the Visual Studio IIS Express to allow for external access).

Isn't HTML5 supposed to be the great consolidator?

HTML5 Data
Local Data has even been removed from the HTML5 specification, so while basic 'DOM Storage' is available (basic key-value store) in prettymuch all browsers, the much hyped "webdb" support is completely non-standard and generally browser specific, which in essence means you either write a library for each browser, or you don't use it.


The final word
In even just a basic usage scenario I came across a severe lack of cross-compatibility issues when attemtping to build an HTML5 application that can work across browsers and in the mobile space. Until HTML5 becomes standardised, or at least the features become supported in mainstream browsers at least to some level of consistency, HTML5 is not going to be the be-all and end-all for LOB solutions.

If you have to rewrite your UI and scripts for each browser / mobile platform, then HTML5 offers little compelling benefit over current methods where you implement rich web technologies via Silverlight using vastly superior (YMMV) tools and service the mobile space with custom/expensive native apps, or basic html interfaces.

This is a little bit exaggerated, but it is at least a small counter to the 'silverlight is dead' brigade. I am currently very happy developing well designed MVC4 applications over Silverlight at the moment however, I just think the 'HTML5 will save us all' attitude is a little premature in the LOB space when it is still so hard to get something simple done consistently in HTML5.

Tuesday, March 13, 2012

MVC4 Mobile and Desktop Web

As LoB applicatons are what I do on a daily basis, I have been looking at how we can use our existing expertise to build mobile applications.

This is the third component of my analysis (but only the second to be blogged), my android analysis can be found here.

As MVC4 has some improved web applications development features, such as Single Page Application templates, Mobile Application Templates, and the Web API, I decided to use this. I already blogged how the initial project templates were slightly confusing, so this time I decided to start with a standard template, and build from there.

Steps I took to getting up and running


  • Set up the Framework and Business Logic - this is becoming second nature

  • Set up a plain MVC Project

  • Create my page Controller and default view

  • Create my ApiController for the get/post operations

    • I recommend firefox for testing this, IE doesn't do JSON very well and firefox made debugging much easier.

    • Firefox (with the user agent spoofing plugin) also makes it easy to test the mobile enhancements.



  • Create a script for the KnockoutJS ViewModel

    • This included the ajax calls to the ApiController to load and update data. I was unpleasantly surprised to see that while firefox worked immediately, IE10 (win8) required a lot of tweaking and in the end I had to switch from $.getJSON() to $.ajax() jquery calls.

    • As I am creating a multi-page application I defined this at the View level.

    • At this point I also had to modify the 'bundle' in global.js to include KnockoutJS - the default bundle definition did not include this. As a side note, I completely removed the default bundling later on, and created the bundles explicitly. This was necessary to ensure the mobile and desktop script bundles were used correctly.



  • Update my View to use the KnockoutJS bindings



Once this was done, I had a functional website that I wanted to convert to a mobile friendly solution.


  • Install (Nuget) JQuery Mobile

  • Create a new script bundle to include JQuery Mobile js and css files

  • Create a new _Layout.cshtml called _Layout.mobile.cshtml - this should use the mobile script bundle in the script header.

    • This is a cool feature in MVC4, when a mobile agent is detected, the ".mobile" content is rendered, not the standard view



  • Update _Layout.mobile.cshtml to use the JQuery mobile data-role attributes for layout



We now have a mobile styled page and a desktop styled page using a single codebase. The next step would be to update the View to provide better mobile styling features, but I won't go into that here.

Thursday, March 8, 2012

MVC4 Single Page Basics

I recently highlighted that I had a bit of an issue with the MVC4 tutorials that are available here as I do not believe the standard application structure for the default apps is adequately documented.

I also note that there are some unnecessary differences between the base templates that can further confuse the issue, predominantly in the View definitions, that make it difficult to discern the important differences between the template types e.g. Single Page Applications / Mobile applications / Web API.

For example, the Login.cshtml Header in the mobile template is defined as
@section Header {
    @Html.ActionLink("Cancel", "Index", "Home", null, new { data_icon = "arrow-l", data_rel = "back" })
    <h1>@ViewBag.Title</h1>
}

while in the SPA application it is defined as
<hgroup class="title">
    <h1>@ViewBag.Title.</h1>
    <h2>Enter your user name and password below.</h2>
</hgroup>

Now there is a realistic need for the actions to be defined in the mobile header and not in the SPA header, BUT I think the differing use of hgroup and @section Header should have been standardised.



As for a lack of basic plumbing documentation, I think the SPA application is very lacking in this area. The tutorials are very good at showing how the data access and data binding works, but one thing that really stood out for me was the lack of information on how the login page was loaded in a popup and how the actions were resolved as Ajax actions instead of standard actions.

Looking into the code it wasn't actually that difficult to identify where things were happening, for example the line
        <script src="@System.Web.Optimization.BundleTable.Bundles.ResolveBundleUrl("~/Scripts/js")"></script>

in the _layout file loads all the javascript code in the scripts folder, which in turn loaded the AjaxLogin script file.
The code below then overrides the onclick of the login links to render the link content in a popup instead of a new page.
// List of link ids to have an ajax dialog
    var links = ['#loginLink', '#registerLink'];

    $.each(links, function (i, id) {
        $(id).click(function (e) {
            var link = $(this),
                url = link.attr('href');

            if (!dialogs[id]) {
                loadAndShowDialog(id, link, url);
            } else {
                dialogs[id].dialog('open');
            }

            // Prevent the normal behavior since we use a dialog
            e.preventDefault();
        });
    });

The loadDialog function sets an additional query parameter (content=1) which the Controller accepts and renders a PartialView instead of a View (in ContextDependentView(), and prefixes "Json" to the action property in the ViewBag. The View itself uses this 'action property' as the post action, which ensures that the correct Action is called (either JsonAction or just Action) is called on the HttpPost for the View.

Yes once you know what is going on it is easy to follow, but all this detail is very important to the way Single Page Applications work, as much if not moreso than how to use upshot and knockout, and it is just left to developer to figure it out.

Anyway, this is Beta, but I just thought that for something so central to the development experience with Mvc4 there should be more emphasis on the basics to ensure that people understand what is happening rather than just guiding them directly to loading data.

MVC4

I have avoided really learning JavaScript for web development for a little while now. Or at least I have always had other things come up that has taken precedence. Despite one fairly large ExtJS project, and a smattering of JQuery controls, I have not spent much time learning JavaScript, HTML5, or CSS3.

With the release of MVC4 beta I decided to really try and get stuck into JavaScript and at the same time work on Mobile application development for rich internet apps.

One thing that I think is really lacking in the MVC javascript demos however is an explanation of the basic components of the projects. All the demos highlight how easy it is to load and manipulate data through the web api, and bind to elements on the page, but there is very little on the application plumbing, how pages are rendered, how navigation works, how the partial views are rendered to popups, etc. All of this requires diving into the code to work it out.

So hopefully in the coming weeks I'll post a few guides on these 'basics' as I start to build up my own knowledge.

Saturday, February 11, 2012

5 Minutes of Android from a .net dev

Introduction

I have been part of a casual group of senior developers working on building a standard development platform for our teams, and have been using it for a few simple trial projects quite successfully.  The original design was to support desktop, web, and Windows Phone development, with the primary focus being the core tiers (data/business) with the presentation layers currently fairly underdeveloped. 

I have produced effective WP7 applications using this framework however, so I figured I would build an Android app to get some insight into how much effort would be involved in supporting multiple native applications.

Backend

I started with a simple design, a ratings system where users can rate, and view the average rating, for a list of items.  As Android doesn’t support SOAP very well, I decided to expose my services via webHttp (REST) endpoints, which works quite well with the ratings model that I am building.

I had a few issues setting up Json support through WCF, for a couple of reasons. 

Firstly I was using DataContract(IsReference=true) to support the data contract serialisation of my entities, which is not supported by DataContractJsonSerializer.  This was required to prevent circular references in my entity design for previous projects.  The solution was to turn this off, and ignore the circular references from my entities using IgnoreDataMember attributes.

As always, as this was within the WCF layer, the errors were woefully inadequate, and it took a while to actually identify this as the issue :/

Secondly, I downloaded and installed cUrl to test my POST/PUT calls, and getting the exact right quotes/escaping strategy for the Json parameter was a pain.  In the end I needed the entire parameter enclosed in single quotes, and each double quote for the key/value pairs needed to be escaped with a standard escape character (\) .

Android

So I now had a RESTful service that android would be able to access quite easily, so on to the android development part.

  • Getting the tools
    • This was fairly easy, Google has a very simple guide on how to get the SDK, as well as links to appropriate IDEs
    • The development can be done in a variety IDE’s, including eclipse, but for a change I decided to go with Telerik IntelliJ Idea, which has “built in” android support.
    • It was a bit of a process to download all the components for development.  Once the SDK was installed there were then a number of sub-components to download.
    • And then I needed the right Java Versions, with android only working on Java 1.6 meaning I needed to sign up for an Oracle account to download a legacy jdk.
  • Hello World
    • IntelliJ IDEA was good here, creating a new android project was straight-(ish) forward.
    • Much like java projects in general, the code, assets, and resources were scattered all over a number of folders, but at least the build files were all set up and everything built up front.
    • While IDEA has no graphical designer, it does have a preview.  I will need to work out how I can support test data in the designer to make sure things look right.
  • Testing
    • Setting up an emulator was very simple, and deploying was automatic when debugging.
    • Debugging was very slow, I would definitely switch to a real device for tracing/debugging in anger.
  • Next Steps
    • Web Access
      • Getting a single http request to my REST service was easy
      • note, do not use localhost (yeah i should have known this from my WP7 apps too)
    • Data Binding
      • Not quite there yet, but it is not as simple as I expected – there is no JSON binding support off the bat, and binding list data actually involves creation of arrays of hashmaps.  This was a big surprise.
    • Page Navigation
      • Haven’t looked at this yet
    • UI Design
      • Haven’t looked at this yet

 

Conclusion

I had a lot of reservations on how hard it would be to make a simple android app, but offloading all the logic to the service layer, using RESTful services and designing your application flow for the mobile form factor, it looks like it can be done with fairly minimal effort.

Obviously providing ‘local’ device processing and resources (inputs etc) will complicated this dramatically, but for LOB systems it is definitely feasible.  Providing a native WP7 and Android front-ends is definitely within reach of our application framework.

Thursday, February 2, 2012

Autofac and A Simple Plugin Design

I am working on a web service that needs to support custom processing components depending on the input. I had started using Autofac as the DI container for the initial service implementation, but it wasn't until I required the custom processing components that I really needed DI.

The plan was to generate the components as independent assemblies so that they could be implemented and deployed independantly of the service. This also meant I needed XML configuration to register each new component.

The implementation would be to instantiate a Service, which had an IProcessor as a dependency

public Service(Common commonService, IProcessor processor)

I then register the service as a named service, which is named after the processor that is to be created. However, the named server needed to be configured to create a named instance of the IProcessor when resolved (as the WebService resolves the Service, not the IProcessor).

builder.Register(c => new Service(c.Resolve<TrimCommon>(), c.ResolveNamed<IProcessor>("CustomProcessor"))).Named<Service>("CustomProcessor");

In order to resolve the Service I need to register my CustomProcessor, which is where the 'trickyness' kicks in. I didn't want to add each CustomProcessor as a reference to my service, as that would be difficult to manage, so I could not just register the type Fluently.
This meant I had to go back to XML (eewww, spring.net) to register my IProcessor implementations. This was still very easy
<component type="CustomProcessor, CustomProcessor" service="IProcessor, Common" name="CustomProcessor" instance-scope="per-lifetime-scope" />

So in code, when I resolve
Service service = getInstanceContext().ResolveNamed("CustomProcessor");
I get a Service with the appropriate processor.

Unfortunately I realised that when trying to resolve the CustomProcessor class this failed, as it was not loaded into the AppContext, and did not exist in the GAC. To fix this I had to handle the AppDomain.CurrentDomain.AssemblyResolve event and load the assembly manually.

The code below handles the event and loads the assembly if it exists in the "plugins" folder of the running webservice

Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
{
System.IO.DirectoryInfo folder = new System.IO.DirectoryInfo(this.Server.MapPath("/plugins"));
AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve);
System.IO.FileInfo[] files = folder.GetFiles("*.dll");
foreach (System.IO.FileInfo file in files)
{
try
{
Assembly assembly = Assembly.LoadFrom(file.FullName);
if (assembly.FullName.Substring(0, assembly.FullName.IndexOf(',')) == args.Name)
{
return assembly;
}
}
catch (Exception)
{
return null;
}
}

return null;
}

Tuesday, January 31, 2012

Autofac and WCF Lifetime Issues

I am building a fairly simple WCF service and decided to use Autofac / Autofac.Integration.Wcf for (admittedly very simple) dependency injection.

I have previously used Unity, with a custom wcf scoped lifetime manager, but I have heard good things about autofac.

Setting up the WCF instance was pretty straight forward following the wiki for a IIS hosted server, and I could immediately use the service.

The next step was to configure the service instance PerCall since I don't want session management, which was also straight forward.

Finally I wanted to register a dependency for the service, which caused some confusion, partly because I was 'doing it wrong', and partly because some of the guides I had seen were wrong.

My registration is shown below, and is based on the Autofac wiki examples
var builder = new ContainerBuilder();
builder.RegisterType<WcfService>();
builder.RegisterType<BusinessService>().InstancePerLifetimeScope();
AutofacHostFactory.Container = builder.Build();

The BusinessService is scoped as InstancePerLifetimeScope, which combined with the AutofacHostFactory definition in the service host markup should generate a new instance per WCF call.

I then called AutofacHostFactory.Container.Resolve() twice in one of my service methods witht he expectation that each call to the service method would create a single new instance of BusinessService, but the two calls in the one method would return the same instance. Unfortunately this was not the case, and a single instance was reused throughout all my service method invokations.

I should have known better and had the BusinessService as a dependency to my service constructor (which was indeed a misdemeanor) BUT, it took a considerable amount of time to find out why the manual service resolution was not respecting the expected Lifetime management. As it turns out, the AutofacHostFactory.Container was not the correct container to use, and now that I think of it, I can understand why, but there is very little description over what happens in the service configuration within Autofac.

Registering the container in the AutofacHostFactory actually creates an Autofac container on the Host Factory itself, which is why the resolution returned a common instance each time, even between calls. Obvious yes, but the code to get the actual container was far less obvious. It wasn't until I saw this 'bug request' that I understood what had happened.

The Autofac service host factory registers a new container lifetime scope when the service is created, and the only way to get it is to obtain it from the current WCF OperationContext. In Unity this was more explicit because I implemented the lifetime manager myself, but in Autofac the Integration.Wcf library makes things a little more black box, and the lack of documentation was frustrating.

Anyway, as I said, I did it wrong to begin with - adding the BusinessService as a constructor dependency resolved the dependency exactly as I originally expected, but it can definitely be confusing for a developer that they cannot call Resolve() on the same container they registered (the one on the AutofacHostFactory in this case).
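A rough sketch of the constructor-injection version that behaved as expected; BusinessService is the type from my registration, while the contract and method names are illustrative:

```csharp
// The Autofac WCF host factory resolves the service itself, so taking the
// dependency through the constructor lets the per-operation lifetime scope
// supply one BusinessService instance per call.
public class WcfService : IWcfService
{
    private readonly BusinessService _businessService;

    public WcfService(BusinessService businessService)
    {
        _businessService = businessService;
    }

    public string DoWork()
    {
        // Every use of _businessService within this call hits the same
        // instance; the next call gets a fresh one.
        return _businessService.Process();
    }
}
```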

Wednesday, January 25, 2012

Why the TFS Hate

For the last few months twitter has been abuzz with TFS hate, and I always wondered exactly why.

Sure there are some annoyances, but on the whole, as a full SDLC tool, TFS is pretty comprehensive and cohesive.

It integrates source control, task management, automated builds and testing into a single system, and handles each component pretty well.

Source Control
+ good branching, merging, labelling support
+ good IDE integration
- not as good as a DVCS (such as git)
- ties source code to the source control server, a massive pet hate of mine which has caused countless issues in the past

Task Management
+ flexible enough to support release planning, resourcing, task management, bug tracking, etc
+ integrates with build and testing components
+ integrates with the IDE
+ fairly easy to modify
- flexibility is poor relative to something like JIRA

Automated Builds
+ flexible
+ integrates with the IDE
+ simple to set up
+ integrates with Unit Testing (and Test Manager for deployment/UI testing)
- complex to modify

Testing (via test manager)
+ integrates with task management (test failures can generate bugs)
+ integrates with builds (you can define a set of automated UI tests to execute on build success)


As a single system nothing comes close to this level of functionality and flexibility, and despite its warts it does work pretty well.

Saying that, I can name off the top of my head better alternatives for each component, such as Git/Mercurial for source control, TeamCity for builds, HP or Rational Test Managers, JIRA for task management/workflow. Some of these integrate with TFS or Visual Studio to varying levels, but setting them up and day to day integration between these systems is not going to be as cohesive as a single TFS solution.

Granted, I know in the past I have struggled to correctly set up TFS, SharePoint, Reporting Services, Test Center, and Hyper-V to all work together, so perhaps configuring all these external systems is not as difficult as I think.

Git and TeamCity are definitely items I want to become familiar with if only to find out what the hype is all about, but I again reiterate that I don't really understand all the hate with TFS.

Perhaps more experience with these other systems will turn me into a TFS hating ragaholic too :)

Monday, January 23, 2012

Strongly Typed Configuration Sections

I generally don't like to use strongly typed configuration sections, but I'm working on a project that uses them very heavily and just encountered an annoying issue.

When you have a configuration collection such as the one shown below, there is no way to access the value of customAttribute in your strongly typed collection class.

<folderList customAttribute="value">
  <clear />
  <add attribute1="value" attribute2="value" />
  <add attribute1="value" attribute2="value" />
</folderList>

The link below shows an example of how this can be resolved, but it is a bit nasty.

http://www.frankwisniewski.net/2011/12/how-to-use-a-configurationelementcollection-with-custom-attributes/

One thing to note is that in your class definition you cannot use the [ConfigurationProperty] attribute on the property exposing the custom attribute; otherwise the attribute will no longer be treated as unrecognised during deserialisation, and so your code will never be hit.
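The workaround in the link boils down to overriding OnDeserializeUnrecognizedAttribute on the collection. A rough sketch, using hypothetical element and property names that match the sample above:

```csharp
using System.Configuration;

public class FolderElement : ConfigurationElement
{
    [ConfigurationProperty("attribute1", IsKey = true)]
    public string Attribute1 { get { return (string)this["attribute1"]; } }

    [ConfigurationProperty("attribute2")]
    public string Attribute2 { get { return (string)this["attribute2"]; } }
}

public class FolderListCollection : ConfigurationElementCollection
{
    // Deliberately NOT decorated with [ConfigurationProperty]; if it were,
    // the attribute would be 'recognised' and the override below would
    // never be called for it.
    public string CustomAttribute { get; private set; }

    protected override bool OnDeserializeUnrecognizedAttribute(string name, string value)
    {
        if (name == "customAttribute")
        {
            CustomAttribute = value;
            return true; // tell the runtime we handled it
        }
        return base.OnDeserializeUnrecognizedAttribute(name, value);
    }

    protected override ConfigurationElement CreateNewElement()
    {
        return new FolderElement();
    }

    protected override object GetElementKey(ConfigurationElement element)
    {
        return ((FolderElement)element).Attribute1;
    }
}
```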

Friday, January 20, 2012

Consultancy Politics

One of the major disadvantages of being a consultant is that you usually have to fit in with the incumbent way of doing things. Despite being able to bring knowledge and experience to a team, you often find yourself toeing the line despite your best efforts.

In the past I have been in positions where I was given a mandate to implement sweeping changes to a project's development processes, but was blocked from implementing all but the smallest of these changes in an effort to appease staff who just would not make any effort to change.

In other locations I have seen permanent developers with so many years invested in a certain way of doing things that they are unwilling to accept constructive criticism or alternate views. Of course their expertise in these systems makes them invaluable and inviolate, and so the status quo continues.

Of course one of the things I have been learning is how to better introduce change and keep team morale high, so I am getting better at this, but sometimes you just wish that you could put the politics aside and just make things happen.