Tuesday, July 22, 2014

The promise of functional utopia (dipping my toes in F#)

Functional programming has been around for ages, but aside from a very brief introduction in my uni days I hadn't even considered using a functional language at work. Times are changing, however, and functional programming is gaining a lot more mainstream attention. The key cited benefits are fewer bugs due to fewer side effects in code, better concurrency/scalability support due to the use of immutable types, and greater productivity due to the use of higher-order functions and reduced "lines of code".

With that in mind I set out to gain some experience in functional programming and decide whether I could see the cited benefits and make use of it.

As a .NET developer I have focused on F#. Advice from some experts suggests learning a stricter functional language like Scala (with scalaz) or Haskell, as F# allows for some 'cheating' to occur. I won't go into the details because it is beyond my level of understanding, but sticking to "pure" functional languages means you are less likely to fall into the traditional coding traps you might otherwise encounter. While I decided to stick with F#, I have tried to keep everything as strict and side-effect free as possible, using record types over classes and avoiding mutable types.


To start with, I recommend taking a look at a couple of Pluralsight classes if you have the opportunity. The basic introduction gives a good overview of the language features, and the intermediate-level class is extremely useful for understanding how functional programming can be used in the real world.


The key outcome I took from my initial look into F# is that it reminded me a bit of my early days of getting to know JavaScript: everything was a hodgepodge of dependencies and big unstructured files that were difficult to read. I am pretty sure there are better ways to structure things, so I am not overly concerned by that, but I do think the language leads you down an unstructured path, and I would want to understand how to structure things better before trying to use it on anything "real".

On the positive side, I think the power of the language itself for data manipulation became apparent quickly. The sequence and list features are extremely powerful and really form the basis of anything you would want to do with the data at hand.

For standard "CRUD" systems I wouldn't expect to see a significant gain from using functional programming, but I can certainly see a benefit for data analysis and transformation activities. It is also fairly easy to combine F# with C#, so you can use imperative code and libraries for the CRUD work (such as database querying) and then F# to work with the resulting data.

All in all, I think there is work involved in making F# code "enterprise ready" from a maintainability and supportability perspective. From examples I have seen, I believe this is likely an issue with functional languages in general, and not F# specific, but I don't have the experience to make that call. I would find it hard to justify the learning curve to a client, but I certainly think there are benefits to be had.

Friday, April 11, 2014

How not to write unit tests

In doing some major refactoring of code recently I was extremely glad we had unit test coverage in place.  Unfortunately there were some fairly problematic issues in the implementation of the unit tests that ended up costing a lot more time than it should have.

The problem was two-fold.  Firstly, there was a mix of custom hand-crafted stub classes and automatically mocked interfaces.  Secondly, there was no consistency over the level of dependencies that were stubbed.  In some cases "true" (i.e. non-stubbed) implementations of dependencies were used three or four dependencies deep, while in others only the direct dependency was mocked.

The combination of these meant it was really difficult to update the unit tests as part of the refactoring.  Each time a test failed I had to check whether it was due to a manually crafted stub that needed to be revised, a top-level dependency that needed to be updated, or a dependency deep in the dependency tree that needed to be updated.

If the unit tests had been configured appropriately, the only time I would have needed to update them would have been when a direct dependency changed.

What should have been done?

One of the more important features of Unit Tests is isolation and repeatable consistency of testing.  This ensures your unit test is appropriately testing the code you are targeting, and not the multitude of dependencies that the code may have.

If a direct dependent class has a nested dependency that is modified, but the direct dependency retains the same expected behaviour, then you don't need to change your unit test.  Without this separation, major refactoring of your code becomes far more laborious, as you need to refactor both your code and all of the unit tests that have a dependency on the altered code at any level.

For this reason, mocking or stubbing the direct dependencies of a method has become standard practice when working with unit tests.  Back in the old days this used to mean creating custom stub classes for all your dependencies, which is one reason why unit testing was so often implemented as full end-to-end tests.
These days there is no excuse.  Tools like Moq and RhinoMocks provide the ability to automatically create mocked versions of your dependencies, return appropriate objects based on the calling parameters, and confirm that the services were called with the expected parameters.  Using these tools allows you to focus on testing the "unit of code" without worrying about what dependencies of dependencies might be doing.

Consider an arbitrary method that calculates the cart total, applying a discount based on the total cost and the number of items purchased:

        public decimal CalculateCart(Item[] items, IDiscountCalculator discountCalculator)
        {
            decimal total = 0;

            foreach (var item in items)
            {
                total += item.Cost;
            }

            var discountedAmount = discountCalculator.calculate(total, items.Length);

            return total - discountedAmount;
        }

IDiscountCalculator has its own unit tests to verify its behaviour, so we don't want to be testing it as part of the cart calculation.  We also don't want CalculateCart to fail if the discount algorithm changes - as long as we trust that the discount calculator works, we don't need it to do the actual calculation and it can simply return an arbitrary value.

public void TestCalculateCart()
{
    var cart = new Cart();
    Mock<IDiscountCalculator> discountCalculator = new Mock<IDiscountCalculator>();
           
    //arrange
    var expectedDiscount = (decimal)2;
    var expectedOriginalTotal = 10 + 10;
    var expectedDiscountedTotal = 10 + 10 - (decimal)2;

    var items = new Item[] {new Item() {Cost = 10}, new Item() {Cost = 10}};
    discountCalculator
        .Setup(x =>
            x.calculate(It.IsAny<decimal>(), It.IsAny<int>())
        )
        .Returns(expectedDiscount);

    //act
    var total = cart.CalculateCart(items, discountCalculator.Object);

    //assert
    //confirm the expected total is returned
    Assert.AreEqual(expectedDiscountedTotal, total );
           
    //confirm the discount calculator was called with the appropriate input
    discountCalculator.Verify(mock =>
        mock.calculate(It.Is<decimal>(x => x.Equals(expectedOriginalTotal)), It.Is<int>(x => x.Equals(2)))
        , Times.Once
        );

}

Now, no matter what the implementation of DiscountCalculator looks like, as long as we trust it is behaving (i.e. through its own valid unit tests), we don't need to change our unit test for the cart calculation.

If our cart calculator were extended to throw an InvalidDiscountException when the discount is greater than the total, we could arrange the discountCalculator to return a discount larger than the cart total (say 12 against a 10-cost cart) and assert that the exception was thrown, without changing our 'happy path' test and with the same level of confidence.
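A minimal sketch of what that extra test might look like, assuming the hypothetical InvalidDiscountException described above and an MSTest-style Assert.ThrowsException (NUnit's Assert.Throws would be the equivalent):

public void TestCalculateCartThrowsWhenDiscountExceedsTotal()
{
    //arrange - a single 10-cost item, with the mocked discount arranged to be 12
    var cart = new Cart();
    var discountCalculator = new Mock<IDiscountCalculator>();
    var items = new Item[] { new Item() { Cost = 10 } };
    discountCalculator
        .Setup(x => x.calculate(It.IsAny<decimal>(), It.IsAny<int>()))
        .Returns(12m);

    //act + assert - the exception surfaces without the mock doing any real calculation
    Assert.ThrowsException<InvalidDiscountException>(
        () => cart.CalculateCart(items, discountCalculator.Object));
}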

Proper isolation and repeatability means refactoring is far simpler and your tests become less brittle.

Wednesday, March 26, 2014

F5 Big-IP Load Balanced WCF Services

*update* - This follow-up may provide more detail on finding a solution

We have been trying to configure our new F5 Big-IP load balancer for some WCF services and encountered a strange issue. 
The service uses wsHttpBinding with TransportWithMessageCredential over SSL.  The Auth type is Windows.
When disabling either node on the F5, the service worked.  However when both nodes were active, the service failed with an exception:
Exception:
Secure channel cannot be opened because security negotiation with the remote endpoint has failed. This may be due to absent or incorrectly specified EndpointIdentity in the EndpointAddress used to create the channel. Please verify the EndpointIdentity specified or implied by the EndpointAddress correctly identifies the remote endpoint.
Inner Exception:
The request for security token has invalid or malformed elements.
Numerous docs and blog posts highlight that *the* way to support load balancing in WCF is to turn off security context establishment by setting EstablishSecurityContext=false in the binding configuration, or to turn on 'sticky sessions'.
http://ozkary.blogspot.com.au/2010/10/wcf-secure-channel-cannot-be-opened.html
http://msdn.microsoft.com/en-us/library/ms730128.aspx
http://msdn.microsoft.com/en-us/library/vstudio/hh273122(v=vs.100).aspx
We did not want to use sticky sessions; although they did fix the issue, the F5 logs showed that load balancing was no longer distributing requests the way we wanted.
Unfortunately, we already had EstablishSecurityContext set to false, so the security negotiation should have been occurring on each request, which meant it should have been working.
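For reference, the binding described above looks roughly like this when expressed in code (a sketch only; ours is defined in the binding configuration):

// using System.ServiceModel;
var binding = new WSHttpBinding(SecurityMode.TransportWithMessageCredential);
binding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;
// negotiate security on every request rather than establishing a secure conversation session
binding.Security.Message.EstablishSecurityContext = false;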
After hours of investigating other binding settings, creating test clients, updating the WCFTestTool configurations and generally fumbling around, we eventually went back to reconfiguring the F5.  Since the service worked when only one node was active with exactly the same configuration, unless the WCF binding documentation we found *everywhere* was a complete lie, it had to be the F5.
It was finally traced to the F5 OneConnect (http://support.f5.com/kb/en-us/solutions/public/7000/200/sol7208.html) configuration.  This does some jiggery-pokery to magically pool backend connections to improve performance.  It also seems to break WCF services - at least it broke ours.
Disabling OneConnect on the F5 application profile resolved the issue immediately.
We now have our load-balanced, non-persistent WCF services working behind the shiny F5.
As I couldn't find this issue reported anywhere online, I can only assume it is related to the combination of TransportWithMessageCredential and Windows as the message credential type.
*edit* So the solution was not as straightforward as we thought, and we have currently reverted to “sticky sessions” on the F5 to get this to work.  Even with EstablishSecurityContext=false and OneConnect disabled, the same failure occurs if a single client makes two concurrent requests on two separate threads (our clients are web applications) and the F5 routes each connection to a separate service node.

While we investigate further, the short-term solution is to use Transport security instead of TransportWithMessageCredential.  As this requires a client-side change, we had to deploy multiple bindings; each client app will upgrade in turn while we keep sticky sessions enabled on the F5.  Once all clients are on the new binding we can remove the old binding and disable sticky sessions again.
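The interim binding looks roughly like this when expressed in code (again a sketch; ours is defined in configuration):

// using System.ServiceModel;
var binding = new WSHttpBinding(SecurityMode.Transport);
// SSL only, with Windows authentication handled at the transport level
binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Windows;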

Transport security works for us, but it is not perfect.  It reduces security (SSL only, no message encryption) and reduces flexibility (we can’t, for instance, switch to client certificate or username authentication for individual per-request auth).
It does however keep the services stable and gives us time to perform a thorough analysis of the problem.

Tuesday, March 25, 2014

“Don’t Query From the View” - Unit of Work and AOP

Maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems.


A colleague asked me about a warning from NHProfiler to get my opinion on the matter.  The warning was “Don’t Query From the View” (http://hibernatingrhinos.com/products/NHProf/learn/alert/QueriesFromViews).

The team had a session lifetime per controller action using AOP (attribute filters), but needed to extend this so the session remained active during the view rendering phase after the controller completed.  They did this by instantiating the session in the request filter chain instead of in the controller attribute filters.  When they did this, they received the NHProfiler warning above.

The colleague was dismissive of the warning, partly because the implementation of the code was innocuous and partly because the warning's stated reasons were not particularly compelling.

There are, however, some serious implications of the pattern being followed, some of which I have covered on this blog in the past (here) and (here).

 
tldr;
·        Sessions can be open as long as you need them to be.  You do not need to arbitrarily close them if you know they will be needed again as soon as the next step in the pipeline is reached.
·        There are (arguably) valid reasons why the View rendering would be able to access an active session.
·        The longer a session is open, the more chance of unexpected behaviour from working with Entities in a connected state.  Care should be taken.


Pro1: Simplicity
One benefit from having a session opened and closed automatically using cross-cutting AOP is that your application doesn’t need to care about it.  It knows that it will have a session when used, and will commit everything when the scope ends.  This is often done as a controller/action filter attribute, or higher in the request pipeline.  You don’t need to pass a session around, check if it exists, etc.
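As a rough sketch of the sort of thing I mean - an MVC action filter that opens a session before the action and commits when it ends (the SessionFactoryLocator name and the HttpContext.Items key are illustrative only):

// illustrative only - a unit of work managed as a cross-cutting action filter
public class UnitOfWorkAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // assumes the session factory is resolvable from your container or a locator
        var session = SessionFactoryLocator.Factory.OpenSession();
        session.BeginTransaction();
        filterContext.HttpContext.Items["nh.session"] = session;
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        var session = (ISession)filterContext.HttpContext.Items["nh.session"];
        try
        {
            if (filterContext.Exception == null)
                session.Transaction.Commit();   // all tracked changes are written out here
            else
                session.Transaction.Rollback();
        }
        finally
        {
            session.Dispose();
        }
    }
}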

Con1: Deferring code to View Rendering adds complexity.
I have argued that having the session active outside the controller makes the code more difficult to maintain and debug, as the session is active during the view render phase.  The response was that forcing a controller action to pre-load everything the view *might* need and just pushing a dumb model prevents the view from optimising what it actually loads.  I don’t believe that view rendering should have such an impact on code execution, but it is a valid point of contention.

Con2: Loss of control.
When using AOP to manage the session lifetime you have much less control over the failed state.  As the failure occurred outside of the business logic context, you can’t easily use business logic to handle the failure.  If the standard policy is to simply direct you to an error page or similar, then this behaviour is completely valid.  However if you needed to perform actions based on the failure (such as attempting to resolve optimistic concurrency issues automatically) then you can’t. 

Con3: Loss of traceability.
When using a long running session there is no traceability between the persistence and the application logic that made the entity changes.  If you experience an unexpected query when the unit of work commits, you can’t immediately identify which code caused this behaviour.

Con4: Unexpected change tracking.
Having a long running (and potentially ‘invisible’) session exposes you to potentially unexpected behaviour due to the session tracking all changes on the underlying entities.  If you load an entity from the data access layer, then make some changes for display purposes (perhaps change a Name property from “Jason Leach” to “LEACH, Jason”) before passing that entity to the view, when the unit of work cleanup occurs it will persist that change to the database because all of your changes are being tracked. 
A less obvious example: suppose you have a Parent entity with a list of Children.  In a particular controller action you want the parent and only a subset of the children, so you might select the parent and all children, then do something like Parent.Children = Parent.Children.Where(childFilter).ToList().  Depending on your mapping configuration, this is likely to delete all of the children not matching the filter when the unit of work completes.  Oops.
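A sketch of the sort of code that bites you here (hypothetical Parent/Children entities and a FormatForDisplay helper, with the AOP-managed session still open):

// the session opened by the cross-cutting filter is still tracking these entities
var parent = session.Get<Parent>(parentId);

// display-only tweak - but the change is tracked, so "LEACH, Jason" is flushed to the database on commit
parent.Name = FormatForDisplay(parent.Name);

// "filtering" the mapped collection replaces it on the tracked entity;
// with a delete-orphan cascade the non-matching children are deleted when the unit of work completes
parent.Children = parent.Children.Where(c => c.IsActive).ToList();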

Con3 and 4 are direct side effects of leaving a session open longer.  In the Parent->Child example, you would likely only need the session active when you did the initial load of the Parent and Child entities, then close the session and start manipulating the entities.   Obviously you should be mapping entities to ViewModels and never manipulating those entities, but it is a very easy mistake to make.  Coupled with con3, it can be near impossible to identify in a complex unit of work.

Conclusion
As long as you are careful about what you are doing, and not arbitrarily altering domain model entities while a session is open, then long-running sessions are not a problem.  However, there is significant potential for issues if your devs don't understand the risks.
As long as you are happy with standard error/failure, or UI driven resolution workflows, then AOP for unit of work management is acceptable, again as long as the risks / limitations are acknowledged.

 

Thursday, March 20, 2014

Little Things

As I take on more architectural responsibility I write far less code.  I do however provide code samples and give advice on how to solve certain issues.

If there is a flaw in a sample I provide that is obvious in the integrated solution, if not in the code sample itself, it is a major failing on my part not to have identified it.  I hate that.

When the flaw is obvious and the fix is equally obvious, is that still my failing, or a failing of the developers for blindly following a code example rather than reviewing it and taking the effort to understand the code?

Probably both.

Thursday, February 27, 2014

Web Deploy / MS Deploy Connection String Parameterisation

MsDeploy/WebDeploy parameterisation is pretty useful.  It allows a decent amount of flexibility to deploy a single artefact to multiple environments. 

But it doesn’t always work quite as expected, specifically for Connection Strings in the configuration file.

Scenario:

If you create Parameter entries in parameters.xml, when you deploy a web application project you will receive a SetParameters.xml file with entries based on the parameters.xml.  When defining the parameters, you can give them friendly parameter names (and even prompt text which is used when deploying via IIS management screens).

However if you look at the parameters.xml file that is created in the zip manifest, you will see non-friendly parameter entries for the connection strings, and if you created a ‘friendly’ parameter in the source parameters.xml, this friendly entry will have no transformation rules applied.

This means during msdeploy, the friendly entries in your SetParameters file are ignored, so the connection strings are not updated in web.config during the deployment.

Details:

So if your source Parameters.xml contains
<parameter name="Connection String 1" defaultValue="">
    <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/connectionStrings/add[@name='ConnectionString1']/@connectionString" />
</parameter>
<parameter name="Connection String 2" defaultValue="">
    <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/connectionStrings/add[@name='ConnectionString2']/@connectionString" />
</parameter>

And the following in your SetParameters.xml file using your friendly names

<setParameter name="Connection String 1" value="{connection string you want}" />
<setParameter name="Connection String 2" value="{connection string you want}" />


The compiled manifest parameters.xml will look like
<parameter name="Connection String 1" defaultValue="" />
<parameter name="Connection String 2" defaultValue="" />

<parameter name="ConnectionString1-Web.config Connection String" description="ConnectionString1 Connection String used in web.config by the application to access the database." defaultValue="{default value here}" tags="SqlConnectionString">
<parameterEntry kind="XmlFile" scope="{magic string representing web.config}" match="/configuration/connectionStrings/add[@name='ConnectionString1']/@connectionString" />
</parameter>
<parameter name="ConnectionString2-Web.config Connection String" description="ConnectionString2 Connection String used in web.config by the application to access the database." defaultValue="{default value here}" tags="SqlConnectionString">
<parameterEntry kind="XmlFile" scope="{magic string representing web.config}" match="/configuration/connectionStrings/add[@name='ConnectionString2']/@connectionString" />
</parameter>

As you can see, the friendly name entries have no content, so when deploying, your SetParameters values are read from the file but never applied to the config file.

Resolution:

The fix is pretty simple – remove the friendly entries from parameters.xml and setparameters.xxx.xml in source control, and add non-friendly name entries just to the setparameters.xxx.xml – the non-friendly names are ‘predictable’, although if you are desperate, just check the parameters.xml in the manifest after a build.
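For example, based on the manifest shown above, the SetParameters entries end up looking something like this (the exact generated names can vary, so check your own manifest):

<setParameter name="ConnectionString1-Web.config Connection String" value="{connection string you want}" />
<setParameter name="ConnectionString2-Web.config Connection String" value="{connection string you want}" />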


NHibernate Cross-database Hacks

Cross-Database Queries


Ok, so this is a bit of a hack, but it does work.  Thanks to this which set me down the "right" path.

I work with a few legacy systems that have central databases for common information and individual databases for application-specific information.  The data is queried using joins across the databases.
A separate service model was introduced for the common data, but when performing filtered queries across both data sets the service model was not efficient (this is a greater issue of context boundaries that I won’t go into here).  Reproducing the queries that were previously performed in stored procedures using cross-database joins required a bit of a cheat in nHibernate.

nHibernate mappings have a “schema” property as well as the more commonly used “table” property.  By manipulating this schema property you can convince nHibernate to perform cross-database queries: set the schema to “{database}.{schema}” and any join query to that element will effectively use the full cross-database syntax when converted to SQL.
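As a sketch, a Fluent NHibernate mapping for an entity living in the "other" database might look like this (CommonPerson and the database/schema names are illustrative):

// illustrative mapping - joins to this entity are generated as CommonDatabase.dbo.CommonPerson
public class CommonPersonMap : ClassMap<CommonPerson>
{
    public CommonPersonMap()
    {
        Schema("CommonDatabase.dbo"); // "{database}.{schema}"
        Table("CommonPerson");
        Id(x => x.Id);
        Map(x => x.Name);
    }
}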

Neat (but ultimately not very satisfying because it is not a very nice design).


Bigger Hack, run away


If the target database name is not known until runtime, you can hack it even further to support this.

During the configuration of the nHibernate session factory, you can make a few modifications that will allow you to update the schema property of an entity.  This is useful if you have a different ‘other’ database name for each environment (e.g. OtherDatabase-dev, OtherDatabase-prd).

First, we generate the fluent configuration as usual and build it.
We then iterate through the class mappings.  Each persistentClass.MappedClass is the underlying POCO model object of the entity.
We check whether this is one we want to override the schema property for (IOtherDatabaseEntity is a simple blank interface; it could equally be done via a naming convention or whatever).
We then update the schema property on the mapping.
Finally, we create the session factory from the modified config.

var fluentConfiguration = config.Mappings(m =>
        m.FluentMappings
        .AddFromAssemblyOf<MyEntityMap>() // any type from your mapping assembly (placeholder name)
        );
var builtConfig = fluentConfiguration.BuildConfiguration();

foreach (PersistentClass persistentClass in builtConfig.ClassMappings)
{
        // only override entities flagged as living in the "other" database
        if (
            typeof(IOtherDatabaseEntity)
                .IsAssignableFrom(persistentClass.MappedClass)
            )
        {
            // cdrName holds the environment-specific database name, e.g. "OtherDatabase-dev"
            persistentClass.Table.Schema = cdrName;
        }
}

ISessionFactory sessionFactory = builtConfig
        .BuildSessionFactory();


Hacks away!

NHibernate + LINQPad

While looking at ways to assess and improve the performance of some nHibernate queries, I was frustrated with the tools at my disposal.

I will start with the point: I don't have access to nhprof.  It seems to be the gold standard for any nHibernate shop, but them's the breaks.

What I was doing:


It was painful to execute the entire codebase to run one or two queries and view the output SQL, so I started writing integration-style unit tests for query optimisation.  This was slightly better, but still required an annoying change-compile-run-review cycle.

What was I thinking:

I then remembered I have a personal LINQPad license that I hadn't used in a while and wondered if I could get it working.  I saw this which helped me on my way, but we don't use nHibernate.Linq, so the steps were a bit different.

The outcome was extremely useful however, and now I am free to tinker with my queries a lot more freely.

How I did it:

To start you need to add the references to nHibernate that you require (in my case NHibernate, Iesi.Collections, and FluentNHibernate).  You then add the references to your Domain / Models and mapping assemblies.

The next step is to create a hibernate.cfg.xml file with the appropriate configuration for your database.  Make sure you set show_sql=true in the configuration so LINQPad can display the generated SQL.

Then you can call the following code

var cfg = new Configuration().Configure(@"D:\linqpad\ahs\hibernate.cfg.xml");
var factory = Fluently.Configure(cfg)
                .Mappings(m =>
                {
                    m.FluentMappings.AddFromAssemblyOf<MyEntityMap>(); // any type from your mapping assembly (placeholder name)
                })
                .BuildSessionFactory();

using (var session = factory.OpenSession())
{
    var sites = session.QueryOver<Site>() // substitute your own entity type
        .List();
}
and voilà – you can now tinker with queries as you will, with immediate feedback on the generated SQL and execution time.


You can then save this as a query, or save the assembly references/using statements as a snippet to get you up and running quickly for new queries. 

Caveats:

This method only works with pre-compiled entity mappings, so if you intend to improve performance at the entity mapping layer you still need to do this through your application and export the assemblies for LINQPad to use.

Extensions:

LINQPad allows you to create an 'application config' file that is used when your inner assemblies require web/app.config sections.  Run the code:
AppDomain.CurrentDomain.SetupInformation.ConfigurationFile
to find the location of the file, and if it does not exist, create it.  Note that unlike most .NET apps, this is not LINQPad.exe.config, but LINQPad.config.  Enter any configuration you need into this file.  This can include the nHibernate config instead of the separate file (but that limits configuration flexibility).

This allows you to configure things like nHibernate 2nd level cache instances, such as memcache.  As long as you include the necessary libraries in the query references, and the configuration in the linqpad.config file this will work and provide even greater flexibility for performance analysis and testing.


Conclusion:

So there you go, a "poor man's" guide to nHibernate performance analysis, thanks to the ever awesome LINQPad.