Tuesday, September 25, 2018

How to help reduce the pain of transitioning to JavaScript

There are many complaints about JavaScript.

Two major ones coming from a .NET background are:
1) as a dynamic language it increases errors and reduces the availability of productivity tools like refactoring and autocompletion
2) duplication of code between the front-end and back-end

These are valid complaints, and they do have an impact on productivity.  But.
I see rich and responsive UI in the browser as an important part of producing great client outcomes.  I'm not arguing for JavaScript everywhere: static content (and the vast majority of content-managed content) can be perfectly functional, responsive and pretty without complex JavaScript development.  But for interactive systems such as LOB applications, traditional server-side solutions (the traditional loop: HTTP GET/POST -> process -> HTTP response -> client renders HTML -> repeat) are not sufficient.
Even in low-latency on-premises intranet environments this model is outdated for all but the simplest of solutions, and with cloud adoption and distributed workforces increasing, the inefficiencies of this approach are becoming more significant.

I am not advocating for Single Page Applications, where a whole raft of additional complexities arise for often minimal realised benefit, but a middle ground is necessary.


So what are some options to increase productivity and produce great web applications? 


Rich Server-Side Frameworks  
Vaadin and the now-defunct LightSwitch remove the front-end from the development process altogether.  The server-side object model or designer tool provides the screen definition, and the HTML is generated from that definition.  These frameworks are usually smart enough to support rich client-side processing as well (without such features, you are really better off with MVC-style solutions using a server-side HTML templating engine like ASP.NET MVC/Razor, Django or Ruby on Rails).

These solutions attempt to reduce the amount of front-end code (HTML or JavaScript) by making the framework that generates the HTML also generate a lot of helper JavaScript for you.  Some do this well (MVC unobtrusive validation isn't bad) while others do it poorly (ASP.NET UpdatePanels), but as soon as you need to do something slightly different, you either need to modify the framework (if you can) or write a bunch of exceptional front-end code to get it the way you want.

Server-side Frameworks with UI Controls 
These provide rich pre-built controls on top of an otherwise conventional server-side framework.  Unlike frameworks like Vaadin, however, the developer is responsible for defining the general website HTML and any JavaScript required to support the controls used, which re-introduces the original productivity drains mentioned above.  The benefit is that the developer still doesn't need to build the controls, and often the controls have a well-defined integration pattern with different back-end technologies.

There are half-assed measures like ASP.NET UpdatePanels (and most of the ASP.NET AJAX Extensions) and better solutions like ASP.NET MVC unobtrusive validation that allow the server-rendered HTML to also generate client-side code, providing client-side processing that would otherwise require a postback.
More complete UI frameworks like KendoUI alleviate some of the consistency issues, but require considerable JavaScript boilerplate to implement, and don't resolve many of the dynamic language and code duplication issues except to generally reduce the amount of code needed compared with writing your own control implementations.

JavaScript turtles (all the way down) 
Node.js proponents tout that a single codebase across front-end and back-end will improve productivity; and why limit your front-end functionality when you can reuse that code in the back-end?

A counter-argument is that there will always be 'client specific' code and 'server specific' code.  Yes, using a common language means reduced context-switching fatigue, and you can share functionality between the front and back-end, but at the end of the day you are still writing code for two separate execution paths: your back-end won't have code that calls the back-end services, but you will certainly need that code in your front-end.

This also emphasises the issues of untyped languages (unless you use something like TypeScript), and you are also discarding all those years of .NET expertise in one fell swoop.

.NET turtles 
I include this as an aside, as it is experimental and very early days, but with the growth of WebAssembly and tools like Blazor, it is possible to build .NET websites that run natively in the client browser.
This doesn't quite work the way one might expect: rather than incorporating something like ASP.NET MVC in the browser, it uses stand-alone Razor-syntax pages defining layout and functionality within the page.  In many respects it is a step away from "good architectural separation of concerns", but to some extent it aligns with component-based web design models like React and Vue, so with a better application state management engine this could work well.

Embracing full-stack development 
Treating the client-side as a first-class citizen in your solution doesn't directly address the productivity issues raised, but can indirectly make a dramatic difference. 

Familiarity with the language, strong design patterns (SOLID principles, modularisation, and dependency management) and modern tooling (modern VS, VS Code, ESLint) go a long way to making JavaScript work well.
The more familiar you are with something, the less likely you are to make basic mistakes.  Untyped languages will always risk spelling, capitalisation or type inconsistencies (a = 1, b = "1", a == b), so it takes time for people used to compilers picking these basic things up to become productive, but with a solid foundation (compared to the mess of global namespace objects and spaghetti events of early JavaScript and jQuery development) these sorts of errors will be reduced.

TypeScript 
The use of TypeScript can also address the type safety issues, and is a big draw for those who are used to relying on the compiler to catch simple errors.  The cost is learning a new language, plus interoperability issues between TypeScript and JavaScript libraries, but for large and complex sites it can provide a significant improvement in productivity.

The use of TypeScript or better familiarity with JavaScript will not address the fact that you will still need to duplicate code between the front and back end when doing full-stack development.

Elmish and Fable are similar compile-to-JavaScript solutions using a functional paradigm (probably a step too far for most developers).

Code Generation 
As noted in JavaScript turtles, the issue of code duplication isn't always relevant: you will be required to write client-specific code in a multi-tier architecture regardless of whether it is one language or two.
Synchronisation of identical code is another matter: DTOs (although as a dynamic language JavaScript doesn't strictly need them), static reference information (things like enums that drive application logic), and front-end business logic (e.g. validation that you also want to ensure is applied server-side) all require tedious duplication when using different languages, so that is a good point to focus on.

It is possible to use Roslyn inspectors to read metadata about your back-end code and emit basic JavaScript modules.
You might take a DTO and its validation rules and generate JavaScript validation code for each field, pick up any enums tagged for 'front-end use' and define them in a JavaScript module, or even build API modules for each of your MVC controller methods.
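
As a rough illustration, here is a minimal sketch of the enum case.  It uses standard Roslyn (Microsoft.CodeAnalysis) parsing; the [FrontEndVisible] marker attribute is a made-up name (sketched under Standardisation and Experience below):

using System.Linq;
using System.Text;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

public static class EnumScriptGenerator
{
    // Emits "export const <EnumName> = Object.freeze({ ... });" for each
    // enum in the source text carrying the [FrontEndVisible] attribute.
    public static string Generate(string csharpSource)
    {
        var root = CSharpSyntaxTree.ParseText(csharpSource).GetRoot();
        var output = new StringBuilder();

        var taggedEnums = root.DescendantNodes()
            .OfType<EnumDeclarationSyntax>()
            .Where(e => e.AttributeLists
                .SelectMany(list => list.Attributes)
                .Any(a => a.Name.ToString().Contains("FrontEndVisible")));

        foreach (var declaration in taggedEnums)
        {
            output.AppendLine($"export const {declaration.Identifier.Text} = Object.freeze({{");
            var value = 0;
            foreach (var member in declaration.Members)
            {
                // Honour explicit numeric values (Foo = 5); otherwise count up.
                if (member.EqualsValue?.Value is LiteralExpressionSyntax literal)
                    value = (int)literal.Token.Value;
                output.AppendLine($"  {member.Identifier.Text}: {value},");
                value++;
            }
            output.AppendLine("});");
        }

        return output.ToString();
    }
}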

Boilerplate reduction 
Rather than using code generation, you can also use or define libraries that consume the back-end services as required.  For example, you might use swagger-js to inspect and call your services instead of defining (or generating) a list of URLs and hand-rolled 'fetch()' requests and promises.

For validation, you might actually create service endpoints that validate an object or field on the server side, and have a simple front-end library call those endpoints, so the validation logic itself only ever lives on the server.
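
The server side of that idea can be a thin shim over validation you already have.  A minimal sketch, assuming ASP.NET Core MVC and a made-up CustomerDto; model binding runs the DataAnnotations rules, so the endpoint just reports the result:

using System.ComponentModel.DataAnnotations;
using Microsoft.AspNetCore.Mvc;

public class CustomerDto
{
    [Required, StringLength(100)]
    public string Name { get; set; }

    [EmailAddress]
    public string Email { get; set; }
}

[Route("api/validate/customer")]
public class CustomerValidationController : ControllerBase
{
    // POST api/validate/customer
    // Returns 400 with field -> message pairs when the DTO fails its
    // DataAnnotations rules, 200 otherwise.
    [HttpPost]
    public IActionResult Validate([FromBody] CustomerDto dto)
    {
        if (!ModelState.IsValid)
            return BadRequest(ModelState);
        return Ok();
    }
}

The front-end library then only needs to POST the current field values and map the response onto the form.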

Standardisation and Experience 
Obviously the use of Code Generation or boilerplate libraries/functions comes at a development cost: either writing those libraries, or finding good ones and learning how to use them.  In the absence of that time, understanding where those solutions apply and abstracting them appropriately is a really good starting point.
For example, annotate all the types you WANT to expose to the front-end (if you aren't using a framework that does so automatically).  It takes ~10 lines of code to define a custom attribute, and ~20 characters to apply one.
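
A sketch of that attribute (FrontEndVisibleAttribute is a made-up name, matching the code-generation example above):

using System;

[AttributeUsage(AttributeTargets.Enum | AttributeTargets.Class)]
public sealed class FrontEndVisibleAttribute : Attribute
{
}

// ...and applying it costs one line:
[FrontEndVisible]
public enum OrderStatus
{
    Draft,
    Submitted,
    Approved
}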
Similarly, abstract your front-end code into specific-purpose libraries with independent configuration (e.g. API libraries with a separate 'root url' defined as a dependency, and validation rules separated from validation execution and field application).  This will allow you to incorporate functionality later, even if you know you don't have the time or skills to solve those problems to begin with.


Conclusions 
There are many valid reasons why .NET developers eschew JavaScript, but there are also a lot of opportunities and areas in which large improvements can be made to productivity without having to become an expert in all things JavaScript.   
The crux however is having someone to drive those efficiencies, and sufficient team engagement to take them on board.  The more engagement, the further you can help your team gain those efficiencies.  Even without the time to work on innovative frameworks and solutions, there are basics you can instill into your team, such as modularisation, dependency management, and core library consistency, to help reduce the sticker shock of picking up JavaScript.
With enough team engagement you can bring in things like TypeScript, or UI frameworks like Vue, React or Angular; and beyond that, with dedicated support from your teams, you can start to introduce tools and frameworks that automate a lot of what is required.








Tuesday, September 18, 2018

Service Mesh and API Management

A colleague of mine has been looking at a complete development platform transformation to better meet the needs of the organisation.  There are some very lofty goals he wants to achieve, and we've had some very interesting (and exciting) discussions on where he wants to head. 

One of the discussions we have had is in relation to the management of services and the difference between Service Mesh solutions and API Management.  On the surface there are a lot of overlaps between the two concerns, and it isn't immediately obvious what API Management solutions offer that Service Mesh solutions don't.  

Breaking it down, the Service Mesh should be responsible for ensuring your services are available.  Capabilities such as rich security models, automatic scaling, monitoring and control dashboards, and infrastructure abstraction are also core features of Service Mesh platforms.  Solutions like Istio can also provide advanced cross-cutting concerns such as service redirection, logging and caching without having to incorporate those capabilities in the underlying services.

API Management on the other hand is about how those services are exposed to consumers.  This is aimed at ensuring the people who need to use your services can access them, and have the necessary tools to use them.  Like a Service Mesh, API Management is concerned with access security, service failover and monitoring, but it is not responsible for controlling the services themselves.  Instead the API Management layer is responsible for managing which APIs are accessible to which consumers, transforming the services to meet consumer constraints, setting service access limits, and defining billing and utilisation policies.

API management wasn't something my colleague had looked into, but it is an important part of defining how the services would be accessed.  The industry has a tendency to equate "API Management" with "Monetisation" and it is easy to discount API Management when looking at the capabilities of Service Mesh solutions, but when you consider the key differentiation between managing services and managing consumers, there's definitely value in looking at both.

Monday, September 10, 2018

Risk vs Progress

Last week I had to get my hands dirty setting up proprietary physical servers using a serial port connection and terminal interface.  It was a fun diversion and an important step in a long-running project.  I also thought it was interesting that on a 150k+ piece of hardware I was forced to trim plastic from the included serial port connector to fit the server.

It was not particularly difficult to complete, but it took a lot of work and effort to get that far; the subsequent data center installation this week will be an important milestone in the overall project.  It also highlights the dichotomy between risk and progress, with a further 4-week wait until the equipment can be configured by the vendor.  Despite being arguably the greater risk, the lack of alternatives meant that approval to physically configure the device could be granted (including hacking away at the connector); but because the software configuration can be performed by the vendor remotely, it was considered prudent to have the vendor perform those activities despite the extended timeframe involved.

There's obviously a fine line between risk and progress, and I'm not one to advocate skipping appropriate procedures, but when a process is well documented and really quite straightforward, it's hard not to get frustrated by overly risk-averse engagements.

Thursday, September 6, 2018

Working with Tuples - dereferencing tuples in function declarations

I've been trying to do more little code tidbits to ensure I keep my skills up since I'm rarely coding any more, and F# is a good way to stretch my skills in unfamiliar ways.

I've done a few "useful" things with it now and I'm really enjoying it.  One was purely for fun, another for personal use but not exactly fun, and another for actual work.

One thing I found difficult was working effectively with tuples.  Even though records are so easy to use in F#, tuples are even easier, and I have tended to use them perhaps more than I should.
Take this code, which basically just sums the second value from each tuple in the list:
let list = [("a", 10); ("b", 5)]
let aggregateValues = list |> List.sumBy(fun tuple -> snd tuple)
The readability of that is horrible as you start adding more complex types or multi-value tuples.


A really simple tip was shown to me by someone who has actually delivered F# projects:
let aggregateValues = list |> List.sumBy(fun (category, value) -> value)
By simply dereferencing the tuple and using appropriate field names, it is much clearer what is happening.  I knew this could be done in things like match clauses, but I didn't realise you could do it in a function declaration.


Thursday, September 8, 2016

F5 Big-IP Load Balanced WCF Services - Update


The post below covers some findings from a project, relating to authentication between the front-end and back-end services and the F5 configuration.  It closely matches the issues identified in my previous post on the SSO server, but that was using NTLM/Kerberos (I have a feeling we didn't set up the SPNs for Kerberos correctly).  Ultimately I bet that a configuration similar to the one below, but set up for one-shot Kerberos and using the SPN in the identity element, would have resolved that issue.



tl;dr:
Server config:
Configure the server to accept client certificates using PeerTrust - this allows us to authorise the client.
Define the server certificate (the identity of the service - important later on) - this allows message security to work.
<behavior name="certificate">
  <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true" />
  <serviceDebug includeExceptionDetailInFaults="true" />
  <serviceCredentials>
    <clientCertificate>
      <authentication certificateValidationMode="PeerTrust" />
    </clientCertificate>
    <serviceCertificate findValue="<blah>" storeLocation="LocalMachine" x509FindType="FindBySubjectName" />
  </serviceCredentials>
</behavior>

Configure the wsHttpBinding to use Message security:
  • establishSecurityContext="false" and negotiateServiceCredential="false" provide the message security and allow load balancing without sticky sessions
  • clientCredentialType="Certificate" authenticates the client using a certificate
<wsHttpBinding>
  <binding name="certauth" maxReceivedMessageSize="2147483647">
    <readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" />
    <security mode="Message">
      <message algorithmSuite="TripleDesSha256" establishSecurityContext="false" negotiateServiceCredential="false" clientCredentialType="Certificate" />
    </security>
  </binding>
</wsHttpBinding>

Configure the endpoint; this accepts requests on <endpoint>/certauth and applies the appropriate message security and authentication.
<endpoint name="certauth" address="certauth" binding="wsHttpBinding" bindingConfiguration="certauth" contract="<contract>" /> 


Client Config: 
Configure the client to accept the server's security configuration using PeerTrust 
Configure the certificate that this client will pass as its credentials 
<endpointBehaviors>
  <behavior name="certificate">
    <clientCredentials>
      <clientCertificate findValue="<name>" storeLocation="LocalMachine" x509FindType="FindBySubjectName" />
      <serviceCertificate>
        <authentication certificateValidationMode="PeerTrust" />
      </serviceCertificate>
    </clientCredentials>
  </behavior>
</endpointBehaviors>

Configure the security bindings; these must match the service bindings.
<wsHttpBinding>
  <binding name="certauth" maxReceivedMessageSize="2147483647">
    <readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" />
    <security mode="Message">
      <message algorithmSuite="TripleDesSha256" establishSecurityContext="false" negotiateServiceCredential="false" clientCredentialType="Certificate" />
    </security>
  </binding>
</wsHttpBinding>

Configure the endpoint.  Note the /certauth suffix.
The identity element is essential; it needs to match the serviceCredentials defined on the service.
<endpoint address="<endpoint>/certauth" binding="wsHttpBinding" bindingConfiguration="certauth" contract="<contract>" behaviorConfiguration="certificate">
  <identity>
    <certificateReference findValue="<name>" x509FindType="FindBySubjectName" storeLocation="LocalMachine" storeName="TrustedPeople"/>
  </identity>
</endpoint>



Long version:

To do this we need to use message security, for the following reasons:
  • We are using SSL offload, so transport security is not supported (it requires SSL end-to-end)
  • We are using certificate auth, which is not allowed with TransportCredentialOnly
  • We are working across a domain boundary, so we cannot use Windows authentication

The second complexity is that we are working in a load-balanced environment.  This adds complexity because WCF negotiates a security context to generate the message security before sending the message.  If the load balancer sends the message to a different server than the one the security context was negotiated on, this causes intermittent security failures.

The two general solutions to this are to:
  • enable sticky sessions, where the F5 then "guarantees" that the security context negotiation and the message occur on the same server.
    • This did not seem to work - with sticky sessions enabled with the following config (), we still received these errors.
    • If the security context lasted longer than the sticky session timeout this could still cause problems, but I don't think that was the case.
    • I have a feeling this was caused by not setting the 'identity' correctly in the WCF config, which was a key part of the final working solution, but it should still have worked regardless, as both front-end servers successfully connected to both back-end servers.
  • set establishSecurityContext="false", which sets up security for every message; rather than establishing a long-term security context, each message has its own security context applied.
    • This did not work either.  I have encountered this before, where it was the supposed fix and did not actually resolve the issue, but at the time I thought that was related to NTLM issues.
    • Sticky sessions combined with this fix should pretty much have guaranteed success, as the sticky session could never time out before the security context expired (because the context is only ever generated right before the message is sent).
      • This did not work.  I have no explanation for that.

A final solution I had previously read about, which allows the configuration to work without sticky sessions, is to set negotiateServiceCredential="false"; however, all the examples use this to enable "kerberos one-shot", which allows message security using Kerberos if the services are configured with domain users and SPNs.  As we work across domains this wasn't acceptable.

I was finally able to put a few things together and determined that establishSecurityContext="false" combined with negotiateServiceCredential="false" should be possible without Kerberos if you use certificate security (separate from certificate authentication).  This had partially been done already: the service behavior defined the serviceCredentials on the server using certificates, and PeerTrust for the serviceCertificate on the client.  The missing step was to define the service identity in the client config, so the client knew which certificate (public key) to encrypt the message with, instead of negotiating with the service to discover its certificate before encrypting.
Setting the endpoint identity along with establishSecurityContext="false" and negotiateServiceCredential="false" meant the client had enough information to encrypt the message without asking the service anything, and the service could decrypt it.

Note that NONE of these issues related to the authentication certificate not being accepted; it was the "message security" that was the problem.  It just happened that we required message security so that the authentication could be attached to the payload.

A side effect of this is that we should now be able to get rid of sticky sessions. 

The Dependency Zoo

This is shamelessly pulled from here: https://nblumhardt.com/2010/01/the-relationship-zoo/ - read it, it is good.  Much better than this.

Quite some time ago I was faced with a problem that I wasn't quite sure how to solve, and one of the devs I most respect sent me the link above.  While the link wasn't the answer, it helped me formulate the answer I needed by making me think more critically about my dependencies.

Prior to that point dependency injection was very useful to me, but I didn't realise how limited my understanding was.  I thought I had a good grasp of why IoC was important and how to use DI, but my view was simplistic.

The problem in my case wasn't that I needed a B; I needed a specific B, and I would only know which B at runtime.  This didn't match the way I thought of dependency injection, and I couldn't figure out a non-ugly solution.

What this article did was really make me think about how I consider dependencies. 

The article notes that there are a number of common types of relationships between classes:
"I need a B"
"I might need a B"
"I need all the Bs"
"I need to create a B"
"I need to know something about B before I will use it"

All of these have a direct representation in Autofac (and many other DI containers, though the last is not so common); this is a direct result of the author thinking about these relationships before writing Autofac.

Usually the solution is "use the service locator 'anti-pattern'".  I won't argue the point, but it is widely considered an anti-pattern unless it fits a very narrow category.  My problem does fall within that category, but I didn't think it was an elegant solution, and I also didn't think of it at the time.  Essentially, with the service locator pattern you take the DI container itself as a dependency and ask it for what you need (usually in a switch statement), as sketched below.
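
For illustration, a minimal sketch using Autofac's IComponentContext as the locator (the parser types are made up):

using System;
using Autofac;

public interface IParser { }
public class XmlParser : IParser { }
public class JsonParser : IParser { }

public class DocumentProcessor
{
    private readonly IComponentContext _container;

    // The container itself is the dependency - this is what makes it a locator.
    public DocumentProcessor(IComponentContext container)
    {
        _container = container;
    }

    public IParser GetParser(string format)
    {
        switch (format)
        {
            case "xml": return _container.Resolve<XmlParser>();
            case "json": return _container.Resolve<JsonParser>();
            default: throw new ArgumentException($"No parser for '{format}'");
        }
    }
}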




The second most common solution to my problem is to take the "I need to create a B" dependency, or Func<X,Y,B>, where B is the type we want and X and Y are the parameters we use to create a B.  This pretty much matches the Factory pattern, where you have a class which creates the objects you want.  In this case Autofac is smart enough that if you take a dependency on Func<X,B>, and you have a B that takes X as a constructor parameter, it will give you a factory function for B without your having to write a specific "Factory" class.
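
A minimal sketch of that relationship (the report types are made up):

using System;
using Autofac;

public interface ILogger { }
public class ConsoleLogger : ILogger { }

public class ReportRequest { }

public class ReportGenerator
{
    // The ReportRequest (our X) comes from the caller at runtime;
    // the ILogger is resolved from the container as usual.
    public ReportGenerator(ReportRequest request, ILogger logger) { }
}

public class ReportScreen
{
    private readonly Func<ReportRequest, ReportGenerator> _factory;

    // Autofac supplies this Func automatically - no hand-written factory class.
    public ReportScreen(Func<ReportRequest, ReportGenerator> factory)
    {
        _factory = factory;
    }

    public void Run()
    {
        var generator = _factory(new ReportRequest());
    }
}

// Registration is just:
//   builder.RegisterType<ConsoleLogger>().As<ILogger>();
//   builder.RegisterType<ReportGenerator>();
//   builder.RegisterType<ReportScreen>();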


The problem in this instance is that it doesn't really solve my problem, for two reasons: one is that the type of X was common across all my Bs, so Autofac wouldn't know which B to give me; I still needed a Factory class.  It does make my dependee cleaner, but the factory was still riddled with switch statements trying to get the right B.  It is a very useful type of dependency, but not the one I wanted in this instance.

The next most common solution is to take all of the Bs and loop through them, asking each if it is the right one.  I've seen this described as the "right way to not use a service locator for that one instance where it is usually used", but I'm not particularly convinced.  http://blog.ploeh.dk/2011/09/19/MessageDispatchingwithoutServiceLocation/ explains this in a bit more detail.  It works, but it is really inefficient - especially if a B is expensive to create.  It is also ugly.

"Expensive to create" might not always be an issue, but if you are creating every B and discarding all but one every time you ask for a B, that's messy.  This is where Lazy<B> comes in.  Lazy<T> is built into .NET (and Autofac knows about it), so you don't actually get a B; you get something that can give you a B when you need it.


This means that if you *might* need a B, then go with Lazy<B> and you will only create/get one when you need it.  It's not always necessary, but if you have a complex dependency hierarchy, or a very intensive constructor, it can make a big difference (see the sketch below).  Unfortunately I still needed all of the Bs to determine which one I needed, and because I had Lazy<B>s I couldn't ask each one if it was the right one without type inspection/reflection.  Note: be careful of the distinction between Lazy<B> and Owned<B> - I won't cover it here, but it is important.
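
A minimal sketch of the Lazy<B> relationship (the audit log type is made up; Autofac injects Lazy<T> for any registered T automatically):

using System;

public class ExpensiveAuditLog
{
    public ExpensiveAuditLog()
    {
        // Imagine something slow here: opening connections, loading config...
    }

    public void Record(string message) { }
}

public class OrderProcessor
{
    private readonly Lazy<ExpensiveAuditLog> _audit;

    public OrderProcessor(Lazy<ExpensiveAuditLog> audit)
    {
        _audit = audit;
    }

    public void Process(bool failed)
    {
        if (failed)
            _audit.Value.Record("order failed"); // only constructed on first use
    }
}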

So far we have covered the top four in the list.  They are all important and very useful to know, but they didn't solve my problem.  The last on the list is the most obscure, and possibly the least widely used, but it is something I love (and have manually implemented in other DI containers that don't have it).
Autofac calls this Meta<B,X>, where X is the type of an object that is attached to B at the time you register B.  You can then ask the dependency about its X without creating a B.
Using this we can have lots of Bs, and when registering my Bs I tell Autofac to attach an X to each.  When I want the right B, I ask for IEnumerable<Meta<Lazy<B>,X>> and loop through the list until I find the B whose X I am looking for.  Autofac does this automatically; all I need to do is tell it the X when I register each B.
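
A minimal sketch using Autofac's weakly-typed metadata (the command and handler types are made up):

using System;
using System.Collections.Generic;
using System.Linq;
using Autofac;
using Autofac.Features.Metadata;

public interface ICommand { }
public class FooCommand : ICommand { }
public class BarCommand : ICommand { }

public interface IHandler { void Handle(ICommand command); }
public class FooHandler : IHandler { public void Handle(ICommand command) { } }
public class BarHandler : IHandler { public void Handle(ICommand command) { } }

public static class Demo
{
    public static void Main()
    {
        var builder = new ContainerBuilder();

        // Attach the X (here, the handled command type) at registration time.
        builder.RegisterType<FooHandler>().As<IHandler>()
               .WithMetadata("handles", typeof(FooCommand));
        builder.RegisterType<BarHandler>().As<IHandler>()
               .WithMetadata("handles", typeof(BarCommand));

        using (var container = builder.Build())
        {
            // Inspect each B's X without constructing any B...
            var handlers = container.Resolve<IEnumerable<Meta<Lazy<IHandler>>>>();

            // ...and only the matching B is ever created.
            var handler = handlers
                .First(h => (Type)h.Metadata["handles"] == typeof(FooCommand))
                .Value.Value;

            handler.Handle(new FooCommand());
        }
    }
}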



I can also create a Func<B> factory that does this lookup, cleaning up my dependee a little more.  The most common use case for this pattern is the command dispatcher, where abstracting the lookup doesn't really add any value (since finding the right B for a given ICommand is the whole point of the command dispatcher).
Some examples of the metadata you might want to attach to a registration are:
  • The type of ICommand the IHandler handles - this is the one I use the most
  • A 'weight' or 'order' - if we have 10 Bs that might be the one I need, I get the highest-priority one.
  • A 'delegation' pattern: I need to get the approval workflow for invoices > $500.  If this changes to > $1000 in 6 months, all I need to do is update the registration of the IApprovalWorkflows to set the new limit.
  • A 'feature toggle' or A/B testing: you may register a number of Bs and pick a different one based on some criteria.


I don't expect anyone to come out of this thinking that these examples are the best way to do things, but what I would like you to come away with is a greater appreciation of what it means to have a dependency, and to use appropriate patterns and abstractions instead of just saying "I need a B" (or new-ing something up).