Talking about stuff, and junk
Random thought dumps from a .Net developer
Tuesday, March 31, 2020
Blue Yeti Nano monitoring not working
I have been racking my brain trying to get zero-latency monitoring working on my Blue Yeti Nano with no luck, but thanks to a Reddit post (buried deep in a chain of replies that I can't find now) I've fixed it.
If you just hold the "switch mode" button on the back of the microphone (normally used to switch between cardioid and omnidirectional), zero-latency monitoring comes back and the level can be controlled by the volume knob on the front. Huzzah.
Even after fixing it, the mic still doesn't show up as a "microphone" option in the Control Panel playback levels tab, as shown in most "how to fix zero-latency monitoring" YouTube videos.
Wednesday, November 28, 2018
.NET Core and WCF / WsHTTPBinding / WS-* Issues
Late last week I ran a couple of internal technical spikes to verify a standard assumption I was making in a design. The question was whether .NET Core was a valid platform for a new system, and although some very minor work had been done in .NET Core, this system would need to integrate with a number of existing services, and this had not been trialed previously.
I figured it would be simple (but I still wanted to verify it before setting down that path). It wasn't, and I'm glad I did take it that step further.
It turns out that .NET Core / .NET Standard 2.1 does not support WSHttpBinding. WSHttpBinding covers most of the WS-* extensions, including WS-Security, so if you are using Message security in WCF you can't use .NET Core.
All is not lost even if you are using WSHttpBinding: you can convert a WSHttpBinding to a CustomBinding (this site helps a lot: http://webservices20.cloudapp.net/). If you are using WSHttpBinding but not using WS-*, you may still be OK. Be aware that .NET Core doesn't support the specific implementations of many classes even where they are present in the netstandard library, and they throw PlatformNotSupportedException at runtime, so make sure you test it even when the appropriate binding classes are available.
In this case we were never going to be able to connect to that particular WCF endpoint using .NET Core.
It would be possible to implement the WS-* protocols manually, but it's not something I would have the time nor the inclination to support.
As an aside, we implemented a secondary TransportOnly WsHttpBinding on the services years ago to address an issue with TransportWithMessageCredential and load balancers under heavy load. I was able to use that secondary binding using a CustomBinding in .NET Core, so the end result was that we could access the service, but it was a good assumption to have questioned and validated.
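For reference, a .NET Core client for that transport-security endpoint can be assembled by hand. The sketch below is indicative only (the contract, address and quotas are placeholders, not our actual configuration): a CustomBinding built from a SOAP 1.2 / WS-Addressing 1.0 text encoding plus an HTTPS transport stands in for the transport-only wsHttpBinding.

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.Text;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string Ping(string message);
}

public static class TransportOnlyClient
{
    public static IMyService Connect()
    {
        // WSHttpBinding isn't available in .NET Core, so build the equivalent
        // transport-security binding from its constituent binding elements.
        var binding = new CustomBinding(
            new TextMessageEncodingBindingElement(
                MessageVersion.Soap12WSAddressing10, Encoding.UTF8),
            new HttpsTransportBindingElement { MaxReceivedMessageSize = int.MaxValue });

        var factory = new ChannelFactory<IMyService>(
            binding,
            new EndpointAddress("https://example.org/MyService.svc/transport"));

        return factory.CreateChannel();
    }
}

This requires the System.ServiceModel.Primitives and System.ServiceModel.Http packages, and as noted above it is worth exercising the call end-to-end, since some binding elements exist but throw PlatformNotSupportedException at runtime.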
Tuesday, September 25, 2018
How to help reduce the pain of transitioning to JavaScript
There are many complaints about JavaScript. Two major ones coming from a .NET background are:
1) a dynamic language increases errors and reduces the availability of productivity tools like refactoring and autocompletion
2) duplication of code between the front end and back end
These are valid complaints, and do have an impact on productivity. But.
I see the benefits of rich and responsive UI in the browser as an important part of producing great client outcomes. I'm not arguing for JavaScript everywhere; static content (and the vast majority of content-managed content) can be perfectly functional, responsive and pretty without complex JavaScript development, but for interactive systems such as LOB applications, traditional server-side solutions (the traditional loop: HTTP-{GET|POST}->Process->HTTP-RESPONSE->client renders HTML->repeat) are not sufficient.
Even in low-latency, on-premises intranet environments this model is outdated for all but the simplest of solutions, and with cloud and distributed workforces increasing, the inefficiencies of this approach are becoming more significant.
I am not advocating for Single Page Applications, where a whole raft of additional complexities arise for often minimal realized benefit, but a middle ground is necessary.
So what are some options to increase productivity and produce great web applications?
Rich Server-Side Frameworks
Vaadin and the now-defunct LightSwitch remove the front-end from the development process altogether. The server-side object model or designer tool is used to provide the screen definition, and the HTML is generated based on the server-side definition. The frameworks are usually smart enough to support rich client-side processing as well (without such features, you are really better off with MVC-style solutions using a server-side HTML templating engine like ASP.NET MVC/Razor, Django or Ruby on Rails).
These solutions attempt to reduce the amount of front-end code (HTML or JavaScript) by making the framework that generates the HTML also generate a lot of helper JavaScript code for you. Some of these do things well (MVC unobtrusive validation isn't bad) while others do it poorly (ASP.NET content panels), but as soon as you need to do something slightly different, you either need to modify the framework (if you can) or write a bunch of exceptional front-end code to get it the way you want.
Server-side Frameworks with UI Controls
Unlike frameworks like Vaadin, however, the developer is responsible for defining the general website HTML, and any JavaScript required to support the controls used, which re-introduces the original productivity drains mentioned. The benefit is that the developer still doesn't need to build the controls, and often the controls have a well-defined integration pattern with different back-end technologies.
There are half-assed measures like ASP.NET content panels (and most of the ASP.NET AJAX Extensions) and better solutions like ASP.NET MVC unobtrusive validation that allow the server-rendered HTML to also dynamically generate client-side code, providing client-side processing that would otherwise require a postback.
More complete UI frameworks like Kendo UI alleviate some of the consistency issues, but require considerable JavaScript boilerplate to implement, and don't resolve many of the dynamic-language and code-duplication issues except to generally reduce the amount of code needed to use the controls versus writing your own control implementations.
JavaScript turtles (all the way down)
NodeJS proponents tout that a single codebase across front-end and back-end code will improve productivity: why limit your front-end functionality when you can use that code in the back-end?
A counter-argument is that there will always be 'client specific' code and 'server specific' code. Yes, using a common language means reduced context-switching fatigue, and you can share functionality between the front and back end, but at the end of the day you are still writing code for two separate execution paths: your back-end won't have code that calls the back-end services, but you will certainly need that code in your front-end.
This also emphasises the issues of untyped languages (unless you use something like TypeScript), and you are also removing all the years of .NET expertise in one fell swoop.
.NET turtles
I include this as an aside, as it is experimental and very early days, but with the growth of WebAssembly and tools like Blazor, it is possible to build .NET websites that run natively in the client browser.
This doesn't quite work in the way one might expect; rather than incorporating something like ASP.NET MVC in the browser, it uses stand-alone Razor-syntax pages defining layout and functionality within the page. In many respects it is a step away from "good architectural separation of concerns", but to some extent it does align with component-based web design models like React and Vue, so with a better application state management engine this could work well.
Embracing full-stack development
Treating the client side as a first-class citizen in your solution doesn't directly address the productivity issues raised, but can indirectly make a dramatic difference.
Familiarity with the language and strong design patterns (SOLID principles, modularisation, and dependency management), along with modern tooling (modern VS, VS Code, ESLint), goes a long way to making JavaScript work well.
The more familiar you are with something, the less likely you are to make basic mistakes. Untyped languages will always risk spelling, capitalisation or type inconsistencies (a = 1, b = "1", a == b), so it takes time to be productive for people used to compilers picking these basic things up, but if you have a solid foundation (compared to the mess of global namespace objects and spaghetti events of early JavaScript and jQuery development) then these sorts of errors will be reduced.
TypeScript
The use of TypeScript can also address the type-safety issues, and is a big draw for those who are used to relying on the compiler to catch simple errors. The cost is learning a new language, and interoperability issues between TypeScript and JavaScript libraries, but for large and complex sites this could provide a significant improvement in productivity.
The use of TypeScript, or better familiarity with JavaScript, will not address the fact that you will need to duplicate code between the front and back end when doing full-stack development.
Elmish and Fable are similar solutions using a functional paradigm (probably a step too far for most developers).
Code Generation
As noted in JavaScript turtles, the issue of code duplication isn't always relevant. You will be required to write client-specific code in a multi-tier architecture regardless of whether it is one language or two.
Synchronisation of identical code like DTOs (although as a dynamic language JavaScript really doesn't need this), static reference information (things like enums that are used to drive application logic), and front-end business logic (e.g. validation that you also want to ensure is applied server side) is something that does require tedious duplication when using different languages, so that is a good point to focus on.
It is possible to use Roslyn inspectors to read metadata about your back-end code and emit basic JavaScript modules. You might take a DTO object and its validation rules and generate JavaScript validation code for each field, pick up any enums tagged for 'front-end use' and define them in a JavaScript module, or even build API modules for each of your MVC controller methods.
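As a rough illustration of the enum case (using runtime reflection rather than a Roslyn inspector, and a hypothetical [FrontEndUse] attribute and OrderStatus enum), a generator could look something like this:

using System;
using System.Linq;
using System.Reflection;
using System.Text;

// Hypothetical marker attribute for enums we want available in the front end.
[AttributeUsage(AttributeTargets.Enum)]
public sealed class FrontEndUseAttribute : Attribute { }

[FrontEndUse]
public enum OrderStatus { Draft = 0, Submitted = 1, Approved = 2 }

public static class EnumScriptGenerator
{
    // Scans an assembly for [FrontEndUse] enums and emits a simple JavaScript module.
    // A Roslyn-based inspector could do the same from syntax trees at build time.
    public static string Generate(Assembly assembly)
    {
        var sb = new StringBuilder();
        foreach (var type in assembly.GetTypes()
                     .Where(t => t.IsEnum && t.IsDefined(typeof(FrontEndUseAttribute), false)))
        {
            sb.AppendLine($"export const {type.Name} = Object.freeze({{");
            foreach (var name in Enum.GetNames(type))
                sb.AppendLine($"  {name}: {Convert.ToInt32(Enum.Parse(type, name))},");
            sb.AppendLine("});");
        }
        return sb.ToString();
    }
}

Calling Generate(typeof(OrderStatus).Assembly) as a build step would produce an 'export const OrderStatus = ...' module the front end can import, so the values only ever need to be defined once.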
Boilerplate reduction
Rather than using code generation, you can also use and define libraries to consume the back-end services as required. You might use swagger-js to inspect and call your services instead of defining (or generating) a list of URLs and 'fetch()' requests and promises, for example.
For validation, you might actually create service endpoints that validate an object or field on the server side, and have a simple client-side library which calls that server-side validation.
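As a rough sketch of that idea (the DTO, route and controller are hypothetical, not a prescribed pattern), an ASP.NET Core endpoint could run the server's DataAnnotations rules over whatever the front end posts and hand back the failures:

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using Microsoft.AspNetCore.Mvc;

// Hypothetical DTO; the validation rules live in one place, on the server.
public class CustomerDto
{
    [Required, StringLength(100)]
    public string Name { get; set; }

    [EmailAddress]
    public string Email { get; set; }
}

// [ApiController] is deliberately omitted so an invalid DTO still reaches the action
// instead of being rejected with an automatic 400 before we can report the details.
[Route("api/validate")]
public class ValidationController : ControllerBase
{
    // The front end can POST a partially completed object here and render the
    // returned messages, instead of re-implementing the rules in JavaScript.
    [HttpPost("customer")]
    public IActionResult ValidateCustomer([FromBody] CustomerDto candidate)
    {
        var results = new List<ValidationResult>();
        Validator.TryValidateObject(candidate, new ValidationContext(candidate), results,
            validateAllProperties: true);
        return Ok(results.Select(r => new { Members = r.MemberNames, Message = r.ErrorMessage }));
    }
}

The client-side library then only needs to know how to POST a field or object and display the response, rather than duplicating every rule.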
Standardisation and Experience
Obviously the use of code generation or boilerplate libraries/functions comes at a development cost, either writing those libraries, or finding good ones and learning how to use them. In the absence of that time, understanding where those solutions apply and appropriately abstracting them is a really good starting point.
For example, annotate all the types you WANT to expose to the front-end (if you aren't using a framework that does so automatically). It takes ~10 lines of code to define a custom attribute, and ~20 characters to apply one.
Similarly, abstract your front-end code into specific-purpose libraries with independent configuration (e.g. API libraries with a separate 'root url' defined as a dependency, and separate validation rules from validation execution and field application). This will allow you to start to incorporate functionality later, even if you know you don't have the time or skills to solve those problems to begin with.
Conclusions
There are many valid reasons why .NET developers eschew JavaScript, but there are also a lot of opportunities and areas in which large improvements can be made to productivity without having to become an expert in all things JavaScript.
The crux, however, is having someone to drive those efficiencies, and sufficient team engagement to take them on board. The more engagement, the further you can help your team gain those efficiencies. Even without the time to work on innovative frameworks and solutions, there are basics you can instill into your team, such as modularisation, dependency management, and core library consistency, to help reduce the sticker shock of picking up JavaScript.
With enough team engagement you can bring in things like TypeScript, or UI frameworks like Vue, React or Angular, and beyond that, with dedicated support from your teams, you can start to introduce tools and frameworks that automate a lot of what is required.
Tuesday, September 18, 2018
Service Mesh and API Management
A colleague of mine has been looking at a complete development platform transformation to better meet the needs of the organisation. There are some very lofty goals he wants to achieve, and we've had some very interesting (and exciting) discussions on where he wants to head.
One of the discussions we have had is in relation to the management of services and the difference between Service Mesh solutions and API Management. On the surface there are a lot of overlaps between the two concerns, and it isn't immediately obvious what API Management solutions offer that Service Mesh solutions don't.
Breaking it down, the Service Mesh should be responsible for ensuring your services are available. Capabilities such as rich security models, automatic scaling, monitoring and control dashboards, and infrastructure abstraction are also core features of Service Mesh platforms. Solutions like Istio can also provide advanced cross-cutting concerns such as service redirection, logging and caching without having to incorporate those capabilities in the underlying services.
API Management, on the other hand, is about how those services are exposed to consumers. This is aimed at ensuring the people who need to use your services can access them, and have the necessary tools to use them. Like a Service Mesh, API Management is concerned with access security, service failover and monitoring, but it is not responsible for controlling the services themselves. Instead, the API Management layer is responsible for managing which APIs are accessible to which consumers, transforming the services to meet consumer constraints, setting service access limits, and defining billing and utilisation policies.
API Management wasn't something my colleague had looked into, but it is an important part of defining how the services would be accessed. The industry has a tendency to equate "API Management" with "Monetisation", and it is easy to discount API Management when looking at the capabilities of Service Mesh solutions, but when you consider the key differentiation between managing services and managing consumers, there's definitely value in looking at both.
Monday, September 10, 2018
Risk vs Progress
Last week I had to get my hands dirty setting up proprietary physical servers using a serial port connection and terminal interface. It was a fun diversion and an important step in a long-running project. I also thought it was interesting that on a 150k+ piece of hardware I was forced to trim plastic from the included serial port connector to fit the server.
It was not particularly difficult to complete, but it took a lot of work and effort to get that far; the subsequent data center installation this week will be an important milestone in the overall project. It will also highlight the dichotomy between risk and progress, with a further four-week wait until the equipment can be configured by the vendor. Despite being arguably the greater risk, the lack of alternatives meant that approval to physically configure the device could be granted (including hacking away at the connector), but because the software configuration elements can be performed by the vendor remotely, it was considered prudent to have the vendor perform those activities, despite the extended timeframe involved.
There's obviously a fine line between risk and progress, and I'm not one to advocate not following appropriate procedures, but when a process is well documented and really quite straightforward, it's hard not to get frustrated by overly risk-averse engagements.
Thursday, September 6, 2018
Working with Tuples - dereferencing tuples in function declarations
I've been trying to do more little code tidbits to ensure I keep my skills up since I'm rarely coding any more, and F# is a good way to stretch my skills in unfamiliar ways.
I've done a few "useful" things with it now and I'm really enjoying it. One was purely for fun, another for personal use but not exactly fun, and another for actual work.
One thing I found difficult was working effectively with tuples; even though records are very easy to use in F#, tuples are even easier, and I have tended to use them perhaps more than I should.
Take this code, which basically just sums the second value of each tuple in the list:
let list = [("a", 10); ("b", 5)]
let aggregateValues = list |> List.sumBy(fun tuple -> snd tuple)

The readability of that is horrible as you start adding more complex types or multi-value tuples.
A really simple tip was shown to me by someone who has actually delivered F# projects:
let aggregateValues = list |> List.sumBy(fun (category, value) -> value)

By simply dereferencing the tuple and using appropriate field names, it is much clearer what is happening. I knew this could be done in things like match clauses, but I didn't realise you could do it in a function declaration.
Thursday, September 8, 2016
F5 Big-IP Load Balanced WCF Services - Update
The post below covers some findings from a project relating to the authentication between the front- and back-end services, and the F5 configuration. It closely matches issues identified in my previous post for the SSO server, but that was using NTLM/Kerberos (I have a feeling that we didn't set up the SPNs for Kerberos correctly). Ultimately I suspect that a similar configuration to the one below, but set up for one-shot Kerberos and using the SPN in the identity element, would have resolved that issue.
tldr;
Server config:
Configure the server to accept client certificates using PeerTrust - this allows us to authorise the client.
Define the server certificate (the identity of the service - important later on) - this allows message security to work.
<behavior name="certificate">
  <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true" />
  <serviceDebug includeExceptionDetailInFaults="true" />
  <serviceCredentials>
    <clientCertificate>
      <authentication certificateValidationMode="PeerTrust" />
    </clientCertificate>
    <serviceCertificate findValue="<blah>" storeLocation="LocalMachine" x509FindType="FindBySubjectName" />
  </serviceCredentials>
</behavior>
Configure the wsHttpBinding to use Message security:
- establishSecurityContext="false" and negotiateServiceCredential="false" provide the message security and allow load balancing without sticky sessions
- clientCredentialType="Certificate" is used to authenticate the client using a certificate
<wsHttpBinding>
  <binding name="certauth" maxReceivedMessageSize="2147483647">
    <readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" />
    <security mode="Message">
      <message algorithmSuite="TripleDesSha256" establishSecurityContext="false" negotiateServiceCredential="false" clientCredentialType="Certificate" />
    </security>
  </binding>
</wsHttpBinding>
Configure the endpoint; this basically accepts requests on <endpoint>/certauth and applies the appropriate message security and authentication.
<endpoint name="certauth" address="certauth" binding="wsHttpBinding" bindingConfiguration="certauth" contract="<contract>" />
Client config:
Configure the client to accept the server's security configuration using PeerTrust.
Configure the certificate that this client will pass as its credentials.
<endpointBehaviors>
  <behavior name="certificate">
    <clientCredentials>
      <clientCertificate findValue="<name>" storeLocation="LocalMachine" x509FindType="FindBySubjectName" />
      <serviceCertificate>
        <authentication certificateValidationMode="PeerTrust" />
      </serviceCertificate>
    </clientCredentials>
  </behavior>
</endpointBehaviors>
Configure the security bindings; these must match the service bindings.
<wsHttpBinding>
  <binding name="certauth" maxReceivedMessageSize="2147483647">
    <readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" />
    <security mode="Message">
      <message algorithmSuite="TripleDesSha256" establishSecurityContext="false" negotiateServiceCredential="false" clientCredentialType="Certificate" />
    </security>
  </binding>
</wsHttpBinding>
Configure the endpoint. Note the /certauth suffix. The identity element is essential; it needs to match the serviceCredentials defined on the service.
<endpoint address="<endpoint>/certauth" binding="wsHttpBinding" bindingConfiguration="certauth" contract="<contract>" behaviorConfiguration="certificate">
  <identity>
    <certificateReference findValue="<name>" x509FindType="FindBySubjectName" storeLocation="LocalMachine" storeName="TrustedPeople" />
  </identity>
</endpoint>
long version;
To do this we need to use message security, for the following reasons:
- We are using SSL offload, so transport security is not supported (it works over SSL only)
- We are using certificate auth, which is not allowed with TransportCredentialOnly
- We are working across a domain boundary, so we cannot use Windows authentication
The second complexity is that we are working in a load-balanced environment. This adds complexity because WCF negotiates a security context to generate the message security before sending the message. If the load balancer sends the message to a different server from the one the security context was generated on, this will cause intermittent security failures.
The two general solutions to this are to:
- Enable sticky sessions, where the F5 then "guarantees" that the security context negotiation and the message go to the same server.
  - This did not seem to work - with sticky sessions enabled with the following config (), we still received these errors.
  - If the security context lasted longer than the sticky session timeout, then this could still cause problems, but I don't think this was the case.
  - I have a feeling this was caused by not setting the 'identity' correctly in the WCF config, which was a key part of the final working solution, but it should still have worked regardless, as both front-end servers had success connecting to both back-end servers.
- Set establishSecurityContext="false" in the WCF config, which basically sets up security for every message; rather than establishing a long-term security context, each message has its own security context applied.
  - This did not work. I have encountered this before, where it was the supposed fix and did not actually resolve the issue, but at the time I thought that was related to NTLM issues.
  - Sticky sessions combined with this fix should pretty much have guaranteed a fix, as you would never have the sticky session time out before the security context expired (because it is only ever generated right before the message is sent).
  - This did not work. I have no explanation for this.
A final solution that I had previously read about, which allows a configuration to work without sticky sessions, is to set negotiateServiceCredential="false"; however, all the examples pair this with "kerberos one-shot", which allows message security using Kerberos if the services are configured with domain users and SPNs. As we work across domains this wasn't acceptable.
I was finally able to put a few things together and determined that the use of establishSecurityContext="false" along with negotiateServiceCredential="false" should be possible without Kerberos if you use certificate security (separate from certificate authentication). This had partially been done with the service behaviour element, which defined the serviceCredentials on the server using certificates, and PeerTrust for the serviceCertificate on the client. However, the next step was to define the service identity in the client config so that the client knew which certificate to encrypt the message with (the service's public key) instead of negotiating with the service to determine its certificate before encrypting.
Setting the endpoint identity along with establishSecurityContext="false" and negotiateServiceCredential="false" meant that the client now had enough information to encrypt the message without asking the service anything, and the service could decrypt it.
Note that NONE of these issues related to the authentication certificate not being accepted; it was the "message security" that was the problem. It just happened that message security was required so that the authentication could be attached to the payload.
A side effect of this is that we should now be able to get rid of sticky sessions.