Increase Local Reasoning with Stateless Architecture and Value Types

It is just another Thursday of adding features to your mobile app.

You have blasted through your task list by extending the current underlying object model + data retrieval code.

Your front-end native views are all coming together. The navigation between views and specific data loading is all good.

Git Commit. Git Push. The build pops out on HockeyApp. The Friday sprint review goes well, until the product manager points out that full CRUD (Create, Read, Update, Delete) functionality is required in each of the added views. You only have the ‘R’ in ‘CRUD’ implemented. You look through your views, figure it just can’t be that bad to add the C, U, and D, and commit to having full CRUD in all the views by next Friday’s sprint review.

The weekend passes, you come in on Monday, and you start going through all your views to add full CRUD. You update your first view, start navigating through your app, do some creates, updates, and deletes, and notice that all of those other views you added last week are just broken. Whole swaths of classes are sharing data you didn’t know they shared. Mutating data in one view has unknown effects on the other views, thanks to shared references to data classes from your back-end object model.

Your commitment to having this all done by Friday is looking like a pipe-dream.

Without realizing it, you have become a victim of code that cannot be locally reasoned about, due to its heavy use of reference types.

Local reasoning is the ability of a programmer to look at a single unit of code (e.g. a class, struct, function, or series of functions) and be confident that changes made to data structures and variables within that unit don’t have unintended effects on unrelated areas of the software.

The classic example of poor local reasoning is handing out references to a single class instance to many different areas of the application. That shared instance is then mutated by every client that was handed a reference to it.

Adding to the pain is the possibility that a reference to your single instance is held by one client while a parallel thread mutates it out from under them.
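A minimal C# sketch of the problem (the type and property names here are hypothetical): two “views” handed the same class instance see each other’s mutations, while value-typed copies stay independent.

```csharp
using System;

// A reference type: every client that receives an instance shares it.
class CustomerRecord
{
    public string Name { get; set; }
}

// A value type: assignment copies the data wholesale.
struct CustomerValue
{
    public string Name { get; set; }
}

static class AliasingDemo
{
    public static void Run()
    {
        var shared = new CustomerRecord { Name = "Alice" };
        var listView = shared;            // both "views" hold the same reference
        var detailView = shared;
        detailView.Name = "Bob";          // mutation in one view...
        Console.WriteLine(listView.Name); // ...leaks into the other: prints "Bob"

        var row = new CustomerValue { Name = "Alice" };
        var detailCopy = row;             // copies the whole value
        detailCopy.Name = "Bob";          // mutation stays local
        Console.WriteLine(row.Name);      // prints "Alice"
    }
}
```

The class version is the spider web: any of the clients holding `shared` can change it for all the others. The struct version keeps the mutation confined to the copy that was mutated.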

Code that you thought was one-pass, single-path, easy-peasy has become a spider web of live wires blowing in the wind, short-circuiting and sparking at random.

With the recent rise of Swift, there has been a movement to use value types to avoid all that sparking caused by random mutation within reference types.

For so long, we were conditioned to use class types for everything. Classes seem like the ultimate solution: lightweight hand-off of references (i.e. pointers) instead of allocation and copying of internal members across function calls, and the ability to globally mutate state in one shot via known, publicly exposed functions. It all looks so awesomely ‘object-oriented’. Until, that is, you hit the complex scenarios that a CRUD-based user interface has to implement. Suddenly that awesome class-based object model is being referenced by view after view after subview after sub-subview. Mutation can now occur across tens of classes on tens of running threads.

Time to go way too far up the geek scale to talk about possible solutions.

A classic trope in many Star Trek episodes was something sneaking onto the ship. Once on the ship, the alien / particle / nanite / Lwaxana Troi would start to wreak havoc with the red shirts / warp core / main computer / Captain Picard’s patience (respectively).

By using nothing but classes and reference types, even with a well-defined, pure OO interface to each, you are spending too much time with your shields down and letting too many things sneak onto the ship. It is time to raise the shields for good by using value types as isolated shuttlecraft that move values securely between the ship and interested parties.

Apple has been emphasizing the use of value types for the past two years via their release of the Swift language.

Check out the WWDC 2015/2016 presentations that emphasize Swift value types as a bringer of stability and performance, using the language itself to build local reasoning into code:

Apple has even migrated many existing framework classes (i.e. reference types) to value types in the latest evolution, Swift 3.0. Check out the WWDC 2016 video: What’s New in Foundation for Swift.

At Minnebar 11, Adam May and Sam Kirchmeier presented on Exploring Stateless UIs in Swift. In their presentation, they outline a series of techniques using Swift to eliminate common iOS state and reference bugs. Their techniques meld together stateless concepts from React, Flux, and language techniques in Swift, to dramatically increase the local reasoning in standard iOS code.

Riffing off of Adam and Sam’s presentation, I came up with a basic representation of stateless concepts in Xamarin to solve a series of cross-platform concerns.
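As a rough sketch of what such a stateless representation can look like in C# (the action and store types below are my own illustration, not Adam and Sam’s API): actions are immutable values, a single store owns the state, and views are pushed read-only snapshots to render.

```csharp
using System;
using System.Collections.Generic;

// An action is a small immutable value describing what happened.
struct AddItemAction
{
    public readonly string Item;
    public AddItemAction(string item) { Item = item; }
}

// The store owns the only mutable state. Views never receive a live
// reference to that state; they are pushed read-only snapshots.
class ItemStore
{
    private readonly List<string> items = new List<string>();

    public event Action<IReadOnlyList<string>> StateChanged;

    public void Dispatch(AddItemAction action)
    {
        items.Add(action.Item);
        // Push a defensive copy so no view can mutate shared state.
        StateChanged?.Invoke(new List<string>(items).AsReadOnly());
    }
}
```

A view subscribes with `store.StateChanged += state => Render(state);` and triggers changes only by dispatching actions, never by reaching into shared objects.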

The rise of value types as a bringer of local reasoning is not isolated to Apple and Swift. The recognition that emphasizing local reasoning can eliminate whole swaths of bugs is burning its way through the JavaScript community as well. React.js and the underlying Flux architecture enforce a one-way, one-time push-and-render to a view via action, dispatcher, and store constructs. React + Flux ensure that JavaScript code doesn’t perform cross-application mutation in random and unregulated ways; local reasoning is assured by the underlying architecture.

Even the PUT, DELETE, POST, and GET operations underlying REST-based web interfaces are a recognition of the power of local reasoning and the scourge of mutating shared references to objects.

C# and the .NET languages are comparatively weak on value types as a bringer of local reasoning. For so long, Microsoft has provided guidance along the lines of ‘In all other cases, you should define your types as classes’, largely due to performance implications.

Have no fear: you can bring similar concepts to C# as well, via the struct.

The one drawback of C# is the ease with which a struct can expose mutators in non-obvious ways via functions and public property setters.

Swift, by contrast, has the ‘mutating’ keyword: any function that mutates a member of a struct must be marked ‘mutating’ for compilation to succeed. Unfortunately, there is no such compiler-enforced mutability guard for structs in C#. The best you can usually do is omit the setter from most of your struct property definitions, and use private / internal modifiers to keep the reasoning scope of your type as local as you possibly can.
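A minimal sketch of that C# workaround, using a hypothetical Money struct: getter-only properties, with “mutation” expressed as methods that return a brand new value.

```csharp
struct Money
{
    // Getter-only auto-properties: no setter to mutate after construction.
    public decimal Amount { get; }
    public string Currency { get; }

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }

    // "Mutation" yields a new value; the original is untouched.
    public Money Add(decimal delta)
    {
        return new Money(Amount + delta, Currency);
    }
}
```

With `var a = new Money(10m, "USD"); var b = a.Add(5m);`, `a.Amount` is still 10 — no client holding a Money can change it out from under anyone else.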

The next time you see a bug caused by a seemingly random chunk of data mutating, give a thought to how you may be able to refactor that code using stateless architecture concepts and value types to increase the local reasoning of all associated code. Who knows, you may find and fix many bugs you didn’t even realize that you had.

Writing Node Applications as a .NET Developer

As a .NET developer, creating modern web apps using Node on the backend can seem daunting.  The amount of tooling and setup required before you can write a “modern” application has led the development community to exhibit “JavaScript fatigue”: a general weariness brought on by the exploding number of tools, libraries, frameworks, and best practices introduced on a seemingly daily basis.  Contrast this with building an app in .NET using Visual Studio, where the developer simply selects a project template to build from and is ready to go. [Read more…]

Common Pitfalls with IDisposable and the Using Statement

Memory management in .NET is generally simpler than in languages like C++, where the developer has to explicitly handle memory usage.  Microsoft added a garbage collector to the .NET Framework to clean up objects and memory from managed code once they are no longer needed.  However, since the garbage collector does not deal with resources allocated by unmanaged code, such as COM object interaction or calls to external unmanaged assemblies, the IDisposable pattern was introduced to give developers a way to ensure that those unmanaged resources are properly handled.  Any class that deals with unmanaged code is supposed to implement the IDisposable interface and provide a Dispose() method that explicitly releases any unmanaged resources.  Probably the most common way developers dispose of these objects is through the using statement.
[Read more…]
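For illustration, a minimal sketch of the pattern (ResourceHolder is a stand-in for any type that wraps an unmanaged resource):

```csharp
using System;

class ResourceHolder : IDisposable
{
    public bool Disposed { get; private set; }

    public void Dispose()
    {
        // Release unmanaged resources (handles, COM objects, ...) here.
        Disposed = true;
    }
}

static class UsingDemo
{
    public static void Run()
    {
        ResourceHolder holder = new ResourceHolder();
        using (holder)
        {
            // Work with the resource...
        } // Dispose() runs here, even if the block throws.
        Console.WriteLine(holder.Disposed); // prints "True"
    }
}
```

The using statement compiles down to a try/finally, so Dispose() is guaranteed to run when the block exits, whether normally or via an exception.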

Getting Started with the Managed Extensibility Framework

The Managed Extensibility Framework (MEF) from Microsoft allows developers to create plug-in based applications that can be extended by the original developer or by third parties.  The definition from MSDN is as follows (link):

It allows application developers to discover and use extensions with no configuration required. It also lets extension developers easily encapsulate code and avoid fragile hard dependencies. MEF not only allows extensions to be reused within applications, but across applications as well.

At first glance, it looks like just another IoC container, and it certainly can be used for dependency injection much the same as Ninject or other DI frameworks. But the true power comes when you realize that your dependencies can come from anywhere and be loaded and run at any time; that is the true purpose of MEF. It allows you to create libraries or pieces of functionality in isolation that perform a very specific function (and to unit test them in isolation as well), and then plug them into a much larger application. This gives you very clean separation of concerns in your code, lets developers focus on smaller projects simultaneously, and delivers a final product to the client that much faster.

[Read more…]
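To make the attributed model concrete, here is a minimal sketch using MEF’s [Export] / [Import] attributes (the IGreeter contract and class names are illustrative; a reference to the System.ComponentModel.Composition assembly is required):

```csharp
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

// The plug-in contract.
public interface IGreeter
{
    string Greet(string name);
}

// An extension: MEF discovers this via the [Export] attribute,
// no configuration file required.
[Export(typeof(IGreeter))]
public class FriendlyGreeter : IGreeter
{
    public string Greet(string name) { return "Hello, " + name + "!"; }
}

public class Host
{
    // MEF fills this in during composition.
    [Import]
    public IGreeter Greeter { get; set; }

    public void Compose()
    {
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        using (var container = new CompositionContainer(catalog))
        {
            container.ComposeParts(this); // satisfies every [Import]
        }
    }
}
```

Swap the AssemblyCatalog for a DirectoryCatalog and the exports can live in separate assemblies dropped into a plug-ins folder, which is where MEF goes beyond a plain DI container.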

Source Control Best Practices

One of the most powerful tools we have as software developers is not a coding pattern, method, framework, or even really code at all. Just as a bank keeps its most valuable assets in a safe, we as developers seek to protect our most valuable asset: the code we create.

Source control (referred to variously as source control management, version control, revision control, and probably a half dozen other terms as well) describes a system we use to store our code, manage changes to that code, and share our code with others. Our choice of a source control system is one of the single most important decisions we can make, and will radically affect how productive we are able to be.

In this article we will examine the rationale behind source control, and get a rundown of the different types of source control systems available, including examples of each still in widespread use today. After that we will discuss how to structure a solution to get the most out of our source control system, with an emphasis on .NET solutions. Lastly we will learn how to integrate a source control system with the software development lifecycle.

[Read more…]

Best Practices for Dependency Injection

Dependency injection (DI) is a design pattern meant to transform hard-coded dependencies into swappable ones, generally at run-time. DI is the primary mechanism for implementing Inversion of Control (IoC) and loading dependencies at run-time, and it is also the most effortless way to swap dependency implementations with mocks or stubs for unit testing. DI is a best practice that yields more readable and maintainable code, because all of an implementation’s dependencies are knowable at a glance, and it has the welcome side effect of producing easily testable code.
[Read more…]
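To make the pattern concrete, a minimal constructor-injection sketch (the interface and class names are illustrative):

```csharp
using System;

public interface IMessageSender
{
    void Send(string message);
}

public class EmailSender : IMessageSender
{
    public void Send(string message)
    {
        Console.WriteLine("Emailing: " + message);
    }
}

public class OrderService
{
    private readonly IMessageSender sender;

    // The dependency arrives through the constructor instead of a
    // hard-coded 'new EmailSender()', so callers choose the implementation.
    public OrderService(IMessageSender sender)
    {
        this.sender = sender;
    }

    public void PlaceOrder(string item)
    {
        sender.Send("Order placed: " + item);
    }
}

// A test can swap in a recording stub instead of the real sender.
public class RecordingSender : IMessageSender
{
    public string LastMessage { get; private set; }
    public void Send(string message) { LastMessage = message; }
}
```

Production code wires up `new OrderService(new EmailSender())`; a unit test wires up `new OrderService(new RecordingSender())` and asserts on what was sent, with no email ever leaving the building.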