Using Aurelia’s Dependency Injection Library In a Non-Aurelia App, Part 2

In my last post, we looked at Aurelia’s dependency injection library and how you might use it as a standalone library in your vanilla JavaScript application.  Now, however, I want to take a look at how you might wire it into your React application, and then finally how you might hook it into a React/Redux application.

The React App

My React app is built to do the same thing as the vanilla JavaScript application with four different components that are designed to show several injection scenarios.

One of the problems that you have to conquer when hooking Aurelia into a React application is that React is responsible for calling the constructor of your components.  If you want to get your dependencies injected, then you need to do it in a way that plays nicely with React, and this is a perfect use case for a higher-order-component.

If you are not familiar with the concept of higher order functions, then I would suggest that you read through a few blog posts on functional programming to get the hang of it, but essentially we are going to be creating a function that wraps our original function (the component) to add functionality.

Injection Higher Order Component

I am not a big fan of using React’s context to pass information down to child components so I would rather pass the injected types into my components via their props.  To do that I need to create a higher order component that is aware of the current container and the required dependencies so that it can wrap the current component and pass those in as props.

I want to try to future proof this code so that it will hopefully work with the decorator spec once it is finalized, so I am going to create a function that takes in the options and returns another function that takes in the target function (component).

export function configureInject(options, types) {
    if (!types) {
        types = [];
    }

    return function(target) { ... };
}

Inside of that second function, we need to create a React component that renders the target component but modifies the props that are passed into the target component so that we can inject the required types.

import React, { createElement } from 'react';
import hoistStatics from 'hoist-non-react-statics';

return function(target) {
    class InjectedComponent extends React.Component {
        render() {
            return createElement(target, this.props);
        }
    }

    InjectedComponent.wrappedComponent = target;
    return hoistStatics(InjectedComponent, target);
};

This higher order component simply wraps another component and renders it without modifications right now, but once we have a reference to our container and the required types, then we can mess with the props that are passed into the wrapped component to actually inject them.

One other piece that is really important any time you create a higher-order-component is hoisting the statics from the target component onto the wrapping component.  This allows any static functions or properties defined on the target component to be called from the higher order component.
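As a rough sketch of what that hoisting does (the real hoist-non-react-statics package is more thorough and also skips React-specific statics like propTypes and defaultProps):

```javascript
// copy the target's own static properties onto the wrapper, skipping the
// built-in properties that every function/class already carries
function hoistStaticsSketch(wrapper, target) {
    Object.getOwnPropertyNames(target).forEach((key) => {
        if (['length', 'name', 'prototype'].includes(key)) return;
        wrapper[key] = target[key];
    });
    return wrapper;
}

class Target {}
Target.fetchData = () => 'data from a static helper';

class Wrapper {}
hoistStaticsSketch(Wrapper, Target);

console.log(Wrapper.fetchData()); // "data from a static helper"
```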

Now that we have our component wrapped, let’s make the final changes to resolve the dependencies.

import React, { createElement } from 'react';
import hoistStatics from 'hoist-non-react-statics';
import container from './rootContainer';

export function configureInject(options, types) {
    if (!types) {
        types = [];
    }

    return function(target) {
        const targetName = target.displayName || 'Component';

        class InjectedComponent extends React.Component {
            constructor(props) {
                super(props);
                if (options.useChildContainer) {
                    const containerToUse = props.container || container;
                    this.container = containerToUse.createChild();
                } else {
                    this.container = props.container || container;
                }
            }

            render() {
                const injectedTypes = => this.container.get(type));
                const modifiedProps = Object.assign({}, this.props, { container: this.container, injections: injectedTypes });
                return createElement(target, modifiedProps);
            }
        }

        InjectedComponent.wrappedComponent = target;
        InjectedComponent.displayName = `InjectedComponent(${targetName})`;

        return hoistStatics(InjectedComponent, target);
    };
}

That code should allow you to take a normal React component and define an array of its dependencies and then have them injected as props.

Wiring up the HOC

Using this higher order component is very simple.  All you need to do is wrap your component with it before exporting, and then you can pull your dependencies off of the props.

import React from 'react';
import MyService from '../services/MyService';
import { configureInject } from '../di/containerHelpers';

class MyComponent extends React.Component {
    constructor(props) {
        super(props);
        this.service = props.injections[0];
    }
}

export default configureInject({}, [ MyService ])(MyComponent);

There are some things that you might be able to do to make this more robust or performant so that it is not resolving against the container in every render, but the general approach in this example should be solid.  One other thing that you could do would be instead of defining an array of dependencies, define key/value pairs instead so that you could inject them directly onto prop keys instead of having an injections array.  My preference is to namespace them on the injections key to prevent possible prop naming collisions, though.

Injecting React Components

If you want to be able to inject React components as dependencies to other components then you need to be careful how you do it.  If you remember, the container normally constructs the types when resolving them, but that does not work with React because React needs to be the one to construct the components.  As a result, if you want to inject components themselves then the constructor functions need to be registered as an instance so that the container will return the function untouched and let React new it up.
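A toy container (not Aurelia’s actual implementation) makes the distinction concrete: a transient registration constructs the class, while an instance registration hands back whatever was registered, untouched:

```javascript
class ToyContainer {
    constructor() { this.registrations = new Map(); }
    // a transient constructs a fresh object on every resolution
    registerTransient(key, Fn) { this.registrations.set(key, () => new Fn()); }
    // an instance registration returns the registered value as-is
    registerInstance(key, value) { this.registrations.set(key, () => value); }
    get(key) { return this.registrations.get(key)(); }
}

class ComponentA {}

const container = new ToyContainer();
container.registerTransient('constructed', ComponentA);
container.registerInstance('untouched', ComponentA);

console.log(container.get('constructed') instanceof ComponentA); // true: the container built it
console.log(container.get('untouched') === ComponentA);          // true: React is free to construct it later
```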

There is one additional thing that you have to watch out for when injecting React components.  If you are using JSX then your injected component variable needs to start with an uppercase letter or else React will treat it as an element.

const MyComponent = container.get(ComponentA);
const myComponent = container.get(ComponentA);

return (
    <div>
        <MyComponent>This works</MyComponent>
        <myComponent>This does not</myComponent>
    </div>
);


The Rest of the App

Since this is just a React app without any state management libraries, the rest of your app is most likely going to be vanilla JS (or close to it), and you would not have to do anything special to wire those pieces up with your DI container.


The React/Redux App

A React example is nice to see, but the hottest part of the React ecosystem right now is using Redux to manage your state.  While a Redux application is going to look pretty similar to a React app in some regards, specifically with the higher order components, Redux also adds a few other layers that need to be accounted for.  If you are not familiar with Redux, then I would strongly suggest reading through its docs first.


Redux Containers

A Redux container, not to be confused with Aurelia’s container, is a component that is connected to the Redux store.  When you connect a component to a store, you grab state values from your store and pass them as props to the component, and you also pass in references to action creators as dispatch props to the component.  Any of our non-connected components can still use the same injection higher-order-component from the React example, but the connected components will need a slightly different approach so that we can inject the bound action creators as well as the state selector functions.

React-Redux already supplies a higher order component called “connect” that wraps the component and modifies the props with the state and action creators.  Ideally, this approach would just encapsulate the connect logic so that we do not have to rewrite the store subscription itself.

If you truly want to fully decouple your container from your other layers, then you need to be able to inject the state selector functions, action creators, and any other arbitrary item needed by your component.

Action Creators

In addition to needing to be injected into the container’s mapDispatchToProps function, the action creators themselves may need to have services injected into them.  Fortunately, when the container resolves the action creator it can also resolve the dependencies of the action creator, so our solution will need to provide a higher order function that wraps the action creators and defines their dependencies.


Selectors

The selectors are supposed to be slim and operate directly against the state object, so we should not need to worry about injecting any dependencies into them.

My Approach

My approach involved creating wrappers for all three prop handlers that are passed to the connect higher order component: stateProps, dispatchProps, and mergeProps.  These three handlers are responsible for ensuring that the correct DI container is used for that specific connected component and resolving all of the required dependencies.


If you want to inject selector(s) into your mapStateToProps function, then you need to define a function that takes in the state, ownProps, and the injected selector(s) as arguments.  When you wrap that with the higher order function then at runtime the selectors will be injected so that they can return your state props.

const mapStateToProps = (state, ownProps, injectedUserSelector) => {
    const user = injectedUserSelector(state);
    return {
        firstName: user.firstName,
        lastName: user.lastName
    };
};

const injectedStateToProps = injectSelectors([USER_SELECTOR_ID])(mapStateToProps);


mapDispatchToProps is a little easier since it really only returns action creators.  In this case, you can just return an object of key/value pairs where we will resolve the values against the container and use those resolved action creators instead.

import { fetchUserProfile } from '../actionCreators/user';

const mapDispatchToProps = {
    fetchUserProfile: fetchUserProfile
};

Action Creators

Action creators, other than fulfilling a part of the Redux design pattern, are vanilla JavaScript functions and that means that they can have their own dependencies injected the same way that things are injected in vanilla JS apps.

import UserService from '../services/UserService';
import { inject } from 'aurelia-dependency-injection';

export function fetchUserProfile(userService) {
    return function() {
        return (dispatch) => {
            dispatch({ type: 'FETCH_USER' });
            return userService.loadUserProfile().then((data) => {
                dispatch({
                    type: 'USER_RESPONSE',
                    firstName: data.firstName,
                    lastName: data.lastName
                });
            });
        };
    };
}

inject(UserService)(fetchUserProfile);


As long as your action creators are registered as transients, then the container will construct new instances with up to date dependencies every time.  You could probably also get away with doing a singleton per container instance if you are worried about the extra overhead of using transients.


The last piece to look at is our final implementation.

import React, { createElement } from 'react';
import hoistStatics from 'hoist-non-react-statics';
import container, { retrieveContainerId, setIdOnContainer } from './rootContainer';
import { connect } from 'react-redux';

const DISPATCH = Symbol('Dispatch');
const INJECTED_SELECTORS = Symbol('InjectedSelector');
const CONTAINER = Symbol('Container');

// This will determine if a container instance was passed in or if it needs to create one.
// This will also set a unique id on the new container and register services on it.
function determineContainer(stateProps, ownProps, options, registrationFn) {
    // if a container has already been set on the state props then use that
    if (stateProps && stateProps[CONTAINER]) {
        return stateProps[CONTAINER];
    }

    let currentContainer = container;
    if (ownProps && ownProps.container) {
        currentContainer = ownProps.container;
    } else if (stateProps && stateProps.container) {
        currentContainer = stateProps.container;
    }

    if (options && options.useChildContainer) {
        const childContainer = currentContainer.createChild();
        registerServices(childContainer, registrationFn);
        return childContainer;
    }

    return currentContainer;
}

function registerServices(containerToUse, registrationFn) {
    // allow the redux container to register services on the container
    if (typeof registrationFn === 'function') {
        registrationFn(containerToUse);
    }
}

// This creates a decorator function that will allow you to inject reselect selectors into your mapStateToProps function
export function injectSelectors(types) {
    return function(target) {
        // return a function that takes the container so we can resolve the types before calling the real state to props function
        function mapStateToPropsWrapper(container) {
            const injectedSelectors = => container.get(type));
            return function injectedStateToProps(state, ownProps) {
                return target(state, ownProps, ...injectedSelectors);
            };
        }

        mapStateToPropsWrapper[INJECTED_SELECTORS] = true;
        return mapStateToPropsWrapper;
    };
}

// this is meant to be used with a Redux container
export function injectConnect(options, types, registrationFn, mapStateToProps, mapDispatchToProps) {
    return function(target) {
        // we don't want to bind in the dispatch props function, since we need to inject the action creators later
        // use this to grab a reference to the dispatch function
        function dispatchProps(dispatch, ownProps) {
            const dispatchedProps = Object.assign({}, mapDispatchToProps);
            dispatchedProps[DISPATCH] = dispatch;
            return dispatchedProps;
        }

        // create a wrapper for the state props that determines if we need to inject a container or not
        function stateToProps(state, ownProps) {
            // we need to set the container on the state so that mergeProps can use the same container instance
            const containerToUse = determineContainer(state, ownProps, options, registrationFn);
            if (typeof mapStateToProps === 'function') {
                if (mapStateToProps[INJECTED_SELECTORS]) {
                    const injectedStateToProps = mapStateToProps(containerToUse);
                    return Object.assign({}, { [CONTAINER]: containerToUse }, injectedStateToProps(state, ownProps));
                } else {
                    return Object.assign({}, { [CONTAINER]: containerToUse }, mapStateToProps(state, ownProps));
                }
            }

            return Object.assign({}, { [CONTAINER]: containerToUse }, mapStateToProps);
        }

        // handle the dispatch props and merge the state/own/dispatch props together
        function mergeProps(stateProps, dispatchProps, ownProps) {
            const containerToUse = determineContainer(stateProps, ownProps, options, registrationFn);

            const injectedTypes = => containerToUse.get(type));

            // grab the reference to the dispatch function passed along
            const dispatch = dispatchProps[DISPATCH];
            // resolve the action creators from the container and bind them to dispatch
            const boundDispatchProps = {};
            Object.keys(dispatchProps).forEach((key) => {
                if (key === DISPATCH) return;
                const actionCreator = containerToUse.get(dispatchProps[key]);
                boundDispatchProps[key] = (...args) => {
                    dispatch(actionCreator.apply(null, args));
                };
            });

            // add the injections and the container to the props passed to the component
            return Object.assign({}, ownProps, stateProps, boundDispatchProps, {
                injections: injectedTypes,
                container: containerToUse
            });
        }

        return connect(stateToProps, dispatchProps, mergeProps)(target);
    };
}

That is a fair amount of code, but the result is that the boilerplate code in your containers ends up being pretty minimal.

import { injectConnect, injectSelectors } from '../di/containerHelpers';
import UserProfileComponent from '../components/UserProfileComponent.jsx';
import ComplicatedService from '../services/ComplicatedService';
import { fetchUserProfile } from '../actionCreators/user';
import { USER_SELECTOR_ID } from '../reducers/user';

const mapStateToProps = (state, ownProps, injectedUserSelector) => {
    const user = injectedUserSelector(state);
    return {
        firstName: user.firstName,
        lastName: user.lastName
    };
};

const injectedStateToProps = injectSelectors([USER_SELECTOR_ID])(mapStateToProps);

const mapDispatchToProps = {
    fetchUserProfile: fetchUserProfile
};

export default injectConnect({}, [ ComplicatedService ], null, injectedStateToProps, mapDispatchToProps)(UserProfileComponent);

Instead of using the connect higher order component to wrap your container, now you use the injectConnect and pass it your options, required types, and the mapStateToProps and mapDispatchToProps functions.  The other major difference is that there is one extra line to register the selectors that need to be injected into the mapStateToProps function.


While all of that code works and properly injects everything into the different layers, I am not entirely sure that the added complexity of debugging your app would make it worth it in the long run.  When you get a bug report the easy part would be figuring out which component has the issue, but then you would also need to know which DI container resolved that component, and which selectors and action creators were injected.  I think after working through this that I would probably switch to a simpler solution for the Redux example and just use a single container or even just use the simple higher order component from the React example and forget about injecting selectors and action creators.  I hope though that these two articles have helped give you some ideas of how you might be able to leverage the Aurelia dependency injection library in your code.



Using Aurelia’s Dependency Injection Library In a Non-Aurelia App, Part 1

If you are anything like me then you like to try to keep your code loosely coupled, even your JavaScript code.  The ES2015 module spec helped solve a lot of issues with dependency management in JavaScript apps, but it did not really do anything to prevent having code that is tightly coupled to the specific imports. When Aurelia was originally announced, one of the things that first caught my eye was that it included a dependency injection library that was designed to be standalone so you could use it even if you were not including the rest of the Aurelia framework.  Now that Aurelia has had some time to mature, I decided to see how exactly it might look to use the dependency injection library in a variety of non-Aurelia applications.

In this two-part blog series, I will unpack a few basics about the library itself, and then show how it might be used in three different apps: a vanilla JavaScript app, a React app, and then a React app that uses Redux for its state management.

The DI Library

Before we dive into how you would integrate the dependency injection library into your application, we first need to take a look at how the library works.

If you want to use Aurelia’s dependency injection library, then I would suggest installing it from NPM with “npm install aurelia-dependency-injection”.  You’ll notice there are only two total dependencies that also get installed: aurelia-pal and aurelia-metadata. Aurelia-metadata is used to read and write metadata from your JavaScript functions, and aurelia-pal is a layer that abstracts away the differences between the browser and server so that your code will work across both environments.

Once you have installed the library, the concept is similar to the Unity dependency injection container for .NET.  You create one or more nested containers, each containing its own type registrations, and then types are resolved against a container with the ability to traverse up the parent container chain if desired.  When you register a type, you are able to specify how exactly it will be constructed, or if it is already an instance that should just be returned as-is.

Registration Types

There are three basic lifecycles that you can choose when you register something with the container, and then there are several advanced methods if you need more flexibility.  Let us consider the three lifecycle options first.

Standard Usage

When you want to register an object or an existing instance with the container, you should use the registerInstance method.  When the container resolves an instance, it will not attempt to construct a new one or manipulate it in any way.  The registered instance will simply be returned.

If you want to have your type be constructed every time that it is resolved from the container, then you want to use the registerTransient method.  When you register a transient you need to register the constructor function so that the container can create new instances every time that it is resolved.

You might have something that you want to be a singleton but it still needs to be constructed that first time.  You could either construct it yourself and register it as an instance, or register the constructor function using the registerSingleton method.  This behaves like the registerTransient function except that it will only construct the object the first time it is resolved, and then it will return that instance every other time.  When a singleton is registered, the container that it was registered with will hold on to a reference to that object to prevent it from getting garbage collected.  This ensures that you will always resolve to the exact same instance.  One thing to remember with singletons though is that they are only considered a singleton by a specific container.  Child containers or other containers are able to register their own versions with the same key, so if that happens, then you might get different instances depending on which container resolved it.  If you want a true application level singleton then you need to register it with the root container and not register that same key with any child containers.
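The per-container nature of singletons is easy to see with a toy sketch (again, not Aurelia’s actual implementation):

```javascript
class ToyContainer {
    constructor(parent) {
        this.parent = parent;
        this.factories = new Map();
        this.cache = new Map();
    }
    createChild() { return new ToyContainer(this); }
    registerSingleton(key, Fn) { this.factories.set(key, Fn); }
    get(key) {
        if (this.factories.has(key)) {
            // construct on first resolution, then hand back the cached instance
            if (!this.cache.has(key)) this.cache.set(key, new (this.factories.get(key))());
            return this.cache.get(key);
        }
        return this.parent.get(key); // fall back up the container chain
    }
}

class Service {}

const root = new ToyContainer(null);
root.registerSingleton('svc', Service);

const childA = root.createChild();        // no override: resolves the root's singleton
const childB = root.createChild();
childB.registerSingleton('svc', Service); // override: gets its own "singleton"

console.log(childA.get('svc') === root.get('svc')); // true
console.log(childB.get('svc') === root.get('svc')); // false
```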

If you attempt to resolve a type that does not exist the default behavior is to register the requested type as a singleton and then return it.  This behavior is configurable, though, so if you do not want it, then you should disable the autoregister feature.

Advanced Usage

Now that we have looked at the three basic use cases for registering with the container let us take a look at the more advanced approaches.  If you have a very specific use case that is not covered by the standard instance/transient/singleton resolvers then there are two other functions available to you to give the flexibility to achieve your goals.

If you need a custom lifetime other than singleton/instance/transient, you may register a custom handler with the container.  The handler is simply a function that takes the key, the container, and the underlying resolver and lets you return the object.

If you need a custom resolution approach, then you can register your own custom resolver with the registerResolver function.

import { Container } from 'aurelia-dependency-injection';

// create the root container using the default configuration
const rootContainer = new Container();
// makeGlobal() will take the current instance and set it on the static Container.instance property so that it is available from anywhere in your app
rootContainer.makeGlobal();

const appConstants = { name: 'DI App', author: 'Joel Peterson' };

function AjaxService() {
    return {
        makeCall: function() { ... }
    };
}

// registerInstance will always return the object that was registered
rootContainer.registerInstance('AppConstants', appConstants);

// create a nested Container
const childContainer = rootContainer.createChild();

// register a singleton with the child container
childContainer.registerSingleton('AjaxService', AjaxService);

Now that we have considered how to use the container to register individual instances or objects, let us take a look at how types are resolved.

In my opinion, the real benefit of the Aurelia container comes when you use it to automatically resolve nested dependencies of your resolved type.  However, before it can resolve your nested dependencies you have to first tell the container what those dependencies are supposed to be.

The Aurelia dependency injection library also provides some decorators that can be used to add metadata to your constructor functions that will tell the container what dependencies need to be injected when resolving the type.  If you want to leverage the decorator functions as actual decorators, then you will need to add the legacy Babel decorators plugin since Babel 6 does not support decorators at the moment.  However, my advice would be to use the decorator functions as plain functions so that you do not have to rely on experimental features.

import { inject } from 'aurelia-dependency-injection';

import MyDependency1 from './myDependency1';

import MyDependency2 from './myDependency2';

function MyConstructor(myDependency1, myDependency2) { ... }

inject(MyDependency1, MyDependency2)(MyConstructor);

export default MyConstructor;

In this example, the inject function adds metadata to the constructor function that indicates which dependencies need to be resolved and injected into the argument list for your constructor function.  It is important to keep in mind that the dependencies will be injected in the order that they were declared, so be sure to make your arguments list order align with your inject parameter list.

Some developers might decide that they do not want to have to manually register all of their types with the container and would rather have it automagically be wired up for them.  Aurelia does support this approach as well with the autoregister feature.  However, it is probably not going to be ideal to have everything be registered as singletons so Aurelia provides other decorators that you can use to explicitly declare how that type will be autoregistered.  Once you decorate your items as singletons or transients, then whenever they are resolved they will autoregister with that lifetime modifier and you can build up your app’s registrations on-demand.

import { singleton, transient } from 'aurelia-dependency-injection';

function MyType() { ... }

// by decorating this type as a singleton, if this is autoregistered it will instead be registered as a singleton instead of as an instance
singleton()(MyType); // default is to autoregister in the root container
// singleton(true)(MyType); // or you can allow it to be registered in the resolving container

// transient()(MyType); // or you can specify that this is a transient type

export default MyType;

Resolver Modifiers

Aurelia also provides a few different resolver modifiers that you can use to customize what actually gets injected into your constructor functions. For instance, maybe you do not want the container to autoregister the requested type if it does not exist and just return a null value instead, or maybe you want to return all of the registered types for a given key instead of just the highest priority type.  These resolver modifiers are used when you specify the dependencies for your given constructor function.

// this list is not exhaustive, so be sure to check out Aurelia's documentation for additional resolvers

import { Optional, Lazy, All, inject } from 'aurelia-dependency-injection';

function MyConstructor(optionalDep, lazyDep, allDeps) {

    // optional dependencies will be null if they did not exist
    if (optionalDep !== null) { ... }

    // lazy dependencies will return a function that will return the actual dependency
    this.actualLazyDep = lazyDep();

    // all will inject an array of registrations that match the given key
    allDeps.forEach((dep) => { ... });
}

inject(Optional.of(Dep1), Lazy.of(Dep2), All.of(Dep3))(MyConstructor);

export default MyConstructor;

Vanilla JavaScript

Now that we have talked about the basics of how to use Aurelia’s dependency injection library, the first type of application that I want to consider is a simple application written in vanilla JavaScript.  The full source code for this example app can be found at my Github repo, so I will just explain some of my choices and the reasons behind them.

I created a module that is responsible for returning the root container.  My personal preference is to be able to explicitly import the root container in my app instead of relying on the Container.instance static property being defined.

There are many different ways that you can create your UI with vanilla JavaScript, and I opted to create a simple component structure where each component has a constructor, an init function, and a render function.  I decided to keep the init phase separate from the constructor so that the container can use the constructor solely for passing in dependencies.  There is a way to supply additional parameters to the constructor, but I decided it would be simpler to just have the init functions.  However you end up writing your UI layer, I would advise that you do it in such a way that your constructor parameters are only the required dependencies; otherwise, you will have to do a more complicated container setup.

I also decided to allow for my components to track a reference to the specific container instance that resolved them.  This allows components to create a child container off of the specific container that resolved the current component and build a container tree.

However, one thing that I did discover with the deeply nested container hierarchies is that child dependencies resolve at the container that resolved the parent, and the resolution of child dependencies does not start back down at the original container.  For instance, consider this example.

// component A depends on component B


const childContainer = rootContainer.createChild();
const componentA = childContainer.get(ComponentA);

In this example, I would expect that since ComponentA does not exist in the child container that it would fall back to the root container to resolve.  However, when it sees that ComponentA depends on ComponentB and attempts to resolve ComponentB, I would expect it to start from childContainer since that is where the initial resolution happened.  Based on my experience, however, it seems like it starts at rootContainer since that is the container that actually resolved ComponentA.  This can cause issues if you attempt to override a previously registered item in a child container and that is a dependency of something that is only defined in the parent container.  In my example app, I ran across this and ended up re-registering the dependents of my overridden module in my child container so that the resolution would occur properly.
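A toy sketch of the behavior (not Aurelia’s implementation) shows why the child-container override gets skipped: once resolution falls back to the parent, the parent also resolves the nested dependencies:

```javascript
class ToyContainer {
    constructor(parent) { this.parent = parent; this.regs = new Map(); }
    createChild() { return new ToyContainer(this); }
    register(key, factory) { this.regs.set(key, factory); }
    get(key) {
        // nested dependencies resolve against the container that OWNS the
        // registration, not against the container the caller started from
        if (this.regs.has(key)) return this.regs.get(key)(this);
        return this.parent.get(key);
    }
}

const root = new ToyContainer(null);
root.register('B', () => 'root B');
root.register('A', (c) => `A built with ${c.get('B')}`);

const child = root.createChild();
child.register('B', () => 'child B'); // attempted override

console.log(child.get('A')); // "A built with root B" -- the override never applies
```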


In this article, we discussed some of the basic functionality of Aurelia’s dependency injection library and how you might incorporate it into a vanilla JavaScript application.  In Part 2, we will look at how you might also wire up dependency injection into a plain React application as well as a React application that uses Redux for state management.

Writing Node Applications as a .NET Developer – My experience in Developing in Node vs .NET/C# (Part 3)

While the previous posts described what one needs to know prior to starting a Node project, what follows are some of the experiences I had while writing a Node application.

How do I structure my project?

The main problem I had when developing my Node application was figuring out a sound application structure. As mentioned earlier, there is a significant difference between Node and C# when it comes to declaring file dependencies. C#’s using statement is more of a convenience feature for specifying namespaces and its compiler does the dirty work of determining what files and DLLs are required to compile a program. Node’s CommonJS module system explicitly imports a file or dependency into a dependent file at runtime. In C#, I generally inject a class’s dependencies via constructor injection, delegating object instantiation and resolution to an Inversion of Control container. In Javascript, however, I tend to write in a more functional manner where I write and pass around functions instead of stateful objects.

This difference in styles and structure had me question my design choices and made me decide between the following:

  • Passing a module’s dependencies in as function parameters OR
  • “Require-ing” the dependency modules via Node’s module system

Right or wrong, I opted for the latter.  Doing so allowed my module to encapsulate its dependencies and decoupled its implementation from its dependent modules. In addition, for unit testing purposes, I was able to mock and stub any modules that I imported via “require” statements using the library “rewire”.
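As a sketch of the two wiring styles (the module and function names here are hypothetical, and the require/rewire lines are shown as comments since they span multiple files):

```javascript
// Style 1: the dependency comes in as a function parameter
// (the closest analogue to C# constructor injection).
function createOrderService(repository) {
  return {
    placeOrder(order) { return repository.save(order); }
  };
}

// Style 2 (the one I chose): the module pulls in its own dependency.
// In a real project this would live in its own file:
//   const repository = require('./orderRepository');
//   module.exports = { placeOrder: (order) => repository.save(order) };
// and unit tests would swap the dependency out with rewire:
//   const service = rewire('./orderService');
//   service.__set__('repository', { save: () => 'stubbed' });

// Style 1 in action with a fake dependency:
const fakeRepository = { save: (order) => ({ ...order, saved: true }) };
const service = createOrderService(fakeRepository);
console.log(service.placeOrder({ id: 1 })); // { id: 1, saved: true }
```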

After feeling as though this was the “wrong” way of designing my application, I came to realize the following:

The CommonJS module system is a type of IoC container

In fact, when “require-ing” a module, the result of that module is cached and returned for subsequent require calls to that same file path within an application context.  After realizing this, my anxiety around application structure melted away as I realized I could use the same patterns I would use in a C# application.

How do I write unit tests?

Being the disciplined developer that I am, I rely heavily on unit tests as a safety net against regressions as well as to implement new features through Test Driven Development. In C# (with Visual Studio’s help), the testing story is straightforward, as one only needs to create a test project, write tests and use the IDE’s built-in test runner to run them.  If using NUnit or Visual Studio’s Test Tools, tests and test fixtures are denoted via attributes that the test runner picks up while running tests.  The developer experience is quite frictionless, as testing seems like a first-class citizen in the ecosystem and within Visual Studio.

Setting up a testing environment in a Node project is a different story. The first decision one must make is which test framework to utilize, the most popular being Jasmine and Mocha.  Both require a configuration file that details the following:

  • Which files (via a file pattern) should (and shouldn’t) be considered tests and therefore processed by the test runner
  • What reporter to use to output test results and detail any failed tests or exceptions
  • Any custom configuration related to transpilation or code processing that will need to be performed prior to running your tests
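As a rough example, a minimal Jasmine configuration (conventionally spec/support/jasmine.json; the values here are illustrative) covers the first two items like so:

```json
{
  "spec_dir": "spec",
  "spec_files": ["**/*[sS]pec.js"],
  "helpers": ["helpers/**/*.js"],
  "stopSpecOnExpectationFailure": false,
  "random": false
}
```

The third item hangs off the "helpers" entry: a helper file whose only line is require('babel-register') is enough to have spec and source files transpiled in memory before Jasmine runs them.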

While the first two items are fairly straightforward, the third can be a major point of frustration, especially for those new to Javascript build tools and transpilers. The biggest problems I faced with Javascript testing involved getting my files to transpile before being run through the test runner.

My first approach was to use Webpack (since I was already using it in development and production for bundling and transpilation) to create a bundle of my test files to run through the test runner. This required a separate webpack configuration (to indicate which test files needed to be bundled) along with pointing my Jasmine config file at the generated bundle. While this did work, it was painfully slow, as a bundle had to be created on every test run, and it felt like a hack, as I’d need to clean up the generated bundle file after each run. My eventual solution was to use babel-register as a Jasmine helper, letting it run all of my files through the transpiler.  This worked well (albeit slowly) and seemed like the cleaner solution, as babel-register acts as a transpilation pipeline, transpiling your code in memory and handing it to Jasmine for testing.

Many of the issues I faced in setting up a test harness for my Node application were related to the pain points inherent in transpilation. If I hadn’t been using advanced Javascript language features, this pain would have been eased slightly. Even so, it illustrates the extra decisions one must face when developing a Node application compared to developing a .NET application.

Overall experience compared to C#

Aside from the pain points and confusion described in the preceding sections, my overall experience in developing a Node application was delightful.  Much of this is due to my love for Javascript as a language, but the standard Node library, as well as the immense number of third-party libraries available via npm, allowed me to easily accomplish whatever programming goal I had.  In addition, when I was stuck using a certain library or standard library module, I had plenty of resources available to troubleshoot the issue, whether they were GitHub issues or Stack Overflow articles.  As a last resort, if Googling my problem didn’t result in a resolution, I was able to look at the actual source code of my application’s dependencies, which was available in the node_modules folder.

After clearing these initial hurdles, the overall development experience in Node was not that much different from developing an application in C#.  The major difference between the two platforms is that the standard tooling for .NET applications is arguably the best available to the community.  Visual Studio does so much for the developer in all facets of application design, which is great for productivity, but it abstracts away so much of what your application and code are doing under the hood that it can be an impediment to growing as a programmer.  Although at first it seemed like a step backwards, having the command line as a tool in my Node development process exposed me to the steps required to build the application, giving me better insight into the process.


At the end of the day, both .NET and Node are very capable frameworks that will allow you to create nearly any type of application that you desire. As with many things in technology, deciding between the two generally comes down to your project’s resources and timeline, as well as your team’s familiarity and experience with a given framework. Both frameworks have pros and cons when compared against each other, but one can’t go wrong in choosing one over the other.

From a personal perspective, I thoroughly enjoy developing Node applications for a few reasons. The first is that Javascript is my favorite language, and the more Javascript code I can write, the happier I am. Writing Node applications is also a great way to improve your skills in the language compared to Javascript development for the web, as you can focus solely on your application logic without being slowed down by issues related to the DOM or browser quirks. Finally, I find Node to be a great tool for rapid development and prototypes.  The runtime feels very lightweight, and if you have a proper build/tool chain in place, the developer feedback loop can be very tight and gratifying.

Overall, you can’t go wrong between the two frameworks but if you want to get out of your comfort zone and fully embrace the full-stack javascript mindset, I strongly recommend giving Node development a shot!

Writing Node Applications as a .NET Developer – Getting Ready to Develop (Part 2)

In the previous blog post, I provided a general overview of some of the key differences between the two frameworks. With this out of the way, we’re ready to get started writing an application. However, there are some key decisions to make regarding what development tools to use, as well as getting the execution environment set up.

Selecting an IDE/Text Editor

Before I could write a line of code, I needed to decide on an IDE/Text Editor that I wanted to use to write my application. As a C# developer, I was spoiled with the number of features that Visual Studio offered a developer that allowed for a frictionless and productive developing experience. I wanted to have this same experience when writing a Node application so before deciding on an IDE, I had a few prerequisites:

  • Debugging capabilities built into the IDE
  • Unobtrusive and generally correct autocomplete
  • File navigation via symbols (CTRL + click in Visual Studio with Resharper extension)
  • Refactoring utilities that I could trust; Find/Replace wasn’t good enough

While I love Visual Studio, I find that its JavaScript editor is more annoying than helpful.  Its autocomplete often gets in the way of my typing, and it will automatically capitalize my symbols without prompting.  Add to that the fact that I was already working with a new framework and spreading my wings, and I wanted to expose myself to another tool for learning’s sake.

Given my preferences above, I decided that JetBrain’s Webstorm would fit my needs:

  • Webstorm offers a Node debugging experience that rivals VS’s. One can set breakpoints, view locals and evaluate code when a breakpoint is hit.
  • The IDE’s autocomplete features (although not perfect) offer not only the correct symbols I’m targeting but often times would describe the signature of the function I was intending to call.
  • By indexing your project files on application start, Webstorm allows for symbol navigation via CTRL + click.  I was even able to navigate into node_modules files.
  • When refactoring code, Webstorm will search filenames, symbols and text comments, providing a safe way of refactoring code without (too many) headaches.

While not at the same level as Visual Studio’s C# development experience, Webstorm offers the user the next best thing, allowing for an environment that offers a familiar developer experience.  Although there are other (free) options available (Sublime Text, Atom, Visual Studio Code) I found that with these editors, I had to do more work to set up an environment that would allow me to develop at a productive pace.

Embracing the Command Line

Due to the power of Visual Studio as a tool and its ability to abstract away mundane operations, your average .NET developer tends to be a little wary of using the command line to perform common tasks.  Actions such as installing dependencies, running build commands and generating project templates are handled quite well in Visual Studio through wizards and search GUIs, preventing the user from having to know a myriad of tool-specific commands.

This is not the case when working with the Node ecosystem and its contemporary toolset.  Need to install a dependency via npm? A quick `npm i -S my-dependency` is required.  Want to run a Yeoman generator to scaffold out an Express application?  You only need to download the generator (if you don’t have it) using the same npm tool, run it with `yo my-awesome-generator` and walk through the prompts.  How about a build command?  Assuming you have an npm script set up, typing `npm run build:prod` will do (even though this is just an alias for another command line command that you will have to write).  In Node development, working with that spooky prompt is unavoidable.

While it might feel tedious and like a step backwards, using the command line as a development tool has many benefits. You generally see the actions that a command is performing, which gives you better insight into what is actually happening when you run `npm run build:prod`.  By using various tools via the command line, you get a better grasp of which tool is meant for which purpose.  Compare this to Visual Studio, where at first blush one equates NuGet, compiling via the F5 key and project templates with Visual Studio as a whole, not grasping that each of these actions invokes separate toolsets and dependencies.  Having better insight into your toolchain can help in troubleshooting when a problem arises.

Running Your Code

The final step in writing a Node application is preparing your environment to run your Node code.  The only thing you will need to run your application is the Node runtime and the Node Package Manager (included in the same download and installed alongside Node).

Node.exe is the actual executable that will run your Node application.  Since Javascript is an interpreted language, the code you write is passed to this executable, which parses and runs your application.  There is no explicit compilation step that a user must perform prior to running a Node application.  Furthermore, unlike applications written in a .NET language, Node programs do not depend on a system-wide framework being present. The only requirement for your code to run is to have node.exe on the system path. This makes the deployment story of a Node application simpler and allows for cross-platform deployment that is not yet readily available to .NET applications.

The neat thing about Node is that if you type in the `node` command without any file parameters, you get access to the Node REPL right in your console.  While this is great for experimentation or running/testing scripts, it’s a little lacking and I’ve only used it for simple operations and language feature tests.
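For quick experiments without an interactive session, the runtime also accepts one-off expressions from the command line (a sketch; the flags below are standard `node` options):

```shell
# Run `node` with no arguments to enter the REPL (type .exit to leave).
# For one-off expressions, -e evaluates and -p evaluates-and-prints:
node -e "console.log([1, 2, 3].map(n => n * n))"
node -p "process.version"
```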

While node.exe is what runs your application, npm is what will drive your application’s development.  Although the “pm” stands for Package Manager, it’s more of a do-all utility that can run predefined scripts, specify project dependencies and provide a manifest file for your project if you publish it as an npm module.
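A hypothetical package.json shows all three roles at once: scripts to run, dependencies to install, and the manifest metadata npm would publish (every name and version here is illustrative):

```json
{
  "name": "my-node-app",
  "version": "1.0.0",
  "scripts": {
    "build:prod": "webpack --config webpack.prod.config.js",
    "test": "jasmine"
  },
  "dependencies": {
    "express": "^4.14.0"
  },
  "devDependencies": {
    "jasmine": "^2.5.0",
    "webpack": "^1.13.0"
  }
}
```

With this file in place, `npm install` pulls down the dependencies and `npm run build:prod` or `npm test` invokes the aliased commands.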


Oftentimes with new frameworks and technologies, I have experienced frustration in getting my environment set up so that I could write code that runs at the click of a button. However with Node, the process is very straightforward, simply requiring one to install the runtime and package manager which are both available as an MSI that can be found on Node’s website.  From there, you can run your Node program by accessing the command line and pointing to your entry file.  In all honesty, the hardest part was deciding on an IDE that offered some of the features I became accustomed to when working in Visual Studio.

In the next and final post in this series, I will provide my overall experience with writing a Node application, detailing some questions I had surrounding application structure and testing, as well as giving a summary on my feelings on the runtime.

Writing Node Applications as a .NET Developer

As a .NET developer, creating modern web apps using Node on the backend can seem daunting.  The amount of tooling and setup required before you can write a “modern” application has led the development community to display “Javascript Fatigue”: a general wariness related to the exploding amount of tooling, libraries, frameworks and best practices introduced on a seemingly daily basis.  Contrast this with building an app in .NET using Visual Studio, where the developer simply selects a project template to build off of and is ready to go.

A Dive into SystemJS – Production Considerations

Previously we have looked at the basic configuration of SystemJS and what happens when you attempt to load modules. What we have covered so far is good enough for a development system, but things are different when you try to push your code to production and performance is much more important. It might be fine for a development system to make XHR requests for each individual script file, but that is not ideal for most production systems. This article will attempt to evaluate the production setup that is needed to attain good performance.

A Dive into SystemJS – Loading and Translating

In the last article we took a look at some of the basic configuration options of SystemJS and also the high level workflow of what happens when you attempt to import a module. This article is going to walk through what happens from when a script has been fetched by the browser until a module instance is returned, as well as provide some information on creating plugins for SystemJS.

A Dive into SystemJS – Part 1

The ES2015 module syntax for JavaScript was a much needed addition to the language. For years the JavaScript community has tried to make up for the lack of a standard module format with several competing formats: AMD, CommonJS, and then UMD, which tried to wrap the other two. The introduction of an official module syntax, details of which can be found at the MDN imports documentation page, means that there is going to be a new module loader required to load the new format. Unfortunately the ES2015 specification ended up not including a module loader, but there is a proposal with the WHATWG team to add a module loader to the browser. SystemJS is a module loader built on top of the original ES6 module loader polyfill, and it is soon to be the polyfill for the WHATWG module loader. This series of articles is going to take a deep dive into SystemJS and see what it has to offer.

Firebase – A Real Time Document Database

There are a plethora of document databases to choose from nowadays. The entire nature of storing data is changing, so how we work with data needs to change as well. Single page applications on the web need to be responsive, not just in layout but in communication as well. Users have come to expect a higher quality of data representation, and the landscape is quickly evolving.


Working with the HTML Selection API

The HTML Selection API gives developers the ability to access highlighted text within the browser and perform DOM and text manipulation on the selected text. These useful features are available now in any modern browser, as well as legacy browsers back to IE9. While there are more complex things that can be done with this API, this blog article will hopefully illustrate some possible uses of the API and give you an idea of how to start using some of these features.