GraphQL Server with Hot Chocolate

In my last blog post (1), we implemented a REST API with DDD. This blog post will detail how to add a GraphQL endpoint to this API. If you have never implemented GraphQL, or have never done it with Hot Chocolate, this blog post is for you. You can find the completed source code on GitHub (2).

I chose Hot Chocolate because GraphQL .NET, the main alternative, has much less support and requires explicitly defining the schema, whereas Hot Chocolate can infer the schema from your types while still letting you define it explicitly if you want to take the time. In addition, Hot Chocolate has a better parser (it passes Facebook's smoke tests, while GraphQL .NET's parser does not), better performance, better data loaders (I will cover what a Data Loader is below), and support for schema stitching (combining multiple GraphQL endpoints into a single schema), among other things (3). This blog post starts where my last one, Domain-Driven Design, leaves off: a REST API implemented with DDD.

Before we get started, there are a couple things to note. First, GraphQL is a graph query language. The definition of “graph” it is using is “a collection of vertices and edges that join pairs of vertices” (4). In this definition, our vertices are entities (e.g. a record in a DB), and our edges are the relationships between entities.

Next, GraphQL always returns a 200 response, regardless of errors. On the server side, which this blog post covers, this primarily means you will add errors to the response in your error handlers, rather than setting the response code in a global exception handler. Error details are returned via an error field, which can be defined on the root of the response (such as for a network error), inside the data field with details on a failed top-level query, or on an individual field within a response.
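For example, a response where a single field failed to resolve might look like the following (the shape comes from the GraphQL spec; the values are illustrative):

{
  "errors": [
    {
      "message": "Could not resolve publisher",
      "path": ["allBooks", 0, "publisher"]
    }
  ],
  "data": {
    "allBooks": [
      {
        "name": "C# in Depth: 4th Edition",
        "publisher": null
      }
    ]
  }
}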

The reason you would use GraphQL over REST or another protocol is the ability to specify exactly what data you need for an entire page (or section of a page). With our REST implementation in my last post, we had to load a user and the details of their checked out books sequentially, because we only returned the checked out book ids on the user object. With GraphQL, we can define a relationship there so the user can pull all the details in a single request; unlike REST, adding these fields does not add any overhead, because the caller specifies exactly which fields they want returned in the query. Additionally, we can perform multiple root queries with a single request simply by including them in the query; REST forces us to make one request per query. Combining all of these, we can significantly reduce the network requests our apps and websites make and completely eliminate unused data transfer, which can significantly improve performance on slow networks.

Setting up Hot Chocolate

First, we need to add the Hot Chocolate packages; I am using version 12.0.1 of the following two packages in this blog post:

  • HotChocolate
  • HotChocolate.AspNetCore

The first step to migrating a REST API to a GraphQL server with Hot Chocolate is creating a query type.

public class Query {}

Next, we will register the GraphQL server with our services in Startup.cs.

services
     .AddGraphQLServer()
     .AddQueryType<Query>();

And use the middleware.

app.UseEndpoints(endpoints => {
     endpoints.MapGraphQL();
     endpoints.MapBananaCakePop();
});

Now that we have everything registered, we can start adding our query endpoints (the GET endpoints in REST). For this first step, we will simply do the same implementation as our old REST controllers.

public class Query
{
    private readonly FeatureFlags featureFlags;
    private readonly IBookApplication bookApplication;

    public Query(IOptions<FeatureFlags> featureFlags, IBookApplication bookApplication)
    {
        this.featureFlags = featureFlags.Value;
        this.bookApplication = bookApplication;
    }

    public async Task<List<Book>> GetAllBooks()
    {
        if (!featureFlags.EnableBook)
        {
            throw new NotImplementedException("Query not implemented");
        }

        var books = await bookApplication.GetAll();
        return books;
    }
}

Now we can run this query at the /GraphQL endpoint.

query {
    allBooks {
        id
        isbn
        name 
        publishedOn   
        authors { name }
        publisher { name }
    }
}

Banana Cake Pop is Hot Chocolate’s in-browser query tool; it replaces the original GraphiQL query viewer/editor most GraphQL servers provide and is a Postman-style tool for building GraphQL queries and mutations. Now we can go to /GraphQL, see Banana Cake Pop, and run the query above. Adding the rest of the GET endpoints is equally easy, and adding the PUT, POST, and DELETE endpoints is as simple as adding a Mutation class and registering it with .AddMutationType<Mutation>() after .AddQueryType<Query>().
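As a rough sketch of what a mutation type might look like (the IUserApplication service and its CreateUser method here are hypothetical stand-ins for whatever your application layer exposes):

public class Mutation
{
    private readonly IUserApplication userApplication;

    public Mutation(IUserApplication userApplication)
    {
        this.userApplication = userApplication;
    }

    // Exposed as the `createUser` mutation field.
    public async Task<User> CreateUser(string name)
    {
        return await userApplication.CreateUser(name);
    }
}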

Before you jump in and start making these changes to your app, note that you can only register one Query and one Mutation type. This does not mean you must add all your query and mutation types to the same file—there could be hundreds of these for a large API, which would get very messy. To split these into multiple files, just use the type extension feature.

public class Query { }

[ExtendObjectType(typeof(Query))]
public class QueryBookResolvers
{
    private readonly FeatureFlags featureFlags;
    private readonly IBookApplication bookApplication;

    public QueryBookResolvers(IOptions<FeatureFlags> featureFlags, IBookApplication bookApplication)
    {
        this.featureFlags = featureFlags.Value;
        this.bookApplication = bookApplication;
    }

    public async Task<List<Book>> GetAllBooks()
    {
        if (!featureFlags.EnableBook)
        {
            throw new NotImplementedException("Query not implemented");
        }

        var books = await bookApplication.GetAll();
        return books;
    }
}

Then register each type extension, and it will all work correctly.

services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .AddTypeExtension<QueryBookResolvers>()
    .AddTypeExtension<QueryUserResolvers>();

Error Handling

Now that we have our operations set up, we need an error filter. Hot Chocolate does not expose exception details to the client when running in production mode, for the same reasons most REST API servers automatically block them in production. However, this also prevents us from giving useful errors to our users when they make a mistake. To resolve this, we will implement the IErrorFilter interface, check whether the error was caused by a validation exception, and if so, expose our error message and set the code to Validation to give the message context.

public class ValidationErrorFilter : IErrorFilter
{
    public IError OnError(IError error)
    {
        if (error.Exception is not ValidationException validationException)
        {
            return error;
        }

        var errors = new List<IError>();
        foreach (var err in validationException.Errors)
        {
            var newErr = ErrorBuilder.New()
                .SetMessage(err.ErrorMessage)
                .SetCode("Validation")
                .Build();

            errors.Add(newErr);
        }

        return new AggregateError(errors);
    }
}

Finally, we need to register our filter with .AddErrorFilter<ValidationErrorFilter>(). Now, when we throw a ValidationException from the validations inside our entities or applications, our queries will return a useful response instead of the generic "Unexpected Execution Error" message. And because we return an AggregateError, our filter splits each validation error into its own error in the response, instead of producing one long message with multiple error states or showing only the first message.
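The full registration chain now looks something like this (a sketch, assuming the mutation type from earlier):

services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .AddMutationType<Mutation>()
    // plus the type extensions registered earlier
    .AddErrorFilter<ValidationErrorFilter>();

With the filter in place, a failed createUser mutation produces a response like the following.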

{
   "errors": [
    {
      "message": "User must have a name",
      "extensions": {
        "code": "Validation"
      }
    }
  ],
  "data": {
    "createUser": null
  }
}

At this point, you will want to add an error filter for NotImplementedException, or change the exception thrown in our operations to a QueryException, which Hot Chocolate handles automatically. For further validation, there is a library called Fairybread (5) you can use to automatically validate incoming input objects, but I did not think it was ready for production use yet; it only reported the first error found, and it returned too much information to the caller, including the fact that we were using Fairybread and FluentValidation for validation. If these are not issues for your use case, feel free to try it out.
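An error filter for the NotImplementedException case might look something like this (a sketch; it relies on the WithMessage/WithCode helpers Hot Chocolate exposes on IError, so verify the exact names against your version):

public class NotImplementedErrorFilter : IErrorFilter
{
    public IError OnError(IError error)
    {
        // Expose a stable message and code instead of the default masked error.
        return error.Exception is NotImplementedException
            ? error.WithMessage("Query not implemented").WithCode("NotImplemented")
            : error;
    }
}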

Creating a Graph

Part of the reason to use GraphQL is so clients can pull all related data in a single query. Our user query currently only returns the book id, but as a GraphQL implementation, our users expect to be able to retrieve book information through the relationship between the user and a checked-out book. Without this ability, we are not really defining a graph of vertices joined by edges so much as a simple list of vertices. To fix this, we will write a resolver that overrides that field and returns an object containing the checked out date, the return date, and the fields from the main book type.

[ExtendObjectType(typeof(User))]
public class UserExtensions
{
    private readonly IBookApplication bookApplication;

    public UserExtensions(IBookApplication bookApplication)
    {
        this.bookApplication = bookApplication;
    }

    [BindMember(nameof(User.Books))]
    public async Task<IReadOnlyList<CheckedOutBookDetails>> GetBooks([Parent] User user)
    {
        var books = new List<CheckedOutBookDetails>();
        foreach (var book in user.Books)
        {
            var bookDetails = await bookApplication.GetBook(book.BookId);
            books.Add(new CheckedOutBookDetails(bookDetails.Id, bookDetails.Isbn, bookDetails.Name, bookDetails.PublishedOn, bookDetails.Publisher, bookDetails.Authors, book.CheckedOutOn, book.ReturnBy));
        }

        return books;
    }
}

Once we register this new resolver in our startup file with .AddTypeExtension<UserExtensions>(), we have a fully functional GraphQL server implementation. We can also use extensions to add entirely new fields (as such, I could have defined many extension functions on the CheckedOutBook type instead of creating a CheckedOutBookDetails type), and to prevent fields from being resolved (6). However, there is still a bit more work we can do. Look at this query and see if you can identify the issues:

query {
  user(id: "e796b1ed-dce1-4302-9d74-c5a543f8cae6") {
    id
    name
    books {
      id name
    }
  }

  u1: user(id: "e2087ec5-8caf-4969-91ce-5c39fc378afc") {
    id
    name
    books {
      id name
    }
  }
}

First, we have two root objects hitting the same table; if we had those ids before we started resolving the data, we could combine them into a single database query. If both queries used the same user id, we could resolve the data once and reuse it for the second query as well. In this case, it is not a huge concern, but a maliciously constructed query could pull a very large amount of data repeatedly, which could lock other requests out of our database while it resolves. Second, we have an N+1 problem in each root query: we resolve the user (1 query), then return to the database once for each book the user has checked out (N queries). We can fix both of these issues, and reduce our database requests to two no matter how much data the user needs, with a Data Loader.

Data Loaders

Data loaders take a list of keys and resolve all of them at the same time; since GraphQL knows which user ids we are querying for immediately, it can combine the two user queries into a single query. Once it gets the users back, it can pull the book ids from the users and resolve them in a single database query as well. Here is an example of our Books field with the data loader:

[BindMember(nameof(User.Books))]
public async Task<IReadOnlyList<CheckedOutBookDetails>> GetBooks([Parent] User user, IResolverContext context)
{
    var books = await context.BatchDataLoader<Guid, Book>(
        async (keys, ct) =>
        {
            var books = await bookApplication.GetBooks(keys);
            return books.ToDictionary(x => x.Id);
        })
    .LoadAsync(user.Books.Select(s => s.BookId).ToList());

    return books.Select(s => {
        var book = user.Books.Single(t => t.BookId == s.Id);
        return new CheckedOutBookDetails(s.Id, s.Isbn, s.Name, s.PublishedOn, s.Publisher, s.Authors, book.CheckedOutOn, book.ReturnBy);
    }).ToList();
}

Note that we inject the resolver context into this method; Hot Chocolate will inject it into any resolver method for us, so we do not need to configure it anywhere. Note also that we still fully control our database access. If querying too many ids at once were an issue, we could build our response dictionary from batches of N items per database query rather than loading data for all ids at once. To support this, I added the following implementations to our library context and book application to query multiple books by id in one operation. See if you can use these as a reference to change the GetUser method on our query resolver to use a Data Loader.

public async Task<IReadOnlyList<Book>> GetBooksAsync(IReadOnlyList<Guid> ids)
{
    return await libraryContext.Book.AsQueryable()
        .Where(f => ids.Contains(f.Id))
        .ToListAsync();
}

public async Task<IReadOnlyList<ApiContracts.Book>> GetBooks(IReadOnlyList<Guid> ids)
{
    var books = await libraryRepository.GetBooksAsync(ids);
    return books.Select(mapper.Map<Book, ApiContracts.Book>).ToList();
}
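For comparison, here is one possible shape for that GetUser change (a sketch; userApplication and its GetUsers batch method are assumed counterparts of the book methods above):

public async Task<User> GetUser(Guid id, IResolverContext context)
{
    if (!featureFlags.EnableUser)
    {
        throw new NotImplementedException("Query not implemented");
    }

    // Batches every user id requested in this operation into one database query.
    return await context.BatchDataLoader<Guid, User>(
        async (keys, ct) =>
        {
            var users = await userApplication.GetUsers(keys);
            return users.ToDictionary(x => x.Id);
        })
    .LoadAsync(id);
}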

Testing

Now that our server is done, we need to write tests for it. If you are not using the data loaders, you can simply write unit tests around your resolver methods. Using the data loaders with mocks quickly becomes a nuisance due to the sheer amount of mocking needed, but writing integration tests is still easy. First, we set up our service collection and build a service provider; we can tie into our Startup.ConfigureServices method to handle most of the work with this. Now we just call ExecuteRequestAsync with our query as a string parameter; we can then assert against the error object on the result or call ToJson() on it and either assert against the JSON directly or deserialize the result into an object to test against.

[Fact]
public async Task ReturnsUser()
{
    var options = new Dictionary<string, string>
    {
        ["FeatureFlags:EnableUser"] = bool.TrueString,
        ["ConnectionStrings:Database"] = "mongodb://localhost"
    };

    var config = new ConfigurationBuilder().AddInMemoryCollection(options);

    var services = new ServiceCollection();
    services.AddSingleton<IConfiguration>(config.Build());
    new Startup(config.Build()).ConfigureServices(services);
    var serviceProvider = services.BuildServiceProvider();

    var builder = await serviceProvider.ExecuteRequestAsync(
@"query {
  user(id: ""e796b1ed-dce1-4302-9d74-c5a543f8cae6"") {
    id name books { id name }
  }
}");

    var result = builder.ToJson();
    var expected = @"{
  ""data"": {
    ""user"": {
      ""id"": ""e796b1ed-dce1-4302-9d74-c5a543f8cae6"",
      ""name"": ""Abraham Hosch"",
      ""books"": [
        {
          ""id"": ""30558e66-f0df-4dcd-aa96-1b3d329f1b86"",
          ""name"": ""C# in Depth: 4th Edition""
        },
        {
          ""id"": ""0a08e8df-b71e-4300-9683-bd4a1b7bcaf1"",
          ""name"": ""Dependency Injection Principles, Practices, and Patterns""
        }
      ]
    }
  }
}";

    Assert.Equal(expected, result);
}

Conclusion

Now we have a functioning GraphQL server complete with error handling and probably better performance under load than our original REST API thanks to the data loaders and fewer requests to construct a full graph of data. There are still a few more things to consider before we are complete, however, primarily around security.

GraphQL has some attack vectors REST APIs avoid, including:

  • Exposing the entire query and response structure to all clients
  • Potentially deeply nested queries that take a long time to resolve, such as an arbitrarily deep friends of friends relationship
  • Fields that are performance intensive to resolve which can be queried multiple times in a single request

For this API, I do not mind that anyone can read our type schema, but if I did, I could disable introspection for all unauthorized users (7) the same way I could require authorization for specific operations and/or fields. The simplest way to resolve the other two is to set a timeout to prevent slow operations from killing performance; Hot Chocolate defaults to a 30-second timeout. If necessary, we can also define complexity values (8) for operations and block execution of requests whose computed complexity is higher than our assigned limit. We could, for example, prevent any queries nested 5 levels or deeper, and disallow multiple root queries in a request that runs an expensive operation by setting the complexity of those operations at or above the maximum allowed value.
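As a sketch of what those settings can look like in Hot Chocolate 12 (option names taken from the operation-complexity docs (8); verify them against your version):

services
    .AddGraphQLServer()
    .ModifyRequestOptions(o =>
    {
        // Fail requests that run longer than this.
        o.ExecutionTimeout = TimeSpan.FromSeconds(30);

        // Reject requests whose computed complexity exceeds the limit.
        o.Complexity.Enable = true;
        o.Complexity.MaximumAllowed = 1500;
    });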

References

  1. Previous blog post: https://superdevelopment.com/2021/09/24/domain-driven-design/
  2. Source code: https://github.com/Hosch250/Library-DDD/tree/hotChocolateBlog
  3. Discussion of Hot Chocolate vs GraphQL .NET: https://github.com/ChilliCream/hotchocolate/issues/392#issuecomment-571733745
  4. Definition of a Graph: https://www.merriam-webster.com/dictionary/graph
  5. Fairybread: https://github.com/benmccallum/fairybread
  6. Extending a schema: https://chillicream.com/docs/hotchocolate/defining-a-schema/extending-types
  7. Introspection: https://chillicream.com/docs/hotchocolate/server/introspection
  8. Operation Complexity: https://chillicream.com/docs/hotchocolate/security/operation-complexity

Domain-Driven Design

You’ve decided to use Domain-Driven Design (DDD), but aren’t sure how to implement it. Maybe you’ve seen it go wrong before and aren’t sure how to prevent that from happening again. Maybe you’ve never done it and aren’t sure where to start. This post will show you how to implement a DDD domain layer, including aggregates, value objects, domain commands, and validation, and how to avoid some of the pitfalls I’ve seen. It will not discuss the why of DDD vs other competing patterns; nor, for the sake of brevity, will it discuss the infrastructure or application layers of a DDD app. To demonstrate these concepts in action, I have built a backend for a library using DDD; the most relevant sections will be shown in the post, and the full version can be found on GitHub. The tech stack I used is an ASP.NET Core API written in C# backed by MongoDB.

The Aggregate Root

The aggregate root is the base data entity of a data model. This entity will contain multiple properties, which may be base CLR types or value objects. Value objects can be viewed as objects that are owned by the aggregate root. Each object, whether an aggregate root or a value object, is responsible for maintaining its own state. We will start by defining an abstract aggregate root type with the properties all our aggregate roots will have:

public abstract class AggregateRoot
{
    public string AuditInfo_CreatedBy { get; private set; } = "Library.Web";
    public DateTime AuditInfo_CreatedOn { get; private set; } = DateTime.UtcNow;

    public void SetCreatedBy(string createdBy)
    {
        AuditInfo_CreatedBy = createdBy;
    }
}

Next, we will define an implementation of this type containing a couple of internal constructors, a number of data properties, and a couple of methods for updating the data properties. Looking through the implementation below, you will probably note that my data properties have private setters and methods for setting them. This looks a little strange when you consider that properties allow custom setters, but the reason for this is serialization. When we deserialize an object from our DB, we don’t want to go through any validation we might do when setting a property; we just want to read into the property and assume the data has already been validated. When the data changes, we need to validate it, so we make the property setters private and provide public methods to set the data. Another benefit of the methods is that you can pass a domain command to them, instead of just the final expected value of the property; this allows you to provide supplemental information as necessary.

public class User : AggregateRoot
{
    /// <summary>
    /// Used for deserialization
    /// </summary>
    [BsonConstructor]
    internal User(Guid id, string name, bool isInGoodStanding, List<CheckedOutBook> books)
    {
        Id = id;
        Name = name;
        IsInGoodStanding = isInGoodStanding;
        this.books = books;
    }

    /// <summary>
    /// Used by the UserFactory; prefer creating instances with that
    /// </summary>
    internal User(string name)
    {
        Id = Guid.NewGuid();
        Name = name;
        IsInGoodStanding = true;
    }

    public Guid Id { get; private set; }
    public string Name { get; private set; }
    public bool IsInGoodStanding { get; private set; }

    [BsonElement(nameof(Books))]
    private readonly List<CheckedOutBook> books = new();
    public IReadOnlyCollection<CheckedOutBook> Books => books.AsReadOnly();

    public async Task CheckoutBook(CheckoutBookCommand command)
    {
        // validation happens in any event handler listening for this event
        // e.g. Does the library have this book, is it available, etc.
        await DomainEvents.Raise(new CheckingOutBook(command));

        var checkoutTime = DateTime.UtcNow;
        books.Add(new CheckedOutBook(command.BookId, checkoutTime, checkoutTime.Date.AddDays(21)));
        await DomainEvents.Raise(new CheckedOutBook(command));
    }

    public async Task ReturnBook(ReturnBookCommand command)
    {
        // validation happens in any event handler listening for this event
        // e.g. Does the user have this book checked out, etc.
        await DomainEvents.Raise(new ReturningBook(command));

        books.RemoveAll(r => r.BookId == command.BookId);
        await DomainEvents.Raise(new ReturnedBook(command));
    }
}

public class CheckedOutBook
{
    public CheckedOutBook(Guid bookId, DateTime checkedOutOn, DateTime returnBy)
    {
        BookId = bookId;
        CheckedOutOn = checkedOutOn;
        ReturnBy = returnBy;
    }

    public Guid BookId { get; private set; }
    public DateTime CheckedOutOn { get; private set; }
    public DateTime ReturnBy { get; private set; }
}

Having POCOs or dumb objects (objects that aren’t responsible for maintaining their internal state) is often one of the first mistakes people make when doing DDD. They will create a class with public getters and setters and put their logic in a service (I will go over domain services and why you don’t usually want to use them later). The problem with this is that two places might be working with the same object instance at the same time and write data that the other is reading or writing, so the object risks ending up in an inconsistent state. DDD prevents inconsistent state by only allowing the object to set its own state, so if two consecutive changes to the same object would lead to inconsistent state, the object will catch that with its internal validation, instead of relying on the caller to have validated the change.

Domain Commands

Domain commands are how you tell an aggregate to update itself. In the code above, CheckoutBook and ReturnBook are domain commands. It isn’t strictly necessary to create a command type to represent the data being passed; you could have just passed a Guid bookId into the method instead of a command class. However, I like creating a command type because you have a single object to run validation against, and you can validate parameters when creating the command instance. For example, if your domain command requires a certain value to be provided, you can validate that it’s not null in the type constructor instead of in the domain command itself. Validating on the type especially helps the logic flow well: you can’t really validate a bare Guid without additional context, but you can validate a ReturnBookCommand that contains a Guid, because the type gives you the context around what the Guid is.

public class CheckoutBookCommand
{
    public Guid BookId { get; }
    public Guid UserId { get; }

    public CheckoutBookCommand(Guid userId, Guid bookId)
    {
        if (bookId == Guid.Empty) { throw new ArgumentException($"Argument {nameof(bookId)} cannot be an empty guid", nameof(bookId)); }
        if (userId == Guid.Empty) { throw new ArgumentException($"Argument {nameof(userId)} cannot be an empty guid", nameof(userId)); }

        BookId = bookId;
        UserId = userId;
    }
}

Validation

You probably noticed the comments I had in the domain command implementations about validation. Validation is often tricky to get right in DDD because it uses other dependencies, such as a DB. For example, to successfully check out a book, the system has to make sure both the book and the user are in the system, that the book is available, that the user is in good standing, etc. To perform these checks, we already pulled the user from the DB to get the user aggregate, so we know the user is in the system. However, we haven’t checked that the book is in the system, so we need to reference a database instance when we do our validation inside the domain command. We can’t inject a DB instance into the aggregate because we don’t resolve aggregates from the IoC container, and even if we could, it’s not the aggregate’s responsibility to connect to the DB. We could new up a DB instance in the command, but that is wrong for reasons outside the scope of this article, in addition to not being the aggregate’s responsibility to talk to the DB (research Dependency Injection and Inversion of Control if you don’t know why). This is where our command system comes into play. Notice the DomainEvents.Raise call. I have that implemented with MediatR, which is a .NET implementation of the mediator pattern; see the link at the end of this article for more detail:

public static class DomainEvents
{
    public static Func<IPublisher> Publisher { get; set; }
    public static async Task Raise<T>(T args) where T : INotification
    {
        var mediator = Publisher.Invoke();
        await mediator.Publish<T>(args);
    }
}

We register IPublisher and our notifications and commands with our IoC container so we can resolve dependencies in our handlers. We then create a method that knows how to resolve an IPublisher instance and assign it to the static Publisher property in our startup. The static Raise method then has all the information it needs to raise the event and wait for the handlers to complete. In this example, I use the FluentValidation library for validation within these handlers. We could put an error handler in our HTTP response pipeline to catch ValidationExceptions and translate them into 400 responses.
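That wiring is not shown above, but in Startup it might look something like the following (a sketch; AddMediatR comes from the MediatR.Extensions.Microsoft.DependencyInjection package):

// In ConfigureServices: register MediatR handlers by scanning the assembly,
// and register each validator so the handlers can resolve them.
services.AddMediatR(typeof(Startup).Assembly);
services.AddTransient<CheckingOutBookValidator>();

// In Configure: tell DomainEvents how to resolve an IPublisher.
// Resolving from the root provider is the simplest approach; use a scoped
// provider instead if your handlers have scoped dependencies.
DomainEvents.Publisher = () => app.ApplicationServices.GetRequiredService<IPublisher>();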

public class CheckingOutBook : INotification
{
    public CheckoutBookCommand Command { get; }

    public CheckingOutBook(CheckoutBookCommand command) => Command = command;
}

public class CheckingOutBookValidationHandler : INotificationHandler<CheckingOutBook>
{
    private readonly CheckingOutBookValidator validator;

    public CheckingOutBookValidationHandler(CheckingOutBookValidator validator) => this.validator = validator;

    public async Task Handle(CheckingOutBook @event, CancellationToken cancellationToken)
    {
        // The validator below uses async rules, so it must be invoked asynchronously.
        await validator.ValidateAndThrowAsync(@event.Command, cancellationToken);
    }
}

public class CheckingOutBookValidator : AbstractValidator<CheckoutBookCommand>
{
    public CheckingOutBookValidator(ILibraryRepository repository)
    {
        RuleFor(x => x.UserId)
            .MustAsync(async (userId, _) =>
            {
                var user = await repository.GetUserAsync(userId);
                return user?.IsInGoodStanding == true;
            }).WithMessage("User is not in good standing");

        RuleFor(x => x.BookId)
            .MustAsync(async (bookId, _) => await repository.GetBookAsync(bookId) is not null)
            .WithMessage("Book does not exist")
            .DependentRules(() =>
            {
                RuleFor(x => x.BookId)
                    .MustAsync(async (bookId, _) => !await repository.IsBookCheckedOut(bookId))
                    .WithMessage("Book is already checked out");
            });
    }
}

Creating Entities

At this point you may be wondering how we ensure an aggregate root is valid on initial creation, since we can’t await results in a constructor the way we do in our command handlers inside the entity. This is a prime case for factories; we’ll make our constructor internal to reduce its accessibility as much as possible and create a factory that makes any infrastructure calls it needs, calls the constructor, then raises an event with the newly created entity as data that can be used to validate it. This way, we encapsulate all the logic needed to create an entity, instead of relying on each place an entity is created to perform the logic correctly and ensure the entity is valid.

public class UserFactory
{
    public async Task<User> CreateUserAsync(string name)
    {
        var user = new User(name);
        await DomainEvents.Raise(new CreatingUser(user));

        return user;
    }
}

Domain Services

You are probably wondering at this point why I didn’t simply use a service to perform the checkout book command. For example, I could define the service with a method CheckoutBook(User user, Guid bookId), and perform all the validation inline, instead of importing MediatR and FluentValidation and creating 3 classes to simply validate my user. Then I would inject this service into whatever place calls the domain command and call the service instead of calling the domain command. I could still have my domain command be responsible for updating the entity instance to ensure it isn’t having random values assigned in places. The problem with this is I now have some logic in the service and some in my entity; how do I determine which logic goes where? When multiple devs are working on a project, this becomes very difficult to handle, and people have to figure out where existing logic is and where to put new logic. This issue often leads to duplicated logic, which leads to bugs when one is updated and the other isn’t, among other issues. Additionally, as I mentioned above, because the validation logic occurs outside my entity, I can no longer trust that the entity is in a valid state because I don’t know if the validation was run before the command to update the entity was called. Because DDD implemented correctly only allows the entity to update itself, we can validate data changes once inside the entity just before we update it, instead of hoping the caller remembered to fully validate the changes.

References

  1. MediatR: https://github.com/jbogard/MediatR
  2. FluentValidation: https://github.com/FluentValidation/FluentValidation
  3. Source code: https://github.com/Hosch250/Library-DDD
