Domain-Driven Design

You’ve decided to use Domain-Driven Design (DDD), but aren’t sure how to implement it. Maybe you’ve seen it go wrong before and aren’t sure how to prevent that from happening again. Maybe you’ve never done it and aren’t sure where to start. This post will show you how to implement a DDD domain layer, including aggregates, value objects, domain commands, and validation, and how to avoid some of the pitfalls I’ve seen. It will not discuss the why of DDD versus competing patterns; nor, for the sake of brevity, will it discuss the infrastructure or application layers of a DDD app. To demonstrate these concepts in action, I have built a backend for a library using DDD; the most relevant sections are shown in the post, and the full version can be found on GitHub. The tech stack is an ASP.NET Core API written in C#, backed by MongoDB.

The Aggregate Root

The aggregate root is the base data entity of a data model. This entity contains multiple properties, which may be base CLR types or value objects. Value objects can be viewed as objects owned by the aggregate root. Each object, whether an aggregate root or a value object, is responsible for maintaining its own state. We will start by defining an abstract aggregate root type with the properties all our aggregate roots will share:

public abstract class AggregateRoot
{
    public string AuditInfo_CreatedBy { get; private set; } = "Library.Web";
    public DateTime AuditInfo_CreatedOn { get; private set; } = DateTime.UtcNow;

    public void SetCreatedBy(string createdBy)
    {
        AuditInfo_CreatedBy = createdBy;
    }
}

Next, we will define an implementation of this type containing a couple of internal constructors, a number of data properties, and a couple of methods for updating those properties. Looking through the implementation below, you will probably notice that my data properties have private setters plus methods for setting them. This looks a little strange when you consider that properties allow custom setters, but the reason for it is serialization. When we deserialize an object from our DB, we don’t want to run any validation we might do when setting a property; we just want to read into the property and assume the data has already been validated. When the data changes, we do need to validate it, so we make the property setters private and provide public methods to set the data. Another benefit of the methods is that you can pass a domain command to them, instead of just the final expected value of the property; this lets you provide supplemental information as necessary.

public class User : AggregateRoot
{
    /// <summary>
    /// Used for deserialization
    /// </summary>
    [BsonConstructor]
    internal User(Guid id, string name, bool isInGoodStanding, List<CheckedOutBook> books)
    {
        Id = id;
        Name = name;
        IsInGoodStanding = isInGoodStanding;
        this.books = books;
    }

    /// <summary>
    /// Used by the UserFactory; prefer creating instances with that
    /// </summary>
    internal User(string name)
    {
        Id = Guid.NewGuid();
        Name = name;
        IsInGoodStanding = true;
    }

    public Guid Id { get; private set; }
    public string Name { get; private set; }
    public bool IsInGoodStanding { get; private set; }

    [BsonElement(nameof(Books))]
    private readonly List<CheckedOutBook> books = new();
    public IReadOnlyCollection<CheckedOutBook> Books => books.AsReadOnly();

    public async Task CheckoutBook(CheckoutBookCommand command)
    {
        // validation happens in any event handler listening for this event
        // e.g. Does the library have this book, is it available, etc.
        await DomainEvents.Raise(new CheckingOutBook(command));

        var checkoutTime = DateTime.UtcNow;
        books.Add(new CheckedOutBook(command.BookId, checkoutTime, checkoutTime.Date.AddDays(21)));
        // BookCheckedOut is the post-checkout notification (named to avoid clashing with the CheckedOutBook value object)
        await DomainEvents.Raise(new BookCheckedOut(command));
    }

    public async Task ReturnBook(ReturnBookCommand command)
    {
        // validation happens in any event handler listening for this event
        // e.g. Does the user have this book checked out, etc.
        await DomainEvents.Raise(new ReturningBook(command));

        books.RemoveAll(r => r.BookId == command.BookId);
        // BookReturned is the post-return notification
        await DomainEvents.Raise(new BookReturned(command));
    }
}

public class CheckedOutBook
{
    public CheckedOutBook(Guid bookId, DateTime checkedOutOn, DateTime returnBy)
    {
        BookId = bookId;
        CheckedOutOn = checkedOutOn;
        ReturnBy = returnBy;
    }

    public Guid BookId { get; private set; }
    public DateTime CheckedOutOn { get; private set; }
    public DateTime ReturnBy { get; private set; }
}
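One note on the value object itself: value objects are normally compared by value rather than by reference, and the class above doesn’t implement value equality. If you want that behavior, a C# positional record gives you the same shape with value equality and immutability built in. This is an alternative sketch, not necessarily how the GitHub version is written, and you’d want to confirm it plays nicely with the Mongo serializer:

// a positional record gives value equality and init-only properties for free
public record CheckedOutBook(Guid BookId, DateTime CheckedOutOn, DateTime ReturnBy);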

Having POCOs or dumb objects (objects that aren’t responsible for maintaining their internal state) is often one of the first mistakes people make when doing DDD. They create a class with public getters and setters and put the logic in a service (I will go over domain services and why you usually don’t want them later). The problem is that two callers might work with the same object instance at the same time, each writing data the other is reading or writing, so the object risks ending up in an inconsistent state. DDD prevents this by only allowing the object to set its own state: if two consecutive changes to the same object would lead to inconsistent state, the object catches that with its internal validation, instead of relying on the caller to have validated the change.
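For contrast, here is a minimal sketch of that anemic approach; the UserService here is hypothetical and not part of the library example:

// Anemic model: every property is publicly settable, so any caller can put the object into an invalid state
public class AnemicUser
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public bool IsInGoodStanding { get; set; }
    public List<CheckedOutBook> Books { get; set; } = new();
}

// All the rules live in a service, so every caller has to remember to go through it
public class UserService
{
    public void CheckoutBook(AnemicUser user, Guid bookId)
    {
        // validation would happen here...
        var checkoutTime = DateTime.UtcNow;
        user.Books.Add(new CheckedOutBook(bookId, checkoutTime, checkoutTime.Date.AddDays(21)));
    }
}

Nothing stops other code from calling user.Books.Add(...) directly and skipping the validation entirely; that is exactly the inconsistent state the rich model above protects against.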

Domain Commands

Domain commands are how you tell an aggregate to update itself. In the code above, CheckoutBookCommand and ReturnBookCommand are the domain commands, handled by the CheckoutBook and ReturnBook methods. It isn’t strictly necessary to create a command type to represent the data being passed; you could have passed a Guid bookId into the method instead of a command class. However, I like creating a command type because you have a single object to run validation against, and you can validate parameters when creating the command instance. For example, if your domain command requires a certain value, you can validate that it’s not null in the command’s constructor instead of in the aggregate method that handles it. Validating on the type also helps the logic read well: you can’t really validate a bare Guid without additional context, but you can validate a ReturnBookCommand that contains a Guid, because the type already gives you the context of what that Guid is.

public class CheckoutBookCommand
{
    public Guid BookId { get; }
    public Guid UserId { get; }

    public CheckoutBookCommand(Guid userId, Guid bookId)
    {
        if (bookId == Guid.Empty) { throw new ArgumentException($"Argument {nameof(bookId)} cannot be an empty guid", nameof(bookId)); }
        if (userId == Guid.Empty) { throw new ArgumentException($"Argument {nameof(userId)} cannot be an empty guid", nameof(userId)); }

        BookId = bookId;
        UserId = userId;
    }
}
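The ReturnBookCommand follows the same shape; this is a sketch, and the actual version is in the GitHub repo:

public class ReturnBookCommand
{
    public Guid BookId { get; }
    public Guid UserId { get; }

    public ReturnBookCommand(Guid userId, Guid bookId)
    {
        if (bookId == Guid.Empty) { throw new ArgumentException($"Argument {nameof(bookId)} cannot be an empty guid", nameof(bookId)); }
        if (userId == Guid.Empty) { throw new ArgumentException($"Argument {nameof(userId)} cannot be an empty guid", nameof(userId)); }

        BookId = bookId;
        UserId = userId;
    }
}

A caller then updates the aggregate by constructing a command and handing it to the appropriate method, for example: await user.CheckoutBook(new CheckoutBookCommand(user.Id, bookId));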

Validation

You probably noticed the comments about validation in the CheckoutBook and ReturnBook methods. Validation is often tricky to get right in DDD because it needs other dependencies, such as a DB. For example, to successfully check out a book, the system has to make sure both the book and the user are in the system, that the book is available, that the user is in good standing, and so on. We already pulled the user from the DB to get the user aggregate, so we know the user is in the system. However, we haven’t checked that the book is in the system, so we need a database instance when we validate the command. We can’t inject a DB instance into the aggregate because we don’t resolve aggregates from the IoC container, and even if we could, it isn’t the aggregate’s responsibility to talk to the DB. We could new up a DB instance inside the method, but that is wrong for reasons beyond the scope of this article (research Dependency Injection and Inversion of Control if you don’t know why), and it still wouldn’t be the aggregate’s job. This is where domain events come into play. Notice the DomainEvents.Raise call: I have it implemented with MediatR, a .NET implementation of the mediator pattern:

public static class DomainEvents
{
    public static Func<IPublisher> Publisher { get; set; }
    public static async Task Raise<T>(T args) where T : INotification
    {
        var mediator = Publisher.Invoke();
        await mediator.Publish<T>(args);
    }
}

We register IPublisher and our notification handlers with the IoC container so their dependencies can be resolved. We then create a method that knows how to resolve an IPublisher instance and assign it to the static Publisher property during startup. The static Raise method then has everything it needs to raise the event and wait for the handlers to complete. In this example, I use the FluentValidation library for validation within these handlers. We can also put an error handler in the HTTP pipeline to catch ValidationExceptions and translate them into 400 responses.
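As a rough sketch, the wiring in Program.cs might look something like the following; the exact registration calls depend on your MediatR and FluentValidation versions, so treat this as an outline rather than the project’s actual startup code:

// usings (MediatR, FluentValidation, etc.) omitted for brevity, as elsewhere in this post
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

// register IPublisher/IMediator and every INotificationHandler in this assembly
builder.Services.AddMediatR(cfg => cfg.RegisterServicesFromAssembly(typeof(Program).Assembly));

// register the validators the handlers depend on
// (ILibraryRepository and other infrastructure registrations omitted)
builder.Services.AddTransient<CheckingOutBookValidator>();

var app = builder.Build();

// give the static DomainEvents class a way to resolve an IPublisher
// (resolving from the root provider only works if the handlers' dependencies aren't scoped;
// otherwise resolve from the current request scope instead)
DomainEvents.Publisher = () => app.Services.GetRequiredService<IPublisher>();

// translate FluentValidation failures into 400 responses
app.Use(async (context, next) =>
{
    try
    {
        await next();
    }
    catch (ValidationException ex)
    {
        context.Response.StatusCode = StatusCodes.Status400BadRequest;
        await context.Response.WriteAsJsonAsync(ex.Errors.Select(e => e.ErrorMessage));
    }
});

app.MapControllers();
app.Run();

With the publisher wired up, the checkout flow’s notification, validation handler, and validator look like this: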

public class CheckingOutBook : INotification
{
    public CheckoutBookCommand Command { get; }

    public CheckingOutBook(CheckoutBookCommand command) => Command = command;
}

public class CheckingOutBookValidationHandler : INotificationHandler<CheckingOutBook>
{
    private readonly CheckingOutBookValidator validator;

    public CheckingOutBookValidationHandler(CheckingOutBookValidator validator) => this.validator = validator;

    public async Task Handle(CheckingOutBook @event, CancellationToken cancellationToken)
    {
        // the validator contains async rules, so it must be invoked asynchronously
        await validator.ValidateAndThrowAsync(@event.Command, cancellationToken);
    }
}

public class CheckingOutBookValidator : AbstractValidator<CheckoutBookCommand>
{
    public CheckingOutBookValidator(ILibraryRepository repository)
    {
        RuleFor(x => x.UserId)
            .MustAsync(async (userId, _) =>
            {
                var user = await repository.GetUserAsync(userId);
                return user?.IsInGoodStanding == true;
            }).WithMessage("User is not in good standing");

        RuleFor(x => x.BookId)
            .MustAsync(async (bookId, _) => await repository.GetBookAsync(bookId) is not null)
            .WithMessage("Book does not exist")
            .DependentRules(() =>
            {
                RuleFor(x => x.BookId)
                    .MustAsync(async (bookId, _) => !await repository.IsBookCheckedOut(bookId))
                    .WithMessage("Book is already checked out");
            });
    }
}

Creating Entities

At this point you may be wondering how we ensure an aggregate root is valid on initial creation, since we can’t await results in a constructor the way we do in the command methods on the entity. This is a prime case for factories: we make the constructor internal to reduce its accessibility as much as possible and create a factory that makes any infrastructure calls it needs, calls the constructor, and then raises an event carrying the newly created entity so it can be validated. This way, we encapsulate all the logic needed to create an entity, instead of relying on each place an entity is created to perform that logic correctly and ensure the entity is valid.

public class UserFactory
{
    public async Task<User> CreateUserAsync(string name)
    {
        var user = new User(name);
        await DomainEvents.Raise(new CreatingUser(user));

        return user;
    }
}
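The CreatingUser notification isn’t shown in this post, but it presumably follows the same pattern as CheckingOutBook. A sketch might look like this; the real version and its validator are in the GitHub repo:

public class CreatingUser : INotification
{
    public User User { get; }

    public CreatingUser(User user) => User = user;
}

public class CreatingUserValidationHandler : INotificationHandler<CreatingUser>
{
    public Task Handle(CreatingUser @event, CancellationToken cancellationToken)
    {
        // e.g. check that the name isn't blank, that no existing user has the same name, etc.
        if (string.IsNullOrWhiteSpace(@event.User.Name))
        {
            throw new ValidationException("User name is required");
        }

        return Task.CompletedTask;
    }
}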

Domain Services

You are probably wondering at this point why I didn’t simply use a service to perform the checkout book command. For example, I could define a service with a method CheckoutBook(User user, Guid bookId) and perform all the validation inline, instead of pulling in MediatR and FluentValidation and creating three classes just to validate my user. I would then inject this service wherever the domain command is called and call the service instead. I could even keep my domain command responsible for updating the entity instance so values aren’t being assigned arbitrarily elsewhere. The problem is that I now have some logic in the service and some in my entity; how do I decide which logic goes where? When multiple devs are working on a project, this becomes very difficult to manage: people have to figure out where existing logic lives and where to put new logic. That often leads to duplicated logic, which leads to bugs when one copy is updated and the other isn’t. Additionally, as mentioned above, because the validation logic lives outside my entity, I can no longer trust that the entity is in a valid state, because I don’t know whether the validation was run before the entity was updated. Because DDD implemented correctly only allows the entity to update itself, we can validate data changes once, inside the entity, just before we apply them, instead of hoping the caller remembered to fully validate the change.
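To make the contrast concrete, the service-based alternative would look roughly like this; LibraryService is hypothetical and not part of the example project:

public class LibraryService
{
    private readonly ILibraryRepository repository;

    public LibraryService(ILibraryRepository repository) => this.repository = repository;

    public async Task CheckoutBook(User user, Guid bookId)
    {
        // the validation lives out here now...
        if (!user.IsInGoodStanding) { throw new InvalidOperationException("User is not in good standing"); }
        if (await repository.GetBookAsync(bookId) is null) { throw new InvalidOperationException("Book does not exist"); }
        if (await repository.IsBookCheckedOut(bookId)) { throw new InvalidOperationException("Book is already checked out"); }

        // ...so the entity (which in this alternative would just mutate its state without raising events)
        // has to trust that every caller came through this method first
        await user.CheckoutBook(new CheckoutBookCommand(user.Id, bookId));
    }
}

Keeping the state change and its validation together inside the entity avoids that split entirely, which is the core of the approach shown in this post.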
