When I was putting together a unit test recently, I had a small epiphany about how the code I was testing ought to be structured.

The Java code I was trying to test relied on a large-ish object graph, and the most accurate way to generate that graph is to run the code that already builds it. That was harder than it should have been. My initial plan was to create a JSON document that the existing code would unpack into the graph, and then pass that graph into the new code I was testing. The existing code already takes a JSON document, and I had samples of the JSON I needed. It unpacked the JSON, did various necessary things to it, and produced the graph I needed.

But the existing code did some other operations too. For example, it saved everything to a database and it fired off various events. It possibly did more (it is inherited code, so I would have to dig into it to know). There are ways of handling this. I could mock those other operations so that, for example, instead of firing an event the mock would do nothing. I could add an H2 database to my test so that the database writes went to a freshly constructed, memory-only database. I could. But this is when I started wondering whether I should have to.
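
To make that concrete, this is roughly the shape of the mock-heavy test I was contemplating. It's only a sketch: it assumes Mockito and JUnit 5, uses the contrived CustomerServiceImpl from the example further down, and someIngestedCustomer() is a made-up stand-in for whatever fixture builds the ingested data.

    import static org.mockito.Mockito.verify;

    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.extension.ExtendWith;
    import org.mockito.ArgumentCaptor;
    import org.mockito.InjectMocks;
    import org.mockito.Mock;
    import org.mockito.junit.jupiter.MockitoExtension;

    @ExtendWith(MockitoExtension.class)
    class CustomerServiceImplMockingTest {

        // Mocks stand in for the database and event infrastructure the service drags in.
        @Mock CustomerRepository customerRepository;
        @Mock ApplicationEventPublisher applicationEventPublisher;

        @InjectMocks CustomerServiceImpl customerService;

        @Test
        void unpacksCustomerWithoutTouchingRealInfrastructure() {
            customerService.unpackCustomer(someIngestedCustomer()); // hypothetical fixture method

            // The object graph then has to be fished back out of the mocked save() call.
            final ArgumentCaptor<Customer> saved = ArgumentCaptor.forClass(Customer.class);
            verify(customerRepository).save(saved.capture());
            // ... assertions against saved.getValue() ...
        }
    }

None of that mocking has anything to do with what I actually want to check; it is only there to keep the service from reaching out to resources I don't care about.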

With a slightly different design for that existing code, one that separated the object handling from the persistence and the other operations, life would be much simpler. But this wasn't obvious at design time. The object graph I'm using is quite pure domain code, not riddled with operational logic as you might be thinking at this point. No, those objects are all just fields and getters and setters. The operations live in separate service objects that manipulate them. But that service code does a lot of transformation work getting from the structure dictated by the incoming JSON to the structure required by the database. It is that transformation code (and that code only) that I need to use in my test.

Let's look at a contrived example that is a lot simpler than the one in real life:

    class CustomerServiceImpl implements CustomerService {

        @Autowired CustomerRepository customerRepository;
        @Autowired ApplicationEventPublisher applicationEventPublisher;

        public void unpackCustomer(IngestedCustomer ingestedCustomer) {
            final Customer customer = Customer.builder()
                .invoices(
                    ingestedCustomer.getInvoices().stream()
                        .map(invoice -> transformInvoice(invoice))
                        .collect(Collectors.toList()))
                .name(ingestedCustomer.getName())
                .build();
            customerRepository.save(customer);
            applicationEventPublisher.publishEvent(new CustomerUpdated(customer));
        }
    }

We're unpacking the customer record, which has invoices attached, persisting it, and then generating an event. Now, this would be easier for me if the class looked like this:

    class CustomerServiceImpl implements CustomerService {

        @Autowired CustomerRepository customerRepository;
        @Autowired ApplicationEventPublisher applicationEventPublisher;

        public Customer unpackCustomer(IngestedCustomer ingestedCustomer) {
            return Customer.builder()
                .invoices(
                    ingestedCustomer.getInvoices().stream()
                        .map(invoice -> transformInvoice(invoice))
                        .collect(Collectors.toList()))
                .name(ingestedCustomer.getName())
                .build();
        }

        public void persistCustomer(IngestedCustomer ingestedCustomer) {
            final Customer customer = unpackCustomer(ingestedCustomer);
            customerRepository.save(customer);
            applicationEventPublisher.publishEvent(new CustomerUpdated(customer));
        }
    }

With that I could call unpackCustomer directly, get the functionality I want and none of the persistence... except that the repository and event publisher are still being injected and still have to be configured. So this is a little better:

    class CustomerServiceImpl implements CustomerService {

        public Customer unpackCustomer(IngestedCustomer ingestedCustomer) {
            return Customer.builder()
                .invoices(
                    ingestedCustomer.getInvoices().stream()
                        .map(invoice -> transformInvoice(invoice))
                        .collect(Collectors.toList()))
                .name(ingestedCustomer.getName())
                .build();
        }

        public Customer persistCustomer(IngestedCustomer ingestedCustomer) {
            return unpackCustomer(ingestedCustomer);
        }
    }

    class CustomerServiceOperations extends CustomerServiceImpl {

        @Autowired CustomerRepository customerRepository;
        @Autowired ApplicationEventPublisher applicationEventPublisher;

        @Override
        public Customer persistCustomer(IngestedCustomer ingestedCustomer) {
            final Customer customer = super.persistCustomer(ingestedCustomer);
            customerRepository.save(customer);
            applicationEventPublisher.publishEvent(new CustomerUpdated(customer));
            return customer;
        }
    }

My test code can use CustomerServiceImpl and my production code can use CustomerServiceOperations. Of course, I now have two classes instead of one, and that bothers some people. Depending on what other external resources are needed, there might be combinations where, say, you are okay with persistence but not event generation, or vice versa. But the general principle is that keeping the code that just manipulates and transforms your objects separate from the code that needs other resources will make it easier for your testers to reuse it.
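
For comparison, the test against CustomerServiceImpl now needs no mocks and no injected resources at all. Again a sketch, assuming JUnit 5; someIngestedCustomer() and the asserted values are made-up stand-ins for the real fixture and data.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class CustomerServiceImplTest {

        // No repository, no event publisher: plain construction is enough.
        private final CustomerServiceImpl customerService = new CustomerServiceImpl();

        @Test
        void unpacksInvoicesFromIngestedCustomer() {
            final Customer customer = customerService.unpackCustomer(someIngestedCustomer());

            assertEquals("Acme Ltd", customer.getName());
            assertEquals(2, customer.getInvoices().size());
        }
    }

The transformation logic is exercised directly, and the object graph it produces can be handed straight to the code I actually wanted to test.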
