Spock Mocks in Spring Integration-Test using DetachedMockFactory

Spring-Boot introduced some cool testing improvements in version 1.4 – sadly, not all of them are readily available for usage in Spock. A nice feature that allows for easy mocking and stubbing of beans inside integration tests is the new @MockBean annotation. This annotation will automagically replace the annotated bean with a Mockito mock. All you need to do is annotate a field of the corresponding bean type inside your @SpringBootTest annotated test class, just like this:
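The original gist is not preserved in this export, but a @MockBean usage sketch might look like the following (JUnit-style, as in the Spring examples; the BookService/BookRepository names and the findAllByAvailableIsTrue() method are assumptions for illustration):

```java
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.BDDMockito.given;

import java.util.Collections;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
public class BookServiceTest {

    // Spring replaces the real BookRepository bean with a Mockito mock
    @MockBean
    private BookRepository bookRepository;

    @Autowired
    private BookService bookService;

    @Test
    public void returnsNoBooksWhenTheRepositoryIsEmpty() {
        given(bookRepository.findAllByAvailableIsTrue()).willReturn(Collections.emptyList());

        assertThat(bookService.listAvailableBooks()).isEmpty();
    }
}
```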

There was a lot of discussion inside Spock’s issue tracker about using this feature with Spock, but it seems we won’t be able to make this work inside the spock-spring plugin (the best solution seems to be a custom Spock extension which replicates this feature inside the Spock environment – any volunteers?).

But don’t give way to despair yet: there is indeed an okayish workaround using the new DetachedMockFactory, which was introduced in Spock 1.1. Used in conjunction with Spring’s new @TestConfiguration annotation, it gives you some really nice mocking capabilities for your integration testing needs. Let’s look at some example code now!

This time I thought it would be nice to use everyone’s favorite and totally-close-to-reality example application – the famous book store. This is the production code of our BookService class, which implements our business logic (the listAvailableBooks() method):
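The gist is missing from this export; a minimal sketch of the class could look like this (the Book entity and the derived query method findAllByAvailableIsTrue() are assumptions):

```java
import java.util.List;

import org.springframework.stereotype.Service;

@Service
public class BookService {

    private final BookRepository bookRepository;

    public BookService(BookRepository bookRepository) {
        this.bookRepository = bookRepository;
    }

    // our business logic: list all books that are currently in stock
    public List<Book> listAvailableBooks() {
        return bookRepository.findAllByAvailableIsTrue();
    }
}
```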

Now let’s assume we want to test this class inside an integration test, but we don’t want to hit the real persistence layer, so we need to stub away the BookRepository bean (in the real world, using an integration test here would not make any sense at all – we could test the class just fine using solely unit tests). In order to inject the stubbed bean into our system under test, we can simply define a nested configuration class:
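A sketch of such a nested configuration, assuming the same BookRepository as above (@TestConfiguration keeps the class from being picked up by regular component scanning):

```groovy
import org.springframework.boot.test.context.TestConfiguration
import org.springframework.context.annotation.Bean
import spock.mock.DetachedMockFactory

@TestConfiguration
static class StubConfig {

    DetachedMockFactory detachedMockFactory = new DetachedMockFactory()

    // this stub replaces the real BookRepository bean inside the test context
    @Bean
    BookRepository bookRepository() {
        detachedMockFactory.Stub(BookRepository)
    }
}
```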

Afterwards, we can use the BookRepository bean inside our Spock tests just like any other stub object:
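For example (the Book properties used here are assumptions):

```groovy
def "lists all available books"() {
    given: "the stubbed repository returns a fixed result"
    bookRepository.findAllByAvailableIsTrue() >> [new Book(title: "The Dark Tower")]

    expect:
    bookService.listAvailableBooks()*.title == ["The Dark Tower"]
}
```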

A curious person might ask why we need a DetachedMockFactory at all. Couldn’t we simply use some code like BookRepository repoStub = Stub(BookRepository)? In order to understand why this doesn’t work, you need to understand how Spock’s mocking/stubbing works. I’m not sure if I really understand how mocking/stubbing works in Spock, but in this case it’s enough to know that you’ll only be able to create mock/stub objects inside a specification. This leads to problems when using the @SpringBootTest annotation to initialize the Spring context, since Spring will try to boot up before any specification code has been run. Thankfully, we now have the capabilities of DetachedMockFactory at our fingertips, which allows us to create mocks/stubs outside a specification. Here is the complete test class:
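A sketch of the complete specification (class and bean names are assumptions carried over from the snippets in this post):

```groovy
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.context.SpringBootTest
import org.springframework.boot.test.context.TestConfiguration
import org.springframework.context.annotation.Bean
import spock.lang.Specification
import spock.mock.DetachedMockFactory

@SpringBootTest
class BookServiceIntegrationSpec extends Specification {

    @Autowired
    BookService bookService

    @Autowired
    BookRepository bookRepository

    def "lists all available books"() {
        given: "the stubbed repository returns a fixed result"
        bookRepository.findAllByAvailableIsTrue() >> [new Book(title: "The Dark Tower")]

        expect:
        bookService.listAvailableBooks()*.title == ["The Dark Tower"]
    }

    @TestConfiguration
    static class StubConfig {

        DetachedMockFactory detachedMockFactory = new DetachedMockFactory()

        @Bean
        BookRepository bookRepository() {
            detachedMockFactory.Stub(BookRepository)
        }
    }
}
```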

I hope this solution will lift some of the sadness which might have risen inside the hearts of passionate Spock users once they realised some of the new Spring-Boot 1.4 testing improvements won’t be easily available for them. And of course, you should prefer unit tests after all (an environment in which Spock shines even more).

The source code of the example project is available on GitHub.

Integration testing configuration validation with Spock in Spring-Boot

This is something I stumbled upon during my regular workday. We wanted to test-drive the validation of our externalized Spring-Boot configuration. It was obvious that we needed an integration test for this feature, since the Spring-Boot framework, as well as the file system, are heavily involved in this kind of test. But what condition did we want to check in order to determine if the test had passed successfully? We decided that our application should not start if misconfigured, and as a consequence we wanted the test to ensure that the application hasn’t started.

Our first approach was a regular Spring-Boot integration test using the @SpringApplicationConfiguration annotation. Sadly this route proved to be a blind alley, since Spock’s Spring integration tries to initialize the ApplicationContext prior to testing – and this rightfully failed!

And so we had to run the test as a regular unit test and manually initialize the Spring application inside the test method. To be honest, I wasn’t really sure about how to initialize the ApplicationContext or how to feed the configuration into the test. Luckily we stumbled upon a very good official Spring-Boot sample project. Based on this code, we came up with a solution like the following.

This is the class backing the external configuration:
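The gist is missing here; a sketch of such a configuration class with JSR-303 validation annotations might look like this (the prefix and property names are assumptions):

```java
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

import org.springframework.boot.context.properties.ConfigurationProperties;

@ConfigurationProperties(prefix = "sample")
public class SampleConfigurationProperties {

    // must be present and non-empty, otherwise binding fails on startup
    @NotNull
    @Size(min = 1)
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
```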

And this is a possible integration test:
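A sketch of the test, loosely following the official sample project (class and property names are assumptions; note that EnvironmentTestUtils moved to the org.springframework.boot.test.util package in Spring-Boot 1.4):

```groovy
import org.springframework.beans.factory.BeanCreationException
import org.springframework.boot.autoconfigure.PropertyPlaceholderAutoConfiguration
import org.springframework.boot.context.properties.EnableConfigurationProperties
import org.springframework.boot.test.EnvironmentTestUtils
import org.springframework.context.annotation.AnnotationConfigApplicationContext
import org.springframework.context.annotation.Configuration
import spock.lang.Specification

class ConfigurationValidationSpec extends Specification {

    AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext()

    def cleanup() {
        context?.close()
    }

    def "the application refuses to start with an invalid configuration"() {
        given: "an invalid (empty) value is injected into the environment"
        EnvironmentTestUtils.addEnvironment(context, "sample.name:")

        and: "a minimal configuration enabling our properties class"
        context.register(PropertyPlaceholderAutoConfiguration, TestConfig)

        when:
        context.refresh()

        then:
        thrown(BeanCreationException)
    }

    @Configuration
    @EnableConfigurationProperties(SampleConfigurationProperties)
    static class TestConfig {}
}
```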

As you can see, this is a normal Spock unit test (although it’s semantically still an integration test). The ApplicationContext is initialized as an AnnotationConfigApplicationContext and the EnvironmentTestUtils are used to inject an invalid external configuration into the ApplicationContext.

There might be other use cases in which one would like to manually initialize the ApplicationContext instead of relying on Spock to set up the integration test environment, so I thought this post could prove useful for future generations of Spring-Boot hackers to come. You can find my complete sample project on GitHub.

@DataJpaTest in my Pocket

In my last post I wrote about how to use the new Spring-Boot 1.4 test annotations in combination with Spock for more boilerplate-free test code. I received some great feedback for this post (it was even featured on some other blogs like the official Spring blog, Petri Kainulainen’s great Spring and testing blog, as well as Jacob Aae Mikkelsen’s Grails blog, for which I’m very grateful!) and I was also asked if I could provide an example of how to use @DataJpaTest in conjunction with multiple configured datasources. And so I thought: great, that’ll be the topic of my next post.

Little Pocket Monster Shop of Horrors

Again I’ve tried to come up with some real world code examples, so let’s assume we have some kind of online shop in which you can buy small monsters in order to let them fight for you. Since cross-platform is all the rage nowadays, it might be a good idea for our shop to offer monsters from different vendors, which might be stored in distinct persistence layers.

We have the following package layout:
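The listing is not preserved in this export; a hypothetical layout matching the description could be:

```
com.example.monstershop
├── pokemon
│   ├── PokemonConfig.java
│   ├── Pokemon.java
│   └── PokemonRepository.java
└── digimon
    ├── DigimonConfig.java
    ├── Digimon.java
    └── DigimonRepository.java
```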

The pokemon as well as the digimon package are both configured to use Spring-Data-Jpa with their own datasource. The PokemonConfig looks like this:
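A sketch of what such a configuration might look like (the property prefix and bean names are assumptions; with two datasources, the entityManagerFactoryRef/transactionManagerRef indirection is what ties the repositories to the right beans):

```java
import javax.persistence.EntityManagerFactory;
import javax.sql.DataSource;

import org.springframework.boot.autoconfigure.jdbc.DataSourceBuilder;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
@EnableJpaRepositories(
        basePackageClasses = PokemonConfig.class,
        entityManagerFactoryRef = "pokemonEntityManagerFactory",
        transactionManagerRef = "pokemonTransactionManager")
public class PokemonConfig {

    // bound to the "pokemon.datasource.*" properties
    @Bean
    @ConfigurationProperties(prefix = "pokemon.datasource")
    public DataSource pokemonDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean pokemonEntityManagerFactory() {
        LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
        factory.setDataSource(pokemonDataSource());
        factory.setPackagesToScan(PokemonConfig.class.getPackage().getName());
        factory.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        return factory;
    }

    @Bean
    public PlatformTransactionManager pokemonTransactionManager(EntityManagerFactory pokemonEntityManagerFactory) {
        return new JpaTransactionManager(pokemonEntityManagerFactory);
    }
}
```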

And the DigimonConfig looks really similar:
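The full gist is omitted here, since it mirrors the PokemonConfig with digimon substituted everywhere:

```java
@Configuration
@EnableJpaRepositories(
        basePackageClasses = DigimonConfig.class,
        entityManagerFactoryRef = "digimonEntityManagerFactory",
        transactionManagerRef = "digimonTransactionManager")
public class DigimonConfig {
    // identical structure: a "digimon.datasource"-prefixed DataSource bean,
    // a LocalContainerEntityManagerFactoryBean scanning this package,
    // and a JpaTransactionManager bound to that EntityManagerFactory
}
```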

I must admit, this was the first time I used Spring-Boot and Spring-Data-Jpa in conjunction with multiple datasources, and the setup wasn’t all smooth sailing for me. The official docs are by no means bad, but the ride gets pretty bumpy once you’ve got to manually disable different Spring-Boot autoconfigure features, since it is not always 100% intuitive under which conditions autoconfiguration of a specific feature is disabled automagically – and Oliver Gierke seems to agree on this issue as well 😉. Some aspects mentioned inside the docs didn’t work for me at all, like the usage of EntityManagerFactoryBuilder: I somehow could not inject the builder class without having a @Primary annotated datasource inside my Spring context. Oliver Gierke’s Spring-Data-Jpa example project on GitHub was a great help, since I could nearly copy-paste the EntityManager configuration code. Since I’m still kind of a Spring noob (I started to use Spring in production with the advent of Spring-Boot; beforehand I was a zealous Grails crusader), it’s always a tad bit scary to leave the green meadows of Spring-Boot autoconfiguration and venture forth into the dark depths of vanilla Spring. But nevertheless, it’s almost consistently an enlightening and worthwhile experience once you’ve untangled the ominous stacktrace jungle.


After getting the general application setup done, the @DataJpaTest tests were so darn straightforward and simple, it even feels a bit awkward to devote a blog post to this topic.

You simply have to annotate your tests with @DataJpaTest (and @ContextConfiguration, since Spock is still lacking full support for the new Spring-Boot 1.4 test annotations) and you’re good to go. Ah, it feels good to have the cozy Spring-Boot autoconfiguration magic back.

Here is an example test class:
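A sketch of such a test class (the entity properties are assumptions):

```groovy
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest
import org.springframework.test.context.ContextConfiguration
import spock.lang.Specification

@DataJpaTest
@ContextConfiguration
class PokemonRepositorySpec extends Specification {

    @Autowired
    PokemonRepository pokemonRepository

    def "saves and loads a pokemon"() {
        given:
        pokemonRepository.save(new Pokemon(name: "Pikachu"))

        expect:
        pokemonRepository.findAll()*.name == ["Pikachu"]
    }
}
```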

I’m not entirely sure why this does work without further configuration, but while digging into the Spring-Boot source code I’ve found the following comment:

By default, tests annotated with @DataJpaTest will use an embedded in-memory database (replacing any explicit or usually auto-configured DataSource). The @AutoConfigureTestDatabase annotation can be used to override these settings.

So I think it might be safe to assume that these tests don’t need to care whether you are using multiple datasources or a single autoconfigured one – they seem to be quite decoupled from this part of the application configuration – and I think we will get further information about this annotation once Spring-Boot 1.4 is finally released.

I’ve uploaded the sample project on GitHub and I’m looking forward to any comments and/or questions about this topic.

Using Spring-Boot 1.4 testing features with Spock

There was a great blog post over at the spring.io blog a couple of days ago, introducing the new testing improvements coming with Spring-Boot 1.4. I was very intrigued by these new upcoming features, but at the same time kind of sad, that Spock wasn’t mentioned anywhere in the examples (at least you can find some general mentioning about using Spock for testing Spring-Boot applications in the official documentation). In my humble opinion, using Spock to test your Spring-Boot application is a match made in heaven and since I’m a Spring-Boot as well as Spock fan, I thought I just might provide the examples of using Spock alongside the new Spring-Boot 1.4 testing features the original blog post was lacking.

I’ll try to structure this post similarly to the original post so you can skip back and forth between the two and check out the differences in the examples. In order to integrate Spock with Spring (and Spring-Boot), you’ll need this dependency:
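The dependency in question (the version shown is an assumption; pick the spock-spring release matching your Spock and Groovy versions):

```groovy
testCompile 'org.spockframework:spock-spring:1.0-groovy-2.4'
```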

The build.gradle config looks like this (there are some additional dependencies for the example project):
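The original gist is missing; a sketch of the build file might look like this (versions and the extra example dependencies are assumptions):

```groovy
apply plugin: 'groovy'
apply plugin: 'org.springframework.boot' // assumes the Spring-Boot Gradle plugin on the buildscript classpath

repositories {
    mavenCentral()
}

dependencies {
    compile 'org.springframework.boot:spring-boot-starter'
    compile 'org.springframework.boot:spring-boot-starter-data-jpa'
    runtime 'com.h2database:h2'

    testCompile 'org.codehaus.groovy:groovy-all:2.4.7'
    testCompile 'org.spockframework:spock-core:1.0-groovy-2.4'
    testCompile 'org.spockframework:spock-spring:1.0-groovy-2.4'
    testCompile 'org.springframework.boot:spring-boot-starter-test'
}
```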

Testing without Spring

The original post gave some great advice about unit testing your distinct Spring components: don’t involve Spring in this! Thanks to the magic of TDD and dependency injection, this shouldn’t be a big problem for the main business components of your application (assuming you’ve followed the practices of The Clean Architecture and Hexagonal Architecture). Let’s look at this example of a Spring @Service using some other @Component (there is no implementation difference between @Service and @Component; we’re talking solely semantics here).

I think it’s a shame that many source code examples and tutorials found in the world wide web use artificial and shallow use cases that are as far away from real-world usage scenarios as JavaScript is from having a mature build system, so I’ve tried to come up with a useful example application. A friend once told me that the process of cooking a sauce hollandaise is quite a difficult one, because you have to monitor the cooking temperature in a very precise fashion. And so this application represents a temperature monitoring system for sauce hollandaise, consisting of a HollandaiseTemperatureMonitor service using some Thermometer component.

Test-driving the class HollandaiseTemperatureMonitor, I came up with the following Spock tests (since I’m an inhabitant of the part of the world which uses the metric system, all temperature units are in degrees Celsius 😉):
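The gist is missing from this export; a sketch of the specification could look like this (the method names and temperature values are assumptions):

```groovy
import spock.lang.Specification
import spock.lang.Unroll

class HollandaiseTemperatureMonitorSpec extends Specification {

    @Unroll
    def "reports #givenTemperature °C as ok=#expected"() {
        given: "a thermometer stub returning a fixed temperature"
        Thermometer thermometer = Stub(Thermometer)
        thermometer.currentTemperature() >> givenTemperature

        and:
        def monitor = new HollandaiseTemperatureMonitor(thermometer)

        expect:
        monitor.isTemperatureOk() == expected

        where:
        givenTemperature || expected
        60.0             || true
        10.0             || false
        95.0             || false
    }
}
```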

This is an example of a pure unit test without involving any Spring dependencies whatsoever. There is still one interesting Spock feature to be found here: we create a stub Thermometer by simply calling Stub(Thermometer) and instruct it to return the givenTemperature with this line:

thermometer.currentTemperature() >> givenTemperature

If you are somehow unfamiliar with the term Stub here is a great article by Martin Fowler going deep into the differences between Stubs, Mocks, Fakes and so on.

The corresponding production code of HollandaiseTemperatureMonitor that will make these tests pass looks like this:
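A minimal sketch that makes such tests pass (the 45-80 °C range is an assumption for illustration, not culinary advice):

```java
// The Thermometer abstraction our monitor depends on
interface Thermometer {
    double currentTemperature();
}

class HollandaiseTemperatureMonitor {

    private static final double LOWER_LIMIT_CELSIUS = 45.0;
    private static final double UPPER_LIMIT_CELSIUS = 80.0;

    private final Thermometer thermometer;

    HollandaiseTemperatureMonitor(Thermometer thermometer) {
        this.thermometer = thermometer;
    }

    // true while the sauce is inside the safe cooking range
    boolean isTemperatureOk() {
        double temperature = thermometer.currentTemperature();
        return temperature >= LOWER_LIMIT_CELSIUS && temperature <= UPPER_LIMIT_CELSIUS;
    }
}
```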

Integration tests

So far we haven’t seen any of the new Spring-Boot 1.4 testing features, so let’s get to the cool stuff now. When building a Spring application I always like to have a really simple smoke test in place, simply verifying that the application starts without errors. This test may look like the following:
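The gist is missing; a sketch of such a smoke test might look like this (the spec name is an assumption):

```groovy
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.context.SpringBootTest
import org.springframework.context.ApplicationContext
import spock.lang.Specification

@SpringBootTest
class ApplicationSpec extends Specification {

    @Autowired
    ApplicationContext context

    def "application starts without errors"() {
        expect: "the context was initialized and injected"
        context != null
    }
}
```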

Here you can see the new @SpringBootTest annotation at work, which promises to remove all the integration test boilerplate annotations you needed prior to Spring-Boot 1.4 – alas, it is not compatible with Spock right now (see this issue). I’ve already submitted a pull request to add Spring-Boot 1.4 compatibility to Spock, but for now we have to use a workaround by explicitly adding the @ContextConfiguration annotation:
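The workaround is simply the same spec with the extra annotation (a sketch):

```groovy
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.context.SpringBootTest
import org.springframework.context.ApplicationContext
import org.springframework.test.context.ContextConfiguration
import spock.lang.Specification

@SpringBootTest
@ContextConfiguration // workaround: makes Spock's Spring extension pick up the test
class ApplicationSpec extends Specification {

    @Autowired
    ApplicationContext context

    def "application starts without errors"() {
        expect: "the context was initialized and injected"
        context != null
    }
}
```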

If Spock finds @ContextConfiguration on a class, it will assume this is a Spring test and act accordingly, and so this test will pass as expected.

Now let’s move on to the next new testing feature, testing application slices.

Testing Application Slices

Introducing a new use case to our system, we might like to persist some statistic data of our hollandaise cooking process. And what could be a better place to store this data than a relational database? (The answer is nearly anything else, but I wanted some JPA component for the next example and so I had to come up with something…)

Spring-Boot 1.4 introduces some handy shortcuts for integration testing the persistence layer of your application, like the @DataJpaTest annotation. This annotation will instruct Spring to only initialize the components which are needed for interaction with the persistence (specifically JPA) layer, so we might gain a faster startup time for our integration tests.

I’ve written a really simple test to demonstrate this feature:
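A sketch of such a test (the CookingProcessStatistic entity and its repository are assumptions fitting the hollandaise example):

```groovy
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest
import org.springframework.test.context.ContextConfiguration
import spock.lang.Specification

@DataJpaTest
@ContextConfiguration
class CookingProcessStatisticRepositorySpec extends Specification {

    @Autowired
    CookingProcessStatisticRepository repository

    def "persists a statistic entry"() {
        when:
        repository.save(new CookingProcessStatistic(maxTemperature: 61.0d))

        then:
        repository.count() == 1
    }
}
```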

Please note that, again, I had to add the @ContextConfiguration annotation or else Spock wouldn’t identify the test as a Spring integration test. This is not mentioned in the official documentation, and this behavior stems from the same issue as before.


Okay, that’s it for now. I simply wanted to give some Spock examples for the new Spring-Boot 1.4 testing stuff the original blog post was lacking, and I’m quite amused that I discovered some missing Spock support along the way. I hope the proposed workarounds might help someone wondering why these features aren’t working as described in the documentation. Maybe I’ll try to implement a more stable Spring discovery for the Spock-Spring module in the near future (for now I’m still waiting for my pull request to be merged 😉).

You can find the source code on GitHub. I’ve also changed to displaying the source code with the help of the oEmbed Gist Plugin, since syntax highlighting and support of markdown code fencing in WordPress was a living hell and I couldn’t stand the broken encodings anymore…
I’m still not totally happy with the small column width of the code examples; I think I need to tweak the WordPress theme to work better with source code examples.

Data Driven Testing

In my last blog post I committed myself to writing one blog post every week, and it seems I’ve already failed this goal, since there wasn’t any post last week :(. But let us not hesitate in the face of yesterday’s defeat and instead continue our quest towards a glorious test-driven string padding library in service of the whole interweb.

The first Spock blog post introduced some of the general concepts, but now I’d like to show you the really good stuff. One of my favorite Spock features is something called Data Driven Testing. This feature allows you to run the same tests with different input and expected output data – really useful for following the DRY principle in your test classes while still covering all the edge cases of your production code. There are different ways to use this feature, but I think the easiest and most generally applicable one is the Data Tables feature.

Back in the first Spock post I already suggested extending our test method with additional input and output data in order to test-drive the production code towards the real solution. The test method inside PadServiceSpec with Data Tables applied looks like this:
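The gist is missing from this export; a sketch of the data-driven test might look like this (the exact table values are assumptions, assuming a leftPad(String, int, char) signature):

```groovy
import spock.lang.Specification
import spock.lang.Unroll

class PadServiceSpec extends Specification {

    def padService = new PadService()

    @Unroll
    def "left-padding #input to length #targetLength with '#padChar' yields #expected"() {
        expect:
        padService.leftPad(input, targetLength, padChar as char) == expected

        where:
        input | targetLength | padChar || expected
        "5"   | 3            | '0'     || "005"
        "42"  | 4            | ' '     || "  42"
        ""    | 2            | '*'     || "**"
        "abc" | 2            | ' '     || "abc"
    }
}
```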

All you have to do is define input and expected output in a table, and Spock will automatically generate multiple distinct test cases for you – sweet! Also take a look at the @Unroll annotation: it tells Spock to report each parameterized run as a distinct result, substituting the #variable placeholders in the method name with the used parameter values.

If run against the old production code, all tests but the first one will fail, of course (I think I’ve defined the most useful permutations). So the next step is to use these tests as the specification for the production code and keep coding the feature until all tests pass.

The following code makes the test results light up as green as the Windows XP default wallpaper meadows:
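The gist is not preserved; this sketch assumes the leftPad(String input, int targetLength, char padChar) signature used throughout these posts:

```java
class PadService {

    // Left-pads input with padChar until it reaches targetLength;
    // strings already at or beyond targetLength are returned unchanged.
    String leftPad(String input, int targetLength, char padChar) {
        StringBuilder padded = new StringBuilder();
        for (int i = input.length(); i < targetLength; i++) {
            padded.append(padChar);
        }
        return padded.append(input).toString();
    }
}
```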

And since IntelliJ IDEA has quite a good support for Spock, this is how the test report looks like inside this marvelous IDE:

Like last time, the source code is available on GitHub and should be runnable directly after checkout, thanks to the cool included Gradle wrapper (and maybe I should upload this library to Maven Central to make it includable as an external dependency in a wide variety of projects and other libraries :P).

Sorry for the missing syntax highlighting. It seems like the last WordPress (or Jetpack?) update broke the SyntaxHighlighter Evolved plugin. Any recommendations for good syntax highlight in WordPress?

Getting Groovy with Java and Spock


Nowadays I simply love testing, and my work colleagues call me nicknames such as MC Test. But I haven’t always been this kind of test zealot.
When I was introduced to unit testing for the first time, back in university, I couldn’t wrap my head around it. Why test the code I just wrote? I just wrote the code; I knew it was working. And I verified that it was working by using a truckload of print statements and checking field values inside the debugger!

It became even more mysterious once the concept of Test-Driven-Development was introduced. Writing tests before writing the code? This was even more insane!
And so I remember many years of writing production code without a single line of test code… or at least without a single line of useful test code; of course there was the sporadic unit test for my getters and setters.

But the urge to follow good development practices kept on itching, and I remember writing my first more or less useful unit and integration tests for a Grails web application using JUnit and Grails’ mocking and stubbing facilities (the difference between mocking and stubbing was completely unknown to me back then, and my tests reflected this as well). And once the first bugs started to come ashore, the TDD approach began to resonate with me. I started to adopt a technique I’ve coined Bug-Driven-Testing: every time a bug was discovered, I’d write a test that would evoke the buggy behaviour, so I could fix the bug by developing against this new test.
It was at that point my testing spark was lit, and not long after this eye-opening experience I discovered Spock, mentioned inside the Grails docs. Since then I’ve used Spock almost exclusively for all my testing needs, and while my test-fu matured, so did Spock (which has been available as version 1.0 since 2015-03-02).

Testing Java production code with Spock

Although Spock is a Groovy testing framework, it’s perfectly capable of testing your Java code as well. In this post I’d like to give a small introduction to Spock.

Let’s assume we want to implement a small library that provides some string padding functions (this seems to be considered a really useful library inside the JavaScript community). As the build system I’d like to use Gradle. Normally I’d use Maven, since it still seems a bit more mature and sane, but since Groovy is a first-class citizen inside the Gradle ecosystem, setting up the project with Gradle is a tad bit easier (also, the Gradle support in IntelliJ is getting more awesome with every release, while Maven is treated like a poor cousin).

The build.gradle file looks like this:
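The gist is missing from this export; a sketch of the build file might look like this (versions are assumptions matching the Spock 1.0 era):

```groovy
// the 'groovy' plugin implies 'java' and lets Java production code live next to Groovy tests
apply plugin: 'groovy'

repositories {
    mavenCentral()
}

dependencies {
    testCompile 'org.codehaus.groovy:groovy-all:2.4.3'
    testCompile 'org.spockframework:spock-core:1.0-groovy-2.4'
}
```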

If you’re already familiar with Gradle, this is self-explanatory. If not, just take it for granted for now. This config simply pulls in all Spock dependencies and allows you to combine Java production and Groovy test code. Next we want to get going with our first test, TDD style. Tests are called Specifications in Spock, and your test classes normally have a *Spec.groovy suffix. Assuming we want to code a class called PadService, this is what our first test case in PadServiceSpec might look like:
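A sketch of that first test (the leftPad signature and the expected value are assumptions):

```groovy
import spock.lang.Specification

class PadServiceSpec extends Specification {

    def "pads a string on the left up to the given target length"() {
        given:
        def padService = new PadService()

        when:
        def result = padService.leftPad("5", 3, '0' as char)

        then:
        result == "005"
    }
}
```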

At a glimpse you can identify this code as Groovy code – no semicolons, weee!
Spock follows a BDD-style test definition, mapping the arrange-act-assert test phases to given-when-then. Statements inside the then block are automatically evaluated as test conditions.

In order to make this test pass, we can write the simplest class possible like this (this time in Java):
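A sketch of that deliberately dumb first implementation, assuming the first test expected leftPad("5", 3, '0') to return "005":

```java
class PadService {

    // simplest thing that could possibly work: hardcoded for the single existing test case
    String leftPad(String input, int targetLength, char padChar) {
        return "005";
    }
}
```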

But of course, this code will fail for every parameter except the ones used in the Spock test. Now we could start adding additional tests which perform the same operations with different values. Or we could wrap our tests in a for-loop and initialize an array of different input and expected output values (please don’t!). Instead of such cumbersome work, Spock provides a great solution for this use case, called Data Driven Testing. In my next blog post I’ll show how to use Data Driven Testing to test-drive the PadService to a working version. The source code (the little bit that already exists…) is available on GitHub.

So long, and happy hacking!