
JUnitMax tightens the feedback cycle of software development


While attending a Responsive Design workshop in Hamburg, Kent Beck kindly answered some of my questions regarding his latest product, the recently re-released JUnitMax.

JUnitMax is a continuous test runner plugin for Eclipse (3.5+) that runs your unit tests whenever files are saved and compiled by the IDE. It aims to tighten the feedback loop by providing an immediate report on the health of the system you are working on.
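
To illustrate what that loop looks like in practice, here is a minimal JUnit 4 test against a hypothetical Account class (both names are invented for this example and are not part of JUnitMax): the moment the file is saved and compiled, a continuous runner such as JUnitMax executes it and updates the status indicator.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical class under test, invented for this example.
    class Account {
        private int balance;
        void deposit(int amount) { balance += amount; }
        int balance() { return balance; }
    }

    public class AccountTest {
        @Test
        public void depositIncreasesBalance() {
            Account account = new Account();
            account.deposit(100);
            // On save/compile, a continuous runner executes this test
            // and flips the IDE indicator to red or green accordingly.
            assertEquals(100, account.balance());
        }
    }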

[Figure: JUnitMax screenshot]

JUnitMax tries to be as unobtrusive as possible. Only a small red or green area in one of the corners of the IDE indicates the current test status, and a tiny line of red and green dots represents the test run history. Failed tests show up in the Problems view and are highlighted as errors at the appropriate code lines.

Kent started developing JUnitMax out of the need for a better feedback loop while developing. Running unit tests manually is tedious, painful, and too often forgotten. When a test then fails, too many changes have accumulated in the system, and it is much harder to reason about which one caused the problem. As a programmer he is impatient about getting timely feedback, so Max helps a lot there too. JUnitMax was developed using the Responsive Design strategies, in very small steps, with no need for the overhead of iterations.

Having the little green area in the corner of the IDE is addictive: it makes you want to get back to green as soon as possible. It is also a reassuring sign while working, much like a traffic light; it shows green when you are safe to proceed.

Tests are not the only feedback tool that can be run continuously, but they currently provide the highest value. It is easy to imagine other applications for this feedback loop, such as code metrics or style checkers. One could think of JUnitMax as a tiny continuous integration server built into the IDE: whereas the compiler provides syntactic feedback on the code, Max provides semantic feedback.
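
The distinction is easy to demonstrate. The following method compiles cleanly, so the compiler stays silent, but a continuously running test catches the semantic mistake within seconds of saving (DiscountTest and discountedPrice are illustrative names, not part of JUnitMax):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class DiscountTest {

        // Syntactically valid, so the compiler offers no complaint,
        // but semantically wrong: the discount is added, not subtracted.
        static int discountedPrice(int price, int discount) {
            return price + discount; // bug: should be price - discount
        }

        @Test
        public void tenPercentOffOneHundred() {
            assertEquals(90, discountedPrice(100, 10));
        }
    }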

The business model behind JUnitMax is commercial. Although it started as a side project, evolving it into a tool that is generally usable, stable, and mature requires a lot of effort. Kent therefore deliberately chose to offer JUnitMax as a paid tool with an annual subscription. Currently, personal (100 USD/year) and enterprise subscriptions are available.

Many software developers don't want to pay for development tools, as they are used to getting them for free from open source projects or vendors. But those who do pay know that the price of the tools is very low compared to the value of the time saved using them, and that their productivity increases dramatically with the right tools. So those arguing over the cost of tools should weigh the relative value of their working hours against the possible gains from using the tools.

When implementing JUnitMax, the biggest challenges weren't on the technical side but on the business side. Creating a marketable product and actually marketing it is hard for a programmer. Adding business-driven features (like a license manager) to the tool proved to be less motivating than adding or extending user-centric features.

One valuable feature of JUnitMax is "Revert to last green". Like a version control system, it remembers the state of the system at each green test run and enables the developer to return to that state, effectively discarding all changes since that point in time. As it runs continuously, this covers only a very small timespan, but it helps to roll back small sets of changes consistently, even without continuously checking them into a version control system (like Git or Mercurial).
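
The underlying idea can be sketched in a few lines: snapshot every watched file after a green run and write the snapshot back on request. The class below is a simplified illustration under that assumption, with invented names; it is not JUnitMax's actual implementation.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of "revert to last green": remember file contents after
    // a green run, restore them on demand.
    public class GreenSnapshot {
        private final Map<Path, byte[]> lastGreen = new HashMap<>();

        // Called after every test run in which all tests passed.
        public void recordGreen(List<Path> watchedFiles) throws IOException {
            lastGreen.clear();
            for (Path file : watchedFiles) {
                lastGreen.put(file, Files.readAllBytes(file));
            }
        }

        // Called when the developer chooses "Revert to last green":
        // discard every change made since the last green run.
        public void revert() throws IOException {
            for (Map.Entry<Path, byte[]> entry : lastGreen.entrySet()) {
                Files.write(entry.getKey(), entry.getValue());
            }
        }
    }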

JUnitMax decides the order in which tests are run based on results Kent derived from sampling a large number of unit test runs: both test runtime and test failure probability follow a power-law distribution, with a few tests accounting for most of the runtime and most of the failures.

Newly added tests and tests that just failed are the most likely to fail again; either one test fails or a lot of them do. Tests that haven't failed for a long time (non-hotspots) are unlikely to fail. Based on these observations, sorting the tests so that recently failed tests and short-running tests execute first dramatically increases the likelihood of an early test failure. This allows JUnitMax to provide quick feedback, as the sketch below illustrates.
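
Here is a minimal sketch of such a prioritization, assuming per-test bookkeeping of last runtime and last failure time (TestRecord and TestPrioritizer are invented names; JUnitMax's internals are not public):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class TestPrioritizer {

        static class TestRecord {
            final String name;
            final long lastRuntimeMillis;    // how long the test took last time
            final long lastFailureTimestamp; // 0 if it never failed

            TestRecord(String name, long lastRuntimeMillis, long lastFailureTimestamp) {
                this.name = name;
                this.lastRuntimeMillis = lastRuntimeMillis;
                this.lastFailureTimestamp = lastFailureTimestamp;
            }
        }

        static List<TestRecord> prioritize(List<TestRecord> tests) {
            List<TestRecord> ordered = new ArrayList<>(tests);
            ordered.sort(
                // Most recently failed tests run first ...
                Comparator.comparingLong((TestRecord t) -> t.lastFailureTimestamp).reversed()
                    // ... ties broken by shortest runtime, so cheap tests
                    // deliver their verdict before expensive ones.
                    .thenComparingLong(t -> t.lastRuntimeMillis));
            return ordered;
        }
    }

Running most-recently-failed tests first exploits the hotspot observation; breaking ties by shortest runtime means that, failure or not, most of the verdict arrives early in the run.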

JUnitMax not only provides valuable feedback for the developers using it; it also reports back exceptions occurring in its own codebase, providing automatic feedback for Kent to further improve the tool. He also actively approaches the JUnitMax user base to gather feedback, suggestions, and additional information. Although intended for Java projects, developers working in other JVM languages have successfully used JUnitMax to run their tests.

To see JUnitMax in action, Jason Gorman's Code Smells and SOLID principles screencast series is highly recommended. For interested users there is a mailing list.

The only issues with JUnitMax so far were running it inside STS on an AspectJ project, and conflicting resource access when the same tests, which used a shared setup, were also run manually.
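
The second issue is typical of fixtures that touch a shared external resource. The invented example below shows how a fixed file on disk, created in a @BeforeClass setup, can clash when the same tests run twice concurrently, once through JUnitMax and once manually:

    import static org.junit.Assert.assertTrue;

    import java.io.File;
    import java.io.IOException;

    import org.junit.BeforeClass;
    import org.junit.Test;

    public class SharedFixtureTest {

        private static final File SHARED = new File("build/shared-fixture.dat");

        @BeforeClass
        public static void createFixture() throws IOException {
            SHARED.getParentFile().mkdirs();
            // If a second, concurrent run deletes or rewrites this file
            // mid-run, assertions against it fail spuriously.
            assertTrue(SHARED.createNewFile() || SHARED.exists());
        }

        @Test
        public void fixtureIsAvailable() {
            assertTrue(SHARED.exists());
        }
    }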
