JMRI: Unit testing with JUnit

JUnit is a system for building "unit tests" of software. Unit tests are small tests that make sure that individual parts of the software do what they're supposed to do. In a distributed project like JMRI, where there are lots of developers in only loose communication with each other, unit tests are a good way to make sure that the code hasn't been broken by a change.

For more information on JUnit, see the JUnit home page. A very interesting example of test-based development is available from Robert Martin's book.

Some of the classes have JUnit tests available. It's good to add JUnit tests as you make changes (test your new functionality to make sure that it is working, and keeps working), when you have to figure out what somebody's code does (the test documents exactly what should happen!), and when you track down a bug (make sure it doesn't come back).

Running the Tests

To run the existing tests, say

   ant alltest

This will compile the test code, which lives in the "test" subdirectory of the "java" directory in our usual code distributions, and then run the tests under a GUI. (To make sure you've recompiled everything, you may want to do ant clean first)
If you know the name of your test class, or the test class for your package, you can run that directly with the "runtest" script:

   ant tests
   ./runtest.csh jmri.jmrit.powerpanel.PowerPanelTest

The first line compiles all the test code, and the second runs a specific test or test suite.
(Hint: How to set this up using IntelliJ)

Continuous Integration Test Execution

The continuous integration environment senses changes in the code repository, rebuilds the code, and performs a variety of checks. If no fatal issues are found, the continuous integration process then runs the "alltest" ant target against the successful build of the code base.

Error Reporting

If a test fails during the continuous integration execution of "alltest", an e-mail is sent to the jmri-builds e-mail list as well as to the developers who have checked in code that was included in the build.

You can subscribe to the jmri-builds e-mail list to get the bad news as quickly as possible, view the archives of that list to see past logs, or monitor the "dashboard" at the continuous integration web site.

(When the build succeeds, nothing is mailed, to cut down on traffic)

Code Coverage Reports

As part of running the tests, Jenkins accumulates information on how much of the code was executed, called the "code coverage". We use the JaCoCo tool to do the accounting. It provides detailed reports at multiple levels, from individual lines of code up through classes, packages, and the code base as a whole.

Writing Tests

By convention, we have a "test" class shadowing (almost) every real class. The "test" directory contains a tree of package directories parallel to the "src" tree. Each test class has the same name as the class to be tested, except with "Test" appended, and appears in the "test" source tree. For example, the "jmri.Version" class's source code is in "src/jmri/Version.java", and its test class "jmri.VersionTest" is found in "test/jmri/VersionTest.java".

There are additional classes which are used to group the test classes for a particular package into JUnit test suites.

Writing Additional Tests for an Existing Class

To write additional tests for a class with existing tests, first locate the test class. (If one doesn't exist, see the section below about writing tests for a new class)

If the test class has not been converted to JUnit4 yet, one or more test methods can be added to the class using the JUnit3 conventions. Basically, each method needs a name that starts with "test", e.g. "testFirst", and has to have a "public void" signature. JUnit will handle everything after that.
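
For example, a minimal JUnit3-style test method might look like this (a sketch; the method name and the hypothetical Foo.getValue() call are just for illustration):

    public void testGetValue() {
        // JUnit3 TestCase subclasses inherit the assert methods, so no Assert. prefix is needed
        assertEquals("unexpected value", 12, new Foo().getValue());
    }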

If the test class has been converted to JUnit4, the JUnit4 conventions require that each test method be preceded by the "@Test" annotation:

    @Test
    public void testSomething() {
        ...
    }

See the section on JUnit4 Migration for more information on JUnit4.

In general, test methods should be small, testing just one piece of the class's operation. That's why they're called "unit" tests.

Writing Tests for a New Class

To write a test for a new class, you need to create a file that shadows your new class. For our example, consider creating a test for a new class that appears in "src/jmri/jmrix/foo/Foo.java". The new test would be created in a file named "test/jmri/jmrix/foo/FooTest.java" .

Assuming that the Foo class has a default constructor Foo(), then the following would be minimal contents for the test/jmri/jmrix/foo/FooTest.java file:

    package jmri.jmrix.foo;

    import org.junit.After;
    import org.junit.Assert;
    import org.junit.Before;
    import org.junit.Ignore;
    import org.junit.Test;

    /**
     * Tests for the Foo class
     *
     * @author  Your Name   Copyright (C) 2016
     */
    public class FooTest {

       @Test
       public void testCtor() {
           Assert.assertNotNull("Foo Constructor Return", new Foo());
       }

       @Before
       public void setUp() {
          apps.tests.Log4JFixture.setUp();
       }

       @After
       public void tearDown() {
          apps.tests.Log4JFixture.tearDown();
       }
    }

You may also choose to copy an existing test file and make modifications to suit the needs of your new class. Please make sure you're copying a file in the new JUnit4 format, with the @Test statements, to keep us from having to update your new file later.

After the test class is created, it needs to be added to the package test for the package. In the case of our example, that would be the file test/jmri/jmrix/foo/PackageTest.java.

If the PackageTest has not been converted to JUnit4 format yet, the following line needs to be added to the list of test classes in the "suite" method:

    suite.addTest(new junit.framework.JUnit4TestAdapter(FooTest.class));

If the PackageTest has been converted to JUnit4 format, then "FooTest.class" needs to be added to the list of test classes in the @Suite.SuiteClasses annotation that appears before the beginning of the PackageTest class.

Writing Tests for a New Package

To write tests for a new package, in addition to writing tests for each class, you need to create a "PackageTest.java" file that calls your new tests. For our example, we will create the file "test/jmri/jmrix/foo/PackageTest.java" and have it call the tests in "test/jmri/jmrix/foo/FooTest.java".

The following would be minimal contents for the test/jmri/jmrix/foo/PackageTest.java file:

    package jmri.jmrix.foo;

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    /**
     * tests for the jmri.jmrix.foo package
     *
     * @author Your Name Copyright (C) 2016
     */
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
       FooTest.class
    })
    public class PackageTest {
    }

You may also choose to copy an existing test file and make modifications to suit the needs of your new package.

After the PackageTest class is created, it needs to be added to the PackageTest for the enclosing package. In the case of our example, the enclosing package test would be the file test/jmri/jmrix/PackageTest.java.

If the enclosing PackageTest has not been converted to JUnit4 format yet, the following line needs to be added to the list of test classes in the "suite" method:

    suite.addTest(new junit.framework.JUnit4TestAdapter(jmri.jmrix.foo.PackageTest.class));

If the enclosing PackageTest has been converted to JUnit4 format, then "jmri.jmrix.foo.PackageTest.class" needs to be added to the list of test classes in the @Suite.SuiteClasses annotation that appears before the beginning of the enclosing PackageTest class.
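
For example, after adding the entry, the enclosing test/jmri/jmrix/PackageTest.java might contain something like this (a sketch; the comment stands in for whatever test classes are already listed):

    @RunWith(Suite.class)
    @Suite.SuiteClasses({
       // ... the test classes already listed here ...
       jmri.jmrix.foo.PackageTest.class
    })
    public class PackageTest {
    }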

Key Test Metaphors

Handling Log4J Output From Tests

JMRI uses Log4j to handle logging of various conditions, including error messages and debugging information. Tests are intended to run without error or warning output, so that it's immediately apparent from an empty standard log that they ran cleanly.

Log4j usage in the test classes themselves has two aspects:

  1. It's perfectly OK to use log.debug(...) statements to make it easy to debug problems in test statements. log.info(...) can be used sparingly to indicate normal progress, because it's normally turned off when running the tests.
  2. In general, log.warn or log.error should only be used when the test then goes on to trigger a JUnit assertion or exception, because the fact that an error is being logged does not show up directly in the JUnit summary of results.

On the other hand, you might want to deliberately provoke errors in the code being tested to make sure that the conditions are being handled properly. This will often produce log.error(...) or log.warn(...) messages, which must be intercepted and checked.

To allow this, JMRI runs its tests with a special Log4j appender, which stores messages so that the JUnit tests can look at them before they are forwarded to the log. There are two aspects to making this work:

  1. All the test classes should include common code in their setUp() and tearDown() methods to ensure that Log4j is properly initialized, and that the custom appender is told when a test is beginning and ending.

        // The minimal setup for Log4j
        protected void setUp() throws Exception {
            super.setUp(); // Note: skip this line when using JUnit4
            apps.tests.Log4JFixture.setUp();
        }

        protected void tearDown() throws Exception {
            apps.tests.Log4JFixture.tearDown();
            super.tearDown(); // Note: skip this line when using JUnit4
        }
  2. When a test deliberately provokes a message, it should then check that the message was created. For example, if the class under test is expected to do
    
        log.warn("Provoked message");
    
    
    the invoking test case should follow that with the line:
    
        jmri.util.JUnitAppender.assertWarnMessage("Provoked message");
    
    

    It will be a JUnit error if a log.warn(...) or log.error(...) message is produced that isn't matched to a JUnitAppender.assertWarnMessage(...) call.

In any case, all of your main() routines should start with

    apps.tests.Log4JFixture.initLogging();

so that they can be run independently.

Note: Our CI test executables are configured to fail if any FATAL or ERROR messages are emitted instead of being handled. This means that although your tests may run successfully on your own computer while they're emitting ERROR messages, you won't be able to merge your code into the common repository until those messages are handled.

Resetting the InstanceManager

If you are testing code that is going to reference the InstanceManager, you should clear and reset it to ensure you get reproducible results.

Depending on what managers your code uses, your setUp() implementation should start with:


    super.setUp(); // Note: skip this line when using JUnit4
    apps.tests.Log4JFixture.setUp();
    jmri.util.JUnitUtil.resetInstanceManager();
    jmri.util.JUnitUtil.initInternalTurnoutManager();
    jmri.util.JUnitUtil.initInternalLightManager();
    jmri.util.JUnitUtil.initInternalSensorManager();

(You can omit the initialization of managers you don't need.) See the jmri.util.JUnitUtil class for the full list of available initializers, and please add more if you need one that's not there yet.

Your tearDown() should end with:


    jmri.util.JUnitUtil.resetInstanceManager();
    apps.tests.Log4JFixture.tearDown();
    super.tearDown();  // Note: skip this line when using JUnit4

Working with Listeners

JMRI is a multi-threaded application. Listeners for JMRI objects are notified on various threads. Sometimes you have to wait for that to take place.

If you want to wait for some specific condition to be true, e.g. receiving a reply object, you can use a waitFor method call which looks like:


    JUnitUtil.waitFor(()->{return reply!=null;}, "reply didn't arrive");

The first argument is a lambda closure, a small piece of code that'll be evaluated repeatedly until true. The String second argument is the text of the assertion (error message) you'll get if the condition doesn't come true in a reasonable length of time.

Waiting for a specific result is fastest and most reliable. If you can't do that for some reason, you can do a short time-based wait:


    JUnitUtil.releaseThread(this);

This uses a nominal delay.

Note that this should not be used to synchronize with Swing threads. See the Testing Swing Code section for that.

In general, you should not have calls to sleep(), wait() or yield() in your code. Use the JUnitUtil and JFCUtil support for those instead.

Working with Threads

(See a following section for how to work with Swing (GUI) objects and the Swing/AWT thread)

Some tests will need to start threads, for example to test signal controls or aspects of layout I/O.

Your tests must obey some general principles for reliable operation: wait until any threads you start have actually reached the expected state before checking results, and make sure they have terminated before the test ends.

For example, if creating a thread based on AbstractAutomaton, you can check that it has started with:


    AbstractAutomaton p = new MyThreadClass();
    p.start();
    JUnitUtil.waitFor(()->{return p.isRunning();}, "logic running");

and ensure termination with

    p.stop();
    JUnitUtil.waitFor(()->{return !p.isRunning();}, "logic stopped");

Please make sure your unit tests clean up after themselves! They should not leave any threads running. Any threads they start should have either terminated normally by the end of the test (don't let them just time out and crash later during some other test!) or you should add code to terminate them.

You can check whether you've left any threads running by editing the java/test/apps/tests/Log4JFixture.java and finding a line like:

        // checkThreads(false);  // true means stop on 1st extra thread

Uncomment that, compile, and run your tests. It will report any threads you've left running.

Testing I/O

Some test environments don't automatically flush I/O operations such as streams during testing. If you're testing something that does I/O, for example a TrafficController, you'll need to add "flush()" statements on all your output streams. (Having to wait a long time to make a test reliable is a clue that this is happening somewhere in your code)
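
For example, if your test feeds bytes to the code under test through an output stream, a minimal sketch (the ostream and testMessage names are illustrative) looks like:

    // ostream is the OutputStream that the code under test reads from
    ostream.write(testMessage);
    ostream.flush();   // force the bytes out so the receiving side actually sees them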

Temporary File Creation in Tests

Testcases which create temporary files must be carefully created so that there will not be any problems with file path, filesystem security, pre-existence of the file, etc. These tests must also be written in a way that will operate successfully in the continuous integration build environment. And the temporary files should not become part of the JMRI code repository.

Here are some ideas which can help avoid these types of problems.
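
For example, one way to keep temporary files isolated from the repository and automatically cleaned up is JUnit 4's TemporaryFolder rule (a sketch, not a JMRI-specific requirement; the class, file name, and test content are illustrative):

    import java.io.File;
    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.rules.TemporaryFolder;

    public class TempFileExampleTest {

        // each test gets a fresh directory that is deleted when the test ends
        @Rule
        public TemporaryFolder folder = new TemporaryFolder();

        @Test
        public void testWritesFile() throws Exception {
            File f = folder.newFile("sample.xml");
            // ... exercise the code that writes to f ...
        }
    }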

The above issues were identified via one testcase which executed properly on a Windows-based PC for both the "alltest" and "headlesstest" ant target, regardless of how many times it was run. In the continuous integration environment, the test ran properly the first time after it was checked in, but failed for every subsequent continuous integration environment execution of "headlesstest". Once the test was modified based on the temporary file recommendations shown here, the test became stable over multiple continuous integration executions of "headlesstest".

Testing Swing Code

AWT and Swing code runs on a separate thread from JUnit tests. Once a Swing or AWT object has been displayed (via show() or setVisible(true)), it cannot be reliably accessed from the JUnit thread. Even using the listener delay technique described above isn't reliable.

For the simplest possible test, displaying a window for manual interaction, it's OK to create and invoke a Swing object from a JUnit test. Just don't try to interact with it once it's been displayed!

Because we run tests in "headless" mode during the continuous integration builds, it's important that tests needing access to the screen start with:

    Assume.assumeFalse(GraphicsEnvironment.isHeadless());

or, if the Assume statement throws an exception in your environment, with:

    if (GraphicsEnvironment.isHeadless()) { return; }

This will run the test only when a display is available.

GUI tests should close windows when they're done, and in general clean up after themselves. If you want to keep windows around so you can manipulate them, e.g. for manual testing or debugging, you can use the jmri.demo system parameter to control that:


        if (System.getProperty("jmri.demo", "false").equals("false")) {
            // jmri.demo is not set, so close the window to clean up after the test
            myFrame.setVisible(false);
            myFrame.dispose();
        }

For many tests, you'll both make testing more reliable and improve the structure of your code by separating the GUI (Swing) code from the JMRI logic and communications. This lets you check the logic code separately, by invoking those methods directly and checking the state they update.

For more complicated GUI testing, two tools are generally used. The older one, JFCUnit, is no longer maintained, so we recommend that new tests be written with Jemmy. Note: only use one of these in your test class! Don't try to mix them.

Using Jemmy

Recently (e.g. 2016), some tests have been developed using the Jemmy tool. See e.g. the Speedometer tests for an example.

Locating GUI Items using Jemmy

Jemmy must be able to find the objects on the screen. Jemmy Operators are generally used to both locate and manipulate items on the screen.

Here are a few tips for locating items with Jemmy:
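
For example, a test can locate a frame by its title and push a button by its displayed text using the org.netbeans.jemmy.operators classes (a sketch; the "Speedometer" title and "Start" label are illustrative):

    // find the frame by (part of) its title, then a button inside it by its displayed text
    JFrameOperator frame = new JFrameOperator("Speedometer");
    JButtonOperator startButton = new JButtonOperator(frame, "Start");
    startButton.push();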

Using JFCUnit

JFCUnit can be used to control interactions with Swing objects.

For a very simple example of the use of JFCUnit, see the test/jmri/util/SwingTestCaseTest.java file.

To use JFCUnit, you first inherit your class from SwingTestCase instead of TestCase. This is enough to get basic operation of Swing tests; the base class pauses the test thread until Swing (actually, the AWT event mechanism) has completed all processing after every Swing call in the test. (For this reason, the tests will run much slower if you're e.g. moving the mouse cursor around while they're running.)
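
A minimal skeleton might look like this (a sketch, assuming the jmri.util.SwingTestCase base class mentioned above; the class and method names are illustrative):

    public class MyPanelTest extends SwingTestCase {

        // JUnit3-style test method; AWT/Swing event processing is flushed after each Swing call
        public void testShowPanel() {
            // create and display the panel, then check its state here
        }

        // required TestCase constructor
        public MyPanelTest(String s) {
            super(s);
        }
    }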

For more complex GUI testing, you can invoke various aspects of the interface and check internal state using test code.

Testing Script Code

JMRI ships with sample scripts. This section discusses how you can write simple tests for those to ensure they keep working.

Testing Jython sample scripts

Test scripts placed in jython/test are automatically invoked by java/test/jmri/jmrit/jython/SampleScriptTest.java.

See the jmri_bindings_test.py sample for syntax, including examples of how to signal test failures.

In the future, this could be extended to pick up files automatically, to support xUnit testing, etc.

Issues

JUnit uses a custom classloader, which can cause problems finding singletons and starting Swing. If you get an error about not being able to find or load a class, adding the missing class to the test/junit/runner/excluded.properties file may fix it.

As a test only, you can try setting the "-noloading" option in the main of whichever test class you're having trouble with:


    static public void main(String[] args) {
        String[] testCaseName = {"-noloading", LogixTableActionTest.class.getName()};
        junit.swingui.TestRunner.main(testCaseName);
    }

Please don't leave "-noloading" in place, as it prevents people from rerunning the test dynamically. Instead, the right long-term fix is to have all classes with JUnit loader issues included in the test/junit/runner/excluded.properties file. JUnit uses those properties to decide how to handle loading and reloading of classes.

Migrating to JUnit4

JUnit4 is a significant upgrade of the JUnit tool. It brings new capabilities, but also changes how tests are structured. As of this writing (Summer 2016), we're still supporting the existing test structures described above, while also using new capabilities in some new and migrated tests.

How, when and even if to migrate previously-existing test cases is a question that's still to be discussed and resolved.

The rest of this section discusses some aspects of moving to and using JUnit4.

We may want to consider:

Example of a JUnit4 test and corresponding PackageTest.java file; note the lack of a main() procedure and other traditional boilerplate code. These files are the result of this commit that migrated a test package to JUnit4.

Example of JUnit4 tests with a main() procedure

A possible set of steps for conversion:

  1. Change the imports. Typically, these can be removed:
    import junit.framework.Assert;
    import junit.framework.Test;
    import junit.framework.TestCase;
    import junit.framework.TestSuite;
    

    and you'll typically need
    import org.junit.After;
    import org.junit.Assert;
    import org.junit.Before;
    import org.junit.Ignore;
    import org.junit.Test;
    

  2. The test class no longer inherits from TestCase, so replace public class MyClassTest extends TestCase with public class MyClassTest.
  3. Mark test methods with the "@Test" annotation:
        @Test
        public void testSomething() {
            ...
        }
            
    
  4. Most of our test classes have a "from here down is testing infrastructure" comment, followed by setup code. Some of that can be removed. First, remove the class constructor, e.g.
        public MyClassTest(String s) {
            super(s);
        }
    
  5. Next, remove the main method if there is one, e.g.:
        // Main entry point
        static public void main(String[] args) {
            apps.tests.Log4JFixture.initLogging();
            String[] testCaseName = {"-noloading", MyClassTest.class.getName()};
            junit.textui.TestRunner.main(testCaseName);
        }
    
  6. Next, annotate the setUp() and tearDown() methods. Note: They have to be "public" methods, not "protected" or "private". They should end up looking like (plus your own content, of course):
        @Before
        public void setUp() {
              apps.tests.Log4JFixture.setUp();
        }
    
        @After 
        public void tearDown() {
              apps.tests.Log4JFixture.tearDown();
        }
    
    Make sure no references to super.setUp() or super.tearDown() remain in either setUp() or tearDown(); just remove those lines.
  7. Finally, replace the test suite definition. JUnit3 was normally used in the "run everything in this class automatically" style, with a method that looked like this:
        // test suite from all defined tests
        public static Test suite() {
            TestSuite suite = new TestSuite(MyClassTest.class);
            return suite;
        }
    

    If that's what you've got, just remove the whole block.
    If you've got more logic in the suite() routine, ask for help on the jmri-developers list. An example of the result of migrating a more complex case:
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
       CMRISystemConnectionMemoTest.class,
       jmri.jmrix.cmri.serial.PackageTest.class})
    
  8. In the PackageTest.java for this test package, in the Test suite() method, replace the line
    suite.addTest(MyClassTest.suite());
    with
    suite.addTest(new junit.framework.JUnit4TestAdapter(MyClassTest.class));
    leaving the order of tests unchanged to prevent possible side effects.
  9. Run the tests and check that all your tests were successfully included and run.

That's it! You've successfully migrated to native JUnit4.