JUnit, if'n you don't know, is a popular unit-testing framework for Java (See http://www.junit.org/.) Using JUnit, you can easily write automated tests for individual Java classes and methods. A typical large project may have thousands or tens of thousands of individual JUnit tests. Here at the Ranch, we use JUnit for testing durn near everything. We like to run our tests often, to make sure our code is alive and kickin'.
Many projects that use JUnit run their automated test suites as part of a nightly build process on multiple machines. Testing on multiple platforms is obviously a good idea, because it helps you find platform-dependent problems. Most projects should test on Windows, Linux, and Solaris as a bare minimum. For some software, testing on workstations from SGI, HP, and IBM makes sense, too -- and perhaps multiple versions of all of these operating systems, as well.
Not every project can afford a whole herd of dedicated test machines, however. Often, test machines are shared between projects, and sometimes test machines are simply desktop or server machines that have some other primary purpose.
In this situation, it may not be possible to set up all the test machines with an ideal testing environment. Indeed, some of the tests might not pass on all the machines, for predictable reasons. This is the problem Tex was facing. Pull up a camp stool and let me tell you about it.
Tex was worried about his herd. He was riding out on the range, rounding up the scruffy herd of test servers. Every one was different, and truth be told, they didn't even all belong to him. He had to beg, borrow, and steal time on many of these machines so that he could test his Java software on many different platforms. His prize cattle, ah, servers, had ideal test environments set up. But the others -- the rare exotic breeds like the IRIX and HP-UX servers -- didn't have the right JDK versions. Other little dogies were missing some of the external software his application used -- it was just too durn expensive to buy for every machine in the herd.
As a result, some of his JUnit tests failed on each machine. All the failures were expected, and due to the variation between the different breeds. But he was always scratchin' his head trying to keep all of these peccadilloes straight.
I'm fixin' to help Tex out. Saddle up and come give us a hand, as we extend JUnit so that each test case gets to tell JUnit whether or not it should be run on any given heifer, err, server.
To write a test in JUnit, you extend the junit.framework.TestCase class and implement one or more methods named testXXXX, where XXXX describes the functionality under test. By default, the JUnit framework will create one instance of your class for each testXXXX method, and invoke these methods via reflection. Each invocation represents a single test. The TestCase class also contains methods called setUp and tearDown. You can override these to set up and dismantle a test scaffold for each test; JUnit will call them immediately before and immediately after calling testXXXX, respectively.
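For example, here's what a simple test case might look like. (The Account class and its methods are made up just for illustration.)

package com.javaranch.junit;

import junit.framework.TestCase;

public class AccountTest extends TestCase {

    private Account account;   // Account is a made-up class under test

    public AccountTest(String name) {
        super(name);
    }

    // JUnit calls this immediately before each testXXXX method.
    protected void setUp() {
        account = new Account(100);
    }

    // ...and this immediately after each testXXXX method.
    protected void tearDown() {
        account = null;
    }

    public void testDeposit() {
        account.deposit(50);
        assertEquals(150, account.getBalance());
    }
}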
Your class will inherit a fairly large API from TestCase. The inherited methods fall into two categories: methods that let you make testing assertions, which are documented elsewhere (see the book JUnit in Action), and the lesser-known methods that let you extend the functionality of JUnit itself. We'll look at one such method here:
public void run(TestResult result);
JUnit calls this method to run the TestCase. The default implementation turns around and calls a method
public void run(TestCase test);
on the TestResult object. The TestResult arranges to record the success or failure of the test, keeps track of the number of tests actually run, and then turns around and calls
public void runBare() throws Throwable;
on the TestCase. This method, finally, is the one that actually calls the setUp method, invokes the testXXXX method by reflection, and calls tearDown.
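Put together, the whole dance looks roughly like this. (This is a simplified sketch of the JUnit 3.x sources; the real TestResult wraps the call to runBare so that failures and errors get recorded instead of escaping.)

// On TestCase:
public void run(TestResult result) {
    result.run(this);
}

// On TestResult (simplified; the real version records
// failures and errors around this call):
public void run(TestCase test) {
    startTest(test);
    test.runBare();
    endTest(test);
}

// On TestCase:
public void runBare() throws Throwable {
    setUp();
    try {
        runTest();   // invokes the testXXXX method by reflection
    } finally {
        tearDown();
    }
}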
This sequence of calls is complicated, but it gives us lots of chances to stick in our own customized code. Because JUnit is so flexible, it's surprising how little code you need to write to add your own features.
What we want to do is, somehow, help Tex out by having JUnit ask each TestCase object whether or not it should be expected to pass in the current environment. Let's extend TestCase to provide our own customized base class for tests. We'll add a canRun method that returns true if the test should run in the current environment and false if it shouldn't. We also need to provide a constructor that takes a test name as an argument; JUnit needs this constructor to put the tests together.
package com.javaranch.junit;

import junit.framework.*;

public class JRTestCase extends TestCase {

    public JRTestCase(String s) {
        super(s);
    }

    public boolean canRun() {
        return true;
    }
}
Now, when you write a JUnit test case, you can extend this class and override canRun. Here's a test case that only makes sense when run under Windows:
package com.javaranch.junit;

public class WindowsOnlyTest extends JRTestCase {

    public WindowsOnlyTest(String s) {
        super(s);
    }

    public boolean canRun() {
        return System.getProperty("os.name")
                .toLowerCase().indexOf("win") != -1;
    }

    public void testWindowsFeature() {
        // Should run this test only on Windows
    }
}
That's great, but of course JUnit doesn't yet care that we've defined the canRun method, and will run these tests on all platforms anyway. We can make running a test conditional on the result of calling canRun by overriding the run method from TestCase in JRTestCase:
public void run(TestResult result) {
    if (canRun()) {
        super.run(result);
    }
}
That's it! Now if we run this test on Windows, we'll see
C:\JavaRanch> java -classpath .;junit.jar ^
    junit.textui.TestRunner com.javaranch.junit.WindowsOnlyTest
.
Time: 0.006

OK (1 test)
But if we run it on Linux, we'll see
[ejfried@laptop JavaRanch] % java -classpath .:junit.jar \
    junit.textui.TestRunner com.javaranch.junit.WindowsOnlyTest

Time: 0.001

OK (0 tests)
You can use this test class as part of a TestSuite and run it on the whole herd of test servers, and the Windows-only tests will only run on the runty little calves.
When you override canRun, you can make the implementation as complicated as you want. In particular, you can make it return true for some tests and false for others. The getName method of TestCase returns the name of the testXXXX method that a particular TestCase object will run. You might write a canRun method like this:
public boolean canRun() {
    if (getName().equals("testOracle")) {
        return OracleUtils.oracleIsAvailable();
    } else {
        return true;
    }
}
Using this canRun method, the testOracle test method will only be run if the Oracle database is available; otherwise, it will be skipped.
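The OracleUtils helper isn't part of JUnit or of the code above; it's whatever your project uses to check for the database. One plausible sketch, assuming the standard Oracle JDBC thin driver and placeholder connection details, might be:

package com.javaranch.junit;

import java.sql.Connection;
import java.sql.DriverManager;

public class OracleUtils {

    // Placeholder host, SID, and credentials; use your own corral's values.
    private static final String URL =
            "jdbc:oracle:thin:@dbhost:1521:ORCL";

    public static boolean oracleIsAvailable() {
        try {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Connection conn =
                    DriverManager.getConnection(URL, "scott", "tiger");
            conn.close();
            return true;
        } catch (Exception e) {
            return false;   // no driver or no database: skip the test
        }
    }
}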
You can make all of your test classes extend JRTestCase, or only some of them. Because only JRTestCase itself knows about the canRun method, you can freely mix and match JRTestCase and TestCase objects in the same TestSuite. You can use JRTestCase only for those tests that are picky about when they should run.
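For instance, a suite that rounds up the whole herd might look like this. (PlainTest and OracleTest are hypothetical stand-ins for your own test classes.)

package com.javaranch.junit;

import junit.framework.Test;
import junit.framework.TestSuite;

public class AllTests {

    public static Test suite() {
        TestSuite suite = new TestSuite("The whole herd");
        suite.addTestSuite(PlainTest.class);        // an ordinary TestCase
        suite.addTestSuite(WindowsOnlyTest.class);  // a picky JRTestCase
        suite.addTestSuite(OracleTest.class);       // another picky one
        return suite;
    }
}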
Using JRTestCase, Tex can write tests that know about their external dependencies. His test suite will now pass at 100% on every server in the herd -- although on some servers, it'll be a mite smaller. Note that JUnit will report the actual number of tests run on each server.
It's easy to add features to JUnit by taking advantage of the rich API provided by its framework classes, and especially by TestCase. This article is barely a drop in the bucket compared to what's possible. I hope I've got you fired up to have a look at the JUnit API documents and see what all else you might have a hankerin' to cook up.
Download all of the code from this article.