Testing

  1. Writing Tests
  2. Excluding Tests and Ignoring Failures
  3. Running Tests
  4. Integration Tests
  5. Using Setup and Teardown
  6. Testing Your Build
  7. Behaviour-Driven Development

Untested code is broken code, so we take testing seriously. Off the bat you get to use either JUnit or TestNG for writing unit tests and integration tests. And you can also add your own framework, or even script tests using Ruby. But first, let’s start with the basics.

Writing Tests

Each project has a TestTask that you can access using the test method. The name reflects the fact that each project has one task responsible for getting the tests to run and acting on the results; in practice, several tasks do the work, and the test task coordinates them.

The first two tasks to execute are test.compile and test.resources. They work similarly to compile and resources, but use a different set of directories. For example, Java tests compile from the src/test/java directory into the target/test/classes directory, while resources are copied from src/test/resources into target/test/resources.

The test.compile task will run the compile task first, then use the same dependencies to compile the test classes. That much you already assumed. It also adds the test framework (e.g. JUnit, TestNG) and JMock to the dependency list. Less work for you.

If you need more dependencies, the best way to add them is by calling test.with. This method adds dependencies to both compile.dependencies (for compiling) and test.dependencies (for running). You can manage these two dependency lists separately, but using test.with is good enough in most cases.
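
For example, here is a minimal sketch of a project definition that adds an extra test library; the project name and artifact coordinates are only illustrative:

define 'killerapp' do
  # test.with adds the artifact to the test dependency lists, for both compiling and running the tests.
  test.with 'org.hamcrest:hamcrest-core:jar:1.3'
end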

Once compiled, the test task runs all the tests.

Different languages use different test frameworks. You can find out more about available test frameworks in the Languages section.
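
If you want to pick a framework explicitly rather than rely on the default, you can name it with test.using; for example, assuming a Java project where you prefer TestNG over JUnit:

test.using :testng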

Excluding Tests and Ignoring Failures

If you have a lot of tests that are failing or just hanging there collecting dust, you can tell Buildr to ignore them. You can either tell Buildr to only run specific tests, for example:

test.include 'com.acme.tests.passing.*'

Or tell it to exclude specific tests, for example:

test.exclude '*FailingTest', '*FailingWorseTest'

Note that we’re always using the package-qualified class name, and you can use a star (*) to substitute for any set of characters.

When tests fail, Buildr fails the test task. This is usually a good thing, but you can also tell Buildr to ignore failures by resetting the :fail_on_failure option:

test.using :fail_on_failure=>false

Besides giving you a free pass to ignore failures, you can use it for other purposes, for example, as a gentle reminder:

test do
  warn "Did you forget something?" if test.tests.nil? || test.tests.empty?
end

The tests collection holds the names of all classes with tests, if any, and the classes collection holds the names of all test classes. We’ll let you imagine creative uses for these two.
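
As a purely illustrative sketch, you could use them to report how many test classes the task picked up:

test do
  puts "Found #{test.classes.size} test classes"
end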

Running Tests

It’s a good idea to run tests every time you change the source code, so we wired the build task to run the test task at the end of the build. And conveniently enough, the build task is the default task, so another way to build your code changes and run your tests is simply:

$ buildr

That only works with the local build task and any local task that depends on it, like package, install and upload. Each project also has its own build task that does not invoke the test task, so buildr build will run the test cases, but buildr foo:build will not.

While it’s a good idea to always run your tests, it’s not always possible. There are two ways you can keep build from running the test task. You can set the environment variable test to no (skip and off will also work). You can do that when running Buildr:

$ buildr test=no

Or set it once in your environment:

$ export TEST=no
$ buildr

If you’re feeling really adventurous, you can also disable tests from your Buildfile or buildr.rb file, by setting options.test = false. We didn’t say it’s a good idea; we’re just giving you the option.
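
If you do go that route, it’s a single line in your Buildfile or buildr.rb:

# Skip tests on every build; same effect as running with test=no.
options.test = false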

The test task is just smart enough to run all the tests it finds, but it also accepts include/exclude patterns. Often enough you’re only working on one broken test and you only want to run that one test. Rather than changing your Buildfile, you can run the test task with a pattern. For example:

$ buildr test:KillerAppTest

Buildr will then run only tests that match the pattern KillerAppTest. It uses pattern matching, so test:Foo will run com.acme.FooTest and com.acme.FooBarTest. With Java, you can use this to pick a class name, or a package name to run all tests in that package, or any such combination. In fact, you can specify several patterns separated with commas. For example:

$ buildr test:FooTest,BarTest

Buildr always runs tests that match the pattern, even if nothing has changed. If you want to re-run all the tests even though your sources have not changed, you can execute:

$ buildr test:*

You can exclude tests by preceding them with a minus sign (‘-’):

$ buildr test:-Bar

The above would run all tests except those with a name containing Bar. Exclusions can be combined with inclusions:

$ buildr test:Foo,-Bar

Buildr would then run tests with names containing Foo but not Bar.

As you probably noticed, Buildr will stop your build at the first test that fails. We think it’s a good idea, except when it’s not. If you’re using a continuous build system, you’ll want a report of all the failed tests without stopping at the first failure. To make that happen, set the environment variable test to “all”, or the Buildr options.test option to :all. For example:

$ buildr package test=all

We’re using package and not build above. When using a continuous build system, you want to make sure that packages are created, contain the right files, and also run the integration tests.

During development, if you want to re-run only tests that have failed during the last test execution, you can execute:

$ buildr test:failed

One last note on running tests. By default, when you run tests, Buildr will automatically run all transitive test dependencies. This means that if you run “buildr test” inside project bar and bar depends on project foo, Buildr will first run the tests in project foo if there have been any changes affecting foo that haven’t been taken into account yet. This behavior often surprises people, especially when they are trying to get things done and only care about the tests in bar at that moment. For those times when you’d like to focus your testing on specific projects, Buildr has the only option, which runs tests only for the projects specified on the command line:

$ buildr test=only bar:test

Integration Tests

So far we talked about unit tests. Unit tests run in isolation against the specific project they test, in an isolated environment, generally with minimal setup and teardown. You got a sense of that earlier, when we told you that tests run at the end of the build task and that JMock is included in the dependency list.

In contrast, integration tests are run with a number of components, in an environment that resembles production, often with more complicated setup and teardown procedures. In this section we’ll talk about the differences between running unit and integration tests.

You write integration tests much the same way as you write unit tests, using test.compile and test.resources. However, you need to tell Buildr that these tests should execute during integration testing. To do so, add the following line to your project definition:

test.using :integration

Typically you’ll use unit tests in projects that create internal modules, such as JARs, and integration tests in projects that create components, such as WARs and EARs. You only need to use the :integration option with the latter.
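
Here is a sketch of how that might look, using hypothetical project names. The JAR sub-project keeps regular unit tests, while the WAR sub-project marks its tests as integration tests:

define 'killerapp' do
  project.version = '1.0.0'
  project.group   = 'com.acme'

  define 'core' do
    package :jar             # internal module: its unit tests run as part of build
  end

  define 'webapp' do
    test.using :integration  # these tests only run during integration testing
    package :war
  end
end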

To run integration tests on the current project:

$ buildr integration

You can also run specific test cases, for example:

$ buildr integration:ClientTest

If you run the package task (or any task that depends on it, like install and upload), Buildr will first run the build task and all its unit tests, and then create the packages and run the integration tests. That gives you full coverage for your tests and ready-to-release packages. As with unit tests, you can set the environment variable test to “no” to skip integration tests, or “all” to ignore failures.

Using Setup and Teardown

Some tests need you to set up an environment before they run and tear it down afterwards. The test frameworks (JUnit, TestNG) allow you to do that for each test. Buildr provides two additional mechanisms for dealing with more complicated setup and teardown procedures.

Integration tests run a setup task before the tests, and a teardown task afterwards. You can use the setup task to set up a Web server for testing your Web components, or a database server for testing persistence. You can access either task by calling integration.setup and integration.teardown. For example:

integration.setup { server.start ; server.deploy }
integration.teardown { server.stop }

Depending on your build, you may want to enhance the setup/teardown tasks from within a project, for example, to populate the database with data used by that project’s tests, or from outside the project definition, for example, to start and stop the Web server.

Likewise, each project has its own setup and teardown tasks that are run before and after tests for that specific project. You can access these tasks using test.setup and test.teardown.
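
For example, here is a hypothetical project that creates a scratch directory before its own tests and removes it afterwards:

define 'killerapp' do
  test.setup    { mkdir_p _('target/test-data') }  # runs before this project's tests
  test.teardown { rm_rf _('target/test-data') }    # runs after this project's tests
end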

Testing Your Build

So you got the build running and all the tests pass; the binaries are shipping when you find out about some glaring omissions. The license file is empty, the localized messages for Japanese are missing, and the CSS files are not where you expect them to be. The fact is, some errors slip by unit and integration tests. So how do we make sure the same mistake doesn’t happen again?

Each project has a check task that runs just after packaging. You can use this task to verify that your build created the files you wanted it to create. And to make it extremely convenient, we introduced the notion of expectations.

You use the check method to express an expectation. Buildr will then run all these expectations against your project, and fail at the first expectation that doesn’t match. An expectation says three things. Let’s look at a few examples:

check package(:war), 'should exist' do
  it.should exist
end
check package(:war), 'should contain a manifest' do
  it.should contain('META-INF/MANIFEST.MF')
end
check package(:war).path('WEB-INF'), 'should contain files' do
  it.should_not be_empty
end
check package(:war).path('WEB-INF/classes'), 'should contain classes' do
  it.should contain('**/*.class')
end
check package(:war).entry('META-INF/MANIFEST.MF'), 'should have license' do
  it.should contain(/Copyright \(C\) 2007/)
end
check file('target/classes'), 'should contain class files' do
  it.should contain('**/*.class')
end
check file('target/classes/killerapp/Code.class'), 'should exist' do
  it.should exist
end

The first argument is the subject, or the project itself if you skip that argument. The second argument is the description; it’s optional, but we recommend using it. The it method returns the subject.

You can also write the first expectation like this:

check do
  package(:jar).should exist
end

We recommend using the subject and description; they make your build easier to read and maintain, and they produce better error messages.

There are two methods you can call on just about any object, called should and should_not. Each method takes an argument, a matcher, and executes that matcher. If the matcher returns false, should fails. You can figure out what should_not does in the same case.

Buildr provides the following matchers:

Method    Checks that …
exist     Given a file task, checks that the file (or directory) exists.
empty     Given a file task, checks that the file (or directory) is empty.
contain   Given a file task referencing a file, checks its contents using a string or regular expression. Given a file task referencing a directory, checks that it contains the specified files; glob patterns using * and ** are allowed.

All these matchers operate against a file task. If you run them against a ZipTask (including JAR, WAR, etc) or a TarTask, they can also check the contents of the archive. And as you can see in the examples above, you can also run them against a path in an archive, checking its contents as if it was a directory, or against an entry in an archive, checking the content of that file.

The package method returns a package task based on packaging type, identifier, group, version and classifier. The last four are inferred, but if you create a package with different specifications (for example, you specify a classifier) your checks must call package with the same qualifying arguments to return the very same package task.
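
For example, if the project packages a JAR with a hypothetical sources classifier, the check has to name that classifier as well:

package :jar, :classifier=>'sources'

check package(:jar, :classifier=>'sources'), 'should exist' do
  it.should exist
end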

Buildr expectations are based on RSpec. RSpec is the behaviour-driven development framework we use to test Buildr itself. Check the RSpec documentation if you want to see all the supported matchers, or if you want to write your own.

Behaviour-Driven Development

Buildr supports several Behaviour-Driven Development (BDD) frameworks for testing your projects. Buildr follows each framework’s naming conventions, searching for files under the src/spec/{lang} directory.

You can learn more about each BDD framework in the Languages section.

Next, let’s talk about customizing your environment and using profiles.