The Rust test suite


The Rust test suite has several sets of tests for different purposes. Because the compiler is built in multiple stages, with varying host and target combinations and with different debugging and profiling settings, the tests can be run in many different ways.


These options can be combined. For instance, make check CHECK_IGNORED=1 TESTNAME=test/run-pass/ runs the ignored tests in the run-pass directory.

Language and compiler tests

These are tests of the compiler against Rust source code. They typically have a main function that takes no arguments and may have directives that instruct the test runner how to run the test. These tests may be compiled and executed, pretty-printed, jitted, etc. depending on the test configuration.

The test runner for these tests is at src/test/compiletest and is compiled to test/compiletest.stage[N].

A typical test might look like:

// ignore-pretty 'bool' doesn't pretty-print (#XXX)
// Regression test for issue #YYY

fn main() {
   let a: bool = 10; //~ ERROR mismatched types
}

There are seven different modes for compile tests. Each test is run under one or more modes:

Valid directives include:

There are eight directories containing compile tests, living in the src/test directory:

And finally, build targets:

Specifying the expected errors and warnings

When writing a compile-fail test, you must specify at least one expected error or warning message. The preferred way to do this is to place a comment with the form //~ ERROR msg or //~ WARNING msg on the line where the error or warning is expected to occur. You may have as many of these comments as you like. The test harness will verify that the compiler reports precisely the errors/warnings that are specified, no more and no less. An example of using the error/warning messages is:

// Regression test for issue #XXX

fn main() {
   let a: bool = 10; //~ ERROR mismatched types
   log(debug, b);
}

In fact, this test would fail, because there are two errors: the type mismatch and the undefined variable b.

Sometimes it is not possible or not convenient to place the //~ comment on precisely the line where the error occurs. For those cases, you may make a comment of the form //~^, where the caret ^ indicates that the error is expected to appear on the line above. You may use as many carets as you like, so //~^^^ ERROR foo indicates that the error message foo is expected to be reported 3 lines above the comment. We could therefore correct the above test like so:

// Regression test for issue #XXX

fn main() {
   let a: bool = 10; //~ ERROR mismatched types
   log(debug, b);
   //~^ ERROR undefined variable `b`
}

The older technique for specifying error messages was to use an error-pattern directive. These directives are placed at the top of the file and each message found in an error-pattern directive must appear in the output.
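In the older style, the same test might be written like this (a sketch only; the message text is illustrative, reusing the example above):

```rust
// error-pattern:mismatched types

fn main() {
    let a: bool = 10;
}
```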

Using error comments is preferred, however, because it is a more thorough test: the harness checks that each message is reported on the expected line, whereas an error-pattern directive only checks that the message appears somewhere in the output.

Multi-crate testing

Sometimes it is useful to write tests that make use of more than one crate. There is limited support for this scenario. You can add modules to the src/test/auxiliary directory; these files are not built or tested directly. Instead, you write a main test in one of the other directories (run-pass, compile-fail, etc.) and add an aux-build directive at the head of the main test. When running the main test, the test framework builds the files it is directed to build from the auxiliary directory. These builds must succeed or the test will fail. You can then add use declarations to make use of the byproducts of these builds as you wish.

An example consisting of two files:

  // auxiliary/cci_iter_lib.rs
  pub fn iter<T>(v: &[T], f: fn(&T)) { ... }

  // the main test
  // aux-build:cci_iter_lib.rs
  extern crate cci_iter_lib;
  fn main() {
      cci_iter_lib::iter(&[1, 2, 3], |i| { ... });
  }
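Outside the test harness, the pattern the auxiliary crate provides (applying a closure to each element of a slice) can be sketched as a single self-contained file; iter here is a stand-in for the library function, not the real cci_iter_lib API:

```rust
// Stand-in for the auxiliary crate's function: apply f to each element of v.
fn iter<T>(v: &[T], mut f: impl FnMut(&T)) {
    for x in v {
        f(x);
    }
}

fn main() {
    let mut sum = 0;
    // Sum the elements by mutating a captured variable.
    iter(&[1, 2, 3], |i| sum += *i);
    assert_eq!(sum, 6);
    println!("sum = {}", sum);
}
```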


Unit Tests

Most crates include unit tests which are part of the crate they test. These crates are built with the --test flag and run as part of make check.

libcore has its tests in a separate crate, named libcoretest.

All tests in a module should go in an inner module named test, with the attribute #[cfg(test)]. Placing tests in their own module is a practical matter: because test cases are not included in normal builds, building with --test requires a different set of imports than building without it, and that causes ‘unused import’ errors.

use std::option;

fn do_something() { ... }

#[cfg(test)]
mod test {
   use std::vec;

   fn helper_fn() { ... }

   #[test]
   fn resolve_fn_types() { ... }
}
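As a concrete, runnable sketch of this layout (the function and test names here are invented for illustration):

```rust
// Library code: included in every build.
pub fn double(x: i32) -> i32 {
    x * 2
}

// Test code: compiled only when building with --test.
#[cfg(test)]
mod test {
    use super::double;

    #[test]
    fn double_doubles() {
        assert_eq!(double(21), 42);
    }
}

fn main() {
    // In a normal build, the test module above is simply absent.
    assert_eq!(double(2), 4);
    println!("double(2) = {}", double(2));
}
```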

Build targets

Documentation tests

The build system is able to extract Rust code snippets from documentation and run them using the compiletest driver. Currently the tutorial and reference manual are tested this way. The targets are make check-stage[N]-doc-tutorial and make check-stage[N]-doc-rust, respectively. There are also several auxiliary guides; to run the tests extracted from them, do:

Crate API docs are tested as well:

To run all doc tests, use make check-stage[N]-doc.

Minimal (but faster) checking on windows

Because Windows has slow process spawning, running make check on that platform can take a long time. For this reason there is a make check-lite target that the Windows build servers run to keep cycle time down. This is a stripped-down target that checks only run-pass, run-fail, compile-fail, run-make, and the target libraries.

Benchmarks, saved metrics and ratchets

All benchmark metrics are saved by default. Depending on configuration, some benchmark metrics are ratcheted. The codegen compile tests are always ratcheted if they run, since they are deterministic and low-noise. See [[Doc unit testing]] for details on metrics and ratchets.