After watching Ryan Davis present a way to write a test framework from scratch, I knew I wanted to try that on my own. Before I could get to it, I stumbled upon a video of Kent Beck writing a small self-tested test library in CoffeeScript. In it, Kent Beck demonstrates the effect immediate feedback has on a test-driven workflow, but it was the self-testing aspect that fascinated me.
I set out to write a self-tested Ruby library from scratch, describing the process as I go. You can follow along with the project's git repository.
Assertions
Assertions are the building blocks of this test library. We start with the assert function, which takes a boolean parameter. It does nothing if the parameter evaluates to true and complains if it evaluates to false. In Ruby, raising an exception is one way to signal failure. It is convenient for assert since uncaught exceptions interrupt the program. Incidentally, I named the library Raisin since we will be raisin' exceptions.
As we said, assert does nothing if the provided value is true. Let's write our very first failing test:
assert(true)
puts "Success!"
puts "Success!"
should stay at the very bottom of the file as we add new code. Run the script:
ruby raisin.rb
It does not print "Success!" yet, because assert
is undefined. As an aside, I defined ,t
as a shortcut in vim (in normal mode press comma, then press "t") to run the tests, i.e. the current file as the code is self-tested.
:noremap <leader>t :!ruby %<CR>
Defining the function with an empty implementation makes the test pass:
def assert(condition)
end
Now let's cover the other intended behaviour: parameters that do not evaluate to true cause an exception.
class AssertionError < StandardError
end
class NothingRaised < StandardError
end
raised = false
begin
assert(false)
rescue AssertionError
raised = true
end
raise NothingRaised unless raised
We define AssertionError for use within our assertions. That way we have meaningful error messages: we know specifically that an assertion failed and not something else in the code. In a similar manner, we define NothingRaised, but just for the purposes of this test. If the assertion raises AssertionError as it should, execution continues and we print "Success!"; otherwise we raise NothingRaised so we know something is wrong.
We run the script to confirm the test failure:
raisin.rb:18:in `<main>': NothingRaised (NothingRaised)
Then we implement assert to make the test pass:
def assert(condition)
raise AssertionError unless condition
end
(Commit eb31d6e)
assert_equal is an assertion that I use most of the time. We can implement this on top of assert. Add an empty implementation and a failing test:
def assert_equal(expected, actual)
end
assert_equal(true, true)
raised = false
begin
assert_equal("foo", "bar")
rescue AssertionError
raised = true
end
raise NothingRaised unless raised
We are repeating the structure of that second test, but we can refactor that after we make it pass.
def assert_equal(expected, actual)
assert(expected == actual)
end
Let's now refactor the verification that AssertionError was raised.
def test_failure(&block)
raised = false
begin
yield
rescue AssertionError
raised = true
end
raise NothingRaised unless raised
end
test_failure do
assert(false)
end
test_failure do
assert_equal("foo", "bar")
end
test_failure takes a block so that we can execute it at the right spot in the test. If we took the assertion as an argument (test_failure(assert(false))), then the assertion would be evaluated before the code in test_failure and we wouldn't be able to catch the AssertionError. Since test_failure is a helper function in our own tests, we are not going to write a test for it, but we still want to test it manually. Comment out the implementation of assert and confirm that the tests fail:
def assert(condition)
#raise AssertionError unless condition
end
test_failure do
assert(false)
end
raisin.rb:14:in `test_failure': NothingRaised (NothingRaised)
from raisin.rb:23:in `<main>'
All good.
(Commit d6b9a3e)
In the case of a failing assert_equal, we should get more information than just AssertionError. Exceptions optionally carry messages. We can use those to convey details about assertion failures.
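As a quick refresher on plain Ruby (a standalone snippet, not part of the library), raise accepts an optional message that travels with the exception:

begin
  raise StandardError, "something went wrong"
rescue StandardError => error
  puts error.message # => something went wrong
end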
def assert(condition, failure_reason = nil)
unless condition
raise AssertionError, failure_reason
end
end
Now we can update assert_equal to report what went wrong:
def assert_equal(expected, actual)
message = "Expected #{expected.inspect}, got #{actual.inspect}"
assert(expected == actual, message)
end
Let's try it:
assert_equal("foo", "bar")
raisin.rb:19:in `assert': Expected "foo", got "bar" (AssertionError)
from raisin.rb:32:in `assert_equal'
from raisin.rb:41:in `<main>'
Without the calls to inspect, the printed message would be Expected foo, got bar, which is less readable than when the strings are quoted.
Do we test the message? I feel like that would be testing Ruby itself, which is not very useful. What we can do is test the message format later if we decide to add pretty printing.
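If we ever do, such a test could capture the exception and inspect its message. A sketch (hypothetical, not added to the suite):

# Hypothetical message-format test.
begin
  assert_equal("foo", "bar")
rescue AssertionError => error
  assert_equal('Expected "foo", got "bar"', error.message)
end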
(Commit 0b9fa6b)
Test suites
Let's declare a test suite that uses our library and run it.
def greet(name = nil)
['Hello', name].compact.join(", ") + "!"
end
[
->() { assert_equal("Hello, Bob!", greet("Bob")) },
->() { assert_equal("Hello!", greet) }
].map(&:call)
Here, each lambda is a test case. The problem with lambdas is that they close over the surrounding scope, so they leak state between tests. Consider these tests:
x = 1
[
->() { assert_equal 1, x },
->() { x = x + 1; assert_equal 2, x },
->() { assert_equal 1, x }
].map(&:call)
The third test fails:
raisin.rb:19:in `assert': Expected 1, got 2 (AssertionError)
from raisin.rb:32:in `assert_equal'
from raisin.rb:63:in `block in <main>'
from raisin.rb:64:in `map'
from raisin.rb:64:in `<main>'
Methods keep state isolated. We will represent test cases as methods.
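To see why (a throwaway demonstration, not part of the library): a method body in Ruby cannot even see the surrounding local variables, so there is nothing to leak.

x = 1

def try_to_leak
  x = 2 # a brand new local variable; the outer x is invisible here
end

try_to_leak
assert_equal(1, x) # passes: the outer x is untouched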
def test_with_name
assert_equal("Hello, Bob!", greet("Bob"))
end
def test_without_name
assert_equal("Hello!", greet)
end
test_with_name
test_without_name
Now our tests also have names to convey their purpose. The downside is that we need to call each function. We can overcome that with a little meta-programming.
class GreetingTestSuite
def test_with_name
assert_equal("Hello, Bob!", greet("Bob"))
end
def test_without_name
assert_equal("Hello!", greet)
end
end
suite = GreetingTestSuite.new
suite.public_methods(false).each do |test|
suite.send(test)
end
We take advantage of the public_methods method. We pass it false to only get methods that we have defined and not superclass methods. Compare:
suite.public_methods
[:test_with_name, :test_without_name, :instance_of?, :public_send, :instance_variable_get, :instance_variable_set, :instance_variable_defined?, :remove_instance_variable, :private_methods, :kind_of?, :instance_variables, :tap, :method, :public_method, :singleton_method, :is_a?, :extend, :define_singleton_method, :to_enum, :enum_for, :<=>, :===, :=~, :!~, :eql?, :respond_to?, :freeze, :inspect, :display, :object_id, :send, :to_s, :nil?, :hash, :class, :singleton_class, :clone, :dup, :itself, :taint, :tainted?, :untaint, :untrust, :trust, :untrusted?, :methods, :protected_methods, :frozen?, :public_methods, :singleton_methods, :!, :==, :!=, :__send__, :equal?, :instance_eval, :instance_exec, :__id__]
and
suite.public_methods(false)
[:test_with_name, :test_without_name]
Once we have the correct method names, we send those messages to the suite object. That code is reusable as is, so we can put it in a method that takes a suite instance as a parameter.
def run_suite(suite)
suite.public_methods(false).each do |test|
suite.send(test)
end
end
(Commit edf2e31)
Grouping the tests in a class also allows us to perform setup and tear-down if the test suite has any.
def run_suite(suite)
suite.public_methods(false).each do |test|
suite.send(:setup) if suite.respond_to?(:setup)
suite.send(test)
suite.send(:teardown) if suite.respond_to?(:teardown)
end
end
While I prefer a functional approach in general, I identify two code smells in the above code. First, the method is called run_suite and takes a suite parameter. It would make more sense to call suite.run. Second, we are checking if an object responds to a message before sending it that message. A more confident approach is to always run setup and teardown, but provide default implementations. We already have a special case of a test suite in the form of the example GreetingTestSuite. What we are missing is a generalized version:
class TestSuite
def run
test_names = public_methods(false).grep(/^test_/)
test_names.each do |test|
setup
send(test)
teardown
end
end
def setup
end
def teardown
end
end
The adopted convention is that the names of the methods that are test cases begin with test_. That way we make sure not to erroneously call the setup and teardown methods as tests. It also allows test suites to contain helper methods.
We need to update the test code to use our new class:
class GreetingTestSuite < TestSuite
# ...
end
GreetingTestSuite.new.run
(Commit 67b9a5d)
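As an aside, the convention also lets a suite override the hooks and define helpers freely. A hypothetical example (the class and helper names are made up and not part of the commits):

# Hypothetical suite exercising setup and a helper method.
class GreetingSetupTests < TestSuite
  def setup
    @name = "Bob"
  end

  def test_with_setup_name
    assert_greeting("Hello, Bob!", @name)
  end

  # A helper: the runner skips it because its name does not start with test_.
  def assert_greeting(expected, name)
    assert_equal(expected, greet(name))
  end
end

GreetingSetupTests.new.run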
With multiple test suites we have to run each one manually, just like we did with test cases earlier. We can automate that by keeping track of the classes that inherit from TestSuite and running all of those test suites.
class TestSuite
@suites = []
def self.inherited(suite)
@suites << suite
end
def self.run
@suites.each do |suite|
suite.new.run
end
end
#...
end
We cannot yet get rid of GreetingTestSuite.new.run in favour of calling TestSuite.run at the very bottom of our file because of a subtlety with nested test suites that we will address later.
(Commit d6f15e6)
Reporting
In the current implementation failing tests manifest themselves by raising an error, but there is no way to distinguish between a test that has not been run and a test that passes. After each step in the previous section, I've been manually verifying that the tests are still running by making assertions fail. That reveals two things:
- the test suite functionality is not tested rigorously;
- reporting success is as important as reporting failure.
Let's address the first one by counting how many tests have been run. This will allow us to assert that all tests in a suite are being run.
class DummySuite < TestSuite
def test_equality
assert_equal(1, 1)
end
end
assert_equal 1, DummySuite.new.run.runs
We got rid of the greetings code and associated tests in favour of a simpler example. The assertion fails with the following error:
raisin.rb:77:in `<main>': undefined method `runs' for [:test_equality]:Array (NoMethodError)
This is because TestSuite#run implicitly returns the array of test names. We are going to return a report instead.
class Report
end
class TestSuite
def run
test_names = public_methods(false).grep(/^test_/)
test_names.each do |test|
setup
send(test)
teardown
end
Report.new
end
end
Running the tests again gives us:
raisin.rb:82:in `<main>': undefined method `runs' for #<Report:0x00555e6f553548> (NoMethodError)
With an empty implementation of runs in Report, we finally get an assertion error (Expected 1, got nil). We can make the test pass with the following:
class TestSuite
def run
runs = 0
test_names = public_methods(false).grep(/^test_/)
test_names.each do |test|
setup
send(test)
runs = runs + 1
teardown
end
Report.new(runs)
end
end
class Report
attr_reader :runs
def initialize(runs)
@runs = runs
end
end
We can test that only methods starting with test_ are called by adding an extra method to DummySuite, called foo for example. This does not fail the assertion, as expected.
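Concretely, that check could look like this (a sketch; the foo method and its body are made up):

class DummySuite < TestSuite
  def test_equality
    assert_equal(1, 1)
  end

  # Not a test: the runner must never call it.
  def foo
    raise "foo should not be called"
  end
end

assert_equal 1, DummySuite.new.run.runs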
The next step is to count the failed tests. I will spare you the TDD process. Here is one possible implementation:
class TestSuite
def run
runs = 0
failures = 0
test_names = public_methods(false).grep(/^test_/)
test_names.each do |test|
setup
begin
send(test)
rescue AssertionError
failures = failures + 1
end
runs = runs + 1
teardown
end
Report.new(runs, failures)
end
end
class Report
attr_reader :runs, :failures
def initialize(runs, failures)
@runs = runs
@failures = failures
end
end
class DummySuite < TestSuite
def test_equality
assert_equal(1, 2) # should fail
end
def test_the_truth
assert(true)
end
end
result = DummySuite.new.run
assert_equal 2, result.runs
assert_equal 1, result.failures
(Commit 72f91c3)
We should also show progress as tests run. I particularly like the convention of printing a dot when a test passes and the letter "F" when it fails. The easiest way for now is to add print statements inside TestSuite#run:
def run
runs = 0
failures = 0
test_names = public_methods(false).grep(/^test_/)
test_names.each do |test|
setup
begin
send(test)
print "."
rescue AssertionError
failures = failures + 1
print "F"
end
runs = runs + 1
teardown
end
puts
Report.new(runs, failures)
end
That works, but the method is getting a little unwieldy. It does too much: it selects which methods to run, it gathers statistics, it prints to the terminal. The test suite itself shouldn't need to know how to do all that. First, we'll introduce a collaborator that hides the exception handling:
class TestResult
def self.from(&block)
begin
yield
TestSuccess.new
rescue AssertionError
TestFailure.new
end
end
end
class TestSuccess
def success?
true
end
end
class TestFailure
def success?
false
end
end
We use it like this:
class TestSuite
def run
runs = 0
failures = 0
test_names = public_methods(false).grep(/^test_/)
test_names.each do |test|
result = TestResult.from do
setup
send(test)
teardown
end
if result.success?
print "."
else
failures = failures + 1
print "F"
end
runs = runs + 1
end
puts
Report.new(runs, failures)
end
end
Notice that I've included the setup and tear-down in the block. That's a precaution, in case they contain assertions as well. Also, running the test and displaying the result used to be interleaved; now they happen one after the other, which allows us to extract the reporting code. The report class becomes:
class Report
attr_reader :runs, :failures
def initialize
@runs = 0
@failures = 0
end
def add_result(result)
if result.success?
print "."
else
@failures = @failures + 1
print "F"
end
@runs = @runs + 1
end
end
Here is the improved test runner:
class TestSuite
def run
report = Report.new
test_names = public_methods(false).grep(/^test_/)
test_names.each do |test|
result = TestResult.from do
setup
send(test)
teardown
end
report.add_result(result)
end
report
end
end
(Commit d707825)
One of the things we haven't tested is the report output. This requires a small modification of the report class. We will pass the input/output stream to the constructor. That way we can use $stdout when we want to display something in the terminal and an instance of StringIO when we need to capture the output for tests. In our test case, we are going to simulate a suite that has one failing and one passing test. The resulting output should be an "F" followed by a dot. We reuse the DummySuite from before, but we capture the output.
require 'stringio' # StringIO ships with Ruby's standard library and needs requiring

output = StringIO.new
report = DummySuite.new.run(output)
assert_equal(2, report.runs)
assert_equal(1, report.failures)
assert_equal("F.", output.string)
We modify Report as described above:
class Report
attr_reader :runs, :failures
def initialize(io)
@io = io
@runs = 0
@failures = 0
end
def add_result(result)
if result.success?
io.print "."
else
@failures = @failures + 1
io.print "F"
end
@runs = @runs + 1
end
private
attr_accessor :io
end
Finally, we update TestSuite#run to use the new API:
class TestSuite
def run(io = $stdout)
report = Report.new(io)
# ...
end
end
(Commit 3aff6fc)
By just writing "F" when a test fails we lose valuable information about the reason of the failure. We can store the error and print a nice summary of all errors when the test suite finishes running. Let's make a test test suite (you read that right) to test the summary.
suite = Class.new(TestSuite) do
def test_1
assert(false, "failure1")
end
def test_2
assert(false, "failure2")
end
end
output = StringIO.new
report = suite.new.run(output)
assert(output.string.include?("failure1"), "Report does not include error details")
assert(output.string.include?("failure2"), "Report does not include error details")
assert(output.string.include?("2 runs, 2 failures"), "Report does not include statistics")
I created an anonymous class, because we don't really care about reusing it. To implement this, we first need to store the error messages for failed tests:
class TestResult
def self.from(&block)
begin
yield
TestSuccess.new
rescue AssertionError => error
TestFailure.new(error)
end
end
end
class TestFailure
attr_reader :error
def initialize(error)
@error = error
end
def success?
false
end
end
Then we gather them in the report and print them at the end:
class Report
attr_reader :runs
def initialize(io)
@io = io
@runs = 0
@errors = []
end
def add_result(result)
if result.success?
io.print "."
else
@errors << result.error
io.print "F"
end
@runs = @runs + 1
end
def failures
@errors.count
end
def summarize
io.puts
@errors.each do |failure|
io.puts
io.puts failure.message
io.puts failure.backtrace
io.puts
end
io.puts
io.puts "#{runs} runs, #{failures} failures"
end
private
attr_accessor :io
end
Finally, we call summarize from TestSuite#run:
class TestSuite
def run(io = $stdout)
# ...
report.summarize
report
end
end
We also need to update an assertion in the previous test case from
assert_equal("F.", output.string)
to
assert(output.string.include?("F."))
(Commit 175d0a4)
Test suites testing test suites
Now that we can organize tests and see them fail, we should refactor.
Include the library in a new file where the new test suites will reside:
# test.rb
require_relative './raisin'
We'll be porting our tests into that file. This gives us an opportunity to organize our existing tests, and also to test the test suite functionality using itself. I added some tests, making sure to see each one fail before fixing it and moving on to the next:
class NothingRaised < StandardError
end
class AssertionTests < TestSuite
def test_true
assert(true)
end
def test_truthy
assert("")
assert([])
assert("foo")
assert(Object.new)
end
def test_false
assert_error { assert(false) }
end
def test_falsy
assert_error { assert(nil) }
end
def test_equal
assert_equal("foo", 'foo')
assert_equal(1, 1)
end
def test_not_equal
assert_error { assert_equal(1, 2) }
assert_error { assert_equal("foo", "bar") }
end
def assert_error(&block)
raised = false
begin
yield
rescue AssertionError
raised = true
end
raise NothingRaised unless raised
end
end
AssertionTests.new.run
We make another suite to test the output of running a test suite:
class OutputTests < TestSuite
def test_statistics
suite = Class.new(TestSuite) do
def test_equality
assert_equal(1, 2)
end
def test_the_truth
assert(true)
end
end
output = StringIO.new
report = suite.new.run(output)
assert_equal(2, report.runs)
assert_equal(1, report.failures)
assert(output.string.include?("2 runs, 1 failures"),
"Report does not include statistics")
end
def test_summary
suite = Class.new(TestSuite) do
def test_1
assert(false, "failure1")
end
def test_2
assert(false, "failure2")
end
end
output = StringIO.new
suite.new.run(output)
assert(output.string.include?("failure1"),
"Report does not include error details")
assert(output.string.include?("failure2"),
"Report does not include error details")
end
end
OutputTests.new.run
(Commit 3d47bcf. I also grouped the assertions in a module in commit cfcdd41)
Notice that we're instantiating and running the test suites individually. That's because executing TestSuite.run will run all registered suites, including the test suites that fail on purpose because they are part of a test case.
Since we run the inner test suites directly, we can remove them from TestSuite's suites. That way they won't be run automatically. It's a bit dirty, but it gets the job done.
class TestSuite
def self.unregister(suite)
@suites.delete(suite)
end
end
Let's define a test helper that we'll use everywhere so that we don't forget to unregister any inner test suites:
def define_suite(&block)
suite = Class.new(TestSuite, &block)
TestSuite.unregister(suite)
suite
end
Example usage:
class ReportingTests < TestSuite
def test_statistics
suite = define_suite do
def test_equality
assert_equal(1, 2)
end
def test_the_truth
assert(true)
end
end
output = StringIO.new
report = suite.new.run(output)
assert_equal(2, report.runs)
assert_equal(1, report.failures)
assert(output.string.include?("2 runs, 1 failures"),
"Report does not include statistics")
end
end
After we replace all previous inner test suite instantiations with our helper, we can replace AssertionTests.new.run and ReportingTests.new.run with a single TestSuite.run.
(Commit 6fa98df)
Auto-runner
It would be cool to not have to invoke TestSuite.run at all. With Minitest, you can require minitest/autorun and executing the script automatically runs the tests within. Let's implement that!
The at_exit method takes a block that will get executed when the program exits. The end of the program is a good place to run tests, because it guarantees that all the production and test code has been loaded.
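Here is at_exit in isolation (a standalone demonstration):

at_exit { puts "runs last" }
puts "runs first"
# Output:
#   runs first
#   runs last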
require_relative './raisin'
module Raisin
@@at_exit_registered ||= false
def self.autorun
unless @@at_exit_registered
at_exit { TestSuite.run }
@@at_exit_registered = true
end
end
end
Raisin.autorun
In the code above, we make sure that we register the callback only once in case that file is required multiple times. Let's remove that TestSuite.run from the bottom of our test file and replace the require_relative './raisin' with require_relative './autorun'. Run it:
$ ruby test.rb
......
6 runs, 0 failures
..
2 runs, 0 failures
It works!
(Commit 2a53359)
Better reports
Our test output is useful, but it could be better. We don't want it broken down by class; we want to combine the progress reports and print a grand summary at the end. We would also like failures to be reported at the line where the assertion was made, not at the line of the library internals where the exception was raised.
First, we change TestSuite#run to take the report as an argument and TestSuite.run to pass it in, and we update our tests:
class TestSuite
def self.run(io = $stdout)
@suites.each do |suite|
suite.new.run(Report.new(io))
end
end
def run(report)
# ...
end
end
# in the tests
suite = define_suite { #.... }
output = StringIO.new
suite.new.run(Report.new(output))
Then we make only one report and pass it to the different test suites. In the end we print the summary.
class TestSuite
def self.run(io = $stdout)
report = Report.new(io)
@suites.each do |suite|
suite.new.run(report)
end
report.summarize
end
def run(report)
test_names = public_methods(false).grep(/^test_/)
test_names.each do |test|
result = TestResult.from do
setup
send(test)
teardown
end
report.add_result(result)
end
report
end
end
Since the run instance method no longer creates a report by itself and does not invoke report.summarize, we need to add those steps to the tests to make them pass.
class ReportingTests < TestSuite
def test_statistics
suite = define_suite do
def test_equality
assert_equal(1, 2)
end
def test_the_truth
assert(true)
end
end
output = StringIO.new
report = suite.new.run(Report.new(output))
report.summarize
assert_equal(2, report.runs)
assert_equal(1, report.failures)
assert(output.string.include?("2 runs, 1 failures"),
"Report does not include statistics")
end
def test_summary
suite = define_suite do
def test_1
assert(false, "failure1")
end
def test_2
assert(false, "failure2")
end
end
output = StringIO.new
report = Report.new(output)
suite.new.run(report)
report.summarize
assert(output.string.include?("failure1"),
"Report does not include error details")
assert(output.string.include?("failure2"),
"Report does not include error details")
end
end
The test setup is a little heavy, but I like it more than the alternatives I can think of. We could declare TestResult objects instead of a test suite and feed those to the report (with report.add_result(result)). That involves too many library internals. We also cannot use the TestSuite.run facade, because at this point we're already in code executed by it. That would effectively run a loop until we reach the call stack depth limit. Besides, we're already verifying that reports are wired up correctly through visual inspection: running our own test suite produces output. It's the same with registering test suites through inheritance: because we always make a test fail first, we can tell something is wrong if we don't see the failure.
(Commit 9f45081)
Let's also exclude raisin internals from the backtrace. First we'll organize the project files like this:
$ tree
.
├── lib
│   ├── raisin
│   │   └── autorun.rb
│   └── raisin.rb
├── README.md
└── test
    └── raisin.rb
Don't forget to change the require_relative paths. Also note test.rb is now test/raisin.rb.
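With this layout the requires become something like the following (my reconstruction from the tree above; check the commit for the exact paths):

# lib/raisin/autorun.rb (reconstructed path)
require_relative '../raisin'

# test/raisin.rb (reconstructed path)
require_relative '../lib/raisin/autorun'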
(Commit 17454c3)
Then we exclude backtrace lines referring to lib/raisin.
class Report
def summarize
io.puts
@errors.each do |failure|
io.puts
io.puts failure.message
io.puts filter(failure.backtrace)
io.puts
end
io.puts
io.puts "#{runs} runs, #{failures} failures"
end
private
def filter(backtrace)
backtrace.reject { |line| line =~ /lib\/raisin/ }
end
end
(Commit 088fbd2)
Random execution order
A nice feature for a test framework is the execution of test cases in a random order. This helps improve the quality of your tests by detecting dependencies between them: the tests should pass no matter the execution order. Another aspect of this is that if a certain ordering makes the tests fail, the programmer should be able to reproduce that particular run. To make the process deterministic, we'll take a random seed as an optional command-line argument and generate a default if none is given.
First, we introduce a new entry point:
module Raisin
def self.run(args)
TestSuite.run
end
end
We modify Raisin.autorun to call it with the command-line arguments:
module Raisin
@@at_exit_registered ||= false
def self.autorun
unless @@at_exit_registered
at_exit { Raisin.run(ARGV) }
@@at_exit_registered = true
end
end
end
We parse the arguments and pass them to the test suite runner.
require 'optparse'
module Raisin
def self.run(args)
options = RunOptions.parse(args)
TestSuite.run(options)
end
end
class RunOptions
attr_reader :seed
def self.parse(arguments = [])
program_options = {}
parser = OptionParser.new do |options|
options.on('-h', '--help', 'Display this help') do
puts options
exit
end
options.on('--seed SEED', Integer, 'Set random seed') do |value|
program_options[:seed] = value.to_i
end
end
parser.parse!(arguments)
new(program_options.fetch(:seed, Random.new_seed))
end
private
def initialize(seed)
@seed = seed
end
end
There are a few things going on here. OptionParser is from the standard library and does the heavy lifting of parsing arguments and pretty printing help. program_options.fetch(:seed, Random.new_seed) makes sure we always know what the random seed is.
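For example:

options = RunOptions.parse(%w[--seed 1234])
options.seed # => 1234

options = RunOptions.parse([])
options.seed # => a freshly generated seed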
Then we shuffle the test order using a random number generator seeded with that seed.
class TestSuite
def self.run(io = $stdout, options)
report = Report.new(io, options)
@suites.each do |suite|
suite.new.run(report, options)
end
# ...
end
def run(report, options)
# ...
test_names.shuffle(random: Random.new(options.seed)).each do |test|
# ...
end
# ...
end
end
We could also call srand at the beginning of the program. However, that could interfere with the code being tested if it also contains calls to srand. Furthermore, passing the seed in makes the code more explicit and easier to test.
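The seeded shuffle is easy to convince yourself of in isolation:

names = [:test_1, :test_2, :test_3]
first = names.shuffle(random: Random.new(42))
second = names.shuffle(random: Random.new(42))
first == second # => true: same seed, same order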
Finally, we let the user know how to repeat that specific test order:
class RunOptions
def invocation_command
['ruby', $PROGRAM_NAME, '--seed', seed].join(" ")
end
end
class Report
def initialize(io, options)
# ...
@options = options
end
def summarize
# ...
io.puts
io.puts 'Rerun the tests in the same order with:'
io.puts options.invocation_command
end
private
attr_reader :options
end
Testing the feature can be as simple as finding a seed that will run tests in a specific order and checking that order:
def test_order
suite = define_suite do
def test_1
assert(false)
end
def test_2
assert(true)
end
end
output = StringIO.new
options = RunOptions.parse(%w[--seed 2])
report = Report.new(output, options)
suite.new.run(report, options)
assert_equal('.F', output.string)
end
The natural output order here would be "F." and we check it is reversed with a specific seed. Removing the seed (options = RunOptions.parse([])) will sometimes fail the test, but with a seed of 2 it always passes.
(Commit 991a00e)
Cleanup
Before we wrap up, let's put all classes and modules in the Raisin module. That way we don't pollute the global namespace. The tests need updating as well to prefix everything in our library with the top-level namespace.
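The resulting layout is roughly this (a sketch; see the commit for the details):

module Raisin
  class AssertionError < StandardError; end

  class TestSuite
    # ...
  end

  class Report
    # ...
  end
end

# In the tests:
class AssertionTests < Raisin::TestSuite
  # ...
end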
(Commit a5b7220)
Final words
The cloc utility counts 151 lines of Ruby in the lib/ directory and 96 lines in test/. That is not a lot for a fully functional test library.
We're missing a few useful assertions, like a test that two floating point numbers are very close (we cannot use assert_equal because of machine precision). This and other assertions can be easily built on top of the assert primitive.
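For instance, a float comparison could be sketched on top of assert like this:

# A sketch; assert_in_delta is not part of the library.
def assert_in_delta(expected, actual, delta = 0.001)
  message = "Expected #{actual.inspect} to be within #{delta} of #{expected.inspect}"
  assert((expected - actual).abs <= delta, message)
end

assert_in_delta(0.3, 0.1 + 0.2) # passes, even though 0.1 + 0.2 != 0.3 exactly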
I also left out test doubles on purpose. I may revisit the project to implement them later.
Another thing to implement is a more helpful comparison of expected and actual input for assert_equal.
All in all, writing this library satisfied my own curiosity about self-tested code. I hope the post inspired you to write your own toy test library!
See the final code.