µnit is a small and portable unit testing framework for C which includes pretty much everything you might expect from a C testing framework, plus a few pleasant surprises, wrapped in a nice API.
µnit is intended to be copied into your code or, if you use git, included as a submodule.
git clone https://github.com/nemequ/munit.git
If you're reading this, you probably already know what unit testing is and why you should be doing it, so let's just dive right in and start explaining how to use µnit. If you'd like to skip the lengthy prose and just start with a heavily documented example, see the example.c file distributed with µnit.
With a single source file and a single header, getting started is trivial!
It's terribly difficult to have good unit tests without actually testing anything.
Built-in support for generating and reproducing random numbers across different platforms helps increase coverage without bug-hunting guesswork.
Once you have a few tests, group them together into a suite.
Expose your code to the CLI, and learn how to bend it to your will.
Automatically run tests in multiple configurations.
Combine multiple suites for an easy way to manage tests for larger projects.
Minor features like memory allocation, convenience macros, etc.
Assertions are a fundamental part of any unit testing framework, and µnit is no exception. If you've used other unit testing frameworks you probably know roughly what to expect, but if you're used to the standard library's assert() function then you're in for a treat.
Let's say you want to test two values for equality:
void your_function(int foo, int bar) {
  assert(foo == bar);
}
What happens when foo != bar?
srcfile.c:5: your_function: Assertion `foo == bar' failed.
When you want to debug this, the first thing you probably ask yourself is, "What were the values of foo and bar?" Alas, assert knows not, and you must fire up a debugger (or add some printfs).
Now, let's look at what happens if you use µnit's assertion macros:
void your_function(int foo, int bar) {
  munit_assert_int(foo, ==, bar);
}
ERROR> srcfile.c:5: assertion failed: foo == bar (1729 == 1701)
The values of foo and bar are shown! No debuggers, and no printfs.
Of course we don't stop at int. There is a munit_assert_type macro for:
char
unsigned char ("uchar")
short
unsigned short ("ushort")
int
unsigned int ("uint")
long int ("long")
unsigned long int ("ulong")
long long int ("llong")
unsigned long long int ("ullong")
size_t ("size")
float
double
void* ("ptr")
munit_int8_t ("int8")
munit_uint8_t ("uint8")
munit_int16_t ("int16")
munit_uint16_t ("uint16")
munit_int32_t ("int32")
munit_uint32_t ("uint32")
munit_int64_t ("int64")
munit_uint64_t ("uint64")
The munit_(u)intN_t types are just macros defined to the types from stdint.h, except on older versions of Visual Studio which don't support stdint.h, where they are defined to exact-width built-in types.
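
For example, here is a minimal sketch of a few of the typed assertion macros in action (the values are arbitrary, chosen purely for illustration):

#include "munit.h"

static MunitResult
typed_assertions_example(const MunitParameter params[], void* fixture) {
  size_t len = 4;
  (void) params; (void) fixture; /* unused in this sketch */
  munit_assert_size(len, <, 16);           /* size_t ("size") */
  munit_assert_uint32(0xdeadbeefU, !=, 0); /* munit_uint32_t ("uint32") */
  munit_assert_double(1.5, >, 1.0);        /* double */
  return MUNIT_OK;
}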
Additionally, there are several more specialized macros for different common types:
munit_assert_double_equal(double a, double b, int precision)
Assert that two doubles are equal within a tolerance of 1.0×10^(-precision). For example, 3.141592654 and 3.141592653589793 are considered equal with a precision of 9 but not 10.
munit_assert_string_equal(const char* a, const char* b)
Assert that two strings are equivalent (i.e., strcmp(a, b) == 0, not a == b).
munit_assert_string_not_equal(const char* a, const char* b)
Like munit_assert_string_equal, but make sure they aren't equivalent.
munit_assert_memory_equal(size_t size, const void* a, const void* b)
A personal favorite, this will make sure two blocks of memory contain the same data. Failure messages will tell you the offset of the first non-equal byte.
munit_assert_memory_not_equal(size_t size, const void* a, const void* b)
Make sure two blocks of memory don't contain the same data.
munit_assert_ptr_equal(void* a, void* b)
Another way of writing munit_assert_ptr(a, ==, b)
munit_assert_ptr_not_equal(void* a, void* b)
Another way of writing munit_assert_ptr(a, !=, b)
munit_assert_null(const void* ptr)
Another way of writing munit_assert_ptr(ptr, ==, NULL)
munit_assert_not_null(const void* ptr)
Another way of writing munit_assert_ptr(ptr, !=, NULL)
munit_assert_true(bool value)
Check that the boolean value is true.
munit_assert_false(bool value)
Check that the boolean value is false.
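A quick sketch of a few of these in use (the values and buffers below are invented for illustration):

munit_assert_double_equal(3.141592654, 3.141592653589793, 9);

const unsigned char expected[] = { 0x01, 0x02, 0x03, 0x04 };
const unsigned char actual[]   = { 0x01, 0x02, 0x03, 0x04 };
munit_assert_memory_equal(sizeof(expected), expected, actual);

const char* greeting = "Hello, world!";
munit_assert_not_null(greeting);
munit_assert_string_equal(greeting, "Hello, world!");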
Additionally, µnit contains a munit_assert() macro, which is similar to assert but uses µnit's logging facilities, for those cases where the more specialized macros will not work.
Finally, if you define MUNIT_ENABLE_ASSERT_ALIASES prior to including munit.h, versions of all the assertion macros without the "munit_" prefix will be defined. For example:
#define MUNIT_ENABLE_ASSERT_ALIASES
#include "munit/munit.h"
int main(int argc, char** argv) {
  assert_int(argc, ==, 1);
  return 0;
}
One feature that is often overlooked in testing frameworks is pseudo-random number generation. Being able to randomize tests a bit is a great way to increase the coverage of your tests without the performance implications of testing every possible value.
If you've never used random numbers in tests before, you might be terrified of the implications for reproducibility; if the tests are randomized then you can't reproduce failures, and if you can't reproduce failures how can you be expected to fix them? Fear not, this is where seeding comes in! Every time the test suite is run, a 32-bit seed value is written to the console in hexadecimal notation. If you see a failure you can simply plug that number back into the test runner and the PRNG will output the same values as it did in the failing tests.
So, why not just use C's rand and srand functions? The PRNG functions built into C are platform-specific. Even if you use the same seed, if your test machine is different from your development machine (e.g., if you're using a CI service) it's likely you will be unable to reproduce the same failure with the same seed. To combat this, µnit contains a simple PRNG which will output the same values in the same order on all platforms.
Of course, we've also added a few convenience functions on top of just generating random numbers. Here is the whole API:
int munit_rand_int_range(int min, int max)
This is probably the function you are looking for. It will generate a random value between min and max (inclusive). The difference between min and max must be ≤ 2^31−1.
munit_uint32_t munit_rand_uint32(void)
Generate a random value between 0 and 2^32−1 (inclusive).
double munit_rand_double(void)
Generate a random double-precision value between 0 and 1.
void munit_rand_memory(size_t size, munit_uint8_t buffer[])
Fill a buffer with however much random data you want.
void munit_rand_seed(munit_uint32_t seed)
Seed the random number generator with the supplied value. You probably don't want to use this, since the CLI will handle seeding for you, but it's there if you need it.
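
Here is a sketch of how the PRNG fits into a test; my_parse is a hypothetical function under test, not part of µnit:

static MunitResult
test_parse_random(const MunitParameter params[], void* fixture) {
  munit_uint8_t buf[64];
  /* Pick a random length, then fill the buffer with random bytes.
   * Since the runner seeds the PRNG, a failing input can be
   * reproduced by re-running with the printed seed. */
  int len = munit_rand_int_range(1, (int) sizeof(buf));
  munit_rand_memory((size_t) len, buf);
  munit_assert_int(my_parse(buf, (size_t) len), !=, -1);
  return MUNIT_OK;
}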
Now that you have the hang of how to test code, let's take a look at how to structure test cases for µnit. First, each test should be a separate function with the following prototype:
MunitResult my_test(const MunitParameter params[], void* user_data_or_fixture);
The name of the test (my_test in this case) doesn't matter; it's for your internal use only. As for the arguments, we'll get back to them soon.
There are four possible results in µnit which can be returned from a test: MUNIT_OK (the test passed), MUNIT_FAIL (the test failed), MUNIT_SKIP (the test was skipped), and MUNIT_ERROR (an unexpected error occurred).
Each thing you want to test should be a separate function. It can be tempting to just have the test suite call one function and have that function test everything, but it will make your life harder in the long run.
The MunitTest struct

Once you have a test case (or, even better, more than one!) it's time to put them together in a suite. First, you'll want to create an array of MunitTest structures:
MunitTest tests[] = {
  {
    "/my-test", /* name */
    my_test, /* test */
    NULL, /* setup */
    NULL, /* tear_down */
    MUNIT_TEST_OPTION_NONE, /* options */
    NULL /* parameters */
  },
  /* Mark the end of the array with an entry where the test
   * function is NULL */
  { NULL, NULL, NULL, NULL, MUNIT_TEST_OPTION_NONE, NULL }
};
The name is a human-readable identifier for the test. The convention for µnit is to start each name with a forward slash, though it's not required; as long as the name contains no null characters, you can do pretty much whatever you want.
The second field, test, is just the function you created earlier.
We'll come back to setup and tear_down in a minute. options is a bitmask of options; currently the only values are MUNIT_TEST_OPTION_NONE (no options) and MUNIT_TEST_OPTION_SINGLE_ITERATION (run the test only once, even when the --iterations option is used).
If a setup function is provided as part of a test, it is called before the test function is invoked, and the return value of the setup function is passed as the user_data_or_fixture parameter to the test function.
Similarly, if a tear down function is provided, it will be called after the test function with the fixture argument set to the return value of the setup function.
This is commonly used to initialize and destroy structures and resources used by the test. For example:
static void*
test_setup(const MunitParameter params[], void* user_data) {
  return strdup("Hello, world!");
}

static void
test_tear_down(void* fixture) {
  free(fixture);
}

static MunitResult
test(const MunitParameter params[], void* fixture) {
  char* str = (char*) fixture;
  munit_assert_string_equal(str, "Hello, world!");
  return MUNIT_OK;
}
The MunitSuite struct

Once you have your array of tests, it's time to put them in a test suite:
static const MunitSuite suite = {
  "/my-tests", /* name */
  tests, /* tests */
  NULL, /* suites */
  1, /* iterations */
  MUNIT_SUITE_OPTION_NONE /* options */
};
Like the test name, the suite name typically begins with a forward slash, though it's not required. When you run the suite, the suite name will be concatenated with the test name to determine the full test name. For example, the name of the test in this suite will be "/my-tests/my-test".
The second field, tests, is the array of tests you created earlier.
The suites field allows you to embed one test suite in another. For our simple example we've set it to NULL to indicate there are no sub-suites, but in practice you'll commonly want to use this feature to help organize tests for larger projects where it's convenient to split your unit tests across multiple files.
Another interesting use case for nested suites is projects which include other projects. If both projects use µnit it becomes easy to include the sub-project's unit tests when running the parent's.
After suites comes iterations. Generally you'll want a single iteration of each test, but if your tests are fast and include randomization, or there's the possibility of a race condition, you might want to run each test multiple times.
Finally, there is the options field. Currently there are no suite-level options, but this field is provided for future expansion.
munit_suite_main

Once you have your suite ready to go, all that is left is to call munit_suite_main(), which will parse any command line arguments and run the suite. The prototype looks like:
int
munit_suite_main(const MunitSuite* suite,
                 void* user_data,
                 int argc,
                 const char* argv[]);
Most of this is probably pretty self-explanatory; you pass it a pointer to the suite, as well as the command line arguments. We'll (finally) talk about user_data in the next section, but for now you can just pass NULL.
The return value will be EXIT_FAILURE if any tests fail, or EXIT_SUCCESS if all tests succeed. This makes the value suitable for returning directly from your main() function.
In the simplest case you'll end up with something like this:
int main(int argc, const char* argv[]) {
  return munit_suite_main(&suite, NULL, argc, argv);
}
You've probably noticed that we've been basically ignoring some arguments; namely, user_data and parameters. We're going to continue ignoring parameters for now (we'll get to them in the Parameterized Tests section), but it's finally time to talk about user_data.
If there is no setup function in a test, the user_data parameter which you pass to munit_suite_main() is passed to the test function as the user_data_or_fixture parameter. If there is a setup function, the user_data parameter you pass to munit_suite_main() is passed as the user_data parameter to the setup function, and the return value of the setup function is passed instead of user_data to the test.
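
As a sketch of that flow (the configuration struct here is invented for illustration):

typedef struct { int verbose; } MyConfig;

static void*
setup(const MunitParameter params[], void* user_data) {
  /* user_data is the pointer passed to munit_suite_main() */
  MyConfig* config = (MyConfig*) user_data;
  munit_assert_not_null(config);
  return config; /* becomes the test's fixture */
}

int main(int argc, const char* argv[]) {
  static MyConfig config = { 1 };
  return munit_suite_main(&suite, &config, argc, argv);
}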
Parameterized tests help you run a single test many times with slightly different inputs. The idea is that you create a list of parameters and possible values, and your test is run once for every possible combination of parameters.
As an example, let's say you create two parameters, called foo and bar, and each parameter has three possible values: foo can be "one", "two", or "three", and bar can be "red", "green", or "blue". This yields 9 possible combinations: one/red, one/green, one/blue, two/red, two/green, two/blue, three/red, three/green, and three/blue.
Of course, you may have far more parameters and/or many more possible values.
The MunitParameterEnum struct

To add parameters to your tests, you'll need to create an array of MunitParameterEnums. The structure is very simple:
typedef struct {
  char* name;
  char** values;
} MunitParameterEnum;
The name field should be the name of the parameter—for our above example, the first parameter name would be "foo" and the second "bar". values is a NULL-terminated array of strings representing the possible values of that parameter. So, we might end up with something like:
static char* foo_params[] = {
  "one", "two", "three", NULL
};

static char* bar_params[] = {
  "red", "green", "blue", NULL
};

static MunitParameterEnum test_params[] = {
  { "foo", foo_params },
  { "bar", bar_params },
  { NULL, NULL },
};
Then, simply set the parameters field of your MunitTest struct to test_params (or whatever you called your array), and you're done!
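
Inside the test, the current combination is available through the params argument, and munit_parameters_get() will look a value up by name. A sketch:

static MunitResult
test_with_params(const MunitParameter params[], void* fixture) {
  /* The runner invokes this once per combination, e.g.
   * foo="one"/bar="red", foo="one"/bar="green", ... */
  const char* foo = munit_parameters_get(params, "foo");
  const char* bar = munit_parameters_get(params, "bar");
  munit_assert_not_null(foo);
  munit_assert_not_null(bar);
  return MUNIT_OK;
}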
In addition to "normal" parameters, µnit supports leaving the values field as NULL to indicate that the parameter may have any value. Any-valued parameters will not cause additional tests to be run, they are merely a way to allow people using the CLI to specify a value.
Any-valued parameters are primarily useful for when there are a huge number of potential parameters. Usually you'll want to use the PRNG to choose values randomly, but it may be helpful to provide a parameter to override that behavior and instead use a specified value.
Once you have an executable for your tests compiled, running them is a relatively straightforward process. Simply running the executable will run all the tests and give you a report of the results, which is all many people will ever want. However, the CLI contains some features which may prove useful…
First, let's take a look at some sample output, which comes from the example.c in the µnit repository:
Running test suite with seed 0x4f78f287...
/example/compare             [ OK    ] [ 0.00000908 / 0.00000650 CPU ]
/example/rand                [ OK    ] [ 0.00001886 / 0.00001704 CPU ]
/example/parameters
  foo=one, bar=red           [ OK    ] [ 0.00001201 / 0.00001016 CPU ]
  foo=one, bar=green         [ OK    ] [ 0.00001104 / 0.00000970 CPU ]
  foo=one, bar=blue          [ OK    ] [ 0.00001222 / 0.00001034 CPU ]
  foo=two, bar=red           [ OK    ] [ 0.00001271 / 0.00001039 CPU ]
  foo=two, bar=green         [ OK    ] [ 0.00001131 / 0.00001004 CPU ]
  foo=two, bar=blue          [ OK    ] [ 0.00001159 / 0.00001047 CPU ]
  foo=three, bar=red         [ OK    ] [ 0.00001180 / 0.00000991 CPU ]
  foo=three, bar=green       [ OK    ] [ 0.00001110 / 0.00000925 CPU ]
  foo=three, bar=blue        [ OK    ] [ 0.00000901 / 0.00000824 CPU ]
11 of 11 (100%) tests successful, 0 (0%) test skipped.
The first piece of information presented is the random seed. If you use the PRNG in your tests, you can use this value to make the PRNG reproduce the same sequence (using the --seed parameter) in successive runs, hopefully allowing you to reproduce the failure.
Next, we have the list of three tests which were run. As you can see, the first two tests have information on the result (in this case, all tests passed so you see "OK") as well as how long the test took to run, in both wall-clock time (the first number) and CPU time.
The third test is a bit different, since it uses parameters. Each combination of parameters is listed on its own line, with its own result and timing information.
Finally, there is a brief summary of the results.
Next, let's take a look at the command line options you can supply to µnit, and what they all do. Note that you can also get a list, along with some brief documentation, by passing the --help option.
If you're trying to reproduce a failure you'll probably want to use the --seed parameter. Set it to the same value that was used before; for example, the output above used the seed 0x4f78f287. To recreate the same test conditions, you should run:
./test-runner --seed 0x4f78f287
The --iterations option allows you to run each test N times (unless the test includes the MUNIT_TEST_OPTION_SINGLE_ITERATION flag).
For tests run multiple times, both the average and cumulative timing information will be included. For example:
Running test suite with seed 0xa9ae01fb...
/example/compare             [ OK    ] [ 0.00000153 / 0.00000154 CPU ]
                      Total: [ 0.02645989 / 0.02659637 CPU ]
/example/rand                [ OK    ] [ 0.00000363 / 0.00000367 CPU ]
/example/parameters
  foo=one, bar=red           [ OK    ] [ 0.00000123 / 0.00000125 CPU ]
                      Total: [ 0.02123751 / 0.02157195 CPU ]
  foo=one, bar=green         [ OK    ] [ 0.00000114 / 0.00000114 CPU ]
                      Total: [ 0.01970051 / 0.01974722 CPU ]
  foo=one, bar=blue          [ OK    ] [ 0.00000110 / 0.00000111 CPU ]
                      Total: [ 0.01906747 / 0.01916830 CPU ]
  foo=two, bar=red           [ OK    ] [ 0.00000111 / 0.00000112 CPU ]
                      Total: [ 0.01919546 / 0.01933590 CPU ]
  foo=two, bar=green         [ OK    ] [ 0.00000111 / 0.00000112 CPU ]
                      Total: [ 0.01927790 / 0.01937041 CPU ]
  foo=two, bar=blue          [ OK    ] [ 0.00000113 / 0.00000112 CPU ]
                      Total: [ 0.01951630 / 0.01938922 CPU ]
  foo=three, bar=red         [ OK    ] [ 0.00000113 / 0.00000113 CPU ]
                      Total: [ 0.01957777 / 0.01959911 CPU ]
  foo=three, bar=green       [ OK    ] [ 0.00000114 / 0.00000113 CPU ]
                      Total: [ 0.01969819 / 0.01954282 CPU ]
  foo=three, bar=blue        [ OK    ] [ 0.00000114 / 0.00000113 CPU ]
                      Total: [ 0.01968101 / 0.01953382 CPU ]
11 of 11 (100%) tests successful, 0 (0%) test skipped.
You can specify any number of parameters by repeatedly providing the --param argument. If you do this, only the value you provide will be tested for each parameter you provide. For example, if you were to specify --param foo two, the output would change to something like:
Running test suite with seed 0x1baed9f5...
/example/compare             [ OK    ] [ 0.00000678 / 0.00000461 CPU ]
/example/rand                [ OK    ] [ 0.00002074 / 0.00001861 CPU ]
/example/parameters
  foo=two, bar=red           [ OK    ] [ 0.00001271 / 0.00001146 CPU ]
  foo=two, bar=green         [ OK    ] [ 0.00001348 / 0.00001174 CPU ]
  foo=two, bar=blue          [ OK    ] [ 0.00001257 / 0.00001073 CPU ]
5 of 5 (100%) tests successful, 0 (0%) test skipped.
If you specify both the foo and bar parameters (--param foo two --param bar red), you'll see something like:
Running test suite with seed 0x8cda69c5...
/example/compare             [ OK    ] [ 0.00000573 / 0.00000360 CPU ]
/example/rand                [ OK    ] [ 0.00001138 / 0.00000953 CPU ]
/example/parameters
  foo=two, bar=red           [ OK    ] [ 0.00001362 / 0.00001148 CPU ]
3 of 3 (100%) tests successful, 0 (0%) test skipped.
--list will show you a list of all the available tests. --list-params is similar to --list, but it will also include a list of all available parameters and possible values for each test.
By default, every possible combination of parameters is executed. If the --single option is provided then each test will instead be run in a single (randomized) configuration. Note that using the same seed will cause the same parameter values to be used.
We haven't talked about the message logging API yet; we'll do that in the Miscellaneous section. For now, just know that you can choose what level of messages to show by passing the --log-visible option.
--log-fatal is similar to --log-visible, except that instead of controlling which message levels are visible, it controls which message level causes the test to fail.
By default, µnit will fork before executing a test, then run the test in the child process. This provides numerous benefits: a crashing test can be caught and reported as a failure instead of taking down the entire runner, and it's harder for tests to interfere with one another through global state. It also has a few (comparatively unimportant) drawbacks, so if you would like to disable forking you can do so with the --no-fork option. Note that forking is not supported on Windows, so this option is unavailable on that platform.
The --fatal-failures option causes the test suite to exit immediately if any test fails, instead of running the remaining tests and suites.
One handy feature of µnit is the ability to hide the standard error output of tests which pass, allowing you to include debugging output in your tests which will only be shown if the test fails.
This feature works by redirecting stderr to a temporary file and, after the test has completed, it will only splice the contents of that file to its own stderr if the test failed.
There are some drawbacks, so the redirection can be disabled with the --show-stderr option if you choose.
The --color option controls whether or not colors are used when printing the test results (green for success, yellow for skipped, and red for failed or errored). By default, µnit will attempt to detect whether colors are supported using the isatty() function; on Windows it will also check for the presence of the ANSICON environment variable.
Finally, --help prints a brief description of the available command line options and exits.
µnit includes the ability to nest suites of tests into other suites. This is primarily used to make it easier to split your tests across multiple files, but it can also be used to import tests from another project. For example, if projects A and B both use µnit, and project B uses project A (perhaps as a git submodule), project B can include project A's tests in its own.
This feature is relatively straightforward; each MunitSuite has a suites field which can hold an array of sub-suites.
When using this feature, keep in mind that prefixes are appended to, not replaced, as the runner descends through the suites. For example, if your top-level suite has a "foo" prefix, and you embed a sub-suite with a "bar" prefix, tests in the sub-suite will be named like "/foo/bar/baz" and "/foo/bar/qux", not "/bar/baz" and "/bar/qux".
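
A sketch of what that looks like in practice; foo_tests and bar_tests are assumed to be MunitTest arrays like the one shown earlier, and the all-NULL terminating entry is an assumption mirroring the sentinel convention used for the tests array:

static MunitSuite sub_suites[] = {
  { "/bar", bar_tests, NULL, 1, MUNIT_SUITE_OPTION_NONE },
  { NULL, NULL, NULL, 0, MUNIT_SUITE_OPTION_NONE } /* terminator */
};

static const MunitSuite suite = {
  "/foo", /* prefix */
  foo_tests, /* tests */
  sub_suites, /* suites */
  1, /* iterations */
  MUNIT_SUITE_OPTION_NONE /* options */
};

/* Tests in bar_tests will show up with names like "/foo/bar/baz". */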
µnit contains a set of macros for memory allocation. Using them is, of course, completely optional; they're effectively the same thing as calling the built-in memory allocation functions (malloc and calloc) followed by a munit_assert_not_null call on the result. Additionally, all memory will be cleared.
void* munit_malloc(size_t size)
Like malloc(), except the memory is cleared and guaranteed to be non-NULL.
void* munit_calloc(size_t nmemb, size_t size)
Like calloc(), except the memory is guaranteed to be non-NULL.
Type* munit_new(Type)
Like munit_malloc(sizeof(Type)).
Type* munit_newa(Type, size_t nmemb)
Like munit_calloc(nmemb, sizeof(Type)).
Memory returned by any of these functions can be deallocated by calling free(). Memory allocated by munit_new() or munit_newa() will be cast to the appropriate return type.
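
For example, a sketch of a fixture allocated with these macros (the Point struct is invented for illustration):

typedef struct { int x; int y; } Point;

static void*
point_setup(const MunitParameter params[], void* user_data) {
  /* Never NULL, and already zeroed, so no assertion or memset needed */
  Point* p = munit_new(Point);
  return p;
}

static void
point_tear_down(void* fixture) {
  free(fixture); /* memory from munit_new() is released with free() */
}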
µnit includes a small message logging utility which can be used to log messages at four different severities: MUNIT_LOG_DEBUG, MUNIT_LOG_INFO, MUNIT_LOG_WARNING, and MUNIT_LOG_ERROR.
The logging functions are:
void munit_logf(MunitLogLevel level, const char* fmt, ...)
printf-style function to log a message with the specified level.
void munit_log(MunitLogLevel level, const char* message)
Log a message with the specified level.
void munit_errorf(const char* fmt, ...)
printf-style function to log an error.
void munit_error(const char* message)
Log an error message.
Note that trailing newlines are not necessary (or desirable) as they will be added automatically.
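
For instance (the record count is invented for illustration):

int n_records = 42;
munit_logf(MUNIT_LOG_INFO, "parsed %d records", n_records);
munit_log(MUNIT_LOG_WARNING, "falling back to slow path");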
MUNIT_LIKELY(expr)
Expands to __builtin_expect((expr), true) on compilers which support it.
MUNIT_UNLIKELY(expr)
Expands to __builtin_expect((expr), false) on compilers which support it.
MUNIT_UNUSED
Tells the compiler that a parameter or variable may go unused, suppressing the warning (e.g., __attribute__((unused)) on compilers which support it).
MUNIT_ARRAY_PARAM(param_name)
Expands to param_name on C99-compliant compilers; intended to be used for conformant array parameters.
MUNIT_SIZE_MODIFIER
The printf length modifier to use with size_t values ("z" on platforms which support it).
Because nothing else did what I wanted:
A PRNG is a great way to randomize tests, which helps increase coverage without the cost of slow, exhaustive testing. See the PRNG section of the documentation for details.
There are many unit testing frameworks which don't require lots of build system magic, and lots of frameworks which are full-featured; very few manage to tick both boxes.
Relying on platform-specific functionality or linker trickery can make for really cool features, but it's not feasible if you're trying to write software which works across multiple platforms. µnit's requirements may be a bit higher than some others (for example, you'll need a working malloc()), but they're low enough for the vast majority of users.
Timed unit tests can't replace "proper" benchmarking but they're a lot better than nothing. Let's be honest, most developers are even worse about benchmarking than unit testing.
That said, it can be helpful even if you already do benchmarking. The original µnit user has a pretty impressive benchmark, and I've still found µnit's timing information helpful.
MIT.
µnit was originally written for Squash and, at the time this was written, Squash was the only project publicly using it. By the time you're reading this, though, there may be others… you can search GitHub, or use another code search engine (such as Open Hub Code Search, or Google).
It's "µnit", which is rendered as "munit" in ASCII. The funny looking 'u' is the letter mu, which isn't part of ASCII.
Lots of projects include macros used to define tests and/or suites. For a good example, see the Basic Usage section of greatest's README. This can make it very easy to define tests and suites.
I'm generally not comfortable hiding things like that in macros; I would rather make the API simple enough that you don't need the macros. Hiding things behind simplified macros often makes it harder to understand what is going on, and harder to use non-default settings since you have to completely switch APIs (likely to something that is more complicated than it needs to be because you're expected to use the macros, not the "internal" API).
For this to work, the API has to be simple enough that you don't need the macros. I think µnit's API meets this criterion, but if you disagree I'd be interested in hearing from you.
That's okay, µnit isn't for everyone; people have different requirements and preferences. Here are a few you might want to look into:
If you would like for something to be added to the list just let us know.