Testing Procedures

Good software must be robust and proactively defend against failures and attacks. A necessary part of writing good software, then, is testing it thoroughly to ensure that your code handles both good and bad input. You are required to create test cases of your own.


You can access a blank project structure by logging into stu.cs.jmu.edu and extracting the following archive:

/cs/students/cs361/f23/kirkpams/src/project.tgz

Compiler Warnings as Errors

All of the project source code distributions contain a Makefile that specifies the compiler flags to be used. These flags include -Werror, -Wall, and -Wextra. Together, these three flags enforce a strict coding standard that you must follow: -Wall and -Wextra enable additional code checks beyond what is syntactically required, and -Werror treats any resulting warning as a compilation error.

As an example, with this combination of flags, you cannot declare a variable and not use it. Nor can you use a local variable without initializing it. Both of these are technically allowed by C, but they are a common source of bugs and they are signs of sloppy coding. All code submitted must compile cleanly with these flags in place.
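As an illustrative sketch (sum_to is a made-up function, not part of any project), the following compiles cleanly under these flags because every variable is initialized before it is read and nothing is declared without being used:

```c
/* Under -Wall -Wextra -Werror, declaring a variable you never use, or
 * reading 'total' before it is initialized, turns into a compilation
 * error rather than a silent warning. */
static int
sum_to (int n)
{
  int total = 0;                 /* initialized before first use */
  for (int i = 1; i <= n; i++)
    total += i;
  return total;                  /* every declared variable is actually used */
}
```

Deleting the `= 0` initializer or adding an unused local would cause the build to fail with these flags in place.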


Error Checking

When testing your code, you MUST try to break it with invalid input. Your code must detect this invalid input and react in an appropriate manner. Your test cases must include the following conditions:

  • invalid command-line flags
  • missing required command-line arguments
  • passing non-existent or empty files
  • repeating command-line flags
  • invalid arguments to flags
  • invalid combinations of flags
  • very large numbers and files
  • passing invalid parameters to functions
  • passing NULL values and empty strings as arguments
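As a sketch of the last two items (count_words is a hypothetical helper, not a project function), a defensively written function validates its arguments and signals an error instead of crashing:

```c
#include <stddef.h>

/* Hypothetical helper: counts space-separated words in a string.
 * Returns -1 for NULL or empty input rather than dereferencing blindly. */
static int
count_words (const char *text)
{
  if (text == NULL || *text == '\0')
    return -1;                   /* reject NULL and empty strings explicitly */
  int count = 1;
  for (const char *p = text; *p != '\0'; p++)
    if (*p == ' ')
      count++;
  return count;
}
```

Your unit tests should call functions like this with NULL, empty strings, and other invalid values to verify that the error path is actually taken.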

Testing Infrastructure

All project distributions have the same structure:

project/src/
This directory will contain all of the source code for the projects. Some code will be provided, but you may need to create other files here, too.
project/tests/
This directory contains the testing infrastructure code.
project/build/
This directory will be generated when you run make to compile your code.

The CS 361 testing infrastructure contains several files and subdirectories that you will be working with to test your code:

project/tests/public.c
You will modify this file to include unit tests for your project executable, rigorously testing functional units of the program with both acceptable input and intentionally bad parameters (although the types must be correct). For instance, you will need to include tests that pass NULL pointers to ensure the functions handle these arguments without crashing.
project/tests/itests.include
This configuration file specifies command-line arguments that will be used for integration testing. You will modify this to add test cases for both good and bad command-line arguments.
project/tests/expected/
This directory contains text files with the expected output for integration tests. When you add test cases to itests.include, you must also create a corresponding *.txt file in this directory.
project/tests/inputs/
This directory contains files that can be used as input to the projects. In your itests.include arguments, reference these files with the inputs/ prefix.
project/tests/Makefile, project/tests/integration.sh, and project/tests/testsuite.c
These files are the drivers for the testing infrastructure. You should not need to modify them, but you are encouraged to read through them.

Running All Tests

If you navigate to the project directory and run the provided code with the existing test cases, you will see the following output:

make -C tests test
make[1]: Entering directory '/cs/home/stu-f/kirkpams/project/tests'
make -C ../
make[2]: Entering directory '/cs/home/stu-f/kirkpams/project'
make[2]: Nothing to be done for 'default'.
make[2]: Leaving directory '/cs/home/stu-f/kirkpams/project'
gcc -c -g -O0 -Wall --std=c99 -pedantic -Wextra  testsuite.c
gcc -c -g -O0 -Wall --std=c99 -pedantic -Wextra  public.c
gcc -g -O0  -o testsuite testsuite.o public.o ../build/helper.o  -lcheck -lm -lpthread -lrt -lsubunit
========================================
             UNIT TESTS
0%: Checks: 1, Failures: 1, Errors: 0
public.c:12:F:Public:add_2_3:0: Assertion 'add (2,3) == 5' failed: add (2,3) == 0, 5 == 5
========================================
          INTEGRATION TESTS
Addition_3_5                   FAIL (see outputs/Addition_3_5.diff for details)
No memory leak found.
========================================
make[1]: Leaving directory '/cs/home/stu-f/kirkpams/project/tests' 

The first grouping of lines (before the first =======) shows the compilation of the test suite. This should not fail. If it does, the problem is most likely that you have modified public.c in a way that prevents the header files from being found correctly. Also, make sure that your tests use the START_TEST and END_TEST structure required by the Check framework.

The next grouping of lines shows the results of the unit tests. The first line indicates that the current code passed 0% of the test cases: there was 1 test case (Checks) and it failed. The test cases can also detect run-time errors (Errors), although this code did not produce any. After this first line, you will see specific output about each test case failure. For instance, this output indicates that add (2,3) should return 5, but it returned 0 instead.

The third grouping shows the output of the integration tests. While unit tests focus on specific internal functions of the project code, the integration tests exclusively compare the output produced with what was expected. Your code must match the expected output verbatim to pass these tests.

Unit Testing

Unit test cases are intended to test whether or not one particular function or piece of code is working correctly. They should be very precisely targeted in what they do. To create one, you can modify the tests/public.c file. The structure of a test case is as follows:

/* This should never fail */
START_TEST (sanity_check)
{
  ck_assert (1 == 1);
}
END_TEST 

The test case is passed if the assertion in the ck_assert() is true. You can specify as many assertions in a single test case as is appropriate. In order to pass the test case, they must all be true. For more information on how to create assertions, you should consult the Check API. You can also see more documentation (including a tutorial) on the Check home page.
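For example, a single test case can combine several assertions, including the typed variants that report both values when they fail (add is assumed to be a function your project provides, as in the sample output above; this fragment requires the Check framework to compile):

```c
/* Hypothetical test: exercises add() with several assertions.
 * ck_assert_int_eq prints both the expected and actual values on
 * failure, which is more informative than a bare ck_assert. */
START_TEST (add_negatives)
{
  ck_assert_int_eq (add (-1, -2), -3);
  ck_assert_int_eq (add (-5, 5), 0);
  ck_assert (add (2, 3) == add (3, 2));   /* all assertions must hold to pass */
}
END_TEST
```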

Once you have created a test case, you add it to the test suite by adding the following line to the public_tests() function:

tcase_add_test (tc_public, sanity_check); 

Integration Testing

While unit tests focus on individual components of your code, integration testing evaluates the complete functionality of your project. These tests are based purely on the output produced by your code, which must match the contents of the corresponding tests/expected/*.txt file verbatim. If there is even a single extra space anywhere, the test case fails. Once you have created an integration test case output file, you add the test case by adding the following line to the tests/itests.include file:

run_test    Add_two_negatives       "-1 -2" 

This line indicates that the Add_two_negatives test case should be run with the following command line (add is the project executable that was compiled in the project directory):

$ ../add -1 -2 
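Each test named in itests.include needs a matching expected-output file. Assuming the add program prints just the sum followed by a newline (an assumption; the file must match your program's actual output byte for byte), tests/expected/Add_two_negatives.txt would contain:

```
-3
```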

Using Input Files

Most test cases, particularly as the semester goes on, will need some sort of input file. You could (theoretically) put these anywhere you wanted and refer to them appropriately, but the convention that we will be using is to place them all in the tests/inputs directory. If the Add_two_negatives test case above also used an input file called foo, the tests/itests.include line would look like:

run_test    Add_two_negatives       "-1 -2 inputs/foo" 

Memory Leak Check

Our testing infrastructure also automatically runs valgrind to check for memory leaks. In short, if you do a malloc() somewhere but never use free() to clean up the data, you have lost that memory. If your program runs for long enough, these leaks add up, and you will never be able to allocate new dynamic data structures. The following output shows what you would see if you had a leak:

==919== LEAK SUMMARY:
==919==    definitely lost: 4 bytes in 1 blocks
==919==    indirectly lost: 0 bytes in 0 blocks
==919==      possibly lost: 0 bytes in 0 blocks
==919==    still reachable: 0 bytes in 0 blocks
==919==         suppressed: 0 bytes in 0 blocks
==919== Rerun with --leak-check=full to see details of leaked memory
==919== 
==919== For counts of detected and suppressed errors, rerun with: -v
==919== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0) 

In this case, the program leaked 4 bytes of memory. Here is the program that was used to generate this summary:

#include <stdlib.h>

int
main (void)
{
  int *p = malloc (4);
  (void) p;     /* suppress the unused-variable warning; the leak remains */
  return 0;
} 

It is possible to get more information about the cause of the leak by running valgrind on the test case manually with the --leak-check=full option. That is, for the add program described above, you could run the following line from the project/tests directory:

$ valgrind --leak-check=full ../add -1 -2 

In this case, you would also see the following lines of output:

==922== 4 bytes in 1 blocks are definitely lost in loss record 1 of 1
==922==    at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==922==    by 0x400537: main (in /cs/home/stu-f/kirkpams/project/tests/leak) 

These lines indicate that the leaked memory was allocated by malloc() inside of main(). At that point, I could look in my source code and see that the variable p had memory allocated to it, but it was never freed.
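To confirm the diagnosis, a corrected version pairs the allocation with a free(). The sketch below wraps the pattern in a small helper (use_heap_int is a made-up name) so the fix is easy to see; running valgrind on code like this reports no blocks as definitely lost:

```c
#include <stdlib.h>

/* Allocates an int on the heap, stores a value, and releases it.
 * Pairing every malloc() with a matching free() is what removes the
 * "definitely lost" entry from valgrind's leak summary. */
static int
use_heap_int (int value)
{
  int *p = malloc (sizeof *p);
  if (p == NULL)
    return -1;          /* allocation failed: nothing to free */
  *p = value;
  int result = *p;
  free (p);             /* release the block before the pointer goes out of scope */
  return result;
}
```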



© 2011-2024 Michael S. Kirkpatrick.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.