The purpose of this document is to provide guidelines on how to write and maintain tests in Delta Shell. It also provides a list of typical issues that occur during automated testing and guidelines on how to resolve them. The issues are discussed in order of severity.

Testing Conventions

Where can data access or integration tests write their output data?

Delta Shell uses the following directory layout; note the locations of the test/ and test-data/ directories, which are used for test code and test data:

build/
doc/
lib/
setup/
src/ ..................................... program sources
  Common/
  DeltaShell/
  Plugins/
    DelftModels/
       DeltaShell.Sobek.Readers/
          DeltaShell.Sobek.Readers.csproj
          ...
test/ .................................... tests sources
  Common/
  DeltaShell/
  Plugins/
    DelftModels/
       DeltaShell.Sobek.Readers.Tests/
          DeltaShell.Sobek.Readers.Tests.csproj
          ...
test-data/ ............................... data files used by some tests
  Common/
  DeltaShell/
  Plugins/
    DelftModels/
       DeltaShell.Sobek.Readers.Tests/
          RD-02X.bui
          ...
DeltaShell.proj .......................... MSBuild project
DeltaShell.sln ........................... Visual Studio Solution

Some conventions we use when writing tests:

  • Test projects are organized in the same way as the source projects (in most cases); see the listing above.
  • Tests should never write their output data into the test-data/ directory; that directory holds the input data files used by the tests. Instead, tests should copy the required data into the test output directory (usually bin/Debug or bin/Release), which is very easy to do using the FileUtils.Copy() method; see the sketch after this list.
  • Make sure that output files from the previous test run are deleted somewhere in the test [SetUp].
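
A minimal sketch of this pattern, reusing the RD-02X.bui file from the layout above (the output file name and the exact FileUtils.Copy() signature are assumptions):

[SetUp]
public void SetUp()
{
    // delete output left over from a previous test run
    // (File.Delete does not throw if the file is missing)
    File.Delete("RD-02X.out");

    // copy the input file from test-data/ into the test output directory
    // (bin/Debug or bin/Release); TestDataPath.Path is explained below
    var input = Path.Combine(TestDataPath.Path, @"Plugins\DelftModels\DeltaShell.Sobek.Readers.Tests\RD-02X.bui");
    FileUtils.Copy(input, "RD-02X.bui");
}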

Where can I find data files which can be used in data access or integration tests?

All data files used in data access or integration tests must be located in the test-data/ directory. Within the test code this directory can be accessed through the TestDataPath.Path variable, which is available in the DelftTools.TestUtils project.
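
For example, the RD-02X.bui file from the layout above can be located like this (the relative path is an assumption; it mirrors the test project name):

var buiFile = Path.Combine(TestDataPath.Path, @"Plugins\DelftModels\DeltaShell.Sobek.Readers.Tests\RD-02X.bui");
Assert.IsTrue(File.Exists(buiFile), "test input file is missing from test-data/");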

Name the class xxxTest

Why?
To make it easy to find the tests for a class.
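
For example, tests for a hypothetical SobekNetworkReader class belong in a class named SobekNetworkReaderTest:

[TestFixture]
public class SobekNetworkReaderTest
{
    [Test]
    public void ReadEmptyNetwork()
    {
        // ...
    }
}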

Unit test categories

Make sure that you put the [Category(TestCategory.<name of the category>)] attribute above your test if it is not a unit test. This is very important, since it ensures that all unit tests run very fast (3000 tests in <1 min).

Currently we use the following categories:

  • <no category specified> - unit test: a fast test that tests a class in isolation.
  • Integration - integration test in which multiple classes / components are used
    • DataAccess - specific integration test in which a file is accessed (read / write)
    • WindowsForms - specific integration test in which a form is shown
  • Performance - unit or integration test in which the speed of the code is measured.
  • Jira - (optional) test describing a bug in Jira. The test should specify for which issue it was created.
  • WorkInProgress - marks a test which is incomplete (probably failing); it will not run on the build server during automated testing.
  • Slow - very slow integration tests (>1 min) that need to run separately, less frequently, to keep the rest of the test suite fast. Mark your test Slow in addition to the other categories.

When a test shows forms and accesses files, use TestCategory.Integration as the category; do not use multiple categories!
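
Note that Slow and Jira are additive markers (see the list above). For example, a slow data-access test written for a Jira issue could be marked like this (issue number and test body are hypothetical):

[Test]
[Category(TestCategory.DataAccess)]
[Category(TestCategory.Slow)]
[Category(TestCategory.Jira)] // created for issue DELTASHELL-1234 (hypothetical)
public void ReadVeryLargeBuiFile()
{
    // ...
}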

Example of a unit test
[Test]
public void AddDependentVariableValues()
{
   var x = new Variable<double>();
   var y = new Variable<double> { Arguments = {x} };

   x.Values.Add(0.0);
   x.Values.Add(0.1);
   x.Values.Add(0.2);

   Assert.AreEqual(3, x.Values.Count, "number of values in x (independent variable)");
   Assert.AreEqual(3, y.Values.Count, "number of values in y (dependent variable)");
   Assert.AreEqual(y.DefaultValue, y.Values[0], "first value in dependent variable is set to default");
}
Example of an unfinished performance test
[Test]
[Category(TestCategory.Performance)]
[Category(TestCategory.WorkInProgress)]
public void PerformanceAddValuesToArgument_WithEvents()
{
    IVariable<double> x = new Variable<double> { GenerateUniqueValueForDefaultValue = true };
    IVariable<double> y = new Variable<double> { Arguments = {x} };


    const int valuesToAdd = 20000;

    // check if code runs fast enough
    TestHelper.AssertIsFasterThan(50, () => x.Values.InsertAt(0, 0, valuesToAdd));
}

Build Server and Automatic Testing

What should I do before checking code in?

Every developer is strongly advised to use the TeamCity pre-tested commit feature if the commit was not tested locally! The Visual Studio plugin for TeamCity can be downloaded from the My Settings & Tools menu on the build server.

The rule is: if someone checks in code that breaks the build (new failing tests), the check-in must be reverted if it is not fixed within 30 minutes, the author does not take responsibility on TeamCity, and the author cannot be reached. Use the following script to revert a specific build: build/tools/revert-svn-changes.cmd

Build servers not compiling / hundreds of tests failing

– Contact the person responsible
– If unable to contact them in time:
— Try fixing it yourself: add a missing reference, fix a typo, or..
— Try reverting the check-in using SVN on your own (clean!) working copy (see Undo commit when new tests (or build) fail)

Configuration crashing

– The TestRunner crashed. Check the build log, or try the test run again.
– TeamCity will mark the configuration red, but will not show any failed tests (see image, 2nd row).
— If the number of tests run is lower than expected, the test run was interrupted, usually due to an internal crash (stack overflow, TeamCity crash, etc.). Check the stack trace in the build log for details.

Configuration hanging / execution timeout

Each configuration has a maximum duration, typically 30, 60 or 90 minutes; the test run is killed after that time. Identify why it is taking longer than before:
– Is the TestRunner hanging on a .NET exception window? Log on to the build server with remote desktop to note the exception and close the window.
– Is an infinite loop / recursion going on? Check the build log for the running test / the stack trace on process kill to identify the test.
– Was a very 'heavy' test added? Contact the author, fix the test duration, or (worst case) increase the configuration's allowed time.

References between test projects

When one test project references another test project, the build server will run the referenced tests twice. This is undesired, so no references should exist between test projects. Cross-references are typically created when people try to re-use helper classes/functions from another test project.

Unfortunately it is not always easy to identify this situation, but it sometimes leads to unstable tests (for example issues with STA threading, drag and drop, OLE, etc.), so those may be an indicator. Another indicator is the number of tests in a configuration suddenly increasing significantly (see image).

Build server running tests of deleted projects

When you delete or rename a (test) project, the old dll is not always deleted from the build server. Sometimes this results in the build server still running the tests from that dll. Below is an example of a renamed test project: one build server (d00551) only runs the new test dll, while the other (D00910) also runs the tests in the old test dll, resulting in a different total number of tests run.

You can verify this in the Tests tab in TeamCity for the build server which appears to run more tests: if you sort by name and go through the list, you will notice duplicates, see the image below.

You can fix this by either forcing a clean checkout (takes significant time before the build server is available again) or manually deleting the dlls on each affected build server (takes more manual labour).

Identify failing tests (bonus points: occasionally failing tests)

– Contact the person responsible for the failing test, or..
– Fix the test yourself (if safe to do), or..
– Worst case: move the test to WorkInProgress

Occasionally failing tests
Usually it is trivial to determine which check-in caused a failing test, but sometimes the test fails on a (seemingly) unrelated check-in. In those cases it may be an unstable test: one that fails only occasionally, for example due to:

  • a specific build server
  • the order in which tests are run
  • the culture (dot versus comma)
  • threading race conditions
  • locked files
  • path inconsistencies
  • DragDrop registration failed (see below)
  • etc..

Unstable tests are usually a pain to debug, especially if the failure can't be reproduced locally. If a test only fails on the build server, some time must be invested to add debug logging, or to check whether the build server has Visual Studio 2010 available for debugging (with remote desktop).

Incorrect test categories

Tests running in the wrong category can make them untraceable (performance tests) or can slow down configurations which are supposed to be fast (e.g. integration tests in the unit test configuration). You can list all tests on the build server and search for obvious mistakes, like those containing 'Save', 'Load' or 'Fast' in their name. Tests that take >30 seconds (in any category) should probably be in the Slow category; this prevents them from slowing down the other configurations.

Finding tests that should not be in the unit test configuration
Typically people tend to forget to put any category on tests, so the first place to look is the unit test category on the build server. A very good method is to order the tests by duration: unit tests taking >5 seconds are dubious.

Speeding up tests

Some tests take far longer than necessary to test what they are supposed to test. For example, when testing whether a model runs, it may be sufficient to simulate two hours instead of several weeks. Tests containing for-loops with large iteration counts can perhaps use a smaller limit. Of course, make sure you don't make the test unstable.

Unstable performance measurements
Whenever a performance test's duration varies by more than around 15% between runs, it becomes difficult to work with. When the expected duration is low (e.g. <20 milliseconds), it is easily disturbed by external factors, such as other processes requesting resources. In those cases it may be a good choice to make the test slightly heavier, for example towards a target time of around 40 milliseconds.

In other cases the variation may be caused by the ordering of tests, especially when a caching mechanism is used internally. In those cases it may be better to 'warm up' or 'cool down' the call first: if you want to know the best-case time, do the call once before doing the call within AssertIsFasterThan; if you want to know the worst-case time, clear the relevant caches (if possible).
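
A minimal sketch of the best-case (warm-up) variant, reusing the Variable example from above (the threshold and counts are illustrative):

[Test]
[Category(TestCategory.Performance)]
public void PerformanceAddValuesToArgument_BestCase()
{
    IVariable<double> x = new Variable<double>();

    // warm-up: do the call once outside the measurement, so that any
    // internal caching does not distort the measured (best-case) time
    x.Values.InsertAt(0, 0, 20000);

    // the measured call; the 40 ms target keeps the test above the
    // noise floor discussed above
    TestHelper.AssertIsFasterThan(40, () => x.Values.InsertAt(0, 0, 20000));
}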

AssertIsFasterThan
Any performance test should use TestHelper.AssertIsFasterThan(), and only once per test method. In 99.99% of cases it is perfectly possible to use AssertIsFasterThan, and it provides useful, uniform measurement graphs on the build server.

Undo commit when new tests (or build) fail

When you or someone else has made a commit that crashes the build server or makes hundreds of tests fail, and that person cannot be reached to fix it, you can usually revert the check-in without running the risk of losing anyone's work.

The main requirement is that you have a clean working copy, i.e. no local changes and up-to-date with the repository. If you don't have a secondary (or tertiary) check-out directory that you can use for this purpose, you may want to commit your local changes first (if they are safe to commit) to make sure they don't get mixed up.

The how-to for TortoiseSVN is described here.

To summarize:

  1. Go to your (clean, up-to-date) Delta Shell check-out folder / solution in Visual Studio
  2. Right click and navigate to SVN Show Log
  3. Select the revision(s) you want to undo.
  4. Right click to open the context menu (see image)
  5. Choose 'Revert changes from this revision'
  6. The changes have now been undone in your local files.
  7. Commit the changes!
  8. Contact the person responsible with details about the revision numbers.
    • The changes of the person responsible are not lost: they are still available in the repository as a previous revision.

Fix Drag&Drop problem

Whenever one of your (Windows.Forms) tests fails with a message about DragDrop registration failing, there is probably a thread apartment issue. The details are quite technical, but to fix it you simply add the following file (App.config) to your test project:

App.config
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <sectionGroup name="NUnit">
      <section name="TestRunner" type="System.Configuration.NameValueSectionHandler"/>
    </sectionGroup>
  </configSections>
  <NUnit>
    <TestRunner>
      <!-- Valid values are STA,MTA. Others ignored. -->
      <add key="ApartmentState" value="STA" />
    </TestRunner>
  </NUnit>
</configuration>