Blog

Why you should subscribe at top-level for objects

Imagine the following class. The NetworkData class represents data on the network and needs to react to the removal/addition of branches on the network. It could be implemented like this:

class NetworkData
{
   private INetwork network;

   public INetwork Network
   {
      get { return network; }
      set
      {
         if (value == network)
         {
            return;
         }
         if (network != null)
         {
            network.Branches.CollectionChanged -= NetworkCollectionChanged;
         }
         network = value;
         if (network != null)
         {
            network.Branches.CollectionChanged += NetworkCollectionChanged;
         }
      }
   }

   public void NetworkCollectionChanged(object sender, NotifyCollectionChangingEventArgs e)
   {
      // we need a check here because the collection change might be bubbled
      // from entities below the branch
      if (sender == Network.Branches)
      {
         // do something interesting..
      }
   }
}

This looks fine, but it has a subtle problem with NHibernate. What happens during a save?

1 NHibernate replaces the collection of Branches in network with its own collection in order to track changes to the collection
2 NHibernate resets the same network on NetworkData. Because it is the same instance, nothing is done.

Now when a change in the collection of branches happens, the check in NetworkCollectionChanged comes to the wrong conclusion. Network.Branches is now NHibernate's list, while sender is still the wrapped inner list. Hence the check evaluates to false and the 'interesting' code is not run. A way to fix it is to subscribe at the network level like this:

      if (network != null)
      {
         network.CollectionChanged -= NetworkCollectionChanged;
      }
      network = value;
      if (network != null)
      {
         network.CollectionChanged += NetworkCollectionChanged;
      }

We now use the bubbling functionality of the network. The sender received in NetworkCollectionChanged is the new list that NHibernate set in the network instead of the 'old' list that is wrapped. And sender == Network.Branches again...
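To see why the reference-identity check breaks, here is a small self-contained sketch. The WrappedList type is invented for illustration; NHibernate's actual persistent collections are more involved, but they wrap the original list in the same way:

```csharp
using System;
using System.Collections.Generic;

// Invented stand-in for a persistence-layer collection that wraps an
// existing list (NHibernate's persistent collections play a similar role).
class WrappedList<T> : List<T>
{
    public List<T> Inner { get; private set; }

    public WrappedList(List<T> inner)
    {
        Inner = inner;
        AddRange(inner);
    }
}

static class Program
{
    static void Main()
    {
        var branches = new List<string> { "branch1" };

        // an event handler captured the original list as 'sender'
        object sender = branches;

        // the persistence layer replaces the collection with a wrapper
        var persisted = new WrappedList<string>(branches);

        // the identity check now fails: 'sender' is the inner list,
        // 'persisted' is the wrapper
        Console.WriteLine(ReferenceEquals(sender, persisted)); // False
    }
}
```

Subscribing at the network level avoids comparing against the replaced list reference entirely.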

Testing log4net log messages

How can you assert that some code generates a log message?
Like this:

TestHelper.AssertLogMessageIsGenerated(() => Log.Debug("Hello"), "Hello");

The first argument of the method is an Action; the second is the expected message. The test helper executes the action and verifies that the log message was generated.
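TestHelper is part of the project's own test utilities; its internals are not shown here. A minimal sketch of how such a helper could be implemented on top of log4net's MemoryAppender (the exception used for a failed assertion is an assumption) might look like this:

```csharp
using System;
using System.Linq;
using log4net.Appender;
using log4net.Config;

public static class LogTestHelper
{
    // Sketch of an AssertLogMessageIsGenerated-style helper: capture all
    // log4net events fired while the action runs, then look for the message.
    public static void AssertLogMessageIsGenerated(Action action, string expectedMessage)
    {
        var appender = new MemoryAppender();
        BasicConfigurator.Configure(appender);

        action();

        if (!appender.GetEvents().Any(e => e.RenderedMessage == expectedMessage))
        {
            throw new InvalidOperationException(
                string.Format("Expected log message '{0}' was not generated.", expectedMessage));
        }
    }
}
```

The MemoryAppender keeps every LoggingEvent in memory, so no log file or console parsing is needed in the test.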

How to create an OpenDA model runner with DeltaShell

This article describes what needs to be done if you want to create an application that uses the OpenDAModelProvider interface of DeltaShell to work with DS models. I created a Windows Forms application using the 'zip' file from the nightly build. Use at least revision 12906 (this is the revision I used); there was a bug in earlier builds.

Setup

1 Create a folder called D:\ODA
2 Download deltashell.zip and extract it to D:\ODA\shell. Now you should have a folder D:\ODA\shell\plugins and D:\ODA\shell\release

Creating the application

3 Create a Windows Forms application called 'Runner' in D:\ODA\Runner
3.1 Add references to the following assemblies:

from D:\ODA\shell\plugins\DeltaShell.Plugins.OpenDA
DeltaShell.Plugins.OpenDA
OpenDA.DotNet.AdditionalInterfaces

from D:\ODA\shell\release
OpenMI.Standard2
DelftTools.Shell.Core
DelftTools.Units
DelftTools.Utils
DeltaShell.Core


Add a button to the form and add the following code to the click event handler:

var provider = new DeltaShell.Plugins.OpenDA.DeltaShellOpenDAModelProvider();
provider.Initialize(@"D:\ODA", "settings.xml");
var instance = provider.CreateInstance();
provider.SaveInstance(instance);

Creating the model

1 Start the loader at D:\ODA\shell\release\DeltaShell.Loader.exe. Do not use a different version of DS to create the model; you might get versioning problems with the file.
2 Add a demo flow model via menu development->'flow model 1d (demo network)'.
3 Rename the model to 'model'.
4 Save the project as model.dsproj in d:\ODA and close DeltaShell

The settings file

Create a text file named settings.xml in D:\ODA. These are the contents:

<?xml version="1.0"?>
<DeltaShellOpenDAModelProviderSettings xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <ProjectPath>D:\ODA\model.dsproj</ProjectPath>
  <FullyQualifiedModelWrapperType>DeltaShell.Plugins.DelftModels.WaterFlowModel.OpenMI2.OpenMI2WaterFlow1DWrapper, DeltaShell.Plugins.DelftModels.WaterFlowModel, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null</FullyQualifiedModelWrapperType>
  <ModelName>model</ModelName>
  <DeltaShellLoaderFolder>D:\ODA\shell\release</DeltaShellLoaderFolder>
</DeltaShellOpenDAModelProviderSettings>

Running it

Start the runner application and click the button. If all goes well, your model.dsproj project will contain a model called 'Calibrated model'. This is the model that was passed to the SaveInstance method of the provider.

Validation library in DS

Validation via aspect (should be attributes)

The ValidationAspects library defines an Extension method on object:

public static ValidationResult Validate(this object instance);

This method uses attributes like

[GreaterThan] and [ValidationMethod]

to create a ValidationResult containing any validation errors. So you can validate any object, and all the validation rules defined on that object will be tested.

Using PostSharp with these aspects enables throwing validation exceptions in setters.
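As an illustration of the attribute-based style (the Pump class and its property are invented for this example; the attribute and extension method names follow the ValidationAspects conventions mentioned above), usage could look like this:

```csharp
using ValidationAspects;

public class Pump
{
    // value must be greater than zero, enforced by the validation library
    [GreaterThan(0.0)]
    public double Capacity { get; set; }
}

// calling code:
// var pump = new Pump();
// pump.Capacity = -1;
// ValidationResult result = pump.Validate(); // the extension method above
// if (!result.IsValid) { /* inspect/report the validation messages */ }
```

Because Validate is defined on object, the same call works for any class that carries validation attributes.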

FileImporters in DS

DeltaShell currently has two interfaces that should be used to implement import functionality:

IFileImporter: can add items to a folder, or can replace the value of a data item if that value matches the supported data types of the IFileImporter.

ITargetItemFileImporter: mutates its TargetItem by importing data into it if TargetItem is set. If a target item is not required, this importer can also function as a regular IFileImporter.

These interfaces need review (wink)
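For reference, a rough sketch of what the two interfaces amount to. The member names here are assumed from the descriptions above, not taken from the actual DS source:

```csharp
using System;
using System.Collections.Generic;

// Sketch only: the real DS signatures may differ.
public interface IFileImporter
{
    // data types this importer can produce / replace
    IEnumerable<Type> SupportedItemTypes { get; }

    // import a file, returning the created item (added to a folder)
    // or the replacement value for a matching data item
    object ImportItem(string path);
}

public interface ITargetItemFileImporter : IFileImporter
{
    // the item that will be mutated by the import, if set
    object TargetItem { get; set; }

    // when false, the importer can also act as a plain IFileImporter
    bool TargetItemRequired { get; }
}
```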

Logging in DS

When you run a model in DS you automatically get a data item with a log file containing all the log4net messages that occurred during the model run.

'During' means the model is in a running state: (initializing,executing,finishing).

This is done by the RunningActivityLogAppender (RALA) listening to changes in the activities of the ActivityRunner. So when a model goes to the initializing state, the RALA keeps the messages for that model. When the model finishes, the RALA adds (or updates) a data item in the model.

Creating a new version of WaterFlowModel MSI that installs side-by-side

To make a new version of the flow MSI you need two GUIDs: one for the product (upgrade code) and one for the folder in the start menu.
Create your GUIDs here: http://www.guidgenerator.com/online-guid-generator.aspx

1 Change the OutputName tag in the wixproj file. Keep the build-number suffix, as it adds a build number to the MSI name.

Changes in the Product.wxs:
2 Change the <?define ReleaseVersion="1.0.1" ?> element and <ProductName> in the Product.wxs.
3 Change the GUID for the UpgradeCode in the <Product> element.
4 Use the same GUID as in 3 for the Id of the <Upgrade> element.
5 Change the GUID of the ProgramMenuDir component, like this: <Component Id="ProgramMenuDir" Guid="CHANGE-THIS-GUID">

Note that the assembly version of DeltaShell.Gui should also be changed to allow real side-by-side installation (no settings sharing). When the MSI is built by TeamCity this works automatically, because TeamCity includes the SVN revision in the version.
The name of the artifact in the MSI and ZIP configurations should also change.
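Putting steps 2-5 together, the touched parts of Product.wxs look roughly like this (GUIDs, names and versions are placeholders, and unrelated elements are elided):

```xml
<?define ReleaseVersion="1.0.2" ?>

<Product Id="*"
         Name="WaterFlowModel $(var.ReleaseVersion)"
         Version="$(var.ReleaseVersion)"
         UpgradeCode="NEW-PRODUCT-GUID-HERE">

  <!-- same GUID as the UpgradeCode above -->
  <Upgrade Id="NEW-PRODUCT-GUID-HERE">
    <!-- ... -->
  </Upgrade>

  <!-- new GUID for the start-menu folder component -->
  <Component Id="ProgramMenuDir" Guid="NEW-MENU-GUID-HERE">
    <!-- ... -->
  </Component>
</Product>
```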
Check if your Lenovo W510 uses Turbo Boost Mode

It looks like the Lenovo has a Turbo Boost mode, but it stays off with the default settings. This results in a default frequency of 1.3 GHz on a single core; it can be increased to 2 GHz as follows:

  • Disable the Hyper-Threading option in the BIOS (results in 4 cores instead of 8, where 4 of the 8 are not real)
  • Test it e.g. by running a compilation in VS while running CPU-Z and/or Intel Turbo Boost Monitor


Make your W510 compile faster!

NHibernate cascades
"deleted object would be re-saved by cascade (remove deleted object from associations)[DelftTools.Hydro.HydroNode#2]"

Recently a lot of NHibernate errors have come up. A very common cause is the cascade rules in the mappings, as they require close attention. An example of a 'wrong' mapping was the mapping of composite branch structures (CS). The CS has a list of child structures and these were persisted using cascade='all'. The same structures were also in the list of branch features on the branch, and these were mapped using cascade='all-delete-orphan'. So the same weir was saved via the branch and via the composite structure. This is fine as long as the cascades don't conflict.

Sometimes a conflict occurs when one cascade results in a delete and another in a save. For example, a composite structure is deleted and the cascade deletes all child structures, but one of these structures is still in the branch features collection of a branch. The branch cascade insists on the object being saved (or at least not deleted) and the CS cascade wants to delete it; hence a conflict. This can be fixed by removing the structure from both lists, or by downgrading/removing one of the cascades. In this case the branch should be responsible for saving the features and the CS should have a list of child structures without cascades.
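As an illustration (class, property and column names are made up; only the cascade attributes are the point), the resulting hbm.xml mappings would look something like:

```xml
<!-- on the branch: the branch owns its features -->
<list name="BranchFeatures" cascade="all-delete-orphan">
  <key column="BranchId" />
  <index column="ListIndex" />
  <one-to-many class="BranchFeature" />
</list>

<!-- on the composite structure: just references, no cascades -->
<list name="Structures" cascade="none">
  <key column="CompositeStructureId" />
  <index column="ListIndex" />
  <one-to-many class="Structure" />
</list>
```

With cascade="none" on the CS side, deleting a composite structure no longer tries to delete children that the branch still references.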

A quick recap of the different cascades :

  • none - do not do any cascades; let the user handle them.
  • save-update - when the object is saved/updated, check the associations and save/update any object that requires it (including saving/updating the associations in a many-to-many scenario).
  • delete - when the object is deleted, delete all the objects in the association.
  • delete-orphan - when the object is deleted, delete all the objects in the association. In addition, when an object is removed from the association and not associated with another object (orphaned), also delete it. Orphan only checks one relation.
  • all - when an object is saved/updated/deleted, check the associations and save/update/delete all the objects found.
  • all-delete-orphan - when an object is saved/updated/deleted, check the associations and save/update/delete all the objects found. In addition, when an object is removed from the association (orphaned), delete it. So 're-parenting' (changing parent) is a problem.

A note about the orphan cascades:
Orphan only looks at one instance at a time. So a composite structure might have a list of child structures; when a child structure is removed from the CS it is considered an orphan. NHibernate does not care about other relations the structure might have (for example with a branch or a new CS).

About reparenting : http://fabiomaulo.blogspot.com/2009/09/nhibernate-tree-re-parenting.html

Speed-up of build on new Lenovo Laptops

The following actions should speed up the overall build on new laptops:

  • Disable pagefile (optional)
  • run build/ngen_all.cmd - will pre-compile PostSharp and all shared libs

Benchmark

Old laptop (DELL E6500, 7200RPM HDD, 2x cores CPU) - laptop of Martijn Muurman:

  • Parallel build, using build-debug.cmd: 519 sec
  • Visual Studio 2008: 680 sec

New laptop (Lenovo W510, OCZ2 Vertex SSD, 4x cores CPU, i7):

  • Parallel build, using build-debug.cmd and RAM drive: 93 sec
  • Parallel build, using build-debug.cmd and SSD: 93 sec
  • Parallel build-debug.cmd (without PostSharp), SSD: 74 sec
  • Parallel build-debug.cmd (without PostSharp, without post-build copy events), SSD: 61 sec
  • Visual Studio 2010 build, SSD: 195 sec
  • Visual Studio 2010 build, RAM drive: 198 sec
  • Visual Studio 2010 build, loader, RAM drive: 198 sec
  • Parallel build, using build-debug-loader.cmd and RAM drive: ~60 sec

Parallel build allows an additional x2.1 speed-up, but it is very hard to make it work in Visual Studio since it builds projects sequentially. One option is to add a parallel build via Tools; see the following post on how to add it in Visual Studio: Hack: Parallel MSBuilds from within the Visual Studio IDE

The Exception you did not expect

The following test is green in the ReSharper test runner, even though it throws a different exception type than the one expected:

[Test]
[ExpectedException(typeof(ArgumentOutOfRangeException))]
public void ReadOnly()
{
     throw new InvalidOperationException();
}

So make sure you check the message as well, like this:

[Test]
[ExpectedException(typeof(ArgumentOutOfRangeException), ExpectedMessage = "hoi")]
public void ReadOnly()
{
    throw new InvalidOperationException("kees");
}

The last test does show up red.
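In newer NUnit versions you can sidestep the ExpectedException pitfall entirely with Assert.Throws, which verifies the exception type at a specific call and hands you the exception for further checks. The lambda body below is a stand-in for the real call under test:

```csharp
[Test]
public void ReadOnly()
{
    // Assert.Throws fails the test unless exactly this exception type is thrown;
    // the lambda here stands in for the real call under test
    var ex = Assert.Throws<ArgumentOutOfRangeException>(
        () => { throw new ArgumentOutOfRangeException(null, "hoi"); });

    // the message can then be checked explicitly
    Assert.AreEqual("hoi", ex.Message);
}
```

This also scopes the expectation to one statement, so an unexpected exception elsewhere in the test cannot accidentally make it pass.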

Build server statistics (NUnit)

Workload of build agents

Statistics of Delta Shell NUnit configuration

TDD is not a good design methodology - very good reading

Check out the links as well. About SOLID, SRP, etc.

Link to the article: http://blog.ploeh.dk/2010/12/22/TheTDDApostate.aspx

The TDD Apostate

I've been doing Test-Driven Development since 2003. I still do, I still love it, and I still expect to be doing it in the future. Over the years, I've repeatedly returned to the discussion of whether TDD should be regarded as Test-Driven Development or Test-Driven Design. For a long time I've been of the conviction that TDD is both of those. Not so any longer. TDD is not a good design methodology.
Over the years I've written tons of code with TDD. I've written code where tests blindly drove the design, and I've written code where the design was the result of a long period of deliberation, and the tests were only the manifestations of already well-formed ideas.

I can safely say that the code where tests alone drove the design never turned out particularly well. Although it was testable and, after a fashion, 'loosely coupled', it was still Spaghetti Code in the sense that it lacked overall consistency and good abstractions.

On the other hand, I'm immensely pleased with code like AutoFixture 2.0, which was mostly the result of hours of careful contemplation riding my bike to and from work. It was still written test-first, but the design was well thought out in advance.

This made me think: did I just fail (repeatedly) at Test-Driven Design, or is the overall concept a fallacy?

That's a pretty hard question to answer; what constitutes good design? In the following, let's assume that the SOLID principles are a pretty good indicator of good design. If so, does test-first drive us towards SOLID design?

TDD versus the Single Responsibility Principle

Does TDD ensure the application of the Single Responsibility Principle (SRP)? This question is easy to answer and the answer is a resounding NO! Nothing prevents us from test-driving a God Class. I've seen many examples, and I've been guilty of it myself.

Constructor Injection is a much better help because it makes SRP violations so painful.

The score so far: 0 points to TDD.

TDD versus the Open/Closed Principle

Does TDD ensure that we follow the Open/Closed Principle (OCP)? This is a bit harder to answer. I've previously argued that Testability is just another name for OCP, so that would in itself imply that TDD drives OCP. However, the issue is more complex than that, because there are several different ways we can address the OCP:

  • Inheritance
  • Composition

According to Roy Osherove's book The Art of Unit Testing, the 'Extract and Override' technique is a common unit testing trick. Personally, I rarely use it, but if used it will indirectly drive us a bit towards OCP via inheritance.

However, we all know that we should favor composition over inheritance, so does TDD drive us in that direction? As I alluded to previously, TDD does tend to drive us towards the use of Test Doubles, which we can view as one way to achieve OCP via composition.

However, another favorite composition technique of mine is to add functionality with a Decorator. This is only possible if the original type implements an interface that can be decorated. It's possible to write a test that forces a SUT to implement an interface, but TDD as a technique in itself does not drive us in that direction.

Grudgingly, however, I must admit that TDD still scores half a point against OCP, for a total score so far of ½ point.

TDD versus the Liskov Substitution Principle

Does TDD drive us towards adhering to the Liskov Substitution Principle (LSP)? Perhaps, but probably not.

Black box testing can't protect us against the SUT attempting to downcast its dependencies, but at least it doesn't particularly pull us in that direction either. When it comes to the SUT's treatment of a dependency, TDD pulls in neither direction.

Can we test-drive interface implementations that inadvertently violate the LSP? Yes, easily. As I discussed in a previous post, the use of Header Interfaces pulls us towards LSP violations. The more members an interface has, the more likely LSP violations are.

TDD can definitely drive us towards Header Interfaces (although they tend to hurt in the long run). I've seen this happen numerous times, and I've been there myself. TDD doesn't properly encourage LSP adherence.

The score this round: 0 points for TDD, for a running total of ½ point.

TDD versus the Interface Segregation Principle

Does TDD drive us towards the Interface Segregation Principle (ISP)? No. It's pretty easy to test-drive a SUT towards a Header Interface, just as we can test-drive towards a God Class.

Another 0 points for TDD. The score is still ½ point to TDD.

TDD versus the Dependency Inversion Principle

Does TDD drive us towards the Dependency Inversion Principle (DIP)? Yes, it does.

The whole drive towards Testability – the ability to replace dependencies with Test Doubles – drives us exactly in the same direction as the DIP.

Since we tend to mistake such mechanistic loose coupling for proper application design, this probably explains why we have, for so long, confused TDD with good design. However, although I view loose coupling as a prerequisite for good design, it is by no means enough.

For those that still keep score, TDD scores 1 point against DIP, for a total of 1½ points.

TDD does not ensure SOLID

With 1½ out of 5 possible points I have stated my case. I am convinced that TDD itself does not drive us towards SOLID design. It's definitely possible to use test-first techniques to drive towards SOLID designs, but that will always be an extra effort that supplements TDD; it's not something that is inherently built into TDD.

Obviously you could argue that SOLID in itself is not the end-all, be-all of proper API design. I would agree. However, based on my experience with TDD, I think the conclusion holds. TDD does not drive us towards good design. It is not a design technique.

I still write code test-first because I find it more productive, but I make design decisions out of band. I'm a Test-Driven Design Apostate.

Yesterday we discussed the SharpMap plug-in API. In my opinion we didn't make any good progress. I thought about some issues of the API and discussed them with Hidde and Pim. I told them that I really miss a strict separation of concerns and logical layers (UI, Domain, DataAccess, other) in the architecture and design of the DeltaShell framework.

For example, take the interface ILayer used in maps. To which architectural layer does it belong? Is it UI, Domain, DataAccess infrastructure, or something else? When I look at the interface I see all of them collected in one interface, which makes the interface unclear and hard to understand. We do this in many interfaces. Why? It's good practice to use separate logical layers in application frameworks; why don't we use them?

In my opinion we should re-design interfaces like ILayer so that it is clearer to which layer they belong. This will make the interfaces more lean and mean. I will try to work out an example to make my point more clear...