Requirements on the interface

The interface you use to communicate (IMyModelApi in the example above) should follow a few basic rules to work smoothly with the remoting implementation. Any interface will do: no special attributes are required, but you should mostly use simple (value) types, as is common when defining an interface for a native DLL (see the example after the list below).

Supported types:

  • all primitive types (int, double, string, etc.)
  • enums
  • decimal is NOT supported
  • double[], bool[], int[], short[], float[], byte[]
  • Array, if its runtime element type is one of the above
  • ref/out parameters
  • Type (through a type converter)
  • DateTime (through a type converter)
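
For illustration, an interface that stays within these types might look like the following sketch (IMyModelApi and its members are hypothetical placeholders, not part of the DeltaShell API):

Code Block
languagecsharp

    public interface IMyModelApi
    {
        // Primitive parameters are supported directly
        void Initialize(string configPath, int gridSize);

        // One-dimensional arrays of primitives are supported
        double[] GetWaterLevels();

        // ref/out parameters are supported
        bool TryGetCellCount(out int count);

        // DateTime is supported through a built-in type converter
        void SetStartTime(DateTime startTime);
    }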

To support custom classes, a little extra work is required. You can either annotate the class with the required serialization attributes (see below), or you can introduce a type converter which does custom serialization (see DateTimeToProtoConverter.cs). The latter is useful if you do not have access to the source code (e.g., a built-in .NET type) or do not wish to mix remoting concerns into your domain classes.

To work without a type converter, you can annotate a class as follows:

Code Block
languagecsharp

    [ProtoContract]
    public class CurvilinearGrid
    {
        [ProtoMember(1)] public int SizeN;
        [ProtoMember(2)] public int SizeM;

        [ProtoMember(3)] public double[] X;
        [ProtoMember(4)] public double[] Y;
        // additional code
    }

The attributes come from the protobuf-net library, and the annotation should be done according to its specification. In short: the class should have the ProtoContract attribute, and each member you want to serialize to the other process should have the ProtoMember attribute with a unique number. Member types must themselves be primitive (supported) types or annotated classes. The class should also have a default (parameterless) constructor.
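
As a sketch of these rules (GridBounds is a hypothetical type, not part of the DeltaShell API), a member of a custom type must itself be annotated, and a parameterless constructor must be available:

Code Block
languagecsharp

    [ProtoContract]
    public class GridBounds
    {
        [ProtoMember(1)] public double MinX;
        [ProtoMember(2)] public double MaxX;
    }

    [ProtoContract]
    public class CurvilinearGrid
    {
        // Parameterless constructor (implicit if no other constructors
        // are declared), required for deserialization
        public CurvilinearGrid() { }

        [ProtoMember(1)] public int SizeN;
        [ProtoMember(2)] public int SizeM;

        // Allowed because GridBounds is itself annotated with ProtoContract
        [ProtoMember(3)] public GridBounds Bounds;
    }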

Performance considerations

Running code in another process obviously has some performance overhead compared to running it in the same process. The overhead comes from starting the other process, intercepting the calls, serializing the call and its data into bytes, sending the information to the other process, decoding the information on the other end, doing the work, serializing the resulting data into bytes, sending the data back, and finally decoding the result value(s). Although the absolute time for all this work is actually quite small (say, 1 millisecond per call), it may significantly impact your performance, depending on how many calls you make versus the amount of work done per call.

Also, if your parameters or result values are large (e.g., arrays of data), the time it takes to serialize and deserialize the data increases, and memory overhead may grow as well. When the remote instance runs on the same machine, DeltaShell counters this problem by using a technique called 'Shared Memory' to transfer large arrays more efficiently. In short: instead of serializing the array, it simply memcpys it. This happens automatically when the array size exceeds a threshold and requires no additional configuration. However, it only works for one-dimensional arrays, and not for arrays defined inside custom types.

So, to summarize, take this into consideration:

  • More work per call and fewer calls is better
  • Send large multi-dimensional arrays as one-dimensional arrays (see the sketch below)
  • Avoid large arrays inside custom types
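
As a sketch of the second point (Flatten/Unflatten are our own hypothetical helpers, not part of the DeltaShell API), a 2D array can be copied into a one-dimensional array before the call and reconstructed on the other side:

Code Block
languagecsharp

    // Flatten a [rows, cols] array into a row-major 1D array before sending
    public static double[] Flatten(double[,] values)
    {
        int rows = values.GetLength(0);
        int cols = values.GetLength(1);
        var flat = new double[rows * cols];
        Buffer.BlockCopy(values, 0, flat, 0, flat.Length * sizeof(double));
        return flat;
    }

    // Rebuild the 2D array on the receiving side
    public static double[,] Unflatten(double[] flat, int rows, int cols)
    {
        var values = new double[rows, cols];
        Buffer.BlockCopy(flat, 0, values, 0, flat.Length * sizeof(double));
        return values;
    }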

Additionally, depending on your use case, it may be beneficial to re-use remote instances, start them in advance (warm-up), or run multiple instances concurrently.