...

Info

When running activities as an ensemble that request time series sets from the database that are not part of that ensemble, the default ensembleId should be added to the TimeSeriesSets definition. The default ensembleId is "main".
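A minimal sketch of such a timeSeriesSet (the moduleInstanceId, parameterId and locationId are hypothetical placeholders):

    <timeSeriesSet>
        <moduleInstanceId>ImportObserved</moduleInstanceId>
        <valueType>scalar</valueType>
        <parameterId>H.obs</parameterId>
        <locationId>H-2001</locationId>
        <timeSeriesType>external historical</timeSeriesType>
        <timeStep unit="hour"/>
        <relativeViewPeriod unit="day" start="-10" end="0"/>
        <readWriteMode>read only</readWriteMode>
        <!-- read this series from the default ensemble,
             not from the ensemble currently being looped over -->
        <ensembleId>main</ensembleId>
    </timeSeriesSet>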

All time series sets written when running in ensemble mode will have the ensembleId as specified in the workflow ensembleId element, unless overruled by a local ensembleId defined in the timeSeriesSet on writing.
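For context, a sketch of how a workflow activity can declare the ensemble it loops over (the moduleInstanceId and ensembleId are hypothetical); series written by this activity inherit this ensembleId unless the writing timeSeriesSet defines its own:

    <activity>
        <moduleInstanceId>Rainfall_Forecast</moduleInstanceId>
        <ensemble>
            <!-- series written by this activity get ensembleId "EPS",
                 unless overruled by a local ensembleId in the timeSeriesSet -->
            <ensembleId>EPS</ensembleId>
            <runInLoop>true</runInLoop>
        </ensemble>
    </activity>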

workflow:parallel execution over multiple Forecasting Shell instances

Grouping to accommodate parallel execution of multiple activities, i.e. parallelization of (a portion of) the workflow when multiple CPUs are available. Since FEWS 2017.01 it is optionally possible to run the parallel activities on multiple forecasting shells. Parallel activities should only be configured when the underlying activities have no data dependencies. If data dependencies exist, the activities should be executed in sequence; this may be forced by embedding them as part of a sequence group, as in the sketch below.
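A minimal sketch of a parallel group whose branches are internally sequential (module instance names are hypothetical): the two sequences run in parallel with each other, while the activities inside each sequence run in order because of their data dependency:

    <parallel>
        <sequence>
            <activity>
                <moduleInstanceId>Import_North</moduleInstanceId>
            </activity>
            <activity>
                <!-- depends on Import_North output, so runs after it -->
                <moduleInstanceId>Interpolate_North</moduleInstanceId>
            </activity>
        </sequence>
        <sequence>
            <activity>
                <moduleInstanceId>Import_South</moduleInstanceId>
            </activity>
            <activity>
                <moduleInstanceId>Interpolate_South</moduleInstanceId>
            </activity>
        </sequence>
    </parallel>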

To run on multiple Forecasting Shell instances, two flavours can be accommodated:

  • parallel - multipleForecastingShells=true: use this setting to conduct parallel execution of multiple activities
  • parallel - forecastingShellCount: use this setting to conduct parallel execution of any underlying loop (ensemble loop or location loop) within an activity

When using multipleForecastingShells=true, note that all taskrun partitions receive the same scheduledDispatchedTime. To run successfully, they need to be dispatched within the number of seconds specified by the launcher maxlatetime setting in fews.master.mc.conf; otherwise the taskruns may be terminated because they are overdue.
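A sketch of this first flavour, assuming two independent module instances (names hypothetical); each parallel activity may then be dispatched to its own Forecasting Shell instance:

    <parallel multipleForecastingShells="true">
        <activity>
            <moduleInstanceId>Run_Model_CoastalArea</moduleInstanceId>
        </activity>
        <activity>
            <moduleInstanceId>Run_Model_RiverArea</moduleInstanceId>
        </activity>
    </parallel>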

workflow:parallel - split ensemble loops and locations loops over multiple forecasting shells

To split loops over multiple forecasting shells, specify forecastingShellCount = n. Every underlying loop is then split into n parts: each FS instance runs the same specified activity but skips the part of the ensemble loop / location loop that is meant for another FS instance. It is not allowed to start a nested parallel while inside an ensemble/location loop parallel.
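A sketch of this second flavour, assuming a hypothetical ensemble and module instance: with forecastingShellCount="3" the ensemble loop below is split into three parts, each handled by a different FS instance:

    <parallel forecastingShellCount="3">
        <activity>
            <moduleInstanceId>Run_Hydrological_Model</moduleInstanceId>
            <ensemble>
                <ensembleId>EPS</ensembleId>
                <!-- the members of this ensemble loop are divided
                     over the 3 FS instances -->
                <runInLoop>true</runInLoop>
            </ensemble>
        </activity>
    </parallel>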

...