...

Schema location: https://fewsdocs.deltares.nl/schemas/version1.0/performanceIndicatorSets.xsd

Performance Indicator module

The performance indicator module is used as an analysis tool in DELFT-FEWS to establish an overview of how well the forecasting system is performing, either in terms of the accuracy of an individual forecasting module or in terms of the forecasting system as a whole. Performance can be assessed in two ways:

...

The second type of measure can be assessed once the observed data for which forecasts were made become available.

Assessing performance of modules

The first and simplest application of the performance indicator module is in traditional module calibration: two time series are compared, where one is the estimated series and the other is the reference series. These time series are compared over a configurable length. As with other time series, this length is referenced with respect to the forecast start time (T0).

...

On establishing the performance, the indicator is returned as a time series (simulated historical). This is a non-equidistant time series, labelled as a forecast historical, with the time stamp set to T0.

Assessing performance of forecast values- lead time accuracy

The performance of forecasts is assessed on the basis of lead time accuracy. This is done by comparing the forecast value at a given lead time against the observed value at the same time (received later!). For each lead time, this value is assessed over a given number of forecasts.

...

On selecting reference values, these may not yet be available. Should this be the case, the number of forecasts considered (J) is reduced accordingly. If fewer than the configured number are considered, a WARN message is logged indicating how many of the expected number were actually used.

Assessing performance of forecast values- timing of thresholds

An important indicator of performance is the timing of predicted threshold event crossings. Again, this is evaluated over a number of forecasts. To evaluate this, the threshold crossings in the indicator and the reference series are considered. For each pair of matching thresholds (matched on threshold ids), the time between the two is evaluated and expressed either as a time bias (T_BIAS) or a time absolute error (T_MAE). Times are evaluated in seconds.

...

The results of the evaluation are written as a time series (simulated historical), with the T0 of the evaluation run as the reference time and a time stamp for each evaluated threshold crossing.

Assessing performance of forecast precipitation

Performance indicators available:

...

(information to be added)

Assessing performance of forecast peak accuracy

(information to be added)

Configuration of performance module


Figure 134 Elements of the performance module configuration

performanceIndicatorSet

Root element for the configuration of a performance indicator. Multiple elements may be defined, one for each performance indicator to be assessed.

...

  • performanceIndicatorId : Optional Id for the configuration. Used for reference purposes only.

inputVariable

Definition of input variables (time series). Input variables are identified by their variableId. See the transformation module documentation for the definition of the inputVariable element. An input variable needs to be defined for both the simulated and the observed time series.

outputVariable

Definition of the outputVariable time series to which the performance indicator values are written. This will normally be a non-equidistant time series, as it is not a priori certain when the performance indicator module will be run.
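As a rough sketch, a performanceIndicatorSet could combine these variable definitions as follows. The element names are taken from this page; the nesting, the variableId values and the timeSeriesSet contents are illustrative assumptions and should be checked against the schema:

```xml
<!-- Illustrative sketch only: nesting and variableId values are assumptions -->
<performanceIndicatorSet>
    <inputVariable>
        <variableId>simulatedLevel</variableId>
        <!-- timeSeriesSet for the simulated (calculated) series,
             defined as in the transformation module -->
    </inputVariable>
    <inputVariable>
        <variableId>observedLevel</variableId>
        <!-- timeSeriesSet for the observed (reference) series -->
    </inputVariable>
    <outputVariable>
        <variableId>performanceOut</variableId>
        <!-- non-equidistant output series for the indicator values -->
    </outputVariable>
</performanceIndicatorSet>
```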

modulePerformanceIndicator

Root element for configuration of performance indicator assessing module performance

...

  • indicatorType : selection of performance indicator. Enumeration of options includes:
    • bias
    • meanabsoluteerror
    • meansquareerror
    • nashsutcliffeefficiency
    • peakmeansquareerror
    • volumeerror
  • calculatedVariableId : VariableId to identify calculated time series
  • observedVariableId : VariableId to identify observed (reference) time series
  • outputVariableId : VariableId to write resulting Performance index time series to.
  • sampleOutputVariableId : VariableId to write the total number of values (samples) used for the analysis to the output time series. Optional.
  • analysedCalculatedVariableId : VariableId to write exactly that part of the calculated series which was used for the analysis to the output series. Optional.
  • analysedObservedVariableId : VariableId to write exactly that part of the observed series which was used for the analysis to the output series. Optional.
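The elements above might be combined as in the following sketch. Element names follow the list above; the element order and the variableId values (simulatedLevel, observedLevel, performanceOut, sampleCount) are illustrative assumptions, not schema-verified:

```xml
<!-- Illustrative sketch: element order and ids are assumptions -->
<modulePerformanceIndicator>
    <indicatorType>nashsutcliffeefficiency</indicatorType>
    <calculatedVariableId>simulatedLevel</calculatedVariableId>
    <observedVariableId>observedLevel</observedVariableId>
    <outputVariableId>performanceOut</outputVariableId>
    <!-- optional: number of samples used in the analysis -->
    <sampleOutputVariableId>sampleCount</sampleOutputVariableId>
</modulePerformanceIndicator>
```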

leadTimeAccuracyIndicator

Root element for configuration of performance indicator assessing lead time accuracy

...

  • indicatorType : selection of performance indicator. Enumeration of options includes:
    • bias
    • meanabsoluteerror
    • meansquareerror
  • calculatedVariableId : VariableId to identify calculated time series
  • observedVariableId : VariableId to identify observed (reference) time series
  • outputVariableId : VariableId to write resulting performance index time series to
  • sampleOutputVariableId : VariableId to write the total number of values (samples) used for the analysis to the output time series. Optional. NOTE: this option is not supported by all assessment types; in that case a null pointer error will occur.
  • intermediateValuesVariableId : VariableId to write intermediate values (as visible in the log file if debug is on) to the output series. Optional and only applicable if leadTimePeriods are configured.
  • analysedCalculatedVariableId : VariableId to write exactly that part of the calculated series which was used for the analysis to the output series. Optional and only applicable if leadTimePeriods are configured.
  • analysedObservedVariableId : VariableId to write exactly that part of the observed series which was used for the analysis to the output series. Optional and only applicable if leadTimePeriods are configured.
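A sketch of a lead time accuracy configuration, combining the elements above with the leadTimes element described further below. Element names are taken from this page; the nesting, the variableId values, and whether time and unit are attributes or child elements of leadTime are assumptions to be verified against the schema:

```xml
<!-- Illustrative sketch: nesting and attribute usage are assumptions -->
<leadTimeAccuracyIndicator>
    <indicatorType>meanabsoluteerror</indicatorType>
    <calculatedVariableId>forecastLevel</calculatedVariableId>
    <observedVariableId>observedLevel</observedVariableId>
    <outputVariableId>leadTimeAccuracyOut</outputVariableId>
    <leadTimes>
        <!-- lead times of 6 and 24 hours from the forecast time -->
        <leadTime unit="hour" time="6"/>
        <leadTime unit="hour" time="24"/>
    </leadTimes>
</leadTimeAccuracyIndicator>
```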

thresholdTimingIndicator

Root element for configuration of the performance indicator assessing the accuracy of threshold timing

...


Figure 135 Elements of the ModulePerformance configuration

additionalCriteria

Additional criteria identified in establishing performance indicators. Application depends on the performance indicator selected.

...

  • Criteria : list of criteria that may be applied. Enumeration of options includes:
    • minnumberofforecasts
    • timewindowinseconds
    • thresholdvaluesetid
    • peakthresholdvalue
    • maximumgapbetweenpeaksinseconds
    • minimumrecessionbetweenpeaks
  • value : value of the criterion defined
  • violationOfCriteriaFlaggedAs : optional flag applied to the performance indicator output series if the criteria identified (e.g. minnumberofforecasts) do not hold. Enumeration of:
    • unreliable
    • doubtful
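A sketch of an additionalCriteria block, using the names listed above. The nesting and the value shown are illustrative assumptions:

```xml
<!-- Illustrative sketch: require at least 10 forecasts,
     otherwise flag the indicator output as doubtful -->
<additionalCriteria>
    <criteria>minnumberofforecasts</criteria>
    <value>10</value>
    <violationOfCriteriaFlaggedAs>doubtful</violationOfCriteriaFlaggedAs>
    <description>At least 10 forecasts required for a reliable indicator</description>
</additionalCriteria>
```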

description

Description of criteria defined. For reference purposes only.

...


Figure 136 Elements of the leadTimeAccuracy configuration.

leadTimes

Root element for defining lead times.

leadTime

Lead time for which to assess lead time performance.

...

  • time : lead time in number of time units from the forecast time of the calculated time series. The time units are configured with the attribute 'unit'.
  • outputVariableId : variableId to output the lead time accuracy to. This is defined when a separate time series is used to keep track of performance at each lead time. It is not required when performance is tracked in a single time series. (Note that in the former case a simulated historical time series can be used; in the latter this must be a simulated forecasting time series.)
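The two tracking modes described above might look as follows. Element and attribute names are taken from this page; whether outputVariableId is an attribute or a child element of leadTime, and the variableId values, are assumptions:

```xml
<!-- Illustrative sketch: a separate output series per lead time,
     so each series can be a simulated historical -->
<leadTimes>
    <leadTime unit="hour" time="6" outputVariableId="accuracy6h"/>
    <leadTime unit="hour" time="24" outputVariableId="accuracy24h"/>
</leadTimes>
```

Omitting outputVariableId on each leadTime would correspond to the single-series mode, where the one configured output variable must be a simulated forecasting time series.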

leadTimePeriods

Root element for defining lead time periods

leadTimePeriod

Lead time period for which to assess lead time performance.

...


Figure 137 Elements of the thresholdTimingAccuracy configuration.

thresholdIds

Root element for defining threshold crossings to be assessed.

thresholdId

Configuration of threshold crossing to be checked.

...
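A sketch of a threshold timing configuration, tying the thresholdIds element to the indicator described earlier. Element names are taken from this page; the nesting, the indicatorType values (T_BIAS and T_MAE, as named in the threshold timing section above) and the thresholdId value are illustrative assumptions:

```xml
<!-- Illustrative sketch: evaluate the absolute timing error (in seconds)
     of crossings of one threshold, matched on threshold id -->
<thresholdTimingIndicator>
    <indicatorType>T_MAE</indicatorType>
    <calculatedVariableId>forecastLevel</calculatedVariableId>
    <observedVariableId>observedLevel</observedVariableId>
    <outputVariableId>thresholdTimingOut</outputVariableId>
    <thresholdIds>
        <thresholdId>floodWarningLevel</thresholdId>
    </thresholdIds>
</thresholdTimingIndicator>
```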