
Accumulative

Input
  • inputVariable
Options
  • validationRules. Optionally, a set of validation rules can be defined. These rules define criteria that determine the output flag of the calculated value, based on the number of missing and/or doubtful values counted in the input values.
Output
  • outputVariable
Description

This transformation performs an aggregation from an instantaneous time series to an aggregated time series. It sums the values of the input time series that fall within the aggregation period. If no aggregation period is configured, the aggregation period equals the period between the current output time and the previous output time. Alternatively, the aggregation period can be configured in the time series set of the output variable. In that case the aggregation period is relative to the current output time and aggregation periods for different output times are allowed to overlap; with overlapping aggregation periods this transformation can be used to calculate a moving sum. If one of the input values is missing or unreliable, the output is missing.
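The summing and missing-value behaviour can be illustrated with a minimal Python sketch (an illustration of the logic only, not the actual FEWS implementation); it accumulates four 6-hourly values into a daily sum and shows how a single NaN (or, equivalently, an unreliable value) propagates to the output:

	import math

	def accumulate(values):
	    # Sum the input values that fall within one aggregation period.
	    # A single missing value (NaN) - or an unreliable value, which is
	    # treated the same way here - makes the aggregated value missing.
	    if any(math.isnan(v) for v in values):
	        return float("nan")
	    return sum(values)

	# 6-hourly values from 01-01-2007 06:00 up to and including 02-01-2007 00:00
	print(accumulate([2.0, 3.0, 4.0, 5.0]))           # 14.0 -> output at 02-01-2007 00:00
	# 6-hourly values from 02-01-2007 06:00 up to and including 03-01-2007 00:00
	print(accumulate([6.0, float("nan"), 8.0, 9.0]))  # nan  -> output at 03-01-2007 00:00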

The table below shows an example of accumulating 6-hourly values to daily values using this transformation.

 

Date/Time           Original series   Result
01-01-2007 00:00    1,00
01-01-2007 06:00    2,00
01-01-2007 12:00    3,00
01-01-2007 18:00    4,00
02-01-2007 00:00    5,00              14,00
02-01-2007 06:00    6,00
02-01-2007 12:00    NaN
02-01-2007 18:00    8,00
03-01-2007 00:00    9,00              NaN
03-01-2007 06:00    10,00

The figure below shows original 15 minute data and the aggregated hourly data using the accumulative function:

Validation rules

The concept of validation rules was introduced to solve a common problem in operational situations when using aggregation transformations: when, for example, a yearly average was computed, a single missing value in the input would cause the yearly average to be a missing value as well.

The validation rules provide a solution for these types of situations.

The validation rules are optional in the configuration and can be used to define the output flag and a custom flag source of the output value, based on the number of missing/unreliable values and/or the number of doubtful values in the input values that are used.

With these rules it is possible to define, for example, that the output of the transformation is reliable if less than 10% of the input is unreliable and/or missing, and that if this percentage is above 10% the output should be a missing value.

It is important to note that input values which are missing and input values which are marked as unreliable are treated the same. Both are seen as missing values by the validation rules.

This prevents a single missing value in the input from leading to a missing value in the aggregated output.

Below is the configuration for the basic example described above.

				<validationRule>
					<inputMissingPercentage>10</inputMissingPercentage>
					<outputValueFlag>reliable</outputValueFlag>
				</validationRule>
				<validationRule>
					<inputMissingPercentage>100</inputMissingPercentage>
					<outputValueFlag>missing</outputValueFlag>
				</validationRule>

The configured validation rules are applied in the following way: the first validation rule is applied first. In the example above, the first rule states that if 10% or less of the input is missing (or unreliable), the output flag will be set to reliable. If the input does not meet the criteria of the first rule, the transformation module will try to apply the second rule. In this case the second rule will always apply, because a percentage of 100% is configured.

This is the recommended way of configuring the validation rules. By default, if validation rules are configured and none of them apply, the output will be set to missing. For the users of the system, however, it is more understandable if the behaviour of the aggregation is configured explicitly instead of relying on a hard-coded fallback mechanism in the software.
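The evaluation order can be sketched as follows (again an illustration of the logic only, not the FEWS code): the rules are checked in the configured order, and the first rule whose inputMissingPercentage is not exceeded determines the output flag; if no rule applies, the output falls back to missing.

	def apply_validation_rules(rules, missing_percentage):
	    # 'rules' is a list of (inputMissingPercentage, outputValueFlag) pairs
	    # in configured order; 'missing_percentage' is the percentage of
	    # missing/unreliable input values. Illustration only.
	    for threshold, flag in rules:
	        if missing_percentage <= threshold:
	            return flag
	    return "missing"  # fallback when none of the configured rules applies

	rules = [(10, "reliable"), (100, "missing")]  # the example configuration above
	print(apply_validation_rules(rules, 5))   # reliable: 5% missing, first rule applies
	print(apply_validation_rules(rules, 25))  # missing: first rule fails, second always applies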

To explain the validation rules further, a more complex example follows. Let's say that we would like to configure our aggregation in such a way that the following rules are applied:

1. if the percentage of missing and/or unreliable values is less than 15%, the output should be reliable;
2. if the percentage of missing values is less than 40%, the output should be doubtful;
3. in all other cases the output should be a missing value.

Below is a configuration example implementing the rules above; a short sketch of how these rules evaluate follows after it.

				<validationRule>
					<inputMissingPercentage>15</inputMissingPercentage>
					<outputValueFlag>reliable</outputValueFlag>
				</validationRule>
				<validationRule>
					<inputMissingPercentage>40</inputMissingPercentage>
					<outputValueFlag>doubtful</outputValueFlag>
				</validationRule>
				<validationRule>
					<inputMissingPercentage>100</inputMissingPercentage>
					<outputValueFlag>missing</outputValueFlag>
				</validationRule>
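Using the same illustrative helper as above, these three rules evaluate as follows for a few example percentages of missing/unreliable input values:

	rules = [(15, "reliable"), (40, "doubtful"), (100, "missing")]
	print(apply_validation_rules(rules, 10))  # reliable
	print(apply_validation_rules(rules, 30))  # doubtful
	print(apply_validation_rules(rules, 60))  # missing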

In some cases one would like to distinguish between situations in which, for example, the output was marked as reliable. In the example above, if all of the input values were reliable the output is marked as reliable, but if, for example, 10% of the input values were unreliable the output is also marked as reliable.

It would be useful if the user of the system were able to see in the FEWS GUI why the output was marked as reliable.

To make this possible, the concept of a custom flag source was added to the validation rules. In addition to configuring an output flag, it is also possible to configure a custom flag source. In the table of the Timeseriesdialog the custom flag source can be made visible by

Below an example in which w

Configuration example
	<transformation id="aggregation accumulative">
		<aggregation>
			<accumulative>
				<inputVariable>
					<timeSeriesSet>
						<moduleInstanceId>ImportTelemetry</moduleInstanceId>
						<valueType>scalar</valueType>
						<parameterId>H.obs</parameterId>
						<locationSetId>hydgauges</locationSetId>
						<timeSeriesType>external historical</timeSeriesType>
						<timeStep unit="minute" multiplier="15"/>
						<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
						<readWriteMode>read only</readWriteMode>
						<delay unit="minute" multiplier="0"/>
					</timeSeriesSet>
				</inputVariable>
				<outputVariable>
					<timeSeriesSet>
						<moduleInstanceId>Aggregate_Historic</moduleInstanceId>
						<valueType>scalar</valueType>
						<parameterId>accumulative</parameterId>
						<locationSetId>hydgauges</locationSetId>
						<timeSeriesType>external historical</timeSeriesType>
						<timeStep unit="hour" multiplier="1"/>
						<relativeViewPeriod unit="day" startOverrulable="true" start="-7" end="0"/>
						<readWriteMode>add originals</readWriteMode>
						<synchLevel>1</synchLevel>
					</timeSeriesSet>
				</outputVariable>
			</accumulative>
		</aggregation>
	</transformation>