
fileformats.xsd is a container schema that includes the schemas listed below.

| Basename | Extension | Root element | Comment |
|---|---|---|---|
| pi_parameters | xsd | parameters | Module parameters |
| pi_timeseries | xsd | timeseries | Time series |
| pi_latinputs | xsd | latinputs | Lateral inputs |
| pi_locations | xsd | locations | Time series location |
| pi_mapstacks | xsd | mapstacks | Map stacks |
| pi_state | xsd | state | Module state |
| pi_table | xsd | table | Lookup tables |
| pi_diag | xsd | diag | Module diagnostics |
| pi_branches | xsd | branches | Branches |
| pi_crosssections | xsd | crosssections | Cross sections |
| pi_polygons | xsd | polygons | Polygon boundary |
| pi_profiles | xsd | profiles | Longitudinal profile |
| pi_cells | xsd | cells | Grid cell centre point data |

Appendices

XML Schema listings

The table below lists the various XML Schemas. The page number refers to the page in Appendix E where the schema is defined.

| Schema | Page in Appendix E |
|---|---|
| Contents | 1 |
| Schema fileformats.xsd | 2 |
| element Branches | 3 |
| element Cells | 6 |
| element CrossSections | 7 |
| element Diag | 10 |
| element geoDatum | 11 |
| element LatInputs | 12 |
| element locationId | 13 |
| element Locations | 14 |
| element MapStacks | 16 |
| element parameter | 19 |
| element Parameters | 20 |
| element Polygons | 22 |
| element Profiles | 27 |
| element State | 29 |
| element Table | 31 |
| element TimeSeries | 34 |
| element timeZone | 39 |
| complexType TimestepType | 40 |



Grid data file formats

PCRaster

PCRaster is a unique raster-based GIS package ideally suited for dynamic modelling. Information on the PCRaster native format (and a free version, including tools for conversion) can be obtained from http://www.pcraster.nl.

USGS BIL / BSQ / BIP

These files consist of an ASCII header file (describing the content) and a binary data file. The following formats are supported:

    1. Band sequential (BSQ) multiband images
    2. Band interleaved by line (BIL) multiband images
    3. Band interleaved by pixel (BIP) multiband images


To support time steps, an addition has been made to the .hdr file. A USGS BIL, BIP or BSQ file always has a *.hdr header file that contains the settings for the dataset. To add support for time data in these files, the following changes must be made:

    1. Add a line to the header file with the number of time blocks, for example nBlocks 3.
    2. The nBands property in the header file represents the number of available parameters.


Creating a dynamic bil file:
Create the binary file in BSQ, BIL or BIP format. If you use, for example, ArcView to create *.BSQ binary files for each time step, you can use the DOS copy command to concatenate them as follows:
copy /B timestep1.bsq + timestep2.bsq + timestep3.bsq + ... + timestepN.bsq destination.bsq
For more information on the copy command, open a command prompt, type copy /? and press Enter.
Example of a header file:
ByteOrder I
Layout BIL
nRows 2
nCols 2
nBands 1
nBlocks 1
nBits 32
BandRowBytes 8
TotalRowBytes 8
BandGapBytes 0
NoData -999
ULXmap 163900
ULYmap 522900
Xdim 10000
Ydim 10000
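
For illustration, a multi-block file of this kind might be read as follows. This is a Python sketch, not part of the format specification: the BSQ layout and 32-bit floating point samples are assumptions, since the header alone does not fix the sample type.

```python
import numpy as np

def read_hdr(path):
    """Parse 'keyword value' pairs from a *.hdr header file."""
    header = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                header[parts[0].lower()] = parts[1]
    return header

hdr = read_hdr("destination.hdr")
nrows, ncols = int(hdr["nrows"]), int(hdr["ncols"])
nbands = int(hdr.get("nbands", 1))       # number of parameters
nblocks = int(hdr.get("nblocks", 1))     # number of time steps (the addition above)
endian = "<" if hdr.get("byteorder", "I").upper() == "I" else ">"  # I = Intel

raw = np.fromfile("destination.bsq", dtype=f"{endian}f4")
# One block per time step, each block a complete BSQ image.
data = raw.reshape(nblocks, nbands, nrows, ncols)
print(data.shape)
```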

Image file

The binary image file for the BIL/BIP/BSQ image format is merely a bit stream of the image data. How the image data is arranged in that bit stream defines whether it is a BIL, BIP, or BSQ image.
Band interleaved by line data stores pixel information band by band for each line, or row, of the image. For example, given a three-band image, all three bands of data are written for row one, all three bands of data are written for row two, and so on, until the total number of rows in the image is reached.
Band interleaved by pixel data is similar to BIL data, except that the data for each pixel is written band by band. For example, with the same three-band image, the data for bands one, two, and three is written for the first pixel in column one; the data for bands one, two, and three is written for the first pixel in column two; and so on.
Band sequential format stores information for the image one band at a time. In other words, data for all pixels for band one is stored first, then data for all pixels for band two, and so on.
For further information see: http://www.esri.com/library/whitepapers/pdfs/eximgav.pdf

ESRI ASCII

ASCII grids are stored in a format compatible with ESRI (and many other) software. The ASCII raster file format is a simple format that can be used to transfer raster data between various applications. The header data includes the following keywords and values:

    1. ncols - number of columns in the data set.
    2. nrows - number of rows in the data set.
    3. xllcenter or xllcorner - x-coordinate of the centre or lower-left corner of the lower-left cell.
    4. yllcenter or yllcorner - y-coordinate of the centre or lower-left corner of the lower-left cell.
    5. cellsize - cell size for the data set.
    6. nodata_value - value in the file assigned to cells whose value is unknown. This keyword and value are optional; the nodata_value defaults to -9999.


The first row of data is at the top of the data set, moving from left to right. Cell values should be delimited by spaces. No carriage returns/linefeeds are necessary at the end of each row in the data set. The number of columns in the header is used to determine when a new row begins. The number of cell values must be equal to the number of rows times the number of columns.
Example:

ncols 4
nrows 3
xllcorner 175208.9306
yllcorner 320440.9027
cellsize 25
NODATA_value -9999
12 13 14 15
12 13 14 15
12 14 14 15

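For illustration, a minimal reader for this format might look as follows. This is a Python sketch only; it assumes the header keywords listed above and treats nodata_value as optional.

```python
import numpy as np

def read_esri_ascii(path):
    """Read an ESRI ASCII grid into a 2-D array, replacing nodata with NaN."""
    with open(path) as f:
        tokens = f.read().split()
    header, i = {}, 0
    while tokens[i].lower() in ("ncols", "nrows", "xllcorner", "xllcenter",
                                "yllcorner", "yllcenter", "cellsize",
                                "nodata_value"):
        header[tokens[i].lower()] = float(tokens[i + 1])
        i += 2
    shape = (int(header["nrows"]), int(header["ncols"]))
    values = np.array(tokens[i:], dtype=float).reshape(shape)
    nodata = header.get("nodata_value", -9999.0)
    return np.where(values == nodata, np.nan, values), header
```
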
EA Schema to PI mapping

The EA Time Series Schema, established in draft form for hydrometric data, can be mapped to the time series XML schema established as part of the published interface. This mapping is illustrated in the table below (* indicates mandatory fields).

| NFFS Published Interface | EA Hydrometric Proposed Format | Comment |
|---|---|---|
| Header | HydrometricData | |
| sourceOrganisation | sourceOrganisation | |
| sourceSystem | sourceSystem | |
| fileDescription | fileDescription | |
| creationDate | creationDate | |
| creationTime | creationTime | |
| | Station | |
| region | region | Optional. May be useful if the coding used is not unique across regions |
| stationName | stationName | |
| longname | | Optional long descriptive name of the station |
| locationid | stationReference | Compulsory |
| geodatum | | Identifies the geographic datum |
| location | ngr | Uses the "geodatum" field to allow different datums - ngr is OS1936 |
| | SetofReadings | |
| parameter | parameter * | Compulsory |
| type* | dataType * | Enumeration supports only Accumulative or Instantaneous - interval specified later |
| units* | units * | Optional string identifying units - ISO standards should be adhered to |
| startdate | startDate | Field for date |
| startdate | startTime | Field for time |
| enddate | endDate | Field for date |
| enddate | endTime | Field for time |
| missval | invalidNumber | |
| | dayOrigin | Not included in published interface |
| | readingsPerDay | Not included in published interface. Information contained in other fields |
| time step* | | Time step in seconds (used for equidistant series) |
| Event | Reading | |
| date* | date * | Field for date |
| time* | time * | Field for time |
| value* | Reading* | Field for value of reading |
| flag | quality | Only a single flag considered - could be used for quality flag |
| | quality2 | Not included in Published Interface |
| | highLow | Not included in Published Interface; may be combined with flag |
| | Comment | Not included in Published Interface |
| | startDate | Not included in Published Interface |
| | startTime | Not included in Published Interface |
| | endDate | Not included in Published Interface |
| | endTime | Not included in Published Interface |


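Expressed as a lookup table, the header mapping above can be captured directly in code. The sketch below is illustrative only; field spellings follow the table, and fields without a published interface counterpart map to None.

```python
# EA Hydrometric (proposed) header fields -> published interface fields.
EA_TO_PI_HEADER = {
    "sourceOrganisation": "sourceOrganisation",
    "sourceSystem": "sourceSystem",
    "fileDescription": "fileDescription",
    "creationDate": "creationDate",
    "creationTime": "creationTime",
    "region": "region",
    "stationName": "stationName",
    "stationReference": "locationid",
    "ngr": "location",            # interpreted via the geodatum field
    "parameter": "parameter",
    "dataType": "type",
    "units": "units",
    "startDate": "startdate",     # PI holds date and time in one field
    "startTime": "startdate",
    "endDate": "enddate",
    "endTime": "enddate",
    "invalidNumber": "missval",
    "dayOrigin": None,            # not included in the published interface
    "readingsPerDay": None,
    "quality2": None,
    "highLow": None,
}
```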


Quality flags

Following the discussion between Delft Hydraulics and CEH & Wallingford Software on Friday 2 May 2003 in Wallingford on providing an enumeration of quality flags as a part of the SIS, a proposed quality flag enumeration is given in the table below. As described in the time series XML schemas provided, we have made allowance for a single quality flag. This is contrary to the EA XML interchange formats, which provide for three separate quality flags. A single flag seems more appropriate to the level of communication we are dealing with and should avoid ambiguity. The flags are single byte values and are incorporated in the XML Schema for time series. Quality flags are provided for each data point.
Quality flags are constructed on a philosophy of two qualifiers: the first describes the origin of the data and the second its quality.
Possible origins of data are:

  1. Original: the data value is the original value; it has not been amended by NFFS.
  2. Completed: the original value was missing and was replaced by a non-missing value.
  3. Corrected: the original value was replaced with another non-missing value.


Possible qualifiers are:

  1. Reliable: the data value is reliable and valid.
  2. Doubtful: the validity of the data value is uncertain.
  3. Unreliable: the data value is unreliable and cannot be used.


Following this specification, the table below gives an overview of the quality flag enumeration.

Table D.1 Enumeration of quality flags

| Enumeration | Description |
|---|---|
| 0 | Original/Reliable: the data value is the original value retrieved from an external source, and it successfully passes all validation criteria set. |
| 1 | Corrected/Reliable: the original value was removed and corrected. Correction may be through interpolation or manual editing. |
| 2 | Completed/Reliable: the original value was missing. The value has been filled in through interpolation, transformation (e.g. stage discharge) or a model. |
| 3 | Original/Doubtful: observed value retrieved from an external data source. The value is valid, but marked as suspect due to soft validation limits being exceeded. |
| 4 | Corrected/Doubtful: the original value was removed and corrected. However, the corrected value is doubtful due to validation limits. |
| 5 | Completed/Doubtful: the original value was missing. The value has been filled in as above, but the resulting value is doubtful due to limits in the transformation/interpolation, or the input value used for the transformation being doubtful. |
| 6 | Missing/Unreliable: observed value retrieved from an external data source. The value is invalid due to validation limits set and is removed. |
| 7 | Corrected/Unreliable: the original value was removed and corrected. However, the corrected value is unreliable and is removed. |
| 8 | Completed/Unreliable: the original value was missing. The value has been filled in as above, but the resulting value is unreliable and is removed. |
| 9 | Missing value in the originally observed series. Note this is a special form of both Original/Unreliable and Original/Reliable. |


Notes:

  • No difference is made between historic and forecast data; this is not considered a quality flag. The data model of NFFS is constructed such that this difference is inherent to the data type definition.
  • External sources may be an actual external source, a forecasting module or a transformation. By convention in NFFS, the definition of data series parameter types identifies the data source.

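Where adapter or configuration code must interpret these flags, the enumeration can be captured directly. The sketch below is illustrative; the class and helper are not part of the published interface.

```python
from enum import IntEnum

class QualityFlag(IntEnum):
    """Single-byte quality flags as enumerated in Table D.1."""
    ORIGINAL_RELIABLE = 0
    CORRECTED_RELIABLE = 1
    COMPLETED_RELIABLE = 2
    ORIGINAL_DOUBTFUL = 3
    CORRECTED_DOUBTFUL = 4
    COMPLETED_DOUBTFUL = 5
    MISSING_UNRELIABLE = 6
    CORRECTED_UNRELIABLE = 7
    COMPLETED_UNRELIABLE = 8
    MISSING_ORIGINAL = 9      # missing value in the originally observed series

def is_usable(flag: QualityFlag) -> bool:
    """Reliable and doubtful values (0-5) may be used; 6-9 may not."""
    return flag <= QualityFlag.COMPLETED_DOUBTFUL
```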


Schema Documentation


Module Adapter Application



This memo describes the application of the module adapter to allow integration with the National Flood Forecasting System (NFFS) through the published interface format interchange specification.
The objective of the memo is twofold:

  1. to provide a description of the preferred approach in developing module adapters, and
  2. to illustrate the approach through an example taken from the Northeast region.


The document SIS Module Adapter Specification, Version 2.4 describes the use of the general adapter in allowing modules to be run from within NFFS. The general adapter (this module is a part of NFFS) is configured to provide the required data (both dynamic and static) in the published interface format. A module adapter (supplied by the same supplier as the module itself) is used to translate the data from the published interface format to the native module format.

Figure 2: Schematic interaction between the General Adapter and the Module Adapter through the published interface.

The general adapter is, as stated, an integral part of NFFS. This adapter is used to link all third party modules required to make a forecast in a configured forecast system. To allow this, each instance of the general adapter linking NFFS with a forecasting module is configured as required to provide the data inputs to the module, run the module and retrieve the data outputs from the module.

The preferred approach in running a module within NFFS is in three steps:

    1. Export of the data required by the module, through the published interface, from the NFFS database to the native module format. This data can cover dynamic time series data, parameter value sets, module states, etc. A full description of the data types supported is given in SIS Module Adapter Specification, Version 2.0. To determine what data is exported to a given module, a configuration file (XML formatted) is passed to the General Adapter as an argument. This file is part of the NFFS and is configured during set-up of a forecasting system. After the general adapter has run, the input files required by the module are available in the published interface format. These can comprise time series, module states, etc. Figure 3 shows a schematic description of this first step. On completion, the module adapter must provide an XML formatted diagnostic file giving the general adapter information on the status of the translation process. For the format of this file and the associated enumeration see the published interface specification; a minimal sketch of writing such a file follows Figure 3.



Figure 3: Detailed view of the export of data from NFFS to the native module format through the Published Interface (PI).

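As an illustration of the diagnostic requirement in this first step, the sketch below shows how a module adapter might emit such a status file. The element and attribute names are assumptions for illustration only; the normative layout is the pi_diag schema (root element diag) of the published interface specification.

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch: report translation status back to the General
# Adapter. Element/attribute names are illustrative; consult the pi_diag
# schema in the published interface specification for the real layout.
def write_diag(path, messages):
    root = ET.Element("diag")
    for level, text in messages:
        line = ET.SubElement(root, "line")
        line.set("level", str(level))          # severity enumeration
        line.set("description", text)
    ET.ElementTree(root).write(path, encoding="UTF-8", xml_declaration=True)

write_diag("kwadapter_diag.xml",
           [(3, "translated todmdn.xml and walsdn.xml to native KW input"),
            (3, "module adapter completed successfully")])
```
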
    2. In the second step the module itself is run. This is achieved by the general adapter executing a call to the module executable, with the possible addition of command line arguments. The module executable must be able to run in batch mode, without any user interaction required. It reads the required input data, parameters and states in its native format, performs the required calculation and establishes a set of output files and states, again in the native module format. This set of output files must include a file giving diagnostic information on the module run; this file can be in the native module format. Figure 4 gives a detailed schematic overview of the concept of running a module executable within NFFS.




Figure 4: Detailed view of running a module executable from NFFS.

    3. The third step comprises the import of module output data into NFFS, again using the published interface format. The module adapter is called to transform the native module output formats to the published interface format. Once completed, the output data is retrieved through the published interface format for insertion into the NFFS central database. As with the module inputs, an XML formatted configuration file is used to determine what data are imported.


Figure 5: Detailed view of the import of data to NFFS from the native module format through the Published Interface (PI).
The three steps make it clear that there are three functional elements to be provided by a module supplier to allow coupling of a module with NFFS. This functionality may be encompassed in either two or three separate executables, as summarised in the table below; a schematic skeleton follows the table.

| Element | Function | Configuration | Comment |
|---|---|---|---|
| Module Adapter | Import data from the Published Interface format to the native module format | Configuration file specifying the data to import. Preferred format: XML | |
| Module executable | Run the module using data in the native module input format; write output in the native module output format | | Batch file; runs in a dedicated work directory |
| Module Adapter | Export data from the native module format to the Published Interface format | Configuration file specifying the data to export. Preferred format: XML | |

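The three elements might be combined along the following lines. This is a hypothetical skeleton only: the import/run/export mode argument and all names are illustrative, and the specification does not prescribe this layout.

```python
import sys

def pi_to_native(config_file):
    """Module adapter, import direction: PI XML files to native format."""
    ...

def run_module(config_file):
    """Module executable: run the calculation in batch mode."""
    ...

def native_to_pi(config_file):
    """Module adapter, export direction: native output to PI XML files."""
    ...

ACTIONS = {"import": pi_to_native, "run": run_module, "export": native_to_pi}

if __name__ == "__main__":
    mode, config = sys.argv[1], sys.argv[2]   # e.g. import kw_hebdbr_input.xml
    ACTIONS[mode](config)
    sys.exit(0)   # the General Adapter monitors the return code
```
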
Example: Kinematic wave routing at Hebden Bridge

As an example, a comparison is made in this memo using the ICA model component file for the forecast at Hebden Bridge on the Calder River in the Northeast region. Although some understanding of the structure of the ICA is helpful in this comparison, developing an example from the ICA seems most appropriate given the complexity of the ICA. The full ICA module component file is included in the Appendix for reference.
Description of current approach as implemented in ICA:
Two data series are forecast for the site at Hebden Bridge: the water level and the discharge (described in the ICA in the forecast requirement files lu-hebdbr1.rffs and qx-hebdbr1.rffs). The discharge at the site is first calculated using the Kinematic Wave model (supplied by CEH) for the reach upstream of Hebden Bridge.
There are two inputs to the kinematic wave model, being the discharges from the Walsden sub-catchment and the Todmorden sub-catchment (ICA forecast requirements QX-WALSDN1 and QX-TODMDN1).
These two inputs are routed using the KW model to form a temporary output discharge series at Hebden Bridge (Series name in ICA Model Component File: 2Q-HEBDBR1).
In the next step, the level series available in the database for the historic period (ICA code LU-HEBDBR1) is transformed to a discharge series using the applicable stage discharge curve (ICA: RATING algorithm). This supplies for the historic period an "observed" discharge series.
Using this "observed" discharge series, an ARMA error modelling step is carried out. This uses as input the temporary discharge calculated by the KW model and the "observed" discharge series. Using the ARMA procedure the corrected output series for Hebden Bridge is derived (ICA: QX-HEBDBR1) for the new forecast period.
In the final step levels for the forecast period are derived using this forecast discharge series again through the applicable rating curve (ICA: RATING algorithm).
Data series naming conventions in NFFS
In NFFS, the naming convention is a little different from that applied in the ICA. The IDs with which series are identified are constructed on the basis of the (unique) station code. Data series available at that station are then identified by the station code plus a data type suffix. A clear distinction is made, through parameter types, between series available for the historic period and for the forecast period. The series ID is used as a unique identifier in locating data series. Each series also has a "name" that is used in all displays to enhance readability.

| Description | ICA Series | NFFS series (historical) | NFFS series (forecast) |
|---|---|---|---|
| Input discharge series | QX-WALSDN1 | H-27392-Q.hc | H-27392-Q.fc |
| Input discharge series | QX-TODMDN1 | H-27931-Q.hc | H-27931-Q.fc |
| Output discharge at Hebden Bridge (simulated) | 2Q-HEBDBR1 | H-27932-Q.hr | H-27932-Q.fr |
| Output discharge at Hebden Bridge (updated) | QX-HEBDBR1 | H-27932-Q.uhr | H-27932-Q.ufr |
| Observed level at Hebden Bridge | LU-HEBDBR1 | H-27932-h.m | Not applicable |
| Forecast level at Hebden Bridge | LU-HEBDBR1 | H-27932-h.hr | H-27932-h.fr |

Note: in the ICA the observed and modelled series for the historical period seem to occupy the same database location. These two are indeed the same value where an ARMA error correction procedure is used with a parameterisation including the auto-regressive component.
NFFS Workflow for forecast at Hebden Bridge
In NFFS, a series of tasks such as those required in creating the forecast at Hebden Bridge is configured in a Task File. Each of the tasks required is performed through either a standard NFFS utility (e.g. validation, interpolation, error modelling) or an external module. The external module is called through the general adapter.
The example below illustrates the workflow definition for the forecast at Hebden Bridge (all file names are examples, and the XML structure is illustrative but not definitive). The sequence of steps to deliver the forecast requirement at Hebden Bridge is the same as specified in the ICA.
Workflow files such as this are configured as a part of NFFS, not by any third party module suppliers. The first element uses the General Adapter to call the Kinematic Wave module. The configuration file is used by the General Adapter to direct how this module is used. An example of the configuration file for this element is given in the Appendix. Again, this file is set up during NFFS configuration, not by the third party module supplier.
<?xml version="1.0" encoding="UTF-8"?>
<workflow
xmlns="http://www.wldelft.nl/fews"
xmlns:target="http://www.wldelft.nl/fews"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.wldelft.nl/fews workflow.xsd" version="1.0">
<sequence>
<!-- Run KW for reach to Hebden Bridge (using General Adapter)-->
<element name="kw.hebdbr">
<caption>KW run to Hebden Bridge</caption>
<class>nl.wldelft.fews.util.GeneralAdapter</class>
<configuration>kw_hebdbr_historical.xml</configuration>
<auto>true</auto>
<output>rating.hebdbr.observed</output>
</element>
<!-- Transform observed levels to discharges at Hebden Bridge
(using ConversionServer)-->
<element name="rating.hebdbr.observed">
<caption>Apply rating curve at Hebden Bridge</caption>
<class>nl.wldelft.fews.util.ConversionServer</class>
<configuration>rating_hebdbr_observed.xml</configuration>
<auto>true</auto>
<input>kw.hebdbr</input>
<output>rating.hebdbr.observed</output>
</element>
<!-- Run ARMA_FILL through General Adapter-->
<element name="arma.hebdbr.observed">
<caption>ARMA correction at Hebden Bridge</caption>
<class>nl.wldelft.fews.util.GeneralAdapter</class>
<configuration>arma_hebdbr_historical.xml</configuration>
<auto>true</auto>
<input>rating.hebdbr.observed</input>
<output>arma.hebdbr.observed</output>
</element>
<!-- Transform predicted flows to levels at Hebden Bridge-->
<element name="rating.hebdbr.predicted">
<caption>Apply rating curve at Hebden Bridge</caption>
<class>nl.wldelft.fews.util.ConversionServer</class>
<configuration>rating_hebdbr_predicted.xml</configuration>
<auto>true</auto>
<input>arma.hebdbr.observed</input>
</element>
</sequence>
</workflow>
Appendix I:
ICA Model Component file for Hebden Bridge (file: CALD3-HEBDBR.RFFS)
CALD3-HEBDBR :MOD_COMP_ID
! Version: 1.13 Date: 9-OCT-1997 11:41
! Revision: 2 Date: 9-OCT-1997 14:12
! Origin : IH RFFS Model Network Set-up (Hand)
! File-type : Model Component Description
1 :MC_FILE_FORMAT
1 :COMP_STORE_IND
20961 :COMP_DATA_ID
HEBDEN BRIDGE :Channel flow- kinematic wave :COMP_NAME
1 :COMP_CLASS_IND
4 :COMP_POSITN_IND
6 :COMP_TYPE_IND
1 1 0 :COMP_PURPOSE
0 :DATA_NUM_PROFI
:PROFILE_REQ_ID_PROFI
0 :PROFILE_SUB_ID
0 :SCALE_TYPE_PROFI
0.00000 :SCALE_VALUE_PROFI
2 :DATA_NUM_INPUT
QX-WALSDN1 QX-TODMDN1 :FORECAST_REQ_ID_INPUT
0 0 :SCALE_TYPE_INPUT
0.00000 0.00000 :SCALE_VALUE_INPUT
0 :DATA_NUM_CONTR
:FORECAST_REQ_ID_CONTR
0 :DATA_NUM_SPECI
:FORECAST_REQ_ID_SPECI
2 :DATA_NUM_OUTPU
QX-HEBDBR1 LU-HEBDBR1 :FORECAST_REQ_ID_OUTPU
0 0 :SCALE_TYPE_OUTPUT
0.00000 0.00000 :SCALE_VALUE_OUTPUT
1 :DATA_NUM_DUMMY
2Q-HEBDBR1 :DUMMY_REQ_ID
0 :TIME_DELAY_IND
3 :COMP_DECOMP_TYPE
4 :COMP_DECOMP_NUM
FA_KW :MODEL_ALGOR_ID (1)
2 2 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_INPUT (1)
0 0 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_CONTR (1)
0 0 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_SPECI (1)
1 1 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_OUTPU (1)
3 0 4 0 3 0 0 0 0 0 0 0 :ALGOR_MSIZE_STATE (1)
3 0 1 0 3 0 0 0 0 0 0 0 :ALGOR_MSIZE_TRANS (1)
38 1 1 3 1 4 4 2 4 4 3 0 :ALGOR_MSIZE_PARAM (1)
29 9 2 1 1 3 4 4 0 0 0 0 :ALGOR_MSIZE_IPARA (1)
9.84000 2.90000 0.00000 1.35900 :ALGOR_VALS_PARAM (1)
0.00000 0.00000 0.00000 1.35900 :ALGOR_VALS_PARAM (1)
0.00000 0.00000 0.00000 0.73300 :ALGOR_VALS_PARAM (1)
0.00000 0.00000 0.00000 0.73300 :ALGOR_VALS_PARAM (1)
0.00000 0.00000 0.00000 1.00000 :ALGOR_VALS_PARAM (1)
0.00000 1.00000 0.00000 1.00000 :ALGOR_VALS_PARAM (1)
0.00000 1.00000 0.00000 1.00000 :ALGOR_VALS_PARAM (1)
0.00000 0.00000 0.00000 1.00000 :ALGOR_VALS_PARAM (1)
0.00000 0.00000 0.00000 1.00000 :ALGOR_VALS_PARAM (1)
0.00000 0.00000 :ALGOR_VALS_PARAM (1)
3 2 4 1 1 0 0 1 1 3 0 0 :ALGOR_VALS_IPARA (1)
0 1 1 0 1 2 2 0 1 3 1 0 :ALGOR_VALS_IPARA (1)
1 3 2 0 1 :ALGOR_VALS_IPARA (1)
QX-WALSDN1 QX-TODMDN1 :DATA_STREAM_INPUT (1)
:DATA_STREAM_CONTR (1)
:DATA_STREAM_SPECI (1)
2Q-HEBDBR1 :DATA_STREAM_OUTPU (1)
0 :TEST_SNOWFLAG_IND (1)
0 :SIMULAT_SKIP_IND (1)
0 :EXECUT_SPECI_IND (1)
6 :EMERG_WARM_UP (1)
RATING :MODEL_ALGOR_ID (2)
1 1 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_INPUT (2)
0 0 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_CONTR (2)
0 0 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_SPECI (2)
1 1 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_OUTPU (2)
0 0 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_STATE (2)
0 0 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_TRANS (2)
5 0 0 1 5 1 0 0 0 0 0 0 :ALGOR_MSIZE_PARAM (2)
2 2 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_IPARA (2)
13.95000 0.06600 1.47800 9999.00000 :ALGOR_VALS_PARAM (2)
9999.00000 :ALGOR_VALS_PARAM (2)
1 1 :ALGOR_VALS_IPARA (2)
LU-HEBDBR1 :DATA_STREAM_INPUT (2)
:DATA_STREAM_CONTR (2)
:DATA_STREAM_SPECI (2)
QX-HEBDBR1 :DATA_STREAM_OUTPU (2)
0 :TEST_SNOWFLAG_IND (2)
0 :SIMULAT_SKIP_IND (2)
0 :EXECUT_SPECI_IND (2)
0 :EMERG_WARM_UP (2)
ARMA_FILL :MODEL_ALGOR_ID (3)
1 1 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_INPUT (3)
0 0 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_CONTR (3)
0 0 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_SPECI (3)
1 1 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_OUTPU (3)
4 0 0 1 2 2 0 0 0 0 0 0 :ALGOR_MSIZE_STATE (3)
0 0 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_TRANS (3)
2 0 2 0 2 0 0 0 0 0 0 0 :ALGOR_MSIZE_PARAM (3)
3 3 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_IPARA (3)
-1.50830 0.61721 :ALGOR_VALS_PARAM (3)
2 0 1 :ALGOR_VALS_IPARA (3)
2Q-HEBDBR1 :DATA_STREAM_INPUT (3)
:DATA_STREAM_CONTR (3)
:DATA_STREAM_SPECI (3)
QX-HEBDBR1 :DATA_STREAM_OUTPU (3)
0 :TEST_SNOWFLAG_IND (3)
0 :SIMULAT_SKIP_IND (3)
0 :EXECUT_SPECI_IND (3)
4 :EMERG_WARM_UP (3)
RATING :MODEL_ALGOR_ID (4)
1 1 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_INPUT (4)
0 0 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_CONTR (4)
0 0 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_SPECI (4)
1 1 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_OUTPU (4)
0 0 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_STATE (4)
0 0 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_TRANS (4)
5 0 0 1 5 1 0 0 0 0 0 0 :ALGOR_MSIZE_PARAM (4)
2 2 0 0 0 0 0 0 0 0 0 0 :ALGOR_MSIZE_IPARA (4)
13.95000 0.06600 1.47800 9999.00000 :ALGOR_VALS_PARAM (4)
9999.00000 :ALGOR_VALS_PARAM (4)
2 1 :ALGOR_VALS_IPARA (4)
QX-HEBDBR1 :DATA_STREAM_INPUT (4)
:DATA_STREAM_CONTR (4)
:DATA_STREAM_SPECI (4)
LU-HEBDBR1 :DATA_STREAM_OUTPU (4)
0 :TEST_SNOWFLAG_IND (4)
0 :SIMULAT_SKIP_IND (4)
0 :EXECUT_SPECI_IND (4)
0 :EMERG_WARM_UP (4)
1 :CALIBRATION_TYPE
0 :CALIB_EVENT_NUM
1 1 0 0 0 :CALIB_EVENT_BEG (1)
1 19999 0 0 :CALIB_EVENT_END (1)
0 :CALIB_EVENT_WARMUP (1)
0 :CALIB_EVENT_CARRY (1)
1 :CALIB_SCALE_TYPE
1 :CALIB_ORDER_TYPE
1 :CALIB_WEIGHTING_TYPE
1 :UNCERTAINTY_TYPE
0 :UNCERT_PARAM_NUM
0.00000 :UNCERT_VAL_PARAM
UPPER CALDER RECALIBRATION OCT 1997 :CALIB_NOTES (1)
sd fitted in model :CALIB_NOTES (2)
:CALIB_NOTES (3)
:CALIB_NOTES (4)
:CALIB_NOTES (5)
recalib R.AUSTIN IH :COMP_GEN_COMMENT (1)
:COMP_GEN_COMMENT (2)
:COMP_GEN_COMMENT (3)
:COMP_GEN_COMMENT (4)
:COMP_GEN_COMMENT (5)
Appendix II:
Configuration file for running KW module at Hebden Bridge using General Adapter
<?xml version="1.0" encoding="UTF-8"?>
<dyntasks
xmlns="http://www.wldelft.nl/fews"
xmlns:target="http://www.wldelft.nl/fews"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.wldelft.nl/fews dyntasks.xsd" version="1.0">
<general exportpath="file: nffs/kw/input" importpath="file: nffs/kw/output"/>
<sequence>
<!-- export time series for discharge for Todmorden-->
<export seriesid="H-27931-Q.hc" file="file:todmdn.xml">
<xml format="fews"/>
</export>
<!-- export time series for discharge for Walsdn-->
<export seriesid="H-27392-Q.hc" file="file:walsdn.xml">
<xml format="fews"/>
</export>
<!-- export parameters for KW model to Hebden Bridge-->
<exportparams>
<moduleid>kw.hebdbr</moduleid>
<paramsetid>kw.hebdbr.par</paramsetid>
<filename>file:./kw_hebdbr_par.xml</filename>
</exportparams>
<!-- export module initial state -->
<writestate stateid="kw.hebdbr.state"/>
<!-- start kw adapter to transform XML data to KW format -->
<!-- note: all settings are examples of filenames. It is assumed the kw module adapter requires one argument: an XML file with the relevant configuration! -->
<task diagfile="file:./kwadapter_diag.xml" moduleid="kw.hebdbr">
<deletefile>file:kw_adapter.rtn</deletefile>
<exe>file:nffs/bin/kw_adapter.exe</exe>
<workdir>nffs/kw/workdir</workdir>
<arg>nffs/kw/config/kw_hebdbr_input.xml</arg>
<taskreturn>file:kw_adapter.rtn</taskreturn>
<taskfail>NONEXISTS</taskfail>
</task>
<!-- start kw module -->
<task diagfile="file:kwmodule_diag.xml" moduleid="kw.hebdbr">
<deletefile>file:kw_exe.rtn</deletefile>
<exe>file:nffs/bin/kw.exe</exe>
<workdir>nffs/kw/workdir</workdir>
<arg>nffs/kw/config/kw_hebdbr.rffs</arg>
<taskreturn>file:kw_exe.rtn</taskreturn>
<taskfail>NONEXISTS</taskfail>
</task>
<!-- start kw adapter to import data from native format to XML -->
<task diagfile="file:kwadapter_diag.xml" moduleid="kw.hebdbr">
<deletefile>file:kw_adapter.rtn</deletefile>
<exe>file:nffs/bin/kw_adapter.exe</exe>
<workdir>nffs/kw/workdir</workdir>
<arg>nffs/kw/config/kw_hebdbr_output.xml</arg>
<taskreturn>file:kw_adapter.rtn</taskreturn>
<taskfail>NONEXISTS</taskfail>
</task>
<!-- importing of timeseries -->
<import seriesid="H-27932-Q.hr" file="file:kw_hebdbr_q.xml">
<xml format="fews"/>
</import>
<!-- import resulting module state -->
<readstate stateid="kw.hebdbr.state" statename="state kw to hebdbr">
<stateloc type="file">
<readlocation>nffs/kw/state</readlocation>
<writelocation>nffs/kw/state</writelocation>
</stateloc>
</readstate>
</sequence>
</dyntasks>


Appendix III:
Example of an output XML file (shown in part)
Filename: KW_HEBDBR_Q.XML
<?xml version="1.0" encoding="UTF-8"?>
<timeseries xmlns="http://www.wldelft.nl/fews"
xmlns:target="http://www.wldelft.nl/fews"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.wldelft.nl/fews
timeser.xsd" version="1.0">
<header>
<type>instantaneous</type>
<timeseriesid>kw.output.1</timeseriesid>
<content>Discharge</content>
<timestep>3600</timestep>
<missval>-999</missval>
<longname>Simulated discharge at Hebden Bridge</longname>
<stationName>Hebden Bridge</stationName>
<units>m3/s</units>
<locationid>H-27932</locationid>
<sourceOrganisation>Northeast Region</sourceOrganisation>
<sourceSystem>KW module</sourceSystem>
<fileDescription>XML Data</fileDescription>
<creationDate>2003-05-05</creationDate>
<creationTime>14:42:14</creationTime>
</header>
<event date="2003-01-07" time="08:00:00" value="2.480000" flag="0"/>
<event date="2003-01-07" time="09:00:00" value="2.470000" flag="0"/>
<event date="2003-01-07" time="10:00:00" value="2.460000" flag="0"/>
<event date="2003-01-07" time="11:00:00" value="2.440000" flag="0"/>
<event date="2003-01-07" time="12:00:00" value="2.420000" flag="0"/>
...
...
<event date="2003-01-17" time="02:00:00" value="1.490000" flag="0"/>
<event date="2003-01-17" time="03:00:00" value="1.480000" flag="0"/>
<event date="2003-01-17" time="04:00:00" value="1.480000" flag="0"/>
<event date="2003-01-17" time="05:00:00" value="1.480000" flag="0"/>
<event date="2003-01-17" time="06:00:00" value="1.480000" flag="0"/>
<event date="2003-01-17" time="07:00:00" value="1.490000" flag="0"/>
</timeseries>

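For illustration, a module adapter might read the events back out of a file in this format as follows. This is a sketch only, using the namespace and element names of the example above.

```python
import xml.etree.ElementTree as ET

NS = {"pi": "http://www.wldelft.nl/fews"}

tree = ET.parse("KW_HEBDBR_Q.XML")
root = tree.getroot()
missval = float(root.find("pi:header/pi:missval", NS).text)

# Collect (date, time, value) tuples, skipping missing values.
events = [(e.get("date"), e.get("time"), float(e.get("value")))
          for e in root.findall("pi:event", NS)
          if float(e.get("value")) != missval]
print(len(events), "events; first:", events[0])
```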

Questions and answers

The interface describes an on-line or operational system. However, the XML templates and data types described apply to off-line modelling. Is it the intention for the DELFT-FEWS databases to hold data such as cross-sections, river channels etc. which are used to develop the model from scratch?

The interface is indeed intended to be operated as an on-line operational system. Its prime communication with external forecasting modules is through dynamic data such as time series, model states and diagnostics (see below). The published interface formats have been defined to cover a wider range of data types, but these need not be used in all cases - i.e. there is no requirement to allow communication of all data types. Indeed, there is currently no module that caters for the full range of formats. Although the EA is to decide to what extent data should be passed, the practical approach followed with e.g. ISIS and the CEH modules is to pass only time series, module states and diagnostics information.
In exchanging data between a module and NFFS, a prioritisation of data types can be identified.

| Priority | Data Type | Comment |
|---|---|---|
| 1 | Time series, module states, diagnostics | Sufficient to cover most modules used in flood forecasting systems, including e.g. Mike-11 and NAM |
| 1 (a) | Time series of grid data | Additionally required for modules with 2D I/O formats (e.g. inundation codes) |
| 2 | Parameters | Module parameters may be passed where the module allows calibration through the NFFS calibration facilities |
| 2 (a) | Longitudinal profile data | For hydrodynamic modules longitudinal data types may be passed - not a strict requirement |
| 3 | Static data (cross sections, branches, etc.) | Not required for operational forecasting. In exceptional cases data such as branches may be required (for display purposes), but this need not be passed through the adapter and can be configured as appropriate |

Will the HarmonIT developments be used in the module adapter? If so, will appropriate tools be provided?

In its current form the General Adapter does not use the HarmonIT (OpenMI) module interchange facilities. The DELFT-FEWS application will in the near future be extended with an OpenMI compatible adapter, but this is as yet not considered an NFFS requirement.

A major part of the document consists of XML templates for a wide range of data types. It is our understanding that relatively few of these are used in practice and that these are simply made available to ensure a general system.

This assumption is correct - see also G.1.1

On what platform is the system running (Windows, Linux or other)?

The DELFT-FEWS system has been developed in Java and is platform independent. However, in the configuration of NFFS, all runs of forecasting modules are executed on dedicated servers. Presently, these are Windows systems.

What is the module adapter (script, bat-file, win32dll, win exe, or other)? There are no technology specifications for the adapter.

There is no specific requirement for the module adapter. The only requirement is that it can communicate through the XML file formats. The use of standard methods for reading and writing XML files, based on the schemas provided, is highly recommended. This not only reduces implementation effort but also guarantees compatibility. The general adapter currently allows the external module to be run either as an executable or as a Java method. A minor extension will allow running of DLLs. Batch files are not recommended, as the General Adapter monitors return codes from the external module. These are not always correctly passed by batch files, and as such ungraceful failure of a module is not easily identified.

How do the module adapter and the general adapter communicate (win32 API, files, or other)?

The general adapter initiates the executable, Java method (or DLL) through system/Java calls. Arguments may be passed when required, though these must be static. The general adapter may also be used to set environment variables for the module where required.
The module normally runs in a dedicated directory (tree). A working directory can be configured to act as the root of the module run.

Figure 1 on page 4-1 indicates that all communication between the NFFS system and the system that is delivered by a model supplier goes through the Module adapter. Is this a correct assumption, and can everything under the blue line be considered a black box? If so, is it possible to describe the specific requirements for such a black box?

This is correct. The most specific requirement is that the module must not require any manual interaction, as this would preclude running the module in a distributed system.

Are there any specific requirements for the data formats used in communication between the module and the module adapter (e.g. binary or ASCII)?

This is solely the choice of the module adapter. There is a slight preference for ASCII formats, as this eases debugging. However, if this has implications for module performance (additional pre-processing requirements), then the binary format is advocated.

Could we ask for further clarification to distinguish between module states and module results, and under what circumstances these are stored in the NFFS databases?

The general adapter has full state management facilities. This means the module is provided with an initial state for the start of the run. The module should also provide a resulting state at the end of the run, or at an intermediate time. When required, this resulting state is administered and stored in the NFFS database for later use.
It is important to note that the actual module state file is handled blindly - i.e. it is not interpreted. NFFS takes the state and time-stamps it for storage in the database. NFFS will not change the content of the state, though renaming of files/directories on import/export is possible.
For cold start runs a default state is provided to the module.
Module results are generally passed back through the Published Interface XML format and stored in the database. Obviously, only the required locations need be passed.

What tools are provided within the system for derived data such as catchment mean rainfall, catchment evaporation, etc.?

DELFT-FEWS provides a full set of generic tools for these derived data, including all of the above. Generally DELFT-FEWS is configured to pre-process all data and provide the module directly with required inputs. The module need not do any additional processing.

What tools are provided for updating/data assimilation and how are these integrated into the module structure shown in figure 1?

DELFT-FEWS provides an internal ARMA error modelling tool. This is module independent and is run in sequence to the module when required.

What adapter mechanisms will be used for sequences of modules?

Sequences of modules and data handling functions are run in sequence by so-called workflows. This means that a workflow may be configured to first run a rainfall-runoff model (e.g. NAM, PDM) through the General Adapter, then run an error correction routine on the output, and subsequently run a hydrodynamic module (e.g. ISIS, MIKE11, SOBEK).

What are the requirements for migrating the module adapter from a stand-alone system to an on-line system?

There is no difference to the module whether it is on-line or off-line. This layer is managed by DELFT-FEWS.

Diagnostics: must all diagnostics from a model run be reported in one diagnostics-file or does the interface allow for multiple files (this could be relevant if the model adapter executes a sequence of sub-modules during a model-run)?

In its current form the diagnostics file to each external module run should be one file. The general adapter does, however, allow for executing a sequence of modules, with a diagnostics file for each.

Diagnostics: can any information be passed back - or is it limited to the status type messages with one line of text per status code?

The diagnostics passed back should be relatively limited - there is no strict requirement on the length of the message, but to-the-point messaging is advocated. The messages should provide the level of knowledge needed by the operational forecaster to monitor the system. Different levels of messages should be used, with information useful in debugging being labelled appropriately.
For more detailed messaging, native module formats can be used, but this is only for specialist analysis. It should be noted that when a module fails, a zipped dump file is made of all module files and I/O. This is set aside for later, detailed analysis.

Execution: since all model runs of a particular setup seem to run in the same fixed location (URI), it is the responsibility of the run-time adapter to clean up used files after a model run. Will DELFT-FEWS overwrite existing files during subsequent runs (e.g. the input TS XML files)? This would reduce the cleaning up to files which are not overwritten by the model tool itself.

It is good practice for a module to clean up redundant files after use. The general adapter does allow configuration of a purge activity either before the module run or after the module run (or both).

Dynamic files: who decides the location of the dynamic files which are created by the general adapter? Both the model adapter and the general adapter must know the location (one to write and the other to read them), but who determines the structure in which they are created?

In the general adapter configuration (XML) the location of the PI-XML files to pass to the module is given, as well as the location where PI-XML files are expected. The location of diagnostics files and work directories can also be configured. The structure is dictated by the Published Interface. All files passed to the module will be in the same directory and all files passed back are expected in the same directory (with the exception of the diagnostics file).

Dynamic files: Can/will DELFT-FEWS pass other dynamic module parameters? In particular, how will DELFT-FEWS pass information on the simulation period - start date/time and end date/time, and optionally the time-of-forecast date/time - for the model module to use?

To ensure simplicity, the length of the module run is dictated by the length of the time series passed. When a distinction is to be made between forecast and historic run, these will be passed as separate time series. The module adapter should identify run times from these.

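For illustration, the module adapter might derive the run period from the exported events along these lines. This is a sketch only; it assumes the event date/time attributes shown in Appendix III.

```python
from datetime import datetime

def run_period(events):
    """events: (date, time, value) tuples as read from the PI XML file."""
    stamps = [datetime.fromisoformat(f"{d}T{t}") for d, t, _ in events]
    return min(stamps), max(stamps)

start, end = run_period([("2003-01-07", "08:00:00", 2.48),
                         ("2003-01-07", "09:00:00", 2.47)])
print(start, "->", end)
```
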
Model states: It is assumed that DELFT-FEWS decides which of the available states (default, scenario) will be supplied to the module. How will the model adapter know which type of model state to return?

Indeed. The module state returned will be administered by DELFT-FEWS. Depending on the status of the run it will be administered as a state to be used in ensuing forecasts, or disregarded.

We presume that the adapter will run at the time of forecast, and therefore the general adapter will provide data on request for specified intervals. Correct?

Indeed. The General Adapter is configured to provide the module with the required data. Data may be provided at equidistant or non-equidistant intervals.

Is it correct that one module run can supply one set of state information as output, i.e. data for one timestamp (and time step) only?

Yes. A state represents the state of a module at a single point in time.

Please advise on what quality checks are made on observations used for updating, so that updating can be switched off if the updating observations are missing or of poor quality.

The following quality checks can be made within DELFT-FEWS (when configured by the user); a sketch of the limit checks follows the list:

  • Detection of outliers (data flagged unreliable)
  • Check of exceedance of hard limits (data flagged unreliable)
  • Check of exceedance of soft limits (data flagged doubtful)
  • Rate of change check. When a rate of change exceeds a pre-set value, data is flagged unreliable
  • Same readings check: When data remains within a pre-set range for a configured period, data is flagged unreliable
  • Temporary shift check. When a 'temporary shift' is detected, data is flagged unreliable
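
As an illustration of the hard and soft limit checks above, the sketch below flags values using the enumeration from the Quality flags appendix. The limit values here are arbitrary examples, not defaults.

```python
import numpy as np

HARD_MIN, HARD_MAX = 0.0, 500.0    # exceedance -> Missing/Unreliable (flag 6)
SOFT_MIN, SOFT_MAX = 0.0, 350.0    # exceedance -> Original/Doubtful  (flag 3)

def validate(values):
    flags = np.zeros(len(values), dtype=np.uint8)          # 0 = Original/Reliable
    flags[(values < SOFT_MIN) | (values > SOFT_MAX)] = 3   # soft limits
    flags[(values < HARD_MIN) | (values > HARD_MAX)] = 6   # hard limits override
    return flags

print(validate(np.array([12.0, 360.0, 900.0])))   # -> [0 3 6]
```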

Can paths for files and directories include several levels of subdirectories - or only one level relative to the general paths?

Several levels are possible.
