
Project

Summary

To implement the current policy, Rijkswaterstaat replenishes an average of 12 million m³ of sand annually. Exactly how much sand is needed, and where and when it can best be deposited, is based, among other things, on the annual assessment of the coast and on knowledge and experience of the sandy coastal system.

The long-term knowledge program Beheer en Onderhoud van de Nederlandse kust (B&O kust, Management and Maintenance of the Dutch coast) aims to answer research questions about nourishments and to expand and disseminate knowledge about the coastal system. Deltares and Rijkswaterstaat work together in this program.

The approach is a cyclical process: Rijkswaterstaat bases choices about nourishments and coastal management on the existing knowledge of the coastal system. Where knowledge gaps exist, hypotheses are formulated. Testing these hypotheses, and answering the research questions that arise from them, is tackled in sub-projects. New insights resulting from the research can lead to adjustments in the implementation of nourishments and coastal management.

In addition to answering research questions and testing hypotheses, the B&O kust program also has the objective of disseminating knowledge and issuing regional advice (aimed at coastal-management questions from the Rijkswaterstaat regional departments).

B&O kust projects

The B&O kust program consists of four projects, each centered on different aspects of the morphological and ecological system. Within each project, data analyses and model simulations are carried out to answer research questions and to test hypotheses.

  1. Condition of the coast
  2. Exchange with tidal basins & morphodynamics of the island heads
  3. Functioning of the coastal foundation & redistribution of nourishment sand
  4. Specialist advice

Sources

https://www.helpdeskwater.nl/onderwerpen/waterveiligheid/programma-projecten/beheer-onderhoud/kennisprogramma-kust/


FAIR - before

State of F

At first it seemed that most of the data sets in this project were already AIR or even FAIR, and when searching for them, some indeed turned out to be FAIR already. The AIR data sets were findable because the name of the data set was known; people unfamiliar with the data might well not be able to find them.

State of A

For Deltarians, most data sets were accessible within one or a few clicks. For the data sets that were put on the Deltares Data Portal (DDP) via a harvester, the "OpenDAP service" link never worked; only the Direct Download link did.

State of I

The data that is harvested from the THREDDS server is only available via direct download, so this data must be downloaded and stored locally. The manually added data is available via different access URLs, e.g. via OpenDAP or HTTP Server.
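The difference between the two access routes follows the standard THREDDS URL layout: the same file is exposed under /thredds/dodsC/ for OPeNDAP (remote subsetting, no local copy needed) and under /thredds/fileServer/ for direct HTTP download. A minimal sketch of building both URLs; the host and dataset path below are placeholders, not actual Deltares endpoints:

```python
# Sketch: building the two conventional THREDDS access URLs for one dataset.
# The host and dataset path are placeholders, not real Deltares endpoints.

def thredds_urls(host: str, dataset_path: str) -> dict:
    """Return the conventional OPeNDAP and direct-download URLs for a
    dataset served by a THREDDS Data Server."""
    base = host.rstrip("/")
    path = dataset_path.lstrip("/")
    return {
        # OPeNDAP: the data can be subsetted remotely, no local copy needed
        "opendap": f"{base}/thredds/dodsC/{path}",
        # HTTPServer: the whole file must be downloaded and stored locally
        "download": f"{base}/thredds/fileServer/{path}",
    }

urls = thredds_urls("https://example-thredds.deltares.nl",
                    "coast/transect/transect.nc")
print(urls["opendap"])
print(urls["download"])
```

A tool such as xarray or netCDF4-python can open the OPeNDAP URL directly, which is what makes that route the more interoperable of the two.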

State of R

The data sets with links to a wiki are described following http://5stardata.info/, an initiative with a deployment scheme for open data, somewhat like FAIR. The wiki describes the data sets and their releases over time. What is missing is a description of how the data set is actually processed: a link to information on data conversion leads to a page that says "work in progress".
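The 5-star scheme referenced above rates open data on cumulative levels. A small lookup sketch, with the level descriptions paraphrased from 5stardata.info:

```python
# The cumulative 5-star open data levels, paraphrased from 5stardata.info.
FIVE_STAR_LEVELS = {
    1: "available on the web under an open licence (any format)",
    2: "available as machine-readable structured data (e.g. a spreadsheet)",
    3: "as 2, but in a non-proprietary open format (e.g. CSV)",
    4: "as 3, plus URIs to identify things, so the data can be linked to",
    5: "as 4, plus links to other data to provide context",
}

def stars_description(stars: int) -> str:
    """Return the description of a given 5-star open data level."""
    return FIVE_STAR_LEVELS[stars]

print(stars_description(3))
```

Unlike FAIR, which is a set of principles, the 5-star scheme is a strict ladder: each level includes all the requirements of the levels below it.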


FAIR - after

State of F

To do: change, in the harvester, how the title, abstract, keywords and other important metadata appear on the front pages of the data sets in the DDP.

State of A

Changing the links that pointed to a wrong URL to ones that work made the data accessible to all Deltarians. To do: fix the linking in the harvester itself.

State of I

To do: investigate whether other ways of accessing the data sets are necessary.

State of R

To do: for this to-do, the data owners are needed, and a more in-depth investigation must be done to find out how the data is processed.


Lessons learned

There are different types of infrastructure for making data FAIR. Currently, for internal use at Deltares, the Deltares Data Portal (DDP) is how we make data sets findable. Data sets can be made available manually or from the THREDDS server via a harvester. The harvester can be improved with respect to the metadata it puts on a data set's DDP front page. Data sets that are added manually are usually findable, and definitely accessible and interoperable, but an exact description of how the data was gathered or processed is often missing, so they score less well on reusability.


Conclusions for guidelines:

  • If you make your data available in the DDP manually, also describe how you obtained/processed the data.
  • If you use a harvester, check whether it displays the right information on its DDP page:
    • Title, readable for people who are not familiar with the abbreviations
    • An informative, elaborated abstract
    • Geographical boundary
    • Time extent
    • Keywords
    • Links to the data (services)
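The checklist above can be expressed as a simple completeness check on a data set's metadata record. The field names in this sketch are illustrative; they are not the DDP's actual schema:

```python
# Sketch of a front-page metadata completeness check for a harvested data set.
# The field names are illustrative, not the DDP's actual schema.

REQUIRED_FIELDS = [
    "title",          # readable for people unfamiliar with the abbreviations
    "abstract",       # an informative, elaborated abstract
    "bbox",           # geographical boundary
    "time_extent",    # start and end of the covered period
    "keywords",
    "service_links",  # links to the data services (e.g. OPeNDAP, download)
]

def missing_fields(record: dict) -> list:
    """Return the required metadata fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

record = {"title": "Coastal transects (hypothetical)", "keywords": ["coast"]}
print(missing_fields(record))
# → ['abstract', 'bbox', 'time_extent', 'service_links']
```

A check like this could run after each harvest, flagging data sets whose DDP front page would come out incomplete before they are published.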



Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.