• Is there an interface for connecting with decision-support/analysis tools?
  • Is there support for optimization, sensitivity, and uncertainty analysis?
  • Would such tools reside inside or outside the framework?
  • Can the framework as a whole submit to inversion of control and be driven by an outside entity, as model components are required to do?
  • How would one link an OpenMI composition with existing optimization frameworks and libraries?
  • Are there facilities for numerical gradient evaluations across a model composition?
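A minimal sketch of what such a facility might look like, treating the composition as a black-box function of its input parameters and using central differences. The names `central_gradient` and `run_composition` are illustrative assumptions, not part of the OpenMI standard.

```python
# Hypothetical sketch: central-difference gradient of a black-box
# model composition. Not an OpenMI API; run_composition stands in
# for a full linked-model run returning a scalar result.

def central_gradient(f, x, h=1e-6):
    """Estimate df/dx_i for a scalar-valued composition f at point x."""
    grad = []
    for i in range(len(x)):
        up = list(x); up[i] += h
        dn = list(x); dn[i] -= h
        grad.append((f(up) - f(dn)) / (2.0 * h))
    return grad

# Toy "composition" standing in for a linked model run.
def run_composition(params):
    a, b = params
    return a * a + 3.0 * b

print(central_gradient(run_composition, [2.0, 1.0]))  # ≈ [4.0, 3.0]
```

In practice each gradient entry costs two full composition runs, which is one reason the question of parallel execution (below) matters for sensitivity analysis.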
  • Are there facilities for evaluating the propagation of numerical errors across a model composition?
  • Are there facilities for parallel/distributed/grid/cloud computing?
  • Is the OpenMI framework inherently single or multi-threaded?
  • How are multiple threads of execution managed within the OpenMI framework?
  • Are OpenMI framework assemblies designed to be reentrant?
  • When a model composition involves feedback how are deadlock and race conditions avoided/managed?
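One common approach, sketched below under stated assumptions: detect cycles in the composition's link graph before execution, so a feedback loop can be broken explicitly (for example with a time-lagged link) rather than letting two pull-driven components deadlock waiting on each other. The graph representation is illustrative, not OpenMI's.

```python
# Hypothetical sketch: DFS cycle detection over a composition's link
# graph. links maps each component to the components it pulls from.

def find_cycle(links):
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in links}
    def visit(n, path):
        color[n] = GREY
        for m in links.get(n, []):
            if color.get(m, WHITE) == GREY:
                return path + [n, m]          # back edge: feedback loop
            if color.get(m, WHITE) == WHITE:
                found = visit(m, path + [n])
                if found:
                    return found
        color[n] = BLACK
        return None
    for n in list(links):
        if color[n] == WHITE:
            found = visit(n, [])
            if found:
                return found
    return None

# A river model and a groundwater model that pull from each other:
print(find_cycle({"river": ["gw"], "gw": ["river"]}))  # a loop is reported
```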
  • Why is there no interoperation between .NET and Java components? Do bridges for interoperation between Java and .NET exist?
  • What does the roadmap for the OpenMI standard look like over the next three to ten years?
  • Aside from comprehensive unit testing, what software QA/QC procedures are in place for the OpenMI implementation?
  • Is the system responsible for mapping disparate models with varying levels of scale and resolution, so they can communicate? What degree of mapping is the system responsible for?
  • How do you register parameters, so that all of the models know the same metadata from one model to another, without hard-wiring the meanings a priori?
  • Is there a wizard that allows the user to automatically register model input and output parameters?
  • Does the system require adequate metadata to describe all input and output data? For example, does the system continually track max, min, name, definition, units, other parameters that it is a function of, etc.?
  • Do you have an architecture that tracks metadata throughout the analysis?
  • Does the system maintain a Master Key list of parameters and associated metadata?
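The registration and metadata questions above could be served by a shared "master key" registry that models consult instead of hard-wiring parameter meanings. A minimal sketch, with illustrative names and fields (not OpenMI's):

```python
# Hypothetical master-key parameter registry: each parameter is
# registered once with its metadata, and every model looks it up
# by key rather than assuming units or definitions a priori.

REGISTRY = {}

def register(key, name, unit, definition, vmin=None, vmax=None):
    if key in REGISTRY:
        raise ValueError(f"parameter {key!r} already registered")
    REGISTRY[key] = {"name": name, "unit": unit,
                     "definition": definition, "min": vmin, "max": vmax}

def lookup(key):
    return REGISTRY[key]

register("wq.no3", "nitrate concentration", "mg/L",
         "dissolved NO3-N in the water column", vmin=0.0)
print(lookup("wq.no3")["unit"])  # mg/L
```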
  • For units consistency, does the system check to ensure that ppm, mg/L, mg/kg, mg/kg DW, and mg/kg WW can be differentiated? For example, for mg/L, L of what volume (total volume, solvent, water, etc.)?
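The distinction asked about here cannot be made on dimensional exponents alone: mg/kg DW and mg/kg WW are dimensionally identical. One sketch of a fix, assuming a hypothetical representation in which a unit carries both its dimension and its reference basis:

```python
# Hypothetical sketch: a unit as a (symbol, dimension, basis) triple,
# so mg/kg dry-weight and mg/kg wet-weight stay distinct even though
# their dimensions collide. Illustrative only, not an OpenMI type.

from collections import namedtuple

Unit = namedtuple("Unit", ["symbol", "dimension", "basis"])

MG_PER_L     = Unit("mg/L",  "mass/volume", "total water volume")
MG_PER_KG_DW = Unit("mg/kg", "mass/mass",   "dry weight")
MG_PER_KG_WW = Unit("mg/kg", "mass/mass",   "wet weight")

def compatible(u1, u2):
    """Units are exchangeable only if dimension AND basis agree."""
    return u1.dimension == u2.dimension and u1.basis == u2.basis

print(compatible(MG_PER_KG_DW, MG_PER_KG_WW))  # False: same symbol, different basis
```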
  • Is there a central data repository for QA/QC? How can all results be reproduced?
  • Is the system responsible for automatically handling models with different spatial boundary conditions (e.g., a curvilinear finite element mesh mapping to a planar finite difference grid, where the spatial extents do not match)?
  • Does the system handle and track errors and warnings?
  • Does the system check for inconsistencies (e.g., no negative concentrations, time monotonically increasing)?
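The kinds of checks asked about here can be sketched as a post-run validation pass. A minimal, purely illustrative example:

```python
# Hypothetical sanity checks on exchanged time series: non-negative
# concentrations and strictly increasing timestamps. Illustrative
# only; not an OpenMI facility.

def check_series(times, concentrations):
    errors = []
    for i, c in enumerate(concentrations):
        if c < 0:
            errors.append(f"negative concentration {c} at index {i}")
    for i in range(1, len(times)):
        if times[i] <= times[i - 1]:
            errors.append(f"time not increasing at index {i}")
    return errors

print(check_series([0, 1, 1], [0.5, -0.1, 0.2]))  # two problems reported
```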
  • Is there a mechanism to register models so that the system knows the difference between a groundwater model and a surface water model?
  • Does the system allow the user to register their models without having to write code?
  • How does the system handle disparate databases to ensure the correct data are consumed by the correct model?
  • Does the system allow multiple databases to populate the same dataset, even when some parameters overlap but their values differ?
  • Is the architectural system for transferring data web-based or on the host machine?
  • What hardware does the architectural software operate on (PC, mainframe)?
  • Is all software nonproprietary?
  • Does the system support dynamic (i.e., cyclic) feedback? If it does and the domain of one model increases at the expense of the domain of another model, and no data have been collected to cover the expanded control volume, how does the system populate the input files?
  • How does the system handle legacy user interfaces?
  • How does the system manage modeling schemes to ensure that inappropriate connections are not made?
  • Are there any issues with firewalls?
  • What has been the experience to date within the community with regard to the relationship between the OpenMI standard/implementation(s) and the proprietary nature of models and utility software? Is the proprietary nature of different models/utilities limiting the community's ability to conduct research and perform assessments? Is support software (e.g., UIs, spatial/temporal grid translators, statistical processors, data processors, etc.) being replicated, purchased, or shared? Is there any discussion of a more open source environment taking place? How would a community of practice include proprietary software?
  • What, if any, specific science issues could be addressed via collaboration between the US and EU (e.g., standards development, model/approach comparison, etc.)?