...

Delft-FEWS components are deployed on many different architectures and hardware platforms. A considerable number of Delft-FEWS users run an IT infrastructure with virtual machines. The usual goal of virtualization is to centralize administrative tasks while improving scalability and overall hardware-resource utilization. When organizations are in the initial stage of (re-)defining their IT infrastructure, it is commonly recognized that after virtualization, containerization is the next logical step in the evolution of IT infrastructure. It will remain possible to install Delft-FEWS on on-premise hardware or in virtual machines. A Delft-FEWS system installation on regular hardware / VMs is currently done by setting up a central database, installing RPMs / MSIs or unzipping the binaries, setting OS environment variables and starting a launcher service. For an installation in Kubernetes in the cloud this is not going to be much different; it is usually controlled through data-driven YAML / JSON configuration files that apply the needed actions.
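As a minimal sketch of what such a data-driven installation could look like, the Deployment below describes a containerized Forecasting Shell Server. The image name, labels, environment variable and paths are placeholders for illustration, not actual Delft-FEWS artifacts or documented settings:

```yaml
# Hypothetical Kubernetes Deployment for a Forecasting Shell Server (FSS).
# Image name, labels and environment variables are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: forecasting-shell-server
  labels:
    app: fews-fss
spec:
  replicas: 2                      # number of FSS instances to run
  selector:
    matchLabels:
      app: fews-fss
  template:
    metadata:
      labels:
        app: fews-fss
    spec:
      containers:
        - name: fss
          image: registry.example.org/delft-fews/fss:latest   # placeholder image
          env:
            - name: FEWS_REGION_HOME          # illustrative, not a documented variable
              value: /opt/fews/region_home
          resources:
            requests:
              cpu: "1"
              memory: 2Gi
```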

Component | Cloud readiness status | Room for improvement
Database | Both database Docker containers and managed instances are already possible. Managed instances require minor adjustments of the database scripts. | Support one set of database scripts for all database flavors, managed and unmanaged.
Master Controller | Yes | Enable service replication
Admin Interface | Yes |
Operator Client / SA | Use Database Proxy (Azure: Azure Virtual Desktop) |
ConfigManager | See Operator Client; in addition, the AdminInterface API can be used. |
Forecasting Shell Server | Yes | Facilitate auto-scaling
WebServices | Yes |
DatabaseProxy | Yes |
OpenArchive | Yes |

...


Delft-FEWS in the cloud

Deltares will improve the Delft-FEWS components for use in containers and provide guidance on the installation. We intend to implement autoscaling directly using the Kubernetes API because, in our view, Kubernetes is the most commonly accepted and best-supported cloud computing solution.

Delft-FEWS Hardware and software requirements

...

  1. for file-based imports, use the Network File System (NFS) or Windows shares (see the sketch after this list)
  2. for server imports serving public data, FTP / HTTP can be used (encryption would add unnecessary overhead); other services that require passwords should use a secure connection (HTTPS)
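For the file-based import route, a container can mount such a share directly; the sketch below mounts a hypothetical NFS export into a pod (server, export path and mount point are placeholders):

```yaml
# Hypothetical pod mounting an NFS share for file-based imports.
apiVersion: v1
kind: Pod
metadata:
  name: import-example
spec:
  containers:
    - name: fss
      image: registry.example.org/delft-fews/fss:latest   # placeholder image
      volumeMounts:
        - name: import-share
          mountPath: /data/import          # illustrative import folder
  volumes:
    - name: import-share
      nfs:
        server: nfs.example.local          # placeholder NFS server
        path: /exports/fews-import         # placeholder export path
```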

Autoscaling with Kubernetes

Most container solutions have Kubernetes under the hood. For autoscaling we will use the Kubernetes API because, in our view, Kubernetes is the most commonly accepted and best-supported cloud computing solution. Kubernetes uses Docker containers. A container is a "lightweight" abstraction layer on top of the host operating system. Multiple containers share the machine's operating system kernel and do not require the overhead of bundling a full operating system with each application. Compared with VMs, containers bring reduced start-up time, more compute capacity, more flexibility, fault isolation, ease of management, simplified security and reduced costs. The operational benefits for Delft-FEWS systems are also in line with the Roadmap plans for automating installations with less needless customization, better auto-scaling and more flexible testing. We prefer to use Linux containers as much as possible; whether they can be used may depend on the requirements of the forecast model. Any Windows-based forecast models can be run separately on Windows hardware, on Windows VMs or in a Windows Docker container.
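One common way to express autoscaling through the Kubernetes API is a HorizontalPodAutoscaler resource. The sketch below scales the hypothetical Forecasting Shell Server Deployment from the earlier example on CPU utilization; the actual metric used for Delft-FEWS (for example, task queue length) may well differ, so treat the names and numbers as placeholders:

```yaml
# Hypothetical HorizontalPodAutoscaler for the Forecasting Shell Server Deployment.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: forecasting-shell-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: forecasting-shell-server
  minReplicas: 1
  maxReplicas: 8                     # placeholder upper bound
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70     # placeholder threshold
```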

...

For Delft-FEWS in the cloud, the same security principles apply as on premise: see Security - Shared responsibility model for Delft-FEWS system installations. Securing your cloud assets requires continuous investment in keeping your containers safe. An infamous example of a misconfigured Kubernetes cloud environment was Tesla's unsecured admin console for a Kubernetes cluster. This allowed malicious actors to obtain credentials for Tesla's wider AWS environment, which they used for cryptomining. Tesla highlighted that it was "only" a test instance, but the incident shows why it is important to secure both production and pre-production resources as far as possible.

...

The bottom line is to ensure / check that any Kubernetes cloud instances you manage are appropriately secured. Using cloud-managed Kubernetes platforms (AKS, EKS, GKE) will generally make this easier and give you more confidence than running your own cluster, as the cloud provider takes care of many aspects of configuration. Regardless, be aware that running a Kubernetes cluster in the cloud well and securely is a big undertaking that requires serious, proactive and ongoing effort to keep things secure and maintained.
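As one concrete example of such hardening, Kubernetes RBAC can limit what an account may do in a namespace. The namespace, role and user below are hypothetical placeholders, not a prescribed Delft-FEWS configuration:

```yaml
# Hypothetical least-privilege Role and RoleBinding in a 'fews' namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: fews
  name: fews-read-only
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: fews
  name: fews-read-only-binding
subjects:
  - kind: User
    name: forecaster@example.org     # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: fews-read-only
  apiGroup: rbac.authorization.k8s.io
```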

...