
Introduction

Delft-FEWS components are deployed on many different architectures and types of hardware. A considerable number of Delft-FEWS users run an IT infrastructure with virtual machines. The usual goal of virtualization is to centralize administrative tasks while improving scalability and overall hardware-resource utilization. When organizations are in the initial stage of (re-)defining their IT infrastructure, it is commonly recognized that containerization is the next logical step after virtualization in the evolution of IT infrastructure. Deltares will provide guidance. This guidance focuses on Kubernetes because, in our view, Kubernetes is the most commonly accepted and best supported cloud computing solution.

Kubernetes orchestrates (Docker) containers. A container is a "lightweight" abstraction layer on top of the host operating system. Multiple containers share the machine's operating system kernel and do not require the overhead of a full operating system for each application. Compared with VMs, containers bring reduced start-up time, more compute capacity, more flexibility, fault isolation, ease of management, simplified security and reduced costs. The operational benefits for Delft-FEWS systems are also in line with the Roadmap plans for automation of installations with less needless customization, better auto-scaling and more flexible testing. We prefer using Linux containers as much as possible. Whether Linux containers can be used may depend on the requirements of the forecast model. Any Windows-based forecast models can be run separately on Windows hardware, Windows VMs or in a Windows Docker container.

Delft-FEWS Software: A cloud-agnostic approach

Delft-FEWS system installation on regular hardware / VMs is currently done by unzipping the binaries, setting OS environment variables and starting a launcher service. Installation in Kubernetes is not much different: it is typically controlled by data-driven YAML / JSON configuration files that apply the needed actions.
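As an illustration, a minimal Kubernetes Deployment sketch for a Forecasting Shell Server container could look as follows. This is not an official Deltares manifest; the image name, environment variable, volume paths and replica count are assumptions that will differ per installation.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: fews-forecasting-shell                      # hypothetical name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: fews-forecasting-shell
      template:
        metadata:
          labels:
            app: fews-forecasting-shell
        spec:
          containers:
            - name: forecasting-shell
              image: registry.example.com/fews-fss:latest   # assumed image name
              env:
                - name: FEWS_REGION_HOME                    # hypothetical environment variable
                  value: /opt/fews/region_home
              volumeMounts:
                - name: region-home
                  mountPath: /opt/fews/region_home
          volumes:
            - name: region-home
              persistentVolumeClaim:
                claimName: fews-region-home               # assumed persistent volume claim

The same manifest structure would apply to the other Delft-FEWS components; only the image, environment variables and mounted volumes differ.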

Component | Cloud readiness status | Room for improvements
Database | Both db docker containers as well as managed instances are already possible. Managed instances require minor adjustments of the db scripts. | Support one set of database scripts for all db flavors, managed and unmanaged.
Master Controller | Yes | Enable service replication
Admin Interface | Yes |
Operator Client / SA | Use Azure Virtual Desktop or Database Proxy |
Config Manager | Use Azure Virtual Desktop, Database Proxy or API |
Forecasting Shell Server | Yes | Facilitate auto-scaling
WebServices | Yes |
DatabaseProxy | Yes |
OpenArchive | Yes |
Fileshares | Cloud-specific |

Delft-FEWS in the cloud: reference architectures

Explain and visualize reference architectures

  • Single MC
  • Dual MC (Multi MC?)


Hardware and software requirements

Indications of hardware specifications for installing the different VMs / containers.

The memory requirements in the cloud are similar to those on a VM or on-premise hardware. We recommend that all containers be Linux unless Windows containers are specifically required. Windows containers require hardware virtualization.
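As a sketch of how such requirements can be expressed per container, the pod specification below combines resource requests / limits with a node selector that keeps the workload on Linux nodes. The image and the memory / CPU values are illustrative assumptions, not sizing recommendations.

    apiVersion: v1
    kind: Pod
    metadata:
      name: fews-fss-sizing-example                 # hypothetical name
    spec:
      nodeSelector:
        kubernetes.io/os: linux                     # prefer Linux nodes; use "windows" only when a model requires it
      containers:
        - name: forecasting-shell
          image: registry.example.com/fews-fss:latest   # assumed image
          resources:
            requests:
              memory: "4Gi"                         # illustrative values; size to your forecast models
              cpu: "1"
            limits:
              memory: "8Gi"
              cpu: "2"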


Typical cloud-related choices (cloud FAQs)

Based on webinar content and known FAQs, a number of sub-topics can be specified, such as:

  • Where to place OC(s)
  • How to deal with (incoming, outgoing) data feeds
  • Costs

Scalability

  • Kubernetes / containers
  • DevOps (Infrastructure as Code, automatic deployments of configuration changes)
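A minimal sketch of auto-scaling Forecasting Shell Server pods with a HorizontalPodAutoscaler is shown below, targeting the hypothetical Deployment from the earlier sketch. In practice, scaling may need to be driven by the number of queued forecasting tasks rather than CPU utilization, but a CPU-based autoscaler illustrates the mechanism; all names and thresholds are assumptions.

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: fews-forecasting-shell-hpa            # hypothetical name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: fews-forecasting-shell              # assumed Deployment name
      minReplicas: 1
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70              # illustrative threshold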

Use of managed services

There is no actual requirement for the Delft-FEWS components to use managed services. Managed services can be used as long as performance is not affected. As an example, customers using SQL Server database replication between different geographical locations have reported database timeouts. In response, we have adjusted our database indexes and reconnection strategy to address these problems. Since we expect Delft-FEWS users to run many more Forecasting Shell Servers simultaneously in the future, we foresee more challenges in this area.
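As a small illustration of wiring a managed database into the cluster without changing component configuration, an ExternalName Service can map a stable in-cluster name to the managed endpoint. The names below are assumptions, not part of any Delft-FEWS configuration.

    apiVersion: v1
    kind: Service
    metadata:
      name: fews-database                         # hypothetical in-cluster name used by the FEWS components
    spec:
      type: ExternalName
      externalName: myfewsdb.database.windows.net # assumed managed database endpoint (e.g. Azure SQL)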

Security

Securing your cloud assets requires continuous investment in keeping your containers safe. An infamous example of misconfigured Kubernetes is Tesla's unsecured admin console for a Kubernetes cluster (Lessons from the Cryptojacking Attack at Tesla). This allowed malicious actors to obtain credentials for Tesla's wider AWS environment, which they used for cryptomining. Tesla highlighted that it was "only" a test instance, but this incident shows why it is important to secure both production and pre-production resources as far as possible.

  • Do not use insecure keys.
  • Do not open up network configuration inappropriately on test instances just because they are "only" test instances.

The bottom line is to ensure that any Kubernetes instances you manage are appropriately secured. Using cloud-managed Kubernetes platforms (AKS, EKS, GKE) generally makes this easier and gives you more confidence than running your own cluster, as the cloud provider takes care of many aspects of configuration. Regardless, be aware that running a Kubernetes cluster well and securely is a significant undertaking that requires serious, proactive and ongoing effort to keep things secure and maintained.
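As a small illustration of locking down network access, a default-deny NetworkPolicy in the namespace that hosts the Delft-FEWS components blocks all ingress traffic unless it is explicitly allowed by further policies. The namespace name is an assumption.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
      namespace: fews                             # assumed namespace for the Delft-FEWS components
    spec:
      podSelector: {}                             # applies to all pods in the namespace
      policyTypes:
        - Ingress                                 # no ingress rules are defined, so all ingress is denied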

Best practices & recommendations

