New XBeach repository and portal website

The new XBeach portal website has been released on http://oss.deltares.nl! It replaces the general parts of this WIKI space and the Google Groups website.
The Subversion (SVN) repository has been migrated as well. The new address is: https://svn.oss.deltares.nl/repos/xbeach. Read the instructions on how to register and create a new working copy, or how to relocate an existing one, to start using the new SVN server.

Summary

Below you will find instructions for compiling and running XBeach on the Deltares H4 cluster.

Compiling and running XBeach MPI on the h4 cluster

Step 1: Compiling the program

  • Set the Intel 11 32-bit compiler by executing the following command lines:
. /opt/intel/Compiler/11.0/081/bin/ifortvars.sh ia32

export PATH="/opt/mpich2/bin:${PATH}" 
  • Go to the directory where your source code is available
  • Update the source code with command:
 svn update 
  • Clean up your directory by typing:
 make clean 
  • In this directory, build a Makefile with the command:
 FC=gfortran44 ./configure 
  • Type ./configure --help for detailed options; e.g. to build an MPI executable you can use one of the following commands:
FC=gfortran44 ./configure --with-mpi
FC=gfortran44 MPIFC=/opt/mpich2-1.0.8-gcc44-x86_64/bin/mpif90 ./configure --with-mpi
  • You can also use the gfortran44 compiler to build a Makefile with netCDF output:
 FC=gfortran44 PKG_CONFIG_PATH=/opt/netcdf-4.1.1/gfortran/lib/pkgconfig  ./configure --with-netcdf 
  • Or both:
 FC=gfortran44 MPIFC=/opt/mpich2-1.0.8-gcc44-x86_64/bin/mpif90 PKG_CONFIG_PATH=/opt/netcdf-4.1.1/gfortran/lib/pkgconfig ./configure --with-netcdf --with-mpi 
  • Build your XBeach executable by running your Makefile:
 make 
  • If there were no errors, you now have your executable.
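The configure variants listed above can be composed with a small helper. This is a sketch only: compose_configure is a made-up function name, and the compiler and library paths are the H4-specific ones quoted above, which may have moved on your system.

```shell
#!/bin/sh
# Sketch: compose the configure command for the features you want
# (mpi and/or netcdf), then run the printed command by hand.
compose_configure() {
  cmd="FC=gfortran44"
  for feature in "$@"; do
    case $feature in
      mpi)    cmd="$cmd MPIFC=/opt/mpich2-1.0.8-gcc44-x86_64/bin/mpif90" ;;
      netcdf) cmd="$cmd PKG_CONFIG_PATH=/opt/netcdf-4.1.1/gfortran/lib/pkgconfig" ;;
    esac
  done
  cmd="$cmd ./configure"
  for feature in "$@"; do
    cmd="$cmd --with-$feature"
  done
  echo "$cmd"
}

# Print the command for an MPI + netCDF build:
compose_configure mpi netcdf
```

Running `compose_configure mpi netcdf` prints the combined command shown in the last bullet above; with no arguments it prints the plain `FC=gfortran44 ./configure` line.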

Step 2: Run XBeach MPI

  • Put your run directory with the XBeach input files somewhere accessible to the h4-cluster (e.g. the P-drive)
  • Now make your own shell script (<name>.sh) to run your XBeach simulation. Below you will find a handy example by Menno Genseberger. Do not copy this file blindly; read it carefully and make sure you set the following:
  1. the number of nodes (e.g. -pe distrib 3)
  2. the number of cores per node (e.g. two_cores_per_node=yes)
  3. the path where to find the executable (e.g. xbeach_bin_dir=/u/thiel/checkouts/trunk)
  4. the run statement (e.g. mpirun -np $NSLOTS $xbeach_bin_dir/xbeach >> output_xbeach_mpi 2>&1)
### ********************************************************************
### ********************************************************************
### **                                                                **
### **  Example shell script to run XBeach executable in parallel     **
### **  with MPICH2 via SGE on linux cluster.                         **
### **  c 2009 Deltares                                               **
### **  author: Menno Genseberger                                     **
### **  Changes: Leroy van Logchem 24 Nov 2010                        **
### **  -- Use tight integrated mpich2 PE. Requires secret file:      **
### **     ~/.mpd.conf                                                **
### **     secretword=mys3cret                                        **
### **                                                                **
### ********************************************************************
### ********************************************************************
### The next line specifies the shell "/bin/sh" to be used for the
### execution of this script.
#!/bin/sh
### The "-cwd" requests execution of this script from the current
### working directory; without this, the job would be started from the
### user's home directory.
#$ -cwd
### The name of this SGE job is explicitly set to another name;
### otherwise the name of the SGE script itself would be used. The name
### of the job also determines how the jobs output files will be called.
#$ -N XB_ZandMotor
### The next phrase asks for a "parallel environment", to be run with
### 3 slots (for instance 3 cores). The environment name ("distrib"
### here) is specific to the H3/H4 linux clusters (this name is for
### instance "mpi" on DAS-2/DAS-3 linux clusters).
#$ -pe distrib 3

### Start SGE.
. /opt/sge/InitSGE

### Code compiled with Intel 11.0 compiler.
export LD_LIBRARY_PATH=/opt/intel/Compiler/11.0/081/lib/ia32:$LD_LIBRARY_PATH

### Specific setting for H3/H4 linuxclusters, needed for MPICH2
### commands (mpdboot, mpirun, mpiexec, mpdallexit etc.).
export PATH="/opt/mpich2/bin:${PATH}"

xbeach_bin_dir=/u/thiel/checkouts/trunk
cp $xbeach_bin_dir/xbeach xbeach.usedexe

### Some general information available via SGE. Note that NHOSTS can be
### smaller than NSLOTS when a host provides more than one slot.
echo ----------------------------------------------------------------------
echo Parallel run of XBeach with MPICH2 on H4 linuxcluster.
echo SGE_O_WORKDIR: $SGE_O_WORKDIR
echo HOSTNAME     : $HOSTNAME
echo NHOSTS       : $NHOSTS
echo NQUEUES      : $NQUEUES
echo NSLOTS       : $NSLOTS
echo PE_HOSTFILE  : $PE_HOSTFILE

echo Contents of auto generated machinefile:
cat $TMPDIR/machines

echo ----------------------------------------------------------------------


### General, start XBeach in parallel by means of mpirun.
mpirun -np $NSLOTS $xbeach_bin_dir/xbeach >> output_xbeach_mpi 2>&1

### General for MPICH2, finish your MPICH2 communication network.
mpdallexit

REMARK: if you paste the text from this page into your <name>.sh file, make sure you convert the file to Linux format by:

 dos2unix <name>.sh 
  • Put your shell script (<name>.sh) in the simulation directory
  • In PuTTY, go to your simulation directory on the h4
  • Submit your job to the h4-cluster by:
 qsub <name>.sh 
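If dos2unix is not available on your machine, the same conversion can be done with tr, as sketched below. run_xbeach.sh is a made-up file name standing in for your own <name>.sh; the first line only fabricates a DOS-format file for the demonstration.

```shell
#!/bin/sh
# Fallback for dos2unix: strip the carriage returns with tr.
printf 'echo hello\r\n' > run_xbeach.sh      # simulate a DOS-format script
tr -d '\r' < run_xbeach.sh > run_xbeach.tmp  # delete every CR character
mv run_xbeach.tmp run_xbeach.sh
```

After this, the file contains plain LF line endings and can be submitted with qsub as described above.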

Troubleshooting

Of course, life is not always straightforward on a Linux cluster. Don't feel stupid when you type "qstat" and your name is not in the list. Stay calm and try not to cry; instead, go to your simulation directory and check the .o file. It will tell you the error. If the error is about mpd, please contact Jamie (8176) or Jaap (8363); if not, contact ICT.
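The check described above can be sketched as follows. The file XB_ZandMotor.o12345 and its contents are invented for the demonstration; in practice you would inspect the .o file that SGE wrote for your own job.

```shell
#!/bin/sh
# Sketch: look at the newest SGE .o output file and decide whom to contact.
printf 'mpdboot: cannot connect to mpd\n' > XB_ZandMotor.o12345  # fake example
latest=$(ls -t XB_ZandMotor.o* | head -n 1)   # newest SGE output file
if grep -qi mpd "$latest"; then
  echo "mpd problem: contact Jamie (8176) or Jaap (8363)"
else
  echo "other problem: contact ICT"
fi
```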

Note for Jaap and Jamie: type "mpd &" and follow the instructions:

cd $HOME
touch .mpd.conf
chmod 600 .mpd.conf

Now edit the .mpd.conf file and add the line:

MPD_SECRETWORD=secret