Model for 1D/2D wave propagation, sediment transport and morphological changes

...

  • Go to the directory where your source code is available
  • Update the source code with the command:
Code Block
 svn update 
  • Clean up your directory by typing:
Code Block
 make distclean 
  • Then build a Makefile with the configure script; in its most basic form (the variants below add specific compilers and options) this is presumably just:
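Code Block
 ./configure 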

...

  • Type ./configure --help for detailed options; e.g. to build an MPI executable you can use one of the following commands:
Code Block

FC=gfortran44 ./configure --with-mpi
FC=gfortran44 MPIFC=/opt/mpich2-1.0.8-gcc44-x86_64/bin/mpif90 ./configure --with-mpi 
  • You can also use the gfortran44 compiler to build a Makefile with netcdf output:
Code Block
 FC=gfortran44 PKG_CONFIG_PATH=/opt/netcdf-4.1.1/gfortran/lib/pkgconfig ./configure --with-netcdf 
  • Or both netcdf and MPI:
Code Block
FC=gfortran44 MPIFC=/opt/mpich2-1.0.8-gcc44-x86_64/bin/mpif90 PKG_CONFIG_PATH=/opt/netcdf-4.1.1/gfortran/lib/pkgconfig ./configure --with-netcdf --with-mpi 
  • Build your XBeach executable by running your Makefile, typing:
Code Block
 make 
  • If no errors occurred, you now have your executable.
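To quickly check that the build actually produced an executable, you can inspect it (a minimal sanity check):
Code Block
 ls -l ./xbeach 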

Step 2: Run XBeach MPI

  • Put your run directory with the XBeach input files somewhere accessible to the h4-cluster (e.g. the P-drive)
  • Now make your own shell script (<name>.sh) to run your XBeach simulation. Below you find the handy example by Menno Genseberger. Do not copy this file blindly; read it carefully and make sure you set the following:
  1. the number of nodes (i.e. -pe distrib 3)
  2. the number of cores per node (i.e. two_cores_per_node=yes)
  3. the path (i.e. xbeach_bin_dir=/u/thiel/checkouts/trunk) where to find the executable
  4. the run statement (i.e. mpirun -np $NSLOTS $xbeach_bin_dir/xbeach >> output_xbeach_mpi 2>&1)
Code Block
### ********************************************************************
### ********************************************************************
### **                                                                **
### **  Example shell script to run XBeach executable in parallel     **
### **  with MPICH2 via SGE on linux cluster.                         **
### **  c 2009 Deltares                                               **
### **  author: Menno Genseberger                                     **
### **  Changes: Leroy van Logchem 24 Nov 2010                        **
### **  -- Use tight integrated mpich2 PE. Requires secret file:      **
### **     ~/.mpd.conf                                                **
### **     secretword=mys3cret                                        **
### **                                                                **
### ********************************************************************
### ********************************************************************
### The next line specifies the shell "/bin/sh" to be used for the execution
### of this script.
#!/bin/sh
### The "-cwd" requests execution of this script from the current
### working directory; without this, the job would be started from the
### user's home directory.
#$ -cwd
### The name of this SGE job is explicitly set to another name;
### otherwise the name of the SGE script itself would be used. The name
### of the job also determines how the job's output files will be named. 
#$ -N XB_ZandMotor
### The next phrase asks for a "parallel environment" called "mpich2",
### to be run with 3 slots (for instance 3 cores).
### "mpich2" is a specific name for H3/H4 linux clusters (this name is
### for instance "mpi" on DAS-2/DAS-3 linux clusters).
#$ -pe distrib 3

### Start SGE.
. /opt/sge/InitSGE

### Code compiled with Intel 11.0 compiler.
export LD_LIBRARY_PATH=/opt/intel/Compiler/11.0/081/lib/ia32:$LD_LIBRARY_PATH

### Specific setting for H3/H4 linuxclusters, needed for MPICH2
### commands (mpdboot, mpirun, mpiexec, mpdallexit etc.).
export PATH="/opt/mpich2/bin:${PATH}"

xbeach_bin_dir=/u/thiel/checkouts/trunk
cp $xbeach_bin_dir/xbeach xbeach.usedexe

### Some general information available via SGE. Note that NHOSTS can differ from NSLOTS.
echo ----------------------------------------------------------------------
echo Parallel run of XBeach with MPICH2 on H4 linuxcluster.
echo SGE_O_WORKDIR: $SGE_O_WORKDIR
echo HOSTNAME     : $HOSTNAME
echo NHOSTS       : $NHOSTS
echo NQUEUES      : $NQUEUES
echo NSLOTS       : $NSLOTS
echo PE_HOSTFILE  : $PE_HOSTFILE

echo Contents of auto generated machinefile:
cat $TMPDIR/machines

echo ----------------------------------------------------------------------


### General, start XBeach in parallel by means of mpirun.
mpirun -np $NSLOTS $xbeach_bin_dir/xbeach >> output_xbeach_mpi 2>&1

### General for MPICH2, finish your MPICH2 communication network.
mpdallexit
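Once this script is ready, submit it to SGE from your simulation directory and keep an eye on the queue with qstat (a minimal sketch, assuming you saved the script as xbeach_mpi.sh):
Code Block
 qsub xbeach_mpi.sh 
 qstat 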

...

Of course life is not always straightforward on a Linux cluster. Don't feel stupid when you type "qstat" and your name is not part of the list. Stay calm and try not to cry, but instead go to your simulation directory and check the .o file. It will tell you the error. If the error is about mpd, please contact Jamie (8176) or Jaap (8363); if not, contact ICT.

Note for Jaap and Jamie: type "mpd &" and follow the instructions:

Code Block
 
touch .mpd.conf
chmod 600 .mpd.conf

Now edit the .mpd.conf file and add the text:

Code Block

MPD_SECRETWORD=secret
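To check that the MPD ring is actually up afterwards, you can start the daemon and query it (standard MPICH2 MPD commands; mpdallexit shuts the ring down again):
Code Block
 mpd &
 mpdtrace
 mpdallexit 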

...

Debugging

Debugging a parallel version of XBeach using multiple processes is possible, but not as convenient as debugging in, for example, Visual Studio. This section describes a way to truly debug a parallel program like XBeach.

Compile the program without optimization

In order to prevent the debugger from seemingly jumping randomly through the code, optimizations should be disabled. This can be done by altering the above-mentioned call to the configure script as follows:

Code Block
 FC=gfortran44 FCFLAGS="-g -O0" ./configure --with-mpi 

The -g option makes sure debugging symbols are included in the executable. The -O0 option sets optimization to a minimum; by default -O2 is used. Now create a binary:

Code Block

make distclean
make
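Before starting a lengthy debug session, you can verify that the binary actually contains debugging symbols (a quick check; the exact output wording varies per system):
Code Block
 file ./xbeach    # should report the binary as "not stripped" 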

Have an X Window system available

On Microsoft Windows, make sure an X Window emulator is running. For example, start Exceed: START -> Program -> Exceed -> Exceed

Run the model

Start the just-compiled XBeach binary in your model directory in a somewhat peculiar way:

Code Block
 mpirun -n <nr_of_processes> xterm -e gdb <path_to_binary>/xbeach 

This will start a number of command windows (xterm) equal to the number of processes you specified after the -n option. In each command window, an instance of the debugger (gdb) is started. Each instance of the debugger will debug a specific subprocess of the XBeach program stored in the <path_to_binary> path.

Start debugging

The debugger provides at least the following commands, which can be used in each command window separately:

Command   Description                                          Example
r         Starts the XBeach program in the current debugger    r
b         Adds or queries a breakpoint                          b <filename>.F90:<linenr>
                                                                b 1
c         Continue running after breakpoint                     c
n         Continue to next line                                 n
p         Print variable contents                               p <varname>
q         Quit running                                          q
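A typical session in one of the xterm windows could look as follows (a sketch; the file name, line number and variable name are made up for illustration):
Code Block
 b boundaryconditions.F90:100   # hypothetical breakpoint location
 r                              # run until the breakpoint is hit
 p it                           # print a (hypothetical) variable "it"
 c                              # continue to the next breakpoint hit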

Especially the commands n and p might complain about a lack of line numbers or variables. This can happen if you are still compiling with optimizations or if the debugging symbols are of an unsuitable type. You might want to consider using another format, such as DWARF-2. How? I don't know! If you do, please add the info here...
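Untested, but with gfortran the symbol format can presumably be selected via the -gdwarf-2 flag, along these lines:
Code Block
 FC=gfortran44 FCFLAGS="-g -gdwarf-2 -O0" ./configure --with-mpi 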

...

Compile XBeach parallel version for use on Linux/cluster

  1. On your Windows PC, start Exceed > Exceed XDMCP Broadcast (or the icon on your desktop).
  2. Choose 'devux.wldelft.nl'.
  3. Under Sessions, choose 'GNOME'.
  4. Use your Deltares user name and password to log in.
  5. Start a Terminal session (Applications > System Tools > Terminal).
  6. Make a directory "checkouts":
    Code Block
    mkdir ~/checkouts
  7. Checkout the latest and greatest version of XBeach (enter your Deltares password if asked for):
    Code Block
    svn co https://repos.deltares.nl/repos/XBeach/trunk ~/checkouts/XBeach
    If you already have the local repository, but want to update it, use:
    Code Block
    svn update
  8. Go to the XBeach directory:
    Code Block
    cd ~/checkouts/XBeach
  9. Run
    Code Block
    FC=ifort ./configure
  10. Run
    Code Block
    make 
  11. Make sure version 10 of the Intel Fortran compiler is used (instead of version 8):
    Code Block
    . /opt/intel/fc/10/bin/ifortvars.sh
  12. Delete all files left over from the serial build that could mess up the parallel compilation:
    Code Block
    make clean
  13. Compile the parallel version:
    Code Block
    PATH=/opt/mpich2-1.0.7/bin:$PATH USEMPI=yes ./configure && make
  14. (optional) Copy the executable to your personal bin-folder:
    Code Block
    cd ~/bin
    cp ~/checkouts/bin/xbeach.mpi .
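To check that the copied executable is really the MPI version, you can list its shared-library dependencies (a quick sanity check, assuming a dynamically linked MPICH2 build):
Code Block
 ldd ~/bin/xbeach.mpi | grep -i mpi 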

Compiled version

Compiled Linux versions of XBeach (both the MPI and serial version) can be found in Arend's bulletin box:

Code Block
/BULLETIN/pool_ad/xbeach_linux/   # or
/u/pool_ad/BULLETIN/xbeach_linux/

...

Run XBeach parallel version on h3 cluster

To run a parallel XBeach job on the h3 cluster (from now on 'h3'), you need 3 things:

  1. A parallel version of XBeach somewhere on the u-drive (preferably in /u/username/bin)
  2. A job (shell) file you feed to the cluster
  3. A directory on your part of the u-drive with the simulation-data (params.txt, bathy.dep, etc)

Logging on to cluster

Windows
The easiest way to log on to h3 is using the program PuTTY, which can be found on the desktop of your Deltares PC. The first time you connect to h3, you need to supply some basic parameters, which you can save for later use (with 'Save'). In the dialog box, under Host Name, fill in h3.wldelft.nl; you don't need to touch the other options (leave the Protocol set to SSH). Optionally, save the information as e.g. 'h3'. On your first connection you'll probably also see a message about the server's host key; click Yes to accept. Log in with your Deltares user name and password.

Linux
If you want to connect to h3 from e.g. the development server (devux), you can do so from a terminal session. Type

Code Block
ssh h3

to connect to h3. Your user name is passed along automatically, so you only need to enter your password.
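If the user name on your local machine differs from your Deltares user name, you can specify it explicitly (standard ssh syntax):
Code Block
 ssh your_deltares_username@h3.wldelft.nl 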

Obtain latest version of XBeach executable

There are 2 ways to obtain the latest (or any) version of the parallel XBeach executable:

  1. Compile it yourself (see the instructions in #compilexbeach)
  2. Copy it from Arend Pool's BULLETIN:
    Code Block
    cd ~/bin
    cp /u/pool_ad/BULLETIN/xbeach_linux/current/xbeach.mpi .

Obtain the XBeach MPI job file

There are also 2 ways to obtain the job file to run xbeach.mpi on h3:

  1. Copy it from Arend Pool's BULLETIN:
    Code Block
    mkdir ~/simulations    # can be skipped if directory already exists
    cd ~/simulations
    cp /u/pool_ad/BULLETIN/xbeach_linux/xbeach.sh .
    In the above instructions, it is assumed you place the job file in the directory ~/simulations. If you want to place it somewhere else, feel free to do so and change the instructions accordingly.
  2. Create the shell file yourself in a location you prefer. The file should contain the following code:
    Code Block
    #!/bin/sh
    
    ### Initialise SGE.
    . /opt/sge/InitSGE
    ### Make the MPICH2 commands (mpdboot, mpirun, mpdallexit) available.
    export PATH="/opt/mpich2/bin:$PATH"
    ### Report the slots and nodes SGE assigned to this job.
    echo "numslots: $DELTAQ_NumSlots"
    echo "nodes: $DELTAQ_NodeList"
    ### Build a machines file with one short host name per line.
    echo $DELTAQ_NodeList | tr ' ' '\n' | sed 's/.wldelft.nl//' > machines
    echo "Machines file:"
    cat machines
    ### Start the MPD ring, run XBeach in parallel, then shut the ring down.
    mpdboot -1 -n $DELTAQ_NumSlots -f machines
    mpirun -np $DELTAQ_NumSlots ~/bin/xbeach.mpi
    mpdallexit
    
    The second-to-last line should contain the path to xbeach.mpi. Edit this if you have placed the executable somewhere else.

Run the parallel job

Make sure you have placed your simulation (directory) somewhere on a shared location (u-drive or p-drive) and go ('cd') there (for example):

Code Block
cd /u/username/simulations/simulation1   # or
cd /p/project/simulations/simulation1

Finally, submit your job to the grid engine (h3) with the following command:

Code Block
qsub -pe spread N /path-to-job-file/xbeach.sh

where N is the number of nodes you want to use and path-to-job-file the path to xbeach.sh.
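For example, with the job file in ~/simulations (as set up above) and 4 nodes, the submission and a quick status check look like this:
Code Block
 qsub -pe spread 4 ~/simulations/xbeach.sh 
 qstat 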

More info

...

type "mpd &" and follow instructions:

Code Block

cd $HOME
touch .mpd.conf
chmod 600 .mpd.conf

Now edit the .mpd.conf file and add the text:

Code Block

MPD_SECRETWORD=secret