Deprecated: please see new documentation site.



The following are sample job scripts for submitting hybrid MPI/OpenMP applications to our systems.
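Before submitting, the application itself must be built with both MPI and OpenMP support. The commands below are a minimal sketch, assuming the IBM XL compilers on AIX and a GCC-based MPI wrapper on the Linux clusters; the source file name hydro.c is hypothetical:

# On AIX: thread-safe MPI compiler wrapper, OpenMP enabled via the XL flag -qsmp=omp
mpcc_r -qsmp=omp -O2 -o hydro hydro.c

# On Linux: MPI compiler wrapper over GCC, OpenMP enabled via -fopenmp
mpicc -fopenmp -O2 -o hydro hydro.c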

AIX

For the IBM P575 machines, the following LoadLeveler script sends 4 MPI processes to 4 IBM nodes (32 processors), with one MPI process and 8 OpenMP threads running on each node:

#!/usr/bin/ksh
# @ job_type = parallel
# @ executable = /usr/bin/poe
# @ arguments = /work/default/ou/flower/run/hydro
# @ input = /dev/null
# @ output = /work/default/ou/flower/output/out.std
# @ error = /work/default/ou/flower/output/out.err
# @ initialdir = /work/default/ou/flower/run
# @ notify_user = ou@baton.phys.lsu.edu
# @ class = checkpt
# @ notification = always
# @ checkpoint = no
# @ restart = no
# @ wall_clock_limit = 10:00:00
# @ node_usage = not_shared
# @ node = 4,4
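# request a minimum and a maximum of 4 nodes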
# @ tasks_per_node = 1
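# run one MPI task per node; the OpenMP threads use the node's remaining processors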
# @ network.MPI = sn_single,shared,US
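# MPI communication over the switch network in User Space (US) mode with a shared adapter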
# @ requirements = ( Arch == "Power5" )
# with an explicit executable (poe), shell commands placed after the queue
# statement are not executed, so OMP_NUM_THREADS is passed through the
# environment keyword instead
# @ environment = COPY_ALL; MP_SHARED_MEMORY=yes; OMP_NUM_THREADS=8
# @ queue
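If the script above is saved as, say, hydro.ll (a hypothetical name), it can be submitted and monitored with the standard LoadLeveler commands:

llsubmit hydro.ll     # submit the job
llq -u $USER          # list your jobs and their status
llcancel <job_id>     # cancel a job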

Linux

For Linux clusters, the following PBS script sends 4 MPI processes to 4 Queen Bee nodes (32 processors), with one MPI process and 8 OpenMP threads running on each node:

#!/bin/bash
#PBS -q checkpt
# the queue to be used; here the checkpt queue.
#PBS -A loni_gridadmin1
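# the allocation account to be charged.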
#
#PBS -l nodes=4:ppn=8
#
# number of nodes and number of processors on each node to be used.
# Do NOT use ppn = 1.
#
#PBS -l cput=2:00:00
# requested CPU time.
#
#PBS -l walltime=2:00:00
# requested Wall-clock time.
#
#PBS -o /work/ou/mpi_openmp/run/output/myoutput2
# name of the file to which standard output will be written.
#
#PBS -j oe
# merge standard error into the standard output file.
#
#PBS -N mpi_openmp
# name of the job (as it will appear in qstat output).
#
export HOME_DIR=/home/$USER/
#
export WORK_DIR=/work/ou/mpi_openmp/run
#
cp $HOME_DIR/mpi_openmp/run/* $WORK_DIR/.
# copies necessary files from home directory to scratch space.
cd $WORK_DIR
# create a machine file that lists each node only once
sort -u $PBS_NODEFILE > hostfile
# get number of MPI processes 
export NPROCS=$(wc -l < hostfile)
# setting number of OpenMP threads
export OMP_NUM_THREADS=8
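# raise the stack size to the hard limit; OpenMP threads often need large stacks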
ulimit -s hard
date
# launch your hybrid applications 
mpirun -machinefile hostfile -np $NPROCS /bin/env OMP_NUM_THREADS=$OMP_NUM_THREADS \
       $WORK_DIR/hydro
date
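If the script above is saved as, say, hydro.pbs (a hypothetical name), it can be submitted and monitored with the standard PBS commands:

qsub hydro.pbs        # submit the job
qstat -u $USER        # list your jobs and their status
qdel <job_id>         # cancel a job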

Users may direct questions to sys-help@loni.org.
