Deprecated: please see new documentation site.


MPI (Message Passing Interface) is a standard in parallel computing for data communication across distributed processes.

Building MPI applications on a LONI 64-bit Intel cluster

mvapich

mvapich is an implementation of MPICH designed to make efficient use of the InfiniBand network, developed at Ohio State University. It has been installed on Eric and built with various compilers; the installations are located under /usr/local/packages. In the following we will use the Intel version of mvapich to illustrate the compilation of an MPI program. To use the Intel 9.1 compiler version of mvapich, add the following line to the .soft file in your home directory:

+mvapich-0.98-intel9.1
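
For example, the key can be appended to .soft from the command line; this assumes the SoftEnv command resoft is available to re-read the file (logging out and back in has the same effect):

echo "+mvapich-0.98-intel9.1" >> ~/.soft
resoft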

Alternatively, you can set the following environment variables to achieve the same effect:

export MPIHOME=/usr/local/packages/mvapich-0.98-intel9.1
export LD_LIBRARY_PATH=$MPIHOME/lib:$LD_LIBRARY_PATH
export PATH=$MPIHOME/bin:$PATH  

You can verify that the environment is set up correctly by checking whether the corresponding mpif90 and mpirun are in your path:

[ou@eric2 run]$ which mpif90
/usr/local/packages/mvapich-0.98-intel9.1/bin/mpif90
[ou@eric2 run]$ which mpirun
/usr/local/packages/mvapich-0.98-intel9.1/bin/mpirun

After the correct environment is set, you can compile your program as follows:

mpicc test.c -O3 -o a.out
mpif90 test.F -O3 -o a.out

To run your application interactively on 16 processors, first submit an interactive job request to PBS:

qsub -I -l nodes=4:ppn=4 -l walltime=00:30:00 -l cput=00:30:00

When your job request is granted, change to the directory containing your parallel executable, then launch:

mpirun -np 16 myexecutable
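
For example, assuming the executable is in the directory from which the interactive request was submitted ($PBS_O_WORKDIR points to that directory):

cd $PBS_O_WORKDIR
mpirun -np 16 ./myexecutable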

The following is a sample PBS script to submit your mvapich application to the PBS queue:

(Sample PBS Script)
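
A minimal sketch of such a script, assuming the 4-node, 4-processors-per-node request used above, might look like this; the job name and executable path are placeholders, and whether mpirun needs an explicit machine file depends on the local mvapich installation:

#!/bin/bash
#PBS -N mvapich_test
#PBS -l nodes=4:ppn=4
#PBS -l walltime=00:30:00
#PBS -j oe

cd $PBS_O_WORKDIR
# 16 processes = 4 nodes x 4 processors per node requested above
mpirun -np 16 ./a.out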

openmpi

OpenMPI has been installed on Eric and built with various compilers; the installations are located under /usr/local/packages. In the following we will use the GNU version of OpenMPI to illustrate the compilation of an MPI program. To use the GNU compiler version of OpenMPI, add the following line to the .soft file in your home directory:

+openmpi-1.2.2-gcc

Alternatively, you can set the following environment variables to achieve the same effect:

export MPIHOME=/usr/local/packages/openmpi-1.2.2-gcc
export LD_LIBRARY_PATH=$MPIHOME/lib:$LD_LIBRARY_PATH
export PATH=$MPIHOME/bin:$PATH  

After the correct environment is set, you can compile your program as follows:

mpicc test.c -O3 -o a.out
mpif90 test.F -O3 -o a.out

As long as the environment is set up correctly, you should be able to use the above sample PBS script to submit your OpenMPI application to the PBS queue.
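
If the OpenMPI installation was not built with direct PBS/Torque support, the launch line in that script may need an explicit machine file; a conservative sketch of the OpenMPI-specific part, assuming the same 16-processor request, is:

export MPIHOME=/usr/local/packages/openmpi-1.2.2-gcc
export PATH=$MPIHOME/bin:$PATH
export LD_LIBRARY_PATH=$MPIHOME/lib:$LD_LIBRARY_PATH
cd $PBS_O_WORKDIR
# $PBS_NODEFILE lists the nodes allocated to the job, one line per processor
mpirun -np 16 -machinefile $PBS_NODEFILE ./a.out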


mvapich2

mvapich2 is an implementation of MPICH2 designed to make efficient use of the InfiniBand network, developed at Ohio State University. It has been installed on Eric and built with various compilers; the installations are located under /usr/local/packages. In the following we will use the Intel version of mvapich2 to illustrate the compilation of an MPI program. To use the Intel 9.1 compiler version of mvapich2, add the following line to the .soft file in your home directory:

+mvapich2-0.98-intel9.1

Alternatively, you can set the following environment variables to achieve the same effect:

export MPIHOME=/usr/local/packages/mvapich2-0.98-intel9.1
export LD_LIBRARY_PATH=$MPIHOME/lib:$LD_LIBRARY_PATH
export PATH=$MPIHOME/bin:$PATH  

You can verify that the environment is set up correctly by checking whether the corresponding mpif90 and mpirun are in your path:

[ou@eric2 run]$ which mpif90
/usr/local/packages/mvapich2-0.98-intel9.1/bin/mpif90
[ou@eric2 run]$ which mpirun
/usr/local/packages/mvapich2-0.98-intel9.1/bin/mpirun

After the correct environment is set, you can compile your program as follows:

mpicc test.c -O3 -o a.out
mpif90 test.F -O3 -o a.out

To run an mpich2 job, a file named .mpd.conf must exist in your home directory, containing the following line:

MPD_SECRETWORD=xxxxxxxxxxx

where xxxxxxxxxxx is a password string of your choice. Note that .mpd.conf must not be readable or writable by anyone except yourself. To set the permissions:

chmod 600 ~/.mpd.conf
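
Putting both steps together (the secret word below is only a placeholder; choose your own):

echo "MPD_SECRETWORD=replace_with_your_own_secret" > ~/.mpd.conf
chmod 600 ~/.mpd.conf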


Then, the mpd daemon needs to be started before running mpich2 jobs. The following PBS script shows an example of running an mpich2 job:

(Sample PBS Script)
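
A minimal sketch of such an mvapich2 batch job, assuming the mpd-based launch described above (the exact mpdboot options may vary between versions), could look like the following:

#!/bin/bash
#PBS -l nodes=4:ppn=4
#PBS -l walltime=00:30:00
#PBS -j oe

cd $PBS_O_WORKDIR
# Start one mpd daemon per allocated node, run the job, then shut the daemons down
NNODES=$(sort -u $PBS_NODEFILE | wc -l)
mpdboot -n $NNODES -f $PBS_NODEFILE
mpiexec -np 16 ./a.out
mpdallexit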

Debugging MPI applications on a LONI 64-bit Intel cluster

Debugging MPI codes with mpich2 + gdb

mpich2 provides an interface to gdb, which allows users to debug their MPI codes with the traditional gdb tools. To use mpich2 + gdb to debug your code, first set up the mpich2 environment as described in the sections above, then follow these steps:

qsub -I -l nodes=1:ppn=4 -l walltime=00:30:00 -l cput=00:30:00

to ask for 4 processors for an interactive run; after the interactive session starts, enter the following on the command line:

mpdboot -n 1

to start the mpd daemon on one node (if you asked for more than one node, change 1 to the desired number of nodes);

mpiexec -gdb -np 4 ./a.out

to run the executable in debugging mode. You should be able to debug your code as you would a serial code with gdb; the output will be similar to the following:

(Sample Output)

Users may direct questions to sys-help@loni.org.
