Menger User Guide
Menger is a computer cluster consisting of one master node and eight slave nodes. Each node is equipped with dual 1.7 GHz Pentium 4 Xeon processors and 2 GB of RDRAM, and all nodes are connected to each other via a Fast Ethernet switch.
To log in to Menger, use ssh:
"ssh username@menger.math.iit.edu" or "ssh -l username menger.math.iit.edu"
The cluster is protected by a firewall. Currently, you can reach it remotely only from charlie.cns.iit.edu and from machines in the E1 building. Please change your default password after you log in for the first time by typing passwd.
The following practices are recommended for each user:
- Store your configuration files and program source code in your home directory, /home/username.
- Run your programs and executables in your scratch directory, /scratch.host/username.
- Transfer your data files to permanent storage, such as your home directory on charlie.cns.iit.edu.
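The recommended workflow above can be sketched as a shell session. The directories and file names below are stand-ins created with mktemp so the steps can be followed anywhere; they are not actual Menger paths, and "myprog" is a hypothetical program:

```shell
# Illustrative workflow: keep files in home, run in scratch, archive results.
SCRATCH=$(mktemp -d)                 # stands in for /scratch.host/username
HOME_DIR=$(mktemp -d)                # stands in for /home/username
printf '#!/bin/sh\necho 42 > results.dat\n' > "$HOME_DIR/myprog"   # dummy program
chmod +x "$HOME_DIR/myprog"
cp "$HOME_DIR/myprog" "$SCRATCH/"    # keep the source copy in home
cd "$SCRATCH" && ./myprog            # run in the scratch area, not in home
cp results.dat "$HOME_DIR/"          # move output back to permanent storage
cat "$HOME_DIR/results.dat"
```

On Menger the last step would typically be an scp to charlie.cns.iit.edu rather than a local copy.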
Compilers on Menger
Release 3.2 of the PGI CDK (Cluster Development Kit) compilers and tools has been installed on the cluster. It contains PGI's HPF, Fortran 90, FORTRAN 77, C, and C++ compilers, along with the PGPROF profiler and the PGDBG debugger. A detailed description of the PGI CDK, together with documentation for the compilers and tools, may be found on PGI's website.
In addition, the GNU compilers (gcc, g++, g77) are also available on the machine.
Here are samples of "Hello world" MPI programs in both C and Fortran:

/* mpihello.c */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv)
{
    int noprocs, nid;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &noprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &nid);
    if (nid == 0)
        printf("Hello world! I'm node %i of %i\n", nid, noprocs);
    MPI_Finalize();
    return 0;
}

c mpihello.f
      program hello
      include 'mpif.h'
      integer ierr, myproc
      call mpi_init(ierr)
      call mpi_comm_rank(MPI_COMM_WORLD, myproc, ierr)
      print *, "Hello world! I'm node", myproc
      call mpi_finalize(ierr)
      end
How to Compile an MPI Parallel Program
Message passing with MPI has become an increasingly popular programming model for parallel processing. The PGI CDK includes pre-configured versions of MPICH (an MPI implementation from Argonne National Laboratory). Compile your programs with pgcc, pgCC, pgf77, or pgf90 and link with the appropriate MPI libraries. Here are examples for the various compilers; each command produces an executable named mpihello.
pgcc  [options] mpihello.c -lmpich -o mpihello                                   (C)
pgCC  [options] mpihello.C -lmpich -o mpihello                                   (C++)
pgf77 [options] mpihello.f -L/usr/pgi/linux86/lib -lfmpich -lmpich -o mpihello   (FORTRAN 77)
pgf90 [options] mpihello.f -L/usr/pgi/linux86/lib -lfmpich -lmpich -o mpihello   (FORTRAN 90)
To run your mpihello program under PBS, first create a file named mpihello.pbs with the following content:
#PBS -l nodes=2
#PBS -r n
#
# This is a PBS job submission script. It assumes that there are 2 2-processor
# nodes in the PBS cluster, and that mpihello has been compiled in the local
# directory by typing:
#   % pgf77 -o mpihello mpihello.f -lfmpich -lmpich
# or
#   % pgcc mpihello.c -lmpich -o mpihello
#
# PBS will reserve 2 nodes and then execute this script. "mpirun" uses the
# PBS_NODEFILE environment variable as the list of machines on which to run.
#
# IMPORTANT NOTE: Be sure to modify the "cd" command below to switch to the
# directory in which you are currently working! Also modify the setting of
# the PGI environment variable as appropriate for your installation.
setenv PGI /usr/pgi
set path=($PGI/linux86/bin $path)
cd /scratch.host/username
/usr/pgi/linux86/bin/mpirun -v -np 4 mpihello
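The -np 4 above matches the four processors PBS reserves (2 nodes with 2 CPUs each). As a sketch, that count can be derived from the machine list in PBS_NODEFILE, which contains one line per allocated processor slot; since this snippet is not running under PBS, the file is simulated with mktemp:

```shell
# Simulate PBS_NODEFILE: 2 dual-processor nodes -> 4 lines, one per CPU slot
PBS_NODEFILE=$(mktemp)
printf 'node1\nnode1\nnode2\nnode2\n' > "$PBS_NODEFILE"
NP=$(wc -l < "$PBS_NODEFILE")    # number of processors PBS granted
echo "$NP"                       # use as: mpirun -np $NP mpihello
```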
Then, submit the job to the default queue by typing "qsub mpihello.pbs".
You can find copies of the example files mpihello.f, mpihello.c, and mpihello.pbs in the directory /scratch.host/test.
How to Check Job Status Using PBS