UK National HPC Service


Grid Middleware


Grid middleware is a software stack designed to present disparate compute and data resources in a uniform manner, so that client software can access these resources remotely without needing to know the systems' configurations a priori. The CSAR machines currently support basic Globus and UNICORE access.

Globus is middleware constructed from a number of components which make up a toolkit. This toolkit provides client, server and development components for the three Globus "pillars" of Grid computing: Resource Management, Information Management and Data Management.

UNICORE (UNiform Interface to COmpute REsources) provides a science and engineering grid that combines the resources of supercomputer centres and makes them available over the Internet. Strong authentication is performed in a consistent and transparent manner, and the differences between platforms are hidden from the user, creating a seamless HPC portal for accessing supercomputers, compiling and running applications, and transferring input/output data.

Access to both flavours of middleware is based upon user-owned X509 certificates (such as those issued by the UK eScience Certificate Authority). UNICORE makes use of these certificates directly, whereas Globus allows the use of GSI (Grid Security Infrastructure) proxy impersonation certificates. Both methods allow for single sign-on to varying extents.

Globus version 2.2.4 is available on the Origins, and the pre-Web-Service (pre-WS) components of version 4.0.0 are available on the Altix. UNICORE version 4.0.2 is available on the Origins.

Restrictions on Use

The use of Globus and UNICORE to access CSAR resources is not restricted. Users must, however, have an existing account and project with sufficient resources to run jobs.

Set Up Procedure

To make use of the Globus services, users are required to contact the CSAR help desk, specifying their CSAR username, the subject of their X509 certificate (also known as their Distinguished Name, DN) and the issuing Certificate Authority (CA). The following command, run on your certificate file, should provide both the subject and the issuing CA:

openssl x509 -in usercert.pem -noout -subject -issuer
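If you do not yet have a certificate to hand, the output format can be previewed with a throwaway self-signed certificate. The DN below is purely illustrative; a real certificate is issued by your CA (for example the UK eScience CA):

```shell
# Create a throwaway self-signed certificate, purely to illustrate the
# output format (a real certificate is issued by your CA).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
    -subj "/C=UK/O=eScience/OU=Example/CN=jane doe"

# Print the subject (DN) and issuer, as the help desk requires:
openssl x509 -in /tmp/demo-cert.pem -noout -subject -issuer
```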

CSAR only supports certificates issued by trustworthy Certificate Authorities. The trustworthiness of a CA is determined, in part, by reviewing the CA's Certificate Policy and Certification Practice Statement (CP/CPS). A list of currently supported CAs can be found in the CSAR section of the National Grid Service website. If your certificate is issued by a CA not in this list, please include the details of your CA so that the CSAR service team can review it.

Once the username and certificate details have been received, and the CSAR service team is satisfied that the common name of the certificate appropriately matches the CSAR account name, the user's DN will be added to the Globus grid-mapfile. This allows the user to use their client software to access Grid resources available via GSI mechanisms.
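Each grid-mapfile entry is a single line mapping a quoted DN onto a local account name. A sketch of the form such an entry takes; the DN and username here are hypothetical:

```
"/C=UK/O=eScience/OU=Manchester/CN=jane doe" janedoe
```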

To make use of UNICORE services, users are required to send their X509 certificate and specify their CSAR username to the help desk. UNICORE currently only supports the UK eScience CA and the UNICORE Certificate Authority at Forschungszentrum Juelich. The user's certificate will then be added to the UNICORE User Data Base, after which they will be able to use UNICORE tools to access CSAR.

Globus client tools are also available on CSAR machines. To use these clients, the user must set up the Globus environment. This can be done using the module framework:

module load globus

This can be added to the user's .profile file.

Running the Code

Globus gatekeepers run on wren and on newton. GSIFTP (GridFTP) is available on all the Origins and, once newton is connected to UKLight, will also be available on the Altix. GSI-OpenSSH runs on port 2222 on the Origins.
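GSI-OpenSSH is used like ordinary OpenSSH but authenticates with your GSI proxy rather than a password. A command sketch, assuming a valid proxy and using wren as an illustrative host (substitute the full hostname of the machine you wish to reach):

```
gsissh -p 2222 username@wren
```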

Command                                   Purpose
----------------------------------------  ----------------------------------------------------------------
globusrun -a -r …                         Test authorisation to wren or newton.
…                                         Run a job interactively on wren or newton, as though you are logged into it.
…                                         Run a job interactively on wren, as though it were a batch job.
globus-job-submit …                       Submit a batch job on the Altix.
globus-job-submit …                       Submit a batch job on the Origins.
…                                         Submit a batch job on fermat or green.
globus-job-run -help                      Print globus-job-run usage.
globus-url-copy file:///tmp/file gsi      Use GSIFTP to upload /tmp/file to green.
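The gsiftp destination URL in the upload example above has been truncated on this page. The general form of a GSIFTP upload pairs a local file:// source with a gsiftp:// destination; the hostname and remote path here are illustrative:

```
globus-url-copy file:///tmp/file gsiftp://green/tmp/file
```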

The main UNICORE client software is a Java GUI. To set it up to find CSAR resources: select Settings -> User Defaults from the menu bar; in the General tab, add the following URL to "URL(s) for UNICORE Site Servers":

Further Information

Further information can be found by following the links below.




PACX-MPI is a flavour of MPI developed by HLRS (Stuttgart). It is designed for combining processors on several MPPs into a single, virtual supercomputer (or metacomputer). The application program sees a single MPI_COMM_WORLD consisting of processors on all hosts. PACX-MPI is more lightweight than MPICH-G2 (see below) yet provides efficient intra-host communications (layered on "native" MPI) and inter-host communications (layered on TCP/IP). It has particular advantages over MPICH-G2 for sites encumbered by strict firewall policies.


MPICH-G2 is an implementation of the MPI v1.1 standard. It allows communication between machines on a computational grid using Globus technologies, and between local nodes using the native MPT libraries. The CSAR support team is currently installing and testing MPICH-G2 on newton and plans to install it across the Origins following this.

An MPICH-G2 implementation is currently available on newton, built on the mpicc64 flavour of Globus (flavor_mpicc64.gpt). This uses the vendor-supplied MPI (MPT) for intra-machine communication (as opposed to TCP).

To set up your environment:

export MPICH_INSTALL_PATH=$GLOBUS_LOCATION/mpich-g2 (sh-based)

setenv MPICH_INSTALL_PATH $GLOBUS_LOCATION/mpich-g2 (csh-based)

To compile code, use:


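The compile command itself is missing from this page. Presumably it is the MPICH compiler wrapper shipped under the installation path, along the lines of the following sketch (source and output names hypothetical):

```
$MPICH_INSTALL_PATH/bin/mpicc -o myapplication myapplication.c
```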

Create a proxy by issuing the command:


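The command has been lost from this page; the standard Globus 2.x command for creating a GSI proxy certificate is grid-proxy-init, so the step is presumably:

```
grid-proxy-init
```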
Create a file named "machines", in which each line gives a quoted gatekeeper hostname followed by the number of nodes, with the content:

"" 496

Create an rsl file via the dumprsl option to mpirun:

$MPICH_INSTALL_PATH/bin/mpirun -dumprsl -np X myapplication > filename.rsl
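The file produced by -dumprsl is a DUROC multi-request in RSL syntax. A hand-written sketch of its general shape, with hypothetical contact string, count and executable path (the real file should always be generated by -dumprsl rather than written by hand):

```
+
( &(resourceManagerContact="newton")
   (count=4)
   (jobtype=mpi)
   (label="subjob 0")
   (environment=(GLOBUS_DUROC_SUBJOB_INDEX 0))
   (executable="/home/username/myapplication")
)
```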

To submit the job:

$MPICH_INSTALL_PATH/bin/mpirun -globusrsl filename.rsl

More information on MPICH-G2 is available via the following link:


This page last updated: Friday, 16-Sep-2005 14:56:30 BST