Windows Cluster (MSHPC)
Accessing and using Windows HPC Cluster
NOTE: this is the Windows cluster. Most users of Research Computing resources use the Odyssey cluster of Linux computers.
1. You need an RC account and an OpenAuth token. If you don’t have them, please request them via:
Also, request access to the Windows HPC cluster, which will place you in the argo_users user group.
2. The Windows cluster consists of:
a. (1) Login node (parathyro.rc.fas.harvard.edu)
b. (1) HPC master
c. (4) HPC compute nodes
d. (1) additional compute node dedicated to Mass Spec applications that incorporate their own schedulers.
e. (30) Windows XP compute nodes
In total, 128 cores are available to users.
Only the login node (parathyro.rc.fas.harvard.edu) is directly accessible to Windows HPC cluster users.
3. Access the Windows HPC cluster login node. Applications can be submitted to the HPC cluster queue from the login node.
a. Use Windows Remote Desktop to connect to parathyro.rc.fas.harvard.edu and authenticate with your RC account: enter the username as RC\USERNAME and use your RC/Odyssey password.
b. Once connected, you will be prompted for a username and OpenAuth token. Enter only the username (without the RC\ prefix) and the OpenAuth token code. An additional OpenAuth prompt for the Windows password sometimes appears; there, enter your RC password.
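For example, from a Windows command prompt you can launch the Remote Desktop client directly (mstsc is the standard Windows Remote Desktop client; this is equivalent to starting it from the Start Menu):
mstsc /v:parathyro.rc.fas.harvard.edu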
4. A number of applications are available to run on the Windows HPC cluster.
a. Shortcuts for Mass Spectrometry / Proteomics applications can be found in the Start Menu under Programs > Mass Spec Proteomics.
b. An explanation of the MassSpec directory structure can be found here:
c. A number of development tools are available, including Perl, Python, and R interpreters, PuTTY, and the Cygwin suite.
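For example, to confirm which interpreter versions are installed, open a command prompt on the login node and run (a minimal check, assuming the interpreters are on the PATH):
perl -v
python --version
R --version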
5. GUI applications currently cannot be distributed across the HPC compute nodes; they execute on the login node. Distributing GUI applications and handing display sessions off to the login node is on the development plan.
6. Command line applications can be submitted to the HPC queue.
a. You can submit command-line applications from the HPC Job Manager GUI (shortcut). Create a job using one of the available job templates and assign it one or more tasks.
b. You can also submit command-line applications using the Windows shell or HPC PowerShell (see the example session below the wrapper list). Refer to this documentation for the HPC commands available to manage jobs:
c. RECOMMENDED: Use one of the available command-line wrappers for running specific applications. This is the most user-friendly method; it does not allow you to fine-tune the job / task specification, but it should be sufficient in many cases. There are wrappers for:
- Percolator -> hpc_percolator
- Perl -> hpc_perl
- Python -> hpc_python
- R -> hpc_r
Example: To submit a Python script to the HPC cluster, open HPC PowerShell and use hpc_python just as you would use python:
hpc_python script_name.py [options]
hpc_python \\myshare\mydir\mypythonscript.py arg1 arg2
You can view and manage your jobs in the HPC Job Manager GUI or from the shell:
job view jobID (shows the job status)
job listtasks jobID (shows the job results and stdout)
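For completeness, a job can also be created, populated with a task, submitted, and cancelled entirely from the shell. A minimal sketch (myapp.exe is a hypothetical executable, and option names such as /numprocessors may vary between HPC Pack versions):
job new /numprocessors:4 (creates a job and prints its jobID)
job add jobID myapp.exe arg1 arg2 (adds a task to the job)
job submit /id:jobID (submits the job to the queue)
job cancel jobID (cancels a queued or running job)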
7. OpenMPI and MPI applications are supported (both development and runtime). The login node contains the HPC development toolkit and Visual Studio 2010 Beta 2 with MPI / MPI.NET and OpenMPI support and an MPI debugger. Refer to this URL for HPC development:
and to this very useful tutorial document on HPC development in C++:
Be sure to specify the absolute include and library paths to the HPC toolkit in the compiler and linker settings:
C:\Program Files\Microsoft HPC Pack 2008 SDK\Include
C:\Program Files\Microsoft HPC Pack 2008 SDK\Lib\amd64
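For example, when building from the command line instead of from Visual Studio, the same paths can be passed directly to the compiler and linker (a sketch only; myMPIapp.cpp is a hypothetical source file and msmpi.lib is the MS-MPI import library shipped with the SDK):
cl /I"C:\Program Files\Microsoft HPC Pack 2008 SDK\Include" myMPIapp.cpp /link /LIBPATH:"C:\Program Files\Microsoft HPC Pack 2008 SDK\Lib\amd64" msmpi.lib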
HPC cluster compute nodes contain Microsoft OpenMP and MPI runtime libraries.
To submit an MPI job, use the hpc_mpi wrapper:
hpc_mpi MPIexecutable [options]
hpc_mpi \\myshare\mydir\myMPIexecutable option1 option2
Or use mpiexec via the HPC Job Scheduler GUI or via HPC PowerShell, as in the example below.
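Submitting an MPI run through the scheduler from HPC PowerShell amounts to wrapping the executable in mpiexec. A sketch (the /numprocessors option name may differ between HPC Pack versions):
job submit /numprocessors:8 mpiexec \\myshare\mydir\myMPIexecutable option1 option2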
8. There is a mailing list for Windows HPC users: firstname.lastname@example.org. Users are subscribed automatically.
9. For support, please contact email@example.com.
NOTE: The RC Windows HPC cluster is under active development and is currently at an early stage. We encourage user comments and suggestions. Ongoing changes and improvements may cause temporary instability.
10. Complete list of software on Windows HPC cluster
11. Additional documentation for Proteomics users:
a. Mass Spec directory structure
b. Adding FASTA databases to Mascot / MaxQuant