HPC Cluster

Welcome to the high-performance computing (HPC) community site at CCHMC! We operate as the Research Computing group under Information Systems for Research (IS4R).

Currently, we maintain one Red Hat 9 Linux-based HPC cluster for research. The primary HPC cluster environment has 2400+ cores and is heterogeneous, including large-memory SMP nodes, with 30TB of RAM in total across 80 nodes. Nodes are connected by high-speed Ethernet at 10-20Gbps, and the scheduler and resource manager is IBM LSF. The environment also contains GPU-capable nodes, with a combination of dual V100 GPUs with 32GB of RAM and quad A100 GPUs with 40GB and 80GB of RAM.
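As a brief illustration, batch jobs are submitted to LSF with bsub. The sketch below is hedged: the queue name "gpu", the memory units, and the script names are assumptions, so check the local LSF configuration before use.

    $ bsub -n 4 -W 2:00 -R "rusage[mem=8000]" ./my_analysis.sh   # 4 cores, ~2h walltime, reserve ~8GB (assumed per-job units)
    $ bsub -q gpu -gpu "num=1" ./gpu_job.sh                      # request one GPU ("gpu" queue name is an assumption)

The -R "rusage[mem=...]" form reserves memory alongside the job; whether the value is interpreted per core or per job is site-specific.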

Software on the cluster is installed upon request and is managed via Tcl environment modules; available packages include several versions of R/RStudio, Python, Nextflow, Picard, Samtools, and others. There is also a web interface, "HPC OnDemand", from which most tools and desktops can be launched in any web browser for ease of use.
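For example, software is typically discovered and loaded like this; the version string shown is hypothetical, and "module avail" lists what is actually installed:

    $ module avail                 # list installed software and versions
    $ module load R/4.3.0          # load a specific (hypothetical) R version
    $ module list                  # show currently loaded modules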

The file system is NFS (Network File System); each user is allocated 100GB for home and 100GB (extendable to 500GB) for scratch. Data folders can also be requested for additional storage and can be shared with multiple users and with other institutions using Aspera or Active MFT.
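A quick way to check usage against the home and scratch allocations is with standard Unix tools; the scratch path below is an assumption, since the actual mount points are site-specific:

    $ df -h ~                      # usage of the file system backing the home area
    $ du -sh /scratch/$USER        # total space consumed under an assumed scratch directory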

The primary cluster is open to all CCHMC employees and collaborators with a valid use case.

If you have a question that is unanswered after reviewing this information, please email our support team at help-cluster@bmi.cchmc.org.