Announcements & Reminders


New module system and packages are available on test-centos7 queue
We are currently rebuilding packages on CentOS 7 with up-to-date compilers as part of the OS upgrade and Chimera integration. So far, 158 packages are available, and most of the remaining ~2000 packages are expected to be compatible with CentOS 7. Please test your jobs in the new environment and let us know about any packages that fail.

Singularity is available
Singularity is currently available on nodes data2 and data4 and on the test-centos7 queue. It will also be provided on the Chimera compute partition in 2019. To get a shell with /sc/orga mounted in the image, use the command "$ singularity shell -B /sc/orga hello.simg".
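For non-interactive work, the same bind mount works with "singularity exec". A minimal sketch of composing such a command (the image name "hello.simg" and the script name are placeholders for your own):

```shell
# Sketch only: build the containerized command described above.
# "hello.simg" and "my_script.py" are placeholder names.
IMG="hello.simg"
BIND="/sc/orga"
CMD="singularity exec -B $BIND $IMG python my_script.py"
echo "$CMD"
# On a node where Singularity is installed (data2, data4, or a
# test-centos7 node), run it with: eval "$CMD"
```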

Manda, Mothra compute partition out of warranty from Jan 1, 2019
From Jan 1, 2019, the Manda and Mothra compute partitions will be out of warranty. We will continue to run these servers on a best-effort support basis.

INTEL Parallel Studio packages updates
INTEL 2019 Parallel Studio (compilers and tools) has been installed. The latest version of Intel's Parallel Studio includes C, C++, Fortran, Intel MPI, etc. It can be accessed with: module load intel/parallel_studio_xe_2019. Starting Tuesday, 9 Oct, it will be the default, so "module load intel" will load the 2019 version.

The 2013 and 2015 versions of Intel's Parallel Studio will be removed from the system on 15 Oct. The 2018 and 2019 versions will then be the only ones available.

Updating default R version from R 3.3.1 to R 3.5.1 (latest)
The default R version will be upgraded from 3.3.1 to 3.5.1. Please test your programs against it. If you would like to stay on version 3.3.1, use "module load R/3.3.1" or "module swap R R/3.3.1".

Web server upgrade
Web server web2 has been upgraded to CentOS 6.9, together with PHP version 5.6. Please validate your website code and replace deprecated packages if needed.

Interactive queue
A new interactive queue is available to all users. It consists of 5 Manda compute nodes with CentOS 6.9 netboot images. It is dedicated to interactive jobs and provides an alternative to the interactive nodes. It is ideal for job testing, since compute resources are granted through the LSF job scheduler. The maximum wall time is 12 hours and the default wall time is 2 hours.
You can access these nodes with "bsub -P acc_yourallocation -q interactive -n 1 -W 01:00 -Is /bin/bash".
To submit X-enabled interactive jobs, log in to Minerva with the "ssh -XYC" option and submit jobs with "bsub -P acc_yourallocation -q interactive -n 1 -W 01:00 -XF -Is /bin/bash".
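As a sketch, the flags above can be kept in variables so the same line serves different allocations and wall times ("acc_yourallocation" is a placeholder for your own allocation account):

```shell
# Sketch: assemble the interactive-session command from its pieces.
# "acc_yourallocation" is a placeholder allocation account name.
ALLOC="acc_yourallocation"
CORES=1
WALL="01:00"
CMD="bsub -P $ALLOC -q interactive -n $CORES -W $WALL -Is /bin/bash"
echo "$CMD"
```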

Collaboration account
On Minerva, we provide collaboration accounts for groups that need to perform group-related activities, such as managing a group website or archiving a project directory. A collaboration account also makes it easy for group members to hand responsibilities over to others. For details of the collaboration account, please refer to our webpage:

Reconfiguration of Minerva expressalloc queue
Based on the feedback we have received from users, we are going to restructure the expressalloc queue as part of the 17 Apr major PM. This should speed up the throughput of the shorter jobs as well as relieve pressure on the other queues.

The changes that will be put into place are:
- Maximum time limit for expressalloc will be increased to 6 hours.
- Two Mothra-type nodes that were being used for other purposes will be re-deployed to create a new mothra-exp partition dedicated to running only expressalloc jobs.
- Two Manda-type nodes that were used for other purposes will be re-deployed to create a new manda-exp partition that runs only expressalloc jobs.
- Two Mothra nodes will be limited to running only expressalloc and alloc jobs.

How it will work:
- bsub -q expressalloc : jobs can be dispatched to all nodes, including manda-exp and mothra-exp.
- bsub -q expressalloc -m manda : jobs can only be dispatched to manda partition nodes. manda-exp is NOT part of manda, so these jobs will not be dispatched to those nodes.
- bsub -q expressalloc -m mothra : same as above, but with the mothra and mothra-exp nodes.
- bsub -q expressalloc -m manda-exp : jobs can only run on the two manda-exp nodes.
- bsub -q expressalloc -m mothra-exp : jobs can only run on the two mothra-exp nodes.
- bsub -q alloc : jobs will be dispatched to manda and mothra nodes exactly as before, except that there are now 2 nodes for the exclusive use of the alloc queue.


To offset a portion of the investment that Sinai has made, we are requesting that researchers include funding in their grants for Scientific Computing. Please refer to the attached fee schedule which needs to be included in grants that plan to use Minerva or Scientific Computing’s resources. You will only be charged for the specific amount of resources that you use. If you require dedicated nodes or storage, we will also integrate new nodes dedicated to your research into our infrastructure. Please contact Patricia Kovatch for a letter of support or any dedicated computing resources. We greatly appreciate your assistance to make this endeavor sustainable.

Fee Schedule for Grants

In addition, a New Project Information Sheet (below) needs to be filled out by the PI PRIOR to the start date so we can properly bill grant funds.

New Project Information Sheet

Collecting published scientific papers is one of our important metrics for showing our value to the school, so we really need this list to be a true reflection of the value of the Scientific Computing facility to the Sinai scientific community. If you have a paper that has been published using our resources that we can add to this list, please send details to

An acknowledgement of support from the Icahn School of Medicine at Mount Sinai should appear in a publication of any material, whether copyrighted or not, based on or developed with Mount Sinai-supported computing resources. Please use the following statement: “This work was supported in part through the computational resources and staff expertise provided by the Department of Scientific Computing at the Icahn School of Medicine at Mount Sinai.”

The Department of Scientific Computing offers direct development of scientific software projects at a rate of $100/hr. Initial consulting will be offered free for one hour in order to define the project and estimate its time and resource requirements. This service and its fees apply to novel code building, porting of existing code, and performance tuning over and above standard scientific computing administration, such as troubleshooting, installing software, and debugging for Minerva. If you are interested in setting up a consultation, please email and one of our staff will get in touch with you.

There have been occasions when users have been monopolizing the interactive nodes to the point that others cannot gain access. These nodes are a limited resource and are intended to facilitate the occasional need to interact with programs in real time for purposes such as debugging or data exploration. Please be considerate of others not yet logged on when launching tasks on interactive1 and interactive2.

If you notice someone who may be pushing the limits, please send a note to and someone in Scientific Computing will address the issue.

A reminder, /scratch and /projects are both in the GPFS file system. GPFS has minimal protection and explicitly does not guard against data loss or corruption. Please use the archival storage system called TSM to make a copy of any important files. Please click here for more information.
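On systems where the TSM client is installed, archiving is typically done with the "dsmc" command. A hedged sketch of composing such a command (the project path is a placeholder, and the exact client configuration on Minerva may differ; see the linked page for the supported procedure):

```shell
# Sketch: archive a directory recursively with the TSM client.
# The path below is a placeholder for your own project directory.
DIR="/sc/orga/projects/myproject/important_data/"
CMD="dsmc archive \"$DIR\" -subdir=yes"
echo "$CMD"
```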

PM Reminder: There will be periodic scheduled Preventative Maintenance (PM) sessions on Tuesdays, beginning at 8 am, to apply software updates (including security patches) and hardware fixes. Notice of each PM will be sent to the Minerva mailing list in advance.
