
Announcements & Reminders


Chimera Full PM – October 22nd 2019 – 8AM - 8PM
On October 22nd 2019, a full PM is scheduled for the Big Omics Data Engine 2 (BODE2) installation. Lenovo and system administrators will be adjusting the current InfiniBand (IB) network to accommodate the BODE2 compute nodes.

During this PM, the entire Chimera cluster, including the Hydra and Orga file systems, will not be accessible to users.

Minerva changes beginning October 15th

• All queues in the old BODE partition are now closed and the compute nodes will be shut down.
• Old Minerva login nodes are now closed, apart from the following:
        • minerva2.hpc.mssm.edu
• Please copy your old HPC home directory to the Chimera side.
        • i.e., log in to Chimera and copy the files under the link /hpc/users/userid/userid-old-home to /hpc/users/userid. Use the rsync command (see the man pages for details, and the sketch after this list). If you want to skip files that are already newer at the destination, use the update option: -u. The link will no longer be valid.
• /sc/orga/scratch is now read-only; files in that directory will be purged after 2 weeks, on October 29th.
        • Please start using /sc/hydra/scratch as your scratch directory.
• Please copy your work directory (/sc/orga/work/userid) from Orga to Hydra (/sc/hydra/work/userid). Check that your soft links are still valid.
• HPC admins will handle the project directory migration.
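A minimal rsync sketch of the copies described above; userid is a placeholder for your own user ID, and you should confirm the exact paths on your account:

$ # Copy the old home directory into your Chimera home; -u skips files that are already newer at the destination
$ rsync -avu /hpc/users/userid/userid-old-home/ /hpc/users/userid/
$ # Copy the Orga work directory over to Hydra
$ rsync -av /sc/orga/work/userid/ /sc/hydra/work/userid/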

We’re happy to announce that we received a $2M award for the Big Omics Data Engine Supercomputer 2 (BODE2)!!

The final machine will consist of:
- 3,840 cores (24-core 2.9 GHz Cascade Lake 8268 processors) across 80 nodes
- 192 GB of memory per node
- 240 GB of SSD per node
- 15 TB of memory (collectively)
- 11 PB of usable storage (14 PB raw, 1,008 total drives)

We are ordering the machine now, with installation expected in early October and production in November.

In preparation for BODE2, we will be decommissioning BODE on September 1.

Reduction in cost of Minerva Storage
As part of the Mount Sinai research community, one of our goals is to provide maximal value for our computational and data resources. As a result of our continuing efforts, we are happy to announce that we can pass along a 45% savings for Minerva storage in 2019.

Effective 1 January 2019, the cost of Minerva storage will be $107/TB/Yr ($8.92/TB/Mo). This has been approved by Compliance and Finance. Charges for Minerva storage for 2019 will be recalculated to reflect this change and we will be issuing invoices for 2019 shortly.

External user account renewal is required quarterly by PI
Starting this year, all external user accounts must be renewed quarterly by the respective PI. This policy is enforced in order to comply with IT's security rules. The first renewal will be on May 6, 2019.
External accounts will be locked unless a renewal request is sent by the external user's PI. If you have an active external account, please contact your Sinai PI for the renewal.

Chimera is open to all users
The Chimera compute partition opened to all users on April 1, 2019.
Town Hall Slides:
Slides for training class “Introduction to Chimera”:
The old Minerva partition will be retired on July 1st.

Chimera queue adjustments
Several minor changes are being made to the Chimera queues:
The normal queue has been expanded to 270 nodes and is now the default queue.
User-project association is now enforced on all queues, and you will need a valid project (-P acc_xxx) for your job to run. The Gold server is no longer actively queried on Chimera, to reduce job failures.
A default memory of 3 GB per processor core is enforced if no memory requirement is specified.
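As an illustration (not an official template), a submission that satisfies both requirements might look like the following; acc_myproject, my_job.sh, and the memory value are placeholders, and the memory units are assumed to be MB per core:

$ # Hypothetical example: explicit project, 4 cores, and a per-core memory reservation instead of the 3 GB default
$ bsub -P acc_myproject -q normal -n 4 -R "rusage[mem=4000]" -W 02:00 ./my_job.sh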

GPU node in Chimera partition
The interactive GPU node is accessible by:
$ bsub -q interactive -W 01:00 -n 1 -R v100 -R rusage[ngpus_excl_p=1] -Is /bin/bash
The GPU compute node is accessible by:
$ bsub -q gpu -n 1 -R rusage[ngpus_excl_p=1] -W 00:10 echo "Hello World"
$ bsub -q gpu -n 1 -R v100 -R rusage[ngpus_excl_p=1] -W 00:10 echo "Hello World"

TSM files expire in 2019
The TSM files that were archived in 2013 will expire in 2019. A reminder email will be sent to users 3 months ahead. There is no simple command to extend the file expiration date. If you would like to keep these files, you will need to retrieve them to disk and archive again. If you would like to query other users’ archived files, please send in a ticket.
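A rough sketch of that retrieve-and-re-archive cycle, assuming the standard TSM dsmc client and using a placeholder file path:

$ # List your archived copies of the file
$ dsmc query archive "/sc/orga/projects/myproject/data.tar"
$ # Retrieve the archived file back to disk, then archive it again so the new copy gets a fresh expiration date
$ dsmc retrieve "/sc/orga/projects/myproject/data.tar"
$ dsmc archive "/sc/orga/projects/myproject/data.tar" -description="re-archived 2019"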

Amazon AWS transfers from Minerva
If you are copying large amounts of data to or from Amazon AWS, please keep to an upper limit of 100 MB/s via your AWS CLI configuration. Minerva is a shared environment, and aggressive transfers from a few processes are bound to throttle performance for everyone else.
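One way to enforce such a cap, if you use the AWS CLI's S3 commands, is the max_bandwidth setting (the bucket and file path below are placeholders):

$ # Cap S3 transfer bandwidth for the default profile at 100 MB/s
$ aws configure set default.s3.max_bandwidth 100MB/s
$ # Subsequent s3 cp/sync transfers respect the limit
$ aws s3 cp /sc/hydra/scratch/userid/data.tar s3://my-bucket/data.tar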

We do not backup user data. It is the users' responsibility to ensure data is backed up in accordance with their funding agency or private organization requirements. Refer to "Preserving Data: Who’s Got Your Back(up)?" for helpful hints in developing your individual backup strategy.

New module system and packages are available on test-centos7 queue
We are currently rebuilding packages on CentOS 7 with up-to-date compilers, in conjunction with the OS upgrade and Chimera integration. Currently, 158 packages are available, and the rest of the ~2,000 packages may already be compatible with CentOS 7. We would like you to test your jobs in the new environment and let us know about any packages that fail.
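A quick way to try the new environment is an interactive session on the test-centos7 queue; the allocation and module names below are placeholders:

$ bsub -P acc_yourallocation -q test-centos7 -n 1 -W 01:00 -Is /bin/bash
$ # Inside the session, list the rebuilt modules and load the ones your jobs need
$ module avail
$ module load python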

Singularity is available
Singularity is currently available on nodes data2 and data4 and on the test-centos7 queue. It will also be provided in the Chimera compute partition in 2019. If you would like to get a shell with Orga mounted in the image, use the command "$ singularity shell -B /sc/orga hello.simg".
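Similarly, a non-interactive command can be run in the same image with singularity exec; hello.simg and the script path are placeholders:

$ singularity exec -B /sc/orga hello.simg python /sc/orga/work/userid/myscript.py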

Manda and Mothra compute partitions out of warranty from Jan 1, 2019
From Jan 1, 2019, the Manda and Mothra compute partitions will be out of warranty. We will continue to run these servers on a best-effort support basis.

INTEL Parallel Studio packages updates
INTEL 2019 Parallel Studio (Compilers and tools) has been installed. The latest version of Intel's Parallel Studio includes C, C++, Fortran, Intel MPI, etc. It can be accessed by specifying: module load intel/parallel_studio_xe_2019. Starting Tuesday, 9 Oct, it will be the default, so that "module load intel" will load the 2019 version.

The 2013 and 2015 versions of Intel's Parallel Studio package will be removed from the system on 15 Oct. The 2018 and 2019 versions will be the only ones available.

Updating default R version from R 3.3.1 to R 3.5.1 (latest)
The default R version will be upgraded to 3.5.1. Please test your corresponding program. If you would like to stick to version 3.3.1, please use "module load R/3.3.1" or "module swap R R/3.3.1".

Web server upgrade
Web server web2 has been upgraded to CentOS 6.9, together with PHP version 5.6. Please validate your website code and replace deprecated packages if needed.

Interactive queue
A new interactive queue is available to all users. It consists of 5 Manda compute nodes with Centos 6.9 netboot images. It is dedicated to interactive jobs and provides an alternative to the interactive nodes. It is ideal for job testing since the compute resources will be granted through the LSF job scheduler. The maximum wall time is 12 hours and the default wall time is 2 hours.
You can access these nodes with "bsub -P acc_yourallocation -q interactive -n 1 -W 01:00 -Is /bin/bash".
To submit interactive jobs with X11 enabled, log in to Minerva with the "ssh -XYC" option and submit jobs using "bsub -P acc_yourallocation -q interactive -n 1 -W 01:00 -XF -Is /bin/bash".

Collaboration account
On Minerva, we provide collaboration accounts for groups that need to perform group-related activities, such as managing a group website or archiving a project directory. A collaboration account also makes it easy for group members to hand their responsibilities over to others. For details on collaboration accounts, please refer to our webpage:

Reconfiguration of Minerva expressalloc queue
Based on the feedback we have received from users, we are going to restructure the expressalloc queue as part of the 17 Apr major PM. This should speed up the throughput of the shorter jobs as well as relieve pressure on the other queues.

The changes that will be put into place are:
- Maximum time limit for expressalloc will be increased to 6 hours.
- Two Mothra type nodes that were being used for other purposes will be re-deployed to create a new mothra-exp partition and dedicated only to running expressalloc jobs
- Two Manda type nodes that were used for other purposes will be re-deployed to create a new manda-exp partition and run only expressalloc jobs
- Two Mothra nodes will be limited to running only expressalloc and alloc jobs

How it will work:
bsub -q expressalloc ... Jobs can be dispatched to all nodes, including manda-exp and mothra-exp.
bsub -q expressalloc -m manda Jobs can only be dispatched to manda partition nodes. manda-exp is NOT part of manda, so these jobs will not be dispatched to it.
bsub -q expressalloc -m mothra Same as above, only with mothra and mothra-exp nodes.
bsub -q expressalloc -m manda-exp Jobs can only run on the two manda-exp nodes.
bsub -q expressalloc -m mothra-exp Jobs can only run on the two mothra-exp nodes.
bsub -q alloc Jobs will be dispatched to manda and mothra nodes exactly as before, except there are now 2 nodes for the exclusive use of the alloc queue.


To offset a portion of the investment that Sinai has made, we are requesting that researchers include funding in their grants for Scientific Computing. Please refer to the attached fee schedule which needs to be included in grants that plan to use Minerva or Scientific Computing’s resources. You will only be charged for the specific amount of resources that you use. If you require dedicated nodes or storage, we will also integrate new nodes dedicated to your research into our infrastructure. Please contact Patricia Kovatch for a letter of support or any dedicated computing resources. We greatly appreciate your assistance to make this endeavor sustainable.

Fee Schedule for Grants

In addition, a New Project Information Sheet (below) needs to be filled out by the PI PRIOR to the start date so we can properly bill grant funds.

New Project Information Sheet

Collecting scientific published papers is one of our important metrics to show our value to the school, so we really need this list to be a true reflection of the value of the Scientific Computing facility to the Sinai scientific community. If you have a paper that has been published using our resources that we can add to this list, please send details to

An acknowledgement of support from the Icahn School of Medicine at Mount Sinai should appear in a publication of any material, whether copyrighted or not, based on or developed with Mount Sinai-supported computing resources. Please use the following statement: “This work was supported in part through the computational resources and staff expertise provided by the Department of Scientific Computing at the Icahn School of Medicine at Mount Sinai.”

The Department of Scientific Computing offers direct development of scientific software projects at a rate of $100/hr. Initial consulting will be offered free for one hour in order to define the project and estimate its time and resource requirements. This service and its fees apply to novel code building, porting of existing code, and performance tuning over and above standard scientific computing administration, such as troubleshooting, installing software, and debugging for Minerva. If you are interested in setting up a consultation, please email and one of our staff will get in touch with you.

There have been occasions when users have been monopolizing the interactive nodes to the point that others cannot gain access. These nodes are a limited resource and are intended to facilitate the occasional need to interact with programs in real time for purposes such as debugging or data exploration. Please be considerate of others not yet logged on when launching tasks on interactive1 and interactive2.

If you notice someone who may be pushing the limits, please send a note to and someone in Scientific Computing will address the issue.

A reminder: /scratch and /projects are both in the GPFS file system. GPFS has minimal protection and explicitly does not guard against data loss or corruption. Please use the archival storage system, TSM, to make a copy of any important files.

PM Reminder: There will be periodic scheduled Preventative Maintenance (PM) sessions on Tuesdays, beginning at 8 am, to update software (including security patches) and apply other hardware fixes. Advance notice of each PM will be sent to the Minerva mailing list.
