
Announcements & Reminders

ANNOUNCEMENTS

Interactive queue
A new interactive queue is available to all users. It consists of five Manda compute nodes running CentOS 6.9 netboot images. The queue is dedicated to interactive jobs and provides an alternative to the interactive nodes; it is ideal for job testing, since compute resources are granted through the LSF job scheduler. The maximum wall time is 12 hours and the default wall time is 2 hours.
You can access these nodes with “bsub -P acc_yourallocation -q interactive -n 1 -W 01:00 -Is /bin/bash”.
To submit interactive jobs with X11 forwarding enabled, log in to Minerva with the “ssh -XYC” option and additionally submit jobs with the “-XF” flag: “bsub -P acc_yourallocation -q interactive -n 1 -W 01:00 -XF -Is /bin/bash”.
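For example, a short test session might look like the following (“acc_yourallocation” is a placeholder for your own allocation account; “bqueues” is the standard LSF command for inspecting queue state):

    # Check how busy the interactive queue is before submitting
    bqueues interactive
    # Request one core for one hour and drop into an interactive shell
    bsub -P acc_yourallocation -q interactive -n 1 -W 01:00 -Is /bin/bash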

Dedicated transfers from new data4
We have a new data transfer node that can be used to run long-running data transfers, including archival and retrieval commands. It is similar to the internal login nodes but runs a CentOS 7 image instead, so job submission will not work from this node.
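As a sketch, a long-running archive from this node might look like the following, assuming the node is reachable as “data4” from within Minerva and using a hypothetical project path:

    # Log in to the dedicated transfer node (hostname assumed to be data4)
    ssh data4
    # Archive a project directory to TSM, recursing into subdirectories
    # (/sc/orga/projects/myproject/ is a hypothetical path)
    dsmc archive -subdir=yes "/sc/orga/projects/myproject/"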

Collaboration account
On Minerva, we provide collaboration accounts for groups that need to perform group-related activities, such as managing a group website or archiving a project directory. A collaboration account also makes it convenient for group members to hand their responsibilities over to others. For details of the collaboration account, please refer to our webpage: https://hpc.mssm.edu/access/collaboration.

Incoming data
If you need to bring large quantities of data (>50 TB) into Minerva from outside the Sinai network, please let us know and we will gladly work with you to facilitate the transfer.

Reconfiguration of Minerva expressalloc queue
Based on the feedback we have received from users, we will restructure the expressalloc queue as part of the 17 Apr major PM. This should speed up the throughput of shorter jobs as well as relieve pressure on the other queues.

The changes that will be put into place are:
- The maximum time limit for expressalloc will be increased to 6 hours.
- Two Mothra-type nodes previously used for other purposes will be re-deployed as a new mothra-exp partition dedicated to running only expressalloc jobs.
- Two Manda-type nodes previously used for other purposes will be re-deployed as a new manda-exp partition dedicated to running only expressalloc jobs.
- Two Mothra nodes will be limited to running only expressalloc and alloc jobs.

How it will work:
bsub -q expressalloc …
    Jobs can be dispatched to all nodes, including manda-exp and mothra-exp.
bsub -q expressalloc -m manda
    Jobs are dispatched only to manda partition nodes. manda-exp is NOT part of manda, so these jobs will not be dispatched to the manda-exp nodes.
bsub -q expressalloc -m mothra
    Same as above, but with the mothra and mothra-exp nodes.
bsub -q expressalloc -m manda-exp
    Jobs run only on the two manda-exp nodes.
bsub -q expressalloc -m mothra-exp
    Jobs run only on the two mothra-exp nodes.
bsub -q alloc
    Jobs are dispatched to manda and mothra nodes exactly as before, except that there are now two nodes for the exclusive use of the alloc queue.
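To confirm where a job actually landed, the standard LSF tools can be used (the job ID below is illustrative, and we assume the partitions are exposed as LSF host groups):

    # Show full details, including the execution host, of a submitted job
    bjobs -l 123456
    # List the hosts and their state in one of the new partitions
    bhosts mothra-exp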

New quota policy (15T per user) in place for scratch.
Over the last few weeks, scratch usage on Orga (/sc/orga/scratch) has frequently reached its ceiling. This causes unnecessary errors for users who rely on it to stage large data temporarily. To ameliorate the issue, we will implement a per-user quota of 15 TB on the scratch folders on Orga. This fair-share policy prevents any one user from consuming all of the scratch space and reduces related job failures.

New script "showquota" available.
To check your GPFS quotas, you can run the "showquota" command. For details, please refer to our documentation page: https://hpc.mssm.edu/docs/directories.
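For example (assuming showquota reports your usage when run without arguments; the exact output format may differ):

    # Report your GPFS quota usage (site-provided script)
    showquota
    # Cross-check scratch usage directly; this can be slow on large trees
    du -sh /sc/orga/scratch/$USER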

REMINDERS

FEE SCHEDULE FOR GRANTS
To offset a portion of the investment that Sinai has made, we request that researchers include funding for Scientific Computing in their grants. Please refer to the attached fee schedule, which needs to be included in grants that plan to use Minerva or Scientific Computing’s resources. You will only be charged for the specific amount of resources that you use. If you require dedicated nodes or storage, we will integrate new nodes dedicated to your research into our infrastructure. Please contact Patricia Kovatch for a letter of support or for any dedicated computing resources. We greatly appreciate your assistance in making this endeavor sustainable.

Fee Schedule for Grants

NEW PROJECT INFORMATION SHEET
In addition, a New Project Information Sheet (below) needs to be filled out by the PI PRIOR to the start date so we can properly bill grant funds.

New Project Information Sheet

PUBLISHED PAPERS
Collecting published scientific papers is one of the important metrics we use to show our value to the school, so we need this list to be a true reflection of the value of the Scientific Computing facility to the Sinai scientific community. If you have a published paper that used our resources, please send the details to lili.gai@mssm.edu so we can add it to the list.

ACKNOWLEDGE SCIENTIFIC COMPUTING AT MOUNT SINAI
An acknowledgement of support from the Icahn School of Medicine at Mount Sinai should appear in a publication of any material, whether copyrighted or not, based on or developed with Mount Sinai-supported computing resources. Please use the following statement: “This work was supported in part through the computational resources and staff expertise provided by the Department of Scientific Computing at the Icahn School of Medicine at Mount Sinai.”

SCIENTIFIC COMPUTING DEPARTMENT CONSULTING SERVICES
The Department of Scientific Computing offers direct development of scientific software projects at a rate of $100/hr. Initial consulting will be offered free for one hour in order to define the project and estimate its time and resource requirements. This service and its fees apply to novel code building, porting of existing code, and performance tuning over and above standard scientific computing administration, such as troubleshooting, installing software, and debugging for Minerva. If you are interested in setting up a consultation, please email hpchelp@mssm.edu and one of our staff will get in touch with you.

INTERACTIVE NODE ETIQUETTE!
There have been occasions when users have been monopolizing the interactive nodes to the point that others cannot gain access. These nodes are a limited resource and are intended to facilitate the occasional need to interact with programs in real time for purposes such as debugging or data exploration. Please be considerate of others not yet logged on when launching tasks on interactive1 and interactive2.

If you notice someone who may be pushing the limits, please send a note to hpchelp@mssm.edu and someone in Scientific Computing will address the issue.

FILES ON THE GPFS FILE SYSTEM ARE NOT BACKED UP
A reminder: /scratch and /projects both reside on the GPFS file system. GPFS has minimal protection and explicitly does not guard against data loss or corruption. Please use the archival storage system, TSM, to keep a copy of any important files. Please see our website for more information.
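As a minimal sketch, archiving and later retrieving a results directory with the TSM client might look like this (the path is a hypothetical example):

    # Archive a results directory to TSM, recursing into subdirectories
    dsmc archive -subdir=yes "/sc/orga/projects/myproject/results/"
    # Verify what was archived
    dsmc query archive "/sc/orga/projects/myproject/results/*"
    # Retrieve the files later if the originals are lost
    dsmc retrieve -subdir=yes "/sc/orga/projects/myproject/results/*"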

PM Reminder: There will be periodic scheduled Preventative Maintenance (PM) sessions on Tuesdays, beginning at 8 am, to update software (including security patches) and apply other hardware fixes. Advance notice of each PM will be sent to the Minerva mailing list.
