Cori ExVivo for JGI¶
ExVivo is a specialized system for running JGI applications that require more shared memory than is available on standard Cori Genepool hardware.
Access to Cori ExVivo has been available to JGI users since February 6, 2019. JGI users whose work requires the large RAM available on the ExVivo nodes should contact JGI management for access.

To use Cori ExVivo, first connect to cori.nersc.gov, load the `esslurm` module, and request a Slurm allocation. The request command should include an `-A` argument with your project name and specify one of the ExVivo QOS options described below. For example:
```
elvis@cori10:~> module load esslurm
elvis@cori10:~> salloc -A fungalp -q jgi_interactive
salloc: Granted job allocation 1
salloc: Waiting for resource configuration
salloc: Nodes exvivo006 are ready for job
elvis@exvivo006:~> exit
exit
srun: Terminating job step 1.0
salloc: Relinquishing job allocation 1
elvis@cori10:~> sbatch -C skylake -A fungalp -q jgi_exvivo bioinformatics.sh
Submitted batch job 2
elvis@cori10:~>
```
- `jgi_exvivo` is intended for production use by applications and data sets which cannot be run on Cori Genepool due to large RAM requirements. The maximum walltime for an allocation is 7 days.
- `jgi_interactive` is intended for exploration and development. At most 4 nodes can be allocated to this QOS. The maximum walltime is 4 hours.
- `jgi_shared` is intended for jobs which require more than 128 GB of RAM but less than 768 GB. Use a `--mem=###GB` argument in the Slurm invocation to request the needed resources.
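For a `jgi_shared` job, the memory request belongs in the submission itself. A minimal batch script might look like the following sketch; the project name `fungalp`, the walltime, the memory value, and the application command are illustrative placeholders, not prescribed values:

```shell
#!/bin/bash
#SBATCH -C skylake        # target the ExVivo (Skylake) nodes
#SBATCH -A fungalp        # replace with your own project name
#SBATCH -q jgi_shared     # shared QOS for jobs needing 128-768 GB RAM
#SBATCH --mem=256GB       # memory request; must be between 128GB and 768GB
#SBATCH -t 12:00:00       # requested walltime (example value)
#SBATCH -N 1              # single node; multi-node allocations are not supported

# Placeholder for the actual application command:
./bioinformatics_step.sh
```

Submit it with `sbatch` after loading the `esslurm` module, as in the transcript above.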
Multi-node allocations will not be supported on ExVivo.
Cori ExVivo contains 20 total nodes. Each node has the following configuration:
- 2 Intel® Xeon® Gold 6140 (Skylake) processors, 36 cores total
- 1.5 TB RAM
- 5.1 TB available local disk (solid state drive), mounted as
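From inside an allocation, standard Linux tools can confirm this configuration; nothing here is ExVivo-specific:

```shell
# Run on an allocated ExVivo node to inspect its resources:
nproc      # logical CPU count for the node's 36 physical cores
free -h    # total RAM (~1.5 TB on an ExVivo node)
df -h      # mounted file systems, including the local SSD
```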
The user environment on ExVivo is very similar to that of a Cori login node: common software such as Cori modules, Shifter, and Anaconda is available.
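Because the environment mirrors a Cori login node, familiar container workflows carry over; for example, a Shifter image can be used inside an interactive allocation. The image name below is only an illustrative placeholder; use `shifterimg images` to see what is actually available:

```shell
# Inside a jgi_interactive allocation on an ExVivo node.
# "ubuntu:18.04" is a placeholder image name, not a recommendation.
shifterimg pull ubuntu:18.04
shifter --image=ubuntu:18.04 bash -c 'cat /etc/os-release'
```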
The following file systems are available on ExVivo:
- Cori Scratch (
- Data and Archive (read-only access)