"LSF" -
Query Platform Load Sharing Facility (LSF)/OpenLava environment variable
LSB_HOSTS (see the first sketch after this list).
"PJM" -
Query the hostname file of the Fujitsu Technical Computing Suite
(which we choose to shorten as "PJM"), given by environment variable
PJM_O_NODEINF.
The PJM_O_NODEINF file lists the hostnames of the nodes allotted.
This function returns those hostnames each repeated availableCores()
times, where availableCores() reflects PJM_VNODE_CORE.
For example, for pjsub -L vnode=2 -L vnode-core=8 hello.sh, the
PJM_O_NODEINF file gives two hostnames, and PJM_VNODE_CORE gives
eight cores per host, resulting in a character vector of 16 hostnames
(two unique hostnames, each repeated eight times); a sketch of this
appears after this list.
"PBS" -
Query TORQUE/PBS environment variable PBS_NODEFILE.
If this is set and specifies an existing file, then the set
of workers is read from that file, where one worker (node)
is given per line.
An example of a job submission that results in this is
qsub -l nodes=4:ppn=2, which requests four nodes with two cores
each; the file then lists each of the four hostnames twice. A sketch
of reading this file appears after this list.
"SGE" -
Query the "Grid Engine" scheduler environment variable PE_HOSTFILE.
An example of a job submission that results in this is
qsub -pe mpi 8 (or qsub -pe ompi 8), which
requests eight cores on any number of machines (a sketch of
parsing this file appears after this list).
Known Grid Engine schedulers are
Oracle Grid Engine (OGE; Oracle acquired Sun Microsystems in 2010),
Univa Grid Engine (UGE; fork of open-source SGE 6.2u5),
Altair Grid Engine (AGE; Altair acquired Univa Corporation in 2020), and
Son of Grid Engine (SGE aka SoGE; open-source fork of SGE 6.2u5).
"Slurm" -
Query Slurm environment variable SLURM_JOB_NODELIST (falling back
to legacy SLURM_NODELIST) and parse the set of nodes.
Then query Slurm environment variable SLURM_JOB_CPUS_PER_NODE
(falling back to legacy SLURM_TASKS_PER_NODE) to infer how many CPU
cores Slurm has allotted to each of the nodes. If SLURM_CPUS_PER_TASK
is set, which is always a scalar, then that is respected too, i.e.
if it is smaller, then that value is used for all nodes.
For example, if SLURM_JOB_NODELIST="n1,n[03-05]" (which expands to
c("n1", "n03", "n04", "n05")) and SLURM_JOB_CPUS_PER_NODE="2(x2),3,2"
(which expands to c(2, 2, 3, 2)), then
c("n1", "n1", "n03", "n03", "n04", "n04", "n04", "n05", "n05") is
returned. If, in addition, SLURM_CPUS_PER_TASK=1, which can happen
depending on hyperthreading configurations on the Slurm cluster, then
c("n1", "n03", "n04", "n05") is returned. A sketch of this parsing
appears after this list.
"custom" -
If option
parallelly.availableWorkers.custom
is set and is a function,
then this function will be called (without arguments) and its value
will be coerced to a character vector, which will be interpreted as
hostnames of available workers (see the example after this list).
It is safe for this custom function to call availableWorkers(); if
done, the custom function will not be recursively called.