Jobs are spawned by starting multiple R sessions on the command line
(similar to true batch systems).
The packages parallel and multicore are not used in any way.
makeClusterFunctionsMulticore(
  ncpus = max(getOption("mc.cores", parallel::detectCores()) - 1L, 1L),
  max.jobs,
  max.load,
  nice,
  r.options = c("--no-save", "--no-restore", "--no-init-file", "--no-site-file"),
  script
)
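For orientation, a minimal sketch of constructing these cluster functions and activating them for the current session via setConfig(); the values ncpus = 2L and nice = 19L are illustrative choices, not defaults:

library(BatchJobs)

# Run at most 2 jobs concurrently; nice the worker processes to the
# lowest priority so interactive work stays responsive.
cf <- makeClusterFunctionsMulticore(ncpus = 2L, nice = 19L)

# Use these cluster functions for registries in the current session.
setConfig(cluster.functions = cf)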
ncpus
[integer(1)]
Number of VPUs of worker.
Default is to use all cores but one, where the total number of cores
"available" is given by option mc.cores; if that is not set, it is
inferred by detectCores.
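In other words, the default shown in the usage above is computed as:

# All cores but one, but never fewer than 1.
max(getOption("mc.cores", parallel::detectCores()) - 1L, 1L)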
max.jobs
[integer(1)]
Maximal number of jobs that can run concurrently for the current registry.
Default is ncpus.
max.load
[numeric(1)]
Load average (of the last 5 min) at which the worker is considered occupied,
so that no job can be submitted.
Default is inferred by detectCores, cf. argument ncpus.
nice
[integer(1)]
Process priority to run R with, set via nice. Integers between -20 and 19 are allowed.
If missing, processes are not nice'd and the system default applies (usually 0).
r.options
[character]
Options for R and Rscript, one option per element of the vector,
a la "--vanilla".
Default is c("--no-save", "--no-restore", "--no-init-file", "--no-site-file").
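For example, to replace the default set with the single "--vanilla" option mentioned above (the ncpus value is illustrative):

cf <- makeClusterFunctionsMulticore(ncpus = 2L, r.options = "--vanilla")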
script
[character(1)]
Path to a helper bash script which interacts with the worker.
You really should not have to touch this, as doing so would imply that we have screwed up and
published an incompatible version for your system.
This option is only provided as a last resort for very experienced hackers.
Note that the path has to be absolute.
This is what is done in the package:
https://github.com/tudo-r/BatchJobs/blob/master/inst/bin/linux-helper
The default is to take it from the package directory.
Other clusterFunctions:
makeClusterFunctionsInteractive(), makeClusterFunctionsLSF(),
makeClusterFunctionsLocal(), makeClusterFunctionsOpenLava(),
makeClusterFunctionsSGE(), makeClusterFunctionsSLURM(),
makeClusterFunctionsSSH(), makeClusterFunctionsTorque(),
makeClusterFunctions()
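As a sketch of how these cluster functions fit into a typical BatchJobs workflow (the registry id, the mapped function, and the input vector are made up for illustration):

library(BatchJobs)

# Spawn jobs as independent R sessions on 2 local cores.
setConfig(cluster.functions = makeClusterFunctionsMulticore(ncpus = 2L))

# Create a registry, define one job per input element, and submit.
reg <- makeRegistry(id = "multicore_demo")
batchMap(reg, function(x) x^2, 1:10)
submitJobs(reg)

# Block until all jobs terminate, then fetch the results as a list.
waitForJobs(reg)
loadResults(reg)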