SeqArray (version 1.12.5)

seqParallel: Apply Functions in Parallel

Description

Applies a user-defined function in parallel.

Usage

seqParallel(cl=getOption("seqarray.parallel", FALSE), gdsfile, FUN,
    split=c("by.variant", "by.sample", "none"), .combine="unlist",
    .selection.flag=FALSE, ...)

seqParApply(cl=getOption("seqarray.parallel", FALSE), x, FUN,
    load.balancing=TRUE, ...)

Arguments

cl
NULL or FALSE: serial processing; TRUE: parallel processing with the maximum number of cores minus one; a numeric value: the number of cores to be used; or a cluster object for parallel processing, created by the functions in the package parallel, such as makeCluster. See Details
gdsfile
a SeqVarGDSClass object, or NULL
FUN
the function to be applied, with a signature like FUN(gdsfile, ...) or FUN(...)
split
split the dataset by variant or by sample across multiple processes, or "none" for no splitting
.combine
a function for combining results from different processes; "unlist" (the default) produces a vector containing all atomic components; "list" returns a list of the per-process results; "none" returns nothing; alternatively, a binary function such as "+"
.selection.flag
if TRUE, a logical selection vector is passed as the second argument, i.e. FUN(gdsfile, selection, ...)
x
a vector (atomic or list), passed to FUN
load.balancing
if TRUE, call clusterApplyLB instead of clusterApply
...
optional arguments to FUN
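
seqParApply does not appear in the Examples below; a minimal sketch (assuming SeqArray is installed) showing two forms of the cl argument:

```r
library(SeqArray)

# serial run: cl=FALSE applies FUN to each element of x in the current process
seqParApply(FALSE, 1:8, function(i) i*i)

# parallel run: cl=2 uses two cores; tasks are dispatched with load
# balancing (clusterApplyLB) because load.balancing=TRUE by default
seqParApply(2, 1:8, function(i) i*i)
```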

Value

A vector or list of values.

Details

When cl is TRUE or a numeric value, forking is used to create child processes as copies of the current R process; see ?parallel::mcfork. However, forking is not available on Windows, so makeCluster is called instead to create a cluster, which is deallocated after FUN returns.
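
On platforms without fork, the same behaviour can be arranged explicitly by creating a socket cluster and passing it as cl; a hedged sketch using the example GDS file:

```r
library(SeqArray)
library(parallel)

cl <- makeCluster(2)    # socket-based workers, available on all platforms

gdsfile <- seqOpen(seqExampleFileName("gds"))

# count the selected variants assigned to each worker
n <- seqParallel(cl, gdsfile,
    FUN = function(f) sum(seqGetFilter(f)$variant.sel),
    split = "by.variant")
n    # one count per worker, combined with the default .combine="unlist"

seqClose(gdsfile)
stopCluster(cl)    # deallocate the cluster explicitly
```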

It is strongly suggested to use seqParallel together with seqParallelSetup, which works around the lack of forking on Windows and avoids the overhead of repeatedly allocating clusters.

The user-defined function can use two predefined variables, SeqArray:::process_count and SeqArray:::process_index, to determine the total number of cluster nodes and which node is currently running.

seqParallel(, gdsfile=NULL, FUN=..., split="none") can be used to set up multiple streams of pseudo-random numbers; see nextRNGStream and nextRNGSubStream in the package parallel.
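
One way this might look, seeding an independent L'Ecuyer stream on each process (a sketch, not taken from the package; assumes a backend registered with seqParallelSetup):

```r
library(SeqArray)
library(parallel)

seqParallelSetup(2)

RNGkind("L'Ecuyer-CMRG")
set.seed(100)
seed <- .Random.seed

# advance to a distinct stream on each process, then draw from it
seqParallel(, NULL, FUN = function(seed) {
        for (i in seq_len(SeqArray:::process_index))
            seed <- parallel::nextRNGStream(seed)
        assign(".Random.seed", seed, envir=globalenv())
        runif(1)    # each process draws from its own independent stream
    }, split="none", seed=seed)

seqParallelSetup(FALSE)
```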

See Also

seqSetFilter, seqGetData, seqApply, seqParallelSetup

Examples

library(parallel)

# choose an appropriate cluster size or number of cores
seqParallelSetup(2)


# the GDS file
(gds.fn <- seqExampleFileName("gds"))

# display
(gdsfile <- seqOpen(gds.fn))

# the uniprocessor version
afreq1 <- seqParallel(FALSE, gdsfile, FUN = function(f) {
        seqApply(f, "genotype", as.is="double",
            FUN=function(x) mean(x==0, na.rm=TRUE))
    }, split = "by.variant")

length(afreq1)
summary(afreq1)


# run in parallel
afreq2 <- seqParallel(, gdsfile, FUN = function(f) {
        seqApply(f, "genotype", as.is="double",
            FUN=function(x) mean(x==0, na.rm=TRUE))
    }, split = "by.variant")

length(afreq2)
summary(afreq2)


# check
length(afreq1)  # 1348
all(afreq1 == afreq2)

################################################################
# check -- variant splits

seqParallel(, gdsfile, FUN = function(f) {
        v <- seqGetFilter(f)
        sum(v$variant.sel)
    }, split = "by.variant")
# [1] 674 674


################################################################

seqParallel(, NULL, FUN = function() {
        paste(SeqArray:::process_index, SeqArray:::process_count, sep=" / ")
    }, split = "none")


################################################################


# close the GDS file
seqClose(gdsfile)


seqParallelSetup(FALSE)
