Description

pSibuya evaluates the distribution function,
dSibuya evaluates the probability mass function, and
rSibuya generates random variates from $Sib(alpha)$ using
the algorithm described in Hofert (2011), Proposition 3.2.
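As a sketch of what dSibuya and pSibuya compute, the standard Sibuya identities $P(X = k) = (-1)^(k-1) choose(alpha, k)$ and $P(X > k) = (-1)^k choose(alpha - 1, k)$ can be evaluated directly with base R's choose (which accepts a non-integer first argument). The helper names below are made up for illustration; the package implementations are more careful numerically:

```r
## Naive Sibuya pmf and cdf via the standard binomial-coefficient
## identities (illustrative sketch, not the package code)
dSib.simple <- function(x, alpha) (-1)^(x - 1) * choose(alpha, x)
pSib.simple <- function(x, alpha) 1 - (-1)^x * choose(alpha - 1, x)

alpha <- 0.4
dSib.simple(1, alpha)           # P(X = 1) = choose(alpha, 1) = 0.4
sum(dSib.simple(1:10, alpha))   # telescopes to pSib.simple(10, alpha)
```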
dsumSibuya gives the probability mass function of the
$n$-fold convolution of Sibuya variables, that is, of the sum of $n$
independent Sibuya random variables,
$S = sum(i=1..n) X[i]$, where $X[i] ~ Sib(alpha)$.
This probability mass function can be shown (see Hofert (2010, p. 99)) to equal $$\sum_{j=1}^n{n\choose j}{j\alpha\choose k} (-1)^{k-j},\qquad k\in\{n,n+1,\dots\}.$$
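This sum can be evaluated naively as written, or, equivalently, as an $n$-th forward difference (extending the sum to $j = 0$ adds nothing, since choose(0, k) = 0 for $k >= 1$), which is presumably the idea behind method="diff". A hedged sketch with made-up helper names:

```r
## Direct evaluation of the n-fold convolution mass function
## sum_{j=1}^n choose(n,j) choose(j*alpha, k) (-1)^(k-j)
## (illustrative only; see the Note on its numerical fragility)
dsumSib.direct <- function(k, n, alpha) {
  j <- 1:n
  sum(choose(n, j) * choose(j * alpha, k) * (-1)^(k - j))
}

## The same sum as an n-th forward difference of g(j) = choose(j*alpha, k):
## sum_{j=0}^n choose(n,j) (-1)^(n-j) g(j) = diff(g, differences = n)
dsumSib.diff <- function(k, n, alpha)
  (-1)^(k - n) * diff(choose((0:n) * alpha, k), differences = n)

## Sanity check: for n = 2, P(S = 2) = P(X1 = 1) * P(X2 = 1) = alpha^2
dsumSib.direct(2, 2, 0.5)  # 0.25
dsumSib.diff  (2, 2, 0.5)  # 0.25
```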
Usage

rSibuya(n, alpha)
dSibuya(x, alpha, log=FALSE)
pSibuya(x, alpha, lower.tail=TRUE, log.p=FALSE)
dsumSibuya(x, n, alpha,
           method = c("log", "direct", "diff", "exp.log",
                      "Rmpfr", "Rmpfr0", "RmpfrM", "Rmpfr0M"),
           mpfr.ctrl = list(minPrec = 21, fac = 1.25, verbose=TRUE),
           log=FALSE)

Arguments

n: for rSibuya: sample size, that is, the length of the resulting
vector of random variates; for dsumSibuya: the number $n$ of summands.

x: integer values (quantiles) at which to compute the probability mass
or cumulative probability.

log, log.p: logical; if TRUE, probabilities p are given as log(p).

lower.tail: logical; if TRUE (the default), probabilities are
$P(X <= x)$, otherwise $P(X > x)$.

method: character string specifying how the sum is evaluated:
"log": evaluation on the log scale (numerically stable);
"direct": direct evaluation of the sum;
"diff": evaluation of the sum via diff;
"exp.log": as method="log", but without
numerically stable evaluation (not recommended, use with care);
"Rmpfr*": as method="direct", but using
high-precision arithmetic; "Rmpfr" and "Rmpfr0" return
doubles, whereas "RmpfrM" and "Rmpfr0M" give
mpfr high-precision numbers.
Whereas "Rmpfr" and "RmpfrM" each adapt to a high
enough precision, the "Rmpfr0*" methods do not adapt.
For all "Rmpfr*" methods, alpha can be set to an
mpfr number of specified
precision, and this determines the precision of all parts of
the internal computations.

mpfr.ctrl: for method = "Rmpfr" or "RmpfrM" only: a
list with components
minPrec: minimal (estimated) precision in bits;
fac: factor by which the current precision is multiplied if
it is not sufficient;
verbose: determining if and how much is printed.
Value

rSibuya: a vector of integers of
length n containing the generated random variates.
dSibuya, pSibuya: the (log) probability mass or cumulative probability
evaluated at x.
dsumSibuya: the probability mass of the sum evaluated at x, which must
satisfy x >= n; the result is of the same length as x (or n if
that is longer).

Details

For documentation and didactical purposes, rSibuyaR is a pure-R
implementation of rSibuya; it is of course slower than rSibuya,
as the latter is implemented in C.
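Such a pure-R sampler can be sketched along the following lines, paraphrasing Proposition 3.2 of Hofert (2011): approximately invert the survival function $F~(k) = 1/(k beta(k, 1-alpha))$ via its asymptotic form, then correct to the neighboring integer. This is a sketch from the stated proposition, not the package source:

```r
## Pure-R Sibuya(alpha) sampler in the spirit of rSibuyaR
## (paraphrased from Hofert (2011), Proposition 3.2; illustrative only)
rSibuya.sketch <- function(n, alpha) {
  stopifnot(0 < alpha, alpha <= 1)
  u <- runif(n)
  x <- rep(1, n)                           # P(X = 1) = alpha
  i <- u > alpha
  ## approximate inverse of the survival function at 1 - u
  Ginv <- ((1 - u[i]) * gamma(1 - alpha))^(-1/alpha)
  fG <- floor(Ginv)
  ## correct using the exact survival function 1/(k * beta(k, 1 - alpha))
  x[i] <- ifelse(1 - u[i] < 1 / (fG * beta(fG, 1 - alpha)),
                 ceiling(Ginv), fG)
  x
}

set.seed(271)
X <- rSibuya.sketch(1e4, 0.4)
mean(X == 1)  # should be close to alpha = 0.4
```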
Note

Note that the sum to evaluate for dsumSibuya is numerically
highly challenging, even already for small
$alpha$ and moderate $n$ (for example, $n >= 10$),
and the function should therefore be used with care. It may require high-precision
arithmetic, which can be accessed with method="Rmpfr" (and the
Rmpfr package).
References

Hofert, M. (2011). Efficiently sampling nested Archimedean copulas. Computational Statistics & Data Analysis 55, 57--70.
See Also

rFJoe and rF01Joe (where rSibuya is
applied).
Examples

## Sample n random variates from a Sibuya(alpha) distribution and plot a
## histogram
n <- 1000
alpha <- .4
X <- rSibuya(n, alpha)
hist(log(X), prob=TRUE); lines(density(log(X)), col=2, lwd=2)