RMKdiscrete (version 0.2)

LGP: The (univariate) Lagrangian Poisson (LGP) Distribution

Description

Density, distribution function, quantile function, summary, random number generation, and utility functions for the (univariate) Lagrangian Poisson distribution.

Usage

dLGP(x,theta,lambda,nc=NULL,log=FALSE)
pLGP(q,theta,lambda,nc=NULL,lower.tail=TRUE,log.p=FALSE,add.carefully=FALSE)
qLGP(p,theta,lambda,nc=NULL,lower.tail=TRUE,log.p=FALSE,add.carefully=FALSE)
rLGP(n,theta,lambda)
sLGP(theta,lambda,nc=NULL,do.numerically=FALSE,add.carefully=FALSE)
LGP.findmax(theta,lambda)
LGP.get.nc(theta,lambda,nctol=1e-14,add.carefully=FALSE)
LGPMVP(mu,sigma2,theta,lambda)

Arguments

x, q

Numeric vector of quantiles.

p

Numeric vector of probabilities.

n

Integer; number of observations to be randomly generated.

theta

Numeric; the index (or "additive") parameter of the LGP distribution. Must be non-negative.

lambda

Numeric; the dispersion (or "Lagrangian" or "multiplicative") parameter of the LGP distribution. Must not exceed 1 in absolute value. When equal to zero, the LGP reduces to the ordinary Poisson distribution, with mean equal to theta. When negative, the distribution has an upper limit to its support, which may be found with LGP.findmax().

nc

Numeric; the reciprocal of the normalizing constant of the distribution, by which the raw PMF must be multiplied so that it is a proper PMF (i.e., so that its values sum to 1 across the support) when lambda is negative. Defaults to NULL, in which case it is computed numerically by a call to LGP.get.nc().

log, log.p

Logical; if TRUE, then probabilities p are given as log(p).

lower.tail

Logical; if TRUE (default), probabilities are \(P[X \leq x]\), otherwise, \(P[X > x]\).

nctol

Numeric; while numerically computing the normalizing constant, how close to 1 should it be before stopping? Ignored unless lambda is negative, and the upper support limit exceeds 200,000.

add.carefully

Logical. If TRUE, the program takes extra steps to try to prevent round-off error during the addition of probabilities. Defaults to FALSE, which is recommended, since using TRUE is slower and rarely makes a noticeable difference in practice.

do.numerically

Logical; should moments be computed numerically when lambda<0? Defaults to FALSE, which is recommended unless the upper support limit is fairly small (say, less than 10).

mu

Numeric vector of mean parameters.

sigma2

"Sigma squared"--numeric vector of variance parameters.

Value

dLGP() and pLGP() return numeric vectors of probabilities. qLGP(), rLGP(), and LGP.findmax() return vectors of quantiles, which are of class 'numeric' rather than 'integer' for the sake of compatibility with very large values. LGP.get.nc() returns a numeric vector of reciprocal normalizing constants. LGPMVP() returns a numeric matrix with two columns, named for the missing arguments in the function call.

sLGP() returns a numeric matrix with 10 columns, with the mostly self-explanatory names "Mean", "Median", "Mode", "Variance", "SD", "ThirdCentralMoment", "FourthCentralMoment", "PearsonsSkewness", "Skewness", and "Kurtosis". Here, "Kurtosis" refers to excess kurtosis (i.e., kurtosis minus 3), and "PearsonsSkewness" equals \(\frac{mean - mode}{SD}\). A "Mode" of 0.5 indicates that the point probabilities at \(x=0\) and \(x=1\) are tied for largest; other than this possibility, the LGP is strictly unimodal.
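As an informal check of these columns (a sketch, not part of the package's own examples), the lambda=0 case should reproduce the ordinary Poisson, whose skewness is \(1/\sqrt{\theta}\) and whose excess kurtosis is \(1/\theta\):

s <- sLGP(theta=4, lambda=0)  #<--With lambda=0, this is the ordinary Poisson with mean 4
s[,"Mean"]; s[,"Variance"]    #<--Both should equal 4
s[,"Skewness"]                #<--Poisson skewness: 1/sqrt(4) = 0.5
s[,"Kurtosis"]                #<--Poisson excess kurtosis: 1/4 = 0.25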

Warning

There is a known issue with sLGP(): when lambda is negative and theta is large, the third and fourth moments returned by sLGP(), with do.numerically=TRUE, can be quite incorrect due to numerical imprecision.

Details

The Lagrangian Poisson (LGP) distribution has density $$p(x)=\frac{\theta (\theta + \lambda x)^{x-1} \exp(- \theta - \lambda x)}{x!}$$ for \(x=0,1,2,\ldots\), \(p(x)=0\) for \(x>m\) if \(\lambda<0\), and zero otherwise, where \(\theta>0\), \(m=\lfloor-\theta / \lambda\rfloor\) if \(\lambda<0\), and \(\max(-1,-\theta / m)\leq\lambda\leq 1\). So, when \(\lambda\) is negative, there is an upper limit to the distribution's support, \(m\), equal to \(-\theta / \lambda\) rounded down to the next-smallest integer. When \(\lambda\) is negative, the PMF must also be normalized numerically if it is to describe a proper probability distribution. When \(\lambda=0\), the Lagrangian Poisson reduces to the ordinary Poisson, with mean equal to \(\theta\). When \(\theta=0\), we define the distribution as having unit mass on the event \(X=0\).
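As an illustration (a sketch, assuming only the PMF displayed above), the formula can be evaluated directly and compared against dLGP() for a positive lambda, where no renormalization is needed:

theta <- 1; lambda <- 0.1; x <- 0:5
raw <- theta*(theta + lambda*x)^(x-1)*exp(-theta - lambda*x)/factorial(x)  #<--PMF formula above
cbind(formula=raw, dLGP=dLGP(x=x, theta=theta, lambda=lambda))  #<--The two columns should agree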

Function LGP.findmax() calculates the value of upper support limit \(m\); LGP.get.nc() calculates the (reciprocal of) the normalizing constant.
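A brief sketch of how these two utilities fit together when \(\lambda<0\) (the parameter values below are arbitrary):

theta <- 2; lambda <- -0.3
m <- LGP.findmax(theta=theta, lambda=lambda)  #<--Upper support limit
LGP.get.nc(theta=theta, lambda=lambda)        #<--Reciprocal normalizing constant
sum(dLGP(x=0:m, theta=theta, lambda=lambda))  #<--Should be (very nearly) 1, since dLGP() normalizes when lambda<0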

Function LGPMVP() accepts exactly two of its four arguments, and returns the corresponding values of the other two arguments. For example, if given values for theta and lambda, it will return the corresponding means (mu) and variances (sigma2) of an LGP distribution with the given values of \(\theta\) and \(\lambda\). LGPMVP() does not enforce the parameter space as strictly as other functions, but will throw a warning for bad parameter values.
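For instance, the following sketch exercises both directions of LGPMVP(); the moment formulas \(\mu=\theta/(1-\lambda)\) and \(\sigma^2=\theta/(1-\lambda)^3\) quoted in the comments come from the general LGP literature (e.g., Consul & Famoye, 2006), not from this page:

LGPMVP(theta=2, lambda=0.5)  #<--Returns mu and sigma2; expect mu = 2/0.5 = 4 and sigma2 = 2/0.5^3 = 16
LGPMVP(mu=4, sigma2=16)      #<--Returns theta and lambda; expect theta = 2 and lambda = 0.5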

When the upper support limit is 5 or smaller, rLGP() uses simple inversion (i.e., random unit-uniform draws passed to qLGP()). Otherwise, it uses random-number generation algorithms from Consul & Famoye (2006); exactly which algorithm is used depends upon the values of theta and lambda. All four of rLGP(), dLGP(), pLGP(), and qLGP() make calls to the corresponding functions for the ordinary Poisson distribution (dpois(), etc.) when lambda=0.
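Both behaviors can be checked informally; the seed below is arbitrary and used only for reproducibility:

dLGP(x=0:3, theta=2, lambda=0)  #<--With lambda=0, same as dpois(0:3, lambda=2)
dpois(0:3, lambda=2)
set.seed(123)
mean(rLGP(n=1e4, theta=2, lambda=0.5))  #<--Should be near the theoretical mean, theta/(1-lambda) = 4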

Vectors of numeric arguments are recycled, whereas only the first element of logical and integer arguments is used.
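For example (the element-wise pairing described in the comment is an assumption based on standard R recycling):

dLGP(x=0:3, theta=c(1,2), lambda=0.1)  #<--theta recycled: pairs (x,theta) are (0,1), (1,2), (2,1), (3,2)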

References

Consul, P. C. (1989). Generalized Poisson Distributions: Properties and Applications. New York: Marcel Dekker, Inc.

Consul, P. C., & Famoye, F. (2006). Lagrangian Probability Distributions. Boston: Birkhauser.

Johnson, N. L., Kemp, A. W., & Kotz, S. (2005). Univariate Discrete Distributions (3rd. ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Examples

LGP.findmax(theta=2, lambda=0.2) #<--No upper support limit
LGP.findmax(theta=2, lambda=-0.2) #<--Upper support limit of 9
LGP.get.nc(theta=2, lambda=0.2)-1==0 #<--TRUE
LGP.get.nc(theta=2, lambda=-0.2)-1 #<--nc differs appreciably from 1
LGP.get.nc(theta=2, lambda=-0.1)-1 #<--nc doesn't differ appreciably from 1
LGPMVP(theta=2, lambda=0.9)
LGPMVP(mu=20, sigma2=2000)
sLGP(theta=2, lambda=0.9)
dLGP(x=0:10,theta=1,lambda=0.1)
dLGP(x=0:10,theta=1,lambda=0)
dLGP(x=0:10,theta=1,lambda=-0.1) #<--Upper support limit of 9
pLGP(q=0:10,theta=1,lambda=0.1)
pLGP(q=0:10,theta=1,lambda=0)
pLGP(q=0:10,theta=1,lambda=-0.1) 
qLGP(p=(0:9)/10,theta=1,lambda=0.1)
qLGP(p=(0:9)/10,theta=1,lambda=0)
qLGP(p=(0:9)/10,theta=1,lambda=-0.1) 
rLGP(n=5,theta=1e12,lambda=-0.0001)
