This function computes the maximized (with respect to \(p_i\)) empirical
log likelihood function for right censored data with
the mean constraint:
$$ \sum_i [ d_i p_i g(x_i) ] = \int g(t) dF(t) = \mu $$
where \(p_i = \Delta F(x_i)\) is a probability and
\(d_i\) is the censoring indicator.
The \(d_i\) for the largest observation is always taken to be 1.
It then computes the -2 log
empirical likelihood ratio, which should be approximately chi-square
distributed if the constraint is true.
Here \(F(t)\) is the (unknown) CDF;
\(g(t)\) can be any given left continuous function of \(t\),
and \(\mu\) is a given constant.
The data must contain some right censored observations.
If there is no censoring, or the only censored observation is the largest one,
the code will stop; in that case use
el.test( ),
which is for uncensored data, as in the sketch below.
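A minimal sketch of that fallback (assuming both functions come from the emplik package):

library(emplik)            # assumed package providing el.test() and el.cen.test()
el.test(rexp(50), mu = 1)  # empirical likelihood mean test for uncensored data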
The log empirical likelihood being maximized is $$ \sum_{d_i=1} \log \Delta F(x_i) + \sum_{d_i=0} \log [ 1-F(x_i) ].$$
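For illustration, this quantity can be evaluated directly for any candidate CDF with jumps at the uncensored points. The helper below is a minimal sketch, not part of the package; all names are hypothetical.

cen_logEL <- function(x, d, jumps, p) {
  ## jumps: sorted locations where the candidate CDF jumps; p: jump sizes (sum to 1)
  ## assumes every uncensored observation is among the jump points
  Fhat <- stepfun(jumps, c(0, cumsum(p)))          # right-continuous candidate CDF
  unc  <- sum(log(p[match(x[d == 1], jumps)]))     # sum_{d_i = 1} log Delta F(x_i)
  cen  <- sum(log(1 - Fhat(x[d == 0])))            # sum_{d_i = 0} log [ 1 - F(x_i) ]
  unc + cen
}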
el.cen.test(x,d,fun=function(x){x},mu,error=1e-8,maxit=15)
A list with the following components:
the -2 log empirical likelihood ratio ("-2LLR").
the locations of the CDF jumps.
the jump sizes of the CDF at those locations.
the p-value.
the \(L_1\) norm between the last two weight vectors.
the number of iterations carried out.
x: a vector containing the observed survival times.
d: a vector containing the censoring indicators: 1 = uncensored, 0 = censored.
fun: a left continuous (weight) function used to calculate
the mean as in \(H_0\).
fun(t) must be able to take a vector input t.
Defaults to the identity function \(f(t)=t\).
mu: a real number used in the constraint; the weighted sum must equal this value.
error: an optional positive real number specifying the tolerance of the iteration error in the QP. This is the bound on the \(L_1\) norm of the difference between two successive weight vectors.
maxit: an optional integer used to control the maximum number of iterations.
Mai Zhou, Kun Chen
When the given constant \(\mu\) is too far away from the NPMLE, there will be no distribution satisfying the constraint. In this case the computation will stop, and the -2 log empirical likelihood ratio should be taken as infinite.
The constant mu
must be inside
\(( \min f(x_i) , \max f(x_i) ) \),
where \(f\) is fun, for the computation to continue.
The NPMLE values are always feasible, so when the
computation cannot continue, try moving mu
closer to the NPMLE, or use a different fun.
A minimal feasibility pre-check is sketched below.
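Here x, fun and mu play the same roles as the arguments described above:

x   <- c(1, 1.5, 2, 3, 4, 5, 6)
fun <- function(t) t                 # identity, the default weight function
mu  <- 3.5
rng <- range(fun(x))
(mu > rng[1]) && (mu < rng[2])       # TRUE: mu lies strictly inside (min fun(x_i), max fun(x_i))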
This function depends on Wdataclean2(), WKM(), and solve3.QP().
This function uses sequential quadratic programming to find the maximum. Unlike other functions in this package, it can be slow for larger sample sizes: it took about one minute for a sample of size 2000 with 20% censoring on a 1 GHz, 256 MB PC, and about 19 seconds on a 3 GHz, 512 MB PC. A rough timing sketch follows.
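The snippet below is illustrative only (machine dependent; it simulates a right censored sample with roughly 20% censoring and assumes el.cen.test is available, e.g. from the emplik package):

set.seed(1)
n  <- 500
tt <- rexp(n)                        # latent survival times, mean 1
cc <- rexp(n, rate = 0.25)           # censoring times, roughly 20% censoring
x  <- pmin(tt, cc)                   # observed times
d  <- as.numeric(tt <= cc)           # 1 = uncensored, 0 = censored
system.time(el.cen.test(x, d, mu = 1))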
Pan, X. and Zhou, M. (1999). Using 1-parameter sub-family of distributions in empirical likelihood ratio with censored data. Journal of Statistical Planning and Inference, 75, 379-392.
Chen, K. and Zhou, M. (2000). Computing censored empirical likelihood ratio using Quadratic Programming. Technical Report, University of Kentucky, Department of Statistics.
Zhou, M. and Chen, K. (2007). Computation of the empirical likelihood ratio from censored data. Journal of Statistical Computation and Simulation, 77, 1033-1042.
el.cen.test(rexp(100), c(rep(0,25),rep(1,75)), mu=1.5)
## second example with tied observations
x <- c(1, 1.5, 2, 3, 4, 5, 6, 5, 4, 1, 2, 4.5)
d <- c(1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1)
el.cen.test(x,d,mu=3.5)
# we should get "-2LLR" = 1.246634 etc.
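## third example (an illustrative sketch): constrain P(X <= 3) instead of the
## mean by using an indicator weight function, which is left continuous in t.
## Assuming the returned list component is named "-2LLR" (as in the comment
## above) and that a single constraint gives 1 degree of freedom, the p-value
## can also be recomputed from a chi-square reference:
out <- el.cen.test(x, d, fun = function(t) as.numeric(t <= 3), mu = 0.5)
1 - pchisq(out$"-2LLR", df = 1)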