If x contains any missing (NA), undefined (NaN) or 
  infinite (Inf, -Inf) values, they will be removed prior to 
  performing the estimation.
Let \(X\) denote a random variable from a 
  three-parameter lognormal distribution with 
  parameters meanlog=\(\mu\), sdlog=\(\sigma\), and 
  threshold=\(\gamma\).  Let \(\underline{x}\) denote a vector of 
  \(n\) observations from this distribution.  Furthermore, let \(x_{(i)}\) denote 
  the \(i\)'th order statistic in the sample, so that \(x_{(1)}\) denotes the 
smallest value and \(x_{(n)}\) denotes the largest value in \(\underline{x}\).  
  Finally, denote the sample mean and variance by:
  $$\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i \;\;\;\; (1)$$
  $$s^2 = \frac{1}{n-1} \sum_{i=1}^n (x_i - \bar{x})^2 \;\;\;\; (2)$$
  Note that the sample variance is the unbiased version.  Denote the method of 
  moments estimator of variance by:
  $$s^2_m = \frac{1}{n} \sum_{i=1}^n (x_i - \bar{x})^2 \;\;\;\; (3)$$
Estimation
Local Maximum Likelihood Estimation (method="lmle") 
  Hill (1963) showed that the likelihood function approaches infinity as \(\gamma\) 
  approaches \(x_{(1)}\), so that the global maximum likelihood estimators of 
  \((\mu, \sigma, \gamma)\) are \((-\infty, \infty, x_{(1)})\), which are 
  inadmissible, since \(\gamma\) must be smaller than \(x_{(1)}\).  Cohen (1951) 
  suggested using local maximum likelihood estimators (lmle's), derived by equating 
  partial derivatives of the log-likelihood function to zero.  These estimators were 
  studied by Harter and Moore (1966), Calitz (1973), Cohen and Whitten (1980), and 
  Griffiths (1980), and appear to possess most of the desirable properties ordinarily 
  associated with maximum likelihood estimators.
Cohen (1951) showed that the lmle of \(\gamma\) is given by the solution to the 
  following equation:
  $$[\sum_{i=1}^n \frac{1}{w_i}] \, \{\sum_{i=1}^n y_i - \sum_{i=1}^n y_i^2 + \frac{1}{n}[\sum_{i=1}^n y_i]^2 \} - n \sum_{i=1}^n \frac{y_i}{w_i} = 0 \;\;\;\; (4)$$
  where
  $$w_i = x_i - \hat{\gamma} \;\;\;\; (5)$$
  $$y_i = log(x_i - \hat{\gamma}) = log(w_i) \;\;\;\; (6)$$
  and that the lmle's of \(\mu\) and \(\sigma\) then follow as:
  $$\hat{\mu} = \frac{1}{n} \sum_{i=1}^n y_i = \bar{y} \;\;\;\; (7)$$
  $$\hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^n (y_i - \bar{y})^2 \;\;\;\; (8)$$
  Unfortunately, while equation (4) simplifies the task of computing the lmle's, 
  for certain data sets there still may be convergence problems (Calitz, 1973), and 
  occasionally multiple roots of equation (4) may exist.  When multiple roots to 
  equation (4) exist, Cohen and Whitten (1980) recommend using the one that results 
  in closest agreement between the mle of \(\mu\) (equation (7)) and the sample 
  mean (equation (1)).
On the other hand, Griffiths (1980) showed that for a given value of the threshold 
  parameter \(\gamma\), the maximized value of the log-likelihood (the 
  “profile likelihood” for \(\gamma\)) is given by:
  $$log[L(\gamma)] = \frac{-n}{2} [1 + log(2\pi) + 2\hat{\mu} + log(\hat{\sigma}^2) ] \;\;\;\; (9)$$
  where the estimates of \(\mu\) and \(\sigma\) are defined in equations (7) 
  and (8), so the lmle of \(\gamma\) reduces to an iterative search over the values 
  of \(\gamma\).  Griffiths (1980) noted that the distribution of the lmle of 
  \(\gamma\) is far from normal and that \(log[L(\gamma)]\) is not quadratic 
  near the lmle of \(\gamma\).  He suggested a better parameterization based on
  $$\eta = -log(x_{(1)} - \gamma) \;\;\;\; (10)$$
  Thus, once the lmle of \(\eta\) is found using equations (9) and (10), the lmle of 
  \(\gamma\) is given by:
  $$\hat{\gamma} = x_{(1)} - exp(-\hat{\eta}) \;\;\;\; (11)$$
  When method="lmle", the function elnorm3 uses the function 
  nlminb to search for the minimum of \(-2log[L(\eta)]\), using the 
  modified method of moments estimator (method="mmme"; see below) as the 
  starting value for \(\gamma\).  Equation (11) is then used to solve for the 
  lmle of \(\gamma\), and equation (4) is used to “fine tune” the estimated 
  value of \(\gamma\).  The lmle's of \(\mu\) and \(\sigma\) are then computed 
  using equations (6)-(8).
Method of Moments Estimation (method="mme")   
  Denote the \(r\)'th sample central moment by:
  $$m_r = \frac{1}{n} \sum_{i=1}^n (x_i - \bar{x})^r \;\;\;\; (12)$$
  and note that
  $$s^2_m = m_2 \;\;\;\; (13)$$
  Equating the sample first moment (the sample mean) with its population value 
  (the population mean), and equating the second and third sample central moments 
  with their population values yields (Johnson et al., 1994, p.228):
  $$\bar{x} = \gamma + \beta \sqrt{\omega} \;\;\;\; (14)$$
  $$m_2 = s^2_m = \beta^2 \omega (\omega - 1) \;\;\;\; (15)$$
  $$m_3 = \beta^3 \omega^{3/2} (\omega - 1)^2 (\omega + 2) \;\;\;\; (16)$$
  where
  $$\beta = exp(\mu) \;\;\;\; (17)$$
  $$\omega = exp(\sigma^2) \;\;\;\; (18)$$
  Combining equations (15) and (16) yields:
  $$b_1 = \frac{m_3}{m_2^{3/2}} = (\omega + 2) \sqrt{\omega - 1} \;\;\;\; (19)$$
  The quantity on the left-hand side of equation (19) is the usual estimator of 
  skewness.  Solving equation (19) for \(\omega\) yields:
  $$\hat{\omega} = (d + h)^{1/3} + (d - h)^{1/3} - 1 \;\;\;\; (20)$$
  where
  $$d = 1 + \frac{b_1^2}{2} \;\;\;\; (21)$$
  $$h = \sqrt{d^2 - 1} \;\;\;\; (22)$$
  Using equation (18), the method of moments estimator of \(\sigma\) is then 
  computed as:
  $$\hat{\sigma}^2 = log(\hat{\omega}) \;\;\;\; (23)$$
  Combining equations (15) and (17), the method of moments estimator of \(\mu\) 
  is computed as:
  $$\hat{\mu} = \frac{1}{2} log[\frac{s^2_m}{\hat{\omega}(\hat{\omega} - 1)}] \;\;\;\; (24)$$
  Finally, using equations (14), (17), and (18), the method of moments estimator of 
  \(\gamma\) is computed as:
  $$\hat{\gamma} = \bar{x} - exp(\hat{\mu} + \frac{\hat{\sigma}^2}{2}) \;\;\;\; (25)$$
  There are two major problems with using method of moments estimators for the 
  three-parameter lognormal distribution.  First, they are subject to very large 
  sampling error due to the use of second and third sample moments 
  (Cohen, 1988, p.121; Johnson et al., 1994, p.228).  Second, Heyde (1963) showed 
  that the lognormal distribution is not uniquely determined by its moments.
Method of Moments Estimators Using an Unbiased Estimate of Variance (method="mmue") 
  This method of estimation is exactly the same as the method of moments 
  (method="mme"), except that the unbiased estimator of variance (equation (3)) 
  is used in place of the method of moments one (equation (4)).  This modification is 
  given in Cohen (1988, pp.119-120).
Modified Method of Moments Estimation (method="mmme") 
  This method of estimation is described by Cohen (1988, pp.125-132).  It was 
  introduced by Cohen and Whitten (1980; their MME-II with r=1) and was further 
  investigated by Cohen et al. (1985).  It is motivated by the fact that the first 
  order statistic in the sample, \(x_{(1)}\), contains more information about 
  the threshold parameter \(\gamma\) than any other observation and often more 
  information than all of the other observations combined (Cohen, 1988, p.125).
The first two sets of equations are the same as for the method of moments 
  estimators using an unbiased estimate of variance (method="mmue"), i.e., 
  equations (14) and (15) with the unbiased estimator of variance (equation (2)) 
  used in place of the method of moments one (equation (3)).  The third equation replaces equation (16) 
  by equating a function of the first order statistic with its expected value:
  $$log(x_{(1)} - \gamma) = \mu + \sigma E[Z_{(1,n)}] \;\;\;\; (26)$$
  where \(E[Z_{(i,n)}]\) denotes the expected value of the \(i\)'th order 
  statistic in a random sample of \(n\) observations from a standard normal 
  distribution.  (See the help file for evNormOrdStats for information 
  on how \(E[Z_{(i,n)}]\) is computed.)  Using equations (17) and (18), 
  equation (26) can be rewritten as:
  $$x_{(1)} = \gamma + \beta exp\{\sqrt{log(\omega)} \, E[Z_{(1,n)}] \} \;\;\;\; (27)$$
  Combining equations (14), (15), (17), (18), and (27) yields the following equation 
  for the estimate of \(\omega\):
  $$\frac{s^2}{[\bar{x} - x_{(1)}]^2} = \frac{\hat{\omega}(\hat{\omega} - 1)}{[\sqrt{\hat{\omega}} - exp\{\sqrt{log(\hat{\omega})} \, E[Z_{(1,n)}] \} ]^2} \;\;\;\; (28)$$
  After equation (28) is solved for \(\hat{\omega}\), the estimate of \(\sigma\) 
  is again computed using equation (23), and the estimate of \(\mu\) is computed 
  using equation (24), where the unbiased estimate of variance is used in place of 
  the biased one (just as for method="mmue").
Zero-Skewness Estimation (method="zero.skew") 
  This method of estimation was introduced by Griffiths (1980), and elaborated upon 
  by Royston (1992b).  The idea is that if the threshold parameter \(\gamma\) were 
  known, then the distribution of:
  $$Y = log(X - \gamma) \;\;\;\; (29)$$
  would be normal, so the skewness of \(Y\) would be 0.  Thus, the threshold parameter \(\gamma\) 
  is estimated as that value that forces the sample skew (defined in equation (19)) of 
  the observations defined in equation (6) to be 0.  That is, the zero-skewness 
  estimator of \(\gamma\) is the value that satisfies the following equation:
  $$0 = \frac{\frac{1}{n} \sum_{i=1}^n (y_i - \bar{y})^3}{[\frac{1}{n} \sum_{i=1}^n (y_i - \bar{y})^2]^{3/2}} \;\;\;\; (30)$$
  where
  $$y_i = log(x_i - \hat{\gamma}) \;\;\;\; (31)$$
  Note that since the denominator in equation (30) is always positive (assuming 
  there are at least two unique values in \(\underline{x}\)), only the numerator 
  needs to be used to determine the value of \(\hat{\gamma}\).
Once the value of \(\hat{\gamma}\) has been determined, \(\mu\) and \(\sigma\) 
  are estimated using equations (7) and (8), except the unbiased estimator of variance 
  is used in equation (8).
Royston (1992b) developed a modification of the Shapiro-Wilk goodness-of-fit test 
  for normality based on transforming the data using equation (6) and the zero-skewness 
  estimator of \(\gamma\) (see gofTest).
 
Estimators Based on Royston's Index of Skewness (method="royston.skew") 
  This method of estimation is discussed by Royston (1992b), and is similar to the 
  zero-skewness method discussed above, except a different measure of skewness is used.  
  Royston's (1992b) index of skewness is given by:
  $$q = \frac{y_{(n)} - \tilde{y}}{\tilde{y} - y_{(1)}} \;\;\;\; (32)$$
  where \(y_{(i)}\) denotes the \(i\)'th order statistic of \(y\) and \(y\) 
  is defined in equation (31) above, and \(\tilde{y}\) denotes the median of \(y\).  
  Royston (1992b) shows that the value of \(\gamma\) that yields a value of 
  \(q=1\) (i.e., zero skewness by this measure) is given by:
  $$\hat{\gamma} = \frac{x_{(1)}x_{(n)} - \tilde{x}^2}{x_{(1)} + x_{(n)} - 2\tilde{x}} \;\;\;\; (33)$$
  where \(\tilde{x}\) denotes the median of \(\underline{x}\).
  Again, as for the zero-skewness method, once the value of \(\hat{\gamma}\) has 
  been determined, \(\mu\) and \(\sigma\) are estimated using equations (7) and (8), 
  except the unbiased estimator of variance is used in equation (8).
Royston (1992b) developed this estimator as a quick way to estimate \(\gamma\).
Confidence Intervals 
  This section explains three different methods for constructing confidence intervals 
  for the threshold parameter \(\gamma\), or the median of the three-parameter 
  lognormal distribution, which is given by:
  $$Med[X] = \gamma + exp(\mu) = \gamma + \beta \;\;\;\; (34)$$
Normal Approximation Based on Asymptotic Variances and Covariances (ci.method="avar") 
  Formulas for asymptotic variances and covariances for the three-parameter lognormal 
  distribution, based on the information matrix, are given in Cohen (1951), Cohen and 
  Whitten (1980), Cohen et al., (1985), and Cohen (1988).  The relevant quantities for 
  \(\gamma\) and the median are:
  $$Var(\hat{\gamma}) = \sigma^2_{\hat{\gamma}} = \frac{\sigma^2}{n} \, (\frac{\beta^2}{\omega}) H \;\;\;\; (35)$$
  $$Var(\hat{\beta}) = \sigma^2_{\hat{\beta}} = \frac{\sigma^2}{n} \, \beta^2 (1 + H) \;\;\;\; (36)$$
  $$Cov(\hat{\gamma}, \hat{\beta}) = \sigma_{\hat{\gamma}, \hat{\beta}} = \frac{-\sigma^3}{n} \, (\frac{\beta^2}{\sqrt{\omega}}) H \;\;\;\; (37)$$
  where 
  $$H = [\omega (1 + \sigma^2) - 2\sigma^2 - 1]^{-1} \;\;\;\; (38)$$
  A two-sided \((1-\alpha)100\%\) confidence interval for \(\gamma\) is computed as:
  $$\hat{\gamma} - t_{n-2, 1-\alpha/2} \hat{\sigma}_{\hat{\gamma}}, \, \hat{\gamma} + t_{n-2, 1-\alpha/2} \hat{\sigma}_{\hat{\gamma}} \;\;\;\; (39)$$
  where \(t_{\nu, p}\) denotes the \(p\)'th quantile of 
  Student's t-distribution with \(\nu\) degrees of freedom, and the 
  quantity \(\hat{\sigma}_{\hat{\gamma}}\) is computed using equations (35) and (38) 
  and substituting estimated values of \(\beta\), \(\omega\), and \(\sigma\).  
  One-sided confidence intervals are computed in a similar manner.
A two-sided \((1-\alpha)100\%\) confidence interval for the median (see equation 
  (34) above) is computed as:
  $$\hat{\gamma} + \hat{\beta} - t_{n-2, 1-\alpha/2} \hat{\sigma}_{\hat{\gamma} + \hat{\beta}}, \, \hat{\gamma} + \hat{\beta} + t_{n-2, 1-\alpha/2} \hat{\sigma}_{\hat{\gamma} + \hat{\beta}} \;\;\;\; (40)$$
  where 
  $$\hat{\sigma}^2_{\hat{\gamma} + \hat{\beta}} = \hat{\sigma}^2_{\hat{\gamma}} + \hat{\sigma}^2_{\hat{\beta}} + 2\hat{\sigma}_{\hat{\gamma}, \hat{\beta}} \;\;\;\; (41)$$
  is computed using equations (35)-(38) and substituting estimated values of 
  \(\beta\), \(\omega\), and \(\sigma\).  One-sided confidence intervals are 
  computed in a similar manner.
This method of constructing confidence intervals is analogous to using the Wald test 
  (e.g., Silvey, 1975, pp.115-118) to test hypotheses on the parameters.
Because of the regularity problems associated with the global maximum likelihood 
  estimators, it is questionable whether the asymptotic variances and covariances shown 
  above apply to local maximum likelihood estimators.  Simulation studies, however, 
  have shown that these estimates of variance and covariance perform reasonably well 
  (Harter and Moore, 1966; Cohen and Whitten, 1980).
Note that this method of constructing confidence intervals can be used with 
  estimators other than the lmle's.  Cohen and Whitten (1980) and Cohen et al. (1985) 
  found that the asymptotic variances and covariances are reasonably close to 
  corresponding simulated variances and covariances for the modified method of moments 
  estimators (method="mmme").
Likelihood Profile (ci.method="likelihood.profile") 
  Griffiths (1980) suggested constructing confidence intervals for the threshold 
  parameter \(\gamma\) based on the profile likelihood function given in equations 
  (9) and (10).  Royston (1992b) further elaborated upon this procedure.  A 
  two-sided \((1-\alpha)100\%\) confidence interval for \(\eta\) is constructed as:
  $$[\eta_{LCL}, \eta_{UCL}] \;\;\;\; (42)$$
  by finding the two values of \(\eta\) (one larger than the lmle of \(\eta\) and 
  one smaller than the lmle of \(\eta\)) that satisfy:
  $$log[L(\eta)] = log[L(\hat{\eta}_{lmle})] - \frac{1}{2} \chi^2_{1, 1-\alpha} \;\;\;\; (43)$$
  where \(\chi^2_{\nu, p}\) denotes the \(p\)'th quantile of the 
  chi-square distribution with \(\nu\) degrees of freedom.  
  Once these values are found, the two-sided confidence interval for \(\gamma\) is computed as:
  $$[\gamma_{LCL}, \gamma_{UCL}] \;\;\;\; (44)$$
  where
  $$\gamma_{LCL} = x_{(1)} - exp(-\eta_{LCL}) \;\;\;\; (45)$$
  $$\gamma_{UCL} = x_{(1)} - exp(-\eta_{UCL}) \;\;\;\; (46)$$
  One-sided intervals are constructed in a similar manner.
 
This method of constructing confidence intervals is analogous to using the 
  likelihood-ratio test (e.g., Silvey, 1975, pp.108-115) to test hypotheses on the 
  parameters.
To construct a two-sided \((1-\alpha)100\%\) confidence interval for the median 
  (see equation (34)), Royston (1992b) suggested the following procedure:
1. Construct a confidence interval for \(\gamma\) using the likelihood 
      profile procedure. 
2. Construct a confidence interval for \(\beta\) as: 
    $$[\beta_{LCL}, \beta_{UCL}] = [exp(\hat{\mu} - t_{n-2, 1-\alpha/2} \frac{\hat{\sigma}}{\sqrt{n}}), \, exp(\hat{\mu} + t_{n-2, 1-\alpha/2} \frac{\hat{\sigma}}{\sqrt{n}})] \;\;\;\; (47)$$ 
3. Construct the confidence interval for the median as: 
    $$[\gamma_{LCL} + \beta_{LCL}, \gamma_{UCL} + \beta_{UCL}] \;\;\;\; (48)$$ 
Royston (1992b) actually suggested using the quantile from the standard normal 
  distribution instead of Student's t-distribution in step 2 above.  The function 
  elnorm3, however, uses the Student's t quantile.
Note that this method of constructing confidence intervals can be used with 
  estimators other than the lmle's.
 
Royston's Confidence Interval Based on Significant Skewness (ci.method="skewness") 
  Royston (1992b) suggested constructing confidence intervals for the threshold 
  parameter \(\gamma\) based on the idea behind the zero-skewness estimator 
  (method="zero.skew").  A two-sided \((1-\alpha)100\%\) confidence interval 
  for \(\gamma\) is constructed by finding the two values of \(\gamma\) that yield 
  a p-value of \(\alpha/2\) for the test of zero-skewness on the observations 
  \(\underline{y}\) defined in equation (6) (see gofTest).  One-sided 
  confidence intervals are constructed in a similar manner.
To construct \((1-\alpha)100\%\) confidence intervals for the median 
  (see equation (34)), the exact same procedure is used as for 
  ci.method="likelihood.profile", except that the confidence interval for 
  \(\gamma\) is based on the zero-skewness method just described instead of the 
  likelihood profile method.