Imputes univariate missing data using predictive mean matching.
Usage

mice.impute.midastouch(
y,
ry,
x,
wy = NULL,
ridge = 1e-05,
midas.kappa = NULL,
outout = TRUE,
neff = NULL,
debug = NULL,
...
)
Value

Vector with imputed data, same type as y, and of length sum(wy).
Arguments

y: Vector to be imputed.

ry: Logical vector of length length(y) indicating the subset y[ry] of elements in y to which the imputation model is fitted. The ry generally distinguishes the observed (TRUE) and missing values (FALSE) in y.
x: Numeric design matrix with length(y) rows, containing the predictors for y. Matrix x must not contain missing values.
wy: Logical vector of length length(y). A TRUE value indicates locations in y for which imputations are created.
ridge: The ridge penalty used in .norm.draw() to prevent problems with multicollinearity. The default is ridge = 1e-05, which means that 0.001 percent of the diagonal is added to the cross-product. Larger ridges may result in more biased estimates. For highly noisy data (e.g. many junk variables), set ridge = 1e-06 or even lower to reduce bias. For highly collinear data, set ridge = 1e-04 or higher. A short sketch of how such a penalty enters the cross-product follows the argument list.
midas.kappa: Scalar. If NULL (default), the optimal kappa is selected automatically. Alternatively, the user may specify a scalar. Siddique and Belin (2008) find midas.kappa = 3 to be sensible.
outout: Logical. If TRUE (default), one model is estimated for each donor (leave-one-out principle). For a speedup, choose outout = FALSE, which estimates a single model for all observations, leading to in-sample predictions for the donors and out-of-sample predictions for the recipients. Note that this is statistically less appropriate.
neff: FOR EXPERTS. NULL or a character string. The name of an existing environment in which the effective sample size of the donors for each loop (CE iterations times multiple imputations) is written. The effective sample size is needed to compute the correction for the total variance originally suggested by Parzen, Lipsitz and Fitzmaurice (2005). The object name is midastouch.neff.
debug: FOR EXPERTS. NULL or a character string. The name of an existing environment in which the input is written. The object name is midastouch.inputlist.
...: Other named arguments.
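The following sketch is not taken from mice's internals; it merely illustrates how a penalty of the size set by the ridge argument can be added to the diagonal of the cross-product before inversion. The function name ridge_crossprod_inverse and the toy data are made up for illustration.

# Illustration only: stabilise the inverse of the cross-product by adding
# ridge times its diagonal, mirroring the description of the `ridge` argument.
ridge_crossprod_inverse <- function(x, ridge = 1e-05) {
  xtx <- crossprod(x)                      # t(x) %*% x
  pen <- ridge * diag(xtx)                 # penalty: a small fraction of the diagonal
  solve(xtx + diag(pen, nrow = ncol(x)))   # penalised inverse
}

# usage with an artificial, nearly collinear design matrix
x <- cbind(1, rnorm(20), rnorm(20))
x <- cbind(x, x[, 2] + rnorm(20, sd = 1e-08))  # add an almost duplicate column
ridge_crossprod_inverse(x, ridge = 1e-04)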
Author

Philipp Gaffert, Florian Meinfelder, Volker Bosch, 2015
Details

Imputation of y by predictive mean matching, based on Rubin (1987, p. 168, formulas a and b) and Siddique and Belin (2008). The procedure is as follows (a simplified R sketch follows the list of steps):
1. Draw a bootstrap sample from the donor pool.
2. Estimate a beta matrix on the bootstrap sample by the leave-one-out principle.
3. Compute type II predicted values for yobs (nobs x 1) and ymis (nmis x nobs).
4. Calculate the distance between all yobs and the corresponding ymis.
5. Convert the distances into drawing probabilities.
6. For each recipient, draw a donor from the entire pool while considering the probabilities from the model.
7. Take its observed value in y as the imputation.
References

Gaffert, P., Meinfelder, F., Bosch, V. (2015), Towards an MI-proper Predictive Mean Matching, Discussion Paper. https://www.uni-bamberg.de/fileadmin/uni/fakultaeten/sowi_lehrstuehle/statistik/Personen/Dateien_Florian/properPMM.pdf
Little, R.J.A. (1988), Missing data adjustments in large surveys (with discussion), Journal of Business Economics and Statistics, 6, 287--301.
Parzen, M., Lipsitz, S. R., Fitzmaurice, G. M. (2005), A note on reducing the bias of the approximate Bayesian bootstrap imputation variance estimator. Biometrika 92, 4, 971--974.
Rubin, D.B. (1987), Multiple imputation for nonresponse in surveys. New York: Wiley.
Siddique, J., Belin, T.R. (2008), Multiple imputation using an iterative hot-deck with distance-based donor selection. Statistics in Medicine, 27, 1, 83--102.
Van Buuren, S., Brand, J.P.L., Groothuis-Oudshoorn C.G.M., Rubin, D.B. (2006), Fully conditional specification in multivariate imputation. Journal of Statistical Computation and Simulation, 76, 12, 1049--1064.
Van Buuren, S., Groothuis-Oudshoorn, K. (2011), mice: Multivariate Imputation by Chained Equations in R. Journal of Statistical Software, 45, 3, 1--67. doi:10.18637/jss.v045.i03
See Also

Other univariate imputation functions: mice.impute.cart(), mice.impute.lasso.logreg(), mice.impute.lasso.norm(), mice.impute.lasso.select.logreg(), mice.impute.lasso.select.norm(), mice.impute.lda(), mice.impute.logreg.boot(), mice.impute.logreg(), mice.impute.mean(), mice.impute.mnar.logreg(), mice.impute.mpmm(), mice.impute.norm.boot(), mice.impute.norm.nob(), mice.impute.norm.predict(), mice.impute.norm(), mice.impute.pmm(), mice.impute.polr(), mice.impute.polyreg(), mice.impute.quadratic(), mice.impute.rf(), mice.impute.ri()
Examples

# do default multiple imputation on a numeric matrix
imp <- mice(nhanes, method = "midastouch")
imp
# list the actual imputations for BMI
imp$imp$bmi
# first completed data matrix
complete(imp)
# imputation on mixed data with a different method per column
mice(nhanes2, method = c("sample", "midastouch", "logreg", "norm"))
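As an additional illustration not in the original examples: mice() forwards named arguments in ... to the univariate imputation functions, so midastouch-specific settings such as midas.kappa and outout can, in principle, be supplied directly to mice().

# pass midastouch-specific arguments through mice()'s ... (illustrative sketch)
# midas.kappa = 3 follows Siddique and Belin (2008); outout = FALSE is faster
# but statistically less appropriate, as noted under the arguments
imp2 <- mice(nhanes, method = "midastouch", midas.kappa = 3, outout = FALSE)
imp2$imp$bmi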