Outliers can be defined as particularly influential observations.
Most methods rely on the computation of some distance metric, and
observations greater than a certain threshold are considered outliers.
Importantly, outlier detection methods are meant to provide information for
the researcher to consider, rather than to be an automated procedure whose
mindless application is a substitute for thinking.
An example sentence for reporting the usage of the composite method
could be:
"Based on a composite outlier score (see the 'check_outliers' function
in the 'performance' R package; Lüdecke et al., 2021) obtained via the joint
application of multiple outlier detection algorithms (Z-scores, Iglewicz,
1993; Interquartile range (IQR); Mahalanobis distance, Cabana, 2019; Robust
Mahalanobis distance, Gnanadesikan & Kettenring, 1972; Minimum Covariance
Determinant, Leys et al., 2018; Invariant Coordinate Selection, Archimbaud et
al., 2018; OPTICS, Ankerst et al., 1999; Isolation Forest, Liu et al., 2008;
and Local Outlier Factor, Breunig et al., 2000), we excluded n participants
that were classified as outliers by at least half of the methods used."
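The voting logic of such a composite score can be sketched in a few lines. The snippet below is an illustrative Python translation (the performance package itself is R) combining just two of the univariate methods; observations flagged by at least half of the methods are marked as outliers.

```python
import numpy as np

def composite_outliers(x, vote_fraction=0.5):
    """Flag values marked as outliers by at least `vote_fraction` of the methods.

    Illustrative sketch using two univariate methods (robust Z-score and
    Tukey's IQR rule); the real composite score combines many more algorithms.
    """
    x = np.asarray(x, dtype=float)

    # Method 1: robust Z-score (MAD-based), default threshold 1.959
    med = np.median(x)
    mad = np.median(np.abs(x - med)) * 1.4826
    vote_z = np.abs((x - med) / mad) > 1.959

    # Method 2: Tukey's IQR fences, default threshold 1.5
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    vote_iqr = (x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr)

    # Fraction of methods flagging each observation
    votes = np.mean([vote_z, vote_iqr], axis=0)
    return votes >= vote_fraction

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 100], dtype=float)
flagged = composite_outliers(x)
```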
Model-specific methods
Cook's Distance:
Among outlier detection methods, Cook's distance and leverage are less
common than the basic Mahalanobis distance, but still used. Cook's distance
estimates the variations in regression coefficients after removing each
observation, one by one (Cook, 1977). Since Cook's distance is in the metric
of an F distribution with p and n-p degrees of freedom, the median point of
the quantile distribution can be used as a cut-off (Bollen, 1985). A common
approximation or heuristic is to use 4 divided by the numbers of
observations, which usually corresponds to a lower threshold (i.e., more
outliers are detected). This only works for Frequentist models. For Bayesian
models, see pareto
.
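The two cut-offs mentioned above can be made concrete with a small hand-rolled OLS example (a Python sketch; the data and variable names are illustrative):

```python
import numpy as np
from scipy import stats

# Toy regression: y = 2x, except one influential observation at the end.
x = np.arange(10, dtype=float)
y = 2.0 * x
y[9] = 30.0                                  # true value would be 18

X = np.column_stack([np.ones_like(x), x])    # design matrix with intercept
n, p = X.shape

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
H = X @ np.linalg.inv(X.T @ X) @ X.T         # hat matrix
h = np.diag(H)                               # leverages
mse = resid @ resid / (n - p)

# Cook's distance for each observation
cooks_d = resid**2 / (p * mse) * h / (1 - h) ** 2

f_cutoff = stats.f.ppf(0.5, p, n - p)        # median of F(p, n - p)
heuristic = 4 / n                            # common heuristic, usually lower
```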
Pareto:
The reliability and approximate convergence of Bayesian models can be
assessed using the estimates for the shape parameter k of the generalized
Pareto distribution. If the estimated tail shape parameter k exceeds 0.5, the
user should be warned, although in practice the authors of the loo
package observed good performance for values of k up to 0.7 (the default
threshold used by performance).
Univariate methods
Z-scores ("zscore", "zscore_robust"):
The Z-score, or standard score, is a way of describing a data point as
deviance from a central value, either in terms of standard deviations from
the mean ("zscore") or, as is the case here by default ("zscore_robust";
Iglewicz, 1993), in terms of Median Absolute Deviation (MAD) from the median
(robust measures of dispersion and centrality). The default threshold to
classify outliers is 1.959 (threshold = list("zscore" = 1.959)),
corresponding to the 2.5% most extreme observations (assuming the data is
normally distributed). Importantly, the Z-score method is univariate: it is
computed column by column. If a dataframe is passed, the Z-score is
calculated for each variable separately, and the maximum (absolute) Z-score
is kept for each observation. Thus, all observations that are extreme on at
least one variable may be detected as outliers, and this method is not
suited for high-dimensional data (with many columns), as it returns too
liberal results (detecting many outliers).
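The column-by-column computation can be sketched as follows (a Python illustration of the math; 1.4826 is the usual factor making the MAD consistent with the SD under normality):

```python
import numpy as np

def robust_zscore_outliers(X, threshold=1.959):
    """Flag rows whose maximum absolute robust Z-score exceeds the threshold.

    The robust Z-score uses the median and the scaled Median Absolute
    Deviation instead of the mean and SD.
    """
    X = np.asarray(X, dtype=float)
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0) * 1.4826
    z = (X - med) / mad
    max_abs_z = np.max(np.abs(z), axis=1)    # keep the worst column per row
    return max_abs_z > threshold

X = np.column_stack([
    [1, 2, 3, 4, 5, 6, 7, 8, 9, 100],   # extreme value in this column only
    [5, 5, 6, 6, 5, 6, 5, 6, 5, 6],     # no outliers in this column
])
flagged = robust_zscore_outliers(X)
```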
IQR ("iqr"):
Using the IQR (interquartile range) is a robust method developed by John
Tukey, which often appears in box-and-whisker plots (e.g., in geom_boxplot).
The interquartile range is the range between the first and the third
quartiles. Tukey considered as outliers any data point that fell outside of
either 1.5 times the IQR (the default threshold) below the first quartile or
above the third quartile. Similar to the Z-score method, this is a
univariate outlier detection method, returning outliers detected for at
least one column, and might thus not be suited to high-dimensional data.
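A minimal sketch of Tukey's fences (in Python; numpy's default quantile interpolation matches R's type-7 default):

```python
import numpy as np

def iqr_outliers(x, threshold=1.5):
    """Tukey's rule: flag points beyond `threshold` * IQR outside Q1..Q3."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])   # first and third quartiles
    iqr = q3 - q1
    lower, upper = q1 - threshold * iqr, q3 + threshold * iqr
    return (x < lower) | (x > upper)

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 100], dtype=float)
flagged = iqr_outliers(x)
```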
CI ("ci", "eti", "hdi", "bci"):
Another univariate method is to compute, for each variable, some sort of
"confidence" interval and consider as outliers values lying beyond the edges
of that interval. By default, "ci" computes the Equal-Tailed Interval
("eti"), but other types of intervals are available, such as the Highest
Density Interval ("hdi") or the Bias Corrected and Accelerated Interval
("bci"). The default threshold is 0.95, considering as outliers all
observations that are outside the 95% interval of any variable. See
bayestestR::ci() for more details about the intervals.
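The equal-tailed variant is simple enough to sketch directly (in Python; the HDI and BCa intervals require the machinery of bayestestR and are not shown):

```python
import numpy as np

def eti_outliers(x, ci=0.95):
    """Flag values outside the equal-tailed interval of width `ci`.

    The equal-tailed interval cuts (1 - ci) / 2 probability mass from
    each tail, here via empirical percentiles.
    """
    x = np.asarray(x, dtype=float)
    tail = (1 - ci) / 2 * 100
    low, high = np.percentile(x, [tail, 100 - tail])
    return (x < low) | (x > high)

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 100], dtype=float)
flagged = eti_outliers(x)  # both tails are trimmed, so the minimum is flagged too
```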
Multivariate methods
Mahalanobis Distance:
Mahalanobis distance (Mahalanobis, 1930) is often used for multivariate
outlier detection, as this distance takes into account the shape of the
observations. The default threshold is often arbitrarily set to some
deviation (in terms of SD or MAD) from the mean (or median) of the
Mahalanobis distance. However, as the Mahalanobis distance can be
approximated by a chi-squared distribution (Rousseeuw & Van Zomeren, 1990),
we can use the alpha quantile of the chi-square distribution with k degrees
of freedom (k being the number of columns). By default, the alpha threshold
is set to 0.025 (corresponding to the 2.5% most extreme observations;
Cabana, 2019). This criterion is a natural extension of the median plus or
minus a coefficient times the MAD method (Leys et al., 2013).
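The chi-square cut-off can be illustrated as follows (a Python sketch of the classical, non-robust distance, on toy data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
X[0] = [8.0, 8.0, 8.0]                     # plant one clear multivariate outlier

mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
diff = X - mu
# Squared Mahalanobis distance of every observation
d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

k = X.shape[1]
cutoff = stats.chi2.ppf(1 - 0.025, df=k)   # qchisq(p = 1 - 0.025, df = k) in R
outliers = d2 > cutoff
```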
Robust Mahalanobis Distance:
A robust version of Mahalanobis distance using an Orthogonalized
Gnanadesikan-Kettenring pairwise estimator (Gnanadesikan & Kettenring,
1972). Requires the bigutilsr package. See the bigutilsr::dist_ogk()
function.
Minimum Covariance Determinant (MCD):
Another robust version of Mahalanobis. Leys et al. (2018) argue that
Mahalanobis distance is not a robust way to determine outliers, as it uses
the means and covariances of all the data (including the outliers) to
determine individual difference scores. The Minimum Covariance Determinant
calculates the mean and covariance matrix based on the most central subset
of the data (by default, 66%). It is deemed to be a more robust method of
identifying and removing outliers than regular Mahalanobis distance.
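As a rough Python equivalent, scikit-learn's MinCovDet can stand in for the MCD estimator (an assumption for illustration; the R implementation relies on a different backend), combined with the chi-square cut-off from above:

```python
import numpy as np
from scipy import stats
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
X[0] = [8.0, 8.0, 8.0]                 # outlier that would distort a classical fit

# Fit location and scatter on the most central ~66% of the data only
mcd = MinCovDet(support_fraction=0.66, random_state=0).fit(X)
d2 = mcd.mahalanobis(X)                # squared robust Mahalanobis distances

cutoff = stats.chi2.ppf(1 - 0.025, df=X.shape[1])
outliers = d2 > cutoff
```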
Invariant Coordinate Selection (ICS):
The outliers are detected using ICS, which by default uses an alpha
threshold of 0.025 (corresponding to the 2.5% most extreme observations) as
a cut-off value for outlier classification. Refer to the help file of
ICSOutlier::ics.outlier() for more details about this procedure.
Note that method = "ics" requires both ICS and ICSOutlier
to be installed, and that it takes some time to compute the results.
OPTICS:
The Ordering Points To Identify the Clustering Structure (OPTICS) algorithm
(Ankerst et al., 1999) uses concepts similar to DBSCAN (an unsupervised
clustering technique that can be used for outlier detection). The threshold
argument is passed as minPts, which corresponds to the minimum size
of a cluster. By default, this size is set at 2 times the number of columns
(Sander et al., 1998). Compared to the other techniques, which will always
detect some outliers (as these are usually defined as a percentage of
extreme values), this algorithm functions in a different manner and won't
always detect outliers. Note that method = "optics" requires the
dbscan package to be installed, and that it takes some time to compute
the results.
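For illustration, scikit-learn's OPTICS can stand in for the dbscan implementation (parameter names differ; minPts corresponds to min_samples here):

```python
import numpy as np
from sklearn.cluster import OPTICS

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))       # one dense cluster of points...
X[0] = [25.0, 25.0]                # ...plus a single isolated observation

min_pts = 2 * X.shape[1]           # default threshold: 2 * number of columns
labels = OPTICS(min_samples=min_pts).fit(X).labels_

# Points that cannot be assigned to any cluster are labelled -1 ("noise");
# unlike percentage-based methods, this may flag no observation at all.
noise = labels == -1
```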
Isolation Forest:
The outliers are detected using the anomaly score of an isolation forest (a
class of random forest). The default threshold of 0.025 will classify as
outliers the observations located beyond qnorm(1 - 0.025) * MAD (a robust
equivalent of SD) from the median of the scores (roughly corresponding to
the 2.5% most extreme observations). Requires the solitude package.
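A sketch of the same two-step logic (scikit-learn's IsolationForest standing in for solitude, followed by the MAD-based cut-off described above):

```python
import numpy as np
from scipy import stats
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
X[0] = [10.0, 10.0]                    # isolated point -> easy to isolate

iso = IsolationForest(random_state=0).fit(X)
score = -iso.score_samples(X)          # higher = more anomalous

# Robust cut-off: qnorm(1 - 0.025) * MAD above the median of the scores
med = np.median(score)
mad = np.median(np.abs(score - med)) * 1.4826
outliers = score > med + stats.norm.ppf(1 - 0.025) * mad
```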
Local Outlier Factor:
Based on a K-nearest neighbors algorithm, LOF compares the local density of
a point to the local densities of its neighbors instead of computing a
distance from the center (Breunig et al., 2000). Points that have a
substantially lower density than their neighbors are considered outliers. A
LOF score of approximately 1 indicates that the density around the point is
comparable to that of its neighbors. Scores significantly larger than 1
indicate outliers. The default threshold of 0.025 will classify as outliers
the observations located beyond qnorm(1 - 0.025) * SD of the
log-transformed LOF distance. Requires the dbscan package.
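The scoring and thresholding can be sketched with scikit-learn's LocalOutlierFactor standing in for the dbscan implementation:

```python
import numpy as np
from scipy import stats
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 2))
X[0] = [10.0, 10.0]                     # far from every local neighborhood

lof = LocalOutlierFactor(n_neighbors=20).fit(X)
scores = -lof.negative_outlier_factor_  # LOF ~ 1 for inliers, >> 1 for outliers

# Cut-off on the log-transformed LOF scores, as described above
log_scores = np.log(scores)
cutoff = log_scores.mean() + stats.norm.ppf(1 - 0.025) * log_scores.std()
outliers = log_scores > cutoff
```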
Threshold specification
Default thresholds are currently specified as follows:
list(
zscore = stats::qnorm(p = 1 - 0.025),
iqr = 1.5,
ci = 0.95,
cook = stats::qf(0.5, ncol(x), nrow(x) - ncol(x)),
pareto = 0.7,
mahalanobis = stats::qchisq(p = 1 - 0.025, df = ncol(x)),
robust = stats::qchisq(p = 1 - 0.025, df = ncol(x)),
mcd = stats::qchisq(p = 1 - 0.025, df = ncol(x)),
ics = 0.025,
optics = 2 * ncol(x),
iforest = 0.025,
lof = 0.025
)