The default method for setting the scale parameter for function ebnm_normal_scale_mixture.
Usage

ebnm_scale_normalmix(
  x,
  s,
  mode = 0,
  min_K = 3,
  max_K = 300,
  KLdiv_target = 1/length(x)
)
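As a minimal sketch, the grid returned by this function can be supplied when fitting the scale-mixture-of-normals prior. The simulated data below, and the assumption that ebnm_normal_scale_mixture accepts the grid via its scale argument, are illustrative; consult the ebnm package documentation for the exact interface.

  # Minimal sketch, assuming the ebnm package is attached and that the grid
  # returned by ebnm_scale_normalmix() can be passed to
  # ebnm_normal_scale_mixture() via its 'scale' argument.
  library(ebnm)

  set.seed(1)
  n     <- 1000
  theta <- rnorm(n, sd = 2)          # true means
  s     <- rep(1, n)                 # homoskedastic standard errors
  x     <- theta + rnorm(n, sd = s)  # observations

  # Default grid of component standard deviations for the normal mixture prior:
  scale_grid <- ebnm_scale_normalmix(x, s)

  # Fit the scale-mixture-of-normals prior using this grid:
  fit <- ebnm_normal_scale_mixture(x, s, scale = scale_grid)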
Arguments

x: A vector of observations. Missing observations (NAs) are not allowed.

s: A vector of standard errors (or a scalar if all are equal). Standard errors may not be exactly zero, and missing standard errors are not allowed.

mode: A scalar specifying the mode of the prior \(g\).

min_K: The minimum number of components \(K\) to include in the finite mixture of normal distributions used to approximate the nonparametric family of scale mixtures of normals.

max_K: The maximum number of components \(K\) to include in the approximating mixture of normal distributions.

KLdiv_target: The desired bound \(\kappa\) on the KL divergence from the solution obtained using the approximating mixture to the exact solution. More precisely, the scale parameter is set such that, given the exact MLE $$\hat{g} := \mathrm{argmax}_{g \in G} L(g),$$ where \(G\) is the full nonparametric family, and the MLE over the approximating family \(\tilde{G}\), $$\tilde{g} := \mathrm{argmax}_{g \in \tilde{G}} L(g),$$ we have $$\mathrm{KL}\bigl(\hat{g} \ast N(0, s^2) \,\big\|\, \tilde{g} \ast N(0, s^2)\bigr) \le \kappa,$$ where \(\ast\, N(0, s^2)\) denotes convolution with the normal error distribution (the derivation of the bound assumes homoskedastic observations). For details, see References below.
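As a schematic illustration only (this is not the selection rule implemented by ebnm_scale_normalmix, which derives the grid from the KL bound above; see the reference below), the sketch shows the general shape of such a scale parameter: \(K\) component standard deviations placed geometrically between a smallest and largest plausible scale, with \(K\) clamped to [min_K, max_K]. The grid endpoints used here are arbitrary illustrative assumptions.

  # Schematic only: NOT the rule implemented by ebnm_scale_normalmix.
  # It illustrates a geometric grid of K component standard deviations,
  # with K restricted to [min_K, max_K]; the actual function instead chooses
  # the grid so that the KL-divergence bound described above is satisfied.
  geometric_grid <- function(x, s, K, min_K = 3, max_K = 300) {
    K         <- min(max(K, min_K), max_K)
    scale_min <- min(s) / 10   # well below the noise level (illustrative choice)
    scale_max <- max(abs(x))   # roughly covers the spread of the data (illustrative choice)
    exp(seq(log(scale_min), log(scale_max), length.out = K))
  }

  geometric_grid(x = rnorm(100, sd = 3), s = 1, K = 10)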
References

Willwerscheid, J. (2021). Empirical Bayes Matrix Factorization: Methods and Applications. PhD dissertation, University of Chicago.