The deviance is, up to a constant, minus twice the maximized log-likelihood; where sensible, the constant is chosen so that a saturated model has deviance zero.
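In symbols, if $$\ell(\hat{\theta})$$ denotes the maximized log-likelihood of the fitted model and $$\ell_s$$ that of the saturated model, this convention gives
$$D = 2\big(\ell_s - \ell(\hat{\theta})\big) = -2\,\ell(\hat{\theta}) + \text{const},$$
so the saturated model attains $$D = 0$$.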
The bandwidth is chosen to minimize the penalized criterion
$$GCV(h)=p(h)\,\Xi(n^{-1}h^{-1})$$
where
$$p(h)=\frac{1}{n}\sum_{i=1}^{n}\big(y_i-r_{i}(x_i)\big)^{2}\,w(x_i)$$
and the penalty function $$\Xi(\cdot)$$ can be selected from the following criteria (a sketch implementing the criterion appears after the list):
Generalized Cross-validation (GCV):
$$\Xi_{GCV}(n^{-1}h^{-1})=(1-n^{-1}S_{ii})^{-2}$$
Akaike's Information Criterion (AIC):
$$\Xi_{AIC}(n^{-1}h^{-1})=\exp(2n^{-1}S_{ii})$$
Finite Prediction Error (FPE):
$$\Xi_{FPE}(n^{-1}h^{-1})=\frac{1+n^{-1}S_{ii}}{1-n^{-1}S_{ii}}$$
Shibata's model selector (Shibata):
$$\Xi_{Shibata}(n^{-1}h^{-1})=1+2n^{-1}S_{ii}$$
Rice's bandwidth selector (Rice):
$$\Xi_{Rice}(n^{-1}h^{-1})=(1-2n^{-1}S_{ii})^{-1}$$
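As a concrete illustration, here is a minimal Python sketch of this selection rule for a Nadaraya–Watson smoother. Everything beyond the formulas above is an assumption made for illustration: the Gaussian kernel, uniform weights $$w(x_i)=1$$, reading $$S_{ii}$$ as the $$i$$-th diagonal element of the smoother (hat) matrix $$S$$ with $$\hat{y}=Sy$$ (which is how the bandwidth $$h$$ enters the penalty), and applying the penalty pointwise inside the average; the names `nw_hat_matrix`, `penalized_criterion`, and `select_bandwidth` are hypothetical.

```python
# A minimal sketch of penalized bandwidth selection, under the assumptions
# stated above (Gaussian kernel, w(x_i) = 1, S_ii = i-th diagonal of the
# smoother matrix). Not the original implementation.
import numpy as np

def nw_hat_matrix(x, h):
    """Row-normalized kernel weights: yhat = S @ y for Nadaraya-Watson."""
    k = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)  # Gaussian kernel
    return k / k.sum(axis=1, keepdims=True)

# Penalty functions Xi(u) with u = S_ii / n, as listed above.
PENALTIES = {
    "GCV":     lambda u: (1.0 - u) ** -2,
    "AIC":     lambda u: np.exp(2.0 * u),
    "FPE":     lambda u: (1.0 + u) / (1.0 - u),
    "Shibata": lambda u: 1.0 + 2.0 * u,
    "Rice":    lambda u: (1.0 - 2.0 * u) ** -1,
}

def penalized_criterion(x, y, h, penalty="GCV"):
    """p(h) * Xi(n^-1 S_ii): mean squared residual times the penalty."""
    n = len(x)
    S = nw_hat_matrix(x, h)
    resid = y - S @ y
    u = np.diag(S) / n
    # The penalty is applied pointwise inside the average here -- one reading
    # of the i-dependent S_ii above; an alternative evaluates Xi at tr(S)/n.
    return np.mean(resid ** 2 * PENALTIES[penalty](u))

def select_bandwidth(x, y, grid, penalty="GCV"):
    """Grid search: return the h minimizing the penalized criterion."""
    scores = [penalized_criterion(x, y, h, penalty) for h in grid]
    return grid[int(np.argmin(scores))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 1, 200))
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=200)
    grid = np.linspace(0.01, 0.2, 50)
    for name in PENALTIES:
        print(name, select_bandwidth(x, y, grid, name))
```

A common alternative evaluates $$\Xi$$ once at $$n^{-1}\operatorname{tr}(S)$$ rather than per observation. Note also that all five penalties agree to first order, since each satisfies $$\Xi(u)=1+2u+O(u^{2})$$ as $$u\to 0$$; they differ in how strongly they penalize small bandwidths.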