Log-predictive probability calculation for Gaussian process (GP) regression, classification, or combined unknown constraint models; primarily to be used in the particle learning (PL) re-sample step
lpredprob.GP(z, Zt, prior)
lpredprob.CGP(z, Zt, prior)
lpredprob.ConstGP(z, Zt, prior)
Returns a real-valued scalar - the log predictive probability
z: the new observation whose (log) predictive probability is to be calculated given the particle Zt
Zt: the particle describing model parameters and sufficient statistics that determines the predictive distribution
prior: prior parameters passed from PL, generated by one of the prior functions, e.g., prior.GP
Robert B. Gramacy, rbg@vt.edu
This is the workhorse of the PL re-sample step. For each new observation (in sequence), the PL function calls lpredprob, and these values determine the weights used in the sample function to obtain the new particle set, which is then propagated, e.g., using propagate.GP.
lpredprob.ConstGP is essentially the combination (product) of lpredprob.GP and lpredprob.CGP for regression and classification GP models, respectively; since the calculation is on the log scale, the combination amounts to the sum of the two log predictive probabilities.
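For illustration, the following is a minimal sketch, in R, of the weighting and re-sampling calculation described above. The wrapper function resample_sketch and the explicit particle list Zt.list are hypothetical conveniences for exposition and are not part of the plgp API; only lpredprob.GP is used, with its documented (z, Zt, prior) signature, and PL performs the equivalent bookkeeping internally.

## A minimal sketch of the PL re-sample step, assuming z is a new observation,
## Zt.list is a (hypothetical) list of particles, and prior was generated by a
## prior function such as prior.GP
library(plgp)

resample_sketch <- function(z, Zt.list, prior)
{
  ## log predictive probability of z under each particle
  lp <- sapply(Zt.list, function(Zt) lpredprob.GP(z, Zt, prior))

  ## convert to normalized weights, subtracting the max for numerical stability
  w <- exp(lp - max(lp))
  w <- w / sum(w)

  ## re-sample particle indices with probability proportional to the weights
  idx <- sample(seq_along(Zt.list), length(Zt.list), replace = TRUE, prob = w)

  ## the re-sampled particles would then be propagated, e.g., via propagate.GP
  Zt.list[idx]
}

Subtracting the maximum before exponentiating keeps the weights on a numerically stable scale, since the raw values returned by lpredprob are log probabilities.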
Gramacy, R. and Polson, N. (2011). “Particle learning of Gaussian process models for sequential design and optimization.” Journal of Computational and Graphical Statistics, 20(1), pp. 102-118; arXiv:0909.5262
Gramacy, R. and Lee, H. (2010). “Optimization under unknown constraints”. Bayesian Statistics 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.); Oxford University Press
Gramacy, R. (2020). “Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences”. Chapman & Hall/CRC; https://bobby.gramacy.com/surrogates/
PL, propagate.GP
## See the demos via demo(package="plgp") and the examples section of ?plgp