These nimbleFunctions provide distributions that can be used directly in R or in nimble hierarchical models (via nimbleCode and nimbleModel).
The distribution has two forms, dHMM and dHMMo. Define S as the number of latent state categories (the maximum possible value for elements of x), O as the number of possible observation state categories, and T as the number of observation times (the length of x). In dHMM, probObs is a time-independent observation probability matrix with dimension S x O. In dHMMo, probObs is a three-dimensional array of time-dependent observation probabilities with dimension S x O x T. The first index of probObs indexes the true latent state; the second index indexes the observed state. For example, in the time-dependent case, probObs[i, j, t] is the probability that an individual in state i at time t is observed in state j.
probTrans has dimension S x S. probTrans[i, j] is the time-independent probability that an individual in state i at time t transitions to state j at time t+1.
init has length S. init[i] is the probability of being in state i at the first observation time. That means that the first observations arise from the initial state probabilities. For more explanation, see the package vignette (vignette("Introduction_to_nimbleEcology")).
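As a concrete illustration of these dimension conventions, here is a small base-R sketch that builds hypothetical parameter objects for S = 2 latent states, O = 2 observation categories, and T = 3 occasions (all values are made up for illustration):

```r
S <- 2; O <- 2; nTimes <- 3

# init: length-S vector of initial state probabilities
init <- c(0.6, 0.4)

# probObs for dHMM: S x O matrix; row i gives P(observed j | true state i)
probObs <- matrix(c(0.9, 0.1,
                    0.2, 0.8), nrow = S, byrow = TRUE)

# probObs for dHMMo: S x O x T array; one observation matrix per occasion
# (here the same matrix is repeated, purely for illustration)
probObsTime <- array(rep(probObs, nTimes), dim = c(S, O, nTimes))

# probTrans: S x S matrix; row i gives P(state j at t+1 | state i at t)
probTrans <- matrix(c(0.7, 0.3,
                      0.4, 0.6), nrow = S, byrow = TRUE)

# every row of probObs and probTrans should sum to 1
stopifnot(all(abs(rowSums(probObs) - 1) < 1e-12),
          all(abs(rowSums(probTrans) - 1) < 1e-12))
```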
Compared to writing nimble models with a discrete latent state and a separate scalar datum for each observation time, use of these distributions allows one to directly sum (marginalize) over the discrete latent state and calculate the probability of all observations for one individual (or other HMM unit) jointly.
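To make the marginalization concrete, here is a minimal base-R sketch (not the package's implementation) of the forward-algorithm likelihood that this summation corresponds to, using the time-independent (dHMM) form; all names and values below are illustrative:

```r
# Forward algorithm: marginalizes over the latent state sequence and returns
# the log-probability of one observation history x (coded 1..O).
hmm_loglik <- function(x, init, probObs, probTrans) {
  pi   <- init                     # current P(latent state), length S
  logL <- 0
  len  <- length(x)
  for (t in seq_len(len)) {
    joint <- pi * probObs[, x[t]]  # P(state i and observing x[t])
    step  <- sum(joint)            # marginal probability of this observation
    logL  <- logL + log(step)
    pi    <- joint / step          # condition on the observation
    if (t < len)
      pi <- as.vector(pi %*% probTrans)  # advance the state distribution
  }
  logL
}

# Illustrative values: S = O = 2 categories, T = 3 occasions
init      <- c(0.6, 0.4)
probObs   <- matrix(c(0.9, 0.1, 0.2, 0.8), nrow = 2, byrow = TRUE)
probTrans <- matrix(c(0.7, 0.3, 0.4, 0.6), nrow = 2, byrow = TRUE)
hmm_loglik(c(1, 1, 2), init, probObs, probTrans)
```

Note that, consistent with the description of init above, the first observation is evaluated against the initial state probabilities before any transition is applied.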
These are nimbleFunctions written in the format of user-defined distributions for NIMBLE's extension of the BUGS model language. More information can be found in the NIMBLE User Manual at https://r-nimble.org.
When using these distributions in a nimble model, the left-hand side will be used as x, and the user should not provide the log argument.
For example, in nimble model code,

observedStates[i, 1:T] ~ dHMM(initStates[1:S], observationProbs[1:S, 1:O],
  transitionProbs[1:S, 1:S], 1, T)

declares that the observedStates[i, 1:T] vector (the observation history for individual i, for example) follows a hidden Markov model distribution with parameters as indicated, assuming all the parameters have been declared elsewhere in the model. As above, S is the number of system state categories, O is the number of observation state categories, and T is the number of observation occasions. This will invoke (something like) the following call to dHMM when nimble uses the model, such as for MCMC:

dHMM(observedStates[1:T], initStates[1:S], observationProbs[1:S, 1:O],
  transitionProbs[1:S, 1:S], 1, T, log = TRUE)
If an algorithm using a nimble model with this declaration needs to generate a random draw for observedStates[1:T], it will make a similar invocation of rHMM, with n = 1.
If the observation probabilities are time-dependent, one would use:

observedStates[1:T] ~ dHMMo(initStates[1:S], observationProbs[1:S, 1:O, 1:T],
  transitionProbs[1:S, 1:S], 1, T)
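For context, here is a minimal sketch of how such a declaration might sit inside a model, assuming nimble and nimbleEcology are loaded; the names, dimensions, and the model-building call shown in comments are purely illustrative:

```r
library(nimble)
library(nimbleEcology)

hmmCode <- nimbleCode({
  # Observation history for one unit; init, probObs, and probTrans would be
  # supplied as data/inits or given priors elsewhere in the model.
  # dHMMo takes the S x O x T time-dependent observation array.
  y[1:T] ~ dHMMo(init[1:S], probObs[1:S, 1:O, 1:T],
                 probTrans[1:S, 1:S], 1, T)
})

# A model object could then be built along these lines:
# hmmModel <- nimbleModel(hmmCode,
#                         constants = list(S = 2, O = 2, T = 3),
#                         data = list(y = c(1, 1, 2)),
#                         inits = list(init = c(0.5, 0.5)))  # plus probObs, probTrans
```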