HiddenMarkov (version 1.8-13)

forwardback: Forward and Backward Probabilities of DTHMM

Description

These functions calculate the forward and backward probabilities for a dthmm process, as defined in MacDonald & Zucchini (1997, Page 60).

Usage

backward(x, Pi, distn, pm, pn = NULL)
forward(x, Pi, delta, distn, pm, pn = NULL)
forwardback(x, Pi, delta, distn, pm, pn = NULL, fortran = TRUE)
forwardback.dthmm(Pi, delta, prob, fortran = TRUE, fwd.only = FALSE)

Arguments

x

is a vector of length \(n\) containing the observed process.

Pi

is the \(m \times m\) transition probability matrix of the hidden Markov chain.

delta

is the marginal probability distribution of the \(m\) hidden states.

distn

is a character string with the distribution name, e.g. "norm" or "pois". If the distribution is specified as "wxyz" then a probability (or density) function called "dwxyz" should be available, in the standard R format (e.g. dnorm or dpois).
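This naming convention also admits user-supplied distributions: any density following the standard d&lt;name&gt; pattern can be referenced by name. As a minimal sketch (the name "shiftpois" and the function dshiftpois are hypothetical, shown only to illustrate the convention):

```r
# Hypothetical shifted Poisson density following the standard "d" naming
# convention, so that distn = "shiftpois" resolves to dshiftpois()
# with parameters supplied through pm, e.g. pm = list(lambda=..., shift=...).
dshiftpois <- function(x, lambda, shift) dpois(x - shift, lambda)
```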

pm

is a list object containing the current (Markov dependent) parameter estimates associated with the distribution of the observed process (see dthmm).

pn

is a list object containing the observation dependent parameter values associated with the distribution of the observed process (see dthmm).

prob

is an \(n \times m\) matrix containing the observation probabilities or densities, with rows indexed by time and columns by Markov state.

fortran

logical, if TRUE (default) use the Fortran code, else use the R code.

fwd.only

logical, if FALSE (default) calculate both forward and backward probabilities; else calculate and return only forward probabilities and log-likelihood.

Value

The function forwardback returns a list with two matrices containing the forward and backward (log) probabilities, logalpha and logbeta, respectively, and the log-likelihood (LL).

The function forward returns a matrix containing the forward (log) probabilities, logalpha, and backward returns a matrix containing the backward (log) probabilities, logbeta.

Details

Denote the \(n \times m\) matrices containing the forward and backward probabilities as \(A\) and \(B\), respectively. Then the \((i,j)\)th elements are $$ \alpha_{ij} = \Pr\{ X_1 = x_1, \cdots, X_i = x_i, C_i = j \} $$ and $$ \beta_{ij} = \Pr\{ X_{i+1} = x_{i+1}, \cdots, X_n = x_n \,|\, C_i = j \} \,. $$ Further, the diagonal elements of the product matrix \(A B^\prime\) are all the same, each taking the value of the likelihood \(\Pr\{X_1 = x_1, \cdots, X_n = x_n\}\); its logarithm is the log-likelihood LL.
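The forward recursion underlying these quantities can be sketched directly in R. The helper name logforward and its scaling scheme below are illustrative only, not the package's Fortran or R implementation; prob plays the role of the \(n \times m\) observation density matrix accepted by forwardback.dthmm:

```r
# Minimal R sketch of the forward recursion on the log scale
# (illustration only; the package code differs in detail).
logforward <- function(Pi, delta, prob) {
    n <- nrow(prob)
    m <- ncol(prob)
    logalpha <- matrix(NA, n, m)
    # alpha_1j = delta_j * Pr(X_1 = x_1 | C_1 = j)
    a <- delta * prob[1, ]
    lscale <- 0
    for (i in 1:n) {
        if (i > 1) a <- (a %*% Pi) * prob[i, ]
        # rescale at each step to avoid underflow,
        # accumulating the log of the scale constants
        s <- sum(a)
        a <- a / s
        lscale <- lscale + log(s)
        logalpha[i, ] <- log(a) + lscale
    }
    # after rescaling, sum(a) = 1, so lscale is the log-likelihood
    list(logalpha = logalpha, LL = lscale)
}
```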

References

Cited references are listed on the HiddenMarkov manual page.

See Also

logLik

Examples

#    Set Parameter Values

Pi <- matrix(c(1/2, 1/2,   0,   0,   0,
               1/3, 1/3, 1/3,   0,   0,
                 0, 1/3, 1/3, 1/3,   0,
                 0,   0, 1/3, 1/3, 1/3,
                 0,   0,   0, 1/2, 1/2),
             byrow=TRUE, nrow=5)

p <- c(1, 4, 2, 5, 3)
delta <- c(0, 1, 0, 0, 0)

#------   Poisson HMM   ------

x <- dthmm(NULL, Pi, delta, "pois", list(lambda=p), discrete=TRUE)

x <- simulate(x, nsim=10)

y <- forwardback(x$x, Pi, delta, "pois", list(lambda=p))

# below should be same as LL for all time points
print(log(diag(exp(y$logalpha) %*% t(exp(y$logbeta)))))
print(y$LL)

#------   Gaussian HMM   ------

x <- dthmm(NULL, Pi, delta, "norm", list(mean=p, sd=p/3))

x <- simulate(x, nsim=10)

y <- forwardback(x$x, Pi, delta, "norm", list(mean=p, sd=p/3))

# below should be same as LL for all time points
print(log(diag(exp(y$logalpha) %*% t(exp(y$logbeta)))))
print(y$LL)