Build a probabilistic suffix tree that stores a variable length Markov chain (VLMC) model
# S4 method for stslist
pstree(object, group, L, cdata = NULL, stationary = TRUE,
  nmin = 1, ymin = NULL, weighted = TRUE, with.missing = FALSE, lik = TRUE)
An object of class "PSTf".
object: a sequence object, i.e., an object of class 'stslist' as created by the TraMineR seqdef function.
group: a vector giving the group membership for each observation in object. If specified, a segmented PST is produced containing one PST for each group.
cdata: not implemented yet.
stationary: not implemented yet.
L: Integer. Maximal depth of the PST. Defaults to the maximum length of the sequence(s) in object minus 1.
nmin: Integer. Minimum number of occurrences of a subsequence required for it to be added to the tree.
ymin: Numeric. Smoothing parameter for conditional probabilities, ensuring that no symbol, and hence no sequence, is predicted to have a null probability. The parameter ymin sets a lower bound for a symbol's probability.
weighted: Logical. If TRUE, the weights attached to the sequence object are used in the estimation of the probabilities.
with.missing: Logical. If TRUE, the missing state is added to the alphabet.
lik: Logical. If TRUE, the log-likelihood of the model, i.e., the likelihood of the training sequences given the model, is computed and stored in the 'logLik' slot of the PST. Setting this to FALSE spares the time required to compute the likelihood.
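The lower bound set by ymin can be sketched in a few lines of plain R. One common scheme from the VLMC literature rescales the empirical distribution and adds ymin to every symbol, so the result still sums to 1 while no probability falls below the bound. This is an illustrative sketch, not the package's internal code:

```r
# Illustrative sketch (not the PST package internals): lower-bound a
# probability vector at ymin while keeping it a proper distribution.
# Each probability is rescaled by (1 - |A| * ymin) and shifted by ymin,
# so the smoothed vector still sums to 1 and every entry is >= ymin.
smooth_probs <- function(p, ymin) {
  stopifnot(ymin >= 0, length(p) * ymin <= 1)
  p * (1 - length(p) * ymin) + ymin
}

p <- c(A = 0.7, B = 0.3, C = 0)  # C would otherwise be predicted with probability 0
smooth_probs(p, ymin = 0.01)     # → A = 0.689, B = 0.301, C = 0.010
```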
Alexis Gabadinho
A probabilistic suffix tree (PST) is built from a learning sample of \(n, \; n \geq 1\) sequences by successively adding nodes labelled with subsequences (contexts) \(c\) of length \(L, \; 0 \leq L \leq L_{max}\) found in the data. When the value \(L_{max}\) is not defined by the user, it is set to its theoretical maximum \(\ell-1\), where \(\ell\) is the maximum sequence length in the learning sample. The nmin argument specifies the minimum frequency of a subsequence required to add it to the tree.
Each node of the tree is labelled with a context \(c\) and stores the next symbol empirical probability distribution \(\hat{P}(\sigma|c), \; \sigma \in A\), where \(A\) is an alphabet of finite size. The root node, labelled with the empty string \(e\), stores the \(0th\) order probability \(\hat{P}(\sigma), \; \sigma \in A\) of observing each symbol of the alphabet in the whole learning sample.
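The empirical distribution \(\hat{P}(\sigma|c)\) stored at a node can be obtained by scanning a sequence for occurrences of the context and tallying the symbol that follows each occurrence. The sketch below, in base R, illustrates the idea for a single sequence; it is a simplified illustration, not the package's cprob implementation:

```r
# Sketch: empirical next-symbol distribution after context ctx in one
# sequence x (a character vector of symbols). Every position where ctx
# occurs is found, and the symbol immediately following it is counted.
next_symbol_probs <- function(x, ctx, alphabet) {
  k <- length(ctx)
  counts <- setNames(numeric(length(alphabet)), alphabet)
  for (i in seq_len(length(x) - k)) {
    if (all(x[i:(i + k - 1)] == ctx)) {
      s <- x[i + k]
      counts[s] <- counts[s] + 1
    }
  }
  counts / sum(counts)
}

x <- c("a", "b", "a", "b", "b", "a", "b", "a")
next_symbol_probs(x, ctx = "b", alphabet = c("a", "b"))  # → a = 0.75, b = 0.25
```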
The building algorithm calls the cprob function, which returns the empirical next symbol counts observed after each context \(c\) and computes the corresponding empirical probability distribution. Each node in the tree is connected to its longest suffix, where the longest suffix of a string \(c=c_{1},c_{2}, \ldots, c_{k}\) of length \(k\) is \(suffix(c)=c_{2}, \ldots, c_{k}\).
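The suffix relation linking the nodes is simple to write down directly. As a trivial sketch (the helper name is illustrative, not part of the package):

```r
# Longest suffix of a context c1, c2, ..., ck is c2, ..., ck.
# The empty context (the root) has no suffix of its own.
longest_suffix <- function(ctx) {
  if (length(ctx) <= 1) character(0) else ctx[-1]
}

longest_suffix(c("a", "b", "b"))  # → c("b", "b")
```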
Once an initial PST is built, it can be pruned to reduce its complexity by removing nodes that do not provide significant information (see prune). A model selection procedure based on information criteria is also available (see tune). For more details, see Gabadinho and Ritschard (2016).
Bejerano, G. & Yona, G. (2001) Variations on probabilistic suffix trees: statistical modeling and prediction of protein families. Bioinformatics 17, 23-43.
Gabadinho, A. & Ritschard, G. (2016) Analyzing State Sequences with Probabilistic Suffix Trees: The PST R Package. Journal of Statistical Software 72(3), 1-39.
Maechler, M. & Buehlmann, P. (2004) Variable Length Markov Chains: Methodology, Computing, and Software. Journal of Computational and Graphical Statistics 13, 435-455.
Ron, D.; Singer, Y. & Tishby, N. (1996) The power of amnesia: Learning probabilistic automata with variable memory length. Machine Learning 25, 117-149.
prune, tune
## Build a PST on a single sequence
data(s1)
s1.seq <- seqdef(s1)
s1.seq
S1 <- pstree(s1.seq, L = 3)
print(S1, digits = 3)
S1