learn(nw, df, prior = jointprior(nw),
      nodelist = 1:nw$n,
      trylist = rep(list(NULL), nw$n),
      timetrace = FALSE, smalldf = NA,
      usetrylist = nw$n)
The network is returned with the attributes updated for the nodes. Also, the attribute score is updated and contains the network score. The contribution to the network score from each node is contained in the attribute loglik of that node.

For each node, the conditional prior is deduced (conditional) in the local master (see localmaster). The data are then learned by calling the appropriate post function.
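As a sketch of inspecting these attributes (assuming the deal package and its rats data set; the node name W1 follows that data set):

```r
# Sketch, assuming the 'deal' package is installed. The attribute
# names (score, loglik) are those described above.
library(deal)

data(rats)
fit <- network(rats)               # initial network from the data
fit.prior <- jointprior(fit, 12)   # joint prior, imaginary sample size 12
fit <- learn(fit, rats, fit.prior)$nw

fit$score                          # total network score
fit$nodes$W1$loglik                # contribution from one node (here: W1)
```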
A so-called trylist is maintained. It consists of learned nodes with
particular parent configurations. Before a node with a certain parent
configuration is learned, it is checked whether that node has already
been learned. Previously learned nodes are given as input via the
trylist parameter, which is updated during the learning procedure. The
learning procedure calls reuselearn, which traverses the trylist to
see whether previously learned nodes can be reused.
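A minimal sketch of carrying the trylist across calls (assuming the deal package; learn returns both the updated network and the updated trylist):

```r
# Sketch, assuming the 'deal' package. The trylist returned by one
# call to learn() is passed on to the next, so nodes whose parent
# configurations were already learned are reused, not relearned.
library(deal)

data(rats)
fit <- network(rats)
fit.prior <- jointprior(fit, 12)

result  <- learn(fit, rats, fit.prior)   # first call: empty trylist
fit     <- result$nw
trylist <- result$trylist                # learned (node, parents) entries

# Later calls, e.g. after altering the graph, reuse the trylist:
fit2 <- learn(fit, rats, fit.prior, trylist = trylist)$nw
```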
If no information can be reused, the posterior distributions of the
parameters are calculated. First, the master prior procedure
(conditional) is used to deduce the prior parameters for the current
node. Then the posteriors are determined from the data using the
algorithm described in Bøttcher (2002).
udisclik calculates the log-likelihood contribution of discrete
nodes. For continuous nodes, this contribution is calculated during
learning.
The learning procedure is called from various functions following the
principle that networks should always be kept updated with their
score. Thus, e.g., drawnetwork keeps the network updated when the
graph is altered.

networkfamily, jointprior, maketrylist, network, post
data(rats)
fit <- network(rats)
fit.prior <- jointprior(fit, 12)
fit <- learn(fit, rats, fit.prior)$nw