An ARTMAP performs supervised learning. It consists of two coupled ART networks.
In theory, these could be ART1, ART2, or others; however, in SNNS, ARTMAP is
implemented for ART1 only, so this function is to be used with binary inputs.
As explained in the description of art1, ART aims at solving the stability/plasticity
dilemma. The advantage of ARTMAP is therefore that it is a supervised learning mechanism
that guarantees stability.
artmap(x, ...)

# S3 method for default
artmap(
x,
nInputsTrain,
nInputsTargets,
nUnitsRecLayerTrain,
nUnitsRecLayerTargets,
maxit = 1,
nRowInputsTrain = 1,
nRowInputsTargets = 1,
nRowUnitsRecLayerTrain = 1,
nRowUnitsRecLayerTargets = 1,
initFunc = "ARTMAP_Weights",
initFuncParams = c(1, 1, 1, 1, 0),
learnFunc = "ARTMAP",
learnFuncParams = c(0.8, 1, 1, 0, 0),
updateFunc = "ARTMAP_Stable",
updateFuncParams = c(0.8, 1, 1, 0, 0),
shufflePatterns = TRUE,
...
)
an rsnns object. The fitted.values member of the object contains a list of two-dimensional activation patterns.
x: a matrix with training inputs and targets for the network
...: additional function parameters (currently not used)
nInputsTrain: the number of columns of the matrix that are training inputs
nInputsTargets: the number of columns that are target values
nUnitsRecLayerTrain: the number of units in the recognition layer of the training data ART network
nUnitsRecLayerTargets: the number of units in the recognition layer of the target data ART network
maxit: the maximum number of iterations to perform
nRowInputsTrain: the number of rows the training input units are to be organized in (only for visualization purposes of the net in the original SNNS software)
nRowInputsTargets: the same, but for the target value input units
nRowUnitsRecLayerTrain: the same, but for the recognition layer of the training data ART network
nRowUnitsRecLayerTargets: the same, but for the recognition layer of the target data ART network
initFunc: the initialization function to use
initFuncParams: the parameters for the initialization function
learnFunc: the learning function to use
learnFuncParams: the parameters for the learning function
updateFunc: the update function to use
updateFuncParams: the parameters for the update function
shufflePatterns: should the patterns be shuffled?
See also the details section of art1. The two ART1 networks are connected by a map field.
The input of the first ART1 network is the training input; the input of the second network
is the target values, i.e., the teacher signals. The two networks are often called ARTa and
ARTb; here, we call them the training data network and the target data network.
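The expected layout of the input matrix x can be sketched as follows. This is a minimal, hypothetical example in base R (the column counts 4 and 2 are chosen purely for illustration): each row is one pattern, with the binary training inputs column-bound before the binary targets.

```r
# Hypothetical sketch of the input layout expected by artmap():
# binary training inputs in the first columns, binary targets in the last.
set.seed(1)
inputs  <- matrix(sample(0:1, 10 * 4, replace = TRUE), ncol = 4)  # 4 input columns
targets <- matrix(sample(0:1, 10 * 2, replace = TRUE), ncol = 2)  # 2 target columns
x <- cbind(inputs, targets)
# Such a matrix would then be passed as
# artmap(x, nInputsTrain = 4, nInputsTargets = 2, ...)
```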
In analogy to the ART1 and ART2 implementations, there is one initialization function, one learning function, and two update functions suitable for ARTMAP. The parameters are basically as in ART1, but for two networks. The learning function and the update functions take three parameters: the vigilance parameters of the two ART1 networks and an additional vigilance parameter for inter-ART reset control. The initialization function has four parameters, two for each ART1 network.
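As a sketch of how these parameters might be adjusted, the following repeats the call from the examples with modified parameter vectors. It assumes, per the description above, that the first entry of the learning and update parameter vectors is a vigilance parameter; consult the SNNS manual for the exact parameter semantics before relying on this interpretation.

```r
library(RSNNS)
data(snnsData)
trainData <- snnsData$artmap_train.pat

# Same call as in the examples, but with the first entry of the learning and
# update parameter vectors (assumed here to be a vigilance parameter) raised
# from the default 0.8 to 0.95; higher vigilance yields finer categories.
model <- artmap(trainData, nInputsTrain = 70, nInputsTargets = 5,
                nUnitsRecLayerTrain = 50, nUnitsRecLayerTargets = 26,
                learnFuncParams = c(0.95, 1, 1, 0, 0),
                updateFuncParams = c(0.95, 1, 1, 0, 0))
```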
A detailed description of the theory and the parameters is available from the SNNS documentation and the other referenced literature.
Carpenter, G. A.; Grossberg, S. & Reynolds, J. H. (1991), 'ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network', Neural Networks 4(5), 565--588.
Grossberg, S. (1988), Adaptive pattern classification and universal recoding. I.: parallel development and coding of neural feature detectors, MIT Press, Cambridge, MA, USA, chapter I, pp. 243--258.
Herrmann, K.-U. (1992), 'ART -- Adaptive Resonance Theory -- Architekturen, Implementierung und Anwendung', Master's thesis, IPVR, University of Stuttgart. (in German)
Zell, A. et al. (1998), 'SNNS Stuttgart Neural Network Simulator User Manual, Version 4.2', IPVR, University of Stuttgart and WSI, University of Tübingen. http://www.ra.cs.uni-tuebingen.de/SNNS/welcome.html
Zell, A. (1994), Simulation Neuronaler Netze, Addison-Wesley. (in German)
art1, art2
if (FALSE) demo(artmap_letters)
if (FALSE) demo(artmap_lettersSnnsR)
data(snnsData)
trainData <- snnsData$artmap_train.pat
testData <- snnsData$artmap_test.pat
model <- artmap(trainData, nInputsTrain=70, nInputsTargets=5,
nUnitsRecLayerTrain=50, nUnitsRecLayerTargets=26)
model$fitted.values
predict(model, testData)