RSNNS (version 0.4-14)

som: Create and train a self-organizing map (SOM)

Description

This function creates and trains a self-organizing map (SOM). SOMs are neural networks with one hidden layer. The network structure is similar to LVQ, but the method is unsupervised and uses a notion of neighborhood between the units. The general idea is that the map develops, by itself, a notion of similarity among the inputs and represents this as spatial nearness on the map. Every hidden unit represents a prototype. The goal of learning is to distribute the prototypes in the feature space such that the (probability density of the) input is represented well. SOMs are usually built with a 1d, 2d quadratic, 2d hexagonal, or 3d neighborhood, so that they can be visualized straightforwardly. The SOM implemented in SNNS has a 2d quadratic neighborhood.

As the computation of this function might be slow when many patterns are involved, much of its output is switchable (see the comments on the return values).
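For instance, the optional outputs can be switched on as follows (a minimal sketch, not part of the original examples; the data preparation follows the Examples section below, and the map size and iteration count are arbitrary):

library(RSNNS)

data(iris)
inputs <- normalizeData(iris[,1:4], "norm")

## Enable all optional outputs; training is slower, but actMaps,
## winnersPerPattern, and spanningTree become available in the result.
model <- som(inputs, mapX=8, mapY=8, maxit=100,
             calculateActMaps=TRUE,
             calculateSpanningTree=TRUE,
             saveWinnersPerPattern=TRUE)

names(model)  # shows which members were generated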

Usage

som(x, ...)

# S3 method for default
som(
  x,
  mapX = 16,
  mapY = 16,
  maxit = 100,
  initFuncParams = c(1, -1),
  learnFuncParams = c(0.5, mapX/2, 0.8, 0.8, mapX),
  updateFuncParams = c(0, 0, 1),
  shufflePatterns = TRUE,
  calculateMap = TRUE,
  calculateActMaps = FALSE,
  calculateSpanningTree = FALSE,
  saveWinnersPerPattern = FALSE,
  targets = NULL,
  ...
)

Value

an rsnns object. Depending on which calculation flags are switched on, the returned object contains the following special members:

map

the som itself. For each unit, the number of patterns for which this unit won is given.

componentMaps

a map for every input component, showing where in the map this component leads to high activation.

actMaps

a list containing, for each pattern, its activation map, i.e., all unit activations. The actMaps are an intermediary result from which all other results can be computed. This list can be very long, so it is normally not saved.

winnersPerPattern

a vector giving, for each pattern, the number of the winning unit. This is also an intermediary result that is normally not saved.

labeledUnits

a matrix which -- if the targets parameter is given -- contains, for each unit (rows) and each class present in the targets (columns), the number of patterns of that class for which the unit won. From the labeledUnits, the labeledMap can be computed, e.g., by voting of the class labels for the final label of the unit (a sketch of such voting follows this list).

labeledMap

a labeled som that is computed from labeledUnits using encodeClassLabels.

spanningTree

the result of the original SNNS function that calculates the map. For each unit, the last pattern for which this unit won is given. As the other results are more informative, the spanning tree is only of interest if the other functions are too slow or if the original SNNS implementation is needed.
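The voting mentioned for labeledUnits can be sketched as follows (assuming a model trained with targets, as in the Examples below; the unit-to-matrix ordering used here is an assumption, so compare the result against model$labeledMap):

## Winner-takes-all voting: each unit gets the class that won most often
## on it; units where no pattern ever won (all-zero rows) get 0.
votes <- apply(model$labeledUnits, 1,
               function(counts) if(sum(counts) == 0) 0 else which.max(counts))
## Reshape to map dimensions (row ordering assumed; verify via labeledMap).
manualLabeledMap <- matrix(votes, nrow=model$archParams$mapX)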

Arguments

x

a matrix with training inputs for the network

...

additional function parameters (currently not used)

mapX

the x dimension of the som

mapY

the y dimension of the som

maxit

the maximum number of iterations to learn

initFuncParams

the parameters for the initialization function

learnFuncParams

the parameters for the learning function (see the sketch after this list)

updateFuncParams

the parameters for the update function

shufflePatterns

should the patterns be shuffled?

calculateMap

should the som be calculated?

calculateActMaps

should the activation maps be calculated?

calculateSpanningTree

should the SNNS kernel algorithm for generating a spanning tree be applied?

saveWinnersPerPattern

should a list with the winners for every pattern be saved?

targets

optional target classes of the patterns
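As a sketch of tuning learnFuncParams (the reading of the five values -- initial adaptation height, initial adaptation radius, decrease factor for the height, decrease factor for the radius, and the horizontal map size -- is taken from the description of the Kohonen learning function in the SNNS manual cited below and should be verified there; inputs is prepared as in the Examples):

## Defaults are c(0.5, mapX/2, 0.8, 0.8, mapX); here, a smaller initial
## neighborhood radius and a slower decay are tried instead.
model <- som(inputs, mapX=16, mapY=16, maxit=500,
             learnFuncParams=c(0.5, 4, 0.9, 0.9, 16))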

Details

Internally, this function uses the initialization function Kohonen_Weights_v3.2, the learning function Kohonen, and the update function Kohonen_Order of SNNS.
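These functions can also be selected directly through the low-level SnnsR interface, roughly as sketched below (the method names are assumptions based on the RSNNS low-level API; demo(som_cubeSnnsR) contains a complete, authoritative example):

library(RSNNS)

## Build a raw SNNS object and select the same kernel functions
## that som() uses internally (method names assumed, see above).
snnsObject <- SnnsRObjectFactory()
snnsObject$setInitialisationFunc('Kohonen_Weights_v3.2')
snnsObject$setLearnFunc('Kohonen')
snnsObject$setUpdateFunc('Kohonen_Order')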

References

Kohonen, T. (1988), Self-organization and associative memory, Vol. 8, Springer-Verlag.

Zell, A. et al. (1998), 'SNNS Stuttgart Neural Network Simulator User Manual, Version 4.2', IPVR, University of Stuttgart and WSI, University of Tübingen. http://www.ra.cs.uni-tuebingen.de/SNNS/welcome.html

Zell, A. (1994), Simulation Neuronaler Netze, Addison-Wesley. (in German)

Examples

## The package demos contain complete worked examples (not run here):
if (FALSE) demo(som_iris)
if (FALSE) demo(som_cubeSnnsR)


data(iris)
## Normalize the four numeric iris features before training.
inputs <- normalizeData(iris[,1:4], "norm")

## Train a 16x16 SOM; activation maps are calculated, and the species
## labels are passed so that labeledUnits and labeledMap are generated.
model <- som(inputs, mapX=16, mapY=16, maxit=500,
             calculateActMaps=TRUE, targets=iris[,5])

## Plot the component map of each input dimension, then the map of
## winner counts (raw and log-scaled) and a perspective view of it.
par(mfrow=c(3,3))
for(i in 1:ncol(inputs)) plotActMap(model$componentMaps[[i]],
                                    col=rev(topo.colors(12)))

plotActMap(model$map, col=rev(heat.colors(12)))
plotActMap(log(model$map+1), col=rev(heat.colors(12)))
persp(1:model$archParams$mapX, 1:model$archParams$mapY, log(model$map+1),
      theta = 30, phi = 30, expand = 0.5, col = "lightblue")

## Plot the map labeled with the winning class of each unit.
plotActMap(model$labeledMap)

## Inspect the generated members directly.
model$componentMaps
model$labeledUnits
model$map

names(model)
