AMORE (version 0.2-16)

newff: Create a Multilayer Feedforward Neural Network

Description

Creates a feedforward artificial neural network according to the structure established by the AMORE package standard.

Usage

newff(n.neurons, learning.rate.global, momentum.global, error.criterium, Stao, 
	hidden.layer, output.layer, method)

Arguments

n.neurons

Numeric vector containing the number of neurons in each layer. The first element is the number of input neurons, the last is the number of output neurons, and the remaining elements give the number of neurons in each hidden layer. For example, c(2,4,3,1) describes a network with 2 inputs, two hidden layers of 4 and 3 neurons, and 1 output.

learning.rate.global

Learning rate at which every neuron is trained.

momentum.global

Momentum for every neuron. Needed by several training methods.

error.criterium

Criterium used to measure the proximity of the neural network prediction to its target. Currently we can choose amongst:

  • "LMS": Least Mean Squares.

  • "LMLS": Least Mean Logarithm Squared (Liano 1996).

  • "TAO": TAO Error (Pernia, 2004).

Stao

Stao parameter for the TAO error criterium. Unused by the other criteria.

hidden.layer

Activation function of the hidden layer neurons. Available functions are:

  • "purelin".

  • "tansig".

  • "sigmoid".

  • "hardlim".

  • "custom": The user must manually define the f0 and f1 elements of the neurons.

output.layer

Activation function of the output layer neurons, chosen from the same list shown above for hidden.layer.

method

Preferred training method. Currently it can be:

  • "ADAPTgd": Adaptive gradient descent.

  • "ADAPTgdwm": Adaptive gradient descent with momentum.

  • "BATCHgd": Batch gradient descent.

  • "BATCHgdwm": Batch gradient descent with momentum.

Value

newff returns a multilayer feedforward neural network object.
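
A quick way to inspect the returned object (str is safe regardless of the object's internals; the neurons list is an assumption suggested by the See Also entries below):

library(AMORE)
net <- newff(n.neurons=c(1,3,2,1), learning.rate.global=1e-2, momentum.global=0.5,
        error.criterium="LMS", Stao=NA, hidden.layer="tansig",
        output.layer="purelin", method="ADAPTgdwm")
str(net, max.level=1)    # top-level slots of the network object
length(net$neurons)      # assumed: one entry per hidden and output neuron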

References

Pernía Espinoza, A.V., Ordieres Meré, J.B., Martínez de Pisón, F.J., González Marcos, A. TAO-robust backpropagation learning algorithm. Neural Networks, Vol. 18, Issue 2, pp. 191--204, 2005.

Simon Haykin. Neural Networks -- a Comprehensive Foundation. Prentice Hall, New Jersey, 2nd edition, 1999. ISBN 0-13-273350-1.

See Also

init.MLPneuron, random.init.MLPnet, random.init.MLPneuron, select.activation.function

Examples

# Example 1

library(AMORE)
# P is the input: a 1000 x 1 matrix of values drawn from [-1, 1]
P <- matrix(sample(seq(-1, 1, length=1000), 1000, replace=FALSE), ncol=1)
# The network will try to approximate the target P^2
target <- P^2
# We create a feedforward network with two hidden layers.
# The first hidden layer has three neurons and the second has two.
# The hidden layers use the tansig activation function and the output layer is purelin.
net <- newff(n.neurons=c(1,3,2,1), learning.rate.global=1e-2, momentum.global=0.5,
        error.criterium="LMS", Stao=NA, hidden.layer="tansig", 
        output.layer="purelin", method="ADAPTgdwm")
result <- train(net, P, target, error.criterium="LMS", report=TRUE,
        show.step=100, n.shows=5)
y <- sim(result$net, P)
plot(P,y, col="blue", pch="+")
points(P,target, col="red", pch="x")
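
# Quantify the fit with the same least-mean-squares criterium used in training.
mean((y - target)^2)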