darch

Create deep architectures in the R programming language

Installation

The latest stable version of darch (0.10.0) can be installed from CRAN using

install.packages("darch")

When using devtools, the latest development version from git (identifiable by a version number ending in 9000 or greater, and by the fact that it is regularly broken) can be installed using

install_github("maddin79/darch")

or, if you want the latest stable version,

install_github("maddin79/darch@v0.10.0")

Then, use ?darch to view the documentation, or example("darch") to run some simple examples.
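For a first test drive, a minimal run can look like the sketch below. The parameter names (layers, rbm.numEpochs, darch.batchSize, darch.numEpochs, bp.learnRate) and the type argument of predict are taken from the package documentation, but exact names and defaults can shift between versions, so check ?darch for the version you installed.

library(darch)

# XOR, a tiny toy problem for a 2-3-1 network.
trainData <- matrix(c(0, 0,
                      0, 1,
                      1, 0,
                      1, 1), ncol = 2, byrow = TRUE)
trainTargets <- matrix(c(0, 1, 1, 0), ncol = 1)

model <- darch(trainData, trainTargets,
               layers = c(2, 3, 1),      # visible, hidden, output units
               rbm.numEpochs = 5,        # contrastive-divergence pre-training
               darch.batchSize = 1,
               darch.numEpochs = 500,    # backpropagation fine-tuning
               bp.learnRate = 1)

# Binary predictions for the training data.
predict(model, newdata = trainData, type = "bin")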

About

The darch package is based on code by G. E. Hinton and R. R. Salakhutdinov, available under "Matlab Code for deep belief nets" (last visited 2015-11-12).

This package generates neural networks with many layers (deep architectures) and trains them with the method introduced in the publications "A fast learning algorithm for deep belief nets" (G. E. Hinton, S. Osindero, Y. W. Teh) and "Reducing the dimensionality of data with neural networks" (G. E. Hinton, R. R. Salakhutdinov). The method combines pre-training with the contrastive divergence algorithm published by G. E. Hinton (2002) and fine-tuning with well-known training algorithms such as backpropagation or conjugate gradient, as well as more recent techniques like dropout and maxout.
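To make the pre-training step concrete, here is a minimal sketch of one CD-1 update for a single Bernoulli RBM in plain R. It is purely didactic and is not darch's internal implementation (see trainRBM and rbmUpdate for that); all function and variable names below are invented for the illustration.

sigmoid <- function(x) 1 / (1 + exp(-x))

cd1Update <- function(v, W, visBias, hidBias, learnRate = 0.1) {
  # Positive phase: hidden probabilities and a sampled binary hidden state.
  hProb <- sigmoid(sweep(v %*% W, 2, hidBias, "+"))
  hSample <- (hProb > matrix(runif(length(hProb)), nrow(hProb))) * 1

  # Negative phase: one Gibbs step down to the visible layer and up again.
  vRecon <- sigmoid(sweep(hSample %*% t(W), 2, visBias, "+"))
  hRecon <- sigmoid(sweep(vRecon %*% W, 2, hidBias, "+"))

  # CD-1 gradient estimate: <v'h> under the data minus under the reconstruction.
  list(W = W + learnRate * (t(v) %*% hProb - t(vRecon) %*% hRecon) / nrow(v),
       visBias = visBias + learnRate * colMeans(v - vRecon),
       hidBias = hidBias + learnRate * colMeans(hProb - hRecon))
}

# Example: 4 visible units, 3 hidden units, a mini-batch of 2 binary vectors.
W <- matrix(rnorm(4 * 3, sd = 0.01), 4, 3)
params <- cd1Update(matrix(sample(0:1, 8, TRUE), 2, 4), W,
                    visBias = rep(0, 4), hidBias = rep(0, 3))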

Copyright (C) 2013-2016 Martin Drees and contributors

References

Hinton, G. E., S. Osindero, and Y. W. Teh (2006). "A fast learning algorithm for deep belief nets". In: Neural Computation 18(7), pp. 1527-1554. DOI: 10.1162/neco.2006.18.7.1527.

Hinton, G. E. and R. R. Salakhutdinov (2006). "Reducing the dimensionality of data with neural networks". In: Science 313(5786), pp. 504-507. DOI: 10.1126/science.1127647.

Hinton, G. E. (2002). "Training products of experts by minimizing contrastive divergence". In: Neural Computation 14(8), pp. 1771-1800. DOI: 10.1162/089976602760128018.

Hinton, Geoffrey E. et al. (2012). "Improving neural networks by preventing co-adaptation of feature detectors". In: CoRR abs/1207.0580. URL: arxiv.org.

Goodfellow, Ian J. et al. (2013). "Maxout Networks". In: Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, pp. 1319-1327. URL: jmlr.org.

Drees, Martin (2013). "Implementierung und Analyse von tiefen Architekturen in R" (Implementation and Analysis of Deep Architectures in R). In German. Master's thesis. Fachhochschule Dortmund.

Rueckert, Johannes (2015). "Extending the Darch library for deep architectures". Project thesis. Fachhochschule Dortmund. URL: saviola.de.

Monthly Downloads: 38

Version: 0.12.0

License: GPL (>= 2) | file LICENSE

Last Published: May 5th, 2018

Functions in darch (0.12.0)

createDataSet,ANY,ANY,missing,missing-method: Create a DataSet using data and targets.
addLayer: Adds a layer to the DArch object.
createDataSet,ANY,missing,formula,missing-method: Constructor function for DataSet objects.
backpropagation: Backpropagation learning function.
createDataSet: Create a data set using data, targets, a formula, and possibly an existing data set.
contr.ltfr
addLayerField: Adds a field to a layer.
addLayerField,DArch-method: Adds a field to a layer.
addLayer,DArch-method: Adds a layer to the DArch object.
createDataSet,ANY,ANY,missing,DataSet-method: Create a new DataSet by filling an existing one with new data.
generateDropoutMask: Dropout mask generator function.
generateRBMs,DArch-method: Generates the RBMs for the pre-training.
exponentialLinearUnit: Exponential linear unit (ELU) function with unit derivatives.
fineTuneDArch: Fine-tuning function for the deep architecture.
darchTest: Test classification network.
crossEntropyError: Cross-entropy error function.
darch: Fit a deep neural network.
fineTuneDArch,DArch-method: Fine-tuning function for the deep architecture.
darchBench: Benchmarking wrapper for darch.
darchModelInfo: Creates a custom caret model for darch.
generateWeightsHeUniform: He uniform weight initialization.
linearUnit: Linear unit function with unit derivatives.
generateWeightsUniform: Uniform weight initialization.
generateWeightsGlorotNormal: Glorot normal weight initialization.
linearUnitRbm: Calculates the linear neuron output with no transfer function.
generateWeightsGlorotUniform: Glorot uniform weight initialization.
getDropoutMask: Returns the dropout mask for the given layer.
generateWeightsHeNormal: He normal weight initialization.
getMomentum: Returns the current momentum of the Net.
generateWeightsNormal: Normal weight initialization.
minimize: Minimize a differentiable multivariate function.
makeStartEndPoints: Makes start and end points for the batches.
maxoutUnit: Maxout / LWTA unit function.
minimizeAutoencoder: Conjugate gradient for an autoencoder network.
mseError: Mean squared error function.
minimizeClassifier: Conjugate gradient for a classification network.
maxoutWeightUpdate: Updates the weights on maxout layers.
newDArch: Constructor function for DArch objects.
plot.DArch: Plot DArch statistics or structure.
loadDArch: Loads a DArch network.
provideMNIST: Provides the MNIST data set in the given folder.
resetRBM: Resets the weights and biases of the RBM object.
rbmUpdate: Function for updating the weights and biases of an RBM.
print.DArch: Print DArch details.
rmseError: Root-mean-square error function.
preTrainDArch: Pre-trains a DArch network.
predict.DArch: Forward-propagate data.
rectifiedLinearUnit: Rectified linear unit function with unit derivatives.
readMNIST: Function for generating .RData files of the MNIST database.
preTrainDArch,DArch-method: Pre-trains a DArch network.
setDarchParams: Set DArch parameters.
setLogLevel: Set the log level.
show,DArch-method: Print DArch details.
setDropoutMask<-: Set the dropout mask for the given layer.
rpropagation: Resilient backpropagation training for deep architectures.
saveDArch: Saves a DArch network.
runDArchDropout: Forward-propagates data through the network with dropout inference.
runDArch: Forward-propagates data through the network.
sigmoidUnit: Sigmoid unit function with unit derivatives.
sigmoidUnitRbm: Calculates the RBM neuron output with the sigmoid function.
tanhUnit: Continuous tan-sigmoid unit function.
weightDecayWeightUpdate: Updates the weights using weight decay.
trainRBM: Trains an RBM with contrastive divergence.
validateDataSet: Validate a DataSet.
tanhUnitRbm: Calculates the neuron output with the hyperbolic tangent function.
softplusUnit: Softplus unit function with unit derivatives.
validateDataSet,DataSet-method: Validate a DataSet.
softmaxUnit: Softmax unit function with unit derivatives.
DArch-class: Class for deep architectures.
DataSet-class: Class for specifying datasets.
RBM-class: Class for restricted Boltzmann machines.
Net-class: Abstract class for neural networks.
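To see how a few of the functions above fit together, here is a hedged end-to-end sketch: fit a network with darch(), predict with predict.DArch, and persist it with saveDArch/loadDArch. The ".net" file suffix and the type argument are assumptions based on the documentation summaries above; consult the individual help pages before relying on them.

library(darch)

# Fit a small classifier using the formula interface on iris.
model <- darch(Species ~ ., iris,
               layers = c(4, 20, 3),
               darch.numEpochs = 30)

# Class labels for new data ("class" per the predict.DArch docs).
predictions <- predict(model, newdata = iris, type = "class")

# Save and restore the trained network (a ".net" suffix is assumed here).
saveDArch(model, "iris-darch")
restored <- loadDArch("iris-darch")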