
gradDescent (version 3.0)

MBGD: Mini-Batch Gradient Descent (MBGD) Method Learning Function

Description

A function to build a prediction model using the Mini-Batch Gradient Descent (MBGD) method.

Usage

MBGD(dataTrain, alpha = 0.1, maxIter = 10, nBatch = 2, seed = NULL)

Arguments

dataTrain

a data.frame representing the training data (\(m \times n\)), where \(m\) is the number of instances and \(n\) is the number of variables, with the last column as the output variable. dataTrain must have at least two columns and ten rows of data that contain only numbers (integer or float); a minimal valid example is sketched after this argument list.

alpha

a float value representing the learning rate. Default value is 0.1.

maxIter

the maximum number of iterations. Default value is 10.

nBatch

an integer value representing the number of batches into which the training data is divided. Default value is 2.

seed

an integer value for the random seed. Default value is NULL, which means the function will not set a seed, so results may vary between runs.
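
For illustration, a minimal dataTrain that meets the requirements above (at least two columns, ten numeric rows, output in the last column) could be built as follows; the column names are purely illustrative:

set.seed(1)
x <- runif(10)
## last column is the output variable
dataTrain <- data.frame(input = x, output = 2 * x + rnorm(10, sd = 0.1))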

Value

a matrix of theta (coefficients) for the linear model.
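
For illustration, predictions for new inputs can be computed from the returned theta by a matrix product; this is a minimal sketch assuming the first coefficient is the intercept, not part of the package API:

## hypothetical helper: apply the returned theta to new inputs
## theta: the matrix returned by MBGD; newX: input variables only
predictManually <- function(theta, newX) {
  X <- cbind(1, as.matrix(newX))   # prepend intercept term
  X %*% t(theta)                   # linear prediction
}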

Details

This function is based on the GD method, with the optimization that each update uses only part of the training data. MBGD has a parameter named nBatch that represents the number of batches into which the training data is split; each iteration performs one gradient update per batch.
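
The update can be illustrated with the following minimal sketch, assuming a linear model with squared-error loss; it illustrates the method and is not the package's internal implementation:

miniBatchGD <- function(X, y, alpha = 0.1, maxIter = 10, nBatch = 2, seed = NULL) {
  if (!is.null(seed)) set.seed(seed)
  X <- cbind(1, as.matrix(X))            # add intercept column
  theta <- matrix(0, nrow = 1, ncol = ncol(X))
  m <- nrow(X)
  batchId <- sample(rep(seq_len(nBatch), length.out = m))  # assign rows to batches
  for (iter in seq_len(maxIter)) {
    for (b in seq_len(nBatch)) {
      rows <- which(batchId == b)
      Xb <- X[rows, , drop = FALSE]
      yb <- y[rows]
      error <- Xb %*% t(theta) - yb      # residuals on this batch only
      grad <- t(error) %*% Xb / length(rows)
      theta <- theta - alpha * grad      # one gradient step per batch
    }
  }
  theta
}

Because each step sees only one batch of instances, the per-update cost shrinks while the batch gradient remains an estimate of the full-data gradient.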

References

A. Cotter, O. Shamir, N. Srebro, and K. Sridharan, "Better Mini-Batch Algorithms via Accelerated Gradient Methods", NIPS, pp. 1647- (2011)

See Also

GD

Examples

##################################
## Learning and Build Model with MBGD
## load R Package data
data(gradDescentRData)
## get z-factor data
dataSet <- gradDescentRData$CompressilbilityFactor
## split dataset
splitedDataSet <- splitData(dataSet)
## build model with MBGD using 2 batches
MBGDmodel <- MBGD(splitedDataSet$dataTrain, nBatch=2)
## show result
print(MBGDmodel)
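## A possible follow-up, patterned on other gradDescent help pages
## (an assumption, not part of the original example): predict on the
## test split with the package's prediction() function.
## drop the output column to get the test inputs
dataTestInput <- splitedDataSet$dataTest[, 1:(ncol(splitedDataSet$dataTest) - 1)]
## predict with the learned coefficients
predictionData <- prediction(MBGDmodel, dataTestInput)
print(predictionData)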
