adabag (version 5.0)

plot.errorevol: Plots the error evolution of the ensemble

Description

Plots the previously calculated error evolution of an AdaBoost.M1, AdaBoost-SAMME or Bagging classifier for a data frame as the ensemble size grows.

Usage

# S3 method for errorevol
plot(x, y = NULL, ...)

Value

A labeled plot is produced on the current graphics device (one being opened if needed).

Arguments

x

An object of class errorevol. This is assumed to be the result of a function that produces an object with a component named error, such as that returned by the errorevol function.

y

Should be NULL (the default) or an object of class errorevol. If supplied, the evolution of the test and train errors (x and y, respectively) is drawn in the same plot.

...

further arguments passed to or from other methods.

Author

Esteban Alfaro-Cortes Esteban.Alfaro@uclm.es, Matias Gamez-Martinez Matias.Gamez@uclm.es and Noelia Garcia-Rubio Noelia.Garcia@uclm.es

Details

This can be useful to see how fast bagging or boosting reduces the error of the ensemble. In addition, it can help detect the presence of overfitting and, therefore, the convenience of pruning the ensemble using predict.bagging or predict.boosting.
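For instance, a minimal sketch of such pruning, using the iris.adaboost ensemble and train index built in the Examples below (the cut-off of 5 trees is purely illustrative; in practice it should be chosen from the error evolution plot):

# Prune the ensemble to its first 5 trees via the newmfinal argument
# of predict.boosting, then inspect the test error of the pruned ensemble.
iris.pruned <- predict.boosting(iris.adaboost, newdata = iris[-train, ], newmfinal = 5)
iris.pruned$error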

References

Alfaro, E., Gamez, M. and Garcia, N. (2013): ``adabag: An R Package for Classification with Boosting and Bagging''. Journal of Statistical Software, Vol 54, 2, pp. 1--35.

Alfaro, E., Garcia, N., Gamez, M. and Elizondo, D. (2008): ``Bankruptcy forecasting: An empirical comparison of AdaBoost and neural networks''. Decision Support Systems, 45, pp. 110--122.

Breiman, L. (1996): ``Bagging predictors''. Machine Learning, Vol 24, 2, pp. 123--140.

Freund, Y. and Schapire, R.E. (1996): ``Experiments with a new boosting algorithm''. In Proceedings of the Thirteenth International Conference on Machine Learning, pp. 148--156, Morgan Kaufmann.

Zhu, J., Zou, H., Rosset, S. and Hastie, T. (2009): ``Multi-class AdaBoost''. Statistics and Its Interface, 2, pp. 349--360.

See Also

boosting, predict.boosting, bagging, predict.bagging, errorevol

Examples

library(adabag)
library(rpart)   # provides rpart.control

data(iris)
train <- c(sample(1:50, 25), sample(51:100, 25), sample(101:150, 25))

cntrl <- rpart.control(maxdepth = 1)
# Increase mfinal in your own execution of this example to see
# the real usefulness of this function
iris.adaboost <- boosting(Species ~ ., data = iris[train, ], mfinal = 10, control = cntrl)

# Error evolution along the iterations in the training set
evol.train <- errorevol(iris.adaboost, iris[train, ])
plot.errorevol(evol.train)

# Comparing error evolution in the training and test sets
evol.test <- errorevol(iris.adaboost, iris[-train, ])
plot.errorevol(evol.test, evol.train)

# See the help of the errorevol and boosting functions
# for more examples of the use of the error evolution