Function constructs ETS, SSARIMA, CES, GES and SMA models and combines their forecasts using information criteria (IC) weights.
smoothCombine(data, models = NULL, initial = c("optimal", "backcasting"),
ic = c("AICc", "AIC", "BIC", "BICc"), cfType = c("MSE", "MAE", "HAM",
"MSEh", "TMSE", "GTMSE", "MSCE"), h = 10, holdout = FALSE,
cumulative = FALSE, intervals = c("none", "parametric", "semiparametric",
"nonparametric"), level = 0.95, bins = 200,
intervalsCombine = c("quantile", "probability"), intermittent = c("none",
"auto", "fixed", "interval", "probability", "sba", "logistic"),
imodel = "MNN", bounds = c("admissible", "none"), silent = c("all",
"graph", "legend", "output", "none"), xreg = NULL, xregDo = c("use",
"select"), initialX = NULL, updateX = FALSE, persistenceX = NULL,
transitionX = NULL, ...)
data: Vector or ts object, containing the data to be forecast.
models: List of the estimated smooth models to use in the combination. If NULL, then all the models are estimated in the function.
initial: Can be "optimal", meaning that the initial states are optimised, or "backcasting", meaning that the initials are produced using the backcasting procedure.
ic: The information criterion used in the model selection procedure.
cfType: Type of cost function used in optimisation. cfType can be: MSE (Mean Squared Error), MAE (Mean Absolute Error), HAM (Half Absolute Moment), TMSE (Trace Mean Squared Error), GTMSE (Geometric Trace Mean Squared Error), MSEh (optimisation using only the h-steps ahead error), MSCE (Mean Squared Cumulative Error). If cfType!="MSE", then the likelihood and the model selection are done based on the equivalent MSE, so model selection in this case is not optimal. Analytical approximations of the multistep functions are also available: aMSEh, aTMSE and aGTMSE. These can be useful in cases of small samples. Finally, just for fun, the absolute and half analogues of the multistep estimators are available: MAEh, TMAE, GTMAE, MACE, HAMh, THAM, GTHAM, CHAM.
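As a rough sketch of switching the cost function (assuming the smooth package is available; the simulated series and the object names below are purely illustrative):
library(smooth)
# Optimise the models on the trace of the 1 to h steps ahead squared errors
# instead of the default one-step MSE
y <- ts(rnorm(100, 100, 10), frequency = 12)
fitTMSE <- smoothCombine(y, cfType = "TMSE", h = 12, silent = "all")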
h: Length of the forecasting horizon.
holdout: If TRUE, a holdout sample of size h is taken from the end of the data.
cumulative: If TRUE, then the cumulative forecast and prediction intervals are produced instead of the normal ones. This is useful for inventory control systems.
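A minimal sketch of a cumulative (lead-time) forecast on simulated demand data; the names y and leadTimeDemand are illustrative only:
library(smooth)
# The point forecast and the intervals refer to the total demand over the
# next h = 3 periods, which is what a replenishment decision typically needs
y <- ts(rpois(120, 20), frequency = 12)
leadTimeDemand <- smoothCombine(y, h = 3, cumulative = TRUE,
                                intervals = "sp", silent = "all")
leadTimeDemand$forecast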
intervals: Type of intervals to construct. This can be:
none, aka n - do not produce prediction intervals.
parametric, p - use the state-space structure of ETS. In the case of mixed models this is done using simulations, which may take longer than for the pure additive and pure multiplicative models.
semiparametric, sp - intervals based on the covariance matrix of the 1 to h steps ahead errors and the assumption of normal / log-normal distribution (depending on the error type).
nonparametric, np - intervals based on the values from a quantile regression on the error matrix (see Taylor and Bunn, 1999). The model used in this process is e[j] = a j^b, where j=1,..,h.
The parameter also accepts TRUE and FALSE. The former means that parametric intervals are constructed, while the latter is equivalent to none.
If the forecasts of the models were combined, then the intervals are combined quantile-wise (Lichtendahl et al., 2013).
level: Confidence level. Defines the width of the prediction interval.
bins: The number of bins for the prediction intervals. A lower value means faster work of the function, but less precise estimates of the quantiles. This needs to be an even number.
intervalsCombine: How to average the prediction intervals: quantile-wise ("quantile") or probability-wise ("probability").
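A hedged example of these interval options on simulated data, combining parametric intervals probability-wise and using a smaller number of bins to speed the computation up:
library(smooth)
y <- ts(rnorm(100, 100, 10), frequency = 12)
fit <- smoothCombine(y, h = 12, intervals = "parametric", level = 0.95,
                     intervalsCombine = "probability", bins = 100,
                     silent = "all")
# Lower bound, point forecast and upper bound of the combined forecast
cbind(fit$lower, fit$forecast, fit$upper)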
intermittent: Defines the type of intermittent model used. Can be: 1. none, meaning that the data should be considered as non-intermittent; 2. fixed, taking into account a constant Bernoulli distribution of demand occurrences; 3. interval, the interval-based model underlying the Croston (1972) method; 4. probability, the probability-based model underlying the Teunter et al. (2011) method; 5. auto - automatic selection of the intermittency type based on information criteria. The first letter can be used instead of the whole word. 6. "sba" - the Syntetos-Boylan Approximation for Croston's method (bias correction), discussed in Syntetos and Boylan (2005); 7. "logistic" - the probability is estimated based on logistic regression model principles.
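A small sketch with artificial intermittent data (the simulated series and object names are illustrative; actual results will vary):
library(smooth)
# Zero-inflated demand; the occurrence model is selected automatically by IC
y <- rpois(120, 0.4) * rbinom(120, 1, 0.3)
fitInt <- smoothCombine(y, intermittent = "auto", h = 12, silent = "all")
fitInt$imodel   # the fitted occurrence model of class "iss"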
imodel: Type of ETS model used for the modelling of the time varying probability. An object of the class "iss" can be provided here, in which case its parameters would be used in the iETS model.
bounds: What type of bounds to use in the model estimation. The first letter can be used instead of the whole word.
silent: If silent="none", then nothing is silent, everything is printed out and drawn. silent="all" means that nothing is produced or drawn (except for warnings). In the case of silent="graph", no graph is produced. If silent="legend", then the legend of the graph is skipped. And finally, silent="output" means that nothing is printed out in the console, but the graph is produced. silent also accepts TRUE and FALSE; silent=TRUE is equivalent to silent="all", while silent=FALSE is equivalent to silent="none". The parameter also accepts the first letters of the words ("n", "a", "g", "l", "o").
xreg: Vector (either numeric or time series) or matrix (or data.frame) of exogenous variables that should be included in the model. If a matrix is provided, then its columns should contain variables and its rows - observations. Note that xreg should have a number of observations equal either to the in-sample period or to the whole series. If the number of observations in xreg is equal to the in-sample, then the values for the holdout sample are produced using the es function.
xregDo: Defines what to do with the provided xreg: "use" means that all of the data should be used, while "select" means that a selection using ic should be done. "combine" will be available at some point in the future...
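A hedged sketch with exogenous variables, where xreg covers the whole series (in-sample plus holdout) and the regressors are selected by the information criterion; the data and column names are made up for illustration:
library(smooth)
obsAll <- 112                                  # 100 in-sample + 12 holdout
x <- cbind(temperature = rnorm(obsAll, 20, 5),
           promo       = rbinom(obsAll, 1, 0.2))
y <- ts(100 + 0.5 * x[, "temperature"] + rnorm(obsAll, 0, 3), frequency = 12)
fitX <- smoothCombine(y, xreg = x, xregDo = "select",
                      h = 12, holdout = TRUE, silent = "all")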
initialX: Vector of initial parameters for the exogenous variables. Ignored if xreg is NULL.
updateX: If TRUE, transition matrix for exogenous variables is estimated, introducing non-linear interactions between parameters. Prerequisite - non-NULL xreg.
persistenceX: Persistence vector \(g_X\), containing smoothing parameters for exogenous variables. If NULL, then estimated. Prerequisite - non-NULL xreg.
transitionX: Transition matrix \(F_x\) for exogenous variables. Can be provided as a vector, in which case the matrix is formed using the default matrix(transition,nc,nc), where nc is the number of components in the state vector. If NULL, then estimated. Prerequisite - non-NULL xreg.
...: This currently determines nothing.
The function returns an object containing the following values:
timeElapsed - time elapsed for the construction of the model.
initialType - type of the initial values used.
fitted - fitted values of ETS.
quantiles - the 3D array of produced quantiles if intervals!="none", with the dimensions: (number of models) x (bins) x (h).
forecast - point forecast of ETS.
lower - lower bound of the prediction interval. When intervals="none", then NA is returned.
upper - upper bound of the prediction interval. When intervals="none", then NA is returned.
residuals - residuals of the estimated model.
s2 - variance of the residuals (taking degrees of freedom into account).
intervals - type of intervals asked for by the user.
level - confidence level for intervals.
cumulative - whether the produced forecast was cumulative or not.
actuals - original data.
holdout - holdout part of the original data.
imodel - model of the class "iss" if an intermittent model was estimated. If the model is non-intermittent, then imodel is NULL.
xreg - provided vector or matrix of exogenous variables. If xregDo="s", then this value will contain only the selected exogenous variables.
updateX - boolean, defining whether the states of the exogenous variables were estimated as well.
ICs - values of information criteria of the model. Includes AIC, AICc, BIC and BICc.
accuracy - vector of accuracy measures for the holdout sample. In the case of non-intermittent data this includes: MPE, MAPE, SMAPE, MASE, sMAE, RelMAE, sMSE and Bias coefficient (based on complex numbers). In the case of intermittent data the set of errors will be: sMSE, sPIS, sCE (scaled cumulative error) and Bias coefficient.
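For illustration, a sketch of accessing some of these components (assuming the smooth package and a simulated series; component names are as listed above):
library(smooth)
fit <- smoothCombine(ts(rnorm(112, 100, 10), frequency = 12),
                     h = 12, holdout = TRUE, intervals = "sp", silent = "all")
fit$ICs        # information criteria of the individual models
fit$accuracy   # error measures on the holdout sample
plot(fit)      # actuals, fitted values, point forecast and intervals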
The combination of these models using information criteria weights is possible because they are all formulated in the Single Source of Error framework. Due to the complexity of some of the models, the estimation process may take some time, so be patient.
The prediction intervals are combined either probability-wise or quantile-wise (Lichtendahl et al., 2013), which may take extra time, because all the distributions for all the models need to be produced. This can be sped up with a smaller value of the bins parameter, but the resulting intervals may then be less precise.
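As a rough illustration of the idea behind the combination (not necessarily the package's internal code), Akaike-type weights can be formed from the information criteria of the individual models (Kolassa, 2011); the AICc values below are hypothetical:
ICs <- c(ETS = 512.3, SSARIMA = 515.1, CES = 513.8, GES = 518.0, SMA = 520.4)  # hypothetical AICc values
deltas  <- ICs - min(ICs)                                # differences from the best model
weights <- exp(-0.5 * deltas) / sum(exp(-0.5 * deltas))  # normalised IC weights
round(weights, 3)                                        # used to weight the individual forecasts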
Snyder, R. D. (1985) Recursive Estimation of Dynamic Linear Models. Journal of the Royal Statistical Society, Series B (Methodological), 47(2), 272-276.
Hyndman, R. J., Koehler, A. B., Ord, J. K. and Snyder, R. D. (2008) Forecasting with exponential smoothing: the state space approach. Springer-Verlag. http://dx.doi.org/10.1007/978-3-540-71918-2.
Kolassa, S. (2011) Combining exponential smoothing forecasts using Akaike weights. International Journal of Forecasting, 27, 238-251.
Taylor, J. W. and Bunn, D. W. (1999) A Quantile Regression Approach to Generating Prediction Intervals. Management Science, 45(2), 225-237.
Lichtendahl, K. C. Jr., Grushka-Cockayne, Y. and Winkler, R. L. (2013) Is It Better to Average Probabilities or Quantiles? Management Science, 59(7), 1594-1611. https://doi.org/10.1287/mnsc.1120.1667.
# NOT RUN {
library(smooth)
library(Mcomp)
ourModel <- smoothCombine(M3[[578]], intervals="p")
plot(ourModel)
# models parameter accepts either previously estimated smoothCombine
# or a manually formed list of smooth models estimated in sample:
smoothCombine(M3[[578]],models=ourModel)
# }
# NOT RUN {
models <- list(es(M3[[578]]), sma(M3[[578]]))
smoothCombine(M3[[578]],models=models)
# }