The MASE is especially well suited for time series predictions.
It can be used to compare forecast methods on a single series, and also to
compare forecast accuracy between series.
This metric never yields infinite or undefined values (unless all observations
over time are exactly the same!).
By default, the MASE scales the error by the in-sample MAE of the naive forecast
method (random walk). The in-sample MAE is used in the denominator because it is
always available and effectively scales the errors. Since the MASE is based on
absolute error, it is less sensitive to outliers than the classic MSE.
\(MASE = \frac{1}{n} \sum_{i=1}^{n} \frac{|O_i - P_i|}{\frac{1}{T-1} \sum_{t=2}^{T} |O_t - O_{t-1}|}\)
If available, users may use an out-of-bag MAE from an independent dataset, which
can be specified with the oob_mae argument and replaces the denominator of the
MASE equation.
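The scaling above can be sketched in a few lines of Python. This is a minimal illustration of the formula, not the package's implementation; the function name `mase` and the `oob_mae` keyword (mirroring the argument described above) are assumptions for this sketch.

```python
import numpy as np

def mase(observed, predicted, oob_mae=None):
    """Mean Absolute Scaled Error (illustrative sketch).

    By default the denominator is the in-sample MAE of the one-step
    naive (random walk) forecast on the observed series; passing
    oob_mae replaces that denominator with an out-of-bag MAE.
    """
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    if oob_mae is None:
        # In-sample MAE of the naive forecast: mean of |O_t - O_{t-1}|
        denom = np.mean(np.abs(np.diff(observed)))
    else:
        denom = oob_mae
    # Mean absolute prediction error, scaled by the chosen denominator
    return np.mean(np.abs(observed - predicted)) / denom
```

For example, with `observed = [3, 5, 4, 6, 7]` the naive-forecast MAE is `(2 + 1 + 2 + 1) / 4 = 1.5`, and the prediction errors are averaged and divided by that value.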
MASE measures total error (i.e., both lack of accuracy and lack of precision). The
lower the MASE below 1, the better the prediction quality. MASE = 1 indicates no
difference between the model and the naive forecast (or the out-of-bag
predictions), while MASE > 1 indicates poor performance.
For the formula and more details, see the online documentation.