GluonTS DeepAR Modeling Function (Bridge)
deepar_fit_impl(
x,
y,
freq,
prediction_length,
id,
epochs = 5,
batch_size = 32,
num_batches_per_epoch = 50,
learning_rate = 0.001,
learning_rate_decay_factor = 0.5,
patience = 10,
minimum_learning_rate = 5e-05,
clip_gradient = 10,
weight_decay = 1e-08,
init = "xavier",
ctx = NULL,
hybridize = TRUE,
context_length = NULL,
num_layers = 2,
num_cells = 40,
cell_type = "lstm",
dropout_rate = 0.1,
use_feat_dynamic_real = FALSE,
use_feat_static_cat = FALSE,
use_feat_static_real = FALSE,
cardinality = NULL,
embedding_dimension = NULL,
distr_output = "default",
scaling = TRUE,
lags_seq = NULL,
time_features = NULL,
num_parallel_samples = 100
)
x: A data frame of xreg (exogenous regressors)

y: A numeric vector of values to fit

freq: A pandas timeseries frequency such as "5min" for 5 minutes or "D" for daily. Refer to Pandas Offset Aliases.

prediction_length: Numeric value indicating the length of the prediction horizon

id: A quoted column name that tracks the GluonTS FieldName "item_id"

epochs: Number of epochs that the network will train (default: 5).

batch_size: Number of examples in each batch (default: 32).

num_batches_per_epoch: Number of batches at each epoch (default: 50).

learning_rate: Initial learning rate (default: 1e-3).

learning_rate_decay_factor: Factor (between 0 and 1) by which to decrease the learning rate (default: 0.5).

patience: The patience to observe before reducing the learning rate, a nonnegative integer (default: 10).

minimum_learning_rate: Lower bound for the learning rate (default: 5e-5).

clip_gradient: Maximum value of the gradient; the gradient is clipped if it is too large (default: 10).

weight_decay: The weight decay (or L2 regularization) coefficient. Modifies the objective by adding a penalty for large weights (default: 1e-8).

init: Initializer for the weights of the network (default: "xavier").

ctx: The mxnet CPU/GPU context. Refer to using CPU/GPU in the mxnet documentation (default: NULL, uses CPU).

hybridize: Increases efficiency by using symbolic programming (default: TRUE).

context_length: Number of steps to unroll the RNN for before computing predictions (default: NULL, in which case context_length = prediction_length).

num_layers: Number of RNN layers (default: 2).

num_cells: Number of RNN cells for each layer (default: 40).

cell_type: Type of recurrent cells to use (available: "lstm" or "gru"; default: "lstm").

dropout_rate: Dropout regularization parameter (default: 0.1).

use_feat_dynamic_real: Whether to use the feat_dynamic_real field from the data (default: FALSE).

use_feat_static_cat: Whether to use the feat_static_cat field from the data (default: FALSE).

use_feat_static_real: Whether to use the feat_static_real field from the data (default: FALSE).

cardinality: Number of values of each categorical feature. This must be set if use_feat_static_cat == TRUE (default: NULL).

embedding_dimension: Dimension of the embeddings for categorical features (default: [min(50, (cat + 1) // 2) for cat in cardinality]).

distr_output: Distribution used to evaluate observations and sample predictions (default: StudentTOutput()).

scaling: Whether to automatically scale the target values (default: TRUE).

lags_seq: Indices of the lagged target values to use as inputs of the RNN (default: NULL, in which case these are automatically determined based on freq).

time_features: Time features to use as inputs of the RNN (default: NULL, in which case these are automatically determined based on freq).

num_parallel_samples: Number of evaluation samples per time series to increase parallelism during inference. This is a model optimization that does not affect accuracy (default: 100).
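The embedding_dimension default above can be sketched directly in Python, since the documented expression is itself a Python comprehension. The helper name default_embedding_dims is hypothetical; only the formula min(50, (cat + 1) // 2) comes from the documentation:

```python
def default_embedding_dims(cardinality):
    """Default embedding size per categorical feature:
    min(50, (cat + 1) // 2) for each cardinality value."""
    return [min(50, (cat + 1) // 2) for cat in cardinality]

# A feature with 4 categories gets a 2-dimensional embedding;
# one with 1000 categories is capped at 50 dimensions.
print(default_embedding_dims([4, 1000]))  # [2, 50]
```

So small categorical features get compact embeddings that grow with cardinality, while very large ones are bounded at 50 dimensions.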
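The four learning-rate arguments (learning_rate, learning_rate_decay_factor, patience, minimum_learning_rate) together describe a reduce-on-plateau schedule. A minimal sketch of how such a schedule behaves, assuming a patience-based rule keyed to training loss (the function reduce_on_plateau is hypothetical, not part of the package):

```python
def reduce_on_plateau(losses, lr=1e-3, decay_factor=0.5,
                      patience=10, minimum_lr=5e-5):
    """Sketch of the schedule implied by learning_rate,
    learning_rate_decay_factor, patience, and minimum_learning_rate:
    after more than `patience` consecutive epochs without an
    improvement in loss, multiply lr by decay_factor, but never
    let it fall below minimum_lr. Returns the final learning rate."""
    best = float("inf")
    bad_epochs = 0
    for loss in losses:
        if loss < best:
            best = loss       # improvement: reset the patience counter
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs > patience:
                lr = max(lr * decay_factor, minimum_lr)
                bad_epochs = 0
    return lr

# Steadily improving loss: the learning rate is never reduced.
print(reduce_on_plateau([1.0, 0.9, 0.8]))  # 0.001
```

With the defaults, a long stretch of flat loss halves the rate every 11 stalled epochs until it bottoms out at 5e-5.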