This function implements the Fast Algorithm for Sparse Semiparametric Multi-functional Regression (FASSMR) with kernel estimation. The algorithm is specifically designed for estimating multi-functional partial linear single-index models, which incorporate multiple scalar variables and a functional covariate as predictors. The scalar variables are derived from the discretisation of a curve and have a linear effect, while the functional covariate exhibits a single-index effect.
FASSMR selects the impact points of the discretised curve and estimates the model. The algorithm employs a penalised least-squares regularisation procedure, integrated with kernel estimation using Nadaraya-Watson weights. It uses B-spline expansions to represent curves and eligible functional indexes. Additionally, it utilises an objective criterion (criterion) to determine the initial number of covariates in the reduced model (w.opt), the bandwidth (h.opt) and the penalisation parameter (lambda.opt).
FASSMR.kernel.fit(x, z, y, seed.coeff = c(-1, 0, 1), order.Bspline = 3,
nknot.theta = 3, min.q.h = 0.05, max.q.h = 0.5, h.seq = NULL, num.h = 10,
kind.of.kernel = "quad", range.grid = NULL, nknot = NULL, lambda.min = NULL,
lambda.min.h = NULL, lambda.min.l = NULL, factor.pn = 1, nlambda = 100,
vn = ncol(z), nfolds = 10, seed = 123, wn = c(10, 15, 20), criterion = "GCV",
penalty = "grSCAD", max.iter = 1000, n.core = NULL)
The matched call.
Estimated scalar response.
Differences between y and the fitted.values.
\(\hat{\mathbf{\beta}}\) (i.e. estimate of \(\mathbf{\beta}_0\) when the optimal tuning parameters w.opt, lambda.opt, h.opt and vn.opt are used).
Estimate of \(\beta_0^{\mathbf{1}}\) in the reduced model when the optimal tuning parameters w.opt, lambda.opt, h.opt and vn.opt are used.
Coefficients of \(\hat{\theta}\) in the B-spline basis (i.e. estimate of \(\theta_0\) when the optimal tuning parameters w.opt, lambda.opt, h.opt and vn.opt are used): a vector of length order.Bspline+nknot.theta.
Indexes of the non-zero \(\hat{\beta_{j}}\).
Selected bandwidth (when w.opt is considered).
Selected size for \(\mathcal{R}_n^{\mathbf{1}}\).
Selected value for the penalisation parameter (when w.opt is considered).
Value of the criterion function considered to select w.opt, lambda.opt, h.opt and vn.opt.
Selected value of vn (when w.opt is considered).
Estimate of \(\beta_0^{\mathbf{1}}\) for each value of the sequence wn.
Estimate of \(\theta_0^{\mathbf{1}}\) for each value of the sequence wn (i.e. its coefficients in the B-spline basis).
Value of the criterion function for each value of the sequence wn.
Indexes of the non-zero linear coefficients for each value of the sequence wn.
Selected value of the penalisation parameter for each value of the sequence wn.
Selected bandwidth for each value of the sequence wn.
Indexes of the covariates (in the entire set of \(p_n\)) used to build \(\mathcal{R}_n^{\mathbf{1}}\) for each value of the sequence wn.
x: Matrix containing the observations of the functional covariate collected by row (functional single-index component).
z: Matrix containing the observations of the functional covariate that is discretised collected by row (linear component).
y: Vector containing the scalar response.
seed.coeff: Vector of initial values used to build the set \(\Theta_n\) (see section Details). The coefficients for the B-spline representation of each eligible functional index \(\theta \in \Theta_n\) are obtained from seed.coeff. The default is c(-1,0,1).
order.Bspline: Positive integer giving the order of the B-spline basis functions. This is the number of coefficients in each piecewise polynomial segment. The default is 3.
nknot.theta: Positive integer indicating the number of regularly spaced interior knots in the B-spline expansion of \(\theta_0\). The default is 3.
min.q.h: Minimum quantile order of the distances between curves, which are computed using the projection semi-metric. This value determines the lower endpoint of the range from which the bandwidth is selected. The default is 0.05.
max.q.h: Maximum quantile order of the distances between curves, which are computed using the projection semi-metric. This value determines the upper endpoint of the range from which the bandwidth is selected. The default is 0.5.
h.seq: Vector containing the sequence of bandwidths. The default is a sequence of num.h equispaced bandwidths in the range constructed using min.q.h and max.q.h.
num.h: Positive integer indicating the number of bandwidths in the grid. The default is 10.
kind.of.kernel: The type of kernel function used. Currently, only the Epanechnikov kernel ("quad") is available.
range.grid: Vector of length 2 containing the endpoints of the grid at which the observations of the functional covariate x are evaluated (i.e. the range of the discretisation). If range.grid=NULL, then range.grid=c(1,p) is considered, where p is the discretisation size of x (i.e. ncol(x)).
nknot: Positive integer indicating the number of interior knots for the B-spline expansion of the functional covariate. The default value is (p - order.Bspline - 1)%/%2.
lambda.min: The smallest value for lambda (i.e., the lower endpoint of the sequence in which lambda.opt is selected), as a fraction of lambda.max. The default is lambda.min.l if the sample size is larger than factor.pn times the number of linear covariates, and lambda.min.h otherwise.
lambda.min.h: The lower endpoint of the sequence in which lambda.opt is selected if the sample size is smaller than factor.pn times the number of linear covariates. The default is 0.05.
lambda.min.l: The lower endpoint of the sequence in which lambda.opt is selected if the sample size is larger than factor.pn times the number of linear covariates. The default is 0.0001.
factor.pn: Positive integer used to set lambda.min. The default value is 1.
nlambda: Positive integer indicating the number of values in the sequence from which lambda.opt is selected. The default is 100.
vn: Positive integer or vector of positive integers indicating the number of groups of consecutive variables to be penalised together. The default value is vn=ncol(z), resulting in the individual penalisation of each scalar covariate.
nfolds: Positive integer indicating the number of cross-validation folds (used when criterion="k-fold-CV"). The default is 10.
seed: Seed for the random number generator, to ensure reproducible results (used when criterion="k-fold-CV"). The default is 123.
wn: A vector of positive integers indicating the eligible numbers of covariates in the reduced model. For more information, refer to the section Details. The default is c(10,15,20).
criterion: The criterion used to select the tuning and regularisation parameters: w.opt, lambda.opt and h.opt (also vn.opt if needed). Options include "GCV", "BIC", "AIC" or "k-fold-CV". The default setting is "GCV".
penalty: The penalty function applied in the penalised least-squares procedure. Currently, only "grLasso" and "grSCAD" are implemented. The default is "grSCAD".
max.iter: Maximum number of iterations allowed across the entire path. The default value is 1000.
n.core: Number of CPU cores designated for parallel execution. The default is n.core<-availableCores(omit=1).
German Aneiros Perez german.aneiros@udc.es
Silvia Novo Diaz snovo@est-econ.uc3m.es
The multi-functional partial linear single-index model (MFPLSIM) is given by the expression $$Y_i=\sum_{j=1}^{p_n}\beta_{0j}\zeta_i(t_j)+r\left(\left<\theta_0,X_i\right>\right)+\varepsilon_i,\ \ \ (i=1,\dots,n),$$ where:
\(Y_i\) is a real random response and \(X_i\) denotes a random element belonging to some separable Hilbert space \(\mathcal{H}\) with inner product denoted by \(\left\langle\cdot,\cdot\right\rangle\). The second functional predictor \(\zeta_i\) is assumed to be a curve defined on some interval \([a,b]\) which is observed at the points \(a\leq t_1<\dots<t_{p_n}\leq b\).
\(\mathbf{\beta}_0=(\beta_{01},\dots,\beta_{0p_n})^{\top}\) is a vector of unknown real coefficients and \(r(\cdot)\) denotes a smooth unknown link function. In addition, \(\theta_0\) is an unknown functional direction in \(\mathcal{H}\).
\(\varepsilon_i\) denotes the random error.
In the MFPLSIM, we assume that only a few scalar variables from the set \(\{\zeta(t_1),\dots,\zeta(t_{p_n})\}\) form part of the model. Therefore, we must select the relevant variables in the linear component (the impact points of the curve \(\zeta\) on the response) and estimate the model.
In this function, the MFPLSIM is fitted using the FASSMR algorithm. The main idea of this algorithm is to consider a reduced model, with only some (very few) linear covariates (but covering the entire discretisation interval of \(\zeta\)), and to discard directly the other linear covariates (since they are expected to contain very similar information about the response).
To explain the algorithm, we assume, without loss of generality, that the number \(p_n\) of linear covariates can be expressed as follows: \(p_n=q_nw_n\), with \(q_n\) and \(w_n\) integers. This consideration allows us to build a subset of the initial \(p_n\) linear covariates containing only \(w_n\) equally spaced discretised observations of \(\zeta\) covering the entire interval \([a,b]\). This subset is the following: $$ \mathcal{R}_n^{\mathbf{1}}=\left\{\zeta\left(t_k^{\mathbf{1}}\right),\ \ k=1,\dots,w_n\right\}, $$ where \(t_k^{\mathbf{1}}=t_{\left[(2k-1)q_n/2\right]}\) and \(\left[z\right]\) denotes the smallest integer not less than the real number \(z\).
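For illustration, the indices \(\left[(2k-1)q_n/2\right]\) of the discretisation points forming \(\mathcal{R}_n^{\mathbf{1}}\) can be reproduced with a few lines of R code (a minimal sketch, not part of the package; the values of \(p_n\) and \(w_n\) below are hypothetical):
# Minimal sketch (hypothetical p_n and w_n): indices of the w_n equally
# spaced discretisation points forming R_n^1 when p_n = q_n * w_n.
p.n <- 100                             # hypothetical discretisation size of zeta
w.n <- 10                              # hypothetical size of the reduced model
q.n <- p.n / w.n
k <- 1:w.n
idx <- ceiling((2 * k - 1) * q.n / 2)  # [z] = smallest integer not less than z
idx                                    # 5 15 25 ... 95: equally spaced over the grid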
We consider the following reduced model, which involves only the linear covariates belonging to \(\mathcal{R}_n^{\mathbf{1}}\):
$$
Y_i=\sum_{k=1}^{w_n}\beta_{0k}^{\mathbf{1}}\zeta_i(t_k^{\mathbf{1}})+r^{\mathbf{1}}\left(\left<\theta_0^{\mathbf{1}},\mathcal{X}_i\right>\right)+\varepsilon_i^{\mathbf{1}}.
$$
The program receives the eligible numbers of linear covariates for building the reduced model through the argument wn.
Then, the penalised least-squares variable selection procedure, with kernel estimation, is applied to the reduced model. This is done using the function sfplsim.kernel.fit, which requires the remaining arguments (for details, see the documentation of the function sfplsim.kernel.fit). The estimates obtained are the outputs of the FASSMR algorithm. For further details on this algorithm, see Novo et al. (2021).
Remark: If the condition \(p_n=w_n q_n\) is not met (i.e. \(p_n/w_n\) is not an integer), the function considers variable values \(q_n=q_{n,k}\), \(k=1,\dots,w_n\). Specifically: $$ q_{n,k}= \left\{\begin{array}{ll} [p_n/w_n]+1 & k\in\{1,\dots,p_n-w_n[p_n/w_n]\},\\ {[p_n/w_n]} & k\in\{p_n-w_n[p_n/w_n]+1,\dots,w_n\}, \end{array} \right. $$ where \([z]\) denotes the integer part of the real number \(z\).
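As a sketch of this remark (again with hypothetical values, only for illustration), the block sizes \(q_{n,k}\) can be computed as follows:
# Sketch of the remark (hypothetical values): block sizes q_{n,k} when
# p_n / w_n is not an integer; here [z] is the integer part of z.
p.n <- 103
w.n <- 10
r <- p.n - w.n * (p.n %/% w.n)                  # number of blocks of size [p_n/w_n]+1
q.nk <- c(rep(p.n %/% w.n + 1, r), rep(p.n %/% w.n, w.n - r))
q.nk                                            # 11 11 11 10 10 10 10 10 10 10
sum(q.nk)                                       # 103: the q_{n,k} sum to p_n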
The function supports parallel computation. To avoid it, we can set n.core=1.
Novo, S., Vieu, P., and Aneiros, G. (2021) Fast and efficient algorithms for sparse semiparametric bi-functional regression. Australian and New Zealand Journal of Statistics, 63, 606--638, https://doi.org/10.1111/anzs.12355.
See also sfplsim.kernel.fit, predict.FASSMR.kernel, plot.FASSMR.kernel and IASSMR.kernel.fit.
Alternative method: FASSMR.kNN.fit.
# \donttest{
library(fsemipar)   # assumed package providing FASSMR.kernel.fit and the Sugar data
data(Sugar)

y <- Sugar$ash
x <- Sugar$wave.290
z <- Sugar$wave.240

# Outliers: flag observations with ash content above 25
index.y.25 <- y > 25
index.atip <- index.y.25
(1:268)[index.atip]

# Dataset to model (outliers removed)
x.sug <- x[!index.atip, ]
z.sug <- z[!index.atip, ]
y.sug <- y[!index.atip]

train <- 1:216

ptm <- proc.time()
fit <- FASSMR.kernel.fit(x = x.sug[train, ], z = z.sug[train, ], y = y.sug[train],
  nknot.theta = 2, lambda.min.l = 0.03, max.q.h = 0.35, nknot = 20,
  criterion = "BIC", max.iter = 5000)
proc.time() - ptm
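# A hedged follow-up (not in the original example): inspect the returned
# object. Component names depend on the package version, so str() is used
# here rather than assuming specific element names.
str(fit, max.level = 1)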
# }