This function fits a sparse semi-functional partial linear single-index model (SFPLSIM). It employs a penalised least-squares regularisation procedure, integrated with nonparametric kernel estimation using Nadaraya-Watson weights.
The function uses B-spline expansions to represent curves and eligible functional indexes. It also utilises an objective criterion (criterion) to select both the bandwidth (h.opt) and the regularisation parameter (lambda.opt).
sfplsim.kernel.fit(x, z, y, seed.coeff = c(-1, 0, 1), order.Bspline = 3,
nknot.theta = 3, min.q.h = 0.05, max.q.h = 0.5, h.seq = NULL, num.h = 10,
range.grid = NULL, kind.of.kernel = "quad", nknot = NULL, lambda.min = NULL,
lambda.min.h = NULL, lambda.min.l = NULL, factor.pn = 1, nlambda = 100,
lambda.seq = NULL, vn = ncol(z), nfolds = 10, seed = 123, criterion = "GCV",
penalty = "grSCAD", max.iter = 1000, n.core = NULL)
The matched call.
Estimated scalar response.
Differences between y and the fitted.values.
Estimate of \(\beta_0\) when the optimal tuning parameters lambda.opt, h.opt and vn.opt are used.
Coefficients of \(\hat{\theta}\) in the B-spline basis (when the optimal tuning parameters lambda.opt, h.opt and vn.opt are used): a vector of length(order.Bspline+nknot.theta).
Indexes of the non-zero \(\hat{\beta_{j}}\).
Selected bandwidth.
Selected value of the penalisation parameter \(\lambda\).
Value of the criterion function considered to select lambda.opt, h.opt and vn.opt.
Minimum value of the penalised criterion used to estimate \(\beta_0\) and \(\theta_0\); that is, the value obtained using theta.est and beta.est.
Vector of dimension equal to the cardinality of \(\Theta_n\), containing the values of the penalised criterion for each functional index in \(\Theta_n\).
Index of \(\hat{\theta}\) in the set \(\Theta_n\).
A grid of values in [lambda.min.opt.max.mopt[1], lambda.min.opt.max.mopt[3]] is considered to search for lambda.opt (lambda.opt=lambda.min.opt.max.mopt[2]).
A grid of values in [lambda.min.opt.max.m[m,1], lambda.min.opt.max.m[m,3]] is considered to search for the optimal \(\lambda\) (lambda.min.opt.max.m[m,2]) used by the optimal \(\beta\) for each \(\theta\) in \(\Theta_n\).
h.opt=h.min.opt.max.mopt[2] (used by theta.est and beta.est) was sought between h.min.opt.max.mopt[1] and h.min.opt.max.mopt[3].
For each \(\theta\) in \(\Theta_n\), the optimal \(h\) (h.min.opt.max.m[m,2]) used by the optimal \(\beta\) for this \(\theta\) was sought between h.min.opt.max.m[m,1] and h.min.opt.max.m[m,3].
Sequence of eligible values for \(h\) considered when searching for h.opt.
The vector theta.seq.norm[j,] contains the coefficients in the B-spline basis of the jth functional index in \(\Theta_n\).
Selected value of vn.
Matrix containing the observations of the functional covariate (functional single-index component), collected by row.
Matrix containing the observations of the scalar covariates (linear component), collected by row.
Vector containing the scalar response.
Vector of initial values used to build the set \(\Theta_n\) (see section Details). The coefficients for the B-spline representation of each eligible functional index \(\theta \in \Theta_n\) are obtained from seed.coeff. The default is c(-1,0,1).
Positive integer giving the order of the B-spline basis functions. This is the number of coefficients in each piecewise polynomial segment. The default is 3.
Positive integer indicating the number of regularly spaced interior knots in the B-spline expansion of \(\theta_0\). The default is 3.
Minimum quantile order of the distances between curves, which are computed using the projection semi-metric. This value determines the lower endpoint of the range from which the bandwidth is selected. The default is 0.05.
Maximum quantile order of the distances between curves, which are computed using the projection semi-metric. This value determines the upper endpoint of the range from which the bandwidth is selected. The default is 0.5.
Vector containing the sequence of bandwidths. The default is a sequence of num.h equispaced bandwidths in the range constructed using min.q.h and max.q.h.
Positive integer indicating the number of bandwidths in the grid. The default is 10.
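The default construction of the bandwidth range can be sketched in base R. The pairwise-distance computation below is a simplified stand-in for the projection semi-metric (an illustrative toy example, not the package's internal code):

```r
# Sketch of the default bandwidth grid (toy Euclidean distances stand in
# for the projection semi-metric; not the package's exact internals).
set.seed(1)
x <- matrix(rnorm(20 * 50), nrow = 20)  # 20 curves discretised at 50 points
d <- as.vector(dist(x))                 # pairwise distances between curves
min.q.h <- 0.05; max.q.h <- 0.5; num.h <- 10
h.range <- quantile(d, probs = c(min.q.h, max.q.h))       # quantile-order endpoints
h.seq <- seq(h.range[1], h.range[2], length.out = num.h)  # num.h equispaced bandwidths
```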
Vector of length 2 containing the endpoints of the grid at which the observations of the functional covariate x are evaluated (i.e. the range of the discretisation). If range.grid=NULL, then range.grid=c(1,p) is considered, where p is the discretisation size of x (i.e. ncol(x)).
The type of kernel function used. Currently, only the Epanechnikov kernel ("quad") is available.
Positive integer indicating the number of interior knots for the B-spline expansion of the functional covariate. The default value is (p - order.Bspline - 1)%/%2.
The smallest value for lambda (i.e., the lower endpoint of the sequence in which lambda.opt is selected), as a fraction of lambda.max. The default is lambda.min.l if the sample size is larger than factor.pn times the number of linear covariates and lambda.min.h otherwise.
The lower endpoint of the sequence in which lambda.opt is selected if the sample size is smaller than factor.pn times the number of linear covariates. The default is 0.05.
The lower endpoint of the sequence in which lambda.opt is selected if the sample size is larger than factor.pn times the number of linear covariates. The default is 0.0001.
Positive integer used to set lambda.min. The default value is 1.
Positive integer indicating the number of values in the sequence from which lambda.opt is selected. The default is 100.
Sequence of values in which lambda.opt is selected. If lambda.seq=NULL, then the programme builds the sequence automatically using lambda.min and nlambda.
Positive integer or vector of positive integers indicating the number of groups of consecutive variables to be penalised together. The default value is vn=ncol(z), resulting in the individual penalisation of each scalar covariate.
Number of cross-validation folds (used when criterion="k-fold-CV"). The default is 10.
Seed for the random number generator, used to ensure reproducible results when criterion="k-fold-CV" is selected. The default is 123.
The criterion used to select the tuning and regularisation parameters: h.opt and lambda.opt (also vn.opt if needed). Options include "GCV", "BIC", "AIC" or "k-fold-CV". The default setting is "GCV".
The penalty function applied in the penalised least-squares procedure. Currently, only "grLasso" and "grSCAD" are implemented. The default is "grSCAD".
Maximum number of iterations allowed across the entire path. The default value is 1000.
Number of CPU cores designated for parallel execution. The default is n.core<-availableCores(omit=1).
German Aneiros Perez german.aneiros@udc.es
Silvia Novo Diaz snovo@est-econ.uc3m.es
The sparse semi-functional partial linear single-index model (SFPLSIM) is given by the expression: $$ Y_i=Z_{i1}\beta_{01}+\dots+Z_{ip_n}\beta_{0p_n}+r(\left<\theta_0,X_i\right>)+\varepsilon_i\ \ \ i=1,\dots,n, $$ where \(Y_i\) denotes a scalar response, \(Z_{i1},\dots,Z_{ip_n}\) are real random covariates and \(X_i\) is a functional random covariate valued in a separable Hilbert space \(\mathcal{H}\) with inner product \(\left\langle \cdot, \cdot \right\rangle\). In this equation, \(\mathbf{\beta}_0=(\beta_{01},\dots,\beta_{0p_n})^{\top}\), \(\theta_0\in\mathcal{H}\) and \(r(\cdot)\) are a vector of unknown real parameters, an unknown functional direction and an unknown smooth real-valued function, respectively. In addition, \(\varepsilon_i\) is the random error.
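As an illustration of this structure, toy data can be simulated from a SFPLSIM with a sparse \(\beta_0\) (the link function, functional index and covariate distributions below are arbitrary choices for the sketch, not taken from the package):

```r
# Toy data generated from a SFPLSIM-type structure (all choices illustrative).
set.seed(123)
n <- 100; p.n <- 5; p <- 100                      # sample size, scalar covariates, grid size
tt <- seq(0, 1, length.out = p)                   # discretisation grid of the curves
X <- t(replicate(n, cumsum(rnorm(p)) / sqrt(p)))  # functional covariate (rough paths)
Z <- matrix(rnorm(n * p.n), nrow = n)             # scalar covariates (linear component)
beta0 <- c(2, -1.5, 0, 0, 0)                      # sparse vector of linear coefficients
theta0 <- sin(2 * pi * tt)
theta0 <- theta0 / sqrt(sum(theta0^2) / p)        # normalised functional direction
proj <- drop(X %*% theta0 / p)                    # <theta0, X_i> via a Riemann sum
r <- function(u) u^2                              # unknown smooth link function
y <- drop(Z %*% beta0) + r(proj) + rnorm(n, sd = 0.1)
```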
The sparse SFPLSIM is fitted using the penalised least-squares approach. The first step is to transform the SSFPLSIM into a linear model by extracting from \(Y_i\) and \(Z_{ij}\) (\(j=1,\ldots,p_n\)) the effect of the functional covariate \(X_i\) using functional single-index regression. This transformation is achieved using nonparametric kernel estimation (for details, see the documentation of the function fsim.kernel.fit).
An approximate linear model is then obtained:
$$\widetilde{\mathbf{Y}}_{\theta_0}\approx\widetilde{\mathbf{Z}}_{\theta_0}\mathbf{\beta}_0+\mathbf{\varepsilon},$$
and the penalised least-squares procedure is applied to this model by minimising over the pair \((\mathbf{\beta},\theta)\)
$$
\mathcal{Q}\left(\mathbf{\beta},\theta\right)=\frac{1}{2}\left(\widetilde{\mathbf{Y}}_{\theta}-\widetilde{\mathbf{Z}}_{\theta}\mathbf{\beta}\right)^{\top}\left(\widetilde{\mathbf{Y}}_{\theta}-\widetilde{\mathbf{Z}}_{\theta}\mathbf{\beta}\right)+n\sum_{j=1}^{p_n}\mathcal{P}_{\lambda_{j_n}}\left(|\beta_j|\right), \quad (1)
$$
where \(\mathbf{\beta}=(\beta_1,\ldots,\beta_{p_n})^{\top}, \ \mathcal{P}_{\lambda_{j_n}}\left(\cdot\right)\) is a penalty function (specified in the argument penalty) and \(\lambda_{j_n} > 0\) is a tuning parameter.
To reduce the number of tuning parameters, \(\lambda_j\), to be selected for each sample, we consider \(\lambda_j = \lambda \widehat{\sigma}_{\beta_{0,j,OLS}}\), where \(\beta_{0,j,OLS}\) denotes the OLS estimate of \(\beta_{0,j}\) and \(\widehat{\sigma}_{\beta_{0,j,OLS}}\) is its estimated standard deviation. Both \(\lambda\) and \(h\) (in the kernel estimation) are selected using the objective criterion specified in the argument criterion.
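This rescaling can be sketched with a plain OLS fit in base R (illustrative only: the package applies it within the transformed model, not to a raw lm fit on the original data):

```r
# One penalisation level per covariate: lambda_j = lambda * se(beta_j_OLS).
set.seed(1)
z <- matrix(rnorm(100 * 4), nrow = 100)
y <- drop(z %*% c(1, -1, 0, 0)) + rnorm(100)
ols <- lm(y ~ z - 1)                          # OLS fit of the linear part
se.ols <- summary(ols)$coefficients[, "Std. Error"]
lambda <- 0.1                                 # single parameter chosen by `criterion`
lambda.j <- lambda * se.ols                   # covariate-specific tuning parameters
```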
In addition, the function uses a B-spline representation to construct a set \(\Theta_n\) of eligible functional indexes \(\theta\). The dimension of the B-spline basis is order.Bspline+nknot.theta, and the set of eligible coefficients is obtained by calibrating (to ensure the identifiability of the model) the set of initial coefficients given in seed.coeff. The larger this set, the greater the size of \(\Theta_n\). Due to the intensive computation required by our approach, a balance between the size of \(\Theta_n\) and the performance of the estimator is necessary. For that reason, Ait-Saidi et al. (2008) suggested considering order.Bspline=3 and seed.coeff=c(-1,0,1). For details on the construction of \(\Theta_n\), see Novo et al. (2019).
Finally, after estimating \(\mathbf{\beta}_0\) and \(\theta_0\) by minimising (1), we proceed to estimate the nonlinear function \(r_{\theta_0}(\cdot)\equiv r\left(\left<\theta_0,\cdot\right>\right)\). For this purpose, we again apply the kernel procedure with Nadaraya-Watson weights to smooth the partial residuals \(Y_i-\mathbf{Z}_i^{\top}\widehat{\mathbf{\beta}}\).
For further details on the estimation procedure of the SSFPLSIM, see Novo et al. (2021).
Remark: It should be noted that if we set lambda.seq to \(0\), we obtain the non-penalised estimation of the model, i.e. the OLS estimation. Using lambda.seq with a value \(\not= 0\) is advisable when the presence of irrelevant variables is suspected.
Ait-Saidi, A., Ferraty, F., Kassa, R., and Vieu, P. (2008) Cross-validated estimations in the single-functional index model. Statistics, 42(6), 475--494, doi:10.1080/02331880801980377.
Novo, S., Aneiros, G., and Vieu, P. (2019) Automatic and location-adaptive estimation in functional single-index regression. Journal of Nonparametric Statistics, 31(2), 364--392, doi:10.1080/10485252.2019.1567726.
Novo, S., Aneiros, G., and Vieu, P. (2021) Sparse semiparametric regression when predictors are mixture of functional and high-dimensional variables. TEST, 30, 481--504, doi:10.1007/s11749-020-00728-w.
Novo, S., Aneiros, G., and Vieu, P. (2021) A kNN procedure in semiparametric functional data analysis. Statistics and Probability Letters, 171, 109028, doi:10.1016/j.spl.2020.109028.
See also fsim.kernel.fit, predict.sfplsim.kernel and plot.sfplsim.kernel. Alternative procedure: sfplsim.kNN.fit.
# \donttest{
data("Tecator")
y<-Tecator$fat
X<-Tecator$absor.spectra2
z1<-Tecator$protein
z2<-Tecator$moisture
#Quadratic, cubic and interaction effects of the scalar covariates.
z.com<-cbind(z1,z2,z1^2,z2^2,z1^3,z2^3,z1*z2)
train<-1:160
#SSFPLSIM fit. Convergence errors for some theta are obtained.
ptm=proc.time()
fit<-sfplsim.kernel.fit(x=X[train,], z=z.com[train,], y=y[train],
max.q.h=0.35,lambda.min.l=0.01,
max.iter=5000, nknot.theta=4,criterion="BIC",nknot=20)
proc.time()-ptm
#Results
fit
names(fit)
# }