$$Y=\big\langle X,\beta\big\rangle+\epsilon=\int_{T}X(t)\beta(t)\,dt+\epsilon$$
where \(\big\langle \cdot,\cdot \big\rangle\) denotes the inner product on
\(L_2\), and \(\epsilon\) denotes random errors with mean zero, finite
variance \(\sigma^2\), and \(E[X(t)\epsilon]=0\).
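For concreteness, a minimal base-R simulation sketch of this model (all names and values below are illustrative, not part of the package): curves \(X(t)\) are observed on a grid and the inner product is approximated by a Riemann sum.

set.seed(1)
n  <- 100                                  # number of curves
tt <- seq(0, 1, length.out = 101)          # evaluation grid on T = [0, 1]
X  <- t(replicate(n, cumsum(rnorm(length(tt))) / 10))  # rough random curves, n x length(tt)
beta <- sin(2 * pi * tt)                   # a hypothetical functional parameter beta(t)
eps  <- rnorm(n, sd = 0.1)                 # random errors with mean zero
# <X, beta> approximated by sum(X(t) * beta(t)) * dt
y <- as.numeric(X %*% beta) * diff(tt)[1] + eps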
The function uses the basis representation proposed by Ramsay and Silverman (2005) to model the
relationship between the scalar response and the functional covariate:
the observed functional data are expanded as
\(X(t)\approx\sum_{k=1}^{k_{n1}} c_k \psi_k(t)\) and the unknown
functional parameter as \(\beta(t)\approx\sum_{k=1}^{k_{n2}} b_k
\phi_k(t)\).
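Continuing the simulation sketch above, this representation step can be illustrated with the fda package; the basis size (11 B-splines) is an arbitrary choice, not a package default.

library(fda)
bsp <- create.bspline.basis(rangeval = c(0, 1), nbasis = 11)  # illustrative basis size
Xfd <- Data2fd(argvals = tt, y = t(X), basisobj = bsp)        # expands each curve in the basis
dim(Xfd$coefs)                                                # nbasis x n matrix of coefficients c_k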
The functional linear model is estimated by the expression: $$\hat{y}=
\big\langle X,\hat{\beta}\big\rangle =
C^{T}\psi(t)\phi^{T}(t)\hat{b}=\tilde{X}\hat{b}$$ where
\(\tilde{X}(t)=C^{T}\psi(t)\phi^{T}(t)\) and
\(\hat{b}=(\tilde{X}^{T}\tilde{X})^{-1}\tilde{X}^{T}y\),
so that
\(\hat{y}=\tilde{X}\hat{b}=\tilde{X}(\tilde{X}^{T}\tilde{X})^{-1}\tilde{X}^{T}y=Hy\),
where \(H\) is the hat matrix, with degrees of freedom \(df=\mathrm{tr}(H)\).
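The unpenalized step can be sketched in base R with a generic design matrix; Xtilde and y here are placeholders, not the objects built internally by fregre.basis.

set.seed(2)
n <- 50; k <- 5
Xtilde <- matrix(rnorm(n * k), n, k)                    # toy design matrix
y      <- rnorm(n)                                      # toy response
b_hat  <- solve(crossprod(Xtilde), crossprod(Xtilde, y))  # (X'X)^{-1} X'y
H      <- Xtilde %*% solve(crossprod(Xtilde)) %*% t(Xtilde)  # hat matrix
y_hat  <- H %*% y
df     <- sum(diag(H))                                  # degrees of freedom df = tr(H)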
If \(\lambda>0\), fregre.basis incorporates a roughness penalty:
\(\hat{y}=\tilde{X}\hat{b}=\tilde{X}(\tilde{X}^{T}\tilde{X}+\lambda R_0)^{-1}\tilde{X}^{T}y=H_{\lambda}y\),
where \(R_0\) is the penalty matrix.
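Continuing the same toy sketch, a penalised version with an illustrative identity penalty matrix R0 (a real fit would use the roughness penalty of the chosen basis):

lambda <- 0.5                                           # illustrative smoothing parameter
R0     <- diag(k)                                       # placeholder penalty matrix
H_lam  <- Xtilde %*% solve(crossprod(Xtilde) + lambda * R0) %*% t(Xtilde)
y_hat_pen <- H_lam %*% y
df_pen    <- sum(diag(H_lam))                           # effective df = tr(H_lambda)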
This function accepts functional covariates of class fdata, matrix,
data.frame, or directly of class fd. The function also provides default
values for the arguments basis.x and basis.b, used for the basis
representation of the functional data \(X(t)\) and the functional
parameter \(\beta(t)\), respectively.
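A hedged usage sketch with the tecator data shipped in fda.usc; the basis sizes chosen below (11 and 7) are arbitrary choices, not the function's defaults.

library(fda.usc)
data(tecator)
absorp <- tecator$absorp.fdata                      # functional covariate of class "fdata"
fat    <- tecator$y$Fat                             # scalar response
rtt    <- absorp$rangeval
basis.x <- create.bspline.basis(rangeval = rtt, nbasis = 11)  # basis for X(t)
basis.b <- create.bspline.basis(rangeval = rtt, nbasis = 7)   # basis for beta(t)
fit <- fregre.basis(absorp, fat, basis.x = basis.x, basis.b = basis.b)
summary(fit)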
If basis = NULL, the function creates a B-spline basis via create.bspline.basis.
If the functional covariate fdataobj is a matrix or data.frame, the function
creates an object of class "fdata" with default attributes; see fdata.
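For example, a plain matrix can be converted explicitly with fdata(); the grid and dimensions below are arbitrary.

library(fda.usc)
Xmat   <- matrix(rnorm(20 * 50), nrow = 20, ncol = 50)        # 20 curves observed at 50 points
Xfdata <- fdata(Xmat, argvals = seq(0, 1, length.out = 50))   # build the "fdata" object
class(Xfdata)                                                 # "fdata"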
If basis.x$type = "fourier" and basis.b$type = "fourier", the bases are
orthonormal and the function reduces the number of Fourier basis elements to
\(\min(k_{n1},k_{n2})\), where \(k_{n1}\) and \(k_{n2}\) are the number of
basis elements of basis.x and basis.b, respectively.
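A sketch with Fourier bases of unequal size; under the rule above, the larger basis is effectively reduced to \(\min(k_{n1},k_{n2})\) elements. The sizes 11 and 5 are illustrative only.

library(fda.usc)
data(tecator)
absorp  <- tecator$absorp.fdata
rtt     <- absorp$rangeval
basis.x <- create.fourier.basis(rangeval = rtt, nbasis = 11)  # kn1 = 11
basis.b <- create.fourier.basis(rangeval = rtt, nbasis = 5)   # kn2 = 5
fit.f <- fregre.basis(absorp, tecator$y$Fat, basis.x = basis.x, basis.b = basis.b)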