$$ \begin{array}{l} \min \|Y - X K\|_F\\ \mbox{s.t. } K \geq 0 \end{array} $$
where $Y$ and $X$ are two real matrices of dimension $n \times p$ and $n \times r$ respectively, and $\|\cdot\|_F$ is the Frobenius norm.
The algorithm is very fast compared to other approaches, as it is optimised for handling multiple right-hand sides.
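As a quick illustration of the problem being solved, the following sketch builds a target matrix from a known nonnegative coefficient matrix and recovers it with fcnnls; it assumes the NMF package is loaded and that the estimated coefficient matrix is returned in the component x of the fit.
library(NMF)
set.seed(123)
n <- 100; p <- 10; r <- 3
X <- matrix(runif(n * r), n, r)        # known basis matrix
K_true <- matrix(runif(r * p), r, p)   # nonnegative coefficients to recover
Y <- X %*% K_true                      # noiseless target, so the optimum is K_true
fit <- fcnnls(X, Y)
## assumption: the estimated K is returned in the component 'x' of the fit
max(abs(fit$x - K_true))               # expected to be close to 0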
fcnnls(x, y, ...)

## S4 method for signature 'matrix,matrix':
fcnnls(x, y, verbose = FALSE, pseudo = TRUE, ...)
...: extra arguments passed to the internal function .fcnnls. Currently not used.

verbose: toggle verbosity (default FALSE).

pseudo: by default (pseudo=FALSE) the algorithm uses Gaussian elimination to solve the successive internal linear problems, using the solve function. If pseudo=TRUE, the algorithm instead uses the Moore-Penrose generalized pseudoinverse from the corpcor package.

The returned fit contains a logical component pseudo, which is TRUE if the computation was performed using the pseudoinverse. See argument pseudo.
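For example, one can check from the fit which solver path was taken; this minimal sketch only relies on the pseudo argument and the returned pseudo component described above.
library(NMF)
set.seed(1)
X <- matrix(runif(50 * 3), 50, 3)
Y <- matrix(runif(50 * 5), 50, 5)
res1 <- fcnnls(X, Y, pseudo = FALSE)  # Gaussian elimination via solve()
res2 <- fcnnls(X, Y, pseudo = TRUE)   # Moore-Penrose pseudoinverse (corpcor)
## per the description above, the 'pseudo' component records whether
## the pseudoinverse was used
res1$pseudo
res2$pseudo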
signature(x = "matrix", y = "matrix"): This method wraps a call to the internal function .fcnnls, and formats the results in a similar way as other least-squares methods such as lm.

signature(x = "numeric", y = "matrix"): Shortcut for fcnnls(as.matrix(x), y, ...).

signature(x = "ANY", y = "numeric"): Shortcut for fcnnls(x, as.matrix(y), ...).
In the NMF package, this algorithm is used internally by the SNMF/R(L) algorithm from Kim et al. (2007) to solve general Nonnegative Matrix Factorization (NMF) problems, using alternating nonnegative constrained least-squares, that is, by iteratively and alternately estimating each matrix factor.

The algorithm is an active/passive set method, which rearranges the right-hand sides to reduce the number of pseudo-inverse calculations. It uses the unconstrained solution $K_u$ obtained from the unconstrained least squares problem, i.e. $\min_K \|Y - X K\|_F^2$, so as to determine the initial passive sets.
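As a rough illustration of how the initial passive sets come from the unconstrained solution, here is a minimal sketch in plain R; it mirrors the description above, not the package's internal code.
set.seed(1)
X <- matrix(runif(50 * 3), 50, 3)
Y <- matrix(runif(50 * 5), 50, 5)
## unconstrained least-squares solution K_u of min ||Y - X K||_F^2
## (normal equations: t(X) X K = t(X) Y)
K_u <- solve(crossprod(X), crossprod(X, Y))
## initial passive set: entries of K_u that are already positive;
## the remaining (active) entries start out constrained to zero
passive0 <- K_u > 0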
The function fcnnls is provided separately so that it can be used to solve other types of nonnegative least-squares problems. When multiple nonnegative least-squares fits are needed, it is recommended to use the internal function .fcnnls directly for faster computation.
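A possible pattern for such repeated fits is sketched below; it assumes the internal function is reachable as NMF:::.fcnnls and accepts the same (x, y) arguments as the exported fcnnls, and it does not assume anything about the structure of what .fcnnls returns.
library(NMF)
set.seed(1)
X  <- matrix(runif(100 * 4), 100, 4)
Ys <- replicate(20, matrix(runif(100 * 6), 100, 6), simplify = FALSE)
## hypothetical pattern: call the internal workhorse directly for each fit,
## skipping the result formatting done by the exported fcnnls();
## assumption: .fcnnls takes the same (x, y) arguments as fcnnls
fits <- lapply(Ys, function(Y) NMF:::.fcnnls(X, Y))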
The code of this function is a port from the original MATLAB code provided by Kim et al. (2007).
Van Benthem M and Keenan MR (2004). "Fast algorithm for the solution of large-scale non-negativity-constrained least squares problems." _Journal of Chemometrics_, *18*(10), pp. 441-450. ISSN 0886-9383.

Kim H and Park H (2007). "Sparse non-negative matrix factorizations via alternating non-negativity-constrained least squares for microarray data analysis." _Bioinformatics (Oxford, England)_, *23*(12), pp. 1495-1502. ISSN 1460-2059.
See also: nmf
## Define a random nonnegative matrix
n <- 200; p <- 20; r <- 3
V <- rmatrix(n, p)
## Compute the optimal matrix K for a given X matrix
X <- rmatrix(n, r)
res <- fcnnls(X, V)
## Compute the same thing using the Moore-Penrose generalized pseudoinverse
res <- fcnnls(X, V, pseudo=TRUE)
## It also works in the case of single vectors
y <- runif(n)
res <- fcnnls(X, y)
# or
res <- fcnnls(X[,1], y)