optismixture (version 0.1)

penoptpersp.alpha.only: penalized optimization of the constrained linearized perspective function

Description

Penalized optimization of the constrained linearized perspective function.

Usage

penoptpersp.alpha.only(y, z, a0, eps = NULL, reltol = NULL, relerr = NULL, rho0 = NULL, maxin = NULL, maxout = NULL)

Arguments

y
length $n$ vector
z
$n \times J$ matrix
a0
length $J$ vector
eps
length $J$ vector of lower bounds; defaults to rep(0.1/J, J)
reltol
relative tolerance for the Newton step, between 0 and 1; defaults to $10^{-3}$. Each inner loop minimizes $f_0 + \rho \times \mathrm{pen}$ for a fixed $\rho$, and stops when the Newton decrement satisfies $f(x) - \inf_y \hat{f}(y) \leq f(x) \times \mathrm{reltol}$, where $\hat{f}$ is the second-order approximation of $f$ at $x$
relerr
stop when the objective is within a factor $(1+\mathrm{relerr})$ of the minimum variance; between 0 and 1, defaults to $10^{-3}$
rho0
initial value for $\rho$; defaults to 1
maxin
maximum number of inner iterations
maxout
maximum number of outer iterations
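
The Newton-decrement stopping rule described under reltol can be illustrated with a short base-R sketch. The quadratic test function and all names below are illustrative, not part of the package:

```r
## For the inner Newton iterations, the gap between f(x) and the minimum of
## its second-order approximation f_hat is half the squared Newton decrement:
##   f(x) - inf_y f_hat(y) = (1/2) * t(g) %*% solve(H) %*% g
## Illustrative quadratic f(x) = x1^2 + 2*x2^2 (not from the package):
f    <- function(x) x[1]^2 + 2 * x[2]^2
grad <- function(x) c(2 * x[1], 4 * x[2])
hess <- diag(c(2, 4))

x <- c(1, 1)
decrement <- 0.5 * drop(t(grad(x)) %*% solve(hess, grad(x)))
reltol    <- 1e-3
converged <- decrement <= f(x) * reltol  # the stopping test described above
```

For this exact quadratic the decrement equals $f(x)$ itself (the approximation is exact and the minimum is 0), so the test only passes once $x$ is essentially at the optimum.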

Value

A list with components:
y
input y
z
input z
alpha
optimized alpha
rho
value of rho
f
value of the objective function
rhopen
value of $\rho \times \mathrm{pen}$ at the returned solution
outer
number of outer loops
relerr
relative error
alphasum
sum of optimized alpha

Details

The goal is to minimize $\sum_i \frac{y_i^2}{z_i^T\alpha}$ over $\alpha$, subject to $\alpha_j > \epsilon_j$ for $j = 1, \cdots, J$ and $\sum_{j=1}^J \alpha_j < 1$.

Instead of solving the constrained problem directly, we minimize $\sum_i \frac{y_i^2}{z_i^T\alpha} + \rho \times \mathrm{pen}$ for a decreasing sequence of $\rho$,

where $\mathrm{pen} = -\left( \sum_{j=1}^J \log(\alpha_j - \epsilon_j) + \log\left(1 - \sum_{j=1}^J \alpha_j\right) \right)$.
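
As a quick check on the penalty, here is a minimal base-R sketch (the name `pen` and the chosen `alpha` values are illustrative):

```r
## Log-barrier penalty from the formula above: finite only on the interior
## alpha_j > eps_j, sum(alpha) < 1, and growing as alpha approaches a boundary.
J   <- 3
eps <- rep(0.1 / J, J)
pen <- function(alpha) -(sum(log(alpha - eps)) + log(1 - sum(alpha)))

p_interior <- pen(rep(0.25, J))   # interior point: finite
p_boundary <- pen(rep(0.333, J))  # sum(alpha) close to 1: much larger
```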

The starting value is $\alpha = a0$, which may be missing.

The optimization stops when the objective is within a factor $(1+\mathrm{relerr})$ of the minimum variance.
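
The decreasing-$\rho$ scheme above can be sketched in a few lines of base R. This is only an illustration of the barrier path, using `optim()` with Nelder-Mead in place of the package's Newton inner loop; all names and the simulated data are assumptions, not the package's internals:

```r
set.seed(1)
n <- 50; J <- 3
y   <- rnorm(n)
z   <- matrix(rexp(n * J), n, J)  # n x J matrix as in the Arguments section
eps <- rep(0.1 / J, J)            # default lower bounds

f0  <- function(a) sum(y^2 / drop(z %*% a))                 # objective
pen <- function(a) -(sum(log(a - eps)) + log(1 - sum(a)))   # log barrier

## Outer loop: minimize f0 + rho * pen for a decreasing sequence of rho.
## Infeasible points get a large finite value so the search stays interior.
alpha <- rep(0.3, J)
rho   <- 1
for (k in 1:10) {
  obj <- function(a) {
    if (any(a <= eps) || sum(a) >= 1) return(1e10)
    f0(a) + rho * pen(a)
  }
  alpha <- optim(alpha, obj)$par  # Nelder-Mead stand-in for Newton steps
  rho   <- rho / 10
}
```

The returned `alpha` satisfies the constraints $\alpha_j > \epsilon_j$ and $\sum_j \alpha_j < 1$, since the barrier keeps every iterate in the interior.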