Software


(New) Our recent software and code can be found at https://github.com/orgs/unc-optimization/dashboard


ProxSGD – A Python package of stochastic algorithms for nonconvex composite optimization



Sieve-SDP – A Preprocessing Tool for Semidefinite Programming



PAPA – Proximal Alternating Penalty Algorithms


  • PAPA-v1.0 is a MATLAB software package for solving constrained convex optimization problems of the form:
      \begin{array}{ll}  \displaystyle\min_{\mathbf{x}\in\mathbb{R}^n, \mathbf{y}\in\mathbb{R}^p} & f(\mathbf{x},\mathbf{y}) = g(\mathbf{x}) + h(\mathbf{y}) \\          \text{s.t.} & -\mathbf{x}+\mathbf{B}\mathbf{y} \in\mathcal{K},          \end{array}  

    where g and h are two convex functions, \mathbf{B}\in\mathbb{R}^{n\times p}, and \mathcal{K} is a simple, nonempty, closed, and convex set in \mathbb{R}^n.
    Here, we assume that g and h are proximally tractable, i.e., the proximal operators \mathrm{prox}_{g} and \mathrm{prox}_h are efficient to compute (see below).

    PAPA aims at solving constrained convex optimization problems for any convex functions g and h whose proximal operators are provided (a toy proximal-operator example is sketched after the references below).
    PAPA is developed by Quoc Tran-Dinh at the Department of Statistics and Operations Research, University of North Carolina at Chapel Hill (UNC), North Carolina.

  • Download: PAPA-v1.0 can be downloaded HERE.
  • References: The theory and algorithms implemented in PAPA can be found in:
    1. Q. Tran-Dinh: Proximal Alternating Penalty Algorithms for Nonsmooth Constrained Convex Optimization, Manuscript, STOR-UNC, Nov. 2017. Preprint: https://arxiv.org/pdf/1711.01367.pdf
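  • Example: PAPA expects the user to supply the proximal operators of g and h. The following minimal MATLAB sketch (illustrative only; the handle name prox_l1 is not part of PAPA's interface) shows one such tractably proximal function, g(\mathbf{x}) = \lambda\Vert\mathbf{x}\Vert_1, whose proximal operator is soft-thresholding:

      % Proximal operator of g(x) = lambda*||x||_1 (soft-thresholding):
      %   prox_{lambda*g}(x) = argmin_z { lambda*||z||_1 + 0.5*||z - x||_2^2 }.
      prox_l1 = @(x, lambda) sign(x) .* max(abs(x) - lambda, 0);

      % Toy evaluation: entries of x with |x_i| <= 0.5 are zeroed out.
      x = randn(10, 1);
      z = prox_l1(x, 0.5);

    Any pair (g, h) with similarly cheap proximal maps can, in principle, be passed to PAPA.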

LMAOPT – Low-Rank Matrix Approximation Optimization


  • LMAOPT-v1.0 is a MATLAB code collection for solving three special cases of the following low-rank matrix optimization problem:
       \Phi^{\star} := \displaystyle\min_{U\in\mathbb{R}^{m\times r}, V\in\mathbb{R}^{n\times r}}\Big\{ \Phi(U,V) := \phi(\mathcal{A}(UV^T) - B) \Big\},  

    where \phi is a proper, closed, and convex function from \mathbb{R}^l\to\mathbb{R}\cup\{+\infty\}, \mathcal{A} is a linear operator from \mathbb{R}^{m\times n} to \mathbb{R}^l, and B\in\mathbb{R}^l is a given observation vector. Here, we are mainly interested in the case r \ll \min\{m, n\}. Currently, we provide code to solve three special cases of the above problem (a toy evaluation of case 1 is sketched after the references below):

    1. Quadratic loss: \phi(\cdot) = \frac{1}{2}\Vert\cdot\Vert_2^2;
    2. Quadratic loss and symmetric case: \phi(\cdot) = \frac{1}{2}\Vert\cdot\Vert_2^2 and U = V; and
    3. Nonsmooth loss with a tractable proximal operator: for instance, \phi(\cdot) = \Vert\cdot\Vert_1.
  • LMAOPT is implemented by Quoc Tran-Dinh at the Department of Statistics and Operations Research (STAT&OR), The University of North Carolina at Chapel Hill (UNC). This is a joint work with Zheqi Zhang at STAT & OR, UNC.
  • Download: LMAOPT-v1.0 can be downloaded HERE.
    For further information, please read Readme3.txt.
  • References: The theory and algorithms implemented in LMAOPT can be found in the following manuscript: Q. Tran-Dinh and Z. Zhang: Extended Gauss-Newton and Gauss-Newton ADMM algorithms for low-rank matrix optimization. STAT&OR Tech. Report UNC-REPORT-2016.a, 2016. Preprint: http://arxiv.org/abs/1606.03358
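  • Example: As a toy illustration of case 1 above (quadratic loss), the following MATLAB sketch evaluates \Phi(U,V) and its partial gradients when \mathcal{A} is, for instance, the sampling operator of matrix completion, \mathcal{A}(X) = X(\Omega); the variable names are hypothetical and not part of LMAOPT's interface:

      % Problem data: observe 30% of the entries of a rank-3 matrix.
      m = 50; n = 40; r = 3;
      Omega = rand(m, n) < 0.3;              % index set of observed entries
      Xtrue = randn(m, r) * randn(r, n);     % ground-truth low-rank matrix
      B     = Xtrue(Omega);                  % observed vector B = A(Xtrue)

      % Objective Phi(U,V) = 0.5*||A(U*V') - B||_2^2 and its gradients.
      U = randn(m, r); V = randn(n, r);      % current iterate
      R   = U * V';
      res = R(Omega) - B;                    % residual A(U*V') - B
      Phi = 0.5 * norm(res)^2;               % objective value
      G = zeros(m, n); G(Omega) = res;       % adjoint: G = A*(res)
      gradU = G  * V;                        % partial gradient w.r.t. U
      gradV = G' * U;                        % partial gradient w.r.t. V

    Gauss-Newton-type methods as in the reference above exploit this bilinear structure rather than taking plain gradient steps.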

DECOPT: Decomposition OPTimization


  • DECOPT-v1.0 is a MATLAB software package for solving constrained convex optimization problems of the form:
      \begin{array}{ll}  \displaystyle\min_{\mathbf{x}\in\mathbb{R}^p, \mathbf{r}\in\mathbb{R}^n} & f(\mathbf{x}) + g(\mathbf{r}) \\          \text{s.t.} & \mathbf{A}\mathbf{x} - \mathbf{r} = \mathbf{b}, ~\mathbf{l}\leq \mathbf{x}\leq \mathbf{u},          \end{array}  

    where f and g are two convex functions, \mathbf{A}\in\mathbb{R}^{n\times p}, \mathbf{b}\in\mathbb{R}^n, and \mathbf{l}, \mathbf{u}\in\mathbb{R}^p are the lower and upper bounds on \mathbf{x}.
    Here, we assume that f and g are proximally tractable, i.e., the proximal operators \mathrm{prox}_{f} and \mathrm{prox}_g are efficient to compute (see below).

    DECOPT aims at solving constrained convex optimization problems for any convex functions f and g whose proximal operators are provided (case 2 below is worked into a small sketch after the references).
    The following special cases have been customized in DECOPT:

    1. Basis pursuit:
      \displaystyle\min_{\mathbf{x}}\Big\{ \Vert\mathbf{x}\Vert_1 : \mathbf{A}(\mathbf{x}) = \mathbf{b}, ~\mathbf{l} \leq \mathbf{x}\leq\mathbf{u} \Big\}.
    2. The \ell_1/\ell_2-unconstrained LASSO problem:
      \displaystyle\min_{\mathbf{x}}\Big\{ \frac{1}{2}\Vert\mathbf{A}(\mathbf{x}) - \mathbf{b}\Vert_2^2 + \lambda\Vert\mathbf{x}\Vert_1\Big\}.
    3. The \ell_1/\ell_1-convex problem:
      \displaystyle\min_{\mathbf{x}}\Big\{ \Vert\mathbf{A}(\mathbf{x}) - \mathbf{b}\Vert_1 + \lambda\Vert\mathbf{x}\Vert_1 \Big\}.
    4. The square-root \ell_1/\ell_2 LASSO problem:
      \displaystyle\min_{\mathbf{x}}\Big\{ \Vert\mathbf{A}(\mathbf{x}) - \mathbf{b}\Vert_2 + \lambda\Vert\mathbf{x}\Vert_1 \Big\}.
    5. The \ell_1/\ell_2 basis pursuit denoising (BPDN) problem:
      \displaystyle\min_{\mathbf{x}}\Big\{ \Vert\mathbf{x}\Vert_1 : \Vert\mathbf{A}(\mathbf{x}) - \mathbf{b}\Vert_2 \leq \delta\Big\}.
    6. The \ell_2/\ell_1 constrained LASSO problem:
      \displaystyle\min_{\mathbf{x}}\Big\{ \frac{1}{2}\Vert\mathbf{A}(\mathbf{x}) - \mathbf{b}\Vert_2^2 : \Vert\mathbf{x}\Vert_1 \leq \delta \Big\}.

    Here, \lambda \geq 0 is a penalty parameter, \delta > 0 is a given noise level parameter, \mathbf{b} is the observed measurement vector, and \mathbf{A} is a linear operator.

    DECOPT was developed by Quoc Tran-Dinh while he was with the Laboratory for Information and Inference Systems (LIONS), EPFL, Lausanne, Switzerland.
    This is joint work with V. Cevher at EPFL/LIONS.

  • Download: DECOPT-v1.0 can be downloaded HERE.
    For further information, please read Readme2.txt.
  • References: The theory and algorithms implemented in DECOPT can be found in:
    1. Q. Tran-Dinh and V. Cevher: Constrained convex minimization via model-based excessive gap. Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS), Montreal, Canada, 2014.
      Preprint: http://papers.nips.cc/paper/5574-constrained-convex-minimization-via-model-based-excessive-gap.
    2. Q. Tran-Dinh and V. Cevher: A Primal-Dual Algorithmic Framework for Constrained Convex Minimization. LIONS Tech. Report EPFL-REPORT-199844, 2014.
      Preprint: http://arxiv.org/abs/1406.5403
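  • Example: As a small illustration of how case 2 above fits the DECOPT template, introduce the splitting variable \mathbf{r} = \mathbf{A}\mathbf{x} - \mathbf{b}, so that the unconstrained LASSO becomes \min_{\mathbf{x},\mathbf{r}}\big\{ \lambda\Vert\mathbf{x}\Vert_1 + \frac{1}{2}\Vert\mathbf{r}\Vert_2^2 : \mathbf{A}\mathbf{x} - \mathbf{r} = \mathbf{b}\big\} with f(\mathbf{x}) = \lambda\Vert\mathbf{x}\Vert_1 and g(\mathbf{r}) = \frac{1}{2}\Vert\mathbf{r}\Vert_2^2. Both proximal operators are cheap, as the following MATLAB sketch shows (the handle names are illustrative, not DECOPT's actual API):

      % prox of f(x) = lambda*||x||_1 with step gamma: soft-thresholding.
      prox_f = @(x, gamma, lambda) sign(x) .* max(abs(x) - gamma*lambda, 0);
      % prox of g(r) = 0.5*||r||_2^2 with step gamma: a simple scaling.
      prox_g = @(r, gamma) r ./ (1 + gamma);

      % Toy evaluation at random points with step size gamma = 1:
      xp = prox_f(randn(5, 1), 1, 0.1);
      rp = prox_g(randn(3, 1), 1);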

SCOPT: Self-Concordant OPTimization


  • SCOPT-v1.0 is a MATLAB package for composite self-concordant (like) minimization. It implements proximal-gradient, proximal quasi-Newton, proximal-Newton, and path-following interior-point algorithms for solving composite convex minimization problems involving self-concordant and self-concordant-like cost functions:
      \displaystyle\min_{\mathbf{x}\in\mathbb{R}^n}\left\{ F(\mathbf{x}) := f(\mathbf{x}) + g(\mathbf{x}) \right\},

    where f is a self-concordant or self-concordant-like function, and g is a possibly nonsmooth convex function. The function g is usually assumed to be such that its proximal operator   \mathrm{prox}_{g}(\mathbf{x}) := \mathrm{arg}\!\displaystyle\min_{\mathbf{z}\in\mathbb{R}^n}\left\{g(\mathbf{z}) + \frac{1}{2}\Vert\mathbf{z} - \mathbf{x}\Vert_2^2\right\}  is “efficient” to compute (i.e., available in closed form or via a polynomial-time algorithm). If the proximal operator of g can be computed efficiently, then we say that g is tractably proximal.

    Definition: Briefly, a function f is called self-concordant (with parameter M_f \geq 0) if \vert\varphi^{\prime\prime\prime}(t)\vert\leq M_f\varphi^{\prime\prime}(t)^{3/2}, where \varphi(t):= f(\mathbf{x} + t\mathbf{u}) for \mathbf{x}\in\mathrm{dom}(f), \mathbf{u}\in\mathbb{R}^n, and \mathbf{x} + t\mathbf{u}\in\mathrm{dom}(f) (a worked check for the \log function is given after the references below).

    Examples: f(\mathbf{X}) := -\log(\det(\mathbf{X})) (for a symmetric and positive definite matrix \mathbf{X}), f(\mathbf{x},t) := -\log(t^2 - \Vert\mathbf{x}\Vert_2^2), and f(\mathbf{x}) := -\sum_{i=1}^n\log(b_i - \mathbf{a}_i^T\mathbf{x}) are all self-concordant.

    SCOPT is developed by Quoc Tran-Dinh at the Laboratory for Information and Inference Systems (LIONS), EPFL, Lausanne, Switzerland.
    This is joint work with A. Kyrillidis and V. Cevher at EPFL/LIONS. SCOPT includes algorithms customized for solving the following convex optimization problems, among others.

    1. Sparse Inverse Covariance Estimation: An instance of this problem class can be expressed as follows:
        \displaystyle\min_{\mathbf{X}\succ 0}\left\{ -\log(\det(\mathbf{X})) + \mathrm{trace}(\boldsymbol{\Sigma}\mathbf{X}) + \rho\Vert\mathrm{vec}(\mathbf{X})\Vert_1 \right\},  

      where \rho > 0 is a penalty parameter.

      • Reference: J. Friedman, T. Hastie, and R. Tibshirani. Sparse Inverse Covariance Estimation with the graphical LASSO. Biostatistics, vol. 9, no. 3, pp. 432–441, 2007.
    2. Sparse Poisson Intensity Reconstruction: As an example, this problem has the following form:
      \displaystyle\min_{\mathbf{x}\geq 0}\left\{ \sum_{i=1}^m\mathbf{a}_i^T\mathbf{x} - \sum_{i=1}^my_i\log(\mathbf{a}_i^T\mathbf{x}) + \rho\Vert\mathbf{x}\Vert_{\mathrm{TV}}\right\},

      where y_i\in\mathbb{N}, \rho \geq 0 is a penalty parameter, and \Vert\cdot\Vert_{\mathrm{TV}} is the total-variation (TV) norm.

      • Reference: Z. T. Harmany, R. F. Marcia, and R. M. Willett. This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction Algorithms – Theory and Practice. IEEE Trans. Image Processing, vol. 21, no. 3, pp. 1084–1096, 2012.
    3. Heteroscedastic LASSO: This problem is formulated as follows:
        \displaystyle\min_{\sigma > 0, \boldsymbol{\beta}\in\mathbb{R}^p}\left\{ -\log(\sigma) + \frac{1}{2n}\Vert\mathbf{X}\boldsymbol{\beta} - \sigma\mathbf{y}\Vert_2^2 + \rho\Vert\boldsymbol{\beta}\Vert_1\right\},

      where \rho > 0 is a penalty parameter.

      • Reference: N. Städler, P. Bühlmann, and S. van de Geer. \ell_1-penalization for mixture regression models. TEST, vol. 19, no. 2, pp. 209–256, 2010.
    4. Sparse logistic and multinomial logistic regression: This problem can be written in the following mathematical form:
        \displaystyle\min_{\mathbf{X}\in\mathbb{R}^{m\times p}}\left\{ \frac{1}{N}\sum_{i=1}^N\left[ \log\left(1 + \sum_{j=1}^m\exp(\mathbf{W}_{(j)}^T\mathbf{X}_{(i)})\right) - \sum_{j=1}^m y_i^{(j)}\mathbf{W}_{(j)}^T\mathbf{X}_{(i)}\right] + \rho\Vert \mathrm{vec}(\mathbf{X}) \Vert_1 \right\},  

      where \mathbf{W}_{(j)} and y_i^{(j)} are input data, and \rho > 0 is a penalty parameter.

      • Reference: B. Krishnapuram, M. Figueiredo, L. Carin, and A. Hartemink. Sparse multinomial logistic regression: Fast algorithms and generalization bounds. IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), vol. 27, pp. 957–968, 2005.
    5. In addition to the problems mentioned above, SCOPT also contains code that can be customized to solve the following constrained convex optimization problem:
        g^{\star} := \displaystyle\min_{\mathbf{x}\in\Omega}g(\mathbf{x}),  

      where g is a convex function with a tractable proximal operator, and \Omega is a nonempty, closed, and convex set in \mathbb{R}^p endowed with a self-concordant barrier f (i.e., f is self-concordant with M_f = 2 and \vert\varphi'(t)\vert \leq \sqrt{\nu}\varphi''(t)^{1/2} for a given \nu > 0 and \varphi(t) := f(\mathbf{x} + t\mathbf{u})).

      • Example: The clustering problem in unsupervised learning can be reformulated as the above constrained convex problem using the \max-norm:
        \begin{array}{ll}  \displaystyle\min_{\mathbf{K},\mathbf{R}, \mathbf{L}}& \Vert\mathrm{vec}(\mathbf{K} - \mathbf{A})\Vert_1 \\  \textrm{s.t.} & \begin{bmatrix}\mathbf{R} & \mathbf{K}\\ \mathbf{K}^T & \mathbf{L}\end{bmatrix}\succeq 0, ~\mathbf{R}_{ii} \leq 1, \mathbf{L}_{ii}\leq 1, ~i=1,\dots, p.  \end{array}
      • Reference: A. Jalali and N. Srebro. Clustering using max-norm constrained optimization. Proceedings of the International Conference on Machine Learning (ICML), 2012, pp. 1–17.
  • Download: SCOPT-v1.0 can be downloaded HERE. For further information, please read Readme.txt.
  • References: The theory and algorithms implemented in SCOPT can be found in:
    1. Q. Tran-Dinh, A. Kyrillidis, and V. Cevher (2013): A proximal Newton framework for composite minimization: Graph learning without Cholesky decomposition and matrix inversions. Proceedings of the 30th International Conference on Machine Learning (ICML), JMLR W&CP 28(2): 271–279, 2013. Preprint: https://arxiv.org/abs/1301.1459.
    2. Q. Tran-Dinh, A. Kyrillidis, and V. Cevher: Composite Self-concordant Minimization, Journal of Machine Learning Research, 2014. Preprint: http://arxiv.org/abs/1308.2867
    3. Q. Tran-Dinh, A. Kyrillidis, and V. Cevher: An inexact proximal path-following algorithm for constrained convex minimization, SIAM J. Optimization, vol. 24, no. 4, pp. 1718–1745, 2014. Preprint: http://arxiv.org/abs/1311.1756.
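  • Example (worked check of the definition above): for f(x) := -\log(x) on x > 0, take \varphi(t) := f(x + tu) = -\log(x + tu). Then
      \varphi''(t) = \dfrac{u^2}{(x + tu)^2}, \qquad \varphi'''(t) = -\dfrac{2u^3}{(x + tu)^3},
    so that, since x + tu > 0,
      \vert\varphi'''(t)\vert = \dfrac{2\vert u\vert^3}{(x + tu)^3} = 2\left(\dfrac{u^2}{(x + tu)^2}\right)^{3/2} = 2\,\varphi''(t)^{3/2}.
    Hence -\log is self-concordant with M_f = 2; the examples listed above (e.g., f(\mathbf{x}) = -\sum_{i=1}^n\log(b_i - \mathbf{a}_i^T\mathbf{x})) follow by the same computation along the line t \mapsto \mathbf{x} + t\mathbf{u}.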

BMISolver – Optimization involving Bilinear Matrix Inequalities


  • BMIsolver-v1.1 – a MATLAB software package for BMI optimization
    BMIsolver is a MATLAB package for solving optimization problems with BMI (bilinear matrix inequality) constraints arising in feedback controller design (spectral abscissa, H2, H-infinity, and mixed H2/H-infinity optimization problems). It is implemented based on the methods proposed in [1] and [2], which are joint work with S. Gumussoy and W. Michiels. The code was developed by Quoc Tran-Dinh at the Department of Electrical Engineering (ESAT/SCD) and the Optimization in Engineering Center (OPTEC), KU Leuven, Belgium, under the supervision of Prof. M. Diehl. The current version requires YALMIP as a modeling language; we recommend SeDuMi as the SDP solver (a toy YALMIP example follows the references below).
  • Download BMIsolver
    1. A recently updated version (v.1.1) can be found HERE.
    2. Third-party packages: YALMIP (modeling language) and SeDuMi (recommended SDP solver).
  • References
    1. Q. Tran-Dinh, S. Gumussoy, W. Michiels, and M. Diehl. Combining Convex-Concave Decompositions and Linearization Approaches for solving BMIs, with Application to Static Output Feedback. Tech. Report, July 2011, [pdf].
    2. Q. Tran-Dinh, W. Michiels, and M. Diehl. An inner convex approximation algorithm for BMI optimization and applications in control. Tech. Report, December 2011, [pdf].
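  • Example: The following minimal sketch (our own illustration, not part of BMIsolver) shows the YALMIP + SeDuMi modeling style that BMIsolver builds on, here for a simple Lyapunov LMI feasibility problem P \succ 0, A^TP + PA \prec 0 certifying stability of \dot{x} = Ax. BMIsolver itself handles the harder bilinear (BMI) case via the convex approximations of [1] and [2]:

      % Stability certificate for dx/dt = A*x via a Lyapunov LMI.
      A = [-1 2; 0 -3];                        % an example stable matrix
      P = sdpvar(2, 2, 'symmetric');           % decision variable
      F = [P >= 1e-6*eye(2), A'*P + P*A <= -1e-6*eye(2)];
      opts        = sdpsettings('solver', 'sedumi', 'verbose', 0);
      diagnostics = optimize(F, [], opts);     % feasibility problem (no objective)
      Psol        = value(P);                  % extract the certificate

    (In older YALMIP releases, solvesdp plays the role of optimize.)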

SCPCVX: Sequential Convex Programming – CVX Interface



Useful software tools and codes in optimization


I have been using many open-source and commercial software packages in my research and have found them useful and valuable. Here are some of them.

