Research


Research Grants


  • ONR grant No. N00014-23-1-2588: Optimality Tracking Principle: A Novel and Provable Framework for Optimization and Minimax Problems (May 2023 – April 2026).
  • NSF RTG grant No. DMS-2134107: RTG: Networks: Foundations in Probability, Optimization, and Data Sciences (July 2022 – June 2027). The project website is available at https://tarheels.live/networks/.
  • ONR grant No. N00014-20-1-2088: A Scalable Optimization Framework for Convex Minimization with Hard Constraints (February 2020 – January 2023).
  • NSF grant No. 1619884: Efficient Methods for Large-Scale Self-Concordant Convex Minimization (July 2016 – June 2019).
  • UNC Junior Faculty Development Award (January 2016 – December 2016).

Research interests


General research interests

My main research is on numerical optimization: theory, algorithms, and applications. I currently focus on efficient methods (both deterministic and stochastic) for convex and nonconvex optimization, with applications in signal/image processing, statistics, engineering, machine learning, and data science. I am also working on proximal interior-point methods for convex constrained programming, with applications in conic programming and SDP relaxations.

Previously, I worked on equilibrium problems and variational inequalities; sequential convex programming (SCP) for nonlinear optimization, with applications in model predictive control, optimal control, and static output feedback control; and first- and second-order decomposition methods for large-scale convex optimization.

Below are some current research projects:

  • Hybrid Stochastic Optimization Methods for Large-Scale Nonconvex Optimization

Large-scale nonconvex optimization is a core step in many machine learning and statistical learning applications, including deep learning, reinforcement learning, and Bayesian optimization. Because these problems are high-dimensional and oracle evaluations are expensive, e.g., in the big-data regime, stochastic gradient descent (SGD) algorithms are often employed to solve them. Recently, there have been at least two leading research trends in the development of SGD algorithms. The first essentially relies on the classical SGD scheme, applied to specific classes of problems and further exploiting problem structure to obtain better performance. The second trend is based on variance-reduction techniques. While variance-reduction methods often have better oracle complexity bounds than SGD-based methods, their practical performance, e.g., on neural-network training models, is often unsatisfactory. This observation suggests a gap between the theoretical guarantees and the practical performance of SGD methods. In practice, many algorithms implemented for deep learning come from the first trend but rely heavily on adaptive strategies and momentum techniques to improve and stabilize performance.

This research topic studies both the theoretical and practical aspects of stochastic gradient methods, both standard and variance-reduced, relying on recent universal concepts such as relative smoothness and strong convexity. It also concentrates on developing a new approach, called hybrid stochastic optimization methods, to deal with large-scale nonconvex optimization problems; a rough sketch of the idea is given below. In addition to theoretical and algorithmic development, implementation and applications in machine learning will also be considered.
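As an illustration only (the precise estimators and parameter choices studied in this project may differ), one natural way to hybridize the two trends is to take a convex combination of a SARAH-type recursive, variance-reduced estimator and a standard unbiased stochastic gradient:

\[
v_t := \beta_t\bigl(v_{t-1} + \nabla f(x_t;\xi_t) - \nabla f(x_{t-1};\xi_t)\bigr) + (1-\beta_t)\,\nabla f(x_t;\zeta_t),
\qquad
x_{t+1} := x_t - \eta_t v_t,
\]

where ξ_t and ζ_t are independent samples, the weight β_t ∈ [0, 1] trades off variance reduction against the bias of the recursive term, and η_t is a step size; these symbols are illustrative notation rather than the project's fixed choices.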

  • Optimization Involving Self-Concordant Structures

Self-concordance is a powerful mathematical concept introduced by Y. Nesterov and A. Nemirovskii in the 1990s and widely used in interior-point methods. Recently, it has been recognized that the class of self-concordant functions also covers many important problems in statistics and machine learning, such as graphical learning, Poisson image processing, regularized logistic regression, and distance-weighted discrimination. This research topic focuses on studying the self-concordance concept and its generalizations to broader classes of functions. It also investigates theoretical foundations beyond self-concordance and develops novel solution methods for both convex and nonconvex optimization problems involving generalized self-concordant structures, where existing methods may lack theoretical guarantees or become inefficient. It also seeks new applications in domains where such a generalized self-concordance theory applies.
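For reference, the classical definition reads as follows: a convex, three-times continuously differentiable function f is (standard) self-concordant if, for every x in its domain and every direction u,

\[
\bigl|\varphi'''(t)\bigr| \;\le\; 2\,\varphi''(t)^{3/2},
\qquad
\varphi(t) := f(x + t u).
\]

One generalization studied in the literature (the exact variant pursued in this project may differ) replaces the constant and the exponent by parameters M_f ≥ 0 and ν > 0:

\[
\bigl|\langle \nabla^3 f(x)[u]u,\, u\rangle\bigr| \;\le\; M_f\,\langle \nabla^2 f(x)u,\, u\rangle^{\nu/2}\,\|u\|^{3-\nu},
\]

which recovers the standard definition when ν = 3 and M_f = 2.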

  • Nonstationary First-Order Methods for Large-Scale Convex Optimization

Convex optimization with complex structures, such as linear and nonlinear constraints and max (saddle-point) structures, remains challenging to solve in large-scale settings. These problems arise in almost every application domain; representative examples include network optimization, distributed optimization, transportation, basis pursuit, and portfolio optimization. State-of-the-art first-order methods for solving them include [accelerated] gradient descent-based, primal-dual, augmented Lagrangian (e.g., the alternating direction method of multipliers (ADMM)), coordinate descent, and stochastic gradient descent-based methods. While these methods have been widely used in practice and have shown great performance, their theoretical behavior in constrained and min-max settings is not well understood, especially for adaptive algorithmic variants. The objective of this topic is to develop a class of nonstationary first-order methods in which the algorithmic parameters are updated adaptively under the guidance of the theoretical analysis; a standard template and baseline scheme are recalled below. Such new algorithms are expected to have better theoretical guarantees than existing methods. The topic also focuses on understanding the gap between practice and theory in existing methods such as primal-dual schemes, ADMM, and their variants. In addition, it aims at customizing the proposed methods to concrete applications and at their practical implementation.
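For context only (the problem classes and parameter rules studied in this project are more general), a representative template is

\[
\min_{x,\,z} \; f(x) + g(z)
\quad \text{subject to} \quad
Ax + Bz = c,
\]

and the standard ADMM baseline, built on the augmented Lagrangian L_ρ(x, z, y) := f(x) + g(z) + ⟨y, Ax + Bz − c⟩ + (ρ/2)‖Ax + Bz − c‖², iterates

\[
x^{k+1} := \arg\min_x L_\rho(x, z^k, y^k),
\qquad
z^{k+1} := \arg\min_z L_\rho(x^{k+1}, z, y^k),
\qquad
y^{k+1} := y^k + \rho\,(Ax^{k+1} + Bz^{k+1} - c).
\]

In this scheme the penalty parameter ρ (and the other algorithmic parameters) is fixed; a nonstationary variant, in the sense described above, would let such parameters vary with the iteration counter k according to the analysis.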


Publications


My publications and working papers can be found HERE.

