Prerequisites
STOR 415, or courses in linear algebra and advanced calculus, or their equivalent. If you do not have one of these prerequisites, please discuss with the instructor to obtain permission.
Topics
- Mathematical formulations of optimization
  - Basic concepts: decision variables, objective, constraints, feasible solutions, feasible set, optimal value, and optimal solutions
  - Representative examples (old and new): production planning, network flows, optimal transport, least absolute deviations (LAD), and Wasserstein barycenter
  - Forms of LPs and preprocessing (a small formulation sketch follows this group)
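To make the LP vocabulary above concrete, here is a minimal sketch of a production-planning LP solved with SciPy's `linprog`. All of the data (profits, machine-hour usage, capacities) are invented for illustration and are not part of the course material.

```python
# A minimal production-planning LP; the data are illustrative.
import numpy as np
from scipy.optimize import linprog

# maximize 3*x1 + 5*x2 subject to resource limits and x >= 0;
# linprog minimizes, so we negate the objective.
c = np.array([-3.0, -5.0])
A_ub = np.array([[1.0, 0.0],   # machine-hours for product 1
                 [0.0, 2.0],   # machine-hours for product 2
                 [3.0, 2.0]])  # labor-hours shared by both products
b_ub = np.array([4.0, 12.0, 18.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print("optimal solution:", res.x)   # decision variables
print("optimal value:", -res.fun)   # undo the sign flip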
- Mathematical tools from convex analysis
  - Convex sets, convex hulls, polyhedra, examples, and basic properties
  - Convex functions: definitions, examples, and basic properties
  - Geometry of polyhedra and solution structures of LPs: basic feasible solutions, optimal solutions, and existence of solutions (see the enumeration sketch below)
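The notion of a basic feasible solution can be previewed directly: for a standard-form polyhedron {x : Ax = b, x >= 0} with A of full row rank m, each nonsingular choice of m columns of A gives a basic solution, and the feasible ones are the BFSs. Below is a brute-force sketch on made-up data.

```python
# Enumerating basic feasible solutions of {x : Ax = b, x >= 0};
# the system below is a small illustrative example.
import itertools
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])  # 2 x 4, rank 2
b = np.array([4.0, 6.0])
m, n = A.shape

for basis in itertools.combinations(range(n), m):
    B = A[:, basis]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                      # columns do not form a basis
    x_B = np.linalg.solve(B, b)
    if np.all(x_B >= -1e-12):         # basic solution is feasible
        x = np.zeros(n)
        x[list(basis)] = x_B
        print("basis", basis, "-> BFS", x)
```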
- Linear programming
  - Mathematics of simplex methods: matrix form, tableau form, and examples
  - Two-phase simplex methods, finiteness, and complexity
  - Software for LPs: modeling software and LP solvers
  - Dual problem of an LP and its construction
  - Weak duality, strong duality, and complementary slackness (illustrated numerically after this group)
  - Sensitivity analysis, robust LPs, and applications (if time permits)
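The duality topics can be previewed numerically: solve a small primal LP and its dual and observe that the two optimal values coincide (strong duality). This sketch again uses SciPy's `linprog` on illustrative data.

```python
# Primal: min c^T x  s.t.  A x >= b, x >= 0.
# Dual:   max b^T y  s.t.  A^T y <= c, y >= 0.
# The data are made up for illustration.
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 5.0])

# Primal (convert A x >= b into -A x <= -b for linprog).
primal = linprog(c, A_ub=-A, b_ub=-b,
                 bounds=[(0, None)] * 2, method="highs")
# Dual (convert max b^T y into min -b^T y).
dual = linprog(-b, A_ub=A.T, b_ub=c,
               bounds=[(0, None)] * 2, method="highs")

print("primal value:", primal.fun)  # equals the dual value at optimality
print("dual value:  ", -dual.fun)   # strong duality for feasible, bounded LPs
```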
- Quadratic programming (QP)
  - Mathematical formulations of convex QPs
  - Examples (e.g., SVM, linear optimal control, and portfolio optimization)
  - Unconstrained QPs and solution methods
  - Constrained QPs, KKT conditions, and duality
  - Solution methods for general QPs: projected gradient and active-set methods (see the projected-gradient sketch below)
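As a preview of the projected-gradient method named above, here is a minimal sketch for a convex QP with nonnegativity constraints, min (1/2) x^T Q x + q^T x s.t. x >= 0, using step size 1/L with L the largest eigenvalue of Q. The problem data are randomly generated for illustration.

```python
# Projected gradient descent for a convex QP with x >= 0; data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
Q = M.T @ M + np.eye(5)           # symmetric positive definite
q = rng.standard_normal(5)

L = np.linalg.eigvalsh(Q).max()   # Lipschitz constant of the gradient
x = np.zeros(5)
for _ in range(500):
    grad = Q @ x + q
    x = np.maximum(x - grad / L, 0.0)  # gradient step, then project onto x >= 0

print("approximate solution:", x)
print("objective:", 0.5 * x @ Q @ x + q @ x)
```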
- Unconstrained nonlinear optimization
  - Mathematical models and composite problems
  - Representative examples (e.g., empirical risk minimization, LASSO, logistic regression, and nonnegative matrix factorization)
  - Subdifferentials, optimality conditions, proximal operators, and projection
  - Gradient descent methods and convergence analysis
  - Accelerated gradient descent method and its variants
  - Proximal/projection operator calculations, proximal gradient methods, and accelerated proximal gradient methods (a proximal-gradient sketch follows this group)
  - Implementation aspects: reducing per-iteration complexity, line search, and restarting techniques
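The proximal gradient method listed above is concrete enough to sketch: for the LASSO problem min (1/2)||Ax - b||^2 + lam*||x||_1, the proximal operator of the l1 term is soft-thresholding, which gives the classical ISTA iteration. The data below are synthetic.

```python
# ISTA (proximal gradient) for LASSO on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]           # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(50)
lam = 0.5

L = np.linalg.eigvalsh(A.T @ A).max()   # Lipschitz constant of the smooth part
x = np.zeros(20)
for _ in range(1000):
    grad = A.T @ (A @ x - b)            # gradient of (1/2)||Ax - b||^2
    z = x - grad / L                    # forward (gradient) step
    # prox of (lam/L)*||.||_1 is componentwise soft-thresholding
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print("nonzeros recovered:", np.flatnonzero(np.abs(x) > 1e-3))
```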
- Stochastic optimization methods
  - Basic stochastic gradient descent (SGD) methods
  - Theoretical convergence guarantees of SGD
  - Mini-batch and variance-reduction methods
  - Implementation and applications of SGD and its variants (a mini-batch SGD sketch follows this group)
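Finally, a minimal sketch of mini-batch SGD on a least-squares objective, min (1/2n)||Ax - b||^2. The batch size, step-size schedule, and data are all illustrative choices, not prescriptions from the course.

```python
# Mini-batch SGD for least squares on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n, d = 1000, 10
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)

x = np.zeros(d)
batch = 32
step = 0.1
for epoch in range(20):
    perm = rng.permutation(n)           # reshuffle each epoch
    for start in range(0, n, batch):
        idx = perm[start:start + batch]
        # stochastic gradient from the current mini-batch
        grad = A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)
        x -= step * grad
    step *= 0.9                         # simple diminishing step size

print("distance to x_true:", np.linalg.norm(x - x_true))
```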