66 changes: 8 additions & 58 deletions class02/overview.md

**Topic:** Numerical optimization for control (gradient/SQP/QP); ALM vs. interior-point vs. penalty methods

**Pluto notebook for the whole chapter:** the complete chapter is available here: [final chapter](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/class02.html)

---

## Overview

This class covers the fundamental numerical optimization techniques essential for optimal control problems. We explore gradient-based methods, Sequential Quadratic Programming (SQP), and various approaches to handling constraints including Augmented Lagrangian Methods (ALM), interior-point methods, and penalty methods.
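To make the penalty idea concrete before diving into the notebooks, here is a minimal quadratic-penalty sketch. It is in Python for illustration (the class materials use Julia), and the toy problem, function name, and penalty schedule are all assumptions of this sketch:

```python
import numpy as np

# Quadratic-penalty sketch for:  min x1^2 + x2^2  s.t.  x1 + x2 = 1.
# (Toy problem chosen for illustration; the exact minimizer is (0.5, 0.5).)
# Each penalty subproblem  min f(x) + (rho/2)*||Ax - b||^2  is itself a
# quadratic, so we solve its stationarity condition directly.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def penalty_minimizer(rho):
    # Stationarity: 2x + rho * A^T (Ax - b) = 0
    H = 2.0 * np.eye(2) + rho * A.T @ A
    return np.linalg.solve(H, rho * A.T @ b)

for rho in [1.0, 10.0, 100.0, 1000.0]:
    x = penalty_minimizer(rho)
    print(rho, x, abs(A @ x - b))  # constraint violation shrinks like O(1/rho)
```

As rho grows, the subproblem minimizer approaches the constrained optimum but the subproblem becomes increasingly ill-conditioned; this trade-off is one motivation for the augmented Lagrangian and interior-point alternatives covered below.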

## Learning Objectives

By the end of this class, students will be able to:

- Understand the mathematical foundations of gradient-based optimization
- Implement Newton's method for unconstrained minimization
- Apply root-finding techniques for implicit integration schemes
- Solve equality-constrained optimization problems using Lagrange multipliers
- Compare and contrast different constraint handling methods (ALM, interior-point, penalty)
- Implement Sequential Quadratic Programming (SQP) for nonlinear optimization
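As a concrete instance of the second objective, here is a minimal Newton iteration for unconstrained minimization. It is a Python sketch (the class implementations are in Julia), and the toy objective f(x) = e^x - 2x, with minimizer x* = ln 2, is an assumption chosen for illustration:

```python
import math

# Newton's method for unconstrained minimization on f(x) = exp(x) - 2x.
# Each step minimizes the local quadratic model:  x <- x - f'(x)/f''(x).
def newton_minimize(x, iters=10):
    for _ in range(iters):
        grad = math.exp(x) - 2.0  # f'(x)
        hess = math.exp(x)        # f''(x) > 0 here, so no regularization needed
        x -= grad / hess
    return x

x_star = newton_minimize(0.0)
print(x_star)  # ≈ ln 2 ≈ 0.6931
```

On nonconvex objectives the Hessian can fail to be positive definite, which is where the regularization and line-search techniques from Part 1b come in.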

## Prerequisites

- Solid understanding of linear algebra and calculus
- Familiarity with Julia programming
- Basic knowledge of differential equations
- Understanding of optimization concepts from Class 1

## Materials

### Interactive Notebooks

The Pluto (Julia) notebook for the final chapter can be found here: [final chapter](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/class02.html)

Although the main code for the Julia demos is contained in the Pluto notebook above, the following four interactive notebooks, which build on each other, are the demos used in the class recording and presentation:

1. **[Part 1a: Root Finding & Backward Euler](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/part1_root_finding.html)**
- Root-finding algorithms for implicit integration
- Fixed-point iteration vs. Newton's method
2. **Part 1b: Newton's Method for Minimization**
- Unconstrained optimization fundamentals
- Newton's method for minimization
- Hessian matrix and positive definiteness
- Regularization and line search techniques
- Practical implementation with Julia

3. **[Part 2: Equality Constraints](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/part2_eq_constraints.html)**
- Lagrange multiplier theory
- KKT conditions for equality constraints
- Quadratic programming with equality constraints
- Visualization of constrained optimization landscapes
- Practical implementation examples

4. **[Part 3: Interior-Point Methods](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/part3_ipm.html)**
- Inequality constraint handling
- Barrier methods and log-barrier functions
- Interior-point algorithm implementation
- Comparison with penalty methods
- Convergence properties and practical considerations
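The log-barrier idea from Part 3 can be sketched in a few lines. This is a Python illustration, not the notebook's implementation: the one-dimensional toy problem (minimize x subject to x >= 1), the barrier schedule, and the crude step-halving safeguard are all assumptions of this sketch:

```python
import math

# Log-barrier sketch: minimize f(x) = x subject to x >= 1 by solving
# min x - mu*log(x - 1) for a decreasing sequence of mu.  Here the barrier
# subproblem has the closed form x(mu) = 1 + mu, but we solve it with
# Newton steps to mimic a path-following loop.
def barrier_newton(x, mu, iters=20):
    for _ in range(iters):
        grad = 1.0 - mu / (x - 1.0)
        hess = mu / (x - 1.0) ** 2
        step = grad / hess
        # Damp the step so the iterate stays strictly feasible (x > 1).
        while x - step <= 1.0:
            step *= 0.5
        x -= step
    return x

x = 2.0  # strictly feasible starting point
for mu in [1.0, 0.1, 0.01, 0.001]:
    x = barrier_newton(x, mu)
    print(mu, x)  # x tracks the central path x(mu) = 1 + mu
```

Each outer iteration warm-starts the next, smaller-mu subproblem from the previous solution; this is the path-following structure that production interior-point solvers refine.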

### Additional Resources

- **[Lecture Slides (PDF)](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/ISYE_8803___Lecture_2___Slides.pdf)** - Complete slide deck from the presentation
- **[Demo Script](https://learningtooptimize.github.io/LearningToControlClass/dev/class02/penalty_barrier_demo.py)** - Python demonstration of penalty vs. barrier methods

## Key Concepts Covered

### Mathematical Foundations
- **Gradient and Hessian**: Understanding first and second derivatives in optimization
- **Newton's Method**: Quadratic convergence and implementation details
- **KKT Conditions**: Necessary and sufficient conditions for optimality
- **Duality Theory**: Lagrange multipliers and dual problems
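For equality-constrained quadratic programs, the Lagrange-multiplier conditions reduce the whole problem to one linear solve. A minimal Python sketch (the function name and toy data are illustrative):

```python
import numpy as np

# Equality-constrained QP via its KKT system:
#   min 1/2 x^T Q x + q^T x   s.t.  A x = b
# Stationarity and primal feasibility stack into one linear system:
#   [Q  A^T] [x     ]   [-q]
#   [A   0 ] [lambda] = [ b]
def solve_eq_qp(Q, q, A, b):
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-q, b]))
    return sol[:n], sol[n:]  # primal x, multipliers lambda

# Toy instance:  min x1^2 + x2^2  s.t.  x1 + x2 = 1  ->  x* = (0.5, 0.5)
Q = 2.0 * np.eye(2)
q = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, lam = solve_eq_qp(Q, q, A, b)
print(x, lam)  # x ≈ [0.5, 0.5], lambda ≈ [-1.0]
```

This KKT solve is the inner building block of SQP, which repeatedly solves such a system built from local quadratic models of a nonlinear problem.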

### Numerical Methods
- **Root Finding**: Fixed-point iteration, Newton-Raphson method
- **Implicit Integration**: Backward Euler for stiff ODEs
- **Sequential Quadratic Programming**: Local quadratic approximations
- **Interior-Point Methods**: Barrier functions and path-following
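Root finding and implicit integration meet in backward Euler: each time step solves an implicit residual equation with Newton's method. A minimal Python sketch on the linear test equation (the rate constant, step size, and tolerance are illustrative; the Newton loop has the same structure for nonlinear dynamics):

```python
import math

# Backward Euler for the stiff test ODE y' = -lam*y.  Each step finds the
# root of the implicit residual  g(z) = z - y_n - h*f(z)  with Newton.
lam = 100.0

def f(y):    return -lam * y
def dfdy(y): return -lam

def backward_euler_step(y_n, h, iters=20, tol=1e-12):
    z = y_n  # initial guess: previous state
    for _ in range(iters):
        g = z - y_n - h * f(z)        # implicit residual
        if abs(g) < tol:
            break
        z -= g / (1.0 - h * dfdy(z))  # Newton update
    return z

y, h = 1.0, 0.1  # h*lam = 10: forward Euler would blow up at this step size
for _ in range(10):
    y = backward_euler_step(y, h)
print(y)  # decays monotonically toward 0, as the true solution does
```

The stability at h*lam = 10 is the point: the implicit solve buys unconditional stability on this test problem at the cost of a root-finding problem per step.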

### Constraint Handling
- **Equality Constraints**: Lagrange multipliers and null-space methods
- **Inequality Constraints**: Active set methods and interior-point approaches
- **Penalty Methods**: Quadratic and exact penalty functions
- **Augmented Lagrangian**: Combining penalty and multiplier methods
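The augmented Lagrangian combines the penalty term with a running multiplier estimate. A minimal Python sketch (the toy problem, fixed rho, and iteration counts are assumptions chosen for illustration):

```python
import numpy as np

# Augmented-Lagrangian sketch on the toy problem
#   min x1^2 + x2^2  s.t.  x1 + x2 = 1   (minimizer (0.5, 0.5), lambda* = -1).
# Unlike the pure penalty method, the multiplier update lets a *fixed*,
# moderate rho drive the constraint violation to zero.
Q = 2.0 * np.eye(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def alm(rho=10.0, outer=20):
    lam = np.zeros(1)
    for _ in range(outer):
        # Inner subproblem min_x 1/2 x^T Q x + lam^T(Ax-b) + rho/2 ||Ax-b||^2
        # is quadratic here, so it reduces to one linear solve.
        H = Q + rho * A.T @ A
        x = np.linalg.solve(H, rho * A.T @ b - A.T @ lam)
        lam = lam + rho * (A @ x - b)  # first-order multiplier update
    return x, lam

x, lam = alm()
print(x, lam)  # x ≈ [0.5, 0.5], lam ≈ [-1.0]
```

On this problem the multiplier error contracts by a fixed factor per outer iteration, so the constraint is satisfied to high accuracy without ever sending rho to infinity.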



---

*For questions or clarifications, please reach out to Arnaud Deza at adeza3@gatech.edu*