IPM-LSTM: A Learning-Based Interior Point Method for Solving Nonlinear Programs

This repository is an implementation of the NeurIPS 2024 paper: IPM-LSTM: A Learning-Based Interior Point Method for Solving Nonlinear Programs.

Introduction

Constrained Nonlinear Programs (NLPs) are a class of mathematical optimization problems in which the objective function, the constraints, or both are nonlinear. The versatility of NLPs allows for their application across a wide array of domains, including power systems, robotics, and wireless communication networks. Our work focuses on solving the following NLP:

$$ \begin{aligned} & \underset{x \in \mathbb{R}^{n}} {\text{min}} && f(x) \\ & \text{s.t.} && h(x) = 0 \\ & && x \geq 0 \end{aligned} $$

where the functions $f:\mathbb{R}^n\rightarrow \mathbb{R}$ and $h:\mathbb{R}^n\rightarrow \mathbb{R}^m$ are assumed to be twice continuously differentiable. Problems with general nonlinear inequality constraints can be reformulated in the above form by introducing slack variables. The primal-dual Interior Point Method (IPM) is a preeminent algorithm for addressing NLPs. However, a notable computational bottleneck arises when solving the linear systems at each iteration, which requires a matrix decomposition with runtime complexity $\mathcal{O}(n^3)$. In our paper, we propose integrating Long Short-Term Memory (LSTM) neural networks to approximately solve these linear systems within IPMs, introducing a novel approach named IPM-LSTM. Our model is trained in an unsupervised manner:
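The bottleneck above can be made concrete with a minimal sketch (illustrative only, not the repository's code): each primal-dual IPM iteration solves a linear system $Jy = -F$, where $F$ stacks the KKT residuals and $J$ is the corresponding Jacobian. The matrices below are random stand-ins.

```python
import numpy as np

# Illustrative stand-in for one IPM iteration's linear system J y = -F.
# A dense direct solve factorizes J at O(n^3) cost; IPM-LSTM instead
# produces approximate solutions y with a learned iterative process.
rng = np.random.default_rng(0)
n = 200
J = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned stand-in
F = rng.standard_normal(n)

y = np.linalg.solve(J, -F)            # O(n^3) direct factorization
residual = np.linalg.norm(J @ y + F)  # the quantity the LSTM is trained to drive to zero
```

The residual $\|Jy + F\|$ computed at the end is exactly the term that appears inside the training objective below.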

$$ \begin{aligned} & \min_{\theta}\frac{1}{|\mathcal{M}|} \sum_{M \in \mathcal{M}} \left( \frac{1}{K} \sum_{k=1}^{K}\frac{1}{T}\sum_{t=1}^{T} \frac{1}{2}\left\|J^k y^k_t(\theta) + F^k\right\|^{2} \right)_M, \end{aligned} $$

where $\{(y^1_1,\ldots, y^1_T), \ldots, (y^K_1,\ldots, y^K_T)\}$ denotes the sequence of iterates generated by IPM-LSTM, and $\mathcal{M}$ represents a dataset of NLPs. The approximate solutions generated by IPM-LSTM exhibit higher feasibility and optimality, and can be used to warm-start IPM solvers such as IPOPT.
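The objective can be sketched as follows. This is a hypothetical NumPy illustration of the averaging structure only; the function name and data layout are assumptions, not the repository's API.

```python
import numpy as np

def ipm_lstm_loss(problems):
    """Sketch of the unsupervised objective (illustrative names).

    problems: list of tuples (J_list, F_list, y_lists), one per NLP
    instance M, where J_list[k], F_list[k] define the linear system at
    IPM iteration k and y_lists[k][t] is the LSTM iterate at time step t.
    """
    total = 0.0
    for J_list, F_list, y_lists in problems:
        K = len(J_list)
        per_problem = 0.0
        for k in range(K):
            T = len(y_lists[k])
            # Mean over T time steps of (1/2)||J^k y^k_t + F^k||^2
            per_problem += sum(
                0.5 * np.linalg.norm(J_list[k] @ y + F_list[k]) ** 2
                for y in y_lists[k]
            ) / T
        total += per_problem / K       # mean over K IPM iterations
    return total / len(problems)       # mean over the dataset M
```

In the actual model the iterates $y^k_t(\theta)$ are produced by the LSTM and the loss is backpropagated through them; the sketch only shows how the residuals are averaged.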

Software dependencies

python=3.10
pytorch=1.13.1 
cyipopt=1.3.0

Running experiments

You can use the following command to generate a Convex Quadratic Programming (QP) dataset (RHS):

python generate_data.py --config ./configs/Generate_Data.yaml --prob_type Convex_QP_RHS

For the training phase, run the following command:

python main.py --config ./configs/QP.yaml --prob_type QP_RHS

Our model will be stored at ./results/lstm/params/QP_RHS_100_50_50_100_50.pth. During the testing phase, you can run the following command to obtain our experimental results:

python main.py --config ./configs/QP.yaml --prob_type QP_RHS --test --solver ipopt --save_sol

You can use the following command to plot the figures in our paper:

python plot.py --config ./configs/QP.yaml --prob_type QP_RHS --plot_type Objective_values

Citing our work

If you would like to utilize IPM-LSTM in your research, we kindly request that you cite our paper as follows:

@article{gao2024ipm,
  title={IPM-LSTM: A Learning-Based Interior Point Method for Solving Nonlinear Programs},
  author={Gao, Xi and Xiong, Jinxin and Wang, Akang and Xue, Jiang and Shi, Qingjiang and others},
  journal={Advances in Neural Information Processing Systems},
  volume={37},
  pages={122891--122916},
  year={2024}
}
