A new gradient-based neural network for solving linear and quadratic programming problems
Date Issued
2001
ISSN
1045-9227
Citation
IEEE Transactions on Neural Networks, 2001, Vol. 12 (5), pp. 1074-1083
Type
Peer Reviewed Journal Article
Abstract
In this paper, a new gradient-based neural network is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) so that E(x, y) is convex and differentiable and the resulting network is more efficient. The network incorporates all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming (LP) and quadratic programming (QP) problems with either a unique solution or an infinite number of solutions, it is strictly proven that, from any initial point, every trajectory of the neural network converges to an optimal solution of the QP and its dual problem. The proposed network differs from existing networks based on the penalty method or the Lagrange method, and it handles the inequality (including nonnegativity) constraints properly. The theory of the proposed network is rigorous and its performance is much better. Simulation results also show that the proposed neural network is feasible and efficient.
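
Note: the paper's specific energy function E(x, y) and auxiliary function F(x, y) are not reproduced in this record. As a rough illustration of the general idea only (a continuous-time gradient flow whose trajectories settle at a primal-dual optimal pair of a convex QP), the sketch below integrates a standard projected primal-dual gradient flow of Arrow-Hurwicz type. The problem data Q, c, A, b, the dynamics, the step size, and the iteration count are all illustrative assumptions, not the paper's construction.

import numpy as np

# Hypothetical example QP (not from the paper):
#   minimize   0.5 * x^T Q x + c^T x
#   subject to A x <= b
Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # positive definite, so the QP is convex
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def dynamics(x, y):
    """Projected primal-dual gradient flow on the Lagrangian (illustrative stand-in).

    dx/dt = -(Qx + c + A^T y)          gradient descent in the primal variable
    dy/dt = max(0, y + (Ax - b)) - y   projected gradient ascent in the dual,
                                       keeping the multiplier y nonnegative
    """
    dx = -(Q @ x + c + A.T @ y)
    dy = np.maximum(0.0, y + (A @ x - b)) - y
    return dx, dy

# Forward-Euler integration of one trajectory from an arbitrary initial point.
x, y = np.zeros(2), np.zeros(1)
h = 0.01                                 # step size for the Euler discretization
for _ in range(20000):
    dx, dy = dynamics(x, y)
    x, y = x + h * dx, y + h * dy

print("primal solution x* ~", x)         # analytic optimum of this toy QP: x* = (0, 1)
print("dual solution   y* ~", y)         # corresponding KKT multiplier: y* = 2

For this toy instance the trajectory settles near the KKT pair x* = (0, 1), y* = 2 regardless of the starting point. The paper's contribution lies elsewhere: constructing a specific convex, differentiable E(x, y) for which such global convergence is proven via Lyapunov stability and the LaSalle invariance principle, which the generic flow above does not capture.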