Similar Literature
20 similar articles found.
1.
Based on an augmented Lagrangian line search function, a sequential quadratically constrained quadratic programming method is proposed for solving nonlinearly constrained optimization problems. Compared with the quadratic programs solved in traditional SQP methods, a convex quadratically constrained quadratic program is solved here to obtain a search direction, and the Maratos effect does not occur without any additional correction. The “active set” strategy used in this subproblem avoids recalculating unnecessary gradients and (approximate) Hessian matrices of the constraints. Under certain assumptions, the proposed method is proved to be globally, superlinearly, and quadratically convergent. As an extension, general problems with both inequality and equality constraints, as well as a nonmonotone line search, are also considered.

2.
Fixed point and Bregman iterative methods for matrix rank minimization
The linearly constrained matrix rank minimization problem is widely applicable in many fields such as control, signal processing and system identification. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization. Although the latter can be cast as a semidefinite programming problem, such an approach is computationally expensive to solve when the matrices are large. In this paper, we propose fixed point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. By using a homotopy approach together with an approximate singular value decomposition procedure, we get a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems (the code can be downloaded from http://www.columbia.edu/~sm2756/FPCA.htm for non-commercial use). Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of 10^-5 in about 3 minutes by sampling only 20% of the elements. We know of no other method that achieves as good recoverability. Numerical experiments on online recommendation, DNA microarray data set and image inpainting problems demonstrate the effectiveness of our algorithms.
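To make the fixed-point idea concrete, below is a minimal sketch (not the authors' FPCA code, which additionally uses continuation and an approximate SVD) of a fixed-point shrinkage iteration for nuclear-norm regularized matrix completion: a gradient step on the observed-entry residual followed by soft-thresholding of the singular values. The step size `tau`, regularization weight `mu`, iteration count and the random rank-2 test matrix are illustrative assumptions.

```python
import numpy as np

def svt_shrink(X, threshold):
    """Soft-threshold the singular values of X (the nuclear-norm prox)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - threshold, 0.0)
    return (U * s) @ Vt

def fixed_point_completion(M_obs, mask, mu=0.1, tau=1.0, iters=300):
    """Fixed-point iteration X <- shrink(X - tau * P_Omega(X - M_obs), tau * mu)."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        grad = mask * (X - M_obs)            # gradient of the data-fit term on observed entries
        X = svt_shrink(X - tau * grad, tau * mu)
    return X

# Small synthetic test: a rank-2 matrix with roughly 40% of entries observed.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
mask = rng.random(M.shape) < 0.4
X_hat = fixed_point_completion(M * mask, mask)
print("relative error:", np.linalg.norm(X_hat - M) / np.linalg.norm(M))
```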

3.
An application in magnetic resonance spectroscopy quantification models a signal as a linear combination of nonlinear functions. It leads to a separable nonlinear least squares fitting problem, with linear bound constraints on some variables. The variable projection (VARPRO) technique can be applied to this problem, but needs to be adapted in several respects. If only the nonlinear variables are subject to constraints, then the Levenberg–Marquardt minimization algorithm that is classically used by the VARPRO method should be replaced with a version that can incorporate those constraints. If some of the linear variables are also constrained, then they cannot be projected out via a closed-form expression as is the case for the classical VARPRO technique. We show how quadratic programming problems can be solved instead, and we provide details on efficient function and approximate Jacobian evaluations for the inequality constrained VARPRO method.
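The following sketch illustrates the variable projection idea on a toy exponential-decay model: for each trial of the nonlinear decay rates, the linear amplitudes are eliminated by a constrained linear least-squares solve (non-negativity via `scipy.optimize.nnls`, standing in for the quadratic programming step the abstract mentions), and the remaining bound-constrained nonlinear problem is handled by `scipy.optimize.least_squares`. The model, data and starting values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares, nnls

# Synthetic data: two exponential decays with positive amplitudes.
t = np.linspace(0.0, 5.0, 200)
y = 2.0 * np.exp(-0.7 * t) + 1.0 * np.exp(-2.5 * t)

def basis(alpha):
    """Columns are the nonlinear basis functions exp(-alpha_j * t)."""
    return np.exp(-np.outer(t, alpha))

def projected_residual(alpha):
    """Eliminate the linear amplitudes (here c >= 0 via NNLS), then return
    the residual as a function of the nonlinear variables only."""
    A = basis(alpha)
    c, _ = nnls(A, y)
    return A @ c - y

# Bound-constrained minimization over the nonlinear decay rates.
res = least_squares(projected_residual, x0=[0.3, 3.0], bounds=(1e-3, 10.0))
alpha_hat = res.x
c_hat, _ = nnls(basis(alpha_hat), y)
print("decay rates:", alpha_hat, "amplitudes:", c_hat)
```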

4.
We present an algorithm for super-scale linearly constrained nonlinear programming (LCNP) based on Newton's method. In large-scale programming, solving the Newton equation at each iteration can be expensive and may not be justified when far from a local solution. For super-scale problems, the truncated Newton method (where an inaccurate solution is computed by using the conjugate-gradient method) is recommended; a diagonal BFGS preconditioning of the gradient is used, so that the number of iterations to solve the equation is reduced. The procedure for updating that preconditioning is described for LCNP when the set of active constraints or the partition of basic, superbasic and nonbasic (structural) variables has changed.
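A minimal sketch of the truncated Newton ingredient: the Newton system is solved only approximately by preconditioned conjugate gradients with a diagonal preconditioner (a plain diagonal of the Hessian here, rather than the diagonal BFGS update described in the paper). The quadratic test problem and the iteration limit are assumptions.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def truncated_newton_step(hess_vec, grad, diag_precond, maxiter=50):
    """Approximately solve H d = -g by preconditioned CG (truncated Newton).
    hess_vec(v) returns H @ v; diag_precond is a positive vector approximating diag(H)."""
    n = grad.size
    H = LinearOperator((n, n), matvec=hess_vec, dtype=float)
    M = LinearOperator((n, n), matvec=lambda v: v / diag_precond, dtype=float)
    d, info = cg(H, -grad, maxiter=maxiter, M=M)   # truncated by maxiter
    return d

# Toy quadratic objective f(x) = 0.5 x^T A x - b^T x.
rng = np.random.default_rng(1)
Q = rng.standard_normal((100, 100))
A = Q.T @ Q + 100.0 * np.eye(100)
b = rng.standard_normal(100)
x = np.zeros(100)
g = A @ x - b
d = truncated_newton_step(lambda v: A @ v, g, np.diag(A))
print("gradient norm after one step:", np.linalg.norm(A @ (x + d) - b))
```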

5.
In this paper, we propose approximate and exact algorithms for the double constrained two-dimensional guillotine cutting stock problem (DCTDC). The approximate algorithm is a two-stage procedure. The first stage attempts to produce a starting feasible solution to DCTDC by solving a single constrained two-dimensional cutting problem, CTDC. If the solution to CTDC is not feasible to DCTDC, the second stage is used to eliminate infeasibility. The exact algorithm is a branch-and-bound that uses efficient lower and upper bounding schemes. It starts with a lower bound reached by the approximate two-stage algorithm. At each internal node of the branching tree, a tailored upper bound is obtained by solving (relaxed) knapsack problems. To speed up the branch-and-bound, we implement, in addition to ordered data structures of lists, symmetry, duplicate, and infeasibility detection strategies which fathom some unnecessary branches. We evaluate the performance of the algorithm on different problem instances which can serve as benchmark problems for the cutting and packing literature.
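As an illustration of the kind of relaxed knapsack bound used at branch-and-bound nodes, the sketch below computes an area-relaxation upper bound: geometry is ignored and a bounded knapsack over piece areas is solved by dynamic programming. The piece data and plate size are made up for the example; the paper's tailored bound is not reproduced.

```python
def knapsack_area_bound(pieces, plate_area):
    """Upper bound on total value obtained by relaxing geometry to a bounded
    knapsack over piece areas.  pieces: list of (width, height, value, max_copies)."""
    dp = [0] * (plate_area + 1)            # dp[a] = best value with total piece area <= a
    for w, h, value, copies in pieces:
        area = w * h
        for _ in range(copies):            # bounded knapsack: treat each copy as a 0/1 item
            for a in range(plate_area, area - 1, -1):
                dp[a] = max(dp[a], dp[a - area] + value)
    return dp[plate_area]

# Small instance: a 10 x 6 plate and three piece types with demand limits.
pieces = [(4, 3, 7, 2), (5, 2, 6, 3), (2, 2, 3, 4)]
print("area-relaxation upper bound:", knapsack_area_bound(pieces, plate_area=10 * 6))
```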

6.
Recently the authors have proposed a homogeneous and self-dual algorithm for solving the monotone complementarity problem (MCP) [5]. The algorithm is a single-phase interior-point type method; nevertheless, it either yields an approximate optimal solution or detects possible infeasibility of the problem. In this paper we specialize the algorithm to the solution of general smooth convex optimization problems, which may also possess nonlinear inequality constraints and free variables. We discuss an implementation of the algorithm for large-scale sparse convex optimization. Moreover, we present computational results for solving quadratically constrained quadratic programming and geometric programming problems, where some of the problems contain more than 100,000 constraints and variables. The results indicate that the proposed algorithm is also practically efficient.

7.
Functional optimization problems can be solved analytically only if special assumptions are satisfied; otherwise, approximations are needed. The approximate method that we propose is based on two steps. First, the decision functions are constrained to take on the structure of linear combinations of basis functions containing free parameters to be optimized (hence, this step can be considered as an extension of the Ritz method, for which fixed basis functions are used). Then, the functional optimization problem can be approximated by nonlinear programming problems. Linear combinations of basis functions are called approximating networks when they benefit from suitable density properties. We term such networks nonlinear (linear) approximating networks if their basis functions contain (do not contain) free parameters. For certain classes of d-variable functions to be approximated, nonlinear approximating networks may require a number of parameters increasing moderately with d, whereas linear approximating networks may be ruled out by the curse of dimensionality. Since the cost functions of the resulting nonlinear programming problems include complex averaging operations, we minimize such functions by stochastic approximation algorithms. As important special cases, we consider stochastic optimal control and estimation problems. Numerical examples show the effectiveness of the method in solving optimization problems stated in high-dimensional settings, involving for instance several tens of state variables.
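The sketch below illustrates the two steps on a one-dimensional toy problem: the decision function is restricted to a nonlinear approximating network (a linear combination of Gaussian basis functions whose weights, centers and widths are all free), and the resulting nonlinear programming problem is minimized by a stochastic approximation (stochastic gradient) loop over sampled points. The target function, network size and step size are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target standing in for the (unknown) optimal decision function on [-3, 3].
def target(x):
    return np.sin(2.0 * x) * np.exp(-0.2 * x ** 2)

# Nonlinear approximating network: u(x) = sum_j c_j * exp(-(x - m_j)^2 / (2 s_j^2)),
# with weights c, centers m and widths s all free parameters.
J = 8
c = 0.1 * rng.standard_normal(J)
m = np.linspace(-3.0, 3.0, J)
s = np.full(J, 0.8)

lr = 0.02
for _ in range(30000):                      # stochastic approximation over sampled points
    x = rng.uniform(-3.0, 3.0)
    phi = np.exp(-(x - m) ** 2 / (2.0 * s ** 2))
    err = float(c @ phi - target(x))
    grad_c = 2.0 * err * phi                # gradients of the sampled squared error
    grad_m = 2.0 * err * c * phi * (x - m) / s ** 2
    grad_s = 2.0 * err * c * phi * (x - m) ** 2 / s ** 3
    c -= lr * grad_c
    m -= lr * grad_m
    s = np.maximum(s - lr * grad_s, 0.1)    # keep widths bounded away from zero

xs = np.linspace(-3.0, 3.0, 200)
U = np.exp(-(xs[:, None] - m) ** 2 / (2.0 * s ** 2)) @ c
print("max approximation error:", np.max(np.abs(U - target(xs))))
```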

8.
Optimization, 2012, 61(1-2): 93-120
In a continuous approach we propose an efficient method for globally solving linearly constrained quadratic zero-one programming, considered as a d.c. (difference of convex functions) program. A combination of the d.c. optimization algorithm (DCA), which has finite convergence, and the branch-and-bound scheme was studied. We use rectangular bisection in the branching procedure, while the bounding procedure proceeds by applying d.c. algorithms from a current best feasible point (for the upper bound) and by minimizing a well-tightened convex underestimation of the objective function on the current subdivided domain (for the lower bound). DCA generates a sequence of points in the vertex set of a new polytope containing the feasible domain of the problem being considered. Moreover, if an iterate is integral then all following iterates are integral too. Our combined algorithm therefore quite often converges to an integer approximate solution. Finally, we present computational results on several test problems with up to 1800 variables, which demonstrate the efficiency of our method, in particular for linear zero-one programming.

9.
A programming problem with multiple constraints is first transformed into a programming problem with a single constraint. Using this single-constraint problem, several convexification and concavification methods are then proposed for the original multi-constraint programming problem, so that these multi-constraint problems can be converted into concave programming or reverse convex programming problems. Finally, it is shown that the global optimal solutions of the resulting concave programs and reverse convex programs are approximate global optimal solutions of the original problem.

10.
An RQP method based on the augmented Lagrangian function
王秀国  薛毅 《计算数学》2003,25(4):393-406
Recursive quadratic programming is a family of techniques developed by Bartholomew-Biggs and other authors for solving nonlinear programming problems. This paper describes a new method for constrained optimization which obtains its search directions from a quadratic programming subproblem based on the well-known augmented Lagrangian function. It avoids the need for the penalty parameter to tend to infinity. We employ Fletcher's exact penalty function as a merit function, together with an approximate directional derivative of that function which avoids the need to evaluate the second-order derivatives of the problem functions. We prove that the algorithm possesses global and superlinear convergence properties. Numerical results are also reported.
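For orientation, here is a minimal augmented Lagrangian loop on a toy equality-constrained problem: an inner unconstrained minimization (via `scipy.optimize.minimize`) followed by a first-order multiplier update and only moderate penalty growth. It is a generic sketch, not the paper's RQP method with Fletcher's exact penalty merit function; the toy problem and parameters are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize f(x) = (x1 - 1)^2 + (x2 - 2.5)^2  s.t.  c(x) = x1 + x2 - 2 = 0.
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

def c(x):
    return x[0] + x[1] - 2.0

def aug_lagrangian(x, lam, rho):
    return f(x) + lam * c(x) + 0.5 * rho * c(x) ** 2

x, lam, rho = np.zeros(2), 0.0, 1.0
for _ in range(15):
    x = minimize(aug_lagrangian, x, args=(lam, rho), method="BFGS").x
    lam += rho * c(x)                  # first-order multiplier update
    rho = min(2.0 * rho, 1e4)          # moderate growth; rho need not tend to infinity
print("solution:", x, "multiplier:", lam)   # solution approaches (0.25, 1.75)
```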

11.
Consider the class of linear-quadratic (LQ) optimal control problems with continuous linear state constraints, that is, constraints imposed at every instant of the time horizon. This class of problems is known to be difficult to solve numerically. In this paper, a computational method based on a semi-infinite programming approach is given. The LQ optimal control problem is formulated as a positive-quadratic infinite programming problem. This can be done by considering the control as the decision variable, while taking the state as a function of the control. After parametrizing the decision variable, an approximate quadratic semi-infinite programming problem is obtained. It is shown that, as we refine the parametrization, the solution sequence of the approximate problems converges to the solution of the infinite programming problem (hence, to the solution of the original optimal control problem). Numerically, the semi-infinite programming problems obtained above can be solved efficiently using an algorithm based on a dual parametrization method.

12.
Application of parallel techniques in a dual algorithm for constrained convex programming problems
Rosen's (1961) projected gradient method is used to solve the dual of a constrained convex programming problem. Computing the projected gradient direction involves finding the optimal solution of a minimization problem with respect to the primal variables. We use the parallel gradient distribution (PGD) algorithm to compute an approximate solution of this minimization problem, prove that the approximate solution can reach any given accuracy, and show that Rosen's method remains convergent when the accuracy is chosen appropriately.

13.
A dynamic programming method is presented for solving constrained, discrete-time, optimal control problems. The method is based on an efficient algorithm for solving the subproblems of sequential quadratic programming. By using an interior-point method to accommodate inequality constraints, a modification of an existing algorithm for equality constrained problems can be used iteratively to solve the subproblems. Two test problems and two application problems are presented. The application examples include a rest-to-rest maneuver of a flexible structure and a constrained brachistochrone problem.

14.
This paper deals with the design of linear-phase finite impulse response (FIR) digital filters using weighted peak-constrained least-squares (PCLS) optimization. The PCLS error design problem is formulated as a quadratically constrained quadratic semi-infinite programming problem. An exchange algorithm with a new exchange rule is proposed to solve the problem. The algorithm provides the approximate optimal solution after a finite number of iterations. In particular, the subproblem solved at each iteration is a quadratically constrained quadratic program. We can rewrite it as a conic optimization problem solvable in polynomial time. For illustration, numerical examples are solved using the proposed algorithm.

15.
This article studies a numerical solution method for a special class of continuous-time linear programming problems denoted by (SP). We present an efficient method for finding numerical solutions of (SP). The presented method is a discrete approximation algorithm; however, the main work of computing a numerical solution in our method is only to solve finite linear programming problems by using recurrence relations. In this constructive manner, we provide a computational procedure that yields an error bound introduced by the numerical approximation. We also demonstrate that the computed approximate solutions converge weakly to an optimal solution. Some numerical examples are given to illustrate the provided procedure.
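A hedged illustration of the discrete approximation idea: a simple separated continuous-time LP is discretized with piecewise-constant decision variables on a uniform grid, and the resulting finite LP is solved with `scipy.optimize.linprog`. The toy problem, grid and solver are assumptions; the paper's recurrence-relation scheme and error bound are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Toy continuous-time LP:
#   maximize  integral_0^T x(t) dt   s.t.  integral_0^t x(s) ds <= 1 + t,  0 <= x(t) <= 2,
# discretized with piecewise-constant x on N subintervals.
T, N = 4.0, 80
dt = T / N
t = dt * np.arange(1, N + 1)

c = -dt * np.ones(N)                        # linprog minimizes, so negate the objective
A_ub = dt * np.tril(np.ones((N, N)))        # running integral of x up to each grid point
b_ub = 1.0 + t

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0.0, 2.0), method="highs")
print("approximate optimal value:", -res.fun)   # analytic optimum of the toy problem is 5
```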

16.
In this paper, a general methodology to approximate sets of data points through Non-Uniform Rational Basis Spline (NURBS) curves is provided. The proposed approach aims at integrating and optimizing the full set of design variables (both integer and continuous) defining the shape of the NURBS curve. To this purpose, a new formulation of the curve fitting problem is required: it is stated in the form of a constrained nonlinear programming problem by introducing a suitable constraint on the curvature of the curve. In addition, the resulting optimization problem is defined over a domain having variable dimension, wherein both the number and the value of the design variables are optimized. To deal with this class of constrained nonlinear programming problems, a global optimization hybrid tool has been employed. The optimization procedure is split into two steps: firstly, an improved genetic algorithm optimizes both the value and the number of design variables by means of a two-level Darwinian strategy allowing the simultaneous evolution of individuals and species; secondly, the optimum solution provided by the genetic algorithm constitutes the initial guess for the subsequent gradient-based optimization, which aims at improving the accuracy of the fitting curve. The effectiveness of the proposed methodology is demonstrated through some mathematical benchmarks as well as a real-world engineering problem.

17.
While significant progress has been made, analytic research on principal-agent problems that seeks closed-form solutions faces limitations due to tractability issues arising from the mathematical complexity of the problem. The principal must maximize expected utility subject to the agent’s participation and incentive compatibility constraints. Linearity of performance measures is often assumed, and the Linear, Exponential, Normal (LEN) model is often used to deal with this complexity. These assumptions may be too restrictive for researchers to explore the variety of relationships between compensation contracts offered by the principal and the effort of the agent. In this paper we show how to numerically solve principal-agent problems with nonlinear contracts. In our procedure, we deal directly with the agent’s incentive compatibility constraint. We illustrate our solution procedure with numerical examples and use optimization methods to make the problem tractable without using the simplifying assumptions of a LEN model. We also show that using linear contracts to approximate nonlinear contracts leads to solutions that are far from the optimal solutions obtained using nonlinear contracts. A principal-agent problem is a special instance of a bilevel nonlinear programming problem. We show how to solve principal-agent problems by solving bilevel programming problems using the ellipsoid algorithm. The approach we present can give researchers new insights into the relationships between nonlinear compensation schemes and employee effort.

18.
In this paper, an algorithm based on a barrier objective penalty function for inequality constrained optimization is studied, and a concept, the stability of the barrier objective penalty function, is introduced. It is proved that an approximate optimal solution may be obtained by solving a barrier objective penalty function for an inequality constrained optimization problem when the barrier objective penalty function is stable. Under some conditions, the stability of the barrier objective penalty function is proved for convex programming. In particular, the logarithmic barrier function of convex programming is stable. Based on the barrier objective penalty function, an algorithm is developed for finding an approximate optimal solution to an inequality constrained optimization problem, and its convergence is also proved under some conditions. Finally, numerical experiments show that the barrier objective penalty function algorithm has better convergence than the classical barrier function algorithm.
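For reference, the sketch below shows a classical logarithmic barrier loop on a small convex test problem (the baseline the paper compares against, not the proposed barrier objective penalty function): the barrier subproblem is minimized from a strictly feasible point and the barrier parameter is driven toward zero. The test problem, inner solver and parameter schedule are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize f(x) = (x1 - 2)^2 + (x2 - 2)^2  s.t.  x1 + x2 <= 2, x1 >= 0, x2 >= 0.
def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2

def constraints(x):
    # all constraints written as g_i(x) <= 0
    return np.array([x[0] + x[1] - 2.0, -x[0], -x[1]])

def barrier(x, mu):
    g = constraints(x)
    if np.any(g >= 0.0):
        return np.inf                       # outside the strictly feasible region
    return f(x) - mu * np.sum(np.log(-g))

x = np.array([0.5, 0.5])                    # strictly feasible starting point
mu = 1.0
for _ in range(12):
    x = minimize(barrier, x, args=(mu,), method="Nelder-Mead").x
    mu *= 0.3                               # shrink the barrier parameter
print("approximate solution:", x)           # converges toward (1, 1)
```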

19.
In this paper, we consider the box constrained nonlinear integer programming problem. We present an auxiliary function which has the same discrete global minimizers as the problem. Minimizing this function using a discrete local search method can successfully escape from previously found discrete local minimizers by taking increasing values of a parameter. We propose an algorithm to find a global minimizer of the box constrained nonlinear integer programming problem. The algorithm minimizes the auxiliary function from random initial points. We prove that the algorithm converges asymptotically with probability one. Numerical experiments on a set of test problems show that the algorithm is efficient and robust.
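The sketch below illustrates the discrete local search and random restart components on a small box-constrained nonlinear integer problem: coordinate moves of ±1 inside the box until no neighbor improves, repeated from random starting points. The paper's auxiliary function, which drives the escapes from local minimizers, is not reproduced; the test function and bounds are assumptions.

```python
import numpy as np

def discrete_local_search(f, x0, lower, upper):
    """Descend by +/-1 coordinate moves until no neighbor inside the box improves f."""
    x = np.array(x0, dtype=int)
    improved = True
    while improved:
        improved = False
        for i in range(x.size):
            for step in (-1, 1):
                y = x.copy()
                y[i] += step
                if lower[i] <= y[i] <= upper[i] and f(y) < f(x):
                    x, improved = y, True
    return x

def multistart(f, lower, upper, restarts=50, seed=0):
    """Random-restart wrapper standing in for the auxiliary-function escapes."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(restarts):
        x0 = rng.integers(lower, upper, endpoint=True)
        x = discrete_local_search(f, x0, lower, upper)
        if best is None or f(x) < f(best):
            best = x
    return best

# Box-constrained nonlinear integer test problem with many local minimizers.
def f(x):
    return float(np.sum((x - 3) ** 2) + 10.0 * np.sum(np.cos(2.0 * x)))

lower = np.zeros(4, dtype=int)
upper = np.full(4, 10, dtype=int)
x_best = multistart(f, lower, upper)
print("best point:", x_best, "value:", f(x_best))
```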

20.
This paper presents the use of surrogate constraints and Lagrange multipliers to generate advanced starting solutions to constrained network problems. The surrogate constraint approach is used to generate a singly constrained network problem which is solved using the algorithm of Glover, Karney, Klingman and Russell [13]. In addition, we test the use of the Lagrangian function to generate advanced starting solutions. In the Lagrangian approach, the subproblems are capacitated network problems which can be solved using very efficient algorithms. The surrogate constraint approach is implemented using the multiplier update procedure of Held, Wolfe and Crowder [16]. The procedure is modified to include a search in a single direction to prevent periodic regression of the solution. We also introduce a reoptimization procedure which allows the solution from the kth subproblem to be used as the starting point for the next surrogate problem, for which it is infeasible once the new surrogate constraint is adjoined. The algorithms are tested under a variety of conditions, including large-scale problems, the number and structure of the non-network constraints, and the density of the non-network constraint coefficients. The testing clearly demonstrates that both the surrogate constraint and Lagrange multiplier approaches generate advanced starting solutions which greatly reduce the computational effort required to reach an optimal solution to the constrained network problem. The testing also demonstrates that the extra effort required to solve the singly constrained network subproblems of the surrogate constraint approach yields an improved advanced starting point as compared to the Lagrangian approach. It is further demonstrated that both relaxation approaches are much more computationally efficient than solving the problem from the beginning with a linear programming algorithm.
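To illustrate the Lagrangian relaxation ingredient, the sketch below dualizes a single complicating constraint with a subgradient multiplier update in the spirit of Held, Wolfe and Crowder; a trivially solvable box-constrained subproblem stands in for the capacitated network subproblems. The data, step rule and toy problem are assumptions for illustration.

```python
import numpy as np

# Toy problem: minimize c^T x over the box 0 <= x <= u subject to one
# complicating constraint a^T x >= d, which is dualized with multiplier lam >= 0.
rng = np.random.default_rng(3)
n = 20
c = rng.uniform(1.0, 5.0, n)
a = rng.uniform(0.5, 2.0, n)
u = np.ones(n)
d = 10.0

lam, best_bound = 0.0, -np.inf
for k in range(1, 200):
    reduced = c - lam * a                   # objective of the relaxed subproblem
    x = np.where(reduced < 0.0, u, 0.0)     # componentwise minimizer over the box
    dual_value = reduced @ x + lam * d      # Lagrangian dual function value at lam
    best_bound = max(best_bound, dual_value)
    subgrad = d - a @ x                     # subgradient of the dual at lam
    lam = max(0.0, lam + (1.0 / k) * subgrad)   # projected subgradient step
print("best Lagrangian lower bound:", best_bound)
```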
