Similar Documents
 A total of 20 similar documents were found (search time: 46 ms)
1.
2.
We present a new algorithm for solving a linear least squares problem with linear constraints. These are equality constraint equations and nonnegativity constraints on selected variables. This problem, while appearing to be quite special, is the core problem arising in the solution of the general linearly constrained linear least squares problem. The reduction process of the general problem to the core problem can be done in many ways. We discuss three such techniques. The method employed for solving the core problem is based on combining the equality constraints with differentially weighted least squares equations to form an augmented least squares system. This weighted least squares system, which is equivalent to a penalty function method, is solved with nonnegativity constraints on selected variables. Three types of examples are presented that illustrate applications of the algorithm. The first is rank-deficient, constrained least squares curve fitting. The second is concerned with solving linear systems of algebraic equations with Hilbert matrices and bounds on the variables. The third illustrates a constrained curve fitting problem with inconsistent inequality constraints.
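
A minimal sketch of the weighted augmented system described above, assuming NumPy/SciPy are available; the matrices A, b (least squares data), E, f (equality constraints) and the index set of nonnegative variables below are illustrative placeholders, not data from the paper.

```python
import numpy as np
from scipy.optimize import lsq_linear

def weighted_constrained_lsq(A, b, E, f, nonneg_idx, weight=1e6):
    """Approximate min ||Ax - b|| s.t. Ex = f (enforced by heavy weighting)
    with x[i] >= 0 only for the selected indices in nonneg_idx."""
    # Stack the heavily weighted equality equations on top of the least
    # squares equations: this is the penalty-style augmented system.
    A_aug = np.vstack([weight * E, A])
    b_aug = np.concatenate([weight * f, b])

    # Nonnegativity only on the selected variables; the others stay free.
    n = A.shape[1]
    lb = np.full(n, -np.inf)
    lb[list(nonneg_idx)] = 0.0

    res = lsq_linear(A_aug, b_aug, bounds=(lb, np.full(n, np.inf)))
    return res.x

# Tiny illustrative fit: enforce x0 + x1 = 1 and x1 >= 0.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([0.3, -0.2, 1.1])
E = np.array([[1.0, 1.0]])
f = np.array([1.0])
print(weighted_constrained_lsq(A, b, E, f, nonneg_idx=[1]))
```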

3.
4.
《Optimization》2012,61(3-4):269-284
The Kuhn–Tucker conditions of an optimization problem with inequality constraints are transformed equivalently into a special nonlinear system of equations T_0(z) = 0. It is shown that Newton's method for solving this system combines two valuable properties: local Q-quadratic convergence without assuming the strict complementary slackness condition, and regularity of the Jacobian of T_0 at a point z under reasonable conditions, so that Newton's method can also be used far from a Kuhn–Tucker point.
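
The specific system T_0(z) = 0 is not reproduced in the abstract; the sketch below uses one common equation reformulation of the Kuhn–Tucker conditions, the Fischer–Burmeister function, on a small illustrative problem, assuming SciPy's fsolve as a Newton-type solver.

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative problem (not from the paper): min (x - 2)^2  s.t.  x <= 1.
# KKT: 2(x - 2) + mu = 0,  mu >= 0,  1 - x >= 0,  mu * (1 - x) = 0.
def fischer_burmeister(a, b):
    # phi(a, b) = 0  <=>  a >= 0, b >= 0, a * b = 0
    return a + b - np.sqrt(a * a + b * b)

def T0(z):
    x, mu = z
    stationarity = 2.0 * (x - 2.0) + mu              # gradient of the Lagrangian
    complementarity = fischer_burmeister(mu, 1.0 - x)
    return [stationarity, complementarity]

z_star = fsolve(T0, x0=[0.0, 1.0])   # Newton-type solve of T0(z) = 0
print(z_star)                         # expect roughly x = 1, mu = 2
```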

5.
This paper presents a nonmonotone trust region algorithm for equality-constrained optimization problems. Under certain conditions, we obtain not only global convergence, in the sense that every limit point is a stationary point, but also a one-step superlinear convergence rate. Numerical tests are also given to show the efficiency of the proposed algorithm.

6.
The implementation of the recently proposed semi-monotonic augmented Lagrangian algorithm for the solution of large convex equality-constrained quadratic programming problems is considered. It is proved that if the auxiliary problems are approximately solved by the conjugate gradient method, then the algorithm finds an approximate solution of the class of problems with uniformly bounded spectrum of the Hessian matrix in O(1) matrix–vector multiplications. If applied to the class of problems whose Hessian matrices are, in addition, either sufficiently sparse or expressible as a product of such sparse matrices, then the cost of the solution is proportional to the dimension of the problems. Theoretical results are illustrated by numerical experiments. This research is supported by grants of the Ministry of Education No. S3086102, ET400300415 and MSM 6198910027.
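
A rough sketch of an augmented Lagrangian outer loop with a conjugate gradient inner solve for an equality-constrained QP, assuming NumPy/SciPy; this is a generic variant for illustration, not the semi-monotonic algorithm and update rules analysed in the paper.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def augmented_lagrangian_qp(H, c, B, d, rho=10.0, outer_iters=20, tol=1e-8):
    """Sketch of an augmented Lagrangian loop for
    min 1/2 x'Hx - c'x  s.t.  Bx = d, with CG as the inner solver."""
    n = H.shape[0]
    x = np.zeros(n)
    lam = np.zeros(B.shape[0])
    # Hessian of the augmented Lagrangian, applied matrix-free.
    A = LinearOperator((n, n), matvec=lambda v: H @ v + rho * (B.T @ (B @ v)))
    for _ in range(outer_iters):
        rhs = c - B.T @ lam + rho * (B.T @ d)
        x, _ = cg(A, rhs, x0=x)        # approximate inner solve by CG
        r = B @ x - d                  # equality-constraint residual
        if np.linalg.norm(r) < tol:
            break
        lam = lam + rho * r            # multiplier update
    return x, lam

H = np.diag([2.0, 4.0]); c = np.array([1.0, 1.0])
B = np.array([[1.0, 1.0]]); d = np.array([1.0])
print(augmented_lagrangian_qp(H, c, B, d)[0])   # expect roughly (2/3, 1/3)
```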

7.
In this paper, a new superlinearly convergent algorithm is presented for optimization problems with general nonlinear equality and inequality constraints. Compared with other methods for these problems, the algorithm has two main advantages. First, it does not solve any quadratic programming (QP) subproblems; its search directions are determined by the generalized projection technique and the solutions of two systems of linear equations. Second, the sequence of points generated by the algorithm satisfies all inequality constraints, and the step length is computed by a straight-line search. The algorithm is proved to possess global and superlinear convergence.

8.
The pivot and probe algorithm for solving a linear program
In [7] we defined a candidate constraint as one which, for at least one pivot step, contains a potential pivot, discovered that most constraints are never candidates, and devised a modification of the simplex method in which only constraints which are currently candidates are updated. In this paper we present another way to take advantage of this fact. We begin by solving a relaxed linear program consisting of the constraints of the original problem which are initially candidates. Its solution gives an upper bound to the value of the original problem. We also introduce the idea of a probe, that is, a line segment joining two vectors for the primal problem, one of which is primal feasible, and use it to identify a most violated constraint; at the same time this gives a lower bound to the objective value of the original problem. This violated constraint is added to the relaxed problem, which is solved again, giving a new upper bound, and so on. We present computational experience indicating that time savings of 50–80% over the simplex method can be obtained by this method, which we call PAPA, the Pivot and Probe Algorithm. This report was prepared as part of the activities of the Management Science Research Group, Carnegie-Mellon University, under Contract No. N00014-75-C-0621 NR 047-048 with the U.S. Office of Naval Research. Reproduction in whole or part is permitted for any purpose of the U.S. Government.
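
A simplified constraint-generation loop in the spirit of the relaxation described above, assuming SciPy's linprog; the probe step for locating a most violated constraint is replaced here by a direct residual check, so this is a sketch of the idea, not the authors' PAPA implementation.

```python
import numpy as np
from scipy.optimize import linprog

def relax_and_cut(c, A, b, start=2, max_rounds=50):
    """Solve min c'x s.t. Ax <= b, 0 <= x <= 10 by constraint generation:
    begin with a few rows and repeatedly add a most violated constraint."""
    active = list(range(start))
    x = None
    for _ in range(max_rounds):
        res = linprog(c, A_ub=A[active], b_ub=b[active], bounds=(0, 10))
        x = res.x
        violations = A @ x - b
        worst = int(np.argmax(violations))
        if violations[worst] <= 1e-9:          # all original constraints hold
            return x, active
        active.append(worst)                    # add the most violated row
    return x, active

# Illustrative data (not from the paper).
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 2.0], [2.0, 1.0]])
b = np.array([4.0, 4.0, 5.0, 5.0])
print(relax_and_cut(c, A, b)[0])
```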

9.
A new algorithm is proposed which, under mild assumptions, generates a sequence {x_i} that, starting at any point in R^n, will converge to a set X defined by a mixed system of equations and inequalities. Each iteration of the algorithm requires the solution of a linear programming problem with relatively few constraints. Assuming only that the functions involved are continuously differentiable, a superlinear rate of convergence is achieved. No convexity whatsoever is required by the algorithm.

10.
Dinkelbach's algorithm was developed to solve convex fractional programming. The method reaches the optimal solution of the optimisation problem by solving a sequence of nonlinear convex programming subproblems defined by a parameter. In this paper it is shown that Dinkelbach's algorithm can be used to solve general fractional programming; its applicability depends on the possibility of solving the subproblems. The extended Dinkelbach algorithm is a framework that describes several algorithms proposed for linear fractional programming, integer linear fractional programming and convex fractional programming, and that can generate new algorithms. Its applicability to new cases, such as nondifferentiable fractional programming and quadratic fractional programming, has been studied. We propose two modifications to speed up Dinkelbach's algorithm: one uses interpolation formulae to update the parameter that defines the subproblem, and the other truncates the solution of the subproblem. We give sufficient conditions for the convergence of these modifications. Computational experiments in linear fractional programming, integer linear fractional programming and nonlinear fractional programming have been carried out to evaluate the efficiency of these methods.
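
A generic Dinkelbach loop, assuming NumPy/SciPy; the subproblem solver, the bounds and the ratio below are illustrative stand-ins, and neither of the two proposed modifications (interpolation-based parameter updates, truncated subproblem solves) is included.

```python
import numpy as np
from scipy.optimize import minimize

def dinkelbach(f, g, x0, tol=1e-6, max_iters=100):
    """Generic Dinkelbach loop for max f(x)/g(x) (g > 0 on the feasible set):
    repeatedly solve the parametric subproblem max_x f(x) - q * g(x)."""
    x = np.asarray(x0, dtype=float)
    q = f(x) / g(x)
    for _ in range(max_iters):
        # Subproblem: maximize f(x) - q*g(x) (minimize its negative) on a box.
        res = minimize(lambda z: -(f(z) - q * g(z)), x, bounds=[(0, 5)] * len(x))
        x = res.x
        F = f(x) - q * g(x)
        if abs(F) < tol:                 # F(q*) = 0 at the optimal ratio
            break
        q = f(x) / g(x)                  # standard parameter update
    return x, q

# Illustrative instance (not from the paper): maximize (2x+y+1)/(x+y+2) on [0,5]^2.
f = lambda z: 2 * z[0] + z[1] + 1
g = lambda z: z[0] + z[1] + 2
print(dinkelbach(f, g, x0=[1.0, 1.0]))
```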

11.
A new approach, identified as the progressive genetic algorithm (PGA), is proposed for the solution of optimization problems with nonlinear equality and inequality constraints. Based on genetic algorithms (GAs) and an iteration method, PGA divides the optimization process into two steps: an iteration step and a search step. In the iteration step, the constraints of the original problem are linearized using a truncated Taylor series expansion, yielding an approximate problem with linearized constraints. In the search step, a GA is applied to the problem with linearized constraints to find a local optimal solution. The final solution is obtained from a progressive iterative process. Application of the proposed method to two simple examples is given to demonstrate the algorithm.

12.
This paper describes an implementation of the so-called proximal point algorithm for solving convex linearly constrained nonsmooth optimization problems. In contrast to previous implementations of the same approach (which solve constrained nonsmooth problems as unconstrained problems via exact penalty function techniques), our implementation handles linear constraints explicitly (the linear constraints being incorporated into the direction-finding subproblem). The relevance and efficiency of the approach are demonstrated through comparative computational experiments on many classical test problems from the literature, as well as on a series of large constrained dual transportation problems introduced and studied here for the first time.
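
A bare-bones proximal point loop with the linear constraints kept explicit in each subproblem, assuming SciPy; the inner solve here is a plain SLSQP call on a smooth test function rather than the bundle-type direction-finding subproblem used for nonsmooth objectives, so this only illustrates the outer structure.

```python
import numpy as np
from scipy.optimize import minimize

def proximal_point(f, x0, A, b, c=1.0, iters=30, tol=1e-8):
    """Proximal point loop with linear constraints A x <= b kept explicit:
    x_{k+1} = argmin_x f(x) + (1/(2c)) * ||x - x_k||^2  s.t.  A x <= b."""
    x = np.asarray(x0, dtype=float)
    cons = [{'type': 'ineq', 'fun': lambda z: b - A @ z}]   # A z <= b
    for _ in range(iters):
        sub = lambda z: f(z) + np.sum((z - x) ** 2) / (2.0 * c)
        res = minimize(sub, x, method='SLSQP', constraints=cons)
        if np.linalg.norm(res.x - x) < tol:      # fixed point reached
            return res.x
        x = res.x
    return x

# Illustrative smooth instance (the paper targets nonsmooth objectives).
f = lambda z: (z[0] - 3) ** 2 + (z[1] - 2) ** 2
A = np.array([[1.0, 1.0]]); b = np.array([2.0])
print(proximal_point(f, x0=[0.0, 0.0], A=A, b=b))   # expect roughly (1.5, 0.5)
```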

13.
Exact penalty function methods are a classical approach to solving optimization problems. A traditional exact penalty function cannot be both simple and smooth, where "simple" means that the penalty function contains no gradient information of the objective or constraint functions. For equality-constrained problems, a new class of simple smooth penalty functions, different from the traditional ones, is proposed and proved to be exact. A penalty method based on the new penalty function is given, and numerical examples show that the algorithm is feasible.
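
For context, a classical quadratic penalty loop for equality constraints, assuming SciPy; this penalty is simple and smooth but only exact in the limit of an unbounded penalty parameter, which is precisely the gap the paper's new penalty function is designed to close. The problem data below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def quadratic_penalty(f, h, x0, mu=1.0, growth=10.0, outer=10, tol=1e-6):
    """Classical smooth quadratic penalty loop for min f(x) s.t. h(x) = 0.
    (Only exact as mu -> infinity; shown here purely for context.)"""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        penalized = lambda z, m=mu: f(z) + 0.5 * m * np.sum(np.asarray(h(z)) ** 2)
        x = minimize(penalized, x).x
        if np.linalg.norm(h(x)) < tol:        # constraint nearly satisfied
            break
        mu *= growth                           # tighten the penalty
    return x

# Illustrative instance: min x0^2 + x1^2  s.t.  x0 + x1 - 1 = 0 (solution ~(0.5, 0.5)).
f = lambda z: z[0] ** 2 + z[1] ** 2
h = lambda z: [z[0] + z[1] - 1.0]
print(quadratic_penalty(f, h, x0=[0.0, 0.0]))
```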

14.
The purpose of this paper is to study the strong convergence of a general iterative scheme to find a common element of the set of common fixed points of a finite family of nonexpansive mappings, the set of solutions of variational inequalities for a relaxed cocoercive mapping, and the set of solutions of an equilibrium problem. Our results extend recent results announced by many others.

15.
Based on a new efficient identification technique for active constraints introduced in this paper, a new sequential systems of linear equations (SSLE) algorithm generating feasible iterates is proposed for solving nonlinear optimization problems with inequality constraints. We introduce a new technique for constructing the system of linear equations, which resorts to a perturbation of the gradients of the constraint functions. At each iteration of the new algorithm, a feasible descent direction is obtained by solving only one system of linear equations, without forming a convex combination. To ensure global convergence and avoid the Maratos effect, the algorithm needs to solve two additional reduced systems of linear equations with the same coefficient matrix after finitely many iterations. The proposed algorithm is proved to be globally and superlinearly convergent under some mild conditions. What distinguishes this algorithm from previous feasible SSLE algorithms is that an improving direction is obtained easily and the computational cost of generating a new iterate is reduced. Finally, a preliminary implementation has been tested.

16.
In this paper, we introduce an iterative algorithm for finding a common element of the set of solutions of a mixed equilibrium problem, the set of fixed points of an infinite family of nonexpansive mappings and the set of solutions of a general system of variational inequalities for a cocoercive mapping in a real Hilbert space. Furthermore, we prove that the proposed iterative algorithm converges strongly to a common element of the above three sets. Our results extend and improve the corresponding results of Ceng, Wang, and Yao [L.C. Ceng, C.Y. Wang, J.C. Yao, Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities, Math. Methods Oper. Res. 67 (2008) 375–390], Ceng and Yao [L.C. Ceng, J.C. Yao, A hybrid iterative scheme for mixed equilibrium problems and fixed point problems, J. Comput. Appl. Math. doi:10.1016/j.cam.2007.02.022], Takahashi and Takahashi [S. Takahashi, W. Takahashi, Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces, J. Math. Anal. Appl. 331 (2007) 506–515] and many others.

17.
§1 Introduction. Consider the following nonlinear optimization problem: minimize f(x) subject to C(x) = 0, a ≤ x ≤ b, (1.1) where f(x): R^n → R and C(x) = (c_1(x), c_2(x), ..., c_m(x))^T: R^n → R^m are twice continuously differentiable, m ≤ n, a, b ∈ R^n. Trust region algorithms are very effective for solving nonlinear optimi…

18.
A new heuristic procedure, called Smart Greedy, is proposed for solving a class of general reliability optimization problems (non-DGR type knapsack problems). Smart Greedy uses Recursive Greedy with multiple greedy functions designated by balance coefficients, generates several solutions, and then determines the best solution among them as the smart greedy solution. Recursive Greedy first checks the feasibility of sets of items for a given problem and removes infeasible items from the item sets. Second, the procedure checks the gain ratio of increments of the objective function to the constraint function and reduces the problem to a DGR-type problem by invoking LP dominance. Third, the procedure continues to allocate the increments for current items until the constraint is violated. With the current solution, the procedure then repeats the greedy procedure for current items that are added to the items removed by the LP dominance in the previous step. Computational results show that Smart Greedy is more effective than previously reported methods.
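
A toy gain-ratio greedy allocation step, in plain Python, for a knapsack-style budget constraint; the balance coefficients, the LP dominance reduction and the recursive restarts of Smart Greedy are not reproduced here, and the data are illustrative.

```python
def ratio_greedy(gains, costs, budget):
    """Greedy allocation by gain/cost ratio for a knapsack-style problem:
    keep adding the item with the best incremental ratio while the budget
    (constraint) is not violated. Only the basic allocation step is shown."""
    order = sorted(range(len(gains)), key=lambda i: gains[i] / costs[i], reverse=True)
    chosen, total_cost, total_gain = [], 0.0, 0.0
    for i in order:
        if total_cost + costs[i] <= budget:    # allocate while feasible
            chosen.append(i)
            total_cost += costs[i]
            total_gain += gains[i]
    return chosen, total_gain

# Illustrative data (not from the paper).
print(ratio_greedy(gains=[6.0, 5.0, 4.0], costs=[3.0, 2.0, 3.0], budget=5.0))
```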

19.
We present a greedy algorithm for solving a special class of convex programming problems and establish a connection with polymatroid theory which yields a theoretical explanation and verification of the algorithm via some recent results of S. Fujishige.

20.
In this paper, we introduce and study a new hybrid iterative method for finding a common element of the set of solutions of a mixed equilibrium problem, the set of fixed points of an infinite family of nonexpansive mappings and the set of solutions of variational inequalities for a ξ-Lipschitz continuous and relaxed (m,v)-cocoercive mapping in Hilbert spaces. Then, we prove a strong convergence theorem for the iterative sequence generated by the proposed iterative algorithm, which solves some optimization problems under suitable conditions. Our results extend and improve the recent results of Yao et al. [Y. Yao, M.A. Noor, S. Zainab and Y.C. Liou, Mixed equilibrium problems and optimization problems, J. Math. Anal. Appl. (2009). doi:10.1016/j.jmaa.2008.12.005] and Gao and Guo [X. Gao and Y. Guo, Strong convergence theorem of a modified iterative algorithm for mixed equilibrium problems in Hilbert spaces, J. Inequal. Appl. (2008). doi:10.1155/2008/454181] and many others.
