Related Articles
20 related articles found.
1.
Linearly constrained optimization problems with simple bounds are considered in the present work. First, a preconditioned spectral gradient method is defined for the case in which no simple bounds are present. This algorithm can be viewed as a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. The spectral choice of steplength is embedded into the Hessian approximation and the whole process is combined with a nonmonotone line search strategy. The simple bounds are then taken into account by placing them in an exponential penalty term that modifies the objective function. The exponential penalty scheme defines the outer iterations of the process. Each outer iteration involves the application of the previously defined preconditioned spectral gradient method for linear equality constrained problems. Therefore, an equality constrained convex quadratic programming problem needs to be solved at every inner iteration. The associated extended KKT matrix remains constant unless the process is reinitiated. In ordinary inner iterations, only the right-hand side of the KKT system changes. Therefore, suitable sparse factorization techniques can be applied and exploited effectively. Encouraging numerical experiments are presented. This research was supported by FAPESP Grant 2001-04597-4 and Grant 903724-6, FINEP and FAEP-UNICAMP, and the Scientific Computing Center of UCV. The authors thank two anonymous referees whose comments helped us to improve the final version of this paper.
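The core mechanism, a spectral steplength satisfying a weak secant equation combined with a nonmonotone line search, can be sketched compactly. Below is a minimal, unpreconditioned and unconstrained Python sketch using the classical Barzilai-Borwein steplength and a Grippo-Lampariello-Lucidi style nonmonotone test; it illustrates the mechanism only and is not the authors' preconditioned, penalty-based algorithm. Function names and parameter values are illustrative choices.

```python
import numpy as np

def spectral_gradient(f, grad, x0, max_iter=500, M=10, gamma=1e-4, tol=1e-8):
    """Minimal spectral (Barzilai-Borwein) gradient sketch with a
    nonmonotone (GLL) line search; no preconditioning, no constraints."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    sigma = 1.0                       # initial spectral "Hessian" scalar
    f_hist = [f(x)]                   # recent f-values for the nonmonotone test
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -g / sigma                # quasi-Newton step with Hessian ~ sigma*I
        f_ref = max(f_hist[-M:])      # nonmonotone reference value
        t = 1.0
        while f(x + t * d) > f_ref + gamma * t * g.dot(d) and t > 1e-12:
            t *= 0.5                  # backtracking
        s = t * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        # Spectral choice: sigma*I satisfies the weak secant equation
        # <s, sigma*s> = <s, y>, safeguarded to stay positive.
        sigma = max(y.dot(s) / s.dot(s), 1e-10)
        x, g = x_new, g_new
        f_hist.append(f(x))
    return x

# Example: an ill-conditioned convex quadratic
A = np.diag([1.0, 10.0, 100.0])
x_min = spectral_gradient(lambda x: 0.5 * x @ A @ x, lambda x: A @ x,
                          x0=np.ones(3))
```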

2.
This paper presents a feasible descent method for a class of nondifferentiable optimization problems with linear constraints, in which the objective function is a composition of a convex function and a differentiable function. The algorithm finds feasible descent directions by solving a sequence of quadratic programs, and new iterates are generated by an inexact line search. Under rather weak conditions, we prove the global convergence of the algorithm.

3.
We consider the problem $\min f(x)$ s.t. $x \in C$, where $C$ is a closed and convex subset of $\mathbb{R}^n$ with nonempty interior, and introduce a family of interior point methods for this problem, which can be seen as approximate versions of generalized proximal point methods. Each step consists of a one-dimensional search along either a curve or a segment in the interior of $C$. The information about the boundary of $C$ is contained in a generalized distance which defines the segment or the curve, and whose gradient diverges at the boundary of $C$. The objective of the search is either $f$ or $f$ plus a regularizing term. When $C = \mathbb{R}^n$, the usual steepest descent method is a particular case of our general scheme, and we manage to extend known convergence results for the steepest descent method to our family: for nonregularized one-dimensional searches, under a level set boundedness assumption on $f$, the sequence is bounded, the difference between consecutive iterates converges to 0 and every cluster point of the sequence satisfies first-order optimality conditions for the problem, i.e., is a solution if $f$ is convex. For the regularized search and convex $f$, no boundedness condition on $f$ is needed and full and global convergence of the sequence to a solution of the problem is established.
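As a concrete illustration of a search curve generated by a generalized distance, take $C = \mathbb{R}^n_+$ with the Kullback-Leibler distance; the induced curve is the familiar exponentiated-gradient (multiplicative) update, which stays in the interior of $C$ for every steplength. The sketch below covers this special case only, with a crude grid for the one-dimensional search; it is not the paper's general scheme, and the test problem is an illustrative assumption.

```python
import numpy as np

def exp_gradient_search(f, grad, x0, t_grid=None, max_iter=200, tol=1e-8):
    """One-dimensional search along the curve induced by the KL generalized
    distance on C = R^n_+: x(t) = x_k * exp(-t * grad f(x_k)).
    Every point of the curve lies in the interior of C."""
    x = np.asarray(x0, dtype=float)           # must start in the interior: x > 0
    if t_grid is None:
        t_grid = np.logspace(-4, 1, 30)       # crude grid for the 1-D search
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(x * g) < tol:       # scaled stationarity measure
            break
        curve = lambda t: x * np.exp(-t * g)  # search curve in int(R^n_+)
        t_best = min(t_grid, key=lambda t: f(curve(t)))
        x = curve(t_best)
    return x

# Example: minimize sum((x - 2)^2) over the positive orthant
x_opt = exp_gradient_search(lambda x: np.sum((x - 2.0) ** 2),
                            lambda x: 2.0 * (x - 2.0),
                            x0=np.full(3, 0.5))
```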

4.
The alternating direction method of multipliers has been well studied in the context of linearly constrained convex optimization. In the last few years, a number of novel applications have arisen from image processing, compressive sensing, statistics, and related areas, where the approach is surprisingly efficient. In the early applications, the objective function of the linearly constrained convex optimization problem is separable into two parts. Recently, the alternating direction method of multipliers has been extended to the case where the number of separable parts in the objective function is finite. However, in each iteration, the subproblems are required to be solved exactly. In this paper, by introducing some reasonable inexactness criteria, we propose two inexact alternating-direction-based contraction methods, which substantially broaden the applicable scope of the approach. The convergence and complexity results for both methods are derived in the framework of variational inequalities.
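For reference, the classical two-block, exactly solved iteration that such inexact methods generalize looks as follows; the sketch applies it to an $\ell_1$-regularized least-squares problem, where both subproblems happen to have closed-form solutions. The problem instance and parameter choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Two-block ADMM sketch for min 0.5||Ax - b||^2 + lam*||z||_1  s.t. x = z.
    Both subproblems are solved exactly here; inexact variants relax this."""
    m, n = A.shape
    x = z = u = np.zeros(n)                  # u is the scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
    for _ in range(n_iter):
        # x-update: exact solve of the quadratic subproblem
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: exact prox of the l1 term (soft thresholding)
        z = soft(x + u, lam / rho)
        # dual update
        u = u + x - z
    return z

# Example usage on random data
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
b = rng.standard_normal(40)
z_hat = admm_lasso(A, b, lam=0.1)
```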

5.
Conventional supervised learning in neural networks is carried out by performing unconstrained minimization of a suitably defined cost function. This approach has certain drawbacks, which can be overcome by incorporating additional knowledge in the training formalism. In this paper, two types of such additional knowledge are examined: network-specific knowledge (associated with the neural network irrespective of the problem whose solution is sought) and problem-specific knowledge (which helps to solve a specific learning task). A constrained optimization framework is introduced for incorporating these types of knowledge into the learning formalism. We present three examples of improvement in the learning behaviour of neural networks using additional knowledge in the context of our constrained optimization framework. The two network-specific examples are designed to improve convergence and learning speed in the broad class of feedforward networks, while the third problem-specific example is related to the efficient factorization of 2-D polynomials using suitably constructed sigma-pi networks.

6.
We study the convergence properties of a (block) coordinate descent method applied to minimize a nondifferentiable (nonconvex) function $f(x_1, \ldots, x_N)$ with certain separability and regularity properties. Assuming that $f$ is continuous on a compact level set, the subsequence convergence of the iterates to a stationary point is shown when either $f$ is pseudoconvex in every pair of coordinate blocks from among $N-1$ coordinate blocks or $f$ has at most one minimum in each of $N-2$ coordinate blocks. If $f$ is quasiconvex and hemivariate in every coordinate block, then the assumptions of continuity of $f$ and compactness of the level set may be relaxed further. These results are applied to derive new (and old) convergence results for the proximal minimization algorithm, an algorithm of Arimoto and Blahut, and an algorithm of Han. They are applied also to a problem of blind source separation.
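In the simplest smooth special case, coordinate descent with exact per-coordinate minimization reduces to a Gauss-Seidel sweep; the sketch below shows this for a strictly convex quadratic, which is far more restrictive than the nondifferentiable, nonconvex setting analyzed in the paper.

```python
import numpy as np

def coordinate_descent_quadratic(Q, b, x0, n_sweeps=100, tol=1e-10):
    """Cyclic coordinate descent sketch for f(x) = 0.5 x'Qx - b'x with Q
    symmetric positive definite.  Each coordinate subproblem is minimized
    exactly (this is Gauss-Seidel); block versions replace the scalar
    solve by a small linear solve over the block."""
    x = np.asarray(x0, dtype=float).copy()
    n = len(x)
    for _ in range(n_sweeps):
        x_old = x.copy()
        for i in range(n):
            # exact minimizer of f along coordinate i, others held fixed
            x[i] = (b[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

# Example: a small SPD system
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star = coordinate_descent_quadratic(Q, b, x0=np.zeros(2))
```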

7.
A stochastic algorithm is proposed for the global optimization of nonconvex functions subject to linear constraints. Our method follows the trajectory of an appropriately defined Stochastic Differential Equation (SDE). The feasible set is assumed to consist of linear equality constraints, and possibly box constraints. Feasibility of the trajectory is achieved by projecting its dynamics onto the set defined by the linear equality constraints. A barrier term is used for the purpose of forcing the trajectory to stay within the box constraints. Using Laplace's method we give a characterization of a probability measure ($\Pi$) that is defined on the set of global minima of the problem. We then study the transition density associated with the projected diffusion process and show that its weak limit is given by $\Pi$. Numerical experiments using standard test problems from the literature are reported. Our results suggest that the method is robust and applicable to large-scale problems.
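A minimal Euler-Maruyama discretization of such a projected diffusion, assuming only linear equality constraints (the barrier term for box constraints is omitted) and a logarithmic cooling schedule, might look as follows; the schedule and test function are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def projected_sde_minimize(grad, A, x0, n_steps=20000, dt=1e-3, T0=1.0, seed=0):
    """Euler-Maruyama sketch of the projected diffusion
        dX = P(-grad f(X) dt + sqrt(2 T(t)) dW),
    where P is the orthogonal projector onto null(A), so A X stays constant
    and the iterate remains on {x : Ax = A x0}.  T(t) is a slowly
    decreasing annealing "temperature"."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    # Orthogonal projector onto null(A): P = I - A' (A A')^{-1} A
    P = np.eye(len(x)) - A.T @ np.linalg.solve(A @ A.T, A)
    for k in range(1, n_steps + 1):
        T = T0 / np.log(np.e + k)          # logarithmic cooling schedule
        noise = np.sqrt(2.0 * T * dt) * rng.standard_normal(len(x))
        x = x + P @ (-grad(x) * dt + noise)
    return x

# Example: nonconvex f(x) = sum(x^4 - 8 x^2) restricted to x1 + x2 + x3 = 1
A = np.ones((1, 3))
x0 = np.array([1.0, 0.0, 0.0])
x_hat = projected_sde_minimize(lambda x: 4 * x**3 - 16 * x, A, x0)
```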

8.
In this paper we describe a Newton-type algorithm model for solving smooth constrained optimization problems with nonlinear objective function, general linear constraints and bounded variables. The algorithm model is based on the definition of a continuously differentiable exact merit function that follows an exact penalty approach for the box constraints and an exact augmented Lagrangian approach for the general linear constraints. Under very mild assumptions and without requiring the strict complementarity assumption, the algorithm model produces a sequence of pairs $(x_k, \lambda_k)$ converging quadratically to a pair $(\bar{x}, \bar{\lambda})$, where $\bar{x}$ satisfies the first-order necessary conditions and $\bar{\lambda}$ is a KKT multiplier vector associated to the linear constraints. As regards the behaviour of the sequence $x_k$ alone, it is guaranteed that it converges at least superlinearly. At each iteration, the algorithm requires only the solution of a linear system that can be performed by means of conjugate gradient methods. Numerical experiments and comparisons are reported.
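The single linear solve per iteration can be carried out matrix-free: a conjugate gradient routine needs only Hessian-vector products. A generic sketch of that step, not the authors' specific merit-function system:

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def newton_step_cg(hess_vec, g, n):
    """One linear solve per iteration, done matrix-free: compute the
    Newton-type step d from H d = -g with conjugate gradients, using only
    Hessian-vector products (CG assumes H symmetric positive definite)."""
    H = LinearOperator((n, n), matvec=hess_vec)
    d, info = cg(H, -g)        # info == 0 signals successful convergence
    return d

# Example: H = diag(1, ..., 5), g = ones
h = np.arange(1.0, 6.0)
d = newton_step_cg(lambda v: h * v, g=np.ones(5), n=5)
```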

9.
Journal of Optimization Theory and Applications - Coordinate descent algorithms are popular in machine learning and large-scale data analysis problems due to their low computational cost iterative...

10.
In this paper, we define and study several types of block descent methods for the simultaneous solution of a system of linear equations with several right-hand sides. Then, improved block EN methods are proposed. Finally, block hybrid and minimal residual smoothing procedures are considered.

11.
An important class of nonparametric signal processing methods entails forming a set of predictors from an overcomplete set of basis functions associated with a fast transform (e.g., wavelet packets). In these methods, the number of basis functions can far exceed the number of sample values in the signal, leading to an ill-posed prediction problem. The "basis pursuit" denoising method of Chen, Donoho, and Saunders regularizes the prediction problem by adding an $\ell_1$ penalty term on the coefficients for the basis functions. Use of an $\ell_1$ penalty instead of $\ell_2$ has significant benefits, including higher resolution of signals close in time/frequency and a more parsimonious representation. The $\ell_1$ penalty, however, poses a challenging optimization problem that was solved by Chen, Donoho, and Saunders using a novel application of interior-point algorithms (IP). This article investigates an alternative optimization approach based on block coordinate relaxation (BCR) for sets of basis functions that are the finite union of sets of orthonormal basis functions (e.g., wavelet packets). We show that the BCR algorithm is globally convergent, and empirically, the BCR algorithm is faster than the IP algorithm for a variety of signal denoising problems.
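The reason BCR is attractive for unions of orthonormal bases is that each block subproblem has a closed-form solution by soft thresholding. A minimal sketch under that orthonormality assumption (the bases and data below are illustrative, not from the article):

```python
import numpy as np

def soft(v, k):
    """Soft-thresholding: the exact prox of k*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def bcr_denoise(y, Phis, lam, n_sweeps=50):
    """Block coordinate relaxation sketch for
        min_c 0.5*||y - sum_j Phi_j c_j||^2 + lam * sum_j ||c_j||_1,
    where each Phi_j has orthonormal columns (Phi_j' Phi_j = I).  The block
    subproblem then has the closed form c_j = soft(Phi_j' r, lam)."""
    cs = [np.zeros(P.shape[1]) for P in Phis]
    for _ in range(n_sweeps):
        for j, P in enumerate(Phis):
            # residual with block j removed
            r = y - sum(Q @ c for Q, c in zip(Phis, cs)) + P @ cs[j]
            cs[j] = soft(P.T @ r, lam)
    return cs

# Example: union of two orthonormal bases (identity and a Haar-like basis)
n = 4
I = np.eye(n)
H = np.array([[1, 1, 1, 1], [1, 1, -1, -1], [1, -1, 0, 0], [0, 0, 1, -1]],
             dtype=float)
H /= np.linalg.norm(H, axis=1, keepdims=True)   # orthonormal rows, so use H.T
y = np.array([1.0, 2.0, 3.0, 4.0])
coeffs = bcr_denoise(y, [I, H.T], lam=0.1)
```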

12.
A New Conic Model Trust Region Method for Linearly Constrained Optimization Problems
This paper presents a new conic model trust region method for linear equality constrained optimization problems. A null-space technique is used to eliminate the linear equality constraints from the new conic model subproblem, the transformed subproblem is solved by a dogleg method, and a trust region method for linear equality constrained optimization problems is then given. Global convergence of the method is stated and proved, and numerical experiments on linear equality constrained optimization problems are reported. Both the theory and the numerical results show that the new conic model trust region method is effective, which provides a basis for further study of nonlinear optimization with the new conic model.
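The null-space elimination step itself is standard and easy to sketch: parametrize the feasible set of $Ax = b$ as $x = \hat{x} + Zy$ and optimize over the unconstrained variable $y$. The sketch below shows only this reduction, assuming small dense matrices; the conic model subproblem and dogleg solver of the paper are not reproduced.

```python
import numpy as np
from scipy.linalg import null_space

def eliminate_equality_constraints(A, b):
    """Null-space technique sketch: parametrize {x : Ax = b} as
    x = x_hat + Z y, where A x_hat = b and the columns of Z span null(A).
    A constrained model in x becomes an unconstrained model in y, which
    can then be handled by, e.g., a dogleg trust region step."""
    x_hat = np.linalg.lstsq(A, b, rcond=None)[0]   # particular solution
    Z = null_space(A)                              # orthonormal basis of null(A)
    to_x = lambda y: x_hat + Z @ y
    return x_hat, Z, to_x

# Example: one equality constraint x1 + x2 + x3 = 1 in R^3
A = np.ones((1, 3))
b = np.array([1.0])
x_hat, Z, to_x = eliminate_equality_constraints(A, b)
assert np.allclose(A @ to_x(np.array([0.3, -0.7])), b)
```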

13.
Using switch functions, this paper establishes a combined feasible direction method for solving linearly constrained optimization problems, the switch algorithm model, and gives its convergence properties, thereby unifying and generalizing the common feasible direction methods, including the superlinearly convergent ones. Based on this model, a new class of superlinearly convergent algorithms is constructed.

14.
15.
施保昌 (Shi Baochang), 《应用数学》 (Applied Mathematics), 1993, 6(2): 145-150
This paper proposes two classes of generalized reduced-type gradient methods for constrained optimization problems and studies, from a unified viewpoint, the structure and global convergence of the projected gradient method and the reduced gradient method. The results unify and generalize the common feasible direction methods.

16.
17.
The proximal alternating direction method of multipliers is a popular and useful method for linearly constrained, separable convex problems, especially in the linearized case. In the literature, convergence of the proximal alternating direction method has been established under the assumption that the proximal regularization matrix is positive semi-definite. Recently, it was shown that the regularizing proximal term in the proximal alternating direction method of multipliers does not necessarily have to be positive semi-definite, without any additional assumptions. However, it remained unknown whether the indefinite setting is valid for the proximal version of the symmetric alternating direction method of multipliers. In this paper, we confirm that the symmetric alternating direction method of multipliers can also be regularized with an indefinite proximal term. We theoretically prove the global convergence of the indefinite method and establish its worst-case convergence rate in an ergodic sense. In addition, the generalized alternating direction method of multipliers proposed by Eckstein and Bertsekas is a special case in our discussion. Finally, we demonstrate the performance improvements achieved when using the indefinite proximal term through experimental results.
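To make the role of the proximal term concrete, a common choice in the linearized (non-symmetric) literature, shown here only as an assumed illustration and not as this paper's symmetric scheme, regularizes the $x$-subproblem as follows; the statement that convergence survives for a range of $\tau < 1$ is exactly the "indefinite" phenomenon discussed above.

```latex
% Linearized proximal x-update with penalty beta and r = beta * ||A^T A||:
% D is positive semi-definite iff tau >= 1 (the classical setting); the
% indefinite results show convergence is retained for a range of tau < 1.
x^{k+1} = \arg\min_x \; \mathcal{L}_\beta(x, z^k, \lambda^k)
          + \tfrac{1}{2}\,\|x - x^k\|_D^2,
\qquad
D = \tau r I - \beta A^{\mathsf{T}} A .
```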

18.
This paper proposes a new ODE-type trust region algorithm for $LC^1$ optimization problems with linear equality constraints. At each iteration, the trial step is obtained by solving only a system of linear equations rather than a subproblem with a trust region bound, which lowers the computational complexity and improves efficiency. Under certain conditions, the algorithm is also proved to be superlinearly convergent.

19.
This paper presents a family of projected descent direction algorithms with inexact line search for solving large-scale minimization problems subject to simple bounds on the decision variables. The global convergence of algorithms in this family is ensured by conditions on the descent directions and line search. Whenever a sequence constructed by an algorithm in this family enters a sufficiently small neighborhood of a local minimizer $\bar{x}$ satisfying standard second-order sufficiency conditions, it gets trapped and converges to this local minimizer. Furthermore, in this case, the active constraint set at $\bar{x}$ is identified in a finite number of iterations. This fact is used to ensure that the rate of convergence to a local minimizer, satisfying standard second-order sufficiency conditions, depends only on the behavior of the algorithm in the unconstrained subspace. As particular examples, we present projected versions of the modified Polak–Ribière conjugate gradient method and the limited-memory BFGS quasi-Newton method that retain the convergence properties associated with those algorithms applied to unconstrained problems.
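A minimal instance of this family is projected gradient descent on a box, where the projection is a componentwise clip and the inexact line search is Armijo backtracking along the projection arc; the sketch below uses the plain negative gradient direction rather than the conjugate gradient or L-BFGS directions treated in the paper, and the test problem is an illustrative assumption.

```python
import numpy as np

def projected_gradient_box(f, grad, lo, hi, x0, max_iter=500,
                           gamma=1e-4, tol=1e-8):
    """Projected descent sketch for min f(x) s.t. lo <= x <= hi.
    The projection onto the box is a componentwise clip; an inexact
    (Armijo backtracking) line search is done along the projection arc."""
    proj = lambda v: np.clip(v, lo, hi)
    x = proj(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(x - proj(x - g)) < tol:   # projected-gradient residual
            break
        t, fx = 1.0, f(x)
        while True:
            x_new = proj(x - t * g)                 # point on the projection arc
            # Armijo-type sufficient decrease along the arc
            if f(x_new) <= fx + gamma * g.dot(x_new - x) or t < 1e-12:
                break
            t *= 0.5
        x = x_new
    return x

# Example: quadratic whose unconstrained minimizer lies outside the box
x_box = projected_gradient_box(lambda x: np.sum((x - 3.0) ** 2),
                               lambda x: 2.0 * (x - 3.0),
                               lo=0.0, hi=1.0, x0=np.zeros(2))
```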

20.
Linear systems associated with numerical methods for constrained optimization are discussed in this paper. It is shown that the subproblems arising in most well-known methods for constrained optimization, whether line search methods or trust region methods, can be expressed as similar systems of linear equations. All these linear systems can be viewed as approximations to the linear system derived from the Lagrange-Newton method. Some properties of these linear systems are analyzed.
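The reference system that all of these approximate, the Lagrange-Newton (KKT) system for an equality-constrained problem, is easy to write down and solve directly in the small dense case; the sketch below assumes a symmetric Hessian block and a full-rank constraint Jacobian, with an illustrative example instance.

```python
import numpy as np

def lagrange_newton_step(H, g, A, c):
    """Sketch of the Lagrange-Newton (KKT) linear system for
        min f(x)  s.t.  h(x) = 0,
    at the current iterate: H is the Hessian of the Lagrangian, g the
    objective gradient, A the Jacobian of h, and c = h(x).  The step d
    and multiplier estimate lambda solve
        [ H  A' ] [ d      ]   [ -g ]
        [ A  0  ] [ lambda ] = [ -c ]."""
    n, m = H.shape[0], A.shape[0]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = -np.concatenate([g, c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]          # step d, multiplier estimate lambda

# Example: equality-constrained quadratic in R^2 with x1 + x2 = 1
d, lam = lagrange_newton_step(H=np.eye(2), g=np.array([1.0, -1.0]),
                              A=np.array([[1.0, 1.0]]), c=np.array([-1.0]))
```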

