Similar Documents (20 results)
1.
A coordinate gradient descent method for nonsmooth separable minimization
We consider the problem of minimizing the sum of a smooth function and a separable convex function. This problem includes as special cases bound-constrained optimization and smooth optimization with ℓ1-regularization. We propose a (block) coordinate gradient descent method for solving this class of nonsmooth separable problems. We establish global convergence and, under a local Lipschitzian error bound assumption, linear convergence for this method. The local Lipschitzian error bound holds under assumptions analogous to those for constrained smooth optimization, e.g., the convex function is polyhedral and the smooth function is (nonconvex) quadratic or is the composition of a strongly convex function with a linear mapping. We report numerical experience with solving the ℓ1-regularization of unconstrained optimization problems from Moré et al. in ACM Trans. Math. Softw. 7, 17–41, 1981 and from the CUTEr set (Gould and Orban in ACM Trans. Math. Softw. 29, 373–394, 2003). Comparison with L-BFGS-B and MINOS, applied to a reformulation of the ℓ1-regularized problem as a bound-constrained optimization problem, is also reported.
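
A coordinate step for a smooth-plus-ℓ1 objective reduces to a gradient step on the smooth part followed by soft-thresholding. The sketch below illustrates this idea under illustrative assumptions (cyclic order, a fixed step size, a lasso-type example); it is not the paper's block selection rule or line search.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*|.|: the coordinatewise l1 prox."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def coord_gradient_descent_l1(grad_f, x0, lam, step, sweeps=500):
    """Cyclic coordinate gradient descent for min f(x) + lam*||x||_1:
    each coordinate takes a gradient step on the smooth part f,
    then applies the prox of the separable l1 term."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(sweeps):
        for i in range(x.size):
            g = grad_f(x)[i]   # partial derivative of f (full gradient recomputed for clarity)
            x[i] = soft_threshold(x[i] - step * g, step * lam)
    return x

# Illustrative lasso-type instance: f(x) = 0.5*||Ax - b||^2
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
grad = lambda x: A.T @ (A @ x - b)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of grad
print(coord_gradient_descent_l1(grad, np.zeros(5), lam=0.1, step=step))
```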

2.
3.
Under some assumptions, the solution set of a nonlinear complementarity problem coincides with the set of local minima of the corresponding minimization problem. This paper uses a family of new merit functions to deal with the nonlinear complementarity problem where the underlying function is assumed to be a continuous but not necessarily locally Lipschitzian map, and gives a descent algorithm for solving the nonsmooth continuous complementarity problem. In addition, the global convergence of the derivative-free descent algorithm is proved.
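
One classical member of such a merit-function family is the Fischer-Burmeister function, whose squared residual vanishes exactly at NCP solutions. The sketch below pairs it with a simple derivative-free coordinate search; the merit function choice, the search scheme, and the affine example are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister NCP function: fb(a,b)=0 iff a>=0, b>=0, a*b=0."""
    return np.sqrt(a**2 + b**2) - a - b

def merit(x, F):
    """Residual-type merit function: global minima with value 0 solve the NCP."""
    return 0.5 * np.sum(fb(x, F(x))**2)

def deriv_free_descent(F, x0, h=0.5, h_min=1e-8, max_iter=10000):
    """Derivative-free coordinate search on the merit function:
    try +/- h moves per coordinate, shrink h when no move improves."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for s in (h, -h):
                trial = x.copy(); trial[i] += s
                if merit(trial, F) < merit(x, F):
                    x, improved = trial, True
        if not improved:
            h *= 0.5
            if h < h_min:
                break
    return x

# Illustrative affine NCP: F(x) = M x + q with M positive definite
M = np.array([[2.0, 1.0], [1.0, 2.0]]); q = np.array([-1.0, -1.0])
print(deriv_free_descent(lambda x: M @ x + q, np.zeros(2)))  # ~ (1/3, 1/3)
```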

4.
A descent method with respect to the gap function is formulated and justified for the nonsmooth equilibrium problem. It uses an inexact line search procedure of the Armijo type. The proposed method converges under the same assumptions as methods with exact line search.
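
An inexact Armijo-type line search accepts the first trial step achieving a sufficient-decrease fraction of the predicted slope, which is typically far cheaper than an exact minimization along the direction. A minimal sketch (the parameter values are illustrative):

```python
def armijo_backtracking(phi, dphi0, alpha0=1.0, beta=0.5, sigma=1e-4):
    """Inexact line search of the Armijo type: shrink alpha until
    phi(alpha) <= phi(0) + sigma * alpha * phi'(0); dphi0 = phi'(0) < 0."""
    phi0, alpha = phi(0.0), alpha0
    while phi(alpha) > phi0 + sigma * alpha * dphi0:
        alpha *= beta
        if alpha < 1e-16:   # safeguard against non-descent directions
            break
    return alpha

# Usage: search along d = -f'(x) for f(t) = (t - 2)^2 at x = 0
f = lambda t: (t - 2.0)**2
x, d = 0.0, 4.0                       # d = -f'(0), so phi'(0) = -16
alpha = armijo_backtracking(lambda a: f(x + a * d), dphi0=-16.0)
print(alpha, f(x + alpha * d))
```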

5.
We examine the minimization of an N-dimensional real-valued function using the coordinate descent method. We impose conditions on the function under which the method converges; furthermore, by specializing our class of functions, we obtain the rate of convergence. We also present some examples from classical approximation theory where this method applies. A computational example is also given.
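
In its basic form, coordinate descent minimizes along one coordinate at a time while holding the others fixed. A minimal sketch using an exact one-dimensional minimizer (the solver choice and the quadratic example are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def coordinate_descent(f, x0, sweeps=50):
    """Cyclic coordinate descent: minimize f over one coordinate at a
    time, holding the others fixed, and sweep repeatedly."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(sweeps):
        for i in range(x.size):
            line = lambda t: f(np.concatenate([x[:i], [t], x[i+1:]]))
            x[i] = minimize_scalar(line).x   # exact 1-D minimization
    return x

# Illustrative smooth convex quadratic in R^3
f = lambda x: (x[0]-1)**2 + 2*(x[1]+0.5)**2 + x[2]**2 + 0.5*x[0]*x[1]
print(coordinate_descent(f, np.zeros(3)))
```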

6.
A descent algorithm for nonsmooth convex optimization
This paper presents a new descent algorithm for minimizing a convex function which is not necessarily differentiable. The algorithm can be implemented and may be considered a modification of the ε-subgradient algorithm and Lemaréchal's descent algorithm. Our algorithm is also seen to be closely related to the proximal point algorithm applied to convex minimization problems. A convergence theorem for the algorithm is established under the assumption that the objective function is bounded from below. Limited computational experience with the algorithm is also reported.
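
For orientation, the plain subgradient method with diminishing steps is the simplest ancestor of ε-subgradient and bundle schemes: unlike the descent algorithm above, it is not monotone, so one tracks the best iterate. A minimal sketch on an ℓ1 example (all names and parameters are illustrative):

```python
import numpy as np

def subgradient_method(f, subgrad, x0, iters=2000):
    """Plain subgradient method with diminishing steps and best-point
    tracking; shown only to illustrate iteration on nondifferentiable f."""
    x = np.asarray(x0, dtype=float).copy()
    xbest, fbest = x.copy(), f(x)
    for k in range(1, iters + 1):
        x = x - (1.0 / k) * subgrad(x)   # step 1/k: not a descent step
        if f(x) < fbest:
            xbest, fbest = x.copy(), f(x)
    return xbest

c = np.array([1.0, -2.0, 0.5])
f = lambda x: np.sum(np.abs(x - c))
sg = lambda x: np.sign(x - c)            # a subgradient of the l1 distance
print(subgradient_method(f, sg, np.zeros(3)))   # approaches c
```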

7.
We present a novel randomized block coordinate descent method for the minimization of a convex composite objective function. The method uses (approximate) partial second-order (curvature) information, so that the algorithm performance is more robust when applied to highly nonseparable or ill-conditioned problems. We call the method Flexible Coordinate Descent (FCD). At each iteration of FCD, a block of coordinates is sampled randomly, a quadratic model is formed about that block, and the model is minimized approximately/inexactly to determine the search direction. An inexpensive line search is then employed to ensure a monotonic decrease in the objective function and acceptance of large step sizes. We present several high-probability iteration complexity results to show that convergence of FCD is guaranteed theoretically. Finally, we present numerical results on large-scale problems to demonstrate the practical performance of the method.
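
A stripped-down version of this pattern (sample a block, build a quadratic model over it, minimize the model, then backtrack for monotone decrease) might look as follows; the diagonal finite-difference curvature estimate and all parameters are illustrative simplifications, not FCD's actual model or sampling rules.

```python
import numpy as np

def fcd_sketch(f, grad, x0, block_size=2, iters=300, seed=0):
    """Random-block descent with a diagonal curvature model and backtracking
    line search: a simplified sketch in the spirit of FCD, not the method itself."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    n = x.size
    for _ in range(iters):
        S = rng.choice(n, size=block_size, replace=False)   # random block
        g = grad(x)[S]
        h = np.ones_like(g)
        for j, i in enumerate(S):        # crude diagonal curvature estimate
            e = np.zeros(n); e[i] = 1e-4
            h[j] = max((grad(x + e)[i] - g[j]) / 1e-4, 1e-8)
        d = -g / h                       # minimizer of the diagonal model
        t, fx = 1.0, f(x)
        while t > 1e-12:                 # inexpensive monotone line search
            xt = x.copy(); xt[S] += t * d
            if f(xt) <= fx - 1e-4 * t * (g @ (g / h)):
                x = xt
                break
            t *= 0.5
    return x

# Illustrative strongly convex quadratic
rng = np.random.default_rng(1)
Mx = rng.standard_normal((8, 8)); Q = Mx.T @ Mx + np.eye(8)
print(fcd_sketch(lambda x: 0.5 * x @ Q @ x, lambda x: Q @ x, np.ones(8)))
```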

8.
We study the mini-batch stochastic block coordinate descent algorithm (mSBD) for a class of problems widespread in machine learning: structured stochastic optimization problems, where "structured" means that the feasible region of the problem has a block structure and the nonsmooth regularization part of the objective function is separable across the variable blocks. We give the basic mSBD and a variant of it, corresponding to solving noncomposite and composite problems respectively. For noncomposite problems, we analyze the convergence of the algorithm without the assumption of uniformly bounded gradient variance; for composite problems, we obtain convergence without the usual Lipschitz gradient continuity assumption. Finally, we verify the effectiveness of mSBD by numerical experiments.
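
A bare-bones sketch of the noncomposite variant: each iteration draws a mini-batch of samples and a random coordinate block, then applies a stochastic gradient step restricted to that block. All names, the fixed step size, and the least-squares example are illustrative assumptions.

```python
import numpy as np

def msbd_sketch(grad_sample, x0, n_blocks, batch_size, n_data,
                step=0.01, iters=2000, seed=0):
    """Mini-batch stochastic block coordinate descent (illustrative sketch):
    grad_sample(x, i) is the gradient of the i-th sample's loss at x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    blocks = np.array_split(np.arange(x.size), n_blocks)
    for _ in range(iters):
        batch = rng.choice(n_data, size=batch_size, replace=False)
        B = blocks[rng.integers(n_blocks)]               # random block
        g = np.mean([grad_sample(x, i) for i in batch], axis=0)
        x[B] -= step * g[B]                              # update block only
    return x

# Illustrative least-squares instance over 100 samples
rng = np.random.default_rng(2)
A, b = rng.standard_normal((100, 10)), rng.standard_normal(100)
gs = lambda x, i: A[i] * (A[i] @ x - b[i])               # per-sample gradient
print(msbd_sketch(gs, np.zeros(10), n_blocks=5, batch_size=10, n_data=100))
```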

9.
We propose a descent method via gap functions for solving nonsmooth variational inequalities with a locally Lipschitz operator. Assuming a monotone (not necessarily strongly monotone) operator and a bounded domain, we show that the method with an Armijo-type line search is globally convergent. Finally, we report some numerical experiments. This work has been supported by the National Research Program PRIN/2005017083 "Innovative Problems and Methods in Nonlinear Optimization".
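
One standard construction of this kind is the Fukushima-type regularized gap function, whose minimizer map gives a descent direction. The sketch below treats a box-constrained variational inequality with an Armijo-type search; this is an illustration of the gap-function idea under simplifying assumptions, not the paper's exact scheme.

```python
import numpy as np

def gap_descent_box(F, lo, hi, x0, alpha=1.0, iters=200):
    """Descent on the regularized gap function of VI(F, C), C a box:
    y(x) = proj_C(x - F(x)/alpha), d = y(x) - x; d vanishes at solutions."""
    proj = lambda z: np.clip(z, lo, hi)

    def gap(z):
        y = proj(z - F(z) / alpha)
        return F(z) @ (z - y) - 0.5 * alpha * np.sum((y - z) ** 2)

    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        y = proj(x - F(x) / alpha)
        d = y - x
        if np.linalg.norm(d) < 1e-10:
            break
        t, g0 = 1.0, gap(x)
        while gap(x + t * d) > g0 - 1e-4 * t * (d @ d) and t > 1e-12:
            t *= 0.5                 # Armijo-type inexact line search
        x = x + t * d
    return x

# Illustrative affine monotone operator on the box [0, 2]^2
M = np.array([[2.0, 1.0], [1.0, 2.0]]); q = np.array([-3.0, -1.0])
print(gap_descent_box(lambda z: M @ z + q, 0.0, 2.0, np.ones(2)))  # ~ (1.5, 0)
```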

10.
We propose and analyze a new parallel coordinate descent method—NSync—in which at each iteration a random subset of coordinates is updated, in parallel, allowing the subsets to be chosen according to an arbitrary probability law. This is the first method of this type. We derive convergence rates under a strong convexity assumption and comment on how to assign probabilities to the sets to optimize the bound. The complexity and practical performance of the method can outperform its uniform variant by an order of magnitude. Surprisingly, the strategy of updating a single randomly selected coordinate per iteration—with optimal probabilities—may require fewer iterations, both in theory and practice, than the strategy of updating all coordinates at every iteration.
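
The core idea (update a random subset of coordinates, drawn under an arbitrary law, with steps scaled by per-coordinate curvature constants) can be sketched as below. Independent per-coordinate inclusion probabilities and the separable quadratic example are illustrative assumptions, and the constants v[i] stand in for the expected-separable-overapproximation parameters of the analysis.

```python
import numpy as np

def nsync_sketch(grad, x0, p, v, iters=500, seed=0):
    """Arbitrary-sampling coordinate descent sketch: coordinate i joins the
    update set independently with probability p[i]; v[i] scales its step."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        S = rng.random(x.size) < p       # random subset, arbitrary law
        if not S.any():
            continue
        g = grad(x)
        x[S] -= g[S] / v[S]              # parallel update of sampled coords
    return x

# Illustrative separable quadratic f(x) = 0.5 x'Qx with diagonal Q
Q = np.diag([1.0, 4.0, 9.0])
p = np.array([0.2, 0.3, 0.5])            # nonuniform inclusion probabilities
v = np.diag(Q)                           # coordinate Lipschitz constants
print(nsync_sketch(lambda x: Q @ x, np.ones(3), p, v))   # ~ 0
```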

11.
For solving large-scale linear least-squares problems by iterative methods, we introduce an effective probability criterion for selecting the working columns from the coefficient matrix and construct a greedy randomized coordinate descent method. It is proved that this method converges to the unique solution of the linear least-squares problem when its coefficient matrix is of full rank, with the number of rows being no less than the number of columns. Numerical results show that the greedy randomized coordinate descent method is more efficient than the randomized coordinate descent method.
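
For min ||Ax − b||², a coordinate step updates one entry of x using the corresponding column and the current residual; a greedy randomized rule biases the column choice toward large normal-equation residuals. The sketch below uses probabilities simply proportional to those residuals, a simplification of the paper's greedy criterion:

```python
import numpy as np

def grcd_sketch(A, b, iters=2000, seed=0):
    """Greedy randomized coordinate descent sketch for min ||Ax - b||^2:
    sample column j with probability proportional to its squared
    normal-equation residual, then minimize exactly along x_j."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x, r = np.zeros(n), b.astype(float).copy()     # r = b - A x
    col_norms = np.sum(A**2, axis=0)
    for _ in range(iters):
        s = (A.T @ r)**2 / col_norms               # per-column residuals
        if s.sum() < 1e-28:
            break                                  # normal equations solved
        j = rng.choice(n, p=s / s.sum())
        delta = (A[:, j] @ r) / col_norms[j]       # exact 1-D minimizer
        x[j] += delta
        r -= delta * A[:, j]
    return x

A = np.random.default_rng(3).standard_normal((50, 10))
b = A @ np.ones(10)                                # consistent system
print(np.linalg.norm(A @ grcd_sketch(A, b) - b))   # close to zero
```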

12.
The aim of this paper is to propose a new multiple subgradient descent bundle method for solving unconstrained convex nonsmooth multiobjective optimization problems. Contrary to many existing multiobjective optimization methods, our method treats the objective functions as they are, without employing a scalarization in a classical sense. The main idea is to find a descent direction for every objective function separately by utilizing the proximal bundle approach, and then to form a common descent direction for all the objective functions. In addition, we prove that the method is convergent and finds weakly Pareto optimal solutions. Finally, some numerical experiments are considered.

13.
In this paper we present an application of the cyclic coordinate descent algorithm to multidimensional variational problems with constrained speed, the physical motivation of the problem being the optimization of hydrothermal systems. The proof of convergence of the sequence generated by the algorithm is based on an appropriate adaptation of Zangwill's global convergence theorem. We also include an algorithm for the formal construction of the descent sequence (the solution of an optimal control problem), which we approximate using an adaptation of the Euler method in conjunction with a procedure inspired by the shooting method.

14.
In a bounded domain, we study elliptic boundary-value problems for equations and systems of the Douglis-Nirenberg structure in complete scales of Banach spaces. The boundary of the domain contains conic points, edges, etc. A theorem on local increase in the smoothness of generalized solutions and a theorem on a complete collection of isomorphisms are proved. Applications are considered. It is shown that the results obtained are also valid for transmission problems, nonlocal elliptic problems, elliptic problems with a parameter, and parabolic problems. Translated from Ukrainskii Matematicheskii Zhurnal, Vol. 47, No. 5, pp. 701–709, May, 1995. This research was partially supported by a grant of the American Mathematical Society and by the Ukrainian State Committee on Science and Technology.

15.
In this paper we propose a variant of the random coordinate descent method for solving linearly constrained convex optimization problems with composite objective functions. If the smooth part of the objective function has a Lipschitz continuous gradient, then we prove that our method obtains an $\epsilon$-optimal solution in $\mathcal{O}(n^{2}/\epsilon)$ iterations, where n is the number of blocks. For the class of problems with cheap coordinate derivatives we show that the new method is faster than methods based on full-gradient information. An analysis of the rate of convergence in probability is also provided. For strongly convex functions our method converges linearly. Extensive numerical tests confirm that on very large problems, our method is much more numerically efficient than methods based on full-gradient information.
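
With a single coupling constraint such as sum(x) = const, a two-coordinate variant keeps iterates feasible by moving a random pair in opposite directions along e_i − e_j. A smooth, fixed-step sketch (the constraint form, step size, and example are illustrative assumptions; the paper also handles composite terms):

```python
import numpy as np

def rcd_pair_sketch(grad, x0, step, iters=3000, seed=0):
    """Random 2-coordinate descent preserving sum(x) = const: move a random
    pair (i, j) in opposite directions along e_i - e_j (smooth sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    n = x.size
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        g = grad(x)
        d = g[i] - g[j]        # directional derivative along e_i - e_j
        x[i] -= step * d
        x[j] += step * d       # feasibility: sum(x) is unchanged
    return x

# Illustrative projection problem: min 0.5*||x - c||^2 s.t. sum(x) = 0
c = np.array([3.0, 1.0, -1.0])
x = rcd_pair_sketch(lambda x: x - c, np.zeros(3), step=0.25)
print(x, x.sum())              # ~ c - mean(c), sum stays 0
```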

16.
We present a new algorithm for large-scale unconstrained minimization that, at each iteration, minimizes, approximately, a quadratic model of the objective function plus a regularization term, not necessarily based on a norm. We prove convergence assuming only gradient continuity and complexity results assuming Lipschitz conditions. For solving the subproblems in the case of regularizations based on the 3-norm, we introduce a new method that quickly obtains the approximate solutions required by the theory. We present numerical experiments.
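
As one concrete reading of the subproblem, the step minimizes a local quadratic model plus a power-of-norm regularizer; with exponent three this resembles a 3-norm regularization. The sketch below solves a tiny instance with a general-purpose solver rather than the paper's specialized subproblem method; all names and the instance are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def regularized_step(g, B, sigma, p=3):
    """Approximate minimizer of m(d) = g.d + 0.5*d'Bd + (sigma/p)*sum|d_i|^p;
    for p=3 the last term is a 3-norm-type regularizer. Uses a generic
    solver purely for illustration."""
    m = lambda d: g @ d + 0.5 * d @ B @ d + (sigma / p) * np.sum(np.abs(d)**p)
    return minimize(m, np.zeros_like(g)).x

g = np.array([1.0, -2.0])
B = np.array([[2.0, 0.0], [0.0, 0.5]])
print(regularized_step(g, B, sigma=1.0))
```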

17.
In this paper, we provide a unified iteration complexity analysis for a family of general block coordinate descent methods, covering popular methods such as block coordinate gradient descent and block coordinate proximal gradient, under various coordinate update rules. We unify these algorithms under the so-called block successive upper-bound minimization (BSUM) framework, and show that for a broad class of multi-block nonsmooth convex problems, all algorithms covered by the BSUM framework achieve a global sublinear iteration complexity of $\mathcal{O}(1/r)$, where r is the iteration index. Moreover, for the case of block coordinate minimization where each block is minimized exactly, we establish the sublinear convergence rate of $\mathcal{O}(1/r)$ without a per-block strong convexity assumption.
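
The unifying abstraction is simple: at each step, minimize over one block a surrogate that upper-bounds the objective and is tight at the current point, which forces monotone decrease. A minimal sketch where the surrogate's block minimizer is supplied as a callable; the quadratic example with a Lipschitz bound, which recovers a block gradient step, is an illustrative assumption.

```python
import numpy as np

def bsum_sketch(upper_min, x0, n_blocks, sweeps=100):
    """BSUM sketch: cycle through blocks and replace each block with the
    minimizer of a locally tight upper bound of the objective in that block.
    upper_min(x, B) returns argmin over block B of a bound u(.; x) that is
    tight at x and dominates the objective, so every sweep is monotone."""
    x = np.asarray(x0, dtype=float).copy()
    blocks = np.array_split(np.arange(x.size), n_blocks)
    for _ in range(sweeps):
        for B in blocks:
            x[B] = upper_min(x, B)
    return x

# Example: quadratic objective with the Lipschitz upper bound per block,
# whose block minimizer is a coordinate gradient step
Q = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, 1.0])
L = np.linalg.norm(Q, 2)                          # global Lipschitz constant
upper = lambda x, B: x[B] - (Q @ x - b)[B] / L    # argmin of the bound
print(bsum_sketch(upper, np.zeros(2), n_blocks=2))  # ~ Q^{-1} b = (0.2, 0.4)
```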

18.
In this paper, a method is developed for solving nonsmooth nonconvex minimization problems. This method extends the classical BFGS framework. First, we generalize the Wolfe conditions for locally Lipschitz functions and prove that this generalization is well defined. Then, a line search algorithm is presented to find a step length satisfying the generalized Wolfe conditions. Next, the Goldstein ε-subgradient is approximated by an iterative method and a descent direction is computed using a positive definite matrix. This matrix is updated using the BFGS method. Finally, a minimization algorithm based on the BFGS method is described. The algorithm is implemented in MATLAB and numerical results are reported.
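
The quasi-Newton ingredient is the standard BFGS update of the inverse Hessian approximation, with the usual safeguard of skipping updates that violate the curvature condition (here y is a difference of subgradients rather than gradients). A minimal sketch:

```python
import numpy as np

def bfgs_update(H, s, y):
    """BFGS update of the inverse Hessian approximation H, given the step
    s = x_new - x_old and the (sub)gradient difference y = g_new - g_old.
    Skips the update when the curvature condition s'y > 0 fails."""
    sy = s @ y
    if sy <= 1e-12:
        return H
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# Usage: one update starting from H = I
H = bfgs_update(np.eye(2), np.array([0.1, 0.0]), np.array([0.2, 0.1]))
print(H)
```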

19.
Multiobjective DC optimization problems arise naturally, for example, in data classification and cluster analysis, playing a crucial role in data mining. In this paper, we propose a new multiobjective double bundle method designed for nonsmooth multiobjective optimization problems whose objective and constraint functions can be presented as a difference of two convex (DC) functions. The method is of the descent type and generalizes the ideas of the double bundle method to multiobjective and constrained problems. We utilize a special cutting plane model tailored to the DC improvement function such that both the convex and the concave behaviour of the function is captured. The method is proved to be finitely convergent to a weakly Pareto stationary point under mild assumptions. Finally, we consider some numerical experiments and compare the solutions produced by our method with those of a method designed for general nonconvex multiobjective problems, in order to validate the use of a method aimed specially at DC objectives instead of a general nonconvex method.

20.