Similar References
20 similar references found (search time: 140 ms).
1.
In this paper, we first study a nonsmooth steepest descent method for nonsmooth functions defined on a Hilbert space and establish the corresponding algorithm via proximal subgradients. We then use this algorithm to find stationary points of functions satisfying prox-regularity and Lipschitz continuity. As an application, the established algorithm is used to search for the minimizer of a lower semicontinuous convex function on a finite-dimensional space. A convergence theorem, which extends and improves the existing convergence result for twice continuously differentiable convex functions, is also presented.
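For orientation only (this is not the authors' Hilbert-space algorithm), a minimal Python sketch of subgradient-based steepest descent on a convex nonsmooth function, where the convex subdifferential coincides with the proximal subdifferential; the test function, the diminishing stepsize 1/k, and the iteration count are illustrative assumptions:

```python
import numpy as np

# A convex nonsmooth test function: f(x) = ||x||_1 + 0.5 * ||x||^2.
def f(x):
    return np.abs(x).sum() + 0.5 * x @ x

def subgradient(x):
    # sign(x) selects one element of the subdifferential of ||.||_1 at 0.
    return np.sign(x) + x

x = np.array([3.0, -2.0, 1.5])
for k in range(1, 201):
    g = subgradient(x)
    x = x - (1.0 / k) * g          # diminishing stepsize, a standard choice

print(f(x))  # approaches the minimum f(0) = 0
```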

2.
A Proximal Bundle Method Based on Approximate Subgradients
In this paper a proximal bundle method is introduced that is capable of dealing with approximate subgradients. No further knowledge of the approximation quality (such as explicit knowledge or controllability of error bounds) is required for proving convergence. It is shown that every accumulation point of the sequence of iterates generated by the proposed algorithm is a well-defined approximate solution of the exact minimization problem. In the case of exact subgradients the algorithm behaves like well-established proximal bundle methods. Numerical tests support the theoretical findings.
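A toy one-dimensional sketch of a proximal bundle iteration driven by an inexact oracle, in the spirit of the abstract above; the fixed error bound DELTA (which is never driven to zero), the proximal parameter t, the descent fraction m, and the use of scipy's bounded scalar minimizer for the one-dimensional proximal subproblem are all illustrative assumptions, not the paper's construction:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
DELTA = 1e-2  # fixed bound on function/subgradient errors (does not vanish)

def oracle(x):
    """Inexact oracle for f(x) = |x|: value and subgradient with bounded noise."""
    return abs(x) + DELTA * rng.uniform(-1, 1), np.sign(x) + DELTA * rng.uniform(-1, 1)

bundle = []                   # cutting planes (x_i, f_i, g_i)
xc, t, m = 5.0, 1.0, 0.1      # proximal center, proximal parameter, descent fraction
fc, gc = oracle(xc)
bundle.append((xc, fc, gc))

for _ in range(30):
    model = lambda y: max(fi + gi * (y - xi) for xi, fi, gi in bundle)
    cand = minimize_scalar(lambda y: model(y) + (y - xc) ** 2 / (2 * t),
                           bounds=(xc - 10, xc + 10), method="bounded").x
    fy, gy = oracle(cand)
    predicted = fc - (model(cand) + (cand - xc) ** 2 / (2 * t))
    if fy <= fc - m * predicted:      # serious step: move the proximal center
        xc, fc = cand, fy
    bundle.append((cand, fy, gy))     # a null step still enriches the model

print(xc)  # an approximate minimizer of |x|, accurate up to O(DELTA)
```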

3.
In recent years the O–D matrix adjustment problem using link counts on a traffic network, modelled by means of a static user equilibrium approach, has been formulated advantageously as a bilevel program. The algorithms developed to solve the problem contain heuristic components to a lesser or greater degree. In this paper two new algorithmic alternatives are presented for this problem. The first is a hybrid proximal point-steepest descent scheme based on a development by Codina for approximating the steepest descent direction of the upper-level function; the second, developed by García and Marín, consists of solving a sequence of simplified bilevel programs. To highlight the characteristics of the two methods, a set of test problems has been solved in conjunction with other well-known methods, such as the methods of Spiess, Chan, and Yang, as well as an adaptation of Wolfe's conjugate directions method for non-differentiable optimization, providing a better perspective on their advantages and tradeoffs.

4.
The purpose of this paper is to introduce an iterative algorithm, combining the hybrid viscosity approximation method and the hybrid steepest-descent method, for solving proximal split feasibility problems, and to obtain strong convergence of the sequences generated by the iterative scheme under certain weaker conditions in Hilbert spaces. Our results improve many recent results on this topic in the literature. Several numerical experiments are presented to illustrate the effectiveness of the proposed algorithm; these numerical results show that our method is computationally easier and faster than previously known results for the proximal split feasibility problem.

5.
We present an approximate bundle method for solving nonsmooth equilibrium problems. An inexact cutting-plane linearization of the objective function is established at each iteration; it is an approximation produced by an oracle that gives inaccurate values for the functions and subgradients. The errors in function and subgradient evaluations are bounded, and they need not vanish in the limit. A descent criterion adapted to the setting of inexact oracles is put forward to measure the current descent behavior. The sequence generated by the algorithm converges to approximately critical points of the equilibrium problem under proper assumptions. As a special illustration, the proposed algorithm is applied to generalized variational inequality problems. Numerical experiments show that the algorithm is effective in solving nonsmooth equilibrium problems.

6.
Z. Akbari, Optimization, 2017, 66(9): 1519-1529
In this paper, we present a nonsmooth trust region method for solving linearly constrained optimization problems with a locally Lipschitz objective function. Using an approximation of the steepest descent direction, a quadratic approximation of the objective function is constructed. The null-space technique is applied to handle the constraints of the quadratic subproblem. Next, the CG-Steihaug method is applied to solve the resulting quadratic model subject only to the trust region constraint. Finally, the convergence of the presented algorithm is proved. The algorithm is implemented in the MATLAB environment and the numerical results are reported.
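The CG-Steihaug solver mentioned above is a standard building block; a minimal Python sketch of it on an unconstrained quadratic model follows (the paper applies it after a null-space reduction, which is omitted here; steihaug_cg and _boundary_tau are hypothetical helper names):

```python
import numpy as np

def steihaug_cg(g, B, delta, tol=1e-8, max_iter=None):
    """Approximately minimize g.T p + 0.5 p.T B p subject to ||p|| <= delta."""
    n = g.size
    z, r, d = np.zeros(n), g.copy(), -g.copy()
    if np.linalg.norm(r) < tol:
        return z
    for _ in range(max_iter or 2 * n):
        Bd = B @ d
        dBd = d @ Bd
        if dBd <= 0:                        # negative curvature: go to the boundary
            return z + _boundary_tau(z, d, delta) * d
        alpha = (r @ r) / dBd
        z_new = z + alpha * d
        if np.linalg.norm(z_new) >= delta:  # step leaves the region: clip to boundary
            return z + _boundary_tau(z, d, delta) * d
        r_new = r + alpha * Bd
        if np.linalg.norm(r_new) < tol:
            return z_new
        beta = (r_new @ r_new) / (r @ r)
        z, r, d = z_new, r_new, -r_new + beta * d
    return z

def _boundary_tau(z, d, delta):
    # Positive root of ||z + tau * d||^2 = delta^2.
    a, b, c = d @ d, z @ d, z @ z - delta**2
    return (-b + np.sqrt(b * b - a * c)) / a

B = np.array([[4.0, 1.0], [1.0, -2.0]])   # indefinite Hessian approximation
g = np.array([1.0, 1.0])
print(steihaug_cg(g, B, delta=1.0))        # a boundary step, since B is indefinite
```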

7.
Recently, it has been observed that several nondifferentiable minimization problems share the property that the question of whether a given point is optimal can be answered by solving a certain bounded least squares problem. If the resulting residual vector, r, vanishes then the current point is optimal. Otherwise, r is a descent direction; in fact, as we shall see, r points in the steepest descent direction. On the other hand, it is customary to characterize the optimality conditions (and the steepest descent vector) of a convex nondifferentiable function via its subdifferential. Also, it is well known that optimality conditions are usually related to theorems of the alternative. One aim of our survey is to clarify the relations between these subjects. Another aim is to introduce a new type of theorem of the alternative. The new theorems characterize the optimality conditions of discrete l1 approximation problems and multifacility location problems, and provide a simple way to obtain the subdifferential and the steepest descent direction in such problems. A further objective of our review is to demonstrate that the ability to compute the steepest descent direction at degenerate dead points opens a new way for handling degeneracy in active set methods.
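A hedged sketch of this mechanism for the discrete l1 case: under the usual subdifferential characterization of ||Ax - b||_1, a min-norm subgradient can be obtained from a bounded least squares problem over the degenerate (zero-residual) rows, and its negation is the steepest descent direction. scipy's lsq_linear stands in here for whatever bounded least squares solver the surveyed methods assume; the tolerances and test data are illustrative:

```python
import numpy as np
from scipy.optimize import lsq_linear

def l1_descent_direction(A, b, x, tol=1e-10):
    """Min-norm subgradient of f(x) = ||Ax - b||_1 via bounded least squares;
    the negated minimizer is the steepest descent direction."""
    r = A @ x - b
    zero = np.abs(r) <= tol                    # degenerate ("active") rows
    c = A[~zero].T @ np.sign(r[~zero])         # fixed part of the subgradient
    if zero.any():
        # min over u in [-1,1]^|Z| of || A_Z.T u + c ||_2  (bounded least squares)
        res = lsq_linear(A[zero].T, -c, bounds=(-1.0, 1.0))
        g = A[zero].T @ res.x + c
    else:
        g = c
    return None if np.linalg.norm(g) <= 1e-8 else -g   # None means x is optimal

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, -1.0, 0.0])
print(l1_descent_direction(A, b, np.zeros(2)))  # a descent direction at x = 0
```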

8.
In this article, we first propose a feasible steepest descent direction for box-constrained optimization. Using this direction and the recently developed modified PRP method, we propose a subspace modified PRP method for box-constrained optimization. Under appropriate conditions, we show that the method is globally convergent. Numerical experiments are presented using box-constrained problems from the CUTEr test problem library.

9.
We propose a variant of the numerical method of steepest descent for oscillatory integrals by using a low-cost explicit polynomial approximation of the paths of steepest descent. A loss of asymptotic order is observed, but in the most relevant cases the overall asymptotic order remains higher than a truncated asymptotic expansion at similar computational effort. Theoretical results based on number theory underpinning the mechanisms behind this effect are presented.
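For a linear phase the steepest descent paths are available exactly, so the path-deformation mechanism can be shown without the paper's polynomial path approximation; a sketch, with Gauss-Laguerre quadrature absorbing the exp(-p) decay on each path (the integrand, omega, and node count are illustrative):

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

def nsd_linear_phase(f, a, b, omega, n=10):
    """Numerical steepest descent for I = int_a^b f(x) exp(i*omega*x) dx.
    For the linear phase the descent paths are exactly x = endpoint + i*p/omega,
    and Gauss-Laguerre quadrature handles the exp(-p) decay on each path."""
    p, w = laggauss(n)
    endpoint = lambda e: np.exp(1j * omega * e) * np.sum(w * f(e + 1j * p / omega))
    return (1j / omega) * (endpoint(a) - endpoint(b))

omega = 50.0
approx = nsd_linear_phase(np.cos, 0.0, 1.0, omega)
# Closed form of int_0^1 cos(x) exp(i*omega*x) dx for comparison:
exact = (np.exp(1j * (omega + 1)) - 1) / (2j * (omega + 1)) \
      + (np.exp(1j * (omega - 1)) - 1) / (2j * (omega - 1))
print(abs(approx - exact))   # small, and it shrinks as omega grows
```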

10.
In this paper, using the residual vector and an approximation to the steepest descent direction of the norm function, we develop a norm descent spectral method for solving symmetric nonlinear equations. The method, based on nonmonotone line search techniques, is shown to be globally convergent. A specific implementation of the method is given which exploits the recently developed cyclic Barzilai–Borwein (CBB) scheme for unconstrained optimization. Preliminary numerical results indicate that the method is promising.
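A simplified sketch of the ingredients named above: the residual -F(x) as an approximate norm descent direction, a spectral (Barzilai-Borwein style) stepsize, and a nonmonotone acceptance test. The paper's exact line search conditions and the cyclic BB rule are not reproduced; the parameters and the test system are illustrative:

```python
import numpy as np

def spectral_residual(F, x0, max_iter=500, gamma=1e-4, tol=1e-8):
    """Norm descent iteration d_k = -F(x_k) with a spectral stepsize and a
    simple nonmonotone acceptance test on the norm function ||F||^2."""
    x, sigma = x0.copy(), 1.0
    merit = lambda z: np.dot(F(z), F(z))
    f_hist = [merit(x)]
    for _ in range(max_iter):
        Fx = F(x)
        if np.sqrt(f_hist[-1]) < tol:
            break
        lam = sigma
        while True:                          # backtrack to nonmonotone decrease
            x_new = x - lam * Fx             # residual as descent direction
            if merit(x_new) <= max(f_hist[-10:]) - gamma * lam**2 * (Fx @ Fx):
                break
            lam *= 0.5
        s, y = x_new - x, F(x_new) - Fx
        sigma = abs((s @ s) / (s @ y))       # spectral coefficient for next step
        x = x_new
        f_hist.append(merit(x))
    return x

# Symmetric nonlinear system: F(x) = Ax + x**3 has a symmetric Jacobian.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
F = lambda x: A @ x + x**3
print(spectral_residual(F, np.array([1.0, 1.0])))  # converges to the root 0
```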

11.
A new derivative-free method is developed for solving unconstrained nonsmooth optimization problems. This method is based on the notion of a discrete gradient. It is demonstrated that discrete gradients can be used to approximate subgradients of a broad class of nonsmooth functions. It is also shown that discrete gradients can be applied to find descent directions of nonsmooth functions. Preliminary results of numerical experiments with unconstrained nonsmooth optimization problems, as well as a comparison of the proposed method with the nonsmooth optimization solver DNLP from CONOPT-GAMS and the derivative-free optimization solver CONDOR, are presented.

12.
Ukrainian Mathematical Journal - The asymptotic rate of convergence of the method of steepest descent is regarded as a function of the initial approximation. We study the level set of this rate,...

13.
In this paper we present certain characteristic conditions for the convergence of the generalized steepest descent approximation process to a zero of a generalized strongly accretive operator defined on a uniformly smooth Banach space. Our study is based on an important result of Reich [S. Reich, An iterative procedure for constructing zeros of accretive sets in Banach spaces, Nonlinear Anal. 2 (1978) 85–92], and the given results extend and improve some earlier results, including the steepest descent approximation method.

14.
In this paper, we introduce a new method for solving nonconvex nonsmooth optimization problems. It uses quasisecants, which are subgradients computed in some neighborhood of a point. The proposed method contains simple procedures for finding descent directions and for solving line search subproblems. The convergence of the method is studied and preliminary results of numerical experiments are presented. A comparison of the proposed method with the subgradient and proximal bundle methods is given using results of numerical experiments.

15.
16.
In this paper, we first propose a constrained optimization reformulation of the \(L_{1/2}\) regularization problem. The constrained problem minimizes a smooth function subject to some quadratic constraints and nonnegativity constraints. A good property of the constrained problem is that at any feasible point, the set of all feasible directions coincides with the set of all linearized feasible directions; consequently, a KKT point always exists. Moreover, we show that the KKT points are the same as the stationary points of the \(L_{1/2}\) regularization problem. Based on the constrained optimization reformulation, we propose a feasible descent direction method, called the feasible steepest descent method, for solving the unconstrained \(L_{1/2}\) regularization problem. It is an extension of the steepest descent method for smooth unconstrained optimization. The feasible steepest descent direction has an explicit expression and the method is easy to implement. Under very mild conditions, we show that the proposed method is globally convergent. We apply the proposed method to practical problems arising from compressed sensing. The results show its efficiency.

17.
Molecular similarity indices measure the similarity between two molecules. Computing the optimal similarity index is a hard global optimization problem. Since the objective function value is very hard to compute and its gradient vector is usually not available, previous research has been based on non-gradient algorithms such as random search and the simplex method. In a recent paper, McMahon and King introduced a Gaussian approximation so that both the function value and the gradient vector can be computed analytically. They then proposed a steepest descent algorithm for computing the optimal similarity index of small molecules. In this paper, we consider a similar problem. Instead of computing atom-based derivatives, we directly compute the derivatives with respect to the six free variables describing the relative positions of the two molecules. We show that both the function value and gradient vector can be computed analytically and apply the more advanced BFGS method in addition to the steepest descent algorithm. The algorithms are applied to compute the similarities among the 20 amino acids and biomolecules like proteins. Our computational results show that our algorithm achieves more accuracy than previous methods and has a 6-fold speedup over the steepest descent method.
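A rough sketch of the six-variable formulation with a Gaussian-overlap similarity. Unlike the paper, the gradient here is left to finite differences inside scipy's BFGS; the common exponent ALPHA, the Carbo-style normalization, and the toy coordinates are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

ALPHA = 1.0  # common Gaussian exponent (an illustrative simplification)

def overlap(P, Q):
    """Analytic overlap of two sets of identical Gaussians exp(-ALPHA*||r-c||^2)."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return (np.pi / (2 * ALPHA)) ** 1.5 * np.exp(-ALPHA * d2 / 2).sum()

def neg_similarity(theta, P, Q):
    """Carbo-style index as a function of the six rigid-body variables:
    three Euler angles and a translation applied to Q."""
    Qm = Rotation.from_euler("xyz", theta[:3]).apply(Q) + theta[3:]
    return -overlap(P, Qm) / np.sqrt(overlap(P, P) * overlap(Qm, Qm))

# Toy "molecules": Q is a rotated, shifted copy of P, so the optimum is ~1.
P = np.array([[0.0, 0, 0], [1.5, 0, 0], [0, 1.2, 0]])
Q = Rotation.from_euler("z", 0.4).apply(P) + np.array([0.3, -0.2, 0.1])

res = minimize(neg_similarity, np.zeros(6), args=(P, Q), method="BFGS")
print(-res.fun)  # similarity index close to 1 at the recovered alignment
```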

18.
In this paper, we introduce a novel projected steepest descent iterative method with a frozen derivative. The classical projected steepest descent iterative method requires computing the derivative of the nonlinear operator at each iterate; the method of this paper requires computing it only at an initial point. We exhibit the convergence analysis of our method by assuming conditional stability of the inverse problem on a convex and compact set. Further, by assuming conditional stability on a nested family of convex and compact subsets, we develop a multi-level method. To enhance the accuracy of approximation between neighboring levels, we couple it with the growth of the stability constants. This, along with a suitable discrepancy criterion, ensures that the algorithm proceeds from level to level and terminates within finitely many steps. Finally, we discuss an inverse problem to which our methods are applicable.
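A minimal finite-dimensional sketch of the frozen-derivative idea, with exact data and a box projection; the paper's multi-level scheme, stability constants, and discrepancy stopping rule are not reproduced, and the forward map, relaxation omega, and iteration count are illustrative:

```python
import numpy as np

def frozen_projected_descent(F, dF, x0, y, project, omega=0.5, iters=200):
    """Projected Landweber-type iteration in which the derivative is
    evaluated only once, at x0, and reused in every step."""
    J0 = dF(x0)                      # the single derivative evaluation
    x = x0.copy()
    for _ in range(iters):
        x = project(x - omega * J0.T @ (F(x) - y))
    return x

# Mildly nonlinear forward map with an easy Jacobian.
F  = lambda x: x + 0.1 * np.sin(x)
dF = lambda x: np.eye(x.size) + 0.1 * np.diag(np.cos(x))
project = lambda x: np.clip(x, 0.0, 1.0)     # the convex constraint set [0,1]^n

x_true = np.array([0.2, 0.7, 1.0])
y = F(x_true)                                 # exact data, for illustration
print(frozen_projected_descent(F, dF, np.full(3, 0.5), y, project))
```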

19.
A NEW STEPSIZE FOR THE STEEPEST DESCENT METHOD
The steepest descent method is the simplest gradient method for optimization. It is well known that exact line searches along each steepest descent direction may converge very slowly. An important result was given by Barzilai and Borwein: their stepsize is proved to be superlinearly convergent for convex quadratics in two-dimensional space, and performs quite well for high-dimensional problems. The BB method is not monotone, so it is not easy to generalize to general nonlinear functions unless certain nonmonotone techniques are applied. Therefore, it is very desirable to find stepsize formulae which enable fast convergence and possess the monotone property. Such a stepsize αk for the steepest descent method is suggested in this paper. An algorithm using this new stepsize in even iterations and exact line search in odd iterations is proposed. Numerical results are presented, which confirm that the new method can find the exact solution within 3 iterations for two-dimensional problems. The new method is very efficient for small-scale problems. A modified version of the new method is also presented, where the new technique for selecting the stepsize is used after every two exact line searches. The modified algorithm is comparable to the Barzilai–Borwein method for large-scale problems and better for small-scale problems.
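The abstract does not state the new stepsize formula, so the sketch below only contrasts the two ingredients it combines: exact line search on a convex quadratic, and a BB stepsize used on alternate iterations (the BB formula stands in for the paper's new stepsize; the test matrix is illustrative):

```python
import numpy as np

A = np.diag([1.0, 10.0, 100.0])    # ill-conditioned convex quadratic 0.5*x'Ax
b = np.zeros(3)
grad = lambda x: A @ x - b

def steepest_descent(x, iters, bb_on_even=False):
    g_prev, x_prev = None, None
    for k in range(iters):
        g = grad(x)
        if bb_on_even and k % 2 == 0 and g_prev is not None:
            s, y = x - x_prev, g - g_prev          # BB stepsize s's / s'y
            alpha = (s @ s) / (s @ y)
        else:
            alpha = (g @ g) / (g @ A @ g)          # exact line search on a quadratic
        x_prev, g_prev = x, g
        x = x - alpha * g
    return np.linalg.norm(grad(x))

x0 = np.array([1.0, 1.0, 1.0])
print(steepest_descent(x0.copy(), 40))                    # pure exact line search: slow
print(steepest_descent(x0.copy(), 40, bb_on_even=True))   # alternation: much faster
```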

20.
This paper presents a descent direction method for finding extrema of locally Lipschitz functions defined on Riemannian manifolds. To this end we define a set-valued mapping \(x\rightarrow \partial _{\varepsilon } f(x)\), called the ε-subdifferential, which is an approximation of the Clarke subdifferential and which generalizes the Goldstein ε-subdifferential to the Riemannian setting. Using this notion we construct a steepest descent method where the descent directions are computed by a computable approximation of the ε-subdifferential. We establish the global convergence of our algorithm to a stationary point. Numerical experiments illustrate our results.
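A Euclidean simplification (gradient-sampling flavour) of the construction above: approximate the ε-subdifferential by gradients sampled in an ε-ball and take the negated min-norm element of their convex hull as the descent direction. The sample size m, the value of ε, the SLSQP solve of the min-norm problem, and the test function are illustrative choices, not the Riemannian algorithm itself:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def f(x):       # nonsmooth, locally Lipschitz test function
    return abs(x[0]) + 2.0 * abs(x[1])

def grad(x):    # gradient wherever f is differentiable (almost everywhere)
    return np.array([np.sign(x[0]), 2.0 * np.sign(x[1])])

def eps_descent_direction(x, eps=0.1, m=12):
    """Min-norm element of the convex hull of gradients sampled in an eps-ball:
    a computable approximation of the Goldstein eps-subdifferential."""
    G = np.array([grad(x + eps * u / np.linalg.norm(u))
                  for u in rng.normal(size=(m, x.size))])
    # min || G.T @ lam ||^2 over the unit simplex, solved by SLSQP
    res = minimize(lambda lam: np.sum((G.T @ lam) ** 2), np.full(m, 1.0 / m),
                   bounds=[(0, 1)] * m,
                   constraints={"type": "eq", "fun": lambda lam: lam.sum() - 1.0})
    g = G.T @ res.x
    return None if np.linalg.norm(g) < 1e-6 else -g   # None: eps-stationary

x = np.array([2.0, -1.5])
for _ in range(60):
    d = eps_descent_direction(x)
    if d is None:
        break
    t = 1.0
    while t > 1e-12 and f(x + t * d) > f(x) - 1e-4 * t * (d @ d):
        t *= 0.5                                      # simple Armijo backtracking
    x = x + t * d
print(x, f(x))   # near the minimizer (0, 0)
```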
