20 similar documents found; search took 15 ms.
1.
Alexander J. Zaslavski 《Numerical Functional Analysis & Optimization》2013,34(12):1399-1412
In this article, we study convergence of the extragradient method for constrained convex minimization problems in a Hilbert space. Our goal is to obtain an ε-approximate solution of the problem in the presence of computational errors, where ε is a given positive number. Most results known in the literature establish convergence of optimization algorithms when computational errors are summable. In this article, the convergence of the extragradient method for solving convex minimization problems is established for nonsummable computational errors. We show that the extragradient method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant.
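For context, the extragradient step can be sketched on a hypothetical box-constrained quadratic. This is a minimal illustration of the Korpelevich-style prediction-correction iteration only; the names (`extragradient`, `project_box`), the test problem, and the constant stepsize are illustrative assumptions, and the paper's computational-error analysis is not modeled.

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^n.
    return np.clip(x, lo, hi)

def extragradient(grad, project, x0, step=0.1, iters=500):
    # Extragradient iteration:
    #   y_k     = P(x_k - t * grad(x_k))   (prediction)
    #   x_{k+1} = P(x_k - t * grad(y_k))   (correction)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project(x - step * grad(x))
        x = project(x - step * grad(y))
    return x

# Illustrative problem: minimize f(x) = ||x - a||^2 over the unit box.
a = np.array([1.5, -0.3])
x_star = extragradient(lambda x: 2.0 * (x - a), project_box, np.zeros(2))
# The constrained minimizer is the projection of a onto the box: (1.0, 0.0).
```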
2.
In this article, we propose a strongly convergent variant on the projected subgradient method for constrained convex minimization problems in Hilbert spaces. The advantage of the proposed method is that it converges strongly when the problem has solutions, without additional assumptions. The method also has the following desirable property: the sequence converges to the solution of the problem which lies closest to the initial iterate.
3.
K. C. Kiwiel 《Journal of Optimization Theory and Applications》1998,96(1):159-173
We replace orthogonal projections in the Polyak subgradient method for nonnegatively constrained minimization with entropic projections, thus obtaining an interior-point subgradient method. Inexact entropic projections are quite cheap. Global convergence of the resulting method is established.
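The flavor of an entropic (multiplicative) update can be sketched as follows. This is a simplified illustration with a constant stepsize on a hypothetical quadratic, not Kiwiel's inexact entropic projection scheme with Polyak steplengths; the point is only that the multiplicative step keeps iterates strictly inside the nonnegative orthant, so no orthogonal projection is needed.

```python
import numpy as np

def entropic_subgradient(grad, x0, step=0.05, iters=2000):
    # Multiplicative (entropic) update: x_i <- x_i * exp(-t * g_i).
    # This is the mirror-descent step induced by the entropy distance;
    # every iterate stays strictly positive by construction.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x * np.exp(-step * grad(x))
    return x

# Illustrative problem: minimize f(x) = ||x - a||^2 over x >= 0,
# with a = (2, -1).  The constrained minimizer is (2, 0); the second
# coordinate is driven to the boundary multiplicatively.
a = np.array([2.0, -1.0])
x = entropic_subgradient(lambda x: 2.0 * (x - a), np.ones(2))
```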
4.
Combining a subgradient selection technique with ideas from the cutting-plane method and the method of strongly sub-feasible directions, this paper proposes a strongly sub-feasible direction algorithm for constrained optimization problems with nonsmooth objective functions. By designing a new subproblem for finding the search direction and constructing a new line search, the algorithm not only accepts infeasible initial points but also preserves the strong sub-feasibility of the iterates, while avoiding undue growth of the objective value outside the feasible region. The algorithm is globally convergent, and preliminary numerical experiments show that it is stable and effective.
5.
Based on the gradient sampling technique, we present a subgradient algorithm to solve the nondifferentiable convex optimization problem with an extended real-valued objective function. A feature of our algorithm is the approximation of subgradient at a point via random sampling of (relative) gradients at nearby points, and then taking convex combinations of these (relative) gradients. We prove that our algorithm converges to an optimal solution with probability 1. Numerical results demonstrate that our algorithm performs favorably compared with existing subgradient algorithms on applications considered.
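The sampling idea behind such methods can be sketched in a few lines: approximate a subgradient at x by a convex combination (here, a plain uniform average) of gradients sampled at nearby points where the function is differentiable. The names and the l1-norm example below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_subgradient(grad, x, radius=1e-2, m=20):
    # Average gradients at m random points within `radius` of x.
    # A convex f is differentiable almost everywhere (Rademacher),
    # so the sampled points have gradients with probability 1.
    pts = x + radius * rng.uniform(-1.0, 1.0, size=(m, x.size))
    return np.mean([grad(p) for p in pts], axis=0)

# Illustration: f(x) = ||x||_1, differentiable wherever no
# coordinate is zero, with gradient sign(x) there.
x = np.array([0.5, -0.3])
g = sampled_subgradient(lambda p: np.sign(p), x)
# Away from the kinks, the sampled average equals sign(x) = (1, -1).
```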
6.
《Numerical Functional Analysis & Optimization》2013,34(7-8):593-617
This paper presents an algorithm, named the adaptive projected subgradient method, that can asymptotically minimize a certain sequence of nonnegative convex functions over a closed convex set in a real Hilbert space. The proposed algorithm is a natural extension of Polyak's subgradient algorithm, for nonsmooth convex optimization with a fixed target value, to the case where the convex objective itself keeps changing throughout the process. The main theorem, showing the strong convergence of the algorithm as well as the asymptotic optimality of the sequence it generates, can serve as a unified guiding principle for a wide range of set-theoretic adaptive filtering schemes for nonstationary random processes. These include not only existing adaptive filtering techniques, e.g., NLMS, Projected NLMS, Constrained NLMS, APA, and the adaptive parallel outer projection algorithm, but also new techniques, e.g., the adaptive parallel min-max projection algorithm and their embedded-constraint versions. Numerical examples show that the proposed techniques are well suited for robust adaptive signal processing problems.
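As a concrete instance of the schemes this framework recovers, here is a minimal NLMS sketch: each update is a relaxed projection onto the hyperplane of filters consistent with the current sample. The system-identification setup, filter order, and stepsize below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def nlms(x, d, mu=0.5, order=4, eps=1e-8):
    # Normalized LMS: w <- w + mu * e * u / ||u||^2, a relaxed
    # projection of w onto {v : d[n] = v . u} at each sample.
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor (newest first)
        e = d[n] - w @ u                   # a priori error
        w += mu * e * u / (u @ u + eps)
    return w

# Identify a known FIR filter from noiseless input-output data.
h = np.array([0.5, -0.2, 0.1, 0.05])
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:len(x)]            # d[n] = sum_k h[k] x[n-k]
w = nlms(x, d)                            # w converges to h
```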
7.
In this paper we continue the study of the subgradient method for nonsmooth convex constrained minimization problems in a uniformly convex and uniformly smooth Banach space. We consider the case when the stepsizes satisfy ∑_{k=1}^∞ α_k = ∞ and lim_{k→∞} α_k = 0.
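A minimal sketch of a projected subgradient method with divergent-series stepsizes (∑ α_k = ∞, α_k → 0) in the familiar Euclidean setting; the one-dimensional test problem and the 1/(k+1) schedule are illustrative assumptions, not the Banach-space method of the paper.

```python
def projected_subgradient(f, subgrad, project, x0, iters=5000):
    # Stepsizes a_k = 1/(k+1): nonsummable and tending to zero.
    # Track the best iterate, since f need not decrease monotonically
    # along subgradient steps.
    x = float(x0)
    best_x, best_f = x, f(x)
    for k in range(iters):
        a_k = 1.0 / (k + 1)
        x = project(x - a_k * subgrad(x))
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x

# Illustration: minimize f(x) = |x - 0.7| over [0, 1] (minimizer 0.7).
f = lambda x: abs(x - 0.7)
g = lambda x: 1.0 if x > 0.7 else -1.0      # a subgradient of f
proj = lambda x: min(max(x, 0.0), 1.0)
x_best = projected_subgradient(f, g, proj, 0.0)
```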
8.
A projected subgradient method is developed for solving a class of set-valued mixed variational inequalities, where the underlying mapping need not be Lipschitz continuous. Under suitable conditions, it is proved that the sequence generated by the method converges strongly to the unique solution of the problem in a Hilbert space.
9.
We propose an implementable BFGS method for solving a nonsmooth convex optimization problem by converting the original objective function into a once continuously differentiable function by way of the Moreau–Yosida regularization. The proposed method makes use of approximate function and gradient values of the Moreau–Yosida regularization instead of the corresponding exact values. We prove the global convergence of the proposed method under the assumption of strong convexity of the objective function.
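The smoothing device used here rests on a standard identity: the Moreau–Yosida regularization F_μ is continuously differentiable with ∇F_μ(x) = (x − prox_{μf}(x))/μ. A minimal sketch for f(t) = |t|, whose proximal operator is soft-thresholding (the example function is illustrative, not the paper's test problem):

```python
import numpy as np

def prox_abs(x, mu):
    # Proximal operator of f(t) = |t|: soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def moreau_grad(prox, x, mu):
    # The Moreau-Yosida regularization F_mu is C^1 with
    #   grad F_mu(x) = (x - prox_{mu f}(x)) / mu.
    return (x - prox(x, mu)) / mu

g = moreau_grad(prox_abs, 2.0, 1.0)   # far from 0: matches sign(2) = 1
h = moreau_grad(prox_abs, 0.3, 1.0)   # near 0: Huber-like slope x/mu = 0.3
```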
10.
11.
This paper considers the minimization of the composite function F(x) = f(x) + h(c(x)) and presents Broyden-type methods in which the update matrices approximate a "one-sided projected Hessian" of the Lagrangian function; these methods are Q-superlinearly convergent. Two update algorithms are described, and it is proved that, under reasonable conditions, both possess local two-step Q-superlinear convergence.
12.
Exact Algorithm for the Surrogate Dual of an Integer Programming Problem: Subgradient Method Approach
One of the critical issues in the effective use of surrogate relaxation for an integer programming problem is how to solve the surrogate dual within a reasonable amount of computational time. In this paper, we present an exact and efficient algorithm for solving the surrogate dual of an integer programming problem. Our algorithm follows the approach which Sarin et al. (Ref. 8) introduced in their surrogate dual multiplier search algorithms. The algorithms of Sarin et al. adopt an ad-hoc stopping rule in solving subproblems and cannot guarantee the optimality of the solutions obtained. Our work shows that this heuristic nature can actually be eliminated. Convergence proof for our algorithm is provided. Computational results show the practical applicability of our algorithm. 相似文献
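To make the object being optimized concrete: the surrogate relaxation aggregates the constraints Ax ≤ b into the single constraint u·(Ax) ≤ u·b for a multiplier u ≥ 0, and the surrogate dual seeks the u minimizing the resulting bound. A brute-force sketch on a hypothetical three-variable binary instance (the instance and multipliers below are invented for illustration; the paper's algorithm searches multipliers exactly rather than by enumeration):

```python
from itertools import product
import numpy as np

def surrogate_value(c, A, b, u):
    # Surrogate relaxation of  max c.x  s.t. Ax <= b, x binary:
    # keep only the aggregated constraint u.(Ax) <= u.b, then solve
    # by enumeration (fine for this toy instance).
    best = -np.inf
    for bits in product([0, 1], repeat=len(c)):
        x = np.array(bits)
        if u @ (A @ x) <= u @ b:
            best = max(best, c @ x)
    return best

# Toy instance with two knapsack-type constraints; IP optimum is 6.
c = np.array([6.0, 5.0, 4.0])
A = np.array([[2.0, 1.0, 3.0],
              [1.0, 3.0, 1.0]])
b = np.array([3.0, 3.0])

loose = surrogate_value(c, A, b, np.array([1.0, 0.0]))  # weaker bound
tight = surrogate_value(c, A, b, np.array([1.0, 1.0]))  # best bound here
# Every surrogate value upper-bounds the IP optimum; the surrogate
# dual picks the multiplier u that minimizes this bound.
```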
13.
Chaoli Yao 《Numerical Functional Analysis & Optimization》2019,40(11):1242-1267
This article focuses on a conjugate duality for a constrained vector optimization in the framework of abstract convexity. With the aid of the extension for the notion of infimum to the vector space, a set-valued topical function and the corresponding conjugate map, subdifferentials are presented. Following this, a conjugate dual problem is proposed via this conjugate map. Then, inspired by some ideas in the image space analysis, some equivalent characterizations of the zero duality gap are established by virtue of the subdifferentials.
14.
The global optimization problem is considered under the assumption that the objective function is convex with respect to some of the variables. A finite subgradient algorithm for the search of an ε-optimal solution is proposed. Results of numerical experiments are presented.
15.
On a Generalized Subgradient Inequality for Convex Functionals in Banach Spaces
Building on earlier work [1,2], this paper uses the subgradient inequality for convex functionals as a tool to extend Jensen's inequality to convex functionals on Banach spaces, derives a generalized Jensen inequality of Bochner-integral type in Banach spaces, and gives some applications in probability theory on Banach spaces, thereby generalizing the work of [3-6].
16.
In this paper we propose an extension of the so-called Iri-Imai method to solve constrained convex programming problems. The original Iri-Imai method is designed for linear programs and assumes that the optimal objective value of the optimization problem is known in advance. Zhang (Ref. 9) extends the method to constrained convex optimization, but the optimal value is still assumed to be known in advance. In our new extension this last requirement on the optimal value is relaxed; instead, only a lower bound of the optimal value is needed. Our approach uses a multiplicative barrier function for the problem with a univariate parameter that represents an estimated optimum value of the original optimization problem. An optimal solution to the original problem can be traced by minimizing the multiplicative barrier function. Due to the convexity of this barrier function, the optimal objective value as well as the optimal solution of the original problem are sought iteratively by applying Newton's method to the multiplicative barrier function. A new formulation of the multiplicative barrier function is further developed to acquire computational tractability and efficiency. Numerical results are presented to show the efficiency of the new method.
His research was supported by Hong Kong RGC Earmarked Grant CUHK4233/01E. Communicated by Z. Q. Luo.
17.
One of the main drawbacks of the subgradient method is the tuning process to determine the sequence of steplengths. In this paper, the radar subgradient method, a heuristic method designed to compute a tuning-free subgradient steplength, is geometrically motivated and algebraically deduced. The unit commitment problem, which arises in the electrical engineering field, is used to compare the performance of the subgradient method with the new radar subgradient method.
Communicated by M. Simaan. This research was supported by the Spanish Government, CICYT Grant TAP99-1075-C02-01. We acknowledge the technical support from Logilab (HEC, University of Geneva) and especially the valuable remarks and suggestions of the referees.
18.
In this paper, we propose an easily implementable algorithm in Hilbert spaces for solving some classical monotone variational inequality problem over the set of solutions of mixed variational inequalities. The proposed method combines two strategies: projected subgradient techniques and viscosity-type approximations. The involved stepsizes are controlled and a strong convergence theorem is established under very classical assumptions. Our algorithm can be applied for instance to some mathematical programs with complementarity constraints.
19.
An Interval Algorithm for Nonsmooth Global Optimization Problems
Using interval tools and a special derivative of the objective function, this paper proposes an interval algorithm for nonsmooth global optimization problems. The proposed method can find all global minimizers and the global minimum value of the problem; both theoretical analysis and numerical results show that the method is effective.
20.
Disjoint frames, introduced by Han and Larson in [4], are an interesting class of frames in Hilbert spaces. In this article, we use disjoint frames to construct frames. In particular, we obtain some conditions for linear combinations of frames to be frames, where the coefficients in the combination may be operators. Our results generalize the corresponding results obtained by Han and Larson. Finally, we provide some examples to illustrate our constructions.