1.
Nonlinear complementarity as unconstrained and constrained minimization   (total citations: 11; self-citations: 0; citations by others: 11)
The nonlinear complementarity problem is cast as an unconstrained minimization problem that is obtained from an augmented Lagrangian formulation. The dimensionality of the unconstrained problem is the same as that of the original problem, and the penalty parameter need only be greater than one. Another feature of the unconstrained problem is that it has global minima of zero at precisely all the solution points of the complementarity problem without any monotonicity assumption. If the mapping of the complementarity problem is differentiable, then so is the objective of the unconstrained problem, and its gradient vanishes at all solution points of the complementarity problem. Under assumptions of nondegeneracy and linear independence of gradients of active constraints at a complementarity problem solution, the corresponding global unconstrained minimum point is locally unique. A Wolfe dual to a standard constrained optimization problem associated with the nonlinear complementarity problem is also formulated under a monotonicity and differentiability assumption. Most of the standard duality results are established even though the underlying constrained optimization problem may be nonconvex. Preliminary numerical tests on two small nonmonotone problems from the published literature converged to degenerate or nondegenerate solutions from all attempted starting points in 7 to 28 steps of a BFGS quasi-Newton method for unconstrained optimization.

Dedicated to Phil Wolfe on his 65th birthday, in appreciation of his major contributions to mathematical programming. This material is based on research supported by Air Force Office of Scientific Research Grant AFOSR-89-0410 and National Science Foundation Grant CCR-9101801.
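For illustration, here is a minimal NumPy sketch of one merit function with exactly the properties described above (nonnegative everywhere, zero precisely at NCP solutions, penalty parameter greater than one): the formula commonly known as the implicit Lagrangian. The function name and the small affine test problem are invented for the example; consult the paper for the precise formulation.

```python
import numpy as np

def implicit_lagrangian(x, F, alpha=2.0):
    """Merit function for the NCP: find x >= 0 with F(x) >= 0 and x.F(x) = 0.
    Nonnegative everywhere; vanishes exactly at NCP solutions when alpha > 1."""
    Fx = F(x)
    plus = lambda z: np.maximum(z, 0.0)   # componentwise max(., 0)
    return x @ Fx + (
        np.sum(plus(x - alpha * Fx) ** 2) - np.sum(x ** 2)
        + np.sum(plus(Fx - alpha * x) ** 2) - np.sum(Fx ** 2)
    ) / (2.0 * alpha)

# Example: F(x) = Mx + q for a small (not necessarily monotone) M; the merit
# function can then be minimized by any unconstrained method such as BFGS.
M = np.array([[2.0, 1.0], [0.0, 1.0]])
q = np.array([-3.0, -1.0])
F = lambda x: M @ x + q
print(implicit_lagrangian(np.array([1.0, 1.0]), F))
```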
2.
Convergence Analysis of Perturbed Feasible Descent Methods   (total citations: 5; self-citations: 0; citations by others: 5)
We develop a general approach to convergence analysis of feasible descent methods in the presence of perturbations. The important novel feature of our analysis is that perturbations need not tend to zero in the limit. In that case, standard convergence analysis techniques are not applicable. Therefore, a new approach is needed. We show that, in the presence of perturbations, a certain $\epsilon$-approximate solution can be obtained, where $\epsilon$ depends linearly on the level of perturbations. Applications to the gradient projection, proximal minimization, extragradient and incremental gradient algorithms are described.
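The gradient projection application mentioned above can be sketched as follows, assuming for concreteness a box-constrained quadratic and a Gaussian perturbation model; the stepsize, noise level, and problem data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def perturbed_gradient_projection(grad, project, x0, step=0.1,
                                  noise=0.01, iters=500, seed=0):
    """Gradient projection with a perturbation that does NOT vanish:
    iterates settle into a neighborhood of the solution whose radius
    scales with the perturbation level `noise`."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x) + noise * rng.standard_normal(x.shape)  # persistent error
        x = project(x - step * g)
    return x

# Example: minimize ||x - c||^2 over the box [0, 1]^2.
c = np.array([0.3, 1.5])
grad = lambda x: 2.0 * (x - c)
project = lambda x: np.clip(x, 0.0, 1.0)
print(perturbed_gradient_projection(grad, project, np.zeros(2)))
```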
3.
Incremental Gradient Algorithms with Stepsizes Bounded Away from Zero   (total citations: 4; self-citations: 0; citations by others: 4)
We consider the class of incremental gradient methods for minimizing a sum of continuously differentiable functions. An important novel feature of our analysis is that the stepsizes are kept bounded away from zero. We derive the first convergence results of any kind for this computationally important case. In particular, we show that a certain $\epsilon$-approximate solution can be obtained and establish the linear dependence of $\epsilon$ on the stepsize limit. Incremental gradient methods are particularly well-suited for large neural network training problems where obtaining an approximate solution is typically sufficient and is often preferable to computing an exact solution. Thus, in the context of neural networks, the approach presented here is related to the principle of tolerant training. Our results justify numerous stepsize rules that were derived on the basis of extensive numerical experimentation but for which no theoretical analysis was previously available. In addition, convergence to (exact) stationary points is established when the gradient satisfies a certain growth property.
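A hedged sketch of the method class under discussion, with a constant stepsize bounded away from zero; the quadratic components and the stepsize value are invented for illustration.

```python
import numpy as np

def incremental_gradient(grads, x0, step=0.05, epochs=100):
    """Incremental gradient for min sum_i f_i(x) with a CONSTANT stepsize:
    one component-gradient step per inner iteration, cycling through
    the components."""
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        for g in grads:          # one pass = one cycle over all components
            x = x - step * g(x)
    return x

# Example: f_i(x) = 0.5 * ||x - a_i||^2, so the exact minimizer is mean(a_i).
# With a constant stepsize the iterates reach a neighborhood of the minimizer
# whose size is proportional to the stepsize, as the analysis predicts.
a = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 2.0])]
grads = [(lambda x, ai=ai: x - ai) for ai in a]
print(incremental_gradient(grads, np.zeros(2)), np.mean(a, axis=0))
```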
4.
Descent methods with linesearch in the presence of perturbations   (total citations: 3; self-citations: 0; citations by others: 3)
We consider the class of descent algorithms for unconstrained optimization with an Armijo-type stepsize rule in the case when the gradient of the objective function is computed inexactly. An important novel feature in our theoretical analysis is that perturbations associated with the gradient are not assumed to be relatively small or to tend to zero in the limit (as a practical matter, we expect them to be reasonably small, so that a meaningful approximate solution can be obtained). This feature makes our analysis applicable to various difficult problems encountered in practice. We propose a modified Armijo-type rule for computing the stepsize which guarantees that the algorithm obtains a reasonable approximate solution. Furthermore, if perturbations are small relative to the size of the gradient, then our algorithm retains all the standard convergence properties of descent methods.
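Below is a sketch of backtracking descent driven by an inexact gradient oracle. The sufficient-decrease test used here is a standard Armijo condition measured against the inexact gradient; it is not claimed to be the paper's modified rule, and the constants and test problem are assumptions.

```python
import numpy as np

def armijo_descent_inexact(f, grad_inexact, x0, sigma=1e-4, beta=0.5,
                           t0=1.0, iters=200, t_min=1e-8):
    """Descent with an Armijo-type backtracking linesearch where the
    search direction comes from an INEXACT gradient oracle."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_inexact(x)                     # true gradient + perturbation
        t = t0
        while t > t_min and f(x - t * g) > f(x) - sigma * t * (g @ g):
            t *= beta                           # backtrack
        if t <= t_min:                          # no acceptable step: stop
            break
        x = x - t * g
    return x

# Example: quadratic objective with a fixed (non-vanishing) gradient bias;
# the method reaches a neighborhood of the true minimizer of size ~ bias.
f = lambda x: 0.5 * x @ x
grad_inexact = lambda x: x + 0.05 * np.ones_like(x)
print(armijo_descent_inexact(f, grad_inexact, np.array([3.0, -2.0])))
```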
5.
We present a unified convergence rate analysis of iterative methods for solving the variational inequality problem. Our results are based on certain error bounds; they subsume and extend the linear and sublinear rates of convergence established in several previous studies. We also derive a new error bound for $\gamma$-strictly monotone variational inequalities. The class of algorithms covered by our analysis is fairly broad. It includes some classical methods for variational inequalities, e.g., the extragradient, matrix splitting, and proximal point methods. For these methods, our analysis gives estimates not only for linear convergence (which had been studied extensively), but also sublinear, depending on the properties of the solution. In addition, our framework includes a number of algorithms to which previous studies are not applicable, such as the infeasible projection methods, a separation-projection method, (inexact) hybrid proximal point methods, and some splitting techniques. Finally, our analysis covers certain feasible descent methods of optimization, for which similar convergence rate estimates have been recently obtained by Luo [14]. Received: April 17, 2001 / Accepted: December 10, 2002 / Published online: April 10, 2003. Research of the author is partially supported by CNPq Grant 200734/95–6, by PRONEX-Optimization, and by FAPERJ. Key words: variational inequality, error bound, rate of convergence. Mathematics Subject Classification (2000): 90C30, 90C33, 65K05.
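Of the classical methods covered by the analysis, the extragradient method is the simplest to state. A minimal sketch, assuming a Lipschitz monotone operator, a closed convex set with a cheap projection, and an illustrative affine test problem:

```python
import numpy as np

def extragradient(F, project, x0, tau=0.1, iters=1000):
    """Korpelevich extragradient method for the variational inequality
    VI(F, C): find x* in C with <F(x*), x - x*> >= 0 for all x in C.
    Each iteration uses one predictor and one corrector projection."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project(x - tau * F(x))     # predictor step
        x = project(x - tau * F(y))     # corrector uses F at the predictor
    return x

# Example: monotone (but non-symmetric) affine operator on the box [0, 2]^2;
# the skew part of A means this VI is not a gradient system.
A = np.array([[1.0, 2.0], [-2.0, 1.0]])
b = np.array([-1.0, 1.0])
F = lambda x: A @ x + b
project = lambda x: np.clip(x, 0.0, 2.0)
print(extragradient(F, project, np.zeros(2)))
```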
6.
Error Stability Properties of Generalized Gradient-Type Algorithms   (total citations: 3; self-citations: 0; citations by others: 3)
We present a unified framework for convergence analysis of generalized subgradient-type algorithms in the presence of perturbations. A principal novel feature of our analysis is that perturbations need not tend to zero in the limit. It is established that the iterates of the algorithms are attracted, in a certain sense, to an $\epsilon$-stationary set of the problem, where $\epsilon$ depends on the magnitude of perturbations. Characterization of the attraction sets is given in the general (nonsmooth and nonconvex) case. The results are further strengthened for convex, weakly sharp, and strongly convex problems. Our analysis extends and unifies previously known results on convergence and stability properties of gradient and subgradient methods, including their incremental, parallel, and heavy ball modifications.
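A minimal sketch of the setting: a subgradient method in which every subgradient is corrupted by a bounded perturbation that never vanishes. The noise model, stepsize, and nonsmooth test function are assumptions made for the example.

```python
import numpy as np

def perturbed_subgradient(subgrad, x0, step=0.01, noise=0.05,
                          iters=2000, seed=0):
    """Subgradient method with a bounded, non-vanishing perturbation.
    Iterates are attracted to a set of epsilon-stationary points,
    with epsilon tied to the perturbation magnitude `noise`."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = subgrad(x) + noise * rng.standard_normal(x.shape)
        x = x - step * d
    return x

# Example: f(x) = ||x||_1 (nonsmooth, minimized at the origin);
# sign(x) is a valid subgradient.
subgrad = lambda x: np.sign(x)
print(perturbed_subgradient(subgrad, np.array([2.0, -1.5])))
```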
7.
We consider the proximal form of a bundle algorithm for minimizing a nonsmooth convex function, assuming that the function and subgradient values are evaluated approximately. We show how these approximations should be controlled in order to satisfy the desired optimality tolerance. For example, this is relevant in the context of Lagrangian relaxation, where obtaining exact information about the function and subgradient values involves solving exactly a certain optimization problem, which can be relatively costly (and, as we show, in any case unnecessary). We show that approximation with some finite precision is sufficient in this setting and give an explicit characterization of this precision. Alternatively, our result can be viewed as a stability analysis of standard proximal bundle methods, as it answers the following question: for a given approximation error, what kind of approximate solution can be obtained, and how does it depend on the magnitude of the perturbation?
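A schematic proximal bundle loop with an inexact oracle is sketched below. The subproblem solver, descent test, and stopping rule are simplified stand-ins chosen to keep the example short; they are not the paper's precise rules, and the oracle's artificial error term is an invention for the demo.

```python
import numpy as np
from scipy.optimize import minimize

def proximal_bundle(oracle, x0, mu=1.0, m=0.1, tol=1e-4, iters=50):
    """Minimal proximal bundle method with an INEXACT oracle returning
    (f_approx, g_approx).  Keeps a bundle of cutting planes, solves the
    prox subproblem over the model, and takes a serious step only when
    the actual decrease is at least a fraction m of the predicted one."""
    x = np.asarray(x0, dtype=float)
    fx, gx = oracle(x)
    bundle = [(x.copy(), fx, gx)]
    for _ in range(iters):
        def model(y):   # cutting-plane model built from inexact cuts
            return max(fi + gi @ (y - yi) for yi, fi, gi in bundle)
        obj = lambda y: model(y) + 0.5 * mu * np.sum((y - x) ** 2)
        y = minimize(obj, x, method="Nelder-Mead").x   # prox subproblem
        pred = fx - obj(y)                  # predicted decrease
        if pred < tol:                      # model says we are near-optimal
            return x
        fy, gy = oracle(y)
        bundle.append((y.copy(), fy, gy))
        if fx - fy >= m * pred:             # serious step; else null step
            x, fx, gx = y, fy, gy
    return x

# Example: f(x) = |x1| + 2|x2| with function values accurate only to ~1e-5;
# the achievable tolerance is limited by a multiple of the oracle error.
def oracle(x, err=1e-5):
    f = abs(x[0]) + 2 * abs(x[1]) + err * np.cos(50 * x.sum())
    g = np.array([np.sign(x[0]), 2 * np.sign(x[1])])
    return f, g

print(proximal_bundle(oracle, np.array([1.0, -1.0])))
```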
8.
Parallel Variable Distribution for Constrained Optimization   (total citations: 1; self-citations: 0; citations by others: 1)
In the parallel variable distribution (PVD) framework for solving optimization problems, the variables are distributed among parallel processors, with each processor having primary responsibility for updating its block of variables while allowing the remaining secondary variables to change in a restricted fashion along some easily computable directions. For constrained nonlinear programs, convergence theory for PVD algorithms was previously available only for the case of a convex feasible set. Additionally, one either had to assume that the constraints are block-separable, or to use exact projected gradient directions for the change of secondary variables. In this paper, we propose two new variants of PVD for the constrained case. Without assuming convexity of constraints, but assuming block-separable structure, we show that PVD subproblems can be solved inexactly by solving their quadratic programming approximations. This extends PVD to nonconvex (separable) feasible sets and provides a constructive practical way of solving the parallel subproblems. For inseparable constraints, but assuming convexity, we develop a PVD method based on suitable approximate projected gradient directions. The approximation criterion is based on a certain error bound result, and it is readily implementable. Using such approximate directions may be especially useful when the projection operation is computationally expensive.
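A heavily simplified, serially simulated PVD step is sketched below for the unconstrained case: it omits the "forget-me-not" directions along which secondary variables may move and freezes them instead, and the block partition and test function are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

def pvd_step(f, x, blocks):
    """One synchronous parallel-variable-distribution step (simulated
    serially).  Each 'processor' minimizes f over ITS block of variables
    with the others frozen; the synchronization phase keeps the best of
    the candidate points."""
    candidates = []
    for blk in blocks:                       # in true PVD these run in parallel
        def sub(z, blk=blk):
            y = x.copy(); y[blk] = z
            return f(y)
        res = minimize(sub, x[blk], method="Nelder-Mead")
        y = x.copy(); y[blk] = res.x
        candidates.append(y)
    return min(candidates, key=f)            # synchronization step

# Example: two processors, each owning two of four coordinates.
f = lambda x: np.sum((x - np.arange(4)) ** 2) + 0.1 * (x[0] * x[3]) ** 2
x = np.zeros(4)
for _ in range(10):
    x = pvd_step(f, x, blocks=[np.array([0, 1]), np.array([2, 3])])
print(x)
```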
9.
Forcing strong convergence of proximal point iterations in a Hilbert space   (total citations: 1; self-citations: 1; citations by others: 0)
This paper is concerned with convergence properties of the classical proximal point algorithm for finding zeroes of maximal monotone operators in an infinite-dimensional Hilbert space. It is well known that the proximal point algorithm converges weakly to a solution under very mild assumptions. However, it was shown by Güler [11] that the iterates may fail to converge strongly in the infinite-dimensional case. We propose a new proximal-type algorithm which does converge strongly, provided the problem has a solution. Moreover, our algorithm solves the proximal point subproblems inexactly, with a constructive stopping criterion introduced in [31]. Strong convergence is forced by combining proximal point iterations with simple projection steps onto the intersection of two halfspaces containing the solution set. The additional cost of this extra projection step is essentially negligible since it amounts, at most, to solving a linear system of two equations in two unknowns. Received January 6, 1998 / Revised version received August 9, 1999 / Published online November 30, 1999.
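The claim that the extra projection costs at most a two-by-two linear solve can be made concrete. A minimal sketch of the projection of a point onto the intersection of two halfspaces, by enumerating the KKT cases (no constraint active, one active, or both active, the last reducing to a 2x2 Gram system); the halfspace data in the demo are invented:

```python
import numpy as np

def project_two_halfspaces(x, a1, b1, a2, b2):
    """Projection of x onto {z: a1.z <= b1} ∩ {z: a2.z <= b2}.
    By KKT, the answer is x itself, the projection onto one halfspace,
    or the projection onto both boundary hyperplanes at once -- the
    last case is a 2x2 linear system, which is why the extra step in
    the algorithm is essentially free."""
    def proj_one(p, a, b):                 # projection onto one halfspace
        v = a @ p - b
        return p if v <= 0 else p - (v / (a @ a)) * a
    for cand in (x, proj_one(x, a1, b1), proj_one(x, a2, b2)):
        if a1 @ cand <= b1 + 1e-12 and a2 @ cand <= b2 + 1e-12:
            return cand
    # Both constraints active: solve the 2x2 Gram system for the
    # multipliers (nonnegative here, since the earlier cases failed).
    G = np.array([[a1 @ a1, a1 @ a2], [a1 @ a2, a2 @ a2]])
    lam = np.linalg.solve(G, np.array([a1 @ x - b1, a2 @ x - b2]))
    return x - lam[0] * a1 - lam[1] * a2

print(project_two_halfspaces(np.array([2.0, 2.0]),
                             np.array([1.0, 0.0]), 1.0,    # x1 <= 1
                             np.array([0.0, 1.0]), 1.0))   # x2 <= 1
```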