Similar Documents (20 results)
1.
We study Markov processes associated with stochastic differential equations whose non-linearities are gradients of convex functionals. We prove a general existence result for such Markov processes and a priori estimates on the transition probabilities. The main result is the following stability property: if the associated invariant measures converge weakly, then the Markov processes converge in law. The proofs are based on the interpretation of a Fokker–Planck equation as the steepest descent flow of the relative entropy functional in the space of probability measures, endowed with the Wasserstein distance.

2.
Time-discrete variational schemes are introduced for both the Vlasov–Poisson–Fokker–Planck (VPFP) system and a natural regularization of the VPFP system. The time step in these variational schemes is governed by a certain Kantorovich functional (or scaled Wasserstein metric). The discrete variational schemes may be regarded as discretized versions of a gradient flow, or steepest descent, of the underlying free energy functionals for these systems. For the regularized VPFP system, convergence of the variational scheme is rigorously established.
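The scheme above is a minimizing-movement (steepest descent) time discretization. As a rough illustration only, the sketch below applies the same construction in a finite-dimensional Euclidean setting, where the Kantorovich/Wasserstein cost between densities is replaced by a squared Euclidean distance; the stand-in functional F and step size tau are hypothetical choices, not taken from the paper.

```python
# Minimizing-movement (steepest descent) time discretization, Euclidean analogue:
#   x_{k+1} = argmin_x  F(x) + |x - x_k|^2 / (2*tau)
# In the VPFP setting the squared distance is replaced by a scaled
# Wasserstein/Kantorovich cost between probability densities.
import numpy as np
from scipy.optimize import minimize

def F(x):
    # stand-in "free energy" on R^2 (hypothetical, for illustration only)
    return 0.5 * x[0] ** 2 + np.cosh(x[1])

def minimizing_movement_step(x_k, tau):
    obj = lambda x: F(x) + np.sum((x - x_k) ** 2) / (2.0 * tau)
    return minimize(obj, x_k, method="BFGS").x

x, tau = np.array([2.0, 1.5]), 0.1
for _ in range(50):
    x = minimizing_movement_step(x, tau)
print("approximate minimizer of F:", x)   # the scheme descends F as the steps accumulate
```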

3.
Under consideration is the steepest descent method for solving the problem of determining a coefficient in a hyperbolic equation in an integral formulation. The properties of solutions to the direct and inverse problems are studied. Estimates for the objective functional and its gradient are obtained. Convergence in the mean of the steepest descent method for minimizing the residual functional is proved.

4.
In this paper, we first propose a constrained optimization reformulation of the \(L_{1/2}\) regularization problem. The constrained problem is to minimize a smooth function subject to quadratic constraints and nonnegativity constraints. A good property of the constrained problem is that at any feasible point, the set of all feasible directions coincides with the set of all linearized feasible directions; consequently, a KKT point always exists. Moreover, we show that the KKT points are the same as the stationary points of the \(L_{1/2}\) regularization problem. Based on the constrained optimization reformulation, we propose a feasible descent direction method, called the feasible steepest descent method, for solving the unconstrained \(L_{1/2}\) regularization problem. It is an extension of the steepest descent method for smooth unconstrained optimization. The feasible steepest descent direction has an explicit expression and the method is easy to implement. Under very mild conditions, we show that the proposed method is globally convergent. We apply the proposed method to practical problems arising from compressed sensing; the results show its efficiency.
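The paper's feasible steepest descent method relies on its specific constrained reformulation, which is not reproduced here. As a generic illustration of the broader idea of a feasible (projected) descent step under nonnegativity constraints, the sketch below runs projected gradient descent on a hypothetical nonnegative least-squares problem; the data, step size, and iteration count are illustrative assumptions.

```python
# Generic projected steepest descent under nonnegativity constraints
# (illustration only; NOT the paper's L_{1/2} reformulation or method).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def grad_f(x):                       # gradient of the smooth term 0.5*||Ax - b||^2
    return A.T @ (A @ x - b)

x = np.zeros(5)                      # feasible starting point (x >= 0)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # safe fixed step for this quadratic
for _ in range(200):
    x = np.maximum(x - alpha * grad_f(x), 0.0)   # projection keeps the iterate feasible
print("nonnegative least-squares iterate:", x)
```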

5.
Using properties of Γ-uniformly convex functionals we prove that in super-reflexive spaces there exists a unique steepest descent direction of a locally Lipschitz functional at any non-critical point.

6.
Recently, it has been observed that several nondifferentiable minimization problems share the property that the question of whether a given point is optimal can be answered by solving a certain bounded least squares problem. If the resulting residual vector, r, vanishes, then the current point is optimal. Otherwise, r is a descent direction; in fact, as we shall see, r points in the steepest descent direction. On the other hand, it is customary to characterize the optimality conditions (and the steepest descent vector) of a convex nondifferentiable function via its subdifferential. Also, it is well known that optimality conditions are usually related to theorems of the alternative. One aim of our survey is to clarify the relations between these subjects. Another aim is to introduce a new type of theorem of the alternative. The new theorems characterize the optimality conditions of discrete \(l_1\) approximation problems and multifacility location problems, and provide a simple way to obtain the subdifferential and the steepest descent direction in such problems. A further objective of our review is to demonstrate that the ability to compute the steepest descent direction at degenerate dead points opens a new way of handling degeneracy in active set methods.

7.
Steepest Descent, CG, and Iterative Regularization of Ill-Posed Problems
The state-of-the-art iterative method for solving large linear systems is the conjugate gradient (CG) algorithm. Theoretical convergence analysis suggests that CG converges more rapidly than steepest descent. This paper argues that steepest descent may nevertheless be an attractive alternative to CG when solving linear systems arising from the discretization of ill-posed problems. Specifically, it is shown that, for ill-posed problems, steepest descent has more stable convergence behavior than CG, which may be explained by the fact that the filter factors for steepest descent behave much less erratically than those for CG. Moreover, it is shown that, with proper preconditioning, the convergence rate of steepest descent is competitive with that of CG.
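As a small numerical illustration of the semi-convergence phenomenon behind this comparison (not an experiment from the paper), the sketch below runs plain steepest descent and CG on an ill-conditioned symmetric positive definite system with slightly noisy data; the matrix, noise level, and iteration counts are arbitrary assumptions.

```python
# Steepest descent vs. CG on an ill-conditioned SPD system with noisy data
# (illustrative only; problem and parameters are arbitrary choices).
import numpy as np
from scipy.linalg import hilbert

n = 12
A = hilbert(n)                                  # severely ill-conditioned SPD matrix
x_true = np.ones(n)
b = A @ x_true + 1e-6 * np.random.default_rng(1).standard_normal(n)

def steepest_descent(A, b, iters):
    x, errs = np.zeros_like(b), []
    for _ in range(iters):
        r = b - A @ x
        x = x + (r @ r) / (r @ (A @ r)) * r     # exact line search along the residual
        errs.append(np.linalg.norm(x - x_true))
    return errs

def conjugate_gradient(A, b, iters):
    x = np.zeros_like(b); r = b - A @ x; p = r.copy(); errs = []
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x, r_new = x + alpha * p, r - alpha * Ap
        p, r = r_new + (r_new @ r_new) / (r @ r) * p, r_new
        errs.append(np.linalg.norm(x - x_true))
    return errs

# CG typically reduces the error quickly and then drifts away again
# (semi-convergence), while steepest descent degrades far more slowly.
print("SD errors:", steepest_descent(A, b, 30)[::10])
print("CG errors:", conjugate_gradient(A, b, 30)[::10])
```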

8.
The Convergence of the Steepest Descent Algorithm for D.C. Optimization
Some properties of a class of quasi-differentiable functions (the difference of two finite convex functions) are considered in this paper, and the convergence of the steepest descent algorithm for unconstrained and constrained quasi-differentiable programming is proved.

9.
Song Chunling, Xia Zunquan. Chinese Quarterly Journal of Mathematics (数学季刊), 2007, 22(1): 131–136.
Some properties of a class of quasi-differentiable functions (the difference of two finite convex functions) are considered in this paper, and the convergence of the steepest descent algorithm for unconstrained and constrained quasi-differentiable programming is proved.

10.
This paper is concerned with the development of a parameter-free method, closely related to penalty function and multiplier methods, for solving constrained minimization problems. The method is developed via the quadratic programming model with equality constraints. The study starts with an investigation into the convergence properties of a so-called “primal-dual differential trajectory”, defined by the direction of steepest descent with respect to the variables x of the problem and the direction of steepest ascent with respect to the Lagrange multipliers λ associated with the Lagrangian function. It is shown that the trajectory converges to a stationary point (x*, λ*) corresponding to the solution of the equality constrained problem. Subsequently, numerical procedures are proposed by means of which practical trajectories may be computed, and the convergence of these trajectories is analyzed. A computational algorithm is presented and its application is illustrated by means of simple but representative examples. The extension of the method to inequality constrained problems is discussed, and a non-rigorous argument, based on the Kuhn-Tucker necessary conditions for a constrained minimum, is put forward on which a practical procedure for determining the solution is based. The application of the method to inequality constrained problems is illustrated on a couple of simple problems.
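A minimal sketch of the underlying idea is forward-Euler integration of a primal-dual trajectory that descends in x and ascends in λ, shown below on a toy equality-constrained quadratic problem. The test problem, step size, and iteration count are illustrative assumptions; none of the paper's practical procedures are reproduced.

```python
# Forward-Euler integration of the primal-dual trajectory
#   x' = -grad_x L(x, lam),   lam' = +grad_lam L(x, lam)
# for  min 0.5*|x|^2  s.t.  a.x = 1,  with  L(x, lam) = 0.5*|x|^2 + lam*(a.x - 1).
import numpy as np

a = np.array([1.0, 2.0])
x, lam, h = np.array([1.0, 1.0]), 0.0, 0.05    # arbitrary start and step size

for _ in range(2000):
    grad_x = x + lam * a                       # steepest descent taken in x (negated below)
    grad_lam = a @ x - 1.0                     # steepest ascent taken in the multiplier
    x, lam = x - h * grad_x, lam + h * grad_lam

# Analytic solution: x* = a/|a|^2 = [0.2, 0.4], lam* = -1/|a|^2 = -0.2
print("x ~", x, " lambda ~", lam)
```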

11.
An algorithm is presented that minimizes a continuously differentiable function in several variables subject to linear inequality constraints. At each step of the algorithm an arc is generated along which a move is performed until either a point yielding a sufficient descent in the function value is determined or a constraint boundary is encountered. The decision to delete a constraint from the list of active constraints is based upon periodic estimates of the Kuhn-Tucker multipliers. The curvilinear search paths are obtained by solving a linear approximation to the differential equation of the continuous steepest descent curve for the objective function on the equality constrained region defined by the constraints which are required to remain binding. If the Hessian matrix of the objective function has certain properties and if the constraint gradients are linearly independent, the sequence generated by the algorithm converges to a point satisfying the Kuhn-Tucker optimality conditions at a rate that is at least quadratic.

12.
13.
In a Hilbert space setting, we consider new continuous gradient-like dynamical systems for constrained multiobjective optimization. This type of dynamics was first investigated by Cl. Henry and B. Cornet as a model of allocation of resources in economics. Based on the Yosida regularization of the discontinuous part of the vector field which governs the system, we obtain the existence of strong global trajectories. We prove a descent property for each objective function and, in the quasi-convex case, convergence of the trajectories to Pareto critical points. We give an interpretation of the dynamics in terms of Pareto equilibration for cooperative games. By time discretization, we make a link to recent studies of Svaiter et al. on the steepest descent algorithm for multiobjective optimization.
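For two objectives, the steepest descent direction used in methods of the Svaiter type is minus the minimum-norm element of the convex hull of the gradients, which has a closed form. The sketch below iterates that step with a fixed step length on a pair of toy objectives; the objectives, step size, and stopping tolerance are illustrative assumptions, not taken from the paper.

```python
# Multiobjective steepest descent for two objectives: the step direction is
# minus the minimum-norm convex combination of the two gradients.
# (Toy illustration; fixed step instead of a line search.)
import numpy as np

def g1(x): return 2.0 * (x - 1.0)      # gradient of f1(x) = ||x - 1||^2
def g2(x): return 2.0 * (x + 1.0)      # gradient of f2(x) = ||x + 1||^2

def min_norm_convex_comb(u, v):
    # argmin_{t in [0,1]} || t*u + (1-t)*v ||  (closed form for two vectors)
    denom = (u - v) @ (u - v)
    t = 0.0 if denom == 0.0 else np.clip((v @ (v - u)) / denom, 0.0, 1.0)
    return t * u + (1.0 - t) * v

x = np.array([3.0, -2.0])
for _ in range(200):
    d = -min_norm_convex_comb(g1(x), g2(x))
    if np.linalg.norm(d) < 1e-8:       # zero min-norm element: Pareto critical point
        break
    x = x + 0.1 * d
print("approximate Pareto critical point:", x)
```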

14.
Minimization methods that search along a curvilinear path composed of a non-ascent negative curvature direction in addition to the direction of steepest descent, dating back to the late 1970s, have been an effective approach to finding a stationary point of a function at which its Hessian is positive semidefinite. For constrained nonlinear programs arising from recent applications, the primary goal is to find a stationary point that satisfies the second-order necessary optimality conditions. Motivated by this, we generalize the approach of using negative curvature directions from unconstrained optimization to equality constrained problems and prove that our proposed negative curvature method is guaranteed to converge to a stationary point satisfying second-order necessary conditions.
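The sketch below illustrates the unconstrained version of the basic idea only (not the paper's equality-constrained method): when the Hessian is indefinite, step along a non-ascent negative-curvature direction, otherwise fall back to steepest descent. The test function, starting point, and step rule are arbitrary assumptions.

```python
# Steepest descent combined with a non-ascent negative-curvature direction,
# so the iteration can move off saddle points (unconstrained illustration only).
import numpy as np

def f(x):    return x[0]**2 * x[1] - x[0] * x[1]**2 + 0.25 * np.sum(x**4)
def grad(x): return np.array([2*x[0]*x[1] - x[1]**2 + x[0]**3,
                              x[0]**2 - 2*x[0]*x[1] + x[1]**3])
def hess(x): return np.array([[2*x[1] + 3*x[0]**2, 2*x[0] - 2*x[1]],
                              [2*x[0] - 2*x[1],   -2*x[0] + 3*x[1]**2]])

x = np.array([0.1, -0.1])                      # near a saddle of this test function
for _ in range(200):
    g, H = grad(x), hess(x)
    eigval, eigvec = np.linalg.eigh(H)
    if eigval[0] < -1e-10:                     # negative curvature is available
        d = eigvec[:, 0]
        d = -d if d @ g > 0 else d             # flip sign so the direction is non-ascent
    else:
        d = -g                                 # otherwise plain steepest descent
    alpha = 0.5
    while f(x + alpha * d) > f(x) - 1e-4 * alpha**2 and alpha > 1e-12:
        alpha *= 0.5                           # backtrack until sufficient decrease
    x = x + alpha * d
print("candidate second-order stationary point:", x)
print("Hessian eigenvalues there:", np.linalg.eigh(hess(x))[0])
```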

15.
The problem of constrained optimization via the gradient-based discrete adjoint steepest descent method is studied under the assumption that the constraint equations are solved inexactly. Error propagation from the constraint equations to the gradient is studied analytically, as is the convergence rate of the inexactly constrained algorithm as it relates to the exact algorithm. A method is developed for adapting the residual tolerance to which the constraint equations are solved. The adaptive tolerance method is applied to two simple test cases to demonstrate the potential gains in computational efficiency.

16.
A recent work of Shi (Numer. Linear Algebra Appl. 2002; 9: 195–203) proposed a hybrid algorithm which combines a primal-dual potential reduction algorithm with the use of the steepest descent direction of the potential function. The complexity of the potential reduction algorithm remains valid but the overall computational cost can be reduced. In this paper, we make efforts to further reduce the computational costs. We notice that in order to obtain the steepest descent direction of the potential function, the Hessian matrix of second order partial derivatives of the objective function needs to be computed. To avoid this, we propose another hybrid algorithm which uses a projected steepest descent direction of the objective function instead of the steepest descent direction of the potential function. The complexity of the original potential reduction algorithm still remains valid but the overall computational cost is further reduced. Our numerical experiments are also reported.
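As a generic illustration of what a projected steepest descent direction of the objective function looks like (not Shi's hybrid potential-reduction algorithm), the sketch below projects the gradient onto the null space of a set of linear equality constraints so that every step stays feasible; the objective, constraints, and step size are hypothetical.

```python
# Projected steepest descent direction: the objective gradient is projected
# onto null(A) so that steps preserve the equality constraints A x = b.
# (Generic illustration only, not the hybrid algorithm of the paper.)
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 5))
x = rng.standard_normal(5)
b = A @ x                                            # x is feasible by construction

def grad_f(x):                                       # gradient of 0.5*||x - 1||^2 (hypothetical objective)
    return x - 1.0

P = np.eye(5) - A.T @ np.linalg.solve(A @ A.T, A)    # orthogonal projector onto null(A)
for _ in range(200):
    x = x - 0.5 * (P @ grad_f(x))                    # projected steepest descent step
print("constraint residual:", np.linalg.norm(A @ x - b))
print("projected gradient norm:", np.linalg.norm(P @ grad_f(x)))
```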

17.
In situations where it is not feasible to find an optimal feedback control law for a stochastic system, an open-loop law can often be derived by optimization. This article presents a method, analogous to the steepest descent method, for finding the extremum of certain stochastic functionals. Necessary conditions for the convergence of the algorithm are given. Two examples illustrate the use of the algorithm. This research was supported by the Office of Naval Research, Contract No. Nonr-1866 (16) and by the National Aeronautics and Space Administration, Grant No. NGR-22-007-068.

18.
This paper points out an error in [4]. Using the steepest descent rule, a new method for selecting cutting planes is given. A complicated example shows that the method has practical value.

19.
Steepest descent preconditioning is considered for the recently proposed nonlinear generalized minimal residual (N-GMRES) optimization algorithm for unconstrained nonlinear optimization. Two steepest descent preconditioning variants are proposed. The first employs a line search, whereas the second employs a predefined small step. A simple global convergence proof is provided for the N-GMRES optimization algorithm with the first steepest descent preconditioner (with line search), under mild standard conditions on the objective function and the line search processes. Steepest descent preconditioning for N-GMRES optimization is also motivated by relating it to standard non-preconditioned GMRES for linear systems in the case of a standard quadratic optimization problem with symmetric positive definite operator. Numerical tests on a variety of model problems show that the N-GMRES optimization algorithm is able to very significantly accelerate convergence of stand-alone steepest descent optimization. Moreover, performance of steepest-descent preconditioned N-GMRES is shown to be competitive with standard nonlinear conjugate gradient and limited-memory Broyden–Fletcher–Goldfarb–Shanno methods for the model problems considered. These results serve to theoretically and numerically establish steepest-descent preconditioned N-GMRES as a general optimization method for unconstrained nonlinear optimization, with performance that appears promising compared with established techniques. In addition, it is argued that the real potential of the N-GMRES optimization framework lies in the fact that it can make use of problem-dependent nonlinear preconditioners that are more powerful than steepest descent (or, equivalently, N-GMRES can be used as a simple wrapper around any other iterative optimization process to seek acceleration of that process), and this potential is illustrated with a further application example.
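The sketch below is a heavily simplified, Anderson-type rendering of the idea of steepest-descent-preconditioned nonlinear GMRES: a small steepest descent step acts as the preconditioner, and a least-squares combination of recent iterates accelerates it. It omits the algorithm's line searches and safeguards, and the test problem, window size, and step length are arbitrary assumptions.

```python
# Steepest-descent-preconditioned nonlinear acceleration (simplified,
# Anderson/N-GMRES-flavored sketch; not the full algorithm of the paper).
import numpy as np

D = np.diag(np.logspace(0, 2, 10))        # ill-conditioned SPD quadratic test problem
b = np.ones(10)
def f(x):    return 0.5 * x @ D @ x - b @ x
def grad(x): return D @ x - b

X, G = [np.zeros(10)], [grad(np.zeros(10))]
beta, window = 0.005, 5                   # small fixed step for the preconditioner
for _ in range(100):
    xp = X[-1] - beta * G[-1]             # steepest descent preconditioner step
    gp = grad(xp)
    dX = np.array([xp - xi for xi in X[-window:]]).T
    dG = np.array([gp - gi for gi in G[-window:]]).T
    coef, *_ = np.linalg.lstsq(dG, -gp, rcond=None)   # minimize the linearized gradient norm
    xa = xp + dX @ coef                   # accelerated iterate built from the window of past iterates
    x_new = xa if f(xa) < f(xp) else xp   # keep the acceleration only if it improves on the preconditioner
    X.append(x_new); G.append(grad(x_new))
print("final gradient norm:", np.linalg.norm(G[-1]))
```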

20.
The notions of exhausters were introduced in (Demyanov, Exhauster of a positively homogeneous function, Optimization 45, 13–29 (1999)). These dual tools (upper and lower exhausters) can be employed to describe optimality conditions and to find directions of steepest ascent and descent for a very wide range of nonsmooth functions. Importantly, exhausters enjoy a very good calculus (in the form of equalities). In the present paper we review the constrained and unconstrained optimality conditions in terms of exhausters, introduce necessary and sufficient conditions for Lipschitz continuity and quasidifferentiability, and also present some new results on relationships between exhausters and other nonsmooth tools (such as the Clarke, Michel-Penot and Fréchet subdifferentials).

