Similar documents
20 similar documents found (search time: 31 ms)
1.
《Optimization》2012,61(9):1203-1226
This article presents a differential inclusion-based neural network for solving nonsmooth convex programming problems with inequality constraints. The proposed neural network, modelled as a differential inclusion, is a generalization of the steepest descent neural network. It is proved that the set of equilibrium points of the proposed differential inclusion coincides with the set of optimal solutions of the considered optimization problem. Moreover, it is shown that the solution trajectory converges to an element of the optimal solution set and that the convergence point is a globally asymptotically stable point of the proposed differential inclusion. After establishing the theoretical results, an algorithm is designed for solving such problems. Typical examples confirm the effectiveness of the theoretical results and the performance of the proposed neural network.

2.
This paper approximates the characteristics of a reservoir model with a double level-set function and constructs Uzawa algorithms for numerical simulation. The permeability in a two-phase flow can be recovered numerically from well measurements and seismic data. The resulting constrained optimization problem is solved by a variant of the Lagrange multiplier method. When the permeability function is approximated by a double level-set function, the Lagrangian functional must be modified accordingly, which converts the constrained optimization problem into an unconstrained one. Exploiting the advantages of the double level-set representation, a steepest-descent Uzawa algorithm and an operator-splitting Uzawa algorithm are further constructed to solve the corresponding optimization subproblems. Numerical examples show that the proposed algorithms are efficient and stable.
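The abstract does not spell out the iteration itself; as a rough sketch of the classical Uzawa scheme that such algorithms build on (not the paper's level-set variant), the following Python toy solves an equality-constrained quadratic program by alternating a primal solve with a gradient-ascent step on the multiplier. All matrices here are hypothetical illustration data.

```python
import numpy as np

def uzawa(A, B, b, c, rho=1.0, tol=1e-10, max_iter=5000):
    """Classical Uzawa iteration for the KKT system
       A x + B^T lam = b,  B x = c,  with A symmetric positive definite."""
    lam = np.zeros(B.shape[0])
    x = np.zeros(A.shape[0])
    for _ in range(max_iter):
        x = np.linalg.solve(A, b - B.T @ lam)  # primal solve for fixed multiplier
        g = B @ x - c                          # constraint residual = dual gradient
        if np.linalg.norm(g) < tol:
            break
        lam = lam + rho * g                    # gradient-ascent step on the dual
    return x, lam

# Toy problem: min 0.5 x'Ax - b'x  subject to  Bx = c
A = np.array([[2.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0, 1.0]])
b = np.array([2.0, 0.0])
c = np.array([1.0])
x, lam = uzawa(A, B, b, c)
```

For this toy problem the iteration drives the constraint residual Bx - c to zero while the primal solve keeps the stationarity condition satisfied.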

3.
Steepest Descent, CG, and Iterative Regularization of Ill-Posed Problems
The state-of-the-art iterative method for solving large linear systems is the conjugate gradient (CG) algorithm. Theoretical convergence analysis suggests that CG converges more rapidly than steepest descent. This paper argues that steepest descent may nevertheless be an attractive alternative to CG when solving linear systems arising from the discretization of ill-posed problems. Specifically, it is shown that, for ill-posed problems, steepest descent has more stable convergence behavior than CG, which may be explained by the fact that the filter factors for steepest descent behave much less erratically than those for CG. Moreover, it is shown that, with proper preconditioning, the convergence rate of steepest descent is competitive with that of CG. This revised version was published online in October 2005 with corrections to the Cover Date.
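For reference, here is a minimal sketch of steepest descent with exact line search for a symmetric positive-definite system, i.e. the unpreconditioned baseline that the paper compares against CG; the small test matrix is illustrative only:

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=500):
    """Steepest descent for an SPD system Ax = b with exact line search."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        r = b - A @ x                  # residual = negative gradient of 0.5 x'Ax - b'x
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        alpha = rr / (r @ (A @ r))     # exact minimizer along the residual direction
        x = x + alpha * r
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_sd = steepest_descent(A, b, np.zeros(2))
```

The per-iteration filter behavior discussed in the abstract concerns exactly this iteration; CG replaces the residual direction with A-conjugate search directions.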

4.
An algorithm is presented that minimizes a continuously differentiable function in several variables subject to linear inequality constraints. At each step of the algorithm an arc is generated along which a move is performed until either a point yielding a sufficient descent in the function value is determined or a constraint boundary is encountered. The decision to delete a constraint from the list of active constraints is based upon periodic estimates of the Kuhn-Tucker multipliers. The curvilinear search paths are obtained by solving a linear approximation to the differential equation of the continuous steepest descent curve for the objective function on the equality-constrained region defined by the constraints which are required to remain binding. If the Hessian matrix of the objective function has certain properties and if the constraint gradients are linearly independent, the sequence generated by the algorithm converges to a point satisfying the Kuhn-Tucker optimality conditions at a rate that is at least quadratic.

5.
This paper studies the convergence of intrinsic steepest descent algorithms on the group of symmetric positive-definite matrices. First, for a semi-supervised metric learning model that can be reformulated as an unconstrained optimization problem on the symmetric positive-definite matrix group, an intrinsic steepest descent algorithm with an adaptive variable step size is proposed. Then, using the Taylor expansion with integral remainder of a smooth function on a Lie group at an arbitrary point, the proposed algorithm is proved to converge linearly on the symmetric positive-definite matrix group. Finally, numerical experiments on classification problems illustrate the effectiveness of the algorithm.
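To illustrate what an intrinsic (Riemannian) steepest descent step on the symmetric positive-definite manifold looks like, here is a sketch under the affine-invariant metric for a toy cost f(X) = tr(X) + tr(X^-1), which is minimized at the identity. This is an assumption-laden stand-in: it is not the paper's metric-learning model, and it uses a fixed step rather than the paper's adaptive step-size rule.

```python
import numpy as np

def sym_fun(M, f):
    """Apply a scalar function to a symmetric matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * f(w)) @ V.T

def intrinsic_sd(X0, step=0.1, tol=1e-9, max_iter=2000):
    """Riemannian steepest descent on the SPD manifold (affine-invariant metric)
    for the toy cost f(X) = tr(X) + tr(X^-1), minimized at the identity."""
    X = np.array(X0, dtype=float)
    n = X.shape[0]
    for _ in range(max_iter):
        Xinv = np.linalg.inv(X)
        grad_e = np.eye(n) - Xinv @ Xinv          # Euclidean gradient of f
        Xh = sym_fun(X, np.sqrt)                  # X^{1/2}
        S = Xh @ grad_e @ Xh                      # gradient transported to the identity
        if np.linalg.norm(S) < tol:
            break
        X = Xh @ sym_fun(-step * S, np.exp) @ Xh  # geodesic (exponential-map) step
    return X

X_opt = intrinsic_sd(np.diag([3.0, 0.5]))
```

Because the update is a matrix exponential of a symmetric matrix sandwiched between SPD factors, every iterate stays on the SPD manifold by construction.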

6.
In this paper, we first study a nonsmooth steepest descent method for nonsmooth functions defined on a Hilbert space and establish the corresponding algorithm by proximal subgradients. Then, we use this algorithm to find stationary points for those functions satisfying prox-regularity and Lipschitz continuity. As an application, the established algorithm is used to search for the minimizer of a lower semicontinuous and convex function on a finite-dimensional space. A convergence theorem, as an extension and improvement of the existing convergence result for twice continuously differentiable convex functions, is also presented therein.

7.
For an even functional on a Banach space, the symmetric mountain pass lemma gives a sequence of critical values which converges to zero. Under the same assumptions on the functional, this paper establishes a new critical point theorem which provides a sequence of critical points converging to zero. The theorem is applied to sublinear elliptic equations. Then a sequence of solutions converging to zero is obtained.

8.
An algorithm is presented that minimizes a nonlinear function in many variables under equality constraints by generating a monotonically improving sequence of feasible points along curvilinear search paths obeying an initial-value system of differential equations. The derivation of the differential equations is based on the idea of a steepest descent curve for the objective function on the feasible region. Our method for small stepsize behaves as the generalized reduced gradient algorithm, whereas for large enough stepsize the constrained equivalent of Newton's method for unconstrained minimization is obtained.

9.
We present an approximate bundle method for solving nonsmooth equilibrium problems. An inexact cutting-plane linearization of the objective function is established at each iteration, which is actually an approximation produced by an oracle that gives inaccurate values for the functions and subgradients. The errors in function and subgradient evaluations are bounded and need not vanish in the limit. A descent criterion adapted to the setting of inexact oracles is put forward to measure the current descent behavior. The sequence generated by the algorithm converges to the approximately critical points of the equilibrium problem under proper assumptions. As a special illustration, the proposed algorithm is utilized to solve generalized variational inequality problems. The numerical experiments show that the algorithm is effective in solving nonsmooth equilibrium problems.

10.
Recent theoretical and practical investigations have shown that the Gauss-Newton algorithm is the method of choice for the numerical solution of nonlinear least squares parameter estimation problems. It is shown that, when line searches are included, the Gauss-Newton algorithm behaves asymptotically like steepest descent for a special choice of parameterization. Based on this, a conjugate gradient acceleration is developed. It also converges quickly for large-residual problems, where the original Gauss-Newton algorithm has a slow rate of convergence. Several numerical test examples are reported, verifying the applicability of the theory.
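As background, a minimal sketch of the basic Gauss-Newton iteration with a backtracking line search, applied to a hypothetical one-parameter exponential fit (this is the baseline method, not the paper's conjugate gradient acceleration):

```python
import numpy as np

def gauss_newton(residual, jac, x0, tol=1e-10, max_iter=100):
    """Gauss-Newton with a simple backtracking line search for min 0.5*||r(x)||^2."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jac(x)
        if np.linalg.norm(J.T @ r) < tol:           # gradient of 0.5*||r||^2
            break
        p = np.linalg.lstsq(J, -r, rcond=None)[0]   # Gauss-Newton direction
        t, f0 = 1.0, 0.5 * (r @ r)
        while 0.5 * np.sum(residual(x + t * p) ** 2) > f0 and t > 1e-12:
            t *= 0.5                                # backtrack until the cost decreases
        x = x + t * p
    return x

# Hypothetical data: fit y = exp(a*t) to samples generated with a = 0.5
t_data = np.array([0.0, 1.0, 2.0])
y_data = np.exp(0.5 * t_data)
res = lambda a: np.exp(a[0] * t_data) - y_data
jacf = lambda a: (t_data * np.exp(a[0] * t_data)).reshape(-1, 1)
a_hat = gauss_newton(res, jacf, [0.0])
```

On this zero-residual toy problem Gauss-Newton converges rapidly; the slow large-residual behavior the abstract refers to arises when the neglected second-order residual terms are significant.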

11.
In the Ginzburg‐Landau model for superconductivity a large Ginzburg‐Landau parameter κ corresponds to the formation of tight, stable vortices. These vortices are located exactly where an applied magnetic field pierces the superconducting bulk, and each vortex induces a quantized supercurrent about the vortex. The energy of large‐κ solutions blows up near each vortex which brings about difficulties in analysis. Rigorous asymptotic static theory has previously established the existence of a finite number of the vortices, and these vortices are located precisely at the critical points of the renormalized energy (the free energy less the vortex self‐induction energy). A rigorous study of the full time‐dependent Ginzburg‐Landau equations under the classical Lorentz gauge is done under the asymptotic limit κ → ∞. Under slow times the vortices remain pinned to their initial configuration. Under a fast time of order κ the vortices move according to a steepest descent of the renormalized energy. © 2002 John Wiley & Sons, Inc.

12.
An extremal curve of the simplest variational problem is a continuously differentiable function. Hilbert's differentiability theorem provides a sufficient condition for the existence of the second derivative of an extremal curve. It is desirable to have a simple example in which the condition of Hilbert's theorem is violated and an extremal curve is not twice differentiable. In this paper, a cubic variational problem with the following properties is analyzed. The functional of the problem is bounded neither above nor below. There exists an extremal curve for this problem, obtained by sewing together two different extremal curves, which is not twice differentiable at the sewing point. Despite this unfavorable situation, an attempt is made to apply the method of steepest descent (in the form proposed by V.F. Dem'yanov) to this problem. It turns out that the method converges to a stationary curve provided that a suitable step-size rule is chosen.

13.
Recently, it has been observed that several nondifferentiable minimization problems share the property that the question of whether a given point is optimal can be answered by solving a certain bounded least squares problem. If the resulting residual vector, r, vanishes, then the current point is optimal. Otherwise, r is a descent direction. In fact, as we shall see, r points in the steepest descent direction. On the other hand, it is customary to characterize the optimality conditions (and the steepest descent vector) of a convex nondifferentiable function via its subdifferential. Also, it is well known that optimality conditions are usually related to theorems of the alternative. One aim of our survey is to clarify the relations between these subjects. Another aim is to introduce a new type of theorems of the alternative. The new theorems characterize the optimality conditions of discrete l1 approximation problems and multifacility location problems, and provide a simple way to obtain the subdifferential and the steepest descent direction in such problems. A further objective of our review is to demonstrate that the ability to compute the steepest descent direction at degenerate dead points opens a new way for handling degeneracy in active set methods.

14.
In this paper, we first propose a constrained optimization reformulation of the \(L_{1/2}\) regularization problem. The constrained problem is to minimize a smooth function subject to some quadratic constraints and nonnegativity constraints. A good property of the constrained problem is that at any feasible point, the set of all feasible directions coincides with the set of all linearized feasible directions. Consequently, KKT points always exist. Moreover, we show that the KKT points are the same as the stationary points of the \(L_{1/2}\) regularization problem. Based on the constrained optimization reformulation, we propose a feasible descent direction method, called the feasible steepest descent method, for solving the unconstrained \(L_{1/2}\) regularization problem. It is an extension of the steepest descent method for solving smooth unconstrained optimization problems. The feasible steepest descent direction has an explicit expression and the method is easy to implement. Under very mild conditions, we show that the proposed method is globally convergent. We apply the proposed method to solve some practical problems arising from compressed sensing. The results show its efficiency.

15.
The shape derivative of a functional related to a Bernoulli problem is derived without using the shape derivative of the state. The gradient information is combined with level set ideas in a steepest descent algorithm. Numerical examples show the feasibility of the approach.

16.
In this paper, we investigate an inverse problem of simultaneously recovering the zeroth-order coefficient and the fractional order in a time-fractional reaction-diffusion-wave equation from boundary measurement data, from the perspectives of both uniqueness and numerical reconstruction. We prove the uniqueness of the considered inverse problem and the Lipschitz continuity of the forward operator. Then the inverse problem is formulated as a variational problem via Tikhonov-type regularization. Based on the continuity of the forward operator, we prove that the minimizer of the Tikhonov-type functional exists and converges to the exact solution under an a priori choice of regularization parameter. The steepest descent method combined with Nesterov acceleration is adopted to solve the variational problem. Three numerical examples are presented to support the efficiency and rationality of our proposed method.

17.
In this paper we demonstrate that the numerical method of steepest descent fails when applied in a straightforward fashion to the most commonly occurring highly oscillatory integrals in scattering theory. Through a polar change of variables, however, the integral can be reformulated so that it can be solved efficiently using a combination of oscillatory integration techniques and classical quadrature. The approach is described in detail and demonstrated numerically with some oscillatory integral examples. The numerical examples demonstrate that our approach avoids the failure in some special cases, such as an acoustic scattering model oscillatory integral with observation point located in the illuminated region. This paves the way for using the framework of numerical steepest descent methods on a wider class of problems, like the 3D high-frequency scattering from convex obstacles, up to now only handled in a satisfactory way by methods due to Ganesh and Hawkins (J Comp Phys 230:104–125, 2011).

18.
The standard saddle point method of asymptotic expansions of integrals requires showing the existence of the steepest descent paths of the phase function and computing the coefficients of the expansion from a function defined implicitly by solving an inversion problem. This means that the method is not systematic, because the steepest descent paths depend on the phase function at hand and there is no general and explicit formula for the coefficients of the expansion (as in Watson's lemma, for example). We propose a more systematic variant of the method in which the computation of the steepest descent paths is trivial and almost universal: it depends only on the location and the order of the saddle points of the phase function. Moreover, this variant of the method generates an asymptotic expansion given in terms of a generalized (and universal) asymptotic sequence that avoids the computation of the standard coefficients, giving an explicit and systematic formula for the expansion that may be easily implemented on a symbolic manipulation program. As an illustrative example, the well-known asymptotic expansion of the Airy function is rederived almost trivially using this method. New asymptotic expansions of the Hankel function H_n(z) for large n and z are given as non-trivial examples.

19.
In this paper, we consider a new length-preserving curve flow for closed convex curves in the plane. We show that the flow exists globally, the area of the region bounded by the evolving curve is increasing, and the evolving curve converges to the circle in the C^∞ topology as t → ∞.

20.
On infinite dimensional quadratic Volterra operators
In this paper we study a class of quadratic operators, called Volterra operators, on an infinite-dimensional space. We prove that such operators have infinitely many fixed points and that the set of Volterra operators forms a convex compact set, whose extreme points are also described. Besides, we study certain limit behaviors of such operators and give further examples of Volterra operators whose trajectories do not converge. Finally, we define a compatible sequence of finite-dimensional Volterra operators and prove that any power of this sequence converges in the weak topology.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号