A total of 106 search results were found (search time: 46 ms).
1.
A nonsmooth version of Newton's method   Total citations: 68 (self-citations: 0, citations by others: 68)
Newton's method for solving a nonlinear equation of several variables is extended to a nonsmooth case by using the generalized Jacobian instead of the derivative. This extension includes the B-derivative version of Newton's method as a special case. Convergence theorems are proved under the condition of semismoothness. It is shown that the gradient function of the augmented Lagrangian for C^2-nonlinear programming is semismooth. Thus, the extended Newton's method can be used in the augmented Lagrangian method for solving nonlinear programs. This author's work is supported in part by the Australian Research Council. This author's work is supported in part by the National Science Foundation under grant DDM-8721709.
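As an illustration of the iteration described above, the following is a minimal sketch (not the authors' code) of a semismooth Newton step on a toy piecewise-smooth system, using one element of Clarke's generalized Jacobian in place of the classical Jacobian; the map F and the element chosen at the kink are assumptions made purely for the example.

    import numpy as np

    def F(x):
        # Toy nonsmooth map: F(x) = 2x + |x| - b componentwise; for b > 0 the root is b/3.
        b = np.array([1.0, 2.0])
        return 2.0 * x + np.abs(x) - b

    def gen_jacobian(x):
        # One element of Clarke's generalized Jacobian of F at x:
        # d|x_i|/dx_i = sign(x_i) off the kink; any value in [-1, 1] is admissible at x_i = 0.
        s = np.where(x != 0.0, np.sign(x), 1.0)
        return 2.0 * np.eye(len(x)) + np.diag(s)

    x = np.array([5.0, -3.0])                      # arbitrary starting point
    for _ in range(20):
        V = gen_jacobian(x)                        # generalized Jacobian element at x
        x = x - np.linalg.solve(V, F(x))           # (semismooth) Newton step
        if np.linalg.norm(F(x)) < 1e-12:
            break
    print(x)                                       # approximately [1/3, 2/3]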
2.
In this paper we take a new look at smoothing Newton methods for solving the nonlinear complementarity problem (NCP) and the box constrained variational inequalities (BVI). Instead of using an infinite sequence of smoothing approximation functions, we use a single smoothing approximation function and Robinson's normal equation to reformulate NCP and BVI as an equivalent nonsmooth equation H(u,x)=0, where H: ℜ^{2n} → ℜ^{2n}, u ∈ ℜ^n is a parameter variable and x ∈ ℜ^n is the original variable. The central idea of our smoothing Newton methods is that we construct a sequence {z^k = (u^k, x^k)} such that the mapping H(·) is continuously differentiable at each z^k and may be non-differentiable at the limiting point of {z^k}. We prove that the three most frequently used Gabriel-Moré smoothing functions can generate strongly semismooth functions, which play a fundamental role in establishing superlinear and quadratic convergence of our new smoothing Newton methods. We do not require any function value of F or its derivative value outside the feasible region, while at each step we only solve a linear system of equations, and if we choose a certain smoothing function only a reduced form needs to be solved. Preliminary numerical results show that the proposed methods for particularly chosen smoothing functions are very promising. Received June 23, 1997 / Revised version received July 29, 1999 / Published online December 15, 1999
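The smoothing-function idea can be seen in a small sketch: below is one commonly cited member of the Gabriel-Moré class, the Chen-Harker-Kanzow-Smale smoothing of the plus function max(0, t); the exact functions analyzed in the paper may differ, so treat this as an assumption-laden illustration of how a single smoothing parameter controls the approximation.

    import numpy as np

    # Smoothed plus function p(mu, t) = (t + sqrt(t^2 + 4*mu^2)) / 2:
    # smooth for mu > 0 and converging to max(0, t) as mu -> 0.
    def plus_smooth(mu, t):
        return 0.5 * (t + np.sqrt(t * t + 4.0 * mu * mu))

    t = np.linspace(-2.0, 2.0, 5)
    for mu in (1.0, 0.1, 0.001):
        print(mu, plus_smooth(mu, t))              # rows approach np.maximum(0.0, t)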
3.
A smoothing method for mathematical programs with equilibrium constraints   Total citations: 15 (self-citations: 0, citations by others: 15)
Received May 3, 1996 / Revised version received November 19, 1997 / Published online January 20, 1999
4.
A trust region algorithm for minimization of locally Lipschitzian functions   Total citations: 7 (self-citations: 0, citations by others: 7)
The classical trust region algorithm for smooth nonlinear programs is extended to the nonsmooth case where the objective function is only locally Lipschitzian. At each iteration, an objective function that carries both first and second order information is minimized over a trust region. The term that carries the first order information is an iteration function that may not explicitly depend on subgradients or directional derivatives. We prove that the algorithm is globally convergent. This convergence result extends the result of Powell for minimization of smooth functions, the result of Yuan for minimization of composite convex functions, and the result of Dennis, Li and Tapia for minimization of regular functions. In addition, compared with the recent model of Pang, Han and Rangaraj for minimization of locally Lipschitzian functions using a line search, this algorithm has the same convergence property without assuming positive definiteness and uniform boundedness of the second order term. Applications of the algorithm to various nonsmooth optimization problems are discussed. This author's work was supported in part by the Australian Research Council. This author's work was carried out while he was visiting the Department of Applied Mathematics at the University of New South Wales.
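For readers unfamiliar with the mechanism being extended, here is a minimal sketch of a classical trust-region loop on a smooth toy objective (Cauchy-point step and a standard ratio test); the nonsmooth iteration function and second order term of the paper are not reproduced here, and the test function is an assumption for illustration only.

    import numpy as np

    def f(x):    return 0.5 * x @ x + np.cos(x[0])
    def grad(x): return x + np.array([-np.sin(x[0]), 0.0])
    def hess(x): return np.eye(2) + np.diag([-np.cos(x[0]), 0.0])

    x, radius = np.array([3.0, -2.0]), 1.0
    for _ in range(200):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < 1e-8:
            break
        # Cauchy point: minimize the quadratic model along -g, clipped to the trust region.
        gBg = g @ B @ g
        alpha = (g @ g) / gBg if gBg > 0 else np.inf
        alpha = min(alpha, radius / np.linalg.norm(g))
        step = -alpha * g
        pred = -(g @ step + 0.5 * step @ B @ step)   # model decrease (positive by construction)
        rho = (f(x) - f(x + step)) / pred            # agreement between f and its model
        if rho > 0.1:                                # accept the trial step
            x = x + step
        # Standard radius update based on the ratio test.
        radius = 2.0 * radius if rho > 0.75 else (0.5 * radius if rho < 0.25 else radius)
    print(x)                                         # approximately [0, 0], a stationary point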
5.
Convergence of Newton's method for convex best interpolation   Total citations: 7 (self-citations: 0, citations by others: 7)
Summary. In this paper, we consider the problem of finding a convex function which interpolates given points and has a minimal norm of the second derivative. This problem reduces to a system of equations involving semismooth functions. We study a Newton-type method utilizing Clarke's generalized Jacobian and prove that its local convergence is superlinear. For a special choice of a matrix in the generalized Jacobian, we obtain the Newton method proposed by Irvine et al. [17] and settle the question of its convergence. By using a line search strategy, we present a global extension of the Newton method considered. The efficiency of the proposed global strategy is confirmed with numerical experiments. Received October 26, 1998 / Revised version received October 20, 1999 / Published online August 2, 2000
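The line-search globalization mentioned above can be sketched generically: take the semismooth Newton direction, then backtrack on the merit function 0.5*||F(x)||^2 until sufficient decrease holds. The toy map F below is an assumption for illustration and is not the interpolation system of the paper.

    import numpy as np

    def F(x):
        # Toy piecewise-smooth system; its root is [1/3, -4].
        return 2.0 * x + np.abs(x) - np.array([1.0, -4.0])

    def gen_jacobian(x):
        s = np.where(x != 0.0, np.sign(x), 1.0)
        return 2.0 * np.eye(len(x)) + np.diag(s)

    def merit(x):
        r = F(x)
        return 0.5 * r @ r

    x = np.array([10.0, 10.0])
    for _ in range(50):
        d = np.linalg.solve(gen_jacobian(x), -F(x))            # semismooth Newton direction
        t = 1.0
        while merit(x + t * d) > (1.0 - 1e-4 * t) * merit(x):  # Armijo-type backtracking
            t *= 0.5
        x = x + t * d
        if np.sqrt(2.0 * merit(x)) < 1e-12:
            break
    print(x)                                                   # approximately [1/3, -4]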
6.
On NCP-Functions   Total citations: 7 (self-citations: 0, citations by others: 7)
In this paper we reformulate several NCP-functions for the nonlinear complementarity problem (NCP) from their merit function forms and study some important properties of these NCP-functions. We point out that some of these NCP-functions have all the nice properties investigated by Chen, Chen and Kanzow [2] for a modified Fischer-Burmeister function, while some other NCP-functions may lose one or several of these properties. We also provide a modified normal map and a smoothing technique to overcome the limitation of these NCP-functions. A numerical comparison for the behaviour of various NCP-functions is provided.
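For concreteness, the (unmodified) Fischer-Burmeister function named above can be written down directly; the snippet below is a small illustrative sketch, not code from the paper, showing that the function vanishes exactly on complementary pairs and how a merit function is built from it.

    import numpy as np

    # Fischer-Burmeister NCP-function: phi(a, b) = sqrt(a^2 + b^2) - a - b.
    # phi(a, b) = 0 if and only if a >= 0, b >= 0 and a*b = 0.
    def fischer_burmeister(a, b):
        return np.sqrt(a * a + b * b) - a - b

    def ncp_merit(x, Fx):
        # Natural merit function: half the squared norm of the componentwise NCP-function values.
        return 0.5 * np.sum(fischer_burmeister(x, Fx) ** 2)

    print(fischer_burmeister(0.0, 3.0))   # 0.0  (complementary pair)
    print(fischer_burmeister(2.0, 0.0))   # 0.0  (complementary pair)
    print(fischer_burmeister(1.0, 1.0))   # nonzero (not a complementary pair)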
7.
Smooth Convex Approximation to the Maximum Eigenvalue Function   Total citations: 6 (self-citations: 0, citations by others: 6)
In this paper, we consider smooth convex approximations to the maximum eigenvalue function. To make it applicable to a wide class of applications, the study is conducted on the composite function of the maximum eigenvalue function and a linear operator mapping ℜ^m to S^n, the space of n-by-n symmetric matrices. The composite function in turn is the natural objective function of minimizing the maximum eigenvalue function over an affine space in S^n. This leads to a sequence of smooth convex minimization problems governed by a smoothing parameter. As the parameter goes to zero, the original problem is recovered. We then develop a computable Hessian formula of the smooth convex functions, matrix representation of the Hessian, and study the regularity conditions which guarantee the nonsingularity of the Hessian matrices. The study on the well-posedness of the smooth convex function leads to a regularization method which is globally convergent.
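One widely used smooth convex approximation of this kind is the log-sum-exp (exponential) smoothing of the eigenvalues, sketched below; whether this is the precise construction of the paper is an assumption, but it illustrates how a smoothing parameter trades smoothness against approximation error (the gap is at most mu*log(n)).

    import numpy as np

    # f_mu(X) = mu * log(trace(exp(X / mu))) computed via the eigenvalues of the symmetric X;
    # lambda_max(X) <= f_mu(X) <= lambda_max(X) + mu*log(n), and f_mu is smooth and convex for mu > 0.
    def smoothed_lambda_max(X, mu):
        lam = np.linalg.eigvalsh(X)
        shift = lam.max()                              # shift for numerical stability
        return shift + mu * np.log(np.sum(np.exp((lam - shift) / mu)))

    X = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
    for mu in (1.0, 0.1, 0.001):
        print(mu, smoothed_lambda_max(X, mu))          # approaches lambda_max(X), about 2.207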
8.
A Smoothing Newton Method for Semi-Infinite Programming   Total citations: 5 (self-citations: 0, citations by others: 5)
This paper is concerned with numerical methods for solving a semi-infinite programming problem. We reformulate the equations and nonlinear complementarity conditions of the first order optimality condition of the problem into a system of semismooth equations. By using a perturbed Fischer–Burmeister function, we develop a smoothing Newton method for solving this system of semismooth equations. An advantage of the proposed method is that at each iteration, only a system of linear equations is solved. We prove that under standard assumptions, the iterate sequence generated by the smoothing Newton method converges superlinearly/quadratically.
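A perturbed Fischer–Burmeister function of the kind referred to above can be sketched as follows; the exact parameterization used in the paper may differ, so the form below is an assumption for illustration.

    import numpy as np

    # Perturbed (smoothed) Fischer-Burmeister function:
    # phi_mu(a, b) = sqrt(a^2 + b^2 + 2*mu^2) - a - b,
    # continuously differentiable for mu != 0 and equal to the plain FB function at mu = 0.
    def perturbed_fb(mu, a, b):
        return np.sqrt(a * a + b * b + 2.0 * mu * mu) - a - b

    a, b = 0.0, 3.0                        # a complementary pair: the plain FB value is 0
    for mu in (1.0, 0.1, 0.001):
        print(mu, perturbed_fb(mu, a, b))  # tends to 0 as the smoothing parameter shrinks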
9.
The paper introduces a new approach to analyze the stability of neural network models without using any Lyapunov function. With the new approach, we investigate the stability properties of the general gradient-based neural network model for optimization problems. Our discussion includes both isolated equilibrium points and connected equilibrium sets which could be unbounded. For a general optimization problem, if the objective function is bounded below and its gradient is Lipschitz continuous, we prove that (a) any trajectory of the gradient-based neural network converges to an equilibrium point, and (b) Lyapunov stability is equivalent to asymptotic stability in the gradient-based neural networks. For a convex optimization problem, under the same assumptions, we show that any trajectory of gradient-based neural networks will converge to an asymptotically stable equilibrium point of the neural networks. For a general nonlinear objective function, we propose a refined gradient-based neural network, whose trajectory from any initial point will converge to an equilibrium point, which satisfies the second order necessary optimality conditions for optimization problems. Promising simulation results of a refined gradient-based neural network on some problems are also reported.
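The gradient-based dynamics analyzed above can be sketched as a simple gradient flow dx/dt = -grad f(x) integrated by forward Euler; the quadratic objective below is an assumption for illustration, showing a trajectory settling at an equilibrium (stationary) point.

    import numpy as np

    def grad_f(x):
        # Gradient of the convex quadratic f(x) = 0.5*(x1 - 1)^2 + (x2 + 2)^2.
        return np.array([x[0] - 1.0, 2.0 * (x[1] + 2.0)])

    x, dt = np.array([5.0, 5.0]), 0.05
    for _ in range(2000):
        x = x - dt * grad_f(x)             # one explicit Euler step of the gradient flow
    print(x)                               # approximately [1, -2], the unique equilibrium point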
10.
Extended Linear-Quadratic Programming (ELQP) problems were introduced by Rockafellar and Wets for various models in stochastic programming and multistage optimization. Several numerical methods with linear convergence rates have been developed for solving fully quadratic ELQP problems, where the primal and dual coefficient matrices are positive definite. We present a two-stage sequential quadratic programming (SQP) method for solving ELQP problems arising in stochastic programming. The first stage algorithm realizes global convergence and the second stage algorithm realizes superlinear local convergence under a condition called B-regularity. B-regularity is milder than the fully quadratic condition; the primal coefficient matrix need not be positive definite. Numerical tests are given to demonstrate the efficiency of the algorithm. Solution properties of the ELQP problem under B-regularity are also discussed. Supported by the Australian Research Council.
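For orientation, the ELQP objective of Rockafellar and Wets has, up to notation (which varies between papers and is an assumption here), the saddle-structured form below; "fully quadratic" refers to both coefficient matrices being positive definite, whereas B-regularity, as noted above, does not require this of the primal matrix.

    \min_{x \in X} \; p^{\top}x + \tfrac{1}{2}\,x^{\top}Px + \rho_{Y,Q}(q - Rx),
    \qquad
    \rho_{Y,Q}(u) = \sup_{y \in Y}\,\bigl\{ u^{\top}y - \tfrac{1}{2}\,y^{\top}Qy \bigr\},

where X and Y are polyhedral convex sets and P, Q are symmetric positive semidefinite.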