1.
The interior proximal extragradient method for solving equilibrium problems
In this article we present a new and efficient method for solving equilibrium problems on polyhedra. The method is based on an interior-quadratic proximal term which replaces the usual quadratic proximal term, leading to an interior proximal type algorithm. Each iteration consists of a prediction step followed by a correction step, as in the extragradient method. In the first algorithm each of these steps is obtained by solving an unconstrained minimization problem, while in the second algorithm the correction step is replaced by an Armijo-backtracking linesearch followed by a hyperplane projection step. We prove that our algorithms are convergent under mild assumptions: pseudomonotonicity for both algorithms and a Lipschitz property for the first one. Finally we present some numerical experiments to illustrate the behavior of the proposed algorithms.
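For orientation, here is a minimal sketch of the classical prediction–correction (extragradient) template that the paper's interior-proximal variant builds on; it uses a plain Euclidean projection and a box feasible set rather than the interior-quadratic proximal term, and the operator `F`, bounds, and step size `tau` are illustrative assumptions.

```python
import numpy as np

def extragradient(F, proj, x0, tau=0.1, tol=1e-8, max_iter=1000):
    """Classical extragradient: a prediction step followed by a
    correction step, both using the same projection operator."""
    x = x0.copy()
    for _ in range(max_iter):
        y = proj(x - tau * F(x))      # prediction step
        x_new = proj(x - tau * F(y))  # correction step
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy usage: affine monotone operator on the box [0, 1]^2.
F = lambda x: np.array([[4.0, 1.0], [-1.0, 4.0]]) @ x - np.array([1.0, 1.0])
proj = lambda z: np.clip(z, 0.0, 1.0)
print(extragradient(F, proj, np.zeros(2)))
```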
2.
A new derivative-free method is developed for solving unconstrained nonsmooth optimization problems. This method is based on the notion of a discrete gradient. It is demonstrated that the discrete gradients can be used to approximate subgradients of a broad class of nonsmooth functions. It is also shown that the discrete gradients can be applied to find descent directions of nonsmooth functions. The preliminary results of numerical experiments with unconstrained nonsmooth optimization problems as well as the comparison of the proposed method with the nonsmooth optimization solver DNLP from CONOPT-GAMS and the derivative-free optimization solver CONDOR are presented.
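A hedged sketch of the underlying idea: approximate a subgradient by finite differences along coordinate directions. The construction below is the plain forward-difference analogue, not the authors' exact discrete-gradient definition.

```python
import numpy as np

def discrete_gradient(f, x, h=1e-6):
    """Coordinate-wise forward-difference approximation; for smooth f
    this tends to the gradient as h -> 0, and for many nonsmooth f it
    approximates an element of the subdifferential."""
    n = len(x)
    g = np.zeros(n)
    fx = f(x)
    for i in range(n):
        e = np.zeros(n); e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

# Toy usage on the nonsmooth function f(x) = |x1| + 2|x2|.
f = lambda x: abs(x[0]) + 2 * abs(x[1])
print(discrete_gradient(f, np.array([1.0, -0.5])))  # approx. [1, -2]
```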
3.
This paper describes a class of frame-based direct search methods for unconstrained optimization without derivatives. A template for convergent direct search methods is developed, some requiring only the relative ordering of function values. At each iteration, the template considers a number of search steps which form a positive basis and conducts a ray search along a step giving adequate decrease. Various ray search strategies are possible, including discrete equivalents of the Goldstein–Armijo and one-sided Wolfe–Powell ray searches. Convergence is shown under mild conditions which allow successive frames to be rotated, translated, and scaled relative to one another.
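For intuition, a minimal coordinate (compass) search using the positive basis {+e_i, -e_i}; frame rotation/scaling and the ray-search variants of the paper are omitted, and the sufficient-decrease constant here is an illustrative choice.

```python
import numpy as np

def compass_search(f, x0, h=1.0, shrink=0.5, tol=1e-8, max_iter=10000):
    """Direct search over the positive basis {+e_i, -e_i}: move to a
    point giving adequate decrease, otherwise shrink the frame size h."""
    x, fx, n = x0.copy(), f(x0), len(x0)
    steps = np.vstack([np.eye(n), -np.eye(n)])   # a positive basis
    for _ in range(max_iter):
        improved = False
        for d in steps:
            trial = x + h * d
            ft = f(trial)
            if ft < fx - 1e-4 * h * h:           # sufficient decrease
                x, fx, improved = trial, ft, True
                break
        if not improved:
            h *= shrink                          # refine the frame
            if h < tol:
                break
    return x, fx

print(compass_search(lambda x: (x[0] - 1)**2 + 3*(x[1] + 2)**2, np.zeros(2)))
```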
4.
Based on the extended extragradient-like method and the linesearch technique, we propose three projection methods for finding a common solution of a finite family of equilibrium problems. The linesearch used in the proposed algorithms makes it possible to relax some of the conditions usually imposed on the equilibrium bifunctions, and strong convergence theorems are established without any Lipschitz-type condition on the bifunctions. The paper also aids the design and analysis of practical algorithms and generalizes some previously known problems.
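The linesearch ingredient can be sketched generically; below is a standard Armijo backtracking rule for a descent direction, standing in for the bifunction-based linesearch of the paper (the constants `beta` and `sigma` are illustrative assumptions).

```python
import numpy as np

def armijo_backtracking(f, x, d, g, beta=0.5, sigma=1e-4, max_halvings=50):
    """Return a step t = beta^m such that
    f(x + t d) <= f(x) + sigma * t * <g, d>,
    where d is a descent direction and g the (sub)gradient at x."""
    t = 1.0
    fx, slope = f(x), sigma * (g @ d)
    for _ in range(max_halvings):
        if f(x + t * d) <= fx + t * slope:
            return t
        t *= beta
    return t

# Toy usage on a quadratic with the steepest-descent direction.
f = lambda x: 0.5 * x @ x
x = np.array([3.0, -4.0]); g = x
print(armijo_backtracking(f, x, -g, g))
```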
5.
A convergence analysis is presented for a general class of derivative-free algorithms for minimizing a function f(x) for which the analytic form of the gradient and the Hessian is impractical to obtain. The class of algorithms accepts finite-difference approximations to the gradient, with stepsizes chosen so that each stepsize satisfies two conditions involving the previous stepsize and the distance from the last estimate of the solution to the current estimate. The algorithms also maintain an approximation to the second-derivative matrix and require that the change in x made at each iteration be subject to a bound that is also revised automatically. The convergence theorems have the features that the starting point x1 need not be close to the true solution and f(x) need not be convex. Furthermore, despite the fact that the second-derivative approximation may not converge to the true Hessian at the solution, the rate of convergence is still Q-superlinear. The theory is also shown to be applicable to a modification of Powell's dog-leg algorithm.
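As an illustrative companion, a sketch of the dogleg step used to bound the change in x: given a gradient g, a positive-definite Hessian approximation B, and a radius delta, the step follows the steepest-descent segment and then bends toward the quasi-Newton point. This is the textbook dogleg, not the paper's specific modification.

```python
import numpy as np

def dogleg_step(g, B, delta):
    """Powell-style dogleg: combine the Cauchy (steepest-descent) point
    with the quasi-Newton point, truncated to the trust radius delta."""
    p_newton = -np.linalg.solve(B, g)
    if np.linalg.norm(p_newton) <= delta:
        return p_newton                          # full quasi-Newton step fits
    p_cauchy = -(g @ g) / (g @ B @ g) * g        # model minimizer along -g
    if np.linalg.norm(p_cauchy) >= delta:
        return -delta * g / np.linalg.norm(g)    # clipped steepest descent
    # Otherwise walk from p_cauchy toward p_newton until ||p|| = delta.
    d = p_newton - p_cauchy
    a, b, c = d @ d, 2 * (p_cauchy @ d), p_cauchy @ p_cauchy - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p_cauchy + tau * d

B = np.array([[2.0, 0.0], [0.0, 10.0]])
g = np.array([1.0, 1.0])
print(dogleg_step(g, B, delta=0.3))
```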
6.
This paper studies a new class of double-projection iterative algorithms for solving pseudomonotone variational inequalities. Using an Armijo-type linesearch procedure, a new class of hyperplanes is constructed which strictly separate the current iterate from the solution set of the variational inequality. Exploiting this separation property, we prove under rather weak conditions that the infinite sequence generated by the algorithm is globally convergent. Numerical experiments show that the algorithm is effective.
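A rough sketch of the double-projection template in the Solodov–Svaiter style, which is the classical scheme this class of algorithms refines: an Armijo-type search along the projected residual yields a point z whose value F(z) defines a hyperplane separating x from the solution set, and x is then projected onto that hyperplane and back onto the feasible set. All constants and the toy operator are illustrative assumptions.

```python
import numpy as np

def double_projection(F, proj, x0, gamma=0.5, sigma=0.3,
                      tol=1e-8, max_iter=500):
    """Double-projection method for a pseudomonotone VI."""
    x = x0.copy()
    for _ in range(max_iter):
        r = x - proj(x - F(x))            # projected residual
        if np.linalg.norm(r) < tol:
            return x
        t = 1.0                           # Armijo-type search along -r
        while F(x - t * r) @ r < sigma * (r @ r) and t > 1e-12:
            t *= gamma
        z = x - t * r
        Fz = F(z)
        if Fz @ Fz < tol**2:
            return z                      # z already solves the VI
        # Project x onto the separating hyperplane, then onto the set.
        x = proj(x - (Fz @ (x - z)) / (Fz @ Fz) * Fz)
    return x

F = lambda x: np.array([[3.0, 1.0], [-1.0, 3.0]]) @ x
proj = lambda u: np.clip(u, -1.0, 1.0)
print(double_projection(F, proj, np.array([1.0, 1.0])))  # -> near 0
```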
7.
Recently, a very general class of truncated Newton methods was proposed in [12] for solving large scale unconstrained optimization problems. In this work we present the results of extensive numerical experience with different algorithms belonging to that class. Besides investigating which algorithmic choices of the proposed approach work best, this numerical study clarifies some significant points that underlie every truncated Newton based algorithm.
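A compact sketch of the core every truncated Newton method shares: solve the Newton system approximately by conjugate gradients, stop on a residual-based truncation test, and fall back to steepest descent when negative curvature appears. The forcing term `eta` and the fallback rule are illustrative choices, not those of [12].

```python
import numpy as np

def truncated_newton_step(hess_vec, g, eta=0.5, max_cg=50):
    """Inner CG loop: approximately solve B d = -g, stopping when
    ||residual|| <= eta * ||g|| or negative curvature is detected."""
    d = np.zeros_like(g)
    r = -g.copy()                        # residual of B d = -g at d = 0
    p = r.copy()
    for _ in range(max_cg):
        Bp = hess_vec(p)
        curv = p @ Bp
        if curv <= 0:                    # negative curvature: bail out
            return d if d.any() else -g  # fall back to steepest descent
        alpha = (r @ r) / curv
        d = d + alpha * p
        r_new = r - alpha * Bp
        if np.linalg.norm(r_new) <= eta * np.linalg.norm(g):
            return d                     # truncation test satisfied
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d

# Toy usage: quadratic model, so hess_vec is just B @ v.
B = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, 2.0])
print(truncated_newton_step(lambda v: B @ v, g))  # close to -B^{-1} g
```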
8.
Powerful response surface methods based on kriging and radial basis function (RBF) interpolation have been developed for expensive, i.e. computationally costly, global nonconvex optimization. We have implemented some of these methods in the solvers rbfSolve and EGO in the TOMLAB Optimization Environment (http://www.tomopt.com/tomlab/). In this paper we study algorithms based on RBF interpolation. The practical performance of the RBF algorithm is sensitive to the initial experimental design, and to the static choice of target values. A new adaptive radial basis interpolation (ARBF) algorithm, suitable for parallel implementation, is presented. The algorithm is described in detail and its efficiency is analyzed on the standard test problem set of Dixon–Szegö. Results show that it outperforms the published results of rbfSolve and several other solvers.
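A toy sketch of the interpolation step at the heart of such methods, using a cubic RBF with a linear polynomial tail (a common choice in this literature, though the exact basis and tail used by rbfSolve/ARBF are not reproduced here):

```python
import numpy as np

def fit_rbf(X, f):
    """Fit s(x) = sum_j w_j ||x - x_j||^3 + c0 + c^T x by solving the
    standard augmented interpolation system."""
    m, n = X.shape
    Phi = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2) ** 3
    P = np.hstack([np.ones((m, 1)), X])          # linear polynomial tail
    A = np.block([[Phi, P], [P.T, np.zeros((n + 1, n + 1))]])
    rhs = np.concatenate([f, np.zeros(n + 1)])
    coef = np.linalg.solve(A, rhs)
    return coef[:m], coef[m:]

def eval_rbf(x, X, w, c):
    r = np.linalg.norm(X - x, axis=1)
    return w @ r**3 + c[0] + c[1:] @ x

# Toy usage: interpolate f(x) = sin(x1) + x2^2 at random sample points.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(20, 2))
f = np.sin(X[:, 0]) + X[:, 1] ** 2
w, c = fit_rbf(X, f)
print(eval_rbf(np.array([0.5, 0.5]), X, w, c))   # ~ sin(0.5) + 0.25
```

The surrogate s(x) is cheap to evaluate, so the expensive blackbox is only queried at points the surrogate suggests are promising.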
9.
A nonmonotone memoryless Perry–Shanno quasi-Newton method with a parameter is presented. For a convex objective function, the global convergence of the algorithm is proved provided that the parameter lies in a suitable range.
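A sketch of the nonmonotone Armijo rule (Grippo–Lampariello–Lucidi style) that "nonmonotone" typically refers to in this setting: the sufficient-decrease test compares against the maximum of the last M function values rather than the current one. The Perry–Shanno direction computation itself is not reproduced here.

```python
import numpy as np
from collections import deque

def nonmonotone_armijo(f, x, d, g, history, beta=0.5, sigma=1e-4,
                       max_halvings=50):
    """Accept t once f(x + t d) <= max(recent f-values) + sigma*t*<g,d>;
    allowing f to rise temporarily often helps quasi-Newton methods."""
    f_ref = max(history)            # reference over the last M iterates
    t, slope = 1.0, sigma * (g @ d)
    for _ in range(max_halvings):
        if f(x + t * d) <= f_ref + t * slope:
            return t
        t *= beta
    return t

# Toy usage: keep a window of the last M = 5 function values.
f = lambda x: 0.5 * x @ x
x = np.array([2.0, -1.0])
history = deque([f(x)], maxlen=5)
g = x
t = nonmonotone_armijo(f, x, -g, g, history)
x = x - t * g
history.append(f(x))
print(t, f(x))
```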
10.
This work proposes strategies to handle three types of constraints in the context of blackbox optimization: binary constraints, which simply indicate whether they are satisfied; unrelaxable constraints, which must be satisfied for the output of the blackbox to be trusted; and hidden constraints, which are not explicitly known to the user but are triggered unexpectedly. Using tools from classification theory, we build surrogate models of these constraints to guide the Mads algorithm. Numerical experiments are conducted on three engineering problems.
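An illustrative miniature of the classification idea (not the authors' actual surrogate): label evaluated points as feasible or infeasible for a binary or hidden constraint, then use a nearest-neighbour vote to screen candidate poll points before spending blackbox evaluations on them.

```python
import numpy as np

def knn_feasible(candidate, X, labels, k=3):
    """Predict feasibility of a candidate point by majority vote over
    its k nearest previously evaluated points."""
    d = np.linalg.norm(X - candidate, axis=1)
    nearest = labels[np.argsort(d)[:k]]
    return nearest.mean() >= 0.5

# Toy usage: a hidden constraint that fails whenever x1 + x2 > 1.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(50, 2))
labels = (X.sum(axis=1) <= 1.0).astype(float)    # 1 = blackbox succeeded
poll_points = rng.uniform(0, 1, size=(5, 2))
for p in poll_points:
    print(p, "evaluate" if knn_feasible(p, X, labels) else "skip")
```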