Full-text access type: paid full text, 316 articles; free, 32 articles.

Subject classification: Chemistry, 36; Mechanics, 15; Mathematics, 283; Physics, 14.

Publication year (article count): 2022 (2), 2021 (6), 2019 (1), 2018 (2), 2017 (9), 2016 (6), 2015 (7), 2014 (5), 2013 (7), 2012 (16), 2011 (17), 2010 (8), 2009 (8), 2008 (6), 2007 (11), 2006 (22), 2005 (30), 2004 (19), 2003 (15), 2002 (15), 2001 (16), 2000 (16), 1999 (6), 1998 (6), 1997 (13), 1996 (10), 1995 (11), 1994 (16), 1993 (12), 1992 (11), 1991 (9), 1990 (3), 1988 (1), 1987 (3), 1986 (1), 1985 (1), 1984 (1).

**348 query results in total; search time 31 ms.**

1.

Here we propose a global optimization method for general, i.e. indefinite, quadratic problems, which consist of maximizing a non-concave quadratic function over a polyhedron in *n*-dimensional Euclidean space. This algorithm is shown to be finite and exact in non-degenerate situations. The key procedure uses copositivity arguments to ensure escape from inefficient local solutions. A similar approach is used to generate an improving feasible point whenever the starting point is not the global solution, irrespective of whether or not it is a local solution; definiteness properties of the quadratic objective function are likewise irrelevant for this procedure. To increase the efficiency of these methods, we employ pseudoconvexity arguments. Pseudoconvexity is related to copositivity in a way that may make it possible to check this property efficiently, even beyond the scope of the cases considered here.

2. Simulated annealing for constrained global optimization

**Cited by: 10 (self: 0, others: 10)** Hide-and-Seek is a powerful yet simple and easily implemented continuous simulated annealing algorithm for finding the maximum of a continuous function over an arbitrary closed, bounded and full-dimensional body. The function may be nondifferentiable and the feasible region may be nonconvex or even disconnected. The algorithm begins with any feasible interior point. In each iteration it generates a candidate successor point by sampling a uniformly distributed point along a direction chosen at random from the current iteration point. In contrast to the discrete case, a single step of this algorithm may generate *any* point in the feasible region as a candidate point. The candidate point is then accepted as the next iteration point according to the Metropolis criterion, parametrized by an *adaptive* cooling schedule. Again in contrast to discrete simulated annealing, the sequence of iteration points converges in probability to a global optimum regardless of how rapidly the temperatures converge to zero. Empirical comparisons with other algorithms suggest competitive performance by Hide-and-Seek. This material is based on work supported by NATO Collaborative Research Grant no. 0119/89.
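The mechanics described above (a uniform candidate on the feasible chord through the current point along a random direction, then Metropolis acceptance) can be sketched as follows. This is only a toy version on a box-shaped feasible region: the geometric cooling schedule stands in for the paper's adaptive schedule, and all names and parameter values are illustrative.

```python
import math
import random

def hide_and_seek(f, lo, hi, x0, iters=2000, seed=0):
    """Toy Hide-and-Seek: maximize f over the box [lo, hi]^n.

    Each iteration samples a uniform point on the feasible chord through
    the current point along a random direction, then applies the
    Metropolis acceptance rule.  A simple geometric cooling schedule is
    used here; the paper's algorithm uses an adaptive one.
    """
    rng = random.Random(seed)
    n = len(x0)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    temp = 1.0
    for _ in range(iters):
        # Random direction on the unit sphere.
        d = [rng.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(di * di for di in d))
        d = [di / norm for di in d]
        # Feasible range of t such that x + t*d stays inside the box.
        t_lo, t_hi = -math.inf, math.inf
        for i in range(n):
            if abs(d[i]) > 1e-12:
                a, b = (lo - x[i]) / d[i], (hi - x[i]) / d[i]
                t_lo, t_hi = max(t_lo, min(a, b)), min(t_hi, max(a, b))
        t = rng.uniform(t_lo, t_hi)
        y = [x[i] + t * d[i] for i in range(n)]
        fy = f(y)
        # Metropolis rule for maximization.
        if fy >= fx or rng.random() < math.exp((fy - fx) / temp):
            x, fx = y, fy
            if fx > fbest:
                best, fbest = list(x), fx
        temp *= 0.995  # geometric cooling (illustrative stand-in)
    return best, fbest
```

Note that, as the abstract stresses, a single step can reach any point of the feasible region, since the candidate is drawn from the whole chord rather than from a local neighborhood.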

3. Pure adaptive search in global optimization

**Cited by: 10 (self: 0, others: 10)** Pure adaptive search iteratively constructs a sequence of interior points uniformly distributed within the corresponding sequence of nested improving regions of the feasible space. That is, at any iteration, the next point in the sequence is uniformly distributed over the region of feasible space containing all points that are strictly superior in value to the previous points in the sequence. The complexity of this algorithm is measured by the expected number of iterations required to achieve a given accuracy of solution. We show that for global mathematical programs satisfying the Lipschitz condition, its complexity increases at most *linearly* in the dimension of the problem. This work was supported in part by NATO grant 0119/89.
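The iteration described above can be sketched conceptually. Here the uniform draw from the improving region is realized by rejection sampling over a box, which is practical only in low dimension; the paper analyzes the idealized process, not an implementation, so everything below is an illustrative stand-in.

```python
import random

def pure_adaptive_search(f, lo, hi, n, iters=50, seed=0):
    """Conceptual pure adaptive search on the box [lo, hi]^n (maximization).

    Each iterate is a point uniformly distributed over the current
    improving region {x : f(x) > f(x_k)}.  The uniform draw is realized
    here by rejection sampling, which becomes hopeless as the improving
    region shrinks; it only serves to illustrate the process.
    """
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n)]
    fx = f(x)
    trace = [fx]  # record of the strictly improving objective values
    for _ in range(iters):
        # Rejection-sample a uniform point of the improving region.
        for _attempt in range(10000):
            y = [rng.uniform(lo, hi) for _ in range(n)]
            if f(y) > fx:
                x, fx = y, f(y)
                break
        else:
            break  # improving region too small to hit; stop
        trace.append(fx)
    return x, fx, trace
```

The complexity result quoted in the abstract concerns the expected number of *accepted* iterates (the length of `trace` here), not the rejection effort, which is exactly why the idealized process is interesting.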

4. Global optimization approach to nonlinear optimal control

**Cited by: 1 (self: 0, others: 1)** To determine the optimum in nonlinear optimal control problems, it is proposed to convert the continuous problems into a form suitable for nonlinear programming (NLP). Since the resulting finite-dimensional NLP problems can present multiple local optima, a global optimization approach is developed in which random starting conditions are improved by using special line searches. The efficiency, speed, and reliability of the proposed approach are examined using two examples. Financial support from the Natural Science and Engineering Research Council under Grant A-3515, as well as an Ontario Graduate Scholarship, is gratefully acknowledged. All the computations were done with the facilities of the University of Toronto Computer Centre and the Ontario Centre for Large Scale Computations.
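A minimal multistart sketch of the overall strategy follows. The paper's "special line searches" are not specified in the abstract, so a crude shrinking-step coordinate line search stands in for them; the test function, bounds, and parameters are all illustrative.

```python
import random

def multistart_minimize(f, lo, hi, n, starts=20, sweeps=30, seed=0):
    """Multistart sketch for a multimodal NLP: many random starting
    points, each refined by a crude per-coordinate line search with a
    shrinking step (a stand-in for the paper's line searches).
    Returns the best local minimum found across all starts."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(starts):
        x = [rng.uniform(lo, hi) for _ in range(n)]
        step = (hi - lo) / 4.0
        for _ in range(sweeps):
            for i in range(n):
                # Try moving coordinate i both ways; keep any improvement.
                for s in (step, -step):
                    y = list(x)
                    y[i] = min(hi, max(lo, y[i] + s))
                    if f(y) < f(x):
                        x = y
            step *= 0.7  # shrink the line-search step each sweep
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

In the optimal-control setting of the abstract, `f` would be the discretized objective over the finite-dimensional parametrization of the control, and each start corresponds to a random initial control guess.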

5. Effect of the subdivision strategy on convergence and efficiency of some global optimization algorithms

**Cited by: 2 (self: 0, others: 2)** Hoang Tuy, *Journal of Global Optimization*, 1991, 1(1): 23-36

We investigate subdivision strategies that can improve the convergence and efficiency of some branch-and-bound algorithms of global optimization. In particular, a general class of so-called weakly exhaustive simplicial subdivision processes is introduced that subsumes all previously known radial exhaustive processes. This result provides the basis for constructing flexible subdivision strategies that can be adapted to take advantage of various problem conditions.

6.

We are dealing with a numerical method for solving the problem of minimizing a difference of two convex functions (a d.c. function) over a closed convex set in ℝ^n. This algorithm combines a new prismatic branch-and-bound technique with polyhedral outer approximation in such a way that only linear programming problems have to be solved. Parts of this research were accomplished while the third author was visiting the University of Trier, Germany, as a fellow of the Alexander von Humboldt Foundation.

7.

Since Dantzig-Wolfe's pioneering contribution, the decomposition approach using a pricing mechanism has been developed for a wide class of mathematical programs. For convex programs, a linear space of Lagrangean multipliers is enough to define price functions. For general mathematical programs, price functions can be defined using a subclass of nondecreasing functions; however, the space of nondecreasing functions is no longer finite-dimensional. In this paper we consider a specific nonconvex optimization problem min{*f*(*x*) : *h*_j(*x*) ≤ *g*(*x*), *j* = 1, …, *m*, *x* ∈ *X*}, where *f*(·), *h*_j(·) and *g*(·) are finite convex functions and *X* is a closed convex set. We generalize optimal price functions for this problem in such a way that the parameters of the generalized price functions are defined in a finite-dimensional space. Combining convex duality and a nonconvex duality, we can develop a decomposition method to find a globally optimal solution. This paper is dedicated to Phil Wolfe on the occasion of his 65th birthday.

8.

A general iterative method is proposed for finding the maximal root *x*_max of a one-variable equation in a given interval. The method generates a monotone-decreasing sequence of points converging to *x*_max or demonstrates the nonexistence of a real root. It is globally convergent. A concrete realization of the general algorithm is also given and is shown to be locally quadratically convergent. Computational experience obtained for eight test problems indicates that the new method is comparable to known methods claiming global convergence.
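The paper's general method is not reproduced in the abstract. The following stand-in merely shares its goal, finding the maximal root in an interval or reporting that none was found, using a right-to-left grid scan plus bisection; it is not the authors' algorithm, and its limitation is noted in the comments.

```python
def maximal_root(f, a, b, grid=1000, tol=1e-10):
    """Stand-in for finding the maximal root of f on [a, b]: scan a grid
    from the right endpoint toward a, and bisect the first subinterval
    whose endpoints change sign.  Returns None when no sign change is
    found (unlike the paper's method, this can miss roots that the grid
    does not separate, e.g. roots where f touches zero without crossing)."""
    h = (b - a) / grid
    x_hi, f_hi = b, f(b)
    if f_hi == 0.0:
        return b
    for k in range(1, grid + 1):
        x_lo = b - k * h
        f_lo = f(x_lo)
        if f_lo == 0.0:
            return x_lo
        if f_lo * f_hi < 0.0:
            # Bisect [x_lo, x_hi]; the bracket shrinks onto the largest
            # root in this rightmost sign-changing subinterval.
            while x_hi - x_lo > tol:
                m = 0.5 * (x_lo + x_hi)
                fm = f(m)
                if fm == 0.0:
                    return m
                if fm * f_hi < 0.0:
                    x_lo, f_lo = m, fm
                else:
                    x_hi, f_hi = m, fm
            return 0.5 * (x_lo + x_hi)
        x_hi, f_hi = x_lo, f_lo
    return None
```

Bisection converges only linearly, whereas the paper's concrete realization is locally quadratically convergent; a Newton or secant refinement inside the bracket would recover that rate.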

9.

Phan Thien Thach, *Journal of Global Optimization*, 1993, 3(3): 311-324

The aim of this paper is to present a nonconvex duality with a zero gap and its connection with convex duality. Since a convex program can be regarded as a particular case of convex maximization over a convex set, nonconvex duality can be regarded as a generalization of convex duality. The generalized duality can be obtained on the basis of convex duality and minimax theorems. The duality with a zero gap can be extended to more general nonconvex problems, such as quasiconvex maximization over a general nonconvex set or general minimization over the complement of a convex set. Several applications are given. On leave from the Institute of Mathematics, Hanoi, Vietnam.

10.

S. K. Zavriev, *Journal of Global Optimization*, 1993, 3(1): 67-78

The paper is devoted to the convergence properties of finite-difference local descent algorithms in global optimization problems with a special -convex structure. It is assumed that the objective function can be closely approximated by some smooth convex function. Stability properties of the perturbed gradient descent and coordinate descent methods are investigated. Based on these results, some global optimization properties of finite-difference local descent algorithms, in particular the coordinate descent method, are discovered. These properties are not inherent in methods using exact gradients. The paper was presented at the II. IIASA Workshop on Global Optimization, Sopron, Hungary, December 9-14, 1990.
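A minimal sketch of the kind of algorithm analyzed here, gradient descent driven purely by central finite differences rather than exact gradients; the function, step sizes, and iteration counts below are illustrative, not taken from the paper.

```python
def fd_gradient(f, x, h=1e-6):
    """Central finite-difference approximation of the gradient of f at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def fd_descent(f, x0, lr=0.1, iters=200, h=1e-6):
    """Gradient descent that never sees exact gradients, only finite
    differences -- the setting the paper analyzes for objectives that
    are close perturbations of a smooth convex function."""
    x = list(x0)
    for _ in range(iters):
        g = fd_gradient(f, x, h)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

The paper's point is that the difference step `h` effectively smooths small non-convex perturbations of the objective, which is why such methods can exhibit global behavior that exact-gradient descent does not.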