31.
We consider a two-person, general-sum, rational-data, undiscounted stochastic game in which one player (player II) controls the transition probabilities. We show that the set of stationary equilibrium points is the union of a finite number of sets such that every element of each of these sets can be constructed from a finite number of extreme equilibrium strategies for player I and from a finite number of pseudo-extreme equilibrium strategies for player II. These extreme and pseudo-extreme strategies can themselves be constructed by finite (but inefficient) algorithms. Analogous results can also be established in the more straightforward case of discounted single-controller games.
32.
Given a graph G = [V, E] with positive edge weights, the max-cut problem is to find a cut in G such that the sum of the weights of the edges of this cut is as large as possible. Let g(K) be the class of graphs whose longest odd cycle is not longer than 2K+1, where K is a nonnegative integer independent of the number n of nodes of G. We present an O(n^{4K}) algorithm for the max-cut problem for graphs in g(K). The algorithm is recursive and is based on some properties of longest and longest odd cycles of graphs. This research was supported by National Science Foundation Grant ECS-8005350 to Cornell University.
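As an illustrative special case only (not the paper's recursive algorithm): for K = 0 the graphs in g(0) have no odd cycles at all, i.e. they are bipartite, and the bipartition itself is a maximum cut because it contains every positively weighted edge. A minimal sketch, assuming a connected graph given as an adjacency dictionary:

```python
from collections import deque

def max_cut_bipartite(adj, weights):
    """Maximum cut of a connected bipartite graph (the K = 0 case of g(K)).

    adj     : dict {node: iterable of neighbours}
    weights : dict {(u, v): positive weight}, one entry per edge

    A bipartite graph has no odd cycle, so a proper 2-colouring exists;
    the two colour classes form a cut containing every edge, which is
    optimal because all edge weights are positive.
    """
    colour = {next(iter(adj)): 0}
    queue = deque(colour)
    while queue:                            # BFS 2-colouring
        u = queue.popleft()
        for v in adj[u]:
            if v not in colour:
                colour[v] = 1 - colour[u]
                queue.append(v)
            elif colour[v] == colour[u]:
                raise ValueError("not bipartite: an odd cycle exists")
    one_side = {v for v, c in colour.items() if c == 0}
    return one_side, sum(weights.values())  # every edge crosses the cut
```

For K > 0 this shortcut no longer applies, and the recursion over longest and longest odd cycles described in the abstract is needed.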
33.
A graphsack problem is a certain binary linear optimization problem with applications in optimal network design. From it, a rational graphsack problem is derived by allowing the variables to vary continuously between 0 and 1. In this paper we deal with rational graphsack problems. First we develop the concepts of compressed solutions and of augmenting cuts. Making use of these concepts, a very simple optimality criterion is derived. Finally, an efficient algorithm for solving rational graphsack problems is given; it is polynomially bounded in time and closely related to the simplex algorithm.
34.
The gradient path of a real-valued differentiable function is given by the solution of a system of differential equations. For a quadratic function these equations are linear, resulting in a closed-form solution. A quasi-Newton type algorithm for minimizing an n-dimensional differentiable function is presented. Each stage of the algorithm consists of a search along an arc corresponding to some local quadratic approximation of the function being minimized. The algorithm uses a matrix approximating the Hessian in order to represent the arc. This matrix is updated at each stage and is stored in its Cholesky product form, which simplifies both the representation of the arc and the updating process. Quadratic termination properties of the algorithm are discussed, as well as its global convergence for a general continuously differentiable function. Numerical experiments indicating the efficiency of the algorithm are presented.
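For reference, a minimal sketch of the underlying gradient-path idea (not the paper's implementation): with a symmetric positive-definite Hessian approximation B, the gradient path of the local quadratic model q(x0 + y) = f0 + gᵀy + ½yᵀBy solves dy/dt = -(g + By), y(0) = 0, which has the closed form y(t) = -B⁻¹(I - e^{-Bt})g. The sketch below evaluates this arc via an eigendecomposition; the paper instead stores B in Cholesky product form.

```python
import numpy as np

def quadratic_gradient_arc(g, B, t):
    """Point y(t) on the gradient path of the quadratic model
    q(x0 + y) = f0 + g @ y + 0.5 * y @ B @ y, i.e. the solution of
    dy/dt = -(g + B y), y(0) = 0, for symmetric positive-definite B.
    """
    lam, V = np.linalg.eigh(B)             # B = V diag(lam) V^T
    gv = V.T @ g
    phi = (1.0 - np.exp(-lam * t)) / lam   # ~ t for small t, -> 1/lam for large t
    return -V @ (phi * gv)

# The arc interpolates between steepest descent and the quasi-Newton step:
# y(t) ~ -t g for small t, and y(t) -> -B^{-1} g as t -> infinity.
```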
35.
We show that piecewise-linear homotopy algorithms may take a number of steps that grows exponentially with the dimension when solving a system of linear equations whose solution lies close to the starting point. Our examples are based on an example of Murty exhibiting exponential growth for Lemke's algorithm for the linear complementarity problem. This research was supported in part by NSF grant ECS-7921279 and by a Guggenheim Fellowship.
36.
We consider the recent algorithms for computing fixed points or zeros of continuous functions from R^n to itself that are based on tracing piecewise-linear paths in triangulations. We investigate the possible savings that arise when these fixed-point algorithms with their usual triangulations are applied to computing zeros of functions f with special structure: f is either piecewise-linear in certain variables, separable, or has a Jacobian with small bandwidth. Each of these structures leads to a property we call modularity: the algorithmic path within a simplex can be continued into an adjacent simplex without a function evaluation or linear programming pivot. Modularity also arises, without any special structure on f, from the linearity of the function that is deformed to f. In the case that f is separable, we show that the path generated by Kojima's algorithm with the homotopy H_2 coincides with the path generated by the standard restart algorithm of Merrill when the usual triangulations are employed. The extra function evaluations and linear programming steps required by the standard algorithm can be avoided by exploiting modularity. This research was performed while the author was visiting the Mathematics Research Center, University of Wisconsin-Madison, and was sponsored by the United States Army under Contract No. DAAG-29-75-C-0024 and by the National Science Foundation under Grant No. ENG76-08749.
37.
Since the original work of Dantzig and Wolfe in 1960, the idea of decomposition has persisted as an attractive approach to large-scale linear programming. However, empirical experience reported in the literature over the years has not been encouraging enough to stimulate practical application. Recent experiments indicate that much improvement is possible through advanced implementations and careful selection of computational strategies. This paper describes such an effort based on state-of-the-art, modular linear programming software (IBM's MPSX/370).
38.
This paper provides decomposition algorithms for locating minimal cuts in a large directed network. The main theorem provides several cases for the algorithms. In the worst case, the efficiency of one of the proposed algorithms is shown to be of the same order as that of a no-decomposition algorithm. As in linear programming, the obvious advantage of the proposed decomposition procedure is its potential to handle larger problems than a no-decomposition algorithm can.
39.
Computation times of room acoustical simulation algorithms still suffer from the time-consuming search for ray-wall intersections. Spatial subdivision may speed up ray tracing considerably. For room acoustics, where the number of surface polygons (walls) is not very high, the voxel technique appears suitable. The voxel crossing algorithm is very fast; however, its performance had not been investigated until now. Voxels are small cubes by which the space is subdivided periodically. The advantage is that an intersection point needs to be computed only in the rare case that a voxel intersects a wall. In this paper, by estimating the probabilities of such intersections, an analytical formula is derived by which the optimum degree of spatial subdivision and the acceleration factor of the algorithm can be predicted. It turns out that the computation time increases only sublinearly with K0 (the number of polygons of the room) rather than linearly. Thus, on a modern PC, the computation time for a full room acoustical simulation, even for highly complicated rooms, may be reduced by a factor on the order of 100, i.e. to a few seconds.
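For orientation, a minimal sketch of the voxel-stepping idea (a standard 3D grid traversal in the spirit of the voxel crossing technique; the paper's probabilistic analysis of the optimal subdivision degree is not reproduced here). In a room-acoustics ray tracer, each voxel would carry the list of wall polygons overlapping it, so the expensive ray-polygon intersection test is run only for those few walls.

```python
import math

def voxel_traversal(origin, direction, voxel_size, max_steps=10000):
    """Enumerate the grid cells (voxels) crossed by a ray, in order.

    At every step the ray advances into the neighbouring voxel whose
    boundary plane it reaches first ("voxel crossing" / 3D DDA).
    origin, direction : 3-tuples; voxel_size : edge length of the cubes.
    """
    voxel = [int(math.floor(o / voxel_size)) for o in origin]
    step, t_max, t_delta = [], [], []
    for o, d, v in zip(origin, direction, voxel):
        if d > 0:
            step.append(1)
            t_max.append(((v + 1) * voxel_size - o) / d)
            t_delta.append(voxel_size / d)
        elif d < 0:
            step.append(-1)
            t_max.append((v * voxel_size - o) / d)
            t_delta.append(-voxel_size / d)
        else:                       # ray parallel to this pair of planes
            step.append(0)
            t_max.append(math.inf)
            t_delta.append(math.inf)
    for _ in range(max_steps):
        yield tuple(voxel)
        axis = t_max.index(min(t_max))   # next boundary plane to be crossed
        voxel[axis] += step[axis]
        t_max[axis] += t_delta[axis]
```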
40.
A circular-arc (CA) graph is the intersection graph of arcs on a circle. A Helly circular-arc (HCA) graph is a CA graph admitting a model whose arcs satisfy the Helly property. A clique-independent set of a graph is a set of pairwise disjoint cliques of the graph. It is NP-hard to compute the maximum cardinality of a clique-independent set for a general graph. In the present paper, we propose polynomial-time algorithms for finding the maximum cardinality and weight of a clique-independent set of a -free CA graph. We also apply the algorithms to the special case of an HCA graph. The complexity of the proposed algorithm for the cardinality problem in HCA graphs is O(n). This represents an improvement over the existing algorithm by Guruswami and Pandu Rangan, whose complexity is O(n^2). These algorithms assume that an HCA model of the graph is given.