11.
This paper derives an upper bound on the speedup obtainable by any parallel branch-and-bound algorithm that uses the best-bound search strategy. We confirm that parallel branch-and-bound can achieve nearly linear, or even super-linear, speedup under appropriate conditions. This work was supported by U.S. Army Research Office grant DAAG29-82-K-0107.
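The abstract gives no pseudocode; as an illustration of the best-bound strategy it analyses, here is a minimal serial best-bound branch-and-bound for 0/1 knapsack (the choice of problem and all names are mine, not the paper's):

```python
import heapq

def knapsack_best_bound(values, weights, capacity):
    """Best-bound (best-first) branch-and-bound for 0/1 knapsack.

    Nodes are expanded in order of their fractional upper bound,
    i.e. the best-bound search strategy.
    """
    n = len(values)
    # Sort items by value density for the fractional bound.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(level, value, room):
        # Fractional-knapsack upper bound on any completion of this node.
        b = value
        for i in order[level:]:
            if weights[i] <= room:
                room -= weights[i]
                b += values[i]
            else:
                b += values[i] * room / weights[i]
                break
        return b

    best = 0
    # Max-heap keyed on the (negated) upper bound: best node first.
    heap = [(-bound(0, 0, capacity), 0, 0, capacity)]
    while heap:
        neg_ub, level, value, room = heapq.heappop(heap)
        if -neg_ub <= best:        # no remaining node can beat the incumbent
            break
        if level == n:
            continue
        i = order[level]
        # Branch 1: take item i (if it fits).
        if weights[i] <= room:
            v, r = value + values[i], room - weights[i]
            best = max(best, v)
            heapq.heappush(heap, (-bound(level + 1, v, r), level + 1, v, r))
        # Branch 2: skip item i.
        heapq.heappush(heap, (-bound(level + 1, value, room), level + 1, value, room))
    return best
```

A parallel version would pop several best-bound nodes at once; the paper's bound limits how much that can help.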
12.
Path Decomposition of Graphs with Given Path Length
Ming-qing Zhai, Acta Mathematicae Applicatae Sinica (English Series) 2006, 22(4): 633-638
A path decomposition of a graph G is a list of paths such that each edge appears in exactly one path in the list. G is said to admit a {P_l}-decomposition if G can be decomposed into copies of P_l, where P_l is a path of length l-1. Similarly, G is said to admit a {P_l, P_k}-decomposition if G can be decomposed into copies of P_l or P_k. A k-cycle, denoted by C_k, is a cycle with k vertices. An odd tree is a tree in which all vertices have odd degree. In this paper, it is shown that a connected graph G admits a {P_3, P_4}-decomposition if and only if G is neither a 3-cycle nor an odd tree. This result includes the related result of Yan, Xu and Mutu. Moreover, two polynomial algorithms are given to find a {P_3}-decomposition and a {P_3, P_4}-decomposition of a graph, respectively. Hence, the {P_3}-decomposition problem and the {P_3, P_4}-decomposition problem are solved completely.
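The characterization is easy to operationalize. A small sketch (my own, stdlib only; adjacency is assumed to be a dict of neighbour sets) that checks whether a connected graph meets the theorem's condition, without constructing the decomposition itself:

```python
def admits_p3_p4_decomposition(adj):
    """Check the paper's criterion: a connected graph admits a
    {P_3, P_4}-decomposition iff it is neither a 3-cycle nor an odd tree
    (a tree in which every vertex has odd degree).

    `adj` maps each vertex to the set of its neighbours; the graph is
    assumed to be connected.
    """
    n = len(adj)
    m = sum(len(nb) for nb in adj.values()) // 2
    is_tree = (m == n - 1)
    # A 3-cycle: 3 vertices and 3 edges (for a connected graph).
    if n == 3 and m == 3:
        return False
    # An odd tree: a tree whose vertices all have odd degree.
    if is_tree and all(len(nb) % 2 == 1 for nb in adj.values()):
        return False
    return True
```

For example, a path on 3 vertices decomposes (it is itself a P_3), while a single edge is an odd tree and does not.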
13.
Reconciliation consists of mapping a gene tree T into a species tree S and explaining the incongruence between the two as evidence for duplication, loss and other events shaping the gene family represented by the leaves of T. When S is unknown, the Species Tree Inference Problem is to infer, from a set of gene trees, a species tree leading to a minimum reconciliation cost. As reconciliation is very sensitive to errors in T, gene tree correction prior to reconciliation is a fundamental task. In this paper, we investigate the complexity of four combinatorial approaches for deleting misplaced leaves from T. First, we consider two problems (Minimum Leaf Removal and Minimum Species Removal) related to the reconciliation of T with a known species tree S. In the former (respectively, the latter) we want to remove the minimum number of leaves (respectively, species) so that T is "MD-consistent" with S. Second, we consider two problems (Minimum Leaf Removal Inference and Minimum Species Removal Inference) related to species tree inference. In the former (respectively, the latter) we want to remove the minimum number of leaves (respectively, species) from T so that there exists a species tree S such that T is MD-consistent with S. We prove that Minimum Leaf Removal and Minimum Species Removal are APX-hard, even when each label has at most two occurrences in the input gene tree, and we present fixed-parameter algorithms for the two problems. We prove that Minimum Leaf Removal Inference is not only NP-hard, but also W[2]-hard and inapproximable within a factor depending on n, where n is the number of leaves in the gene tree. Finally, we show that Minimum Species Removal Inference is NP-hard and W[2]-hard when parameterized by the size of the solution, that is, the minimum number of species removals.
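The removal problems above are stated against a duplication mapping; the classical LCA reconciliation underlying such mappings is compact enough to sketch. A minimal, illustrative implementation (the tree encodings and all names are mine, not taken from the paper):

```python
def lca_reconcile(gene_tree, species_of, parent, depth):
    """Classical LCA reconciliation of a binary gene tree with a species tree.

    gene_tree : nested 2-tuples; a leaf is a gene name (str)
    species_of: gene leaf -> species-tree leaf
    parent, depth: parent map and depth map of the species tree

    Returns (mapping, duplications): each gene-tree node (tuple or leaf)
    mapped to a species-tree node, plus the set of internal gene-tree
    nodes flagged as duplications.
    """
    def lca(a, b):
        # Walk the deeper node up until the two meet.
        while a != b:
            if depth[a] < depth[b]:
                b = parent[b]
            elif depth[b] < depth[a]:
                a = parent[a]
            else:
                a, b = parent[a], parent[b]
        return a

    mapping, dups = {}, set()

    def walk(node):
        if isinstance(node, str):                  # gene leaf
            mapping[node] = species_of[node]
            return mapping[node]
        left, right = node
        m = lca(walk(left), walk(right))
        mapping[node] = m
        # Duplication: node maps to the same species node as one of its children.
        if m in (mapping[left], mapping[right]):
            dups.add(node)
        return m

    walk(gene_tree)
    return mapping, dups
```

Counting duplications (and the losses they imply) gives the reconciliation cost that the inference problems in the abstract minimize over candidate species trees.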
14.
Colin Cooper, Discrete Applied Mathematics 2009, 157(9): 2010-2014
A random recursive tree on n vertices is either a single isolated vertex (for n=1) or a vertex v_n connected to a vertex chosen uniformly at random from a random recursive tree on n−1 vertices. Such trees have been studied before [R. Smythe, H. Mahmoud, A survey of recursive trees, Theory of Probability and Mathematical Statistics 51 (1996) 1-29] as models of boolean circuits. More recently, Barabási and Albert [A. Barabási, R. Albert, Emergence of scaling in random networks, Science 286 (1999) 509-512] have used modifications of such models to model the web and other "power-law" networks. A minimum (cardinality) dominating set in a tree can be found in linear time using the algorithm of Cockayne et al. [E. Cockayne, S. Goodman, S. Hedetniemi, A linear algorithm for the domination number of a tree, Information Processing Letters 4 (1975) 41-44]. We prove that there exists a constant d≈0.3745… such that the size of a minimum dominating set in a random recursive tree on n vertices is dn+o(n) with probability approaching one as n tends to infinity. The result is obtained by analysing the algorithm of Cockayne, Goodman and Hedetniemi.
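Both ingredients of the result, the random recursive tree and a linear-time tree-domination greedy, are short enough to sketch. The greedy below follows the spirit of the Cockayne-Goodman-Hedetniemi algorithm (the formulation, processing vertices from the deepest up, is my own):

```python
import random

def random_recursive_tree(n, seed=None):
    """Parent list of a random recursive tree: vertex i (i >= 1)
    attaches to a uniformly random earlier vertex."""
    rng = random.Random(seed)
    return [None] + [rng.randrange(i) for i in range(1, n)]

def min_dominating_set(parent):
    """Minimum dominating set of a rooted tree: scan vertices from the
    deepest up; whenever a vertex is still undominated, put its parent
    (or the root itself) into the set."""
    n = len(parent)
    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[parent[v]].append(v)
    # BFS order from the root (appending while iterating is deliberate).
    order = [0]
    for v in order:
        order.extend(children[v])
    dominated = [False] * n
    dom_set = set()
    for v in reversed(order):          # deepest vertices first
        if not dominated[v]:
            u = parent[v] if parent[v] is not None else v
            if u not in dom_set:
                dom_set.add(u)
                dominated[u] = True
                for w in children[u]:
                    dominated[w] = True
                if parent[u] is not None:
                    dominated[parent[u]] = True
    return dom_set
```

Running it on large random recursive trees gives a Monte Carlo estimate of the paper's constant: the ratio `len(min_dominating_set(tree)) / n` concentrates near 0.3745 as n grows.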
17.
Linear programming duality is well understood, and the reduced cost of a column is frequently used in various algorithms. For integer programs, on the other hand, it is not clear how to define a dual function, even though subadditive duality theory was developed long ago. In this work we propose a family of computationally tractable subadditive dual functions for integer programs. We develop a solution methodology that computes an optimal primal solution and an optimal subadditive dual function. We present computational experiments showing that the new algorithm is tractable.
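For the LP side mentioned above, the reduced cost of column j is c_j − yᵀA_j for dual values y; a minimal sketch of that computation (the names are illustrative, and nothing here reproduces the paper's subadditive construction):

```python
def reduced_costs(c, A, y):
    """LP reduced costs c_j - y^T A_j for each column j.

    c : objective coefficients, length n
    A : constraint matrix as a list of rows
    y : dual values, one per row
    """
    n = len(c)
    return [c[j] - sum(y[i] * A[i][j] for i in range(len(A)))
            for j in range(n)]
```

At an optimal basis of a minimization LP, every reduced cost is nonnegative; the subadditive dual functions proposed in the paper play the analogous certifying role for integer programs.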
18.
Normal surface theory is used to study Dehn fillings of a knot-manifold. We use the fact that any triangulation of a knot-manifold may be modified to a triangulation having just one vertex in the boundary. In this situation, it is shown that there is a finite computable set of slopes on the boundary of the knot-manifold, which come from boundary slopes of normal or almost normal surfaces. This is combined with existence theorems for normal and almost normal surfaces to construct algorithms that determine precisely those manifolds obtained by Dehn filling of a given knot-manifold that: (1) are reducible, (2) contain two-sided incompressible surfaces, (3) are Haken, (4) fiber over S^1, (5) are the 3-sphere, and (6) are a lens space. Each of these algorithms is a finite computation. Moreover, in the case of essential surfaces, we show that the topology of each filled manifold is strongly reflected in the combinatorial properties of a triangulation of the knot-manifold with just one vertex in the boundary. If a filled manifold contains an essential surface, then the knot-manifold contains an essential normal vertex solution which caps off to an essential surface of the same type in the filled manifold. (Normal vertex solutions are the premier class of normal surfaces and are computable.)
19.
B. S. Goh 《Journal of Optimization Theory and Applications》1997,92(3):581-604
Existing algorithms for solving unconstrained optimization problems are generally optimal only in the short term. It is desirable to have algorithms that are long-term optimal. To achieve this, the problem of computing the minimum point of an unconstrained function is formulated as a sequence of optimal control problems. Some qualitative results are obtained from the optimal control analysis. These qualitative results are then used to construct a theoretical iterative method and a new continuous-time method for computing the minimum point of a nonlinear unconstrained function. New iterative algorithms that approximate the theoretical iterative method and the proposed continuous-time method are then established. For convergence analysis, it is useful to note that the numerical solution of an unconstrained optimization problem is none other than an inverse Lyapunov function problem. Convergence conditions for the proposed continuous-time method and iterative algorithms are established by using the Lyapunov function theorem.
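The paper's own continuous-time method is not reproduced in the abstract. As an illustration of the general idea only, the simplest continuous-time minimization scheme is the gradient flow x' = -grad f(x), discretized here with forward Euler; f itself serves as the Lyapunov function that decreases along trajectories:

```python
import math

def gradient_flow(grad, x0, h=0.01, steps=10000, tol=1e-8):
    """Forward-Euler discretisation of the gradient flow x' = -grad f(x).

    grad : callable returning the gradient of f at a point (list)
    x0   : starting point
    h    : step size; steps: iteration cap; tol: gradient-norm stop.
    """
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        # Stop once the gradient (hence the flow velocity) is negligible.
        if math.sqrt(sum(gi * gi for gi in g)) < tol:
            break
        x = [xi - h * gi for xi, gi in zip(x, g)]
    return x
```

Along the continuous flow, df/dt = -|grad f|^2 <= 0, which is exactly the Lyapunov-decrease argument the abstract appeals to for convergence.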
20.
Multiplication algorithms in primary school are still frequently introduced with little attention to meaning. We present a case study of a third-grade class that engaged in comparing two algorithms and discussing "why they both work". The objectives of the didactical intervention were to foster students' development of mathematical meanings concerning multiplication algorithms, and their development of an attitude of judging and comparing the value and efficiency of different algorithms. The underlying hypotheses were that it is possible to promote the simultaneous unfolding of the semiotic potential of the two algorithms, considered as cultural artifacts, with respect to the objectives of the didactical intervention, and to establish a fruitful synergy between the two algorithms. As a result, this study sheds light on the new theoretical construct of "bridging sign", illuminating students' meaning-making processes involving more than one artifact, and it provides important insight into the actual unfolding of the hypothesized potential of the algorithms.
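The case study does not say which two algorithms the class compared. As an illustrative stand-in, here are two common school algorithms, the standard column method and the partial-products ("area") method, checked to agree, mirroring the "why they both work" discussion:

```python
def column_multiply(a, b):
    """Standard column algorithm: multiply a by each digit of b,
    shifted by its place value, and add the rows."""
    total, shift = 0, 0
    while b > 0:
        digit = b % 10
        total += a * digit * (10 ** shift)
        b //= 10
        shift += 1
    return total

def partial_products(a, b):
    """'Area' method: split both factors into place-value parts
    and sum all pairwise products."""
    def parts(n):
        p, place = [], 1
        while n > 0:
            if n % 10:
                p.append((n % 10) * place)
            n //= 10
            place *= 10
        return p
    return sum(x * y for x in parts(a) for y in parts(b))
```

Both are just different groupings of the same distributive-law expansion, which is why they always produce the same product.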