Search results: 91 records found.
21.
Latin hypercube sampling is often used to estimate the distribution function of a complicated function of many random variables. In so doing, it is typically necessary to choose a permutation matrix that minimizes the correlation among the cells in the hypercube layout. This problem can be formulated as a generalized, multi-dimensional assignment problem. For the two-dimensional case, we provide a polynomial algorithm. For higher dimensions, we offer effective heuristic and bounding procedures. Supported in part by a grant from the National Institute of Standards and Technology (60NANB9D-0974). Supported in part by grants from the Office of Naval Research (N00014-90-J-1324) and the Air Force Office of Scientific Research (F49 620-90-C-0022). Research partially performed while visiting the Department of Mathematics, Brunel University, Uxbridge, England.
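The cell structure the abstract refers to is easiest to see in plain Latin hypercube sampling, where each of the d dimensions is cut into n equal strata and each stratum receives exactly one point. A minimal NumPy sketch of basic LHS only (the function name `latin_hypercube` is ours; the correlation-minimizing choice of permutations studied in the paper is not attempted here):

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """Draw n points in [0,1)^d with one point per stratum in each dimension."""
    rng = np.random.default_rng(rng)
    u = rng.random((n, d))          # jitter within each cell
    samples = np.empty((n, d))
    for j in range(d):
        perm = rng.permutation(n)   # assign one stratum per point in dimension j
        samples[:, j] = (perm + u[:, j]) / n
    return samples

pts = latin_hypercube(100, 3, rng=0)
# In every dimension, exactly one point falls in each stratum [k/n, (k+1)/n).
```

Projecting onto any single coordinate thus yields a perfectly stratified sample, which is what drives the variance reduction.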
22.
Speciation calculations are often the base upon which further and more important conclusions are drawn, e.g., solubilities and sorption estimates used for retention of hazardous materials. Since speciation calculations are based on experimentally determined stability constants of the relevant chemical reactions, the measurement and experimental uncertainty in these constants will affect the reliability of the simulation output. The present knowledge of the thermodynamic data relevant for predicting the behaviour of a complex chemical system is quite heterogeneous. Predicting the impact of these uncertainties on the reliability of a simulation output requires sophisticated modelling codes. In this paper, we present a computer program, LJUNGSKILE, which utilises the thermodynamic equilibrium code PHREEQC to statistically calculate uncertainties in speciation based on uncertainties in stability constants. A short example is included.
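The underlying idea can be sketched without PHREEQC: sample log K from an assumed error distribution and push each draw through the speciation calculation. Below is a toy 1:1 complexation with invented values (log K = 4.0 ± 0.2, fixed free-ligand activity); it is illustrative only and is not the LJUNGSKILE/PHREEQC workflow itself:

```python
import numpy as np

# Hypothetical 1:1 complexation M + L <-> ML with assumed log10 K = 4.0 +/- 0.2.
rng = np.random.default_rng(42)
logK = rng.normal(loc=4.0, scale=0.2, size=10_000)  # sampled stability constants
L_free = 1e-4                                       # fixed free-ligand activity (mol/L)
K = 10.0 ** logK
frac_ML = K * L_free / (1.0 + K * L_free)           # fraction of M bound as ML

print(f"complexed fraction: {frac_ML.mean():.3f} +/- {frac_ML.std():.3f}")
```

The spread of `frac_ML` is the propagated speciation uncertainty; a real study would let the equilibrium solver handle competing reactions and mass balances.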
23.
This paper proposes, for the first time, HHC, a hierarchical structure model of the n-dimensional hypercube, and discusses in detail the distribution of nodes in this structure and the connections among them. Using HHC, it then studies an optimal diagnosis algorithm for the hypercube under the asymmetric comparison model, maximal independent sets, and related problems.
24.
Let FFv be the set of faulty nodes in an n-dimensional folded hypercube FQn with |FFv| ≤ n − 2. In this paper, we show that if n ≥ 3, then every edge of FQn − FFv lies on a fault-free cycle of every even length from 4 to 2^n − 2|FFv|, and if n ≥ 2 and n is even, then every edge of FQn − FFv lies on a fault-free cycle of every odd length from n + 1 to 2^n − 2|FFv| − 1.
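For tiny n the claim can be checked directly. The sketch below builds FQ_3 (hypercube edges plus complement edges), marks one faulty node (so |FFv| = 1 ≤ n − 2), and searches exhaustively for fault-free cycles through a given edge; the helper names are ours, and the DFS is only feasible for very small n:

```python
def folded_hypercube(n):
    """Adjacency of FQ_n: ordinary hypercube edges plus complement edges."""
    mask = (1 << n) - 1
    adj = {v: set() for v in range(1 << n)}
    for v in adj:
        for i in range(n):
            adj[v].add(v ^ (1 << i))   # hypercube edge in dimension i
        adj[v].add(v ^ mask)           # complementary edge
    return adj

def cycle_through_edge(adj, u, v, length, faulty=frozenset()):
    """Is there a fault-free cycle of exactly `length` edges containing edge (u, v)?"""
    def dfs(cur, remaining, seen):
        if remaining == 0:
            return cur == u
        return any(dfs(w, remaining - 1, seen | {w})
                   for w in adj[cur]
                   if w not in faulty
                   and ((w == u and remaining == 1) or w not in seen))
    return dfs(v, length - 1, {u, v})

adj = folded_hypercube(3)   # FQ_3 on 8 nodes
faulty = {5}                # |FFv| = 1 <= n - 2
# Even lengths from 4 to 2^3 - 2|FFv| = 6 should all be achievable through edge (0, 1).
print([L for L in (4, 6) if cycle_through_edge(adj, 0, 1, L, faulty)])
```

Since n = 3 is odd, FQ_3 is bipartite, so no odd cycle can appear, matching the parity condition in the abstract.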
25.
《Random Structures and Algorithms》2018,52(1):158-178
A paradigm that was successfully applied in the study of both pure and algorithmic problems in graph theory can be colloquially summarized as stating that any graph is close to being the disjoint union of expanders. Our goal in this paper is to show that in several of the instantiations of the above approach, the quantitative bounds that were obtained are essentially best possible. Three examples of our results are the following:
- A classical result of Lipton, Rose and Tarjan from 1979 states that if F is a hereditary family of graphs and every graph in F has a suitably small vertex separator, then every graph in F has O(n) edges. We construct a hereditary family of graphs with vertex separators of the same size such that not all graphs in the family have O(n) edges.
- Trevisan and Arora-Barak-Steurer have recently shown that given a graph G, one can remove only 1% of its edges to obtain a graph in which each connected component has good expansion properties. We show that in both of these decomposition results, the expansion properties they guarantee are essentially best possible, even when one is allowed to remove 99% of G's edges.
- Sudakov and the second author have recently shown that every graph with average degree d contains an n-vertex subgraph with nearly as large an average degree and with nontrivial vertex expansion. We show that one cannot guarantee a better vertex expansion even if allowing the average degree to be O(1).
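For intuition about the quantity involved, the vertex expansion of a tiny graph can be computed by brute force over all small vertex subsets. This sketch uses one common convention, min over nonempty S with |S| ≤ n/2 of |N(S) \ S| / |S|; it is exponential-time and purely illustrative, unrelated to the extremal constructions in the paper:

```python
from itertools import combinations

def vertex_expansion(adj):
    """Brute-force min of |N(S) \\ S| / |S| over nonempty S with |S| <= n/2."""
    nodes = list(adj)
    best = float("inf")
    for k in range(1, len(nodes) // 2 + 1):
        for S in combinations(nodes, k):
            boundary = set().union(*(adj[v] for v in S)) - set(S)
            best = min(best, len(boundary) / k)
    return best

# The 4-cycle: the worst set is any pair of vertices, giving expansion 1.0.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(vertex_expansion(c4))
```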
26.
Güzin Bayraksan 《Operations Research Letters》2018,46(2):173-178
The averaged two-replication procedure assesses the quality of a candidate solution to a stochastic program by forming point and confidence interval estimators on its optimality gap. We present an improved averaged two-replication procedure that uses Latin hypercube sampling to form confidence intervals on the optimality gap. This new procedure produces tighter and less variable interval widths by reducing the sampling error, and it also improves an earlier procedure's asymptotic coverage probability bound.
27.
Abdelhafid Berrachedi, Ivan Havel, Henry Martyn Mulder 《Czechoslovak Mathematical Journal》2003,53(2):295-309
The main subject of our study are spherical (weakly spherical) graphs, i.e. connected graphs fulfilling the condition that in each interval, each vertex has exactly one (at least one, respectively) antipodal vertex. Our analysis concerns properties of these graphs, especially in connection with convexity and with hypercube graphs. We deal, e.g., with the problem of under what conditions all intervals of a spherical graph induce hypercubes, and we find a new characterization of hypercubes: G is a hypercube if and only if G is spherical and bipartite.
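The characterization can be probed by brute force on small graphs. The sketch below takes "antipodal to x in the interval I(u, v)" to mean a vertex of the interval at distance d(u, v) from x; that is one plausible reading, and the paper's precise definition may differ, so treat this purely as an illustration:

```python
from collections import deque

def distances(adj):
    """All-pairs shortest-path distances by BFS (graph assumed connected)."""
    d = {}
    for s in adj:
        d[s] = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in d[s]:
                    d[s][w] = d[s][u] + 1
                    q.append(w)
    return d

def is_spherical(adj):
    """In every interval I(u, v), every vertex must have exactly one vertex
    of the interval at distance d(u, v) (our reading of 'antipodal')."""
    d = distances(adj)
    for u in adj:
        for v in adj:
            interval = [w for w in adj if d[u][w] + d[w][v] == d[u][v]]
            for x in interval:
                if sum(1 for y in interval if d[x][y] == d[u][v]) != 1:
                    return False
    return True

def hypercube(n):
    return {v: {v ^ (1 << i) for i in range(n)} for v in range(1 << n)}

cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
# Q_3 passes; C_6 is bipartite yet fails sphericity, consistent with the
# characterization (spherical + bipartite <=> hypercube).
print(is_spherical(hypercube(3)), is_spherical(cycle6))
```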
28.
The hypercube network is currently the most widely used topology in supercomputer processor architectures. The Möbius cube is a variant of the hypercube that has been shown to possess topological properties superior to the hypercube in some respects. This paper presents some important topological properties of the recursive structure of the n-dimensional Möbius cube.
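The Möbius cube differs from Q_n only in how the dimension-i neighbor is formed: bit i alone is flipped when bit i+1 is 0, while bits i down to 0 are all flipped when bit i+1 is 1, with the variant (0- or 1-Möbius cube) playing the role of the nonexistent bit n. Indexing conventions vary across papers, so the sketch below is illustrative:

```python
def mobius_cube(n, variant=0):
    """Adjacency of the n-dimensional 0- or 1-Mobius cube."""
    adj = {v: set() for v in range(1 << n)}
    for v in adj:
        for i in range(n):
            # Bit i+1 decides the connection rule; `variant` stands in at i = n-1.
            higher = variant if i == n - 1 else (v >> (i + 1)) & 1
            mask = (1 << i) if higher == 0 else (1 << (i + 1)) - 1
            adj[v].add(v ^ mask)
    return adj

adj = mobius_cube(3, variant=0)
# Like Q_3, MQ_3 is 3-regular on 8 nodes; Mobius cubes keep the hypercube's
# degree while achieving a smaller diameter (roughly half) in general.
print(all(len(nbrs) == 3 for nbrs in adj.values()))
```

The per-dimension rule is an involution (applying it twice returns the original node), so the edge relation is symmetric and the graph is n-regular.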
29.
Decomposition algorithms for block-angular linear programs give rise to a natural, coarse-grained parallelism that can be exploited by processing the subproblems concurrently within a distributed-memory environment. The parallel efficiency of the distributed approach, however, is critically dependent on the duration of the inherently serial master phase relative to that of the bottleneck subproblem. This paper investigates strategies for improving efficiency in distributed Dantzig-Wolfe decomposition by better balancing the load between the master and subproblem processors. We report computational experience on an Intel iPSC/2 hypercube multiprocessor with test problems having dimensions up to about 30 000 rows, 87 000 columns, and 200 coupling constraints. This paper is dedicated to Phil Wolfe on the occasion of his 65th birthday.
30.
Art B. Owen 《Journal of Complexity》1998,14(4):466-489
Hybrids of equidistribution and Monte Carlo methods of integration can achieve the superior accuracy of the former while allowing the simple error estimation methods of the latter. In particular, randomized (0, m, s)-nets in base b produce unbiased estimates of the integral, have a variance that tends to zero faster than 1/n for any square integrable integrand, and have a variance that for finite n is never more than e ≈ 2.718 times as large as the Monte Carlo variance. Bounds smaller than e are known for special cases. Some very important (t, m, s)-nets have t > 0. The widely used Sobol' sequences are of this form, as are some recent and very promising nets due to Niederreiter and Xing. Much less is known about randomized versions of these nets, especially in s > 1 dimensions. This paper shows that scrambled (t, m, s)-nets enjoy the same properties as scrambled (0, m, s)-nets, except that the sampling variance is guaranteed only to be below b^t [(b+1)/(b−1)]^s times the Monte Carlo variance for a least-favorable integrand and finite n.
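Owen's nested uniform scrambling is intricate, but a much simpler randomization with the same flavor, a random digital (XOR) shift of the base-2 van der Corput sequence, already yields unbiased estimates while keeping the low-discrepancy structure. A one-dimensional sketch (a stand-in, not the scrambling whose variance guarantees the paper studies):

```python
import numpy as np

def van_der_corput(n, bits=32):
    """First n points of the base-2 van der Corput sequence."""
    idx = np.arange(n, dtype=np.uint64)
    pts = np.zeros(n)
    for b in range(bits):
        pts += ((idx >> np.uint64(b)) & np.uint64(1)) * 2.0 ** -(b + 1)
    return pts

def digital_shift(points, rng, bits=32):
    """XOR one random digit vector into every point (random digital shift)."""
    shift = np.uint64(rng.integers(0, 2**bits))
    ints = (points * 2.0**bits).astype(np.uint64) ^ shift
    return ints / 2.0**bits

rng = np.random.default_rng(7)
f = lambda u: u * u          # integrand on [0,1) with known integral 1/3
reps = [f(digital_shift(van_der_corput(256), rng)).mean() for _ in range(50)]
print(f"estimate {np.mean(reps):.4f}, replicate std {np.std(reps):.2e}")
```

Because each shifted point is (up to 2^-32 discretization) uniform on [0, 1), every replicate is an unbiased estimate, and the spread across replicates gives the simple error estimate the abstract alludes to.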