Full-text access type
Paid full text | 174 articles |
Free | 0 articles |
Subject categories
Chemistry | 51 articles |
Crystallography | 2 articles |
Mechanics | 5 articles |
Mathematics | 24 articles |
Physics | 92 articles |
Publication year
2022 | 1 article |
2020 | 1 article |
2019 | 3 articles |
2018 | 2 articles |
2017 | 2 articles |
2016 | 1 article |
2015 | 1 article |
2014 | 1 article |
2013 | 15 articles |
2012 | 7 articles |
2011 | 5 articles |
2010 | 4 articles |
2009 | 4 articles |
2008 | 9 articles |
2007 | 7 articles |
2006 | 9 articles |
2005 | 7 articles |
2004 | 7 articles |
2003 | 6 articles |
2002 | 3 articles |
2001 | 3 articles |
2000 | 3 articles |
1999 | 1 article |
1998 | 2 articles |
1997 | 3 articles |
1996 | 3 articles |
1995 | 4 articles |
1994 | 3 articles |
1993 | 3 articles |
1992 | 7 articles |
1991 | 4 articles |
1990 | 2 articles |
1989 | 3 articles |
1988 | 1 article |
1987 | 4 articles |
1986 | 3 articles |
1985 | 7 articles |
1984 | 3 articles |
1983 | 3 articles |
1982 | 1 article |
1981 | 2 articles |
1980 | 2 articles |
1978 | 1 article |
1976 | 1 article |
1975 | 2 articles |
1974 | 3 articles |
1973 | 3 articles |
1969 | 1 article |
1954 | 1 article |
Sorted by: 174 results found; search took 46 ms
1.
William Evans David Kirkpatrick 《Journal of Algorithms in Cognition, Informatics and Logic》2004,50(2):168-193
We consider the problem of restructuring an ordered binary tree T, preserving the in-order sequence of its nodes, so as to reduce its height to some target value h. Such a restructuring necessarily involves the downward displacement of some of the nodes of T. Our results, focusing both on the maximum displacement over all nodes and on the maximum displacement over leaves only, provide (i) an explicit tradeoff between the worst-case displacement and the height restriction (including a family of trees that exhibit the worst-case displacements) and (ii) efficient algorithms to achieve height-restricted restructuring while minimizing the maximum node displacement.
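The displacement being traded off here can be made concrete with a small sketch (this is not the paper's algorithm; the depth-per-in-order-position encoding and the balanced rebuild are illustrative assumptions):

```python
def balanced_depths(n):
    """Depth of each in-order position in a balanced binary tree on n nodes."""
    depth = [0] * n

    def build(lo, hi, d):
        if lo > hi:
            return
        mid = (lo + hi) // 2   # middle in-order position becomes the subtree root
        depth[mid] = d
        build(lo, mid - 1, d + 1)
        build(mid + 1, hi, d + 1)

    build(0, n - 1, 0)
    return depth

def max_downward_displacement(old_depth, new_depth):
    # A node's downward displacement is the increase in its depth (0 if it rises).
    return max(max(0, b - a) for a, b in zip(old_depth, new_depth))

n = 15
right_spine = list(range(n))   # degenerate tree: in-order position i sits at depth i
balanced = balanced_depths(n)  # same in-order sequence, height reduced to 3
print(max_downward_displacement(right_spine, balanced))  # → 3
```

Even this crude in-order-preserving rebuild shows the phenomenon the paper quantifies: shrinking the height from 14 to 3 forces some node down by 3 levels.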
2.
Lisa Higham David Kirkpatrick Karl Abrahamson Andrew Adler 《Journal of Algorithms in Cognition, Informatics and Logic》1997,23(2):291-328
Probabilistic algorithms are developed for a basic problem in distributed computation, assuming anonymous, asynchronous, unidirectional rings of processors. The problem, known as Solitude Detection, requires that a nonempty subset of the processors, called contenders, determine whether or not there is exactly one contender. Monte Carlo algorithms are developed that err with probability bounded by a specified parameter and exhibit either message or processor termination. The algorithms transmit an optimal expected number of bits, to within a constant factor. Their bit complexities display a surprisingly rich dependence on the kind of termination exhibited and on the processors' knowledge of the size of the ring. Two probabilistic tools are isolated and then combined in various ways to achieve all our algorithms.
3.
David G. Kirkpatrick 《Discrete and Computational Geometry》1988,3(1):267-280
A planar subdivision is the partition of the plane induced by an embedded planar graph. A representation of such a subdivision is ordered if, for each vertex v of the associated graph G, the (say) clockwise sequence of edges in the embedding of G incident with v appears explicitly. The worst-case complexity of establishing order in a planar subdivision, i.e., converting an unordered representation into an ordered one, is shown to be Θ(n + log ψ(G)), where n is the size (number of vertices) of the underlying graph G and ψ(G) is (essentially) the number of topologically distinct embeddings of G in the plane. This work was supported by the Natural Sciences and Engineering Research Council of Canada under Grant A3583. A preliminary version of this paper appeared in the Proceedings of the Third Annual ACM Symposium on Computational Geometry.
4.
Molecular dynamics (MD) simulations of water confined in nanospaces between layers of talc (system composition Mg₃Si₄O₁₀(OH)₂ + 2H₂O) at 300 K and pressures of approximately 0.45 GPa show the presence of a novel 2-D ice structure, and the simulation results at lower pressures provide insight into the mechanisms of its decompression melting. Talc is hydrophobic at ambient pressure and temperature, but weak hydrogen bonding between the talc surface and the water molecules plays an important role in stabilizing the hydrated structure at high pressure. The simulation results suggest that experimentally accessible elevated pressures may cause formation of a wide range of previously unknown water structures in nanoconfinement. In the talc 2-D ice, each water molecule is coordinated by six basal oxygen (Ob) atoms of one basal siloxane sheet and three water molecules. The water molecules are arranged in a buckled hexagonal array in the a-b crystallographic plane with two sublayers along [001]. Each H₂O molecule has four H-bonds, accepting one from the talc OH group and one from another water molecule and donating one to an Ob atom and one to another water molecule. In plan view, the molecules are arranged in six-member rings reflecting the substrate talc structure. Decompression melting occurs by migration of water molecules to interstitial sites in the centers of six-member rings and eventual formation of separate empty and water-filled regions.
7.
Fotouh R. Mansour Christine L. Kirkpatrick Neil D. Danielson 《Chromatographia》2013,76(11-12):603-609
An ion-exclusion chromatography (IELC) comparison between a conventional ion-exchange column and an ultra-high-performance liquid chromatography (UHPLC) dynamically surfactant-modified C18 column for the separation of an aliphatic carboxylic acid and two aromatic carboxylic acids is presented. Professional software is used to optimize the conventional IELC separation conditions for acetylsalicylic acid and its hydrolysis products: salicylic acid and acetic acid. Four different variables are simultaneously optimized: H2SO4 concentration, pH, flow rate, and sample injection volume. Thirty different runs are suggested by the software. The resolutions and the time of each run are calculated and fed back into the software to predict the optimum conditions. Derringer's desirability functions are used to evaluate the test conditions, and those with the highest desirability value are utilized to separate acetylsalicylic acid, salicylic acid, and acetic acid. These conditions include using a 0.35 mM H2SO4 (pH 3.93) eluent at a flow rate of 1 mL min⁻¹ and an injection volume of 72 μL. To decrease the run time and improve the performance, a UHPLC C18 column is used after dynamic modification with sodium dodecyl sulfate. Using pure water as a mobile phase, a shorter analysis time and better resolution are achieved. In addition, the elution order is different from the IELC method, which indicates the contribution of the reversed-phase mode to the separation mechanism.
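For readers unfamiliar with Derringer's approach, a minimal sketch of desirability scoring follows. The response values, targets, and limits below are hypothetical illustrations, not the study's data: each response is mapped onto [0, 1] and a run's overall desirability is the geometric mean of the individual scores.

```python
import math

def desirability_larger(y, worst, target, weight=1.0):
    """Derringer-type desirability for a response to maximize (e.g., resolution)."""
    if y >= target:
        return 1.0
    if y <= worst:
        return 0.0
    return ((y - worst) / (target - worst)) ** weight

def desirability_smaller(y, target, worst, weight=1.0):
    """Derringer-type desirability for a response to minimize (e.g., run time)."""
    if y <= target:
        return 1.0
    if y >= worst:
        return 0.0
    return ((worst - y) / (worst - target)) ** weight

def overall_desirability(ds):
    # Geometric mean: a single completely unacceptable response zeroes the run.
    return math.prod(ds) ** (1.0 / len(ds))

# Hypothetical run: two resolutions (target >= 2.0, unusable below 1.0)
# and a 12-min run time (target <= 8 min, unusable above 20 min).
d1 = desirability_larger(1.8, worst=1.0, target=2.0)
d2 = desirability_larger(2.2, worst=1.0, target=2.0)
d3 = desirability_smaller(12.0, target=8.0, worst=20.0)
print(round(overall_desirability([d1, d2, d3]), 3))  # → 0.811
```

Scoring each of the thirty suggested runs this way and picking the maximum is the selection step the abstract describes.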
8.
J. M. Kirkpatrick R. Venkataraman B. M. Young 《Journal of Radioanalytical and Nuclear Chemistry》2013,296(2):1005-1010
The Currie formulation for minimum detectable activity (MDA) has served for decades as the standard method for estimating radiological detection limits; it is simple and statistically defensible. It does, however, lack a means to account for the effects of systematic uncertainties. In recent years we have seen various efforts to incorporate systematic uncertainties into an MDA framework. Perhaps most notable of these is the recent ISO standard 11929 for the determination of characteristic limits in ionizing radiation measurements. This standard brings a Bayesian perspective to the problem of characteristic limits in radiation measurements that is in many ways both welcome and long overdue. In this paper, however, we note some apparent drawbacks to the ISO 11929 approach. Namely, for small values of the systematic uncertainty the correction it makes to the Currie MDA is negligible, while for large systematic uncertainties the calculated MDA values can become infinite. In between these two extremes, the user has little basis for evaluating the reliability of the result. To address these issues, we consider the problem from a new approach, developing a straightforward phenomenological statistical model of the MDA that treats systematic uncertainties explicitly. We compare predictions from our model with results of the ISO 11929 formulation as well as the traditional Currie approach. Finally, some recommendations for alternative handling of the MDA in the face of significant systematic uncertainties are presented.
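As a baseline reference, the classic Currie formulation (with no systematic-uncertainty term, i.e., the quantity the paper extends) can be sketched as follows. For a paired blank with equal count times, the detection limit in counts is L_D = k² + 2k√(2B), which for k = 1.645 reduces to the familiar 2.71 + 4.65√B. The parameter names and the example numbers are illustrative, not taken from the paper:

```python
import math

def currie_mda(background_counts, count_time_s, efficiency, emission_prob,
               k=1.645):
    """Currie MDA for a paired blank with equal sample/background count times:
    L_D = k^2 + 2k*sqrt(2B) counts, converted to activity (Bq) via detection
    efficiency, emission probability, and count time."""
    l_d = k * k + 2.0 * k * math.sqrt(2.0 * background_counts)
    return l_d / (efficiency * emission_prob * count_time_s)

# Illustrative numbers: 400 background counts in a 1 h count,
# 30 % detection efficiency, 85 % emission probability.
print(round(currie_mda(400, 3600, 0.30, 0.85), 3))  # → 0.104 (Bq)
```

Note the k = 1.645 default corresponds to 5 % false-positive and false-negative probabilities; this sketch contains exactly the statistical (counting) term whose systematic-uncertainty extension is the paper's subject.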
9.
We introduce a technique for computing approximate solutions to optimization problems. If $X$ is the set of feasible solutions, the standard goal of approximation algorithms is to compute $x \in X$ that is an $\varepsilon$-approximate solution in the following sense:
$$d(x) \leq (1+\varepsilon)\, d(x^*),$$
where $x^* \in X$ is an optimal solution, $d\colon X \rightarrow \mathbb{R}_{\geq 0}$ is the optimization function to be minimized, and $\varepsilon > 0$ is an input parameter. Our approach is first to devise algorithms that compute pseudo $\varepsilon$-approximate solutions satisfying the bound
$$d(x) \leq d(x_R^*) + \varepsilon R,$$
where $R > 0$ is a new input parameter. Here $x_R^*$ denotes an optimal solution in the space $X_R$ of $R$-constrained feasible solutions. The parameter $R$ provides a stratification of $X$ in the sense that (1) $X_R \subseteq X_{R'}$ for $R < R'$ and (2) $X_R = X$ for $R$ sufficiently large. We first describe a highly efficient scheme for converting a pseudo $\varepsilon$-approximation algorithm into a true $\varepsilon$-approximation algorithm. This scheme is useful because pseudo approximation algorithms seem to be easier to construct than $\varepsilon$-approximation algorithms. Another benefit is that our algorithm is automatically precision-sensitive. We apply our technique to two problems in robotics: (A) the Euclidean shortest path for a point robot amidst polyhedral obstacles in three dimensions (3ESP), and (B) $d_1$-optimal motion for a rod moving amidst planar obstacles (1ORM). Previously, no polynomial-time $\varepsilon$-approximation algorithm for (B) was known. For (A), our new solution is simpler than previous solutions and has an exponentially smaller complexity in terms of the input precision.
10.
It is shown that the presence of multiple time scales at a quantum critical point can lead to a breakdown of the loop expansion for critical exponents, since coefficients in the expansion diverge. Consequently, results obtained from finite-order perturbative renormalization-group treatments may not be an approximation in any sense to the true asymptotic critical behavior. This problem manifests itself as a nonrenormalizable field theory, or, equivalently, as the presence of a dangerous irrelevant variable. The quantum ferromagnetic transition in disordered metals provides an example.