Results by access type:
Subscription full text: 1289
Free: 107
Domestic free: 40
Results by subject:
Chemistry: 82
Mechanics: 37
General: 18
Mathematics: 1049
Physics: 250
Results by year:
2024: 1
2023: 15
2022: 27
2021: 72
2020: 39
2019: 28
2018: 25
2017: 24
2016: 40
2015: 33
2014: 37
2013: 70
2012: 54
2011: 73
2010: 52
2009: 100
2008: 94
2007: 90
2006: 72
2005: 51
2004: 44
2003: 45
2002: 41
2001: 38
2000: 56
1999: 32
1998: 26
1997: 32
1996: 21
1995: 11
1994: 18
1993: 13
1992: 9
1991: 7
1990: 6
1988: 5
1987: 4
1986: 9
1985: 12
1984: 7
1982: 2
1979: 1
A total of 1436 results (search time: 15 ms)
951.
Dynamic DEA (DDEA) is a mathematical programming-based technique which assesses the performance of decision making units (DMUs) in the presence of a time factor. This paper provides a new technique for reducing the computational complexity of some recently introduced DDEA models.
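As background for what a DEA assessment computes (the paper's dynamic model and its complexity-reduction technique are not reproduced here), a minimal sketch of the standard input-oriented CCR envelopment linear program, solved once per DMU; the toy data and the use of scipy.optimize.linprog are illustrative assumptions.

```python
# Minimal sketch: standard input-oriented CCR DEA efficiency scores.
# This is NOT the paper's dynamic DEA model; data and names are illustrative.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Efficiency of DMU o. X: (m inputs x n DMUs), Y: (s outputs x n DMUs)."""
    m, n = X.shape
    s, _ = Y.shape
    c = np.zeros(n + 1)
    c[0] = 1.0                                  # minimise theta
    # inputs:  sum_j lam_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # outputs: -sum_j lam_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1))
    return res.fun                              # optimal theta = efficiency score

# Toy data: 2 inputs, 1 output, 4 DMUs (columns).
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 2.0, 4.0, 3.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(4)])
```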
952.
Journal of Statistical Physics - Given an initial distribution of sand in an Abelian sandpile, what final state does it relax to after all possible avalanches have taken place? In d≥3, we...
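As background, a minimal sketch of the relaxation the abstract asks about, on a finite two-dimensional grid with the standard toppling rule and open boundary; the grid size and initial distribution are illustrative, and the paper's d ≥ 3 infinite-volume setting is not addressed.

```python
# Minimal sketch of Abelian sandpile stabilization on a finite 2D grid.
# Sites with >= 4 grains topple, sending one grain to each neighbour;
# grains falling off the boundary are lost.  Parameters are illustrative.
import numpy as np

def stabilize(grid):
    """Topple until every site holds at most 3 grains; return the final state."""
    grid = grid.copy()
    h, w = grid.shape
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            return grid
        for r, c in unstable:
            topples = grid[r, c] // 4
            grid[r, c] -= 4 * topples
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w:
                    grid[nr, nc] += topples

initial = np.full((20, 20), 5)      # uniform initial distribution of sand
final = stabilize(initial)
print(final.max(), final.sum())     # every site holds <= 3 grains after relaxation
```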
953.
The modified information criterion (MIC) is applied to detect multiple change points in a sequence of independent random variables. We find that the method is consistent in selecting the correct model, and the resulting test statistic has a simple limiting distribution. We show that the estimators for the locations of change points achieve the best convergence rate, and their limiting distribution can be expressed as a function of a random walk. A simulation is conducted to demonstrate the usefulness of this method by comparing the power of the MIC with that of the Schwarz information criterion.
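For illustration, a sketch of information-criterion change-point selection for a single change in the mean of Gaussian data, using the Schwarz (BIC) penalty that the abstract uses as a benchmark; the MIC's modified penalty is defined in the paper and is not reproduced here, and the data, model, and parameter counts below are assumptions.

```python
# Minimal sketch: single change-point detection in the mean of Gaussian data
# via the Schwarz (BIC) criterion.  The paper's MIC uses a modified penalty.
import numpy as np

def bic_change_point(x):
    """Return (split index k, criterion value); k = n means "no change" selected."""
    n = len(x)
    def neg2loglik(seg):                        # -2 * max log-likelihood, up to constants
        return len(seg) * np.log(np.var(seg) + 1e-12)
    # no-change model: 2 parameters (mean, variance)
    best_k, best_val = n, neg2loglik(x) + 2 * np.log(n)
    for k in range(2, n - 1):                   # candidate change points
        # one-change model: 2 means + 2 variances + change location (assumed count)
        val = neg2loglik(x[:k]) + neg2loglik(x[k:]) + 5 * np.log(n)
        if val < best_val:
            best_k, best_val = k, val
    return best_k, best_val

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])
print(bic_change_point(x))                      # change point detected near index 100
```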
954.
We present the PFix algorithm for the fixed point problem f(x)=x on a nonempty domain [a,b] ⊂ R^d, where d ≥ 1 and f is a Lipschitz continuous function with respect to the infinity norm, with constant q ≤ 1. The computed approximation x̂ satisfies the residual criterion ‖f(x̂)−x̂‖∞ ≤ ε, where ε > 0. In general, the algorithm requires no more than ∑_{i=1}^d s^i function component evaluations, where s ≡ max(1, log₂(‖b−a‖/ε)) + 1; this upper bound is of order (log₂(1/ε))^d as ε → 0. For the domain [0,1]^d with ε < 0.5 we prove a stronger result, namely a sharper upper bound on the number of function component evaluations expressed in terms of r ≡ log₂(1/ε), with explicit limiting behavior as r → ∞ (ε → 0) and as d → ∞. We show that when q < 1 the algorithm can also compute an approximation satisfying the absolute criterion ‖x̂−x*‖∞ ≤ ε, where x* is the unique fixed point of f. The complexity in this case resembles the complexity of the residual criterion problem, but with tolerance ε(1−q) instead of ε. We show that when q > 1 the absolute criterion problem has infinite worst-case complexity when information consists of function evaluations. Finally, we report several numerical tests in which the actual number of evaluations is usually much smaller than the upper complexity bound.
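PFix itself is not reproduced here; as a point of reference, a minimal sketch of the two stopping criteria the abstract distinguishes, using plain fixed-point iteration for a contraction (q < 1) with an illustrative map, including the relation that meeting the residual criterion with tolerance ε(1−q) implies the absolute criterion with tolerance ε.

```python
# Minimal sketch of the residual vs. absolute stopping criteria, illustrated
# with plain fixed-point iteration for a contraction (q < 1).
# This is NOT the PFix algorithm; the map f and tolerances are illustrative.
import numpy as np

def iterate_residual(f, x0, eps, max_iter=10_000):
    """Stop when the residual criterion ||f(x) - x||_inf <= eps holds."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.max(np.abs(fx - x)) <= eps:
            return x
        x = fx
    raise RuntimeError("residual criterion not reached")

# Example: f is Lipschitz in the infinity norm with constant q = 0.5 < 1.
q = 0.5
f = lambda x: q * x + 1.0                       # unique fixed point x* = 1/(1-q) = 2
eps = 1e-6
x_res = iterate_residual(f, np.zeros(1), eps)
# Since ||x - x*|| <= ||f(x) - x|| / (1 - q), a residual tolerance of eps*(1-q)
# guarantees the absolute criterion ||x - x*||_inf <= eps.
x_abs = iterate_residual(f, np.zeros(1), eps * (1 - q))
print(float(x_res[0]), float(x_abs[0]), abs(float(x_abs[0]) - 2.0) <= eps)
```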
955.
It is common sense to notice that one needs fewer digits to code numbers in ternary than in binary; new names are about log₃2 times shorter. Is this trade-off a consequence of the special coding scheme? The answer is negative. More generally, we argue that the answer to the question stated in the title, formulated to the first author by C. Rackhoff, is in fact negative. This conclusion is reached by studying the role of the alphabet size in constructing instantaneous codes for all natural numbers and in defining random strings and sequences. We show that there is no optimal instantaneous code for all positive integers, and that the binary alphabet is the worst possible. Codes over a fixed alphabet can themselves be improved indefinitely, but only “slightly”; in contrast, changing the size of the alphabet yields a significant, non-linear improvement. The key relation describing this phenomenon can be expressed in terms of Chaitin complexity: changing the size of the coding alphabet from q to Q, 2 ≤ q < Q, results in an improvement of the complexity by a factor of log q. As a consequence, a string that consistently avoids a fixed letter is not random. In binary, this corresponds to a trivial situation. In the non-binary case the distinction is relevant: more than 3·2ⁿ ternary strings of length n are not random (many of these strings are binary random). This phenomenon is even sharper for infinite sequences.
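A quick numerical check of the opening observation, comparing digit counts in base 2 and base 3; the values of n are illustrative.

```python
# Minimal sketch of the length trade-off the abstract opens with: the base-3
# representation of n is about log_3(2) ~ 0.63 times as long as the base-2 one.
import math

def digits(n, base):
    """Number of digits of n >= 1 in the given base."""
    count = 0
    while n > 0:
        n //= base
        count += 1
    return count

for n in (10**3, 10**6, 10**9):
    b2, b3 = digits(n, 2), digits(n, 3)
    print(n, b2, b3, round(b3 / b2, 3), round(math.log(2, 3), 3))
```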
956.
We examine a number of models that generate random fractals. The models are studied using the tools of computational complexity theory from the perspective of parallel computation. Diffusion-limited aggregation and several widely used algorithms for equilibrating the Ising model are shown to be highly sequential; it is unlikely they can be simulated efficiently in parallel. This is in contrast to Mandelbrot percolation, which can be simulated in constant parallel time. Our research helps shed light on the intrinsic complexity of these models relative to each other and to different growth processes that have recently been studied using complexity theory. In addition, the results may serve as a guide to simulation physics.
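A minimal sketch of Mandelbrot percolation, the model the abstract contrasts with the highly sequential ones: each level splits every surviving square into b × b subsquares and keeps each independently with probability p, so a cell's fate depends only on its own ancestors' coin flips and all cells can be decided in parallel. The values of b, p, and the depth are illustrative.

```python
# Minimal sketch of Mandelbrot percolation on [0,1]^2: at each level every
# surviving square is split into b x b subsquares, each kept independently
# with probability p.  Each final cell depends only on its own ancestors'
# coin flips, which is what makes the model embarrassingly parallel.
import numpy as np

def mandelbrot_percolation(levels, b=3, p=0.7, seed=0):
    rng = np.random.default_rng(seed)
    kept = np.ones((1, 1), dtype=bool)
    for _ in range(levels):
        coins = rng.random((kept.shape[0] * b, kept.shape[1] * b)) < p
        kept = kept.repeat(b, axis=0).repeat(b, axis=1) & coins
    return kept                          # boolean grid of side b**levels

grid = mandelbrot_percolation(levels=4)
print(grid.shape, grid.mean())           # retained fraction is roughly p**levels
```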
957.
Deterministic reduction of communication complexity using quantum entangled states   (total citations: 1; self-citations: 0; citations by others: 1)
Cao Bin (曹彬), 《量子光学学报》 (Journal of Quantum Optics), 2002, 8(3): 121-124
This paper proposes a scheme for deterministically reducing the communication complexity of a two-party system, using a pair of particles in an arbitrary entangled pure state. In this scheme, for an arbitrary two-variable Boolean function, an entangled state shared in advance by the two communicating parties reduces the communication complexity. Compared with purely classical communication, in which the two parties merely exchange classical information, the scheme lowers the communication complexity by one bit.
958.
The intriguing recent suggestion of Tegmark that the universe, contrary to all our experiences and expectations, contains only a small amount of information due to an extremely high degree of internal symmetry is critically examined. It is shown that several physical processes, notably Hawking evaporation of black holes and the non-zero decoherence-time effects described by Plaga, as well as thought experiments of Deutsch and Tegmark himself, can be construed as arguments against the low-information-universe hypothesis. Some ramifications for both quantum mechanics and cosmology are briefly discussed.
959.
The multiplicative complexity of a finite set of rational functions is the number of essential multiplications and divisions that are necessary and sufficient to compute these rational functions. We prove that the multiplicative complexity of inversion in the division algebra ℍ of Hamiltonian quaternions over the reals, that is, the multiplicative complexity of the coordinates of the inverse of a generic element of ℍ, is exactly eight. Furthermore, we show that the multiplicative complexity of the left and right division of Hamiltonian quaternions is at least eleven. Received: July 17, 2001. Final version received: October 8, 2001.
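The upper bound of eight is attained by the textbook formula x⁻¹ = conjugate(x)/‖x‖²: four multiplications for the squared norm and four divisions for the coordinates. The paper's contribution is the matching lower bound; the sketch below only illustrates the counting.

```python
# Textbook quaternion inversion x^{-1} = conj(x) / |x|^2, using exactly
# 4 multiplications (squared norm) + 4 divisions = 8 multiplicative operations,
# matching the upper bound; the paper shows that 8 is optimal.
def quaternion_inverse(a, b, c, d):
    """Inverse of the quaternion a + bi + cj + dk (assumed non-zero)."""
    n = a * a + b * b + c * c + d * d           # 4 multiplications
    return (a / n, -b / n, -c / n, -d / n)      # 4 divisions

def quaternion_mul(p, q):
    """Hamilton product, used only to verify the inverse."""
    a, b, c, d = p
    w, x, y, z = q
    return (a*w - b*x - c*y - d*z,
            a*x + b*w + c*z - d*y,
            a*y - b*z + c*w + d*x,
            a*z + b*y - c*x + d*w)

q = (1.0, 2.0, 3.0, 4.0)
print(quaternion_mul(q, quaternion_inverse(*q)))   # ~ (1, 0, 0, 0)
```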
960.
We investigate the complexity of the min-max assignment problem under a fixed number of scenarios. We prove that this problem is polynomial-time equivalent to the exact perfect matching problem in bipartite graphs, an infamous combinatorial optimization problem of unknown computational complexity.
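For concreteness, a brute-force sketch of the min-max assignment problem itself (not of the equivalence proof): given K cost scenarios over an n × n instance, pick the permutation whose worst scenario cost is smallest. The instance data are illustrative.

```python
# Minimal sketch of the min-max assignment problem with a fixed number of
# scenarios: choose one assignment (permutation) minimising its worst cost
# over all scenarios.  Brute force over permutations, for illustration only.
from itertools import permutations
import numpy as np

def min_max_assignment(costs):
    """costs: shape (K, n, n); returns (best permutation, its worst-scenario cost)."""
    K, n, _ = costs.shape
    rows = np.arange(n)
    best_perm, best_val = None, float("inf")
    for perm in permutations(range(n)):
        cols = np.array(perm)
        worst = max(costs[k, rows, cols].sum() for k in range(K))
        if worst < best_val:
            best_perm, best_val = perm, worst
    return best_perm, best_val

rng = np.random.default_rng(1)
costs = rng.integers(1, 10, size=(2, 4, 4))     # K = 2 scenarios, n = 4
print(min_max_assignment(costs))
```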