71.
We consider a well-known distributed colouring game played on a simple connected graph: initially, each vertex is coloured black or white; at each round, every vertex simultaneously recolours itself with the colour held by the simple (strong) majority of its neighbours. A set of vertices M is said to be a dynamo if, starting the game with only the vertices of M coloured black, the computation eventually reaches an all-black configuration. The importance of this game follows from the fact that it models the spread of faults in point-to-point systems with majority-based voting; in particular, dynamos correspond to those sets of initial failures that lead the entire system to fail. Investigations of dynamos have been extensive but restricted to establishing tight bounds on the size (i.e., how small a dynamic monopoly might be). In this paper we begin to study dynamos systematically with respect to both size and time (i.e., how many rounds are needed to reach an all-black configuration) in various models and topologies. We derive tight trade-offs between size and time for a number of regular graphs, including rings, complete d-ary trees, tori, wrapped butterflies, cube-connected cycles and hypercubes. In addition, we determine optimal size bounds of irreversible dynamos for butterflies and shuffle-exchange graphs under simple majority, and for De Bruijn graphs under strong majority rules. Finally, we make some observations concerning irreversible versus reversible monotone models and slow complete computations from minimal dynamos.
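A minimal sketch (not from the paper) of the recolouring game just described, restricted to a ring. The function name and the reading of "simple majority" as "at least half of the neighbours" are assumptions for illustration; with degree 2 this makes a vertex black whenever at least one neighbour is black.

```python
def majority_game_on_ring(n, black, max_rounds=100):
    """Synchronous majority recolouring game on an n-cycle.

    black: set of initially black vertices.  Each round, a vertex becomes
    black iff at least half of its two neighbours are black (an assumed
    reading of "simple majority").  Returns True if an all-black
    configuration is reached, i.e. if `black` behaves as a dynamo here.
    """
    colour = [v in black for v in range(n)]  # True = black
    for _ in range(max_rounds):
        # at least 1 of 2 neighbours black  <=>  logical OR of the two
        nxt = [colour[(v - 1) % n] or colour[(v + 1) % n] for v in range(n)]
        if nxt == colour:  # fixed point reached
            break
        colour = nxt
    return all(colour)
```

Under this convention two adjacent black vertices spread to the whole ring (one vertex per side per round), while a single black vertex on an even cycle merely oscillates and never stabilizes all-black.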
72.
In this paper, matching upper and lower bounds are proven for broadcast on general-purpose parallel computation models that exploit network locality. These models try to capture both the general-purpose properties of models like the PRAM or BSP on the one hand, and the network locality of special-purpose models like meshes, hypercubes, etc., on the other. They do so by charging a cost l(|i − j|) for a communication between processors i and j, where l is a suitably chosen latency function. An upper bound T(p) = Σ_{i=0}^{log log p} 2^i · l(p^{1/2^i}) on the runtime of a broadcast on a p-processor H-PRAM is given, for an arbitrary latency function l(k). The main contribution of the paper is a matching lower bound, holding for all latency functions in the range from l(k) = Ω(log k / log log k) to l(k) = O(log² k). This is not a severe restriction, since for latency functions l(k) = O(log k / log^{1+ε} log k) with arbitrary ε > 0 the runtime of the algorithm matches the trivial lower bound Ω(log p), and for l(k) = Θ(log^{1+ε} k) or l(k) = Θ(k) the runtime matches the other trivial lower bound Ω(l(p)). Both the upper and the lower bounds apply to other parallel locality models such as the Y-PRAM, D-BSP and E-BSP as well.
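The upper-bound sum T(p) = Σ_{i=0}^{log log p} 2^i · l(p^{1/2^i}) is easy to evaluate numerically. A small sketch (illustrative only; the latency function is passed in as a parameter):

```python
import math

def broadcast_upper_bound(p, latency):
    """Evaluate T(p) = sum_{i=0}^{log log p} 2^i * latency(p^(1/2^i))."""
    top = int(math.log2(math.log2(p)))  # upper summation limit, log log p
    return sum(2 ** i * latency(p ** (1.0 / 2 ** i)) for i in range(top + 1))
```

For example, with l(k) = log2 k each term equals exactly log2 p (the 2^i factor cancels the halved exponent), so T(p) = (log log p + 1) · log p, i.e. about 80 for p = 2^16.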
73.
This study proposes a new forcing scheme suitable for massively parallel finite-difference simulations of stationary isotropic turbulence. The proposed scheme, named reduced-communication forcing (RCF), is based on the same idea as the conventional large-scale forcing scheme but requires much less data communication, leading to high parallel efficiency. It has been confirmed that the RCF scheme works intrinsically in the same manner as the conventional large-scale forcing scheme. Comparisons reveal that a fourth-order finite-difference model combined with the RCF scheme (FDM-RCF) is as good as a spectral model, while requiring lower computational costs. For the range 80 < Re_λ < 540, where Re_λ is the Taylor-microscale Reynolds number, large computations using the FDM-RCF scheme show that the Reynolds-number dependences of the skewness and flatness factors follow power laws similar to those found in previous studies.
74.
A review is given of some recent developments in the differential geometry of quantum computation, in which the quantum evolution is described by the special unitary unimodular group SU(2^n). Using the Lie algebra su(2^n), detailed derivations are given of a useful Riemannian geometry of SU(2^n), including the connection and the geodesic equation for minimal-complexity quantum computations.
75.
For the purpose of further saving computing time, an improved NSFOT algorithm is presented in this paper: by introducing simple pre-processing and post-processing operations, Haar and Walsh transforms are performed conveniently on a multiprocessor. As a result, one large problem is divided into several small sub-problems, so that the load on every processor not only decreases greatly but also becomes nearly uniform, saving much time. Both theoretical analysis and experimental results demonstrate the effectiveness of the proposed approach.
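As an illustration of the divide-into-sub-problems structure being exploited (not the paper's NSFOT algorithm itself), here is the in-place fast Walsh–Hadamard transform: after the first butterfly stage, the two halves of the array can be transformed independently, which is exactly what allows them to be handed to separate processors.

```python
def fwht(a):
    """In-place fast Walsh-Hadamard transform; len(a) must be a power of 2.

    Each stage combines pairs (a[j], a[j+h]) into their sum and difference;
    stages for different h are what decouple the problem into independent
    half-size sub-problems.
    """
    n = len(a)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a
```

Applying the transform twice recovers the input scaled by n, a convenient correctness check.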
76.
Understanding the nature of complex turbulent flows remains one of the most challenging problems in classical physics. Significant progress has been made recently using high-performance computing, and computational fluid dynamics is now a credible alternative to experiments and theories for understanding the rich physics of turbulence. In this paper, we present an efficient numerical tool called Incompact3d that can be coupled with massively parallel platforms in order to simulate turbulence problems with as much complexity as possible, using up to O(10^5) computational cores by means of direct numerical simulation (DNS). DNS is conceptually the simplest approach to investigating turbulence, featuring the highest temporal and spatial accuracy, but it requires extraordinarily powerful resources. This paper is an extension of Laizet et al. (Comput. Fluids 2010; 39(3):471–484), where the authors proposed a strategy to run DNS with up to 1024 computational cores. Copyright © 2010 John Wiley & Sons, Ltd.
77.
Monte Carlo as well as quasi-Monte Carlo methods are used to generate only a few interfacial values in two-dimensional domains where boundary-value elliptic problems are formulated. This allows for a decomposition of the domain. A continuous approximation of the solution is obtained by interpolating on such interfaces, and then used as boundary data to split the original problem into fully decoupled subproblems. The numerical treatment can then be continued, implementing any deterministic algorithm on each subdomain. Both the Monte Carlo (or quasi-Monte Carlo) simulations and the domain decomposition strategy allow for exploiting parallel architectures. Scalability and natural fault tolerance are peculiarities of the present algorithm. Examples concern Helmholtz and Poisson equations, whose probabilistic treatment presents additional complications with respect to the case of homogeneous elliptic problems without any potential term or source.
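A minimal sketch of the probabilistic representation behind such methods (illustrative only; the grid, function names and parameters are assumptions, and the paper's quasi-Monte Carlo and Helmholtz/Poisson extensions are not shown). For Laplace's equation, the solution at a point equals the expected boundary value at the exit point of a simple random walk, so a few interface values can be estimated independently and then handed to deterministic sub-domain solvers:

```python
import random

def laplace_point_estimate(start, n, g, walks=20000, seed=0):
    """Estimate the harmonic function u at grid point `start` of an n x n
    grid with boundary data g((i, j)): u(start) = E[g(walk exit point)],
    averaged over `walks` independent simple random walks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walks):
        i, j = start
        while 0 < i < n - 1 and 0 < j < n - 1:  # walk until the boundary
            di, dj = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            i, j = i + di, j + dj
        total += g((i, j))
    return total / walks
```

With linear boundary data g((i, j)) = i/(n−1), the discrete harmonic solution is exactly linear, so the estimate at the centre of the grid should be close to 0.5; each point's estimate uses only local data, which is the source of the scalability and fault tolerance mentioned above.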
78.
Four-operand parallel optical computing using shadow-casting technique
The optical shadow-casting (OSC) technique has shown excellent potential for optically implementing two-operand parallel logic gates and array-logic operations: the 16 logic functions of two binary patterns (variables) are optically realizable in parallel by properly configuring a 2×2 array of light-emitting diodes. In this paper, we propose an enhanced OSC technique for implementing four-operand parallel logic gates. The proposed system is capable of performing all 2^16 logic functions by simply programming the switching mode of a 4×4 array of light-emitting diodes in the input plane. This leads to an efficient and compact realization scheme compared with the conventional two-operand OSC system.
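A digital analogue of the counting above (a hedged sketch; `make_gate` and the mask encoding are invented for illustration, not the optical implementation): a four-operand logic function is a 16-row truth table, so it can be encoded as a 16-bit mask, one bit per input combination, giving exactly 2^16 programmable functions, just as programming the switching mode of the 4×4 LED array selects one of them.

```python
def make_gate(mask):
    """Build a 4-operand gate from a 16-bit truth-table mask.

    Bit k of `mask` is the gate's output for the input combination whose
    binary encoding (a b c d) equals k.
    """
    def gate(a, b, c, d):
        k = (a << 3) | (b << 2) | (c << 1) | d  # row index of the truth table
        return (mask >> k) & 1
    return gate

# Four-input AND: only combination k = 15 (all inputs 1) outputs 1.
and4 = make_gate(1 << 15)
# Four-input XOR (parity): bits set where the index has odd popcount.
xor4 = make_gate(sum(1 << k for k in range(16) if bin(k).count("1") % 2))
```

The two-operand case works the same way with a 4-bit mask and 2^4 = 16 functions, matching the count quoted for the conventional OSC system.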
79.
The “end of Moore’s Law” has recently become a topic of discussion. Keeping the signal-to-noise ratio (SNR) at the same level in the future will surely increase the energy density of smaller-sized transistors. Lowering the operating voltage would prevent this, but the SNR would inevitably degrade. Meanwhile, biological systems such as cells and brains are robust against noise in their information processing, in spite of the strong influence of stochastic thermal noise. Inspired by the information processing of organisms, we propose a stochastic computing model that acquires information from noisy signals. Our model is based on vector matching, in which the similarities between the input vector carrying external noisy signals and reference vectors prepared in advance as memorized templates are evaluated in a stochastic manner. The model is robust against the noise strength, and its performance is improved by the addition of noise of an appropriate strength, a phenomenon similar to stochastic resonance. Because the proposed stochastic vector matching is robust against noise, it is a candidate for noisy information processing driven by stochastically operating, low-energy-consumption devices in the future. Moreover, stochastic vector matching may be applied to memory-based information processing like that of the brain.
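A toy sketch in the spirit of the vector matching described above (all names, the voting rule, and the noise parameters are invented for illustration; this is not the authors' model): components of the noisy input are sampled at random, and each reference template accumulates a vote whenever its sign agrees with the sampled component, so the similarity evaluation itself is stochastic.

```python
import random

def stochastic_match(noisy, refs, trials=400, seed=1):
    """Return the index of the reference template that wins the most
    stochastic sign-agreement votes against the noisy input vector."""
    rng = random.Random(seed)
    votes = [0] * len(refs)
    for _ in range(trials):
        i = rng.randrange(len(noisy))          # sample one component per trial
        for k, ref in enumerate(refs):
            if (ref[i] > 0) == (noisy[i] > 0):  # signs agree -> one vote
                votes[k] += 1
    return max(range(len(refs)), key=votes.__getitem__)

# Two memorized templates and a noisy version of the first one.
templates = [[1, 1, -1, -1], [-1, 1, -1, 1]]
noise = random.Random(2)
noisy_input = [x + noise.gauss(0, 0.3) for x in templates[0]]
```

Because only signs are compared and votes are aggregated over many trials, moderate additive noise rarely changes the winner, which is the robustness property the abstract emphasizes.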
80.
This paper presents a comprehensive review of the work done during 1968–2005 on the application of statistical and intelligent techniques to solve the bankruptcy prediction problem faced by banks and firms. The review is organized by the type of technique applied to the problem. Accordingly, the papers are grouped into the following families of techniques: (i) statistical techniques, (ii) neural networks, (iii) case-based reasoning, (iv) decision trees, (v) operational research, (vi) evolutionary approaches, (vii) rough-set-based techniques, (viii) other techniques subsuming fuzzy logic, support vector machines and isotonic separation, and (ix) soft computing, the seamless hybridization of all the above-mentioned techniques. For each paper, the review highlights the source of the data sets, the financial ratios used, the country of origin, the time line of the study and the comparative performance of the techniques in terms of prediction accuracy, wherever available. The review also lists some important directions for future research.