11.
This paper reports a combined experimental and numerical investigation of three-dimensional steady turbulent flows in inlet manifolds of square cross-section. Predictions and measurements of the flows were carried out using computational fluid dynamics and laser Doppler anemometry, respectively. The flow structure was characterized in detail and the effects of flow split ratio and inlet flow rate were studied. These were found to cause significant variations in the size and shape of the recirculation regions in the branches, and in the turbulence levels. The flow rates through the different branches were also found to differ significantly. The performance of the code was assessed through a comparison between predictions and measurements, which demonstrates that all important features of the flow are well represented by the predictions.
12.
We study the behavior of dynamic programming methods for the tree edit distance problem, such as those of [P. Klein, Computing the edit-distance between unrooted ordered trees, in: Proceedings of the 6th European Symposium on Algorithms, 1998, pp. 91–102] and [K. Zhang, D. Shasha, SIAM J. Comput. 18 (6) (1989) 1245–1262]. We show that these two algorithms may be described as decomposition strategies. We introduce the general framework of cover strategies, and we provide an exact characterization of the complexity of cover strategies. This analysis allows us to define a new tree edit distance algorithm that is optimal among cover strategies.
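The decomposition idea common to both algorithms, recursing on forests by deleting, inserting, or matching the rightmost roots, can be sketched as a memoized recursion. The block below is a naive illustrative version with unit costs (exponential memo size in the worst case), not the optimized strategies the paper analyzes; trees are assumed to be `(label, children_tuple)` pairs.

```python
from functools import lru_cache

def edit_distance(t1, t2):
    """Unit-cost tree edit distance via a naive memoized forest recursion.
    Real algorithms (Zhang-Shasha, Klein) organize this same recursion
    into polynomially many distinct subproblems."""
    def size(t):
        return 1 + sum(size(c) for c in t[1])

    @lru_cache(maxsize=None)
    def fdist(f1, f2):
        if not f1 and not f2:
            return 0
        if not f1:
            return sum(size(t) for t in f2)   # insert all of f2
        if not f2:
            return sum(size(t) for t in f1)   # delete all of f1
        (l1, c1), rest1 = f1[-1], f1[:-1]
        (l2, c2), rest2 = f2[-1], f2[:-1]
        return min(
            fdist(rest1 + c1, f2) + 1,                        # delete rightmost root of f1
            fdist(f1, rest2 + c2) + 1,                        # insert rightmost root of f2
            fdist(c1, c2) + fdist(rest1, rest2) + (l1 != l2), # match the two roots
        )

    return fdist((t1,), (t2,))
```

The three-way minimum in the last step is exactly the choice point that a decomposition strategy resolves; different strategies pick which end of the forest to decompose.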
13.
The peeling of a d-dimensional set of points is usually performed with successive calls to a convex hull algorithm. The optimal worst-case convex hull algorithm, known to have an O(n log n) execution time, may take O(n&#178; log n) to peel the whole set. An O(n&#183;m) convex hull algorithm, m being the number of extremal points, is shown to peel every set in O(n&#178;) time, and this is proved optimal. An implementation of this algorithm is given for planar sets and spatial sets, although the latter achieves only an approximate O(n&#178;) performance.
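The baseline scheme the abstract starts from, peeling by repeated convex-hull calls, is easy to sketch in the plane. This is a minimal illustration using Andrew's monotone chain (O(n log n) per hull), i.e. the naive approach the paper improves on, not the paper's own algorithm.

```python
def convex_hull(pts):
    """Andrew's monotone chain, O(n log n); returns hull vertices."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def peel(pts):
    """Convex layers by repeatedly removing the current hull: the naive
    scheme whose worst case motivates the improved peeling algorithm."""
    layers, remaining = [], set(pts)
    while remaining:
        hull = convex_hull(list(remaining))
        layers.append(hull)
        remaining -= set(hull)
    return layers
```

Each iteration discards only the current extremal points, so up to O(n) hull calls may be needed, which is where the O(n&#178; log n) worst case comes from.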
14.
Polynomial-time approximation schemes for packing and piercing fat objects
We consider two problems: given a collection of n fat objects in a fixed dimension, (1) (packing) find the maximum subcollection of pairwise disjoint objects, and (2) (piercing) find the minimum point set that intersects every object. Recently, Erlebach, Jansen, and Seidel gave a polynomial-time approximation scheme (PTAS) for the packing problem, based on a shifted hierarchical subdivision method. Using shifted quadtrees, we describe a similar algorithm for packing but with a smaller time bound. Erlebach et al.'s algorithm requires polynomial space. We describe a different algorithm, based on geometric separators, that requires only linear space. This algorithm can also be applied to piercing, yielding the first PTAS for that problem.
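For intuition about the packing objective, the one-dimensional special case (a maximum set of pairwise disjoint intervals) is solvable exactly by a greedy sweep. This is only an illustration of the problem being approximated, not the shifted-quadtree or separator-based PTAS; in dimension two and above the fat-object version is NP-hard, hence the approximation schemes.

```python
def max_disjoint_intervals(intervals):
    """Maximum pairwise-disjoint subcollection of closed intervals:
    the 1-D analogue of the packing problem. Greedily taking the
    interval with the earliest right endpoint is exactly optimal here."""
    chosen, last_end = [], float("-inf")
    for l, r in sorted(intervals, key=lambda iv: iv[1]):
        if l > last_end:          # disjoint from everything chosen so far
            chosen.append((l, r))
            last_end = r
    return chosen
```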
15.
Heavily overlapped, or congested, spectra often display much structure but few individual "lines." Methods have been devised for analyzing such spectra through nonlinear least-squares fitting of the intensity as a function of wavelength or wavenumber. Such total spectrum fitting (TSF) methods are examined statistically for a simple diatomic model and compared with the standard "measure-assign-fit" (MAF) approach in use since the dawn of spectroscopy. Monte Carlo computations on typically 1000 synthetic spectra at a time verify that the predictions of the variance-covariance matrix are reliable under many circumstances. However, in regions where the P and R branches double up, the predicted standard errors in the key spectroscopic constants rise sharply and greatly exceed estimates from the Monte Carlo ensemble statistics. In the same regions, the MAF method actually gives better precision. However, for imperfectly overlapped R and P branches, the MAF standard errors are typically three times larger than the TSF values; moreover, the MAF statistical errors are dwarfed by bias. The TSF approach, while clearly superior in these tests, has a practical drawback: it, too, can give significant bias if the spectra are analyzed with an incorrect model, as illustrated here through calculations employing the wrong function to describe the spectral lineshape.
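The Monte Carlo methodology described here, generating many synthetic data sets and comparing the ensemble scatter of fitted parameters with the variance-covariance prediction, can be demonstrated on a straight-line model instead of a spectrum. All numbers below are assumptions chosen for the demonstration, not values from the paper.

```python
import math
import random
import statistics

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - b * mx, b

random.seed(0)
xs = [i / 10 for i in range(50)]
sigma = 0.3                                   # assumed noise level

# Analytic prediction from the variance-covariance matrix: se(b) = sigma/sqrt(Sxx)
mx = sum(xs) / len(xs)
se_pred = sigma / math.sqrt(sum((x - mx) ** 2 for x in xs))

# Monte Carlo ensemble: 1000 synthetic data sets, as in the abstract's protocol
slopes = []
for _ in range(1000):
    ys = [1.0 + 2.0 * x + random.gauss(0, sigma) for x in xs]
    slopes.append(fit_line(xs, ys)[1])
se_mc = statistics.stdev(slopes)
```

When the model is correct, `se_mc` agrees with `se_pred` to within sampling scatter, which is the kind of agreement the paper verifies (and whose breakdown it documents in branch-doubling regions).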
16.
A combined interference and diffraction pattern, in the form of equidistant interference fringes, resulting from illuminating a vertical metallic wire with a laser beam is analyzed to measure the diameter of four standard wires. The diameters range from 170 to 450 μm. It is found that the error in the diameter measurements increases for small metallic wires and for small distances between the wire and the screen, due to scattering effects. The intensity of the incident laser beam was controlled by a pair of sheet polaroids to minimize the scattered radiation. The technique used is highly sensitive, but requires controlled environmental conditions and the absence of vibration effects. The expanded uncertainty for k=2 is calculated and found to decrease from U(D)=±1.45 μm for the wire of nominal diameter 170 μm to ±0.57 μm for the 450 μm diameter.
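The underlying relation can be sketched as follows: by Babinet's principle, a wire of diameter D diffracts like a slit of width D, so in the small-angle far field the fringe spacing on a screen at distance L is approximately λL/D. The numerical values below are hypothetical, chosen to land near the paper's 170 μm wire, not measured data from the paper.

```python
def wire_diameter(wavelength_m, screen_distance_m, fringe_spacing_m):
    """Estimate wire diameter from far-field fringe spacing.
    Babinet's principle: a wire of diameter D diffracts like a slit of
    width D, so adjacent minima are separated by ~ lambda*L/D
    (small-angle approximation)."""
    return wavelength_m * screen_distance_m / fringe_spacing_m

# Hypothetical setup: He-Ne laser (632.8 nm), screen 1.5 m away,
# measured fringe spacing 5.58 mm -> diameter near 170 micrometres
D = wire_diameter(632.8e-9, 1.5, 5.58e-3)
```

The inverse dependence on fringe spacing also shows why thin wires and short wire-to-screen distances are harder: both widen or blur the fringes that must be resolved.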
17.
18.
An important aspect of learning is the ability to transfer knowledge to new contexts. However, in dynamic decision tasks, such as bargaining, firefighting, and process control, where decision makers must make repeated decisions under time pressure and outcome feedback may relate to any of a number of decisions, such transfer has proven elusive. This paper proposes a two-stage connectionist model which hypothesizes that decision makers learn to identify categories of evidence requiring similar decisions as they perform in dynamic environments. The model suggests conditions under which decision makers will be able to use this ability to help them in novel situations. These predictions are compared against those of a one-stage decision model that does not learn evidence categories, as is common in many current theories of repeated decision making. Both models' predictions are then tested against the performance of decision makers in an Internet bargaining task. Both models correctly predict aspects of decision makers' learning under different interventions. The two-stage model provides closer fits to decision maker performance in a new, related bargaining task and accounts for important features of higher-performing decision makers' learning. Although frequently omitted in recent accounts of repeated decision making, the processes of evidence category formation described by the two-stage model appear critical in understanding the extent to which decision makers learn from feedback in dynamic tasks. Faison (Bud) Gibson is an Assistant Professor at the College of Business, Eastern Michigan University. He has extensive experience developing and empirically testing models of decision behavior in dynamic decision environments.
19.
The use of simulation modeling in computational analysis of organizations is becoming a prominent approach in social science research. However, relying on simulations to gain intuition about social phenomena has significant implications. While simulations may give rise to interesting macro-level phenomena, and sometimes even mimic empirical data, the underlying micro- and macro-level processes may be far from realistic. Yet this realism may be important to infer results that are relevant to existing theories of social systems and to policy making. Therefore, it is important to assess not only the predictive capability but also the explanation accuracy of formal models, in terms of the degree of realism reflected by the embedded processes. This paper presents a process-centric perspective for the validation and verification (V&V) of agent-based computational organization models. Following an overview of the role of V&V within the life cycle of a simulation study, emergent issues in agent-based organization model V&V are outlined. The notion of social contract, which facilitates capturing micro-level processes among agents, is introduced to enable reasoning about the integrity and consistency of agent-based organization designs. Social contracts are shown to enable modular compositional verification of interaction dynamics among peer agents. Two types of consistency are introduced: horizontal and vertical consistency. It is argued that such local consistency analysis is necessary, but insufficient, to validate emergent macro processes within multi-agent organizations. As such, new formal validation metrics are introduced to substantiate the operational validity of emergent macro-level behavior. Levent Yilmaz is Assistant Professor of Computer Science and Engineering in the College of Engineering at Auburn University and co-founder of the Auburn Modeling and Simulation Laboratory of the M&SNet. Dr. Yilmaz received his Ph.D. and M.S. degrees from Virginia Polytechnic Institute and State University (Virginia Tech). His research interests are in advancing the theory and methodology of simulation modeling, agent-directed simulation (to explore dynamics of socio-technical systems, organizations, and human/team behavior), and education in simulation modeling. Dr. Yilmaz is a member of ACM, IEEE Computer Society, Society for Computer Simulation International, and Upsilon Pi Epsilon. URL: http://www.eng.auburn.edu/~yilmaz
20.
Turn-bounded pushdown automata with different conditions for beginning a new turn are investigated. Their relationships with closures of the linear context-free languages under regular operations are studied. For example, automata with an unbounded number of turns that have to empty their pushdown store up to the initial symbol in order to start a new turn are characterized by the regular closure of the linear languages. Automata that additionally have to re-enter the initial state are (almost) characterized by the Kleene star closure of the linear languages. For both a bounded and an unbounded number of turns, requiring the pushdown store to be emptied is a strictly stronger condition than requiring the initial state to be re-entered. Several new language families are obtained which form a double-stranded hierarchy. Closure properties of these families under AFL operations are derived. The regular closure of the linear languages shares the strong closure properties of the context-free languages, i.e., the family is a full AFL. Interestingly, three natural new language families are not closed under intersection with regular languages and inverse homomorphism. Finally, an algorithm is presented that parses languages from the new families in quadratic time.
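A one-turn pushdown automaton, the machine model that characterizes the linear context-free languages, can be simulated in a few lines. This sketch (my illustration, not a construction from the paper) recognizes the standard linear example {a^n b^n : n >= 0}: the pushdown grows during the push phase, then only shrinks, and forbidding any push after the first pop is exactly the single "turn".

```python
def one_turn_pda_accepts(word):
    """Recognize the linear language {a^n b^n : n >= 0} with a one-turn
    pushdown: push on 'a', pop on 'b'. After the first pop (the turn),
    the automaton may never push again."""
    stack, turned = [], False
    for ch in word:
        if ch == 'a':
            if turned:
                return False      # a push after the turn would be a second turn
            stack.append('A')
        elif ch == 'b':
            turned = True
            if not stack:
                return False      # nothing left to match
            stack.pop()
        else:
            return False          # alphabet is {a, b}
    return not stack              # accept iff the pushdown is empty
```

Emptying the pushdown before each new turn, the condition studied in the abstract, generalizes this picture: a run then decomposes into a sequence of such one-turn phases, which is why the regular closure of the linear languages appears.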