2.
This study examined how selected U.S. and Asian mathematics curricula are designed to facilitate students' understanding of the arithmetic average. The curriculum series share consistent learning goals, but their emphases differ between the Asian series and the U.S. reform series. The Asian series and the U.S. commercial series emphasize conceptual and procedural understanding of the average as a computational algorithm rather than as a representative of a data set; the two U.S. reform series emphasize the latter. Because of these different emphases, the Asian and U.S. curriculum series treat the concept differently. In the Asian series, the concept is first introduced in the context of "equal sharing" or "per-unit quantity," and the averaging formula is formally introduced at a very early stage. In the U.S. reform series, the concept is discussed as a measure of central tendency, and only after students have developed some intuitive ideas of the statistical aspect of the concept is the averaging algorithm briefly introduced.
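The two emphases the abstract contrasts can be made concrete in a few lines of Python (an illustration of the two interpretations, not code from the study): the computational algorithm and the equal-sharing view produce the same number, but they model different ways of thinking about it.

```python
# Illustrative sketch (not from the study): two views of the arithmetic average.

def average_by_formula(values):
    """The computational algorithm: sum divided by count."""
    return sum(values) / len(values)

def average_by_equal_sharing(values):
    """The equal-sharing view: pool the total, then redistribute it so
    every member of the group holds the same amount."""
    total = sum(values)            # pool everything together
    share = total / len(values)    # deal it back out evenly
    return share

data = [3, 5, 7, 9]
# Both interpretations yield the same representative value, 6.0.
assert average_by_formula(data) == average_by_equal_sharing(data)
```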
4.
We consider a zero-sum stochastic game with a special structure and side constraints for both players. There are two independent controlled Markov chains, one for each player. The transition probabilities of the chain associated with a player, as well as the related side constraints, depend only on the actions of the corresponding player; the side constraints also depend on the player's controlled chain. The global cost, which player 1 wishes to minimize and player 2 wishes to maximize, depends however on the actions and Markov chains of both players. We obtain a linear programming formulation that allows us to compute the value and saddle-point policies for this problem. We illustrate the theoretical results with a zero-sum stochastic game in wireless networks in which each player has power constraints.
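The constrained stochastic game requires the full linear program derived in the paper, but the core idea of computing a value and saddle point by linear programming can be sketched on the simpler one-shot zero-sum matrix game (my own simplified setup, not the paper's formulation), using `scipy.optimize.linprog`:

```python
# Sketch: value of a zero-sum matrix game by LP (a much simpler setting
# than the constrained stochastic game in the paper).
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value v and optimal mixed strategy x for the row (maximizing)
    player of the zero-sum game with payoff matrix A (m rows, n cols)."""
    m, n = A.shape
    # Decision variables: x (m strategy weights) followed by v (the value).
    c = np.zeros(m + 1)
    c[-1] = -1.0                                 # maximize v == minimize -v
    # For every column j:  v - sum_i A[i, j] * x[i] <= 0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Strategy weights sum to one.
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]    # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    return res.x[-1], res.x[:m]

# Matching pennies: the value is 0 and the saddle point mixes 50/50.
v, x = matrix_game_value(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```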
5.
One of the essential differences in the design of bubble pressure tensiometers is the geometry of the measuring capillaries. To reach extremely short adsorption times, of milliseconds and below, the so-called dead time of the capillaries must be of the order of some 10 ms. In particular, for concentrated surfactant solutions, such as micellar solutions, short dead times are needed to minimize the initial surfactant load of the generated bubbles. A theoretical model is derived and confirmed by experiments performed over a wide range of experimental conditions, mainly with respect to variations in dead time and bubble volume.
6.
We study the convergence of the GMRES/FOM and QMR/BiCG methods for solving nonsymmetric systems of equations Ax = b. We prove, in exact arithmetic, that any type of residual norm convergence obtained using BiCG can also be obtained using FOM, but on a different system of equations. We consider practical comparisons of these procedures when they are applied to the same matrices. We use a unitary invariance shared by both methods to construct test matrices in which we can vary the nonnormality of the test matrix through variations in simplified eigenvector matrices. We used these test problems in two sets of numerical experiments. The first set was designed to study the effects of increasing nonnormality on the convergence of GMRES and QMR. The second set was designed to track the effects of the eigenvalue distribution on the convergence of QMR. In these tests the GMRES residual norms decreased significantly more rapidly than the QMR residual norms, but without corresponding decreases in the error norms. Furthermore, as the nonnormality of A was increased, the GMRES residual norms decreased more rapidly. This led to premature termination of the GMRES procedure on highly nonnormal problems. On the nonnormal test problems the QMR residual norms exhibited less sensitivity to changes in the nonnormality. The convergence of either type of procedure, as measured by the error norms, was delayed by the presence of large or small outliers and affected by the type of eigenvalues, real or complex, in the eigenvalue distribution of A. For GMRES this effect can be seen only in the error norm plots.
In honor of the 70th birthday of Ted Rivlin. This work was supported by NSF grant GER-9450081.
7.
The activities and services of the accredited Risø High Dose Reference Laboratory are described. The laboratory operates according to the European standard EN 45001 regarding Operation of Testing Laboratories, and it fulfills the requirements of being able to deliver traceable dose measurements for control of radiation sterilization. The accredited services include:

1. 1. Irradiation of dosimeters and test samples with cobalt-60 gamma rays.

2. 2. Irradiation of dosimeters and test samples with 10 MeV electrons.

3. 3. Issue of and measurement with calibrated dosimeters.

4. 4. Measurement of the dosimetric parameters of an irradiation facility.

5. 5. Measurement of absorbed dose distribution in irradiated products.

The paper describes these services and the procedures necessary for their execution.  相似文献   

8.
Topics in data assimilation: Stochastic processes
Stochastic models with varying degrees of complexity are increasingly widespread in the oceanic and atmospheric sciences. One application is data assimilation, i.e., the combination of model output with observations to form the best picture of the system under study. For any given quantity to be estimated, the relative weights of the model and the data will be adjusted according to estimated model and data error statistics, so implementation of any data assimilation scheme will require some assumption about errors, which are considered to be random. For dynamical models, some assumption about the evolution of errors will be needed. Stochastic models are also applied in studies of predictability.

The formal theory of stochastic processes was well developed in the last half of the twentieth century. One consequence of this theory is that methods of simulation of deterministic processes cannot be applied to random processes without some modification. In some cases the rules of ordinary calculus must be modified.

The formal theory was developed in a mathematical formalism that may be unfamiliar to many oceanic and atmospheric scientists. The purpose of this article is to provide an informal introduction to the relevant theory, and to point out those situations in which that theory must be applied in order to model random processes correctly.
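The point that the rules of ordinary calculus must be modified for random processes can be illustrated with a standard textbook example (my illustration, not from the article): for geometric Brownian motion dS = μS dt + σS dW, Itô's formula gives E[log S_T] = (μ − σ²/2)T, not the μT that naive calculus would suggest. A small Euler-Maruyama simulation exhibits the correction term:

```python
# Euler-Maruyama simulation of geometric Brownian motion, showing the
# Ito correction: with mu = 0.05 and sigma = 0.4, the mean of log S_T
# comes out near mu - sigma^2/2 = -0.03, not near mu = 0.05.
import numpy as np

rng = np.random.default_rng(0)

def simulate_gbm_log_mean(mu, sigma, T, n_steps, n_paths):
    """Simulate dS = mu*S dt + sigma*S dW from S_0 = 1 and return the
    Monte Carlo mean of log S_T."""
    dt = T / n_steps
    S = np.ones(n_paths)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        S = S + mu * S * dt + sigma * S * dW
        # Naively writing d(log S) = mu dt would be wrong: Ito's formula
        # gives d(log S) = (mu - sigma^2/2) dt + sigma dW.
    return np.log(S).mean()

est = simulate_gbm_log_mean(0.05, 0.4, 1.0, 1000, 20000)
# est is negative, even though the drift mu is positive.
```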

9.
This is a summary of the activities of the working group on collider physics at the IXth Workshop on High Energy Physics Phenomenology (WHEPP-9), held at the Institute of Physics, Bhubaneswar, India, in January 2006. Some of the work subsequently done on these problems by the subgroups formed during the workshop is included in this report.