242.
The environmental mobility and availability of radionuclides in soils and sediments depend on their speciation. Experiments were carried out to develop a simple but robust sequential extraction method for identifying radionuclide partitioning in sediments and soils. The sequential extraction protocol was optimized for temperature, reagent concentration, and reaction time, with optimum conditions chosen based on the release of 239,240Pu, 238U, and stable elements. Results from experiments with a lake sediment (SRM 4354) are compared to previous trials in which the protocol was optimized with an ocean sediment (SRM 4357). Based on these two trials, the NIST standard sequential extraction protocol is established, with defined settings for temperature, reagent concentration, and extraction time.
243.
Fragment-based drug discovery (FBDD) represents a change in strategy: instead of screening molecules with higher molecular weights and physical properties akin to fully drug-like compounds, one screens smaller, less complex molecules. The rationale is that fragment hits can be efficiently grown and optimised into leads, particularly once the binding mode to the target protein has been determined by 3D structural elucidation, e.g. by NMR or X-ray crystallography. Several studies have shown that medicinal chemistry optimisation of an already drug-like hit or lead compound can yield a final compound with excessively high molecular weight and lipophilicity. Evolving a lower-molecular-weight fragment hit is therefore an attractive alternative approach to optimisation, as it allows better control of compound properties. Computational chemistry can play an important role both prior to a fragment screen, in producing a target-focussed fragment library, and post-screening, in the evolution of a drug-like molecule from a fragment hit, with or without an available fragment-target co-complex structure. We review many of the current developments in the area and illustrate them with recent examples from successful FBDD projects that we have conducted.
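As context for the library-design step this abstract mentions, the following is a minimal, hypothetical sketch of pre-filtering a fragment library with the widely cited "rule of three" (Congreve et al., 2003) using RDKit. The thresholds, function name, and example SMILES are illustrative and are not taken from the review itself.

```python
# Hypothetical rule-of-three pre-filter for a fragment library (RDKit).
# Thresholds follow the commonly cited "rule of three"; they are
# illustrative assumptions, not criteria from the reviewed paper.
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen, Lipinski

def passes_rule_of_three(smiles: str) -> bool:
    """Return True if the molecule looks fragment-like under the rule of three."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:  # unparsable SMILES are rejected
        return False
    return (Descriptors.MolWt(mol) <= 300
            and Crippen.MolLogP(mol) <= 3
            and Lipinski.NumHDonors(mol) <= 3
            and Lipinski.NumHAcceptors(mol) <= 3
            and Descriptors.NumRotatableBonds(mol) <= 3)

library = ["c1ccccc1O", "CC(=O)Nc1ccc(O)cc1", "CCCCCCCCCCCCCCCC(=O)O"]
fragments = [s for s in library if passes_rule_of_three(s)]
print(fragments)  # the palmitic-acid SMILES fails on logP and rotatable bonds
```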
244.
The synthesis of the C2-symmetric ligand 1, consisting of two naphthalene units connected to two pyridine-2,6-dicarboxamide moieties linked by a xylene spacer, and the metal-directed formation of Ln(III)-based (Ln = Sm, Eu, Tb, and Lu) dimetallic helicates [Ln2(1)3] in MeCN are described. By analyzing the metal-induced changes in the absorption and fluorescence of 1, the formation of the helicates and the presence of a second species, [Ln2(1)2], were confirmed by nonlinear-regression analysis. While significant changes were observed in the photophysical properties of 1, the most dramatic changes occurred in the metal-centred lanthanide emission upon excitation of the naphthalene antennae. From the changes in the lanthanide emission, we were able to demonstrate that these helicates form in high yield (ca. 90% after the addition of 0.6 equiv. of Ln(III)) with high binding constants, which matched well with those determined from the changes in the absorption spectra. The formation of the Lu(III) helicate, [Lu2(1)3], was also investigated for comparison, as we were unable to obtain accurate binding constants from the changes in the fluorescence emission upon formation of [Sm2(1)3], [Eu2(1)3], and [Tb2(1)3].
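To illustrate the nonlinear-regression idea used above in its simplest form, here is a sketch of fitting a deliberately oversimplified 1:1 binding isotherm to titration data with scipy. The real system forms 2:3 and 2:2 species and requires a multi-equilibrium speciation model; the model, variable names, and data below are invented for illustration only.

```python
# Minimal sketch: fitting a single-site (1:1) binding isotherm to titration
# data. The paper's system forms [Ln2(1)3] and [Ln2(1)2] species, so a real
# fit needs a multi-equilibrium model; this only illustrates the
# nonlinear-regression step. All numbers below are fabricated.
import numpy as np
from scipy.optimize import curve_fit

def isotherm(conc, i0, i_lim, k):
    """Observed emission vs. titrant concentration for 1:1 binding."""
    frac_bound = k * conc / (1.0 + k * conc)
    return i0 + (i_lim - i0) * frac_bound

conc = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 1.0, 2.0])          # equiv. of Ln(III)
signal = np.array([0.02, 0.18, 0.33, 0.52, 0.63, 0.74, 0.82])  # fabricated

popt, pcov = curve_fit(isotherm, conc, signal, p0=[0.0, 1.0, 1.0])
i0, i_lim, k = popt
print(f"fitted K = {k:.2f} (arbitrary units)")
```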
245.
A semi-preparative high-performance liquid chromatography process was evaluated as a tool to quantitatively determine the purity, or percentage mass fraction content (% m/m), of organic compounds. The method is simple and does not require the identification and subsequent quantitation of structurally related organic impurities. A protocol was developed and tested on four reference materials certified for purity from 95% m/m to 99.3% m/m. Comparing the purity results for each certified reference material obtained with the new approach against the respective certified values showed no significant analytical bias. Semi-preparative high-performance liquid chromatography has thus demonstrated the potential to be a primary method directly traceable to mass, with an uncertainty statement also expressed in terms of mass: the expanded uncertainty ranges from 0.8% to 1.3% m/m, compared to 0.3% to 2.0% m/m for the certified purity values at the 95% confidence interval.
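To make the uncertainty statement concrete, here is a small illustrative calculation of an expanded uncertainty at coverage factor k = 2 (roughly 95% confidence) from invented uncertainty components. Neither the component names nor the numbers come from the paper; only the k = 2 convention is standard metrology practice.

```python
# Illustrative arithmetic only: combining invented standard-uncertainty
# components in quadrature and expanding with k = 2 (~95 % coverage).
u_gravimetric = 0.30  # weighing of collected fractions, % m/m (invented)
u_repeat = 0.25       # run-to-run repeatability, % m/m (invented)
u_recovery = 0.20     # incomplete fraction recovery, % m/m (invented)

u_combined = (u_gravimetric**2 + u_repeat**2 + u_recovery**2) ** 0.5
U_expanded = 2.0 * u_combined  # coverage factor k = 2

purity = 98.7  # measured mass fraction, % m/m (invented)
print(f"purity = {purity:.1f} % m/m, U(k=2) = {U_expanded:.1f} % m/m")
```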
246.
It is a well-established fact that the witness complex is closely related to the restricted Delaunay triangulation in low dimensions. Specifically, it has been proved that the witness complex coincides with the restricted Delaunay triangulation on curves, and is still a subset of it on surfaces, under mild sampling conditions. In this paper, we prove that these results do not extend to higher-dimensional manifolds, even under strong sampling conditions such as uniform point density. On the positive side, we show how the sets of witnesses and landmarks can be enriched, so that the nice relations between the restricted Delaunay triangulation and the witness complex hold on higher-dimensional manifolds as well. We derive from our structural results an algorithm that reconstructs manifolds of arbitrary dimension or co-dimension at different scales. The algorithm combines a farthest-point refinement scheme with a vertex pumping strategy. It is conceptually very simple, and it does not require the input point sample to be sparse. Its running time is bounded by c(d)n^2, where n is the size of the input point cloud and c(d) is a constant depending solely (yet exponentially) on the dimension d of the ambient space. Although this running time makes our reconstruction algorithm rather theoretical, recent work has shown that a variant of our approach can be made tractable in arbitrary dimensions, building upon the results of this paper. This work was done while S.Y. Oudot was a post-doctoral fellow at Stanford University.
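For readers unfamiliar with the farthest-point refinement scheme named in this abstract, here is a minimal sketch of the greedy landmark-selection step: each new landmark is the point farthest from the current landmark set. Function names and the toy data are our own; the paper's full algorithm also interleaves the vertex pumping strategy, which is not shown.

```python
# Greedy farthest-point sampling: pick k landmarks so that each new landmark
# maximizes its distance to the landmarks already chosen.
import numpy as np

def farthest_point_sample(points: np.ndarray, k: int) -> list[int]:
    """Return indices of k landmarks chosen by greedy farthest-point sampling."""
    landmarks = [0]  # arbitrary seed point
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dist))  # farthest from the current landmark set
        landmarks.append(idx)
        new_dist = np.linalg.norm(points - points[idx], axis=1)
        dist = np.minimum(dist, new_dist)  # distance to nearest landmark
    return landmarks

rng = np.random.default_rng(0)
cloud = rng.standard_normal((500, 3))  # toy point cloud in R^3
print(farthest_point_sample(cloud, 8))
```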
247.
A new finite element (FE) is formulated that extends previous FE models for studying constrained layer damping (CLD) in beams. Most existing CLD FE models assume that shear deformation in the core layer is the only source of damping in the structure. However, previous research has shown that other deformations of the core layer, such as longitudinal extension and transverse compression, can also be important. In the finite element formulated here, shear, extension, and compression deformations are all included. As presented, the element has 14 degrees of freedom, but it can be extended to CLD structures with more than three layers. A numerical study shows that this finite element predicts the dynamic characteristics accurately. There is a limitation when the core layer is stiff, however, as the new element then tends to overpredict loss factors and natural frequencies. The element can therefore be accepted as a general computational model for studying the CLD mechanism when the core layer is soft. Because the element includes all three types of damping, the computational cost can be very high for large-scale models. Based on this consideration, a simplified finite element modeling approach is also presented, building on an existing experimental approach for extracting equivalent properties of a CLD structure. Numerical examples show that using these extracted properties with commercially available FE models can give sufficiently accurate results at lower computational expense.
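As background on how loss factors and natural frequencies are typically extracted in such analyses, here is a sketch using the complex-stiffness model: the viscoelastic core contributes a complex stiffness, and the generalized eigenvalues of (K, M) yield frequency and modal loss factor. The 2-DOF matrices are invented toy values, not the paper's 14-DOF element.

```python
# Sketch: modal frequencies and loss factors from a complex eigenproblem
# K x = lambda M x, with structural damping confined to a soft core layer.
import numpy as np
from scipy.linalg import eig

eta_core = 0.3  # material loss factor of the viscoelastic core (assumed)
K_faces = np.array([[1.5e6, -0.5e6],
                    [-0.5e6, 1.0e6]])       # elastic face-layer stiffness, N/m (toy)
K_core = 0.5e6 * np.array([[1.0, -1.0],
                           [-1.0, 1.0]])    # core shear spring, N/m (toy)
K = K_faces + (1.0 + 1j * eta_core) * K_core  # complex-stiffness damping model
M = np.diag([1.0, 0.5])                     # mass matrix, kg (toy)

lam, _ = eig(K, M)  # complex eigenvalues: lambda = omega^2 * (1 + i * eta_mode)
for l in sorted(lam, key=lambda z: z.real):
    print(f"f = {np.sqrt(l.real) / (2 * np.pi):7.1f} Hz,"
          f" modal loss factor = {l.imag / l.real:.3f}")
```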
248.
Geometry on Probability Spaces
Partial differential equations and the Laplacian operator on domains in Euclidean spaces have played a central role in understanding natural phenomena. However, this avenue has been limited in many areas where calculus is obstructed, as in singular spaces, and in spaces of functions on a space X where X itself is a function space. Examples of the latter occur in vision and quantum field theory. In vision it would be useful to do analysis on the space of images, where an image is a function on a patch. Moreover, in analysis and geometry, the Lebesgue measure and its counterpart on manifolds are central. These measures are unavailable in the vision example, and indeed in learning theory in general. There is one situation where, in the last several decades, the problem has been studied with some success: when the underlying space is finite (or even discrete). The introduction of the graph Laplacian has been a major development in algorithm research and is certainly useful for unsupervised learning theory. The approach taken here is to take advantage of both the classical research and the newer graph-theoretic ideas to develop geometry on probability spaces. This starts with a space X equipped with a kernel (like a Mercer kernel), which gives a topology and geometry; X is also equipped with a probability measure. The main focus is on the construction of a (normalized) Laplacian, an associated heat equation, diffusion distance, etc. In this setting, the point estimates of calculus are replaced by integral quantities; one thinks of secants rather than tangents. Our main result bounds the error of an empirical approximation to this Laplacian on X.
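As a concrete reference point for the graph-Laplacian construction the abstract builds on, here is a minimal sketch: a Gaussian (Mercer-type) kernel on sampled points defines a weight matrix W, and the normalized Laplacian is L = I - D^{-1/2} W D^{-1/2}. The bandwidth, function name, and toy data are illustrative assumptions.

```python
# Normalized graph Laplacian from a Gaussian kernel on sample points.
import numpy as np

def normalized_laplacian(X: np.ndarray, bandwidth: float) -> np.ndarray:
    """L = I - D^{-1/2} W D^{-1/2} with Gaussian-kernel weights W."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-sq_dists / (2.0 * bandwidth**2))  # kernel weight matrix
    d = W.sum(axis=1)                             # vertex degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 2))        # 200 samples drawn from a probability measure
L = normalized_laplacian(X, bandwidth=0.2)
eigvals = np.linalg.eigvalsh(L)
print(eigvals[:5])                    # smallest eigenvalues; the first is ~0
```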
249.
Problems of matching have long been studied in the operations research literature (assignment problem, secretary problem, stable marriage problem). All of these consider a centralized mechanism whereby a single decision maker chooses a complete matching that optimizes some criterion. This paper analyzes a more realistic scenario in which members of two groups (buyers-sellers, employers-workers, males-females) randomly meet each other in pairs (interviews, dates) over time and form couples when there is mutual agreement to do so. We assume members of each group have common preferences over members of the other group. Generalizing an earlier model of Alpern and Reyniers [Alpern, S., Reyniers, D.J., 2005. Strategic mating with common preferences. J. Theor. Biol. 237, 337-354], we assume that one group (called males) is r times larger than the other, r ≥ 1. Thus all females, but only 1/r of the males, end up matched. Unmatched males have negative utility -c. We analyze equilibria of this matching game as a function of the parameters r and c. In a region of (r, c) space with multiple equilibria, we compare these equilibria and analyze their 'efficiency' in several respects. This analysis should prove useful for designers of matching mechanisms who have some control over the sex ratio (e.g. by capping the number of males at a 'singles event' or by having 'ladies free' nights) or over the non-mating cost c (e.g. tax benefits to married couples).
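To give a feel for the model's dynamics, here is a toy Monte Carlo simulation of the random-meeting process: the male pool is r times the female pool, agents carry a common "quality" score, and a meeting becomes a match only under mutual acceptance. The fixed acceptance threshold is an illustrative strategy of our own, not the paper's equilibrium computation.

```python
# Toy simulation of random pairwise meetings with mutual-acceptance matching.
# Threshold strategy and all parameters are illustrative assumptions.
import random

def simulate(n_females=1000, r=2.0, threshold=0.3, periods=100, seed=0):
    rng = random.Random(seed)
    females = [rng.random() for _ in range(n_females)]       # quality scores
    males = [rng.random() for _ in range(int(r * n_females))]
    matched = 0
    for _ in range(periods):
        rng.shuffle(males)
        still_f, still_m = [], []
        for i, f in enumerate(females):
            if i < len(males):
                m = males[i]                  # random pairwise meeting
                if f >= threshold and m >= threshold:
                    matched += 1              # mutual acceptance -> couple leaves
                    continue
                still_m.append(m)
            still_f.append(f)
        still_m.extend(males[len(females):])  # males who met no one this period
        females, males = still_f, still_m
    return matched, len(males)                # couples formed, males unmatched

couples, single_males = simulate()
print(f"{couples} couples formed; {single_males} males unmatched (utility -c each)")
```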
250.
Consider a second order divergence form elliptic operator L with complex bounded measurable coefficients. In general, operators based on L, such as the Riesz transform or the square function, may lie beyond the scope of the classical Calderón-Zygmund theory: they need not be bounded on the classical Hardy, BMO, and even some L^p spaces. In this work we develop a theory of Hardy and BMO spaces associated to L which includes, in particular, a molecular decomposition, maximal and square function characterizations, duality of the Hardy and BMO spaces, and a John-Nirenberg inequality. S. Hofmann was supported by the National Science Foundation.
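For orientation, the classical John-Nirenberg inequality that this theory adapts to the L-based setting can be stated as follows; this is the standard textbook form, not a statement quoted from the paper.

```latex
% Classical John-Nirenberg inequality: for f in BMO, any cube Q, and
% constants C_1, C_2 depending only on the dimension,
\[
  \bigl|\{\, x \in Q : |f(x) - f_Q| > \lambda \,\}\bigr|
  \;\le\; C_1 \, |Q| \,
  \exp\!\Bigl( - \frac{C_2 \, \lambda}{\|f\|_{\mathrm{BMO}}} \Bigr),
  \qquad \lambda > 0,
\]
% where f_Q denotes the average of f over Q. It implies exponential
% integrability of BMO functions and the equivalence of BMO seminorms
% defined with different L^p averages.
```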