71.
This is Part I of a tutorial review giving an overview of the state of the art of method validation in liquid chromatography–mass spectrometry (LC–MS) and discussing the specific issues that arise with MS (and MS/MS) detection in LC, as opposed to "conventional" detectors. Part I briefly introduces the principles of operation of LC–MS (emphasizing the aspects important from the validation point of view, in particular the ionization process and ionization suppression/enhancement), reviews the main validation guideline documents, and discusses in detail the following performance parameters: selectivity/specificity/identity, ruggedness/robustness, limit of detection, limit of quantification, decision limit, and detection capability. For each performance characteristic, its essence and terminology are addressed, the current status of its treatment is reviewed, and recommendations are given on how to determine it, specifically for LC–MS methods.
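As an illustration of two of the parameters listed above, the sketch below estimates the limit of detection and limit of quantification from a calibration curve using the common 3.3σ/S and 10σ/S conventions. This is a generic example, not necessarily the procedure the review recommends; the calibration data are hypothetical.

```python
import numpy as np

def lod_loq(conc, signal):
    """Estimate LOD and LOQ from calibration data using the
    3.3*sigma/S and 10*sigma/S conventions, where S is the
    calibration slope and sigma the residual standard deviation."""
    slope, intercept = np.polyfit(conc, signal, 1)
    residuals = signal - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)  # ddof=2: two fitted parameters
    return 3.3 * sigma / slope, 10 * sigma / slope

# Hypothetical calibration data: concentration (ng/mL) vs. peak area
conc = np.array([1, 2, 5, 10, 20, 50], dtype=float)
signal = np.array([110, 205, 498, 1010, 1985, 5050], dtype=float)
lod, loq = lod_loq(conc, signal)
print(f"LOD ~ {lod:.2f} ng/mL, LOQ ~ {loq:.2f} ng/mL")
```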
72.
Single-molecule force spectroscopy, as implemented in an atomic force microscope, provides a rarely used method by which to monitor dynamic processes that occur near surfaces. Here, a methodology is presented and characterized that facilitates the study of polymer bridging across nanometer-sized gaps. The model system employed is that of DNA-based reversible polymers, and an automated procedure is introduced that allows the AFM tip–surface contact point to be determined automatically and the distance d between opposing surfaces to be actively controlled. Using this methodology, the influence of several experimental parameters was studied systematically, e.g. the frequency of repeated tip/surface contacts, the area of the substrate surface sampled by the AFM, and the use of multiple AFM tips and substrates. Experiments revealed the surfaces to be robust throughout pulling experiments, so that multiple touches and pulls could be carried out on a single spot with no measurable effect on the results. Differences in bridging probability were observed, both between different spots on the same surface and, more dramatically, from one day to another. Normalization against a reference measurement allows data from multiple days to be compared directly.
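A minimal sketch of the kind of automated contact-point detection the abstract describes: fit the baseline of the approach segment of a force–distance curve and flag the first point where the deflection departs from it by more than a few standard deviations. This is a generic illustration, not the authors' actual procedure; the thresholds, units, and synthetic data are assumptions.

```python
import numpy as np

def find_contact_point(z, deflection, baseline_frac=0.5, n_sigma=5.0):
    """Locate the tip-surface contact point in an approach curve.

    z          -- piezo positions, increasing toward the surface
    deflection -- cantilever deflection signal at each z
    Fits a line to the first `baseline_frac` of the curve (assumed
    off-surface) and returns the first index where the deflection
    exceeds the baseline by n_sigma residual standard deviations.
    """
    n = int(len(z) * baseline_frac)
    slope, intercept = np.polyfit(z[:n], deflection[:n], 1)
    residual = deflection - (slope * z + intercept)
    sigma = residual[:n].std()
    above = np.nonzero(residual > n_sigma * sigma)[0]
    return above[0] if above.size else None

# Synthetic approach curve: flat baseline, then a linear contact ramp
z = np.linspace(0.0, 100.0, 1000)
defl = np.where(z < 80, 0.0, (z - 80) * 0.5)
defl += np.random.default_rng(0).normal(0, 0.05, z.size)
idx = find_contact_point(z, defl)
print(f"contact detected near z = {z[idx]:.1f} (arbitrary units)")
```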
73.
In order to increase the present understanding of bimodal emulsion drop size distributions, a systematic series of experiments was carried out to investigate the effects of formulation variables on bimodal drop size distributions, and probability distribution functions were proposed to analyze them. The results show that the span of the drop size distribution and the Sauter mean diameter become larger as the dispersed-phase volume fraction increases and the rotor speed decreases, and that the Fréchet function represents the experimental data satisfactorily. A prediction model for the Sauter mean diameter, established by combining the prediction theory for the maximum stable drop diameter with the experimental results, fits the experimental data well.
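A minimal sketch of fitting a Fréchet distribution to drop-size data with SciPy, where `scipy.stats.invweibull` is SciPy's parametrization of the Fréchet distribution. The data here are synthetic, and this illustrates the general approach rather than the authors' analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic drop diameters (micrometres), Frechet-distributed
diameters = stats.invweibull.rvs(c=3.0, scale=20.0, size=500, random_state=rng)

# Maximum-likelihood fit; floc=0 pins the location parameter at zero
c, loc, scale = stats.invweibull.fit(diameters, floc=0)
print(f"shape c = {c:.2f}, scale = {scale:.1f} um")

# Sauter mean diameter d32 = sum(d^3) / sum(d^2)
d32 = (diameters**3).sum() / (diameters**2).sum()
print(f"Sauter mean diameter d32 = {d32:.1f} um")
```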
74.
Pole figure measurements with an X-ray texture goniometer equipped with a point detector are rather time consuming: depending on the angular resolution to be recorded, of the order of several hours per pole figure. Conventionally, the pole hemisphere is scanned along latitudinal small circles on a regular grid with constant step sizes in both the azimuthal and the polar angle. For sharp textures, an adaptive strategy of successive local refinement of the pole hemisphere may offer better performance in less time; the measurement positions of such a grid are, however, highly irregularly distributed over the pole hemisphere. To avoid erratic movements when turning the specimen, the scanning order is optimized by solving a travelling salesman problem such that the total travelling time is minimized. Several algorithms are described for solving the travelling salesman problem on the irregular grid, to be applied for each pole figure and for each step of successive refinement. A practical application to pole figure measurements shows total savings of about 1/8 compared with the conventional scanning order. Successive local refinement of the experimental design, together with optimization of the order of its measurement positions, is well suited to controlling a texture goniometer.
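As a sketch of the idea, the code below orders irregularly placed measurement positions on the pole hemisphere with a nearest-neighbour heuristic followed by 2-opt improvement, using angular distance as a stand-in for goniometer travel time. The true travel cost of a two-axis goniometer is more complicated (each axis moves independently), so this illustrates only the travelling-salesman step, not the specific algorithms of the paper.

```python
import numpy as np

def angular_dist(p, q):
    """Great-circle angle between two (azimuth, polar) positions in radians."""
    (a1, t1), (a2, t2) = p, q
    cosg = np.sin(t1) * np.sin(t2) * np.cos(a1 - a2) + np.cos(t1) * np.cos(t2)
    return np.arccos(np.clip(cosg, -1.0, 1.0))

def tour_length(order, pts):
    return sum(angular_dist(pts[order[i]], pts[order[i + 1]])
               for i in range(len(order) - 1))

def nearest_neighbour(pts):
    """Greedy open tour starting from point 0."""
    unvisited = set(range(1, len(pts)))
    order = [0]
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: angular_dist(pts[last], pts[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def two_opt(order, pts):
    """Reverse segments while doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(order) - 1):
            for j in range(i + 1, len(order)):
                new = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
                if tour_length(new, pts) < tour_length(order, pts):
                    order, improved = new, True
    return order

rng = np.random.default_rng(2)
# Irregular grid on the upper hemisphere: (azimuth, polar angle)
pts = list(zip(rng.uniform(0, 2 * np.pi, 40), rng.uniform(0, np.pi / 2, 40)))
order = two_opt(nearest_neighbour(pts), pts)
print(f"optimized tour length: {tour_length(order, pts):.2f} rad")
```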
75.
The VLD (vive la différence) phasing algorithm combines the model electron density with the difference electron density via reciprocal-space relationships to obtain new phase values and drive them towards the correct ones. The process is iterative and has been applied to small- and medium-size structures and to proteins. Hybrid Fourier syntheses show properties intermediate between those of the observed synthesis (whose peaks should correspond to the most probable atomic positions) and those of the difference synthesis (whose positive and negative peaks should correspond to missed atomic positions and to false atoms of the model, respectively). Thanks to these properties, some hybrid syntheses can be used in the phase-extension and refinement step to reduce model bias and move more rapidly towards the target structure. They have recently been revisited via the method of joint probability distribution functions [Burla, Carrozzini, Cascarano, Giacovazzo & Polidori (2011). Acta Cryst. A67, 447–455]. The results suggested that, for ab initio phasing, VLD could usefully be combined with a hybrid rather than with the difference Fourier synthesis. This paper explores the feasibility of such a combination and shows that the original VLD algorithm is only one of several variants, all with relevant phasing capacity. The study explores the role of several parameters in order to design a standard procedure with optimized phasing power.
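For orientation, hybrid Fourier syntheses are commonly written as weighted combinations of observed and calculated structure-factor moduli, computed with the model phases. The form below is a generic sketch in the sigma-A style used in this literature (m is the figure of merit, D the scale factor, φ_c the model phase); the exact coefficients used by VLD follow the cited papers and may differ.

```latex
% Generic hybrid-synthesis coefficients (a sketch, not the exact
% VLD weights):
%   observed-type map:   (tau, omega) = (1, 0)
%   difference map:      (tau, omega) = (1, 1)
%   2Fo - Fc-type map:   (tau, omega) = (2, 1)
\[
  \rho_{\tau\omega}(\mathbf{r}) =
  \frac{1}{V} \sum_{\mathbf{h}}
  \bigl( \tau\, m\, |F_{o}(\mathbf{h})| - \omega\, D\, |F_{c}(\mathbf{h})| \bigr)
  \, e^{\,i\varphi_{c}(\mathbf{h})} \, e^{-2\pi i \mathbf{h}\cdot\mathbf{r}}
\]
```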
76.
The method of joint probability distribution functions has been applied to molecular replacement techniques. The rotational search is performed by rotating the reciprocal lattice of the protein with respect to the calculated transform of the model structure; the translational search is performed by fast Fourier transform. Several cases of prior information are studied, for both the rotation and the translation step: e.g. the conditional probability density for the rotation or translation of a monomer is derived both ab initio and when the rotation and/or translation values of other monomers are given. The new approach has been implemented in the program REMO09, which is part of the package for global phasing IL MILIONE [Burla, Caliandro, Camalli, Cascarano, De Caro, Giacovazzo, Polidori, Siliqi & Spagna (2007). J. Appl. Cryst. 40, 609–613]. A large set of test structures has been used to check the efficiency of the new algorithms, which proved to be highly robust in finding the correct solutions and in discriminating them from noise. An important design concept is the high degree of automation: REMO09 is often capable of providing a reliable model of the target structure without any user intervention.
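The translation step via fast Fourier transform can be illustrated generically: the correlation between the target data and the model, as a function of a trial shift, is computable all at once as an FFT cross-correlation. The toy sketch below does this for a 2D density map; it illustrates only the FFT trick, not the probabilistic scoring used in REMO09.

```python
import numpy as np

def translation_search(target, model):
    """Find the shift of `model` that best matches `target`.

    The correlation of the two maps over all cyclic shifts is
    computed at once as an inverse FFT of the product
    F(target) * conj(F(model)) -- the standard FFT trick behind
    translation functions.
    """
    corr = np.fft.ifftn(np.fft.fftn(target) * np.conj(np.fft.fftn(model))).real
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(3)
model = rng.random((64, 64))
target = np.roll(model, shift=(11, 23), axis=(0, 1))  # known displacement
print(translation_search(target, model))  # -> (11, 23)
```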
77.
The validity of the normal distribution as an error model is commonly tested with a (half-)normal probability plot. Real data often contain outliers. The use of t-distributions in a probability plot to model such data more realistically is described, and it is shown how a suitable value of the parameter ν of the t-distribution can be determined from the data. The results suggest that even data that seem to be modeled well by a normal distribution can be modeled better by a t-distribution.
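A minimal sketch of the idea with SciPy: estimate ν by maximum likelihood with `scipy.stats.t.fit`, then draw a probability plot against the fitted t-distribution via `scipy.stats.probplot`. This illustrates the general technique, not the specific estimator of the paper; the data are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Mostly normal residuals with a few heavy-tailed outliers
data = np.concatenate([rng.normal(0, 1, 200), rng.standard_t(2, 10) * 3])

# Maximum-likelihood estimate of the t-distribution parameters;
# df plays the role of the nu parameter in the abstract
df, loc, scale = stats.t.fit(data)
print(f"fitted nu = {df:.1f}")

# Probability plot against the fitted t-distribution; r measures
# how straight the plot is (closer to 1 = better model)
(osm, osr), (slope, intercept, r) = stats.probplot(data, dist=stats.t, sparams=(df,))
print(f"correlation of t-probability plot: r = {r:.4f}")

# Compare with an ordinary normal probability plot
(_, _), (_, _, r_norm) = stats.probplot(data, dist="norm")
print(f"correlation of normal probability plot: r = {r_norm:.4f}")
```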
78.
This study proposes an improved physical model to predict sand deposition at high temperature in gas turbine components. The model differs from its predecessor (Sreedharan and Tafti, 2011) by improving the sticking probability: it accounts for the energy losses during particle–wall collision, following our previous work (Singh and Tafti, 2013). It predicts the probability of sticking from the critical viscosity approach together with the collision losses, and is novel in that the sticking probability depends on the impact velocity as well as the particle temperature. To test the model, deposition from a sand-laden jet impinging on a flat coupon geometry is computed, and the results of the numerical model are compared with experiments (Delimont et al., 2014) conducted at Virginia Tech on a similar geometry and flow conditions, for jet temperatures of 950 °C, 1000 °C and 1050 °C. Large-eddy simulation (LES) is used to model the flow field and heat transfer, and the sand particles are modeled in a discrete Lagrangian framework. Results quantify impingement and deposition for 20–40 μm sand particles. The stagnation region of the target coupon experiences most of the impingement and deposition. For the 950 °C jet, around 5% of the particles impacting the coupon deposit, while the deposition for 1000 °C and 1050 °C is 17% and 28%, respectively. In general, the sticking efficiencies calculated from the model agree well with the experiments over the temperature range considered.
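A sketch of a critical-viscosity sticking model of the type described: the sticking probability scales with the ratio of a critical (reference) viscosity to the temperature-dependent particle viscosity, cut off when the particle retains enough rebound energy to escape. The viscosity law, the constants, and the simple velocity cutoff below are placeholders, not the paper's calibrated model.

```python
import numpy as np

def viscosity(T_K, A=-7.0, B=26000.0):
    """Placeholder Arrhenius-type viscosity law for molten sand,
    log10(mu [Pa s]) = A + B / T.  A and B are made-up constants."""
    return 10.0 ** (A + B / T_K)

def sticking_probability(T_K, v_impact, T_crit=1320.0, v_crit=10.0):
    """Critical-viscosity sticking probability with a simple
    velocity cutoff (both thresholds are illustrative):

    - viscosity part: P = mu(T_crit) / mu(T), capped at 1, so hotter
      (less viscous) particles stick more readily;
    - collision part: particles faster than v_crit [m/s] are assumed
      to retain enough rebound energy after collision losses to escape.
    """
    p_visc = np.minimum(viscosity(T_crit) / viscosity(T_K), 1.0)
    return np.where(v_impact < v_crit, p_visc, 0.0)

T = np.array([950.0, 1000.0, 1050.0]) + 273.15  # jet temperatures in K
print(sticking_probability(T, v_impact=5.0))
```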
79.
Problems from plastic limit load or shakedown analysis and optimal plastic design are based on the convex yield criterion and the linear equilibrium equation for the generic stress (state) vector σ. Since in practice one has to take into account stochastic variations of the vector y = y(ω) of model parameters (e.g. yield stresses, external loadings, cost coefficients), the basic stochastic plastic analysis or optimal plastic design problem must be replaced by an appropriate deterministic substitute problem in order to obtain robust optimal designs/load factors. For this purpose, the existence of a statically admissible (safe) stress state vector is first described by means of an explicit scalar state function s* = s*(y, x) depending on the parameter vector y and the design vector x. The state or performance function s*(y, x) is defined as the minimum value function of a convex or linear program based on the basic safety conditions of plasticity theory: a safe (stress) state exists if and only if s* < 0, and a safe stress state cannot be guaranteed if and only if s* ≥ 0. Hence, the probability of survival can be represented by p_s = P(s*(y(ω), x) < 0). Using FORM, the probability of survival is then approximated by the well-known formula p_s ≈ Φ(‖z*_x‖), where ‖z*_x‖ denotes the length of a so-called β-point z*_x, i.e. the projection of the origin 0 onto the failure domain, transformed to the space of normally distributed model parameters z(ω) = T(y(ω)), and Φ = Φ(t) denotes the distribution function of the standard N(0, 1) normal distribution. Thus, the basic reliability condition, used e.g. in reliability-based optimal plastic design or in limit load analysis problems, reads $$\|z_x^{\ast}\| \ge \Phi^{-1}(\alpha_s)$$ with a prescribed minimum probability α_s. While in general the computation of the projection z*_x is very difficult, in the present case of elastoplastic structures it can be done very efficiently by means of the state function s* = s*(y, x): using the available necessary and sufficient optimality conditions of the convex or linear optimization problem representing s*(y, x), an explicit parameter optimization problem can be derived for the computation of a design point z*_x. Simplifications are obtained in the standard case of piecewise linearization of the yield surfaces. In addition, several response surface methods, including the standard one, are applied to compute a β-point, in order to reduce the computational time and to obtain more accurate results than with first-order approximation methods, by using the obtained response surface function with any simulation method such as Monte Carlo simulation. For problems with a polygon-type limit state function, however, the standard response surface methods cannot approximate the limit state function well enough; a response surface method based on piecewise regression has therefore been developed for such problems. Applications of the methods to several types of structures are presented in the examples given in this paper.
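For illustration, the sketch below computes a β-point with the classic Hasofer–Lind–Rackwitz–Fiessler (HL-RF) iteration for a toy linear limit state in standard normal space; the FORM survival probability is then Φ(‖z*‖). This generic routine stands in for the specialized, optimality-condition-based computation described above; the limit state function is hypothetical.

```python
import numpy as np
from scipy import stats

def hlrf(g, grad_g, z0, tol=1e-8, max_iter=100):
    """Hasofer-Lind-Rackwitz-Fiessler iteration: find the point of the
    limit state surface g(z) = 0 closest to the origin in standard
    normal space (the beta-point)."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        grad = grad_g(z)
        # Project onto the linearized limit state surface
        z_new = (grad @ z - g(z)) * grad / (grad @ grad)
        if np.linalg.norm(z_new - z) < tol:
            z = z_new
            break
        z = z_new
    return z

# Toy limit state: failure when 3 - z1 - 2*z2 <= 0 (safe domain g > 0)
g = lambda z: 3.0 - z[0] - 2.0 * z[1]
grad_g = lambda z: np.array([-1.0, -2.0])

z_star = hlrf(g, grad_g, z0=[0.1, 0.1])
beta = np.linalg.norm(z_star)      # distance to the origin = 3/sqrt(5)
print(f"beta-point: {z_star}, beta = {beta:.4f}")
print(f"FORM survival probability: {stats.norm.cdf(beta):.6f}")
```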
80.
Building on the study of moment-independent importance measures of the effect of basic variables on the response distribution, an importance measure of the basic variables with respect to the failure probability is defined. This measure directly and comprehensively quantifies how much each basic variable affects structural safety, and is therefore of greater value as engineering guidance. The paper analyzes the basic properties of the defined importance measure and, since saddlepoint line sampling is not restricted by the distribution form of the random variables and offers high efficiency and accuracy in reliability analysis, proposes a saddlepoint line sampling method for computing this importance measure.
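As a generic illustration of a failure-probability importance measure (the saddlepoint line sampling estimator of the paper is far more efficient), the double-loop Monte Carlo sketch below estimates, for each basic variable X_i, the mean absolute shift |P_f - P_f|X_i| that fixing X_i induces in the failure probability. The distributions and the limit state function are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

def g(x):
    """Hypothetical limit state: failure when g(x) <= 0."""
    return 7.0 - x[..., 0] - 2.0 * x[..., 1] + 0.5 * x[..., 2]

def sample(n):
    """Three independent standard normal basic variables."""
    return rng.normal(size=(n, 3))

def failure_prob(x):
    return np.mean(g(x) <= 0.0)

# Unconditional failure probability
pf = failure_prob(sample(200_000))

# Importance of X_i: mean absolute shift of the failure probability
# when X_i is fixed at a random realization (double-loop estimator)
n_outer, n_inner = 200, 20_000
for i in range(3):
    shifts = []
    for _ in range(n_outer):
        x = sample(n_inner)
        x[:, i] = rng.normal()  # fix X_i at one realization
        shifts.append(abs(failure_prob(x) - pf))
    print(f"importance of X_{i+1}: {np.mean(shifts):.4f}")
print(f"P_f = {pf:.4f}")
```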