Search results: 953 records in total (items 21–30 shown below).
21.
This paper presents a theoretical analysis of how the deviation of the diffraction intensity ratio, Δ(I/I∞), and the deviation of the diffraction peak position, Δ2θ, affect the error in measuring the thickness of a single surface thin film with a Seemann-Bohlin quasi-focusing X-ray diffractometer. The analysis shows that reducing Δ(I/I∞) improves the precision of the thickness measurement. For a given Δ(I/I∞), choosing the target radiation and the diffracting crystal plane so that μρt[1/sin γ + 1/sin(2θ − γ)] = 1 minimizes the thickness error caused by Δ(I/I∞). Selecting high-angle diffraction lines helps to reduce the peak-position deviation caused by specimen defocusing and also lowers the thickness error arising from errors in the measured diffraction angle; when the diffraction line lies along the film normal, i.e. 2θ = γ + π/2, the angular error term (Δt/t)_{2θ} vanishes completely.
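A minimal sketch of where this condition comes from, assuming the standard single-film attenuation relation for Seemann-Bohlin geometry (our reconstruction; the abstract itself only states the result). Writing R = I/I∞ and A = 1/sin γ + 1/sin(2θ − γ),

\[
R = 1 - e^{-\mu\rho t A}, \qquad
t = -\frac{\ln(1-R)}{\mu\rho A}, \qquad
\frac{\Delta t}{t} = \frac{\Delta R}{(1-R)\,\lvert \ln(1-R) \rvert}.
\]

The denominator (1 − R)|ln(1 − R)| is largest at 1 − R = e⁻¹, i.e. exactly at μρtA = 1, which reproduces the minimum-error condition quoted above.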
22.
The processing and error analysis of signals in flow-injection systems were studied systematically by simulation and by experimental measurement. The work covers an error analysis for peak-height and peak-area signals, a least-squares filtering procedure applied to the flow-injection curve, and a peak-recognition step to remove interference from air bubbles. Simulation results were obtained by statistical processing of peak-height and peak-area values from Gaussian curves to which noise had been added. The experimental measurements were made with an automatic flow-injection device so that detailed information could be obtained for each individual point of a peak. 2-(2-Arsenophenylazo)-7-(2,6-dichlorophenylazo-4-sulphonic acid)-1,8-dihydroxynaphthalene-3,6-disulphonic acid (DCSA) was used to measure physical dispersion alone, and Fe(II)-o-phenanthroline to measure both physical dispersion and chemical reaction. The results from the computer simulations and the experiments agreed well.
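A rough illustration of the kind of simulation described (a sketch under our own assumptions, not the authors' code): generate a Gaussian flow-injection peak, add detector noise, smooth the record with a least-squares (Savitzky-Golay) filter, and compare the peak-height and peak-area estimates with the noise-free values.

import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)

# Noise-free Gaussian flow-injection peak (arbitrary units).
t = np.linspace(0, 60, 601)                       # time axis, s
peak = 1.0 * np.exp(-0.5 * ((t - 25) / 4.0) ** 2)

# Add white noise to mimic detector noise.
noisy = peak + rng.normal(scale=0.02, size=t.size)

# Least-squares (Savitzky-Golay) smoothing of the noisy curve.
smoothed = savgol_filter(noisy, window_length=21, polyorder=3)

def height_and_area(y, x):
    """Return peak height and trapezoidal peak area."""
    return y.max(), np.trapz(y, x)

for label, y in [("true", peak), ("noisy", noisy), ("filtered", smoothed)]:
    h, a = height_and_area(y, t)
    print(f"{label:9s} height={h:.4f}  area={a:.3f}")

With this set-up the peak-area estimate is noticeably less sensitive to the added noise than the peak-height estimate, which is the kind of comparison the error analysis above is concerned with.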
23.
Historically, owing to the size and nature of the instrumentation, clinical testing was performed by highly skilled laboratory professionals in centralized laboratories. Today's clinicians demand real-time test data at the point of care. This has led to a new generation of compact, portable instruments that allow "laboratory" testing to be performed at or near the patient's bedside by non-laboratory workers who are unfamiliar with testing practices. Poorly controlled testing processes leading to poor-quality test results are an insidious problem facing point-of-care testing today. Manufacturers are addressing this issue through instrument design. Providers of clinical test results, regardless of location, must work with manufacturers and regulators to create and manage complete test systems that eliminate or minimize sources of error. The National Committee for Clinical Laboratory Standards (NCCLS), in its EP18 guideline "Quality management for unit-use testing", has developed a quality management system approach specifically for test devices used in point-of-care testing (POCT). Simply stated, EP18 uses a "sources of error" matrix to identify and address potential errors that can affect the test result. The key is the quality systems approach, in which all stakeholders (professionals, manufacturers and regulators) collaboratively seek ways to manage errors and ensure quality. We illustrate the use of one quality systems approach, EP18, as a means to advance the quality of test results at the point of care.
Received: 26 June 2002. Accepted: 17 July 2002. Presented at the European Conference on Quality in the Spotlight in Medical Laboratories, 7–9 October 2001, Antwerp, Belgium.
Abbreviations: NCCLS, National Committee for Clinical Laboratory Standards (former name); POCT, point-of-care testing; QC, quality control; HACCP, hazard analysis critical control points; CLIA, Clinical Laboratory Improvement Amendments (of 1988).
Correspondence to S. S. Ehrmeyer.
24.
The “Guide to the expression of uncertainty in measurement” (GUM) is an extremely important document. It unifies methods for calculating measurement uncertainty and enables the consistent interpretation and comparison of measurement results, regardless of who obtained them and where they were obtained. Since the document was published in 1995, it has become clear that its recommendations do not properly address an important class of measurements, namely non-linear indirect measurements. This drawback prompted the revision of the GUM within Working Group 1 of the Joint Committee for Guides in Metrology, which commenced in October 2006. The upcoming revision provides the metrological community with an opportunity to improve this important document, in particular to reflect developments in metrology that have occurred since the first GUM publication in 1995. A discussion of the directions for this revision is therefore important and timely. By identifying several shortcomings of the GUM and proposing directions for its improvement, we hope this article will contribute to that discussion.
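To make the non-linearity issue concrete, here is a small sketch (our own example, not taken from the article) comparing the GUM first-order law of propagation of uncertainty with a Monte Carlo propagation of distributions, in the spirit of GUM Supplement 1, for the non-linear model y = x².

import numpy as np

rng = np.random.default_rng(1)

# Non-linear measurement model y = f(x) = x**2 with x ~ N(mu, u_x**2).
mu, u_x = 0.5, 0.2

# GUM first-order (linearized) propagation: u(y) ~= |df/dx| * u(x).
y_lin = mu**2
u_lin = abs(2 * mu) * u_x

# Monte Carlo propagation of distributions (spirit of GUM Supplement 1).
x = rng.normal(mu, u_x, size=1_000_000)
y = x**2
y_mc, u_mc = y.mean(), y.std(ddof=1)

print(f"linearized : y = {y_lin:.4f}, u(y) = {u_lin:.4f}")
print(f"Monte Carlo: y = {y_mc:.4f}, u(y) = {u_mc:.4f}")
# The two sets of estimates differ because f is curved over the spread
# of x, which is the class of cases (non-linear indirect measurements)
# the abstract refers to.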
25.
Dr. Shanahan has published two papers (Thermochim. Acta 428 (2005) 207, Thermochim. Acta 382 (2002) 95) in which he argues that excess heat claimed to be produced by cold fusion is actually caused by errors in heat measurement. In particular, he proposes that unrecognized changes in the calibration constant are produced by changes in the locations where heat is being generated within the electrolytic cell over the duration of the measurement. Because these papers may lend unwarranted support to rejection of cold fusion claims, these erroneous arguments used by Shanahan need to be answered.
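A generic, hedged illustration (ours, not drawn from either side of this exchange) of how a calibration-constant shift can masquerade as excess heat. Suppose the cell is driven with input power P_in, the steady-state temperature rise obeys ΔT = P_in / k_true, and the experimenter computes output power with a previously determined constant k_cal:

\[
P_{\mathrm{xs,app}} = k_{\mathrm{cal}}\,\Delta T - P_{\mathrm{in}}
                    = \left(\frac{k_{\mathrm{cal}}}{k_{\mathrm{true}}} - 1\right) P_{\mathrm{in}}.
\]

On this toy model, a 2% mismatch between the constant used and the constant that actually applies during the run (for example because the heat source has moved relative to the sensor) turns a 10 W input into roughly 0.2 W of apparent "excess" power, even though no excess heat is present.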
26.
The longitudinal motions and vertical accelerations of a floating torus, as well as the wave motion inside the torus, are studied by model tests in regular deep-water waves. Comparisons are made with linear and, in part, second-order potential-flow theory for the smallest wave height-to-wavelength ratio examined, 1/120. Reasonable agreement is obtained, in particular for the linear problem. The importance of 3D flow, hydroelasticity and strong hydrodynamic frequency dependence is documented. Experimental precision errors and bias errors, for instance due to tank-wall interference, are discussed. Numerical errors due to viscous effects are found to be secondary. The experiments show that the third and fourth harmonic accelerations of the torus matter and cannot be explained by a perturbation method with the wave steepness as a small parameter.
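As a sketch of how harmonic components such as those mentioned above are typically extracted from a measured acceleration record (our illustration; the abstract does not describe the paper's actual post-processing), a synthetic signal with a fundamental at the wave frequency and weak higher harmonics is analysed with an FFT and the amplitude is read off at each multiple of the wave frequency.

import numpy as np

fs = 200.0                      # sampling rate, Hz (assumed)
f0 = 0.8                        # wave (fundamental) frequency, Hz (assumed)
t = np.arange(0, 100, 1 / fs)   # 100 s record

# Synthetic "measured" acceleration: fundamental plus weak higher harmonics.
rng = np.random.default_rng(2)
acc = (1.00 * np.sin(2 * np.pi * f0 * t)
       + 0.10 * np.sin(2 * np.pi * 2 * f0 * t)
       + 0.05 * np.sin(2 * np.pi * 3 * f0 * t)
       + 0.02 * np.sin(2 * np.pi * 4 * f0 * t)
       + rng.normal(scale=0.02, size=t.size))

# One-sided FFT amplitude spectrum.
spec = np.fft.rfft(acc)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
amp = 2 * np.abs(spec) / t.size

# Read off the amplitude of the n-th harmonic at n * f0.
for n in range(1, 5):
    k = np.argmin(np.abs(freqs - n * f0))
    print(f"harmonic {n}: f = {freqs[k]:.2f} Hz, amplitude ~ {amp[k]:.3f}")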
27.
Ball convergence results are important because they quantify the difficulty of choosing initial points for iterative methods. One of the central problems in the study of iterative methods is to determine the convergence ball, which is in general small and therefore restricts the choice of initial points. We address this problem for Wang's method, used to determine a zero of a derivative; finding such a zero has many applications in computational fields, especially in function optimization. In particular, we find the convergence ball of Wang's method using hypotheses only up to the second derivative, in contrast to earlier studies that use hypotheses up to the fourth derivative. In this way we also extend the applicability of Wang's method. Numerical experiments that test the convergence criteria complete the study.
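A rough way to see what a convergence ball means in practice (a sketch that uses Newton's method applied to f' as a stand-in, since the abstract does not state Wang's iteration): sample starting points at increasing distance from a known critical point x* and record the largest radius within which every trial still converges back to x*.

import numpy as np

def fp(x):   # f'(x) for f(x) = x**4 - 3*x**2 + x; its zeros are critical points of f
    return 4 * x**3 - 6 * x + 1

def fpp(x):  # f''(x)
    return 12 * x**2 - 6

def newton_on_derivative(x0, tol=1e-12, maxit=50):
    """Newton iteration for f'(x) = 0; returns the limit, or None if it fails."""
    x = x0
    for _ in range(maxit):
        step = fp(x) / fpp(x)
        x -= step
        if abs(step) < tol:
            return x
    return None

x_star = newton_on_derivative(1.1)          # critical point near x = 1.1
radius = 0.0
for r in np.linspace(0.01, 1.0, 100):       # probe increasing radii
    trials = x_star + r * np.array([-1.0, -0.5, 0.5, 1.0])
    ok = all((z := newton_on_derivative(x0)) is not None
             and abs(z - x_star) < 1e-8 for x0 in trials)
    if not ok:
        break
    radius = r

print(f"critical point x* = {x_star:.6f}, empirical convergence radius ~ {radius:.2f}")

The empirically measured radius is noticeably smaller than the distance to the neighbouring critical points, which is the phenomenon that ball convergence analyses such as the one above make precise.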
28.
29.
This numerical study provides an error analysis of an idealized nanopore sequencing method in which ionic current measurements are used to sequence intact single-stranded DNA in the pore, while an enzyme controls DNA motion. Examples of systematic channel errors when more than one nucleotide affects the current amplitude are detailed, which if present will persist regardless of coverage. Absent such errors, random errors associated with tracking through homopolymer regions are shown to necessitate reading known sequences (Escherichia coli K-12) at least 140 times to achieve 99.99% accuracy (Q40). By exploiting the ability to reread each strand at each pore in an array, arbitrary positioning on an error rate versus throughput tradeoff curve is possible if systematic errors are absent, with throughput governed by the number of pores in the array and the enzyme turnover rate.
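As a toy illustration of the coverage versus accuracy tradeoff (a simplified per-base majority-vote model of our own, not the tracking model analysed in the study): with a per-read, per-base error probability p, a consensus over N independent reads is wrong only if at least half of the reads err.

from math import comb

def consensus_error(p, n):
    """Probability that a per-base majority vote over n independent reads
    is wrong, given per-read error probability p (ties count as errors)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

p = 0.10                       # assumed per-read, per-base error rate
for n in (1, 15, 45, 140):     # coverage levels
    q = consensus_error(p, n)
    print(f"coverage {n:3d}: consensus error probability ~ {q:.2e}")

Under this idealized picture a 10% independent per-base error shrinks far faster with coverage than the roughly 140 reads quoted above, which underlines that the homopolymer-tracking errors analysed in the study are considerably harder to average away than simple independent miscalls.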
30.
In practical applications, information about the accuracy or ‘fidelity’ of alternative surrogate systems may be ambiguous and difficult to determine. To address this problem, we propose treating the surrogate-system fidelity level as a categorical factor in optimal response surface design. To design the associated experiments, we apply the Expected Integrated Mean Squared Error (EIMSE) optimal design criterion, which takes both variance and bias errors into account. The performance of the proposed design was compared, on three test cases, with four types of alternative designs using the Empirical Integrated Squared Error. Because of its ability to foster relatively accurate predictions, the proposed design is recommended for fidelity experimental design, particularly when experimenters lack sufficient information about the fidelity levels of the surrogate systems. The method was applied to an intraday trading optimization problem using data collected from the Taiwan Futures Exchange. We also calculated the implied volatility from Merton's jump-diffusion model via the fast Fourier transform algorithm, with three models of varying fidelity levels.
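For reference, the bias-plus-variance structure behind the EIMSE criterion mentioned above (the standard decomposition; the paper's exact weighting over the unknown bias terms is not given in the abstract):

\[
\mathrm{EIMSE}
= \int_{\mathcal{X}} \mathbb{E}\!\left[\big(\hat{y}(\mathbf{x}) - y(\mathbf{x})\big)^2\right] d\mathbf{x}
= \int_{\mathcal{X}} \mathrm{Var}\!\left[\hat{y}(\mathbf{x})\right] d\mathbf{x}
+ \int_{\mathcal{X}} \mathrm{Bias}\!\left[\hat{y}(\mathbf{x})\right]^{2} d\mathbf{x},
\]

so the design trades prediction variance against the bias that arises when the fitted response surface, including the categorical fidelity factor, cannot represent the true response exactly.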