1.
The Huber criterion for data fitting is a combination of the l1 and the l2 criteria, and it is robust in the sense that the influence of wild data points can be reduced. We present a trust-region algorithm and a Marquardt algorithm for Huber estimation in the case where the functions used in the fit are non-linear. It is demonstrated that the algorithms converge under the usual conditions.
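The l1/l2 combination behind the Huber criterion can be sketched as follows (a minimal illustration, not the paper's implementation; the function name and the threshold parameter `delta` are assumptions):

```python
import numpy as np

def huber_loss(r, delta=1.0):
    """Huber criterion for a residual r: quadratic (l2-like) for
    |r| <= delta, linear (l1-like) beyond, so wild data points
    contribute only linearly and their influence is bounded."""
    a = np.abs(r)
    return np.where(a <= delta,
                    0.5 * a**2,                 # l2 region
                    delta * (a - 0.5 * delta))  # l1 region
```

The two branches join with matching value and slope at |r| = delta, which keeps the criterion smooth enough for trust-region or Marquardt iterations.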
2.
In scientific research laboratories it is rarely possible to use quality assurance schemes developed for large-scale analysis. Instead, methods have been developed to control the quality of modest numbers of analytical results by relying on statistical control. Analysis of precision serves to detect analytical errors by comparing the a priori precision of the analytical results with the actual variability observed among replicates or duplicates; the method relies on the chi-square distribution to detect excess variability and is quite sensitive even for 5–10 results. Interference control serves to detect analytical bias by comparing results obtained by two different analytical methods, each relying on a different detection principle and therefore exhibiting different influence from matrix elements; only 5–10 sets of results are required to establish whether a regression line passes through the origin. Calibration control is an essential link in the traceability of results: only one or two samples of pure solid or aqueous standards with accurately known content need to be analyzed. Verification is carried out by analyzing certified reference materials from BCR, NIST, or others; their limited accuracy of 5–10% makes them less suitable for calibration purposes.
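The analysis-of-precision idea (comparing observed duplicate variability with the a priori precision via the chi-square distribution) can be sketched as follows; the function name, argument names, and significance level are illustrative assumptions, and SciPy supplies the chi-square quantile:

```python
from scipy.stats import chi2

def excess_variability(duplicate_diffs, sigma_a_priori, alpha=0.05):
    """For n duplicate pairs with differences d_i, the statistic
    T = sum(d_i**2) / (2 * sigma**2) follows a chi-square
    distribution with n degrees of freedom when the a priori
    standard deviation sigma describes the actual precision.
    Returns (T, flag); flag is True when T exceeds the upper
    alpha critical value, i.e. excess variability is detected."""
    n = len(duplicate_diffs)
    T = sum(d**2 for d in duplicate_diffs) / (2 * sigma_a_priori**2)
    return T, T > chi2.ppf(1 - alpha, df=n)
```

Even with only 5–10 duplicate pairs the test has useful power, which matches the abstract's claim about small numbers of results.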
3.
In this study we have investigated whether micro-solution isoelectric focusing (microsol-IEF) can be used as a pre-fractionation step prior to liquid chromatography/tandem mass spectrometry (LC/MS/MS) and whether extensive sample purification of the different fractions is required. We found that, in spite of the high concentrations of buffer and detergents, no clean-up of the digested microsol-IEF fractions was necessary before analysis by LC/MS/MS. We also concluded that it is possible to identify at least twice as many proteins in a glioma cell lysate with the combination of microsol-IEF and LC/MS/MS as with LC/MS/MS alone. Furthermore, most of the proteins that were identified from one microsol-IEF fraction by using analytical narrow-range two-dimensional polyacrylamide gel electrophoresis (2D-PAGE) and peptide mass fingerprinting with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS) were also identified by LC/MS/MS. Finally, we used the combination of microsol-IEF and LC/MS/MS to compare two sample preparation methods for glioma cells and found that several nuclear, mitochondrial, and endoplasmic reticulum proteins were only present in the sample that had been subjected to lipid extraction by incubating the homogenized cells in chloroform/methanol/water.
4.
The result of a measurement refers in principle only to the amount of substance actually contributing to the analytical signal. However, an appropriate definition of the measurand must include a specification of the system for which the result of the measurement should apply. All systems being inherently heterogeneous, representativity assumes importance for the metrological quality of a measurement, and the process needed to ascertain representativity is sampling. The contribution from this characteristic must be included when expressing the uncertainty of the reported value of the measurand. Representative sampling of systems that are infinite or non-uniform was developed by Pierre Gy in his Theory of Sampling. Finite systems can achieve uniformity by mechanical treatment and mixing; the heterogeneity of these systems can be characterized by a sampling constant, expressed in units of weight, for each particular species being determined. Examples of the contribution of sampling to the uncertainty of analytical results are discussed for some biological materials. Presented at the 2nd International Conference on Metrology – Trends and Applications in Calibration and Testing Laboratories, November 4–6, 2003, Eilat, Israel.
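The sampling-constant idea can be illustrated with Ingamells' formulation (an assumption here; the abstract does not spell out which formulation is used), where the sampling constant K_s, in units of mass, is the sample mass for which the relative subsampling standard deviation is 1%:

```python
def sampling_rsd(K_s_grams, sample_mass_grams):
    """Relative standard deviation (%) of subsampling for a
    well-mixed material, from the sampling-constant relation
    R**2 * m = K_s (Ingamells), i.e. R = sqrt(K_s / m)."""
    return (K_s_grams / sample_mass_grams) ** 0.5
```

For example, a material with K_s = 25 g analysed on a 1 g test portion carries a 5% relative sampling uncertainty, which must be propagated into the uncertainty of the reported value of the measurand.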
5.
The limitation of current dissociative fluorescence enhancement techniques is that the lanthanide chelate structures used as molecular probes are not stable enough in one-step assays with high concentrations of complexones or metal ions in the reaction mixture, since these substances interfere with the lanthanide chelate conjugated to the detector molecule. Lanthanide chelates of diethylenetriaminepentaacetic acid (DTPA) are extremely stable, and we used EuDTPA derivatives conjugated to antibodies as tracers in one-step immunoassays containing high concentrations of complexones or metal ions. Enhancement solutions based on different β-diketones were developed and tested for their fluorescence-enhancing capability in immunoassays with EuDTPA-labelled antibodies. The characteristics tested were fluorescence intensity, analytical sensitivity, kinetics of complex formation and signal stability. Formation of fluorescent complexes in the presented enhancement solution is fast (5 min), with the EuDTPA probes withstanding strong complexones (ethylenediaminetetraacetate (EDTA) up to 100 mM) or metal ions (up to 200 μM) in the reaction mixture; the signal is intense and stable for 4 h, and the analytical sensitivity is 40 fmol/L for Eu, 130 fmol/L for Tb, 2.1 pmol/L for Sm and 8.5 pmol/L for Dy. With the improved fluorescence enhancement technique, EDTA and citrate plasma samples, as well as samples containing relatively high concentrations of metal ions, can be analysed in a one-step immunoassay format, also at elevated temperatures. The technique facilitates four-plexing, is based on one chelate structure for detector-molecule labelling, and is suitable for immunoassays due to its wide dynamic range and analytical sensitivity.
6.
We consider a magnetic impurity in two different S=1/2 Heisenberg bilayer antiferromagnets at their respective critical interlayer couplings separating Néel and disordered ground states. We calculate the impurity susceptibility using a quantum Monte Carlo method. With intralayer couplings in only one of the layers (Kondo lattice), we observe an anomalous Curie constant C*, as predicted on the basis of field-theoretical work [S. Sachdev, Science 286, 2479 (1999); doi:10.1126/science.286.5449.2479]. The value C* = 0.262 ± 0.002 is larger than the normal Curie constant C = S(S+1)/3. Our low-temperature results for a symmetric bilayer are consistent with a universal C*.
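The comparison above can be checked directly: for S = 1/2 the normal Curie constant C = S(S+1)/3 equals 0.25, below the reported anomalous value (a small sanity check, not from the paper):

```python
def curie_constant(S):
    """Normal Curie constant C = S*(S+1)/3 for spin S."""
    return S * (S + 1) / 3

C = curie_constant(0.5)  # S = 1/2 gives C = 0.25
C_star = 0.262           # anomalous value reported for the Kondo-lattice bilayer
anomalous = C_star > C   # True: the impurity response exceeds a free spin's
```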
7.
Well-known extensions of the classical transportation problem are obtained by including fixed costs for the production of goods at the supply points (facility location) and/or by introducing stochastic demand, modeled by convex nonlinear costs, at the demand points (the stochastic transportation problem, [STP]). However, the simultaneous use of concave and convex costs is not very well treated in the literature. Economies of scale often yield concave cost functions other than fixed charges, so in this paper we consider a problem with general concave costs at the supply points, as well as convex costs at the demand points. The objective function can then be represented as the difference of two convex functions, and is therefore called a d.c. function. We propose a solution method which reduces the problem to a d.c. optimization problem in a much smaller space, then solves the latter by a branch-and-bound procedure in which bounding is based on solving subproblems of the form of [STP]. We prove convergence of the method and report computational tests indicating that quite large problems can be solved efficiently; problems up to the size of 100 supply points and 500 demand points are solved. Received October 11, 1993 / Revised version received July 31, 1995 / Published online November 24, 1998
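The d.c. structure described above can be made concrete with a one-variable sketch (illustrative cost parameters, not from the paper): a concave supply cost plus a convex demand cost is the difference of two convex functions.

```python
import math

def dc_objective(x, a=2.0, b=0.5):
    """Total cost with concave supply cost a*sqrt(x) and convex
    demand cost b*x**2, written as a d.c. function f1 - f2 with
    f1(x) = b*x**2 and f2(x) = -a*sqrt(x), both convex."""
    f1 = b * x**2            # convex part (demand side)
    f2 = -a * math.sqrt(x)   # convex, since -sqrt is convex on x >= 0
    return f1 - f2           # equals b*x**2 + a*sqrt(x)
```

Bounding in a d.c. branch-and-bound scheme exploits exactly this split: over each subinterval the concave supply cost can be replaced by its chord, which lies below it and yields a convex relaxation, hence a valid lower bound.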
8.
In this paper we continue the study in Lewis and Nyström (2010) [19], concerning the regularity of the free boundary in a general two-phase free boundary problem for the p-Laplace operator, by proving regularity of the free boundary assuming that the free boundary is close to a Lipschitz graph.
9.
On the convergence of cross decomposition
Cross decomposition is a recent method for mixed integer programming problems, exploiting simultaneously both the primal and the dual structure of the problem, thus combining the advantages of Dantzig–Wolfe decomposition and Benders decomposition. Finite convergence of the algorithm equipped with some simple convergence tests has been proved. Stronger convergence tests have been proposed, but not shown to yield finite convergence. In this paper cross decomposition is generalized and applied to linear programming problems, mixed integer programming problems and nonlinear programming problems (with and without linear parts). Using the stronger convergence tests, finite exact convergence is shown in the first cases. Unbounded cases are discussed and also included in the convergence tests. The behaviour of the algorithm when parts of the constraint matrix are zero is also discussed. The cross decomposition procedure is generalized (by using generalized Benders decomposition) in order to enable the solution of nonlinear programming problems.
10.
We investigate non-standard Hamiltonian effects on neutrino oscillations, which are effective additional contributions to the vacuum or matter Hamiltonian. Since these effects can enter in either the flavor or the mass basis, we develop an understanding of the difference between these bases in terms of the underlying theoretical model. In particular, the simplest of these effects are classified as "pure" flavor or mass effects, where the appearance of such a "pure" effect can be quite plausible as the leading non-standard contribution from a theoretical model. Compared to earlier studies investigating particular effects, we aim for a top–down classification of a possible "new physics" signature at future long-baseline neutrino oscillation precision experiments. We develop a general framework for such effects with two neutrino flavors, discuss the extension to three neutrino flavors, and demonstrate with a numerical example the challenges a neutrino factory faces in distinguishing the theoretical origin of these effects. We find that the precision measurements of neutrino oscillation parameters can be altered by non-standard effects alone (not including non-standard interactions in the creation and detection processes), and that non-standard effects at the Hamiltonian level can be distinguished from other non-standard effects (such as neutrino decoherence and decay) through their specific imprint on the energy spectra of several different oscillation channels at a neutrino factory.