Similar Articles
20 similar articles retrieved.
1.
Analytical results are often obtained from reflectance or fluorescence measurements in TLC or HPTLC with the aid of calibration lines. Curve fitting cannot be done by conventional least squares methods, because both the independent and the dependent variables are subject to error: errors occur in the volumes spotted onto the plate and in the reflectance/fluorescence measurements. Therefore the sum of the squared distances to the calibration line has to be minimized instead of the sum of the squared errors of the dependent variable. The algorithms used are derived and explained. Implications of error propagation for the analytical result are also given.
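For a straight calibration line, minimizing perpendicular (orthogonal) distances rather than vertical residuals has a closed-form solution. The sketch below is a minimal Python illustration of that idea, not the authors' exact algorithm; the function name and the example data are invented for illustration.

```python
from math import sqrt

def orthogonal_fit(x, y):
    """Fit y = a + b*x by minimizing the sum of squared perpendicular
    distances to the line (errors assumed comparable in both variables)."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    sxx = sum((xi - xm) ** 2 for xi in x)
    syy = sum((yi - ym) ** 2 for yi in y)
    sxy = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
    # Closed-form slope of the orthogonal (total) least squares line
    b = (syy - sxx + sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    a = ym - b * xm
    return a, b
```

For a hypothetical calibration, `orthogonal_fit([0, 1, 2, 3], [1, 3, 5, 7])` recovers intercept 1 and slope 2; with noise in both variables the orthogonal slope generally differs from the ordinary least squares slope.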

2.
Error propagation is analysed for routine analysis by in situ evaluation in TLC using internal- or external-standard methods. The internal standard method needs four different measurements of peak height or area, while the external standard method uses only two. In the latter case, therefore, the error of the spotting volume causes errors in the determined concentration. Error propagation shows that the internal standard method gives better results if the error in measuring peak height or area is smaller than the error of the spotting volume.
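The comparison can be sketched with a simplified propagation model: assume independent relative errors, two peak measurements plus two spotting-volume errors for the external standard, and four peak measurements (with the volume error cancelling) for the internal standard. This variance budget is an assumption for illustration, not the paper's exact derivation, and the function names are invented.

```python
from math import sqrt

def rel_error_external(sd_peak, sd_volume):
    # two peak measurements (sample, standard) plus two spotting volumes
    return sqrt(2 * sd_peak ** 2 + 2 * sd_volume ** 2)

def rel_error_internal(sd_peak):
    # four peak measurements; the spotting-volume error cancels
    return sqrt(4 * sd_peak ** 2)
```

With these assumptions the internal standard wins exactly when the peak-measurement error is smaller than the spotting-volume error, as the abstract states.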

3.
The paper discusses errors and error propagation for graphical methods of quantitative in situ measurement in TLC. An error of 0.3% is attainable by measuring only peak height if optimal conditions are chosen. This agrees well with practical analyses, which give errors of the order of 0.3–0.6%. If the peak area is evaluated using the approximation peak height × half-width, the error is of the order of 0.6%, although in real experiments errors of about 1.5% were found. Systematic errors in determining peak heights are introduced by the time constants of the amplifiers and the recorder.

4.
Quality assurance and method validation are needed to reduce false decisions due to measurement errors. In this context the accuracy and standard uncertainty of the analytical method need to be considered to ensure that the performance characteristics of the method are understood. Therefore, analytical methods ought to be validated before implementation and controlled on a regular basis during use. For this purpose reference materials (RMs) are useful for determining the performance characteristics of methods under development. These performance parameters may be documented in the light of a method evaluation study, with the documentation related to international standards and guidelines. In a method evaluation study of Pb in blood using reference samples from the Laboratoire de Toxicologie du Québec, Canada, a difference between the systematic errors was observed between a Perkin-Elmer Model 5100 and a Perkin-Elmer Model 4100 atomic absorption spectrometer, both with Zeeman background correction. For measurement of blood samples, the performance parameters obtained in the method evaluation studies, i.e. the slopes and intercepts of the method evaluation function (MEF), were intended to be used for correcting the systematic errors. However, the number of MEF samples was insufficient to produce an acceptable SD for the MEF slopes to be used for correction. In a method evaluation study on valproate in plasma using the SYVA EMIT assay on a COBAS MIRA S, a significant systematic error above the concentration 300 mmol dm⁻³ was demonstrated (slope 0.9541), and consequently the slope was used for correction of results. For analytes for which certified RMs (CRMs) exist, a systematic error of measurement can be reduced by correcting errors through assessment of trueness, as recommended in international guidelines issued by ISO or the National Institute of Standards and Technology (NIST).
When possible, the analysis of several RMs covering the concentration range of interest is the most useful way to investigate measurement bias. Unfortunately, until recently only a few RMs existed, and only a few had been produced and certified by specialized organizations such as NIST or the Standards, Measurements and Testing (SMT, previously BCR) programme. Owing to the lack of such RMs, network organizations are nowadays established with the aim of supporting the correct use and production of high-quality CRMs.
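Correcting a result with the MEF slope and intercept amounts to inverting a linear function. A minimal sketch, assuming the MEF has the form measured = intercept + slope × true value; the function name is hypothetical:

```python
def correct_by_mef(measured, slope, intercept=0.0):
    """Invert an assumed linear method evaluation function,
    measured = intercept + slope * true, to correct a raw result."""
    return (measured - intercept) / slope
```

With the reported valproate slope of 0.9541, a raw result of 381.64 would be corrected to approximately 400.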

5.
The formulae for prediction errors of inverse and classical calibration derived by Centner, Massart and de Jong in the Fresenius’ Journal of Analytical Chemistry (1998) 361: 2–9 are reconsidered. All calculations assume univariate calibration by ordinary least squares regression applied to an infinite number of data pairs. Inverse calibration gives rise to an error variance which is smaller by a certain factor than that of classical calibration. This factor amounts to unity plus the ratio of the variances of the measurement errors and of the responses used for the calibration. The root mean squared error of prediction is also smaller for inverse than for classical calibration, namely by the square root of this factor. A prediction error calculated in that way agrees well with a result obtained by Monte Carlo simulations.
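The claimed advantage of inverse calibration can be checked with a small Monte Carlo experiment. The sketch below simulates a linear calibration with invented parameters (slope 1, error SD 2, standards at 0–10) and compares the root mean squared prediction errors of the two approaches; it is an illustration of the comparison, not a reproduction of the cited calculations.

```python
import random
import statistics

random.seed(1)
a, b, sd_e = 0.0, 1.0, 2.0                  # invented calibration parameters
xs = [float(i) for i in range(11)]          # standards at 0, 1, ..., 10

def ols(u, v):
    """Ordinary least squares fit v = intercept + slope*u."""
    n = len(u)
    um, vm = sum(u) / n, sum(v) / n
    slope = (sum((ui - um) * (vi - vm) for ui, vi in zip(u, v))
             / sum((ui - um) ** 2 for ui in u))
    return vm - slope * um, slope

x0 = 5.0                                    # "unknown" to be predicted
err_cls, err_inv = [], []
for _ in range(2000):
    ys = [a + b * xi + random.gauss(0, sd_e) for xi in xs]
    y0 = a + b * x0 + random.gauss(0, sd_e)
    ic, sc = ols(xs, ys)                    # classical: regress y on x
    ii, si = ols(ys, xs)                    # inverse: regress x on y
    err_cls.append((y0 - ic) / sc - x0)
    err_inv.append(ii + si * y0 - x0)

rmse_cls = statistics.mean(e * e for e in err_cls) ** 0.5
rmse_inv = statistics.mean(e * e for e in err_inv) ** 0.5
```

Under these assumed settings the simulated RMSEP of inverse calibration should come out below that of classical calibration, in line with the factor described above.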

7.
A simulation study was performed to evaluate the errors in the rate constants of the three-compartment model obtained with the weighted integral method. The errors were tested for ten combinations of seven kinds of weight functions in an 18F-fluorodeoxyglucose (18FDG) study. The error factors arising in PET measurement were statistical noise, cerebral blood volume, time shift and the scanning time of the PET measurement. Errors in each rate constant were within 10 percent, and those in k1k3/(k2+k3) within 1 percent. The weighted integral method was confirmed to be faster than the conventional least squares method within a reasonable error range.

8.
For archaeoastronomical orientation purposes, employing appropriate software, the star declination (D) is determined from the geographical coordinates of the site, the plate bearing of the horizon point (azimuth, Az) and the altitude of the horizon point (skyline), with appropriate corrections. Errors of a second of arc in latitude have no apparent effect on D; errors in Az and in the altitude of the skyline, however, transmit significant percentage errors to D. This note discusses the impact of such error variation on all the parameters involved in archaeoastronomical orientation, and presents the case of two small-sized pyramidals in Greece.

9.
Riley MR, Crider HM. Talanta 2000, 52(3): 473–484
Near infrared spectroscopy (NIRS) was employed to quantify five compounds (ammonium, glucose, glutamate, glutamine and lactate) in conditions similar to those obtained in animal cell cultivations, over varying ranges of analyte concentrations. These components represent the primary nutrients and wastes of animal cells, for which noninvasive monitoring schemes are required to develop accurate control schemes. Ideal cultivation conditions involve maintaining concentrations of these components as low as 1 mM each; however, it was not known whether these compounds can be measured accurately at such low levels. We found that NIRS measurements of these analytes over narrow, low (0–1 mM) concentration ranges yield measurement errors of roughly 11% of the concentration range. By contrast, wide concentration ranges (0–30 mM) yield measurement errors of roughly 1.6% of the concentration range. Decreasing the concentration range over which an analyte is quantified decreased the optimal spectral range for measurement by partial least squares regression analysis by 100 cm⁻¹ in four out of five cases. The ratio of the standard error of prediction (SEP) to the concentration range shows a similarity that may provide an estimate of the SEP to be expected for measurement over a new concentration range: for the five analytes evaluated here, the ratio of SEP to concentration range for one range, divided by that for a second range, is fairly constant at 6.6. This relationship was followed reasonably well by an extensive number of measurement results reported in the literature for similar conditions.

10.
Linear regression of calibration lines passing through the origin was investigated for three models of y-direction random errors: normally distributed errors with an invariable standard deviation (SD), and log-normally and normally distributed errors with an invariable relative standard deviation (RSD). The weighted (with weighting factor x_i^2), geometric and arithmetic means of the ratios y_i/x_i estimate the calibration slope for these models, respectively. Regression of calibration lines with errors in both directions was also studied. The x-direction errors were assumed to be normally distributed random errors with either an invariable SD or an invariable RSD, both combined with a constant relative systematic error. The random errors disperse the true, unknown x-values about the plotted, demanded x-values, which are shifted by the constant relative systematic error. The systematic error biases the slope estimate while the random errors do not; they only increase the slope estimate uncertainty, in which the uncertainty component reflecting the range of possible values of the systematic error must additionally be included. Received: 9 May 2000 Accepted: 7 March 2001
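The three slope estimators for a line through the origin can be written down directly; note that the x_i^2-weighted mean of the ratios reduces to the familiar ordinary least squares slope through the origin. A minimal sketch, with function names invented for illustration:

```python
from math import exp, log

def slope_constant_sd(x, y):
    # weights x_i^2: appropriate when y-errors have a constant SD;
    # equals sum(x*y)/sum(x^2), the OLS slope through the origin
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

def slope_lognormal_rsd(x, y):
    # geometric mean of the ratios: log-normal errors, constant RSD
    r = [yi / xi for xi, yi in zip(x, y)]
    return exp(sum(log(ri) for ri in r) / len(r))

def slope_normal_rsd(x, y):
    # arithmetic mean of the ratios: normal errors, constant RSD
    r = [yi / xi for xi, yi in zip(x, y)]
    return sum(r) / len(r)
```

On exactly proportional data (e.g. y = 3x) all three estimators agree; they differ once the data are noisy, each being suited to its own error model.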

11.
Instrumental errors in modern controlled-potential coulometry are analyzed. The instrumental errors are classified into two main groups: errors in determining the degree of completion of analyte electrolysis, and errors in measuring the quantity of electricity consumed in electrolysis. Various components of these errors have been estimated numerically. It has been demonstrated that, with the use of currently available electronic components, the total instrumental error can be as low as 0.01–0.02%, which allows the analysis error to be reduced to 0.05–0.1%. Presented at the V All-Russian Conference with the Participation of CIS Countries on Electrochemical Methods of Analysis (EMA-99), Moscow, December 6–8, 1999.

12.
MINDO/3 calculations have been carried out for a series of branched-chain alkanes in order to assess the effects of branching on calculated geometries and heats of formation (ΔHf). With vicinal branching, MINDO/3 calculates the central C–C bond to be too long. Bond angles are also found to be distorted. Errors in calculated heats of formation are large when geminal branching is present and significant with vicinal branching. Branching error corrections for ΔHf have been derived and applied to a separate series of branched acyclic and cyclic compounds. For the test sample, application of the branching error corrections gave calculated structures of acyclic branched hydrocarbons with heats of formation having an average absolute error of 1.3 kcal/mole, rather than 17.3 kcal/mole before correction. Cyclic branched hydrocarbons are shown to be less well corrected. Heats of reaction have also been calculated for some isomerization and cyclization reactions using the MINDO/3 and MNDO methods. It is clear from the comparisons that MNDO calculations give less severe errors for highly branched compounds, but the errors are still substantial. For prediction of heats of reaction, the error-corrected calculations are shown to be superior to the “raw” MINDO/3 or MNDO calculations.

13.
Some model calculations with LAOCOON 3 have been used to show that the mean error in the calculated coupling constants is linearly related to the root mean square error in the transition frequencies. On the basis of this linearity it is shown that the ‘probable’ error parameter of LAOCOON 3 should be increased by a factor of 2.5 to give more realistic errors.

14.
The frequent publication of contradictory or meaningless kinetic parameters, and the resulting criticism of the “ill-conditioned nature” of non-isothermal reaction kinetics, led the authors to examine the sensitivity of kinetic parameters to experimental errors. Using simple mathematical deductions, conditions were given under which a precision of about 10% in the kinetic parameters can easily be achieved. To obtain a graphic picture of the information content of a thermoanalytical curve and the effect of systematic measurement errors, mathematical relationships were deduced that show the dependence of the kinetic parameters on the formal (geometric) characteristics of the thermoanalytical curves.

15.
It is well known that linear regression analysis gives unbiased estimates of the slope and intercept of a straight line if the dependent variable y is subject to random errors of measurement while the independent variable x is not. It is much less well known that, if x is also subject to random errors of measurement, linear regression analysis yields an underestimate of the slope and a correspondingly biased estimate of the intercept. These errors cannot be removed by weighted regression. Similar errors arise in non-linear regression when the independent variable is afflicted by random errors of measurement.
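This attenuation of the slope can be demonstrated with a short simulation: adding random error to x shrinks the expected OLS slope by the factor var(x)/(var(x) + var(error)). All parameters below are invented for illustration.

```python
import random

random.seed(0)

def ols_slope(x, y):
    """Ordinary least squares slope of y on x."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    return (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
            / sum((xi - xm) ** 2 for xi in x))

true_slope = 2.0
sd_x_true, sd_u, sd_e = 2.0, 1.0, 0.5   # invented x spread and error SDs
slopes = []
for _ in range(500):
    x_true = [random.gauss(0, sd_x_true) for _ in range(200)]
    y = [true_slope * xt + random.gauss(0, sd_e) for xt in x_true]
    x_obs = [xt + random.gauss(0, sd_u) for xt in x_true]  # x with error
    slopes.append(ols_slope(x_obs, y))
mean_slope = sum(slopes) / len(slopes)
# expected attenuation factor: 4 / (4 + 1) = 0.8, i.e. a mean slope near 1.6
```

Increasing `sd_u` strengthens the shrinkage; weighting the regression does not remove it, as the abstract notes.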

16.
Calibration methods differ in the number and concentration of calibration standards, and in whether these are added to the samples or kept separate from them. The four main calibration methods (single separate or added standard, and multiple separate or added standards) and some modifications are described mathematically and subjected to error-propagation analysis, to examine the likely effects of errors in the analytical signal on the overall accuracy and precision of the concentration estimate. Comparison of the results throws light on the influence of the number, concentration and nature of the calibration standards, the effects of sample and standard replication, and the costs and benefits of blank measurements. It is shown that all standard-addition methods are immune to proportional signal error, but more sensitive to nonlinearity. In separate-standard methods, all bias disappears when the true sample concentration (x_s) is equal to the standard concentration or to the mean standard concentration (x̄). Only the multiple-separate-standard method is unaffected by a constant error common to sample and standards without blanking. In multiple-standard methods, precision is best at x_s = x̄. Precision is always improved by increasing the number of sample and standard measurements; standard-addition methods respond best to sample replication. Whichever calibration method is used, recovery correction will eliminate proportional concentration error, at the cost of decreased precision.
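The immunity of standard addition to proportional signal error is easy to verify for the single-addition case. A minimal sketch, assuming a linear, blank-corrected response; the function name and model are illustrative assumptions, not the paper's notation.

```python
def single_standard_addition(sig_sample, sig_spiked, c_added):
    """Sample concentration from one standard addition, assuming a
    linear, blank-corrected response: signal = k * concentration."""
    return sig_sample * c_added / (sig_spiked - sig_sample)
```

Scaling both signals by the same factor (a proportional signal error) leaves the estimate unchanged: `single_standard_addition(2.0, 5.0, 3.0)` and `single_standard_addition(2.2, 5.5, 3.0)` both give 2.0 (up to rounding), whereas a constant offset common to both signals would bias the result.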

17.
Dispersion, static correlation, and delocalisation errors in density functional theory are considered from the unconventional perspective of the force on a nucleus in a stretched diatomic molecule. The electrostatic theorem of Feynman is used to relate errors in the forces to errors in the electron density distortions, which in turn are related to erroneous terms in the Kohn-Sham equations. For H2, the exact dispersion force arises from a subtle density distortion; the static correlation error leads to an overestimated force due to an exaggerated distortion. For H2+, the exact force arises from a delicate balance between attractive and repulsive components; the delocalisation error leads to an underestimated force due to an underestimated distortion. The net force in H2+ can become repulsive, giving the characteristic barrier in the potential energy curve. Increasing the fraction of long-range exact orbital exchange increases the distortion, reducing delocalisation error but increasing static correlation error.

18.
When data on the variation with time of the absorbance of a reactant or product are used to evaluate the rate constant of a first- or pseudo-first-order reaction, the precision of the result depends on the precisions with which both the time and the absorbance are measured. The natures of these dependences, and the ways in which they are affected by both constant and linearly varying background absorbances, are examined. If the standard error σt of a measurement of time is below about 0.005 t1/2 (where t1/2 is the half-life), the standard error of the rate constant is virtually the same whether the concentration of the reactant or that of the product is followed, but for larger values of σt it is better to follow the concentration of the reactant. In any event, errors in the measurements of time are much more likely to be significant than they are usually assumed to be. Other sources of error in such experiments, including constant and time-dependent background absorbances, are examined more briefly, with emphasis on the requirements that should be satisfied in work of the highest quality.
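A common way to estimate a first-order rate constant from such data is a linear fit of ln(A - A_inf) against time. The sketch below is a minimal illustration under that assumption; the function name and interface are invented, and the time-error and background effects the abstract analyses are ignored.

```python
from math import log

def first_order_k(times, absorbances, a_inf=0.0):
    """Estimate a first-order rate constant k by linear least squares
    on ln(A - A_inf) vs t, assuming A(t) = A_inf + (A0 - A_inf)*exp(-k*t)."""
    ys = [log(a - a_inf) for a in absorbances]
    n = len(times)
    tm, ym = sum(times) / n, sum(ys) / n
    slope = (sum((t - tm) * (y - ym) for t, y in zip(times, ys))
             / sum((t - tm) ** 2 for t in times))
    return -slope
```

On noise-free synthetic data with k = 0.3 the fit recovers k exactly; with real data, the precision of `times` enters the result alongside that of `absorbances`, which is the abstract's point.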

19.
The present paper is a review of the main theoretical and technical aspects of human error treatment (error modelling, reduction and quantification) as applied in aviation, engineering, medicine and other fields. The aim of the review is to draw the attention of analysts and of specialists in metrology and quality in chemistry to the human error problem and its influence on the reliability of test results of chemical composition and the associated measurement uncertainty. The subject of human error is therefore interpreted in the review as it applies to the conditions of a chemical analytical laboratory.

20.
The explicitly correlated second-order Møller-Plesset (MP2-R12) methods perform well in reproducing the last detail of the correlation cusp, allowing higher accuracy than can be accessed through conventional means. Nevertheless, in basis sets that are practical for calculations on larger systems (i.e., around triple- or perhaps quadruple-zeta), MP2-R12 fails to bridge the divide between conventional MP2 and the MP2 basis set limit. In this contribution we analyse the sources of error in MP2-R12 calculations in such basis sets. We conclude that the main source of error is the choice of the correlation factor r12. Sources of error that must be avoided for accurate quantum chemistry include the neglect of exchange commutators and the extended Brillouin condition. The generalized Brillouin condition is found not to lead to significant errors.
