Similar articles
20 similar articles found.
1.
The utility of analytical chemistry measurements in most applications depends on an assessment of measurement error. This paper demonstrates the use of a two-component error model in setting limits of detection and related concepts and introduces two goodness-of-fit statistics for assessing the appropriateness of the model for the data at hand. The model is applicable to analytical methods in which high concentrations are measured with approximately constant relative standard deviation. At low levels, the relative standard deviation cannot stay constant, since this would imply a vanishingly small absolute standard deviation. The two-component model has approximately constant standard deviation near zero concentration and approximately constant relative standard deviation at high concentrations, a pattern that is frequently observed in practice. Here we discuss several important applications of the model to environmental monitoring; the two goodness-of-fit statistics serve both to ascertain whether the data exhibit the error structure assumed by the model and to look for problems with experimental design.
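A minimal numerical sketch of how such a two-component error model behaves, assuming the common form sigma(c) = sqrt(sigma0^2 + (eta*c)^2), where sigma0 is the near-zero standard deviation and eta the high-concentration relative standard deviation (the symbols and values below are illustrative, not taken from the paper):

    import numpy as np

    def two_component_sd(conc, sigma0, eta):
        """Standard deviation under a two-component error model: roughly
        constant SD (sigma0) near zero and roughly constant RSD (eta) at
        high concentration."""
        conc = np.asarray(conc, dtype=float)
        return np.sqrt(sigma0**2 + (eta * conc)**2)

    # Illustrative values (assumed)
    conc = np.array([0.0, 0.5, 1.0, 10.0, 100.0])
    for c, s in zip(conc, two_component_sd(conc, sigma0=0.2, eta=0.05)):
        print(f"conc={c:7.1f}  sd={s:6.3f}")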

2.
Summary: A new error-in-variables method was developed to estimate the reactivity ratios in copolymerization systems. It brings the power of automatic, continuous, on-line monitoring of polymerization (ACOMP) to copolymerization calculations. In ACOMP systems, monomer and polymer concentrations are measured by monitoring two independent properties of the system. The reactivity ratios are found by accounting for the errors in the monomer concentrations arising both from the measurements and from the calibration of the instruments. All the error sources are treated according to the error-in-variables method, and their effects are reflected in the confidence intervals of the reactivity ratios obtained by the usual error-propagation technique.

Figure: Distribution of concentrations [a] and [b] for the simulated experiment I. Random errors are 1% of the initial value in both observed variables.


3.
The impact of random analytical errors on the determination of metal complexation parameters of natural waters by metal titration procedures based on cathodic stripping (CSV) or anodic stripping (ASV) voltammetry is investigated by means of computer simulation. The results indicate that random analytical errors are of overriding importance in establishing the range of ligand concentrations and conditional stability constants that can be accurately determined by these techniques. Simulations incorporating realistic estimates of random analytical error show that only stability constants lying within a relatively narrow range, typically three orders of magnitude, can be determined accurately by the ASV procedure. The CSV procedure suffers from the same limitations, but is potentially more flexible in that the available detection window can be moved (but not widened) by adjustments to the method. Both techniques are capable of accurately determining ligand concentrations provided that the corresponding stability constant, K′, is greater than a threshold value which corresponds to the lower end of the available detection window for the stability constant. Realistically attainable improvements in analytical precision did not greatly improve the performance of either technique. Two graphical treatments for the evaluation of metal complexation parameters from titration data are compared: the Scatchard and Van den Berg/Ruzic plots. Simulations indicate that at least for the single-ligand model of complexation, the Van den Berg/Ruzic method is superior. The importance of the simulation results with respect to determining metal complexation parameters in natural waters is discussed. This study illustrates the value of computer simulation when complex, time-consuming analytical techniques are applied and the need for rigorous analysis of errors in producing data of environmental relevance.
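For the single-ligand model discussed above, the Van den Berg/Ruzic treatment linearizes titration data by plotting [M]/[ML] against [M]; the slope and intercept then give the ligand concentration and conditional stability constant. A short sketch on noise-free simulated data (all concentrations, units and parameter values are illustrative assumptions):

    import numpy as np

    def free_metal(M_T, C_L, K):
        """Free metal [M] for M + L <-> ML with conditional constant K,
        total metal M_T and total ligand C_L (positive root of the
        mass-balance quadratic)."""
        a = K
        b = K * (C_L - M_T) + 1.0
        c = -M_T
        return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

    C_L, K = 50.0, 1.0                      # nmol/l and (nmol/l)^-1, assumed
    M_T = np.linspace(5.0, 200.0, 20)       # titration additions of total metal
    M = np.array([free_metal(mt, C_L, K) for mt in M_T])
    ML = M_T - M

    # Van den Berg/Ruzic plot: [M]/[ML] vs [M] is linear with
    # slope = 1/C_L and intercept = 1/(K*C_L)
    slope, intercept = np.polyfit(M, M / ML, 1)
    print("recovered C_L =", round(1.0 / slope, 3))
    print("recovered K'  =", round(slope / intercept, 3))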

4.
The bi-Langmuir equation has recently been proven essential for describing chiral chromatographic surfaces, and we therefore investigated the accuracy of the elution by characteristic points (ECP) method for estimating bi-Langmuir isotherm parameters. The ECP calculations were done on elution profiles generated by the equilibrium-dispersive model of chromatography for five different sets of bi-Langmuir parameters. The ECP method generates two different errors: (i) the error of the ECP-calculated isotherm and (ii) the model error of the fit to the ECP isotherm. Both errors decreased with increasing column efficiency. Moreover, the model error was strongly affected by the weight of the bi-Langmuir function fitted. For some bi-Langmuir compositions the error of the ECP-calculated isotherm is too large even at high column efficiencies. Guidelines are given on surface types to be avoided and on the column efficiencies and loading factors required for adequate parameter estimation with ECP.
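For reference, the bi-Langmuir isotherm referred to above is simply the sum of two Langmuir terms, often interpreted as nonselective and enantioselective site types; a brief sketch with illustrative (assumed) parameters:

    import numpy as np

    def bi_langmuir(C, qs1, b1, qs2, b2):
        """Bi-Langmuir isotherm: q(C) = qs1*b1*C/(1+b1*C) + qs2*b2*C/(1+b2*C)."""
        C = np.asarray(C, dtype=float)
        return qs1 * b1 * C / (1.0 + b1 * C) + qs2 * b2 * C / (1.0 + b2 * C)

    C = np.linspace(0.0, 5.0, 6)      # mobile-phase concentration, arbitrary units
    print(np.round(bi_langmuir(C, qs1=10.0, b1=0.5, qs2=2.0, b2=20.0), 3))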

5.
Comparing the slopes of two regression lines is an almost daily task in analytical laboratories. The usual procedure is based on a Student's t-test, although the literature differs on whether the standard errors of the slopes or the standard errors of the regressions should be employed to get a pooled standard error. In this work fundamental concepts on the use of the Student's test were reviewed and Monte Carlo simulations were done to ascertain whether relevant differences arise when the two options are considered. It was concluded that for small sample sets (as is usual in analytical laboratories) the Student's t-test based on the standard error of the regression models must be used, and special attention must be paid to the equality of the models' variances. Finally, alternative approaches were reviewed, with emphasis on a simple one based on the analysis of covariance (ANCOVA).
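A minimal sketch of the slope comparison discussed above, pooling the residual variances of the two regressions (data and variable names are illustrative, not from the paper):

    import numpy as np
    from scipy import stats

    def slope_fit(x, y):
        """Least-squares slope, residual sum of squares and Sxx for y = a + b*x."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        b, a = np.polyfit(x, y, 1)
        sse = np.sum((y - (a + b * x))**2)
        sxx = np.sum((x - x.mean())**2)
        return b, sse, sxx

    def compare_slopes(x1, y1, x2, y2):
        """t-test for equal slopes using the pooled standard error of regression."""
        b1, sse1, sxx1 = slope_fit(x1, y1)
        b2, sse2, sxx2 = slope_fit(x2, y2)
        dof = len(x1) + len(x2) - 4
        s2 = (sse1 + sse2) / dof                       # pooled residual variance
        t = (b1 - b2) / np.sqrt(s2 * (1.0 / sxx1 + 1.0 / sxx2))
        return t, 2.0 * stats.t.sf(abs(t), dof)

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])            # illustrative calibration data
    y1 = np.array([0.02, 1.05, 1.98, 3.10, 3.95])
    y2 = np.array([0.01, 0.95, 2.05, 2.90, 4.10])
    print(compare_slopes(x, y1, x, y2))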

6.
7.
The role of the human being as part of a measuring system in a chemical analytical laboratory is discussed. It is argued that a measuring system in chemical analysis includes not only measuring instruments and other devices, reagents and supplies, but also a sampling inspector and/or analyst performing a number of important operations. Without this human contribution, a measurement cannot be carried out. Human errors, therefore, influence the measurement result, i.e., the measurand estimate and the associated uncertainty. Consequently, the chemical analytical and metrological communities should devote more attention to the topic of human errors, in particular during the design and development of a chemical analytical/test method and measurement procedure. Mapping human errors ought to be included in the programme of validation of the measurement procedure (method). Teaching specialists in analytical chemistry and students how to reduce human errors in a chemical analytical laboratory, and how to take into account the residual risk of such errors, is important. Human errors and their metrological implications are suggested for consideration in future editions of the relevant documents, such as the International Vocabulary of Metrology (VIM) and the Guide to the Expression of Uncertainty in Measurement (GUM).

8.
Quality assurance and method validation are needed to reduce false decisions due to measurement errors. In this context the accuracy and standard uncertainty of the analytical method need to be considered to ensure that the performance characteristics of the method are understood. Therefore, analytical methods ought to be validated before implementation and controlled on a regular basis during use. For this purpose reference materials (RMs) are useful for determining the performance characteristics of methods under development. These performance parameters may be documented in the light of a method evaluation study, with the documentation related to international standards and guidelines. In a method evaluation study of Pb in blood using reference samples from the Laboratoire Toxicologie du Québec, Canada, a difference between the systematic errors was observed using a Perkin-Elmer Model 5100 and a Perkin-Elmer Model 4100 atomic absorption spectrometer, both with Zeeman background correction. For measurement of blood samples, the performance parameters obtained in the method evaluation studies, i.e. the slopes and intercepts of the method evaluation function (MEF), were intended to be used for correcting the systematic errors. However, the number of MEF samples was insufficient to produce an acceptable SD for the MEF slopes to be used for correction. In a method evaluation study on valproate in plasma using SYVA's EMIT assay on a COBAS MIRA S, a significant systematic error above a concentration of 300 mmol dm-3 was demonstrated (slope 0.9541), and consequently the slope was used to correct the results. For analytes for which certified RMs (CRMs) exist, a systematic error of measurement can be reduced by assessing trueness as recommended in international guidelines issued by ISO or the National Institute of Standards and Technology (NIST). When possible, the analysis of several RMs covering the concentration range of interest is the most useful way to investigate measurement bias. Unfortunately, until recently only a few RMs existed and only a few had been produced and certified by specialized organizations such as NIST or the Standards, Measurements and Testing (SMT, previously BCR) programme. Because of the lack of such RMs, network organizations have now been established with the aim of supporting the correct use and production of high-quality CRMs.
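If the method evaluation function is taken as a straight line relating measured to reference values, a proportional bias can be corrected by dividing by its slope; the sketch below only illustrates that idea and is not a description of the cited procedures (the example concentration is assumed; the slope 0.9541 echoes the value quoted above):

    def correct_by_mef(measured, slope, intercept=0.0):
        """Invert a linear method evaluation function measured = intercept + slope*true,
        i.e. corrected = (measured - intercept) / slope."""
        return (measured - intercept) / slope

    print(correct_by_mef(350.0, slope=0.9541))   # illustrative measured value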

9.
An overview is given of the stepwise learning programmes undertaken to identify the main sources of error associated with the determination of the mandatory organic contaminants in marine monitoring programmes. Details are given on the preparation and use of LRMs and CRMs to maintain analytical control and to quantify laboratory errors in relation to the measurement of changes in the environment.

10.
A direct flow-injection atomic-absorption spectrometric (FIA-AAS) method for the assessment of inorganic arsenic compounds and their metabolites was developed and statistically evaluated by estimation of the method evaluation function (MEF), which provides detailed information on the analytical performance of the method, i.e., the average combined uncertainty and the magnitude of potential systematic errors. The method evaluation study demonstrated that the use of standard addition was a necessity to obtain acceptable method performance at the low concentrations typical of low-dose exposure. In contrast, the use of calibration curves resulted in a method with reduced sensitivity and high systematic error. The developed method, using standard addition, had a limit of detection (2.9 µg/l) sufficiently low for the determination of hydride-generating arsenic species in urine from non-exposed and low-exposed persons. Organoarsenicals such as arsenobetaine and arsenocholine are not detected by this method. Hence, the contribution of these compounds derived from a diet containing seafood does not affect the monitoring of inorganic arsenic compounds after occupational or environmental exposure. The high capacity of the FIA-AAS system (three minutes per sample measured by standard addition), together with the low limit of detection, makes this method suitable for biological monitoring of inorganic arsenic exposure even though standard addition is required.
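Standard addition, as used above, extrapolates the signal-versus-added-concentration line back to the concentration axis; a short sketch with assumed, illustrative numbers:

    import numpy as np

    added = np.array([0.0, 2.0, 4.0, 6.0])            # added As standard (e.g. microg/l), assumed
    signal = np.array([0.051, 0.090, 0.129, 0.171])   # measured absorbance, assumed

    slope, intercept = np.polyfit(added, signal, 1)
    sample_conc = intercept / slope                   # magnitude of the x-axis intercept
    print(f"sample concentration ~ {sample_conc:.2f} (same units as the additions)")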

11.
Single-molecule force spectroscopy has become a valuable tool for the investigation of intermolecular energy landscapes for a wide range of molecular associations. Atomic force microscopy (AFM) is often used as an experimental technique in these measurements, and the Bell-Evans model is commonly used in the statistical analysis of rupture forces. Most applications of the Bell-Evans model consider a constant loading rate of force applied to the intermolecular bond. The data analysis is often inconsistent because either the probe velocity or the apparent loading rate is used as an independent parameter, and these approaches provide different results in AFM-based experiments. Significant variations in results arise from the relative stiffness of the AFM force sensor in comparison with the stiffness of polymeric tethers that link the molecules under study to the solid surfaces. An analytical model presented here accounts for the systematic errors in force-spectroscopy parameters arising from the nonlinear loading induced by polymer tethers. The presented analytical model is based on the Bell-Evans model of the kinetics of forced dissociation and on asymptotic models of tether stretching. The two most common data reduction procedures are analyzed, and analytical expressions for the systematic errors are provided. The model shows that the barrier width is underestimated and that the dissociation rate is significantly overestimated when force-spectroscopy data are analyzed without taking into account the elasticity of the polymeric tether. Systematic error estimates for the asymptotic freely jointed chain and wormlike chain polymer models are given for comparison. The analytical model based on asymptotic freely jointed chain stretching is employed to analyze and correct the results of double-tether force-spectroscopy experiments on the disjoining of "hydrophobic bonds" between individual hexadecane molecules that are covalently tethered via poly(ethylene glycol) linkers of different lengths to the substrates and to the AFM probes. Application of the correction algorithm decreases the spread of the data about the mean value, which is particularly important for measurements of the dissociation rate, and increases the barrier width to 0.43 nm, which might be indicative of the theoretically predicted hydrophobic dewetting.
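For orientation, the constant-loading-rate Bell-Evans expression mentioned above predicts a most probable rupture force F* = (kBT/x_beta) * ln(r*x_beta/(k_off*kBT)); a sketch with assumed, illustrative parameters (only the 0.43 nm barrier width echoes a value quoted above):

    import numpy as np

    kB_T = 4.11e-21                     # thermal energy at ~298 K, in J

    def bell_evans_force(r, x_beta, k_off):
        """Most probable rupture force for a constant loading rate r (N/s)."""
        return (kB_T / x_beta) * np.log(r * x_beta / (k_off * kB_T))

    for r in (1e-10, 1e-9, 1e-8):       # loading rates in N/s, assumed
        f = bell_evans_force(r, x_beta=0.43e-9, k_off=0.1)
        print(f"r = {r:.0e} N/s  ->  F* = {f * 1e12:.0f} pN")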

12.
Summary A true analytical result presumes an analytical procedure without systematic errors. They can be detected by comparing the analytical results with the true value or with an accepted reference. Systematic errors are always superimposed by the random error, therefore such a comparison must include statistical tests. The paper describes suitable test-models for typical analytical problems, as e.g. the validation of an analytical procedure or the (current) control of analytical work. Furthermore, explanations will be given for the interpretation of the test results due to the trueness of analytical data and for the consequences concerning analytical work.  相似文献   

13.
Results are presented from the INAA of 34 elements in NIST and USGS geological reference materials that were analysed relative to multielemental SRM-1633a Coal Fly Ash standards. The data compare favorably with results reported by other investigators. The application of historical control charts for continuous monitoring of quality assurance and detection of systematic errors is demonstrated.
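A historical control chart of the kind mentioned above flags new control results against warning and action limits derived from past data; a minimal sketch (the control values and limits are illustrative assumptions):

    import numpy as np

    history = np.array([10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7])  # past control results, assumed
    mean, sd = history.mean(), history.std(ddof=1)

    def check(result):
        """Classify a new control result against 2s warning and 3s action limits."""
        z = abs(result - mean) / sd
        if z > 3:
            return "out of control (beyond 3s action limit)"
        if z > 2:
            return "warning (beyond 2s limit)"
        return "in control"

    for r in (10.1, 10.7, 11.2):
        print(r, "->", check(r))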

14.
Different schemes of analytical testing, including the sampling, sample preparation and sample analysis operations, are considered as applied to a lot of raw material containing recoverable precious metal. The errors resulting from the step-by-step operations of the analytical testing are estimated. Sampling and sample preparation operations are found to be significant contributors to the total error of determination of the percentage and/or weight of a precious metal of interest in a lot. Some ways to diminish both the sampling error and the total error of the analytical testing procedure are recommended.
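If the step-by-step operations are treated as independent error sources, their relative standard deviations combine in quadrature, which makes it easy to see which stage dominates the total error; a short sketch of that bookkeeping (the stage RSDs are assumed, illustrative values):

    import numpy as np

    stages = {"sampling": 0.08, "sample preparation": 0.04, "analysis": 0.01}  # RSDs as fractions, assumed

    total_rsd = np.sqrt(sum(r**2 for r in stages.values()))
    print(f"total RSD ~ {100 * total_rsd:.1f}%")
    for name, r in stages.items():
        print(f"  {name:20s} contributes {100 * r**2 / total_rsd**2:.0f}% of the variance")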

15.
Summary: In most cases, investigations concerning the reproducibility of different analytical methods for the determination of tin in tin ores consist only of comparisons of means, i.e. detection of systematic errors. The fact that the methods use different calibration procedures, on the one hand, and that their accuracy varies, on the other, makes it necessary to look for more sensitive criteria. For this purpose, the degree of efficiency regarding the determination of the true mean of a reference method, the degree of (mutual) reproducibility, and the equivalence probability are defined and their meaning is statistically interpreted. The degree of efficiency ε of any two methods is defined as the ratio of their mean square errors in determining the true mean of a reference method. This quantity can be described by the ratio of the upper bounds for the probabilities of an error by the two methods. The degree of reproducibility P* of different analytical methods is understood as the minimal probability of comparable measuring results. The equivalence probability Pe is defined as the a-posteriori probability of the hypothesis that the two distribution functions considered are identical. The criteria ε, P* and Pe seem to be more suitable for statistical comparisons than standard statistical procedures such as the t-test criterion. The applicability of these quantities was checked using the example of six different methods for the determination of tin in tin ores. For this purpose, it was necessary to evaluate objectively the efficiency of sample division, in order to obtain reproducible final samples for analysis, using hierarchical two-way analysis of variance. General knowledge concerning the analytical methods used could thus be supplemented.
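A small sketch of the degree of efficiency defined above, computed as the ratio of the two methods' mean square errors about the reference mean (the tin contents and reference value are illustrative assumptions):

    import numpy as np

    def degree_of_efficiency(results_a, results_b, true_mean):
        """Ratio of mean square errors of two methods about a reference value."""
        mse = lambda x: np.mean((np.asarray(x, float) - true_mean)**2)
        return mse(results_a) / mse(results_b)

    method_a = [2.48, 2.52, 2.49, 2.51, 2.50]   # % Sn, assumed
    method_b = [2.41, 2.58, 2.46, 2.56, 2.52]   # % Sn, assumed
    print(round(degree_of_efficiency(method_a, method_b, true_mean=2.50), 3))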

16.
(The efficiency of regression calculations in x-ray spectrometry) Regression fitting by the least-squares technique minimizes analytical errors. Examples in the literature indicate the possible errors of the method with regard to overfitting, extrapolation and gaps in the concentration range of the standard samples. Strict consideration shows that in practice such errors are insignificant. The problem of unnecessary parameters is discussed; it is shown that regression fitting can cope with such parameters without increased error. The method can also provide a criterion for the necessity of parameters in calibration functions. The most important sources of error in x-ray spectrometry, which are enumerated, do not include the regression calculations.

17.
The choice of an analytical procedure and the determination of an appropriate sampling strategy are here treated as a decision theory problem in which sampling and analytical costs are balanced against possible end-user losses due to measurement error. Measurement error is taken here to include both sampling and analytical variances, but systematic errors are not considered. The theory is developed in detail for the case exemplified by a simple accept or reject decision following an analytical measurement on a batch of material, and useful approximate formulae are given for this case. Two worked examples are given, one involving a batch production process and the other a land reclamation site.
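A toy illustration of the balance described above for the accept-or-reject case: the total expected cost combines an analytical cost that grows as the measurement variance is pushed down with the expected end-user loss from wrong decisions (the names, cost figures and cost model are illustrative assumptions, not the paper's formulae):

    import numpy as np
    from scipy import stats

    true_value, limit = 98.0, 100.0     # batch mean and acceptance limit, assumed
    loss_wrong = 5000.0                 # end-user loss if the batch is wrongly rejected, assumed
    cost_coeff = 50.0                   # analytical cost modelled as cost_coeff / sigma**2, assumed

    def expected_total_cost(sigma):
        """Measurement cost plus expected loss; here the batch truly conforms,
        so a loss is incurred when the measured value exceeds the limit."""
        p_wrong = stats.norm.sf(limit, loc=true_value, scale=sigma)
        return cost_coeff / sigma**2 + loss_wrong * p_wrong

    for sigma in (0.5, 1.0, 2.0, 4.0):
        print(f"sigma = {sigma:3.1f}  expected cost = {expected_total_cost(sigma):8.1f}")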

18.
Enthalpy probe measurements in supersonic plasma flows are subject to various sources of error which are difficult to quantify experimentally. The relative importance of several such errors has been assessed by means of detailed two-dimensional numerical simulations of high-speed plasma flow impinging on an enthalpy probe. The simulations show that moderate uncertainties in upstream pressure and composition (i.e., degree of ionization) can lead to significant errors in the velocity and temperature inferred from the measurements. These errors tend to be larger in velocity than in temperature. A second potential source of error is that enthalpy probe data are generally interpreted by means of simplified analytical relations which neglect the effects of finite-rate ionization, internal electronic excitation, thermal radiation, probe cooling, and probe sampling. The importance of these effects was also assessed, and the resulting errors were not significant under the conditions examined. We conclude that enthalpy probe measurements in supersonic plasma flows are useful in situations where the upstream pressure and degree of ionization are known to reasonable accuracy.

19.
Consequences resulting from a three-dimensional calibration model introduced in [5] are investigated. Accordingly, there exists a different statistical background for the calibration, the analytical evaluation and the validation step. If the errors of the concentration values are not negligible compared with the errors of the measured values, orthogonal calibration models have to be used instead of the common Gaussian least squares (GLS). Four different approximations to orthogonal least squares, Wald's approximation (WA), Mandel's approximation (MA), the geometric mean (GM) and principal component estimation (PC), are investigated and compared with each other and with GLS by simulations and by real analytical applications. The simulations show that GLS gives biased estimates of both slope and intercept as the concentration error increases, whereas the orthogonal models estimate the calibration parameters better; the best fit is obtained with Wald's approximation. It is shown by simulations and by real analytical calibration problems that orthogonal calibration has to be used in all cases in which the concentration errors cannot be neglected compared with the errors of the measured values. This is particularly relevant in recovery experiments for validation by comparison of methods; in such cases orthogonal least squares methods must always be applied, and the use of WA is recommended. The situation is different for ordinary calibration experiments, where the examples considered show only small differences between classical GLS and the orthogonal procedures. In doubtful cases both GLS and WA should be computed, and the latter should be used if significant differences appear.
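A minimal sketch of Wald's grouping approximation referred to above, which fits a straight line when both variables carry error by connecting the means of the lower and upper halves of the data (the paired concentrations are illustrative assumptions):

    import numpy as np

    def wald_fit(x, y):
        """Wald's grouping estimator: sort by x, split the data into two
        halves, and connect the two group means."""
        order = np.argsort(x)
        x, y = np.asarray(x, float)[order], np.asarray(y, float)[order]
        n = len(x) // 2
        slope = (y[n:].mean() - y[:n].mean()) / (x[n:].mean() - x[:n].mean())
        intercept = y.mean() - slope * x.mean()
        return slope, intercept

    x = np.array([1.1, 2.0, 2.9, 4.2, 5.0, 5.9, 7.1, 8.0])   # method 1 results, assumed
    y = np.array([1.3, 2.2, 3.1, 4.0, 5.3, 6.1, 6.9, 8.2])   # method 2 results, assumed
    print(wald_fit(x, y))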

20.
The theory of error for target factor analysis is used to derive a simple equation from which the root-mean-square error in the factor loadings can be calculated. The method is applied to a problem in gas-liquid chromatography and is shown to agree with errors estimated by the 'jackknife' method.
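The 'jackknife' estimate mentioned above recomputes the quantity of interest with one observation left out at a time and converts the spread of those recomputations into a standard error; a generic sketch (the statistic and data here are purely illustrative, a simple mean over assumed loadings):

    import numpy as np

    def jackknife_se(data, statistic=np.mean):
        """Leave-one-out jackknife standard error of an arbitrary statistic."""
        data = np.asarray(data, float)
        n = len(data)
        reps = np.array([statistic(np.delete(data, i)) for i in range(n)])
        return np.sqrt((n - 1) / n * np.sum((reps - reps.mean())**2))

    loadings = np.array([0.82, 0.79, 0.85, 0.80, 0.83, 0.81])   # assumed values
    print(round(jackknife_se(loadings), 4))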
