Similar literature
A total of 20 similar documents were retrieved (search time: 31 ms).
1.
Synek V. Talanta, 2006, 70(5): 1024-1034
This paper investigates the coverage probability of uncertainty intervals determined in compliance with the GUM and the EURACHEM Guide, i.e. intervals defined by the expanded uncertainty U about results that are left uncorrected for insignificant biases and corrected for significant biases. As Maroto et al. discovered using the Monte Carlo method, this coverage probability can in some cases fall significantly below the chosen level of confidence. Their numerical results, obtained under the assumption that only β errors occur in the significance test, and their finding that the coverage reduction depends on the relative magnitudes of the systematic error, the overall uncertainty and the bias uncertainty, are confirmed in this paper using probability calculus and numerical integration. The problem is also studied when all possible experimental biases, both significant and insignificant, are considered; from this point of view, the reduction of the coverage probability turns out to be less severe. The coverage probability is also investigated for uncertainty intervals computed in ways other than those recommended by the above-mentioned documents. Intervals defined by U about results corrected for both significant and insignificant bias always give the same coverage probability, equal to the chosen level of confidence. Intervals whose uncertainties are modified or enlarged with the insignificant biases remove or moderate the coverage reduction.
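A minimal sketch of the underlying probability calculation: the coverage probability of an interval y ± k·u when a systematic error δ is left uncorrected. It treats the bias as exactly known and does not reproduce the paper's distinction between the overall uncertainty and the bias uncertainty, so the function and the chosen magnitudes are illustrative only.

```python
from scipy.stats import norm

def coverage(delta, u, k=2.0):
    """P(|Y - true value| <= k*u) for results Y ~ N(true value + delta, u)."""
    return norm.cdf(k - delta / u) - norm.cdf(-k - delta / u)

u = 1.0
for delta in (0.0, 0.5, 1.0, 1.5):
    # coverage drops below the nominal 95% as the uncorrected bias grows
    print(f"bias = {delta:.1f} * u  ->  coverage = {coverage(delta, u):.3f}")
```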

2.
Synek V. Talanta, 2005, 65(4): 829-837
The ISO Guide strictly recommends correcting results for recognised significant bias, but in special cases some analysts find it practical to omit the correction and instead enlarge the expanded uncertainty to allow for the uncorrected bias. In this paper, four alternative methods for computing these modified expanded uncertainties are compared according to the levels of confidence, widths and layouts of the resulting uncertainty intervals. The method that appears best, because it yields the same uncertainty intervals as bias correction, has seen little use, perhaps because its modified uncertainty intervals are not symmetric about the results. The three remaining methods keep their intervals symmetric, but only two of them provide intervals whose levels of confidence reach at least the required value (95%). The third method yields intervals with low levels of confidence, even for small biases. A new method is proposed that gives symmetric intervals with exactly the required level of confidence; these intervals are narrower than the symmetric intervals with sufficient level of confidence obtained by the two methods mentioned above. A mathematical background to the problem and an illustrative example applying all the compared methods are included.

3.
Consistent treatment of measurement bias, including the question of whether or not to correct for bias, is essential for the comparability of measurement results. The case for correcting for bias is discussed, and it is shown that instances in which bias is known or suspected, but in which a specific correction cannot be justified, are comparatively common. The ISO Guide to the Expression of Uncertainty in Measurement does not provide well for this situation. It is concluded that there is a need for guidance on handling cases of uncorrected bias. Several different published approaches to the treatment of uncorrected bias and its uncertainty are critically reviewed with regard to coverage probability and simplicity of execution. On the basis of current studies, and taking into account testing laboratory needs for a simple and consistent approach with a symmetric uncertainty interval, we conclude that for most cases with large degrees of freedom, linear addition of a bias term adjusted for exact coverage ("U(e)") as described by Synek is to be preferred. This approach does, however, become more complex if degrees of freedom are low. For modest bias and low degrees of freedom, summation of bias, bias uncertainty and observed value uncertainty in quadrature ("RSSu") provides a similar interval and is simpler to adapt to reduced degrees of freedom, at the cost of a more restricted range of application if accurate coverage is desired.
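For orientation, a minimal sketch of two of the constructions named above in their commonly quoted forms: plain linear addition of the bias to the expanded uncertainty, and "RSSu", the quadrature sum of bias, bias uncertainty and observed-value uncertainty. Synek's exact-coverage adjustment of the U(e) term is not reproduced here, and all numerical values are illustrative.

```python
import numpy as np

def U_linear_addition(u, u_b, b, k=2.0):
    # expand the combined uncertainty, then add the absolute bias linearly
    return k * np.hypot(u, u_b) + abs(b)

def U_rssu(u, u_b, b, k=2.0):
    # "RSSu": bias, bias uncertainty and observed-value uncertainty in quadrature
    return k * np.sqrt(u**2 + u_b**2 + b**2)

u, u_b, b = 1.0, 0.3, 0.8   # observed-value uncertainty, bias uncertainty, bias
print("linear addition:", round(U_linear_addition(u, u_b, b), 3))
print("RSSu           :", round(U_rssu(u, u_b, b), 3))
```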

4.
O'Donnell GE, Hibbert DB. The Analyst, 2005, 130(5): 721-729
Bias in an analytical measurement should be estimated and corrected for, but this is not always done. As an alternative to correction, there are a number of methods that increase the expanded uncertainty to take account of bias. All sensible combinations of correcting or enlarging uncertainty for bias, whether considered significant or not, were modeled by a Latin hypercube simulation of 125,000 iterations for a range of bias values. The fraction of results for which the result and its expanded uncertainty contained the true value of a simulated test measurement was used to assess the different methods. The strategy of estimating the bias and always correcting is consistently the best throughout the range of biases. When the bias is considered significant, expansion of the uncertainty is best done by SUMU(Max): U(C(test result)) = k·u_c(C(test result)) + |δ_run|, where k is the coverage factor (k = 2 for a 95% confidence interval), u_c is the combined standard uncertainty of the measurement and δ_run is the run bias.
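A minimal sketch of the SUMU(Max) expansion quoted above, U = k·u_c + |δ_run|, followed by a quick Monte Carlo check that the resulting interval keeps at least the nominal coverage; the numerical values and the check itself are illustrative additions, not part of the paper.

```python
import numpy as np

def sumu_max(u_c, delta_run, k=2.0):
    """Expanded uncertainty enlarged for an uncorrected run bias: k*u_c + |delta_run|."""
    return k * u_c + abs(delta_run)

# illustrative coverage check for an uncorrected run bias
rng = np.random.default_rng(1)
u_c, delta_run, true_value = 0.5, 0.4, 10.0
results = rng.normal(true_value + delta_run, u_c, size=100_000)
U = sumu_max(u_c, delta_run)
print(f"U = {U:.2f}, coverage ~ {np.mean(np.abs(results - true_value) <= U):.3f}")
```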

5.
The bias of an analytical procedure is calculated in the assessment of trueness. If this experimental bias is not significant, the procedure is assumed to be unbiased and, consequently, results obtained with it are not corrected for the bias. However, when assessing trueness there is always a probability of incorrectly concluding that the experimental bias is not significant. Therefore, non-significant experimental bias should be included as a component of uncertainty. In this paper, we have studied whether it is always necessary to include this term and which is the best approach for including the bias in the uncertainty budget. To answer these questions, we used the Monte Carlo method to simulate both the assessment of trueness of biased procedures and the future results these procedures provide. The results show that non-significant experimental bias should be included as a component of uncertainty when the uncertainty of the bias represents at least 30% of the overall uncertainty.

6.
Maroto A, Boqué R, Riu J, Rius FX. The Analyst, 2003, 128(4): 373-378
The trueness of an analytical method can be assessed by calculating the proportional bias of the method in terms of apparent recovery. If the apparent recovery does not differ significantly from one, the analytical method does not have a significant bias. In this case, the bias is neglected and the uncertainty associated with it is included in the uncertainty budget of the results. However, when assessing trueness there is always a probability of incorrectly concluding that the proportional bias is not significant; the uncertainty of results may therefore be underestimated. In this paper, we study how non-significant bias affects the uncertainty of analytical results, and how to avoid underestimating uncertainty by including the calculated non-significant bias in the uncertainty budget. To answer these questions, we used the Monte Carlo method to simulate the process of estimating the apparent recovery of a biased analytical method and, subsequently, the future results this method provides. The results of the simulation show that non-significant bias may lead to underestimation of the uncertainty of analytical results when the bias contributes more than 20% to the overall uncertainty; the underestimation is especially pronounced when the bias contributes more than 50%.
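A minimal sketch of the kind of Monte Carlo experiment described above: a small proportional bias is estimated from recovery measurements, judged non-significant, ignored, and the coverage of future results is then checked. The sample sizes, uncertainties and the 5% significance level are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def simulate(true_recovery=0.99, u_meas=0.02, n_rec=10, n_trials=20_000, k=2.0):
    covered, kept = 0, 0
    for _ in range(n_trials):
        # recovery experiment on a reference whose true relative value is 1
        rec = rng.normal(true_recovery, u_meas, size=n_rec)
        t = abs(rec.mean() - 1.0) / (rec.std(ddof=1) / np.sqrt(n_rec))
        if t >= stats.t.ppf(0.975, n_rec - 1):
            continue                         # bias judged significant -> would be corrected
        kept += 1
        # future result reported uncorrected, with an uncertainty that ignores the bias
        y = rng.normal(true_recovery, u_meas)
        covered += abs(y - 1.0) <= k * u_meas
    return covered / kept

print("coverage when a non-significant bias is ignored:", round(simulate(), 3))
```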

7.
Cowen S, Ellison SL. The Analyst, 2006, 131(6): 710-717
Different methods of treating data which lie close to a natural limit in a feasible range, such as zero or 100% mass or mole fraction, are discussed and recommendations made concerning the most appropriate. The methods considered include discarding observations beyond the limit, shifting observations to the limit, truncation of a classical confidence interval based on Student's t (coupled with shifting the result to the limit if outside the feasible range), truncation and renormalisation of an assumed normal distribution, and the maximum density interval of a Bayesian estimate based on a normal measurement distribution and a uniform prior within the feasible range. Based on consideration of bias and simulation to assess coverage, it is recommended that for most purposes, a confidence interval near a natural limit should be constructed by first calculating the usual confidence interval based on Student's t, then truncating the out-of-range portion to leave an asymmetric interval and adjusting the reported value to within the resulting interval if required. It is suggested that the original standard uncertainty is retained for uncertainty propagation purposes.
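A minimal sketch of the recommended treatment: compute the usual Student's t interval, truncate it to the feasible range, shift the reported value into the interval if needed, and keep the original standard uncertainty for propagation. The feasible range [0, 100] and the replicate data are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def truncated_interval(x, lo=0.0, hi=100.0, conf=0.95):
    x = np.asarray(x, dtype=float)
    n, mean, s = len(x), np.mean(x), np.std(x, ddof=1)
    half = stats.t.ppf(0.5 + conf / 2, n - 1) * s / np.sqrt(n)
    lower, upper = max(mean - half, lo), min(mean + half, hi)   # truncate to the feasible range
    reported = min(max(mean, lower), upper)                     # shift the value into the interval
    return reported, (lower, upper), s / np.sqrt(n)             # keep the original u for propagation

obs = [-0.6, 0.2, -0.3, -0.1, 0.4]    # replicate results near the zero limit
value, interval, u = truncated_interval(obs)
print(f"reported {value:.2f}, 95% interval ({interval[0]:.2f}, {interval[1]:.2f}), u = {u:.2f}")
```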

8.
As in all experimental science, chemical data are affected by the limited precision of the measurement process. Quality control and traceability of experimental data require suitable approaches for properly expressing the degree of uncertainty. Noise and bias are nuisance effects that reduce the information extractable from experimental data. However, because of the complexity of numerical data evaluation in many chemical fields, often only mean values from data analysis, e.g. multi-parametric curve fitting, are reported. Relevant information on the limits of interpretation, e.g. standard deviations or confidence limits, is either omitted or merely estimated. Modern techniques for handling uncertainty in both parameter evaluation and prediction rely heavily on the computational power of computers. Advantageously, computer-intensive methods such as Monte Carlo resampling and Latin hypercube sampling do not require sophisticated and often unavailable mathematical treatment. The underlying statistical concepts are introduced, and applications of several computer-intensive statistical techniques to chemical problems are demonstrated.

9.
The new version of ISO Guide 34 requires producers of certified reference materials (CRMs) to include contributions from possible instability in the overall CRM uncertainty, so as to obtain a value for the uncertainty in compliance with the Guide to the Expression of Uncertainty in Measurement (GUM). A pragmatic approach to estimating the uncertainty of stability is presented. It relies on regression analysis of stability data with subsequent testing of the slope of the regression line for significance. If the slope is found to be statistically insignificant, a shelf life is chosen and the uncertainty connected with this time span is estimated via the standard deviation of the slope. This uncertainty is included in the overall uncertainty of the CRM. The approach is explained with examples showing its applicability to matrix CRMs.
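A minimal sketch of the procedure described above: regress the monitored property on time, test the slope for significance, and, if it is insignificant, take the stability contribution as the standard deviation of the slope multiplied by the chosen shelf life. The data points and the 24-month shelf life are illustrative.

```python
import numpy as np
from scipy import stats

t = np.array([0, 3, 6, 12, 18, 24], dtype=float)         # months since certification
y = np.array([10.02, 9.98, 10.05, 9.97, 10.03, 9.99])    # monitored property value

res = stats.linregress(t, y)
significant = abs(res.slope / res.stderr) >= stats.t.ppf(0.975, len(t) - 2)

shelf_life = 24.0                                         # chosen shelf life, months
u_stability = res.stderr * shelf_life                     # uncertainty of possible instability
print(f"slope significant: {significant}, u_stability = {u_stability:.4f}")
```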

11.
A model for calculating uncertainty in routine multi-element analysis is described. The model is constructed according to the principles of GUM/EURACHEM. Control chart results are combined with other existing data and results from the actual measurement into a concentration-dependent estimate of combined standard uncertainty. Since possible sources of bias are included in the calculation, overall bias as estimated from the data is used only as a control to identify needs for modification of the model and/or the analytical procedure. For each individual sample, uncertainty can be calculated automatically based on two pre-calculated parameters together with measured concentration and instrumental standard deviation. As an example, the model is demonstrated for inductively coupled plasma-mass spectrometry (ICP-MS) analysis of sewage sludge including laboratory sub-sampling, sample preparation, and instrumental determination.
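A minimal sketch of a concentration-dependent uncertainty model of the kind described: two pre-calculated parameters (a constant and a proportional term, e.g. derived from control charts) are combined with the measured concentration and the instrumental standard deviation. The functional form and the parameter values are assumptions for illustration, not the paper's model.

```python
import math

def combined_uncertainty(conc, s_instr, u_const=0.02, u_rel=0.05):
    """u_c for one sample: constant, proportional and instrumental terms in quadrature."""
    return math.sqrt(u_const**2 + (u_rel * conc)**2 + s_instr**2)

print(f"u_c = {combined_uncertainty(conc=1.25, s_instr=0.03):.3f}")   # e.g. mg/kg in sludge
```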

12.
This study proposes a novel approach to quantifying the uncertainties of constitutive relations inferred from noisy experimental data using inverse modeling. We focus on electrochemical systems in which charged species (e.g., lithium ions) are transported in electrolyte solutions under an applied current. Such systems are typically described by the Planck-Nernst equation, in which the unknown material properties are the diffusion coefficient and the transference number, assumed to be either constant or concentration-dependent. These material properties can be optimally reconstructed from time- and space-resolved concentration profiles measured during experiments using the magnetic resonance imaging (MRI) technique. However, as the measurement data are usually noisy, it is important to quantify how the presence of noise affects the uncertainty of the reconstructed material properties. We address this problem by developing a state-of-the-art Bayesian approach to uncertainty quantification in which the reconstructed material properties are recast in terms of probability distributions, allowing us to rigorously determine suitable confidence intervals. The proposed approach is first thoroughly validated using "manufactured" data exhibiting the expected behavior as the magnitude of noise is varied. It is then applied to quantify the uncertainty of the diffusion coefficient and the transference number reconstructed from experimental data, revealing interesting insights.

13.
14.
Despite the importance of stating the measurement uncertainty in chemical analysis, these concepts are still not widely applied by the broader scientific community. The Guide to the Expression of Uncertainty in Measurement approves the use of both the partial derivative approach and the Monte Carlo approach. There are two limitations to the partial derivative approach. Firstly, it involves the computation of first-order derivatives of each component of the output quantity; this requires some mathematical skill and can be tedious if the mathematical model is complex. Secondly, it cannot predict the probability distribution of the output quantity accurately if the input quantities are not normally distributed. Knowledge of the probability distribution is essential for determining the coverage interval. The Monte Carlo approach performs random sampling from the probability distributions of the input quantities; hence, there is no need to compute first-order derivatives. In addition, it gives the probability density function of the output quantity as the end result, from which the coverage interval can be determined. Here we demonstrate how the Monte Carlo approach can be easily implemented to estimate measurement uncertainty using a standard spreadsheet program such as Microsoft Excel. Our aim is to provide the analytical community with a tool for estimating measurement uncertainty using software that is already widely available and so simple to apply that it can be used even by students with basic computer skills and minimal mathematical knowledge.
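A minimal sketch of the Monte Carlo approach described above, written in Python rather than in a spreadsheet: sample the input quantities from their assigned distributions, propagate them through the measurement model, and read the coverage interval from the resulting distribution. The model c = c_std · (A_sample / A_std) · dilution and all input values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000

c_std    = rng.normal(100.0, 0.5, N)      # standard concentration, mg/L (normal)
A_sample = rng.normal(0.482, 0.004, N)    # sample absorbance (normal)
A_std    = rng.normal(0.510, 0.004, N)    # standard absorbance (normal)
dilution = rng.uniform(9.95, 10.05, N)    # dilution factor (rectangular)

c = c_std * (A_sample / A_std) * dilution # measurement model

lo, hi = np.percentile(c, [2.5, 97.5])    # 95% coverage interval from the distribution
print(f"c = {c.mean():.1f} mg/L, u = {c.std(ddof=1):.1f} mg/L, 95% interval [{lo:.1f}, {hi:.1f}]")
```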

15.
Detection capabilities are important performance characteristics of analytical procedures. There are several conceptual approaches to the subject, but most of them involve some degree of ambiguity: it is not clear which measurement conditions should be used, and there is a relative lack of definition concerning blanks. Moreover, there are no systematic experimental studies of the influence of the uncertainty associated with bias evaluation. A new approach based on measurement uncertainty is presented for estimating the quantities that characterize detection capability. It can be applied under different measurement conditions, and no additional experiment with blanks is necessary. Starting from a model of the combined uncertainty of the concentration, the estimated quantities can include both the effects of random errors and the uncertainty associated with the evaluation of bias. The detection capabilities obtained are then compared with results from other relevant approaches. Slightly higher values were obtained with the measurement uncertainty approach, owing to the inclusion of the uncertainty associated with bias.

16.
The quality of analytical results is expressed by their uncertainty, as estimated on the basis of an uncertainty budget; little effort, however, is often spent on ascertaining the quality of the uncertainty budget itself. The uncertainty budget is based on circumstantial or historical data, and it is therefore essential that the applicability of the overall uncertainty budget to actual measurement results be verified on the basis of current experimental data. This should be carried out by replicate analysis of samples taken in accordance with the definition of the measurand, but representing the full range of matrices and concentrations for which the budget is assumed to be valid. In this way the assumptions made in the uncertainty budget can be experimentally verified, both as regards sources of variability that are assumed negligible and as regards dominant uncertainty components. Agreement between observed and expected variability is tested by means of the T-test, which follows a chi-square distribution with a number of degrees of freedom determined by the number of replicates. Significant deviations between predicted and observed variability may be caused by a variety of effects, and examples are presented; both underestimation and overestimation may occur, each leading to correction of the relevant uncertainty components according to their influence on the variability of experimental results. Some uncertainty components can be verified only with a very small number of degrees of freedom, because their influence requires samples taken at long intervals, e.g., after the acquisition of a new calibrant. It is therefore recommended to include verification of the uncertainty budget in continuous QA/QC monitoring; this will eventually provide a test even for such rarely occurring effects.
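A minimal sketch of the verification step: replicate results are compared with the variability predicted by the uncertainty budget using a chi-square-distributed statistic. The use of n − 1 degrees of freedom and all numerical values are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy import stats

replicates = np.array([5.02, 4.97, 5.10, 4.93, 5.05, 4.99, 5.08, 4.95])
u_budget = 0.04                           # standard uncertainty claimed by the budget

n = len(replicates)
s_obs = replicates.std(ddof=1)
T = (n - 1) * s_obs**2 / u_budget**2      # ~ chi-square with n - 1 dof if the budget holds

p_over = 1 - stats.chi2.cdf(T, n - 1)     # observed variability larger than predicted?
p_under = stats.chi2.cdf(T, n - 1)        # observed variability smaller than predicted?
print(f"s_obs = {s_obs:.3f}, T = {T:.1f}, p(under-budgeted) = {p_over:.3f}, p(over-budgeted) = {p_under:.3f}")
```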

17.
The methods an analytical laboratory uses must be validated as fit for purpose. The fitness for purpose of a quantitative method used to determine the concentration of a substance when assessing compliance with requirements can be described by the maximum acceptable measurement uncertainty, called the target measurement uncertainty. Acceptance criteria for precision and bias in method validation are then established in terms of the target measurement uncertainty. The target measurement uncertainty can be decided by following a process which involves determining the required concentration range of the measurand; determining the acceptable level of risk of incorrect compliance decisions; developing a suitable decision rule, with guard bands if appropriate; using the probability of making an incorrect compliance decision based on the decision rule; and assessing the impact of bias. A key participant in this process is the end user of the data, the laboratory customer. This paper presents the concepts of target measurement uncertainty introduced in recently published international guidelines for the practicing analytical chemist who is generally not familiar with them. Three examples are used to illustrate the process.
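A minimal sketch of a decision rule with a guard band of the kind referred to above: compliance with an upper limit is declared only when the result lies below the limit by more than a guard band derived from the expanded uncertainty. The limit, the target uncertainty and the simple choice g = U are illustrative assumptions, not the guidelines' prescription.

```python
def decide(result, limit=50.0, U_target=2.0):
    """Compliance decision against an upper limit using a guard band g = U_target."""
    guard_band = U_target
    if result <= limit - guard_band:
        return "compliant"
    if result > limit:
        return "non-compliant"
    return "inconclusive (result inside the guard band)"

for r in (46.0, 49.0, 51.0):
    print(r, "->", decide(r))
```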

18.
The propagation stage of uncertainty evaluation, known as the propagation of distributions, is in most cases approached through the GUM (Guide to the Expression of Uncertainty in Measurement) uncertainty framework, which is based on the law of propagation of the uncertainties assigned to the various input quantities and on characterization of the measurand (output quantity) by a Gaussian or a t-distribution. Recently, a Supplement to the ISO-GUM was prepared by the JCGM (Joint Committee for Guides in Metrology). This Supplement gives guidance on propagating the probability distributions assigned to the various input quantities through a numerical simulation (Monte Carlo Method) and on determining a probability distribution for the measurand. In the present work the two approaches were used to estimate the uncertainty of the direct determination of cadmium in water by graphite furnace atomic absorption spectrometry (GFAAS). The expanded uncertainties (at the 95% confidence level) obtained with the GUM Uncertainty Framework and the Monte Carlo Method at the concentration level of 3.01 μg/L were ±0.20 μg/L and ±0.18 μg/L, respectively; the GUM Uncertainty Framework thus slightly overestimates the overall uncertainty, by about 10%. Even after taking into account additional sources of uncertainty that the GUM Uncertainty Framework considers negligible, the Monte Carlo Method again gives the same uncertainty result (±0.18 μg/L). The main source of this difference is the approximation used by the GUM Uncertainty Framework in estimating the standard uncertainty of the calibration curve produced by least squares regression. Although the GUM Uncertainty Framework proves adequate in this particular case, the Monte Carlo Method generally has features that avoid the assumptions and limitations of the GUM Uncertainty Framework.
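A minimal sketch of the two propagation routes compared above, applied to a toy model y = x1 / x2 rather than to the paper's GFAAS calibration: the first-order law of propagation of uncertainty versus Monte Carlo sampling of the assigned distributions. All values are illustrative.

```python
import numpy as np

x1, u1 = 3.01, 0.05     # e.g. instrument-derived quantity and its standard uncertainty
x2, u2 = 1.00, 0.03     # e.g. sensitivity or recovery factor and its standard uncertainty

# GUM uncertainty framework: first-order law of propagation for y = x1 / x2
y = x1 / x2
u_gum = y * np.sqrt((u1 / x1) ** 2 + (u2 / x2) ** 2)

# Monte Carlo method: propagate the assigned distributions directly
rng = np.random.default_rng(4)
ys = rng.normal(x1, u1, 500_000) / rng.normal(x2, u2, 500_000)
lo, hi = np.percentile(ys, [2.5, 97.5])

print(f"GUM: y = {y:.3f} +/- {2 * u_gum:.3f} (k = 2)")
print(f"MCM: y = {ys.mean():.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```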

19.
This work presents a Combined Standard Uncertainty (CSU) calculation procedure that can be applied to spectrophotometric measurements. For the assessment of the computations, different approaches are discussed, such as the contributions to the CSU of reproducibility, repeatability, total bias, the calibration curve, and the type of measurand. Results of inter-laboratory measurements confirmed the assumptions. To minimize error propagation, a controlled experimental procedure, called "errors propagation break-up" (ERBs), was applied by this laboratory. The uncertainty of the sample concentration derived from a reference curve dominates the Combined Standard Uncertainty, whereas the contribution of the method and laboratory bias (total bias) to the CSU is insignificant under controlled measurement conditions. This work develops a simple methodology that can be used to evaluate uncertainty and control errors in routine methods used by both academic researchers and the industrial sector.

20.
The residual liquid junction potential (RLJP) needs to be accounted for in pH uncertainty estimation. Previous attempts to do this and the currently available methods for evaluating the RLJP are critically discussed and their weaknesses pointed out. In this work an empirical approach to the problem is proposed. It is based on the RLJP bias estimated under a variety of measurement conditions for a specific class of analytical objects that differ substantially in ionic strength from the pH calibration buffers. Data from five independent studies of pH measurement in low ionic strength waters, including interlaboratory comparisons, were used to find the overall bias observed in a 10⁻⁴ mol dm⁻³ strong acid solution. The procedure includes quantifying the uncertainty of the bias values from the separate studies by combining the relevant uncertainty components and testing the consistency of the data. The weighted mean bias in pH was found to be 0.043 ± 0.007 (k = 2). With this estimate, the pH measurement uncertainties calculated according to the previously suggested procedure (I. Leito, L. Strauss, E. Koort, V. Pihl, Accredit. Qual. Assur. 7 (2002) 242-249) can be enlarged to take the uncorrected bias into account. The resulting uncertainties, at the level of 0.10-0.14 (k = 2), are obtained in this way for pH measurement in water and in poorly buffered aqueous solutions in the range pH 7.5-3.5.
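A minimal sketch of the pooling step described above: bias estimates from independent studies, each with its own standard uncertainty, are combined as an uncertainty-weighted mean, and the consistency of the data is checked with a chi-square statistic. The five bias values and their uncertainties are illustrative, not the paper's data.

```python
import numpy as np
from scipy import stats

bias = np.array([0.050, 0.038, 0.047, 0.035, 0.044])   # bias estimates, pH units
u    = np.array([0.010, 0.008, 0.012, 0.009, 0.007])   # their standard uncertainties

w = 1.0 / u**2
b_mean = np.sum(w * bias) / np.sum(w)                  # weighted mean bias
u_mean = 1.0 / np.sqrt(np.sum(w))                      # its standard uncertainty

chi2_obs = np.sum(w * (bias - b_mean) ** 2)            # consistency statistic
p = 1 - stats.chi2.cdf(chi2_obs, len(bias) - 1)

print(f"weighted mean bias = {b_mean:.3f} +/- {2 * u_mean:.3f} (k = 2), consistency p = {p:.2f}")
```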
