Similar Literature
20 similar documents found (search time: 31 ms)
1.
A previous related paper considered rounding error effects in the presence of underlying measurement error and presented a Bayesian approach to estimate the instrument input random error standard deviation. This addendum to the previous paper emphasizes that the effects of random error depend on the true (and usually unknown) value of the measurand, in terms of both the variance and the item-specific bias. However, it is shown that if the true values are assumed to be uniformly distributed, then instrument variance and item-specific bias can be combined into an "effective random error variance", and a strategy to estimate the effective random error variance is provided.
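The combination idea can be sketched numerically. The rounding resolution q, instrument SD, and uniform measurand range below are illustrative assumptions, not values from the paper: when the true values are uniform modulo the rounding grid, quantization contributes a variance of q^2/12 on top of the instrument variance.

```python
# Sketch (assumed values): combining instrument variance with rounding
# (quantization) variance q^2/12 under uniformly distributed true values.
import numpy as np

def effective_error_variance(sigma_instrument, q):
    """Instrument variance plus quantization variance q^2/12."""
    return sigma_instrument**2 + q**2 / 12.0

rng = np.random.default_rng(1)
q, sigma = 0.5, 0.2
true = rng.uniform(0.0, 10.0, 200_000)          # uniformly distributed measurands
measured = np.round((true + rng.normal(0.0, sigma, true.size)) / q) * q
empirical = np.var(measured - true)             # total error variance, simulated
print(empirical, effective_error_variance(sigma, q))
```

With these settings the simulated total error variance closely matches sigma^2 + q^2/12.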

2.
At least three methods of calculating the random errors or variance of molecular isotopic data are presently in use. The major components of variance are differentiated and quantified here into at least three to four individual components. The measurement error of the analyte relative to a working (whether internal or external) standard is quantified via the statistical pooled estimate of error. A statistical method for calculating the total variance associated with the difference of two individual isotopic compositions from two isotope laboratories is given, including the variances of the laboratory (secondary) and working standards, as well as those of the analytes. An abbreviated method for the estimation of error typical for chromatographic/isotope mass spectrometric methods is also presented.
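The two statistical building blocks named in the abstract, the pooled estimate of error and the variance of a difference of two independent results, can be sketched with hypothetical replicate data (the values below are made up for illustration):

```python
import numpy as np

def pooled_sd(groups):
    """Pooled standard deviation across replicate groups."""
    num = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups)
    den = sum(len(g) - 1 for g in groups)
    return float(np.sqrt(num / den))

def sd_of_difference(u_a, u_b):
    """SD of the difference of two independent results (variances add)."""
    return (u_a**2 + u_b**2) ** 0.5

runs = [[10.1, 10.3, 10.2], [9.9, 10.0, 10.1], [10.2, 10.4, 10.3]]
print(pooled_sd(runs))
print(sd_of_difference(0.3, 0.4))
```

The difference rule is the basis for comparing two laboratories' results: each laboratory's combined uncertainty (analyte plus standards) enters as one term.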

3.
Data from proficiency testing can be used to increase our knowledge of the performance of populations of laboratories, individual laboratories and different measurement methods. To support the evaluation and interpretation of results from proficiency testing an error model containing different random and systematic components is presented. From a single round of a proficiency testing scheme the total variation in a population of laboratories can be estimated. With results from several rounds the random variation can be separated into a laboratory and time component and for individual laboratories it is then also possible to evaluate stability and bias in relation to the population mean. By comparing results from laboratories using different methods systematic differences between methods may be indicated. By using results from several rounds a systematic difference can be partitioned into two components: a common systematic difference, possibly depending on the level, and a sample-specific component. It is essential to distinguish between these two components as the former may be eliminated by a correction while the latter must be treated as a random component in the evaluation of uncertainty. Received: 20 November 2000 Accepted: 3 January 2001
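A minimal sketch of separating the random variation into laboratory and time components, assuming a balanced table of hypothetical results (one result per laboratory per round); this is the standard moment-based decomposition, not necessarily the paper's exact estimator:

```python
import numpy as np

# results[i][j]: result of laboratory i in round j (hypothetical data)
results = np.array([[100.2, 100.4, 100.1],
                    [ 99.6,  99.8,  99.5],
                    [100.9, 101.1, 100.8]])
n_rounds = results.shape[1]
lab_means = results.mean(axis=1)
s_within = results.var(axis=1, ddof=1).mean()      # time (round-to-round) component
s_between = lab_means.var(ddof=1)                  # raw between-laboratory scatter
s_lab = max(s_between - s_within / n_rounds, 0.0)  # laboratory component
print(s_lab, s_within)
```

A laboratory whose mean sits far from the population mean relative to sqrt(s_lab) shows bias; a large personal round-to-round scatter relative to s_within indicates poor stability.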

4.
Sampling and the uncertainty of sampling are important tasks when industrial processes are monitored. Missing values and unequal sources can cause problems in almost all industrial fields. One major problem is that samples may not be collected during weekends; alternatively, a single composite sample may be collected over the weekend. These systematically occurring missing values (gaps) affect the uncertainties of the measurements. Another type of missing value is the random gap, caused, for example, by instrument failures. Pierre Gy's sampling theory includes tools to evaluate all error components involved in sampling heterogeneous materials. Variograms, introduced in Gy's sampling theory, have been developed to estimate the uncertainty of auto-correlated process measurements. Variographic experiments are used to estimate the variance of different sample selection strategies: random sampling, stratified random sampling and systematic sampling. In this paper both systematic and random gaps were studied using simulations and real process data taken from bark boilers of pulp and paper mills (combustion processes). When systematic gaps were examined, linear interpolation was used to fill them; cases involving composite sampling were also studied. The aims of this paper are: (1) to assess how reliably a variogram calculated from data with systematic gaps estimates the process variogram, and (2) to show how the uncertainty due to a missing gap can be estimated when reporting time-averages of auto-correlated time series measurements. The results show that when systematic gaps were filled by linear interpolation, only minor changes in the values of the variogram were observed. The differences between the variograms were consistently smallest with composite samples.
When estimating the effect of random gaps, the results show that for non-periodic processes the stratified random sampling strategy gives more reliable results than the systematic sampling strategy. Therefore stratified random sampling should be used when estimating the uncertainty of random gaps in reporting time-averages of auto-correlated time series measurements.
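The empirical variogram at the heart of this approach can be sketched on a synthetic auto-correlated series (an AR(1) process standing in for the boiler data; the coefficient and length are illustrative assumptions):

```python
import numpy as np

def variogram(x, max_lag):
    """Empirical variogram V(j) = mean of (x[t+j] - x[t])^2 / 2 over lags j."""
    return np.array([np.mean((x[j:] - x[:-j]) ** 2) / 2.0
                     for j in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
x = np.zeros(5000)                      # synthetic auto-correlated process
for t in range(1, x.size):
    x[t] = 0.9 * x[t - 1] + rng.normal()
v = variogram(x, 20)
print(v[:5])
```

For an auto-correlated process the variogram rises with lag, which is what lets different sample-selection strategies (random, stratified random, systematic) be compared in terms of variance.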

5.
Ramsey MH, Geelhoed B, Wood R, Damant AP. The Analyst, 2011, 136(7): 1313-1321
A realistic estimate of the uncertainty of a measurement result is essential for its reliable interpretation. Recent methods for such estimation include the contribution to uncertainty from the sampling process, but they only include the random and not the systematic effects. Sampling Proficiency Tests (SPTs) have been used previously to assess the performance of samplers, but the results can also be used to evaluate measurement uncertainty, including the systematic effects. A new SPT conducted on the determination of moisture in fresh butter is used to exemplify how SPT results can be used not only to score samplers but also to estimate uncertainty. The comparison between uncertainty evaluated within- and between-samplers is used to demonstrate that sampling bias is causing the estimates of expanded relative uncertainty to rise by over a factor of two (from 0.39% to 0.87%) in this case. General criteria are given for the experimental design and the sampling target that are required to apply this approach to measurements on any material.

6.
Detection capabilities are important performance characteristics of analytical procedures. There are several conceptual approaches to the subject, but most of them leave a level of ambiguity: it is not clear which measurement conditions should be used, and there is a relative lack of definition concerning blanks. Moreover, there are no systematic experimental studies concerning the influence of the uncertainty associated with bias evaluation. A new approach based on measurement uncertainty is presented for estimating the quantities that characterize detection capability. It can be applied under different measurement conditions, and no additional experiment with blanks is necessary. Starting from a model of the combined uncertainty of concentration, the estimated quantities can include the effects of random errors and the uncertainty associated with the evaluation of bias. The detection capabilities are then compared with the results obtained using other relevant approaches. Slightly higher values were obtained with the measurement uncertainty approach due to the inclusion of the uncertainty associated with bias.
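As a rough illustration of how an uncertainty-based detection limit is built (this is the generic (k_alpha + k_beta) * u0 / slope form with 5% false-positive and false-negative risks, not the paper's specific model; all values are assumed):

```python
def detection_limit(u_zero, slope, k_alpha=1.645, k_beta=1.645):
    """Detection capability in concentration units from the combined
    uncertainty u_zero of the signal near zero concentration, with
    false-positive (alpha) and false-negative (beta) risk factors."""
    return (k_alpha + k_beta) * u_zero / slope

print(detection_limit(u_zero=0.02, slope=0.5))
```

If the uncertainty of the bias evaluation is folded into u_zero, the detection limit grows accordingly, which is consistent with the "slightly higher values" the abstract reports.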

7.
The within‐device precision for quantitative assays is the square root of the total variance, which is defined as the sum of the between‐day, between‐run, and within‐run variances under a two‐stage nested random‐effects model. Currently, methods for point and interval estimations have been proposed. However, the literature on sample size determination for within‐device precision is scarce. We propose an approach for the determination of sample size for within‐device precision. Our approach is based on the probability for which the 100(1 − α)% upper confidence bound for the within‐device precision smaller than the pre‐specified claim is greater than 1 − β. We derive the asymptotic distribution of the confidence upper bound based on the modified large‐sample method for sample size determination and allocation. Our study reveals that the dominant term for sample size determination is the between‐day variance. Results of simulation studies are reported. An example with real data is used to illustrate the proposed method. Copyright © 2014 John Wiley & Sons, Ltd.
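The definition in the first sentence is simple enough to state directly in code (the variance components below are arbitrary illustrative numbers):

```python
import numpy as np

def within_device_precision(s2_day, s2_run, s2_within):
    """Within-device precision: square root of the total variance under the
    two-stage nested model (between-day + between-run + within-run)."""
    return np.sqrt(s2_day + s2_run + s2_within)

print(within_device_precision(4.0, 1.0, 4.0))
```

Because the between-day component enters the total on equal footing but is estimated from the fewest degrees of freedom, it dominates the sample size calculation, as the abstract notes.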

8.
Spectrophotometric multicomponent analysis is considered on the basis of inverse multivariate calibration with linear methods (ordinary least squares, principal component, ridge and partial least squares regression) and with the non-linear methods ACE and non-linear partial least squares. The performance of the different methods is compared by paired F-tests, using the residual mean sum of squares in the analysis-of-variance table as the estimate of the error variance. The comparison is demonstrated for the infrared spectrometric analysis of the hydroxyl group content of brown coal measured in diffuse reflectance. Although the error variances of the calibration methods differ gradually, the differences are much less pronounced at the statistical level.
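The core of such a comparison is the ratio of two residual error variances, referred to an F table; a minimal sketch with hypothetical residual vectors (computing only the statistic, not the tabulated critical value):

```python
import numpy as np

def f_statistic(res_a, res_b):
    """Ratio of residual error variances of two calibration methods,
    larger over smaller, to be compared against an F critical value."""
    s2a, s2b = np.var(res_a, ddof=1), np.var(res_b, ddof=1)
    return max(s2a, s2b) / min(s2a, s2b)

print(f_statistic([1.0, -1.0, 1.0, -1.0], [2.0, -2.0, 2.0, -2.0]))
```

Even a four-fold variance ratio, as here, may fail to reach significance with few residual degrees of freedom, which is the abstract's point about differences being "less pronounced at the statistical level".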

9.
Umbrella sampling simulations, or biased molecular dynamics, can be used to calculate the free-energy change of a chemical reaction. We investigate the sources of different sampling errors and derive approximate expressions for the statistical errors when using harmonic restraints and umbrella integration analysis. This leads to generally applicable rules for the choice of the bias potential and the sampling parameters. Numerical results for simulations on an analytical model potential are presented for validation. While the derivations are based on umbrella integration analysis, the final error estimate is evaluated from the raw simulation data, and it may therefore be generally applicable as indicated by tests using the weighted histogram analysis method.

10.
Linear regression of calibration lines passing through the origin was investigated for three models of y-direction random errors: normally distributed errors with an invariable standard deviation (SD), and log-normally and normally distributed errors with an invariable relative standard deviation (RSD). The weighted (weighting factor xi^2), geometric and arithmetic means of the ratios yi/xi estimate the calibration slope for these models, respectively. Regression of calibration lines with errors in both directions was also studied. The x-direction errors were assumed to be normally distributed random errors with either an invariable SD or an invariable RSD, both combined with a constant relative systematic error. The random errors disperse the true, unknown x-values about the plotted, demanded x-values, which are shifted by the constant relative systematic error. The systematic error biases the slope estimate while the random errors do not; they only increase the uncertainty of the slope estimate, in which an uncertainty component reflecting the range of the possible values of the systematic error must additionally be included. Received: 9 May 2000 Accepted: 7 March 2001
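The three slope estimators named in the abstract can be written out directly. With noise-free illustrative data (a hypothetical true slope of 2) all three coincide; under the respective error models each one is the appropriate choice:

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 8.0])
y = 2.0 * x                     # noise-free demo data, assumed slope 2
r = y / x                       # per-point slope ratios y_i / x_i

slope_weighted = np.sum(x**2 * r) / np.sum(x**2)   # weight x_i^2: constant-SD model
slope_geometric = np.exp(np.mean(np.log(r)))       # log-normal, constant-RSD model
slope_arithmetic = np.mean(r)                      # normal, constant-RSD model
print(slope_weighted, slope_geometric, slope_arithmetic)
```

Note that the x_i^2-weighted mean of y_i/x_i is algebraically the familiar through-origin least-squares slope, sum(x*y)/sum(x^2).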

11.
The concept of "total allowable error", investigated by Westgard and co-workers over a quarter of a century for use in laboratory medicine, comprises bias as well as random elements. Yet, to minimize diagnostic misclassifications, it is necessary to have spatio-temporal comparability of results. This requires trueness obtained through metrological traceability based on a calibration hierarchy. Hereby, the result is associated with a final uncertainty of measurement purged of known biases of procedure and laboratory. The sources of bias are discussed and the importance of commutability of calibrators and analytical specificity of the measurement procedure is stressed. The practicability of traceability to various levels and the advantages of the GUM approach for estimating uncertainty are shown.

12.
Modeling quantitative structure–activity relationships (QSAR) is considered with an emphasis on prediction. An abundance of methods are available to develop such models. Using a harmonious approach that balances the bias and variance of predictions, the best calibration models are identified relative to the bias and variance criteria used. Criteria utilized to determine the adequacy of models are the root mean square error of calibration (RMSEC) and validation (RMSEV), respective R2 values, and the norm of the regression vector. QSAR data from the literature are used to demonstrate concepts. For these data sets and criteria used, it is suggested that models obtained by ridge regression (RR) are more harmonious and parsimonious than models obtained by partial least squares (PLS) and principal component regression (PCR) when the data is mean-centered. The most harmonious RR models have the best bias/variance tradeoff, reflected by the smallest RMSEC, RMSEV, and regression vector norms and the largest calibration and validation R2 values. The most parsimonious RR models have the smallest effective rank.
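The bias/variance role of the regression-vector norm can be seen in a small sketch with synthetic mean-centered data (dimensions and the ridge parameter are arbitrary assumptions): ridge regression shrinks the norm of the regression vector relative to ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 10))
X -= X.mean(axis=0)                              # mean-centering, as in the abstract
y = X @ rng.normal(size=10) + 0.5 * rng.normal(size=40)
y -= y.mean()

def ridge(X, y, lam):
    """Closed-form ridge solution; lam = 0 gives ordinary least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

b_ols = ridge(X, y, 0.0)
b_rr = ridge(X, y, 5.0)
print(np.linalg.norm(b_ols), np.linalg.norm(b_rr))
```

A smaller regression-vector norm means predictions are less sensitive to measurement noise in new samples, which is the "variance" half of the harmony criterion.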

13.
The mean squared error is presented as a convenient parameter to be used in assessing laboratories participating in proficiency tests. Its main advantages and disadvantages are presented and compared with the z score and the normalized error. A proficiency index is proposed as the ratio of an estimate of the mean squared error over a reference uncertainty.
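The three scores being compared can be sketched side by side; the proficiency index here takes the square root of the MSE so that the ratio is dimensionally consistent with a reference uncertainty, which is an interpretive assumption, and the numbers are hypothetical:

```python
def z_score(x, assigned, sigma_p):
    """Classical z score against the standard deviation for proficiency."""
    return (x - assigned) / sigma_p

def normalized_error(x, assigned, u_lab, u_ref, k=2.0):
    """En number: deviation over the expanded combined uncertainty."""
    return (x - assigned) / (k * (u_lab**2 + u_ref**2) ** 0.5)

def proficiency_index(results, assigned, u_ref):
    """Root of the MSE (bias and spread combined) over a reference uncertainty."""
    mse = sum((x - assigned) ** 2 for x in results) / len(results)
    return mse ** 0.5 / u_ref

print(z_score(10.4, 10.0, 0.2))
print(normalized_error(10.4, 10.0, 0.1, 0.05))
print(proficiency_index([10.1, 10.2, 10.3], 10.0, 0.1))
```

Unlike the z score, the MSE-based index penalizes a persistent bias and a large spread through the same single number.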

15.
The systematic titration error which is introduced by the intersection of tangents to hyperbolic titration curves is discussed for precipitation reactions. A simple expression for the systematic titration error is derived, and S/Cx^2 is proposed as a measure of the sharpness of the titration curve. The effects of the conditional solubility product (S), the concentration of the unknown component (Cx), and the ranges used for the construction of the end-point are considered. A graphical method is presented for the selection of pairs of ranges which result in small systematic titration errors. For equal values of S/Cx^2 and 1/KCx, the optimum combinations of ranges are different for precipitation and complexation titrations; the differences are not large for values smaller than about 0.002. For titration curves with a reversed L-shape, the error is calculated when the end-point is constructed by the intersection of the tangent to the second branch of the curve with the volume axis; in this case equal ranges result in the same titration error for equal values of S/Cx^2 and 1/KCx. The systematic titration error equals -S/Cx^2 when the tangent to the curve is taken at fa = 3.0.

16.
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
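The Green-Kubo recipe itself, integrating the autocorrelation function of a flux, can be sketched on a synthetic exponentially correlated "flux" (the prefactor, which in practice carries volume and temperature factors, and all other values are illustrative assumptions, not the paper's method):

```python
import numpy as np

def green_kubo(flux, dt, prefactor, cutoff):
    """Transport coefficient as the time integral (trapezoidal rule) of the
    flux autocorrelation function, truncated at `cutoff` lags."""
    n = flux.size
    acf = np.array([np.mean(flux[:n - j] * flux[j:]) for j in range(cutoff)])
    integral = dt * (acf.sum() - 0.5 * (acf[0] + acf[-1]))
    return prefactor * integral, acf

rng = np.random.default_rng(2)
J = np.zeros(20000)                     # synthetic flux with slow ACF decay
for t in range(1, J.size):
    J[t] = 0.95 * J[t - 1] + rng.normal()
coeff, acf = green_kubo(J, dt=1.0, prefactor=1.0, cutoff=200)
print(coeff, acf[0])
```

The slow decay of the ACF here mimics the regime the abstract highlights (long heat-flux correlation times near the Debye temperature), where the choice of cutoff and the statistical error of the tail dominate the uncertainty of the coefficient.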

17.
The present contribution addresses the problem of the accurate determination of extra-column band-broadening dispersion by applying the method of moments (MOM) to pulse response experiments. The MOM provides the only mathematically rigorous way to determine the variance of a pulse response, but suffers from the fact that the obtained variance values usually depend strongly and unpredictably on the selected integration boundaries. In the present study, it is investigated whether the MOM can be made more accurate by repeating the pulse response experiment a number of times, adding all measured peaks (after retention time alignment) and subsequently performing the integration on this summed peak. Testing this approach with a number of different integration-boundary detection methods, consistently more accurate results are obtained than when the repeat response pulses are integrated individually and the variances averaged afterwards. It was also found that adding five repeats already leads to a significant improvement of the variance estimate, whereas the addition of 10-20 repeats is needed before the variance estimate converges to a steady value.
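The summed-peak idea can be sketched on synthetic noisy Gaussian peaks (the peak shape, noise level and five-repeat count are assumptions for illustration; real boundary-detection is replaced by integrating the full window):

```python
import numpy as np

def peak_variance(t, y):
    """Second central moment of a detector peak via the method of moments."""
    area = np.sum(y)
    mean = np.sum(t * y) / area
    return np.sum((t - mean) ** 2 * y) / area

rng = np.random.default_rng(4)
t = np.linspace(0.0, 20.0, 2001)
true_sigma2 = 1.5
clean = np.exp(-(t - 10.0) ** 2 / (2 * true_sigma2))
repeats = [clean + 0.01 * rng.normal(size=t.size) for _ in range(5)]

individual = np.mean([peak_variance(t, y) for y in repeats])   # average of variances
summed = peak_variance(t, np.sum(repeats, axis=0))             # variance of summed peak
print(true_sigma2, individual, summed)
```

Summing the aligned repeats raises the signal five-fold while the baseline noise grows only as sqrt(5), so the moment integrals are less corrupted by the (t - mean)^2-weighted baseline noise than any individual trace.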

18.
The use of a sequential standard addition calibration (S-SAC) can introduce systematic errors into measurement results. Whilst this error has previously been described for the determination of blank-corrected solutions, no similar treatment has been available for the quantification of the analyte mass fraction in blank solutions - a crucial first step in any analytical procedure. This paper presents the theory describing the measurement of blank solutions using S-SAC, derives the correction that needs to be applied following analysis, and demonstrates the systematic error that occurs if this correction is not applied. The relative magnitudes of this bias and the precision of extrapolated measurement values are also considered.
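For orientation, here is the plain (non-sequential) standard-addition extrapolation that the S-SAC bias analysis builds on; this is not the paper's correction, and the data are hypothetical:

```python
import numpy as np

added = np.array([0.0, 1.0, 2.0, 3.0])        # standard additions (amount units)
signal = np.array([0.50, 1.00, 1.50, 2.00])   # instrument response
slope, intercept = np.polyfit(added, signal, 1)
c0 = intercept / slope                        # extrapolated content of the unknown
print(c0)
```

In a sequential scheme the additions are made into the same vessel, so each step dilutes the previous ones; it is that cumulative dilution, left uncorrected, which produces the systematic error the paper derives for blank solutions.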

19.
The quantity ε = ⟨Φ| |H − E| |Φ⟩ gives a measure of the error in the approximate solution Φ (with corresponding energy expectation value E) to an eigenfunction of the Hamiltonian operator H of the system under consideration; this quantity vanishes for the exact function ψ. On a percentage scale (with 0% error for the exact function and 100% for a reference, approximate function), the error of Φ may be expressed as 100(ε/εr), where εr corresponds to the reference function (e.g., obtained with a minimal basis set). This approach eliminates the need of knowing beforehand the exact solution in order to have an estimate of the error of an approximate solution.
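A finite-dimensional sketch of this error measure, reading the abstract's formula as the expectation value of the operator |H − E| (the 2x2 Hamiltonian and the trial vector are arbitrary illustrative choices):

```python
import numpy as np

def error_measure(H, phi):
    """epsilon = <phi| |H - E| |phi> with E = <phi|H|phi>; vanishes when
    phi is an exact eigenvector of H."""
    phi = phi / np.linalg.norm(phi)
    E = phi @ H @ phi
    w, V = np.linalg.eigh(H - E * np.eye(H.shape[0]))
    absop = V @ np.diag(np.abs(w)) @ V.T          # spectral form of |H - E|
    return phi @ absop @ phi

H = np.array([[2.0, 0.3], [0.3, 1.0]])
w, V = np.linalg.eigh(H)
eps_exact = error_measure(H, V[:, 0])             # exact ground-state eigenvector
eps_approx = error_measure(H, np.array([0.2, 1.0]))
print(eps_exact, eps_approx)
```

Dividing eps_approx by the epsilon of a chosen reference function and multiplying by 100 gives the percentage scale described in the abstract, with no reference to the exact solution.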

20.
The precision and bias of the coulometric Karl Fischer ASTM method D1533-00 have been assessed in a collaborative ASTM round robin program involving 34 laboratories. The test materials used in this study included water-saturated 1-octanol (WSO), water-saturated 1-butanol (WSB), and a series of new and used transformer oil samples. Fundamental systematic biases have been demonstrated in the accuracy of the measurement of water in the WSO, WSB, and transformer oil samples. The systematic bias in the measurement of the WSO and WSB standards indicates that for some laboratories either the instruments were not accurate or the quantity of the standard was not measured accurately. A second type of systematic bias consisted of measurement errors associated with the selection of the Karl Fischer solvent used with each instrument, superimposed upon the error in the measurement of the water in the standards. Using the statistical calculation method of ASTM D 6300, the repeatability and reproducibility for water in transformer oil were found to be 7 mg/kg and 14 mg/kg, respectively. The method detection limit was 8 mg/kg of water in oil. The method bias was estimated based on the National Institute of Standards and Technology (NIST) Standard Reference Material (SRM) 2890, WSO, since no suitable reference material for water in transformer oil was available for this study.
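The reported limits relate to the underlying standard deviations through the conventional 2.8 factor (approximately 1.96 * sqrt(2), as used in ISO 5725-style precision statistics); the SDs below are back-calculated illustrations consistent with the reported 7 and 14 mg/kg, not values from the study:

```python
def repeatability_limit(s_r, factor=2.8):
    """Repeatability limit r = 2.8 * s_r (factor ~ 1.96 * sqrt(2))."""
    return factor * s_r

def reproducibility_limit(s_R, factor=2.8):
    """Reproducibility limit R = 2.8 * s_R."""
    return factor * s_R

print(repeatability_limit(2.5), reproducibility_limit(5.0))
```

Two single results within one laboratory are expected to differ by more than r only about 5% of the time; the same interpretation holds for R across laboratories.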
