Similar documents (20 results)
1.
Existing methods have been applied to estimate the uncertainty of measurement, caused by both sampling and analysis, and the fitness for purpose of these measurements. A new approach has been taken to modify the measurement uncertainty by changing the contribution made by the sampling process. A case study on nitrate in lettuce has been used to demonstrate the applicability of this new generic approach. The sampling theory of Gy was used to predict the alterations in the sampling protocol required to achieve the necessary change in sampling uncertainty. An experimental application of this altered sampling protocol demonstrated that the predicted change in sampling uncertainty was achieved in practice. For the lettuce case study, this approach showed that composite samples containing 40 heads, rather than the usual ten heads, produced measurements of nitrate that were more fit for purpose.
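The scaling behind the lettuce case study can be sketched as follows: under Gy's theory, the sampling standard deviation of a composite built from n independent increments falls as 1/sqrt(n). The numbers below are illustrative, not values from the paper.

```python
import math

def composite_sampling_sd(s_increment: float, n_increments: int) -> float:
    """Sampling standard deviation of a composite of n independent
    increments: falls as 1/sqrt(n) with composite size."""
    return s_increment / math.sqrt(n_increments)

# Hypothetical single-head sampling sd of 20% (assumed for illustration):
s_10 = composite_sampling_sd(20.0, 10)   # the usual 10-head composite
s_40 = composite_sampling_sd(20.0, 40)   # the enlarged 40-head composite
# Quadrupling the number of heads halves the sampling uncertainty.
```

This is why moving from 10 to 40 heads gives a predictable (roughly twofold) reduction in the sampling contribution to uncertainty.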

2.
On three fields of arable land of (3–6)×10^4 m^2, simple reference sampling was performed by taking up to 195 soil increments from each field, applying a systematic sampling strategy. From the analytical data, reference values for 15 elements were established, which should represent the average analyte mass fraction of the areas. A “point selection standard deviation” was estimated, from which a prediction of the sampling uncertainty was calculated for the application of a standard sampling protocol (X-path across the field, 20 increments in total for a composite sample). Predicted mass fractions and associated uncertainties are compared with the results of a collaborative trial of 18 experienced samplers, who had applied the standard sampling protocol on these fields. In some cases, bias between reference and collaborative values is found. Most of these biases can be explained by analyte heterogeneity across the area, in particular on one field, which was found to be highly heterogeneous for most nutrient elements. The sampling uncertainties estimated from the reference sampling were often somewhat smaller than those from the collaborative trial. It is suspected that the influence of sample preparation and the variation due to the sampler were responsible for these differences. For the applied sampling protocol, the uncertainty contribution from sampling is generally in the same range as the uncertainty contribution from analysis. From these findings, some conclusions were drawn, especially about the consequences for a sampling protocol if a demanded “certainty of trueness” for the measurement result is to be met in routine sampling.

3.
A proposed sampling constant for use in geochemical analysis
Ingamells CO, Switzer P. Talanta 1973, 20(6): 547–568
The error in a determination of an element in a rock or mineral sample depends on the analytical error, the weight of sample analysed, and the nature and history of the laboratory sample. The most probable result is not independent of the weight of sample analysed. This is due to the fact that trace constituents often reside in isolated mineral grains. The chance of such mineral grains appearing in any one analysed sample becomes more remote as the sample weight decreases, even when rock or mineral samples are reduced to fine powders. Such subsampling errors can be controlled through the use of sampling constants. These may be estimated by several procedures, including repetitive determination of a constituent and physical measurement of relevant sample characteristics. Sampling constants can be usefully employed during the establishment and certification of reference samples or standards. When subsampling is deficient, analytical results may yield erroneously low values, sometimes with high precision. High precision never implies high accuracy; it may be a symptom of gross error.
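Ingamells' sampling constant can be estimated from replicate determinations via K_s = R^2 · w, where R is the relative standard deviation (%) of replicates on subsamples of mass w grams; K_s is then the subsample mass giving a 1% subsampling error. A minimal sketch, with hypothetical replicate data, assuming analytical error is negligible relative to the subsampling error:

```python
import statistics

def sampling_constant(results: list, subsample_mass_g: float) -> float:
    """Ingamells' K_s = R^2 * w: R is the relative standard deviation (%)
    of replicate determinations on subsamples of mass w (g).  K_s is the
    subsample mass giving a 1% relative subsampling error, assuming the
    analytical contribution to R is negligible."""
    rsd_pct = 100.0 * statistics.stdev(results) / statistics.mean(results)
    return rsd_pct ** 2 * subsample_mass_g

def mass_for_rsd(k_s: float, target_rsd_pct: float) -> float:
    """Subsample mass (g) holding the subsampling error at target_rsd_pct."""
    return k_s / target_rsd_pct ** 2

# Hypothetical replicate determinations (mg/kg) on 0.5 g subsamples:
k_s = sampling_constant([9.6, 10.3, 10.0, 9.4, 10.7], 0.5)
```

In a fuller treatment the analytical variance would be subtracted from the observed variance before computing R.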

4.
The design of an experiment for the evaluation of sampling uncertainty in the framework of the fitness-for-purpose concept is described in terms of probabilities (risks of the user) of type I and type II errors in decisions concerning the significance of effects influencing the sampling uncertainty and the measurement results. As a case study, an experiment based on the duplicate method for quantification of the sampling uncertainty and inhomogeneity (heterogeneity) of a melt of tin-free bronze produced in a 10-ton reflective oven is analyzed. The melt is defined as the sampling target. It is shown that the number of such targets (melts), the number of samples under analysis and the number of replicate analyses can be minimized, i.e., the size and cost of the experiment can be reduced, if the user knows which risks are acceptable. When inhomogeneity of the sampling target has a systematic character, like the decrease of the mass fraction of aluminum from the beginning to the end of the melt pouring in the case study, the inhomogeneity effect can be separated from the sampling uncertainty and evaluated according to the user’s needs.

5.
Modern measurement systems for food components often require the use of ever smaller sample sizes, down to milligrams for some new microtechniques, which puts a stronger demand on the development of reference materials with defined homogeneity for subsampling. One approach to evaluating the homogeneity of materials is the characterization of sampling constants, defined as the amount of material that gives a 1% error for subsampling. This approach was developed for geological sampling and has been applied in a limited way to inorganic components in food/biological materials. We have extended this approach to the determination of the sampling constants for an organic component, niacin, in the SRM 1846 Infant Formula material. This material was produced by blending a dry vitamin mix (5% by weight) into the bulk spray-dried powder, for long-term stability purposes. By analyzing similar aliquots of a reconstituted homogeneous fluid solution of a large sample size, in comparison to smaller portions of dry powder, an estimate of the variation due to sampling can be separated from estimates of variation due to analysis. Using either the AOAC microbiological method or a newly developed HPLC method, sampling constants for the niacin content of SRM 1846 are in the range of 1–3 g; use of smaller subsamples can introduce significant variation into determinations using this SRM.

6.
Peanuts contain proteins that can cause severe allergic reactions in some sensitized individuals. Studies were conducted to determine the percentage of recovery by an enzyme-linked immunosorbent assay (ELISA) method in the analysis for peanuts in energy bars and milk chocolate and to determine the sampling, subsampling, and analytical variances associated with testing energy bars and milk chocolate for peanuts. Food products containing chocolate were selected because their composition makes sample preparation for subsampling difficult. Peanut-contaminated energy bars, noncontaminated energy bars, incurred milk chocolate containing known levels of peanuts, and peanut-free milk chocolate were used. A commercially available ELISA kit was used for analysis. The sampling, sample preparation, and analytical variances associated with each step of the test procedure to measure peanut protein were determined for energy bars. The sample preparation and analytical variances were determined for milk chocolate. Variances were found to be functions of peanut concentration. Sampling and subsampling variability associated with energy bars accounted for 96.6% of the total testing variability. Subsampling variability associated with powdered milk chocolate accounted for >60% of the total testing variability. The variability among peanut test results can be reduced by increasing sample size, subsample size, and number of analyses. For energy bars the effect of increasing sample size from 1 to 4 bars, subsample size from 5 to 20 g, and number of aliquots quantified from 1 to 2 on reducing the sampling, sample preparation, and analytical variance was demonstrated. For powdered milk chocolate, the effects of increasing subsample size from 5 to 20 g and number of aliquots quantified from 1 to 2 on reducing sample preparation and analytical variances were demonstrated. 
This study serves as a template for application to other foods, and for extrapolation to different sizes of samples and subsamples as well as numbers of analyses.
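The effect of replication described above follows from the additivity of independent variance components: averaging over more samples, subsamples and aliquots divides each component accordingly. A minimal sketch with illustrative variances (not the study's values):

```python
def variance_of_mean(v_samp: float, v_prep: float, v_anal: float,
                     n_samp: int = 1, n_prep: int = 1, n_anal: int = 1) -> float:
    """Variance of the averaged test result when n_samp samples are taken,
    n_prep subsamples are prepared per sample, and n_anal aliquots are
    quantified per subsample (components assumed independent)."""
    return (v_samp / n_samp
            + v_prep / (n_samp * n_prep)
            + v_anal / (n_samp * n_prep * n_anal))

# Hypothetical component variances, dominated by sampling as in the study:
v_base = variance_of_mean(9.0, 2.0, 1.0)                                # 12.0
v_more = variance_of_mean(9.0, 2.0, 1.0, n_samp=4, n_prep=4, n_anal=2)  # 2.40625
```

Because sampling dominates, increasing the number of primary samples pays off far more than extra aliquot analyses.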

7.
Lyn JA, Ramsey MH, Fussell RJ, Wood R. The Analyst 2003, 128(11): 1391–1398
A methodology is proposed, which employs duplicated primary sampling and subsequent duplicated physical preparation coupled with duplicated chemical analyses. Sample preparation duplicates should be prepared under conditions that represent normal variability in routine laboratory practice. The proposed methodology requires duplicated chemical analysis on a minimum of two of the sample preparation duplicates. Data produced from the hierarchical design are treated with robust analysis of variance (ANOVA) to generate uncertainty estimates, as standard uncertainties ('u', expressed as standard deviation), for primary sampling (s_samp), physical sample preparation (s_prep) and chemical analysis (s_anal). The ANOVA results allow the contribution of the sample preparation process to the overall uncertainty to be assessed. This methodology has been applied for the first time to a case study of pesticide residues in retail strawberry samples. Duplicated sample preparation was performed under ambient conditions on two consecutive days. Multi-residue analysis (quantification by GC-MS) was undertaken for a range of incurred pesticide residues, including those suspected of being susceptible to loss during sample preparation procedures. Sampling and analytical uncertainties dominated at low analyte concentrations. The sample preparation process contributed up to 20% to the total variability and had a relative uncertainty (U_prep, %) of up to 66% (for bupirimate at 95% confidence). Estimates of systematic errors during physical sample preparation were also made using spike recovery experiments. Four options for the estimation of measurement uncertainty are discussed, which both include and exclude systematic error arising from sample preparation and chemical analysis. A holistic approach to the combination and subsequent expression of uncertainty is advised.
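The principle of the nested duplicate design can be sketched with the two-level (sampling/analysis) case; the paper adds a third, sample-preparation level, and uses robust rather than classical ANOVA. This is a classical-ANOVA sketch, not the authors' implementation:

```python
def duplicate_anova(targets):
    """Classical (non-robust) ANOVA for the balanced duplicate design:
    each target yields two samples, each analysed in duplicate.
    targets = [((a11, a12), (a21, a22)), ...].
    Returns (s_samp, s_anal) as standard deviations."""
    d2_anal, d2_samp = [], []
    for s1, s2 in targets:
        # E[(x1 - x2)^2] = 2 * var for duplicates from the same population
        d2_anal += [(s1[0] - s1[1]) ** 2, (s2[0] - s2[1]) ** 2]
        m1, m2 = sum(s1) / 2, sum(s2) / 2
        d2_samp.append((m1 - m2) ** 2)
    var_anal = sum(d2_anal) / (2 * len(d2_anal))
    # Variance of a sample mean of 2 analyses = var_samp + var_anal / 2
    between = sum(d2_samp) / (2 * len(d2_samp))
    var_samp = max(between - var_anal / 2, 0.0)   # truncate negative estimates
    return var_samp ** 0.5, var_anal ** 0.5

# Synthetic check: no analytical spread, sample means 2 apart -> s_samp = sqrt(2)
s_samp, s_anal = duplicate_anova([((10.0, 10.0), (12.0, 12.0))])
```

A robust version would iteratively down-weight outlying duplicate pairs instead of using raw squared differences.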

8.
Sampling for chemical analysis is almost always a multi-step process, and every step contributes to the overall uncertainty of the analytical result. Once a sample has been taken, errors made in the earlier sampling stages cannot be corrected later, however carefully the subsequent sampling steps are carried out. The primary sampling is the most important: its variance usually far exceeds that of the laboratory measurement. This does not mean, however, that the principles of sampling theory can be ignored at the stage of preparing the final laboratory test portion. Modern analytical instruments are designed to handle small samples (from milligrams to a few grams). In such cases, if the sample is a particulate mixture containing the analyte in only a small number of particles, the heterogeneity of the material may be large enough to invalidate the entire analytical procedure. Heterogeneity calculations and estimation of the variance of the fundamental sampling error during sample preparation are therefore essential for developing a fit-for-purpose analytical procedure. In the final steps of sample preparation, the new increments form a substantial fraction of the parent increments, and this effect must be taken into account when estimating the sampling variance. The Theory of Sampling (TOS) provides the tools for handling these situations. The application of heterogeneity calculations is illustrated with two case studies: in the first, the constitution heterogeneity of a low-level additive in chicken feed is evaluated; in the second, sample preparation is optimized for calibrating an infrared instrument used to determine the mineral impurity content of a wollastonite concentrate. Heterogeneity evaluation is also important when dealing with particulate mixtures and assessing mixing efficiency.

9.
Chemical analysis is a multi-stage process, which starts with primary sampling and ends with evaluation of the results. Especially in trace analysis and microanalysis of solid materials, sampling can far outweigh all other sources of error. For estimating the reliability of complete analytical procedures, a method is needed which can be used to estimate the errors made in the primary and secondary sampling and sample preparation steps. Based on Gy's theory of sampling, a computer program (SAMPEX) was written for the solution of practical sampling problems. The method involves the estimation of the sampling constant, C. For well-characterized materials, C can be estimated from the material properties. If the necessary material properties are difficult to estimate, C can be evaluated experimentally. The program can be used to solve the following problems: minimum sample size for a tolerated relative standard deviation of the fundamental sampling error; relative standard deviation of the fundamental sampling error for a given sample size; maximum particle size of the material for a specified standard deviation and sample size; balanced design of a multi-stage sampling and sample-reduction process; and sampling for particle size determination.
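The first three problems in that list all rearrange Gy's simplified relation M = C·d³/s², linking sample mass M (g), sampling constant C (g/cm³), top particle size d (cm) and the relative standard deviation s of the fundamental sampling error. A minimal sketch (illustrative values, not SAMPEX itself):

```python
def min_sample_mass(C: float, d_cm: float, rel_sd: float) -> float:
    """Minimum sample mass (g) holding the fundamental sampling error at
    relative standard deviation rel_sd, via Gy's M = C * d^3 / s^2."""
    return C * d_cm ** 3 / rel_sd ** 2

def max_particle_size(C: float, mass_g: float, rel_sd: float) -> float:
    """Inverse problem: largest top particle size (cm) for a given mass and sd."""
    return (mass_g * rel_sd ** 2 / C) ** (1.0 / 3.0)

# E.g. C = 50 g/cm^3, 0.1 cm top particle size, 1% tolerated fundamental error:
m = min_sample_mass(50.0, 0.1, 0.01)     # 500 g
```

Grinding to a smaller d before splitting is what makes small analytical test portions feasible: the required mass falls with the cube of the particle size.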

10.
The methodology of evaluating the performance of sampling, sample preparation, and subsampling is reviewed, and the requirements to be set for a successful experiment are revisited. The central role of the reference method is explained, as is the choice of the parameters and the measurement methods. Based on the principles of the “Guide to the expression of uncertainty in measurement” (GUM), a statistical model is developed that demonstrates the influence of the experimental design on the outcome of the assessment experiment. This relationship is often overlooked in practice, as it is hardly mentioned in written standards dealing with this kind of quality assessment. The statistical framework thus developed covers the statistical procedures commonly appearing in written standards. Finally, the issue of testing the significance of the bias obtained from the experiment is discussed. Received: 14 June 1997 · Accepted: 2 September 1997

11.
12.
The possibility of using interlaboratory study repeatability and reproducibility estimates as the basis for measurement uncertainty estimates is discussed. It is argued that collaborative trial reproducibility is an appropriate basis for estimating uncertainty in routine testing provided certain conditions are met by the laboratory. The primary shortcomings relate to the establishment of traceability and consequent estimation of bias associated with the method, and to quantitatively establishing the relevance to the single laboratory. Approaches to resolving both difficulties are proposed, the former via full implementation of the trueness determination suggested in ISO 5725:1994 or by independent checks on individual accuracy and precision, the latter via a reconciliation procedure. The paper also discusses other factors including sampling and sample pre-treatment, change in sample matrix, and the influence of the level of analyte. Received: 28 October 1997 · Accepted: 17 November 1997

13.
Appropriate sampling, which includes the estimation of measurement uncertainty, is proposed in preference to representative sampling without estimation of overall measurement quality. To fulfil this purpose the uncertainty estimate must include contributions from all sources, including the primary sampling, sample preparation and chemical analysis. It must also include contributions from systematic errors, such as sampling bias, rather than from random errors alone. Case studies are used to illustrate the feasibility of this approach and to show its advantages for improved reliability of interpretation of the measurements. Measurements with a high level of uncertainty (e.g. 50%) can be shown to be fit for some specified purposes using this approach. Once reliable estimates of the uncertainty are available, a probabilistic interpretation of results can be made. This allows financial aspects to be considered in deciding upon what constitutes an acceptable level of uncertainty. In many practical situations “representative” sampling is never fully achieved. This approach recognises this and instead provides reliable estimates of the uncertainty around the concentration values that imperfect, appropriate sampling causes. Received: 28 December 2001 · Accepted: 25 April 2002

14.
A reference database was used for the estimation of the standard uncertainties resulting from sampling, sample preparation, and analysis of soil samples from a target area in Switzerland. This evaluation was based on an extended reference sampling of the Comparative Evaluation of European Methods for Sampling and Sample Preparation of Soils project. Samples were taken according to the national sampling protocols of 15 European countries and were analyzed for zinc, cadmium, copper, and lead. The combined uncertainty for all laboratories was estimated according to the ISO Guide to the Expression of Uncertainty in Measurement. It was found that the sampling uncertainty was not larger than the analytical uncertainty if more than ten sample increments were taken. The uncertainty due to variation in sampling depth and sample size reduction was only significant under unfavorable conditions. On the basis of an uncertainty budget the sampling protocols can be optimized and a ranking is possible, aimed at conditions that are fit for the specific purpose. Electronic Supplementary Material is available in the online version of this article.

15.
Lyn JA, Ramsey MH, Damant AP, Wood R. The Analyst 2007, 132(12): 1231–1237
Measurement uncertainty is a vital issue within analytical science. There are strong arguments that primary sampling should be considered the first and perhaps the most influential step in the measurement process. Increasingly, analytical laboratories are required to report measurement results to clients together with estimates of the uncertainty. Furthermore, these estimates can be used when pursuing regulation enforcement to decide whether a measured analyte concentration is above a threshold value. With its recognised importance in analytical measurement, the question arises of 'what is the most appropriate method to estimate the measurement uncertainty?'. Two broad methods for uncertainty estimation are identified: the modelling method and the empirical method. In modelling, the estimation of uncertainty involves the identification, quantification and summation (as variances) of each potential source of uncertainty. This approach has been applied to purely analytical systems, but becomes increasingly problematic in identifying all such sources when it is applied to primary sampling. Applications of this methodology to sampling often utilise long-established theoretical models of sampling and adopt the assumption that a 'correct' sampling protocol will ensure a representative sample. The empirical approach to uncertainty estimation involves replicated measurements from either inter-organisational trials and/or internal method validation and quality control. A simpler method involves duplicating sampling and analysis, by one organisation, for a small proportion of the total number of samples. This has proven to be a suitable alternative to these often expensive and time-consuming trials, in routine surveillance and one-off surveys, especially where heterogeneity is the main source of uncertainty. A case study of aflatoxins in pistachio nuts is used to broadly demonstrate the strengths and weaknesses of the two methods of uncertainty estimation.
The estimate of sampling uncertainty made using the modelling approach (136%, at 68% confidence) is six times larger than that found using the empirical approach (22.5%). The difficulty in establishing reliable estimates for the input variables of the modelling approach is thought to be the main cause of the discrepancy. The empirical approach to uncertainty estimation, with the automatic inclusion of sampling within the uncertainty statement, is recognised as generally the most practical procedure, providing the more reliable estimates. The modelling approach is also shown to have a useful role, especially in choosing strategies to change the sampling uncertainty, when required.

16.
Uncertainty-based measurement quality control
According to a simple acceptance decision rule for measurement quality control, a measured value will be accepted if the expanded uncertainty of the measurements is not greater than a preset maximum permissible uncertainty. Otherwise, the measured value will be rejected. The expanded uncertainty may be calculated as the z-based uncertainty (the half-width of the z-interval) when the measurement population standard deviation σ is known or the sample size is large (30 or greater), or by a sample-based uncertainty estimator when σ is unknown and the sample size is small. The decision made based on the z-based uncertainty will be deterministic and may be assumed to be correct. However, the decision made based on a sample-based uncertainty estimator will be uncertain. This paper develops the mathematical formulations for computing the probability of acceptance for two sample-based uncertainty estimators: the t-based uncertainty (the half-width of the t-interval) and an unbiased uncertainty estimator. The risk of incorrect decision-making, in terms of the false acceptance probability and false rejection probability, is derived from the probability of acceptance. The theoretical analyses indicate that the t-based uncertainty may result in significantly high false rejection probability when the sample size is very small (especially for samples of size 2). For some applications, the unbiased uncertainty estimator may be superior to the t-based uncertainty for measurement quality control. Several examples from acoustic Doppler current profiler streamflow measurements are presented to demonstrate the performance of the t-based uncertainty and the unbiased uncertainty estimator.
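The simple acceptance rule with the t-based uncertainty can be sketched as follows (illustrative values, not the paper's streamflow data). The very large t factor at n = 2 is exactly what drives the high false-rejection probability the abstract describes.

```python
import math
import statistics

def accept(values, t_factor, u_max):
    """Simple acceptance rule: accept iff the expanded uncertainty of the
    mean, U = t * s / sqrt(n), does not exceed the maximum permissible
    uncertainty u_max.  t_factor is Student's t for n-1 degrees of freedom
    at the chosen confidence level (e.g. 12.71 for n = 2 at 95%)."""
    n = len(values)
    u = t_factor * statistics.stdev(values) / math.sqrt(n)
    return u <= u_max, u

# Two replicate measurements, hypothetical permissible uncertainty of 5.0:
ok, u = accept([10.0, 12.0], t_factor=12.71, u_max=5.0)   # rejected: U = 12.71
```

With only two values, even a modest spread inflates U past most practical limits, which is why an alternative (unbiased) uncertainty estimator can be preferable at very small n.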

17.
In every measurement procedure, it is important to know the components of measurement uncertainty affecting the quality of the measured result and the reliability of the quantified result. The procedure for estimating measurement uncertainty is not universal but depends on the method and sample type, and has to be developed in accordance with good laboratory practice. This paper compares the measurement uncertainty component estimations for three methods using high-performance liquid chromatography techniques: determination of the type and content of aromatic hydrocarbons in diesel fuels and petroleum distillates by normal-phase high-performance liquid chromatography, determination of nitrates in water samples by ion chromatography, and determination of molecular weights of polystyrene by size exclusion chromatography. Both similarities and differences were found during the measurement uncertainty component estimation, and conclusions about the influences of certain components on the result uncertainty were made.

18.
Accurate analytical results with known uncertainty are required for the safety assessment of pesticides and for testing the conformity of marketed food and feed with maximum residue limits. The available information on various sources of error was examined, with special emphasis on those which may remain unaccounted for under the current practice of many laboratories. Method validation typically covers the steps of pesticide residue determination from the extraction of spiked samples to the instrumental determination, which contribute only 10–40% of the total variance of the results. Though the variability of sampling, sample size reduction and sample processing may amount to 60–90% of the total variance, it generally remains unnoticed, leading to wrong decisions. Another important source of gross error is the mismatch between the residues analysed and those included in the relevant residue definition. Procedures which may be applied for eliminating or reducing these errors are discussed.

19.
Reliability of measurements of pesticide residues in food
This paper accounts for the major sources of error associated with pesticide residue analysis and illustrates their magnitude based on the currently available information. The sampling, sample processing and analysis may each significantly influence the uncertainty and accuracy of analytical data, and their combined effects should be considered in deciding on the reliability of the results. In the case of plant material, the average random sampling (coefficient of variation, CV=28–40%) and sample processing (CV up to 100%) errors are significant components of the combined uncertainty of the results. The average relative uncertainty of the analytical phase alone is about 17–25% in the usual 0.01–10 mg/kg concentration range. The major contributor to this error can be the gas-liquid chromatography (GLC) or high-performance liquid chromatography (HPLC) analysis, especially close to the lowest calibrated level. The expected minimum of the combined relative standard uncertainty of pesticide residue analytical results is in the range of 33–49%, depending on the sample size. The gross and systematic errors may be much larger than the random error. Special attention is required to obtain representative random samples and to eliminate the loss of residues during sample preparation and processing.
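Independent relative uncertainties such as those quoted above combine in quadrature. A minimal sketch showing how the favourable ends of the sampling and analytical ranges reproduce the quoted ~33% lower bound (sample-processing error taken as negligible here, purely for illustration):

```python
import math

def combined_cv(*cvs: float) -> float:
    """Combine independent relative standard uncertainties (CV, %) in quadrature."""
    return math.sqrt(sum(cv ** 2 for cv in cvs))

# Favourable ends of the quoted ranges: 28% sampling, 17% analysis:
low = combined_cv(28.0, 17.0)    # ~32.8%, consistent with the ~33% lower bound
```

Because the components add as squares, the largest single component (here sampling, or processing when it approaches 100%) dominates the combined uncertainty.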

20.
The EURACHEM/CITAC Guide “Measurement uncertainty arising from sampling” deals with the design and analysis of experiments for the evaluation of the sampling and analytical standard deviation when a defined sampling and analytical method is used for the determination of the concentration, expressed as mass fraction (mg/kg), of an analyte in a specified material. The Guide recommends reporting the relative expanded uncertainty and using it directly, i.e. it implicitly assumes that the standard deviation is proportional to the mass fraction even when the experimental data do not support this assumption. Example A1 (and some of the other examples in the Guide) demonstrates that this can result in extreme levels of underestimation or overestimation of the uncertainty of measurement results. Hence, such recommendations should be avoided!
