Similar Literature
20 similar documents were found.
1.
Considering the uncertainty of measurement when assessing compliance with reference values given in compositional specifications and statutory limits is still a controversial matter. In theory, assessing compliance requires considering both type I (false positive) and type II (false negative) errors. The closer the concentration of the analyte in the sample under investigation is to the allowed concentration limit, the more critical it is to consider both types of errors. This paper describes how this could be done. The matter is discussed in the light of the most recent literature information.
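A minimal sketch of this kind of compliance decision, assuming a normal measurement model and using invented numbers (the paper's own treatment is not reproduced here): it computes the risk of falsely accepting a result that lies just below a statutory limit.

```python
# Sketch: false-acceptance risk near a limit, assuming a normal measurement
# model (illustrative only; not the paper's method).
from math import erf, sqrt

def prob_true_value_above_limit(measured: float, u: float, limit: float) -> float:
    """P(true value > limit) given a measured value and standard uncertainty u,
    treating the true value as N(measured, u^2)."""
    z = (limit - measured) / u
    return 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Example: result 0.95 mg/kg, u = 0.05 mg/kg, legal limit 1.0 mg/kg
print(prob_true_value_above_limit(0.95, 0.05, 1.0))  # ~0.16 risk of false acceptance
```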

2.
An overview is given of the existing standards and guidelines for analytical toxicology. Details are provided about guidelines concerning forensic toxicology, clinical toxicology, point-of-care testing, and an area of overlap. Guidelines and standards exist for forensic toxicological analysis in general and for specific situations, e.g., workplace drug testing and driving under the influence of drugs and alcohol. For workplace drug testing, detailed guidelines exist in the U.S.A., Australia, and Europe describing, for example, the methods used, their cut-offs and the process of sample handling. Some governments describe the methods and quality requirements for blood alcohol testing for driving under the influence of alcohol in detail in their laws. In the area of clinical toxicology, guidelines focus not only on the analytical aspects of analysis but also on the timeliness of results. According to the US- and UK-based practice guidelines for the emergency department, the turn-around time for a specific set of analytes should be 1 h or 2 h, respectively. Guidelines for point-of-care testing in analytical toxicology are either already available or under development (e.g., workplace drug testing, breath alcohol analysis). In the context of brain death and sexual assault cases, specific demands need to be imposed because of the unique aspects of drug analysis in these situations (variety of drugs used, low concentrations). Many guidelines and standards are available, and it is up to every laboratory to choose the best ones depending on its area of activity and the legal and regulatory environment. Presented at the 10th Conference Quality in the Spotlight, March 2005, Antwerp, Belgium.

3.
The jack-knife is a resampling method that is increasingly used for assessing the uncertainty in regression coefficient estimates, even when the predictor variables (X) are designed. Application of the jack-knife to designed data, however, violates a basic assumption underlying all resampling methods, namely that the resampled units should constitute a random sample from some distribution; the idea is to ‘resample the sample.’ This paper advances the view that the jack-knife should not be applied to estimate the uncertainty in regression coefficient estimates obtained from designed data, since a sound alternative is available. A literature data set is re-analyzed to lend support to this view.
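For readers unfamiliar with the resampling step under discussion, the sketch below shows a plain leave-one-out jack-knife estimate of regression-coefficient standard errors on simulated data; the paper's point is precisely that this computation is questionable when X is a designed matrix.

```python
# Sketch: leave-one-out jack-knife standard errors for OLS coefficients
# (illustrative of the general technique discussed, not the paper's analysis).
import numpy as np

def jackknife_coef_se(X: np.ndarray, y: np.ndarray):
    """Return OLS coefficients and their jack-knife standard errors.
    X: (n, p) design matrix (include a column of ones for an intercept)."""
    n = X.shape[0]
    beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
    betas = np.empty((n, X.shape[1]))
    for i in range(n):
        mask = np.arange(n) != i                     # drop observation i
        betas[i], *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    # Jack-knife variance: (n-1)/n * sum of squared deviations from the mean
    se = np.sqrt((n - 1) / n * np.sum((betas - betas.mean(axis=0)) ** 2, axis=0))
    return beta_full, se

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(30), rng.normal(size=30)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(scale=0.5, size=30)
print(jackknife_coef_se(X, y))
```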

4.
To couple method performance and QA in microbiological testing, uncertainty profiles have been developed according to relevant LODs and their confidence intervals. The percentage probability of failure is proposed to express this uncertainty. The analysis variance is divided into four categories: uncertainty originating from the sample, uncertainty originating from variations in procedure, uncertainty originating from the measurement system, and uncertainty originating from repeatability/reproducibility.

5.
We propose a new procedure for estimating the uncertainty in quantitative routine analysis. This procedure uses the information generated when the trueness of the analytical method is assessed from recovery assays. In this paper, we assess trueness by estimating proportional bias (in terms of recovery) and constant bias separately. The advantage of the procedure is that little extra work needs to be done to estimate the measurement uncertainty associated with routine samples. This uncertainty is considered to be correct whenever the samples used in the recovery assays are representative of the future routine samples (in terms of matrix and analyte concentration). Moreover, these samples should be analysed by varying all the factors that can affect the analytical method. If they are analysed in this fashion, the precision estimates generated in the recovery assays take into account the variability of the routine samples and also all the sources of variability of the analytical method. Other terms related to the sample heterogeneity, sample pretreatments or factors not representatively varied in the recovery assays should only be subsequently included when necessary. The ideas presented are applied to calculate the uncertainty of results obtained when analysing sulphides in wine by HS-SPME-GC.
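A hedged sketch of how such an uncertainty budget might be assembled: the function below simply combines, in quadrature, assumed components for precision, recovery (proportional bias) and constant bias, and is not the paper's exact expression.

```python
# Sketch (assumed formulas, not the paper's): combining the intermediate
# precision from recovery assays with the uncertainties of the estimated
# proportional bias (recovery R) and constant bias into a standard
# uncertainty for a routine result.
from math import sqrt

def routine_uncertainty(c_corrected, u_rel_precision, R, u_R, u_const_bias):
    """c_corrected     : recovery-corrected concentration of the routine sample
    u_rel_precision : relative intermediate precision from the recovery assays
    R, u_R          : mean recovery and its standard uncertainty
    u_const_bias    : standard uncertainty of the constant-bias estimate
    Returns a combined standard uncertainty (quadrature of the components)."""
    return sqrt((c_corrected * u_rel_precision) ** 2
                + (c_corrected * u_R / R) ** 2
                + u_const_bias ** 2)

# Example: 10 ug/L result, 5% precision, recovery 0.95 +/- 0.02, constant-bias u = 0.1 ug/L
print(routine_uncertainty(10.0, 0.05, 0.95, 0.02, 0.1))  # ~0.55 ug/L
```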

6.
The determination of trace element concentrations in water by electrothermal atomic absorption spectrometry (ETAAS) is a common and well established technique in many chemical testing laboratories. However, the evaluation of measurement uncertainty is not systematically implemented. The paper presents an easy step-by-step example leading to the evaluation of the combined standard uncertainty of copper determination in water using ETAAS. The major contributors to the overall measurement uncertainty are identified: the amount of copper in the water sample (which depends mainly on the absorbance measurements), the certified reference material, and the auto-sampler volume measurements. Practical aspects of how the traceability of the copper concentration in water can be established and demonstrated are also pointed out.
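A minimal sketch of the bottom-up combination step described above, with invented component values; the actual contributions for the ETAAS copper example are given in the paper itself.

```python
# Sketch: combining relative standard uncertainties in quadrature, the usual
# bottom-up (GUM) step for an ETAAS result; component values are invented
# for illustration, not taken from the paper.
from math import sqrt

def combined_expanded(c, rel_components, k=2):
    """c: measured concentration; rel_components: relative standard uncertainties
    of the independent inputs (e.g. absorbance/calibration, CRM, auto-sampler volume).
    Returns (combined standard uncertainty, expanded uncertainty with coverage factor k)."""
    u_c = c * sqrt(sum(r ** 2 for r in rel_components))
    return u_c, k * u_c

# Example: 5.0 ug/L Cu with 3% calibration/absorbance, 1% CRM, 2% volume contributions
print(combined_expanded(5.0, [0.03, 0.01, 0.02]))  # u_c ~0.19 ug/L, U ~0.37 ug/L
```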

7.
This work provides a direct comparison of several experimental approaches used in the literature to measure the fracture toughness of rubber using single edge notched in tension (SENT) specimens, with the final aim of providing guidelines for an optimal testing procedure. Digital image correlation measurements were used to gain new insights into the fracture process. SENT is experimentally advantageous because of the simple specimen preparation from laboratory plates and the small amount of material required. The most common experimental approaches to measuring the fracture toughness of rubber rely on the energy release rate, measured by the tearing energy or the J-integral parameters. This work points out the importance of experimental conditions and test procedures: long specimens and short notches are preferred, identification of fracture initiation from the front view is necessary, and the strain energy density should not be evaluated from un-notched specimens at the critical stretch level; alternative strategies are shown in this work instead.

8.
Homogeneity testing and the determination of minimum sample mass are an important part of the certification of reference materials. The smallest theoretically achievable uncertainty of certified concentration values is limited by the distribution of the analyte among the different particle size fractions of powdered biological samples. This may be of special importance if the reference material is prepared by dry mixing, a dilution technique used for the production of the new, third generation of genetically modified (GMO) plant certified reference materials. For the production of dry-mixed MON 810 maize reference material, a computer program was developed to calculate the theoretically smallest uncertainty for a selected sample intake. This model was used to compare three differently milled maize samples, and the effect of dilution on the uncertainty of the DNA content of GMO maize was estimated as well. In the case of a 50-mg sample mass, the lowest achievable standard deviation was 2% for the sample containing 0.1% GMO, and the minimum deviation was less than 0.5% for the sample containing 5% GMO. Received: 5 December 2000 / Revised: 14 March 2001 / Accepted: 19 March 2001
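The following sketch illustrates the type of particle-statistics calculation behind a "theoretically smallest uncertainty for a selected sample intake", using a simple binomial model. The model and the mean particle mass are assumptions chosen only to illustrate the orders of magnitude; they are not the paper's program or parameters.

```python
# Sketch: binomial particle model for the counting-statistics limit of the
# uncertainty at a given sample intake (assumed model, illustrative numbers).
from math import sqrt

def min_rel_std_dev(sample_mass_mg, mean_particle_mass_mg, gmo_fraction):
    """Relative standard deviation of the GMO content caused purely by counting
    statistics of GMO particles in the sample intake."""
    n_particles = sample_mass_mg / mean_particle_mass_mg   # particles per intake
    n_gmo = n_particles * gmo_fraction                      # expected GMO particles
    return sqrt((1 - gmo_fraction) / n_gmo)                 # binomial relative SD

# Example: 50 mg intake, 2e-5 mg average particle mass (finely milled flour),
# at 0.1% and 5% GMO levels
for frac in (0.001, 0.05):
    print(frac, round(min_rel_std_dev(50, 2e-5, frac), 3))
```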

9.
A modified interval hypothesis testing procedure based on paired-sample analysis is described, as well as its application in testing equivalence between two bioanalytical laboratories or two methods. This testing procedure has the advantage of reducing the risk of wrongly concluding equivalence when in fact two laboratories or two methods are not equivalent. The advantage of using paired-sample analysis is that the test is less confounded by the intersample variability than unpaired-sample analysis when incurred biological samples with a wide range of concentrations are included in the experiments. Practical aspects including experimental design, sample size calculation and power estimation are also discussed through examples.
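As background, a standard two one-sided tests (TOST) check on paired log-differences is sketched below; it illustrates the interval-hypothesis idea but is not the modified procedure proposed in the paper, and the equivalence limit and data are invented.

```python
# Sketch: standard paired TOST (interval hypothesis) for equivalence between
# two laboratories; not the paper's modified procedure.
import numpy as np
from scipy import stats

def paired_tost(x, y, theta=0.10):
    """Equivalence test on paired log-scale differences with limits +/- theta.
    Returns (mean log-difference, p-value); conclude equivalence if p is small."""
    d = np.log(np.asarray(x)) - np.log(np.asarray(y))   # paired log-differences
    n, m, s = len(d), d.mean(), d.std(ddof=1)
    t_lo = (m + theta) / (s / np.sqrt(n))               # H0: mean diff <= -theta
    t_hi = (m - theta) / (s / np.sqrt(n))               # H0: mean diff >= +theta
    p = max(1 - stats.t.cdf(t_lo, n - 1), stats.t.cdf(t_hi, n - 1))
    return m, p

lab_a = [10.2, 55.1, 120.4, 8.9, 250.3, 33.0]
lab_b = [10.0, 54.0, 118.0, 9.2, 245.0, 34.1]
print(paired_tost(lab_a, lab_b))
```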

10.
Lyn JA, Ramsey MH, Damant AP, Wood R. The Analyst, 2007, 132(12): 1231-1237
Measurement uncertainty is a vital issue within analytical science. There are strong arguments that primary sampling should be considered the first and perhaps the most influential step in the measurement process. Increasingly, analytical laboratories are required to report measurement results to clients together with estimates of the uncertainty. Furthermore, these estimates can be used when pursuing regulation enforcement to decide whether a measured analyte concentration is above a threshold value. With its recognised importance in analytical measurement, the question arises of 'what is the most appropriate method to estimate the measurement uncertainty?'. Two broad methods for uncertainty estimation are identified, the modelling method and the empirical method. In modelling, the estimation of uncertainty involves the identification, quantification and summation (as variances) of each potential source of uncertainty. This approach has been applied to purely analytical systems, but becomes increasingly problematic in identifying all such sources when it is applied to primary sampling. Applications of this methodology to sampling often utilise long-established theoretical models of sampling and adopt the assumption that a 'correct' sampling protocol will ensure a representative sample. The empirical approach to uncertainty estimation involves replicated measurements from either inter-organisational trials and/or internal method validation and quality control. A simpler method involves duplicating sampling and analysis, by one organisation, for a small proportion of the total number of samples. This has proven to be a suitable alternative to these often expensive and time-consuming trials, in routine surveillance and one-off surveys, especially where heterogeneity is the main source of uncertainty. A case study of aflatoxins in pistachio nuts is used to broadly demonstrate the strengths and weaknesses of the two methods of uncertainty estimation. The estimate of sampling uncertainty made using the modelling approach (136%, at 68% confidence) is six times larger than that found using the empirical approach (22.5%). The difficulty in establishing reliable estimates for the input variables of the modelling approach is thought to be the main cause of the discrepancy. The empirical approach to uncertainty estimation, with the automatic inclusion of sampling within the uncertainty statement, is recognised as generally the most practical procedure, providing the more reliable estimates. The modelling approach is also shown to have a useful role, especially in choosing strategies to change the sampling uncertainty, when required.
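A much-simplified sketch of the empirical "duplicate method" mentioned above, estimating a combined sampling-plus-analysis uncertainty from duplicated samples; the full scheme uses nested or robust ANOVA, and the data here are invented.

```python
# Sketch: simplest form of the empirical duplicate method - a combined
# (sampling + analysis) relative standard uncertainty from duplicate samples
# taken on a proportion of targets (illustrative simplification, invented data).
import numpy as np

def duplicate_method_uncertainty(dup1, dup2):
    """dup1, dup2: results of duplicate samples taken from the same targets.
    Returns the relative standard measurement uncertainty (as %)."""
    dup1, dup2 = np.asarray(dup1, float), np.asarray(dup2, float)
    mean = (dup1 + dup2) / 2
    # standard deviation of a single result estimated from paired differences
    s_rel = np.sqrt(np.mean(((dup1 - dup2) / mean) ** 2) / 2)
    return 100 * s_rel

a = [4.1, 10.5, 2.2, 7.8, 15.0, 5.5, 9.1, 3.3]
b = [3.6, 12.0, 2.5, 7.0, 13.2, 6.1, 8.4, 3.8]
print(round(duplicate_method_uncertainty(a, b), 1), "%")
```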

11.
A proficiency testing (PT) scheme is developed for comparability assessment of results of concrete slump and compressive strength determination. The scheme is based on preparing a test portion/sample of a concrete in-house reference material (IHRM) at a reference laboratory (RL) under the same conditions for every PT participant. Therefore, in this scheme IHRM instability is not relevant as a source of measurement/test uncertainty, while intra- and between-sample inhomogeneity parameters are evaluated using the results of RL testing of the samples taken at the beginning, the middle and the end of the PT experiment. The IHRM assigned slump and compressive strength values are calculated as averaged RL results. Their uncertainties include the measurement/test uncertainty components and the components arising from the material inhomogeneity. The test results of 25 PT participants were compared with the IHRM assigned values taking into account both the uncertainties of the assigned values and the measurement/test uncertainties of the participants. Since traceability of the IHRM assigned values to the international measurement standards and SI units cannot be stated, local comparability of the results is assessed. It is shown that comparability of the slump and compressive strength determination results is satisfactory, while uncertainty evaluation for slump results requires additional efforts.
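One standard way to compare a participant's result with an assigned value while accounting for both uncertainties is a zeta-type score, sketched below; this is an assumed, conventional choice, and the paper's exact scoring and acceptance criteria are not reproduced.

```python
# Sketch: zeta-type score taking both the participant's and the assigned
# value's uncertainties into account (standard formula, invented numbers).
from math import sqrt

def zeta_score(x_lab, u_lab, x_assigned, u_assigned):
    """|zeta| <= 2 is commonly read as satisfactory agreement."""
    return (x_lab - x_assigned) / sqrt(u_lab ** 2 + u_assigned ** 2)

# Example: slump result 155 mm (u = 6 mm) vs assigned 150 mm (u = 4 mm)
print(round(zeta_score(155, 6, 150, 4), 2))  # ~0.69 -> satisfactory
```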

12.
Considering measurement uncertainty is mandatory in assessing conformance to legal or compositional limits, and specific guidelines issued by ASME, ISO and Eurachem/CITAC are available. However, differences between the ISO and EURACHEM/CITAC wordings could puzzle even the most careful readers. Possible problems arise from considering that, before performing a test, it should be decided whether it is to be a test for conformity or a test for non-conformity. This choice could perhaps require some renaming of the acceptance/rejection zones as defined in the EURACHEM/CITAC Guide. A tentative solution is discussed in this contribution.

13.
To serve as a measurement standard, a (certified) reference material must be stable. For this purpose, the material should undergo stability testing after it has been prepared. This paper looks at the statistical aspects of stability testing. Essentially, these studies can be described with analysis of variance statistics, including variant regression analysis. The latter is used in practice for both trend analysis and for the development of expressions for extrapolations. Extrapolation of stability data is briefly touched upon, as far as the combined standard uncertainty of the reference material is concerned. There are different options to validate the extrapolations made from initial stability studies, and some of them might influence the uncertainty of the reference material and/or the shelf-life. The latter is the more commonly observed consequence of what is called 'stability monitoring'. Received: 6 October 2000 Accepted: 4 December 2000
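A minimal sketch of the regression-based trend test commonly used in such stability studies: fit the property value against time and test whether the slope differs significantly from zero. The data and significance level are illustrative, not the paper's.

```python
# Sketch: regression trend test for a stability study (illustrative data).
import numpy as np
from scipy import stats

def stability_trend(months, values, alpha=0.05):
    """Returns (slope, standard error of slope, significant_trend?)."""
    res = stats.linregress(months, values)
    t_crit = stats.t.ppf(1 - alpha / 2, len(months) - 2)
    return res.slope, res.stderr, abs(res.slope) > t_crit * res.stderr

months = [0, 3, 6, 12, 18, 24]
values = [100.2, 100.0, 99.8, 100.1, 99.7, 99.9]   # e.g. % of initial value
print(stability_trend(months, values))
```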

14.
The characteristic features and the constituents of an identification procedure for chemical substances are discussed. This procedure is a screening of identification hypotheses followed by experimental testing of each one. The testing operation consists of comparing the values of the quantities measured with other measurement results or reference data, resulting in the Student's ratio, the significance level, the matching of spectra, etc. The performance and the correctness of identification are expressed as "identification uncertainty", i.e. the probability of incorrect identification. The statistical significance level and other similarity values in spectra, chromatography retention parameters, etc. are the particular measures of uncertainty. Searching prior data and estimating the prior probability of the presence of particular compounds in the sample (matrix) to be analysed simplify the setting up and cancelling of hypotheses during screening. Usually, identification is made by the analyst taking into account measurement results, prior information and personal considerations. The estimation of uncertainty and the rules for incorporating prior data make the result of identification less subjective.

15.
The present Table of Standard Atomic Weights (TSAW) of the elements is perhaps one of the most familiar data sets in science. Unlike most parameters in physical science whose values and uncertainties are evaluated using the “Guide to the Expression of Uncertainty in Measurement” (GUM), the majority of standard atomic-weight values and their uncertainties are consensus values, not GUM-evaluated values. The Commission on Isotopic Abundances and Atomic Weights of the International Union of Pure and Applied Chemistry (IUPAC) regularly evaluates the literature for new isotopic-abundance measurements that can lead to revised standard atomic-weight values, Ar°(E) for element E. The Commission strives to provide utmost clarity in products it disseminates, namely the TSAW and the Table of Isotopic Compositions of the Elements (TICE). In 2016, the Commission recognized that a guideline recommending the expression of uncertainty listed in parentheses following the standard atomic-weight value, for example, Ar°(Se) = 78.971(8), did not agree with the GUM, which suggests that this parenthetic notation be reserved to express standard uncertainty, not the expanded uncertainty used in the TSAW and TICE. In 2017, to eliminate this noncompliance with the GUM, a new format was adopted in which the uncertainty value is specified by the “±” symbol, for example, Ar°(Se) = 78.971 ± 0.008. To clarify the definition of uncertainty, a new footnote has been added to the TSAW. This footnote emphasizes that an atomic-weight uncertainty is a consensus (decisional) uncertainty. Not only has the Commission shielded users of the TSAW and TICE from unreliable measurements that appear in the literature as a result of unduly small uncertainties, but the aim of IUPAC has been fulfilled by which any scientist, taking any natural sample from commerce or research, can expect the sample atomic weight to lie within Ar°(E) ± its uncertainty almost all of the time.

16.
An uncertainty evaluation was carried out for the determination of cadmium in shrimp powder by graphite furnace atomic absorption spectrometry. The sources of uncertainty throughout the testing process were analysed and the individual uncertainty components were calculated; for a cadmium content of 0.389 mg/kg in shrimp powder, the expanded uncertainty was 0.008 mg/kg (k = 2).

17.
One of the aspects of supervised pattern recognition applications which is often ignored concerns the minimum number of samples necessary to define sufficiently reliable classification rules. For discriminating techniques, criteria are available that can be used to evaluate this minimum sample size. For class-modelling techniques, however, no attention has previously been paid to this aspect. Here, the guidelines that can be applied in the use of a discriminating supervised technique are discussed, and criteria are proposed that can be applied when class-modelling supervised techniques, particularly UNEQ, are applied.

18.
Every attempt to use a computer to model reality involves two main uncertainties: the conceptual uncertainty and the data uncertainty. The conceptual uncertainty deals with the choice of model selected for the simulation, and the data uncertainty concerns the precision and accuracy of the input data. These data are often determined experimentally and may thus be encumbered by a number of uncertainties. Normally, when treating uncertainties in input data, the data are treated as independent variables. However, since many of these parameters are determined together, they are actually correlated. This paper focuses on chemical stability constants, a most important parameter for chemical calculations based on speciation. Commonly in the literature they are at best given with an uncertainty interval. We propose to also give the covariance matrix, thus giving the opportunity to really assess correlations. In addition, we discuss the effect of these correlations on speciation.
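A hedged sketch of what propagating a covariance matrix of stability constants can look like in practice, using Monte Carlo sampling from a multivariate normal and a toy two-complex speciation model; all constants, covariances and the ligand concentration below are invented.

```python
# Sketch: propagating correlated stability-constant uncertainties by Monte Carlo
# sampling from a multivariate normal defined by a covariance matrix
# (hypothetical log K values and covariances, toy M / ML / ML2 speciation).
import numpy as np

log_k = np.array([3.2, 5.8])                 # hypothetical log beta1, log beta2
cov = np.array([[0.010, 0.006],              # covariance matrix of the log K values;
                [0.006, 0.012]])             # off-diagonal terms carry the correlation

rng = np.random.default_rng(1)
samples = rng.multivariate_normal(log_k, cov, size=20_000)

def fraction_ml2(logk, ligand=1e-3):
    """Fraction of metal present as ML2 at a fixed free-ligand concentration,
    for cumulative formation constants beta1, beta2."""
    b1, b2 = 10.0 ** logk[:, 0], 10.0 ** logk[:, 1]
    return b2 * ligand**2 / (1 + b1 * ligand + b2 * ligand**2)

f = fraction_ml2(samples)
print(f.mean(), f.std())   # compare with the spread obtained ignoring the covariance
```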

19.
A model interlaboratory testing scheme was developed by the Italian National Reference Laboratory for Brucellosis. This scheme was planned for both qualitative (Rose Bengal Plate Test; RBPT) and quantitative (Complement Fixation Test; CFT) serological tests and involved a total of 42 laboratories. In the preparation of this scheme, reference was made to general protocols and guidelines and to methods reported in the literature which were applicable to analytical chemistry laboratories. Six field sera from naturally infected animals (one positive serum at a titer below the European Union (EU) positivity threshold and five sera positive at titers between 20 and 851 International Units of Complement Fixation Test (IUCFT)/mL), plus one negative serum, were used to produce a panel of test sera. To evaluate laboratory performance in the quantitative test for each sample examined, z-scores based on robust summary statistics (the median and normalized interquartile range) were used. To evaluate overall laboratory performance, 2 types of combined z-scores were used: the Rescaled Sum of Scores and the Sum of Squared Scores. In the case of the qualitative test (RBPT), results were analyzed by a Bayesian approach. A Beta distribution, based on the result of each laboratory, was calculated and used to estimate the probability of each laboratory giving a correct result and its uncertainty.
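For illustration, the sketch below implements the standard robust z-score (median and normalized IQR) and the two combined scores named in the abstract, RSZ and SSZ, with invented titres; the paper's own acceptance criteria are not reproduced.

```python
# Sketch: robust z-scores and combined scores (RSZ, SSZ) - standard formulas,
# invented CFT titres for illustration.
import numpy as np

def robust_z_scores(results):
    """z_i = (x_i - median) / nIQR, with nIQR = 0.7413 * IQR."""
    x = np.asarray(results, float)
    med = np.median(x)
    niqr = 0.7413 * (np.percentile(x, 75) - np.percentile(x, 25))
    return (x - med) / niqr

def combined_scores(z_matrix):
    """z_matrix: rows = laboratories, columns = samples.
    RSZ = sum(z)/sqrt(n); SSZ = sum(z^2) (compared with a chi-square)."""
    z = np.asarray(z_matrix, float)
    n = z.shape[1]
    return z.sum(axis=1) / np.sqrt(n), (z ** 2).sum(axis=1)

# One CFT sample, 8 laboratories (IUCFT/mL); the 250 result is flagged as an outlier:
print(np.round(robust_z_scores([105, 98, 110, 250, 102, 95, 108, 101]), 2))
```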

20.
Sudden sample introduction into a membrane inlet mass spectrometer (MIMS) considerably improves the selectivity of the membrane inlet and is therefore applicable even to compounds with low permeabilities through a silicone membrane. In this study the basics of cyclic non-steady-state sudden-increase sample injection were studied using a three-membrane inlet and a portable double-focusing sector mass spectrometer. The operational parameters of the inlet system providing the most efficient enrichment of volatile organic compounds (VOCs) in air were defined. Simulation of the diffusion process following sudden sample introduction into the three-membrane inlet was also carried out. Experimental testing of the three-membrane inlet system with the cyclic sudden sample injection mode was performed for benzene, toluene, styrene, and xylene in air. The simulation and the experimental results demonstrated that, when this mode is used, the VOCs/nitrogen relative enrichment factor of samples introduced into the mass spectrometer equipped with a three-membrane inlet is increased by a factor of approximately 10^5 compared with a direct introduction method. This effect may be used either to decrease the detection limits obtained with mass spectrometry or to decrease the matrix flow through the inlet at the same detection limits.
