Similar Literature
20 similar documents found.
1.
In the EURACHEM/CITAC draft "Quantifying uncertainty in analytical measurement", estimates of measurement uncertainty in analytical results for linear calibration are given. In this work these estimates are compared, i.e. the uncertainty deduced from repeated observations of the sample vs. the uncertainty deduced from the residual standard deviation of the regression. This study shows that an uncertainty estimate based on repeated observations can give more realistic values if the condition of variance homogeneity is not fulfilled in the calibration range. The complete calculation of measurement uncertainty, including assessment of trueness, is illustrated by an example concerning the determination of zinc in sediment samples using ICP atomic emission spectrometry. Received: 9 February 2002 Accepted: 17 April 2002
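The two estimates being compared can be illustrated with a minimal sketch: one concentration uncertainty is propagated from the residual standard deviation of the regression, the other from repeated observations of the sample. All calibration and replicate data below are hypothetical.

```python
import math

# Hypothetical calibration data: concentration (x) vs. instrument signal (y)
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.10, 1.05, 2.10, 2.95, 4.05]
n = len(x)

# Ordinary least-squares slope and intercept
xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
intercept = ybar - slope * xbar

# Residual standard deviation of the regression (n - 2 degrees of freedom)
s_res = math.sqrt(sum((yi - (intercept + slope * xi)) ** 2
                      for xi, yi in zip(x, y)) / (n - 2))

# Estimate 1: concentration uncertainty propagated from the regression
y_sample = 2.00
x_pred = (y_sample - intercept) / slope
u_regression = (s_res / slope) * math.sqrt(
    1 + 1 / n + (y_sample - ybar) ** 2 / (slope ** 2 * sxx))

# Estimate 2: uncertainty from m repeated observations of the sample itself
replicates = [1.98, 2.02, 2.01]
m = len(replicates)
mean_rep = sum(replicates) / m
s_rep = math.sqrt(sum((r - mean_rep) ** 2 for r in replicates) / (m - 1))
u_repeat = s_rep / (slope * math.sqrt(m))
```

When the signal precision at the sample level differs from the pooled residual scatter of the calibration (heteroscedasticity), the two routes can diverge noticeably, which is the point the abstract makes.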

2.
Calibration is an operation whose main objective is to establish the metrological status of a measurement system. In the analytical sciences, however, calibration has special connotations, since it is the basis for quantifying the amount of one or more components (analytes) in a sample, or for obtaining the value of one or more analytical parameters related to that quantity. In this context, the aim of analytical calibration is to find an empirical relationship, called the measurement function, which subsequently permits calculation of the amount (x-variable) of a substance in a sample from the values of an analytical signal (y-variable) measured on it. In this paper, the metrological bases of analytical calibration and quantification are established, and the different work schemes and calibration methodologies that can be applied, depending on the characteristics of the sample (analyte + matrix) to be analysed, are distinguished and discussed. Likewise, the different terms and related names are clarified. Special attention is paid to analytical methods that use separation techniques, in relation to their effect on calibration operations and later analytical quantification.

3.
A methodology for the worst case measurement uncertainty estimation for analytical methods which include an instrumental quantification step, adequate for routine determinations, is presented. Although the methodology presented should be based on a careful evaluation of the analytical method, the resulting daily calculations are very simple. The methodology is based on the estimation of the maximum value for the different sources of uncertainty and requires the definition of limiting values for certain analytical parameters. The simplification of the instrumental quantification uncertainty estimation involves the use of the standard deviation obtained from control charts relating to the concentrations estimated from the calibration curves for control standards at the highest calibration level. Three levels of simplification are suggested, as alternatives to the detailed approach, which can be selected according to the proximity of the sample results to decision limits. These approaches were applied to the determination of pesticide residues in apples (CEN, EN 12393), for which the most simplified approach showed a relative expanded uncertainty of 37.2% for a confidence level of approximately 95%.

4.
High-throughput quantification with label-free methods has received considerable attention in electrospray ionization (ESI) mass spectrometry (MS), but the manner in which MS signals respond to peptide concentration remains unclear in proteomics. We developed a new mathematical formula to describe the intrinsic log-log relationship between the MS intensity response and peptide concentration in an analytical ESI process. Experimental results showed that the calibration curve fits the log-log formula well, with a linear dynamic range of approximately four to five orders of magnitude. However, we found that the ionization of analytical peptides can be severely suppressed by coexisting matrix peptides, such that the calibration curve can level off badly at both ends. Our study suggests that interferences from coexisting matrix peptides should be reduced in the ESI process in order to use the log-log calibration curve successfully for high-throughput quantification.
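A minimal sketch of such a log-log calibration follows, with invented intensities spanning four orders of magnitude; the fitted coefficients are illustrative, not the paper's values.

```python
import math

# Hypothetical ESI-MS responses over ~4 orders of magnitude of peptide concentration
conc = [0.01, 0.1, 1.0, 10.0, 100.0]
intensity = [2.0e2, 1.6e3, 1.3e4, 1.0e5, 8.1e5]

# Fit the log-log model: log10(intensity) = a + b * log10(conc)
lx = [math.log10(c) for c in conc]
ly = [math.log10(i) for i in intensity]
n = len(lx)
xbar = sum(lx) / n
ybar = sum(ly) / n
b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(lx, ly))
     / sum((xi - xbar) ** 2 for xi in lx))
a = ybar - b * xbar

# Invert the calibration for an unknown signal
signal = 5.0e4
estimated_conc = 10 ** ((math.log10(signal) - a) / b)
```

The leveling-off at both ends described in the abstract would appear as systematic residuals of the outermost points from this straight line in log-log space.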

5.
It is argued that results of uncertainty calculations in chemical analysis should be taken into consideration with some caution owing to their limited generality. The issue of the uncertainty in uncertainty estimation is discussed in two aspects. The first is due to the differences between procedure-oriented and result-oriented uncertainty assessments, and the second is due to the differences between the theoretical calculation of uncertainty and its quantification using the validation (experimental) data. It is shown that the uncertainty calculation for instrumental analytical methods using a regression calibration curve is result-oriented and meaningful only until the next calibration. A scheme for evaluation of the uncertainty in uncertainty calculation by statistical analysis of experimental data is given and illustrated with examples from the author's practice. Some recommendations for the design of corresponding experiments are formulated.

6.
To ensure and to confirm the required traceability according to the definition given in the International Vocabulary of Basic and Standard Terms in Metrology, three main aspects need to be considered in practice: “stated reference”, “unbroken chain of calibration” and “stated uncertainty”. For a certain spectrochemical result, each of the aspects above mentioned is highly dependent on measurement uncertainty, both on its magnitude and how it was estimated. The paper describes the experience of the Romanian National Institute of Metrology (INM) in estimating measurement uncertainty during certification of reference materials, in metrological calibration and during specific analytical processes. Practical examples of the use of reference materials or certified reference materials issued by the INM to estimate measurement uncertainty are discussed for their applicability in spectrochemical and turbidity analysis. Some aspects of the use of analysis of variance (ANOVA) to obtain additional information on the components of measurement uncertainty and to identify the magnitude of individual random effects are presented.
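The ANOVA decomposition mentioned at the end can be sketched for a small nested design (replicate values invented): the between-run mean square in excess of the within-run mean square estimates the between-run variance component.

```python
# One-way ANOVA splitting measurement variance into within-run and
# between-run components (hypothetical data: 3 runs x 3 replicates).
runs = [
    [10.1, 10.3, 10.2],
    [10.6, 10.8, 10.7],
    [10.0, 10.1, 9.9],
]
p = len(runs)       # number of runs
n = len(runs[0])    # replicates per run
grand = sum(sum(r) for r in runs) / (p * n)
run_means = [sum(r) / n for r in runs]

ss_between = n * sum((m - grand) ** 2 for m in run_means)
ss_within = sum((x - m) ** 2 for r, m in zip(runs, run_means) for x in r)
ms_between = ss_between / (p - 1)
ms_within = ss_within / (p * (n - 1))

s2_within = ms_within                                 # repeatability variance
s2_between = max(0.0, (ms_between - ms_within) / n)   # between-run variance
u_intermediate = (s2_within + s2_between) ** 0.5      # combined random component
```

This is the "magnitude of individual random effects" idea in its simplest form; a certification exercise would typically nest more factors (day, analyst, instrument).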

8.
Meloun M, Militký J, Kupka K, Brereton RG. Talanta, 2002, 57(4): 721-740
Building a calibration model with detection and quantification capabilities is identical to the task of building a regression model. Although commonly used by analysts, application of a calibration model first requires careful attention to the three components of the regression triplet (data, model, method), examining (a) the quality of the data for the proposed model; (b) the quality of the model; and (c) the fulfilment of all least-squares assumptions for the method to be used. This paper summarizes these components, describes the effects of deviations from assumptions and considers the correction of such deviations: identifying influential points is the first step in least-squares model building; the calibration task depends on the regression model used; and the least squares (LS) method rests on the assumptions of normality of errors, homoscedasticity, independence of errors, absence of overly influential data points, and independent variables not being subject to error. When some assumptions are violated, ordinary LS is unsuitable and robust M-estimates computed by the iterative method of reweighted least squares must be used. The effects of influential points, heteroscedasticity and non-normality on the calibration precision limits are also elucidated. This paper also considers the proper construction of the statistical uncertainty expressed as confidence limits predicting an unknown concentration (or amount) value, and its dependence on the regression triplet. The authors' objectives were to provide a thorough treatment that includes pertinent references, consistent nomenclature, and related mathematical formulae to show by theory and illustrative examples those approaches best suited to typical problems in analytical chemistry.
Two new S-PLUS algorithms, calibration and linear regression, enabling regression triplet analysis and the estimation of calibration precision limits, critical levels, detection limits and quantification limits with the statistical uncertainty of unknown concentrations, complete the paper.
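The iterative reweighting referred to above can be sketched as follows. This is a generic Huber-weight IRLS on invented data, not the authors' S-PLUS code: nine points lie exactly on y = x and the last point is a gross outlier.

```python
# Iteratively reweighted least squares (IRLS) with Huber weights:
# a minimal sketch of robust M-estimation for a calibration line.
x = [float(i) for i in range(10)]
y = [float(i) for i in range(9)] + [30.0]  # last point is a gross outlier

def weighted_fit(x, y, w):
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
    return ybar - b * xbar, b

w = [1.0] * len(x)
for _ in range(30):
    a, b = weighted_fit(x, y, w)
    res = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    mad = sorted(abs(r) for r in res)[len(res) // 2]  # median absolute residual
    k = 1.345 * mad / 0.6745   # Huber tuning constant on the robust scale
    w = [1.0 if abs(r) <= k else k / abs(r) for r in res]

# After reweighting, the fit tracks the nine clean points, not the outlier.
```

An ordinary LS fit of these data gives a slope near 2.1; the reweighted fit converges back toward the true slope of 1 while the outlier's weight shrinks toward zero.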

9.
Resveratrol is a polyphenol that has numerous interesting biological properties, but, per os, it is quickly metabolized. Some of its metabolites are more concentrated than resveratrol, may have greater biological activities, and may act as a kind of store for resveratrol. Thus, to understand the biological impact of resveratrol on a physiological system, it is crucial to simultaneously analyze resveratrol and its metabolites in plasma. This study presents an analytical method based on UHPLC-Q-TOF mass spectrometry for the quantification of resveratrol and of its most common hydrophilic metabolites. The use of 13C- and D-labeled standards specific to each molecule led to a linear calibration curve over a larger concentration range than described previously. The use of high resolution mass spectrometry in full scan mode enabled simultaneous identification and quantification of some hydrophilic metabolites not previously described in mice. In addition, UHPLC separation, allowing run times shorter than 10 min, can be used in studies requiring analysis of many samples.

10.
The combined uncertainty in the analytical results of solid materials for two methods (ET-AAS, analysis after prior sample digestion, and direct solid sampling) is derived by applying the Guide to the Expression of Uncertainty in Measurement of the International Organization for Standardization. For the analysis of solid materials, three uncertainty components must generally be considered: (i) those in the calibration, (ii) those in the measurement of the unknown sample and (iii) those in the analytical quality control (AQC) process. The expanded uncertainty limits for the content of cadmium and lead are calculated from analytical data of biological samples with the derived statistical estimates. For both methods the expanded uncertainty intervals are generally of similar width if all sources of uncertainty are included. The relative uncertainty limits range from 6% to 10% for the determination of cadmium and from 8% to 16% for the determination of lead. However, the different uncertainty components contribute to different degrees. Whereas with a calibration based on reference solutions (digestion method) the respective contribution may be negligible (precision < 3%), the uncertainty from a calibration based directly on a certified reference material (CRM) (solid sampling) may contribute significantly (precision about 10%). In contrast, the required AQC measurement (if the calibration is based on reference solutions) contributes an additional uncertainty component, whereas for the CRM calibration the AQC is “built-in”. For both methods, the uncertainty in the certified content of the CRM used for AQC must be considered. The estimation of the uncertainty components is shown to be a suitable tool for experimental design aimed at a small uncertainty in the analytical result.
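The three-component combination can be illustrated with the usual GUM root-sum-of-squares; the relative standard uncertainties below are invented placeholders, not values from the study.

```python
import math

# Hypothetical relative standard uncertainties (as fractions) for the three
# components named in the abstract: calibration, sample measurement, AQC.
u_cal = 0.03
u_sample = 0.04
u_aqc = 0.05

# Combined relative standard uncertainty (root sum of squares, GUM approach)
u_combined = math.sqrt(u_cal ** 2 + u_sample ** 2 + u_aqc ** 2)

# Expanded uncertainty with coverage factor k = 2 (approx. 95 % confidence)
U_expanded = 2 * u_combined
```

Because the components add in quadrature, the largest one dominates, which is why the abstract's comparison of where each method's uncertainty comes from matters for experimental design.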

11.
This paper presents a detailed study on the calibration of a thermal desorption-gas chromatography-mass spectrometry (TD-GC-MS)-based methodology for quantification of volatile organic compounds (VOCs) in gaseous and liquid samples. For the first time, it is documented to what extent three widely encountered problems affect precise and accurate quantification, and solutions to improve calibration are proposed. The first issue deals with the limited precision in MS quantification, as exemplified by high relative standard deviations (up to 40%, n=5) on response factors of a set of 69 selected VOCs in a volatility range from 16 Pa to 85 kPa at 298 K. The addition of [(2)H(8)]toluene as an internal standard, in gaseous or liquid phase, reduces this imprecision by a factor of 5. Second, the matrix in which the standard is dissolved is shown to be highly important for calibration. Quantification of gaseous VOCs loaded on a sorbent tube using response factors obtained with liquid standards results in systematic deviations of 40-80%. Relative response factors determined by the analysis of sorbent tubes loaded with both analytes and [(2)H(8)]toluene from liquid phase are shown to offer a reliable alternative for quantification of airborne VOCs, without need for expensive and often hardly available gaseous standards. Third, a strategy is proposed involving the determination of a relative response factor that is representative for a group of analytes with similar functionalities and electron impact fragmentation patterns. This group-method approach proves useful (RSD approximately 10%) for quantifying analytes belonging to that class but for which no standards are available.

12.
Since the uncertainty of each link in the traceability chain (measuring analytical instrument, reference material or other measurement standard) changes over the course of time, the chain lifetime is limited. The lifetime in chemical analysis depends on the calibration intervals of the measuring equipment and the shelf-life of the certified reference materials (CRMs) used for the calibration of the equipment. It is shown that the ordinary least squares technique, used for treatment of the calibration data, is correct only when uncertainties in the certified values of the measurement standards or CRMs are negligible. If these uncertainties increase (for example, close to the end of the calibration interval or shelf-life), they can significantly influence the calibration and measurement results. In such cases regression analysis of the calibration data should take into account that not only the response values but also the certified values are subject to errors. As an end-point criterion for destruction of the traceability chain, the requirement that the uncertainty of a measurement standard should be the source of less than one-third of the uncertainty in the measurement result is applicable. An example from analytical practice based on the data of interlaboratory comparisons of ethanol determination in beer is discussed. Received: 5 October 2000 Accepted: 3 December 2000
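The one-third end-point criterion can be sketched as a simple check on a CRM whose certified-value uncertainty drifts upward toward the end of its shelf-life; all uncertainty values here are hypothetical.

```python
# End-point criterion from the abstract: the measurement standard (CRM)
# should contribute less than one third of the result uncertainty.
u_other = 0.30  # hypothetical uncertainty from all sources except the CRM

def chain_ok(u_crm):
    # Combined standard uncertainty of the result including the CRM term
    u_result = (u_other ** 2 + u_crm ** 2) ** 0.5
    return u_crm <= u_result / 3

# CRM uncertainty drifting upward toward the end of its shelf-life
drift = [0.05, 0.08, 0.10, 0.12, 0.15]
status = [chain_ok(u) for u in drift]
```

In this toy series the chain is considered intact for the first three values and broken for the last two, i.e. the traceability chain "end of life" is reached once the CRM term exceeds one third of the combined uncertainty.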

13.
To guarantee that an analytical procedure gives reliable, exact and interpretable information about a sample, it must be validated. Two ambiguous parameters are the detection limit and the quantification limit. The determination of these limits is still of great concern, and a variety of procedures are still described in the current literature. The fundamental objective of the present work is to apply the different recommendations suggested by official guidelines to the quantitative determination of omeprazole and its impurities (omeprazole sulphone and 5-hydroxy-omeprazole) in capsules and tablets using high performance liquid chromatography with UV detection. The importance of calibration linearity in the context of the quantification limit is considered, since in one of the approaches the estimated concentration at this limit is deduced from the regression line. The values of the detection limit and the quantification limit obtained show that, in chromatographic analyses, the best method is that based on the parameters obtained from the analytical curve, which are statistically reliable. Smaller values of the detection limit and the quantification limit were obtained by the visual approach and by the method using the signal-to-noise ratio. However, these values may reflect a subjective evaluation, prone to error and large variations. This was confirmed by showing that these methods result in values that fall outside the linear range of the method.
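The calibration-curve approach the abstract favors is commonly computed as LOD = 3.3·s/slope and LOQ = 10·s/slope, with s the residual standard deviation of the regression. A sketch on invented HPLC-UV data:

```python
import math

# Hypothetical HPLC-UV calibration: concentration (ug/mL) vs. peak area
x = [0.5, 1.0, 2.0, 4.0, 8.0]
y = [10.2, 20.5, 40.1, 81.0, 160.3]
n = len(x)

xbar = sum(x) / n
ybar = sum(y) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = ybar - slope * xbar
s_res = math.sqrt(sum((yi - (intercept + slope * xi)) ** 2
                      for xi, yi in zip(x, y)) / (n - 2))

# Limits from the regression parameters (ICH-style factors 3.3 and 10)
lod = 3.3 * s_res / slope
loq = 10 * s_res / slope
```

Unlike the visual and signal-to-noise approaches, these limits are reproducible because they come entirely from the fitted calibration parameters.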

14.
Research papers in different fields of analytics indicate that matrix-induced chromatographic response enhancement (the matrix effect) is a commonly encountered problem in gas chromatography applications. In this paper, an example of the effect of sample matrix on the quantitative determination of total petroleum hydrocarbons (TPH) in soil by GC–FID is presented. Two types of soil were selected for the evaluation. Extraction and analysis of the soil samples were in accordance with CEN prEN 14039. The relative systematic error resulting from the matrix effect was obtained for three different TPH concentrations by statistical comparison of the slopes of the matrix-matched calibration lines and a pure-solvent calibration line. Overestimated TPH concentrations were obtained when conventional solvent calibration was used for quantitation. This demonstrates that matrix-matched calibration should be used in the determination of petroleum hydrocarbons in soil samples. However, the response enhancement caused by the interfering matrix also grew significantly with decreasing analyte concentration. Enhancement seems to be especially evident in the quantification of TPH over the concentration range encountered in polluted environments. As a result, even when matrix-matched calibration is used for quantitation, it is still necessary to establish the range over which a linear response can be expected; otherwise sample TPH concentrations will be overestimated.
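The slope-comparison estimate of the matrix effect can be sketched as a ratio of calibration slopes; all concentrations and responses below are invented.

```python
# Relative matrix effect estimated by comparing the slope of a matrix-matched
# calibration line with that of a pure-solvent line (hypothetical data).
def slope(x, y):
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    return (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
            / sum((xi - xbar) ** 2 for xi in x))

conc = [100.0, 200.0, 400.0, 800.0]       # TPH level, mg/kg (hypothetical)
solvent_resp = [10.0, 20.0, 40.0, 80.0]   # pure-solvent responses
matrix_resp = [13.0, 25.5, 50.0, 98.0]    # matrix-matched, enhanced responses

# Positive percentage = response enhancement by the matrix
matrix_effect_pct = 100 * (slope(conc, matrix_resp) / slope(conc, solvent_resp) - 1)
```

A statistical comparison of the two slopes (e.g. a t-test on the slope difference) then decides whether the enhancement is significant at each level.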

15.
The linear weighted regression model (LW) can be used to calibrate analytical instrumentation over a wider range of quantities (e.g. concentration or mass) than the linear unweighted regression model, LuW (i.e. the least-squares regression model), since the LW model can be applied when signals are not equally precise throughout the calibration range. If the precision of signals varies within the calibration range, the regression line should be defined taking into account that more precise signals are more reliable and should count more in defining the regression parameters. Nevertheless, the LW model requires determining how signal precision varies through the calibration range. Typically, this information is collected experimentally for each calibration, requiring a large number of replicate signals of the calibrators. This work proposes reducing the number of signals needed to perform LW calibrations by developing models of the weighting factors that are robust to daily variations of instrument sensitivity. These models were applied to the determination of the ionic composition of the water-soluble fraction of explosives. The adequacy of the developed models was tested through the analysis of control standards, certified reference materials and the ion balance of anions and cations in aqueous extracts of explosives, considering the measurement uncertainty estimated by detailed metrological models. The high success rate of the comparisons between estimated and known quantity values of reference solutions, taking the uncertainty of the results into account, proves the validity of the developed metrological models. The relative expanded measurement uncertainty of single determinations ranged from 1.93% to 35.7% for calibrations performed over 4 months.
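A weighted least-squares calibration with a modeled weighting factor can be sketched as follows; the precision model assumed here (signal standard deviation proportional to the signal) is a stand-in for the paper's robust weighting models, and the data are invented.

```python
# Weighted least-squares calibration with weights w = 1 / s^2, where the
# signal standard deviation s comes from an assumed precision model
# (sd proportional to signal) rather than per-day replicates.
conc = [1.0, 5.0, 10.0, 50.0, 100.0]
signal = [1.05, 5.1, 9.8, 51.0, 99.0]
sd = [0.02 * s for s in signal]      # assumed model: 2 % relative precision
w = [1.0 / s ** 2 for s in sd]

sw = sum(w)
xbar = sum(wi * xi for wi, xi in zip(w, conc)) / sw
ybar = sum(wi * yi for wi, yi in zip(w, signal)) / sw
slope = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, conc, signal))
         / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, conc)))
intercept = ybar - slope * xbar
```

With a precision model in place, each calibration day only needs single measurements of the calibrators instead of the full replicate set otherwise required to estimate the weights.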

16.
The best estimate and its standard deviation are calculated for the case when the a priori probability that the analyte is absent from the test sample is not zero. In the calculation, a generalization of the Bayesian prior that is used in the ISO 11929 standard is applied. The posterior probability density distribution of the true values, given the observed value and its uncertainty, is a linear combination of the Dirac delta function and the normalized, truncated, normal probability density distribution defined by the observed value and its uncertainty. The coefficients of this linear combination depend on the observed value and its uncertainty, as well as on the a priori probability. It is shown that for a priori probabilities larger than zero the lower level of the uncertainty interval of the best estimate reaches the unfeasible range (i.e., negative activities). However, for a priori probabilities in excess of 0.26 it reaches the unfeasible range even for positive observed values. The upper limit of the confidence interval covering a predefined fraction of the posterior is derived.
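The truncated-normal part of this posterior can be illustrated with the limiting case of zero prior probability of absence, i.e. the ISO 11929-style best estimate (the mean of a normal posterior truncated at zero); the paper's delta-function generalization is not reproduced here.

```python
import math

def phi(z):
    # standard normal probability density
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    # standard normal cumulative distribution via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def best_estimate(x0, u):
    """Mean of the normal posterior truncated at zero, for an observed
    value x0 with standard uncertainty u (ISO 11929-style best estimate)."""
    return x0 + u * phi(x0 / u) / Phi(x0 / u)

z_far = best_estimate(10.0, 1.0)    # far above zero: correction negligible
z_near = best_estimate(0.2, 1.0)    # near zero: estimate pulled upward
z_neg = best_estimate(-0.5, 1.0)    # even a negative observation maps to > 0
```

The truncation guarantees a non-negative best estimate; the paper's contribution is what happens to the interval limits once a point mass at zero is mixed in.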

17.
In validation of quantitative analysis methods, knowledge of the response function is essential, as it describes, within the range of application, the existing relationship between the response (the measurement signal) and the concentration or quantity of the analyte in the sample. The most common response function is obtained by simple linear regression, estimating the regression parameters slope and intercept by the least squares method as the general fitting method. The assumption in this fitting is that the response variance is constant, whatever the concentration within the range examined. The straight calibration line may perform unacceptably owing to the presence of outliers or unexpected curvature of the line. Checking the suitability of calibration lines may be performed by calculation of a well-defined quality coefficient based on a constant standard deviation. The concentration value for a test sample calculated by interpolation from the least squares line is of little value unless it is accompanied by an estimate of its random variation expressed as a confidence interval. This confidence interval results from the uncertainty in the measurement signal, combined with the confidence interval for the regression line at that measurement signal, and is characterized by a standard deviation sx0 calculated by an approximate equation. This approximate equation is only valid when the mathematical function calculating a characteristic value g from specific regression line parameters, such as the slope, the standard error of the estimate and the spread of the abscissa values around their mean, is below a critical value as described in the literature. It is mathematically demonstrated that, with respect to this critical limit value for g, the proposed value for the quality coefficient applied as a suitability check for the linear regression line as calibration function depends only on the number of calibration points and the spread of the abscissa values around their mean.
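The approximate equation for sx0 and the characteristic value g can be sketched as follows. The calibration data are invented, and the critical value g < 0.05 used in the check is the commonly quoted threshold, taken here as an assumption.

```python
import math

# Interpolation uncertainty s_x0 for a concentration read off a least-squares
# calibration line, and the characteristic value g that must stay small for
# the approximate formula to hold (all data hypothetical).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.9]
n = len(x)
m = 3        # replicate measurements of the test sample
y0 = 5.0     # mean signal of the test sample

xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
a = ybar - b * xbar
s_e = math.sqrt(sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2))

x0 = (y0 - a) / b
# Approximate standard deviation of the interpolated concentration
s_x0 = (s_e / b) * math.sqrt(1 / m + 1 / n + (y0 - ybar) ** 2 / (b ** 2 * sxx))

t = 3.182  # two-sided Student t for n - 2 = 3 degrees of freedom, 95 %
g = (t ** 2 * s_e ** 2) / (b ** 2 * sxx)  # approximation valid when g is small
```

When g approaches the critical value, the simple symmetric interval x0 ± t·s_x0 is no longer trustworthy and the exact (asymmetric) interval should be used instead.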

18.
The enforcement of legal limits for food safety raises the question of decision-making in the context of uncertain measurements. It also raises the question of demonstrating that the measurement technique used is fit for the purpose of controlling legal limits. A recent European Commission (EC) decision gives some indications of how to deal with this question. In the meantime, the implementation of quality systems in analytical laboratories is now a reality. While these requirements have deeply modified the organization of the laboratories, they have also improved the quality of the results. The goal of this communication is to describe how two fundamental requirements of the ISO 17025 standard, i.e. validation of the methods and estimation of the uncertainty of measurements, can provide a way to check whether an analytical method is fit for the purpose of controlling legal limits. These two requirements are not independent, and it will be shown how they can be combined. A recent approach based on the “accuracy profile” of a method was applied to the determination of acrylamide and illustrates how uncertainty can be simply derived from the data collected for validating the method. Moreover, based on the β-expectation tolerance interval introduced by Mee [Technometrics (1984) 26(3): 251–253], it is possible to unambiguously demonstrate the fitness for purpose of a method. Remembering that the expression of the uncertainty of the measurement is also a requirement for accredited laboratories, it is shown that the uncertainty can be easily related to the trueness and precision issuing from the data collected to build the method accuracy profile. The example presented here consists of validating a method for the determination of acrylamide in pig plasma by liquid chromatography–mass spectrometry (LC–MS). Concentrations are expressed as mg/l and the instrumental response is peak surface.
The calibration experimental design comprised 5×5×2 measurements, namely duplicate standard solutions prepared at five concentration levels ranging from 10 to about 5000 mg/l, repeated for 5 days. The validation experimental design was similar.

19.
Lyn JA, Ramsey MH, Wood R. The Analyst, 2002, 127(9): 1252-1260
The optimised uncertainty (OU) methodology is applied across a range of analyte-commodity combinations. The commodities and respective analytes under investigation were chosen to encompass a range of input factors: measurement costs (sampling and analytical), sampling uncertainties, analytical uncertainties and potential consequence costs which may be incurred as a result of misclassification. Two types of misclassification are identified: false compliance and false non-compliance. These terms can be used across a wide range of foodstuffs that have regulations requiring either minimum compositional requirements, maximum contaminant allowances or compositional specifications. The latter refers to foodstuffs with regulations that state an allowable tolerance around the compositional specification, i.e. the upper specification limit (USL) and the lower specification limit (LSL). The traditional OU methodology has been adapted so that it is applicable in these cases and has been successfully applied in practice. The Newton-Raphson method has been used to determine the optimal uncertainty value for the two case studies in which analyte concentration is assessed against a 'single threshold' regulatory requirement. This numerical method was shown to give a value of the optimal uncertainty that is practically identical to that given by the previously used method of visual inspection. The expectation of financial loss was reduced by an average of 65% over the four commodities by the application of the OU methodology, showing the benefit of the method.
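The Newton-Raphson step can be illustrated on a toy expected-loss model: measurement cost falls as the target uncertainty grows, while expected misclassification cost rises with it. The cost function and the constants A and B below are invented for illustration, not the paper's model.

```python
# Toy optimised-uncertainty model: total expected loss
#   L(u) = A / u**2  (measurement cost)  +  B * u  (misclassification cost)
# The optimum solves dL/du = 0; Newton-Raphson finds that root.
A, B = 100.0, 3.0

def loss_grad(u):
    # d/du [A / u**2 + B * u] = -2A / u**3 + B
    return -2 * A / u ** 3 + B

def loss_grad2(u):
    # second derivative, used as the Newton-Raphson denominator
    return 6 * A / u ** 4

u = 1.0  # starting guess for the optimal uncertainty
for _ in range(50):
    step = loss_grad(u) / loss_grad2(u)
    u -= step
    if abs(step) < 1e-12:
        break

u_optimal = (2 * A / B) ** (1 / 3)  # closed-form optimum for comparison
```

In this toy case the optimum has a closed form, so the iteration can be checked directly; in the paper's case studies only the numerical route (or visual inspection) is available.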

20.
Ordinary least squares is widely applied as the standard regression method for analytical calibrations, and it is usually accepted that this regression method can be used for quantification starting at the limit of quantification. However, it requires the calibration to be homoscedastic, and this is not common. Different calibrations have been evaluated to assess whether ordinary least squares is adequate for quantification at low levels. All calibrations evaluated were linear and heteroscedastic. Although acceptable precision was obtained at limit-of-quantification levels, ordinary least squares fitting resulted in significant and unacceptable bias at low levels. When weighted least squares regression was applied, the bias at low levels was eliminated and accurate estimates were obtained. With heteroscedastic calibrations, limit values determined by conventional methods are only appropriate if weighted least squares is used. A “practical limit of quantification” can be determined with ordinary least squares in heteroscedastic calibrations; it should be fixed at a minimum of 20 times the value calculated by conventional methods. Biases obtained above this “practical limit” were acceptable with ordinary least squares, and no significant differences were obtained between the estimates measured using weighted and ordinary least squares when analyzing real-world samples.
