Similar documents
20 similar documents found
1.
The evaluation of measurement uncertainty, and of the uncertainty statements of participating laboratories, will be a challenge to be met in the coming years. The publication of ISO 17025 has led to a situation in which testing laboratories should, to a certain extent, meet the same requirements regarding measurement uncertainty and traceability. As a consequence, proficiency test organizers should deal with the issues of measurement uncertainty and traceability as well. Two common statistical models used in proficiency testing are revisited to explore the options for including the evaluation of the measurement uncertainty of the PTRV (proficiency test reference value). Furthermore, the use of this PTRV and its uncertainty estimate for assessing the uncertainty statements of the participants is discussed for the two models. It is concluded that, in analogy to Key Comparisons, it is feasible to implement proficiency tests in such a way that the new requirements can be met. Received: 29 September 2000 Accepted: 3 December 2000
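The two statistical models themselves are not reproduced in the abstract. As a rough illustration of one common option, the sketch below (Python with NumPy) derives a consensus PTRV as the median of the participants' results and attaches a standard uncertainty from the robust spread. The function name, the 1.483 and 1.25 factors, and the data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ptrv_from_consensus(results):
    """Consensus PTRV as the median of participant results, with an
    approximate standard uncertainty of that consensus value.
    Illustrative only; the paper's own models may differ."""
    x = np.asarray(results, dtype=float)
    p = x.size
    ptrv = np.median(x)
    # robust standard deviation from the median absolute deviation
    s_rob = 1.483 * np.median(np.abs(x - ptrv))
    # standard uncertainty of a median-based consensus value
    u_ptrv = 1.25 * s_rob / np.sqrt(p)
    return ptrv, u_ptrv

# hypothetical results from eleven participants for one measurand
ptrv, u_ptrv = ptrv_from_consensus(
    [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 10.4, 9.7, 10.6, 10.2])
print(f"PTRV = {ptrv:.2f}, u(PTRV) = {u_ptrv:.2f}")
```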

2.
Since October 1998 the European Commission has financed a concerted action on Information System and Qualifying Criteria for Proficiency Testing Schemes within the 4th framework program. As a major result of this project, EPTIS, the European Information System on Proficiency Testing Schemes, which has been available on the Internet since March 2000, is presented in this paper. Today EPTIS contains comprehensive information on approximately 640 proficiency testing schemes from 16 European countries, providing a picture of the state of the art in proficiency testing in Europe. Finally, some possible approaches for interlinking and recognition of proficiency testing schemes are discussed.

3.
Many laboratories take part in proficiency testing schemes, external quality assessment programmes and other interlaboratory comparisons. These have many similarities but also important differences in their modus operandi and in the evaluation of the performance of participating laboratories. This paper attempts to highlight both the similarities and the differences. It also puts particular emphasis on requirements called "target values for uncertainty" and their meaning. Received: 24 January 2001 Accepted: 25 January 2001

4.
There are three stages to evaluating a laboratory's results in an interlaboratory proficiency test: establishing the correct result for the test item, determining an evaluation statistic for the particular result, and establishing an acceptable range. There is a wide variety of procedures for accomplishing these three stages and a correspondingly wide variety of statistical techniques in use. Currently in North America the largest number of laboratory proficiency test programs is in the clinical laboratory field, followed by programs for environmental laboratories that test drinking water and waste water. Proficiency testing in both of these fields is under the jurisdiction of the federal government and other regulatory and accreditation agencies. Many of the statistical procedures are specified in the regulations to assure comparability of different programs and a fair evaluation of performance. In this article the statistical procedures recommended in International Organization for Standardization Guide 43, Part 1, are discussed and compared with current practices in North America. Received: 22 April 1998 · Accepted: 12 May 1998
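As an illustration of the three stages, a minimal z-scoring sketch is given below (Python). The acceptance limits |z| ≤ 2 and |z| ≥ 3 are the conventional cut-offs often associated with ISO Guide 43-style scoring; the numbers and names here are assumptions for illustration, not values from the article.

```python
def z_score(result, assigned_value, sigma_pt):
    """Stage 2: evaluation statistic for one result, given the assigned
    value (stage 1) and a standard deviation for proficiency assessment."""
    return (result - assigned_value) / sigma_pt

# stage 3: an acceptable range expressed as conventional z-score limits
z = z_score(result=52.4, assigned_value=50.0, sigma_pt=1.5)
verdict = ("satisfactory" if abs(z) <= 2
           else "questionable" if abs(z) < 3
           else "unsatisfactory")
print(f"z = {z:.2f}: {verdict}")
```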

5.
This paper briefly summarises the current situation for proficiency testing (PT) in China, outlines the PT policy of China's national accreditation committee for laboratories (CNACL), and exemplifies the activities of the CNACL's metal working group. Received: 9 December 2000 Accepted: 14 December 2000

6.
The primary objective of proficiency testing (PT) is the provision of information and support to participating laboratories, to enable them to monitor and improve the quality of their measurements. However, other benefits can be obtained from PT. These include the comparison of data for a given measurement by different methods, the validation of new methods, and the provision of information for laboratories' customers and accreditation bodies. This paper considers the subject of method comparison and highlights some of the approaches which can be followed, as well as the practical use to which this can be put, to benefit the analytical community more widely. This is illustrated by a case study concerning the measurement of haze in beer. In this study the United Kingdom Institute of Brewing (IoB) conducted a survey of participants in the Brewing Analytes Proficiency Scheme (BAPS). From the survey data, taken together with data from the BAPS scheme, the IoB is now in a position to give guidance on the use of particular instruments and procedures, as well as to consider changes to the scope of the BAPS scheme to provide greater benefits for participants concerned with measuring haze. Received: 3 March 1998 · Accepted: 9 June 1998

7.
Proficiency testing is a means of assessing the ability of laboratories to competently perform specific tests and/or measurements. It supplements a laboratory's own internal quality control procedure by providing an additional external audit of its testing capability, and it provides laboratories with a sound basis for continuous improvement. It is also a means towards achieving comparability of measurement between laboratories. Participation is one of the few ways in which a laboratory can compare its performance with that of other laboratories. Good performance in proficiency testing schemes provides independent evidence, and hence reassurance to the laboratory and its clients, that its procedures, test methods and other laboratory operations are under control. For test results to have any credibility, they must be traceable to a standard of measurement, preferably in terms of SI units, and must be accompanied by a statement of uncertainty. Analytical chemists are coming to realise that this is just as true in their field as it is for physical measurements, and that it applies equally to proficiency testing results and laboratory test reports. Recent approaches toward ensuring the quality and comparability of proficiency testing schemes and the means of evaluating proficiency test results are described. These have led to the drafting of guidelines and subsequently to the development of international requirements for the competence of scheme providers. Received: 2 January 1999 · Accepted: 7 April 1999

8.
A metrological background for the selection and use of proficiency testing (PT) schemes for a limited number N of participating laboratories (fewer than 20–30) is discussed. The following basic scenarios are taken into account: (1) adequate matrix certified reference materials (CRMs) or in-house reference materials (IHRMs) with traceable property values are available for use as PT test items; (2) no appropriate matrix CRM is available, but a CRM or IHRM with traceable property values can be applied as a spike or similar; (3) only an IHRM with limited traceability is available. The discussion also considers the effect of a limited population of PT participants N_p on the statistical assessment of the PT results for a given sample of N responses from this population. When N_p is finite and the sample fraction N/N_p is not negligible, a correction to the statistical parameters may be necessary. Scores suitable for laboratory performance assessment in such PT schemes are compared. Presented at the 3rd International Conference on Metrology, November 2006, Tel Aviv, Israel.
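The abstract does not state the correction explicitly. A generic finite population correction to the standard uncertainty of a mean, of the kind implied when N/N_p is not negligible, might look like the sketch below (Python; the function name and the simple (1 - N/N_p) form are assumptions).

```python
import math

def mean_with_fpc(results, population_size):
    """Mean of N PT results drawn from a finite population of N_p
    laboratories, with the finite population correction applied to the
    standard uncertainty of that mean when N/N_p is not negligible."""
    n = len(results)
    mean = sum(results) / n
    s2 = sum((x - mean) ** 2 for x in results) / (n - 1)
    fpc = 1.0 - n / population_size          # finite population correction
    u_mean = math.sqrt((s2 / n) * fpc)
    return mean, u_mean

# hypothetical: 12 responses sampled from a population of 25 laboratories
print(mean_with_fpc([5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.1, 5.0, 4.9, 5.2, 5.1, 5.0], 25))
```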

9.
Comparability and compatibility of proficiency testing (PT) results are discussed for schemes with a limited number of participants (less than 20–30) based on the use of reference materials (RMs) as test items. Since PT results are a kind of measurement/analysis/test result, their comparability is a property conditioned by traceability to measurement standards applied in the measurement process. At the same time, metrological traceability of the certified value of the RM (sent to PT participants as test items) is also important, since the PT results are compared with the RM certified value. The RM position in the calibration hierarchy of measurement standards sets the degree of comparability for PT results, which can be assessed in the scheme. However, this assessment is influenced by commutability (adequacy or match) of the matrix RM used for PT and routine samples. Compatibility of PT results is a characteristic of the collective (group) performance of the laboratories participating in PT that can be expressed as closeness of the distribution of the PT results to the distribution of the RM data. Achieving quality-of-measurement/analysis/test results in the framework of the concept "tested once, accepted everywhere" requires both comparability and compatibility of the test results.

10.
Many proficiency tests are operated with a consensus value derived from the participants' results. Apart from technical issues, one of the reasons often mentioned is that proficiency tests operated with consensus values would be cheaper than those using reference values obtained from a priori characterisation measurements. The economy of a proficiency test must of course be balanced against the needs of the participants and the quality of the comparison in general. The proficiency tests selected in this study had both a reference value and a consensus value, one of which was used for assessing the performance of the participating laboratories. In this work, both a technical and an economic assessment of how the comparisons were operated is made. From the evaluation, it follows that the use of consensus values does not necessarily reduce the costs of a proficiency test. It is frequently observed, however, that the quality of the assessment of the laboratories is better with a reference value. Received: 11 October 2000 Accepted: 3 January 2001

11.
Although it seems self-evident that proficiency testing (PT) and accreditation can be expected to improve quality, their relative benefits remain uncertain, as does their efficacy. The study reported here examines the following issues: (a) Why do laboratories take part in PT schemes? (b) How does participation in PT fit in with a laboratory's overall quality assurance (QA) system? (c) Is there a link between a laboratory's performance in a specific PT and its QA system? (d) How does PT performance change with time, and how do laboratories respond to poor performance? The overall conclusion is that there is no evidence from the present study that laboratories with third-party assessment (accreditation and certification) perform any better in PT than laboratories without it. The validity of this conclusion and its significance for the future design and operation of such schemes require further investigation. In particular, study is required of the degree to which good performance in open PT correlates with performance in blind PT, where laboratories are not aware that the samples being analysed are part of a quality assessment exercise.

12.
The definition of an assigned value is usually achieved by calculating mean values from the data (with different methods) or by designating reference laboratories. Neither method is completely satisfactory. In this paper a new method is presented for the definition of the assigned value for spiked samples with an unknown content of the analyte in the matrix. The method consists of two parts. The first is the estimation of the assigned values from the spiked amounts and the content in the matrix, based on the results of reference laboratories. The other is the designation of these reference laboratories by comparing their results with the assigned values. Because each of these parts requires the other, an iterative procedure is necessary. As an example, the results of a proficiency test for the analysis of copper in wastewater are used to compare the calculated values with those from other methods, e.g., the Huber estimation. Received: 25 September 2000 Accepted: 9 December 2000
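The paper's exact algorithm is not given in the abstract; the sketch below (Python with NumPy) only illustrates the kind of iteration described: the assigned value is estimated from the current set of reference laboratories, the reference set is then re-designated from agreement with that value, and the two steps are repeated until the set stabilises. The median/MAD estimators and the tolerance of 2 robust standard deviations are assumptions, not the authors' choices.

```python
import numpy as np

def iterative_assigned_value(results, tol=2.0, max_iter=50):
    """Alternate between (1) estimating the assigned value from the current
    reference laboratories and (2) re-designating reference laboratories as
    those agreeing with it, until the reference set no longer changes."""
    x = np.asarray(results, dtype=float)
    ref = np.ones(x.size, dtype=bool)          # start with all laboratories
    assigned = np.median(x)
    for _ in range(max_iter):
        assigned = np.median(x[ref])
        s_rob = 1.483 * np.median(np.abs(x[ref] - assigned))
        new_ref = np.abs(x - assigned) <= tol * s_rob
        if np.array_equal(new_ref, ref):
            break
        ref = new_ref
    return assigned, ref
```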

13.
A new composite score for the evaluation of the performance of proficiency testing participants is proposed. The score is based on a combination of the z-score, the uncertainty of a participant's measurement result and the uncertainty of the proficiency testing scheme's assigned value. The use of such a composite score allows evaluation not only of the participant's ability to determine an analyte in the corresponding matrix, but also of their understanding of the uncertainty in the obtained analytical result. The score may be helpful for the laboratory's quality system and for laboratory accreditation according to ISO 17025.
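The exact form of the composite score is not given in the abstract. For orientation only, the sketch below (Python) computes the two ingredients such a score can combine: the classical z-score and a zeta-type score that also folds in the participant's and the scheme's standard uncertainties; how the paper actually combines them is not shown here.

```python
import math

def z_and_zeta(x, u_x, x_assigned, u_assigned, sigma_pt):
    """Classical z-score plus a zeta-type score that incorporates the
    participant's reported uncertainty and the assigned value's uncertainty."""
    z = (x - x_assigned) / sigma_pt
    zeta = (x - x_assigned) / math.sqrt(u_x ** 2 + u_assigned ** 2)
    return z, zeta

# hypothetical result, reported uncertainty and scheme parameters
print(z_and_zeta(x=12.6, u_x=0.4, x_assigned=12.0, u_assigned=0.2, sigma_pt=0.5))
```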

14.
The history, origin and development of a system for monitoring and assessing water and other environmental laboratories in the Czech Republic are described. The system started in 1991 and has matured to its present complexity, with similarities to the accreditation systems found in other countries. Differences from internationally recognized procedures are being corrected step by step. During the first year of its existence ASLAB, as part of its brief, organised proficiency testing (PT) programs for fifty laboratories. Today the total number of regularly participating laboratories exceeds 700, drawn from the Czech Republic, the Slovak Republic and Germany. This paper describes the ASLAB PT system, discusses some experiences with its use, and describes the use of PT results in the assessment of the competence of laboratories. Received: 12 October 2000 Accepted: 7 January 2001

15.
A protocol has been developed illustrating the link between validation experiments, such as precision, trueness and ruggedness testing, and measurement uncertainty evaluation. By planning validation experiments with uncertainty estimation in mind, uncertainty budgets can be obtained from validation data with little additional effort. The main stages in the uncertainty estimation process are described, and the use of trueness and ruggedness studies is discussed in detail. The practical application of the protocol will be illustrated in Part 2, with reference to a method for the determination of three markers (CI solvent red 24, quinizarin and CI solvent yellow 124) in fuel oil samples. Received: 10 April 1999 / Accepted: 24 September 1999
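The protocol itself is described in the paper, not in this abstract; a minimal sketch of the general idea of assembling an uncertainty budget from validation data (precision, trueness and ruggedness contributions combined in quadrature, expanded with k = 2) is given below in Python. The function name and the simple quadrature form are assumptions.

```python
import math

def combined_uncertainty(s_precision, u_trueness, u_other=()):
    """Combine the precision standard deviation, the standard uncertainty
    associated with trueness (bias), and any further contributions (e.g.
    from ruggedness testing) in quadrature; expand with a coverage factor
    of k = 2."""
    u_c = math.sqrt(s_precision ** 2 + u_trueness ** 2
                    + sum(u ** 2 for u in u_other))
    return u_c, 2.0 * u_c   # combined and expanded uncertainty

# hypothetical contributions for one marker in fuel oil
print(combined_uncertainty(s_precision=0.8, u_trueness=0.5, u_other=(0.3,)))
```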

16.
The author considers the fundamental differences between analyses in microbiology and those in chemistry and physics, deducing special issues for microbiological proficiency testing. He concludes that the variability and uncertainty inherent in microbiological analysis require a broader range of proficiency scheme providers, offering a broader range of services, than in chemistry and physics. Received: 10 November 2000 Accepted: 3 December 2000

17.
Nucleic acid based clinical genetic testing has undergone explosive growth in recent years, due in large part to the Human Genome Project. Characterization of the human genome has led to a molecular understanding of the pathogenesis of many human diseases, and ultimately to clinical molecular tests becoming routinely used to diagnose a wide diversity of diseases. This rapid growth in clinical molecular genetic testing, coupled with the complexity of the analytical procedures, underscores the necessity for proficiency testing (i.e. external quality assessment) to allow laboratories offering such services to evaluate their analytical procedures via inter-laboratory comparisons. The American College of Medical Genetics (ACMG), in partnership with the College of American Pathologists (CAP), has been offering proficiency testing for clinical molecular genetics laboratories since 1995, and presently has more than 230 laboratories from 11 countries enrolled in this program. This paper describes the evolution of this program and several challenges encountered in the delivery of a proficiency testing program for laboratories offering clinical molecular genetic services. Received: 13 April 2002 Accepted: 18 July 2002

18.
This paper reviews the experience of the Food Analysis Performance Assessment Scheme (FAPAS®) in operating a proficiency testing scheme for the analysis of genetically modified (GM) food. Initial rounds of proficiency testing have shown a tendency for laboratories to over-estimate GM levels, that results obtained by polymerase chain reaction (PCR) and enzyme-linked immunosorbent assay (ELISA) detection methods differ significantly, and that the data are skewed and not normally distributed until log-transformed. During the initial rounds it was found that, for the analysis and quantification of GM material, it was not possible to assign a target value for standard deviation external to the round data from which performance could be assessed. However, when working in a log scale, the internally derived robust standard deviation was found to be constant and could be used directly to predict a target value (σ) for performance assessment. Results from the first four rounds have provided valuable information and a general overview of laboratory ability. Choosing a target value for standard deviation which reflects current best practice has enabled laboratory performance to be assessed. Issues surrounding the assessment of performance are discussed, highlighting some of the implications raised as a result of this initial assessment regarding the enforcement of European labelling legislation.
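As a rough illustration of working in a log scale, the sketch below (Python with NumPy) log10-transforms the skewed GM quantitation results and forms z-scores against a log-scale assigned value, using a target standard deviation taken as the observed robust log-scale standard deviation. All names and numbers are assumptions; the FAPAS® scheme's actual procedure is described in the paper.

```python
import numpy as np

def log_scale_z_scores(results_gm_percent, assigned_gm_percent, sigma_log):
    """z-scores computed in log10 space, with sigma_log set from the
    (approximately constant) robust standard deviation of earlier rounds."""
    x_log = np.log10(np.asarray(results_gm_percent, dtype=float))
    return (x_log - np.log10(assigned_gm_percent)) / sigma_log

# hypothetical round: assigned value 1.0 % GM, robust log-scale SD of 0.15
print(log_scale_z_scores([0.7, 1.1, 1.6, 2.3, 0.9], 1.0, 0.15))
```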

19.
A model is presented that correlates historical proficiency test data as the log of interlaboratory standard deviations versus the log of analyte concentrations, independent of analyte (measurand) or matrix. Analytical chemistry laboratories can use this model to set their internal measurement quality objectives and to apply the uncertainty budget process to assign the maximum allowable variation in each major step in their bias-free measurement systems. Laboratories that are compliant with this model are able to pass future proficiency tests and demonstrate competence to laboratory clients and ISO 17025 accreditation bodies. Electronic supplementary material to this paper can be obtained by using the Springer LINK server located at http://dx.doi.org/10.1007/s007690100398-y. Received: 31 March 2001 Accepted: 11 September 2001
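The paper's fitted model is not reproduced in the abstract; the sketch below (Python with NumPy) simply shows the kind of log-log fit being described: regress log10(interlaboratory SD) on log10(concentration) and use the line to predict the allowable SD at a given concentration. The example data are invented for illustration.

```python
import numpy as np

def fit_log_sd_model(concentrations, sds):
    """Fit log10(SD) = a + b*log10(c) by ordinary least squares and return
    a predictor for the interlaboratory SD at a given concentration."""
    log_c = np.log10(np.asarray(concentrations, dtype=float))
    log_s = np.log10(np.asarray(sds, dtype=float))
    b, a = np.polyfit(log_c, log_s, 1)          # slope, then intercept
    return lambda c: 10.0 ** (a + b * np.log10(c))

# invented historical data: concentration (mg/kg) vs interlaboratory SD
predict_sd = fit_log_sd_model([0.1, 1.0, 10.0, 100.0], [0.02, 0.12, 0.8, 5.0])
print(predict_sd(25.0))
```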

20.
Proficiency testing, as a means of external quality assessment, provides independent evidence of laboratories' performance. To enable laboratories to fulfil the requirements stated in legislation, the methodology for evaluating laboratories' performance in proficiency testing schemes should incorporate the principle that measurement results be fit for their intended use, and should evaluate performance against an independent reference value. A proficiency testing scheme was designed specifically to support the Drinking Water Directive (98/83/EC). The methodology for performance evaluation, which takes into account a "fitness for purpose"-based standard deviation for proficiency assessment, is proposed and discussed in terms of the requirements of the Drinking Water Directive. A ζ′-score, modified by application of a target uncertainty, was developed in a way that fulfils the requirements defined in the legislation. As an illustration, results are reported for nitrate concentration in water. The approach presented can also be applied to other fields of measurement.
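The paper's own definition of the ζ′-score is not given in the abstract. As a hedged sketch only, the version below (Python) replaces the participant's reported uncertainty in a zeta-type score with a fitness-for-purpose target uncertainty, for example one derived from Drinking Water Directive performance requirements; the actual modification used by the authors may differ.

```python
import math

def zeta_prime(x, x_ref, u_ref, u_target):
    """Zeta-type score using a target (fitness-for-purpose) uncertainty in
    place of the participant's own uncertainty estimate. Illustrative only."""
    return (x - x_ref) / math.sqrt(u_target ** 2 + u_ref ** 2)

# hypothetical nitrate result (mg/L) against an independent reference value
print(zeta_prime(x=48.2, x_ref=50.0, u_ref=0.6, u_target=2.5))
```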
