20 similar records found (search time: 15 ms)
1.
A. Baldan, Adriaan M. H. van der Veen, Daniela Prauß, Angelika Recknagel, N. Boley, Steve Evans, Derek Woods 《Accreditation and quality assurance》2001,6(4-5):164-167
Many proficiency tests are operated with a consensus value derived from the participants’ results. Apart from technical issues,
one of the reasons often mentioned is that proficiency tests operated with consensus values would be cheaper than those using
reference values obtained from a priori characterisation measurements. The economy of a proficiency test must, of course, be balanced against the needs of the participants,
and the quality of the comparison in general. The proficiency tests selected in this study had both a reference value and
a consensus value, one of which was used for assessing the performance of the participating laboratories. In this work, both
a technical and an economic assessment of how the comparisons were operated is made. The evaluation shows that
the use of consensus values does not necessarily reduce the costs of a proficiency test, and that the quality of the
assessment of the laboratories is frequently better with a reference value.
Received: 11 October 2000 Accepted: 3 January 2001
2.
Phoebe Y.T. Hon 《Microchemical Journal》2011,98(1):44-50
This paper presents an international proficiency testing program (APLAC T065) on two trace elements, cadmium and lead, in an herbal sample, Herba Desmodii Styracifolii. The program was registered with a total of 109 laboratories from 42 countries. The assigned reference values of the analytes for performance assessment were provided by the organizers using an accurate gravimetric isotope dilution inductively coupled plasma-mass spectrometry technique. z-Score was used as the numerical indicator to interpret participants' competence. The between-laboratory variations for cadmium and lead were respectively 18.7% and 19.8% and the consensus values were found to be consistent with the assigned reference values. Twenty-two participants gave at least one unsatisfactory z-score, but the performance of the majority of participants on the analysis of cadmium and lead in herbal matrix was generally good when compared with the assigned reference values.
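The z-score used here, and in many of the schemes below, is the standard PT performance statistic. A minimal sketch of how it is computed (the variable names and data are illustrative, not taken from the APLAC T065 protocol):

```python
def z_score(result, assigned_value, sigma_p):
    """Classical PT z-score: the deviation of a participant's result
    from the assigned value, in units of the standard deviation for
    proficiency assessment (sigma_p)."""
    return (result - assigned_value) / sigma_p

# Conventional interpretation (e.g. ISO Guide 43 / ISO 13528):
#   |z| <= 2 satisfactory, 2 < |z| < 3 questionable, |z| >= 3 unsatisfactory
z = z_score(result=1.18, assigned_value=1.00, sigma_p=0.10)  # hypothetical Cd data
```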
3.
This paper briefly summarises the current situation for proficiency testing (PT) in China, outlines the PT policy of the
China National Accreditation Committee for Laboratories (CNACL), and exemplifies the activities of the CNACL’s metal working group.
Received: 9 December 2000 Accepted: 14 December 2000
4.
Proficiency testing (PT) results have been used to improve traceability in chemical drinking water analysis. With a generalized
least-square regression the mass concentrations of As and Sb were calculated in a drinking water that had been used to prepare
proficiency testing samples by a spiking procedure. From the mass concentrations in the matrix and the spiked amounts, reference
values with an uncertainty budget could be calculated without the need for reference measurements. The degree to which these
reference values can be regarded as traceable is discussed. The results showed slight deviations in some samples between reference
values and consensus means.
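The regression idea above can be illustrated with a deliberately simplified sketch: ordinary least squares instead of the paper's generalized least squares (so equal weights and no uncertainty budget), with hypothetical spike data. Regressing measured concentration on spiked amount yields the native matrix concentration as the intercept.

```python
def ols_line(x, y):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b).
    A stand-in for the generalized least-squares regression used in
    the paper (here: equal weights, no covariance structure)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical As spiking data: spiked amount and measured total (ug/L).
spikes = [0.0, 1.0, 2.0, 4.0]
measured = [0.52, 1.49, 2.51, 4.50]
native_level, slope = ols_line(spikes, measured)  # intercept ~ native As in the matrix
```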
5.
H. O. F. Andersson 《Accreditation and quality assurance》1998,3(6):224-226
Interlaboratory comparisons (in the following abbreviated as intercomparisons) regarding tests, analyses or measurements
are among the most worthwhile measures a laboratory can take in order to confirm that its services to clients include the
provision of correct results within a stated uncertainty. They give a picture of the whole performance of the laboratory,
and they should be used much more than at present. Unfortunately, in many cases such intercomparisons are made expensive
and needlessly formal by the procedures employed. The connection between intercomparisons and proficiency tests, and their use for
different purposes is briefly discussed. Some suggestions are made on how to improve the present state of the art, i.e. how
to increase the use of intercomparisons, how to perform them efficiently and how to make optimal use of the results.
Received: 6 December 1997 Accepted: 30 January 1998
6.
Manfred Golze 《Accreditation and quality assurance》2001,6(4-5):199-202
Since October 1998 the European Commission has financed a concerted action on an Information System and Qualifying Criteria for
Proficiency Testing Schemes within the 4th Framework Programme. This paper presents a major result of that project: EPTIS, the European Information
System on Proficiency Testing Schemes, which has been available on the Internet since March 2000. Today
EPTIS contains comprehensive information on approximately 640 proficiency testing schemes from 16 European countries, providing
an overview of the state of the art in proficiency testing in Europe. Finally, some possible approaches for the interlinking and
recognition of proficiency testing schemes are discussed.
7.
N. Boley 《Accreditation and quality assurance》1998,3(11):459-461
The primary objective of proficiency testing (PT) is the provision of information and support to participating laboratories,
to enable them to monitor and improve the quality of their measurements. However, other benefits can be obtained from PT.
These include the comparison of data for a given measurement by different methods, the validation of new methods, and the
provision of information for laboratories' customers and accreditation bodies. This paper considers the subject of method
comparison, and highlights some of the approaches which can be followed, as well as the practical use to which this can be
put, to benefit the analytical community more widely. This is illustrated by a case study concerning the measurement of haze
in beer. In this study the United Kingdom Institute of Brewing (IoB) conducted a survey of participants in the Brewing Analytes
Proficiency Scheme (BAPS). From the survey data taken together with data from the BAPS scheme, the IoB is now in a position
to give guidance on the use of particular instruments and procedures, as well as consider changes to the scope of the BAPS
scheme to provide greater benefits for participants concerned with measuring haze.
Received: 3 March 1998 Accepted: 9 June 1998
8.
The history, origin, and development of a system for monitoring and assessing water and other environmental laboratories in
the Czech Republic are described. The system started in 1991 and has matured to its present complexity, with similarities to
the accreditation systems found in other countries. Differences from internationally recognized procedures are being corrected
step by step. During the first year of its existence ASLAB, as part of its brief, organised proficiency testing (PT) programs
for fifty laboratories. Today the total number of regularly participating laboratories exceeds 700 from the Czech Republic,
the Slovak Republic, and Germany. This paper describes the ASLAB PT system, discusses some experiences with its use, and describes
the use of PT results in assessment of the competence of laboratories.
Received: 12 October 2000 Accepted: 7 January 2001
9.
Adriaan M. H. van der Veen 《Accreditation and quality assurance》2001,6(4-5):160-163
The evaluation of measurement uncertainty, and of the uncertainty statements of participating laboratories, will be a challenge
to be met in the coming years. The publication of ISO 17025 means that testing laboratories should, to
a certain extent, meet the same requirements regarding measurement uncertainty and traceability as calibration laboratories. As a consequence, proficiency
test organizers should deal with the issues of measurement uncertainty and traceability as well. Two common statistical models
used in proficiency testing are revisited to explore the options for including the evaluation of the measurement uncertainty
of the PTRV (proficiency test reference value). Furthermore, the use of this PTRV and its uncertainty estimate for assessing
the uncertainty statements of the participants under the two models is discussed. It is concluded that, in analogy to Key
Comparisons, it is feasible to implement proficiency tests in such a way that the new requirements can be met.
Received: 29 September 2000 Accepted: 3 December 2000
10.
Ian Robert Juniper 《Accreditation and quality assurance》1999,4(8):336-341
Proficiency testing is a means of assessing the ability of laboratories to competently perform specific tests and/or measurements.
It supplements a laboratory's own internal quality control procedures by providing an additional external audit of its testing
capability and provides laboratories with a sound basis for continuous improvement. It is also a means towards achieving comparability
of measurement between laboratories. Participation is one of the few ways in which a laboratory can compare its performance
with that of other laboratories. Good performance in proficiency testing schemes provides independent evidence and hence reassurance
to the laboratory and its clients that its procedures, test methods and other laboratory operations are under control. For
test results to have any credibility, they must be traceable to a standard of measurement, preferably in terms of SI units,
and must be accompanied by a statement of uncertainty. Analytical chemists are coming to realise that this is just as true
in their field as it is for physical measurements, and applies equally to proficiency testing results and laboratory test
reports. Recent approaches toward ensuring the quality and comparability of proficiency testing schemes and the means of evaluating
proficiency test results are described. These have led to the drafting of guidelines and subsequently to the development of
international requirements for the competence of scheme providers.
Received: 2 January 1999 Accepted: 7 April 1999
11.
Daniel William Tholen 《Accreditation and quality assurance》1998,3(9):362-366
There are three stages to evaluating a laboratory's results in an interlaboratory proficiency test: establishing the correct
result for the test item, determining an evaluation statistic for the particular result, and establishing an acceptable range.
There are a wide variety of procedures for accomplishing these three stages and a correspondingly wide variety of statistical
techniques in use. Currently in North America the largest number of laboratory proficiency test programs are in the clinical
laboratory field, followed by programs for environmental laboratories that test drinking water and waste water. Proficiency
testing in both of these fields is under the jurisdiction of the federal government and other regulatory and accreditation
agencies. Many of the statistical procedures are specified in the regulations, to assure comparability of different programs
and a fair evaluation of performance. In this article statistical procedures recommended in International Organization for
Standardization Guide 43, Part 1, are discussed and compared with current practices in North America.
Received: 22 April 1998 Accepted: 12 May 1998
12.
This paper reviews the experience of the Food Analysis Performance Assessment Scheme (FAPAS®) in operating a proficiency testing scheme for the analysis of genetically modified (GM) food. Initial rounds of proficiency testing have shown a tendency for laboratories to over-estimate GM levels, that results obtained by polymerase chain reaction (PCR) and enzyme-linked immunosorbent assay (ELISA) detection methods differ significantly, and that the data are skewed and not normally distributed until log-transformed. During the initial rounds, it was found that for the analysis and quantification of GM material it was not possible to assign a target value for standard deviation external to the round data, from which performance could be assessed. However, when working on a log scale, the internally derived robust standard deviation (σ̂) was found to be constant and could be used directly to predict a target value (σ) for performance assessment. Results from the first four rounds have provided valuable information and a general overview of laboratory ability. Choosing a target value for standard deviation which reflects current best practice has enabled laboratory performance to be assessed. Issues surrounding the assessment of performance are discussed, highlighting some of the implications of this initial assessment for the enforcement of European labelling legislation.
13.
Although it seems self-evident that proficiency testing (PT) and accreditation can be expected to improve quality, their relative benefits remain uncertain, as does their efficacy. The study reported here examines the following issues: (a) Why do laboratories take part in PT schemes? (b) How does participation in PT fit in with a laboratory's overall quality assurance (QA) system? (c) Is there a link between a laboratory's performance in specific PT and its QA system? (d) How does PT performance change with time and how do laboratories respond to poor performance? The overall conclusion is that there is no evidence from the present study that laboratories with third-party assessment (accreditation and certification) perform any better in PT than laboratories without. The validity of this conclusion and its significance for the future design and operation of such schemes require further investigation. In particular, study is required of the degree to which good performance in open PT correlates with blind PT performance, where laboratories are not aware that the samples being analysed are part of a quality assessment exercise.
14.
Göran Nilsson 《Accreditation and quality assurance》2001,6(4-5):147-150
Data from proficiency testing can be used to increase our knowledge of the performance of populations of laboratories, individual
laboratories and different measurement methods. To support the evaluation and interpretation of results from proficiency testing
an error model containing different random and systematic components is presented. From a single round of a proficiency testing
scheme the total variation in a population of laboratories can be estimated. With results from several rounds the random variation
can be separated into a laboratory and time component and for individual laboratories it is then also possible to evaluate
stability and bias in relation to the population mean. By comparing results from laboratories using different methods, systematic
differences between methods may be indicated. By using results from several rounds, a systematic difference can be partitioned
into two components: a common systematic difference, possibly depending on the level, and a sample-specific component. It
is essential to distinguish between these two components as the former may be eliminated by a correction while the latter
must be treated as a random component in the evaluation of uncertainty.
Received: 20 November 2000 Accepted: 3 January 2001
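The decomposition described above can be sketched, under simplifying assumptions, as a one-way variance-components estimate (hypothetical balanced data with one result per laboratory per round; the paper's full error model also carries sample-specific and method terms, which are omitted here):

```python
from statistics import mean

def variance_components(results):
    """results: one list per laboratory, one value per round (n >= 2 rounds,
    balanced design assumed). Returns (between_lab_variance,
    within_lab_variance) from a simple one-way ANOVA decomposition."""
    k = len(results)           # number of laboratories
    n = len(results[0])        # rounds per laboratory
    lab_means = [mean(r) for r in results]
    grand = mean(lab_means)
    # within-lab (round-to-round) mean square
    msw = sum(sum((x - m) ** 2 for x in r)
              for r, m in zip(results, lab_means)) / (k * (n - 1))
    # between-lab mean square
    msb = n * sum((m - grand) ** 2 for m in lab_means) / (k - 1)
    s2_between = max((msb - msw) / n, 0.0)  # variance component, floored at 0
    return s2_between, msw
```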
15.
Analyses of waste water are routinely performed to monitor the level of contamination. To verify the quality of such determinations
the National Institute of Chemistry, with the support of the Ministry of Environment and Spatial Planning and the Slovenian
Accreditation Agency, organizes interlaboratory comparisons. Over the last 3 years, five interlaboratory trials named "MPP-Waste
Water" were organized. Each round attracted around 50 participants, mostly from Slovenia and some from abroad, which enabled
the testing of SIST ISO methods or alternative methods. We prepared samples for determination of harmful substances that are
important for the characterization of waste water: physico-chemical parameters (pH), global parameters (chemical oxygen demand
(COD) and biochemical oxygen demand (BOD5)), metals (mercury, cadmium, copper, nickel, lead and chromium(VI)), nutrients (ammonia and total phosphorus), anions (chloride,
nitrite, nitrate, sulphate) and toxicity to Daphnia magna. For the analysis of each parameter we prepared two samples at two different concentration levels. The materials used in
the proficiency testing were carefully prepared and their homogeneity and stability were verified. The purpose of this scheme
was to enable participants to check their day-to-day analytical performance. The results should enable the participants to
improve the quality of their analyses.
Received: 24 October 2002 Accepted: 2 January 2003
Acknowledgments We would like to thank the Ministry of Environment and Spatial Planning and the Ministry for Education, Science and Sport for providing financial support. We would like to thank the members of the Technical Committee: Mrs. Marjana Kovačič, Dr. Katja Otrin-Debevc, Prof. Dr. Marjan Veber and Mrs. Boža Gregorc for their valuable support. Special thanks are due to Dr. Adriaan van der Veen, who helped us in running the first PT.
Presented at CERMM-3, Central European Reference Materials and Measurements Conference: The function of reference materials in the measurement process, May 30–June 1, 2002, Rogaška Slatina, Slovenia
Correspondence to M. Cotman
16.
Daniel W. Tholen 《Accreditation and quality assurance》2002,7(4):146-152
There is evidence to support the notion that interlaboratory comparisons (ILCs) are an effective tool for laboratory improvement.
However, despite widespread experience and anecdotal evidence of improvements there are few published studies demonstrating
any benefits from ILCs – in any field of testing. Published demonstrations of benefits can help justify the growing use of
ILCs. ILCs and proficiency testing have been common for many years in medical laboratories; there has been open information
on the results of ILCs, and there has been standardization of results from thousands of laboratories. These studies show general
improvement over time in several areas of testing in different countries. Many articles cite specific reasons for the improvements,
either proven or supposed. An early version of this paper was presented at the International Laboratory Accreditation Cooperation
Conference "ILAC 2000" in Washington D.C., on 31 October 2000.
Received: 10 February 2001 Accepted: 21 January 2002
17.
I. Kuselman, Ioannis Papadakis, Wolfhard Wegscheider 《Accreditation and quality assurance》2001,6(2):78-79
A new composite score for the evaluation of performance of proficiency testing participants is proposed. The score is based
on a combination of the z-score, uncertainty of a participant’s measurement result and uncertainty of the proficiency testing
scheme’s assigned value. The use of such a composite score will allow evaluation not only of the participant’s ability to
determine an analyte in the corresponding matrix, but also of their understanding of the uncertainty in the obtained analytical result.
The score may be helpful for the laboratory’s quality system and for laboratory accreditation according to ISO 17025.
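The abstract does not give the formula, but one well-known score of this general kind (shown here as an illustration, not necessarily the authors' exact proposal) is the zeta score, which normalises the deviation by the combined standard uncertainty of the participant's result and the assigned value:

```python
import math

def zeta_score(result, u_result, assigned, u_assigned):
    """Zeta score: deviation of the result from the assigned value,
    normalised by the combined standard uncertainty of both.
    A large |zeta| alongside an acceptable |z| suggests the participant
    has understated its measurement uncertainty."""
    return (result - assigned) / math.sqrt(u_result ** 2 + u_assigned ** 2)
```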
18.
N. P. Boley, Paul De Bièvre, Philip D. P. Taylor, Adam Uldall 《Accreditation and quality assurance》2001,6(6):244-251
Many laboratories take part in proficiency testing schemes, external quality assessment programmes and other interlaboratory
comparisons. These have many similarities but also important differences in their modus operandi and in the evaluation of the performance of participating laboratories. This paper attempts to highlight both the similarities and the differences. It also puts particular emphasis on requirements
called "target values for uncertainty" and their meaning.
Received: 24 January 2001 Accepted: 25 January 2001
19.
Siu Kay Wong 《Accreditation and quality assurance》2005,10(8):409-414
Proficiency testing (PT) is an essential tool used by laboratory accreditation bodies to assess the competency of laboratories.
Because of limited resources of PT providers or for other reasons, the assigned reference value used in the calculation of
z-score values has usually been derived from some sort of consensus value obtained by central tendency estimators such as the
arithmetic mean or robust mean. However, if the assigned reference value deviates significantly from the ‘true value’ of the
analyte in the test material, laboratories’ performance will be evaluated incorrectly. This paper evaluates the use of consensus
values in proficiency testing programmes using the Monte Carlo simulation technique. The results indicated that the deviation
of the assigned value from the true value could be as large as 40%, depending on the parameters of the proficiency testing
programmes under investigation such as sample homogeneity, number of participant laboratories, concentration level, method
precision and laboratory bias. To study how these parameters affect the degree of discrepancy between the consensus value
and the true value, a fractional factorial design was also applied. The findings indicate that the number of participating
laboratories and the distribution of laboratory bias were the two prime factors affecting the deviation of the consensus value
from the true value.
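The Monte Carlo approach can be sketched as follows; everything here (normal bias and precision distributions, the arithmetic mean as the consensus estimator, all parameter values) is an assumption made for illustration, not the paper's actual simulation design:

```python
import random
import statistics

def simulate_consensus_deviation(true_value=1.0, n_labs=10,
                                 bias_sd=0.05, precision_sd=0.02,
                                 n_trials=2000, seed=42):
    """Monte Carlo estimate of how far a consensus value (here the
    arithmetic mean of the reported results) can drift from the true
    value when each lab reports with a random bias and random method
    imprecision. Returns the worst-case relative deviation observed."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(n_trials):
        results = [true_value + rng.gauss(0, bias_sd) + rng.gauss(0, precision_sd)
                   for _ in range(n_labs)]
        consensus = statistics.mean(results)
        worst = max(worst, abs(consensus - true_value) / true_value)
    return worst

# With few labs and a wide bias distribution, the consensus can deviate
# noticeably from the true value, echoing the paper's conclusion.
```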
20.
This paper describes how the LGC and the Nuffield Curriculum Projects Centre set up an analysis competition for 16- to 19-year-old
students. The competition was based on the procedure for proficiency testing. The results and reports give some insight into
current standards of teaching and learning about analytical procedures and the treatment of uncertainty in courses at this
level. The outcomes justify the production of a good-practice guide for teachers so that they can introduce concepts of valid
analytical measurement into pre-university courses.