1.
Göran Nilsson 《Accreditation and quality assurance》2001,6(4-5):147-150
Data from proficiency testing can be used to increase our knowledge of the performance of populations of laboratories, individual
laboratories and different measurement methods. To support the evaluation and interpretation of results from proficiency testing
an error model containing different random and systematic components is presented. From a single round of a proficiency testing
scheme the total variation in a population of laboratories can be estimated. With results from several rounds the random variation can be separated into a laboratory component and a time component, and for individual laboratories it is then also possible to evaluate stability and bias in relation to the population mean. By comparing results from laboratories using different methods, systematic differences between methods may be indicated. By using results from several rounds, a systematic difference can be partitioned
into two components: a common systematic difference, possibly depending on the level, and a sample-specific component. It
is essential to distinguish between these two components as the former may be eliminated by a correction while the latter
must be treated as a random component in the evaluation of uncertainty.
Received: 20 November 2000 Accepted: 3 January 2001
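The separation of random variation into a laboratory component and a time component described above can be sketched as a one-way random-effects (method-of-moments) calculation. This is an illustrative reconstruction under the assumption of a balanced laboratories × rounds matrix, not the author's exact error model; all numbers are simulated.

```python
import numpy as np

def variance_components(results):
    """Split random variation into a between-laboratory component and a
    round-to-round (time) component, given a balanced labs x rounds
    matrix of results (method-of-moments, one-way random effects)."""
    results = np.asarray(results, dtype=float)
    n_labs, n_rounds = results.shape
    lab_means = results.mean(axis=1)
    # Within-lab mean square: average round-to-round variance per laboratory
    ms_within = results.var(axis=1, ddof=1).mean()
    # Between-lab mean square from the spread of laboratory means
    ms_between = n_rounds * lab_means.var(ddof=1)
    var_time = ms_within
    var_lab = max((ms_between - ms_within) / n_rounds, 0.0)  # truncate at 0
    return var_lab, var_time

# Simulated example: 8 laboratories, 5 rounds, persistent lab biases
# (sd 2.0) on top of round-to-round scatter (sd 1.0)
rng = np.random.default_rng(1)
results = 100.0 + rng.normal(0.0, 2.0, (8, 1)) + rng.normal(0.0, 1.0, (8, 5))
var_lab, var_time = variance_components(results)
```

Estimated this way, `var_lab` reflects stable between-laboratory differences and `var_time` the round-to-round instability.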
2.
N. Boley 《Accreditation and quality assurance》1998,3(11):459-461
The primary objective of proficiency testing (PT) is the provision of information and support to participating laboratories, enabling them to monitor and improve the quality of their measurements. However, other benefits can be obtained from PT.
These include the comparison of data for a given measurement by different methods, the validation of new methods, and the
provision of information for laboratories' customers and accreditation bodies. This paper considers the subject of method
comparison, and highlights some of the approaches which can be followed, as well as the practical use to which this can be
put, to benefit the analytical community more widely. This is illustrated by a case study concerning the measurement of haze
in beer. In this study the United Kingdom Institute of Brewing (IoB) conducted a survey of participants in the Brewing Analytes
Proficiency Scheme (BAPS). From the survey data taken together with data from the BAPS scheme, the IoB is now in a position
to give guidance on the use of particular instruments and procedures, as well as consider changes to the scope of the BAPS
scheme to provide greater benefits for participants concerned with measuring haze.
Received: 3 March 1998 Accepted: 9 June 1998
3.
N. P. Boley Paul De Bièvre Philip D. P. Taylor Adam Uldall 《Accreditation and quality assurance》2001,6(6):244-251
Many laboratories take part in proficiency testing schemes, external quality assessment programmes and other interlaboratory
comparisons. These have many similarities but also important differences in their modus operandi and evaluation of performance of participating laboratories. This paper attempts to highlight both the similarities and differences. It also puts particular emphasis on requirements
called "target values for uncertainty" and their meaning.
Received: 24 January 2001 Accepted: 25 January 2001
4.
Ian Robert Juniper 《Accreditation and quality assurance》1999,4(8):336-341
Proficiency testing is a means of assessing the ability of laboratories to competently perform specific tests and/or measurements.
It supplements a laboratory's own internal quality control procedure by providing an additional external audit of their testing
capability and provides laboratories with a sound basis for continuous improvement. It is also a means towards achieving comparability
of measurement between laboratories. Participation is one of the few ways in which a laboratory can compare its performance
with that of other laboratories. Good performance in proficiency testing schemes provides independent evidence and hence reassurance
to the laboratory and its clients that its procedures, test methods and other laboratory operations are under control. For
test results to have any credibility, they must be traceable to a standard of measurement, preferably in terms of SI units,
and must be accompanied by a statement of uncertainty. Analytical chemists are coming to realise that this is just as true
in their field as it is for physical measurements, and applies equally to proficiency testing results and laboratory test
reports. Recent approaches toward ensuring the quality and comparability of proficiency testing schemes and the means of evaluating
proficiency test results are described. These have led to the drafting of guidelines and subsequently to the development of
international requirements for the competence of scheme providers.
Received: 2 January 1999 Accepted: 7 April 1999
5.
This paper briefly summarises the current situation for proficiency testing (PT) in China, outlines the policy for PT of China’s
national accreditation committee for laboratories (CNACL), and exemplifies activities of the CNACL’s metal working group.
Received: 9 December 2000 Accepted: 14 December 2000
6.
Manfred Golze 《Accreditation and quality assurance》2001,6(4-5):199-202
Since October 1998 the European Commission has financed a concerted action on an Information System and Qualifying Criteria for Proficiency Testing Schemes within the 4th Framework Programme. This paper presents the major result of this project: EPTIS, the European Information System on Proficiency Testing Schemes, which has been available on the Internet since March 2000. Today EPTIS contains comprehensive information on approximately 640 proficiency testing schemes from 16 European countries, providing a picture of the state of the art in proficiency testing in Europe. Finally, some possible approaches for the interlinking and recognition of proficiency testing schemes are discussed.
7.
Robert Rej 《Accreditation and quality assurance》2002,7(8-9):335-340
Proficiency testing and external quality assurance of medical laboratories are now entering their sixth decade. These activities comprise a broad range of applications, including: providing participants and public health authorities with estimates of measurement uncertainty and information on national infrastructure; providing education; and providing a practical basis for accreditation and regulatory compliance. All branches of medical laboratory science have employed external quality assurance as a basis for improvement
and comparability. The opportunities and challenges reviewed here include: the proper establishment of multiple target values
in comparison to a system of traceability to reference or definitive methods; the problems of matrix effects and commutability
of patient and proficiency test samples; generating information on laboratory infrastructure and trends in analytical technique
and performance; providing education and setting goals for laboratory improvement; problems of specimen distribution; application
of Internet technology; the role of programs in legal mandates and accreditation.
Received: 24 April 2002 Accepted: 11 July 2002
8.
Daniel William Tholen 《Accreditation and quality assurance》1998,3(9):362-366
There are three stages to evaluating a laboratory's results in an interlaboratory proficiency test: establishing the correct
result for the test item, determining an evaluation statistic for the particular result, and establishing an acceptable range.
There are a wide variety of procedures for accomplishing these three stages and a correspondingly wide variety of statistical
techniques in use. Currently in North America the largest number of laboratory proficiency test programs are in the clinical
laboratory field, followed by programs for environmental laboratories that test drinking water and waste water. Proficiency
testing in both of these fields is under the jurisdiction of the federal government and other regulatory and accreditation
agencies. Many of the statistical procedures are specified in the regulations, to assure comparability of different programs
and a fair evaluation of performance. In this article statistical procedures recommended in International Organization for
Standardization Guide 43, Part 1, are discussed and compared with current practices in North America.
Received: 22 April 1998 Accepted: 12 May 1998
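The three stages outlined above (assigned value, evaluation statistic, acceptance range) can be illustrated with the common z-score recipe: a robust assigned value and spread taken from the participants' own results, and the conventional |z| ≤ 2 / |z| ≥ 3 limits. This is a generic sketch of widely used practice, not the specific procedures of ISO Guide 43-1; the data are invented.

```python
import numpy as np

def evaluate_round(results):
    """Stage 1: robust assigned value (median); stage 2: z-score per
    laboratory; stage 3: conventional acceptance range on |z|."""
    x = np.asarray(results, dtype=float)
    assigned = np.median(x)                    # robust assigned value
    q1, q3 = np.percentile(x, [25, 75])
    sigma = 0.7413 * (q3 - q1)                 # normalised IQR as robust sd
    z = (x - assigned) / sigma
    verdict = np.where(np.abs(z) <= 2, "satisfactory",
                       np.where(np.abs(z) < 3, "questionable",
                                "unsatisfactory"))
    return z, verdict

# One hypothetical round: five consistent laboratories and one outlier
z, verdict = evaluate_round([10.1, 9.9, 10.0, 10.2, 9.8, 13.0])
```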
9.
The definition of an assigned value is usually achieved by calculating mean values from the data (with different methods)
or by designating reference laboratories. Neither method is completely satisfactory. In this paper a new method is presented
for the definition of the assigned value for spiked samples with an unknown content of the analyte in the matrix. The method
consists of two parts. The first is the estimation of the assigned values from the spiked amounts and the content in the matrix,
based on the results of reference laboratories. The other is the designation of these reference laboratories by comparing
their results with the assigned values. Because each of these parts requires the other, an iterative procedure is necessary.
As an example, the results of a proficiency test for the analysis of copper in wastewater are used to compare the calculated
values with those from other methods, e.g., the Huber estimation.
Received: 25 September 2000 Accepted: 9 December 2000
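The Huber-type robust estimate mentioned above as a comparison can be sketched as the familiar iterated-winsorisation procedure (in the style of ISO 13528's Algorithm A). This sketch only illustrates that comparison method, not the paper's iterative reference-laboratory procedure; the data are invented.

```python
import numpy as np

def robust_mean_sd(values, tol=1e-6, max_iter=100):
    """Huber-type robust mean and sd by iterated winsorisation:
    pull results outside x* +/- 1.5 s* to the limits and re-estimate."""
    x = np.asarray(values, dtype=float)
    x_star = np.median(x)
    s_star = 1.483 * np.median(np.abs(x - x_star))  # MAD-based start
    for _ in range(max_iter):
        w = np.clip(x, x_star - 1.5 * s_star, x_star + 1.5 * s_star)
        new_x, new_s = w.mean(), 1.134 * w.std(ddof=1)
        converged = abs(new_x - x_star) < tol and abs(new_s - s_star) < tol
        x_star, s_star = new_x, new_s
        if converged:
            break
    return x_star, s_star

# Copper-in-wastewater style example: one gross outlier among six results
x_star, s_star = robust_mean_sd([5.0, 5.1, 4.9, 5.2, 4.8, 9.5])
```

The outlier is pulled in to the winsorisation limits rather than discarded, so it still influences the estimate, but only mildly.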
10.
A. Baldan Adriaan M. H. van der Veen Daniela Prauß Angelika Recknagel N. Boley Steve Evans Derek Woods 《Accreditation and quality assurance》2001,6(4-5):164-167
Many proficiency tests are operated with a consensus value derived from the participants’ results. Apart from technical issues,
one of the reasons often mentioned is that proficiency tests operated with consensus values would be cheaper than those using
reference values obtained from a priori characterisation measurements. The economy of a proficiency test must of course be balanced against the needs of the participants,
and the quality of the comparison in general. The proficiency tests selected in this study had both a reference value and
a consensus value, one of which was used for assessing the performance of the participating laboratories. In this work, both
a technical and an economic assessment of how the comparisons were operated is made. From the evaluation, it follows that the use of consensus values does not necessarily reduce the costs of a proficiency test. However, it may frequently be observed that the quality of the assessment of the laboratories is better with a reference value.
Received: 11 October 2000 Accepted: 3 January 2001
11.
The history, origin, and development of a system for monitoring and assessing water and other environmental laboratories in
the Czech Republic is described. The system started in 1991 and has matured to its present complexity with similarities to
the accreditation systems found in other countries. Differences from internationally recognized procedures are being corrected
step by step. During the first year of its existence ASLAB, as part of its brief, organised proficiency testing (PT) programs
for fifty laboratories. Today the total number of regularly participating laboratories exceeds 700 from the Czech Republic,
the Slovak Republic, and Germany. This paper describes the ASLAB PT system, discusses some experiences with its use, and describes
the use of PT results in assessment of the competence of laboratories.
Received: 12 October 2000 Accepted: 7 January 2001
12.
Ilya Kuselman Maria Belli Stephen L. R. Ellison Ales Fajgelj Umberto Sansone Wolfhard Wegscheider 《Accreditation and quality assurance》2007,12(11):563-567
Comparability and compatibility of proficiency testing (PT) results are discussed for schemes with a limited number of participants
(less than 20–30) based on the use of reference materials (RMs) as test items. Since PT results are a kind of measurement/analysis/test
result, their comparability is a property conditioned by traceability to measurement standards applied in the measurement
process. At the same time, metrological traceability of the certified value of the RM (sent to PT participants as test items)
is also important, since the PT results are compared with the RM certified value. The RM position in the calibration hierarchy
of measurement standards sets the degree of comparability for PT results, which can be assessed in the scheme. However, this
assessment is influenced by commutability (adequacy or match) of the matrix RM used for PT and routine samples. Compatibility
of PT results is a characteristic of the collective (group) performance of the laboratories participating in PT that can be
expressed as the closeness of the distribution of the PT results to the distribution of the RM data. Achieving quality of measurement/analysis/test results in the framework of the concept “tested once, accepted everywhere” requires both comparability and compatibility of the test results.
13.
Proficiency testing (PT) results have been used to improve traceability in chemical drinking water analysis. With a generalized
least-square regression the mass concentrations of As and Sb were calculated in a drinking water that had been used to prepare
proficiency testing samples by a spiking procedure. From the mass concentrations in the matrix and the spiked amounts, reference
values with an uncertainty budget could be calculated without the need for reference measurements. The degree to which these
reference values can be regarded as traceable is discussed. The results showed slight deviations in some samples between reference
values and consensus means.
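The spiking idea above can be illustrated with a plain straight-line fit: measured result = matrix content + recovery × spiked amount, so the unknown matrix concentration appears as the intercept. The paper uses a generalised least-squares regression to carry a full uncertainty budget; the ordinary-least-squares sketch and all numbers below are only illustrative.

```python
import numpy as np

# Hypothetical spiking design for one analyte (e.g. As) in drinking water:
# four samples with increasing spiked amounts, one measured result each.
spike = np.array([0.0, 5.0, 10.0, 20.0])      # spiked amount (ug/L)
measured = np.array([2.1, 7.0, 12.3, 22.0])   # measured result (ug/L)

# Fit measured = matrix_content + recovery * spike by ordinary least squares
A = np.column_stack([np.ones_like(spike), spike])
coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
matrix_content, recovery = coef

# Reference values for the PT samples follow from matrix content + spike
reference_values = matrix_content + recovery * spike
```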
14.
Although it seems self-evident that proficiency testing (PT) and accreditation can be expected to improve quality, their relative benefits remain uncertain, as does their efficacy. The study reported here examines the following issues: (a) Why do laboratories take part in PT schemes? (b) How does participation in PT fit in with a laboratory's overall quality assurance (QA) system? (c) Is there a link between a laboratory's performance in specific PT and its QA system? (d) How does PT performance change with time and how do laboratories respond to poor performance? The overall conclusion is that there is no evidence from the present study that laboratories with third-party assessment (accreditation and certification) perform any better in PT than laboratories without. The validity of this conclusion and its significance for the future design and operation of such schemes require further investigation. In particular, study is required of the degree to which good performance in open PT correlates with performance in blind PT, where laboratories are not aware that the samples being analysed are part of a quality assessment exercise.
15.
I. Kuselman Ioannis Papadakis Wolfhard Wegscheider 《Accreditation and quality assurance》2001,6(2):78-79
A new composite score for the evaluation of performance of proficiency testing participants is proposed. The score is based
on a combination of the z-score, uncertainty of a participant’s measurement result and uncertainty of the proficiency testing
scheme’s assigned value. The use of such a composite score will allow evaluation not only of the participant’s ability to
determine an analyte in the corresponding matrix, but also of their understanding of the uncertainty in the obtained analytical result. The score may be helpful for the laboratory’s quality system and for laboratory accreditation according to ISO/IEC 17025.
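One familiar way to combine a deviation with both uncertainties, in the spirit of the composite score above, is the zeta score ζ = (x − X)/√(u_x² + u_X²). This is only an illustration of the general idea; it is not claimed to be the authors' exact formula, and the numbers are invented.

```python
import math

def zeta_score(x, u_x, assigned, u_assigned):
    """Deviation from the assigned value scaled by the combined standard
    uncertainty of the participant's result and of the assigned value."""
    return (x - assigned) / math.sqrt(u_x ** 2 + u_assigned ** 2)

# Same deviation from the assigned value, different claimed uncertainties:
z_honest = zeta_score(10.3, u_x=0.2, assigned=10.0, u_assigned=0.1)
z_overconfident = zeta_score(10.3, u_x=0.02, assigned=10.0, u_assigned=0.1)
```

A score of this form flags not only a poor result but also an implausibly small reported uncertainty, which is the "understanding of uncertainty" aspect the abstract emphasises.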
16.
Maria Belli Stephen L. R. Ellison Ales Fajgelj Ilya Kuselman Umberto Sansone Wolfhard Wegscheider 《Accreditation and quality assurance》2007,12(8):391-398
A metrological background for the selection and use of proficiency testing (PT) schemes with a limited number N of participating laboratories (less than 20–30) is discussed. The following basic scenarios are taken into account: (1) adequate matrix certified reference materials (CRMs) or in-house reference materials (IHRMs) with traceable property values are available for PT use as test items; (2) no appropriate matrix CRM is available, but a CRM or IHRM with traceable property values can be applied as a spike or similar; (3) only an IHRM with limited traceability is available. The discussion also considers the effect of a limited population of PT participants N_p on the statistical assessment of the PT results for a given sample of N responses from this population. When N_p is finite and the sample fraction N/N_p is not negligible, a correction to the statistical parameters may be necessary. Scores suitable for laboratory performance assessment in such PT schemes are compared.
Presented at the 3rd International Conference on Metrology, November 2006, Tel Aviv, Israel.
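The finite-population effect mentioned above can be sketched with the textbook finite-population correction to the standard error of a mean: the factor √((N_p − N)/(N_p − 1)), which matters once the sample fraction N/N_p is no longer negligible. This is a generic statistical illustration, not the paper's specific correction; the numbers are invented.

```python
import math

def standard_error(s, n, n_pop):
    """Standard error of the mean of n results drawn without replacement
    from a finite population of n_pop participants, with the
    finite-population correction sqrt((n_pop - n) / (n_pop - 1))."""
    fpc = math.sqrt((n_pop - n) / (n_pop - 1))
    return s / math.sqrt(n) * fpc

# 15 responses with sd 2.0: effectively infinite vs. a 20-lab population
se_large_pop = standard_error(2.0, 15, 10**6)
se_small_pop = standard_error(2.0, 15, 20)
```

When 15 of only 20 laboratories respond, the corrected standard error is roughly half the uncorrected one, so ignoring the correction overstates the uncertainty of the consensus.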
17.
Keith Jewell 《Accreditation and quality assurance》2001,6(4-5):154-159
The author considers the fundamental differences between analyses in microbiology and those in chemistry and physics, deducing
special issues for microbiological proficiency testing. He concludes that the variability and uncertainty implicit in microbiological
analysis require a broader range of proficiency scheme providers, offering a broader range of services, than in chemistry and physics.
Received: 10 November 2000 Accepted: 3 December 2000
18.
Siu Kay Wong 《Accreditation and quality assurance》2005,10(8):409-414
Proficiency testing (PT) is an essential tool used by laboratory accreditation bodies to assess the competency of laboratories.
Because of limited resources of PT providers or for other reasons, the assigned reference value used in the calculation of
z-score values has usually been derived from some sort of consensus value obtained by central tendency estimators such as the
arithmetic mean or robust mean. However, if the assigned reference value deviates significantly from the ‘true value’ of the
analyte in the test material, laboratories’ performance will be evaluated incorrectly. This paper evaluates the use of consensus
values in proficiency testing programmes using the Monte Carlo simulation technique. The results indicated that the deviation
of the assigned value from the true value could be as large as 40%, depending on the parameters of the proficiency testing
programmes under investigation such as sample homogeneity, number of participant laboratories, concentration level, method
precision and laboratory bias. To study how these parameters affect the degree of discrepancy between the consensus value
and the true value, a fractional factorial design was also applied. The findings indicate that the number of participating laboratories and the distribution of laboratory bias were the two main factors affecting the deviation of the consensus value from the true value.
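A minimal Monte Carlo sketch of the effect studied above: simulate rounds with a known true value, a non-zero distribution of laboratory bias and a given method precision, then measure how far a robust consensus value drifts from the truth. All parameter values are invented, and the median stands in for whatever central-tendency estimator a real scheme would use.

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 10.0
n_labs, n_trials = 8, 5000

deviations = np.empty(n_trials)
for t in range(n_trials):
    bias = rng.normal(0.3, 0.8, n_labs)   # laboratory biases, not centred on 0
    noise = rng.normal(0.0, 0.4, n_labs)  # method repeatability
    consensus = np.median(true_value + bias + noise)
    deviations[t] = consensus - true_value

# Relative root-mean-square deviation of the consensus from the true value
rel_rmsd = float(np.sqrt(np.mean(deviations ** 2)) / true_value)
```

Increasing `n_labs` shrinks the random part of the drift but not the shared bias, in line with the finding that the number of participants and the bias distribution dominate.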
19.
M. Buzoianu 《Accreditation and quality assurance》2000,5(6):231-237
In practice there are three aspects that need to be considered in order to achieve the required traceability according to its definition: the 'stated reference', the 'unbroken chain of calibrations' and the 'stated uncertainty'. For a certain chemical
result, each of these aspects highly depends on the measurement uncertainty, both on its magnitude and how it was estimated.
Therefore, the paper describes the experience of the Romanian National Institute of Metrology in estimating measurement uncertainty
during the certification of reference materials (RMs), in metrological activities (calibration, pattern approval, periodical
verification, etc.), as well as during the analytical measurement process. Practical examples of estimation of measurement
uncertainty using RMs or certified reference materials are discussed for their applicability in spectrophotometric and turbidimetric
analysis. Use of the analysis of variance to obtain some additional information on the components of measurement uncertainty
and to identify the magnitude of individual random effects is described.
Received: 12 November 1999 Accepted: 25 February 2000
20.
This paper reviews the experience of the Food Analysis Performance Assessment Scheme (FAPAS®) in operating a proficiency testing scheme for the analysis of genetically modified (GM) food. Initial rounds of proficiency testing have shown a tendency for laboratories to over-estimate GM levels, that results obtained by polymerase chain reaction (PCR) and enzyme-linked immunosorbent assay (ELISA) detection methods differ significantly, and that the data are skewed and not normally distributed until log-transformed. During the initial rounds, it was found that for the analysis and quantification of GM material it was not possible to assign a target value for standard deviation external to the round data, from which performance could be assessed. However, when working on a log scale, the internally derived robust standard deviation (σ̂) was found to be constant and could be used directly to predict a target value (σ) for performance assessment. Results from the first four rounds have provided valuable information and a general overview of laboratory ability. Choosing a target value for standard deviation which reflects current best practice has enabled laboratory performance to be assessed. Issues surrounding the assessment of performance are discussed, highlighting some of the implications of this initial assessment for the enforcement of European labelling legislation.
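The log-scale evaluation described above can be sketched as follows: log-transform the reported GM contents, take a robust centre and a MAD-based robust standard deviation σ̂, and score on that scale. This is a generic reconstruction with invented data, not FAPAS's actual round data or exact procedure.

```python
import numpy as np

# Invented GM quantification results (%), right-skewed as described above
reported = np.array([0.8, 1.0, 1.1, 0.9, 1.3, 2.5, 1.2, 0.7])

log_x = np.log(reported)                     # work on the log scale
assigned = np.median(log_x)                  # robust centre
sigma_hat = 1.483 * np.median(np.abs(log_x - assigned))  # robust sd (MAD)
z = (log_x - assigned) / sigma_hat           # scores on the log scale
```

On the log scale the over-estimating laboratory (2.5 %) stands out clearly, while the remaining results score unremarkably.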