Similar literature
20 similar records found.
1.
In the U.S., all clinical laboratory testing is regulated by the Clinical Laboratory Improvement Amendments (CLIA). CLIA links test quality to adherence to a body of testing regulations intended to ensure accurate, reliable, and timely patient test results. The goal of the CLIA legislation was to ensure a minimum, fundamental level of quality. In the context of “NEXUS,” quality must “go beyond getting the ‘right’ answer on the ‘right’ patient that can be interpreted against ‘right’ reference values.” CLIA regulations, with specific minimum performance requirements or safeguards, are designed to prevent testing errors. The US Institute of Medicine found that testing processes fail as a result of human error, lack of documentation, and lack of test management. In the latest (2004) interpretation of the CLIA regulations, the minimum quality control requirement continues to be the analysis of at least two external, liquid quality control materials per test per day. In 1995, we proposed that the responsibility for achieving quality test results shift from the sole purview of the laboratory director to an “alliance” of laboratory professionals, manufacturers, and regulators. The EQC (equivalent quality control) concept as proposed is a positive step toward achieving this alliance. Given its obvious lack of scientific and statistical robustness, however, EQC falls far short of ensuring quality. Achieving the “NEXUS Vision” for quality laboratory testing will not come solely from laboratory professionals. The NEXUS is about how to ensure full-quality assessment of the entire testing process: pre-analytical, analytical, and post-analytical. Presented at the 10th Conference Quality in the Spotlight, March 2005, Antwerp, Belgium.

2.
A “yes–no” type of criterion is proposed for the assessment of comparability of proficiency testing (PT) results when the PT scheme is based on a metrological approach, i.e. on the use of a reference material as the test sample, etc. The criterion tests a null hypothesis concerning the insignificance of the bias of the mean of the results from a traceable value certified in the reference material used for the PT. The reliability of such an assessment is determined by the probabilities of not rejecting the null hypothesis when it is true, and of rejecting it when it is false (the alternative hypothesis is true). It is shown that a number of chemical, metrological, and statistical considerations should be taken into account for careful formulation of the hypotheses, enabling the avoidance of an erroneous assessment of comparability. The criterion can be helpful for PT providers and laboratory accreditation bodies in the analysis of PT results.
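As an illustrative sketch of the criterion described above, the null hypothesis of an insignificant bias can be tested with a one-sample t-statistic against the certified value. The data, the helper name `bias_significant`, and the critical value are assumptions for illustration, not taken from the paper:

```python
import math
from statistics import mean, stdev

def bias_significant(results, certified_value, t_crit=2.0):
    """Test H0: the mean of the PT results does not differ from the certified value.

    t_crit is the two-sided critical value for the chosen confidence level
    and degrees of freedom (an assumed placeholder here).
    Returns (t_statistic, reject_H0).
    """
    n = len(results)
    bias = mean(results) - certified_value
    se = stdev(results) / math.sqrt(n)  # standard error of the mean
    t = bias / se
    return t, abs(t) > t_crit

# Hypothetical PT results for a reference material certified at 10.0 units
t_stat, reject = bias_significant([9.8, 10.1, 9.9, 10.0, 9.7], 10.0)
```

With these illustrative numbers the bias is not significant; as the abstract stresses, the reliability of the verdict depends on how both error probabilities are set when the hypotheses are formulated.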

3.
Diagnostic strategies can serve goals at two levels: facilitating the diagnostic process at the cognitive level, and serving considerations at the level of the doctor–patient relationship. Requests for laboratory tests may be intended to exclude a disease or to confirm the presence of disease. Third, tactical motives, to smooth the negotiations between doctor and patient, are often important as well. These three intentions differ in prior probability, should lead to different sets of tests, and to different interpretations; even the cut-off points should differ. This leads to three different decision strategies, both when requesting tests and when interpreting the results. Following this line of thought, post-test probabilities are more suitable than normal ranges. Excluding strategy: this is the most prevalent. The disadvantage of an excluding strategy (prior 1–5%) is the false-positive result. A positive test result should be followed up by wait-and-see or by repeated testing; more extensive testing is usually not a sensible strategy. In practice, physicians simply ignore slightly abnormal values: mentally, they set the cut-off points for normality more broadly. The number of tests is small. Confirmative strategy: the disadvantage of a confirmative intention (prior 10–30%) is the false-negative result. Follow-up without testing, repeated testing, or even accepting marginally normal results as abnormal is a proper strategy. The number of tests is moderate to high. Tactical strategy: the tactical intention, to reassure the patient or to avoid referrals, could lead to ignoring all slightly positive test results by choosing a higher cut-off point. Indeed, given the usually insignificant diagnostic gain when testing for tactical reasons, all test results are clinically insignificant, unsuspected outliers excluded. Here, a very limited set of tests should be chosen.
The laboratory test is the currency in the mutual trading of medical expectations and relationship considerations between doctor and patient. The number of tests is minimal. If the physician chooses a strategy, a limited range of prior probability is chosen. A possibly computerized algorithm then produces a “Value (posterior probability)” as the test result, replacing “Value (normal ranges)”. Thus there is one number fewer on the lab form, yet it yields more significant information. Presented at the 10th Conference Quality in the Spotlight, March 2005, Antwerp, Belgium.
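The “Value (posterior probability)” reporting proposed above can be sketched with Bayes' rule applied to a dichotomized test result; the prior, sensitivity, and specificity values below are illustrative assumptions, not taken from the paper:

```python
def posterior_probability(prior, sensitivity, specificity, positive):
    """Post-test probability of disease given a positive or negative result."""
    if positive:
        p_result_if_disease = sensitivity
        p_result_if_healthy = 1 - specificity
    else:
        p_result_if_disease = 1 - sensitivity
        p_result_if_healthy = specificity
    num = prior * p_result_if_disease
    return num / (num + (1 - prior) * p_result_if_healthy)

# Excluding strategy: low prior (2%), so most positives are false positives
post_pos = posterior_probability(0.02, 0.95, 0.90, positive=True)
post_neg = posterior_probability(0.02, 0.95, 0.90, positive=False)
```

The low post-test probability after a positive result (about 0.16 here) illustrates why, under an excluding strategy, a positive should trigger wait-and-see or repeat testing rather than a diagnosis.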

4.
Historically, owing to the size and nature of the instrumentation, highly skilled laboratory professionals performed clinical testing in centralized laboratories. Today’s clinicians demand real-time test data at the point of care. This has led to a new generation of compact, portable instruments permitting “laboratory” testing to be performed at or near the patient’s bedside by non-laboratory workers who are unfamiliar with testing practices. Poorly controlled testing processes leading to poor-quality test results are an insidious problem facing point-of-care testing today. Manufacturers are addressing this issue through instrument design. Providers of clinical test results, regardless of location, working with manufacturers and regulators, must create and manage complete test systems that eliminate or minimize sources of error. The National Committee for Clinical Laboratory Standards (NCCLS), in its EP18 guideline, “Quality management for unit-use testing,” has developed a quality management system approach specifically for test devices used for point-of-care testing (POCT). Simply stated, EP18 utilizes a “sources of error” matrix to identify and address potential errors that can impact the test result. The key is the quality systems approach, in which all stakeholders (professionals, manufacturers, and regulators) collaboratively seek ways to manage errors and ensure quality. We illustrate the use of one quality systems approach, EP18, as a means to advance the quality of test results at the point of care. Received: 26 June 2002. Accepted: 17 July 2002. Presented at the European Conference on Quality in the Spotlight in Medical Laboratories, 7–9 October 2001, Antwerp, Belgium. Abbreviations: NCCLS, National Committee for Clinical Laboratory Standards (formerly); POCT, point-of-care testing; QC, quality control; HACCP, hazard analysis critical control points; CLIA, Clinical Laboratory Improvement Amendments (of 1988). Correspondence to S. S. Ehrmeyer.

5.
Van Eenoo and Delbeke in Accred Qual Assur (2009) have criticized Faber (in Accred Qual Assur, 2009) for not taking “all factors under consideration when making his claims”. Here, it is detailed that their criticism is based on a misunderstanding of examples that were merely intended to be illustrative. Motivated by this criticism, further discussion is provided that may help in the pursuit of more fair and effective doping tests, here exemplified by chromatography with mass spectrometric detection. Surely, any doping test can only be improved or even optimized if the risks of false positives and false negatives are well defined. This requirement is consistent with a basic principle concerning mathematical approximations (Parlett in “The symmetric eigenvalue problem”, Prentice-Hall, Englewood Cliffs, 1980): apart from just being good, they should be known to be good. Author’s reply to the response on “Regulations in the field of residue and doping analysis...” Papers published in this section do not necessarily reflect the opinion of the Editors, the Editorial Board and the Publisher.

6.
The dispersion of results from proficiency tests for the analysis of pesticide residues in foodstuffs suggests that improvements in the compatibility of measurement results are needed. The divergences currently observed can make the conclusion on a foodstuff’s compliance with certain legislation dependent on the laboratory consulted. This work discusses the origin and impact of this lack of compatibility, following the metrological concepts presented in the latest version of the “International Vocabulary of Metrology” (VIM3), thus allowing a clear diagnosis of the problem. Reporting results from different measurement methods uncorrected for the observed analyte recovery makes them traceable to different “operationally defined measurement procedures” (VIM3) and, therefore, not comparable. When results from different measurement methods are reported corrected for analyte recovery, R, and R differs between spiked and incurred residues, measurement results may not be compatible if this effect is not considered in the uncertainty budget. This discussion is illustrated with metrological models for any possible combination of “measurement performance” and “maximum residue level”. These models are complemented with experimental data from the analysis of pesticide residues in a sample of ginseng powder from a proficiency test. The adopted experimental design allowed the identification of additional threats to metrological compatibility in this field. Solutions to this problem are discussed in terms of practicability and impact on regulatory issues. The use of a universal “reference measurement procedure” proves to be the most feasible way of ensuring comparability of measurements in this field.
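A minimal sketch of the recovery correction and its uncertainty contribution discussed above, assuming independent components combined in quadrature; the numbers and the helper name are illustrative, not taken from the ginseng data:

```python
import math

def recovery_corrected(measured, u_measured, recovery, u_recovery):
    """Correct a result for analyte recovery R and propagate the uncertainty.

    Relative standard uncertainties combine in quadrature. If R differs
    between spiked and incurred residues, that difference must enter the
    uncertainty budget as an additional component (not modelled here).
    """
    value = measured / recovery
    u_rel = math.sqrt((u_measured / measured) ** 2 + (u_recovery / recovery) ** 2)
    return value, value * u_rel

# e.g. 0.080 mg/kg measured with 80 % recovery -> 0.100 mg/kg corrected
value, u = recovery_corrected(0.080, 0.004, 0.80, 0.05)
```

Comparing such a corrected value (with its enlarged uncertainty) against a maximum residue level is what makes the compliance decision laboratory-independent.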

7.
A recent US Institute of Medicine report indicated that up to 98,000 deaths and more than 1 million injuries occur each year in the United States due to medical errors. These include diagnostic errors, such as an error or delay in diagnosis, failure to employ indicated tests, and the use of outmoded tests. Laboratory tests provide up to 80% of the information used by physicians to make important medical decisions; it is therefore important to determine how often laboratory testing mistakes occur, whether they cause patient harm, where they are most likely to occur in the testing process, and how to prevent them. A review of the literature and a US Quality Institute Conference in 2003 indicate that errors in laboratory medicine occur most often in the pre-analytical and post-analytical steps of the testing process, yet most quality improvement efforts focus on improving the analytical process. Measures must be developed and employed to reduce the potential for mistakes in laboratory medicine, including better indicators of the quality of laboratory services. Users of laboratory services must be linked with the laboratory's information system to assist them with decisions about test ordering, patient preparation, and test interpretation. Quality assessment efforts need to be expanded beyond external quality assessment programs to encompass the detection of non-analytical mistakes and to improve communication between users and providers of laboratory services. The actual number of mistakes in laboratory testing is not fully recognized, because no widespread process is in place either to determine how often mistakes occur or to systematically eliminate sources of error. We also tend to focus on mistakes that result in adverse events, not the near misses that cause no observable harm. The users of laboratory services must become aware of where testing mistakes can occur and actively participate in designing processes to prevent them.
Most importantly, healthcare institutions need to adopt a culture of safety, implemented at all levels of the organization. This includes establishing closer links between providers of laboratory services and others in the healthcare delivery system. This was the theme of a 2003 Quality Institute Conference aimed at making the laboratory a key partner in patient safety. Plans are underway to create a permanent public–private partnership, the Institute for Quality in Laboratory Medicine, whose mission is to promote improvements in the use of laboratory tests and laboratory services. Presented at the 9th Conference on Quality in the Spotlight, 18–19 March 2004, Antwerp, Belgium.

8.
Since the advent of the Guide to the Expression of Uncertainty in Measurement (GUM) in 1995, which laid down the principles of uncertainty evaluation, numerous projects have been carried out to develop alternative practical methods that are easier to implement, namely when it is impossible to model the measurement process for technical or economic reasons. In this paper, the author presents the recent evolution of measurement uncertainty evaluation methods. The evaluation of measurement uncertainty can be presented along two axes, based on intralaboratory and interlaboratory approaches. The intralaboratory approach includes “the modelling approach” (application of the procedure described in section 8 of the GUM, known as the GUM uncertainty framework) and “the single-laboratory validation approach”. The interlaboratory approaches are based on collaborative studies and are named, respectively, the “interlaboratory validation approach” and the “proficiency testing approach”.

9.
Point-of-care testing (POCT) in patients with ischemic heart disease is driven by the time-critical need for fast, specific, and accurate results to initiate therapy instantly. According to current guidelines, the results of cardiac marker testing should be available to the physician within 30 min (“vein-to-brain” time) so that therapy can be initiated within 60–90 min (“door-to-needle” time) after the patient has arrived at the emergency room or intensive care unit. This article reviews the current efforts to meet this goal (1) by implementing POCT of established biochemical markers such as cardiac troponins, creatine kinase MB, and myoglobin in accelerated diagnosis and management workflow schemes, (2) by improving current POCT methods to obtain more accurate, more specific, and even faster tests through the integration of optical and electrochemical sensor technology, and (3) by identifying new markers for the very early and sensitive detection of myocardial ischemia and necrosis. Furthermore, the specific requirements for cardiac POCT with regard to analytical performance, comparability, and diagnostic sensitivity/specificity are discussed. In the future, the integration of new immuno-optical and electrochemical chip technology might speed up diagnosis even further. However, every new development will have to meet the stringent method validation criteria set for the corresponding central laboratory testing.

10.
The two most important concepts in metrology are certainly “traceability to standards” and “measurement uncertainty evaluation”. So far the questions related to these concepts have been reasonably well solved in the metrology of “classical quantities”, but for the introduction of metrological concepts into new fields, such as chemistry and biology, many problems remain and must be solved in order to support international arrangements. In this presentation, the authors describe the strategy implemented at the Laboratoire national de métrologie et d’essais (LNE) for metrology in chemistry and biology. The strategy is based on: (1) pure solutions for the calibration of analytical instruments, (2) use of certified reference materials (matrix reference materials), and (3) participation in proficiency testing schemes. Examples are presented from organic and inorganic chemistry. For laboratory medicine, proficiency testing providers play an important role in the organization of External Quality Assessment Schemes. For the time being, the reference value or assigned value of the comparison is calculated from the results obtained by the participants; this assigned value is often not traceable to SI units. One of the methods suggested by LNE is to ensure the metrological traceability to SI units of the assigned value for the most critical quantities carried by analytes, by implementing Joint Committee for Traceability in Laboratory Medicine (JCTLM) reference methods.

11.
The need for “quality” in near-patient testing (NPT) has been acknowledged since the mid-1980s. The commonest biochemical NPT device is the dry reagent strip or “dipstick” for urinalysis. Dipsticks may be read in three ways: against the color chart printed along the side of the bottle, using a bench reader (the color chart printed on a flat card), or using an electronic reader. This report uses the results of a urinalysis quality assurance (QA) program over 1998 to evaluate the “error” rates that occur with the three different reading methods. The QA samples are buffered aqueous solutions “spiked” to give concentrations midway between two color blocks for each analyte. Results are scored ±1 for a color block adjacent to the target value, ±2 for results two color blocks from the target value (defined as “error”), and ±3 for results three color blocks away (defined as “gross error”). Analysis of the results shows that the error rates are similar for the two visual reading methods, but greatly reduced when strips are read electronically. Some persisting errors when using the electronic reader are explained by observation studies. The study highlights the value of a urinalysis QA program for NPT urinalysis in understanding the error rates of this simple but ubiquitous test. Received: 10 July 2000 / Accepted: 10 July 2000
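The color-block scoring scheme above can be sketched as follows; the readings and target block are invented for illustration, and for simplicity the target is taken as a single block rather than the midway point used in the actual QA samples:

```python
def qa_score(reported_block, target_block):
    """Signed color-block distance from the target:
    +-1 acceptable, +-2 an "error", +-3 a "gross error"."""
    return reported_block - target_block

def error_rate(readings, target_block):
    """Fraction of readings two or more color blocks from the target."""
    errors = sum(1 for r in readings if abs(qa_score(r, target_block)) >= 2)
    return errors / len(readings)

# Hypothetical visual readings of one QA sample with target block 2
rate = error_rate([2, 3, 2, 4, 3, 2, 5, 3], target_block=2)
```

Tabulating such rates per reading method is what allows the visual and electronic methods to be compared.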

12.
On the basis of R&D results at the laboratory and pilot scales, the possibility of producing, and expanding, the “Tomskneftekhim” Ltd. product range of polyolefin plastics using contemporary titanium-magnesium catalysts (TMC) is analyzed. The report presents the production results for “Tomskneftekhim” Ltd. polypropylene output achieved with catalyst systems of generations I (aluminothermal TiCl3) and II (MSK-TiCl3), and forecast results for a generation IV system (TMC). Experimental testing of propylene polymerization kinetics, raw material quality, and the influence of catalyst system composition and formation conditions on the operating characteristics of TMC was carried out using two commercial grades of TMC as examples. It is concluded that implementation by “Tomskneftekhim” Ltd. of a comprehensive program of R&D activities to introduce the generation IV catalyst system into the suspension technology of polypropylene production will increase the competitiveness of the process.

13.
A novel graphical method (‘Kiri plots’) for the presentation of proficiency test exercise results is presented. The Kiri plot visualises the evaluation of the proficiency test results based on three statistical tests (the z score, the zeta score and the relative uncertainty outlier test) by defining six zones including a central “in agreement” zone.
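Two of the three statistical tests underlying the plot, the z score and the zeta score, can be sketched as follows; the example values are illustrative assumptions, not from the paper:

```python
import math

def z_score(x, assigned, sigma_pt):
    """Classical z score: deviation in units of the PT standard deviation."""
    return (x - assigned) / sigma_pt

def zeta_score(x, u_x, assigned, u_assigned):
    """Zeta score: deviation weighted by the combined standard uncertainties."""
    return (x - assigned) / math.sqrt(u_x ** 2 + u_assigned ** 2)

# Lab result 10.4 (u = 0.2) against assigned value 10.0 (u = 0.1), sigma_pt = 0.5
z = z_score(10.4, 10.0, 0.5)             # 0.8
zeta = zeta_score(10.4, 0.2, 10.0, 0.1)  # ~1.79
```

A result can thus have a comfortably satisfactory |z| yet a much larger |zeta|, which is exactly the kind of disagreement a combined graphical evaluation is meant to expose.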

14.
The following report gives an overview of work done in the Catalysis Laboratory of the Department of Chemistry, National University of Singapore over the last 15 years (1989–2004). Much of this work can be described as “characterization of catalytically active surfaces through test reactions”. The methods, the systems studied, and the reactions that we evaluated are described. The review concentrates mostly on work from the authors’ laboratory, but other relevant work is also cited.

15.
Lateral flow (immuno)assays are currently used for qualitative, semiquantitative and to some extent quantitative monitoring in resource-poor or non-laboratory environments. Applications include tests on pathogens, drugs, hormones and metabolites in biomedical, phytosanitary, veterinary, feed/food and environmental settings. We describe principles of current formats, applications, limitations and perspectives for quantitative monitoring. We illustrate the potentials and limitations of analysis with lateral flow (immuno)assays using a literature survey and a SWOT analysis (acronym for “strengths, weaknesses, opportunities, threats”). Articles referred to in this survey were searched for on MEDLINE, Scopus and in references of reviewed papers. Search terms included “immunochromatography”, “sol particle immunoassay”, “lateral flow immunoassay” and “dipstick assay”.

16.
Thanks to an algebraic duality property of reduced states, the Schmidt best approximation theorems have important corollaries in the rigorous theory of two-electron molecules. In turn, the “harmonium model” or “Moshinsky atom” constitutes a non-trivial laboratory bench for energy functionals proposed over the years (1964–today) that purport to recover the full ground state of the system from knowledge of the reduced 1-body matrix. That model is usually regarded as solvable; however, some important aspects of it, in particular the exact energy and full-state functionals (unraveling the “phase dilemma” for the system), had not been calculated heretofore. The solution is given here, made plain by working with Wigner quasiprobabilities on phase space. It allows in principle for thorough discussions of the merits of several approximate functionals popular in the theoretical chemical physics literature; in this respect, at the end we focus on Gill’s “Wigner intracule” method for the correlation energy.

17.
The ability of multivariate analysis methods such as hierarchical cluster analysis, principal component analysis, and partial least squares-discriminant analysis (PLS-DA) to classify olive oils by olive fruit variety from their triacylglycerol profiles has been investigated. The variations in the raw chromatographic data sets of 56 olive oil samples were studied by high-temperature gas chromatography with (ion trap) mass spectrometric detection. The olive oil samples belonged to four categories (“extra-virgin olive oil”, “virgin olive oil”, “olive oil” and “olive-pomace” oil) and, for the “extra-virgin” category, to six well-identified olive varieties (“hojiblanca”, “manzanilla”, “picual”, “cornicabra”, “arbequina” and “frantoio”) plus some blends of unidentified varieties. Moreover, chemometric pre-processing methods (to linearise the response of the variables) such as peak-shifting, baseline correction (weighted least squares), and mean centering made it possible to improve the model and the grouping between different olive oil varieties. The first three principal components accounted for 79.50% of the information in the original data. The fitted PLS-DA model succeeded in classifying the samples; correct classification rates were assessed by cross-validation.

18.
Developing and testing improved alarm algorithms is a priority of the Radiation Portal Monitor Project (RPMP) at PNNL. Improved algorithms may reduce the potential impediments that radiation screening presents to the flow of commerce, without affecting the detection sensitivity to sources of interest. However, assessing alarm-algorithm performance involves calculating both detection probabilities and false alarm rates. For statistical confidence, this requires a large amount of data from drive-through (or “dynamic”) scenarios both with and without sources of interest, which is usually not feasible to collect. Instead, an “injection-study” procedure is used to approximate the profiles of drive-through commercial data with sources of interest present. This procedure adds net counts from a pre-defined set of simulated sources to raw, gross-count drive-through data randomly selected from archived RPM data. The accuracy of the procedure, particularly the spatial distribution of the injected counts, has not been fully examined. This report describes the use of previously constructed and validated MCNP computer models to assess the current injection-study procedure. In particular, the report focuses on the functions used to distribute the injected counts throughout the raw drive-through spatial profiles, and suggests a new class of injection spatial distributions that more closely resemble actual cargo scenarios.
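The injection step itself reduces to adding a simulated net-count profile to an archived gross-count profile, bin by bin; the profiles below are invented for illustration, and the peaked shape of the source profile is the kind of spatial distribution the report examines:

```python
def inject_source(background_profile, source_profile):
    """Add simulated net source counts to raw gross-count drive-through data.

    Both profiles are per-time-bin count lists of equal length; the shape
    of source_profile is the injected spatial distribution under study.
    """
    return [b + s for b, s in zip(background_profile, source_profile)]

background = [120, 118, 125, 119, 122]  # archived RPM gross counts per bin
source = [0, 5, 12, 5, 0]               # simulated net counts, peaked mid-passage
injected = inject_source(background, source)
```

Alarm algorithms are then run on the injected profiles to estimate detection probability, and on unmodified archived profiles to estimate the false alarm rate.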

19.
The estimation of uncertainty in organic elemental analysis for C, H, N, and S is reported. Both “bottom-up” and “top-down” strategies are used for the uncertainty calculations. The bottom-up approach used the C, H, N, and S results obtained from the homogeneity study of two pure chemicals (toluene-4-sulfonamide and 4(6)-methyl-2-thiouracil). Two calibration systems, the K factor and the calibration curve, were applied in this study, and no significant differences were found. For the top-down approach, we used data obtained from a proficiency test on both pure chemicals involving 45 Spanish laboratories. The two approaches are compared and discussed.
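The two strategies can be contrasted in a minimal sketch: the bottom-up estimate combines individual standard-uncertainty components in quadrature (as in the GUM), while the top-down estimate takes the dispersion of proficiency-test results directly. All numbers and component labels are invented for illustration:

```python
import math
from statistics import stdev

def bottom_up(components):
    """Combine independent standard-uncertainty components in quadrature."""
    return math.sqrt(sum(u ** 2 for u in components))

def top_down(lab_results):
    """Estimate the standard uncertainty from the dispersion of PT results."""
    return stdev(lab_results)

u_bu = bottom_up([0.10, 0.05, 0.08])             # e.g. homogeneity, calibration, repeatability
u_td = top_down([68.9, 69.1, 69.4, 68.7, 69.2])  # %C reported by participating labs
```

Comparing the two estimates is a useful cross-check: a top-down value much larger than the bottom-up one suggests that the component model is missing an effect.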

20.
Summary An analytical evaluation of an HPLC method with diode array detection to separate and quantify polyphenolic compounds from pears has been made. The method was applied to the quantitative analysis of phenolics from five pear horticultural cultivars (“Agua”, “Blanquilla”, “Conference”, “Pasagrana” and “Decana”), in both peel and pulp matrices, and evaluated for precision and accuracy. Precision was taken as the reproducibility of the peak areas of the polyphenols of interest as well as of the slopes of the calibration graphs; values ranged from 2 to 5%. Accuracy was evaluated from the recovery of all polyphenolic compounds from both peel and pulp in all pears investigated. Accuracy values ranged from 92 to 102% and were independent of the polyphenolic structure, horticultural cultivar, and matrix. Identification was by comparison of retention times and UV spectra with those of standards when commercially available; when standards were not commercially available, provisional identification was based on spectral characteristics as well as on isolation and hydrolysis data. Application of the method revealed differences between peel and pulp in all cases studied; the higher levels of phenolics were found in the peels. The “Decana” and “Pasagrana” cultivars showed the highest phenolic contents, whereas “Conference” showed the lowest.
