20 similar documents retrieved; search time 15 ms
1.
Over half of the failures in drug development are due to problems with the absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties of a candidate compound. The use of in silico tools to predict ADME/Tox and physicochemical properties holds great potential for reducing the attrition rate in drug research and development, as this technology can prioritize candidate compounds in the pharmaceutical R&D pipeline. However, a major concern surrounding the use of in silico ADME/Tox technology is the reliability of the property predictions. Bio-Rad Laboratories, Inc. has created a computational environment, KnowItAll, that addresses these concerns. The platform encodes a number of ADME/Tox predictors, the ability to validate these predictors with or without in-house data and models, and the ability to build a 'consensus' model that may perform much better than any of the individual predictive models. The KnowItAll system can handle two types of predictions: real numbers and categorical classifications.
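The consensus idea behind the two prediction types can be sketched in a few lines. The following is a minimal illustration (not Bio-Rad's implementation): real-valued predictions from several hypothetical predictors are averaged, while categorical predictions are combined by majority vote.

```python
# Minimal consensus sketch, assuming several independent predictors
# whose outputs are already available for one candidate compound.
from collections import Counter
from statistics import mean

def consensus_real(predictions):
    """Consensus for real-valued endpoints (e.g. a logP estimate): mean."""
    return mean(predictions)

def consensus_class(predictions):
    """Consensus for categorical endpoints (e.g. toxic / non-toxic): vote."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical real-number predictors for one compound:
print(consensus_real([2.1, 2.4, 1.9]))                    # ≈ 2.13
# Three hypothetical toxicity classifiers for the same compound:
print(consensus_class(["toxic", "non-toxic", "toxic"]))   # toxic
```

In practice the member predictors would be weighted by their validated performance rather than averaged uniformly; the uniform combination above is the simplest possible case.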
2.
The use of proteomic data for compound characterisation and toxicity prediction has recently gathered much interest in the pharmaceutical industry, particularly with the development of new high-throughput proteomic techniques such as surface-enhanced laser desorption/ionisation time-of-flight (SELDI-ToF) mass spectrometry. To validate these techniques, comparison with established methods such as clinical chemistry endpoints is required; however, there is currently no statistical method available to assess whether the proteomic data describe the same toxicological information as the clinical chemistry data. In this paper, generalised procrustes analysis (GPA) is applied to obtain a consensus between SELDI-ToF data and clinical chemistry data, both obtained from a study of cholestasis in rats. The significance of the consensus and the dimension of the consensus space are diagnosed by a newly developed randomisation F-test method for GPA [Food Qual. Pref. 13 (2002) 191]. Two kinds of matching were considered, using individual animals or treatment groups as the samples in GPA. The results show that the SELDI-ToF data have a significant consensus with the clinical chemistry data, and that the consensus can be visualised in the significant dimensions of the group average space.
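The building block of GPA is the orthogonal Procrustes rotation of one configuration onto another; the full generalised analysis iterates such fits against an evolving group average. A minimal two-dimensional sketch, with illustrative coordinates and a closed-form rotation angle (stdlib maths only):

```python
# Two-configuration orthogonal Procrustes fit in 2D -- a simplified
# stand-in for one step of GPA. Coordinates below are illustrative.
from math import atan2, cos, sin

def centre(pts):
    """Translate a point set so its centroid sits at the origin."""
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    return [(x - mx, y - my) for x, y in pts]

def procrustes_rotate(X, Y):
    """Rotate centred X to best match centred Y in the least-squares sense."""
    dot = sum(a * c + b * d for (a, b), (c, d) in zip(X, Y))
    cross = sum(a * d - b * c for (a, b), (c, d) in zip(X, Y))
    t = atan2(cross, dot)  # closed-form optimal angle in 2D
    return [(a * cos(t) - b * sin(t), a * sin(t) + b * cos(t)) for a, b in X]

Y = centre([(0, 0), (2, 0), (2, 1)])
X = centre([(0, 0), (0, 2), (-1, 2)])   # the same shape, rotated by 90 degrees
fit = procrustes_rotate(X, Y)
residual = sum((a - c) ** 2 + (b - d) ** 2 for (a, b), (c, d) in zip(fit, Y))
print(round(residual, 10))  # 0.0 -- perfect consensus after rotation
```

The randomisation F-test in the paper asks whether a consensus like this is better than what random relabelling of the samples would produce; that permutation layer is omitted here.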
3.
The reliability of analytical data is very important to forensic and clinical toxicologists for the correct interpretation
of toxicological findings. This makes (bio)analytical method validation an integral part of quality management and accreditation
in analytical toxicology. Therefore, consensus should be reached in this field on the kind and extent of validation experiments
as well as on acceptance criteria for validation parameters. In this review, the most important papers published on this topic
since 1991 have been reviewed. Terminology, theoretical and practical aspects as well as implications for forensic and clinical
toxicology of the following validation parameters are discussed: selectivity (specificity), calibration model (linearity),
accuracy, precision, limits, stability, recovery and ruggedness (robustness).
Received: 16 June 2002 Accepted: 12 July 2002
Part of this review was published in the communications of the International Association of Forensic Toxicologists (TIAFT;
TIAFT Bulletin 32 (2002): 16–23) and of the Society for Forensic and Toxicologic Chemistry (GTFCH; Toxichem and Krimitech
68 (2001): 116-126).
Correspondence to F. T. Peters
4.
Pharmaceutical use of finasteride (Dilaprost®) has been well documented in the peer-reviewed literature; however, the presence of trace amounts of related substances (impurities) in finasteride may influence its therapeutic efficacy and safety. Because only limited information is available, the objective of this study was to develop a quantification method for three impurities of finasteride using high-performance liquid chromatography (HPLC) with an ultraviolet (UV) detector. The finasteride impurities registered with the European Pharmacopoeia that we sought to validate are: N-(1,1-dimethylethyl)-3-oxo-4-aza-5α-androstane-17β-carboxamide (impurity A), methyl 3-oxo-4-aza-5α-androst-1-ene-17β-carboxylate (impurity B), and N-(1,1-dimethylethyl)-3-oxo-4-azaandrosta-1,5-diene-17β-carboxamide (impurity C). Analyses were performed on a Nova-Pak C18 HPLC column with isocratic elution. Detection was carried out at 210 nm; the concentrations of the three impurities were in the range 1.5–4.5 μg mL−1 at ambient temperature, with a mobile phase of water + acetonitrile + tetrahydrofuran (80:10:10, v/v/v) and a flow rate of 2.0 mL min−1. The recoveries were 101.35 ± 0.62% (impurity A), 101.60 ± 2.66% (impurity B) and 101.97 ± 2.05% (impurity C). Validation of the method yielded good results for precision and accuracy. It is therefore concluded that the method is suitable not only for the separation and determination of process impurities to monitor reactions, but also for the quality assurance of finasteride and its related substances.
6.
Summary The automation of chromatographic systems is of increasing interest to industry and research laboratories in routine applications.
Besides potentially saving time or making better use of available instrumentation, automation also improves the quality of
results by producing more precise and more reproducible HPLC data. The need for the validation of methods and qualification
of instruments is increasingly recognised in order to ensure compliance with legal requirements (e.g. in the pharmaceutical
industry) and to ensure the reliability of analytical results. Possibilities and requirements for automated HPLC systems are
elaborated. Emphasis is placed on defining the goals of validation and on discussing different aspects of the validation of
LC methods, system suitability tests, ruggedness of methods and the transfer of LC methods from laboratory to laboratory.
Adequate strategies of HPLC method development provide very useful information on the validation and ruggedness of LC methods.
7.
Ketoprofen is a non-steroidal anti-inflammatory drug (NSAID) widely used to treat rheumatoid arthritis and other inflammatory diseases. Normally given by the oral route, this drug presents numerous side effects related to this administration route, such as nausea, dyspepsia, diarrhea, constipation and even renal complications. Topical administration of ketoprofen is a good alternative for avoiding these effects, since, compared with other NSAIDs, this drug has a partition coefficient and an aqueous solubility suitable for skin application. In this study, we describe the production of a nanoemulsion containing ketoprofen, its skin permeation and in vitro release, the validation of a novel method to analyze this drug in the permeation samples, and a forced degradation study using skin and nanoemulsion samples. The new HPLC method was validated, with all specifications in accordance with the validation parameters and with a simple chromatographic condition. The forced degradation study revealed that ketoprofen is sensitive to acid and basic hydrolysis, developing degradation peaks after exposure to these factors. Concerning in vitro release from the nanoemulsion, the release curves presented a first-order profile and were not similar to each other. After 8 h, 85% of the ketoprofen was released from the nanoemulsion matrix, while 49% was released from the control group. In the skin permeation study, the nanoemulsion enabled ketoprofen to pass through the skin and enhanced its retention in the epidermis and stratum corneum, the layers in which the formulation presented statistically different values compared with the control group.
9.
Summary A capillary zone electrophoresis method has been developed and validated for the analysis of chlortetracycline and related
substances. The influence of the type of buffer, pH and concentration of the buffer were investigated. In all cases 1 mM EDTA
was added to prevent metal ion complexation. Instrumental parameters such as capillary temperature and applied voltage were
optimised. The following method is proposed: capillary: fused silica, 44 cm (36 cm effective length), 50 μm i.d.; buffer:
120 mM sodium tetraborate including 1 mM EDTA at pH 8.5; voltage: 10 kV; temperature: 25°C; detection wavelength: 280 nm. The
robustness of the method was examined by means of a fractional factorial design. The validation parameters, namely
relative standard deviation, linearity, precision, limit of detection and limit of quantitation, are also reported.
10.
Assessment of accuracy of analytical methods is a fundamental stage in method validation. The use of validation standards enables the assessment of both trueness and precision of analytical methods at the same time. Procedures of intra-laboratory testing of method accuracy using validation standards are outlined and discussed.
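The simultaneous assessment the abstract describes can be sketched as follows, using hypothetical replicate measurements of a validation standard with a known reference value: the relative bias of the mean estimates trueness, and the relative standard deviation of the replicates estimates precision.

```python
# Sketch of intra-laboratory accuracy testing with a validation standard.
# The measurement values and reference value below are illustrative.
from statistics import mean, stdev

def accuracy_from_standard(measurements, reference):
    """Return (bias %, RSD %) from replicates of a known standard."""
    m = mean(measurements)
    bias_pct = 100.0 * (m - reference) / reference   # trueness
    rsd_pct = 100.0 * stdev(measurements) / m        # precision
    return bias_pct, rsd_pct

bias, rsd = accuracy_from_standard([9.8, 10.1, 10.0, 9.9, 10.2], 10.0)
print(round(bias, 1), round(rsd, 1))  # 0.0 1.6
```

Acceptance criteria (e.g. bias and RSD within a few percent) would then be set per analyte and concentration level; the thresholds themselves are method-specific and not shown here.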
11.
Consensus modeling, which combines the results of multiple independent models into a single prediction, avoids the instability of a single model. Based on this principle, a consensus least-squares support vector regression (LS-SVR) method for calibrating near-infrared (NIR) spectra is proposed. In the proposed approach, NIR spectra of plant samples are first preprocessed using the discrete wavelet transform (DWT) to filter the spectral background and noise; consensus LS-SVR is then used to build the calibration model. After optimization of the parameters involved in the modeling, a satisfactory model was achieved for predicting the content of reducing sugar in plant samples. The results show that the consensus LS-SVR model is more robust and reliable than conventional partial least squares (PLS) and single LS-SVR methods.
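The DWT preprocessing step can be illustrated with a single-level Haar transform, a simplified stand-in for the paper's wavelet filtering (the LS-SVR stage is omitted): the approximation coefficients retain the smooth spectral signal, while the detail coefficients carry the high-frequency noise that filtering discards.

```python
# One-level Haar DWT sketch for spectral denoising. The "spectrum" is a
# made-up illustrative signal, not data from the paper.
from math import sqrt

def haar_dwt(signal):
    """Split a signal (even length) into (approximation, detail) halves."""
    approx = [(a + b) / sqrt(2) for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / sqrt(2) for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

spectrum = [1.0, 1.2, 1.1, 0.9, 3.0, 3.2, 3.1, 2.9]
approx, detail = haar_dwt(spectrum)
print([round(x, 2) for x in approx])  # [1.56, 1.41, 4.38, 4.24]
```

A full DWT would recurse on the approximation coefficients for several levels and threshold the detail coefficients before reconstruction; the single level above shows only the core split.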
12.
This article discusses problems of validating classification models, especially for datasets where sample sizes are small and the number of variables is large. It describes the use of the percentage correctly classified (%CC) as an indicator of the success of a classification model. For small datasets, %CC should not be used uncritically, and its interpretation depends on sample size. The use of a common classification method, discriminant partial least squares (D-PLS), is illustrated on a randomly generated dataset of 200 samples and 200 variables. One aim of the classifier is to determine whether the null hypothesis (that there is no distinction between the two classes) can be rejected. Autoprediction gives 84.5% CC. It is shown that, if variable selection is performed, it must be done independently on the training set to obtain a %CC close to 50% on the test set; otherwise, over-optimistic and false conclusions can be reached about the ability to classify samples into groups. Finally, two aims of determining the quality of a model are frequently confused, namely optimisation (often used to determine the most appropriate number of components in a model) and independent validation; to keep them separate, the data should be split into three groups. Difficulties often arise in model building if validation and optimisation have been performed on different groups of samples, especially with iterative methods, where each group is modelled using different properties, such as a different number of components or different variables.
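The selection-bias trap described above can be demonstrated on purely random (null) data. The sketch below uses a nearest-centroid classifier as a simple stand-in for D-PLS, with a smaller dataset for speed: selecting the most class-correlated variables on all samples before splitting typically inflates the test-set %CC, whereas selecting on the training set alone leaves it near chance.

```python
# Null-data demonstration of variable-selection bias. Nearest-centroid
# replaces D-PLS here; dataset sizes are reduced from the paper's 200x200.
import random

random.seed(0)
n, p, keep = 60, 200, 10
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [i % 2 for i in range(n)]                       # two arbitrary classes
train, test = list(range(0, 40)), list(range(40, 60))

def score(var, idx):
    """Absolute between-class mean difference of one variable over idx."""
    m0 = sum(X[i][var] for i in idx if y[i] == 0) / sum(1 for i in idx if y[i] == 0)
    m1 = sum(X[i][var] for i in idx if y[i] == 1) / sum(1 for i in idx if y[i] == 1)
    return abs(m1 - m0)

def pct_cc(sel_idx):
    """Test-set %CC after selecting `keep` variables using samples sel_idx."""
    chosen = sorted(range(p), key=lambda v: -score(v, sel_idx))[:keep]
    cent = {c: [sum(X[i][v] for i in train if y[i] == c) /
                sum(1 for i in train if y[i] == c) for v in chosen]
            for c in (0, 1)}
    hits = 0
    for i in test:
        d = {c: sum((X[i][v] - cent[c][k]) ** 2 for k, v in enumerate(chosen))
             for c in (0, 1)}
        hits += int(min(d, key=d.get) == y[i])
    return 100.0 * hits / len(test)

print("selection on all samples :", pct_cc(train + test))  # typically optimistic
print("selection on train only  :", pct_cc(train))         # typically near chance
```

Because the data are random, any %CC well above 50% from the first call is an artefact of letting the test samples influence variable selection, which is exactly the failure mode the abstract warns about.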
13.
Fractions of Cu and Zn species in legume samples (common white bean, pea, chickpea and lentil seeds and defatted soybean flour) were analysed by on-line hyphenation of size-exclusion chromatography and inductively coupled plasma mass spectrometry. Samples were extracted with 0.02 mol l−1 Tris–HCl buffer solution, pH 7.5. The extraction efficiency lay in the range 60–90% for Cu and 60–80% for Zn. Quantification of the elements in the individual chromatographic fractions was carried out by isotope dilution (ID) and external calibration (EC) techniques. For ID analysis the chromatographic effluent was mixed with a flow of 65Cu- and 68Zn-enriched solution and the isotope ratios 63Cu/65Cu and (64Zn+66Zn)/68Zn were measured. For the EC technique, calibration solutions of the elements were injected into the flow of mobile phase via a second injector. Before entering the detector, the effluent was mixed with a flow of internal standard solution (In, 50 μg l−1). The two methods have similar precision; however, the behaviour of the two studied elements was not the same. For Cu, the chromatographic analysis itself was the main source of variability. For Zn speciation analysis, the extraction process and the manipulation of the extract also played a significant role, probably because of the lower stability of the zinc chelates present. The total amounts of Zn found in all chromatographic fractions represented 85–95% of the Zn in the sampled extract, whereas those of Cu approached 100%. For small peaks, the ID and EC results differed, the EC results being lower than the ID results; precision accounted for a great deal of the uncertainty in these results.
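The ID quantification reduces to a single ratio equation. A hedged sketch for the Cu case follows; the spike enrichment values are hypothetical, and the "measured" ratio is simulated from a known analyte amount so the calculation can be checked against it.

```python
# Isotope-dilution sketch for Cu: recover the analyte amount from the
# measured 63Cu/65Cu ratio after spiking with a 65Cu-enriched solution.
# Spike abundances and amounts below are illustrative, not from the paper.
def id_amount(n_spike, r_meas, a_x, b_x, a_sp, b_sp):
    """Moles of analyte; a = abundance of 63Cu, b = of 65Cu, r = 63Cu/65Cu."""
    return n_spike * (a_sp - r_meas * b_sp) / (r_meas * b_x - a_x)

a_x, b_x = 0.6915, 0.3085    # natural Cu isotopic abundances
a_sp, b_sp = 0.01, 0.99      # hypothetical 65Cu-enriched spike
n_x, n_sp = 2.0, 1.0         # simulated true analyte and spike amounts (mol)

# Ratio an ICP-MS detector would "see" for this blend:
r = (n_x * a_x + n_sp * a_sp) / (n_x * b_x + n_sp * b_sp)
print(round(id_amount(n_sp, r, a_x, b_x, a_sp, b_sp), 6))  # 2.0
```

The strength of ID, reflected in the abstract's precision comparison, is that the result depends only on the measured ratio and the spike amount, not on detector sensitivity or effluent losses after the spike is mixed in.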
14.
Unbiased evaluation of classification and calibration methods is important, especially as these methods are applied to increasingly complex, under-determined data sets. Precision bounds, such as confidence intervals, are required for interpreting any experimental result. Using bootstrapped Latin partitions to evaluate classification and calibration models, bounds on the average predictions were obtained. These bounds characterize sources of variation attributed to building the model and to the composition of the training set with respect to the test set. Furthermore, precision bounds on the average of the model-variable loadings allow the significance of characteristic features to be estimated. The procedure for bootstrapped Latin partitions is given and demonstrated with synthetic data sets for classification using linear discriminant analysis and fuzzy rule-building expert systems, and for calibration using partial least squares regression with one and three properties. All analyses were implemented on a personal computer, with the longest evaluation requiring 6 h of processing time. Analysis of variance and matched-sample t-tests were also used to demonstrate the statistical power of these tests.
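One round of Latin partitioning can be sketched as follows: the samples are shuffled and dealt into k disjoint folds, so every sample appears in exactly one test set per round, and bootstrapping repeats the round with fresh shuffles. (Class stratification, which the full method also applies, is omitted from this sketch.)

```python
# Minimal Latin-partition generator: k disjoint folds covering all samples.
# Repeating with different seeds yields the bootstrapped replicates from
# which the precision bounds are computed.
import random

def latin_partitions(n_samples, k, seed):
    """Shuffle sample indices and deal them into k disjoint folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

folds = latin_partitions(10, 3, seed=1)
print(folds)  # three folds; together they cover indices 0..9 exactly once
```

Each bootstrap replicate trains on k−1 folds and predicts the held-out fold, cycling through all k folds; the spread of the per-replicate averages gives the confidence bounds described in the abstract.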
15.
An HPLC method for the determination of acyclovir in plasma is described. The method is simple and sensitive enough for bioequivalence studies, where a large number of plasma samples with low acyclovir concentrations are involved. The procedure is based on deproteinization of the plasma with perchloric acid and separation of acyclovir on a Hypersil ODS column at pH 5.6 with UV detection. The calibration standards are linear up to at least 4000 ng/mL and the limit of quantification is 10 ng/mL.
16.
The Quality Assurance Department of Medix Diacor Labservice evaluated a two-way method validation procedure for serum lithium
quantification in therapeutic drug monitoring. In the course of a company merger and the rationalization of two large
production lines, three independent ion-selective electrode (ISE) methods, among many others, were surveyed. While tailoring
the new medical laboratory production, subcontracting from a collaborating company was discontinued. Likewise, modernization
of the ISE instrumentation was unavoidable to increase throughput and effectiveness. It was important that the new result
levels should be comparable both with the former subcontractor's levels and with the levels reported from the previously existing
instrumentation. The aim of this study was to evaluate the most crucial performance characteristics of a novel lithium method
in comparison to the two ISE test methods being withdrawn. The standardized lithium test method was inspected in terms of
linear measurement range, analytical variation, bias, past and on-going proficiency testing, in addition to method comparison,
to achieve the desired analytical goals. Fulfilling the accreditation requirements in terms of the introduced method validation
parameters is discussed.
Received: 19 April 2000 / Accepted: 26 July 2000
17.
Existing software and computer systems in laboratories require retrospective evaluation and validation if their initial validation
was not formally documented. The key steps in this process are similar to those for the validation of new software and systems:
user requirements and system specification, formal qualification, and procedures to ensure ongoing performance during routine
operation. The main difference is that frequently qualification of an existing system is based primarily on reliable operation
and proof of performance in the past rather than on qualification during development and installation.
Received: 30 April 1998 · Accepted: 2 June 1998
18.
Software and computer systems are tested during all development phases. The user requirements and functional specifications
documents are reviewed by programmers and typical anticipated users. The design specifications are reviewed by peers in one
to two day sessions and the source code is inspected by peers, if necessary. Finally, the function and performance of the
system is tested by typical anticipated users outside the development department in a real laboratory environment. All development
phases including test activities and the final release follow a well-documented procedure.
Received: 17 May 1997 · Accepted: 30 June 1997
19.
Installation and operational qualification are important steps in the overall validation and qualification process for software
and computer systems. This article guides users of such systems step by step through the installation and operational qualification
procedures. It provides guidelines on what should be tested and documented during installation prior to routine use. The author
also presents procedures for the qualification of software using chromatographic data systems and a network server for central
archiving as examples.
Received: 31 October 1997 · Accepted: 25 November 1997
20.
When software and computer systems are purchased from vendors, the user is still responsible for the overall validation.
Because the development validation can only be done by the developers, the user can delegate this part to the vendor. The
user's firm should have a vendor qualification program in place to check for this. The type of qualification depends very
much on the type and complexity of software and can go from documented evidence of ISO 9001 or equivalent certification for
off-the-shelf products to direct audit for software that has been developed on a contract basis. Using a variety of practical
examples, the article helps the reader find the optimal qualification procedure.
Received: 8 August 1997 · Accepted: 12 September 1997