Similar Articles: 20 results found
1.
One of the major problems in the signal comparison of chromatographic data is the variability of response caused by instrumental drift and other instabilities. Measures of quality control and evaluation of conformity are inherently sensitive to such shifts. It is essential to be able to compare test samples to reference samples in an evolving analytical environment by offsetting the inevitable drift. Therefore, prior to any multivariate analysis, the alignment of analytical signals is a compulsory preprocessing step. In recent years, many researchers have taken a greater interest in the study of alignment. The present paper is an updated review of the alignment algorithms, methods, and improvements used in chromatography. The study is dedicated to one-dimensional signals. Several of the reviewed methods share common theoretical bases and differ mainly in their optimization methods. The main issue for the operator is to choose the appropriate method according to the type of signals to be processed.

2.
The Interval Correlation Optimised Shifting algorithm (icoshift) has recently been introduced for the alignment of nuclear magnetic resonance spectra. The method is based on an insertion/deletion model to shift intervals of spectra/chromatograms and relies on an efficient Fast Fourier Transform based computation core that allows the alignment of large data sets in a few seconds on a standard personal computer. The potential of this programme for the alignment of chromatographic data is outlined, with focus on the model used for the correction function. The efficacy of the algorithm is demonstrated on a chromatographic data set of 45 chromatograms of 64,000 data points each. Computation time is significantly reduced compared to the Correlation Optimised Warping (COW) algorithm, which is widely used for the alignment of chromatographic signals. Moreover, icoshift proved to perform better than COW in terms of quality of the alignment (viz. simplicity and peak factor), but without the need for the computationally expensive optimisation of the warping meta-parameters required by COW. Principal component analysis (PCA) is used to show how a significant reduction in data complexity was achieved, improving the ability to highlight chemical differences amongst the samples.
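The core operation of an icoshift-style interval alignment, finding the integer shift that maximises the cross-correlation between an interval of a signal and the corresponding interval of a reference, can be sketched as follows (a minimal NumPy illustration; the function names and the edge-value padding are assumptions, not the published implementation):

```python
import numpy as np

def best_shift(segment, reference, max_shift):
    """Return the integer shift (within +/- max_shift) that maximises the
    cross-correlation between `segment` and `reference`, computed via FFT."""
    n = len(segment)
    # Cross-correlation through the frequency domain (zero-padded to 2n).
    xcorr = np.fft.ifft(np.fft.fft(reference, 2 * n) *
                        np.conj(np.fft.fft(segment, 2 * n))).real
    # Positions 0..n-1 hold positive lags, positions n..2n-1 hold negative lags.
    lags = np.concatenate([np.arange(0, n), np.arange(-n, 0)])
    mask = np.abs(lags) <= max_shift
    return lags[mask][np.argmax(xcorr[mask])]

def shift_interval(signal, start, stop, reference, max_shift=50):
    """Align one interval of `signal` to `reference` by an insertion/deletion
    shift; the vacated points are padded with the interval's edge values
    (an illustrative choice)."""
    seg = signal[start:stop].copy()
    lag = best_shift(seg, reference[start:stop], max_shift)
    out = signal.copy()
    if lag > 0:                      # move the interval to the right
        out[start + lag:stop] = seg[:-lag]
        out[start:start + lag] = seg[0]
    elif lag < 0:                    # move the interval to the left
        out[start:stop + lag] = seg[-lag:]
        out[stop + lag:stop] = seg[-1]
    return out

# Example: a Gaussian peak shifted by 7 points is realigned to the reference.
x = np.arange(500)
reference = np.exp(-0.5 * ((x - 250) / 8.0) ** 2)
sample = np.exp(-0.5 * ((x - 257) / 8.0) ** 2)
aligned = shift_interval(sample, 200, 300, reference)
print(abs(np.argmax(aligned) - np.argmax(reference)))  # 0 after alignment
```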

3.
In drug design, often enough, no structural information on a particular receptor protein is available. However, frequently a considerable number of different ligands are known together with their measured binding affinities towards a receptor under consideration. In such a situation, a set of plausible relative superpositions of different ligands, hopefully approximating their putative binding geometry, is usually the method of choice for preparing data for the subsequent application of 3D methods that analyze the similarity or diversity of the ligands. Examples are 3D-QSAR studies, pharmacophore elucidation, and receptor modeling. An aggravating fact is that ligands are usually quite flexible, so a rigorous analysis has to incorporate molecular flexibility. We review the past six years of scientific publishing on molecular superposition. Our focus lies on automatic procedures to be performed on arbitrary molecular structures. Methodical aspects are our main concern here. Accordingly, plain application studies with few methodical elements are omitted in this presentation. While this review cannot mention every contribution to this actively developing field, we intend to provide pointers to recent literature that makes important contributions to computational methods for the structural alignment of molecules. Finally, we provide a perspective on how superposition methods can effectively be used for the purpose of virtual database screening. In our opinion, it is the ultimate goal to detect analogues in structure databases of nontrivial size in order to narrow down the search space for subsequent experiments.

4.
The analytical methods mass spectrometry, UV/Vis, IR, Raman, fluorometry, XRD, Mössbauer, and NMR spectroscopy, used to elucidate chemical structure, are evaluated regarding their suitability as primary analytical techniques for quantitative measurements, considering the criteria in the CCQM definition of primary methods. This includes a review of the respective measurement equations, the evaluation of the measurement uncertainty, and a discussion of evidence for the "highest metrological level", as obtained from intercomparisons with other methods. It is shown that only a few methods fulfil the CCQM criteria. Quantitative NMR spectroscopy is one of them and may be considered a potential primary method as recommended by CCQM, because its uncertainty budget is free of empirical factors.
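For reference, quantification by NMR against an internal standard is commonly written in the following measurement-equation form (a generic textbook expression; the symbols are not those of the cited work):

```latex
% Mass fraction of analyte x determined against an internal standard (std):
% I = integrated signal area, N = number of nuclei giving rise to the signal,
% M = molar mass, m = weighed mass, P = purity of the standard.
w_x \;=\; \frac{I_x}{I_{\mathrm{std}}}\cdot\frac{N_{\mathrm{std}}}{N_x}\cdot
          \frac{M_x}{M_{\mathrm{std}}}\cdot\frac{m_{\mathrm{std}}}{m_{\mathrm{sample}}}\cdot P_{\mathrm{std}}
```

Because every factor in this equation is either a measured ratio or a traceable reference quantity, the uncertainty budget contains no empirical calibration factors, which is the point made above.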

5.
In medical products, shelf-life after thermoplastic processing and sterilization is important, and ionizing radiation has become a preferred sterilization mode for medical devices. We have successfully employed thermal analytical methods to predict shelf-life for many polyolefin materials. However, as the materials of construction become more sophisticated (multiphase alloys and blends, multi-layer constructions, etc.), it needs to be clarified to what extent these methodologies remain applicable. We have therefore employed thermal analytical methods in conjunction with spectroscopic and morphological methods to study the applicability and limitations of these techniques. The results were combined with real-life and simulated aging experiments and are presented in this article.
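Shelf-life extrapolations of this kind commonly rest on Arrhenius-type acceleration of oxidative degradation; a generic form of the relation, stated here only as background and not as the authors' model, is:

```latex
% Acceleration factor between an elevated aging temperature T_a and the
% storage/use temperature T_u (E_a = activation energy, R = gas constant),
% and the corresponding shelf-life estimate from an accelerated aging time.
AF \;=\; \exp\!\left[\frac{E_a}{R}\left(\frac{1}{T_u}-\frac{1}{T_a}\right)\right],
\qquad t_{\mathrm{shelf}} \;\approx\; AF \cdot t_{\mathrm{aged}}
```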

6.
Nowadays, numerous metabolite concentrations can readily be determined in a given biological sample by high-throughput analytical methods. However, such raw analytical data comprise noninformative components due to the many disturbances normally occurring in the analysis of biological material. To eliminate these unwanted components of the original analytical data, advanced chemometric data preprocessing methods can be of help. Here, such methods are applied to electrophoretic nucleoside profiles in urine samples of cancer patients and healthy volunteers. In this study, three warping methods, namely dynamic time warping (DTW), correlation optimized warping (COW), and parametric time warping (PTW), were examined on two sets of electrophoretic data in terms of peak alignment quality, preprocessing time, and ease of customization. The application of warping methods helped to limit the shifting of peaks and enabled objective differentiation between whole electropherograms of healthy volunteers and cancer patients by principal component analysis (PCA). The evaluation of preprocessed and raw data by PCA confirms the differences between the applied warping tools and proves their suitability for metabonomic data interpretation.
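Of the three warping approaches compared, dynamic time warping is the simplest to outline; a minimal dynamic-programming sketch for two one-dimensional signals (illustrative only, not the implementation used in the study; COW and PTW rely on piecewise and parametric warping models instead) could look like this:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between two 1-D signals.
    Returns the accumulated cost of the optimal warping path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])          # local dissimilarity
            D[i, j] = cost + min(D[i - 1, j],        # insertion
                                 D[i, j - 1],        # deletion
                                 D[i - 1, j - 1])    # match
    return D[n, m]

# Two identical peaks, one shifted in time: the DTW cost stays small,
# whereas the point-wise (unwarped) distance is large.
t = np.linspace(0, 1, 200)
ref = np.exp(-0.5 * ((t - 0.45) / 0.03) ** 2)
shifted = np.exp(-0.5 * ((t - 0.55) / 0.03) ** 2)
print(round(dtw_distance(ref, shifted), 3),
      round(float(np.abs(ref - shifted).sum()), 3))
```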

7.
Multiple sequence alignment is a basic tool in computational genomics. The art of multiple sequence alignment lies in placing gaps. This paper presents a heuristic algorithm that iteratively improves the alignment of multiple protein sequences. A consistency-based objective function is used to evaluate candidate moves. During the iterative optimization, well-aligned regions can be detected and kept intact. Columns of gaps are inserted to help the algorithm escape from locally optimal alignments. The algorithm has been evaluated using the BAliBASE benchmark alignment database. Results show that the performance of the algorithm does not depend much on the initial or seed alignments. Given a perfect consistency library, the algorithm is able to produce alignments that are close to the global optimum. We demonstrate that the algorithm is able to refine alignments produced by other software, including ClustalW, SAGA and T-COFFEE. The program is available upon request.
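The consistency-based objective function mentioned here scores a candidate alignment by how well its aligned residue pairs agree with a precomputed library of pairwise alignments (the idea popularised by T-COFFEE). A minimal sketch of such a score, with an illustrative library format that is an assumption rather than the authors' implementation, is:

```python
def consistency_score(alignment, library):
    """Score a gapped multiple alignment against a consistency library.
    `alignment` is a list of equal-length gapped strings; `library` maps
    ((seq_i, residue_i), (seq_j, residue_j)) pairs to a weight."""
    # Map each alignment column back to ungapped residue indices.
    residue_index = []
    for seq in alignment:
        idx, count = [], 0
        for ch in seq:
            if ch == '-':
                idx.append(None)
            else:
                idx.append(count)
                count += 1
        residue_index.append(idx)

    score = 0.0
    n_seq, n_col = len(alignment), len(alignment[0])
    for col in range(n_col):
        for i in range(n_seq):
            for j in range(i + 1, n_seq):
                ri, rj = residue_index[i][col], residue_index[j][col]
                if ri is not None and rj is not None:
                    score += library.get(((i, ri), (j, rj)), 0.0)
    return score

# Toy example: the library rewards aligning residue 1 of sequence 0 with
# residue 1 of sequence 1, which only the second candidate achieves.
lib = {((0, 1), (1, 1)): 1.0}
print(consistency_score(["AC-", "A-C"], lib))   # 0.0
print(consistency_score(["AC", "AC"], lib))     # 1.0
```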

8.
Nuclear magnetic resonance (NMR) is a well-known analytical technique for the analysis of complex mixtures. Its quantitative capability makes it ideally suited to metabolomics or lipidomics studies involving large collections of complex biological samples. To overcome the ubiquitous limitation of spectral overcrowding when recording 1D NMR spectra on such samples, the acquisition of 2D NMR spectra allows a better separation between overlapped resonances while yielding accurate quantitative data when appropriate analytical protocols are implemented. Moreover, the experiment duration can be considerably reduced by applying fast acquisition methods. Here, we describe the general workflow to acquire fast quantitative 2D NMR spectra in the "omics" context. It is illustrated on three representative and complementary experiments: UF COSY, ZF-TOCSY with nonuniform sampling, and HSQC with nonuniform sampling. After some details and recommendations on how to apply this protocol, its implementation for targeted and untargeted metabolomics/lipidomics studies is described.

9.
The quality of botanical products is a major uncertainty faced by consumers, clinicians, regulators, and researchers. Definitions of quality abound and include specifications for sanitation, adventitious agents (pesticides, metals, weeds), and content of natural chemicals. Because dietary supplements (DS) are often complex mixtures, they pose analytical challenges and method validation may be difficult. In response to product quality concerns and the need for validated and publicly available methods for DS analysis, the US Congress directed the Office of Dietary Supplements (ODS) at the National Institutes of Health (NIH) to accelerate an ongoing methods validation process, and the Dietary Supplements Methods and Reference Materials Program was created. The program was constructed from stakeholder input and incorporates several federal procurement and granting mechanisms in a coordinated and interlocking framework. The framework facilitates the validation of analytical methods, analytical standards, and reference materials.

10.
The characterization of herbal extracts to compare samples from different origins is important for robust production and quality control strategies. This characterization is currently performed mainly by analysis of selected marker compounds. Metabolic fingerprinting of full metabolite profiles of plant extracts aims at a more rapid and thorough screening or classification of plant material. We show that HPLC is an appropriate technique for metabolic fingerprinting of secondary metabolites, provided that adequate preprocessing of the raw profiles is performed. Additional variation, which results from sample preparation and changing measurement conditions, usually obscures the information of interest in these raw profiles. This paper illustrates the importance of preprocessing of chromatographic fingerprinting data. Different alignment methods are discussed, as well as the influence of normalization. Weighted principal component analysis is introduced as a valuable alternative to autoscaling of the data. LC-UV data on willow (Salix sp.) extracts are used to evaluate these preprocessing methods and their influence on exploratory data analysis.
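Weighted principal component analysis, as referred to here, amounts to applying a per-variable weight before the decomposition rather than forcing every variable to unit variance as autoscaling does; a minimal sketch (the particular weights shown are only an illustrative choice) is:

```python
import numpy as np

def weighted_pca(X, weights, n_components=2):
    """PCA on mean-centred data with per-variable weights applied before the
    singular value decomposition (weights = 1 reproduces ordinary PCA,
    weights = 1/std reproduces autoscaling)."""
    Xc = X - X.mean(axis=0)
    Xw = Xc * weights                      # column-wise weighting
    U, s, Vt = np.linalg.svd(Xw, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    loadings = Vt[:n_components].T
    return scores, loadings

# Example: down-weight noisy variables instead of giving them unit variance.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
X[:, 0] += np.linspace(0, 5, 20)           # one informative variable
weights = np.array([1.0, 0.2, 0.2, 0.2, 0.2])
scores, loadings = weighted_pca(X, weights)
print(scores.shape, loadings.shape)        # (20, 2) (5, 2)
```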

11.
The introduction of sustainable development concepts to analytical laboratories has recently gained interest; however, most conventional high-performance liquid chromatography methods consider neither the effect of the chemicals used nor the amount of waste produced on the environment. The aim of this work was to prove that conventional methods can be replaced by greener ones with the same analytical parameters. The suggested methods were designed so that they neither use nor produce harmful chemicals and produce minimal waste, allowing them to be used in routine analysis without harming the environment. This was achieved by using green mobile phases and short run times. Four mixtures were chosen as models for this study: clidinium bromide/chlordiazepoxide hydrochloride, phenobarbitone/pipenzolate bromide, mebeverine hydrochloride/sulpiride, and chlorphenoxamine hydrochloride/caffeine/8-chlorotheophylline, either as bulk powders or in their dosage forms. The methods were validated with respect to linearity, precision, accuracy, system suitability, and robustness. The developed methods were compared to the reported conventional high-performance liquid chromatography methods regarding their greenness profiles. The suggested methods were found to be greener and more time- and solvent-saving than the reported ones; hence, they can be used for routine analysis of the studied mixtures without harming the environment.

12.
Analytical protocols have been adapted for the study of hydrocarbons at the trace level in the environment. Various samples, including sediments and biota, were collected from the Kuwaiti environment, treated according to the protocol and analyzed by chromatographic and spectroscopic methods. The methods used were synchronous scanning fluorescence spectroscopy (SSFS); high-performance liquid chromatography (HPLC) on C18 reversed-phase and NH2 normal-phase columns with UV and fluorescence detectors; gas chromatography (GC) on fused-silica capillary columns with flame ionization detection (FID), mass spectrometry (MS) and flame photometric detection (FPD); and high-resolution molecular spectrofluorimetry in a Shpol'skii matrix at 10 K (HRSS). The different methods were found to give complementary information. SSFS was useful for fast evaluation and preliminary assessment of oil pollution during extended programs; it permitted sample selection for deeper analyses but, when applied to biota, required special care in the clean-up procedure. GC/FID was used to analyze saturated and ethylenic compounds and was useful for obtaining information on the origin of hydrocarbons, but was inconvenient for analyzing the aromatic fraction. GC/FPD was difficult to use with sediment samples and yielded little information on biota samples, although it did permit confirmation of high oil contamination in some examples. HPLC on a normal-phase column with UV and fluorescence detectors was useful for the fractionation of samples and for the separation of different families of aromatic compounds according to aromatic carbon number. GC/MS was used to quantify polycyclic aromatic hydrocarbons (PAHs) with fewer than four rings but was not sensitive enough for PAHs of higher molecular weight. HRSS, in contrast, was useful for the quantification of heavy PAHs and was also faster, could be automated, and gave accurate results; however, in an oil-pollution study it must be backed up by the other techniques. In fact, no single analytical technique was found to be sufficient, and only judicious combinations of the tested techniques yielded adequate information on the origin of hydrocarbons in the environment.

13.
The factors that control the alignment of LC side-chain polymers under directing a.c. and d.c. electric fields are critically reviewed. The principles involved when alignment is attempted by cooling from the melt in the presence of an electric field, or by direct application of the field to a material in its liquid-crystalline (LC) state, are outlined, and the difficulties which may be encountered in practice are described and evaluated. An "electrical cleaning" method whereby the low-frequency conductance losses in a sample may be reduced is described and applied to LC polymer materials. Experimental dielectric data are presented and analysed for two LC polymers which contain azo-groups in the side chains.
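The low-frequency conductance losses targeted by the "electrical cleaning" step appear in the measured dielectric loss as a d.c.-conductivity term superimposed on the dipolar relaxation; in its usual textbook form (generic, not specific to the polymers studied here):

```latex
% Measured dielectric loss: dipolar relaxation term plus a conduction term
% that dominates at low angular frequency \omega (\sigma_{dc} = d.c.
% conductivity, \varepsilon_0 = vacuum permittivity).
\varepsilon''_{\mathrm{meas}}(\omega) \;=\;
\varepsilon''_{\mathrm{dipolar}}(\omega) \;+\; \frac{\sigma_{\mathrm{dc}}}{\varepsilon_0\,\omega}
```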

14.
Due to the developing insights in the theory of chromatography, column manufacturers of any kind (industrial, academic) nowadays have a broad array of experimental column testing tools at their disposal. The present tutorial aims at helping the novice in the field get an overview of these tools and provides a fixed procedure to carry out the subsequent steps of a column quality analysis (guided via an Excel template file). After a brief introduction to the main equations, the reader is taken step by step through the theories underlying the measurement methods for the different column and performance parameters. In the final section, the reader is taken through the different items in the Excel template.
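The "main equations" referred to are the standard chromatographic efficiency relations; in generic textbook notation (not necessarily that of the tutorial or its Excel template) they read:

```latex
% Plate count from an ideally Gaussian peak (t_R = retention time,
% w_{1/2} = peak width at half height), plate height for a column of
% length L, and the van Deemter dependence on mobile-phase velocity u.
N \;=\; 5.54\left(\frac{t_R}{w_{1/2}}\right)^{2},
\qquad H \;=\; \frac{L}{N},
\qquad H(u) \;=\; A \;+\; \frac{B}{u} \;+\; C\,u
```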

15.
In recent years, it has been recognised that levodopa therapy may be improved with therapeutic regimens that include a catechol-O-methyltransferase (COMT) inhibitor. At present, tolcapone and entacapone are the only two COMT inhibitors available on the market. However, further COMT inhibitors are under development for Parkinson's disease, namely nebicapone and opicapone (formerly known as BIA 9-1067). In addition, nitecapone, another well-known COMT inhibitor, is also in preclinical development, but for neuropathic pain. Since the 1990s, different liquid chromatography methods have been developed and validated to quantify tolcapone, entacapone, nitecapone, nebicapone and some of their metabolites in biological samples, particularly in plasma samples obtained from rodents and humans. These bioanalytical methods have primarily been used to support pharmacokinetic assays with such COMT inhibitors in non-clinical and clinical studies. As these inhibitors contain hydrophobic groups in their chemical structures, reversed-phase liquid chromatography has been the major approach for the determination of such compounds, especially high-performance liquid chromatography coupled to ultraviolet detection (HPLC-UV), electrochemical detection (HPLC-ECD) and mass spectrometry detection (HPLC-MS). Regarding sample preparation, traditional liquid-liquid extraction (LLE) and solid-phase extraction (SPE) were the most widely used procedures for extraction of the analytes of interest prior to the analysis of samples. Thus, this review aims to gather, for the first time, the background information on the bioanalytical chromatographic methods that have already been developed and applied for the determination of tolcapone, entacapone, nitecapone, nebicapone and their metabolites. Moreover, some pharmacokinetic aspects of the COMT inhibitors of interest from a bioanalytical perspective are also addressed.

16.
Metabolomics is the discipline in which endogenous and exogenous metabolites are assessed, identified and quantified in different biological samples. Metabolites are crucial components of a biological system and are highly informative about its functional state, owing to their closeness to functional endpoints and to the organism's phenotype. Nuclear Magnetic Resonance (NMR) spectroscopy, next to Mass Spectrometry (MS), is one of the main metabolomics analytical platforms. Technological developments in the field of NMR spectroscopy have enabled the identification and quantitative measurement of many metabolites in a single biofluid sample in a non-targeted and non-destructive manner. The combination of NMR spectra of biofluids with pattern recognition methods has driven forward the application of metabolomics in the field of biomarker discovery. The importance of metabolomics in diagnostics, e.g. in identifying biomarkers or defining pathological status, has been growing exponentially, as evidenced by the number of published papers. In this review, we describe the developments in data acquisition and multivariate analysis of NMR-based metabolomics data, with particular emphasis on the metabolomics of Cerebrospinal Fluid (CSF) and biomarker discovery in Multiple Sclerosis (MScl).

17.
Two of the chiroptical spectroscopic methods, namely optical rotatory dispersion (ORD) and electronic circular dichroism (ECD), have been around for several decades, but their use in determining absolute configuration and predominant conformation is gaining renewed interest with the availability of quantum mechanical methods for predicting ORD and ECD. Two other methods, namely vibrational circular dichroism (VCD) and vibrational Raman optical activity (VROA), are relatively new and offer convenient approaches for deducing structural information on chiral molecules. With the availability of quantum mechanical programs for predicting VCD and VROA, these methods have attracted numerous new researchers to the area. This review summarizes the latest developments in these four areas and provides examples where more than one method has been used to confirm the information obtained from the individual methods.

18.
The preprocessing of chromatograms, such as the alignment of retention time shifts, is often a crucial step in a proper data analysis chain. Here, an efficient approach to align shifted chromatographic signals, longest distance shifting, is presented and highlighted. The performance of this novel strategy was demonstrated using both simulated chromatograms that covered the different kinds of retention time shifts and real experimental chromatograms of Pudilan Xiaoyan Tablets obtained by high-performance liquid chromatography with photodiode array detection. The averaged correlation coefficients for the experimental chromatograms were in the range of 0.9517-0.9840 and the peak factor was 0.9989. For comparison, all the chromatograms were also aligned using the Correlation Optimized Warping and Interval Correlation Optimised Shifting algorithms. The obtained results indicate that the longest distance shifting algorithm is simpler, faster and more effective, and is potentially suitable for the alignment of other types of signals.
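The averaged correlation coefficient used here as an alignment quality criterion is straightforward to reproduce: each aligned chromatogram is correlated with a reference signal and the coefficients are averaged. A minimal sketch, in which the choice of the data-set mean as the reference is an assumption, is:

```python
import numpy as np

def mean_correlation(chromatograms, reference=None):
    """Average Pearson correlation between each chromatogram (rows of a
    2-D array) and a reference signal; higher values indicate better
    alignment of the data set."""
    X = np.asarray(chromatograms, dtype=float)
    if reference is None:
        reference = X.mean(axis=0)          # data-set mean as reference
    r = [np.corrcoef(row, reference)[0, 1] for row in X]
    return float(np.mean(r))

# Example: misaligned peaks give a lower averaged correlation coefficient.
t = np.linspace(0, 1, 400)
peak = lambda c: np.exp(-0.5 * ((t - c) / 0.01) ** 2)
misaligned = np.vstack([peak(0.50), peak(0.52), peak(0.48)])
aligned = np.vstack([peak(0.50), peak(0.50), peak(0.50)])
print(round(mean_correlation(misaligned), 3), round(mean_correlation(aligned), 3))
```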

19.
The interaction between superoxide dismutase (SOD) and methyl jasmonate was studied by fluorescence spectroscopy and UV absorption spectroscopy. The fluorescence measurements show that the interaction between superoxide dismutase and methyl jasmonate is a static quenching process and that the binding is driven by hydrogen bonding and van der Waals forces; the number of binding sites n, the apparent binding constant K, and the thermodynamic parameters ΔH, ΔG and ΔS were also obtained. According to Förster's theory of non-radiative energy transfer and the synchronous fluorescence spectra, the binding site of methyl jasmonate on superoxide dismutase is close to both the tryptophan (Trp) and tyrosine (Tyr) residues of the enzyme.
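The reported quantities (quenching mechanism, binding constant K, number of binding sites n, and thermodynamic parameters) follow from the standard fluorescence-quenching relations, given here in their usual generic form rather than as reproduced from the paper:

```latex
% Stern-Volmer quenching, double-logarithmic binding plot, van't Hoff
% relation and Gibbs free energy (F_0 / F = fluorescence intensity without /
% with quencher at concentration [Q], T = absolute temperature).
\frac{F_0}{F} \;=\; 1 + K_{SV}[Q], \qquad
\log\frac{F_0 - F}{F} \;=\; \log K + n\,\log[Q],
\\[4pt]
\ln\frac{K_2}{K_1} \;=\; -\frac{\Delta H}{R}\left(\frac{1}{T_2}-\frac{1}{T_1}\right), \qquad
\Delta G \;=\; \Delta H - T\,\Delta S \;=\; -RT\ln K
```

Negative ΔH and ΔS obtained from these relations are the usual criterion for hydrogen bonding and van der Waals forces as the dominant binding forces.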

20.
This tutorial provides a concise overview of support vector machines and different closely related techniques for pattern classification. The tutorial starts with the formulation of support vector machines for classification. The method of least squares support vector machines is explained. Approaches to retrieve a probabilistic interpretation are covered, and it is explained how the binary classification techniques can be extended to multi-class methods. Kernel logistic regression, which is closely related to iteratively weighted least squares support vector machines, is discussed. Different practical aspects of these methods are addressed: feature selection, parameter tuning, unbalanced data sets, model evaluation, and statistical comparison. The different concepts are illustrated on three real-life applications in the fields of metabolomics, genetics and proteomics.
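For the least squares support vector machine in particular, training reduces to solving a single linear system in the dual variables instead of a quadratic programme; a compact NumPy sketch with an RBF kernel (generic variable names, not code from the tutorial) is:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_train(X, y, C=10.0, gamma=1.0):
    """Least squares SVM classifier (labels y in {-1, +1}): the KKT
    conditions give one linear system instead of a QP."""
    n = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / C
    rhs = np.concatenate([[0.0], np.ones(n)])
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return alpha, b

def lssvm_predict(X_new, X, y, alpha, b, gamma=1.0):
    """Sign of the decision function f(x) = sum_k alpha_k y_k K(x, x_k) + b."""
    return np.sign(rbf_kernel(X_new, X, gamma) @ (alpha * y) + b)

# Toy two-class problem: two Gaussian clouds in two dimensions.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
alpha, b = lssvm_train(X, y)
print((lssvm_predict(X, X, y, alpha, b) == y).mean())   # training accuracy
```

Because every training point receives a nonzero alpha, the LS-SVM solution is not sparse; this is the main trade-off against the standard SVM discussed in the tutorial.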
