Similar literature
Found 20 similar documents (search time: 15 ms)
1.
2.
The interpretation of raw signals in capillary CE can be challenging if there are unknown peaks or if the signal is corrupted by baseline fluctuations, EOF velocity drift, etc. Signal processing may be required before results can be interpreted. A suite of signal processing algorithms has been developed for CE data analysis, specifically for use in field experiments for the detection of nerve agents with portable CE instruments. These programs cover everything from baseline correction and electropherogram alignment to peak matching and identification. Baseline correction is achieved by interpolating a new baseline through points found among all local extremes, after applying an appropriate outlier test. Irreproducible migration times are corrected by compensating for EOF drift, measured with the aid of thermal marks: small disturbances in the capillary, created by localized heating, that move with the velocity of the EOF. Peaks in the sample electropherogram are identified with a fuzzy matching algorithm that compares them to peaks from a reference electropherogram.
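The baseline step described above can be sketched roughly as follows. This is a hedged illustration, not the authors' code: the function names, the MAD-based outlier test, and the linear interpolation between anchors are our own simplifications.

```python
# Sketch: baseline correction by interpolating through local minima,
# with a simple median/MAD outlier test on the anchor points.
import statistics

def local_minima(y):
    """Indices of strict local minima, plus the two endpoints."""
    idx = [0]
    idx += [i for i in range(1, len(y) - 1) if y[i - 1] > y[i] < y[i + 1]]
    idx.append(len(y) - 1)
    return idx

def baseline_correct(y, k=3.0):
    pts = local_minima(y)
    vals = [y[i] for i in pts]
    med = statistics.median(vals)
    mad = statistics.median(abs(v - med) for v in vals) or 1e-12
    # Outlier test: drop anchor points too far from the median level
    # (e.g. minima that sit on top of an unresolved peak cluster).
    keep = [(i, v) for i, v in zip(pts, vals) if abs(v - med) / mad <= k]
    # Linearly interpolate the baseline between surviving anchors.
    base = [0.0] * len(y)
    for (i0, v0), (i1, v1) in zip(keep, keep[1:]):
        for j in range(i0, i1 + 1):
            t = (j - i0) / max(i1 - i0, 1)
            base[j] = v0 + t * (v1 - v0)
    return [yy - b for yy, b in zip(y, base)]
```

On a linear drift with a single peak, the interpolated baseline reproduces the drift and the corrected trace keeps only the peak.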

3.
A novel technique for removal of three-dimensional background drift in comprehensive two-dimensional (2D) liquid chromatography coupled with diode array detection (LCxLC-DAD) data is proposed. The basic idea is to perform trilinear decomposition on the instrumental response data, based on the alternating trilinear decomposition (ATLD) algorithm. In model construction, the background drift is modeled as one component (factor), just like the analytes of interest; hence the drift is explicitly included in the calibration. The method performs trilinear decomposition on the raw data, extracts the background component, and subtracts this background from the raw data, leaving the analytes' signal on a flat baseline. Simultaneous evaluation of the three-dimensional background drift and the true signals may improve the quality of the data. The method is applied to the determination and removal of three-dimensional background drifts in simulated multidimensional data as well as in experimental comprehensive two-dimensional liquid chromatographic data. It is shown that this technique yields good removal of background drift, without the need to perform a blank chromatographic run, and requires no prior knowledge about the sample composition.
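The subtraction step can be illustrated with a much-simplified two-way analogue. This is our own sketch, not the ATLD algorithm itself (which operates on three-way data): the drift is modeled as a single rank-1 component of the data matrix, estimated here by power iteration, and subtracted.

```python
# Hedged 2D analogue of the idea above: treat the smooth background drift
# as one rank-1 component of the data matrix and subtract it.
# Caveat of the simplification: if an analyte dominates the data, the
# leading component captures the analyte instead of the drift.

def rank1_background(D, iters=200):
    rows, cols = len(D), len(D[0])
    v = [1.0] * cols
    for _ in range(iters):
        u = [sum(D[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        nu = sum(x * x for x in u) ** 0.5
        u = [x / nu for x in u]
        # v is left unnormalized so it carries the component's scale.
        v = [sum(D[i][j] * u[i] for i in range(rows)) for j in range(cols)]
    # Background reconstructed as the outer product u v^T.
    return [[u[i] * v[j] for j in range(cols)] for i in range(rows)]

def subtract_background(D):
    B = rank1_background(D)
    return [[D[i][j] - B[i][j] for j in range(len(D[0]))]
            for i in range(len(D))]
```

On an exactly rank-1 matrix, the residual after subtraction is numerically zero.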

4.
Metabolic fingerprinting of biofluids such as urine can be used to detect and analyse differences between individuals. However, before pattern recognition methods can be utilised for classification, preprocessing techniques for the denoising, baseline removal, normalisation and alignment of electropherograms must be applied. Here, a MEKC method using diode array detection has been used for high-resolution separation of both charged and neutral metabolites. Novel, generic algorithms have been developed for use prior to multivariate data analysis. Alignment is achieved by combining the use of reference peaks with a method that exploits information from multiple wavelengths to align electropherograms to a reference signal. This metabolic fingerprinting approach by MEKC has been applied for the first time to urine samples from autistic and control children in a nontargeted, unbiased search for markers of autism. Although no biomarkers for autism could be identified from the MEKC data here, the general approach presented could also be applied to the processing of other data collected by CE with UV-Vis detection.
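The reference-peak part of such an alignment can be sketched as a linear rescaling of the migration-time axis. This is a hypothetical minimal version (the paper's method also uses multi-wavelength information); all names are illustrative.

```python
# Sketch: align a sample trace to a reference grid using two marker peaks.

def align(sample, s0, s1, r0, r1):
    """sample: list of intensities; s0/s1: marker indices in the sample;
    r0/r1: the same markers' indices in the reference trace."""
    n = len(sample)
    out = []
    for r in range(n):
        # Position in the sample that maps onto reference index r.
        s = s0 + (r - r0) * (s1 - s0) / (r1 - r0)
        i = int(s)
        if i < 0 or i >= n - 1:
            out.append(0.0)          # outside the recorded window
        else:
            t = s - i                # linear interpolation between samples
            out.append(sample[i] * (1 - t) + sample[i + 1] * t)
    return out
```

A peak at sample index 12, with markers at 4 and 16 mapping to reference indices 2 and 8, lands at reference index 6 after alignment.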

5.
Noisy data have always been a problem for the experimental community. Effective removal of noise from data is important for better understanding and interpretation of experimental results. Over the years, several methods have evolved for filtering the noise present in data. Fast Fourier transform (FFT) based filters are widely used because they provide precise information about the frequency content of the experimental data, which is used for filtering out noise. However, the FFT assumes that the experimental data are stationary. This means that: (i) the deterministic part of the data obtained from a system is at steady state without any transients and has frequency components that do not vary with time, and (ii) the noise corrupting the data is wide-sense stationary, that is, its mean and variance do not vary with time. Several approaches, for example short-time Fourier transform (STFT) and wavelet transform based filters, have been developed to handle transient data corrupted with nonstationary noise (noise whose mean and variance vary with time). Both approaches provide time and frequency information about the data (the time at which a particular frequency is present in the signal). However, these filtering approaches have the following drawbacks: (i) the STFT requires identification of an optimal window length within which the data are stationary, which is difficult, and (ii) there are theoretical limits on simultaneous time and frequency resolution; hence, filtering of noise is compromised. Recently, empirical mode decomposition (EMD) has been used in several applications to decompose a given nonstationary data segment into several characteristic oscillatory components called intrinsic mode functions (IMFs). Fourier transforms of these IMFs identify the frequency content of the signal, which can be used for removal of noisy IMFs and reconstruction of the filtered signal.
In this work, we propose an algorithm for effective filtering of noise using an EMD-based FFT approach for applications in polymer physics. The advantages of the proposed approach are: (i) it uses the precise frequency information provided by the FFT and therefore efficiently filters a wide variety of noise, and (ii) the EMD approach can effectively obtain IMFs from nonstationary as well as nonlinear experimental data. The utility of the proposed approach is illustrated using an analytical model and also through two typical laboratory experiments in polymer physics in which the material response is nonstationary; standard filtering approaches are often inappropriate in such cases. © 2010 Wiley Periodicals, Inc. J Polym Sci Part B: Polym Phys, 2011

6.
Adaptive filtering of chemical spectral data based on wavelet theory
Using wavelet theory and exploiting the marked difference in the behaviour of the wavelet-transform modulus maxima of noise versus true signal, a new class of adaptive filtering algorithms for chemical spectral data is proposed, fundamentally breaking with the traditional mode in which existing algorithms filter according to the frequency characteristics of signal and noise. Extensive tests on chromatographic data show that the algorithm requires no initial parameters to be set, eliminates the influence of human error on the analytical results, separates signal from noise well, and preserves peak position and peak height. Its robustness, adaptivity, and peak fidelity fully meet the requirements of signal processing for instrumental analysis.
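As a loose illustration of wavelet-domain denoising, here is a one-level Haar transform with soft thresholding of the detail coefficients. Note the contrast with the abstract above: the paper's algorithm is adaptive and parameter-free, while this sketch uses an explicit, hand-set threshold.

```python
# Sketch: one-level Haar wavelet denoising with soft thresholding.

def haar_denoise(y, thresh):
    n = len(y) - len(y) % 2
    approx = [(y[i] + y[i + 1]) / 2 for i in range(0, n, 2)]
    detail = [(y[i] - y[i + 1]) / 2 for i in range(0, n, 2)]
    # Soft-threshold the details: noise lives mostly in small coefficients.
    detail = [max(abs(d) - thresh, 0.0) * (1 if d >= 0 else -1)
              for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]          # inverse Haar step
    return out + list(y[n:])           # pass through a trailing odd sample
```

On a constant signal carrying small alternating noise, thresholding removes the noise entirely.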

7.
The spectra processing step is crucial in metabolomics approaches, especially for proton NMR metabolomics profiling. During this step, noise reduction, baseline correction, peak alignment and reduction of the 1D 1H-NMR spectral data are required in order to allow biological information to be highlighted through further statistical analyses. Above all, data reduction (binning or bucketing) strongly impacts subsequent statistical data analysis and potential biomarker discovery. Here, we propose an efficient spectra processing method which also provides helpful support for compound identification using a new data reduction algorithm that produces relevant variables, called buckets. These buckets are the result of the extraction of all relevant peaks contained in the complex mixture spectra, rid of any non-significant signal. Taking advantage of the concentration variability of each compound in a series of samples and based on significant correlations that link these buckets together into clusters, the method further proposes automatic assignment of metabolites by matching these clusters with the spectra of reference compounds from the Human Metabolome Database or a home-made database. This new method is applied to a set of simulated 1H-NMR spectra to determine the effect of some processing parameters and, as a proof of concept, to a tomato 1H-NMR dataset to test its ability to recover the fruit extract compositions. The implementation code for both clustering and matching steps is available upon request to the corresponding author.
Figure
Illustration of the processing approach, from spectra bucketing to the proposal of candidate compounds, using a set of six simulated NMR spectra. First, the ERVA method of data reduction is applied to the spectra after noise processing, generating buckets as shown for two spectral regions. Second, the correlation matrix between bucket intensities is computed and a correlation threshold is applied for bucket clustering. The cluster shown gathers two sub-clusters (A and B), each intra-connected with higher correlations (r > 0.996) than the interconnections (r < 0.994). Third, matching the cluster against a reference compound library provides a list of candidate compounds. Last, for validation, the reference spectrum of proline is shown with the corresponding matched regions highlighted.
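The correlation-clustering step described above can be sketched as follows. This is a hedged illustration: `cluster_buckets`, the union-find grouping, and the threshold value are our own, not taken from the paper.

```python
# Sketch: Pearson-correlate per-bucket intensity profiles across samples,
# then group buckets whose pairwise correlation exceeds a threshold.
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def cluster_buckets(profiles, r_min=0.99):
    """profiles: list of per-bucket intensity vectors (one value per sample)."""
    n = len(profiles)
    parent = list(range(n))
    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if pearson(profiles[i], profiles[j]) > r_min:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Two proportional profiles (correlation 1) cluster together; an uncorrelated bucket stays in its own cluster.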

8.
A detailed depth characterization of multilayered polymeric systems is a very attractive topic. Currently, the use of cluster primary ion beams in time-of-flight secondary ion mass spectrometry (ToF-SIMS) allows molecular depth profiling of organic and polymeric materials. Because typical raw data may contain thousands of peaks, the amount of information to manage grows rapidly, so that data reduction techniques become indispensable for extracting the most significant information from a given dataset. Here, we show how wavelet-based signal processing can be applied to the compression of the giant raw datasets acquired during ToF-SIMS molecular depth-profiling experiments. We tested the approach on data acquired by analyzing a model sample consisting of polyelectrolyte-based multilayers spin-cast on silicon. Numerous wavelet mother functions and several compression levels were investigated. We propose estimators of the filtering quality in order to find the highest 'safe' approximation level in terms of peak-area modification, signal-to-noise ratio, and mass-resolution retention. The compression procedure yielded a dataset that is straightforwardly manageable without any peak-picking procedure or detailed peak integration. Moreover, we show that multivariate analysis, namely principal component analysis, can be successfully combined with the wavelet filtering, providing a simple and reliable method for extracting the relevant information from raw datasets. Copyright © 2016 John Wiley & Sons, Ltd.
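A keep-the-largest-coefficients compression step, together with a simple quality estimator, might be sketched like this. The names and the retained-energy estimator are our own simplifications; the paper's estimators (peak area, S/N, mass resolution) are more elaborate.

```python
# Sketch: compress a transform-coefficient vector by keeping only the k
# largest-magnitude coefficients, and report the fraction of signal energy
# retained as a crude quality estimator.

def compress(coeffs, k):
    order = sorted(range(len(coeffs)),
                   key=lambda i: abs(coeffs[i]), reverse=True)
    keep = set(order[:k])
    out = [c if i in keep else 0.0 for i, c in enumerate(coeffs)]
    total = sum(c * c for c in coeffs)
    quality = sum(c * c for c in out) / total if total else 1.0
    return out, quality
```

Keeping two of four coefficients here still retains over 99.8% of the energy, because the dropped coefficients are small.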

9.
It is often desirable to selectively remove corrupting or uninteresting signals from complex NMR spectra without disturbing overlapping or nearby signals. For biofluids in particular, removal of solvent and urea signals is important for retaining quantitative accuracy in NMR-based metabonomics. This article presents a novel algorithm for efficient filtering of unwanted signals using the filter diagonalization method (FDM). Unwanted signals are modeled in the time domain using the FDM, and the modeled signal is subtracted from the original free induction decay. The resulting corrected signal is then processed using the established workflow. The algorithm is found to be reliable and fast. By eliminating large, broad, uninteresting signals, many spectra can be subjected to fully automated absolute-value processing, allowing objective preparation of spectra for pattern recognition analysis. Copyright © 2009 John Wiley & Sons, Ltd.
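The time-domain subtraction idea can be illustrated with a much-reduced sketch in which the unwanted line's frequency and damping are already known; the FDM itself, which estimates those parameters, is beyond this snippet, and all names are illustrative.

```python
# Sketch: model an unwanted line as a decaying complex exponential of known
# frequency and damping, estimate its amplitude by least-squares projection
# onto the FID, and subtract the modeled signal in the time domain.
import cmath

def remove_line(fid, dt, freq, damp):
    n = len(fid)
    basis = [cmath.exp((2j * cmath.pi * freq - damp) * k * dt)
             for k in range(n)]
    # Least-squares amplitude of the unwanted component.
    amp = (sum(f * b.conjugate() for f, b in zip(fid, basis))
           / sum(abs(b) ** 2 for b in basis))
    return [f - amp * b for f, b in zip(fid, basis)]
```

If the FID consists of the unwanted line alone, the residual after subtraction is numerically zero.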

10.
Peak recognition imitating human judgement
B. Schirm  H. Wätzig 《Chromatographia》1998,48(5-6):331-346
Summary  Peak integration is still a major source of error in analytical techniques such as chromatography (LC and GC), capillary electrophoresis (CE), spectroscopy, and electrochemistry. If the baseline is complex, e.g. because of matrix effects, or if the peak shape is irregular, e.g. because of peak tailing, the results are often not satisfactory when classical procedures are used. These shortcomings arise because of the stepwise treatment of the chromatogram. An algorithm that copies the human approach of considering baseline and peaks as a whole has already been introduced. There, the use of a straight line as a baseline model led to an improvement in several instances. The baseline is, however, usually not exactly straight and rigid; a baseline model with flexible properties is more advantageous. Thus the smoothing cubic spline function is applied in this work. Its rigidity can be controlled by use of a parameter p_k. The prediction interval of the spline is used for iterative distinction between baseline and peak regions, after which a straightforward optimization of the peak boundaries is applied. More than 50 series of consecutive injections of the same sample (n = 40 on average) were used to test the performance of this procedure. The same raw data were integrated both by the algorithm described here and by commercially available software. The reproducibility of the main-component peak area within each series was taken as a measure of integration quality. Typically the new procedure reduces RSD% by approximately 33% (e.g. from 1.5% to 1.0%). The improvement is even more impressive for difficult samples with complex matrices, e.g. blood plasma or polymer excipients; for such samples improvements of up to a factor of 6 are obtained.

11.
Hyperspectral imaging (HSI) is a method for exploring the spatial and spectral information associated with the distribution of the different compounds in a chemical or biological sample. Amongst the multivariate image analysis tools used to decompose the raw data into a bilinear model, multivariate curve resolution alternating least squares (MCR-ALS) can be applied to obtain the distribution maps and pure spectra of the components of the sample image. However, a requirement is to have the data in a two-way matrix. Thus, a preliminary step consists of unfolding the raw HSI data along a single-pixel direction. Through this data manipulation, the information on pixel neighbouring is lost, and spatial information cannot be directly imposed as a constraint on the component profiles in the current MCR-ALS algorithm. In this short communication, we propose an adaptation of the MCR-ALS framework enabling the implementation of any variant of spatial constraint. This is achieved by adding, at each least-squares step, a refolding/unfolding of the component distribution maps. The implementation of segmentation, shape smoothness, and image modeling as spatial constraints is proposed as a proof of concept. Copyright © 2015 John Wiley & Sons, Ltd.

12.
In this work, a new signal processing algorithm is presented and applied to transform the sigmoidal curves registered in normal pulse voltammetry (NPV) into peak-shaped curves. The method is based on the continuous wavelet transform (CWT) and a specially constructed mother wavelet defined using the ideal wave-shaped curve. Transformation of the signal, elimination of noise, and separation of overlapping data can be achieved in one step by means of the proposed procedure. The operation and effectiveness of the algorithm are demonstrated using ideal, noisy, and overlapping simulated curves. The simultaneous determination of lead and indium, as well as the transformation of curves registered for lead, indium, thallium and cadmium ions, is described. The results show a substantial improvement in the performance of NPV.
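One way to sketch the wave-to-peak transformation is to correlate the voltammogram with a zero-mean kernel built from the ideal sigmoidal wave shape. This is a single-scale stand-in for the full CWT described above, and all names and parameters are illustrative.

```python
# Sketch: correlate a sigmoidal trace with a zero-mean, odd kernel derived
# from the ideal wave shape (sigmoid minus its half-height). On a flat
# region the zero-mean kernel returns ~0; at a sigmoid step it returns a peak.
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def wave_kernel(width=2.0, half=10):
    return [sigmoid(t / width) - 0.5 for t in range(-half, half + 1)]

def wave_to_peak(signal, width=2.0, half=10):
    ker = wave_kernel(width, half)
    n = len(signal)
    out = []
    for i in range(n):
        s = 0.0
        for j, kv in enumerate(ker):
            idx = i + j - half
            if 0 <= idx < n:
                s += signal[idx] * kv
        out.append(s)
    return out
```

Applied to a single sigmoidal wave centred at index 30, the output peaks at (or next to) index 30.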

13.
Orthogonal WAVElet correction (OWAVEC) is a pre-processing method aimed at simultaneously accomplishing two essential needs in multivariate calibration, signal correction and data compression, by combining an orthogonal signal correction algorithm, which removes information unrelated to a given response, with the great potential that wavelet analysis has shown for signal processing. In the previous version of the OWAVEC method, once the wavelet coefficient matrix had been computed from NIR spectra and deflated of irrelevant information in the orthogonalization step, effective data compression was achieved by selecting the largest correlation/variance wavelet coefficients as the basis for the development of a reliable regression model. This paper presents an evolution of the OWAVEC method that keeps the first two stages of the procedure (wavelet signal decomposition and direct orthogonalization) intact but incorporates genetic algorithms as the wavelet coefficient selection method, both to perform data compression and to improve the quality of the regression models developed later. Several applications dealing with diverse NIR regression problems are analyzed to evaluate the actual performance of the new OWAVEC method. Results provided by OWAVEC are also compared with those obtained with the original data and with other orthogonal signal correction methods.

14.
Determining the rank of a chemical matrix is the first step in many multivariate, chemometric studies. Rank is defined as the minimum number of linearly independent factors after deletion of factors that contribute to random, nonlinear, uncorrelated errors. Adding a matrix of rank 1 to a data matrix not only increases the rank by one unit but also perturbs the primary factor axes, having little effect on the secondary axes associated with the random errors in the measurements. The primary rank of a data matrix can be determined by comparing the residual variances obtained from principal component analysis (PCA) of the original data matrix to those obtained from an augmented matrix. The ratio of the residual variances between adjacent factor levels represents a Fisher ratio that can be used to distinguish the primary factors (chemical as well as instrumental factors) from the secondary factors (experimental errors). The results gleaned from model studies as well as those from experimental studies are used to illustrate the efficacy of the proposed methodology. The method is independent of the nature of the error distribution. Limitations and precautions are discussed. An algorithm, written in MATLAB format, is included. Copyright © 2011 John Wiley & Sons, Ltd.
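Assuming the PCA residual variances are already in hand, the adjacent-level ratio test reads roughly as follows. This is a hedged sketch of the idea, not the paper's MATLAB algorithm; the critical value is a placeholder, not a tabulated F statistic.

```python
# Sketch: primary factors give large ratios between residual variances at
# adjacent factor levels; error factors give ratios near one.

def estimate_rank(residual_vars, f_crit=5.0):
    """residual_vars[k] = residual variance after removing k factors."""
    rank = 0
    for k in range(len(residual_vars) - 1):
        if residual_vars[k + 1] <= 0:
            break
        ratio = residual_vars[k] / residual_vars[k + 1]
        if ratio > f_crit:
            rank = k + 1            # level k+1 still removes real structure
        else:
            break                   # remaining factors look like noise
    return rank
```

Residual variances 100, 10, 1, 0.9, 0.85 give ratios 10, 10, ~1.1, ..., so the estimated rank is 2.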

15.
An efficient QP partitioning algorithm to compute the eigenvalues, eigenvectors, and the dynamics of large molecular systems of a particular type is presented. Compared to straightforward diagonalization, the algorithm displays favorable scaling (proportional to N_T^2) as a function of N_T, the size of the Hamiltonian matrix. In addition, the algorithm is trivially parallelizable, necessitating no "cross-talk" between nodes, thus enjoying the full linear speedup of parallelization. Moreover, the method requires very modest storage space, even for extremely large matrices. The method has also been enhanced through the development of a coarse-grained approximation, enabling an increase of the basis set size to unprecedented levels (10^8-10^10 in the current application). The QP algorithm is applied to the dynamics of electronic internal conversion in a 24-vibrational-mode model of pyrazine. A performance comparison with other dynamical methods is presented, along with results for the decay dynamics of pyrazine and a discussion of resonance line shapes.

16.
The interferences of cobalt chloride on the determination of bismuth by electrothermal atomic absorption spectrometry (ETAAS) were examined using a dual cavity platform (DCP), which allows the gas-phase and condensed phase interferences to be distinguished. Effects of pyrolysis temperature, pyrolysis time, atomization temperature, heating rate in the atomization step, gas-flow rate in the pyrolysis and atomization steps, interferent mass and atomization from wall on sensitivity as well as atomization signals were studied to explain the interference mechanisms. The mechanism proposed for each experiment was verified with other subsequent sets of experiments. Finally, modifiers pipetted on the thermally treated sample+interferent mixture and pyrolyzed at different temperatures provided very useful information for the existence of volatilization losses of analyte before the atomization step. All experiments confirmed that when low pyrolysis temperatures are applied, the main interference mechanisms are the gas-phase reaction between bismuth and decomposition products of cobalt chloride in the atomization step. On the other hand, at elevated temperatures, the removal of a volatile compound formed between analyte and matrix constituents is responsible for some temperature-dependent interferences, although gas-phase interferences still continue. The experiments performed with colloidal palladium and nickel nitrate showed that the modifier behaves as both a matrix modifier and analyte modifier, possibly delaying the vaporization of either analyte or modifier or both of them.

17.
To date, no comprehensive comparison of streaming potential coupling coefficient collection or processing techniques has been made. Here, time-varying streaming potential and dc streaming potential data collection and processing techniques are presented and compared. The time-varying streaming potential data include sinusoidal and transient data. The collection techniques include acquiring dc streaming potentials at various pressures, acquiring time-varying streaming potentials at varying pressure, acquiring streaming potentials as a function of frequency, and collecting time-varying raw data. The processing techniques include dc filtering, rms processing, cross-correlation, spectral analysis, and plotting of raw time-varying streaming potential versus raw pressure data. The results show that all processing methods yield the same coupling coefficient within 3%. The analysis also shows that if there is a good signal-to-noise ratio, all processing methods perform satisfactorily. If the signal-to-noise ratio is poor, then the spectral analysis outperforms the other processing methods. The data collection methods are all adequate, but individual applications may make one method superior to another. Copyright 2001 Academic Press.
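The simplest processing route mentioned above, plotting raw streaming potential against raw pressure, reduces to taking a least-squares slope as the coupling coefficient. A minimal sketch (function name and units are our own):

```python
# Sketch: coupling coefficient as the least-squares slope of raw streaming
# potential (voltage) versus raw pressure.

def coupling_coefficient(pressure, voltage):
    n = len(pressure)
    mp = sum(pressure) / n
    mv = sum(voltage) / n
    num = sum((p - mp) * (v - mv) for p, v in zip(pressure, voltage))
    den = sum((p - mp) ** 2 for p in pressure)
    return num / den
```

For noise-free data generated with slope -0.05, the routine recovers -0.05 regardless of the offset.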

18.
Raman spectral imaging is a modern detection technique based on the Raman scattering effect and is widely used in modern production and scientific research. Raman signals are affected by fluorescence effects and instrumental factors and often exhibit baseline drift, which seriously hampers further extraction of signal features; baseline correction of the Raman signal is therefore necessary. Traditional baseline-correction methods treat each spectrum individually and are computationally expensive, so they are slow and perform poorly when processing imaging data composed of a large number of Raman signals. This paper proposes a fast baseline-correction method based on neighbourhood comparison, which exploits the correlation between spectra collected against the same background to achieve fast baseline correction and improve the processing speed of Raman imaging data.

19.
Wei Z  Bing-Ren X 《Talanta》2006,70(2):267-271
Based on stochastic resonance theory, a new single-well-potential stochastic resonance algorithm (SSR) to improve the signal-to-noise ratio (SNR) is presented. In the new algorithm, stochastic resonance takes place in a single-well potential driven only by the noise. Factors affecting the proposed algorithm are discussed. Using simulated and experimental data sets, it is shown that the SNR of a weak signal can be greatly enhanced by this method. The new SSR algorithm may be a promising tool to extend the instrumental linear range and to improve the accuracy of trace analysis. This research enlarges the application scope of the single-well potential to nonlinear signal processing.
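A toy simulation of noise-driven dynamics in a single-well quartic potential, with a weak periodic input riding on the noise, might look like the following. All parameters are illustrative, not taken from the paper, and this sketch does not reproduce the SSR algorithm's SNR analysis.

```python
# Sketch: Euler-Maruyama integration of overdamped motion in the single-well
# potential U(x) = b*x**4/4, driven by Gaussian noise plus a weak sinusoid.
import math
import random

def single_well_sr(n=2000, dt=0.01, b=1.0, amp=0.1, freq=0.5,
                   noise=1.0, seed=1):
    rng = random.Random(seed)
    x, out = 0.0, []
    for k in range(n):
        drift = -b * x ** 3 + amp * math.sin(2 * math.pi * freq * k * dt)
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        out.append(x)
    return out
```

The cubic restoring force keeps the trajectory bounded; in an SNR study one would then compare the spectral line at the drive frequency against the noise floor for different noise intensities.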

20.
A common feature of many modern technologies used in proteomics - including nuclear magnetic resonance imaging and mass spectrometry - is the generation of large amounts of data for each subject in an experiment. Extracting the signal from the background noise, however, poses significant challenges. One important part of signal extraction is the correct identification of the baseline level of the data. In this article, we propose a new algorithm (the "BXR algorithm") for baseline estimation that can be applied directly to different types of spectroscopic data and can also be tailored to specific technologies. We then show how to adapt the algorithm to a particular technology - matrix-assisted laser desorption/ionization Fourier transform ion cyclotron resonance mass spectrometry - which is rapidly gaining popularity as an analytic tool in proteomics. Finally, we compare the performance of our algorithm to that of existing algorithms for baseline estimation. The BXR algorithm is computationally efficient, robust to the type of one-sided signal that occurs in many modern applications (including NMR and mass spectrometry), and improves on existing baseline estimation algorithms. It is implemented as the function baseline in the R package FTICRMS, available either from the Comprehensive R Archive Network (http://www.r-project.org/) or from the first author.

