Similar Literature
 20 similar records found (search time: 31 ms)
1.
The non-parametric kinetic (NPK) method is a method for processing thermoanalytical data that makes no assumption about the functional dependence of the reaction rate on the degree of conversion or on the temperature. The method has not been widely used because of its mathematical sophistication and the difficulty of automating it. The original NPK method uses only the first (maximum) singular value, whereas additional information could be drawn from the remaining singular values. A hypothetical application of the NPK that uses all the significant singular values (a modified version of the NPK) is the separation of two or more steps of a complex decomposition reaction. Using simulated data, we have demonstrated that the modified version of the NPK is not useful for discriminating among the steps of a consecutive complex decomposition scheme. Nevertheless, analysis of the relative strength of the singular values is useful for assessing the degree of separability of the temperature and conversion functions, which are the outcome of the NPK. Taking into account the relative magnitude of the first singular value with respect to the remaining ones, we have proposed an automated two-scan version of the NPK method which guarantees two separable functions. As the separability of the temperature and conversion functions is the central assumption of the single-step kinetics approximation, the two-scan NPK method can be used to test methods based on this approximation, namely model-free and model-fitting methods.

2.
Fast Fourier transform for smoothing chromatographic noise and detecting weak signals
杨黎  许国旺  张玉奎  卢佩章 《色谱》1998,16(5):386-389
The fast Fourier transform (FFT) was applied to smooth chromatographic noise and to detect weak signals, and the results were compared with those of other digital filtering methods. The results show that the FFT smooths noise effectively, improving the signal-to-noise ratio 18-fold, which lays the foundation for better detection of weak chromatographic signals from trace components.
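To illustrate the idea in the abstract above, here is a minimal pure-Python sketch (not the authors' code): a naive O(n²) DFT stands in for the FFT, frequency bins above a cutoff are zeroed, and the signal is transformed back. The Gaussian "peak", noise level, and cutoff are all hypothetical choices.

```python
import cmath
import math
import random

def dft(x):
    """Naive O(n^2) discrete Fourier transform (a stand-in for the FFT)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def fourier_smooth(signal, keep):
    """Low-pass smoothing: zero every frequency bin above `keep` (two-sided)."""
    X = dft(signal)
    n = len(X)
    for k in range(n):
        if min(k, n - k) > keep:   # bin n-k mirrors bin k
            X[k] = 0.0
    return idft(X)

# Hypothetical chromatogram: one Gaussian peak plus baseline noise.
random.seed(0)
n = 128
clean = [math.exp(-((t - 64) / 8.0) ** 2) for t in range(n)]
noisy = [c + random.gauss(0.0, 0.05) for c in clean]
smooth = fourier_smooth(noisy, keep=10)
```

Because the peak's energy sits in the low-frequency bins while the noise is spread across all of them, discarding the high bins suppresses noise far more than signal.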

3.
Determining the rank of a trilinear data array is the first step towards its trilinear component decomposition. Unlike rank estimation for bilinear data, it is more difficult to decide the number of significant components needed to fit the trilinear decomposition exactly. General rank-estimation methods utilize the information contained in the singular values but ignore the information in the eigenvectors. In this paper, a rank-estimation method specifically for trilinear data arrays is proposed. It uses the idea of direct trilinear decomposition (DTLD) to compress the cube matrix into two pseudo sample matrices, which are then decomposed by singular value decomposition. Two eigenvectors, combined with a projection technique, are used to estimate the rank of the trilinear data array. Simulated trilinear data arrays with homoscedastic and heteroscedastic noise, different noise levels, and high collinearity, as well as real three-way data arrays, have been used to illustrate the feasibility of the proposed method. Compared with other factor-determining methods, for example the factor indication function (IND), residual percentage variance (RPV), and the two-mode subspace comparison approach (TMSC), the results showed that the new method gives more reliable answers under the different conditions applied.

4.
The electrochemical reduction of microperoxidase-11 was studied by in situ UV–visible and circular dichroism spectroelectrochemistry. The spectral data were processed by singular-value-decomposition least squares and by the double-logarithm method. It was found that the electrochemical reduction induces a conformational transition of microperoxidase-11 from random coil to α-helix, which provides basic information for further understanding the mechanisms of biological electron transfer and biomolecular conformational change.

5.
Many problems in chemistry, physics and engineering require the inversion of a Fredholm integral equation of the first kind. Finding the solution means dealing with an ill-posed problem, and special techniques must be used. In this paper two of the most common methods, Tikhonov regularization and the singular value decomposition, are compared on a model integral equation. The regularization parameter in the Tikhonov regularization and the dimension of the subspaces in the singular value decomposition were chosen using the L-curve criterion. The analytical solution of the model integral equation was taken as a reference for analyzing the results. The advantages of each method in the presence of errors in the data are presented, and the superiority of the singular value decomposition for this kind of problem is argued.
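A minimal sketch of the Tikhonov idea on a small ill-conditioned least-squares problem (the design matrix, data, and λ below are hypothetical; the paper's model integral equation and L-curve selection are not reproduced). The normal equations are damped with λI, which tames the solution when the columns are nearly collinear:

```python
def solve2(M, r):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(r[0] * M[1][1] - r[1] * M[0][1]) / det,
            (M[0][0] * r[1] - M[1][0] * r[0]) / det]

def tikhonov(A, b, lam):
    """x = argmin ||Ax - b||^2 + lam ||x||^2, via (A^T A + lam I) x = A^T b."""
    m = len(A)
    AtA = [[sum(A[i][r] * A[i][c] for i in range(m)) + (lam if r == c else 0.0)
            for c in range(2)] for r in range(2)]
    Atb = [sum(A[i][r] * b[i] for i in range(m)) for r in range(2)]
    return solve2(AtA, Atb)

# Hypothetical nearly collinear design: small data errors wreck the naive solution.
A = [[1.0, 1.000], [1.0, 1.001], [1.0, 0.999]]
b = [2.01, 2.00, 1.98]        # roughly consistent with x = [1, 1]
x0 = tikhonov(A, b, 0.0)      # ordinary least squares
x1 = tikhonov(A, b, 1e-3)     # regularized
```

With λ = 0 the tiny determinant of AᵀA amplifies the data errors; with a small λ the solution stays near the underlying [1, 1].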

6.
Selection of a successful optimization strategy is an essential part of solving numerous practical problems, yet it is often a nontrivial task, especially when the function to be optimized is multidimensional and involves statistical data. Here we propose a robust optimization scheme, referred to as NR/SVD-Cdyn, which combines the Newton–Raphson (NR) method with singular value decomposition (SVD), and demonstrate its performance by numerically solving the system of equations of the weighted histogram analysis method. Our results show significant improvement over direct iteration and conventional NR optimization. The proposed scheme is universal and could be used for various optimization problems in computational chemistry, such as parameter fitting for molecular mechanics and semiempirical quantum-mechanical methods. © 2019 Wiley Periodicals, Inc.

7.
In this work, we define the quality of selective regions in a data matrix acquired by two-way instrumental methods. We name this quality parameter the accumulated analytical signal (AAS) and link it to the quality of the resolution. The AAS is calculated as the first singular value divided by the second from a singular value decomposition of the selective region. We also extend this measure to systems containing more than two analytes and define the quality of zero-concentration windows (ZCWs), regions that are crucial in the resolution step. The quality parameter of these regions is named the net accumulated analytical signal (NAAS); it is calculated as the last significant singular value divided by the first non-significant singular value from a singular value decomposition of the ZCW. Since it is sometimes difficult to decide the elution regions by local rank analysis, we introduce a shifting procedure: the different elution regions are shifted and the system is resolved using the new elution windows. A stable solution indicates a good resolution.
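The AAS = σ₁/σ₂ definition can be sketched in pure Python, computing the two leading singular values by power iteration with deflation (an illustrative stand-in for a full SVD; the example matrices in any usage are hypothetical):

```python
import math
import random

def top_singular(A, iters=200, seed=0):
    """Leading singular value and vectors of A, by power iteration on A^T A."""
    rng = random.Random(seed)
    m, n = len(A), len(A[0])
    v = [rng.random() + 0.1 for _ in range(n)]
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]   # u = A v
        w = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]   # w = A^T u
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    sigma = math.sqrt(sum(x * x for x in u))
    u = [x / (sigma or 1.0) for x in u]
    return sigma, u, v

def aas(region):
    """Accumulated analytical signal: sigma_1 / sigma_2 of the selective region."""
    s1, u, v = top_singular(region)
    deflated = [[region[i][j] - s1 * u[i] * v[j] for j in range(len(v))]
                for i in range(len(u))]                  # remove the first component
    s2, _, _ = top_singular(deflated)
    return s1 / (s2 if s2 > 1e-12 else 1e-12)
```

A nearly rank-1 region (one analyte plus a small perturbation) yields a large AAS; a region with two comparable components yields a small one.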

8.
A novel method, subspace projection of a pseudo high-way data array (SPPH), was developed for estimating the chemical rank of high-way data arrays. The proposed method determines the chemical rank by performing singular value decomposition (SVD) on the slice matrices of the original high-way data array to produce a pseudo high-way data array, and by exploiting the difference between the original truncated data set and the pseudo one. Compared with traditional methods, it uses information from eigenvectors combined with the projection residual, rather than the eigenvalues, to estimate the rank of three-way data arrays. To demonstrate the performance of the new method, it was applied to simulated and real three-way data arrays. The results showed that the proposed method can accurately and quickly determine the chemical rank needed to fit the trilinear model. Moreover, the method was compared with four other factor-determining methods, i.e. the factor indicator function (IND), ADD-ONE-UP, the core consistency diagnostic (CORCONDIA) and the two-mode subspace comparison (TMSC) approach. It was found that the proposed method can handle more complex situations, with severe collinearity and trace-level concentrations, than many other methods can, and that it performs well in practical applications.

9.
Linear and non-linear calibration methods (principal component regression (PCR), partial least squares regression (PLS), and neural networks (NN)) were applied to a slightly non-linear Raman data set. Because of the large size of this data set, recently introduced linear calibration methods specifically optimised for speed were also used. These fast methods achieve their speed by using the Lanczos decomposition for the singular value decomposition steps of the calibration procedures and, in some variants, by optimising the models without cross-validation (CV). The linear methods could deal with the slight non-linearity present in the data by including extra components, and therefore performed comparably to NNs. The fast methods performed as well as their classical equivalents in terms of prediction precision, but the results were obtained considerably faster. However, CV appears to remain the most appropriate method for estimating model complexity.

10.
Resolution of the spectra of the intermediates in the photocycle of wild-type bacteriorhodopsin (BR) was achieved by singular value decomposition with exponential-fit-assisted self-modeling (SVD-EFASM) treatment of multichannel difference spectra measured at 5 °C during the course of the photocycle. A new finding is that two spectrally distinct L intermediates, L(1) and L(2), form sequentially. Our conclusion is that the photocycle is more complex than most published schemes. The dissection of the spectrally different L forms eliminates stoichiometric discrepancies that usually appear as systematically varying total intermediate concentrations before the onset of BR recovery. In addition, our analysis reveals that the red tails in the spectra of K and L(1) are more substantial than those of L(2) and BR. We suggest that these subtle differences in the shapes of the spectra reflect torsional and/or environmental differences in the retinyl chromophore.

11.
The present work provides a detailed investigation of the use of singular value decomposition (SVD) to solve the linear least-squares (LLS) problem of obtaining potential-derived atom-centered point charges (PD charges) from the ab initio molecular electrostatic potential (V(QM)). Given the SVD of any PD charge calculation LLS problem, it was concluded that (1) not all singular vectors are necessary to obtain the optimal set of PD charges and (2) the most effective set of singular vectors does not necessarily correspond to those with the largest singular values. It is shown that the efficient use of singular vectors can provide statistically well-defined PD charges when compared with conventional PD charge calculation methods, without sacrificing the agreement with V(QM). As can be expected, the methodology outlined here is independent of the algorithm for sampling V(QM) as well as of the basis set used to calculate V(QM). An algorithm is provided to select the best set of singular vectors for optimal PD charge calculations. To minimize subjective comparisons of different PD charge sets, we also provide an objective criterion for determining whether two sets of PD charges are significantly different from one another.

12.
The singular value decomposition of the n-particle excitation operator as determined by coupled cluster or perturbation theory is used to extract the dominant and interesting electron-electron correlations from complex molecular wave functions. As an example of the very general formalism, the decomposition of the T(2) operator obtained from coupled cluster doubles calculations is used to analyze the strength and character of pair correlations in a variety of molecules with interesting electronic structure. The magnitude of the largest singular value(s) determines the strength of the correlation(s), and the corresponding right- and left-hand singular vectors characterize the physical and spatial nature of the correlations. The primary advantage of this tool over natural orbital analysis is that it provides direct associations between the occupied and virtual geminals involved in the correlations.

13.
Many modern data analysis methods involve computing a matrix singular value decomposition (SVD) or eigenvalue decomposition (EVD). Principal component analysis is the time‐honored example, but more recent applications include latent semantic indexing (LSI), hypertext induced topic selection (HITS), clustering, classification, etc. Though the SVD and EVD are well established and can be computed via state‐of‐the‐art algorithms, it is not commonly mentioned that there is an intrinsic sign indeterminacy that can significantly impact the conclusions and interpretations drawn from their results. Here we provide a solution to the sign ambiguity problem and show how it leads to more sensible solutions. Copyright © 2008 John Wiley & Sons, Ltd.
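The indeterminacy itself is easy to illustrate: (u, v) and (−u, −v) describe the same rank-1 SVD component. A simple fixed convention, shown below as a stand-in for the data-driven rule proposed in the paper, flips each pair so that the largest-magnitude entry of v is positive:

```python
def fix_sign(u, v):
    """Resolve the SVD sign indeterminacy: (u, v) and (-u, -v) give the same
    rank-1 component. Convention used here (a simple stand-in for the
    data-driven rule of the paper): flip both vectors so that the
    largest-magnitude entry of v is positive."""
    k = max(range(len(v)), key=lambda i: abs(v[i]))
    if v[k] < 0:
        u = [-x for x in u]
        v = [-x for x in v]
    return u, v
```

Whatever convention is chosen, the essential point is that u and v must be flipped together, and that both sign choices map to the same oriented pair.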

14.
Smoothed principal component analysis based on the wavelet transform
The wavelet transform has a strong ability to separate signals: it readily separates random noise from the signal and thus improves the signal-to-noise ratio. In this paper the wavelet transform is introduced into factor analysis, and a wavelet-transform-based smoothed principal component analysis is proposed. The algorithm retains the orthogonal decomposition of ordinary principal component analysis while acquiring the signal-separation capability of the wavelet transform. Results on simulated and experimental data show that the algorithm can extract useful information from data with a low signal-to-noise ratio and improve the signal-to-noise ratio. Results of processing the experimental data by iterative target transformation factor analysis show that the wavelet-transform-based smoothed principal component analysis gives superior results.

15.
Recent studies using quantum mechanics energy decomposition methods, for example SAPT and ALMO, have revealed that the charge transfer energy may play an important role in short-ranged inter-molecular interactions and has a distance dependence different from that of the polarization energy. However, the charge transfer energy component has been ignored in most current polarizable or non-polarizable force fields. In this work, we first proposed an empirical decomposition of the SAPT induction energy into charge transfer and polarization energies that mimics the regularized SAPT method (ED-SAPT). This empirical decomposition is free of the divergence issue, hence providing a good reference for force field development. We then further extended this concept in the context of the AMOEBA polarizable force field and proposed a consistent approach to treating the charge transfer phenomenon. Current results show a promising application of this charge transfer model in future force field development. © 2017 Wiley Periodicals, Inc.

16.
Because of experimental errors, the chemical effect of minor reactions, and physical effects of heat and mass transfer, the mass-loss data resulting from thermal decomposition experiments usually contain considerable noise, so a high-quality smoothing algorithm plays an important role in obtaining the reliable derivative thermogravimetric (DTG) curves required for differential kinetic analysis. In this paper three smoothing methods, i.e. moving-average smoothing, Gaussian smoothing, and Vondrak smoothing, are investigated in detail for the pre-treatment of biomass decomposition data to obtain DTG curves, and the smoothing results are compared. It is concluded that, by choosing reasonable smoothing parameters based on spectrum analysis of the data, Gaussian smoothing and Vondrak smoothing can be reliably used to obtain DTG curves. The kinetic parameters calculated from the original TG curves and the smoothed DTG curves are in excellent agreement, and thus the Gaussian and Vondrak smoothing algorithms can be used directly and accurately in kinetic analysis. This work was sponsored by the National Natural Science Foundation of China under Grants 50346038 and 50323005, the China NKBRSF project (No. 2001CB409600), the Anhui Excellent Youth Scientist Foundation (2004–2005), the Specialized Research Fund for the Doctoral Program of Higher Education, and the National Key Technologies R&D Programme (2001BA510B09-03).
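A minimal pure-Python sketch of the Gaussian-smoothing-then-differentiate pipeline described above (the sigmoidal mass-loss curve, noise level, and kernel width are hypothetical; Vondrak smoothing is not reproduced):

```python
import math
import random

def gaussian_smooth(y, width):
    """Convolve with a truncated Gaussian kernel (window of +/- 3*width points)."""
    half = int(3 * width)
    kernel = [math.exp(-0.5 * (k / width) ** 2) for k in range(-half, half + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]
    n = len(y)
    out = []
    for i in range(n):
        acc = 0.0
        for k in range(-half, half + 1):
            j = min(max(i + k, 0), n - 1)   # clamp indices at the edges
            acc += kernel[k + half] * y[j]
        out.append(acc)
    return out

def dtg(mass, temp):
    """Central-difference derivative dm/dT of a TG curve."""
    return [(mass[i + 1] - mass[i - 1]) / (temp[i + 1] - temp[i - 1])
            for i in range(1, len(mass) - 1)]

# Hypothetical noisy TG curve: sigmoidal mass loss plus Gaussian noise.
random.seed(1)
temp = [300.0 + i for i in range(200)]
clean = [1.0 / (1.0 + math.exp((t - 400.0) / 10.0)) for t in temp]
noisy = [c + random.gauss(0.0, 0.005) for c in clean]

d_clean = dtg(clean, temp)
d_noisy = dtg(noisy, temp)
d_smooth = dtg(gaussian_smooth(noisy, 3.0), temp)
```

Differencing amplifies point-to-point noise, which is why smoothing before taking the DTG curve matters: the derivative of the smoothed data stays much closer to the noise-free derivative than the derivative of the raw data.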

17.
For the determination of total phosphorus in waters by flow-injection analysis, continuous microwave-oven decomposition with subsequent amperometric detection of orthophosphate is proposed. The percentage digestion was examined for two different decomposition reagents, varying the pH of the carrier and the length and diameter of the digestion coil. With potassium peroxodisulphate decomposition the recoveries of phosphorus vary from 91 to 100% for organic phosphorus compounds, and with perchloric acid decomposition the recoveries vary from 60 to 70% for inorganic polyphosphates. Calibration graphs are linear up to 30 mg P l−1, the determination limit is 0.1 mg P l−1 and the precision of the method is 3% (relative standard deviation, n = 5) at 5 mg P l−1. The sampling rate is 20 h−1. Good recoveries of phosphorus added to domestic waste water samples are obtained.

18.
Estimating an appropriate chemical rank for a three-way data array is very important for second-order calibration. In this paper, a simple linear transform incorporating a Monte Carlo simulation approach (LTMC) is suggested for estimating the chemical rank of a three-way data array. The new method determines the chemical rank by performing a simple linear transform on the original cube matrix to produce two subspaces by singular value decomposition: one subspace is derived from the original three-way data array itself, and the other from a new three-way data array produced by the linear transformation of the original one. A projection technique incorporating the Monte Carlo approach acts as the criterion for choosing the appropriate number of components of the system. Simulated trilinear three-way data arrays with different noise types (homoscedastic and heteroscedastic), various noise levels, and high collinearity are used to illustrate the feasibility of the new method. The results show that the new method yields accurate results under the different conditions applied. Its feasibility is also confirmed with two real arrays, HPLC-DAD data and excitation-emission fluorescence data. All results are compared with three other factor-determining methods: the factor indicator function (IND), the core consistency diagnostic (CORCONDIA) and the two-mode subspace comparison (TMSC) approach. The comparison shows that the newly proposed algorithm can objectively and quickly determine the chemical rank needed to fit the trilinear model.

19.
RNA-seq data challenge existing omics data analytics with their volume and complexity. Although quite a few computational models have been proposed for differential expression (D.E.) analysis from different standpoints, almost none of these methods provides rigorous feature selection for high-dimensional RNA-seq count data. Instead, most or even all genes are included in differential calls whether or not they contribute to the data variation, which inevitably affects the robustness of D.E. analysis and increases false-positive rates. In this study, we present a novel feature selection method, nonnegative singular value approximation (NSVA), to enhance RNA-seq differential expression analysis by taking advantage of the non-negativity of RNA-seq count data. As a variance-based feature selection method, it selects genes according to their contribution to the first singular value direction of the input data in a data-driven approach. It is robust to depth bias and gene-length bias in feature selection in comparison with its five peer methods. Combined with state-of-the-art RNA-seq differential expression analysis, it enhances differential expression analysis by lowering the false discovery rates caused by these biases. Furthermore, we demonstrate the effectiveness of the proposed feature selection by building on it a data-driven differential expression analysis, NSVA-seq, and by conducting network marker discovery.
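The core of the variance-based idea, ranking genes by their loading on the first singular direction of the count matrix, can be sketched in pure Python with a power iteration. This is illustrative only: `select_features` and the synthetic counts are hypothetical names and data, not the authors' NSVA implementation.

```python
import math
import random

def first_right_singular_vector(A, iters=300, seed=0):
    """Power iteration on A^T A: the first right singular vector of A."""
    rng = random.Random(seed)
    m, n = len(A), len(A[0])
    v = [rng.random() + 0.1 for _ in range(n)]
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        w = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return v

def select_features(counts, k):
    """Keep the k genes (columns) with the largest loading on the first
    singular direction -- the variance-based selection idea in miniature."""
    v = first_right_singular_vector(counts)
    order = sorted(range(len(v)), key=lambda j: -abs(v[j]))
    return order[:k]
```

On a samples-by-genes count matrix in which one gene dominates the variation, that gene receives the largest loading and is selected first.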

20.
Partial least squares modeling based on singular value decomposition was applied to the simultaneous spectrophotometric determination of Co(II), Ni(II) and Cu(II) as their ammonium 2-amino-1-cyclohexan-1-dithiocarbamate complexes. The latent variable calculation in this partial least squares modeling is not an iterative technique. The detection limits for Co(II), Ni(II) and Cu(II) were 0.072, 0.021 and 0.063 μg/ml, respectively. The application of the method was confirmed by the analysis of these metals in alloy samples.
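For a single response, the first PLS weight vector can indeed be computed non-iteratively: it is the normalized cross-covariance Xᵀy after centering, i.e. the dominant singular direction of that (vector-valued) cross-covariance. A hedged pure-Python sketch (the function name and any example data are hypothetical, not from the paper):

```python
import math

def first_latent_variable(X, y):
    """First PLS1 weight vector and scores, computed non-iteratively:
    w is the normalized cross-covariance X^T y after centering."""
    m, n = len(X), len(X[0])
    xm = [sum(X[i][j] for i in range(m)) / m for j in range(n)]
    ym = sum(y) / m
    Xc = [[X[i][j] - xm[j] for j in range(n)] for i in range(m)]
    yc = [yi - ym for yi in y]
    w = [sum(Xc[i][j] * yc[i] for i in range(m)) for j in range(n)]   # X^T y
    norm = math.sqrt(sum(x * x for x in w)) or 1.0
    w = [x / norm for x in w]
    t = [sum(Xc[i][j] * w[j] for j in range(n)) for i in range(m)]    # scores
    return w, t
```

When one spectral channel carries all of the y-correlated variance, the weight vector concentrates on that channel, which is what the single SVD-like step extracts without iteration.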
