Similar Documents
20 similar documents found.
1.
Linear and nonlinear regression analyses based on the least-squares method play a fundamental role in the evaluation of scientific data. A large number of valuable papers in the chemical literature have dealt with various applications of the least-squares method; they usually contain, however, cumbersome formulas for computing the error estimates of the fitted parameters. This paper presents a simple numerical solution to this problem based on the well-known simplex method. Detailed enzyme kinetic data published earlier were chosen to test the simplex method extended with error estimation. The capability of the numerical method is demonstrated by revising the originally calculated error propagation. Our program might prove useful in handling both chemical and biological data.
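The core idea above, a derivative-free simplex fit followed by error propagation for the fitted parameters, can be sketched as follows. This is a minimal illustration only, assuming a Michaelis-Menten rate law and synthetic substrate/rate data; it is not the authors' program, and the parameter uncertainties are obtained from a finite-difference Jacobian at the optimum.

```python
# Minimal sketch: Nelder-Mead simplex fit of a Michaelis-Menten model plus
# error estimates from cov ~ s^2 (J^T J)^-1 with a finite-difference Jacobian.
# All data below are synthetic and for illustration only.
import numpy as np
from scipy.optimize import minimize

S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # substrate concentrations
v = np.array([0.9, 1.6, 2.4, 3.1, 3.7, 4.0])    # observed rates (synthetic)

def model(p, S):
    Vmax, Km = p
    return Vmax * S / (Km + S)

def sse(p):                                      # least-squares objective
    return np.sum((v - model(p, S)) ** 2)

fit = minimize(sse, x0=[3.0, 1.0], method="Nelder-Mead")
p_opt = fit.x

def jacobian(p, h=1e-6):                         # central finite differences
    J = np.empty((len(S), len(p)))
    for j in range(len(p)):
        dp = np.zeros_like(p); dp[j] = h
        J[:, j] = (model(p + dp, S) - model(p - dp, S)) / (2 * h)
    return J

J = jacobian(p_opt)
dof = len(S) - len(p_opt)
s2 = sse(p_opt) / dof                            # residual variance
cov = s2 * np.linalg.inv(J.T @ J)
print("Vmax, Km =", p_opt, "+/-", np.sqrt(np.diag(cov)))
```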

2.
The estimation of chemical kinetic rate constants for any non-trivial model is complex because of the nonlinear effects of second-order chemical reactions. We developed an algorithm to accomplish this goal based on the Damped Least Squares (DLS) inversion method and then tested its effectiveness on the McKillop–Geeves (MG) model of thin filament regulation. The kinetics of the MG model is defined by a set of nonlinear ordinary differential equations (ODEs) that predict the evolution of troponin–tropomyosin–actin and actin–myosin states. The values of the rate constants are estimated by integrating these ODEs numerically and fitting them to a series of stopped-flow pyrene fluorescence transients of the myosin-S1 fragment binding to regulated actin in solution. The accuracy and robustness of the estimated rate constants are evaluated for DLS and two other methods, quasi-Newton (QN) and simulated annealing (SA). The comparison revealed that SA provides the best estimates of the model parameters because of its global optimization scheme; however, it converges slowly and does not quantify the uniqueness of the estimated parameters. The QN method, on the other hand, converges rapidly, but only if the initial guess of the parameters is close to the optimum values; otherwise it diverges. Overall, the DLS method proved to be the most convenient: it converges quickly and provided excellent estimates of the kinetic parameters. Furthermore, DLS provides the model resolution matrix, which quantifies the interdependence of the model parameters and thereby evaluates the uniqueness of their estimated values. This property is essential for estimating the dependence of the model parameters on experimental conditions (e.g., Ca2+ concentration) when it is assessed from noisy experimental data such as pyrene fluorescence stopped-flow transients. The advantages of the DLS method observed in this study should be further examined in other physicochemical systems to firmly establish the observed effectiveness of DLS relative to the other parameter estimation methods.
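A hedged sketch of the damped-least-squares workflow follows. It is not the authors' code and not the full MG model: a single reversible S1–actin binding step is fitted to a synthetic stopped-flow transient with scipy's Levenberg–Marquardt solver standing in for DLS, and a model resolution matrix is then formed with an illustrative damping factor.

```python
# Sketch: fit k_on/k_off of one reversible binding step to a synthetic transient,
# then form the model resolution matrix R = (J^T J + lam^2 I)^-1 J^T J.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_obs = np.linspace(0.0, 2.0, 80)
A0, M0 = 1.0, 1.0                      # initial actin / myosin-S1 (arbitrary units)

def transient(k, t):
    k_on, k_off = k
    def rhs(t, y):                     # y[0] = bound complex AM
        AM = y[0]
        return [k_on * (A0 - AM) * (M0 - AM) - k_off * AM]
    return solve_ivp(rhs, (t[0], t[-1]), [0.0], t_eval=t, rtol=1e-8).y[0]

rng = np.random.default_rng(0)
k_true = np.array([3.0, 0.5])
y_obs = transient(k_true, t_obs) + 0.01 * rng.standard_normal(t_obs.size)

residuals = lambda k: transient(k, t_obs) - y_obs
fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")   # Levenberg-Marquardt (damped LS)

J = fit.jac                            # Jacobian at the optimum
lam = 0.1                              # illustrative damping factor
JtJ = J.T @ J
R = np.linalg.solve(JtJ + lam**2 * np.eye(2), JtJ)           # model resolution matrix
print("estimated k_on, k_off:", fit.x)
print("resolution matrix (close to I means well-resolved parameters):\n", R)
```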

3.
The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. The GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach that exploits all the advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods: the MLR method is embedded in the GA optimizer, and the linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach considerably simplifies and accelerates the optimization process because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of a kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to χ², obtained from the Taylor series expansion of χ², is solved by means of the Newton–Raphson algorithm. The application of the GA-NR optimizer to model functions that are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish weakly nonlinear parameters from nonlinear ones, and it does not apply to model functions that are multi-linear combinations of nonlinear functions.
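A minimal sketch of the GA-MLR idea on a synthetic biexponential decay: an evolutionary optimizer searches only the nonlinear decay times, while the linear amplitudes are obtained inside the objective by ordinary linear least squares. scipy's differential_evolution is used here as a stand-in for the genetic algorithm; the data are invented for illustration.

```python
# Sketch: evolutionary search over nonlinear decay times; amplitudes solved linearly.
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0.0, 10.0, 200)
rng = np.random.default_rng(1)
y = 2.0 * np.exp(-t / 0.8) + 0.7 * np.exp(-t / 4.0) + 0.02 * rng.standard_normal(t.size)

def chi2(taus):
    tau1, tau2 = taus
    X = np.column_stack([np.exp(-t / tau1), np.exp(-t / tau2)])   # design matrix
    amps, *_ = np.linalg.lstsq(X, y, rcond=None)                  # MLR step: linear parameters
    return np.sum((y - X @ amps) ** 2)                            # residual sum of squares

result = differential_evolution(chi2, bounds=[(0.05, 20.0), (0.05, 20.0)], seed=1)
tau1, tau2 = sorted(result.x)
X = np.column_stack([np.exp(-t / tau1), np.exp(-t / tau2)])
amps, *_ = np.linalg.lstsq(X, y, rcond=None)
print("decay times:", tau1, tau2, "amplitudes:", amps)
```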

4.
The parameters obtained from a kinetic analysis of thermoanalytical data often exhibit conversion-dependent behavior. A novel incremental isoconversional method able to deal with this phenomenon is proposed. The kinetic model is fitted directly to the experimental data using a nonlinear orthogonal least-squares procedure. The data are processed without transformations, so their error distribution is preserved. Because the objective function is based on a maximum likelihood approach, reliable uncertainties of the parameters can be estimated. In contrast to other methods, the activation energy and the pre-exponential factor are treated as equally important kinetic parameters and are estimated simultaneously. The validity of the method is verified on simulated data, including a dataset with local nonlinearity in the temperature variation. A practical application to the nonisothermal cold crystallization of poly(ethylene terephthalate) is presented.

5.
In this article, genetic algorithms and a multilayer perceptron neural network (MLPNN) have been integrated in order to reduce the complexity of an optimization problem. A data-driven identification method based on the MLPNN and an optimal design of experiments is described in detail. The nonlinear model of an extractive ethanol process, represented by the MLPNN, is optimized using real-coded and binary-coded genetic algorithms to determine the optimal operating conditions. To check the validity of the computational modeling, the results were compared with the optimization of a deterministic model whose kinetic parameters were experimentally determined as functions of temperature.
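The surrogate-plus-GA workflow can be sketched as below. This is a hedged toy example, not the extractive ethanol process: a hypothetical two-variable "yield surface" stands in for the process model, an sklearn MLP is trained on designed samples, and an evolutionary optimizer (differential_evolution as a GA stand-in) searches the surrogate for the best operating conditions.

```python
# Sketch: MLP surrogate trained on sampled "experiments", then global search over it.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)

def process(x):            # hypothetical yield surface standing in for the process model
    T, F = x[..., 0], x[..., 1]
    return np.exp(-((T - 0.6) ** 2 + (F - 0.3) ** 2) / 0.1)

X_train = rng.uniform(0.0, 1.0, size=(200, 2))     # designed operating-condition samples
y_train = process(X_train) + 0.01 * rng.standard_normal(200)

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
surrogate.fit(X_train, y_train)

# GA-like global search over the surrogate (maximize predicted yield).
obj = lambda x: -surrogate.predict(x.reshape(1, -1))[0]
best = differential_evolution(obj, bounds=[(0.0, 1.0), (0.0, 1.0)], seed=0)
print("surrogate optimum at (T, F) =", best.x, "predicted yield =", -best.fun)
```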

6.
The same experimental data can often be described equally well by multiple mathematically equivalent kinetic schemes. In the present work, we investigate several model-fitting algorithms and their ability to distinguish between mechanisms and derive the correct kinetic parameters for several reaction classes involving consecutive reactions. We have conducted numerical experiments using synthetic data for six classes of consecutive reactions involving different combinations of first- and second-order processes. The synthetic data mimic time-dependent absorption data as would be obtained from spectroscopic investigations of chemical kinetic processes. The connections between mathematically equivalent solutions are investigated, and analytical expressions describing these connections are derived. Ten optimization algorithms based on nonlinear least-squares methods are compared in terms of their computational cost and frequency of convergence to global solutions. Performance is discussed, and a preferred method is recommended. A response-surface visualization technique is also developed that projects five-dimensional data onto the three-dimensional search space of minimal function values.
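The classic example of such mathematically equivalent solutions can be reproduced in a few lines. The sketch below uses synthetic intermediate-absorbance data for a first-order A → B → C sequence (not the paper's benchmark); because the intermediate profile is invariant under exchanging k1 and k2 up to an amplitude rescaling, two different rate-constant sets reach the same least-squares cost when the amplitude is fitted linearly.

```python
# Sketch: two "branches" of the A -> B -> C fit converge to mirrored, equally good solutions.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 20.0, 150)
rng = np.random.default_rng(3)

def shape(k1, k2, t):                        # normalized intermediate (B) profile
    return (np.exp(-k1 * t) - np.exp(-k2 * t)) / (k2 - k1)

y = shape(0.8, 0.15, t) + 0.005 * rng.standard_normal(t.size)

def residuals(k):
    k1, k2 = k
    s = shape(k1, k2, t)
    amp = (s @ y) / (s @ s)                  # best linear amplitude for this (k1, k2)
    return amp * s - y

for guess in ([1.0, 0.1], [0.1, 1.0]):       # one starting point on each branch
    fit = least_squares(residuals, guess, bounds=([1e-4, 1e-4], [10.0, 10.0]))
    print("k =", fit.x, " cost =", fit.cost)  # both branches reach the same cost
```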

7.
An inside-variance estimation method (IVEM) for binary interaction parameter regression in thermodynamic models is proposed. This maximum likelihood method recomputes the variance at each iteration of the optimization procedure, automatically re-weighting the objective function. Most maximum likelihood approaches currently used to regress the parameters of thermodynamic models fix the variances, converting the problem into a traditional weighted least-squares minimization. However, such approaches lead to residual variances (between measured and calculated values) that are inconsistent with the fixed variances and thus do not necessarily produce optimum parameters for prediction purposes. The new method (IVEM) substantially improves fluid phase equilibria predictions (as shown by the examples presented) by maintaining consistency between the residual variances and the variances used in the objective function. This results in better parameter estimates and a direct measure of the uncertainty in the model prediction.
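The inside-variance idea can be illustrated on a generic two-response fit (a hedged sketch, not a thermodynamic model): the per-response variances that weight the objective are re-estimated from the residuals at every outer iteration instead of being fixed a priori.

```python
# Sketch: alternate weighted least squares with variance re-estimation until self-consistent.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 40)
theta_true = np.array([1.5, -0.7])

def responses(theta):                         # two model responses sharing the parameters
    a, b = theta
    return a * x + b, a * np.exp(b * x)

y1, y2 = responses(theta_true)
y1 = y1 + 0.02 * rng.standard_normal(x.size)  # low-noise response
y2 = y2 + 0.20 * rng.standard_normal(x.size)  # high-noise response

var = np.array([1.0, 1.0])                    # initial variance guesses
theta = np.array([1.0, 0.0])
for it in range(20):                          # outer IVEM-style loop
    def wres(th):
        r1, r2 = responses(th)[0] - y1, responses(th)[1] - y2
        return np.concatenate([r1 / np.sqrt(var[0]), r2 / np.sqrt(var[1])])
    theta = least_squares(wres, theta).x
    r1, r2 = responses(theta)[0] - y1, responses(theta)[1] - y2
    new_var = np.array([np.mean(r1**2), np.mean(r2**2)])   # re-estimate the variances
    if np.allclose(new_var, var, rtol=1e-6):
        break
    var = new_var

print("parameters:", theta, "estimated std devs:", np.sqrt(var))
```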

8.
Mathematical modeling of drug delivery is of increasing academic and industrial importance. In this paper, we propose an optimization approach for estimating the parameters characterizing the diffusion of a drug from a spherical porous polymer device into an external finite volume. The approach is based on a nonlinear least-squares method and a novel mathematical model that takes into consideration both the boundary layer effect and the initial burst phenomenon. An analytical solution to the model is derived, and a formula for the ratio of the mass released in a given time interval to the total mass released in infinite time is also obtained. The approach has been tested using experimental data on the diffusion of prednisolone 21-hemisuccinate sodium salt from spherical devices made of porous poly(2-hydroxyethyl methacrylate) hydrogels. The effectiveness and accuracy of the method are well demonstrated by the numerical results. The model was used to determine the diffusion parameters, including the effective diffusion coefficient of the drug, for a series of devices that vary in both porous structure and drug loading level. The computed diffusion parameters are discussed in relation to the physical properties of the devices.
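The release-curve fitting step can be sketched as follows. This is a hedged illustration only: the classical perfect-sink series solution for diffusion out of a sphere is used as a stand-in for the paper's finite-volume model with boundary-layer and burst terms, and the effective diffusion coefficient is estimated by nonlinear least squares from synthetic release data with invented device dimensions.

```python
# Sketch: fit an effective diffusion coefficient to a fractional-release curve,
# M_t/M_inf = 1 - (6/pi^2) * sum_n exp(-n^2 pi^2 D t / r^2) / n^2  (sphere, perfect sink).
import numpy as np
from scipy.optimize import curve_fit

r = 0.1                                     # sphere radius, cm (illustrative)
t = np.linspace(60.0, 3600.0 * 8, 40)       # time, s

def fractional_release(t, D, n_terms=50):
    n = np.arange(1, n_terms + 1)[:, None]
    series = np.sum(np.exp(-(n * np.pi / r) ** 2 * D * t[None, :]) / n**2, axis=0)
    return 1.0 - (6.0 / np.pi**2) * series

rng = np.random.default_rng(9)
D_true = 2.0e-7                             # cm^2/s (illustrative)
y = fractional_release(t, D_true) + 0.01 * rng.standard_normal(t.size)

D_fit, cov = curve_fit(fractional_release, t, y, p0=[1e-7], bounds=(0.0, np.inf))
print("effective D =", D_fit[0], "cm^2/s, std err =", np.sqrt(cov[0, 0]))
```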

9.
The estimation of parameters in semi-empirical models is essential in numerous areas of engineering and applied science. In many cases, these models are described by a set of ordinary differential equations or a set of differential-algebraic equations. Because of non-convexities in the functions appearing in these equations, current gradient-based optimization methods can guarantee only locally optimal solutions. This deficiency can have a marked impact on the operation of chemical processes from the economic, environmental, and safety points of view, and it thus motivates the development of global optimization algorithms. This paper presents a global optimization method that guarantees ɛ-convergence to the global solution. The approach consists of transforming the dynamic optimization problem into a nonlinear programming (NLP) problem using the method of orthogonal collocation on finite elements. Rigorous convex underestimators of the nonconvex NLP problem are employed within a spatial branch-and-bound method and solved to global optimality. The proposed method was applied to two example problems dealing with parameter estimation from time-series data.

10.
On the Statistical Calibration of Physical Models
We introduce a novel statistical calibration framework for physical models, relying on probabilistic embedding of the model discrepancy error within the model. For clarity of illustration, we take measurement errors out of consideration, calibrating a chemical model of interest with respect to a more detailed model, considered as "truth" for the present purpose. We employ Bayesian statistical methods for such model-to-model calibration and demonstrate their capabilities on simple synthetic models, leading to a well-defined parameter estimation problem that employs approximate Bayesian computation. The method is then demonstrated on two case studies for the calibration of kinetic rate parameters for methane–air chemistry, where ignition time information from a detailed elementary-step kinetic model is used to estimate rate coefficients of a simple chemical mechanism. We show that the calibrated model predictions fit the data and that the uncertainty in these predictions is consistent in a mean-square sense with the discrepancy from the detailed model data.
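A toy approximate Bayesian computation (ABC) rejection sketch in the spirit of model-to-model calibration is shown below. A simple Arrhenius-type ignition-delay expression is calibrated against "data" produced by a stand-in for a detailed mechanism; all numbers are invented for illustration and are not taken from the paper.

```python
# Sketch: ABC rejection sampling of (log A, Ea) against stand-in ignition-delay data.
import numpy as np

rng = np.random.default_rng(5)
R = 8.314
T_data = np.array([1400.0, 1500.0, 1600.0, 1700.0])          # K

def detailed_model(T):                                        # stand-in for the detailed mechanism
    return 2.0e-9 * np.exp(1.8e5 / (R * T)) * (1 + 0.02 * rng.standard_normal(T.size))

def simple_model(T, logA, Ea):                                # simple mechanism to be calibrated
    return np.exp(logA) * np.exp(Ea / (R * T))

tau_data = np.log(detailed_model(T_data))                     # work with log ignition delays

# ABC rejection: sample the priors, keep parameter sets whose predictions lie near the data.
n_samples, eps = 200_000, 0.05
logA = rng.uniform(-22.0, -18.0, n_samples)
Ea = rng.uniform(1.5e5, 2.1e5, n_samples)
pred = np.log(simple_model(T_data[:, None], logA, Ea))        # shape (4, n_samples)
dist = np.sqrt(np.mean((pred - tau_data[:, None]) ** 2, axis=0))
keep = dist < eps

print("accepted samples:", keep.sum())
print("posterior mean logA, Ea:", logA[keep].mean(), Ea[keep].mean())
```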

11.
The transfer of the thermodynamic parameters governing retention of a molecule in gas chromatography from a reference column to a target column is a difficult problem. Successful transfer demands a mechanism whereby the geometries of both columns can be determined with high accuracy. This is the second part in a series of three papers. In Part I of this work we introduced a new approach to determine the actual effective geometry of a reference column and thermodynamics-based parameters of a suite of compounds on that column. Part II, presented here, illustrates the rapid estimation of the effective inner diameter (or length) and the effective phase ratio of a target column. The estimation model is based on the principle of least squares; a fast quasi-Newton optimization algorithm was developed to provide adequate computational speed. The model and optimization algorithm were tested and validated using simulated and experimental data. This study, together with the work in Parts I and III, demonstrates a method that improves the transferability of thermodynamic models of gas chromatography retention between columns.

12.
The partial least squares class model (PLSCM) was recently proposed for multivariate quality control based on a partial least squares (PLS) regression procedure. This paper presents a case study of the quality control of peanut oils based on mid-infrared (MIR) spectroscopy and class models, focusing mainly on the following aspects: (i) explaining the meaning of PLSCM components and comparing PLSCM with soft independent modeling of class analogy (SIMCA); (ii) correcting the estimation of the original PLSCM confidence interval by considering a nonzero intercept term for center estimation; and (iii) investigating the potential of MIR spectroscopy combined with class models for identifying peanut oils doped with low concentrations of other edible oils. It is demonstrated that PLSCM differs from the ordinary PLS procedure: it estimates the class center and class dispersion in the framework of a latent variable projection model. While SIMCA projects the original variables onto a few dimensions explaining most of the data variance, PLSCM components simultaneously consider the explained variance and the compactness of samples belonging to the same class. The results indicate that PLSCM is an intuitive and easy-to-use tool for one-class problems with performance comparable to that of SIMCA. The advantages of PLSCM might be attributed to the great success and well-established foundations of PLS. For PLSCM, the optimization of model complexity and the estimation of the decision region can be performed as in multivariate calibration routines.

13.
The method of successive estimation of regression parameters, which is widely used in nonlinear regression analysis, is applied to obtain kinetic information from spectral data for the case in which the spectra of the individual components are unknown. The reliability of the algorithm is demonstrated using a model example with a two-step successive reaction. To compare the proposed method with other known methods for estimating kinetic parameters, literature data are used. All simulations were performed using new software for nonlinear regression analysis, FITTER. The proposed approach is especially useful when the spectra of the reaction components are unknown and when formal calibration methods do not provide the desired accuracy.

14.
H. Liu, T. Zhang, L. Yan, H. Fang and Y. Chang, The Analyst, 2012, 137(16), 3862–3873
Spectroscopic data often suffer from the common problems of overlapping bands and random noise. In this paper, we show that the problem of overlapping peaks can be treated as a maximum a posteriori (MAP) problem and solved by minimizing an objective functional that includes a likelihood term and two prior terms. In the MAP framework, the likelihood probability density function (PDF) is constructed from a spectral observation model, a robust Huber-Markov model is used as the spectral prior PDF, and the kernel prior is described by a parametric Gaussian function. Moreover, we describe an efficient optimization scheme that alternates between latent spectrum recovery and blur kernel estimation until convergence. The major novelty of the proposed algorithm is that it estimates the kernel slit width and the latent spectrum simultaneously. Comparative results with other deconvolution methods suggest that the proposed method can recover spectral structural details as well as suppress noise effectively.
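A compact alternating-minimization sketch of blind slit-width deconvolution is given below. It is hedged and simplified: a Tikhonov smoothness term replaces the Huber-Markov prior, the kernel is Gaussian as in the paper, and the latent spectrum and the kernel width are estimated in turn on a synthetic two-peak spectrum.

```python
# Sketch: alternate MAP-style spectrum recovery and Gaussian kernel-width estimation.
import numpy as np
from scipy.optimize import minimize_scalar

n = 200
x = np.linspace(0.0, 1.0, n)
rng = np.random.default_rng(6)

def kernel_matrix(sigma):                       # Gaussian blur as an n x n matrix
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (d / sigma) ** 2)
    return K / K.sum(axis=1, keepdims=True)

latent_true = (np.exp(-0.5 * ((x - 0.40) / 0.010) ** 2)
               + 0.6 * np.exp(-0.5 * ((x - 0.55) / 0.015) ** 2))
y = kernel_matrix(0.02) @ latent_true + 0.005 * rng.standard_normal(n)

D = np.diff(np.eye(n), 2, axis=0)               # second-difference operator (smoothness prior)
lam = 1e-3
sigma = 0.05                                    # initial slit-width guess

for it in range(10):                            # alternate spectrum and kernel updates
    K = kernel_matrix(sigma)
    latent = np.linalg.solve(K.T @ K + lam * D.T @ D, K.T @ y)   # spectrum for fixed kernel
    res = minimize_scalar(lambda s: np.sum((kernel_matrix(s) @ latent - y) ** 2),
                          bounds=(0.005, 0.1), method="bounded")  # kernel width for fixed spectrum
    if abs(res.x - sigma) < 1e-5:
        break
    sigma = res.x

print("estimated slit width:", sigma, "(true value 0.02)")
```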

15.
Differential scanning calorimetry is well suited to recording the heat production of chemical and physical processes as data for subsequent kinetic analysis. To obtain kinetic parameters of complex reactions, nonlinear optimization methods have to be used. Polymerizations are such complex reaction systems. We evaluated measurements of an epoxy cure and of the polymerization of β-propiolactam with simple and complex models. In both cases the simple models did not produce satisfactory results, but with complex models a successful fit of the measured data was possible. Our investigation shows that the combination of DSC and modern nonlinear evaluation methods is a suitable tool for the kinetic investigation of polymerizations.

16.
The main purpose of this study is to review the current state of the problem of the impact of the gaseous environment on the kinetics of solid-state decompositions. Three different theoretical approaches to the interpretation of decomposition kinetics are considered. As follows from the literature published over the past 80 years, the Arrhenius and Knudsen–Langmuir approaches, based on the assumption of two different reaction mechanisms (congruent and incongruent), could not solve the problem. At the same time, successes in the application of the thermochemical approach, which is based on the assumption of a unitary congruent dissociative-vaporization mechanism with condensation of oversaturated vapor, remain unnoticed by the thermal analysis (TA) community. Taking this situation into account, the author has once more outlined the key points of thermochemical kinetics in a compact but rigorous and complete form. The revised kinetic equations for the different modes of decomposition, several important interrelations between the kinetic parameters, and the results of the interpretation or reappraisal of the main effects related to the impact of the gaseous environment on the kinetics are considered. In the framework of the thermochemical approach, the problem under discussion may now be considered practically resolved.

17.
In this work, the reaction scheme for the esterification of palm fatty acid distillate performed under noncatalytic, high-temperature conditions (230–290°C) was investigated with rigorous mathematical modeling. The esterification was assumed to be a pseudo-homogeneous second-order reversible reaction, and a mass-transfer effectiveness factor (η) was introduced into the modeling framework to systematically and collectively account for both evaporation and reaction, which occur simultaneously and competitively in the liquid phase. A nonlinear programming problem was constructed with an objective function consisting of the errors between the experimental data and the values estimated from the reaction model. The problem was solved using the Nelder–Mead simplex algorithm to identify kinetic parameters, reaction rate constants, and mass-transfer coefficients. The mass-transfer coefficients were found to follow the Hertz–Knudsen relation and were expressed as a function of reaction temperature. From the reaction rate constants obtained with the proposed kinetic models, the apparent activation energy was estimated to be 43.98 kJ/mol, which is lower than the value obtained for the reaction using heterogeneous catalysts. This low value indicates that the reactants and products behave as an acid catalyst at relatively high operating temperature and constant pressure.
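The fitting-plus-Arrhenius step can be sketched as follows, on synthetic data rather than the palm fatty acid distillate measurements: a pseudo-homogeneous second-order reversible esterification is fitted per temperature with the Nelder–Mead simplex, and an apparent activation energy is then taken from the Arrhenius plot of the forward rate constants. All concentrations, temperatures, and rate constants below are illustrative assumptions.

```python
# Sketch: fit (k_f, k_r) at two temperatures with Nelder-Mead, then Ea from ln k vs 1/T.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

R = 8.314
t_obs = np.linspace(0.0, 120.0, 25)               # minutes
FFA0, ROH0 = 1.0, 2.0                             # initial acid / alcohol (mol/L, illustrative)

def conversion(k, t):
    kf, kr = k
    def rhs(t, y):                                # y[0] = fractional conversion of FFA
        X = y[0]
        return [kf * FFA0 * (1 - X) * (ROH0 / FFA0 - X) - kr * FFA0 * X * X]
    return solve_ivp(rhs, (t[0], t[-1]), [0.0], t_eval=t, rtol=1e-8).y[0]

rng = np.random.default_rng(7)
temps = np.array([503.0, 543.0])                  # K (about 230 and 270 C)
kf_true = 1e2 * np.exp(-44_000.0 / (R * temps))   # synthetic forward constants (Ea = 44 kJ/mol)
kf_fit = []
for T, kf in zip(temps, kf_true):
    X_obs = conversion([kf, 0.002], t_obs) + 0.01 * rng.standard_normal(t_obs.size)
    sse = lambda k: np.sum((conversion(np.abs(k), t_obs) - X_obs) ** 2)
    best = minimize(sse, x0=[0.01, 0.001], method="Nelder-Mead")
    kf_fit.append(abs(best.x[0]))

slope = (np.log(kf_fit[1]) - np.log(kf_fit[0])) / (1 / temps[1] - 1 / temps[0])
print("apparent Ea =", -slope * R / 1000.0, "kJ/mol")   # roughly recovers the 44 kJ/mol input
```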

18.
A genetic algorithm (GA) is a stochastic optimization technique based on the mechanisms of biological evolution. Such algorithms have been successfully applied in many fields to solve a variety of complex nonlinear problems. While they have been used with some success in chemical problems such as fitting spectroscopic and kinetic data, many have avoided them because of the unconstrained nature of the fitting process. In engineering, this problem is now being addressed through the incorporation of adaptive penalty functions, but their transfer to other fields has been slow. This study updates the Nanakorrn adaptive penalty function theory, expanding its validity beyond maximization problems to minimization as well. The expanded theory, using a hybrid genetic algorithm with an adaptive penalty function, was applied to analyze variable-temperature, variable-field magnetic circular dichroism (VTVH MCD) spectroscopic data collected on exchange-coupled Fe(II)Fe(II) enzyme active sites. The data are described by a complex nonlinear multimodal solution space with at least 6 to 13 interdependent variables and are costly to search efficiently. The use of the hybrid GA is shown to improve the probability of detecting the global optimum and provides large gains in computational and user efficiency. This method allows a full search of a multimodal solution space, greatly improving the quality of and confidence in the final solution obtained, and can be applied to other complex systems such as the fitting of other spectroscopic or kinetic data.
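The adaptive-penalty idea behind such constrained GA fits can be sketched compactly. The code below is a hedged toy, not the VTVH MCD fitting problem or the cited penalty formulation: a small real-coded GA minimizes a quadratic objective under one inequality constraint, raising the penalty weight whenever the current best individual is infeasible and relaxing it otherwise.

```python
# Sketch: real-coded GA with truncation selection, elitism, and an adaptive penalty weight.
import numpy as np

rng = np.random.default_rng(8)

def objective(x):                      # toy objective to minimize
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

def violation(x):                      # constraint g(x) = x0 + x1 - 0.2 <= 0
    return max(0.0, x[0] + x[1] - 0.2)

pop_size, n_gen, n_dim = 60, 120, 2
pop = rng.uniform(-2.0, 2.0, size=(pop_size, n_dim))
w = 1.0                                # adaptive penalty weight

for gen in range(n_gen):
    fitness = np.array([objective(x) + w * violation(x) ** 2 for x in pop])
    order = np.argsort(fitness)
    parents = pop[order[: pop_size // 2]]          # truncation selection
    # arithmetic crossover plus Gaussian mutation
    idx = rng.integers(0, len(parents), size=(pop_size, 2))
    alpha = rng.uniform(0.0, 1.0, size=(pop_size, 1))
    children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
    children += 0.05 * rng.standard_normal(children.shape)
    children[0] = parents[0]                       # elitism: keep the current best
    pop = children
    # adapt the penalty: tighten if the best individual is infeasible, relax otherwise
    w = w * 1.5 if violation(pop[0]) > 1e-8 else max(w / 1.2, 1.0)

best = pop[0]
print("best solution:", best, "objective:", objective(best), "violation:", violation(best))
```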

19.
A new approach for parameter estimation in chemical kinetics has recently been proposed (Ross et al., Proc. Natl. Acad. Sci. U.S.A. 2010, 107, 12777). It makes use of an optimization criterion based on a Generalized Fisher Equation (GFE). Its utility has been demonstrated with two reaction mechanisms, the chlorite–iodide and Oregonator systems, which are computationally stiff. In this article, the performance of the GFE-based algorithm is compared with that obtained by minimizing the squared distances between the observed concentrations and those predicted by solving the corresponding initial value problem (for simplicity, we call this latter approach "traditional"). The comparison reveals a difference in performance that can be seen as a trade-off between speed (which favors the GFE) and accuracy (which favors the traditional method). The chlorite–iodide and Oregonator systems are again chosen as case studies. An identifiability analysis is performed for both of them, followed by an optimal experimental design based on the Fisher Information Matrix (FIM). This makes it possible to identify and overcome most of the previously encountered identifiability issues, improving the estimation accuracy. With the new data, obtained from optimally designed experiments, it is now possible to estimate more parameters effectively than with the previous data. This result, which holds for both the GFE-based and the traditional method, stresses the importance of an appropriate experimental design. Finally, a new hybrid method that combines advantages of the GFE and traditional approaches is presented.
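A small sketch of the FIM-based identifiability check and design comparison mentioned above is given below, on a simple two-step A → B → C model rather than the chlorite–iodide or Oregonator systems: sensitivities are obtained by finite differences, and two candidate sampling designs are compared via the D-optimality criterion det(FIM) and the resulting parameter correlation.

```python
# Sketch: Fisher Information Matrix from finite-difference sensitivities for two designs.
import numpy as np

def model(p, t):                        # intermediate concentration for A -> B -> C
    k1, k2 = p
    return k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))

def fim(p, t, sigma=0.01, h=1e-6):
    S = np.empty((t.size, len(p)))      # sensitivity matrix d(model)/d(p)
    for j in range(len(p)):
        dp = np.zeros(len(p)); dp[j] = h
        S[:, j] = (model(p + dp, t) - model(p - dp, t)) / (2 * h)
    return S.T @ S / sigma**2

p = np.array([0.8, 0.2])
design_a = np.linspace(0.1, 2.0, 10)     # early sampling times only
design_b = np.linspace(0.1, 15.0, 10)    # early and late sampling times

for name, t in [("early-only", design_a), ("spread", design_b)]:
    F = fim(p, t)
    cov = np.linalg.inv(F)
    corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
    print(name, " det(FIM) =", np.linalg.det(F), " parameter correlation =", round(corr, 3))
```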

20.
It is known that in the three-dimensional structure of a protein, certain amino acids can interact with each other to provide structural integrity or to aid in its catalytic function. If these positions are mutated, the loss of this interaction usually leads to a non-functional protein. Directed evolution experiments, which probe the sequence space of a protein through mutations in search of an improved variant, frequently produce such inactive sequences. In this work, we address the use of machine learning algorithms, Boolean learning and support vector machines (SVMs), to find such pairs of amino acid positions. The recombination method of imparting mutations was simulated to create in silico sequences that were used as training data for the algorithms. The two algorithms were combined to develop an approach that weighs the structural risk as well as the empirical risk to solve the problem. This strategy was adapted to a multi-round experimental framework in which the data generated in the current round are used to design experiments for the next round, improving both the generated library and the estimation of the interacting positions. We observe that this strategy can greatly increase the number of functional variants that are generated as well as the average number of mutations that can be made in the library.

