Similar Literature
20 similar records found.
1.
Non‐steady‐state kinetic measurements contain a wealth of information about catalytic reactions and other gas–solid chemical interactions, which is extracted from experimental data via kinetic models. The standard mathematical framework of microkinetic models, which are typically used in computational catalysis and for advanced modeling of steady‐state data, encounters multiple challenges when applied to non‐steady‐state data. Robust phenomenological models, such as the steady‐state Langmuir–Hinshelwood–Hougen–Watson equations, are presently unavailable for non‐steady‐state data. Herein, a novel modeling framework is proposed to fulfill this need. The rate‐reactivity model (RRM) is formulated in terms of experimentally observable quantities including the gaseous transformation rates, concentrations, and surface uptakes. The model is linear with respect to these quantities and their pairwise products, and it is also linear in terms of its parameters (reactivities). The RRM parameters have a clear physicochemical meaning and fully characterize the kinetic behavior of a specific catalyst state, but unlike microkinetic models that rely on hypothetical surface intermediates and specific reaction networks, the RRM does not require any assumptions regarding the underlying mechanism. The systematic RRM‐based procedure outlined in this paper enables an effective comparison of various catalysts and the construction of more detailed microkinetic models in a rational manner. The model was applied to temporal analysis of products pulse‐response data as an example, but it is more generally applicable to other non‐steady‐state techniques that provide time‐resolved rates and concentrations. Several numerical examples are given to illustrate the application of the model to simple model reactions.
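Because the abstract describes a model that is linear in both the observed quantities (and their pairwise products) and its parameters, the fitting step reduces to ordinary least squares. The sketch below is a schematic illustration of that property only, not the authors' actual RRM; all variable names and numerical values are hypothetical.

```python
import numpy as np

# Hypothetical time-resolved observables for a single pulse experiment
t = np.linspace(0.0, 1.0, 50)
conc = np.exp(-3.0 * t)            # hypothetical gas-phase concentration
uptake = 1.0 - np.exp(-3.0 * t)    # hypothetical surface uptake

# Synthetic rate generated from assumed "reactivity" parameters
true_reactivities = np.array([2.0, 0.5])
rate = true_reactivities[0] * conc + true_reactivities[1] * conc * uptake

# Design matrix built from the observables and one pairwise product;
# linearity in the parameters makes the fit a single least-squares solve
X = np.column_stack([conc, conc * uptake])
fitted, *_ = np.linalg.lstsq(X, rate, rcond=None)
```

Because the model is linear in its parameters, no iterative nonlinear optimization is needed, which is part of the appeal of such phenomenological formulations.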

2.
We have studied rapid calibration models to predict the composition of a variety of biomass feedstocks by correlating near-infrared (NIR) spectroscopic data to compositional data produced using traditional wet chemical analysis techniques. The rapid calibration models are developed using multivariate statistical analysis of the spectroscopic and wet chemical data. This work discusses the latest versions of the NIR calibration models for corn stover feedstock and dilute-acid pretreated corn stover. Measures of the calibration precision and uncertainty are presented. No statistically significant differences (p = 0.05) are seen between NIR calibration models built using different mathematical pretreatments. Finally, two common algorithms for building NIR calibration models are compared; no statistically significant differences (p = 0.05) are seen for the major constituents glucan, xylan, and lignin, but the algorithms did produce different predictions for total extractives. A single calibration model combining the corn stover feedstock and dilute-acid pretreated corn stover samples gave less satisfactory predictions than the separate models.
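The core idea of such multivariate calibration is to regress known compositions (from wet chemistry) on measured spectra. Real NIR models typically use PLS with cross-validation and spectral pretreatments; the sketch below is only a minimal schematic with synthetic "spectra" built as linear mixtures of two hypothetical component signatures.

```python
import numpy as np

rng = np.random.default_rng(1)
n_wavelengths = 100

# Hypothetical pure-component spectral signatures (e.g., glucan, xylan)
sig_a = rng.random(n_wavelengths)
sig_b = rng.random(n_wavelengths)

# "Wet chemistry" reference compositions for 30 calibration samples
comp = rng.random((30, 2))

# Synthetic spectra: linear mixtures of the component signatures
spectra = comp @ np.vstack([sig_a, sig_b])

# Calibration: regress composition on spectra by least squares
B, *_ = np.linalg.lstsq(spectra, comp, rcond=None)
pred = spectra @ B
rmse = float(np.sqrt(np.mean((pred - comp) ** 2)))
```

In practice the regression would be validated on held-out samples, and the calibration precision reported alongside the predictions, as the abstract describes.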

3.
Calibration of forcefields for molecular simulation should account for the measurement uncertainty of the reference dataset and for the model inadequacy, i.e., the inability of the force-field/simulation pair to reproduce experimental data within their uncertainty range. In all rigour, the resulting uncertainty of calibrated force-field parameters is a source of uncertainty for simulation predictions. Various calibration strategies and calibration models within the Bayesian calibration/prediction framework are explored in the present article. In the case of the Lennard-Jones potential for argon, we show that the prediction uncertainty for thermodynamic and transport properties, albeit very small, is larger than the statistical simulation uncertainty.

4.
The goal of the paper is to automate the construction and parameterization of kinetic reaction mechanisms that can describe a set of experimentally measured concentration-versus-time curves. Using the framework and theorems of formal reaction kinetics, we first build a set of possible mechanisms with a given number of measured and unmeasured (real or fictitious) species and reaction steps that fulfill some chemically reasonable requirements. Then we fit all the corresponding mass-action kinetic models and offer the best one to the chemist to help explain the underlying chemical phenomenon or to use for predictions. We demonstrate the method via two simple examples: an artificial, simulated data set and a small real-life data set. The method can also be used as a kind of lumping to generate a model that reproduces the simulation results of a detailed mechanism with fewer species, thereby greatly accelerating spatially inhomogeneous simulations.
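The fitting step for each candidate mechanism amounts to estimating mass-action rate constants from concentration-versus-time data. The sketch below shows that step only (not the enumeration of candidate mechanisms) for a hypothetical first-order step A → B, where C_A(t) = C_A0·exp(−kt) lets k be recovered by a log-linear least-squares fit; all values are synthetic.

```python
import math

# Synthetic data for a hypothetical first-order step A -> B (C_A0 = 1)
k_true = 1.7
times = [0.1 * i for i in range(1, 11)]
conc_A = [math.exp(-k_true * t) for t in times]

# Mass action gives ln C_A = -k t, so the slope of ln C_A versus t
# (through the origin, since ln C_A0 = 0) yields -k
num = sum(t * math.log(c) for t, c in zip(times, conc_A))
den = sum(t * t for t in times)
k_fit = -num / den
```

For multi-step candidate mechanisms the same idea applies, but the rate equations are integrated numerically and all rate constants are fitted simultaneously.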

5.
In the context of limiting the environmental impact of transportation, this critical review discusses new directions being followed in the development of more predictive and more accurate detailed chemical kinetic models for the combustion of fuels. The first part evaluates the performance of current models, especially in terms of the prediction of pollutant formation. The subsequent parts describe recent methods and ways to improve these models. Emphasis is placed on the development of detailed models based on elementary reactions, on the production of the related thermochemical and kinetic parameters, and on the experimental techniques available to produce the data necessary to evaluate model predictions under well-defined conditions (212 references).

6.
In analytical chemistry applications, statistical calibration models are commonly used to estimate the true value of an unknown specimen. In this article, we consider a heteroscedastic controlled calibration model in which both dependent and independent variables are subject to heteroscedastic measurement errors. The main task in using this model is to estimate the true value of an unknown regressor (independent variable) given a set of observations on its corresponding response (dependent variable). We introduce four estimation methods for this problem: generalized least squares (GLS), modified least squares, corrected score, and expectation‐maximization‐based (EM‐based) methods. Furthermore, an interval estimate based on an asymptotic method is also derived. We compare their performance through detailed simulation studies; as a result, the GLS and EM‐based methods are recommended for practical use. A real data example illustrates the application of the calibration model. Copyright © 2013 John Wiley & Sons, Ltd.
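The controlled-calibration idea can be sketched in its simplest form: fit a calibration line by weighted least squares with known heteroscedastic response variances, then invert the line to estimate the unknown regressor x0 from replicate responses. This is only a minimal illustration of the setting, not the authors' GLS estimator (which also handles errors in the regressors); all numbers are hypothetical and the data are noise-free for clarity.

```python
import numpy as np

# Calibration standards (known x) and their responses y = a + b*x
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
a_true, b_true = 0.5, 2.0
y = a_true + b_true * x                    # noise-free for clarity

# Known heteroscedastic response variances -> weights = 1/variance
var = 0.1 * (1.0 + x)
W = np.diag(1.0 / var)

# Weighted least-squares fit of the calibration line
X = np.column_stack([np.ones_like(x), x])
a_hat, b_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Estimation step: replicate responses measured on the unknown specimen
y_new = np.array([4.5, 4.5, 4.5])
x0_hat = (y_new.mean() - a_hat) / b_hat    # classical inversion
```

An interval estimate for x0 would propagate both the line-fit covariance and the replicate variance, as the asymptotic method in the abstract does.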

7.
8.
This perspective gives our views on general aspects and future directions of gas‐phase atmospheric chemical kinetic mechanism development, emphasizing the work needed for the sustainable development of chemically detailed mechanisms that reflect current kinetic, mechanistic, and theoretical knowledge. Current and future mechanism development efforts and research needs are discussed, including software‐aided autogeneration and maintenance of kinetic models as a future‐proof approach for atmospheric model development. There is an overarching need for the evaluation and extension of structure‐activity relationships (SARs) that predict the properties and reactions of the many multifunctionalized compounds in the atmosphere that are at the core of detailed mechanisms, but for which no direct chemical data are available. Here, we discuss the experimental and theoretical data needed to support the development of mechanisms and SARs, the types of SARs relevant to atmospheric chemistry, the current status and limitations of SARs for various types of atmospheric reactions, the status of thermochemical estimates needed for mechanism development, and our outlook for the future. The authors have recently formed a SAR evaluation working group to address these issues.

9.
The exit or desorption of free radicals from latex particles is an important kinetic process in an emulsion polymerization. This article unites a successful theory of radical absorption (i.e., initiator efficiency), based on propagation in the aqueous phase being the rate determining step for entry of charged free radicals, with a detailed model of radical desorption. The result is a kinetic scheme applicable to true “zero-one” systems (i.e., where entry of a radical into a latex particle already containing a radical results in instantaneous termination), which is still, with a number of generally applicable assumptions, relatively simple. Indeed, in many physically reasonable limits, the kinetic representation reduces to a single rate equation. Specific experimental techniques of particular significance and methods of analysis of kinetic data are detailed and discussed. A methodology for both assessing the applicability of the model and its more probable limits, via use of known rate coefficients and theoretical predictions, is outlined and then applied to the representative monomers, styrene and methyl methacrylate. A detailed application of the theory and illustration of the methodology of model discrimination via experiment is contained in the second article of this series. © 1994 John Wiley & Sons, Inc.

10.
Numerous mathematical tools intended to adjust rate constants employed in complex detailed kinetic models to make them consistent with multiple sets of experimental data have been reported in the literature. Application of such model optimization methods typically begins with the assignment of uncertainties in the absolute rate constants in a starting model, followed by variation of the rate constants within these uncertainty bounds to tune rate parameters to match model outputs to experimental observations. The present work examines the impact of including information on relative reaction rates in the optimization strategy, which is not typically done in current implementations. It is shown that where such rate constant data are available, the available parameter space changes dramatically due to the correlations inherent in such measurements. Relative rate constants are typically measured with greater relative accuracy than corresponding absolute rate constant measurements. This greater accuracy further reduces the available parameter space, which significantly affects the uncertainty in the model outcomes as a result of kinetic parameter uncertainties. We demonstrate this effect by considering a simple example case emulating an ignition event and show that use of relative rate measurements leads to a significantly smaller uncertainty in the output ignition delay time in comparison with results based on absolute measurements. This is true even though the same range of absolute rate constants is sampled in each case. Implications of the results with respect to the maintenance of physically realistic kinetics in optimized models are discussed, and suggestions are made for the path forward in the refinement of detailed kinetic models.
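The central effect can be illustrated with a toy Monte Carlo experiment (hypothetical numbers, not the paper's ignition case): when a ratio r = k2/k1 is known with tight relative accuracy, sampling k2 as r·k1 correlates the two parameters and shrinks the spread of any output that depends on the ratio, even though each absolute rate constant still spans its full uncertainty range.

```python
import random

random.seed(0)
N = 10000
out_abs, out_rel = [], []
for _ in range(N):
    k1 = random.uniform(0.5, 2.0)        # absolute uncertainty bounds on k1

    # Case 1: k2 sampled independently within its absolute bounds
    k2_abs = random.uniform(1.0, 4.0)
    out_abs.append(k2_abs / k1)          # an output sensitive to the ratio

    # Case 2: a relative-rate measurement fixes r = k2/k1 to within ~5%
    r = random.uniform(1.9, 2.1)
    out_rel.append(r)                    # k2 = r*k1, so the ratio is just r

def spread(vals):
    return max(vals) - min(vals)
```

The correlated sampling leaves the admissible range of each absolute rate constant unchanged while drastically narrowing the output distribution, which mirrors the ignition-delay result described in the abstract.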

11.
The uncatalyzed hydrolysis and removal of xylan from corn stover is markedly enhanced when operation is changed from batch to continuous flow-through conditions, and the increase in hemicellulose removal with flow rate is inconsistent with predictions by widely used first-order kinetic models. Mass transfer or other physical effects could influence the hydrolysis rate, and two models reported in the literature for other applications were adapted to investigate whether incorporating mass transfer into the kinetics could explain xylan removal in both batch and continuous flow-through reactors on a more consistent basis. It was found that a simple leaching model and a pore-diffusion/leaching model could describe batch and flow-through data with accuracy similar to that of conventional batch models and could provide a more rational explanation for changes in performance with flow rate.
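The contrast the abstract draws can be sketched numerically: a conventional first-order batch model predicts the same xylan removal regardless of flow, whereas a simple leaching picture adds a solubilized pool that flow can sweep out of the reactor at a rate that grows with flow rate. This is only an illustrative toy model, not the authors' fitted models; all rate constants are hypothetical.

```python
import math

k = 1.2          # 1/h, hypothetical first-order hydrolysis rate
t = 1.0          # h, hypothetical reaction time

# Conventional first-order batch model: removal is independent of flow
removed_first_order = 1.0 - math.exp(-k * t)

# Toy leaching model: hydrolysis of solid xylan to a soluble pool (rate k)
# followed by first-order sweep-out of solubles at a flow-dependent rate
def removed_leaching(k_out, n_steps=20000):
    dt = t / n_steps
    solid, soluble, removed = 1.0, 0.0, 0.0
    for _ in range(n_steps):         # explicit Euler integration
        h = k * solid * dt
        out = k_out * soluble * dt
        solid -= h
        soluble += h - out
        removed += out
    return removed
```

Increasing k_out (higher flow rate) increases the predicted removal, qualitatively reproducing the flow-rate dependence that the first-order batch model cannot capture.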

12.
Data obtained from an archived nine-year aging study on S5370 foam were used to develop compression-set and stress-strain aging models. Compression set was characterized using a first-order kinetic model, and the stress-strain relationship was analyzed using a material model previously described by Rusch for flexible foams. The models were fitted to data from the aging study using Bayesian methods, which easily accommodate uncertainties in the test conditions and provide probability distributions of the model parameters. The parameter distributions were sampled using a Markov chain Monte Carlo algorithm and incorporated into prediction intervals, which were compared to data from independent studies for validation. The short-time compression-set study of Patel and Skinner is shown to predict significantly higher compression sets, which are attributed to additional crosslinking reactions and other phenomena that do not dominate the long-term aging behavior. Using data from the nine-year study, the time required to reach a given compression set at 25 °C is increased by 20 years or more over the predictions of Patel and Skinner. The activation energy applicable near room temperature is similar to that reported by Patel and Skinner, which is consistent with numerous physical and catalyzed chemical mechanisms. Finally, load-retention predictions from the stress-strain aging model agree with independent studies at test gaps larger than or equal to a zero-gradient test-gap limit.
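A first-order compression-set model with an Arrhenius rate constant, of the general kind described above, can be written as CS(t) = CS∞·(1 − exp(−k(T)·t)) and inverted for the time to reach a target compression set at a given temperature. The parameter values below are purely illustrative assumptions, not those fitted to the S5370 data.

```python
import math

R = 8.314        # J/(mol*K)
Ea = 80e3        # J/mol, hypothetical activation energy
A = 5.0e8        # 1/yr, hypothetical pre-exponential factor

def k_arrhenius(T_kelvin):
    """Arrhenius rate constant k(T) = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T_kelvin))

def time_to_reach(cs_target, cs_inf, T_kelvin):
    """Invert CS(t) = cs_inf * (1 - exp(-k t)) for the time t."""
    k = k_arrhenius(T_kelvin)
    return -math.log(1.0 - cs_target / cs_inf) / k

t_25C = time_to_reach(0.2, 0.5, 298.15)   # years to 20% set at 25 C
t_60C = time_to_reach(0.2, 0.5, 333.15)   # same target at 60 C
```

Accelerated aging at elevated temperature reaches the same compression set far sooner, which is the basis for extrapolating short-time, high-temperature studies to room-temperature service life.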

13.
This paper presents a Bayesian approach to the development of spectroscopic calibration models. By formulating the linear regression in a probabilistic framework, a Bayesian linear regression model is derived, and a specific optimization method, i.e. Bayesian evidence approximation, is utilized to estimate the model “hyper-parameters”. The relation of the proposed approach to the calibration models in the literature is discussed, including ridge regression and Gaussian process model. The Bayesian model may be modified for the calibration of multivariate response variables. Furthermore, a variable selection strategy is implemented within the Bayesian framework, the motivation being that the predictive performance may be improved by selecting a subset of the most informative spectral variables. The Bayesian calibration models are applied to two spectroscopic data sets, and they demonstrate improved prediction results in comparison with the benchmark method of partial least squares.
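The ridge-regression connection mentioned above is concrete: with a Gaussian prior on the weights (precision alpha) and Gaussian noise (precision beta), the posterior mean of a Bayesian linear regression is a ridge estimate. The evidence approximation would iterate on alpha and beta; in the minimal sketch below they are fixed, hypothetical values, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "spectral" features and responses from known weights
X = rng.normal(size=(40, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ w_true + 0.01 * rng.normal(size=40)

# Posterior mean of the weights: solve (alpha*I + beta*X^T X) w = beta*X^T y
alpha, beta = 1e-6, 1e4                  # prior and noise precisions (fixed)
A = alpha * np.eye(5) + beta * X.T @ X
w_post = np.linalg.solve(A, beta * X.T @ y)
```

With alpha/beta small the estimate approaches ordinary least squares; larger alpha shrinks the weights toward zero, exactly as in ridge regression.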

14.
Crystallization analysis fractionation (Crystaf) is a polymer characterization technique used to estimate chemical composition distributions (CCDs) of semicrystalline copolymers. The Crystaf profile can be transformed into a CCD using a calibration curve that relates average comonomer content to peak crystallization temperature. The calibration curve depends on copolymer molecular properties and Crystaf operation conditions. In this investigation, we applied a crystallization kinetics model to simulate Crystaf calibration curves and to quantify how Crystaf calibration curves depend on these factors. We applied the model to estimate the CCDs of three ethylene/1‐hexene copolymers from Crystaf profiles measured at different cooling rates and showed that our predictions agree well with the CCDs described by Stockmayer's distribution. We have also used this new methodology to investigate the effects of cooling rate, molecular weight, and comonomer type on Crystaf profiles and calibration curves. © 2009 Wiley Periodicals, Inc. J Polym Sci Part B: Polym Phys 47: 866–876, 2009

15.
A facile mass spectrometric kinetic method for quantitative analysis of chiral compounds was developed by integrating mass spectrometry based on chemical derivatization and the spectral shape deformation quantitative theory. Chemical derivatization was employed to introduce diastereomeric environments to the chiral compounds of interest, resulting in different abundance distribution patterns of fragment ions of the derivatization products of enantiomers in mass spectrometry. The quantitative information of the chiral compounds of interest was extracted from complex mass spectral data by an advanced calibration model derived based on the spectral shape deformation quantitative theory. The performance of the proposed method was tested on the quantitative analysis of R‐propranolol in propranolol tablets. Experimental results demonstrated that it could achieve accurate and precise concentration ratio predictions for R‐propranolol with an average relative predictive error (ARPE) of about 4%, considerably better than the corresponding results of the mass spectrometric method based on chemical derivatization and the univariate ratiometric model (ARPE: about 12%). The limit of detection (LOD) and limit of quantification (LOQ) of the proposed method for the concentration ratio of R‐propranolol were estimated to be 1.5% and 6.0%, respectively. The proposed method is complementary to the existing methods designed for the quantification of enantiomers such as the Cooks kinetic method.

16.
17.
It is shown and explained in detail, using four examples generated from known kinetic models, that simplified evaluation procedures (initial-rate studies and fitting individual exponential curves) may inherently lead to inappropriate chemical conclusions, even for relatively simple kinetic systems. It is also shown that, in all four examples, simultaneous curve fitting immediately reveals the defectiveness of the kinetic model obtained from the simplified evaluation procedures. We therefore propose extensive use of simultaneous curve fitting of all kinetic traces to avoid these pitfalls and to find appropriate kinetic models.
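Simultaneous curve fitting means minimizing one combined sum of squares over all traces with shared parameters, rather than fitting each trace independently. The minimal sketch below fits two synthetic first-order traces that share a single rate constant; the crude grid search stands in for a proper optimizer, and all data and values are hypothetical.

```python
import math

# Two synthetic traces that share one rate constant but differ in amplitude
k_true = 0.8
times = [0.2 * i for i in range(1, 16)]
trace1 = [2.0 * math.exp(-k_true * t) for t in times]
trace2 = [5.0 * math.exp(-k_true * t) for t in times]

def combined_sse(k):
    """One objective over ALL traces, with the rate constant shared."""
    s = 0.0
    for t, c1, c2 in zip(times, trace1, trace2):
        s += (c1 - 2.0 * math.exp(-k * t)) ** 2
        s += (c2 - 5.0 * math.exp(-k * t)) ** 2
    return s

# Crude 1-D grid search on the shared rate constant
k_fit = min((combined_sse(0.001 * i), 0.001 * i) for i in range(1, 2001))[1]
```

Fitting each trace separately would yield two independent (and possibly inconsistent) rate constants; forcing a single shared k is what exposes an inadequate model, as the abstract argues.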

18.
An updated H2/O2 kinetic model based on that of Li et al. (Int J Chem Kinet 36, 2004, 566–575) is presented and tested against a wide range of combustion targets. The primary motivations of the model revision are to incorporate recent improvements in rate constant treatment and resolve discrepancies between experimental data and predictions using recently published kinetic models in dilute, high‐pressure flames. Attempts are made to identify major remaining sources of uncertainties, in both the reaction rate parameters and the assumptions of the kinetic model, affecting predictions of relevant combustion behavior. With regard to model parameters, present uncertainties in the temperature and pressure dependence of rate constants for HO2 formation and consumption reactions are demonstrated to substantially affect predictive capabilities at high‐pressure, low‐temperature conditions. With regard to model assumptions, calculations are performed to investigate several reactions/processes that have not received much attention previously. Results from ab initio calculations and modeling studies imply that inclusion of H + HO2 = H2O + O in the kinetic model might be warranted, though further studies are necessary to ascertain its role in combustion modeling. In addition, it appears that characterization of nonlinear bath‐gas mixture rule behavior for H + O2(+ M) = HO2(+ M) in multicomponent bath gases might be necessary to predict high‐pressure flame speeds within ~15%. The updated model is tested against all of the previous validation targets considered by Li et al. as well as new targets from a number of recent studies. Special attention is devoted to establishing a context for evaluating model performance against experimental data by careful consideration of uncertainties in measurements, initial conditions, and physical model assumptions. 
For example, ignition delay times in shock tubes are shown to be sensitive to potential impurity effects, which have been suggested to accelerate early radical pool growth in shock tube speciation studies. In addition, speciation predictions in burner‐stabilized flames are found to be more sensitive to uncertainties in experimental boundary conditions than to uncertainties in kinetics and transport. Predictions using the present model adequately reproduce previous validation targets and show substantially improved agreement against recent high‐pressure flame speed and shock tube speciation measurements. Comparisons of predictions of several other kinetic models with the experimental data for nearly the entire validation set used here are also provided in the Supporting Information. © 2011 Wiley Periodicals, Inc. Int J Chem Kinet 44: 444–474, 2012

19.
A detailed chemical kinetic model for oxidation of carbonyl sulfide (OCS) has been developed, based on a critical evaluation of data from the literature. The mechanism has been validated against experimental results from batch reactors, flow reactors, and shock tubes. The model satisfactorily predicts oxidation of OCS over a wide range of stoichiometric air–fuel ratios, temperatures (450–1700 K), and pressures (0.02–3.0 atm) under dry conditions. The governing reaction mechanisms are outlined based on calculations with the kinetic model. The oxidation rate of OCS is controlled by the competition between chain‐branching and chain‐propagating steps; modeling predictions are particularly sensitive to the branching fraction of the OCS + O reaction to form CO + SO or CO2 + S.

20.
We apply a Bayesian parameter estimation technique to a chemical kinetic mechanism for n‐propylbenzene oxidation in a shock tube to propagate errors in experimental data to errors in Arrhenius parameters and predicted species concentrations. We find that, to apply the methodology successfully, conventional optimization is required as a preliminary step. This is carried out in two stages: First, a quasi‐random global search using a Sobol low‐discrepancy sequence is conducted, followed by a local optimization by means of a hybrid gradient‐descent/Newton iteration method. The concentrations of 37 species at a variety of temperatures, pressures, and equivalence ratios are optimized against a total of 2378 experimental observations. We then apply the Bayesian methodology to study the influence of uncertainties in the experimental measurements on some of the Arrhenius parameters in the model as well as some of the predicted species concentrations. Markov chain Monte Carlo algorithms are employed to sample from the posterior probability densities, making use of polynomial surrogates of higher order fitted to the model responses. We conclude that the methodology provides a useful tool for the analysis of distributions of model parameters and responses, in particular their uncertainties and correlations. Limitations of the method are discussed. For example, we find that using second‐order response surfaces and assuming normal distributions for propagated errors is largely adequate, but not always.
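The preliminary global-search stage can be sketched with any low-discrepancy sequence: points that cover the parameter space more evenly than random sampling locate a good start point for the subsequent local refinement. The sketch below uses a simple Halton sequence standing in for the Sobol sequence of the paper, and a hypothetical two-parameter misfit function in place of the real model-versus-data objective.

```python
import math

def halton(i, base):
    """i-th element of the one-dimensional Halton sequence for a prime base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def misfit(a, b):
    """Hypothetical model-vs-data misfit with its minimum at (0.3, 0.7)."""
    return (a - 0.3) ** 2 + (b - 0.7) ** 2

# Quasi-random global search over the unit square (bases 2 and 3)
best = min(
    (misfit(halton(i, 2), halton(i, 3)), halton(i, 2), halton(i, 3))
    for i in range(1, 1025)
)
```

The best point found this way would then seed the gradient-descent/Newton local optimization, and the Bayesian MCMC stage would explore the posterior around the resulting optimum.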

Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)