Similar Literature (20 matching records found)
1.
We present a new methodology to determine the rate-limiting adsorption kinetics mechanism (diffusion-controlled vs mixed diffusion-barrier controlled), including deducing the kinetics parameters (the diffusion coefficient, D, and the energy-barrier parameter, beta), from the experimental short-time dynamic surface tension (DST) data. The new methodology has the following advantages over the existing procedure used to analyze the experimental DST data: (a) it does not require using a model for the equilibrium adsorption isotherm, and (b) it only requires using the experimental short-time DST data measured at two initial surfactant bulk solution concentrations. We apply the new methodology to analyze the experimental short-time DST data of the following alkyl poly(ethylene oxide), CiEj, nonionic surfactants: C12E4, C12E6, C12E8, and C10E8 measured using the pendant-bubble apparatus. We find that for C12E4 and C12E6, the effect of the energy barrier on the overall rate of surfactant adsorption can be neglected for surfactant bulk solution concentrations below their respective critical micelle concentrations (CMCs), and therefore, that the rate-limiting adsorption kinetics mechanism for C12E4 and C12E6 is diffusion-controlled at any of their premicellar surfactant bulk solution concentrations. On the other hand, for C12E8 and C10E8, we find that their respective CMC values are large enough to observe a significant effect of the energy barrier on the overall rate of surfactant adsorption. In other words, for C12E8 and C10E8, the rate-limiting adsorption kinetics mechanism shifts from diffusion-controlled to mixed diffusion-barrier controlled as their premicellar surfactant bulk solution concentrations increase. We test the new methodology by predicting the short-time DST profiles at other initial surfactant bulk solution concentrations, and then comparing the predicted DST profiles with those measured experimentally. 
Very good agreement is obtained for the four CiEj nonionic surfactants considered. We also compare the results of implementing the new methodology with those of implementing the existing procedure, and conclude that using a model for the equilibrium adsorption isotherm can lead not only to different values of D and beta but also to a completely different determination of the rate-limiting adsorption kinetics mechanism. Since the new methodology proposed here does not require using a model for the equilibrium adsorption isotherm, we conclude that it should provide a more reliable determination of the rate-limiting adsorption kinetics mechanism, including the deduced kinetics parameters, D and beta.
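In the diffusion-controlled limit, short-time DST data of the kind analyzed above follow the Ward-Tordai short-time asymptote, γ(t) ≈ γ0 − 2RTc√(Dt/π), so D can be read off the slope of γ versus √t. A minimal sketch on synthetic data (the concentration, surface tension, and D values below are illustrative assumptions, not numbers from the paper):

```python
import numpy as np

R, T = 8.314, 298.15      # gas constant J/(mol K), temperature K
c = 5.0e-3                # bulk surfactant concentration, mol/m^3 (assumed)
D_true = 5.0e-10          # diffusion coefficient, m^2/s (assumed)
gamma0 = 72.0e-3          # clean-surface tension, N/m (assumed)

t = np.linspace(1e-3, 1.0, 50)
# Ward-Tordai short-time limit for a diffusion-controlled mechanism
gamma = gamma0 - 2.0*R*T*c*np.sqrt(D_true*t/np.pi)

slope = np.polyfit(np.sqrt(t), gamma, 1)[0]    # d(gamma)/d(sqrt t)
D_est = np.pi * (slope/(2.0*R*T*c))**2
print(f"D = {D_est:.2e} m^2/s")
```

With real pendant-bubble data the fit would be restricted to the early-time window where the asymptote holds.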

2.
We present a method for fitting curves acquired by chemical shift titration experiments, in the framework of a three-step complexation mechanism. To that end, we have implemented a fitting procedure, based on a nonlinear least squares fitting method, that determines the best fitting curve using a “coarse grid search” approach and provides distributions for the different parameters of the complexation model that are compatible with the experimental precision. The resulting analysis protocol is first described and validated on a theoretical data set. We show its ability to converge to the true parameter values of the simulated reaction scheme and to evaluate complexation constants together with multidimensional uncertainties. Then, we apply this protocol to the study of the supramolecular interactions, in aqueous solution, between a lanthanide complex and three different model molecules, using NMR titration experiments. We show that, within the uncertainty that can be evaluated from the parameter distributions generated during our analysis, the affinities between the lanthanide derivative and each model molecule can be discriminated, and we propose values for the corresponding thermodynamic constants. Copyright © 2013 John Wiley & Sons, Ltd.
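The coarse-grid-then-refine strategy can be sketched for the simplest case, a 1:1 binding isotherm with known limiting shifts (the paper's actual model is a three-step complexation scheme; the host concentration, shifts, and binding constant here are illustrative assumptions):

```python
import numpy as np

H0 = 1.0e-3                        # host concentration, M (assumed)
G0 = np.linspace(0.0, 5.0e-3, 12)  # titrant concentrations, M (assumed)
d_free, d_bound = 0.10, 1.40       # limiting chemical shifts, ppm (assumed)

def shift(G, K):
    # 1:1 binding isotherm: observed shift weighted by the bound fraction
    b = H0 + G + 1.0/K
    HG = 0.5*(b - np.sqrt(b*b - 4.0*H0*G))
    return d_free + (d_bound - d_free)*HG/H0

K_true = 800.0                     # M^-1 (assumed)
data = shift(G0, K_true)           # noiseless synthetic titration curve

# coarse grid search over K, then a refined grid around the best point
coarse = np.logspace(1, 5, 60)
K0 = min(coarse, key=lambda K: np.sum((shift(G0, K) - data)**2))
fine = np.linspace(0.5*K0, 2.0*K0, 400)
K_best = min(fine, key=lambda K: np.sum((shift(G0, K) - data)**2))
print(f"K = {K_best:.1f} M^-1")
```

In the paper's procedure the grid search additionally seeds a nonlinear least squares refinement over all model parameters, from which the parameter distributions are built.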

3.
This work describes a polymer reaction engineering framework for understanding how catalyst kinetic parameters affect the microstructure of polyolefins made with single- or multi-site catalysts. Moreover, a methodology for deconvolution and kinetic parameter estimation is presented to estimate the reactivity ratios of multi-site catalysts based on the combination of polymerization, fractionation, and spectroscopic techniques, namely gel permeation chromatography-IR and carbon-13 nuclear magnetic resonance spectroscopy. The methodology's capabilities are then demonstrated and validated using a case study simulated via a Monte Carlo model including random noise in order to better represent the uncertainties of experimental results. The methodology can reverse engineer experimental results and estimate all relevant reaction performance parameters.
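The deconvolution step for multi-site catalysts is commonly posed as expressing the measured molecular weight distribution as a sum of Flory most-probable distributions, one per active site; once candidate site Mn values are chosen, the mass fractions follow from linear least squares. A minimal sketch under that assumption (the site Mn values and fractions are invented, and the full methodology also folds in the 13C NMR data):

```python
import numpy as np

logM = np.linspace(2.5, 6.5, 200)
M = 10.0**logM

def flory(M, Mn):
    # Flory most-probable distribution expressed on a log-M (GPC) axis
    return np.log(10.0) * (M/Mn)**2 * np.exp(-M/Mn)

# synthetic two-site resin: 40% from a site with Mn = 2e4, 60% from Mn = 2e5
mwd = 0.4*flory(M, 2.0e4) + 0.6*flory(M, 2.0e5)

# with candidate site Mn values, the mass fractions are a linear problem
basis = np.column_stack([flory(M, 2.0e4), flory(M, 2.0e5)])
w, *_ = np.linalg.lstsq(basis, mwd, rcond=None)
print(w)
```

In practice the site Mn values themselves are also unknown and are found by nonlinear search, with this linear solve nested inside it.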

4.
This work aims at optimizing an on-line capillary electrophoresis (CE)-based tryptic digestion methodology for the analysis of therapeutic polypeptides (PP). With this methodology, a mixture of surrogate peptide fragments and amino acids was produced on-line by trypsin cleavage (enzymatic digestion) and subsequently analyzed using the same capillary. The resulting automation of all steps, such as injection, mixing, incubation, separation and detection, minimizes possible errors and saves experimental time. In this paper, we first study the different parameters influencing PP cleavage inside the capillary (plug length, reactant concentration, incubation time, diffusion and electrophoretic plug mixing). In the second part, the optimization of the electrophoretic separation conditions for the generated hydrolysis products (nature, pH and ionic strength (I) of the background electrolyte (BGE)) is described. Using the optimized conditions, excellent repeatability was obtained in terms of separation (migration times) and proteolysis (number of products from enzymatic hydrolysis and corresponding amounts), demonstrating the robustness of the proposed methodology.

5.
In this paper, we present a Bayesian approach for estimation in the skew-normal calibration model, as well as the conditional posterior distributions which are useful for implementing the Gibbs sampler. Data transformation is thus avoided by using the proposed methodology. Model fitting is implemented by proposing the asymmetric deviance information criterion, ADIC, a modification of the ordinary DIC. We also report an application of the model using a real data set related to the relationship between the resistance and the elasticity of a sample of concrete beams. Copyright © 2008 John Wiley & Sons, Ltd.

6.
On the Statistical Calibration of Physical Models
We introduce a novel statistical calibration framework for physical models, relying on probabilistic embedding of model discrepancy error within the model. For clarity of illustration, we take the measurement errors out of consideration, calibrating a chemical model of interest with respect to a more detailed model, considered as “truth” for the present purpose. We employ Bayesian statistical methods for such model-to-model calibration and demonstrate their capabilities on simple synthetic models, leading to a well-defined parameter estimation problem that employs approximate Bayesian computation. The method is then demonstrated on two case studies for calibration of kinetic rate parameters for methane-air chemistry, where ignition time information from a detailed elementary-step kinetic model is used to estimate rate coefficients of a simple chemical mechanism. We show that the calibrated model predictions fit the data and that uncertainty in these predictions is consistent in a mean-square sense with the discrepancy from the detailed model data.
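The approximate Bayesian computation step can be sketched with a rejection sampler on a toy surrogate in which ignition delay is simply inversely proportional to the rate coefficient (the paper's case studies use detailed kinetic models; the prior bounds, tolerance, and "true" rate coefficient below are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def ignition_delay(k):
    # toy surrogate model: delay inversely proportional to the rate coefficient
    return 1.0/k

k_true = 2.0e3
tau_obs = ignition_delay(k_true)          # "data" from the detailed model

prior = rng.uniform(1.0e3, 4.0e3, 100_000)   # flat prior on k (assumed bounds)
eps = 0.02                                    # ABC relative tolerance (assumed)
accepted = prior[np.abs(ignition_delay(prior) - tau_obs)/tau_obs < eps]
print(f"posterior mean k = {accepted.mean():.0f}, accepted = {accepted.size}")
```

The accepted samples approximate the posterior over k; shrinking eps tightens the approximation at the cost of fewer accepted draws.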

7.
In this paper, we develop a new theory to evaluate the nucleation rate in the framework of the EMLD-DNT model. Beyond the model, our theory deals with cluster translation and exclusion, effects that have been virtually ignored in classical nucleation theory. We apply the model to the case of 1-pentanol and compare the predictions with experimental results. We find excellent agreement between the nucleation rate predicted by our theory and the experimental data. The distinguishing feature of the model is its ability to successfully predict the rate of formation of the critical nucleus without the use of an intermolecular potential, employing only macroscopic thermodynamic properties.

8.
9.
We have demonstrated an informatics methodology for finding correlations between the full profile Fourier transform infrared spectra of polycrystalline 3C-silicon carbide (poly-SiC) films and their growth conditions, thereby developing high-throughput structure-process relationships. Because SiC films are a structural element in photonic sensors, this paper focuses on the interpretation of their optical response, the multivariate tracking of critical processing pathways, and the identification of controlling processing mechanisms. Using principal component analysis, we have developed a data analysis tool to aid in the assessment of the relative contributions of experimental parameters in low-pressure chemical vapor deposition processes to optical responses on the basis of the size of eigenvalues of the spectral data set. The applied methodology for identifying spectral relationships of stoichiometry, dopant chemistry, and microstructure of poly-SiC provides more effective guidelines to manipulate optical responses by controlling multiple experimental parameters. Copyright © 2011 John Wiley & Sons, Ltd.
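The principal component step reduces to a singular value decomposition of the mean-centered spectral matrix, with the eigenvalue magnitudes giving each component's share of the variance. A minimal sketch on synthetic two-factor spectra (the band positions, widths, and noise level are invented, not taken from the poly-SiC data):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic spectra: 30 samples x 200 wavenumbers, two latent process factors
wn = np.linspace(600, 1200, 200)
band1 = np.exp(-((wn - 800)/30)**2)    # e.g. a stoichiometry-sensitive band
band2 = np.exp(-((wn - 1000)/40)**2)   # e.g. a dopant-sensitive band
scores = rng.normal(size=(30, 2))
X = scores @ np.vstack([band1, band2]) + 0.01*rng.normal(size=(30, 200))

Xc = X - X.mean(axis=0)                # mean-center before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)        # variance fraction per component
print(explained[:3])
```

For this two-factor construction the first two components should carry nearly all of the variance, mirroring how eigenvalue sizes are used in the paper to rank the contributions of processing parameters.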

10.
Bayesian latent variable regression (BLVR) aims to utilize all available information for empirical modeling via a Bayesian framework. Such information includes prior knowledge about the underlying variables, model parameters and measurement error distributions. This paper improves upon the existing optimization-based BLVR (BLVR-OPT) method [1] by developing a sampling-based Bayesian latent variable regression (BLVR-S) method that relies on Gibbs sampling. Use of the sampling-based framework not only provides point estimates, but its ability to generate samples that represent the posterior distribution of the unknown variables also readily provides error bounds. Features and advantages of this method are demonstrated via examples based on simulated data and real Near-Infrared (NIR) spectroscopy data. Practical aspects of Bayesian modeling, such as determining when the extra computation may be worth the effort, are addressed by an empirical study of the effects of the amount of training data and signal to noise ratio (SNR). The benefits of BLVR seem to be most significant when the number of measurements is limited and when noise in output variables is relatively large. Copyright © 2007 John Wiley & Sons, Ltd.

11.
Validation of complex chemical models relies increasingly on uncertainty propagation and sensitivity analysis with Monte Carlo sampling methods. The utility and accuracy of this approach depend on the proper definition of probability density functions for the uncertain parameters of the model. Taking into account the existing correlations between input parameters is essential to a reliable uncertainty budget for the model outputs. We address here the problem of branching ratios between product channels of a reaction, which are correlated by the unit value of their sum. We compare the uncertainties on predicted time-dependent and equilibrium species concentrations due to input samples, either uncorrelated or explicitly correlated by a Dirichlet distribution. The method is applied to the case of Titan ionospheric chemistry, with the aim of estimating the effect of branching ratio correlations on the uncertainty balance of equilibrium densities in a complex model.
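The sum-to-one correlation the abstract describes is enforced automatically by sampling branching ratios from a Dirichlet distribution. A minimal sketch contrasting Dirichlet samples with naive independent perturbations (the nominal ratios and concentration parameter are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# three product channels with assumed nominal branching ratios 0.5/0.3/0.2
nominal = np.array([0.5, 0.3, 0.2])
kappa = 200.0                       # Dirichlet concentration: higher = tighter

br = rng.dirichlet(kappa*nominal, size=10_000)
print(br.sum(axis=1)[:3])           # every Dirichlet sample sums to exactly 1

# naive independent sampling violates the sum-to-one constraint
naive = nominal * (1 + 0.1*rng.normal(size=(10_000, 3)))
print(naive.sum(axis=1).std())      # nonzero spread in the channel sum
```

Feeding the correlated `br` samples (instead of `naive`) into the Monte Carlo propagation is what keeps the output uncertainty budget physically consistent.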

12.
A comprehensive and hierarchical optimization of a joint hydrogen and syngas combustion mechanism has been carried out. The Kéromnès et al. (Combust Flame, 2013, 160, 995–1011) mechanism for syngas combustion was updated with our recently optimized hydrogen combustion mechanism (Varga et al., Proc Combust Inst, 2015, 35, 589–596) and optimized using a comprehensive set of direct and indirect experimental data relevant to hydrogen and syngas combustion. The collection of experimental data consisted of ignition measurements in shock tubes and rapid compression machines, burning velocity measurements, and species profiles measured using shock tubes, flow reactors, and jet-stirred reactors. The experimental conditions covered wide ranges of temperatures (800–2500 K), pressures (0.5–50 bar), equivalence ratios (φ = 0.3–5.0), and C/H ratios (0–3). In total, 48 Arrhenius parameters and 5 third-body collision efficiency parameters of 18 elementary reactions were optimized using these experimental data. A large number of directly measured rate coefficient values belonging to 15 of the reaction steps were also utilized. The optimization has resulted in a H2/CO combustion mechanism, which is applicable to a wide range of conditions. Moreover, new recommended rate parameters with their covariance matrix and temperature-dependent uncertainty ranges of the optimized rate coefficients are provided. The optimized mechanism was compared to 19 recent hydrogen and syngas combustion mechanisms and is shown to provide the best reproduction of the experimental data.
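The optimized quantities are modified-Arrhenius parameters, k(T) = A T^n exp(−E/RT); since ln k is linear in ln A, n, and E, a single reaction's parameters can be recovered from rate-coefficient data by linear least squares. A minimal sketch (the parameter values are assumptions, not the mechanism's; the paper's actual optimization is a joint nonlinear fit over 18 reactions against direct and indirect data):

```python
import numpy as np

R = 8.314                                # gas constant, J/(mol K)
A, n, E = 1.0e8, 1.5, 40_000.0           # assumed "true" Arrhenius parameters
T = np.linspace(800.0, 2500.0, 30)       # temperature range from the abstract
k = A * T**n * np.exp(-E/(R*T))          # synthetic rate coefficient data

# ln k = ln A + n ln T - E/(R T): linear in the transformed parameters
X = np.column_stack([np.ones_like(T), np.log(T), -1.0/(R*T)])
lnA, n_fit, E_fit = np.linalg.lstsq(X, np.log(k), rcond=None)[0]
print(np.exp(lnA), n_fit, E_fit)
```

With noisy data the same linear system yields the parameter covariance matrix, the quantity the paper reports alongside its recommended rate parameters.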

13.
A diversity of multiresponse optimization methods has been introduced in the literature; however, their performance has not been thoroughly explored, and only a classical desirability-based criterion has been commonly used. With the aim of helping practitioners select an effective criterion for solving multiresponse optimization problems developed under the response surface methodology framework, and thus find compromise solutions that are technically and economically more favorable, the working ability of several easy-to-use criteria is evaluated and compared with that of a theoretically sound method. Four case studies with different numbers and types of responses are considered. Less-sophisticated criteria were able to generate solutions similar to those generated by sophisticated methods, even when the objective is to depict the Pareto frontier in problems with conflicting responses. Two easy-to-use criteria that require less-subjective information from the user yielded solutions similar to those of a classical desirability-based criterion. The impact of the range and increment of the preference parameters on the optimal solutions was also evaluated.
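The classical desirability-based criterion referenced above (in the Derringer-Suich style) maps each response onto a 0-1 desirability and combines them by a geometric mean. A minimal sketch with invented responses and bounds (not the paper's case studies):

```python
import numpy as np

def d_max(y, lo, hi, s=1.0):
    # larger-the-better desirability: 0 at/below lo, 1 at/above hi
    return np.clip((y - lo)/(hi - lo), 0.0, 1.0)**s

def overall(ds):
    # composite desirability: geometric mean of the individual desirabilities
    return np.prod(ds)**(1.0/len(ds))

# two responses evaluated at three candidate factor settings (illustrative)
y1 = np.array([55.0, 70.0, 82.0])   # want y1 large; bounds [50, 80] assumed
y2 = np.array([0.9, 0.6, 0.2])      # already expressed on a 0-1 scale
D = [overall([d1, d2]) for d1, d2 in zip(d_max(y1, 50.0, 80.0), y2)]
print(D)
```

The setting with the largest composite D is the compromise solution; the shape parameter `s` is one of the "preference parameters" whose range and increment the paper evaluates.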

14.
Monumental, recent, and rapidly continuing improvements in the capabilities of ab initio theoretical kinetics calculations provide reason to believe that progress in the field of chemical kinetics can be accelerated through a corresponding evolution of the role of theory in kinetic modeling and its relationship with experiment. The present article reviews and provides additional demonstrations of the unique advantages that arise when theoretical and experimental data across multiple scales are considered on equal footing, including the relevant uncertainties of both, within a single mathematical framework. Namely, the multiscale informatics framework simultaneously integrates information from a wide variety of sources and scales: ab initio electronic structure calculations of molecular properties, rate constant determinations for individual reactions, and measured global observables of multireaction systems. The resulting model representation consists of a set of theoretical kinetics parameters (with constrained uncertainties) that are related through elementary kinetics models to rate constants (with propagated uncertainties) that in turn are related through physical models to global observables (with propagated uncertainties). An overview of the approach and typical implementation is provided along with a brief discussion of the major uncertainties (parametric and structural) in theoretical kinetics calculations, kinetic models for complex chemical mechanisms, and physical models for experiments. Higher levels of automation in all aspects, including closed-loop autonomous mixed-experimental-and-computational model improvement, are advocated for facilitating scalability of the approach to larger systems with reasonable human effort and computational cost. The unique advantages of combining theoretical and experimental data across multiple scales are illustrated through a series of examples.
Previous results demonstrating the utility of simultaneous interpretation of theoretical and experimental data for assessing consistency in complex systems and for reliable, physics-based extrapolation of limited data are briefly summarized. New results are presented to demonstrate the high predictive accuracy of multiscale informed models for both small (molecular properties) and large (global observables) scales. These new results provide examples where the optimization yields physically realistic parameter adjustments and where physical model uncertainties in experiments are larger than kinetic model uncertainties. New results are also presented to demonstrate the utility of the multiscale informatics approach for the design of experiments and theoretical calculations, accounting for both theoretical and experimental existing knowledge as well as relevant parametric and structural uncertainties in interpreting potential new data. These new results provide examples where neglecting structural uncertainties in the design of experiments results in failure to identify the most worthwhile experiment. Further progress in the chemical kinetics field (particularly at the intersection of theory, kinetic modeling, and experiment) would benefit from increased attention to understanding parametric and structural uncertainties for all three: the uncertainty magnitude and cross-correlations among model parameters, as well as limitations of the model structures themselves.

15.
We describe a systematic method for optimizing mass spectrometric (MS) detection for ion chromatographic (IC) analysis of common anions and three selected organic acids using response surface methodology (RSM). RSM was utilized in this study because it minimized the number of experiments required to achieve the optimum MS response and included the interactions between individual parameters for multivariable optimization. Five MS parameters, including probe temperature, nebulizer gas, assistant makeup flow, needle voltage and cone voltage, were screened and systematically optimized in two steps. Central composite design (CCD) was used to select the experimental points, and a quadratic model was applied to fit the experimental data. Analysis of variance (ANOVA) was carried out to evaluate the validity of the statistical model and to determine the most significant parameters for the MS response. The optimum MS conditions for each analyte were summarized, and the overall optimum condition for the method was obtained by applying a desirability function. Our observations showed good agreement between the statistically predicted optimum response and the responses collected at the predicted optimum condition. An operable range for each parameter (with normalized MS response greater than 0.8 for each analyte) is provided for general anionic IC/MS applications. Copyright © 2009 John Wiley & Sons, Ltd.
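The CCD-plus-quadratic-model workflow can be sketched end to end: fit a second-order polynomial to the responses at the design points, then locate the stationary point of the fitted surface. The response function below is invented for illustration (real responses would come from the IC/MS runs), and a two-factor face-centered design stands in for the five-parameter study:

```python
import numpy as np

# face-centred central composite design in two coded factors
pts = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
                [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
x1, x2 = pts.T

# assumed true response surface; in practice y is measured at each design point
y = 10.0 - (x1 - 0.3)**2 - 2.0*(x2 + 0.2)**2

# full quadratic model: intercept, linear, pure quadratic, interaction terms
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# stationary point of the fitted quadratic: solve grad(y_hat) = 0
H = np.array([[2*b[3], b[5]], [b[5], 2*b[4]]])
xs = np.linalg.solve(H, -b[1:3])
print(xs)
```

ANOVA on the fitted coefficients (not shown) is what the paper uses to rank parameter significance before optimizing.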

16.
We present an innovative, multiscale computational approach to probe the behaviour of polymer–clay nanocomposites (PCNs). Our modeling recipe is based on 1) quantum/force-field-based atomistic simulation to derive interaction energies among all system components; 2) mapping of these values onto mesoscopic bead–field (MBF) hybrid-method parameters; 3) mesoscopic simulations to determine system density distributions and morphologies (i.e., intercalated versus exfoliated); and 4) simulations at finite-element levels to calculate the relative macroscopic properties. The entire computational procedure has been applied to two well-known PCN systems, namely Nylon 6/Cloisite 20A and Nylon 6/Cloisite 30B, as test materials, and their mechanical properties were predicted in excellent agreement with the available experimental data. Importantly, our methodology is a truly bottom-up approach, and no “learning from experiment” was needed in any step of the entire procedure.

17.
The linear solvent strength model was used to predict coverage in online comprehensive two-dimensional reversed-phase liquid chromatography. The prediction model uses a parallelogram to describe the separation space covered with peaks in a system with limited orthogonality. The corners of the parallelogram are assumed to behave like chromatographic peaks, and the positions of these pseudo-compounds were predicted. A mix of 25 polycyclic aromatic compounds was used as a test. The precision of the prediction (span 0–25) was tested by varying input parameters and was found to be acceptable, with root mean square errors of 3. The accuracy of the prediction was assessed by comparison with the experimental coverages. Fewer than half of the experimental coverages were outside the prediction ± 1 × root mean square error, and none were outside the prediction ± 2 × root mean square error. Accuracy was lower when retention factors were low, or when gradient conditions affected parameters not included in the model; e.g., the second-dimension gradient time affects the second-dimension equilibration time. The concept shows promise as a tool for gradient optimization in online comprehensive two-dimensional liquid chromatography, as it mitigates the tedious registration and modeling of all sample constituents, a circumstance that is particularly appealing when dealing with complex samples.
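Once the four pseudo-compound corners are predicted, the coverage metric follows directly: treat them as vertices of the parallelogram and take its area as a fraction of the normalized separation space. A sketch with illustrative corner coordinates (not values from the paper):

```python
def shoelace(corners):
    # polygon area from vertex coordinates via the shoelace formula
    area = 0.0
    n = len(corners)
    for i in range(n):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % n]
        area += x1*y2 - x2*y1
    return abs(area)/2.0

# normalized 1D/2D retention of the four predicted corners (illustrative)
corners = [(0.05, 0.10), (0.95, 0.30), (0.95, 0.90), (0.05, 0.70)]
coverage = shoelace(corners)   # fraction of the unit separation space used
print(f"coverage = {coverage:.2f}")
```

Because the separation space is normalized to a unit square, the parallelogram area is the coverage directly, with 1.0 meaning fully orthogonal use of both dimensions.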

18.
The weighted histogram analysis method (WHAM) is a powerful approach to estimating molecular free energy surfaces (FES) from biased simulation data. Bayesian reformulations of WHAM are valuable in making statistically optimal use of the data and in providing a transparent means to incorporate regularizing priors and estimate statistical uncertainties. In this work, we develop a fully Bayesian treatment of WHAM to generate statistically optimal FES estimates in any number of biasing dimensions under arbitrary choices of the Bayes prior. Rigorous uncertainty estimates are generated by Metropolis-Hastings sampling from the Bayes posterior. We also report a means to project the FES and its uncertainties into arbitrary auxiliary order parameters beyond those in which biased sampling was conducted. We demonstrate the approaches in applications to alanine dipeptide and the unthreading of a synthetic mimic of the astexin-3 lasso peptide. Open-source MATLAB and Python implementations of our codes are available for free public download. © 2017 Wiley Periodicals, Inc.
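The uncertainty estimates come from Metropolis-Hastings sampling of the Bayes posterior. A minimal, generic MH loop of that kind, targeting a stand-in one-dimensional log-posterior rather than the full FES posterior (the target, step size, and chain length are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post(x):
    # stand-in log-posterior; in Bayesian WHAM this would score FES bin weights
    # against the biased histogram counts and the chosen prior
    return -0.5*(x - 1.0)**2/0.25      # Gaussian: mean 1.0, std 0.5

x, chain = 0.0, []
for _ in range(20_000):
    prop = x + 0.5*rng.normal()        # symmetric random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop                       # accept with the Metropolis ratio
    chain.append(x)

chain = np.array(chain[5_000:])        # discard burn-in
print(chain.mean(), chain.std())       # posterior mean and uncertainty
```

The spread of the retained chain is exactly the kind of "rigorous uncertainty estimate" the abstract refers to, here recovering the target's mean and standard deviation.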

19.
In this study, we use thermodynamic theory to develop a novel model that allows quantitative determination of the Gibbs free energy of adhesion, ΔG_adh, for the initial bacterial attachment process. The model eliminates the need to calculate interfacial free energies and instead relies on easily measurable contact angles to determine ΔG_adh. We experimentally verify the model using real-time observation of the initial attachment of Pseudomonas putida to methyl- and hydroxyl-terminated self-assembled monolayers. We also test the applicability of the model under a variety of experimental conditions using data available in the literature. We show that the initial attachment process is governed by dispersion forces and is accurately predicted by the model. We also find that the model is simple to apply and accurate across a variety of experimental conditions.

20.