1.
Validation of complex chemical models relies increasingly on uncertainty propagation and sensitivity analysis with Monte Carlo sampling methods. The utility and accuracy of this approach depend on the proper definition of probability density functions for the uncertain parameters of the model. Taking into account the existing correlations between input parameters is essential to a reliable uncertainty budget for the model outputs. We address here the problem of branching ratios between product channels of a reaction, which are correlated by the unit value of their sum. We compare the uncertainties on predicted time-dependent and equilibrium species concentrations due to input samples, either uncorrelated or explicitly correlated by a Dirichlet distribution. The method is applied to the case of Titan ionospheric chemistry, with the aim of estimating the effect of branching ratio correlations on the uncertainty balance of equilibrium densities in a complex model.
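As a rough sketch of the sampling step described above (the shape parameters, nominal branching ratios and sample count below are hypothetical, and only Python's standard library is used), Dirichlet-distributed branching ratios can be drawn by normalising independent Gamma variates; the unit-sum constraint then induces the negative correlations between channels that independent sampling would miss:

```python
import random

random.seed(42)

def sample_dirichlet(alphas):
    """Draw one branching-ratio vector from a Dirichlet distribution
    by normalising independent Gamma(alpha, 1) variates."""
    g = [random.gammavariate(a, 1.0) for a in alphas]
    total = sum(g)
    return [x / total for x in g]

# Hypothetical reaction with three product channels, nominal
# branching ratios 0.5 : 0.3 : 0.2 (alphas scaled by a precision factor).
alphas = [50.0, 30.0, 20.0]
samples = [sample_dirichlet(alphas) for _ in range(10000)]

# Every sample respects the unit-sum constraint exactly.
for s in samples[:5]:
    assert abs(sum(s) - 1.0) < 1e-12

# The unit-sum constraint induces negative correlation between channels.
mean0 = sum(s[0] for s in samples) / len(samples)
mean1 = sum(s[1] for s in samples) / len(samples)
cov01 = sum((s[0] - mean0) * (s[1] - mean1) for s in samples) / len(samples)
print(f"mean b0 = {mean0:.3f}, cov(b0, b1) = {cov01:.2e}")
```

Feeding such correlated samples into the kinetic model, instead of independently perturbed ratios, is what changes the uncertainty budget of the computed densities.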

2.
The provision of uncertainty estimates along with measurement results, or values computed thereof, is metrologically mandatory. This is in particular true for observational data related to climate change, and for thermodynamic properties of geophysical substances derived thereof, such as air, seawater or ice. The recent International Thermodynamic Equation of Seawater 2010 (TEOS-10) provides such properties in a comprehensive and highly accurate way, derived from empirical thermodynamic potentials released by the International Association for the Properties of Water and Steam (IAPWS). Currently, there are no generally recognised algorithms available for a systematic and comprehensive estimation of uncertainties for arbitrary properties derived from those potentials at arbitrary input values, based on the experimental uncertainties of the laboratory data that were originally used for the correlations during the construction process. In particular, standard formulas for uncertainty propagation which do not account for mutual uncertainty correlations between different coefficients tend to systematically and significantly overestimate the uncertainties of derived quantities, which may lead to practically useless results. In this paper, stochastic ensembles of thermodynamic potentials, derived from randomly modified input data, are considered statistically to provide analytical formulas for the computation of the covariance matrix of the related regression coefficients, from which in turn uncertainty estimates for any derived property can be computed a posteriori. For illustration purposes, simple analytical application examples of the general formalism are briefly discussed.

3.
Chemical analyses are becoming increasingly expensive to perform, and more and more research is based on numerical simulation. However, all numerical models are subject to uncertainties, whether in the fidelity of the model to reality or simply because the required input data are not known without uncertainty. These uncertainties affect the predictability of any model, so it is of vital interest that their effect is known, especially if serious decisions are to be based on the simulation results. In this paper, we outline a simple and rather straightforward approach to the effect of uncertainties on such basic chemical calculations as the solubility of a solid phase in a given water and the chemical speciation of a solution. In addition, we also touch upon the much more complicated matter of uncertainties in sorption modelling, a subject that will be treated in much greater detail in an upcoming NEA publication on the matter.

4.
A model for calculating correlations between the activities of the same gamma-ray emitter calculated from different peaks in its spectrum is presented. The correlation coefficients can be expressed by the relative uncertainties of the input quantities. The use of this model in the calculation of the mean activity prevents the calculation of an excessively small uncertainty of the mean, since averaging of correlated uncertainty components is avoided.
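A minimal numerical sketch of the idea (the activities, peak-specific uncertainties and shared component below are hypothetical): propagating a common relative uncertainty component through the full covariance matrix of the peak-wise activities keeps the uncertainty of the mean from shrinking artificially, whereas treating the peaks as independent lets the correlated component average away:

```python
import math

# Hypothetical activities (Bq) of one emitter from three gamma peaks,
# each with an independent (counting) and a shared (e.g. sample mass,
# live time) relative uncertainty component.
activities = [102.0, 99.0, 101.0]
u_indep_rel = [0.020, 0.030, 0.025]   # peak-specific relative uncertainties
u_shared_rel = 0.015                  # common relative component

# Covariance matrix: the shared relative component correlates every pair.
n = len(activities)
V = [[activities[i] * activities[j] *
      (u_shared_rel**2 + (u_indep_rel[i]**2 if i == j else 0.0))
      for j in range(n)] for i in range(n)]

# Unweighted mean for simplicity; propagate through the full covariance.
w = [1.0 / n] * n
mean = sum(wi * a for wi, a in zip(w, activities))
u_mean = math.sqrt(sum(w[i] * w[j] * V[i][j]
                       for i in range(n) for j in range(n)))

# Naive propagation that ignores the correlation underestimates u(mean).
u_naive = math.sqrt(sum((w[i]**2) * V[i][i] for i in range(n)))
print(f"mean = {mean:.1f} Bq, u = {u_mean:.2f} (correlated) "
      f"vs {u_naive:.2f} (naive)")
```

The positive off-diagonal terms of the covariance matrix are exactly what the naive quadrature sum discards, which is why it yields the "excessively small" uncertainty the abstract warns against.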

5.
Speciation calculations are often the base upon which further and more important conclusions are drawn, e.g., solubilities and sorption estimates used for the retention of hazardous materials. Since speciation calculations are based on experimentally determined stability constants of the relevant chemical reactions, the measurement and experimental uncertainty in these constants will affect the reliability of the simulation output. The present knowledge of the thermodynamic data relevant for predicting the behaviour of a complex chemical system is quite heterogeneous. Predicting the impact of these uncertainties on the reliability of a simulation output requires sophisticated modelling codes. In this paper, we present a computer program, LJUNGSKILE, which utilises the thermodynamic equilibrium code PHREEQC to statistically calculate uncertainties in speciation based on uncertainties in stability constants. A short example is included.
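Without reproducing LJUNGSKILE or PHREEQC, the underlying idea can be sketched as a toy Monte Carlo for a single hypothetical complexation equilibrium M + L ⇌ ML with an uncertain stability constant (the value, its uncertainty and the free-ligand concentration are all invented for illustration):

```python
import math
import random

random.seed(1)

# Hypothetical value and standard uncertainty of log10(beta) for M + L <-> ML.
logbeta_mean, logbeta_u = 5.0, 0.2
L_free = 1.0e-4   # assumed free-ligand concentration (mol/L)

fractions = []
for _ in range(5000):
    # Sample the stability constant from its assumed distribution.
    logbeta = random.gauss(logbeta_mean, logbeta_u)
    beta = 10.0 ** logbeta
    # Fraction of metal bound as ML at fixed free-ligand concentration.
    fractions.append(beta * L_free / (1.0 + beta * L_free))

fractions.sort()
median = fractions[len(fractions) // 2]
lo, hi = fractions[int(0.025 * 5000)], fractions[int(0.975 * 5000)]
print(f"ML fraction: median {median:.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```

The spread of the output fraction is the speciation uncertainty inherited from the stability constant; a full speciation code does the same propagation across many coupled equilibria simultaneously.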

6.
Uncertainties in Solubility Calculations
Summary. When considering the possible migration of hazardous elements in groundwater, one has to take into account several phenomena, e.g. solubility, ion exchange, adsorption, matrix diffusion, and transport paths. Here, we focus on the solubility, which in turn depends on several more or less uncertain chemical properties. Uncertainties in the data from laboratory experiments aimed at measuring thermodynamic constants may cause uncertainties in the amounts of some species of several tenths of the relative mass fraction. The thermodynamic data may then be used for solubility calculations under different conditions and water compositions. Clearly, there are several uncertainties associated with solubility calculations in the rock-water system. First, there is the effect of uncertainties in thermodynamic data such as stability and solubility constants, and also enthalpies of reaction if the water is not at room temperature. Furthermore, there are the rock-water interactions, which change the water composition as different minerals come into contact with the water flowing through a system of fractures. Mineralogical studies accurate enough for modeling the evolution of the water are difficult to perform, and therefore the mineral composition of the rock, and thus the water composition, should be treated as parameters subject to uncertainties. In addition, there are also conceptual uncertainties with respect to input data. The calculation of a solubility should be an easy task for any chemist, but in fact results differing by orders of magnitude are found even when the modelers have used the same computer program and the same data. In this paper, uncertainties associated with solubility calculations are discussed. The results are exemplified by the calculated solubilities of some actinides in groundwater from crystalline rock. Received August 21, 2000. Accepted (revised) May 18, 2001

7.
The paper describes experiments for the evaluation of uncertainties associated with a number of chromatographic parameters. Studies of the analysis of vitamins by HPLC illustrate the estimation of the uncertainties associated with experimental "input" parameters such as the detector wavelength, column temperature and mobile phase flow-rate. Experimental design techniques, which allow the efficient study of a number of parameters simultaneously, are described. Multiple linear regression was used to fit response surfaces to the data, and the resulting equations were used in the estimation of the uncertainties. Three approaches to uncertainty calculation were compared: Kragten's spreadsheet, the symmetric spreadsheet and algebraic differentiation. In cases where non-linearity in the model was significant, agreement between the uncertainty estimates was poor, as the spreadsheet approaches do not include second-order uncertainty terms.
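Kragten's spreadsheet approach mentioned above can be sketched in a few lines (the response model and all numbers below are invented for illustration): each input is perturbed by its standard uncertainty, and the resulting output changes are combined in quadrature, which captures only first-order terms and hence explains the disagreement observed for strongly non-linear models:

```python
import math

def kragten_uncertainty(f, x, u):
    """Kragten's spreadsheet approximation: perturb each input by its
    standard uncertainty and combine the output changes in quadrature."""
    y0 = f(x)
    contributions = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += u[i]                  # one-sided, first-order perturbation
        contributions.append(f(xp) - y0)
    return y0, math.sqrt(sum(c * c for c in contributions))

# Hypothetical HPLC response model: peak area as a simple product of an
# assumed sensitivity, a concentration and an injection-volume factor.
area = lambda p: p[0] * p[1] * p[2]
x = [150.0, 0.50, 1.00]        # sensitivity, conc (mg/mL), volume factor
u = [2.0, 0.01, 0.005]         # their standard uncertainties

y0, uy = kragten_uncertainty(area, x, u)
print(f"area = {y0:.1f}, u(area) = {uy:.2f}")
```

Because only single perturbations are evaluated, curvature (second-order) terms are invisible to this scheme, which is exactly where algebraic differentiation with higher-order terms diverges from the spreadsheet estimates.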

8.
Mass balances of ash and potassium for a fluidized bed combustor were performed incorporating measurement uncertainties. The total output mass of ash or of a chemical element should equal the mass in the input fuel; however, this is often not achieved. A realistic estimation of the recovery uncertainty can support the reliability of a mass balance: it helps to establish a reliable evaluation of the recovery ratio of ash mass and elemental mass, and may clarify whether any apparent failure to close the mass balance can be attributed to uncertainties. The evaluation of measurement uncertainty for the different matrices, namely coal, biomass, sand and ashes from different streams, was based on internal quality control data and on external quality control data, namely the analysis of samples from proficiency tests or the use of a certified reference material. The evaluation of intermediate precision and trueness allowed the estimation of measurement uncertainty. Owing to the different physical and chemical characteristics of the studied matrices, the uncertainty of precision was evaluated using R-charts of data obtained from the analysis of duplicates for the majority of samples, which allowed sample heterogeneity effects to be evaluated. The instrumental acceptance criterion was also considered and included in the combined uncertainty. The trueness was evaluated using data from several proficiency tests and from the analysis of a certified reference material or sample spiking. Statistically significant bias was included.

9.
Numerous mathematical tools intended to adjust rate constants employed in complex detailed kinetic models to make them consistent with multiple sets of experimental data have been reported in the literature. Application of such model optimization methods typically begins with the assignment of uncertainties in the absolute rate constants in a starting model, followed by variation of the rate constants within these uncertainty bounds to tune rate parameters to match model outputs to experimental observations. The present work examines the impact of including information on relative reaction rates in the optimization strategy, which is not typically done in current implementations. It is shown that where such rate constant data are available, the available parameter space changes dramatically due to the correlations inherent in such measurements. Relative rate constants are typically measured with greater relative accuracy than corresponding absolute rate constant measurements. This greater accuracy further reduces the available parameter space, which significantly affects the uncertainty in the model outcomes as a result of kinetic parameter uncertainties. We demonstrate this effect by considering a simple example case emulating an ignition event and show that use of relative rate measurements leads to a significantly smaller uncertainty in the output ignition delay time in comparison with results based on absolute measurements. This is true even though the same range of absolute rate constants is sampled in each case. Implications of the results with respect to the maintenance of physically realistic kinetics in optimized models are discussed, and suggestions are made for the path forward in the refinement of detailed kinetic models.
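A toy illustration of the effect (the nominal rate constants and uncertainty factors below are hypothetical): sampling two rate constants independently within a factor-of-two absolute uncertainty lets their ratio vary far more widely than a relative-rate measurement with a tighter uncertainty factor permits, so the accessible parameter space shrinks accordingly:

```python
import math
import random

random.seed(0)

# Hypothetical rate constants with a factor-of-2 absolute uncertainty.
k1_nom, k2_nom = 1.0e12, 4.0e11
f_abs = 2.0      # absolute uncertainty factor for each rate constant
f_rel = 1.2      # tighter uncertainty factor on the measured ratio k1/k2

def sample_factor(f):
    """Log-uniform multiplicative perturbation within [1/f, f]."""
    return math.exp(random.uniform(-math.log(f), math.log(f)))

ratios_indep, ratios_rel = [], []
for _ in range(20000):
    # Uncorrelated sampling: each rate constant varies independently.
    ratios_indep.append((k1_nom * sample_factor(f_abs)) /
                        (k2_nom * sample_factor(f_abs)))
    # Relative-rate measurement constrains the ratio itself.
    ratios_rel.append((k1_nom / k2_nom) * sample_factor(f_rel))

spread = lambda r: max(r) / min(r)
print(f"ratio spread, independent: {spread(ratios_indep):.1f}x")
print(f"ratio spread, relative-rate constrained: {spread(ratios_rel):.1f}x")
```

Note that each individual rate constant still spans the same factor-of-two range in both cases; only the correlation between them differs, which is the point the abstract makes about the ignition-delay uncertainty.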

10.
Uncertainty analysis is a useful tool for inspecting and improving detailed kinetic mechanisms because it can identify the greatest sources of model output error. Owing to the very nonlinear relationship between kinetic and thermodynamic parameters and computed concentrations, model predictions can be extremely sensitive to uncertainties in some parameters while uncertainties in other parameters can be irrelevant. Error propagation becomes even more convoluted in automatically generated kinetic models, where input uncertainties are correlated through kinetic rate rules and thermodynamic group values. Local and global uncertainty analyses were implemented and used to analyze error propagation in Reaction Mechanism Generator (RMG), an open-source software for generating kinetic models. A framework for automatically assigning parameter uncertainties to estimated thermodynamics and kinetics was created, enabling tracking of correlated uncertainties. Local first-order uncertainty propagation was implemented using sensitivities computed natively within RMG. Global uncertainty analysis was implemented using adaptive Smolyak pseudospectral approximations, as implemented in the MIT Uncertainty Quantification Library, to efficiently compute and construct polynomial chaos expansions that approximate the dependence of outputs on a subset of uncertain inputs. Cantera was used as a backend for simulating the reactor system in the global analysis. Analyses were performed for a phenyldodecane pyrolysis model. Local and global methods demonstrated similar trends; however, many uncertainties were significantly overestimated by the local analysis. Both local and global analyses show that correlated uncertainties based on kinetic rate rules and thermochemical groups drastically reduce a model's degrees of freedom and have a large impact on the determination of the most influential input parameters. These results highlight the necessity of incorporating uncertainty analysis in the mechanism generation workflow.
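The impact of rule-based correlations can be sketched as follows (the sensitivities, uncertainties and rule assignments below are invented for illustration): parameters estimated from the same rate rule move together, so their sensitivity-weighted contributions add linearly before squaring, rather than in quadrature as independent parameters would:

```python
import math

# Local first-order uncertainty propagation for a model output y, with
# sensitivities s_i = d(ln y)/d(ln k_i) and log-uncertainties sigma_i.
sens = [0.8, -0.5, 0.3]   # hypothetical sensitivity coefficients
sigma = [0.4, 0.4, 0.2]   # uncertainties in ln k_i
rule = [0, 0, 1]          # rate-rule index each k_i was estimated from

# Uncorrelated treatment: contributions add in quadrature.
var_uncorr = sum((s * g) ** 2 for s, g in zip(sens, sigma))

# Correlated treatment: parameters sharing a rate rule share one
# underlying uncertain parameter, so their contributions add linearly
# within each rule before squaring.
var_corr = sum(
    (sum(s * g for s, g, r in zip(sens, sigma, rule) if r == q)) ** 2
    for q in set(rule))

print(f"u(ln y): uncorrelated {math.sqrt(var_uncorr):.3f}, "
      f"rule-correlated {math.sqrt(var_corr):.3f}")
```

Depending on the signs of the sensitivities, the correlated treatment can either shrink the output uncertainty (as in this contrived case, where the two rule-sharing reactions have opposing sensitivities) or inflate it, which is why tracking the correlations changes which inputs appear most influential.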

11.
Monumental, recent and rapidly continuing improvements in the capabilities of ab initio theoretical kinetics calculations provide reason to believe that progress in the field of chemical kinetics can be accelerated through a corresponding evolution of the role of theory in kinetic modeling and its relationship with experiment. The present article reviews and provides additional demonstrations of the unique advantages that arise when theoretical and experimental data across multiple scales are considered on an equal footing, including the relevant uncertainties of both, within a single mathematical framework. Namely, the multiscale informatics framework simultaneously integrates information from a wide variety of sources and scales: ab initio electronic structure calculations of molecular properties, rate constant determinations for individual reactions, and measured global observables of multireaction systems. The resulting model representation consists of a set of theoretical kinetics parameters (with constrained uncertainties) that are related through elementary kinetics models to rate constants (with propagated uncertainties), which in turn are related through physical models to global observables (with propagated uncertainties). An overview of the approach and a typical implementation is provided, along with a brief discussion of the major uncertainties (parametric and structural) in theoretical kinetics calculations, in kinetic models for complex chemical mechanisms, and in physical models for experiments. Higher levels of automation in all aspects, including closed-loop autonomous mixed-experimental-and-computational model improvement, are advocated for facilitating scalability of the approach to larger systems with reasonable human effort and computational cost. The unique advantages of combining theoretical and experimental data across multiple scales are illustrated through a series of examples. Previous results demonstrating the utility of simultaneous interpretation of theoretical and experimental data for assessing consistency in complex systems, and for reliable, physics-based extrapolation of limited data, are briefly summarized. New results are presented to demonstrate the high predictive accuracy of multiscale informed models at both small (molecular properties) and large (global observables) scales. These new results provide examples where the optimization yields physically realistic parameter adjustments and where physical model uncertainties in experiments are larger than kinetic model uncertainties. New results are also presented to demonstrate the utility of the multiscale informatics approach for the design of experiments and theoretical calculations, accounting for both theoretical and experimental existing knowledge as well as the relevant parametric and structural uncertainties in interpreting potential new data. These new results provide examples where neglecting structural uncertainties in the design of experiments leads to failure to identify the most worthwhile experiment. Further progress in the chemical kinetics field (particularly at the intersection of theory, kinetic modeling, and experiment) would benefit from increased attention to understanding parametric and structural uncertainties for all three: the uncertainty magnitudes and cross-correlations among model parameters, as well as the limitations of the model structures themselves.

12.
Use of repeated measurements in quantitative chemical analysis is common, but it raises the problem of how to combine the measurement values and produce a result with an uncertainty following the GUM. There is often confusion between repeated indications or observations of an input quantity, for whose uncertainty the GUM prescribes a type A evaluation, and complete measurements repeated on multiple sub-samples, as considered here. A solution for combining repeated measurement results and their individual uncertainties, based on simple interval logic, is proposed here. The individual measurement values and their uncertainties are compared with the calculated average value to see whether this implies that another, possibly unknown, source of uncertainty is present. The model of the individual results is modified for this possible between-replicate effect so that the repeated measurements are consistent. Lack of consistency is a strong indication that the measurement is not fully under control and needs further development or investigation. This is not always possible, however, and the method given here is proposed to ensure that the values of the repeated measurements agree with each other. A simple numerical example is given showing how the method can be implemented in practice.
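A compact sketch of such a consistency adjustment (the replicate values and uncertainties below are hypothetical, and the simple excess-variance model is only one way to realise the idea of a between-replicate effect):

```python
import math

def combine_replicates(values, uncerts):
    """Combine repeated measurement results; if the observed spread
    exceeds what the stated uncertainties imply, inflate the result
    with a common between-replicate variance so the replicates become
    mutually consistent."""
    n = len(values)
    mean = sum(values) / n
    # Uncertainty of the mean from the stated uncertainties alone.
    u_within = math.sqrt(sum(u * u for u in uncerts)) / n
    # Observed dispersion of the replicate results.
    s_obs = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    # Between-replicate variance needed to explain the excess spread.
    mean_u2 = sum(u * u for u in uncerts) / n
    s2_between = max(0.0, s_obs ** 2 - mean_u2)
    u_mean = math.sqrt(u_within ** 2 + s2_between / n)
    return mean, u_mean, s2_between

values = [10.2, 10.8, 9.9]     # hypothetical replicate results
uncerts = [0.10, 0.10, 0.10]   # their stated standard uncertainties
mean, u_mean, s2 = combine_replicates(values, uncerts)
print(f"mean = {mean:.2f}, u = {u_mean:.2f}, "
      f"between-replicate variance = {s2:.3f}")
```

A non-zero between-replicate variance is the numerical flag that an extra, possibly unknown, uncertainty source is present, the situation the paper identifies as a measurement not fully under control.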

13.
Since the uncertainty of each link in the traceability chain (measuring instrument, reference material or other measurement standard) changes over the course of time, the chain lifetime is limited. The lifetime in chemical analysis depends on the calibration intervals of the measuring equipment and the shelf-life of the certified reference materials (CRMs) used for the calibration of the equipment. It is shown that the ordinary least squares technique, used for treatment of the calibration data, is correct only when uncertainties in the certified values of the measurement standards or CRMs are negligible. If these uncertainties increase (for example, close to the end of the calibration interval or shelf-life), they can significantly influence the calibration and measurement results. In such cases, regression analysis of the calibration data should take into account that not only the response values but also the certified values are subject to errors. As an end-point criterion for the breakdown of the traceability chain, the requirement that a measurement standard should contribute less than one-third of the uncertainty in the measurement result is applicable. An example from analytical practice, based on data from interlaboratory comparisons of ethanol determination in beer, is discussed. Received: 5 October 2000 Accepted: 3 December 2000

14.
A method for the determination of colorimetric uncertainties has been developed in order to meet the requirements for accreditation by the UK accreditation service (UKAS), which include a statement of uncertainty for all certified quantities. The values of the principal sources of spectrophotometric uncertainty are first determined and are used to calculate corresponding components of colorimetric uncertainty using a simple model. The components of uncertainty analysed are 100% level (diffuse reflectance), photometric non-linearity, dark level and wavelength error. Bandwidth error is not significant for NPL surface colour standards because a small bandwidth is always used.

The gloss trap error and specular beam error are determined and corrected so that only the uncertainties after correction need be considered. These can be treated as components of dark level uncertainty. The uncertainties are determined for the following colour data: x, y, Y, u′, v′, L*, a* and b* for the CIE 10° Standard Observer and the CIE Standard Illuminant D65 for three geometries: specular included, specular excluded and 0°/45°. These are now quoted routinely on NPL certificates for ceramic colour standards, white and black ceramic tile standards and Russian opal standards.


15.
Sensitivity analysis is an important tool in model validation and evaluation that has been employed extensively in the analysis of chemical kinetic models of combustion processes. The input parameters of a chemical kinetic model are always associated with some uncertainties, and the effects of these uncertainties on the predicted combustion properties can be determined through sensitivity analysis. In this work, first- and second-order global and local sensitivity coefficients of ignition delay time with respect to the scaling factors for reaction rate constants in chemical kinetic mechanisms for the combustion of H2, methane, n-butane, and n-heptane are examined. In the sensitivity analysis performed here, the output of the model is the natural logarithm of the ignition delay time, and the input parameters are the natural logarithms of the factors that scale the reaction rate constants. The output of the model is expressed as a polynomial function of the input parameters, including coupling between up to two input parameters. This polynomial function is determined by varying one or two input parameters at a time, and allows the determination of both local and global sensitivity coefficients. The order of the polynomial function in the present work is four, and the factor that scales the reaction rate constant is in the range from 1/e to e, where e is the base of the natural logarithm. A relatively small number of sample runs is required in this approach compared to global sensitivity analysis based on the high-dimensional model representation method with random sampling of inputs (RS-HDMR). In RS-HDMR, sensitivity coefficients are determined only for the rate constants of a limited number of reactions; the present approach, by contrast, affords sensitivity coefficients for a larger number of reactions. Reactions and reaction pairs with the largest sensitivity coefficients are listed for the ignition delay times of the four typical fuels. Global sensitivity coefficients are always positive, while local sensitivity coefficients can be either positive or negative. A negative local sensitivity coefficient indicates that the reaction promotes ignition, while a positive local sensitivity coefficient indicates that the reaction suppresses ignition. Our results show that important reactions or reaction pairs identified by global sensitivity analysis are usually rather similar to those identified by local sensitivity analysis. This finding can probably be attributed to the fact that the input parameters vary within a rather small range in the sensitivity analysis, so that nonlinear effects over such a small range are negligible. It is possible to determine global sensitivity coefficients by varying the input parameters over a larger range using the present approach. Such analysis shows that correlation effects between an important reaction and a minor reaction can produce a relatively sizable second-order sensitivity coefficient in some cases. On the other hand, first-order global sensitivity coefficients in the present approach are affected by coupling between two reactions, and some results of the first-order global sensitivity analysis will differ from those determined by local sensitivity analysis, or by global sensitivity analysis under conditions where the correlation effects of two reactions are neglected. The present sensitivity analysis approach provides valuable information on important reactions as well as on the correlated effects of pairs of reactions on the combustion characteristics of a chemical kinetic mechanism. In addition, the analysis can be employed to aid global sensitivity analysis using RS-HDMR, where global sensitivity coefficients are determined more reliably.

16.
Procedures for estimating the measurement uncertainty of the acidity constant Ka (or the pKa value) in different media (I = 0 and I = 0.1 mol L⁻¹ KCl), as determined by potentiometric titration, are presented. The uncertainty budgets (the relative contributions of the different input quantities to the uncertainty in the result) of the pKa (I = 0) and pKa (I = 0.1 mol L⁻¹ KCl) values are compared. Unlike the values themselves, the uncertainties and uncertainty budgets of the values are comparable. The uncertainty estimation procedures are based on mathematical models of pKa measurement and involve the identification and quantification of individual uncertainty sources according to the ISO GUM approach. The mathematical model involves 52 and 48 input parameters for pKa (I = 0) and pKa (I = 0.1 mol L⁻¹ KCl), respectively. The relative importance of each source of uncertainty is discussed. In both cases, the main contributors to the uncertainty budget are the uncertainty components due to the hydrogen ion concentration/activity measurement, which provide 63.7% (for pKa (I = 0)) and 89.3% (for pKa (I = 0.1 mol L⁻¹ KCl)) of the uncertainty. The remaining uncertainty contributions arise mostly from the limited purity of the acid. From this work, it is clear that the uncertainties of the pKa (I = 0.1 mol L⁻¹ KCl) values tend to be lower than those of the pKa (I = 0) values. The main reasons are that: (1) the uncertainty due to the residual liquid junction potential is nominally absent in the case of pKa (I = 0.1 mol L⁻¹ KCl) because of the similarly high concentrations of background electrolyte in the calibration solutions and the measured solution; (2) the electrode system is more stable in solutions containing the 0.1 mol L⁻¹ KCl background electrolyte, so the readings obtained in these solutions are more stable.

17.
The measurement uncertainty of the determination of free and total carbohydrates in soluble (instant) coffee using high-performance anion exchange chromatography with pulsed amperometric detection, according to AOAC Method 995.13 and ISO standard 11292, was calculated. This method is important for monitoring several carbohydrate concentrations and is used to assess the authenticity of soluble coffee. We followed the recommendations of the ISO, Eurachem, and Valid Analytical Measurement (VAM) guides: individual uncertainty contributions u(x) were identified, quantified, and expressed as relative standard deviations related to each specific source, u(x)/x or RSD(x). They were then combined to yield the standard uncertainty and the relative standard uncertainty of a given carbohydrate concentration c, that is, u(c) and u(c)/c, respectively. As a result of our study, we could demonstrate that the overall repeatability of the carbohydrate determination in duplicate, RSD(r); the repeatability of the integration of the peak area of the carbohydrate standards, RSD(r(area)(ST)); and the uncertainty of the linear calibration model used in our laboratory, RSD(linST), are the most significant contributions to the total uncertainty. The u(c)/c values thus determined differ for each carbohydrate and depend on their concentrations. The smallest standard uncertainties that can be achieved are about 2.5%. The question of trueness in the total carbohydrate assay (determination of monosaccharides obtained upon hydrolysis of coffee oligo- and polysaccharides) was also addressed. For this purpose, we analyzed the data of two different collaborative trials in which our laboratory took part.

18.
After a measurement, a measured value and a measurement uncertainty are produced as a measurement result. By a repeated measurement, another measurement result is produced. It is shown that there may be a significant correlation between the individual results of the two measurements. A correlation coefficient can be determined when a GUM-compliant uncertainty budget for a measurement is available. Utilizing the correlations between the N individual results, an equation is derived to combine the N individual uncertainties of N measurements. Using the newly derived equation including the correlation coefficient, three measurement uncertainties of three measurement results are combined as an example. The combined uncertainty is compared with the uncertainty of a measurement which treats the three individual measurements as one process.
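The structure of such a combination can be sketched as follows (the uncertainty values and correlation matrices below are hypothetical): the uncertainty of the mean of N results is obtained from a double sum over the correlation-weighted products of the individual uncertainties, u²(mean) = (1/N²) Σᵢ Σⱼ r(i,j) u(i) u(j):

```python
import math

def combined_uncertainty(uncerts, r):
    """Uncertainty of the mean of N results whose individual
    uncertainties u_i share pairwise correlation coefficients r[i][j]."""
    n = len(uncerts)
    var = sum(r[i][j] * uncerts[i] * uncerts[j]
              for i in range(n) for j in range(n)) / n ** 2
    return math.sqrt(var)

u = [0.5, 0.5, 0.5]   # three equal individual uncertainties
# Fully uncorrelated vs fully correlated repeated measurements.
r_uncorr = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
r_corr = [[1.0] * 3 for _ in range(3)]

print(combined_uncertainty(u, r_uncorr))  # 0.5 / sqrt(3), about 0.289
print(combined_uncertainty(u, r_corr))    # 0.5: averaging gains nothing
```

With full correlation the N results behave as a single measurement and averaging gains nothing, which is the comparison the abstract draws against treating the three measurements as one process.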

19.
Isotope-dilution mass spectrometry (IDMS) is considered to be a method without significant correction factors, and it is also believed that this method is well understood. Unfortunately, however, a large number of different uncertainty budgets have been published that consider different correction factors, and these differences lead to conflicting combined uncertainties, especially in trace analysis. It is described how the known correction factors must be considered in the uncertainty budget of values determined by IDMS combined with ICP-MS (ICP-IDMS). The corrections applied are dead time, background, interference, mass discrimination, blank correction and air buoyancy. IDMS measurements always consist of a series of isotope abundance ratio measurements and can be performed according to different measurement protocols. Because the measurement protocols of IDMS are often rather sophisticated, correlations of influence quantities are difficult to identify. Therefore, the measurement protocol has to be carefully considered in the specification of the measurand, and a strategy is presented to properly account for these correlations. This is exemplified by the estimation of the mass fractions of platinum group elements (PGEs) and Re in the geological reference material UB-N (from CRPG-CNRS, Nancy, France) with ICP-IDMS. The PGEs with more than one isotope and the element Re are measured with on-line cation-exchange chromatography coupled to a quadrupole ICP-MS; all contents are below 10 µg kg⁻¹. Only osmium is separated from the matrix, by direct sparging of OsO4 into the plasma. This leads to transient signals for all PGEs and Re. It is nevertheless possible to estimate the combined uncertainties and keep them favourably small despite the low contents, the transient signals and the sophisticated correction model.

20.
In virtual drug screening, the chemical diversity of hits is an important factor, along with their predicted activity. Moreover, interim results are of interest for directing further research, and their diversity is also desirable. In this paper, we consider the problem of obtaining a diverse set of virtual screening hits in a short time. To this end, we propose a mathematical model of task scheduling for virtual drug screening in high-performance computational systems, formulated as a congestion game between computational nodes, to find equilibrium solutions that best balance the number of interim hits with their chemical diversity. The model considers a heterogeneous environment with workload uncertainty, processing time uncertainty, and limited knowledge about the input dataset structure. We perform computational experiments and evaluate the performance of the developed approach on the organic molecule database GDB-9. The set of molecules used is rich enough to demonstrate the feasibility and practicability of the proposed solutions. We compare the algorithm with two known heuristics used in practice and observe that game-based scheduling outperforms them in hit discovery rate and chemical diversity at earlier steps. Based on these results, we use a social utility metric for assessing the efficiency of our equilibrium solutions and show that they reach the greatest values.
