Similar documents
20 similar documents found (search time: 46 ms)
1.
Spectrophotometric multicomponent analysis is considered on the basis of inverse multivariate calibration with linear methods (ordinary least squares, principal component, ridge and partial least squares regression) and with the non-linear methods ACE and non-linear partial least squares. The performance of the different methods is compared by paired F-tests, using the residual mean sum of squares from the analysis of variance table as an estimate of the error variance. The comparison is demonstrated for the infrared spectrometric analysis of the hydroxyl group content of brown coal measured in diffuse reflectance. Although the error variances of the calibration methods differ numerically, the differences are much less pronounced at the statistical level.
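The paired F-test comparison described in this abstract can be sketched in a few lines: the residual mean sum of squares from each calibration method serves as the error-variance estimate, and the ratio of two such estimates is referred to an F distribution. A minimal Python sketch; the residual values, degrees of freedom and the comparison against a tabulated critical value are illustrative assumptions, not the paper's data:

```python
def residual_mean_square(residuals, n_params):
    # Residual mean sum of squares from the ANOVA table:
    # sum of squared residuals over the residual degrees of freedom.
    n = len(residuals)
    return sum(r * r for r in residuals) / (n - n_params)

def f_ratio(res_a, res_b, n_params):
    # Ratio of the two error-variance estimates, larger over smaller,
    # so the statistic is compared against an upper-tail critical value.
    va = residual_mean_square(res_a, n_params)
    vb = residual_mean_square(res_b, n_params)
    return max(va, vb) / min(va, vb)

# Hypothetical residuals from two calibration methods on the same samples.
res_pls = [0.8, -1.1, 0.9, -0.7, 1.2, -0.9, 0.6, -1.0]
res_ols = [1.6, -2.0, 1.7, -1.5, 2.2, -1.8, 1.4, -1.9]
F = f_ratio(res_ols, res_pls, n_params=2)
# Compare F against a tabulated critical value F(alpha; df1, df2)
# at the chosen significance level.
```

The larger/smaller convention makes the test two-sided with a single upper-tail lookup.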

2.
The isotope ratio of each of the light elements preserves individual information on the origin and history of natural organic compounds. A multi-element isotope ratio analysis is therefore the most efficient means for assigning the origin and authenticity of food, and also for solving various problems in ecology, archaeology and criminology. Due to the extraordinary relative abundances of the elements hydrogen, carbon, nitrogen and sulfur in some biological material, and to the need for individual sample preparations for H and S, their isotope ratio determination currently requires at least three independent procedures and approximately 1 h of work. We present here a system for the integrated elemental and isotope ratio analysis of all four elements in one sample within 20 min. The system consists of an elemental analyser coupled to an isotope ratio mass spectrometer with an inlet system for four reference gases (N₂, CO₂, H₂ and SO₂). The combustion gases are separated by reversible adsorption and determined by a thermal conductivity detector; H₂O is reduced to H₂. The analyser is able to combust samples with up to 100 mg of organic material in one run, sufficient to analyse samples with even unusual elemental ratios. A comparison of the isotope ratios of samples of water, fruit juices, cheese and ethanol from wine, analysed by the four-element analyser and by classical methods and systems, yielded excellent agreement. The sensitivity of the device for the isotope ratio measurement of C and N corresponds to that of other systems; it is lower by a factor of four for H and by a factor of two for S, and the error ranges are identical to those of other systems.

3.
The effects of concentration, separation and spectral similarity as factors influencing the accuracy of iterative target testing factor analysis (ITT-FA) are investigated for three-component systems by the application of analysis of variance (ANOVAR). ANOVAR is applied over a range of peak separations to map the changing effects of the three factors with increasing overlap. Two error responses were measured and analysed: (a) relative cluster error (RCE), a measure of the error over all peaks in a cluster, and (b) relative peak error (RPE), the error of an individual peak. Multicomponent analysis (MCA), a method requiring a priori spectral information, is used as a referee method for ITT-FA.

4.
Recent work has demonstrated that the Bennett acceptance ratio method is the best asymptotically unbiased method for determining the equilibrium free energy between two end states given work distributions collected from either equilibrium or nonequilibrium data. However, it is still not clear what the practical advantage of this acceptance ratio method is over other common methods in atomistic simulations. In this study, we first review theoretical estimates of the bias and variance of exponential averaging (EXP), thermodynamic integration (TI), and the Bennett acceptance ratio (BAR). In the process, we present a new simple scheme for computing the variance and bias of many estimators, and demonstrate the connections between BAR and the weighted histogram analysis method. Next, a series of analytically solvable toy problems is examined to shed more light on the relative performance of these three methods in terms of bias and efficiency. Interestingly, it is impossible to conclusively identify a "best" method for calculating the free energy, as each of the three methods performs more efficiently than the others in at least one situation examined in these toy problems. Finally, sample problems of the insertion/deletion of both a Lennard-Jones particle and a much larger molecule in TIP3P water are examined by these three methods. In all tests of atomistic systems, free energies obtained with BAR have significantly lower bias and smaller variance than those obtained with EXP or TI, especially when the overlap in phase space between end states is small. For example, BAR can extract as much information from multiple fast, far-from-equilibrium simulations as from fewer simulations near equilibrium, which EXP cannot. Although TI and sometimes even EXP can be somewhat more efficient in idealized toy problems, in the realistic atomistic situations tested in this paper, BAR is significantly more efficient than all other methods.
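The EXP and BAR estimators compared in this abstract can both be written compactly. Below is a minimal, numpy-free sketch tested on a toy problem with Gaussian work distributions of known free energy difference; the toy setting, sample sizes and convention (equal numbers of forward and reverse works, β = 1) are our own illustration, not taken from the paper:

```python
import math
import random

def exp_estimator(w_f, beta=1.0):
    # EXP (exponential averaging): dF = -(1/beta) * ln <exp(-beta*W)>,
    # evaluated with a log-sum-exp shift for numerical stability.
    m = min(w_f)
    s = sum(math.exp(-beta * (w - m)) for w in w_f)
    return m - math.log(s / len(w_f)) / beta

def bar_estimator(w_f, w_r, beta=1.0, tol=1e-6):
    # BAR with equal sample sizes: find dF such that
    #   sum_F fermi(beta*(wF - dF)) = sum_R fermi(beta*(wR + dF)),
    # solved by bisection (the left-minus-right difference is monotone in dF).
    def fermi(x):
        return 1.0 / (1.0 + math.exp(min(x, 700.0)))
    def g(dF):
        return (sum(fermi(beta * (w - dF)) for w in w_f)
                - sum(fermi(beta * (w + dF)) for w in w_r))
    lo, hi = -100.0, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy problem: Gaussian work distributions with a known free energy
# difference; the mean dissipation is beta*sigma^2/2 in each direction.
random.seed(42)
dF_true, sigma, beta, n = 2.0, 1.0, 1.0, 20000
diss = beta * sigma ** 2 / 2.0
w_f = [random.gauss(dF_true + diss, sigma) for _ in range(n)]
w_r = [random.gauss(-dF_true + diss, sigma) for _ in range(n)]
dF_exp = exp_estimator(w_f, beta)
dF_bar = bar_estimator(w_f, w_r, beta)
```

With this well-overlapping toy distribution both estimators recover the true value; the paper's point is that BAR keeps doing so when the overlap shrinks, while EXP does not.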

5.
Improving the efficiency of free energy calculations is important for many biological and materials design applications, such as protein-ligand binding affinities in drug design, partitioning between immiscible liquids, and determining molecular association in soft materials. We show that for any pair potential, moderately accurate estimation of the radial distribution function for a solute molecule is sufficient to accurately estimate the statistical variance of sampling along a free energy pathway. This allows inexpensive analytical identification of free energy pathways with low statistical error. We employ a variety of methods to estimate the radial distribution function (RDF) and find that the computationally cheap two-body "dilute gas" limit performs as well as or better than 3D-RISM theory and other approximations for identifying low-variance free energy pathways. With an RDF estimate in hand, we can search for pairwise interaction potentials that produce low variance. We give an example of a search minimizing the statistical variance of the solvation free energy over the entire parameter space of a generalized "soft core" potential. The free energy pathway arising from this optimization procedure has lower curvature in the variance and reduces the total variance by at least 50% compared to the traditional soft-core solvation pathway. We also demonstrate that this optimized pathway allows free energies to be estimated with fewer intermediate states due to its low curvature. This free energy variance optimization technique is generalizable to solvation in any homogeneous fluid and to any type of pairwise potential, and can be performed in minutes to hours, depending on the method used to estimate g(r).

6.
A new method was developed to analyze the stable carbon and oxygen isotope ratios of small samples (400 ± 20 μg) of calcium carbonate. This new method streamlines the classical phosphoric acid/calcium carbonate (H₃PO₄/CaCO₃) reaction method by making use of a recently available ThermoQuest-Finnigan GasBench II preparation device and a Delta Plus XL continuous-flow isotope ratio mass spectrometer. Conditions under which the H₃PO₄/CaCO₃ reaction produced reproducible and accurate results with minimal error had to be determined. When the acid/carbonate reaction temperature was kept at 26 °C and the reaction time was between 24 and 54 h, the precision of the carbon and oxygen isotope ratios for pooled samples from three reference standard materials was …

7.
In stable isotope ratio mass spectrometry (IRMS), the stable isotopic composition of samples is measured relative to the isotopic composition of a working gas. This measured isotopic composition must be converted and reported on the respective international stable isotope reference scale for the accurate interlaboratory comparison of results. This data conversion procedure, commonly called normalization, is the first set of calculations performed by the user. In this paper, we present a discussion and mathematical formulation of several existing, routinely used normalization procedures. These conversion procedures include: single-point anchoring (versus the working gas and versus a certified reference standard), modified single-point normalization, linear shift between the measured and the true isotopic composition of two certified reference standards, and two-point and multi-point linear normalization methods. Mathematically, the modified single-point, two-point, and multi-point normalization methods are essentially the same. Using laboratory analytical data, the accuracy of the various normalization methods (given by the difference between the true and the normalized isotopic composition) has been compared. Our computations suggest that single-point anchoring produces normalization errors that exceed the maximum total uncertainties (e.g. 0.1‰ for δ¹³C) often reported in the literature, and, therefore, that it must not be used for routinely anchoring stable isotope measurement results to the appropriate international scales. However, any normalization method using two or more certified reference standards produces a smaller normalization error, provided that the isotopic composition of the standards brackets the isotopic composition of the unknown samples.
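The single-point and two-point normalizations compared in this abstract are short enough to state in code. In this sketch, the reference delta values are invented placeholders, not actual certified values:

```python
def single_point_anchor(d_meas, ref_meas, ref_true):
    # Single-point anchoring: apply a constant offset so the reference
    # standard lands on its certified value. Any scale compression or
    # expansion of the instrument is left uncorrected.
    return d_meas + (ref_true - ref_meas)

def two_point_normalization(d_meas, ref1, ref2):
    # Two-point linear normalization: each ref is (measured, certified).
    # Fits delta_true = slope * delta_measured + intercept through both
    # standards, correcting offset and scale simultaneously.
    (m1, t1), (m2, t2) = ref1, ref2
    slope = (t1 - t2) / (m1 - m2)
    return t1 + slope * (d_meas - m1)

# Illustrative (hypothetical) measured vs. certified values for two standards:
ref_a = (-46.2, -46.6)   # (measured, certified), a depleted standard
ref_b = (27.8, 28.3)     # an enriched standard
sample = 5.0             # measured delta of an unknown, bracketed by the refs
d_one = single_point_anchor(sample, *ref_a)
d_two = two_point_normalization(sample, ref_a, ref_b)
```

Because the slope through the two standards differs from 1 here, the two procedures disagree away from the anchor point, which is exactly the error the abstract attributes to single-point anchoring.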

8.
The nonlinear fitting method, based on the ordinary least squares approach, is one of several methods that have been applied to fit experimental data to well-known profiles and to estimate their spectral parameters. Apart from measurement-error linearization, the main drawback of this approach is the high variance of the spectral parameters to be estimated, caused by the overlapping of individual components, which leads to ambiguous fitting. In this paper, we propose a simple mathematical tool, the fractional derivative (FD), to determine the spectral parameters of overlapping bands. This is possible because of several useful properties of the FD connected with the behaviour of its zero-crossing and maximal amplitude. To acquire a stable and unbiased FD estimate, we utilize the statistical regularization method and the regularized iterative algorithm when a priori constraints on the sought derivative are available. Along with the well-known distributions such as the Lorentzian, the Gaussian and their linear combinations, the Tsallis distribution is used as a model to correctly assign overlapping bands. To demonstrate the power of the method, we estimate the spectral parameters of unresolved bands in synthetic and experimental infrared spectra.
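A Grünwald-Letnikov discretization is one common way to compute such a fractional derivative numerically. The abstract does not specify the discretization used, so this sketch, and the Lorentzian test band it is applied to, are our own illustration of the zero-crossing behaviour mentioned above:

```python
import math

def gl_fractional_derivative(f, h, alpha):
    # Grünwald-Letnikov fractional derivative of sampled data f with step h:
    #   D^alpha f(x_i) ~ h^(-alpha) * sum_k c_k * f(x_{i-k}),
    # where c_0 = 1 and c_k = c_{k-1} * (k - 1 - alpha) / k are the signed
    # binomial coefficients (-1)^k * C(alpha, k).
    n = len(f)
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (k - 1 - alpha) / k)
    out = []
    for i in range(n):
        out.append(sum(c[k] * f[i - k] for k in range(i + 1)) / h ** alpha)
    return out

# Lorentzian test band centred at x0 = 0 with half-width gamma.
h, gamma = 0.01, 0.5
xs = [-5.0 + i * h for i in range(1001)]
band = [1.0 / (1.0 + (x / gamma) ** 2) for x in xs]
d1 = gl_fractional_derivative(band, h, 1.0)      # ordinary first derivative
d_half = gl_fractional_derivative(band, h, 0.5)  # half-order derivative
```

For alpha = 1 the coefficients collapse to a backward difference, so the zero-crossing sits at the band maximum; for fractional orders the zero-crossing and peak amplitude shift in a way that can be exploited to resolve overlapping bands.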

9.
The convergence behavior of free energy calculations has been explored in more detail than in any previously reported work, using a model system of two neon atoms in a periodic box of water. We find that for thermodynamic integration-type free energy calculations, as much as a nanosecond or more of molecular dynamics sampling is required to obtain a fully converged value for a single λ point of the integrand. The concept of “free energy derivatives” with respect to the individual parameters of the force field is introduced. This formalism allows the total convergence of the simulation to be deconvoluted into components. A determination of the statistical “sampling ratio” from these simulations indicates that for window-type free energy calculations carried out in a periodic water box of typical size, at least 0.6 ps of sampling should be performed at each window (0.7 ps if constraint contributions to the free energy are being determined). General methods to estimate and reduce the error in thermodynamic integration and free energy perturbation calculations are discussed. We show that the difficulty in applying such methods is determining a reliable estimate of the correlation length from a short series of data. © 1994 by John Wiley & Sons, Inc.  相似文献
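The correlation-length problem mentioned at the end of this abstract can be made concrete with the standard "statistical inefficiency" estimate g = 1 + 2·Σ C(t), where C(t) is the normalized autocorrelation function, truncated at its first non-positive value. A sketch on a synthetic AR(1) series with known correlation structure; the series, its length and the truncation rule are our illustration, not the paper's protocol:

```python
import random

def statistical_inefficiency(x, max_lag=500):
    # g = 1 + 2 * sum_t C(t), with C(t) the normalized autocorrelation,
    # truncated at the first non-positive value to limit estimator noise.
    # The effective number of independent samples is roughly len(x)/g.
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    g = 1.0
    for t in range(1, min(max_lag, n // 2)):
        c = sum((x[i] - mean) * (x[i + t] - mean)
                for i in range(n - t)) / ((n - t) * var)
        if c <= 0.0:
            break
        g += 2.0 * c
    return g

# AR(1) series x_{i+1} = rho * x_i + noise; its exact statistical
# inefficiency is (1 + rho) / (1 - rho) = 19 for rho = 0.9.
random.seed(7)
rho, n = 0.9, 20000
x = [0.0]
for _ in range(n - 1):
    x.append(rho * x[-1] + random.gauss(0.0, 1.0))
g_est = statistical_inefficiency(x)
```

Even with 20 000 points the estimate scatters noticeably around the true value of 19, which is the paper's point: from a *short* series, the correlation length is hard to pin down reliably.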

10.
The propagation of uncertainties associated with the stable oxygen isotope reference materials through a multi-point normalisation procedure was evaluated in this study using Monte Carlo (MC) simulation. We quantified the normalisation error for particular selections of reference materials and numbers of replicates, when the choice of standards is restricted to nitrates, sulphates or organic reference materials alone, and when this restriction is relaxed. A lower uncertainty in stable oxygen isotope analyses of solid materials performed using high-temperature pyrolysis (HTP) can be readily achieved through an optimal selection of reference materials. Among the currently available certified reference materials, the best-performing pairs minimising the normalisation errors are USGS35 and USGS34 for nitrates; IAEA-SO-6 and IAEA-SO-5 for sulphates; and IAEA-601 and IAEA-602 for organic materials. The normalisation error can be reduced further, by approximately half, if each of the two analysed reference materials is replicated four times. The overall optimal selection among all nine considered reference materials is the IAEA-602 and IAEA-SO-6 pair. If each of these two reference materials is replicated four times, the maximum predicted normalisation error will equal 0.22‰, the minimum normalisation error 0.12‰, and the mean normalisation error 0.15‰ over the natural range of δ¹⁸O variability. We argue that the proposed approach provides useful insights into reference material selection and into assessing the propagation of analytical error through normalisation procedures in stable oxygen isotope studies.
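The Monte Carlo propagation itself is straightforward to sketch: perturb the measured reference values with the analytical uncertainty (shrunk by the square root of the number of replicates), rebuild the two-point normalisation each time, and record the resulting error at a target δ value. All numbers below, including the reference values and the 0.15‰ analytical σ, are illustrative placeholders rather than the paper's data:

```python
import math
import random

def mc_normalization_error(ref_true, sigma, n_rep, target,
                           trials=4000, rng=None):
    # Each MC trial simulates replicate-averaged measurements of the two
    # reference materials, fits a two-point normalisation through them, and
    # records the absolute normalisation error at a target delta value.
    # The instrument is assumed unbiased, so "raw" = true + noise.
    rng = rng or random.Random(0)
    t1, t2 = ref_true
    sd = sigma / math.sqrt(n_rep)   # replication shrinks the standard error
    errs = []
    for _ in range(trials):
        m1 = t1 + rng.gauss(0.0, sd)
        m2 = t2 + rng.gauss(0.0, sd)
        slope = (t1 - t2) / (m1 - m2)
        normalized = t1 + slope * (target - m1)
        errs.append(abs(normalized - target))
    return sum(errs) / trials

refs = (-5.0, 25.0)   # hypothetical certified values bracketing the samples
err_1rep = mc_normalization_error(refs, sigma=0.15, n_rep=1, target=10.0)
err_4rep = mc_normalization_error(refs, sigma=0.15, n_rep=4, target=10.0)
```

Quadrupling the replicates halves the standard error of each reference measurement, which is the "reduced by approximately half" effect reported in the abstract.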

11.
For olive oil production, a metal hammer-decanter olive processing line was compared to a traditional metal hammer-press line, a discontinuous method which, if properly used, yields high-quality virgin olive oils. Galega, Carrasquenha and Cobrançosa olives (traditional Portuguese varieties) were studied. The analysis of the aroma compounds was performed after headspace solid-phase microextraction. The analytical results obtained by comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry (GC × GC/ToFMS) for these three olive oil varieties, from a single year's harvest and processed with two different extraction technologies, were compared using statistical image treatment, by means of ImageJ software, for fingerprint recognition, and with principal component analysis when the area data of each chromatographic spot of the contour plots were considered. The differences used to classify the olive oils into different groups after principal component analysis were observed independently of the treatment used (peak areas or the sum of pixel counts). When the individual peak areas were considered, more than 75.7% of the total variance was explained by the first two principal components, while when the data were subjected to image treatment, 84.0% of the total variance was explained by the first two principal components. In both cases the first and second principal components had eigenvalues higher than 1.0. Fingerprint image monitoring of the aroma compounds of the olive oil allowed a rapid differentiation of the three varieties studied as well as of the extraction methods used. The volatile compounds responsible for their characterization were tentatively identified in a bi-dimensional polar/non-polar column set in the GC × GC/ToFMS apparatus.
This methodology allowed a reduction of the number of compounds needed for matrix characterization, preserving the efficiency of the discrimination, compared with traditional methods in which the identification of all peaks is needed.

12.
Two alternative partial least squares (PLS) methods, averaged PLS and weighted average PLS, are proposed and compared with classical PLS in terms of root mean square error of prediction (RMSEP) for three real data sets. These methods compute the (weighted) average of PLS models of different complexity. The prediction abilities of the alternative methods are comparable to that of classical PLS, but they do not require determining how many components to include in the model. They are also more robust in the sense that the quality of prediction depends less on a good choice of the number of components. In addition, weighted average PLS is compared with the weighted-average part of LOCAL, a published method that also applies weighted average PLS, albeit with an entirely different weighting scheme.

13.
For Fourier transform mass spectrometry analysis of high-mass ions, the signals from closely spaced isotope peaks undergo periodic destructive interference, producing a beat pattern in the time-domain signal. Mass spectra obtained by sampling transient signals for less than two beat periods exhibit an error in the measured relative abundances. This effect is shown to cause significant errors in the measurement of the relative abundances of the components of polymer distributions, leading to errors in the derived average molecular weights for such samples. Computer simulations show that isotope beating causes this error to increase as the duration of an acquired transient becomes short compared to the beat period. The error becomes insignificant when the transient is acquired for longer than twice the beat period. Experimental data are presented for polymers in which an oligomeric distribution of monoisotopic peaks is produced by stored-waveform inverse Fourier transform ejection of all ¹³C-containing isotope peaks. The data show that the isotope beating-induced abundance errors are eliminated when there are no isotope peaks present.
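The beat criterion is easy to reproduce with two synthetic sinusoids of equal amplitude at nearby frequencies: estimating one component's amplitude from a transient much shorter than the beat period picks up a large interference error, which collapses once the transient spans whole beat periods. The frequencies and sampling rate below are arbitrary stand-ins for closely spaced isotope peaks, not real cyclotron frequencies:

```python
import cmath
import math

def amplitude_error(f1, f2, fs, duration):
    # Signal: two unit-amplitude cosines at closely spaced frequencies.
    # Estimate the amplitude of component f1 from a truncated transient by
    # correlating with exp(-2*pi*i*f1*t) (a single-bin Fourier amplitude)
    # and return the deviation from the true amplitude of 1.
    n = int(fs * duration)
    acc = 0j
    for k in range(n):
        t = k / fs
        s = math.cos(2 * math.pi * f1 * t) + math.cos(2 * math.pi * f2 * t)
        acc += s * cmath.exp(-2j * math.pi * f1 * t)
    return abs(abs(acc) * 2.0 / n - 1.0)

f1, f2, fs = 100.0, 101.0, 1000.0
beat_period = 1.0 / abs(f1 - f2)   # 1 s for these two frequencies
err_short = amplitude_error(f1, f2, fs, 0.5 * beat_period)  # < 1 beat period
err_long = amplitude_error(f1, f2, fs, 2.0 * beat_period)   # 2 beat periods
```

The half-beat-period transient mis-measures the amplitude by roughly 20% in this toy case, while the two-beat-period transient is essentially exact, mirroring the paper's "longer than twice the beat period" rule.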

14.
The biochemical conversion of cellulosic biomass to liquid transportation fuels includes the breakdown of biomass into its soluble, fermentable components. Pretreatment, the initial step in the conversion process, results in a heterogeneous slurry composed of both soluble and insoluble biomass components. For the purpose of tracking the progress of the conversion process, it is important to be able to accurately measure the fraction of insoluble biomass solids in the slurry. The current standard method involves separating the solids from the free liquor and then repeatedly washing the solids to remove the soluble fraction, a laborious and tedious process susceptible to operator variation. In this paper, we propose an alternative method for calculating the fraction of insoluble solids which does not require a washing step. The proposed method involves measuring the dry matter content of the whole slurry as well as the dry matter content of the isolated liquor fraction. We compared the two methods using three different pretreated biomass slurry samples and two oven-drying techniques for determining dry matter content, an important measurement for both methods. We also evaluated a large set of fraction-of-insoluble-solids data collected from previously analyzed pretreated samples. The proposed new method provided statistically equivalent results to the standard washing method when an infrared balance was used for determining dry matter content in the controlled measurement experiment. Similarly, in the large historical data set, there was no statistical difference between the wash and no-wash methods. The new method is offered as an alternative for determining the fraction of insoluble solids.

15.
Kasemsumran S, Du YP, Li BY, Maruo K, Ozaki Y. The Analyst 2006, 131(4): 529-537
A new cross-validation method called moving window cross validation (MWCV) is proposed in this study as a novel method for selecting an appropriate number of components for building an efficient calibration model in analytical chemistry. The method splits the validation set using a number of given windows that move synchronously along subsets of the samples. The mean of the mean squared errors of cross-validation (MSECVs) over all splitting forms is calculated for different numbers of components, and the optimal number of components for the model is then selected. The performance of MWCV is compared with that of two cross-validation methods, leave-one-out cross validation (LOOCV) and Monte Carlo cross validation (MCCV), for partial least squares (PLS) models developed on one simulated data set and two real near-infrared (NIR) spectral data sets. The results reveal that MWCV avoids the tendency to over-fit the data. Selection of the optimal number of components is straightforward because MWCV yields a global minimum in root MSECV at the optimal number of components. Changes in the window size and the number of windows do not greatly influence the selection. MWCV is demonstrated to be an effective, simple and accurate cross-validation method.

16.
A PC-based interactive programme is described which is designed to help suggest the best estimate of the true value of an analyte content from the results of collaborative studies aimed at deriving consensus values and/or preparing reference materials, by employing combined statistical and analytical considerations. The Grubbs, Dixon and Huber tests, and the coefficients of skewness and kurtosis tests, are used for outlier detection; the Bartlett, Cochran and standard error tests are employed for testing variance homogeneity and/or identifying variance outliers; and the normality of the distribution of results is tested according to the Kolmogorov-Smirnov-Lilliefors and Shapiro-Wilk tests. One-way analysis of variance (ANOVA) is employed to test differences among the means of results obtained under different conditions (laboratories, analytical methods, etc.) and to calculate the overall mean and its confidence interval accordingly. Points for an analytical discussion are given which should be considered before deciding whether a result of a trace element determination, identified as an outlier on statistical grounds, should be rejected.
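As one concrete example of the outlier tests listed here, Dixon's Q test for a small series compares the gap between the suspect value and its nearest neighbour with the overall range. A sketch using commonly tabulated 95% critical values for 3 to 10 observations; the laboratory results are invented, and for other sample sizes or confidence levels a fuller critical-value table would be needed:

```python
# Commonly tabulated 95% critical values for Dixon's Q test, n = 3..10.
Q_CRIT_95 = {3: 0.970, 4: 0.829, 5: 0.710, 6: 0.625, 7: 0.568,
             8: 0.526, 9: 0.493, 10: 0.466}

def dixon_q_outlier(values):
    # Returns (suspect_value, Q, is_outlier) for the more extreme of the
    # lowest and highest values; valid for 3 to 10 observations.
    xs = sorted(values)
    n = len(xs)
    rng = xs[-1] - xs[0]
    q_low = (xs[1] - xs[0]) / rng      # suspect at the low end
    q_high = (xs[-1] - xs[-2]) / rng   # suspect at the high end
    if q_high >= q_low:
        suspect, q = xs[-1], q_high
    else:
        suspect, q = xs[0], q_low
    return suspect, q, q > Q_CRIT_95[n]

# Hypothetical replicate determinations from five laboratories:
lab_results = [10.10, 10.20, 10.15, 10.18, 12.00]
suspect, q, flagged = dixon_q_outlier(lab_results)
```

As the abstract stresses, a statistical flag like this one is only the starting point; the analytical plausibility of the result should be discussed before it is actually rejected.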

17.
A new method for the investigation of fast isotope exchange has been developed. It combines two procedures: (1) isotope exchange study without analytical separation of the components (the electrolytic and diaphragm exchange methods); and (2) constant-rate supply of the radioactively labelled component into the exchange system. The paper presents the derivation and analysis of the relevant relations. The capabilities of the method are discussed.

18.
The accuracy of several low-cost methods (the harmonic oscillator approximation, CT-Cω, SR-TDPPI-HS, and TDPPI-HS) for calculating one-dimensional hindered rotor (1D-HR) partition functions is assessed for a test set of 644 rotations in 104 organic molecules, using full torsional eigenvalue summation (TES) as a benchmark. For methods requiring full rotational potentials, the effect of the resolution at which the rotational potential was calculated was also assessed. Although lower-cost methods such as Pitzer's tables are appropriate when potentials can be adequately described by simple cosine curves, these were found to show large errors (as much as 3 orders of magnitude) for non-cosine potentials. In those cases, the TDPPI-HS method in conjunction with a potential compiled at a resolution of 60° offers the best compromise between accuracy and computational expense. It can reproduce the benchmark values of the partition function for an individual mode to within a factor of 2; its average error is just a factor of 1.08. The corresponding error in the overall internal-rotational partition functions of the molecules studied is less than a factor of 4 in all cases. Excellent cost-effective performance is also offered by CT-Cω, which requires only the geometries, energies, and frequencies of the distinguishable minima in the potential. With this method the geometric mean error in individual partition functions is 1.14, the maximum error is a modest factor of 2.98, and the resulting error in the total 1D-HR partition function of a molecule is less than a factor of 5 in all cases.
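The harmonic-oscillator-versus-hindered-rotor comparison can be illustrated in miniature in the classical (high-temperature) limit, where the 1D-HR partition function is a one-dimensional integral over the torsional potential. The paper's benchmark methods are quantum (eigenvalue summation), so this classical sketch in reduced units (ħ = k_B = 1, symmetry number 1) is only an analogy, with an arbitrary cosine potential:

```python
import math

def q_classical_hr(v0, n_fold, kT=1.0, inertia=1.0, npts=20000):
    # Classical 1D hindered-rotor partition function (hbar = 1, sigma = 1):
    #   Q = sqrt(2*pi*I*kT)/(2*pi) * integral_0^{2pi} exp(-V(phi)/kT) dphi
    # with the usual cosine torsional potential V = (V0/2)*(1 - cos(n*phi)).
    dphi = 2.0 * math.pi / npts
    integral = sum(
        math.exp(-0.5 * v0 * (1.0 - math.cos(n_fold * (k + 0.5) * dphi)) / kT)
        for k in range(npts)
    ) * dphi
    return math.sqrt(2.0 * math.pi * inertia * kT) / (2.0 * math.pi) * integral

def q_classical_ho(v0, n_fold, kT=1.0, inertia=1.0):
    # Classical harmonic limit: n equivalent wells, each contributing
    # kT/omega, with omega = sqrt(V''(min)/I) and V''(min) = V0*n^2/2.
    omega = math.sqrt(0.5 * v0 * n_fold ** 2 / inertia)
    return n_fold * kT / omega

# High barrier (V0 = 50 kT): the hindered rotor approaches the harmonic limit.
ratio_high_barrier = q_classical_hr(50.0, 3) / q_classical_ho(50.0, 3)
```

For a high barrier the two expressions agree to within a few percent, while for V0 → 0 the hindered-rotor value tends to the free-rotor limit √(2πIkT)/ħ and the harmonic expression diverges, which is the cosine-versus-non-cosine failure mode of simple approximations in one dimension.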

19.
A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile-phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane, while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block-randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffé-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least squares equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile-phase and extraction-solution components.

20.
The simultaneous determination of binary mixtures of pyridoxine hydrochloride and thiamine hydrochloride in a vitamin combination using UV-visible spectrophotometry with classical least squares (CLS) and three newly developed genetic algorithm (GA) based multivariate calibration methods is demonstrated. The three genetic multivariate calibration methods are Genetic Classical Least Squares (GCLS), Genetic Inverse Least Squares (GILS) and Genetic Regression (GR). The sample data set contains the UV-visible spectra of 30 synthetic mixtures (8 to 40 μg/ml) of these vitamins and 10 tablets containing 250 mg of each vitamin. The spectra cover the range from 200 to 330 nm in 0.1 nm intervals. Several calibration models were built with the four methods for the two components. Overall, the standard error of calibration (SEC) and the standard error of prediction (SEP) for the synthetic data ranged between <0.01 and 0.43 μg/ml for all four methods. The SEP values for the tablets were in the range of 2.91 to 11.51 mg/tablet. A comparison of the wavelengths selected by the genetic algorithm for each component using the GR method is also included.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号