Similar Articles

20 similar articles found.
1.
Abstract

Kalman-filter-based techniques are adapted to solve the most general form of Tung's integral formula, i.e., the case where a non-uniform, non-symmetric calibration model is employed to correct chromatograms obtained in size exclusion chromatography for instrumental broadening errors. Through this method, the inverse smoothing of a chromatogram contaminated with measurement noise of known statistics is performed optimally by minimizing the estimation error variance. The method is numerically very “robust”, improves the signal-to-noise ratio, provides good validation checks, and does not require any prior parameter estimation procedure.
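The recursive estimator at the heart of such methods fits in a few lines. Below is a minimal sketch (not the authors' implementation) of a Kalman-type recursive update for a static state applied to a discretized broadening equation y = G f + noise; the kernel shape, grid size and noise level are illustrative assumptions.

```python
import numpy as np

# Hypothetical discretization of Tung's equation: y = G @ f + v, where G holds
# a non-uniform, non-symmetric spreading kernel (illustrative, not from the paper).
n = 100
t = np.linspace(0.0, 1.0, n)
sigma = 0.02 + 0.03 * t[:, None]                 # kernel width varies along the axis
G = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma) ** 2)
G /= G.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
f_true = np.exp(-0.5 * ((t - 0.5) / 0.05) ** 2)  # "true" chromatogram
noise_var = 1e-4                                 # known noise statistics
y = G @ f_true + rng.normal(0.0, np.sqrt(noise_var), n)

# Kalman filter for a static state: process the data points one at a time,
# minimizing the estimation-error variance at each step.
x = np.zeros(n)              # running estimate of the corrected chromatogram
P = np.eye(n)                # covariance of the estimation error
for k in range(n):
    h = G[k]                               # measurement row for point k
    S = h @ P @ h + noise_var              # innovation variance
    K = (P @ h) / S                        # Kalman gain
    x = x + K * (y[k] - h @ x)             # state update
    P = P - np.outer(K, h @ P)             # covariance update
```

The diagonal of P provides the validation check mentioned in the abstract: it tracks how far the estimation error variance has been driven down as data points are absorbed.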

2.
The results of a numerical solution of the inverse problem of light-beating spectroscopy using the Tikhonov regularization method are presented. The algorithm developed takes into account the positivity of the solution and a constant background signal. The regularization parameter of the method is determined from a priori information about the noise amplitude of the input signal. The results of the reconstruction of model uni-, bi-, and multimodal distributions are presented. For examples of numerically generated spectra, the effects of the measured spectrum's noise, the background signal, and the frequency bandwidth of the signal measurement on the characteristics of the reconstructed distributions are studied.
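A compact way to impose both the regularization and the positivity constraint is non-negative least squares on an augmented system. The sketch below is an assumption about the structure, not the authors' exact algorithm; it also carries the constant background as an extra non-negative unknown.

```python
import numpy as np
from scipy.optimize import nnls

def tikhonov_nonneg(A, y, lam):
    """Tikhonov-regularized inversion with x >= 0 and a constant background.
    lam would be chosen from a priori knowledge of the noise amplitude,
    e.g. by the discrepancy principle: increase lam until the residual
    norm matches the expected noise level."""
    m, n = A.shape
    A_bg = np.hstack([A, np.ones((m, 1))])        # last column = background
    # Augmented rows implement the penalty lam * ||x||^2 (background unpenalized).
    L = np.hstack([np.eye(n), np.zeros((n, 1))])
    A_aug = np.vstack([A_bg, np.sqrt(lam) * L])
    y_aug = np.concatenate([y, np.zeros(n)])
    sol, _ = nnls(A_aug, y_aug)                   # solves with all unknowns >= 0
    return sol[:n], sol[n]                        # distribution, background level
```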

3.
The performance of chromatography data analysis software packages is of cardinal importance when the precision and accuracy of a chromatographic system are evaluated. Users cannot rely on a procedure generating chromatographic data of known accuracy, and holistic approaches cannot always be entirely trusted. We propose a new method that validates a data analysis package against computer-generated chromatograms of exactly known characteristics, by feeding these chromatograms into the vendor-supplied software and comparing the results it supplies with the exact answers. We simulated symmetrical and tailing chromatograms and processed these signals with the Agilent Technologies (formerly Hewlett-Packard) ChemStation software. The noise profile (i.e., the power spectrum of the baseline) was determined for an HPLC UV detector prior to the calculations, and chromatograms of different signal-to-noise ratios were used for the analysis. For every chromatogram, we simulated 25 replicates with identical signal-to-noise ratios but different noise sequences. In this manner, both the random and the systematic errors of the retention data and peak shape characteristics can be evaluated. When analyzing tailing peaks, we simulated the effects of extra-column band broadening and of column overload. Our calculations show that the general performance of the data analysis system studied is excellent. The contribution of the random error originating from the data analysis procedure is in most cases negligible compared to the repeatability of the chromatographic measurement itself.
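The validation strategy is easy to reproduce in outline: simulate peaks of exactly known area (a Gaussian, or an exponentially modified Gaussian for tailing), add noise, and compare the integrator's answers with the truth. A minimal sketch with white noise standing in for the measured noise profile; all parameter values are illustrative.

```python
import numpy as np
from scipy.special import erfc

def emg_peak(t, area, t0, sigma, tau):
    """Exponentially modified Gaussian: a standard model for tailing peaks."""
    z = (t0 - t) / sigma + sigma / tau
    return (area / (2 * tau)) * np.exp(0.5 * (sigma / tau) ** 2
                                       + (t0 - t) / tau) * erfc(z / np.sqrt(2))

t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]
true_area = 1.0
clean = emg_peak(t, true_area, t0=5.0, sigma=0.1, tau=0.3)

rng = np.random.default_rng(1)
errors = []
for _ in range(25):                     # 25 replicates, fresh noise sequence each
    noisy = clean + rng.normal(0.0, 0.002, t.size)
    est = noisy.sum() * dt              # stand-in for the vendor integrator
    errors.append(est - true_area)
print(np.mean(errors), np.std(errors))  # systematic vs. random error
```

Replacing the trivial summation with the package under test, and the white noise with sequences shaped by the measured baseline power spectrum, gives the validation scheme the abstract describes.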

4.
In atomic absorption spectrometry, determining the spectral background is very difficult. The problem is to determine the spectral background under the very narrow resonance line, which cannot be resolved by the monochromator. The usual method of background measurement with a continuum radiation source is correct only when the background is a spectral continuum, e.g., a dissociation continuum of molecules. The background measurement may be faulty if the background is due to line-rich electronic excitation spectra of molecules; the actual background then depends entirely on whether or not a rotational line of the molecular spectrum coincides with the atomic absorption line. This cannot be deduced from measurements made with the atomic absorption instruments normally used. Therefore, to avoid systematic errors in atomic absorption analysis, a systematic study of interfering molecular spectra with high-resolution instruments is the only way to solve this problem.

5.
A nebulizer-centric response function model of the analytical inductively coupled argon plasma ion source was used to investigate the statistical frequency distributions and noise reduction factors of simultaneously measured flicker-noise-limited isotope ion signals and their ratios. The response function model was extended by assuming (i) a single Gaussian-distributed random noise source (nebulizer gas pressure fluctuations) and (ii) that the isotope ion signal response is a parabolic function of the nebulizer gas pressure. Model calculations of ion signal and signal ratio histograms were obtained by applying the statistical method of translation to the non-linear response function model of the plasma. Histograms of Ni, Cu, Pr, Tl and Pb isotope ion signals measured using a multi-collector plasma mass spectrometer were, without exception, negatively skewed. Histograms of the corresponding isotope ratios of Ni, Cu, Tl and Pb were either positively or negatively skewed. There was complete agreement between the measured and model-calculated histogram skew properties. The nebulizer-centric response function model was also used to investigate the effect of non-linear response functions on the effectiveness of noise cancellation by signal division. An alternative noise correction procedure suitable for parabolic signal response functions was derived and applied to measurements of isotope ratios of Cu, Ni, Pb and Tl. The largest noise reduction factors were always obtained when the non-linearity of the response functions was taken into account in the isotope ratio calculation. Possible applications of the nebulizer-centric response function model to other types of analytical instrumentation, to large-amplitude signal noise sources (e.g., lasers, pumped nebulizers), and to analytical error in isotope ratio measurements by multi-collector plasma mass spectrometry are discussed.
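The two model assumptions translate directly into a Monte Carlo check: draw Gaussian pressure fluctuations, push them through two parabolic response curves, and inspect the skew of the signals and their ratio. The coefficients below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(2)
p = rng.normal(1.0, 0.01, 200_000)     # nebulizer gas pressure, Gaussian noise

def response(p, a, b, c):
    return a + b * p + c * p ** 2      # parabolic response function

s1 = response(p, 0.0, 2.0, -0.8)       # isotope ion signal 1
s2 = response(p, 0.0, 1.5, -0.4)       # isotope ion signal 2

# A concave (negative-curvature) response compresses the upper tail, so the
# signal histograms come out negatively skewed; the ratio can skew either way.
print(skew(s1), skew(s2), skew(s1 / s2))
```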

6.
The application of a new method to the multivariate analysis of incomplete data sets is described. The new method, called maximum likelihood principal component analysis (MLPCA), is analogous to conventional principal component analysis (PCA) but incorporates measurement error variance information in the decomposition of multivariate data. Missing measurements can be handled in a reliable and simple manner by assigning large measurement uncertainties to them. The problem of missing data is pervasive in chemistry, and MLPCA is applied to three sets of experimental data to illustrate its utility. For exploratory data analysis, a data set from the analysis of archeological artifacts is used to show that the principal components extracted by MLPCA retain much of the original information even when a significant number of measurements are missing. Maximum likelihood projections of censored data can often preserve the original clusters among the samples and can, through the propagation of error, indicate which samples are likely to be projected erroneously. To demonstrate its utility in modeling applications, MLPCA is also applied in the development of a model for chromatographic retention based on a data set that is only 80% complete. MLPCA can predict the missing values and assign error estimates to these points. Finally, the problem of calibration transfer between instruments can be regarded as a missing data problem in which entire spectra are missing on the ‘slave’ instrument. Using NIR spectra obtained from two instruments, it is shown that spectra on the slave instrument can be predicted from a small subset of calibration transfer samples even if a different wavelength range is employed. Concentration prediction errors obtained by this approach were comparable to cross-validation errors obtained for the slave instrument when all spectra were available.
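A bare-bones version of the decomposition can be written as alternating weighted least squares, with missing cells zero-filled and given very large variances so they carry essentially no weight. This is a sketch of the idea, not the published MLPCA code.

```python
import numpy as np

def mlpca(X, var, rank, n_iter=100):
    """Alternating weighted least squares sketch of MLPCA.
    Missing cells: set the X entry to 0 and its variance very large
    (e.g. 1e10), so the decomposition effectively ignores it."""
    m, n = X.shape
    W = 1.0 / var                                     # per-element weights
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # crude starting point
    T, P = U[:, :rank] * s[:rank], Vt[:rank].T
    for _ in range(n_iter):
        for i in range(m):          # re-fit each row of the scores
            A = P.T * W[i]          # weighted normal equations: (P' W P) t = P' W x
            T[i] = np.linalg.solve(A @ P, A @ X[i])
        for j in range(n):          # re-fit each row of the loadings
            A = T.T * W[:, j]
            P[j] = np.linalg.solve(A @ T, A @ X[:, j])
    return T, P                     # maximum likelihood reconstruction: T @ P.T
```

Missing values are then predicted by reading the reconstruction T @ P.T at the censored positions.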

7.
NMR-based metabolomics is characterized by high-throughput measurement of the signal intensities of complex mixtures of metabolites in biological samples, typically by assaying biofluids or tissue homogenates. The ultimate goal is to obtain relevant biological information regarding the dissimilarity in pathophysiological conditions that the samples experience. For a long time, this information has been obtained through the analysis of measured NMR signals via multivariate statistics. NMR data are quite complex, and the use of multivariate statistical methods such as principal components analysis (PCA) for their analysis assumes that the data are multivariate normal with errors that are identical, independent and normally distributed (i.e., i.i.d. normal). There is a consensus that these assumptions are not always true for these data, and thus several methods have been devised to transform the data or weight them prior to analysis by PCA. The structure of NMR measurement noise, and the extent to which violations of error homoscedasticity affect PCA results, have neither been characterized nor investigated. A comprehensive characterization of measurement uncertainties in NMR-based metabolomics was achieved in this work using an experiment designed to capture the contributions of several sources of error to the total variance of the measurements. The noise structure was found to be heteroscedastic and highly correlated with spectral characteristics, closely resembling the mean of the spectra and their standard deviation. A model was subsequently developed that potentially allows errors in NMR measurements to be estimated accurately without the need for extensive replication.
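The replication-based characterization reduces, in its simplest form, to computing a per-point standard deviation across replicate spectra and regressing it on the mean spectrum. The synthetic data and the linear error model below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-in for replicate NMR spectra: an additive noise floor plus
# a multiplicative (intensity-proportional) error component.
mean_spec = np.abs(np.sin(np.linspace(0.0, 6.0, 500))) + 0.1
spectra = mean_spec + rng.normal(0.0, 0.01 + 0.05 * mean_spec, (30, 500))

m = spectra.mean(axis=0)
s = spectra.std(axis=0, ddof=1)   # heteroscedastic: varies point-to-point

# Simple error model sd(x) ~ a + b * mean(x): a recovers the flat noise
# floor, b the proportional component correlated with the mean spectrum.
design = np.column_stack([np.ones_like(m), m])
(a, b), *_ = np.linalg.lstsq(design, s, rcond=None)
print(f"floor a = {a:.4f}, proportional b = {b:.4f}")
```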

8.
The signal-to-noise ratio is an important property of NMR spectra. It allows one to compare the sensitivity of experiments, the performance of hardware, etc. Its measurement is usually done in a rudimentary manner: regions of the spectrum containing signal and noise, respectively, are selected manually, some operation is applied, and the signal-to-noise ratio is returned. We introduce here a simple method based on the analysis of the distribution of point intensities in one- and two-dimensional spectra. The signal/artifact/noise plot (SAN plot) allows one to present qualitative and quantitative information about spectra in a graphical manner. It will be shown that, besides measuring signal and noise levels, SAN plots are also quite useful for visualizing and comparing artifacts within a series of spectra. Some basic properties of SAN plots are illustrated with simple applications.
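The essence of the approach can be sketched in a few lines: sort the absolute point intensities of a spectrum and plot them against rank on logarithmic axes. The flat low tail reflects the noise level, the steep high end the signals, and intermediate shoulders reveal artifacts. The synthetic spectrum below is an illustrative assumption.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 4096)
spectrum = (np.exp(-0.5 * ((x - 0.3) / 0.002) ** 2) +         # a "signal"
            0.05 * np.exp(-0.5 * ((x - 0.6) / 0.002) ** 2) +  # an "artifact"
            rng.normal(0.0, 1e-3, x.size))                    # noise

# SAN-style plot: point intensities sorted from largest to smallest.
inten = np.sort(np.abs(spectrum))[::-1]
plt.loglog(np.arange(1, inten.size + 1), inten)
plt.xlabel("point rank")
plt.ylabel("|intensity|")
plt.show()
```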

9.
A method is presented for determining optimal peak integration intervals on the basis of known peak shapes and noise characteristics. General theoretical considerations lead to conditions yielding optimal integration intervals. Examples of frequently occurring peak shapes and noise types are given, such as Gaussian or skewed peak shapes with band-limited first-order noise or flicker noise superimposed. The optimal integration intervals are approximately independent of the signal-to-noise ratio, and they are considerably smaller than the integration intervals normally used. The resulting expected peak area estimation errors are compared with the estimation error resulting from peak maximum amplitude measurement. On the basis of this comparison, rules of thumb are proposed to determine whether peak maximum measurement or peak integration yields the best results. Simulation of different peak shapes with noise superimposed confirms the results obtained. A flexible method is also presented for the optimal measurement of the area of peaks of unknown shape. This on-line method is simple and could double as a peak-finding procedure; it requires almost no computer memory and can be implemented on a microcomputer. A simulated example of this procedure is given.
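The trade-off behind the optimal interval is easy to demonstrate: truncation bias falls while accumulated noise variance rises as the window widens, so the RMSE of the area estimate passes through a minimum. A white-noise sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
dt = 0.01
t = np.arange(-5.0, 5.0, dt)
peak = np.exp(-0.5 * t ** 2) / np.sqrt(2 * np.pi)   # unit-area Gaussian peak
noise_sd = 0.05                                     # white noise level

# RMSE of the integrated area vs. half-width of the integration window:
# narrow windows miss area (bias), wide windows accumulate noise (variance).
for half_width in (1.0, 1.5, 2.0, 3.0, 4.0):        # in units of sigma
    sel = np.abs(t) <= half_width
    ests = [np.sum((peak + rng.normal(0, noise_sd, t.size))[sel]) * dt
            for _ in range(2000)]
    mse = np.mean((np.array(ests) - 1.0) ** 2)
    print(f"+/-{half_width} sigma: RMSE = {np.sqrt(mse):.4f}")
```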

10.
Abstract

The paper discusses sources of error that affect the accuracy of optical densitometry for the quantitative evaluation of thin-layer chromatograms and flat-bed electropherograms. Both 1D and 2D separations are considered. The relevant error sources fall into two classes: one deterministic in nature, the other stochastic in character. The main source of deterministic error is the non-linear relation between densitometric response and concentration distribution. Linearizing transforms of the response signal are available which significantly reduce the magnitude of this error. They are most effective when “point” scanning is used, but much less so with slit scanning. Stochastic sources produce noise which affects the measured value of the signal amplitude. Generally the most important is the “optical” noise of the blank medium. In most cases it is this factor which ultimately determines the sensitivity and accuracy of the method. Efforts to reduce optical noise are therefore an important design feature of high-performance instruments. Most effective in this regard is dual-wavelength scanning. The technique is, however, applicable only if the response characteristic of the medium is spectrally flat over a range significantly wider than the spectral absorbance characteristic of the separated substance.

11.
Two new algorithms for the automated processing of liquid chromatography/mass spectrometry (LC/MS) data are presented. These algorithms were developed from an analysis of the noise and artifact distribution in such data. The noise distribution was analyzed by preparing histograms of the signal intensity in LC/MS data; these histograms are well fitted by a sum of two normal distributions in the log scale. The first new algorithm, median filtering, removes noise that is not normally distributed in the linear scale more effectively than averaging adjacent scans. The second, vectorized peak detection, is more robust to variation in the noise and artifact distribution than methods based on determining an intensity threshold for the entire dataset. Vectorized peak detection also permits the incorporation of existing algorithms for peak detection in ion chromatograms and/or mass spectra. The application of these methods to LC/MS spectra of complex biological samples is described.
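A minimal sketch of the median-filtering idea (window size and array layout are illustrative assumptions): replace each scan by the point-wise median of a small block of adjacent scans, which rejects spike-like noise that averaging would only dilute.

```python
import numpy as np

def median_filter_scans(scans, window=5):
    """scans: 2-D array (n_scans x n_mz); returns a median-filtered copy.
    The median over adjacent scans suppresses noise whose distribution
    is not normal in the linear scale (e.g. sporadic spikes)."""
    half = window // 2
    out = np.empty_like(scans)
    for k in range(scans.shape[0]):
        lo, hi = max(0, k - half), min(scans.shape[0], k + half + 1)
        out[k] = np.median(scans[lo:hi], axis=0)   # per-m/z median
    return out
```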

12.
Cooperative interactions of signal transduction and environmental noise are investigated in a coupled hormone system, in which selective explicit internal signal stochastic resonance (EISSR) is observed. More specifically, the large peak of a period-2 oscillation (i.e., a strong signal) is greatly amplified by environmental noise, while the small peak (i.e., a weak signal) does not exhibit cooperative interactions with noise. The EISSR phenomenon can be controlled by adjusting the frequency or amplitude of an external signal, and a critical amplitude of the external signal is found. Significantly, the maximal signal-to-noise ratio increases almost linearly with the control parameter, even though the magnitude of the large peak decreases. In addition, the noise does not alter the fundamental frequencies of the strong and weak signals, which implies that the system can maintain its intrinsic oscillatory state and resist the effect of environmental fluctuations.

13.
Appropriate sampling, which includes the estimation of measurement uncertainty, is proposed in preference to representative sampling without estimation of overall measurement quality. To fulfil this purpose, the uncertainty estimate must include contributions from all sources, including primary sampling, sample preparation and chemical analysis. It must also include contributions from systematic errors, such as sampling bias, rather than from random errors alone. Case studies are used to illustrate the feasibility of this approach and to show its advantages for more reliable interpretation of the measurements. Measurements with a high level of uncertainty (e.g., 50%) can be shown to be fit for some specified purposes using this approach. Once reliable estimates of the uncertainty are available, a probabilistic interpretation of the results can be made. This allows financial aspects to be considered in deciding what constitutes an acceptable level of uncertainty. In many practical situations, “representative” sampling is never fully achieved. This approach recognizes this and instead provides reliable estimates of the uncertainty around the concentration values that imperfect, but appropriate, sampling causes.
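The requirement that the estimate include all sources means, in its simplest form, combining the sampling, preparation and analysis contributions in quadrature; the component values below are purely illustrative.

```python
import numpy as np

# Illustrative relative standard uncertainties for each stage of measurement.
u_sampling, u_prep, u_analysis = 0.40, 0.10, 0.05     # 40%, 10%, 5%

# Quadrature combination of independent contributions; the sampling term
# typically dominates, which is why it cannot be left out of the estimate.
u_total = np.sqrt(u_sampling**2 + u_prep**2 + u_analysis**2)
print(f"combined relative uncertainty: {u_total:.1%}")  # ~41.5%
```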

14.
We have developed a method to make real-time, continuous, noninvasive measurements of muscle oxygenation (Mox) from the surface of the skin. A key development was measurement in both the visible and near-infrared (NIR) regions. Measuring both oxygenated and deoxygenated myoglobin and hemoglobin gave a more accurate measurement of Mox than could be achieved with measurement of only the deoxygenated components, as in traditional near-infrared spectroscopy (NIRS). Using the second derivative with respect to wavelength reduced the effects of scattering on the spectra and also made the oxygenated and deoxygenated forms more distinguishable from each other. Selecting spectral bands where the oxygenated and deoxygenated forms absorb filtered out noise and spectral features unrelated to Mox. The NIR and visible bands were scaled relative to each other to correct for errors introduced by normalization. Multivariate curve resolution (MCR) was used to estimate Mox from the spectra within each data set collected from healthy subjects. A locally weighted regression (LWR) model was built from 2562 calibration-set spectra and associated Mox values from 20 subjects. LWR and partial least squares (PLS) allow accurate measurement of Mox despite variations in skin pigment and fat layer thickness across subjects. The method estimated Mox in five healthy subjects with an RMSE of 5.4%.
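A standard way to compute the second derivative with respect to wavelength is a Savitzky-Golay filter; the abstract does not specify the derivative algorithm, so the filter choice, window and synthetic spectrum below are assumptions for illustration.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(6)
wavelengths = np.arange(450.0, 1000.0, 1.0)     # nm, visible plus NIR
# Synthetic spectrum: an absorption band on top of a broad scattering slope.
spectrum = (np.exp(-0.5 * ((wavelengths - 760.0) / 15.0) ** 2)
            + 0.002 * wavelengths               # scattering background
            + rng.normal(0.0, 1e-3, wavelengths.size))

# Second derivative d2A/dlambda2: the smooth scattering slope is largely
# removed, while the narrow absorption feature is sharpened.
d2 = savgol_filter(spectrum, window_length=21, polyorder=3,
                   deriv=2, delta=1.0)
```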

15.
To ensure the reliability of results, analytical laboratories require a continuous quality control program which must take account of both systematic and random errors. Analyses of reference materials can be used to estimate systematic errors, but the resulting estimates of random errors (precision) tend to be optimistic, mainly because reference materials cannot be put through the whole analytical process (e.g., primary sampling is often a major source of error). Estimates of precision must therefore be based on routine samples. If duplicate determinations are made on routine samples, the precision can be estimated reliably. Within the optimum concentration range of an analytical method (usually starting from 5-10 times the detection limit), the relative standard deviation (sr) can be regarded as almost constant, i.e., independent of concentration. The precision can then be estimated by first calculating the sr value of each pair of results. Individually, these are not reliable estimates of the true sr, but they can be regarded as independent measurements of the same sr and so can be pooled to obtain a more reliable estimate of precision, with the number of duplicates as the degrees of freedom. The applicability of the method is tested on soil, rock and ore samples.
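The pooling procedure is compact enough to state as code: each duplicate pair gives s_i = |x1 - x2| / sqrt(2), dividing by the pair mean makes it relative, and the root of the averaged squares is the pooled sr with n degrees of freedom. A sketch with illustrative variable names:

```python
import numpy as np

def pooled_sr(x1, x2):
    """Pooled relative standard deviation from n duplicate determinations.
    For a duplicate pair, s_i^2 = (x1 - x2)^2 / 2; dividing by the pair
    mean squared gives the relative variance, which is pooled over pairs."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    mean = (x1 + x2) / 2
    rel_var = ((x1 - x2) ** 2 / 2) / mean ** 2
    return np.sqrt(rel_var.mean())      # n pairs -> n degrees of freedom

# e.g. pooled_sr([10.2, 5.1, 20.4], [9.8, 5.3, 19.6])
```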

16.
In the present study, a new approach to chemometric background correction in on-line gradient LC–FTIR is introduced. For this purpose, the spectral changes of the elution mixture during gradient elution were analyzed by 2D correlation spectroscopy. The fundamentals of the new background correction algorithm, based on polynomial fits calculated from a reference spectra matrix (the Polyfit-RSM method), are explained. The Polyfit-RSM approach was applied to blank gradient runs as well as to LC–FTIR data obtained from the injection of a soft drink sample using acetonitrile:water as eluent. The results were critically assessed and compared to those obtained with two previous background correction methods that are likewise based on the use of a reference spectra matrix (RSM). The Polyfit-RSM method provided lower noise levels throughout the whole spectral range than the alternative background correction methods, an excellent recovery of analyte spectra, and chromatograms with a low noise level that are also free from baseline shifts. A significant finding, and a major advantage for the practical applicability of the algorithm, is that the size of the RSM can be reduced without affecting the accuracy of the correction.
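A loose sketch of how a polynomial-fit background predictor built from a reference spectra matrix might look; the published algorithm's details are not given in the abstract, so the structure below is an assumption. For each spectral point, background absorbance is fitted as a polynomial in eluent composition over the RSM, then evaluated at the composition eluting at each time point.

```python
import numpy as np

def polyfit_rsm_background(rsm, rsm_comp, comp_at_time, degree=2):
    """rsm: reference spectra matrix (n_ref_spectra x n_points);
    rsm_comp: eluent composition for each reference spectrum;
    comp_at_time: composition at each chromatographic time point.
    Returns the predicted eluent background (n_times x n_points)."""
    n_points = rsm.shape[1]
    bg = np.empty((comp_at_time.size, n_points))
    for j in range(n_points):
        coeffs = np.polyfit(rsm_comp, rsm[:, j], degree)  # per-point fit
        bg[:, j] = np.polyval(coeffs, comp_at_time)       # predicted background
    return bg
```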

17.
In the present paper, the capabilities of differential field-flow fractionation, i.e., the determination of an incremental quantity of a colloidal species (e.g., an adsorbed mass uptake) from the joint use of two independent FFF measurements, one on a species and one on the same species after modification, are considered. The different error types, those related to the retention time determinations and those arising from fluctuations of the operating parameters, were considered. The different components were computed with reference to SdFFF determinations of bare submicron polystyrene (PS) particles and of the same PS particles covered with IgG. A comparison was made between the theoretically computed precision and experiment. The error arising from the experimental measurement of retention times was identified as the main source of error. Accordingly, it was possible to make explicit the detection limits and confidence intervals of the adsorbed mass uptake as a function of experimental quantities such as the retention ratio, the detector calibration ratio, the injected quantity, the baseline noise, and the relative error of the void time. An experimentally determined and theoretically predicted dependence of both the detection and confidence limits (approximately ±10⁻¹⁷ g) on the square root of the injected concentration, at constant injected volume, was found.

18.
Biotransformation products of two potential antineoplastic agents, benfluron and dimefluron, are characterized using our integrated approach based on the combination of high-performance liquid chromatography (HPLC) separation of phase I and phase II metabolites followed by photodiode-array UV detection and electrospray ionization tandem mass spectrometry (MS/MS). High-mass-accuracy measurement allows confirmation of elemental compositions and of metabolic reactions from exact mass defects. The combination of different HPLC/MS/MS scans, such as reconstructed ion current chromatograms, constant neutral loss chromatograms or exact mass filtration, aids the unambiguous detection of low-abundance metabolites. Arene oxidation, N-oxidation, N-demethylation, O-demethylation, carbonyl reduction, glucuronidation and sulfation are the typical mechanisms of metabolite formation. Interpretation of the tandem mass spectra makes it possible to distinguish the demethylation position (N- vs. O-) and to differentiate N-oxidation from arene oxidation for both phase I and phase II metabolites. Two of the metabolic pathways, glucosylation and double glucuronidation, are rather unusual for rat samples. The formation of metabolites that significantly change the chromophoric system of the studied compounds, such as the reduction of the carbonyl group in the 7H-benzo[c]fluorene-7-one chromophore, is reflected in their UV spectra, which provides valuable complementary information to the MS/MS data.

19.
One of the major issues in the fully automated development of chromatographic methods is the automated detection and identification of peaks in complex samples such as multi-component pharmaceutical formulations or stability studies of these formulations. The same problem also occurs with plant materials or biological matrices. This step is thus critical and time-consuming, especially when a Design of Experiments (DOE) approach is used to generate the chromatograms. DOE will often maximize the changes in the analytical conditions in order to explore an experimental domain. Unfortunately, this generally produces very different and “unpredictable” chromatograms which can be difficult to interpret, complicating peak detection and peak tracking (i.e., matching peaks across all the chromatograms). In this context, Independent Component Analysis (ICA), a statistically based signal processing method, was investigated to solve this problem. The ICA principle assumes that the observed signal is the result of several phenomena (known as sources) and that all these sources are statistically independent. Under those assumptions, ICA is able to recover sources which have a high probability of representing the constitutive components of a chromatogram. In the present study, ICA was successfully applied for the first time to HPLC–UV-DAD chromatograms, and it was shown that ICA can differentiate noise and artifact components from those of interest by applying clustering methods based on high-order statistics computed on these components. Furthermore, on the basis of the described numerical strategy, it was also possible to reconstruct a cleaned chromatogram with minimal influence from noise and baseline artifacts. This represents a significant advance towards providing helpful tools for the automated development of liquid chromatography (LC) methods, and analytical investigations could be shortened using this type of methodology.
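The decomposition step can be sketched with scikit-learn's FastICA on a simulated DAD data matrix; the peak, drift and mixing below are illustrative assumptions. Each detection channel is modeled as a mixture of independent sources, and ICA recovers peak-like versus baseline/noise-like source profiles.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
t = np.linspace(0.0, 10.0, 1000)
s1 = np.exp(-0.5 * ((t - 3.0) / 0.1) ** 2)      # analyte peak "source"
s2 = 0.1 * t                                    # baseline drift "source"
A = rng.uniform(0.2, 1.0, (8, 2))               # mixing over 8 DAD channels
X = A @ np.vstack([s1, s2]) + rng.normal(0.0, 0.01, (8, t.size))

# FastICA expects (n_samples, n_features) = (time points, channels);
# the recovered rows are the independent source profiles, which can then be
# classified (peak vs. noise/baseline) by clustering on high-order statistics.
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X.T).T              # shape (2, n_time_points)
```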

20.
A method for calculating the limits of detection (LD) and quantification (LQ) for the analysis of organochlorine compounds in serum is described. The method is based on the analysis of proficiency testing materials from an external quality assessment scheme for selected pollutants, and on the study of the signal-to-noise ratio of chromatograms obtained by GC-ECD injection. This method provides results that are representative of matrix effects, instrumental variability and extraction recoveries in the analysis of serum samples.
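In outline, the signal-to-noise route reduces to the usual 3s and 10s criteria applied to baseline noise from a blank region of the chromatogram, divided by the calibration slope. This is a sketch of the generic calculation, not the paper's exact protocol.

```python
import numpy as np

def detection_limits(baseline, slope):
    """baseline: signal values from a blank region of the chromatogram;
    slope: calibration slope (signal per concentration unit).
    Returns (LD, LQ) from the 3x and 10x noise criteria."""
    noise_sd = np.std(baseline, ddof=1)   # SD of the baseline noise
    ld = 3 * noise_sd / slope             # limit of detection
    lq = 10 * noise_sd / slope            # limit of quantification
    return ld, lq
```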
