Similar articles
1.
Equations based on the equilibrium relationship between a sulphate-containing solution and solid lead sulphate were derived and applied to calibration graphs for the determination of sulphate. The special character of the linear graphs extends the applicability of linear calibration and standard addition procedures to the indirect determination of sulphate and improves the reliability of the analytical results. Experiments with the flow-injection technique confirmed the applicability of the proposed linear graphs.
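A hedged sketch of the kind of equilibrium relationship involved (notation introduced here, not necessarily the authors' derivation): if sulphate at concentration $c_{\mathrm{S}}$ is determined indirectly by adding lead in excess at total concentration $c_{\mathrm{Pb}}$ and measuring the lead remaining in solution after PbSO$_4$ precipitation, the solubility product and mass balance give

$$[\mathrm{Pb^{2+}}][\mathrm{SO_4^{2-}}] = K_{sp}, \qquad [\mathrm{Pb^{2+}}] - [\mathrm{SO_4^{2-}}] = c_{\mathrm{Pb}} - c_{\mathrm{S}},$$

$$[\mathrm{Pb^{2+}}] = \frac{c_{\mathrm{Pb}} - c_{\mathrm{S}}}{2} + \sqrt{\left(\frac{c_{\mathrm{Pb}} - c_{\mathrm{S}}}{2}\right)^{2} + K_{sp}},$$

which reduces to the linear relation $[\mathrm{Pb^{2+}}] \approx c_{\mathrm{Pb}} - c_{\mathrm{S}}$ (slope $-1$ in $c_{\mathrm{S}}$) whenever $c_{\mathrm{Pb}} - c_{\mathrm{S}} \gg 2\sqrt{K_{sp}}$, consistent with the linear calibration and standard-addition graphs referred to above.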

2.
Multi-wavelength detectors offer improved detection capabilities for liquid chromatographic methods, but require improvements in data analysis methodology to utilize all the available information. In this work, various methods for improving the quantitative results obtained from liquid chromatography with full-spectrum fluorescence detection were studied. The availability of multi-wavelength information allows overlapped chromatographic peaks to be resolved. Different approaches were investigated for developing calibration models that use all of the available spectral information and are compatible with a variety of methods for quantification, including factor analysis, Kalman filtering and rank annihilation. These methods were compared for their ability to resolve overlapped chromatographic peaks and their accuracy in quantification.
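As a minimal illustration of how full-spectrum detection resolves co-eluting peaks, the following sketch uses classical least squares with known pure spectra, a deliberately simpler stand-in for the factor analysis, Kalman filtering and rank annihilation methods compared in the paper; all data and names are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)                       # retention time axis
wl = np.linspace(300, 500, 101)                   # emission wavelengths, nm

c1 = np.exp(-0.5 * ((t - 4.8) / 0.5) ** 2)        # two strongly overlapped peaks
c2 = 0.6 * np.exp(-0.5 * ((t - 5.4) / 0.5) ** 2)
s1 = np.exp(-0.5 * ((wl - 360) / 30) ** 2)        # pure fluorescence spectra
s2 = np.exp(-0.5 * ((wl - 420) / 30) ** 2)

# bilinear data matrix (time x wavelength) plus noise
D = np.outer(c1, s1) + np.outer(c2, s2) + rng.normal(0, 0.01, (t.size, wl.size))

S = np.column_stack([s1, s2])                     # known pure spectra
C_hat = D @ S @ np.linalg.inv(S.T @ S)            # resolved elution profiles
areas = C_hat.sum(axis=0) * (t[1] - t[0])         # quantification by peak area
print(areas)
```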

3.
Summary In HPLC calibration the expressions "lowest calibration limit" and "determination limit" are defined in statistical terms. The lowest calibration limit is the minimum mass in the measured series of calibration points. It is calculated from the confidence interval of the inverse of the calibration function as the lowest mass limit that may be differentiated from zero mass with a preset probability of error. If the calculated lowest calibration limit is lower than the actual data, points at lower concentration may be measured. The determination limit is the smallest concentration of an analyte that is differentiated from a concentration of zero, or from an apparent blank value in the calibration curve, with a given probability of error. Using two different UV detectors (variable wavelength and photodiode array), the lowest calibration limit is experimentally evaluated and compared with specific data for the detectors. Dedicated to Prof. Dr. E. Bayer, Tübingen, on the occasion of his 60th birthday.
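A minimal sketch of how such a lowest calibration limit can be computed from the prediction band of a straight-line calibration (DIN 32645-style reasoning, assumed here to parallel the definition above; the data are hypothetical):

```python
import numpy as np
from scipy import stats

x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # injected mass (hypothetical units)
y = np.array([12.1, 23.8, 48.9, 97.5, 196.0])  # detector response (hypothetical)

n = x.size
b, a = np.polyfit(x, y, 1)                     # slope, intercept
s_y = np.sqrt(np.sum((y - (a + b * x)) ** 2) / (n - 2))   # residual std. deviation
sxx = np.sum((x - x.mean()) ** 2)

alpha = 0.05                                   # preset probability of error
t_val = stats.t.ppf(1 - alpha, df=n - 2)
# half-width of the one-sided prediction band for a single measurement at x = 0
delta_y0 = t_val * s_y * np.sqrt(1 + 1 / n + x.mean() ** 2 / sxx)
x_lc = delta_y0 / b                            # lowest mass distinguishable from zero
print(x_lc)
```

If x_lc falls below the lowest standard actually measured, points at lower mass can be added to the calibration series, as noted above.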

4.
In order to accommodate continually changing tasks in the [μl/l]–[nl/m3] ranges of gas chromatographic trace analysis of gas and vapor phases, a simple and time-saving calibration technique is presented which makes conventional test mixtures in the abovementioned concentration ranges unnecessary. The new method is based on simulating such mixtures at the inlet of the GC unit, with the aid of commercially available multiway sampling valves of various volumes, by means of partial pressure sampling.
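The underlying idea can be illustrated as follows (a hedged reconstruction, not the authors' equations): filling a sampling loop of volume $V_L$ with the analyte at a low, accurately measured partial pressure $p_a$ and topping it up with matrix gas to the working pressure $p_0$ injects an amount

$$n_a = \frac{p_a V_L}{RT},$$

which is equivalent to injecting a test mixture of volume fraction $x_a = p_a/p_0$; trace-level mixtures are thus simulated from easily measurable pressure ratios, and sampling loops of different volumes extend the accessible range.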

5.
Only two computer-controlled microsolenoid devices, namely two micropumps or one micropump and one microvalve, are sufficient for the construction of on-line dilution modules useful in several flow analytical systems for calibration with a single standard. Three simple constructions of such modules were tested and compared. The most promising is the one based on a microvalve controlling the dilution ratio of the standard and a solenoid micropump playing a double role: solution-pumping device and homogenizer of the mixed segments. All investigated modules were tested with a paired emitter-detector diode (PEDD) as photometric flow-through detector and bromothymol blue as a model analyte. The best module was implemented in a more advanced flow-injection system dedicated to optical detection of alkaline phosphatase activity using a UV-PEDD-based flow-through detector for the enzyme reaction product.

6.
The application of a new method to the multivariate analysis of incomplete data sets is described. The new method, called maximum likelihood principal component analysis (MLPCA), is analogous to conventional principal component analysis (PCA), but incorporates measurement error variance information in the decomposition of multivariate data. Missing measurements can be handled in a reliable and simple manner by assigning large measurement uncertainties to them. The problem of missing data is pervasive in chemistry, and MLPCA is applied to three sets of experimental data to illustrate its utility. For exploratory data analysis, a data set from the analysis of archeological artifacts is used to show that the principal components extracted by MLPCA retain much of the original information even when a significant number of measurements are missing. Maximum likelihood projections of censored data can often preserve original clusters among the samples and can, through the propagation of error, indicate which samples are likely to be projected erroneously. To demonstrate its utility in modeling applications, MLPCA is also applied in the development of a model for chromatographic retention based on a data set which is only 80% complete. MLPCA can predict missing values and assign error estimates to these points. Finally, the problem of calibration transfer between instruments can be regarded as a missing data problem in which entire spectra are missing on the ‘slave’ instrument. Using NIR spectra obtained from two instruments, it is shown that spectra on the slave instrument can be predicted from a small subset of calibration transfer samples even if a different wavelength range is employed. Concentration prediction errors obtained by this approach were comparable to cross-validation errors obtained for the slave instrument when all spectra were available.
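A minimal sketch of a maximum-likelihood (error-weighted) low-rank decomposition with element-wise error variances, fitted by alternating weighted least squares; this is a simplified reconstruction of the MLPCA idea rather than the authors' algorithm, and missing values are handled exactly as described above, by assigning them very large variances (near-zero weights).

```python
import numpy as np

def mlpca_als(X, var, rank, n_iter=200, tol=1e-10):
    """Error-weighted low-rank fit X ~ U @ V.T (diagonal error covariances).

    X    : (n, p) data; missing entries may be NaN
    var  : (n, p) measurement variances; very large for missing entries
    rank : number of components to retain
    """
    Xc = np.nan_to_num(X)              # placeholder values for missing cells
    W = 1.0 / var                      # element-wise weights
    n, p = X.shape
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)   # unweighted starting point
    U = U[:, :rank] * s[:rank]
    V = Vt[:rank].T
    prev = np.inf
    for _ in range(n_iter):
        for i in range(n):             # weighted LS update of each score row
            A = (V * W[i][:, None]).T @ V
            U[i] = np.linalg.solve(A, (V * W[i][:, None]).T @ Xc[i])
        for j in range(p):             # weighted LS update of each loading row
            A = (U * W[:, j][:, None]).T @ U
            V[j] = np.linalg.solve(A, (U * W[:, j][:, None]).T @ Xc[:, j])
        obj = np.sum(W * (Xc - U @ V.T) ** 2)   # weighted misfit
        if abs(prev - obj) < tol * max(obj, 1.0):
            break
        prev = obj
    return U, V                        # U @ V.T also predicts the missing cells
```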

7.
The international standard ISO 11843 specifies basic methods for designing experiments to estimate critical values relating to the capability of detection. The detection capability depends on the experimental design, the calibration model used, and the errors of the measurement process. This study reports how the specification of the calibration points within the calibration range can be used as a-priori information for evaluating calibration uncertainty without any consideration of the response variables of the calibration. As a result of the investigation of the experimental designs, calibration points within the calibration range can be omitted without significant change in the calibration uncertainty. The approach is demonstrated with a practical example, the determination of arsenic in surface water samples taken from a river in Germany.
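A hedged illustration of why the design alone carries this a-priori information: for a straight-line calibration with points $x_1,\dots,x_n$, slope $b$ and residual standard deviation $s$, the standard uncertainty of a concentration $\hat{x}$ predicted from $m$ replicate measurements is approximately

$$u(\hat{x}) \approx \frac{s}{b}\sqrt{\frac{1}{m} + \frac{1}{n} + \frac{(\hat{x}-\bar{x})^{2}}{\sum_i (x_i-\bar{x})^{2}}},$$

so the geometry of the design enters only through $n$, $\bar{x}$ and $\sum_i (x_i-\bar{x})^{2}$; interior calibration points contribute relatively little to these quantities, which is consistent with the finding that they can be omitted without significantly changing the calibration uncertainty.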

8.
Some aspects of the fundamental problems of chemometrics are reviewed based on the research work undertaken in this laboratory. The topics touched upon include analytical information theory, experimental design and optimization, sampling, analytical detection theory, calibration, signal processing, chemical pattern recognition, quantitative structure-activity relationships, digital simulation, and teaching chemometrics as a chemical discipline.

9.
A flow injection calibration system, originally designed and tested in our laboratory, is presented here as a versatile analytical tool serving calibration purposes. It is characterized by simple construction, easy operation and the possibility of offering rich measurement information with the use of a single standard solution. It is shown that, depending on the instrumental conditions and the composition of the calibration solutions, the manifold is able to realize various calibration procedures in interpolative, extrapolative and integration modes. As an experimental example, the determination of calcium in a cabbage sample by flame atomic absorption spectrometry was calibrated by means of the dilution method (DM) in the integration mode.

10.
A method is proposed for the simultaneous determination of albumin and immunoglobulin G (IgG1) by fluorescence spectroscopy and multivariate calibration with partial least squares regression (PLS). The influence of some instrumental parameters was investigated with two experimental designs comprising 19 and 11 experiments, respectively. The investigated parameters were excitation and emission slit widths, detection voltage and scan rate. When a suitable instrumental setting had been found, a small calibration and test set were analysed and evaluated. Thereafter, a larger calibration of albumin and IgG1 was made from 26 samples (0-42 μg ml−1 albumin and 0-12.7 μg ml−1 IgG1). This calibration was validated with a test set consisting of 14 samples in the same concentration range. The precision of the method was estimated by analysing two test set samples six times each. The scan modes tested were emission scan and synchronous scan at Δ60 nm. The results showed that the method could be used for the determination of albumin and IgG1 (albumin: root mean square error of prediction (RMSEP) <2, relative standard error of prediction (RSEP) <6%; IgG1: RMSEP <1, RSEP <8%) in spite of the overlapping fluorescence of the two compounds. The estimated precision was a relative standard deviation (R.S.D.) <1.7%. The method was finally applied to the analysis of some sample fractions from an albumin standard used in affinity chromatography.
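A minimal, self-contained sketch of the calibration step (PLS on synthetic, strongly overlapping fluorescence spectra; the number of latent variables and all data are assumptions, not the authors' values):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
wl = np.linspace(300, 450, 151)                      # emission wavelengths, nm
s_alb = np.exp(-0.5 * ((wl - 340) / 25) ** 2)        # hypothetical albumin spectrum
s_igg = np.exp(-0.5 * ((wl - 355) / 25) ** 2)        # hypothetical, overlapping IgG1 spectrum

def simulate(n):
    C = rng.uniform([0.0, 0.0], [42.0, 12.7], size=(n, 2))   # ranges from the abstract
    X = C @ np.vstack([s_alb, s_igg]) + rng.normal(0, 0.05, (n, wl.size))
    return X, C

X_cal, Y_cal = simulate(26)                          # 26 calibration samples
X_test, Y_test = simulate(14)                        # 14 test samples

pls = PLSRegression(n_components=4).fit(X_cal, Y_cal)
Y_pred = pls.predict(X_test)
rmsep = np.sqrt(np.mean((Y_pred - Y_test) ** 2, axis=0))     # one value per analyte
print(rmsep)
```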

11.
Two different pulse calibration techniques for estimating the total quantities of evolved gaseous substances formed in thermogravimetric (TG)–FTIR runs were compared and assessed. A gas-pulse calibration method was based on the use of a specific device capable of sending a known quantity of a gaseous compound of interest to the FTIR analyzer. A second calibration method was based on the vaporization, in the TG analyzer, of liquid solutions of the compound of interest. Data obtained by these techniques were compared to those from conventional concentration-based calibration. The results confirmed the reliability of pulse calibration techniques for obtaining quantitative data on evolved gaseous products in TG–FTIR applications. Moreover, both the gas-pulse and the vaporization-based calibration techniques proved to have several advantages with respect to conventional techniques, among them the need for fewer standards and no need for on-line gas dilution systems.

12.
Blanco M, Cueva-Mestanza R, Peguero A. Talanta 2011, 85(4): 2218–2225.
Using an appropriate set of samples to construct the calibration set is crucial with a view to ensuring accurate multivariate calibration of NIR spectroscopic data. In this work, we developed and optimized a new methodology for incorporating the physical variability of pharmaceutical production based on the NIR spectrum of the process. Such a spectrum contains the spectral changes caused by each treatment applied to the component mixture during the production process. The proposed methodology involves adding a set of process spectra (viz. difference spectra between those for production tablets and a laboratory mixture of identical nominal composition) to the set of laboratory samples, which span the desired concentration range, in order to construct a calibration set incorporating all the physical changes undergone by the samples in each step of the production process. The best calibration model among those tested was selected by establishing the influence of the spectral pretreatments used to obtain the process spectrum and construct the calibration models, and also by determining the multiplying factor m to be applied to the process spectra in order to ensure incorporation of all variability sources into the calibration model. The specific samples to be included in the calibration set were selected by principal component analysis (PCA). To this end, the new methodology for constructing calibration sets for determining the active pharmaceutical ingredients (API) and excipients was applied to Irbesartan tablets and validated by application to the API and excipients of paracetamol tablets. The proposed methodology provides simple, robust calibration models for determining the different components of a pharmaceutical formulation.
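A minimal sketch of the calibration-set augmentation described above (the variable names and the element-wise addition are a reconstruction of the published description, not the authors' code):

```python
import numpy as np

def augmented_calibration_set(X_lab, X_prod, X_labmatch, m=1.0):
    """Append process variability to a laboratory calibration set.

    X_lab      : (n_lab, p) spectra of laboratory mixtures spanning the range
    X_prod     : (n_proc, p) spectra of production tablets
    X_labmatch : (n_proc, p) spectra of lab mixtures with the same nominal
                 composition as the corresponding production tablets
    m          : multiplying factor applied to the process spectra
    """
    P = X_prod - X_labmatch                    # process spectra: physical effects only
    # add every (scaled) process spectrum to every laboratory spectrum
    X_proc = (X_lab[:, None, :] + m * P[None, :, :]).reshape(-1, X_lab.shape[1])
    # the augmented rows keep the reference values of their parent lab samples,
    # since the process spectra carry no compositional change
    return np.vstack([X_lab, X_proc])
```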

13.
A combination of "black box" and "calendar-time" methods for the determination of calibration intervals of an analytical measuring instrument is discussed. Since the methods require information on the distributions of the calibration parameters, such information is described for an atomic absorption spectrophotometer as an example. The hypotheses on the normal distribution of the calibration parameters are tested using the ω²-criterion and accepted at the 0.90–0.95 levels of confidence. Corresponding control charts are designed for indicating warning and action limits of the calibration parameters and for diagnosing outliers in further calibrations. The control charts also indicate when the calibration should be carried out according to the full program of the equipment manufacturer. Received: 15 April 2000 / Accepted: 24 July 2000
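A minimal sketch of the control-chart logic (Shewhart-type warning and action limits are assumed here; the paper's exact chart design and its ω²-based distribution checks are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
slopes = rng.normal(0.052, 0.001, size=30)   # hypothetical history of calibration slopes

mean, sd = slopes.mean(), slopes.std(ddof=1)
warning = (mean - 2 * sd, mean + 2 * sd)     # ~95 % warning limits
action = (mean - 3 * sd, mean + 3 * sd)      # ~99.7 % action limits

new_slope = 0.049                            # slope from the latest calibration
if not action[0] <= new_slope <= action[1]:
    print("action limit exceeded: recalibrate per the manufacturer's full program")
elif not warning[0] <= new_slope <= warning[1]:
    print("warning limit exceeded: check the instrument and repeat the calibration")
```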

14.
Many scientific instruments produce multivariate images characterized by three-way tables, an element of which represents the intensity value at a spatial location for a given spectral channel. A frequently encountered problem is to estimate the contributions of some compounds at each location of these images. Usual regression methods of calibration, such as PLS, require a calibration matrix X (n × p) and the corresponding vector y of the dependent variable (n × 1). X can be built up by sampling pixel-vectors in the images, but y is sometimes difficult to obtain if the surface of the samples is formed by chemically heterogeneous regions. In this case, the quantitative analyses related to y may be difficult if the pixels represent very small areas (for example in microscopic images) or very large ones (satellite images). This is for example the case when dealing with biological solid samples representing different tissues. Direct Calibration (DC), sometimes referred to as "spectral unmixing", does not require such a calibration set. It does, however, require both a matrix of "perturbing" pixel-vectors (noted K) and the spectrum of the "pure" component to be analyzed (p), which are more easily obtainable. For estimating the contribution, the unknown pixel-vector x and the pure spectrum p are first projected orthogonally to K, giving the vectors x⊥ and p⊥, respectively. The contribution is then estimated by a second projection of x⊥ onto p⊥. A method, based on principal component analysis, for determining the optimal dimensions of K is proposed. DC was applied to a collection of multivariate images of wheat kernels to estimate the proportions of three tissues, namely outer layers, "waxy" endosperm and normal endosperm. The final results are presented as images of wheat kernels in false colors associated with the estimated proportions of the tissues. It is shown that DC is appropriate for estimating contributions in situations in which the more usual methods of calibration cannot be applied.
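A minimal sketch of the projection scheme as read from the description above (an assumption-laden reconstruction, not the authors' code): the unknown pixel and the pure spectrum are projected onto the subspace orthogonal to the perturbing pixel-vectors K, and the contribution is the scalar projection of one residual onto the other.

```python
import numpy as np

def direct_calibration(x, p, K):
    """Estimate the contribution of pure spectrum p in pixel-vector x.

    x : (n_channels,) unknown pixel-vector
    p : (n_channels,) pure-component spectrum
    K : (n_perturb, n_channels) matrix of "perturbing" pixel-vectors (rows)
    """
    Kt = K.T
    # projector onto the subspace orthogonal to the span of the perturbing spectra
    P_perp = np.eye(K.shape[1]) - Kt @ np.linalg.pinv(Kt)
    x_perp = P_perp @ x
    p_perp = P_perp @ p
    # second projection: contribution of the pure component in the pixel
    return (x_perp @ p_perp) / (p_perp @ p_perp)
```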

15.
To date, few efforts have been made to take simultaneous advantage of the local nature of spectral data in both the time and frequency domains in a single regression model. We describe here the use of a novel chemometrics algorithm using the wavelet transform. We call the algorithm dual-domain regression, as the regression step defines a weighted model in the time domain based on the contributions of parallel, frequency-domain models made from wavelet coefficients reflecting different scales. In principle, any regression method can be used, and implementations of the algorithm using partial least squares regression and principal component regression are reported here. The performance of the models produced from the algorithm is generally superior to that of regular partial least squares (PLS) or principal component regression (PCR) models applied to data restricted to a single domain. Dual-domain PLS and PCR algorithms are applied to near infrared (NIR) spectral datasets of Cargill corn samples and sets of spectra collected on batch chemical reactions run in different reactors to illustrate the improved robustness of the modeling.
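A minimal sketch of the dual-domain idea (one PLS model per wavelet scale, combined by weights derived from each scale's calibration residuals; the wavelet, decomposition level and weighting rule are all assumptions, not the authors' algorithm):

```python
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression

def dual_domain_fit(X, y, wavelet="db4", level=4, n_lv=5):
    """Fit one PLS model per wavelet scale of the spectra in X (rows)."""
    coeffs = pywt.wavedec(X, wavelet, level=level, axis=1)
    models, weights = [], []
    for c in coeffs:
        m = PLSRegression(n_components=min(n_lv, c.shape[1], X.shape[0] - 1)).fit(c, y)
        resid = np.ravel(y) - m.predict(c).ravel()
        weights.append(1.0 / (np.var(resid) + 1e-12))   # better-fitting scales weigh more
        models.append(m)
    w = np.array(weights) / np.sum(weights)
    return models, w

def dual_domain_predict(models, w, Xnew, wavelet="db4", level=4):
    coeffs = pywt.wavedec(Xnew, wavelet, level=level, axis=1)
    preds = np.column_stack([m.predict(c).ravel() for m, c in zip(models, coeffs)])
    return preds @ w                                    # weighted time-domain prediction
```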

16.
Data fusion in multivariate calibration transfer
We report the use of stacked partial least-squares regression and stacked dual-domain regression analysis with four commonly used techniques for calibration transfer to improve predictive performance from transferred multivariate calibration models. The predictive performance of three conventional calibration transfer methods, piecewise direct standardization (PDS), orthogonal signal correction (OSC) and model updating (MUP), which require standards measured on both instruments, was significantly improved by data fusion, either by stacking of wavelet scales or by stacking of spectral intervals, as demonstrated by transfer of calibrations developed on near-infrared spectra of synthetic gasoline. Stacking did not produce as significant an improvement for calibration transfer using a finite impulse response (FIR) filter, but application of SPLS regression to FIR-transferred spectra improves the predictive performance of the transferred model.
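For reference, a minimal sketch of piecewise direct standardization (PDS), one of the conventional transfer methods named above; the window size and the use of ordinary least squares per window (rather than PLS) are simplifying assumptions.

```python
import numpy as np

def pds_transfer_matrix(S_master, S_slave, half_window=5):
    """Banded matrix F such that (slave spectra) @ F approximates master spectra.

    S_master, S_slave : (n_standards, n_wavelengths) spectra of the transfer
                        standards measured on the master and slave instruments
    """
    n_std, p = S_master.shape
    F = np.zeros((p, p))
    for j in range(p):
        lo, hi = max(0, j - half_window), min(p, j + half_window + 1)
        b, *_ = np.linalg.lstsq(S_slave[:, lo:hi], S_master[:, j], rcond=None)
        F[lo:hi, j] = b
    return F

# usage: X_transferred = X_slave_new @ F, then apply the master-instrument model
```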

17.
Carbon dioxide (CO2) is a greenhouse gas that makes by far the largest contribution to the global warming of the Earth's atmosphere. For measurements of atmospheric CO2, non-dispersive infrared (NDIR) analyzers and gas chromatography are conventionally used. We explored whether, and to what degree, the argon content can influence the determination of atmospheric CO2 by comparing CO2 concentrations between sample gas mixtures with Ar amounts of 0 and 18.6 mmol mol−1 and calibration gas mixtures with Ar at 8.4, 9.1 and 9.3 mmol mol−1. We found that variation of the Ar content in calibration gas mixtures can undermine the precise and accurate determination of atmospheric CO2 in background air. The differences in CO2 concentration due to the variation of Ar content in the calibration gas mixtures were negligible (<±0.03 μmol mol−1) for NDIR systems, whereas they increased noticeably (<±1.09 μmol mol−1), especially for GC systems modified to enhance instrumental sensitivity. We found that the thermal mass flow controller is the main source of the differences, although such differences appeared only in the presence of a flow restrictor in the GC systems. For reliable monitoring of real atmospheric CO2 samples, one should use calibration gas mixtures with an Ar content as close as possible to the ambient-air level (9.332 mmol mol−1). Practical guidelines are highlighted relating to the selection of appropriate analytical approaches for accurate and precise measurements of atmospheric CO2. In addition, theoretical implications of the findings are addressed.

18.
The measurement uncertainty of iodine determination in NIST standard reference material (SRM) 1549 using radiochemical neutron activation analysis (RNAA) was studied. This method is based on ignition of the irradiated sample [127I(n,γ)128I, t1/2 = 25 min, Eγ = 422.9 keV] in an oxygen atmosphere, followed by absorption of iodine in a reducing acid solution and its purification by a selective extraction–stripping–reextraction cycle. The purified solution of iodine in CHCl3 was transferred to a well-type HPGe detector for γ-ray measurement of the induced radionuclide 128I. The detection limit of the method employed under the conditions described was 1 ng/g. The reproducibility of iodine determination in the SRM was 3.6% (12 determinations within 1 month), calculated by the analysis of variance procedure. Using the commercially available software program GUM Workbench and the recommendations of the Eurachem/CITAC Guide, we evaluated the uncertainty budget for this RNAA method and the relative uncertainty obtained was 3.6%. The largest uncertainty contributions were due to the repeatability of the chemical yield determination, the count rate of the induced nuclide in the standard and sample, the mass of the carrier and the mass of the irradiation standard.

19.
Activation analysis
This article is the sixth paper in the periodic review series "Activation Analysis" in the journal 分析试验室 (Analytical Laboratory). It provides a fairly comprehensive review of the progress of activation analysis in China from June 2003 to December 2005. The contents cover activation analysis methodology and its applications in the life sciences, environmental science, geoscience and archaeology, national security and other fields, and the development trends of activation analysis are outlined.

20.
For calorimetric investigations where reconstruction of the real heat-flow rate during the experiment is required, the usual electrical calibration with constant power is often not sufficient. However, the use of a chemical calibration is limited by the small number of suitable, certified reactions with known heat-flow-rate dynamics. Therefore, a computer-controlled electrical calibration unit was developed that is capable of duplicating any simulated reaction power-time curve in the calorimetric cell. The calibration unit consists of universal simulation software with an interface to a programmable precision current supply connected to the calibration heater inside the calorimetric vessel. The reliability of the electrical calibration was verified with a continuous titration calorimeter using different recommended test reactions (TRIS + HCl, dilution of aqueous KCl, NaCl and urea solutions).

In order to test the electrical calibration procedure in the dynamic mode, the heat-flow rate of a first-order equilibrium reaction during a continuous titration experiment was simulated. It is demonstrated that the combination of simulation software and electrical calibration hardware provides a very close adaptation of calibration and experiment, allows a more reliable estimation of experimental uncertainties, simplifies the verification of complex data analysis procedures and opens up new possibilities for the optimization of experimental parameters.
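A minimal sketch of the kind of power-time curve such a unit would replay (simple first-order kinetics and heater parameters are assumed; the authors' simulation software and their equilibrium-reaction model are not reproduced):

```python
import numpy as np

dH = -50e3      # reaction enthalpy, J/mol (assumed)
k = 2e-3        # first-order rate constant, 1/s (assumed)
n0 = 1e-4       # initial amount of reactant, mol (assumed)

t = np.arange(0.0, 3600.0, 1.0)           # time grid, s
q = -dH * k * n0 * np.exp(-k * t)         # exothermic heat-flow rate q(t) = dQ/dt, W

# electrical duplication: drive the calibration heater so that P_el(t) = q(t),
# i.e. program the precision current supply to I(t) = sqrt(q(t) / R)
R = 100.0                                 # heater resistance, ohm (assumed)
I = np.sqrt(q / R)                        # required heater current, A
```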

