Similar Literature (20 results)
1.
Chemical analysis is a multi-stage process, which starts with primary sampling and ends with evaluation of the results. Especially in trace analysis and microanalysis of solid materials, sampling can far outweigh all other sources of error. For estimating the reliability of complete analytical procedures, a method is needed which can be used to estimate the errors made in the primary and secondary sampling and sample preparation steps. Based on Gy's theory of sampling, a computer program (SAMPEX) was written for the solution of practical sampling problems. The method involves the estimation of the sampling constant, C. For well-characterized materials, C can be estimated from the material properties. If the necessary material properties are difficult to estimate, C can be evaluated experimentally. The program can be used to solve the following problems: minimum sample size for a tolerated relative standard deviation of the fundamental sampling error; relative standard deviation of the fundamental sampling error for a given sample size; maximum particle size of the material for a specified standard deviation and sample size; balanced design of a multi-stage sampling and sample-reduction process; and sampling for particle size determination.
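As a rough illustration of the kind of calculation such a program performs, the sketch below evaluates Gy's simplified relation for the fundamental sampling error, s_r^2 ≈ C·d^3·(1/M_s − 1/M_lot), to obtain either the minimum sample mass for a tolerated relative standard deviation or the expected relative standard deviation for a given mass. This is a minimal sketch of the underlying formula only, not the SAMPEX program, and the values of C and d in the example are assumed.

```python
# Minimal sketch of a Gy-type fundamental-sampling-error calculation
# (illustrative only; the SAMPEX program is not reproduced here).
# Assumes the simplified relation  s_r^2 ≈ C * d**3 * (1/M_s - 1/M_lot),
# with C the sampling constant (g/cm^3) and d the top particle size (cm).

def min_sample_mass(C, d_cm, s_r, M_lot_g=float("inf")):
    """Smallest sample mass (g) whose fundamental error does not exceed s_r."""
    return 1.0 / (s_r**2 / (C * d_cm**3) + 1.0 / M_lot_g)

def fundamental_rsd(C, d_cm, M_s_g, M_lot_g=float("inf")):
    """Relative standard deviation of the fundamental sampling error."""
    return (C * d_cm**3 * (1.0 / M_s_g - 1.0 / M_lot_g)) ** 0.5

# Example: C = 50 g/cm^3, 0.5 cm top size, 1 % tolerated RSD (assumed values)
print(min_sample_mass(C=50.0, d_cm=0.5, s_r=0.01))      # about 62.5 kg
print(fundamental_rsd(C=50.0, d_cm=0.5, M_s_g=62_500))  # back-check: 0.01
```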

2.
One of the major problems involved in the direct analysis of solid samples by electrothermal atomic absorption spectrometry (ETAAS) lies in the calibration step because non-spectral interference effects are often pronounced. Three standardization techniques have been described and used in solid sampling-ETAAS: (i) standard additions method; (ii) calibration relative to a certified reference material; and (iii) calibration curve technique. However, an adequate statistical evaluation of the uncertainty in the analyte concentration in the solid sample is most frequently neglected, and reported errors may be seriously underestimated. This can be attributed directly to the complexity of the statistical expressions required to accurately account for errors in each of the calibration techniques mentioned above, and the general lack of relevant reference literature. The object of this work has been to develop a computer package which will perform the necessary statistical analyses of solid sampling-ETAAS data; the result is the program “SOLIDS” described here in the form of an electronic publication in Spectrochimica Acta Electronica, the electronic section of Spectrochimica Acta Part B. The program could also be useful in other analytical fields where similar calibration methods are used. The hard copy text, outlining the calibration models and their associated errors, is accompanied by a diskette containing the program, some data files, and a manual. Use of the program is exemplified in the text, with some of the data files discussed included on the diskette which, together with the manual, should enable the reader to become familiarized with the operation of the program, and the results generated.
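For one of the three calibration modes, the standard additions method, the uncertainty of the extrapolated concentration can be propagated from the regression statistics using the textbook x-intercept expression. The sketch below shows only that calculation; it is not the SOLIDS code, and the example data are assumed.

```python
# Hedged sketch: uncertainty of a standard-additions result from the
# regression statistics (textbook x-intercept formula, not the SOLIDS code).
import numpy as np

def standard_additions(added, signal):
    added, signal = np.asarray(added, float), np.asarray(signal, float)
    n = added.size
    b, a = np.polyfit(added, signal, 1)          # slope, intercept
    resid = signal - (a + b * added)
    s_yx = np.sqrt(np.sum(resid**2) / (n - 2))   # residual standard deviation
    c0 = a / b                                   # extrapolated concentration
    Sxx = np.sum((added - added.mean())**2)
    s_c0 = (s_yx / b) * np.sqrt(1.0 / n + signal.mean()**2 / (b**2 * Sxx))
    return c0, s_c0

# Assumed example data: added concentrations and measured absorbances
conc, u = standard_additions([0, 5, 10, 15, 20], [0.32, 0.57, 0.80, 1.07, 1.32])
print(f"c = {conc:.2f} +/- {u:.2f} (same units as the additions)")
```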

3.
A tandem mass spectral database system consists of a library of reference spectra and a search program. State-of-the-art search programs show a high tolerance for variability in compound-specific fragmentation patterns produced by collision-induced decomposition and enable sensitive and specific 'identity search'. In this communication, performance characteristics of two search algorithms combined with the 'Wiley Registry of Tandem Mass Spectral Data, MSforID' (Wiley Registry MSMS, John Wiley and Sons, Hoboken, NJ, USA) were evaluated. The search algorithms tested were the MSMS search algorithm implemented in the NIST MS Search program 2.0g (NIST, Gaithersburg, MD, USA) and the MSforID algorithm (John Wiley and Sons, Hoboken, NJ, USA). Sample spectra were acquired on different instruments and, thus, covered a broad range of possible experimental conditions, or were generated in silico. For each algorithm, more than 30 000 matches were performed. Statistical evaluation of the library search results revealed that, in principle, both search algorithms can be combined with the Wiley Registry MSMS to create a reliable identification tool. It appears, however, that a higher degree of spectral similarity is necessary to obtain a correct match with the NIST MS Search program. This characteristic of the NIST MS Search program has a positive effect on specificity, as it helps to avoid false positive matches (type I errors), but it reduces sensitivity. Thus, particularly with sample spectra acquired on instruments differing in their setup from tandem-in-space type fragmentation, a comparably higher number of false negative matches (type II errors) was observed when searching the Wiley Registry MSMS. Copyright © 2013 John Wiley & Sons, Ltd.
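As an illustration of what "degree of spectral similarity" means in an identity search, the sketch below computes a generic normalized dot-product (cosine) match score between a sample spectrum and a library spectrum. It is deliberately not the NIST MS Search or MSforID scoring algorithm, and the peak lists and matching tolerance are assumed values.

```python
# Generic cosine (dot-product) spectral match score - an illustration of
# "spectral similarity"; it is NOT the NIST MS Search or MSforID algorithm.
import numpy as np

def match_score(sample, reference, tol=0.3):
    """sample/reference: dicts {m/z: intensity}; tol: m/z matching window."""
    mz_ref = sorted(reference)
    s, r = [], []
    for mz in mz_ref:
        hits = [i for m, i in sample.items() if abs(m - mz) <= tol]
        s.append(max(hits) if hits else 0.0)     # unmatched library peaks count as zero
        r.append(reference[mz])
    s, r = np.sqrt(s), np.sqrt(r)                # square-root intensity weighting
    cos = float(s @ r) / (np.linalg.norm(s) * np.linalg.norm(r) + 1e-12)
    return 1000.0 * cos                          # scaled like a library match factor

lib = {91.05: 100, 65.04: 12, 39.02: 8}          # assumed reference spectrum
spec = {91.06: 80, 65.05: 10, 55.00: 3}          # assumed sample spectrum
print(match_score(spec, lib))
```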

4.
A method is presented to monitor for accuracy and precision of chemical analyses based on the use of a control reference sample (CRS) and blind duplicates of project samples. The major advantage of the method is that it works with real samples instead of international (or laboratory) reference standards. Thus, it will take into account changes in absolute or relative errors over the whole observed concentration range. For each project and each sample type, respectively, and for all elements analysed, it provides realistic estimates of precision and — if there are any determinations at low concentrations — of the practical instead of the rather meaningless theoretical detection limit. By introducing robust and resistant statistics it is possible to drastically reduce the number of samples necessary for the monitoring procedure. As an additional advantage, these statistics are independent of any distribution model and solely reflect the data structure. A program for the whole monitoring procedure, the Laboratory Control Package (LCP), has been written in FORTRAN. It can be implemented on any personal computer with graphic capabilities.
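A minimal sketch of the monitoring idea follows: a robust (median/MAD) summary of the control reference sample results and a pooled precision estimate from blind duplicate pairs. It is written in Python purely for illustration and is not the FORTRAN LCP code; the example values are assumed.

```python
# Hedged sketch of the monitoring idea (not the FORTRAN LCP code):
# robust location/spread for the control reference sample (CRS) and a
# precision estimate from blind duplicate pairs.
import numpy as np

def robust_summary(crs_values):
    x = np.asarray(crs_values, float)
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))    # MAD scaled to approximate sigma
    return med, mad

def duplicate_precision(pairs):
    """pairs: iterable of (result1, result2) blind duplicates."""
    d = np.array([a - b for a, b in pairs], float)
    return np.sqrt(np.sum(d**2) / (2 * d.size))  # pooled within-pair s.d.

print(robust_summary([10.2, 9.8, 10.1, 10.0, 13.5, 9.9]))   # outlier-resistant
print(duplicate_precision([(10.1, 9.7), (5.2, 5.5), (0.8, 1.1)]))
```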

5.
A simple experimental method for the efficiency calibration of germanium detectors, especially for environmental samples, is presented, using only the natural radionuclides in the sample. The method is based on the fact that for the energy range above 300 keV the full-energy-peak efficiency of a Ge detector can be described in a first-order approximation by a linear interpolation curve in the log-log display, with errors normally lying under 5%. Photons with different energies which are emitted from one radionuclide yield count rates which are correlated to the corresponding efficiencies. From this correlation one coefficient of the interpolation curve — a first-order polynomial — can be calculated. The second coefficient can be obtained from the count rate of 40K, resulting from KCl, which is mixed homogeneously with the sample. Especially for environmental samples with large volumes, this method is very useful, because it takes into account the self-absorption of photons in the sample.
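A numerical sketch of the two-coefficient calibration is given below: the slope of the log-log efficiency line, eps(E) = a·E^b, follows from the count-rate ratio of two gamma lines of one natural radionuclide (the unknown activity cancels), and the intercept is anchored by the 40K count rate from a weighed KCl admixture. The nuclear-data constants used here (emission probabilities, specific 40K activity per gram of KCl) are approximate assumed values and should be verified before use.

```python
# Hedged sketch of the two-coefficient efficiency calibration
# eps(E) = a * E**b  (straight line in log-log space, valid above ~300 keV).
import math

def slope_from_one_nuclide(E1, p1, n1, E2, p2, n2):
    """b from two gamma lines (energy in keV, emission probability, net count rate)
    of the same radionuclide; the unknown activity cancels in the ratio."""
    return math.log((n1 * p2) / (n2 * p1)) / math.log(E1 / E2)

def intercept_from_k40(n_K, m_KCl_g, b,
                       E_K=1460.8,          # keV
                       p_K=0.1066,          # gamma emission probability (approx.)
                       a_spec=16.1):        # Bq of K-40 per gram of KCl (approx.)
    A_K = a_spec * m_KCl_g                  # activity of the admixed K-40
    return n_K / (A_K * p_K * E_K**b)

# Example with assumed count rates (cps) for two Bi-214 lines and for K-40:
b = slope_from_one_nuclide(609.3, 0.455, 0.80, 1764.5, 0.153, 0.12)
a = intercept_from_k40(n_K=0.05, m_KCl_g=10.0, b=b)
eps = lambda E: a * E**b
print(b, eps(662.0))
```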

6.
Window diagrams that optimize for the separation of only one or a few components in a complex mixture are applied to on-line process analysis, where speed of analysis is more important than a complete separation of all components in the sample. A window diagram based on retention indices is the most useful for quickly evaluating the feasibility of a given pair of phases, while the one with the most directly usable output is based on partition ratios, because these can be applied directly to the columns at hand. A PC-based spreadsheet program with built-in specific retention volume data for the common liquid phases is described as a tool for selecting the optimum ratio of lengths for the columns at hand.
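The sketch below shows, under the simplifying assumption that the effective retention factor of a coupled column pair is the length-weighted mean of the retention factors on the two phases (pressure-drop effects ignored), how a partition-ratio window diagram picks the length ratio that maximizes the worst-pair separation factor. The retention factors are assumed example values; this is not the spreadsheet described in the abstract.

```python
# Hedged sketch of a partition-ratio window diagram for a two-column set,
# assuming k_eff ≈ phi*k_A + (1-phi)*k_B with phi the fractional length of
# column A (pressure-drop and hold-up differences are ignored here).
import numpy as np

k_A = {"x": 2.0, "y": 2.3, "z": 5.0}   # retention factors on phase A (assumed)
k_B = {"x": 4.0, "y": 3.1, "z": 3.3}   # retention factors on phase B (assumed)
names = list(k_A)

def worst_alpha(phi):
    """Smallest separation factor among all pairs at length fraction phi."""
    k = {c: phi * k_A[c] + (1 - phi) * k_B[c] for c in names}
    alphas = []
    for i, c1 in enumerate(names):
        for c2 in names[i + 1:]:
            hi, lo = max(k[c1], k[c2]), min(k[c1], k[c2])
            alphas.append(hi / lo if lo > 0 else 1.0)
    return min(alphas)

phis = np.linspace(0, 1, 201)
windows = [worst_alpha(p) for p in phis]
best = phis[int(np.argmax(windows))]
print(f"optimum length fraction of column A ≈ {best:.2f}, "
      f"worst-pair alpha ≈ {max(windows):.3f}")
```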

7.
The present paper focuses on determining the number of PLS components by using resampling methods such as cross validation (CV), Monte Carlo cross validation (MCCV), bootstrapping (BS), etc. To resample the training data, random non-negative weights are assigned to the original training samples and a sample-weighted PLS model is developed without increasing the computational burden much. Random weighting is a generalization of the traditional resampling methods and is expected to have a lower risk of producing an insufficient training set. For prediction, only the training samples with random weights less than a threshold value are selected, to ensure that the prediction samples have little influence on training. For complicated data, because the optimal number of PLS components is often not unique or readily distinguished and there might exist an optimal region of model complexity, the distribution of prediction errors can be more useful than a single value of the root mean squared error of prediction (RMSEP). Therefore, the distribution of prediction errors is estimated by repeated random sample weighting (RSW) and used to determine model complexity. RSW is compared with its traditional counterparts, CV, MCCV, BS, and a recently proposed randomization test method, to demonstrate its usefulness. Copyright © 2010 John Wiley & Sons, Ltd.
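A rough sketch of the random-sample-weighting idea is given below using scikit-learn's PLSRegression. The per-sample weights are applied only approximately, by weighted centering plus sqrt(weight) row scaling, and the low-weight samples are excluded from the fit altogether rather than merely down-weighted, so this mimics but does not reproduce the sample-weighted PLS of the paper; function and parameter names are illustrative.

```python
# Hedged sketch of random sample weighting (RSW) for choosing the number of
# PLS components.  Weights are applied approximately (weighted centering plus
# sqrt(weight) row scaling before an ordinary PLS fit); low-weight samples are
# held out entirely, a slightly stronger choice than merely down-weighting them.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def rsw_rmsep(X, y, max_lv=10, n_repeats=200, threshold=0.3, rng=None):
    rng = np.random.default_rng(rng)
    X, y = np.asarray(X, float), np.asarray(y, float).ravel()
    errors = np.full((n_repeats, max_lv), np.nan)
    for r in range(n_repeats):
        w = rng.random(X.shape[0])                     # random non-negative weights
        test = w < threshold                           # low-weight samples -> prediction set
        if test.all() or (~test).sum() <= max_lv:
            continue
        wt = w[~test]
        xm = np.average(X[~test], axis=0, weights=wt)  # weighted means
        ym = np.average(y[~test], weights=wt)
        Xw = np.sqrt(wt)[:, None] * (X[~test] - xm)
        yw = np.sqrt(wt) * (y[~test] - ym)
        for lv in range(1, max_lv + 1):
            pls = PLSRegression(n_components=lv, scale=False).fit(Xw, yw[:, None])
            pred = ym + pls.predict(X[test] - xm).ravel()
            errors[r, lv - 1] = np.sqrt(np.mean((pred - y[test]) ** 2))
    return errors   # distribution of RMSEP per number of components

# The number of components is then read off the RMSEP distributions, e.g. the
# smallest model whose median RMSEP is close to the overall minimum.
```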

8.
Inductively coupled plasma mass spectrometry of environmental and biological samples is often hampered by spectral and non-spectral interferences. Spectral interferences, caused by the limited resolution of the quadrupole mass spectrometer, can be eliminated in a variety of ways; for their identification, inspection of the signal versus the carrier gas flow rate is useful. Anion exchange allows the removal of most S- and Cl-containing compounds, which are at the origin of the majority of spectral interferences. Matrix modification, for example the addition of ethanol and subsequent optimization of the gas flow rates, in a number of cases enables the reduction of the interferences to insignificant values. Often a mathematical correction based on isotopic signal ratios can be applied. Non-spectral interferences can be divided into reversible matrix effects, i.e. those occurring while the sample is being measured, and irreversible matrix effects, i.e. clogging of the nebulizer and sampling orifices or deposition on the torch or in the ion lens stack. The errors associated with non-spectral interferences can be eliminated by appropriate calibration procedures, adapted sample preparation, or limitation of the amount of sample delivered to the nebulizer, plasma, and sampling devices, for example by the application of flow injection. Applications of all the elimination procedures are described for the analysis of sea-water, estuarine water, soil and sewage extracts, percolate water, urine, serum and wine.

9.
For the general case in which the specific heat of the sample is a function of temperature both before and after the phase transition, the entire transition process experienced by a sample during a DTA measurement was simulated dynamically, verifying the correctness of the latent-heat formula and the corresponding peak-area method used in DTA. It is shown that, for a reversible transition, taking the arithmetic mean of the latent heats measured during the heating and cooling runs with the area method proposed in the paper minimizes the measurement errors caused by the temperature gradient within the sample and by the temperature dependence of its specific heat.

10.
A method is proposed which involves sample pretreatment followed by electrothermal atomic absorption spectrometry (ETAAS) for determination of cadmium in human urine. A microwave digestion system was devised to accommodate double-closed vessels for simultaneous digestion of batches of up to 24 urine samples in about 20 min. After digestion, matrix substances which might interfere were removed using silica-immobilized 8-hydroxyquinoline (I-8HOQ) columns. The analyte adsorbed on the column was then eluted with dilute nitric acid solution and determined by ETAAS using a fast temperature program. Neither ashing steps in the furnace heating program nor use of matrix modifiers was necessary. The accuracy, precision, limit of detection, and sample throughput of the method were evaluated. With meticulous control of systematic errors which may be introduced in the pretreatment procedures, the present method can serve as a reference technique for the analysis of Cd in urine samples. Received: 29 July 1996/Revised: 30 September 1996/Accepted: 13 October 1996

11.
A method is proposed which involves sample pretreatment followed by electrothermal atomic absorption spectrometry (ETAAS) for determination of cadmium in human urine. A microwave digestion system was devised to accommodate double-closed vessels for simultaneous digestion of batches of up to 24 urine samples in about 20 min. After digestion, matrix substances which might interfere were removed using silica-immobilized 8-hydroxyquinoline (I-8HOQ) columns. The analyte adsorbed on the column was then eluted with dilute nitric acid solution and determined by ETAAS using a fast temperature program. Neither ashing steps in the furnace heating program nor use of matrix modifiers was necessary. The accuracy, precision, limit of detection, and sample throughput of the method were evaluated. With meticulous control of systematic errors which may be introduced in the pretreatment procedures, the present method can serve as a reference technique for the analysis of Cd in urine samples. Received: 29 July 1996/Revised: 30 September 1996/Accepted: 13 October 1996

12.
A new external calibration procedure for FT-ICR mass spectrometry is presented: stepwise-external calibration. This method is demonstrated for MALDI analysis of peptide mixtures, but is applicable to any ionization method. For this procedure, the masses of analyte peaks are first accurately measured at a low trapping potential (0.63 V) using external calibration. These accurately determined (< 1 ppm accuracy) analyte peaks are used as internal calibrant points for a second mass spectrum that is acquired for the same sample at a higher trapping potential (1.0 V). The second mass spectrum has an approximately 10-fold improvement in detection dynamic range compared with the first spectrum acquired at a low trapping potential. A calibration equation that accounts for local and global space charge is shown to provide mass accuracy with external calibration that is nearly identical to that of internal calibration, without the drawbacks of experimental complexity or reduction of abundance dynamic range. For the 609 mass peaks measured using the stepwise-external calibration method, the root-mean-square error is 0.9 ppm. The errors appear to have a Gaussian distribution; 99.3% of the mass errors are shown to lie within three times the sample standard deviation (2.6 ppm) of their true value.
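The paper's calibration equation with local and global space-charge terms is not reproduced here; as a hedged illustration of the stepwise idea, the sketch below fits the widely used two-term ICR relation m/z = A/f + B/f² to the peaks whose masses were fixed at low trapping potential and then applies it to the remaining peaks of the high-trapping-potential spectrum. The frequencies and masses in the example are assumed values.

```python
# Hedged sketch: fit the common two-term ICR calibration  m/z = A/f + B/f**2
# to the peaks whose masses were fixed at low trapping potential, then apply
# it to the rest of the high-trapping-potential spectrum.  (The paper's own
# calibration equation additionally models local/global space charge.)
import numpy as np

def fit_two_term(freqs_hz, known_mz):
    f = np.asarray(freqs_hz, float)
    G = np.column_stack([1.0 / f, 1.0 / f**2])
    A, B = np.linalg.lstsq(G, np.asarray(known_mz, float), rcond=None)[0]
    return A, B

def apply_calibration(freqs_hz, A, B):
    f = np.asarray(freqs_hz, float)
    return A / f + B / f**2

# Calibrant peaks: cyclotron frequencies (Hz) and their accurately known m/z
# (assumed example values).
A, B = fit_two_term([155_000.0, 120_500.0, 98_700.0],
                    [1046.54, 1346.73, 1644.21])
print(apply_calibration([132_250.0], A, B))   # m/z of an uncalibrated analyte peak
```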

13.
Total Reflection X-ray Fluorescence analysis (TXRF) is widely used in the semiconductor industry for the analysis of silicon wafer surfaces. Typically, an external standard is used for the calibration of the spectrometer, an approach that is sensitive to quantification errors. For small sample amounts the thin-film approximation is valid: absorption effects of the exciting and the detected radiation are neglected, and the relation between sample amount and fluorescence intensity is linear. For higher total sample amounts, deviations from linearity have been observed (saturation effect), and these deviations are one of the difficulties of external-standard quantification. A theoretical determination of the ideal TXRF sample shape is the subject of the present work, with the aim of improving the calibration process and therefore the quantification. The fluorescence intensity emitted by different theoretical sample shapes was calculated while several parameters were varied (excitation energy, density, diameter-to-height ratio of the sample), in order to establish which sample shape yields the highest fluorescence intensity and exhibits the smallest saturation effect. The comparison of the different sample shapes showed that the ring shape matches the ideal TXRF sample shape best.
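As a toy illustration of the saturation effect mentioned above (not the shape modelling of the paper), the sketch below integrates the fluorescence from a homogeneous residue of thickness t with an effective attenuation coefficient; the result reduces to the thin-film limit for small t and flattens off for thick residues. All numbers are assumed.

```python
# Toy illustration of the "saturation effect": relative fluorescence from a
# homogeneous residue of thickness t with effective attenuation coefficient
# mu_eff (cm^2/g) and density rho (g/cm^3).  The real calculation in the paper
# also models the sample shape (ring, droplet, ...) and the excitation geometry.
import math

def relative_intensity(thickness_cm, mu_eff=300.0, rho=2.0):
    chi = mu_eff * rho * thickness_cm
    # 1.0 = thin-film limit; smaller values mean self-absorption/saturation
    return (1.0 - math.exp(-chi)) / chi if chi > 0 else 1.0

for t in (1e-6, 1e-5, 1e-4, 1e-3):            # thickness in cm (assumed)
    print(f"t = {t:.0e} cm  ->  {relative_intensity(t):.3f} of the thin-film signal")
```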

14.
We apply a Bayesian parameter estimation technique to a chemical kinetic mechanism for n-propylbenzene oxidation in a shock tube to propagate errors in experimental data to errors in Arrhenius parameters and predicted species concentrations. We find that, to apply the methodology successfully, conventional optimization is required as a preliminary step. This is carried out in two stages: First, a quasi-random global search using a Sobol low-discrepancy sequence is conducted, followed by a local optimization by means of a hybrid gradient-descent/Newton iteration method. The concentrations of 37 species at a variety of temperatures, pressures, and equivalence ratios are optimized against a total of 2378 experimental observations. We then apply the Bayesian methodology to study the influence of uncertainties in the experimental measurements on some of the Arrhenius parameters in the model as well as some of the predicted species concentrations. Markov chain Monte Carlo algorithms are employed to sample from the posterior probability densities, making use of polynomial surrogates of higher order fitted to the model responses. We conclude that the methodology provides a useful tool for the analysis of distributions of model parameters and responses, in particular their uncertainties and correlations. Limitations of the method are discussed. For example, we find that using second-order response surfaces and assuming normal distributions for propagated errors is largely adequate, but not always.
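A minimal sketch of the two-stage preliminary optimization, using SciPy's Sobol sampler for the quasi-random global search followed by a local gradient-based refinement, is shown below. The real objective (the misfit against the 2378 shock-tube observations) is not available here, so a toy quadratic stands in for it, and BFGS replaces the hybrid gradient-descent/Newton scheme of the paper.

```python
# Sketch of the two-stage preliminary optimization: Sobol low-discrepancy
# screening of the (scaled) Arrhenius parameters, then local refinement.
# A toy quadratic misfit stands in for the real shock-tube objective.
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

def misfit(theta):
    """Stand-in for the sum of squared deviations from the observations."""
    return float(np.sum((theta - np.array([0.3, -1.2, 2.0]))**2))

lower, upper = np.array([-5.0, -5.0, -5.0]), np.array([5.0, 5.0, 5.0])

# Stage 1: quasi-random global search over the admissible box.
sampler = qmc.Sobol(d=3, scramble=True, seed=1)
points = qmc.scale(sampler.random_base2(m=10), lower, upper)   # 2**10 candidates
best = points[int(np.argmin([misfit(p) for p in points]))]

# Stage 2: local refinement from the best Sobol point
# (BFGS here; the paper uses a hybrid gradient-descent/Newton scheme).
result = minimize(misfit, best, method="BFGS")
print(result.x, result.fun)
```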

15.
In this work, the calculation procedure for the Combined Standard Uncertainty (CSU) is given as applied to spectrophotometric measurements. For the assessment of the computations, different approaches are discussed, such as the contributions to the Combined Standard Uncertainty of reproducibility, repeatability, total bias, the calibration curve, and the type of measurand. Results of inter-laboratory measurements confirmed the assumptions. To minimize error propagation, a controlled experimental procedure called "errors propagation break-up" (ERBs) was applied by this laboratory. The uncertainty of the sample concentration derived from a reference curve dominates the Combined Standard Uncertainty, whereas the contribution of the method and the laboratory bias (total bias) to the CSU is insignificant under controlled measurement conditions. This work develops a simple methodology that can be used to evaluate uncertainty and error control in routine methods used by both academic researchers and the industrial sector.
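The core of a combined standard uncertainty is the GUM-style root sum of squares of the individual standard-uncertainty contributions, optionally multiplied by a coverage factor. The helper below shows only that combination step, with assumed component values; it is not the ERBs procedure of the paper.

```python
# Minimal GUM-style combination of standard uncertainty contributions
# (root sum of squares), not the paper's ERBs procedure itself.
import math

def combined_standard_uncertainty(*components):
    """components: standard uncertainties of independent contributions,
    e.g. repeatability, reproducibility, total bias, calibration curve."""
    return math.sqrt(sum(u**2 for u in components))

u_c = combined_standard_uncertainty(0.012, 0.020, 0.008, 0.035)  # assumed values
U = 2 * u_c                                   # expanded uncertainty, k = 2 (~95 %)
print(f"u_c = {u_c:.3f},  U (k=2) = {U:.3f}")
```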

16.
17.
Total reflection X-ray fluorescence analysis (TXRF) offers nondestructive qualitative and quantitative analysis of trace elements. Owing to its outstanding properties, TXRF is widely used in the semiconductor industry for the analysis of silicon wafer surfaces and in the chemical analysis of liquid samples. Two problems occur in quantification: the large statistical uncertainty in wafer surface analysis and the validity of using an internal standard in chemical analysis. In general, TXRF is known to allow linear calibration. For small sample amounts (low-nanogram region) the thin-film approximation is valid, neglecting absorption effects of the exciting and the detected radiation. For higher total sample amounts, deviations from the linear relation between fluorescence intensity and sample amount can be observed. These can be caused by the sample itself, because inhomogeneities and different sample shapes can lead to differences in the emitted fluorescence intensities and to large statistical errors. The aim of the study was to investigate the elemental distribution inside a sample. Single- and multi-element samples were investigated with synchrotron-radiation-induced micro X-ray fluorescence analysis (SR-μ-XRF) and with an optical microscope. It could be shown that the features in the microscope images correspond to the investigated elements, which allows the sample shape and potential inhomogeneities to be determined using only light-microscope images. For the multi-element samples, it was furthermore shown that the elemental distribution inside the samples is homogeneous, which justifies internal standard quantification.

18.
Sampling of iron ore shipments for commercial transactions should be carried out in accordance with ISO 3082. Although compliance has improved in recent years, significant problems remain in several areas, including: 1) the design and operation of high-flow-rate sample cutters; 2) the division of primary increments at the secondary sampling stage; 3) the crushing performance of the crushers used to reduce particle size before sample division; 4) the sample mass retained at each sampling stage; and 5) sample division. Sampling is where the measurement process begins, so if the sample is not representative, the whole process is compromised from the start. The "golden rule" of correct sampling is that "every part of the material being sampled must have an equal probability of being collected and becoming part of the final analysis sample". If this rule is not observed, bias is easily introduced. For example, in practice it is impossible to collect a representative sample in situ from a stockpile or a ship; sampling must take place while the stockpile is being built or reclaimed, or while the ship is being loaded or unloaded. Once the golden rule of correct sampling is satisfied, the design of the sampling station should also meet the following requirements: 1) the mass of the collected sample must be large enough, taking particle size into account, to reduce the fundamental, grouping and segregation errors to acceptable levels; 2) a sufficient number of increments must be taken to reduce the long-term quality-fluctuation error to an acceptable level; 3) sampling locations should be chosen correctly to avoid effects caused by periodic quality variations introduced by equipment such as bucket-wheel reclaimers and centrifugal pumps; and 4) additional errors such as sample contamination, sample spillage, particle degradation and operator error must be eliminated from the outset. Therefore, to eliminate poor sampling practice and ensure that sampling delivers what it should, improved staff training and awareness are needed to guarantee that the samples taken are free of significant bias and that the overall precision of the samples meets the required task specifications.
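As a rough sketch of how the requirements above interact, the code below combines an increment-to-increment variance, a preparation variance, and a measurement variance into an overall standard deviation and back-calculates the number of increments needed for a target precision. The variance components are illustrative assumptions, not values from ISO 3082.

```python
# Hedged sketch: how increment number and the later stages combine into the
# overall precision of a sampling/preparation/measurement scheme.
# Variance components are illustrative, not values from ISO 3082.
import math

def overall_sd(sigma_increment, n_increments, sigma_prep, sigma_meas):
    """Standard deviation of the final result for one lot."""
    return math.sqrt(sigma_increment**2 / n_increments
                     + sigma_prep**2 + sigma_meas**2)

def increments_needed(sigma_increment, sigma_prep, sigma_meas, target_sd):
    """Smallest n for which the overall s.d. meets the target (if reachable)."""
    rest = target_sd**2 - sigma_prep**2 - sigma_meas**2
    if rest <= 0:
        raise ValueError("target unreachable: preparation/measurement too noisy")
    return math.ceil(sigma_increment**2 / rest)

print(overall_sd(1.5, 60, 0.10, 0.05))           # e.g. % Fe (assumed values)
print(increments_needed(1.5, 0.10, 0.05, 0.25))
```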

19.
A new software package (THESEUS) has been assembled for the analysis of DSC data concerning the thermal denaturation of biological macromolecules. The system is useful for obtaining accurate physico-chemical information while bypassing the random and systematic errors that are very common in these experiments. It can also be used for handling data from other instruments and methodologies that give thermodynamic, spectroscopic, or other kinds of data as a function of temperature. Because much of the research in this field is exploratory in nature and new unfolding mechanisms are continually described or hypothesized in the current literature, we have written and assembled this powerful and flexible program of general applicability, in order to put the operator in a position to control each step of the calculation procedure and to use his or her own experience in choosing the best way to solve unexpected problems.

20.
This article is a criticism of the strategy of adding (isotope-labelled) internal standards of semi-volatile hydrophobic organic compounds directly onto the surface of particulate sample matrices such as sediment, soil and fly ash, in a small aliquot (mL) of solvent, before trace-level analysis. The use of the internal standard is intended to compensate for incomplete extraction, clean-up losses, dilution errors and instrument variations. However, direct addition of internal standards to sample matrices creates two possibilities for inaccurate results through processes affecting only the internal standard: first, evaporation losses of standard from the sample matrix during evaporation of the carrier solvent; second, the native analyte and the internal standard sorbing to the sample matrix with differing strength. Both processes can introduce systematic and random error into the result. A systematic error of 74% due to evaporation losses of tetrachlorinated dibenzo-p-dioxins is observed, while the corresponding error for octachlorinated dioxin is 0%. The associated random error is 45% for the tetrachlorinated congeners, falling to 1–4% relative standard deviation for the hepta- and octachlorinated dioxins. For laboratory staff, the evaporation losses of standard (and native) compounds cause, besides dust, an additional risk of inhalation exposure. The internal standard should instead be added to the extraction solvent after the extraction. Smaller systematic errors (10–20%) and associated random errors due to irreversible sorption are discussed.
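A short worked example of the first failure mode: because internal-standard quantification divides the analyte response by the internal-standard response, any loss that affects only the standard inflates the reported concentration by the reciprocal of the standard's recovery. The fractions used below are assumed for illustration, not the dioxin figures from the article.

```python
# Worked toy example (assumed numbers): how loss of only the internal
# standard biases an internal-standard quantification.
def reported_over_true(fraction_is_lost, fraction_analyte_lost=0.0):
    """Ratio of reported to true concentration when quantifying as
    C = (analyte response / IS response) * known IS amount / response factor."""
    return (1.0 - fraction_analyte_lost) / (1.0 - fraction_is_lost)

# If 30 % of the spiked standard evaporates with the carrier solvent while the
# native analyte, bound in the matrix, is not lost at all:
print(reported_over_true(0.30))        # -> about 1.43, i.e. a +43 % systematic error
```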
