Similar Documents (20 results)
1.
In the validation of quantitative analysis methods, knowledge of the response function is essential, as it describes, within the range of application, the relationship between the response (the measurement signal) and the concentration or quantity of the analyte in the sample. The most common response function is obtained by simple linear regression, estimating the regression parameters slope and intercept by the least squares method as the general fitting method. The assumption in this fitting is that the response variance is constant, whatever the concentration within the range examined. The straight calibration line may perform unacceptably due to the presence of outliers or unexpected curvature of the line. The suitability of calibration lines can be checked by calculating a well-defined quality coefficient based on a constant standard deviation. The concentration value for a test sample calculated by interpolation from the least squares line is of little value unless it is accompanied by an estimate of its random variation, expressed by a confidence interval. This confidence interval results from the uncertainty in the measurement signal combined with the confidence interval for the regression line at that measurement signal, and is characterized by a standard deviation s_x0 calculated by an approximate equation. This approximate equation is only valid when a characteristic value g, calculated from specific regression line parameters such as the slope, the standard error of the estimate and the spread of the abscissa values around their mean, is below a critical value described in the literature. It is mathematically demonstrated that, with respect to this critical limit value for g, the proposed value for the quality coefficient applied as a suitability check for the linear regression line as calibration function depends only on the number of calibration points and the spread of the abscissa values around their mean.
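A minimal Python sketch of this interpolation interval, using the standard textbook form of the approximate s_x0 equation; the function name and calibration data below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy import stats

def interpolation_ci(x, y, y0, m=1, alpha=0.05):
    """Approximate CI for a concentration read off a least-squares calibration line.
    x, y: calibration data; y0: mean signal of the test sample (m replicates)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    b, a = np.polyfit(x, y, 1)                    # slope, intercept
    s_yx = np.sqrt(np.sum((y - (a + b * x)) ** 2) / (n - 2))  # std. error of estimate
    Sxx = np.sum((x - x.mean()) ** 2)             # spread of abscissa values
    x0 = (y0 - a) / b                             # interpolated concentration
    # approximate standard deviation of x0 (valid only while g is small)
    s_x0 = (s_yx / b) * np.sqrt(1 / m + 1 / n + (y0 - y.mean()) ** 2 / (b ** 2 * Sxx))
    t = stats.t.ppf(1 - alpha / 2, n - 2)
    g = t ** 2 * s_yx ** 2 / (b ** 2 * Sxx)       # the characteristic value g
    return x0, (x0 - t * s_x0, x0 + t * s_x0), g

x = [0, 2, 4, 6, 8, 10]
y = [0.02, 0.41, 0.80, 1.22, 1.58, 2.02]
print(interpolation_ci(x, y, y0=1.0))
```

The returned g is the characteristic value discussed above; the approximation is only trusted when g stays well below its critical value.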

2.
Linear regression of calibration lines passing through the origin was investigated for three models of y-direction random errors: normally distributed errors with an invariable standard deviation (SD), and log-normally and normally distributed errors with an invariable relative standard deviation (RSD). The weighted (weighting factor x_i^2), geometric and arithmetic means of the ratios y_i/x_i estimate the calibration slope for these models, respectively. Regression of calibration lines with errors in both directions was also studied. The x-direction errors were assumed to be normally distributed random errors with either an invariable SD or an invariable RSD, both combined with a constant relative systematic error. The random errors disperse the true, unknown x-values about the plotted, demanded x-values, which are shifted by the constant relative systematic error. The systematic error biases the slope estimate while the random errors do not; they only increase the slope estimate uncertainty, in which the uncertainty component reflecting the range of possible values of the systematic error must additionally be included. Received: 9 May 2000 Accepted: 7 March 2001
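The three slope estimators can be written down directly; a short illustrative sketch with simulated data (a constant-RSD, log-normal error model is assumed here):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = 2.5 * x * rng.lognormal(0.0, 0.05, x.size)   # true slope 2.5, constant RSD

r = y / x
b_weighted   = np.sum(x**2 * r) / np.sum(x**2)   # = sum(x*y)/sum(x^2); constant-SD model
b_geometric  = np.exp(np.mean(np.log(r)))        # log-normal, constant-RSD model
b_arithmetic = np.mean(r)                        # normal, constant-RSD model
print(b_weighted, b_geometric, b_arithmetic)
```

Note that the x_i^2-weighted mean of the ratios collapses algebraically to the familiar through-origin least-squares slope sum(x*y)/sum(x^2).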

3.
Mathematical procedures for calibration require assumptions to be made, e.g. the homogeneity of variances and the mathematical relationship between the analyte content x and the signal y. Little is known about the magnitude of errors arising from incorrect assumptions. The variation of the standard deviation of the analytical procedure with the content of the analyte, the selection of the type of mathematical relationship between x and y, and the types of errors made in testing hypotheses are discussed. In certain practical situations, the standard deviation (s.d.) is nearly independent of x if x < 22p (p = detection limit) and the relative standard deviation (r.s.d.) is nearly independent of x if x > 50p. If the s.d. is constant, calibration relations of the type y = a + bx are frequently to be preferred; with a constant r.s.d., relations of the type log y = a + b log x have advantages.
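A small illustration of fitting the two calibration relations mentioned, with hypothetical data; in practice the choice between them would rest on replicate-based evidence of how the s.d. varies with analyte content:

```python
import numpy as np

x = np.array([1, 2, 5, 10, 20, 50], float)
y = np.array([2.1, 3.9, 10.2, 19.5, 41.0, 99.0])

b, a = np.polyfit(x, y, 1)                       # y = a + b*x      (constant s.d.)
B, A = np.polyfit(np.log10(x), np.log10(y), 1)   # log y = A + B*log x (constant r.s.d.)
print(f"linear:  y = {a:.3f} + {b:.3f} x")
print(f"log-log: log y = {A:.3f} + {B:.3f} log x")
```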

4.
Errors in the determination of low concentrations by spectrophotometry are investigated with the uranium-thiocyanate system as an example. The reagent blank has significant absorption, and measurements are made at 375 nm instead of at λmax. The error in the intercept of the calibration curve is an important factor in such measurements, and the errors involved in the estimation of 1 μg/ml (normal working range 4–40 μg/ml) are studied. It is shown that both random and systematic errors associated with the intercept are responsible for the observed errors. The two types of error are resolved by ANOVA (analysis of variance). The error in the measurement of a single value is estimated and compared with measured values for different calibration ranges. Two factors predominantly influence the error in the measured concentration: the variance from regression and the closeness of the measured value to the mean of the calibration range.

5.
Carniti, P., Cori, L. and Ragaini, V., 1978. A Critical Analysis of the Hand and Othmer-Tobias Correlations. Fluid Phase Equilibria, 2: 39–47. The Hand (H) and Othmer-Tobias (OT) correlations are extensively analyzed for 83 aqueous and 26 non-aqueous ternary systems; twenty solutropic systems are also included. 65% and 58% of the systems give a linear correlation coefficient greater than 0.99 for H and OT, respectively. An analysis of the sensitivity to systematic or random errors demonstrates the H correlation to be highly insensitive. A remarkable insensitivity, although not complete for some systematic errors, is also demonstrated for the OT correlation. However, there is sufficient proof that both are unsuitable as a check of experimental data, even though they are used for this purpose. The relation, not necessarily linear, derived from the H correlation, log(x_32/x_22) vs. log(x_31/x_11), is useful in locating the plait point.
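A hedged sketch of how the H correlation is fitted and scored; the tie-line compositions below are entirely hypothetical:

```python
import numpy as np

# Hypothetical tie-line composition ratios: x31/x11 and x32/x22 for one ternary system
x31_x11 = np.array([0.05, 0.12, 0.25, 0.48, 0.90])
x32_x22 = np.array([0.08, 0.20, 0.45, 0.95, 1.90])

u = np.log10(x31_x11)
v = np.log10(x32_x22)
b, a = np.polyfit(u, v, 1)             # Hand plot is linear in log-log coordinates
r = np.corrcoef(u, v)[0, 1]            # the linear correlation coefficient scored above
print(f"Hand plot: slope={b:.3f}, intercept={a:.3f}, r={r:.4f}")
```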

6.
In analytical chemistry, evaluating the performance accuracy of an analytical method is an important issue. When an adjusted or new method (test method) is developed, the linear measurement error model is commonly used to compare it with a reference method. In this routine practice, the measurements of the reference method are placed on the x-axis and those of the test method on the y-axis; the slope of this linear relationship then indicates the agreement between them and hence the performance of the test method. Under the assumption that both variables are subject to heteroscedastic measurement errors, a novel approach based on the concept of a generalized pivotal quantity (GPQ) is proposed to construct confidence intervals for the slope. Its performance is compared with two maximum likelihood estimation (MLE)-based approaches through simulation studies. It is shown that the proposed GPQ-based approach maintains empirical coverage probabilities close to the nominal level and yields reasonable expected lengths. The GPQ-based approach can be recommended for practical use because of its easy implementation and better performance than the MLE-based approaches in most simulation scenarios. Two real datasets are given to illustrate the approaches. Copyright © 2011 John Wiley & Sons, Ltd.

7.
An iterative method is described for determining the reaction order and activation energy from TG curves. The method makes use of equations representing the temperature integrals, derived from numerical relationships in terms of E, T, and empirical constants. Like the method of Reich and Stivala, the computation involves varying the value of n until the appropriate linear relationship gives an intercept of zero. The slope of the line is YE^x, where Y and x are constants in the equation −log I = YE^x(1/T) + log E^w + U. The method is tested using data obtained by means of a fourth-order Runge-Kutta solution of the rate law for both Arrhenius and non-Arrhenius cases.

8.
It has been shown that the MARLAP (Multi-Agency Radiological Laboratory Analytical Protocols) method for estimating the Currie detection limit, which is based on 'critical values of the non-centrality parameter of the non-central t distribution', is intrinsically biased, even if no calibration curve or regression is used. This completed the refutation of the method, begun in Part 2. With the field cleared of obstructions, the true theory underlying Currie's limits of decision, detection and quantification, as they apply in a simple linear chemical measurement system (CMS) having heteroscedastic, Gaussian measurement noise and using weighted least squares (WLS) processing, was then derived. Extensive Monte Carlo simulations were performed, on 900 million independent calibration curves, for linear, "hockey stick" and quadratic noise precision models (NPMs). With errorless NPM parameters, all the simulation results were found to be in excellent agreement with the derived theoretical expressions. Even with as much as 30% noise on all of the relevant NPM parameters, the worst absolute error in the rates of false positives and false negatives was only 0.3%.
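The false-positive-rate check can be illustrated in a far simpler setting than the paper's heteroscedastic WLS one; the sketch below assumes a known, constant blank SD and no calibration curve at all:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sigma0, alpha = 0.5, 0.05                 # assumed blank SD and target false-positive rate
Lc = stats.norm.ppf(1 - alpha) * sigma0   # Currie decision limit for a known blank SD

blanks = rng.normal(0.0, sigma0, 1_000_000)        # net signals of true blanks
print("observed false-positive rate:", np.mean(blanks > Lc))   # should be close to 0.05
```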

9.
《Analytical letters》2012,45(12):893-899
Abstract

Error propagation in linear titration methods is considered in terms of the precision of measurement of the slopes and intercepts of the titration curve. Simulated curves including random errors were analyzed. The results indicate that the analytical precision is independent of the angle between the curves when the difference in slopes is fixed.
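A minimal Monte Carlo sketch of this kind of simulation: two straight branches with added random noise, the end-point taken as the intersection of the fitted lines. The slopes, noise level and break position are arbitrary choices for illustration, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(7)

def endpoint(v, y, n_break):
    """Fit straight lines to the two branches of a linear titration curve
    and return the abscissa of their intersection (the end-point estimate)."""
    b1, a1 = np.polyfit(v[:n_break], y[:n_break], 1)
    b2, a2 = np.polyfit(v[n_break:], y[n_break:], 1)
    return (a2 - a1) / (b1 - b2)

v = np.linspace(0, 10, 21)                     # titrant volume
true_ep, s_noise = 5.0, 0.02
ends = []
for _ in range(2000):
    y = np.where(v < true_ep, 1.0 * v, true_ep + 0.2 * (v - true_ep))
    y += rng.normal(0, s_noise, v.size)
    ends.append(endpoint(v, y, n_break=10))    # first 10 points lie on branch 1
print(np.mean(ends), np.std(ends))             # end-point estimate and its precision
```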

10.
The Kováts retention index system with n-alkanes as reference standards has properties not fully explored when single, isolated or stand-alone analytes are analyzed by isothermal gas chromatography. When a homologous series of analytes is analyzed by either linear or non-linear temperature-programmed gas chromatography, the retention data of the entire series can be treated systematically to produce an I vs. Z plot that is linear, giving insight into the relationship between chemical structure and retention index. The dead time t_M is both instrument- and temperature-dependent. With no adjustment for the dead time t_M, the retention indices of analytes calculated from experimental retention times by either linear or logarithmic interpolation give statistically identical values. Linear regression analysis of the data yields the slope as the methylene value (A) and the intercept as the functionality constant or group retention factor (GRF) of the homologous series. The A and GRF values vary with chemical structure, intermolecular electronic and steric interactions, and the polarity of the column liquid phase, and can link the gas chromatographic retention index to chemical structure. Examples of the influence of molecular electronic and steric effects on retention index are given and discussed.
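The two interpolation formulas referred to are standard and easy to state; the retention times below are hypothetical:

```python
import numpy as np

def kovats_log(t_x, t_z, t_z1, z, t_m=0.0):
    """Isothermal (logarithmic) Kovats index; t_m is the dead time,
    z the carbon number of the n-alkane eluting just before the analyte."""
    lx, lz, lz1 = (np.log(t - t_m) for t in (t_x, t_z, t_z1))
    return 100 * (z + (lx - lz) / (lz1 - lz))

def kovats_linear(t_x, t_z, t_z1, z):
    """Temperature-programmed (linear) retention index."""
    return 100 * (z + (t_x - t_z) / (t_z1 - t_z))

# analyte eluting between n-nonane (z=9) and n-decane, hypothetical times in min
print(kovats_log(6.60, 5.20, 8.10, z=9, t_m=1.0))
print(kovats_linear(6.60, 5.20, 8.10, z=9))
```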

11.
N. Rodríguez  L.A. Sarabia 《Talanta》2009,77(3):1129-782
In this work, a four-way tensor is used to model the quenching effect in fluorescence measurements. The analysis of excitation-emission matrices obtained in the determination of tetracycline in tea, which acts as a quencher, shows that neither a calibration nor a standard addition based on a three-way model is feasible. The multiplicative effect of the quencher on the tetracycline signal is analysed by means of an ANOVA. By arranging the experimental data in a four-way tensor, however, it is viable to perform a calibration based on parallel factor analysis (PARAFAC) decomposition and a four-way partial least squares (4-PLS) regression to quantify tetracycline in the presence of the matrix quencher effect. The 4-PLS calibration provides better results: in the range from 40 to 220 μg L−1 it gives an average relative error in absolute value of 8.02% in prediction (3.40% in calibration). The repeatability, as a standard deviation in this range, is 5.08 μg L−1, and the method is accurate, the slope and intercept being statistically equal to 1 and 0, respectively, when a regression against true concentration is performed. Moreover, it has a decision limit (CCα) of 13.87 μg L−1 for a probability of false positive, α, equal to 0.05 and a capability of detection (CCβ) of 26.63 μg L−1 (for probabilities of false positive, α, and false negative, β, equal to 0.05).

12.
A sensitive, selective, precise and stability-indicating high-performance thin-layer chromatographic method for the analysis of nelfinavir mesylate, both as a bulk drug and in formulations, was developed and validated. The method employed TLC aluminium plates precoated with silica gel 60F-254 as the stationary phase. The solvent system consisted of toluene-methanol-acetone (7:1.5:1.5, v/v/v), which gave compact spots for nelfinavir mesylate (Rf value of 0.45±0.02). Nelfinavir mesylate was subjected to acid and alkali hydrolysis, oxidation, dry heat treatment and photodegradation; the peaks of the degradation products were well resolved from the pure drug, with significantly different Rf values. Densitometric analysis of nelfinavir mesylate was carried out in the absorbance mode at 250 nm. The linear regression analysis data for the calibration plots showed a good linear relationship, with r2 = 0.999±0.002, in the concentration range of 1000-6000 ng per spot; the mean values of the correlation coefficient, slope and intercept were 0.999±0.002, 0.014±0.001 and 21.73±1.26, respectively. The method was validated for precision, robustness and recovery. The limits of detection and quantitation were 60 and 140 ng per spot, respectively. Statistical analysis shows that the method is repeatable and selective for the estimation of this drug. As the method effectively separates the drug from its degradation products, it can be employed as a stability-indicating method.
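The abstract does not state how its detection and quantitation limits were computed; a common ICH-style calculation from calibration residuals, with hypothetical peak areas chosen to mimic the reported slope and intercept, would look like this:

```python
import numpy as np

x = np.array([1000, 2000, 3000, 4000, 5000, 6000], float)  # ng per spot
y = np.array([35.8, 49.9, 63.7, 78.2, 92.1, 105.9])        # peak area (hypothetical)

b, a = np.polyfit(x, y, 1)
s = np.sqrt(np.sum((y - (a + b * x)) ** 2) / (len(x) - 2))  # residual SD of the fit
print("LOD ~", 3.3 * s / b, "ng per spot")   # ICH 3.3*sigma/S convention
print("LOQ ~", 10 * s / b, "ng per spot")    # ICH 10*sigma/S convention
```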

13.
The retention behavior of 100 peptides was studied during high-performance liquid chromatography on a C18 column using aqueous trifluoroacetic acid as the mobile phase and acetonitrile as the mobile phase modifier in a linear gradient elution system. Retention times of the peptides were linearly related to the logarithm of the sum of Rekker's constants (R.F. Rekker, The Hydrophobic Fragmental Constant, Elsevier, Amsterdam, 1977, p. 301) for the constituent amino acids. Assuming this relationship, the best-fit constants for this system were computed by non-linear multiple regression analysis. Using the new constants, it is possible to predict retention times for a wide variety of peptides at any slope of linear gradient, if the amino acid composition is known. It also enables accurate prediction of the retention time of peptides whose amino acid composition is not known, after an analytical run with an alternate gradient.
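A sketch of the assumed relationship, with hypothetical Rekker sums and retention times (the paper's actual best-fit constants are not reproduced here):

```python
import numpy as np

# Hypothetical peptides: sum of Rekker fragmental constants vs. retention time (min)
rekker_sum = np.array([1.8, 3.2, 5.1, 8.4, 12.0])
t_r        = np.array([6.5, 10.2, 13.1, 16.4, 18.8])

# t_R = a + b * log10(sum of Rekker constants), the linear relation stated above
b, a = np.polyfit(np.log10(rekker_sum), t_r, 1)
print(f"t_R = {a:.2f} + {b:.2f} * log10(sum D)")

def predict(s):
    return a + b * np.log10(s)

print("predicted t_R for sum = 7.0:", predict(7.0))
```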

14.
15.
Abstract

The applicability of molecular parameters calculated on the basis of molecular mechanics was investigated for the prediction of the reversed-phase retention behavior of a structurally unrelated series of drug molecules. Non-polar, non-polar unsaturated and polar surface areas, surface energies, dipole moments, van der Waals radii and hydrophobicity values expressed by the logarithm of the octanol/water partition coefficient were calculated from the molecular structure. The reversed-phase retention behavior was described by the slope and the intercept of the straight lines obtained by plotting the log k′ values against the acetonitrile concentration of the mobile phase. The acetonitrile concentration (OP%0) needed for log k′ = 0 retention was also calculated from the slope and intercept values. Step-wise linear regression analyses were applied to reveal the correlations between the investigated parameters. The slope values could be described by the difference of the non-polar and non-polar accessible surface areas, or by the total surface energy values and the van der Waals radii. The intercept values could be described by the hydrophobicity parameter, the slope and the reciprocal of the dipole moment. The acetonitrile concentration for log k′ = 0 retention (OP%0) could be calculated from the hydrophobicity and the non-polar unsaturated surface area values of the investigated compounds.

16.
It is becoming increasingly common in quantitative structure/activity relationship (QSAR) analyses to use external test sets to evaluate the likely stability and predictivity of the models obtained. In some cases, such as those involving variable selection, an internal test set – i.e., a cross-validation set – is also used. Care is sometimes taken to ensure that the subsets used exhibit response and/or property distributions similar to those of the data set as a whole, but more often the individual observations are simply assigned `at random.' In the special case of MLR without variable selection, it can be analytically demonstrated that this strategy is inferior to others. Most particularly, D-optimal design performs better if the form of the regression equation is known and the variables involved are well behaved. This report introduces an alternative, non-parametric approach termed `boosted leave-many-out' (boosted LMO) cross-validation. In this method, relatively small training sets are chosen by applying optimizable k-dissimilarity selection (OptiSim) using a small subsample size (k = 4, in this case), with the unselected observations being reserved as a test set for the corresponding reduced model. Predictive errors for the full model are then estimated by aggregating results over several such analyses. The countervailing effects of training and test set size, diversity, and representativeness on PLS model statistics are described for CoMFA analysis of a large data set of COX2 inhibitors.
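A simplified, hypothetical rendering of the subsample-and-pick-dissimilar pattern behind OptiSim-style selection; the real OptiSim algorithm and the CoMFA/PLS modelling are not reproduced here, only the training/test split mechanics:

```python
import numpy as np

def optisim_like(X, n_select, k=4, rng=None):
    """Simplified k-dissimilarity (OptiSim-style) selection: at each step draw a
    random subsample of k unselected candidates and keep the one farthest from
    the points already selected."""
    rng = rng or np.random.default_rng(0)
    pool = list(range(len(X)))
    sel = [pool.pop(int(rng.integers(len(pool))))]
    while len(sel) < n_select and pool:
        sub = rng.choice(pool, size=min(k, len(pool)), replace=False)
        # distance of each candidate to its nearest already-selected point
        d = [min(np.linalg.norm(X[c] - X[s]) for s in sel) for c in sub]
        pick = int(sub[int(np.argmax(d))])
        sel.append(pick)
        pool.remove(pick)
    return sel

X = np.random.default_rng(1).normal(size=(50, 3))    # 50 hypothetical descriptor vectors
train = optisim_like(X, n_select=15, k=4)
test = [i for i in range(len(X)) if i not in train]  # reserved as the test set
```

Boosted LMO would repeat this split with different seeds, fit the reduced model on each training set, and aggregate the test-set errors.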

17.
In chemical analyses performed by laboratories, one faces the problem of determining the concentration of a chemical element in a sample. In practice, the problem is handled with the so-called linear calibration model, which assumes that the errors associated with the independent variable are negligible compared with those of the response. In this work, a new linear calibration model is proposed in which the independent variables are subject to heteroscedastic measurement errors. A simulation study is carried out to verify some properties of the estimators derived for the new model, and the usual calibration model is also considered for comparison with the new approach. Three applications are presented to verify the performance of the new approach. Copyright © 2010 John Wiley & Sons, Ltd.

18.
《Analytical letters》2012,45(14):1149-1158
Abstract

When a solution containing a single reactant is subjected to kinetic analysis with a reagent giving rise to a pseudo-first-order reaction, non-linear regression analysis of the concentration-time data yields a random scatter of the residuals around the best fit to the pseudo-first-order equation. If the same equation is used when a second reactant is also present, systematic errors arise and yield a deviation plot having a characteristic shape. If the amplitude of that plot is substantially larger than the random error of measurement, the presence of the second reactant can be detected, and its concentration can then be evaluated by non-linear regression onto the equation that takes its presence into account. The amplitude passes through a maximum as the relative concentration of the second reactant increases, or as the ratio of the rate constants increases. For any given ratio of concentrations, detection of the second reactant is impossible unless the ratio of the rate constants lies within a certain range, which is governed by the data-acquisition schedule employed. For the particular schedule assumed here, examination of these dependences shows, for example, that it should be possible to detect the second reactant if its concentration is 2.5 per cent of that of the first reactant and the ratio of the rate constants is between 7.1 and 21.7.
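A minimal simulation of the deviation-plot idea: two parallel pseudo-first-order decays fitted with a single-exponential model. The concentrations, rate constants and noise level are illustrative choices (the rate-constant ratio of 10 falls inside the 7.1-21.7 window quoted above):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
t = np.linspace(0, 5, 60)
# two parallel pseudo-first-order reactions; the second at 2.5% of the first
y = 1.0 * np.exp(-1.0 * t) + 0.025 * np.exp(-10.0 * t) + rng.normal(0, 1e-4, t.size)

def single(t, c, k):
    return c * np.exp(-k * t)          # single pseudo-first-order model

p, _ = curve_fit(single, t, y, p0=[1.0, 1.0])
residuals = y - single(t, *p)
# a systematic, non-random residual pattern flags the second reactant
print("max |residual|:", np.abs(residuals).max())
```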

19.
We deal with collaborative studies where each of k laboratories performs n repeated binary measurements (measurement result x = 0: "not detected"; measurement result x = 1: "detected"), and present a simple method of constructing a confidence interval for the mean probability of detection of the laboratories. This method is based on an approximation of the distribution of the number y of detections among n independent measurements of a randomly chosen laboratory by a binomial distribution. The confidence interval is not only much easier to calculate but also more accurate than the profile likelihood interval presented by Uhlig et al.
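The paper's interval also reflects between-laboratory variation; as a simplified stand-in, a pooled Wilson score interval for the overall detection probability can be sketched as follows (the counts are hypothetical):

```python
import numpy as np
from scipy import stats

# detections per laboratory: k labs, n repeated binary measurements each
y = np.array([8, 9, 7, 10, 9, 8, 10, 9])   # hypothetical detection counts
n = 10
N = y.size * n                              # total number of measurements

p_hat = y.sum() / N                         # pooled detection rate
z = stats.norm.ppf(0.975)                   # two-sided 95% quantile
# Wilson score interval for the pooled binomial proportion
centre = (p_hat + z**2 / (2 * N)) / (1 + z**2 / N)
half = z * np.sqrt(p_hat * (1 - p_hat) / N + z**2 / (4 * N**2)) / (1 + z**2 / N)
print(f"mean probability of detection: {p_hat:.3f} [{centre - half:.3f}, {centre + half:.3f}]")
```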

20.
Chlorpheniramine maleate (CLOR) enantiomers were quantified by ultraviolet spectroscopy and partial least squares regression. The CLOR enantiomers were prepared as inclusion complexes with β-cyclodextrin and 1-butanol, with mole fractions in the range from 50 to 100%. For the multivariate calibration, outliers were detected and excluded, and variable selection was performed by interval partial least squares and a genetic algorithm. Figures of merit showed accuracies of 3.63 and 2.83% (S)-CLOR for the root mean square errors of calibration and prediction, respectively. The elliptical confidence region included the point corresponding to a slope of 1 and an intercept of 0. Precision and analytical sensitivity were 0.57 and 0.50% (S)-CLOR, respectively. The sensitivity, selectivity, adjustment, and signal-to-noise ratio were also determined. The model was validated by a paired t test against the results obtained by the high-performance liquid chromatography method of the European pharmacopoeia and by circular dichroism spectroscopy. The results showed no significant difference between the methods at the 95% confidence level, indicating that the proposed method can be used as an alternative to standard procedures for chiral analysis.
