Similar Documents
20 similar documents found (search time: 31 ms)
1.
Biaxial stress relaxation data acquired for a carbon-black-filled elastomer are analyzed. Consistent with previously published simple-tension and pure-shear results for this elastomer, the biaxial relaxation behavior is also found to be a separable function of time and strain effects. A departure from separability is observed at the largest strains; this may be a manifestation of strain-induced crystallization. An analysis of the biaxial results together with the earlier reported simple-tension and pure-shear data reveals that the deformational dependence of this elastomer obeys the Valanis-Landel hypothesis up to moderate deformations. The Ogden strain energy function is found to be an excellent analytical representation of all of the multiaxial data. Large relative errors between the data and the Ogden fit at small tensile strains are attributed to carbon black structural effects.

2.
Scientific applications of ion mobility spectrometry (IMS) require the ability to easily compare data between different laboratories. Reduced mobility values attempt to provide this comparability, but no standard exists for the collection and manipulation of the raw data obtained during an IMS experiment. We have created a comprehensive software suite, based on the LabVIEW programming language, that can be used to collect and interpret IMS data. The software may be used to collect data from a stand-alone IMS cell, a voltage-sweep IMS cell, or a coupled chromatography-IMS system, and the framework may be adapted to incorporate mass spectral data analysis as well. The software is provided under an open-source license for the benefit of the IMS community.
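The reduced-mobility normalization this abstract refers to is itself well standardized, even though raw-data handling is not. A minimal sketch of the usual conversion is shown below; this is a generic illustration, not code from the LabVIEW suite, and the parameter names are assumptions:

```python
def reduced_mobility(drift_length_cm, drift_voltage_v, drift_time_s,
                     temperature_k, pressure_torr):
    """Normalize a measured ion mobility to standard temperature and pressure.

    K  = L^2 / (V * t_d)              (measured mobility, cm^2 V^-1 s^-1)
    K0 = K * (273.15 / T) * (P / 760) (reduced mobility)
    """
    k = drift_length_cm ** 2 / (drift_voltage_v * drift_time_s)
    return k * (273.15 / temperature_k) * (pressure_torr / 760.0)
```

At standard temperature and pressure the correction factors are unity, so K0 equals the measured mobility.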

3.
An important feature of experimental science is that data of various kinds are being produced at an unprecedented rate, mainly owing to the development of new instrumental concepts and experimental methodologies. The nature of the acquired data is also changing significantly. In every area of science, data take the form of ever larger tables in which all but a few of the columns (i.e. variables) turn out to be irrelevant to the questions of interest, and we do not necessarily know in advance which coordinates are the interesting ones. Big data in the biology, analytical chemistry, or physical chemistry laboratory is a future that may be closer than any of us suppose, and new tools have to be developed in order to explore and exploit such data sets. Topological data analysis (TDA) is one of these; it was developed recently by topologists who discovered that topological concepts could be useful for data analysis. The main objective of this paper is to explain why topology is well suited to the analysis of big data sets in many areas, and can even be more efficient than conventional data analysis methods. Raman analysis of single bacteria provides a good opportunity to demonstrate the potential of TDA for exploring varied spectroscopic data sets under different experimental conditions (high noise level, with and without spectral preprocessing, with wavelength shift, with different spectral resolutions, with missing data).
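The simplest topological invariant TDA tracks is the zero-dimensional one: how connected components of a point cloud merge as the scale grows. This can be computed exactly with a minimum spanning tree (Kruskal's algorithm with union-find), as sketched below; this is an illustrative toy, not the paper's spectroscopic pipeline:

```python
from itertools import combinations

def h0_persistence(points):
    """Zero-dimensional persistence: every component is born at scale 0
    and dies when it merges into another; the merge (death) scales are
    exactly the edge lengths of a minimum spanning tree."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    parent = list(range(len(points)))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    deaths = []
    edges = sorted((dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(len(points)), 2))
    for d, i, j in edges:             # Kruskal: process edges by length
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)          # one component dies at this scale
    return deaths                     # n-1 deaths; one component persists
```

A long-lived gap in the death scales (here, between 1.0 and 10.0) signals well-separated clusters, which is the kind of structure TDA exposes without choosing a fixed threshold.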

4.
The mathematical and statistical evaluation of environmental data gains increasing importance in environmental chemistry as data sets become more complex. It is inarguable that different mathematical and statistical methods should be applied in order to compare results and to enhance the possible interpretations of the data. Very often several aspects have to be considered simultaneously, for example several chemicals, entailing a data matrix with objects (rows) and variables (columns). In this paper a data set is given concerning the pollution of 58 regions in the state of Baden-Württemberg, Germany, which are polluted with the metals lead, cadmium, and zinc, and with sulfur. For pragmatic reasons the evaluation is performed on the dichotomized data matrix. First this dichotomized 58 × 13 data matrix is evaluated by the Hasse diagram technique, a multicriteria evaluation method with its scientific origin in discrete mathematics. Then the Partially Ordered Scalogram Analysis with Coordinates (POSAC) method is applied; it reduces the data matrix by plotting it in a two-dimensional space, at the cost of a small, known percentage of the information. Important priority objects, such as maximal and minimal objects (highly and lowly polluted regions), can easily be detected by both the Hasse diagram technique and POSAC. Two variables attained exceptional importance in the analysis shown here: TLS, sulfur found in the tree layer, is difficult to interpret and needs further investigation, whereas LRPB, lead in Lumbricus rubellus, seems a satisfying result because the earthworm is commonly discussed in the ecotoxicological literature as a specific and highly sensitive bioindicator.
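The partial order underlying the Hasse diagram technique is simple componentwise dominance on the rows of the (here dichotomized) data matrix, and the maximal/minimal objects the abstract mentions fall out directly. A small sketch under that assumption (the toy rows below are hypothetical, not the Baden-Württemberg data):

```python
def dominates(a, b):
    """Row a dominates row b if a >= b in every column."""
    return all(x >= y for x, y in zip(a, b))

def extremal_objects(rows):
    """Return (maximal, minimal) object indices for the componentwise
    partial order used in the Hasse diagram technique: an object is
    maximal if no other, different object dominates it (minimal dually)."""
    n = len(rows)
    maximal = [i for i, r in enumerate(rows)
               if not any(j != i and rows[j] != r and dominates(rows[j], r)
                          for j in range(n))]
    minimal = [i for i, r in enumerate(rows)
               if not any(j != i and rows[j] != r and dominates(r, rows[j])
                          for j in range(n))]
    return maximal, minimal
```

In the pollution setting, maximal objects correspond to the most heavily polluted regions (high in every criterion that matters) and minimal objects to the least polluted ones.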

5.
Over the last two decades, the volumes of chemical and biological data have been constantly increasing. Converting these data sets into knowledge is both expensive and time-consuming; as a result, workflow technologies with platforms such as KNIME were built to facilitate searching through multiple heterogeneous data sources, filtering on specific criteria, and extracting hidden information from these large data. Before any QSAR modeling, manual data curation is strongly recommended. This is feasible for small data sets, but for the extensive data recently accumulated in public databases, manual processing of big data is hardly feasible. In this work, we suggest using KNIME as an automated workflow solution for data curation and for the development and validation of predictive QSAR models from a huge data set. Starting from 250,250 structures from the NCI database, only 3520 compounds passed through our workflow with their corresponding experimental log P; this property was investigated as a case study to improve some existing log P calculation algorithms.

6.
In order to increase the rate of drug discovery, pharmaceutical and biotechnology companies spend billions of dollars a year assembling research databases. Current trends still indicate a falling rate in the discovery of New Molecular Entities (NMEs). It is widely accepted that data need to be integrated in order to add value; the degree to which this must be achieved is often misunderstood. The true goal of data integration must be to provide accessible knowledge: if knowledge cannot be gained from these data, the business case for gathering them is invalidated. Current data integration solutions focus on the initial task of integrating the actual data and, to some extent, also address the need to allow users to access the integrated information. Typically the search tools provided are either restrictive forms or free-text based. While useful, neither of these solutions is suitable for providing full coverage of large numbers of integrated structured data sources. One solution to this accessibility problem is to present the integrated data in a collated manner that allows users to browse and explore it, and to perform complex ad hoc searches on it within a scientific context and without the need for advanced Information Technology (IT) skills. Additionally, the solution should be maintainable by in-house administrators rather than requiring expensive consultancy. This paper examines the background to this problem, investigates the requirements for effective exploitation of corporate data, and presents a novel, effective solution.

7.
Zheng Yongjie, Zhang Weibing, Zhang Xi. Chinese Journal of Chromatography (《色谱》), 1996, 14(2): 115-116
An iterative calculation is used to determine the sampling interval of a chromatography workstation, establishing a continuously variable-rate sampling method. The method requires fewer sampling points, saves memory, speeds up data processing, and keeps errors small; it can be applied to isothermal and isocratic elution analyses.

8.
The technique of reconstructive tomography (RT) is a powerful method of obtaining local, spatially resolved volumetric emission coefficients from line integral data. The applicability of this technique as a diagnostic for nonuniform sources is studied using simulated data with and without noise. The major advantage of RT techniques is that they may be applied, without restriction, to highly asymmetric data as well as symmetric data. When applied to symmetric data, the technique appears to be less susceptible to noise than Abel inversion techniques. Also examined is a method of accounting for self-absorption under certain circumstances.
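For the axisymmetric special case, the recovery of local emission coefficients from chordal line integrals can be sketched with simple onion peeling: the chord system is upper triangular in the annular emissions and can be back-substituted from the outermost shell inward. This is an illustrative toy for the symmetric case, not the RT algorithm of the paper (whose point is precisely that it also handles asymmetric data):

```python
def chord_matrix(n, h=1.0):
    """L[j][i] = path length of the chord at lateral offset y_j = j*h
    through the annulus r in [i*h, (i+1)*h]; zero when the chord misses it."""
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        y2 = (j * h) ** 2
        for i in range(j, n):
            outer = (((i + 1) * h) ** 2 - y2) ** 0.5
            inner = ((i * h) ** 2 - y2) ** 0.5 if (i * h) ** 2 > y2 else 0.0
            L[j][i] = 2.0 * (outer - inner)
    return L

def onion_peel(projection, h=1.0):
    """Back-substitute the upper-triangular chord system to recover the
    emission coefficient of each annulus, outermost shell first."""
    n = len(projection)
    L = chord_matrix(n, h)
    e = [0.0] * n
    for j in range(n - 1, -1, -1):
        s = sum(L[j][i] * e[i] for i in range(j + 1, n))
        e[j] = (projection[j] - s) / L[j][j]
    return e
```

With noise-free simulated projections the emission profile is recovered exactly; with noise, this back-substitution amplifies errors shell by shell, which is one reason the paper's RT approach can behave better.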

9.
Two-dimensional (2-D) polyacrylamide gel electrophoresis can detect thousands of polypeptides, separating them by apparent molecular weight (Mr) and isoelectric point (pI). It thus provides a more realistic and global view of cellular genetic expression than any other technique, and has been useful for finding sets of key proteins of biological significance. However, a typical experiment with more than a few gels often results in an unwieldy data management problem. In this paper, the GELLAB-II system is discussed with respect to how data reduction and exploratory data analysis can be aided by computer data management and statistical search techniques. By encoding the gel patterns in a "three-dimensional" (3-D) database, exploratory data analysis can be carried out in an environment that might be called a "spreadsheet for 2-D gel protein data". From such databases, complex parametric network models of protein expression during events such as differentiation might be constructed. For this, 2-D gel databases must be able to include data from domains external to the gel itself. Because of the increasing complexity of such databases, new tools are required to help manage that complexity. Two such tools, object-oriented databases and expert-system rule-based analysis, are discussed in this context. Comparisons are made between GELLAB and other 2-D gel database analysis systems to illustrate some of the analysis paradigms common to these systems and where this technology may be heading.

10.
11.
In spectroscopy the measured spectra are typically plotted as a function of wavelength (or wavenumber), but analysed with multivariate data analysis techniques (multiple linear regression (MLR), principal component regression (PCR), partial least squares (PLS)) that treat the spectrum as a set of m separate variables. From a physical point of view it can be more informative to describe the spectrum as a function rather than as a set of points, thereby taking into account the physical background of the spectrum: a sum of absorption peaks of the different chemical components, in which the absorbances at two nearby wavelengths are highly correlated. In the first part of this contribution, a motivating example for this functional approach is given. In the second part, the potential of functional data analysis in the field of chemometrics is discussed and compared to the ubiquitous PLS regression technique using two practical data sets. It is shown that for spectral data, B-splines prove to be an appealing basis for describing the data accurately. Applying both functional data analysis and PLS to the data sets, the predictive ability of functional data analysis is found to be comparable to that of PLS. Moreover, many chemometric data sets have a specific structure (e.g. replicate measurements on the same object, or objects that are grouped), but this structure is often removed before analysis (e.g. by averaging the replicates). We therefore also suggest a method to adapt traditional analysis of variance (ANOVA) to spectroscopic data sets; in particular, the possibilities to explore and interpret sources of variation, such as variations in sample and ambient temperature, are examined. Copyright © 2008 John Wiley & Sons, Ltd.
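The B-spline basis advocated here is defined by the Cox-de Boor recursion, which can be sketched in a few lines; a key property exploited in functional fitting is that the basis forms a partition of unity on the interior of the knot span. This is a generic illustration of the basis itself, not the paper's fitting procedure:

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value at t of the i-th B-spline of order k
    (degree k-1) on the given knot vector. Intervals are half-open, so
    evaluate strictly inside the knot range."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k - 1] != knots[i]:          # guard repeated (clamped) knots
        left = ((t - knots[i]) / (knots[i + k - 1] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    right = 0.0
    if knots[i + k] != knots[i + 1]:
        right = ((knots[i + k] - t) / (knots[i + k] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right
```

Representing a spectrum as a coefficient vector on such a basis is what lets highly correlated neighbouring wavelengths be modelled as a single smooth function.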

12.
The present study summarizes the measurement uncertainty estimations carried out at the Nestlé Research Center since 2002. These estimations cover a wide range of analyses of commercial and regulatory interest. The first part of the study shows that method validation data (repeatability, trueness and intermediate reproducibility) can be used to provide a good estimation of measurement uncertainty. In the second part, measurement uncertainty is compared to collaborative trial data; such data can be used for measurement uncertainty estimation provided that the in-house validation performance is comparable to the method performance obtained in the collaborative trial. Based on these two main observations, the aim of this study is to estimate measurement uncertainty in a simple way from validation data.

13.
Simultaneous determination of the stability constants of complexes and the pure spectra of the individual species by a numerical genetic algorithm. Zhang Zhongjie, Li Tonghua, Zhu Zhongliang, Cong Peisheng, Sun Yunping (Department of Chemistry, Tongji University, Shanghai 200092). Keywords: numerical genetic algorithm, two-dimensional data, stability constants. Determining acid dissociation constants and complex stability constants from titration or photometric data is a task with which chemists are thoroughly familiar and…

14.
Zhang X, Zheng J, Gao H. Talanta, 2001, 55(1): 171-178
Fourier self-deconvolution is an effective means of resolving overlapped bands, but it requires a mathematical model to perform the deconvolution and is quite sensitive to noise in the unresolved bands. The wavelet transform is a technique for noise reduction and deterministic feature capture because its time-frequency localization (scale) varies across the time-frequency domain. In this work, wavelet transform-based Fourier deconvolution is proposed, in which a discrete approximation (such as A2) obtained by applying the wavelet transform to the original data is substituted for the original data to be deconvolved, and another appropriate discrete approximation (such as A5) is used as the lineshape function. In addition, instead of an apodization function, the B-spline wavelet is used to smooth the deconvolved data and enhance the signal-to-noise ratio. As a consequence, this method does not suffer as badly from noise in the original data as Fourier self-deconvolution does, so the resolution enhancement can be increased significantly, especially for signals with higher noise levels. Furthermore, the method does not require a mathematical model for the deconvolution, which makes it very convenient for deconvolving electrochemical signals.
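The core Fourier-division step that all of these variants share can be sketched compactly: transform signal and lineshape, divide the spectra (with a small regularizer guarding near-zero lineshape components), and transform back. The wavelet smoothing and the choice of approximations A2/A5 described above are omitted; this toy uses a naive O(n²) DFT so it needs only the standard library:

```python
import cmath

def dft(x, sign=-1):
    """Naive discrete Fourier transform (sign=-1) or its un-normalized
    inverse (sign=+1); fine for the tiny signals used here."""
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * m * k / n)
                for k in range(n)) for m in range(n)]

def fourier_deconvolve(signal, lineshape, eps=1e-12):
    """Divide the signal's spectrum by the lineshape's spectrum and
    transform back; eps regularizes near-zero lineshape components."""
    n = len(signal)
    S, H = dft(signal), dft(lineshape)
    X = [s * h.conjugate() / (abs(h) ** 2 + eps) for s, h in zip(S, H)]
    return [v.real / n for v in dft(X, sign=+1)]
```

In practice the division step is exactly where noise blows up, which is the problem the wavelet-based smoothing in the paper is designed to tame.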

15.
16.
A Plackett-Burman type dataset from a paper by Williams [1], with 28 observations and 24 two-level factors, has become a standard dataset for illustrating the construction (by halving) of supersaturated designs (SSDs) and the corresponding data analysis. The aim here is to point out that, for several reasons, this is an unfortunate situation. The original paper by Williams contains several errors and misprints. Some are in the design matrix, which is reconstructed here, but worse is an outlier in the response values, which can be observed when the data are plotted against the dominating factor. In addition, the data are better analysed on a log scale than on the original scale. The implications of the outlier for SSD analysis are drastic, and it is concluded that the data should be used for this purpose only if the outlier is properly treated (omitted or modified). Copyright © 2008 John Wiley & Sons, Ltd.

17.
Erny GL, Cifuentes A. Electrophoresis, 2007, 28(9): 1335-1344
CE-MS has been demonstrated to be a very useful hyphenated technique for proteomic studies. However, the huge amount of data stored in a single CE-MS run calls for procedures able to extract all the relevant information that CE-MS makes available. In this work, we present a new and easy approach for generating a simplified 2-D map from CE-MS raw data. The approach automatically detects and characterizes the most abundant ions in the CE-MS data, including their mass-to-charge (m/z) values, ion intensities and analysis times. It is demonstrated that visualizing CE-MS data in this simplified 2-D format allows: (i) an easy and simultaneous visual inspection of large datasets, (ii) an immediate perception of relevant differences in closely related samples, (iii) a rapid monitoring of data quality in different samples, and (iv) a fast discrimination between comigrating polypeptides and ESI-MS fragmentation ions. The proposed strategy does not rely on excellent mass accuracy for peak detection and filtering, since MS values obtained from an ion-trap analyzer are used. Moreover, the methodology works directly on the CE-MS raw data, without user intervention, giving simultaneously a simplified 2-D map and a much easier and more complete data evaluation. The procedure can easily be implemented in any CE-MS laboratory. Its usefulness is validated by studying the very similar trypsin digests of bovine, rabbit and horse cytochrome c, and it is demonstrated that this simplified 2-D approach allows specific markers for each species to be obtained in a fast and simple way.
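The reduction from raw scans to a ranked feature list can be sketched as a simple bin-and-apex pass: group peaks by rounded m/z (coarse binning is deliberate, since ion-trap mass accuracy is modest) and keep each ion's intensity apex with its migration time. This is a hypothetical illustration of the general idea, not the published algorithm:

```python
def simplified_2d_map(scans, threshold, mz_decimals=1):
    """Reduce raw CE-MS data to one (m/z, apex time, apex intensity)
    feature per ion. `scans` is a list of
    (migration_time, [(mz, intensity), ...]) tuples; peaks below the
    intensity threshold are dropped, the rest are binned by rounded m/z,
    and only each bin's intensity apex is kept."""
    apex = {}
    for time, peaks in scans:
        for mz, inten in peaks:
            if inten < threshold:
                continue
            key = round(mz, mz_decimals)
            if key not in apex or inten > apex[key][2]:
                apex[key] = (key, time, inten)
    # most abundant ions first, as on a ranked 2-D map
    return sorted(apex.values(), key=lambda f: -f[2])
```

Plotting the resulting (time, m/z) pairs with intensity as marker size gives exactly the kind of simplified 2-D map that makes closely related digests easy to compare by eye.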

18.
Reactivity ratio estimation is a non-linear estimation problem. Typically, reactivity ratios are estimated using the instantaneous copolymer composition equation, otherwise known as the Mayo-Lewis model, based on low-conversion (<5%) copolymer composition data. However, other instantaneous models can also be used to estimate reactivity ratios, such as the instantaneous triad fraction equations. The aim of this paper is to determine the potential improvement in reactivity ratio estimates when triad fraction data are used in place of, and in combination with, copolymer composition data. The interest in using triad fraction data for parameter estimation stems from the fact that a greater number of responses is measured (six triad fractions) compared with composition, leading to data with theoretically higher information content; in principle this should yield reactivity ratio estimates with less uncertainty. In this study, the parameter estimates are obtained with the error-in-variables model (EVM), assuming a multiplicative error structure. Several case studies involving published literature data for different copolymer systems are presented. As the case studies demonstrate, more precise estimates can in general be obtained from triad fraction data, while combining the triad fractions with composition data leads to little additional improvement. However, discrepancies arise between reactivity ratios estimated from composition data and those obtained from triad fraction data, depending on the copolymer system. Copolymer systems exhibiting more heterogeneity, due to phase separation during polymerization, may show more discrepancy.
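The Mayo-Lewis model that serves as the baseline here is a one-line formula for the instantaneous copolymer composition F1 given the monomer feed f1 and the reactivity ratios; the triad fraction equations are omitted from this sketch. The forward model (the function fitted during estimation, not the EVM estimator itself) is:

```python
def mayo_lewis(f1, r1, r2):
    """Instantaneous copolymer composition F1 (mole fraction of monomer 1
    incorporated) from the monomer feed f1 and reactivity ratios r1, r2:

        F1 = (r1*f1^2 + f1*f2) / (r1*f1^2 + 2*f1*f2 + r2*f2^2)
    """
    f2 = 1.0 - f1
    return ((r1 * f1 ** 2 + f1 * f2)
            / (r1 * f1 ** 2 + 2.0 * f1 * f2 + r2 * f2 ** 2))
```

Reactivity ratio estimation inverts this relation: given measured (f1, F1) pairs at low conversion, (r1, r2) are chosen to best reproduce the data, which is the non-linear problem the abstract refers to.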

19.
Mixture analysis using PFG-NMR (DOSY) data is, for many chemists, a valuable and increasingly popular technique in which the NMR signals of different species are separated according to their diffusion coefficients. Where NMR signals overlap, however, it is often difficult to extract the spectra of pure components from experimental data. In such situations it can be helpful to use multivariate methods, which exploit all the available signal covariance, to resolve the spectra of the components of a mixture. The best-established and by some way the quickest such method, DECRA (Direct Exponential Curve Resolution Algorithm), unfortunately requires the data to decay as a pure exponential function of the gradient strength squared, whereas experimental data typically deviate significantly from this. If this deviation is known, the performance of DECRA for components with similar diffusion coefficients can be greatly improved by adjusting the choice of gradient strengths used.

20.
Chemical Physics, 1986, 107(1): 61-74
A method of calculating the pair correlation function from liquid structure factor measurements is described which attempts to maximize the entropy of the calculated distribution. Known constraints are applied to the pair correlation function, and the solution is forced to lie as close as possible to the measured data by means of a feedback procedure in which the difference between the calculated distribution and the measured data is used to make successive estimates of the trial function. In this way distortion which may be present in the original data is kept to a minimum in the calculated function. Previously published neutron diffraction data on liquid argon and liquid water are analysed with this new technique, and a substantial improvement in the Fourier transforms of the water diffraction data is seen.
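The feedback procedure described here, where the misfit between calculated and measured data drives successive estimates of the trial function, has the generic fixed-point form sketched below. The forward model is a stand-in (a mild linear smoothing, playing the role of the structure-factor calculation), so this is only an illustration of the feedback loop, not the paper's maximum-entropy method:

```python
def feedback_refine(measured, model, trial, gain=0.5, iterations=200):
    """Refine a trial function by repeatedly feeding back the misfit
    between the measured data and the data calculated from the trial."""
    for _ in range(iterations):
        misfit = [m - c for m, c in zip(measured, model(trial))]
        trial = [t + gain * d for t, d in zip(trial, misfit)]
    return trial

def forward_model(f):
    """Hypothetical stand-in for the trial-to-data calculation: average
    the function with its three-point moving smooth (reflecting ends).
    This keeps the operator well conditioned so the loop converges."""
    n = len(f)
    sm = [(f[max(i - 1, 0)] + f[i] + f[min(i + 1, n - 1)]) / 3.0
          for i in range(n)]
    return [(a + b) / 2.0 for a, b in zip(f, sm)]
```

At the fixed point the calculated data reproduce the measurement, which is the sense in which the scheme keeps distortion of the original data to a minimum; the entropy constraints of the paper would enter as an extra correction inside the loop.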


Copyright©北京勤云科技发展有限公司  京ICP备09084417号