Similar Articles (20 results)
1.
This article develops a Bayesian method for fault detection and isolation using a sparse reconstruction framework. The normal (training) data are assumed to follow a signal-plus-noise model, and an indicator matrix marks whether the test data come from a faulty process. The indicator matrix is modeled with a Laplacian distribution, which forces it to be sparse, and a Gibbs sampler is derived to estimate/reconstruct the indicator matrix, the unobserved signals, and other parameters such as the signal mean, covariance, and noise variance. Faulty variables can then be detected and isolated by inspecting whether the corresponding rows of the indicator matrix are zero. The proposed Bayesian approach is data driven and allows for simultaneous fault detection and isolation. A simulation study and an industrial case study are used to test the performance of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
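The indicator-matrix idea can be illustrated with a much-simplified, dependency-light sketch. Here plain soft-thresholding of standardized residuals stands in for the paper's Laplacian prior and Gibbs sampler; the function name and threshold value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sparse_fault_isolation(x_test, mean, std, threshold=4.0):
    """Toy stand-in for the sparse-indicator idea: a variable's indicator
    entry is nonzero only when its standardized deviation from the normal
    (training) data exceeds the threshold. The paper instead infers the
    indicator with a Laplacian prior via Gibbs sampling."""
    z = (x_test - mean) / std
    indicator = np.sign(z) * np.maximum(np.abs(z) - threshold, 0.0)
    faulty = np.flatnonzero(indicator != 0.0)  # nonzero rows name the faulty variables
    return indicator, faulty

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 5))             # normal/training data, 5 variables
mu, sigma = normal.mean(axis=0), normal.std(axis=0)
x = rng.normal(size=5)
x[2] += 8.0                                    # inject a fault on variable 2
ind, faulty = sparse_fault_isolation(x, mu, sigma)
```

Note how detection and isolation happen together: a nonzero entry both signals a fault and names the responsible variable.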

2.
Plant-wide process monitoring is challenging because of the complex relationships among numerous variables in modern industrial processes. The multi-block process monitoring method is an efficient approach for plant-wide processes; however, dividing the original space into subspaces remains an open issue. The loading matrix generated by principal component analysis (PCA) describes the correlation between original variables and extracted components and reveals the internal relations within the plant-wide process. Thus, a multi-block PCA method that constructs principal component (PC) sub-blocks according to the generalized Dice coefficient of the loading matrix is proposed. PCs corresponding to similar loading vectors are divided into the same sub-block, so the PCs in one sub-block share similar variational behavior for certain faults, which improves the sensitivity of process monitoring in that sub-block. A monitoring statistic T² corresponding to each sub-block is produced and is integrated into a final probability index based on Bayesian inference. A corresponding contribution plot is also developed to identify the root cause. The superiority of the proposed method is demonstrated by two case studies: a numerical example and the Tennessee Eastman benchmark. Comparisons with other PCA-based methods are also provided. Copyright © 2014 John Wiley & Sons, Ltd.
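As a rough illustration of the block-construction step, the sketch below groups PCs whose loading vectors are similar under a generalized Dice coefficient. The greedy grouping rule and the 0.7 threshold are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def dice(u, v):
    """Generalized Dice coefficient between two loading vectors."""
    return 2.0 * abs(u @ v) / (u @ u + v @ v)

def group_pcs(loadings, threshold=0.7):
    """Greedy grouping: PCs whose loading vectors have a Dice coefficient
    above `threshold` share a sub-block (a sketch of the idea)."""
    n_pc = loadings.shape[1]
    blocks, assigned = [], set()
    for i in range(n_pc):
        if i in assigned:
            continue
        block = [i]
        assigned.add(i)
        for j in range(i + 1, n_pc):
            if j not in assigned and dice(loadings[:, i], loadings[:, j]) > threshold:
                block.append(j)
                assigned.add(j)
        blocks.append(block)
    return blocks

loadings = np.array([[1.0, 0.9, 0.0],   # rows: variables, columns: PCs
                     [0.0, 0.1, 1.0]])
blocks = group_pcs(loadings)            # PCs 0 and 1 share a sub-block
```

A per-block T² statistic would then be computed on each sub-block's scores and fused via Bayesian inference, as the abstract describes.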

3.
Multi-mode process monitoring is a key issue often raised in industrial process control. Most multivariate statistical process monitoring strategies, such as principal component analysis (PCA) and partial least squares, make the essential assumption that the collected data follow a unimodal or Gaussian distribution. However, owing to the complexity and the multi-mode nature of industrial processes, the collected data usually follow different distributions. This paper proposes a novel multi-mode data processing method called weighted k neighbourhood standardisation (WKNS) to address the multi-mode data problem. This method can transform multi-mode data into an approximately unimodal or Gaussian distribution. Theoretical analysis and discussion suggest that the WKNS strategy is more suitable for multi-mode data normalisation than the z-score method. Furthermore, a new fault detection approach called WKNS-PCA is developed and applied to detect process outliers. This method requires neither process knowledge nor multi-mode modelling; only a single model is needed for multi-mode process monitoring. The proposed method is tested on a numerical example and the Tennessee Eastman process. The results demonstrate that the proposed data preprocessing and process monitoring methods are particularly suitable and effective for multi-mode data normalisation and industrial process fault detection. Copyright © 2014 John Wiley & Sons, Ltd.
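A minimal numpy sketch of the neighbourhood-standardisation idea: each query sample is standardised with the inverse-distance-weighted mean and variance of its k nearest training samples, so each operating mode supplies its own statistics. The exact weighting scheme here is an assumption; the paper's WKNS may differ in detail.

```python
import numpy as np

def wkns(train, x, k=20, eps=1e-12):
    """Weighted k-neighbourhood standardisation (sketch): standardise a
    query sample with the distance-weighted mean/std of its k nearest
    training samples, so multi-mode data become approximately unimodal."""
    d = np.linalg.norm(train - x, axis=1)
    idx = np.argsort(d)[:k]                 # k nearest neighbours
    w = 1.0 / (d[idx] + eps)
    w /= w.sum()                            # inverse-distance weights
    mu = w @ train[idx]
    var = w @ (train[idx] - mu) ** 2
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
train = np.vstack([rng.normal(0.0, 1.0, (100, 2)),    # mode 1
                   rng.normal(10.0, 1.0, (100, 2))])  # mode 2
z = wkns(train, np.array([10.0, 10.0]))   # in-mode sample -> small |z|
```

A global z-score would smear the two modes together; the local statistics keep an in-mode sample near zero after standardisation, so a single PCA model can then monitor both modes.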

4.
In this paper, a genetic algorithm-support vector regression (GA-SVR) coupled approach is proposed for investigating the relationship between fingerprints and properties of herbal medicines. GA is used to select variables so as to improve the predictive ability of the models. Two other widely used approaches, Random Forests (RF) and partial least squares regression (PLSR), combined with GA (namely GA-RF and GA-PLSR, respectively), were also employed and compared with the GA-SVR method. The models were evaluated in terms of the correlation coefficient between the measured and predicted values (Rp), the root mean square error of prediction, and the root mean square error of leave-one-out cross-validation. The performance was tested on a simulated system, a chromatographic data set, and a near-infrared spectroscopic data set. The results indicate that the GA-SVR model provides more accurate predictions, with a higher Rp and lower root mean square errors. The proposed method is suitable for the quantitative analysis and quality control of herbal medicines. Copyright © 2012 John Wiley & Sons, Ltd.
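The GA-driven variable selection can be sketched as follows. To keep the example dependency-free, ordinary least squares stands in for SVR as the fitness model, and the population size, crossover, and mutation settings are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

def cv_rmse(X, y, mask, folds=5):
    """Cross-validated RMSE of an OLS model on the selected variables."""
    if not mask.any():
        return np.inf
    Xs, n = X[:, mask], len(y)
    err = []
    for te in np.array_split(np.arange(n), folds):
        tr = np.setdiff1d(np.arange(n), te)
        beta, *_ = np.linalg.lstsq(Xs[tr], y[tr], rcond=None)
        err.append(y[te] - Xs[te] @ beta)
    e = np.concatenate(err)
    return float(np.sqrt(np.mean(e ** 2)))

def ga_select(X, y, pop=20, gens=30, p_mut=0.1, seed=0):
    """Tiny genetic algorithm over variable-inclusion bitmasks."""
    rng = np.random.default_rng(seed)
    n_var = X.shape[1]
    P = rng.random((pop, n_var)) < 0.5
    for _ in range(gens):
        fit = np.array([cv_rmse(X, y, m) for m in P])
        P = P[np.argsort(fit)]                 # best masks first (elitism)
        kids = []
        for _ in range(pop // 2):              # crossover within the better half
            a, b = P[rng.integers(0, pop // 2, 2)]
            cut = rng.integers(1, n_var)
            kids.append(np.concatenate([a[:cut], b[cut:]]))
        kids = np.array(kids)
        kids ^= rng.random(kids.shape) < p_mut  # bit-flip mutation
        P = np.vstack([P[: pop - len(kids)], kids])
    fit = np.array([cv_rmse(X, y, m) for m in P])
    return P[np.argmin(fit)]

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 6))
y = X[:, 0] + 2.0 * X[:, 1] + 0.01 * rng.normal(size=80)  # only vars 0 and 1 matter
mask = ga_select(X, y)
```

In the real pipeline, `cv_rmse` would wrap an SVR (or RF/PLSR) model instead of OLS; the GA layer is unchanged.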

5.
With the rapid development of the rubber industry, improving the quality control of the rubber mixing process has become increasingly important. Unfortunately, the long measurement delay of Mooney viscosity, one of the most important quality parameters of mixed rubber, severely hinders progress on this issue. The independent component regression-Gaussian process (ICR-GP) algorithm is used for the first time to solve this typical nonlinear "black-box" regression problem and predict Mooney viscosity. In the ICR-GP method, the non-Gaussian information is first extracted by independent component regression, and the residual Gaussian information is then extracted by a Gaussian process; both linear and nonlinear relationships between the input and output variables can thus be captured. Because no parameters need to be optimized, the ICR-GP method is especially suitable for "black-box" regression problems. The best model achieved a root mean square error of M = 0.8765, which is adequate considering the measuring accuracy (M = ±0.5) of the Mooney viscometer. By using online-measured rheological parameters as input variables, the measurement delay of Mooney viscosity can be reduced dramatically, from about 240 min to 2 min. Consequently, this Mooney-viscosity prediction model is very helpful for the development of the rubber mixing process, especially the emerging one-step rubber mixing technique. Practical application to the rubber mixing process in a large-scale tire factory confirmed the strong regression performance of the ICR-GP Mooney-viscosity prediction model. Copyright © 2012 John Wiley & Sons, Ltd.

6.
Multiway principal component analysis (MPCA) has been extensively applied to batch process monitoring. When monitoring a two-stage batch process, the inter-stage variation is neglected if a separate MPCA model is used for each stage. On the other hand, if the reference data from both stages are combined into one large dataset for MPCA, the dimensions of the unfolded matrix increase dramatically; moreover, when an abnormal event is detected, it is difficult to identify which stage's operation induced the alarm. In this paper, partial least squares (PLS) is applied to monitor the inter-stage relation of a two-stage batch process. In post-analysis of abnormalities, PLS can clarify whether root causes stem from previous-stage operations or from changes in the inter-stage correlations. This approach was successfully applied to a semiconductor manufacturing process. Copyright © 2008 John Wiley & Sons, Ltd.
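The inter-stage model can be sketched with a minimal NIPALS PLS1: stage-1 measurements X predict a stage-2 response y. This is a generic textbook PLS1, not the paper's full monitoring scheme, and the variable names are assumptions.

```python
import numpy as np

def pls1(X, y, n_comp=2):
    """Minimal NIPALS PLS1: returns the regression vector relating
    (mean-centred) stage-1 measurements X to a stage-2 response y."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = X.T @ y
        w /= np.linalg.norm(w)          # weight vector
        t = X @ w                       # scores
        p = X.T @ t / (t @ t)           # X-loadings
        q = (y @ t) / (t @ t)           # y-loading
        X = X - np.outer(t, p)          # deflate X
        y = y - q * t                   # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.inv(P.T @ W) @ Q   # regression vector

rng = np.random.default_rng(0)
X1 = rng.normal(size=(100, 3))              # stage-1 measurements
y2 = X1 @ np.array([1.0, 2.0, 0.0])         # noiseless stage-2 response
B = pls1(X1, y2, n_comp=3)
```

With as many components as variables and an exactly linear response, PLS1 recovers the least-squares coefficients; in monitoring practice one would retain fewer components and track the PLS residuals for inter-stage abnormalities.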

7.
This paper introduces a class of methods to infer the relationship between observations and variables in latent subspace models. The approach is a modification of the recently proposed missing data methods for exploratory data analysis (MEDA). MEDA is useful to identify the structure in the data and also to interpret the contribution of each latent variable. In this paper, MEDA is augmented with dummy variables to find the data variables related to a given deviation detected among observations, for instance, the difference between one cluster of observations and the bulk of the data. The MEDA extension, referred to as observation-based MEDA or oMEDA, can be performed in several ways, one of which is theoretically shown to be equivalent to a comparison of means between groups. The use of the proposed approach is demonstrated with a number of examples with simulated data and a real data set of archeological artifacts. Copyright © 2011 John Wiley & Sons, Ltd.
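The equivalence mentioned above (dummy-variable oMEDA versus a comparison of means) can be sketched directly. This toy version is only the mean-comparison special case, not the full oMEDA computation.

```python
import numpy as np

def omeda_like(X, dummy):
    """Sketch of the observation-dummy idea: with a +1/-1 dummy over
    observations, the per-variable statistic reduces to a difference of
    group means, flagging the variables that drive the deviation of the
    flagged group from the rest of the data."""
    g1 = X[dummy > 0].mean(axis=0)
    g0 = X[dummy < 0].mean(axis=0)
    return g1 - g0

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
X[:20, 1] += 5.0                          # the flagged cluster deviates in variable 1
dummy = np.r_[np.ones(20), -np.ones(20)]  # +1 for the cluster, -1 for the bulk
d = omeda_like(X, dummy)
```

The largest-magnitude entry of `d` points to the variable responsible for the cluster's deviation.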

8.
Challenges in decomposition odour profiling have led to variation in the documented odour profile between research groups worldwide. Background subtraction and the use of controls are important considerations given the variation introduced by decomposition studies conducted in different geographical environments. The collection of volatile organic compounds (VOCs) from soil beneath decomposing remains is challenging due to the high levels of inherent soil VOCs, further confounded by the use of highly sensitive instrumentation. This study presents a method that provides suitable chromatographic resolution for profiling decomposition odour in soil by comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry, using appropriate controls and field blanks. Logarithmic transformation and t-testing of compounds permitted the generation of a compound list of decomposition VOCs in soil. Principal component analysis demonstrated improved discrimination between experimental and control soil, verifying the value of the data handling method. Data handling procedures have not been well documented in this field, and standardisation would reduce misidentification of VOCs present in the surrounding environment as decomposition byproducts. Uniformity of data handling and instrumental procedures will reduce analytical variation, increasing confidence in future investigations of the effect of taphonomic variables on the decomposition VOC profile.

9.
In Multivariate Statistical Process Control, when a fault is expected or detected in the process, contribution plots are essential for operators and optimization engineers in identifying the process variables that were affected by, or might be the cause of, the fault. The traditional way of interpreting a contribution plot is to examine the largest-contributing process variables as the most probable faulty ones. This might result in false readings purely due to differences in natural variation, measurement uncertainties, etc. It is more reasonable to compare variable contributions for new process runs with historical results achieved under Normal Operating Conditions (NOC), where confidence limits (CLs) for contribution plots estimated from training data are used to judge new production runs. Asymptotic methods cannot provide confidence limits for contribution plots, leaving re-sampling methods as the only option. We suggest bootstrap re-sampling to build confidence limits for all contribution plots in online PCA-based MSPC. The new strategy to estimate CLs is compared to previously reported CLs for contribution plots. An industrial batch process dataset is used to illustrate the concepts.
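A numpy sketch of the bootstrap-CL idea: resample the NOC runs with replacement, take the (1 − α) percentile of each resample, and average the percentiles. This percentile-of-resamples recipe is an illustrative assumption; the paper's exact bootstrap scheme may differ.

```python
import numpy as np

def bootstrap_cl(noc_contribs, n_boot=1000, alpha=0.05, seed=0):
    """Bootstrap per-variable upper confidence limits for contribution
    plots from Normal Operating Conditions (NOC) runs (a sketch)."""
    rng = np.random.default_rng(seed)
    n, p = noc_contribs.shape
    lims = np.empty((n_boot, p))
    for b in range(n_boot):
        sample = noc_contribs[rng.integers(0, n, n)]     # resample runs
        lims[b] = np.percentile(sample, 100 * (1 - alpha), axis=0)
    return lims.mean(axis=0)    # bootstrap estimate of each variable's CL

rng = np.random.default_rng(1)
noc = rng.normal(size=(200, 4))   # per-run contributions of 4 variables under NOC
cl = bootstrap_cl(noc)
```

A new run's contribution is then flagged only when it exceeds its variable's CL, rather than merely being the largest bar in the plot.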

10.
This paper describes specific electrochemical enterobacteriaceae lac Z gene DNA sensors based on immobilization of a thiolated 25-base single-stranded probe onto disposable screen-printed gold electrodes (gold SPEs). Two configurations have been evaluated. In the first, the capture probe was attached to the electrode surface through its –SH moiety, while mercaptohexanol (MCH) was used as a spacer for the displacement of nonspecifically adsorbed oligonucleotide molecules. The hybridization event between the probe and target DNA sequences was detected at −0.20 V by square-wave voltammetry (SWV), using methylene blue (MB) as the electrochemical indicator. The second genosensor configuration involved modification of gold high-temperature SPEs with a 3,3′-dithiodipropionic acid di(N-succinimidyl ester) (DTSP) self-assembled monolayer (SAM). Moreover, 2-aminoethanol was used as a blocking agent, and further modification with avidin allowed binding of the biotinylated enterobacteriaceae lac Z gene DNA probe. An enzyme-amplified detection scheme was applied, based on the coupling of streptavidin-peroxidase to the biotinylated complementary target after the hybridization process, and immobilization of tetrathiafulvalene (TTF) as a redox mediator atop the modified electrode. The amperometric response obtained at −0.15 V after the addition of hydrogen peroxide was used to detect the hybridization process. Experimental variables concerning sensor composition and electrochemical transduction were evaluated in both cases. Better precision and reproducibility in the fabrication process, as well as higher sensitivity, were achieved using the biotinylated probe-based sensor configuration. A limit of detection of 0.002 ng/μL was obtained without any preconcentration step.

11.
Cross-validation has become one of the principal methods to adjust the meta-parameters in predictive models. Extensions of the cross-validation idea have been proposed to select the number of components in principal component analysis (PCA). The element-wise k-fold (ekf) cross-validation is among the most used algorithms for PCA cross-validation. It is the method programmed in the PLS_Toolbox, and it has been stated to outperform other methods under most circumstances in a numerical experiment. The ekf algorithm is based on missing-data imputation, and it can be programmed using any method for this purpose. In this paper, the ekf algorithm with the simplest missing-data imputation method, trimmed score imputation (TRI), is analyzed. A theoretical study is carried out to identify the situations in which the application of ekf is adequate and, more importantly, those in which it is not. The results presented show that the ekf method may be unable to assess the extent to which a model represents a test set and may lead to discarding principal components that carry important information. In a second paper of this series, other imputation methods are studied within the ekf algorithm. Copyright © 2012 John Wiley & Sons, Ltd.
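The ekf algorithm with trimmed score imputation can be sketched as below. Two simplifying assumptions: loadings are fitted once on all rows rather than refitted per fold, and matrix elements are assigned to folds at random rather than in the usual diagonal pattern.

```python
import numpy as np

def ekf_press(X, max_pc, n_folds=7, seed=0):
    """Element-wise k-fold cross-validation for PCA with trimmed score
    imputation (TRI): left-out elements of each centred row are set to
    zero before computing the scores. Returns PRESS for 1..max_pc PCs."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    n, p = Xc.shape
    fold = rng.integers(0, n_folds, size=(n, p))   # element-wise fold labels
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    press = np.zeros(max_pc)
    for a in range(1, max_pc + 1):
        P = Vt[:a].T                   # loadings (fitted on all rows, for brevity)
        for k in range(n_folds):
            Xk = Xc.copy()
            Xk[fold == k] = 0.0        # trimmed score imputation of left-out elements
            T = Xk @ P                 # scores from the trimmed rows
            E = Xc - T @ P.T           # reconstruction error
            press[a - 1] += np.sum(E[fold == k] ** 2)
    return press

rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 2))
loads = rng.normal(size=(2, 8))
X = scores @ loads + 0.01 * rng.normal(size=(100, 8))   # true rank 2 plus noise
press = ekf_press(X, max_pc=4)
```

For this rank-2 example, PRESS drops sharply from one to two components; the paper's analysis concerns exactly when such PRESS curves can, and cannot, be trusted.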

12.
A new approach to MRI thermometry using encapsulated hyperpolarized xenon is demonstrated. The method is based on the temperature-dependent chemical shift of hyperpolarized xenon in a cryptophane-A cage. This shift is linear with a slope of 0.29 ppm °C⁻¹, which is perceptibly higher than the shift of the proton resonance frequency of water (ca. 0.01 ppm °C⁻¹) that is currently used for MRI thermometry. Using spectroscopic imaging techniques, we collected temperature maps of a phantom sample that could discriminate, by direct NMR detection, between temperature differences of 0.1 °C at a sensor concentration of 150 μM. Alternatively, the xenon-in-cage chemical shift was determined by indirect detection using saturation transfer techniques (Hyper-CEST) that allow detection of nanomolar agent concentrations. Thermometry based on hyperpolarized xenon sensors improves the accuracy of currently available MRI thermometry methods, potentially giving rise to biomedical applications of biosensors functionalized for binding to specific target molecules.

13.
We have developed new catechol-based sensors that can detect fluoride via fluorescence or optical absorption even in the presence of other halides. The level and sensitivity of detection of the sensing molecules depend on the chromophore length, which is controlled by the number of thiophene units (one to three) within the chromophore. The sensor with three thiophene units, (E)-2-(2,2′-terthiophen-5-yl)-3-(3,4-dihydroxyphenyl)acrylonitrile, gives the best response to fluoride. Using fluorescence measurements, fluoride is detectable over the concentration range 1.7 μM to 200 μM. Importantly, when adsorbed onto a solid support, the fluorescent catechol dye can be used to detect the presence of fluoride in aqueous solution.

14.
Processing plants can produce large amounts of data that process engineers use for analysis, monitoring, or control. Principal component analysis (PCA) is well suited to analyzing large amounts of (possibly) correlated data and to reducing the dimensionality of the variable space. Failing online sensors, lost historical data, or missing experiments can lead to data sets with missing values, where the current methods for obtaining the PCA model parameters may give questionable results due to the properties of the estimated parameters. This paper proposes a method based on nonlinear programming (NLP) techniques to obtain the parameters of PCA models in the presence of incomplete data sets. We show the relationship that exists between the nonlinear iterative partial least squares (NIPALS) algorithm and the optimality conditions of the squared-residuals minimization problem, and how this leads to the modified NIPALS used for the missing-value problem. Moreover, we compare the current NIPALS-based methods with the proposed NLP on a simulation example and an industrial case study, and show how the latter is better suited when there are large amounts of missing values. The solutions obtained with the NLP and the iterative algorithm (IA) are very similar. However, when using the NLP-based method, the loadings and scores are guaranteed to be orthogonal, and the scores have zero mean; the latter is emphasized in the industrial case study. With the industrial data used here, we are also able to show that the models obtained with the NLP were easier to interpret and required many fewer iterations to obtain. Copyright © 2010 John Wiley & Sons, Ltd.
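The iterative NIPALS-with-missing-data baseline that the NLP method is compared against can be sketched in a few lines: every regression is restricted to the observed elements (mean-centring is omitted for brevity, so this is an illustration of the mechanism, not the paper's full algorithm).

```python
import numpy as np

def nipals_missing(X, n_comp=1, tol=1e-10, max_iter=500):
    """NIPALS PCA that skips missing entries (NaNs): scores and loadings
    are estimated from observed elements only, the iterative approach
    the paper contrasts with its NLP formulation (a sketch)."""
    X = np.array(X, float)
    M = ~np.isnan(X)                     # mask of observed entries
    Xf = np.where(M, X, 0.0)             # zero-fill so sums skip the NaNs
    T = np.zeros((X.shape[0], n_comp))
    P = np.zeros((X.shape[1], n_comp))
    for a in range(n_comp):
        t = Xf[:, 0].copy()
        for _ in range(max_iter):
            p = (Xf.T @ t) / (M.T @ (t ** 2))   # loading: observed elements only
            p /= np.linalg.norm(p)
            t_new = (Xf @ p) / (M @ (p ** 2))   # score: observed elements only
            done = np.linalg.norm(t_new - t) < tol * np.linalg.norm(t_new)
            t = t_new
            if done:
                break
        T[:, a], P[:, a] = t, p
        Xf = Xf - np.where(M, np.outer(t, p), 0.0)   # deflate observed part
    return T, P

rng = np.random.default_rng(0)
X = np.outer(rng.normal(size=20), rng.normal(size=5))   # exact rank-1 data
Xm = X.copy()
Xm[0, 0] = np.nan
Xm[3, 2] = np.nan
T, P = nipals_missing(Xm, n_comp=1)
recon = T @ P.T
```

For exact low-rank data with a few missing entries the alternating regressions recover the full matrix; unlike the NLP formulation, however, nothing here guarantees orthogonal loadings or zero-mean scores.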

15.
This paper describes a clustering method for three-way arrays that makes use of an exploratory visualization approach. The aim of this study is to cluster samples in the object mode of a three-way array, which is done using the scores (sample loadings) of a three-way factor model, for example, a Tucker3 or a PARAFAC model. Further, tools are developed to explore and identify reasons for particular clusters by visually mining the data using the clustering results as guidance. We introduce a three-way clustering tool and demonstrate our results on a metabolite profiling dataset. We explore how high performance liquid chromatography (HPLC) measurements of commercial extracts of St. John's wort (natural remedies for the treatment of mild to moderate depression) differ and which chemical compounds account for those differences. Using common distance measures, for example, Euclidean or Mahalanobis, on the scores of a three-way model, we verify that we can capture the underlying clustering structure in the data. Besides this, by making use of the visualization approach, we are able to identify the variables playing a significant role in the extracted cluster structure. The suggested approach generalizes straightforwardly to higher-order data and also to two-way data. Copyright © 2007 John Wiley & Sons, Ltd.

16.
Polylactic acid (PLA) nanoparticles coated with Gd(III)-based metallosurfactants (MS) are prepared using a simple and rapid one-step method, flash nanoprecipitation (FNP), for magnetic resonance imaging (MRI) applications. By co-assembling the Gd(III)-based MS and an amphiphilic polymer, methoxy poly(ethylene glycol)-b-poly(ε-caprolactone) (mPEG-b-PCL), PLA cores were rapidly encapsulated to form biocompatible T1 contrast agents with tunable particle size and narrow size distribution. The hydrophobicity of the Gd(III)-based MS was finely tuned to achieve high loading efficiency. The size of the nanoparticles was easily controlled by tuning the stream velocity, the Reynolds number, and the amount of the amphiphilic block copolymer during the FNP process. Under the optimized conditions, the relaxivity of the nanoparticles reached 35.39 mM⁻¹ s⁻¹ (at 1.5 T), over 8 times that of clinically used MRI contrast agents, demonstrating the potential for MR imaging applications.

17.
Compounds 1, 2, and 3, which bear phenolic hydroxyl groups, can serve as anion-selective chromogenic sensors. They show a pronounced color change toward fluoride and acetate ions but no response to the other halide ions. This selectivity can be explained by a proton-transfer mechanism. Compound 1 contains two phenolic hydroxyl groups, and on treatment with excess fluoride the two hydroxyl groups undergo stepwise deprotonation, as confirmed by UV-vis, ¹H NMR, and ¹⁹F NMR spectroscopy.

18.
In mass spectrometry (MS)-based metabolomics, missing values (NAs) may arise from different causes, including sample heterogeneity, ion suppression, spectral overlap, inappropriate data processing, and instrumental errors. Although a number of methodologies have been applied to handle NAs, NA imputation remains a challenging problem. Here, we propose a non-negative matrix factorization (NMF)-based method for NA imputation in MS-based metabolomics data, which makes use of both global and local information in the data. The proposed method was compared with three commonly used methods: k-nearest neighbors (kNN), random forest (RF), and outlier-robust (ORI) missing-value imputation. These methods were evaluated in terms of imputation accuracy, retrieval of data structures, and rank of imputation superiority. The experimental results showed that the NMF-based method is well adapted to various cases of data missingness and to the presence of outliers in MS-based metabolic profiles. It outperformed kNN and ORI and showed results comparable with the RF method. Furthermore, the NMF method is more robust and less susceptible to outliers than the RF method. The proposed NMF-based scheme may serve as an alternative NA imputation method that can facilitate biological interpretation of metabolomics data.
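A compact sketch of NMF-based imputation: multiplicative updates are computed on the observed entries only, and the NaNs are then filled from the low-rank product. This is the generic masked-NMF recipe, with rank and iteration count chosen for illustration, not necessarily the paper's exact algorithm.

```python
import numpy as np

def nmf_impute(X, rank=2, n_iter=3000, seed=0):
    """Impute NaNs in a non-negative data matrix via weighted NMF:
    multiplicative updates restricted to observed entries, then the
    missing entries are filled from W @ H (a sketch of the idea)."""
    rng = np.random.default_rng(seed)
    M = ~np.isnan(X)                    # mask of observed entries
    Xf = np.where(M, X, 0.0)
    n, p = X.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, p)) + 0.1
    eps = 1e-9
    for _ in range(n_iter):
        WH = W @ H
        W *= (Xf @ H.T) / ((M * WH) @ H.T + eps)    # masked multiplicative update
        WH = W @ H
        H *= (W.T @ Xf) / (W.T @ (M * WH) + eps)
    out = X.copy()
    out[~M] = (W @ H)[~M]               # fill NAs from the low-rank model
    return out

rng = np.random.default_rng(0)
A = rng.random((30, 2)) + 0.1
B = rng.random((2, 6)) + 0.1
X = A @ B                               # exact non-negative rank-2 profile matrix
Xm = X.copy()
Xm[0, 0] = np.nan
Xm[5, 3] = np.nan
Xi = nmf_impute(Xm)
```

The global information enters through the shared factors W and H; a kNN imputer, by contrast, uses only the few most similar rows.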

19.
The root of Cynanchum auriculatum (C. auriculatum) Royle ex Wight has been shown to possess various pharmacological effects and has recently attracted much attention with respect to its potential antitumor activity. The C-21 steroidal glycosides are commonly accepted as the major active ingredients of C. auriculatum. In this study, the antitumor abilities of different extracted fractions of the root bark and the root tuber of C. auriculatum were investigated using a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay in the human cancer cell lines HepG2 and SMMC-7721. The results showed that the chloroform and ethyl acetate fractions of the root tuber strongly suppressed tumor cell growth. To identify and characterize the chemical constituents of the different active fractions, an ultra-high-performance liquid chromatography with triple-quadrupole tandem mass spectrometry method was developed for the simultaneous quantitation of eight C-21 steroidal glycosides. The analysis revealed that the C-21 steroidal glycosides were concentrated in the chloroform and ethyl acetate fractions, and that their total contents in the root tuber fractions were significantly higher than those of the corresponding root bark fractions. Furthermore, C-21 steroidal glycosides based on different types of aglycones predominated in different medicinal parts of C. auriculatum.

20.
Pentanary Cu2ZnSn(SySe1−y)4 (kesterite) photovoltaic absorbers are synthesized by a one-step annealing process from copper-poor and zinc-rich precursor metallic stacks prepared by direct-current magnetron sputtering deposition. Depending on the chalcogen source (mixtures of sulfur and selenium powders, or selenium disulfide) as well as the annealing temperature and pressure, this simple methodology permits tuning of the absorber composition from sulfur-rich to selenium-rich in a single annealing step. The impact of the thermal-treatment variables on chalcogen incorporation is investigated. The effect of the S/(S+Se) compositional ratio on the structural and morphological properties of the as-grown films, and on the optoelectronic parameters of solar cells fabricated from these absorber films, is studied. Using this single-step sulfo-selenization method, pentanary kesterite-based devices with conversion efficiencies up to 4.4% are obtained.

