Similar documents (20 results)
1.
Multiway principal component analysis (MPCA) has been extensively applied to batch process monitoring. In the case of monitoring a two-stage batch process, the inter-stage variation is neglected if separate MPCA models are used for each individual stage. On the other hand, if the reference data from the two stages are combined into one large dataset to which MPCA is applied, the dimensions of the unfolded matrix increase dramatically. In addition, when an abnormal event is detected, it is difficult to identify which stage's operation induced the alarm. In this paper, partial least squares (PLS) is applied to monitor the inter-stage relation of a two-stage batch process. In the post-analysis of abnormalities, PLS can clarify whether root causes stem from previous-stage operations or from changes in the inter-stage correlations. This approach was successfully applied to a semiconductor manufacturing process. Copyright © 2008 John Wiley & Sons, Ltd.
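The MPCA monitoring step that this abstract builds on (unfold the three-way batch array batch-wise, fit PCA, compute a Hotelling T² per batch) can be sketched as follows. This is a minimal illustration with assumed function names (`unfold_batchwise`, `pca_t2`), not the authors' code:

```python
import numpy as np

def unfold_batchwise(X):
    """Unfold a 3-way batch array (batches I x variables J x time K) into an
    I x (J*K) matrix, the standard MPCA arrangement."""
    I, J, K = X.shape
    return X.reshape(I, J * K)

def pca_t2(Xu, n_comp=2):
    """Mean-centre, fit PCA via SVD, and return the Hotelling T^2 statistic
    for each batch using the first n_comp components."""
    Xc = Xu - Xu.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = Xc @ Vt[:n_comp].T                          # scores
    lam = (s[:n_comp] ** 2) / (Xu.shape[0] - 1)     # component variances
    return np.sum(T ** 2 / lam, axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4, 10))     # 30 batches, 4 variables, 10 time points
t2 = pca_t2(unfold_batchwise(X))     # one T^2 value per batch
```

In practice the T² values of new batches would be compared against a control limit estimated from the in-control training batches.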

2.
A paramount aspect in the development of a model for a monitoring system is the so-called parameter stability. This is inversely related to the uncertainty, i.e., the variance of the parameter estimates. Noise affects the performance of the monitoring system, reducing its fault detection capability, so low parameter uncertainty is desired to keep the amount of noise in the model small. Nonetheless, there is no sound study on parameter stability in batch multivariate statistical process control (BMSPC). The aim of this paper is to investigate the parameter stability associated with the most widely used synchronization and principal component analysis-based BMSPC methods. The synchronization methods included in this study are the following: indicator variable, dynamic time warping, relaxed greedy time warping, and time linear expanding/compressing-based synchronization. In addition, different arrangements of the three-way batch data into two-way matrices are considered, namely the single-model, K-models, and hierarchical-model approaches. Results are discussed in connection with the conclusions of the first two papers of the series. Copyright © 2013 John Wiley & Sons, Ltd.

3.
4.
In chemical and biochemical processes, steady-state models are widely used for process assessment, control and optimisation. In these models, parameter adjustment requires data collected under nearly steady-state conditions. Several approaches have been developed for steady-state identification (SSID) in continuous processes, but no attempt has been made to adapt them to the singularities of batch processes. The main aim of this paper is to propose an automated method based on batch-wise unfolding of the three-way batch process data followed by principal component analysis (Unfold-PCA), in combination with the methodology of Brown and Rhinehart for SSID. A second goal of this paper is to illustrate how Unfold-PCA can be used to gain process understanding from the analysis of batch-to-batch start-up and transition data. The potential of the proposed methodology is illustrated using historical data from a laboratory-scale sequencing batch reactor (SBR) operated for enhanced biological phosphorus removal (EBPR). The results demonstrate that the proposed approach can be used efficiently to detect when the batches reach steady state, to interpret the overall batch-to-batch process evolution, and to isolate the causes of changes between batches using contribution plots. Copyright © 2007 John Wiley & Sons, Ltd.
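The Brown and Rhinehart SSID methodology referenced above is based on an R-statistic: the ratio of two exponentially filtered variance estimates, which is near 1 at steady state and much larger during transients. A sketch of that filter (parameter values and the function name are illustrative assumptions):

```python
import numpy as np

def r_statistic(x, l1=0.2, l2=0.1, l3=0.1):
    """Brown & Rhinehart steady-state R-statistic for a 1-D signal.
    R ~ 1 at steady state; R >> 1 during transients such as ramps."""
    xf = x[0]          # exponentially filtered value
    v2 = 0.0           # filtered squared deviation from the filtered value
    d2 = 1e-12         # filtered squared successive difference
    R = np.empty(len(x))
    R[0] = np.nan
    for i in range(1, len(x)):
        v2 = l2 * (x[i] - xf) ** 2 + (1 - l2) * v2   # uses previous xf
        xf = l1 * x[i] + (1 - l1) * xf
        d2 = l3 * (x[i] - x[i - 1]) ** 2 + (1 - l3) * d2
        R[i] = (2 - l1) * v2 / d2
    return R
```

A batch would be flagged as steady once R stays below a critical value for a chosen number of samples.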

5.
Huang H, Qu H. Analytica Chimica Acta, 2011, 707(1-2): 47-56
Alcohol precipitation is a critical unit operation in the manufacture of Chinese herbal injections. To facilitate enhanced process understanding and to develop a control strategy, the use of near-infrared spectroscopy (NIRS) combined with multivariate statistical process control (MSPC) methodology was investigated for in-line monitoring of alcohol precipitation. The effectiveness of the proposed approach was evaluated through an experimental campaign. Six batches were run under normal operating conditions to study batch-to-batch variation (batch reproducibility) and establish the MSPC control limits, while artificial process variations were purposefully introduced into four test batches to assess the capability of the model for real-time fault detection. Several MSPC tools were compared and assessed. NIRS, in conjunction with MSPC, proved to be a feasible process analytical technology (PAT) tool for monitoring batch evolution, and could potentially facilitate model-based advanced process control of alcohol precipitation in the manufacture of Chinese herbal injections.

6.
Plant-wide process monitoring is challenging because of the complex relationships among the numerous variables in modern industrial processes. Multi-block process monitoring is an efficient approach for plant-wide processes; however, how to divide the original space into subspaces remains an open issue. The loading matrix generated by principal component analysis (PCA) describes the correlation between the original variables and the extracted components and reveals the internal relations within the plant-wide process. Thus, a multi-block PCA method is proposed that constructs principal component (PC) sub-blocks according to the generalized Dice coefficient of the loading matrix. PCs corresponding to similar loading vectors are assigned to the same sub-block, so the PCs in a sub-block share similar variational behavior for certain faults, which improves the sensitivity of process monitoring within the sub-block. A T2 monitoring statistic is computed for each sub-block and integrated into a final probability index based on Bayesian inference. A corresponding contribution plot is also developed to identify the root cause. The superiority of the proposed method is demonstrated by two case studies: a numerical example and the Tennessee Eastman benchmark. Comparisons with other PCA-based methods are also provided. Copyright © 2014 John Wiley & Sons, Ltd.
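The grouping step described above (divide PCs into sub-blocks according to a Dice-type similarity of their loading vectors) can be sketched as below. This is one plausible reading of the measure, not the paper's exact formula: the Dice coefficient is applied to element-wise magnitudes so that sign and the orthogonality of PCA loadings do not zero it out, and the grouping is a simple greedy pass; both function names are assumptions:

```python
import numpy as np

def dice_similarity(u, v):
    """Generalized Dice coefficient between two loading vectors, computed on
    element-wise magnitudes (a hedged interpretation of the paper's measure)."""
    a, b = np.abs(u), np.abs(v)
    return 2.0 * (a @ b) / (u @ u + v @ v)

def group_pcs(P, threshold=0.6):
    """Greedily group the columns (loading vectors) of P into sub-blocks of
    mutually similar principal components."""
    n = P.shape[1]
    blocks, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        block = [i]
        assigned.add(i)
        for j in range(i + 1, n):
            if j not in assigned and dice_similarity(P[:, i], P[:, j]) >= threshold:
                block.append(j)
                assigned.add(j)
        blocks.append(block)
    return blocks
```

Each resulting sub-block would then get its own T² statistic, to be fused via Bayesian inference as the abstract describes.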

7.
Large-scale data in plant-wide process monitoring are characterized by two features: complex distributions and complex relevance among variables. This study proposes a two-step block-division method for plant-wide process monitoring based on variable distributions and relevance features. First, the data distribution is considered: a normality test (the D-test) is applied to place variables with the same type of distribution (Gaussian or non-Gaussian) in the same block. A second block division is then performed on each of the two blocks obtained in the previous step: the mutual information shared between pairs of variables is used to build relevance matrices for the Gaussian and non-Gaussian blocks, and the k-means method clusters the vectors of each relevance matrix. Principal component analysis is used to monitor each Gaussian sub-block, whereas independent component analysis is used to monitor each non-Gaussian sub-block. A composite statistic is finally derived through Bayesian inference. The proposed method is applied to a numerical system and the Tennessee Eastman process data set, and the monitoring performance shows its superiority. Copyright © 2015 John Wiley & Sons, Ltd.

8.
Multiway principal components analysis (MPCA) and parallel factor analysis (PARAFAC) are widely used in exploratory data analysis and multivariate statistical process control (MSPC). These models are linear in nature and are thus limited when non-linear relations are present in the data. Principal component analysis (PCA) can be extended to non-linear principal component analysis using autoassociative neural networks. In this paper, the outputs of the network's bottleneck layer (the non-linear components) were made orthogonal. A method to estimate confidence limits based on a kernel probability density function is proposed, since such limits do not assume that the non-linear scores are normally distributed. A statistic for the non-linear scores (DNL) is presented for on-line process monitoring, replacing the well-known Hotelling's T2 statistic. One hundred and two industrial fermentation runs were used to evaluate the performance of this non-linear technique for multivariate statistical process monitoring, and three faulty process runs were used to compare fault detection performance using the non-linear scores statistic and the residual statistic (SPE).
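The kernel-density control limit mentioned above avoids the normality assumption by estimating the statistic's density directly from training data and taking the (1 − α) point of the estimated distribution. A minimal 1-D sketch (Silverman bandwidth, grid integration; the function name is an assumption):

```python
import numpy as np

def kde_limit(scores, alpha=0.01, grid=800):
    """Control limit for a 1-D monitoring statistic from a Gaussian-kernel
    density estimate, so the limit does not assume normal scores."""
    s = np.asarray(scores, float)
    h = 1.06 * s.std(ddof=1) * len(s) ** (-1 / 5)        # Silverman's rule
    x = np.linspace(s.min() - 3 * h, s.max() + 3 * h, grid)
    pdf = np.exp(-0.5 * ((x[:, None] - s[None, :]) / h) ** 2).sum(axis=1)
    pdf /= pdf.sum()                                     # discrete normalisation
    cdf = np.cumsum(pdf)
    return x[np.searchsorted(cdf, 1 - alpha)]            # (1 - alpha) quantile

rng = np.random.default_rng(0)
lim = kde_limit(rng.normal(size=2000))   # close to the N(0,1) 99th percentile
```

For skewed statistics such as DNL or SPE, this limit adapts to the empirical shape instead of relying on a chi-squared or F approximation.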

9.
Multi-mode process monitoring is a key issue often raised in industrial process control. Most multivariate statistical process monitoring strategies, such as principal component analysis (PCA) and partial least squares, make the essential assumption that the collected data follow a unimodal or Gaussian distribution. However, owing to the complexity and multi-mode nature of industrial processes, the collected data usually follow several different distributions. This paper proposes a novel multi-mode data processing method called weighted k neighbourhood standardisation (WKNS) to address the multi-mode data problem. The method transforms multi-mode data into an approximately unimodal or Gaussian distribution. Theoretical analysis and discussion suggest that the WKNS strategy is more suitable for multi-mode data normalisation than the z-score method. Furthermore, a new fault detection approach called WKNS-PCA is developed and applied to detect process outliers. It requires neither process knowledge nor multi-mode modelling; a single model suffices for multi-mode process monitoring. The proposed method is tested on a numerical example and the Tennessee Eastman process. The results demonstrate that the proposed data preprocessing and process monitoring methods are particularly suitable and effective for multi-mode data normalisation and industrial fault detection. Copyright © 2014 John Wiley & Sons, Ltd.
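The core idea of neighbourhood standardisation is that each sample is standardised against its own local neighbourhood rather than the global mean and standard deviation, which collapses well-separated operating modes onto a common scale. A simplified, unweighted sketch of that idea (not the exact WKNS weighting scheme; the function name is an assumption):

```python
import numpy as np

def kns(X, k=10):
    """k-neighbourhood standardisation: each sample is standardised by the
    mean and standard deviation of its k nearest neighbours (a simplified,
    unweighted sketch of the WKNS idea for multimode data)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)               # exclude the sample itself
    idx = np.argsort(D, axis=1)[:, :k]
    Z = np.empty_like(X)
    for i in range(len(X)):
        nb = X[idx[i]]
        Z[i] = (X[i] - nb.mean(axis=0)) / (nb.std(axis=0, ddof=1) + 1e-12)
    return Z
```

Because the neighbours of a sample come from its own mode, both modes of a bimodal dataset end up centred near zero after the transform, which is what makes a single PCA model usable afterwards.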

10.
In multivariate statistical process control (MSPC), when a fault is expected or detected in the process, contribution plots are essential for operators and optimization engineers to identify the process variables that were affected by, or might be the cause of, the fault. The traditional way of interpreting a contribution plot is to treat the largest contributing process variables as the most probable faulty ones. This can give false readings purely because of differences in natural variation, measurement uncertainties, etc. It is more reasonable to compare the variable contributions of new process runs with historical results obtained under normal operating conditions, where confidence limits (CLs) for contribution plots estimated from training data are used to judge new production runs. Asymptotic methods cannot provide confidence limits for contribution plots, leaving re-sampling methods as the only option. We suggest bootstrap re-sampling to build confidence limits for all contribution plots in online PCA-based MSPC. The new strategy for estimating CLs is compared with previously reported CLs for contribution plots, and an industrial batch process dataset is used to illustrate the concepts.
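The bootstrap step described above resamples the normal-operating-condition runs with replacement and takes a percentile of the resampled statistic as the per-variable limit. A minimal percentile-bootstrap sketch (here bootstrapping the per-variable mean contribution; the function name and this particular choice of statistic are illustrative assumptions):

```python
import numpy as np

def contribution_limits(C, alpha=0.05, B=1000, seed=0):
    """Percentile-bootstrap upper limits for per-variable contributions.
    C: (n_runs x n_vars) contributions from normal-operating-condition runs."""
    rng = np.random.default_rng(seed)
    n = len(C)
    stats = np.empty((B, C.shape[1]))
    for b in range(B):
        stats[b] = C[rng.integers(0, n, n)].mean(axis=0)   # resample runs
    return np.quantile(stats, 1 - alpha, axis=0)
```

A new run's contributions exceeding these limits would then be flagged, instead of simply ranking variables by raw contribution size.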

11.
Fermentation diagnosis by multivariate statistical analysis
During the course of a fermentation, online measuring procedures able to estimate the performance of the current operation are highly desirable. Unfortunately, the poor mechanistic understanding of most biological systems hampers attempts at direct online evaluation of the bioprocess, which is further complicated by the lack of appropriate online sensors and the long lag times associated with offline assays. Quite often the available data lack sufficient detail to be used directly and, after a cursory evaluation, are stored away. However, these historical databases of process measurements may still retain useful information. A multivariate statistical procedure has been applied to analyse the measurement profiles acquired during the monitoring of several fed-batch fermentations for the production of erythromycin. Principal component analysis has been used to extract information from the historical database by projecting the process variables onto a low-dimensional space defined by the principal components, so that each fermentation is identified by a temporal profile in the principal component plane. The projections serve as monitoring charts, consistent with the concept of statistical process control, which are useful for tracking the progress of each fermentation batch and identifying anomalous behaviour (process diagnosis and fault detection).

12.
Principal component analysis (PCA)-based neural network (NNet) models of HfO2 thin films are used to study efficient model selection and to develop an improved model from multivariate functional data such as X-ray diffraction (XRD) data. The accumulation capacitance and the hysteresis index, both characteristic of HfO2 dielectric films, were selected as input parameters for the model by analyzing the process conditions. Standardized XRD data were used to analyze the characteristic variations under different process conditions; the responses and the electrical properties were predicted by NNet modeling using crystallinity-based measurement data. The Bayesian information criterion (BIC) was used to compare model efficiency and to select an improved model for response prediction. Two conclusions summarize the results documented in this paper: (i) physical and material properties can be predicted by the PCA-based NNet model using large-dimension data, and (ii) the BIC can be used for the selection and evaluation of predictive models in semiconductor manufacturing processes. Copyright © 2013 John Wiley & Sons, Ltd.
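The BIC used above for model comparison trades fit quality against model size. For Gaussian errors it can be written in terms of the residual sum of squares, which makes the trade-off explicit (a standard textbook form, not the paper's exact implementation):

```python
import numpy as np

def bic(rss, n, k):
    """Gaussian-likelihood Bayesian information criterion:
    BIC = n * ln(rss / n) + k * ln(n).
    Lower is better; the k * ln(n) term penalises model size."""
    return n * np.log(rss / n) + k * np.log(n)
```

Between two candidate models, the one with the lower BIC is preferred: a larger model is only selected if its reduction in residual error outweighs the k·ln(n) penalty.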

13.
Genomics-based technologies in systems biology have gained considerable popularity in recent years. These technologies generate large amounts of data, and multivariate data analysis methods are required to extract information from them. Many of the datasets generated in genomics are multilevel datasets, in which variation occurs on different levels simultaneously (e.g. variation between organisms and variation in time). We introduce multilevel component analysis (MCA) into the field of metabolic fingerprinting to separate these different types of variation. This is in contrast to the commonly used principal component analysis (PCA), which cannot do this: in a PCA model the different types of variation in a multilevel dataset are confounded.

MCA generates different submodels for the different types of variation. These submodels are lower-dimensional component models in which the variation is approximated, and they are easier to interpret than the original data. Multilevel simultaneous component analysis (MSCA) is a method within the class of MCA models that offers increased interpretability, because the time-resolved variation of all individuals is expressed in the same subspace.

MSCA is applied to a time-resolved metabolomics dataset containing 1H NMR spectra of urine collected from 10 monkeys at 29 time points over 2 months. The MSCA model contains a submodel describing the biorhythms in the urine composition and a submodel describing the variation between the animals; using MSCA, the largest biorhythms in the urine composition and the largest variation between the animals are identified.

Comparison of the MSCA model with a PCA model of the same data shows that the MSCA model is easier to interpret: it gives a better view of the different types of variation in the data because they are not confounded.
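The multilevel decomposition underlying MCA/MSCA splits the data into an offset, a between-individual part (each individual's time average) and a within-individual, time-resolved part; each part is then modelled in its own component subspace. The split itself can be sketched directly (the function name is an assumption):

```python
import numpy as np

def msca_split(X):
    """Separate a multilevel array X (individuals x time x variables) into the
    grand offset, the between-individual part and the within-individual part,
    the levels that MSCA models in distinct subspaces."""
    grand = X.mean(axis=(0, 1), keepdims=True)   # overall offset
    ind_mean = X.mean(axis=1, keepdims=True)     # per-individual time average
    between = ind_mean - grand                   # between-individual variation
    within = X - ind_mean                        # time-resolved deviations
    return grand, between, within
```

The three parts sum exactly back to X, so no variation is lost; a component model (e.g. PCA) fitted to `between` describes differences between individuals, while one fitted to the unfolded `within` describes the shared time-resolved variation.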


14.
Summary: In the last two decades, simulation technology has had a large influence on the process industries. Today, modern numerical methods, powerful personal computers and convenient software packages make it possible to solve complex chemical engineering problems on the basis of rigorous process models at every office workplace. However, although process models are often available from the process design step, model-based operation of production plants is still rare. Changing this situation would contribute significantly to the cost effectiveness of many production plants. This contribution focuses on the model-based operation of polymer processes, which are in several respects not ideally suited for model application: polymer process models tend to be complex, meaningful online measurements are expensive and not always reliable, many polymer processes are operated batchwise rather than at steady state, and for most polymer plants, owing to their smaller throughput, the economic impact of model application is much smaller than for, say, a steam cracker. When model-based operation is considered, it has to be recognized that there is not one single approach but many alternatives, of which perhaps only one will lead to a sustainable economic improvement of the process. From many successfully applied concepts for model-based plant operation it is clear that always trying to implement the most complex solution (e.g. nonlinear closed-loop online optimization) is neither possible nor reasonable; plant-specific, tailor-made solutions are necessary.

15.
Run-to-run (R2R) optimization based on unfolded partial least squares (u-PLS) is a promising approach for improving the performance of batch and fed-batch processes, as it is able to adapt continuously to changing processing conditions. In this technique, the regression coefficients of PLS are used to modify the input profile of the process in order to optimize the yield. When the approach was first proposed, it was observed that the optimization performed better when PLS was combined with a smoothing technique, in particular a sliding-window filter that constrained the regression coefficients to be smooth. In the present paper, this result is investigated further and some modifications to the original approach are proposed. The suitability of different smoothing techniques in combination with PLS is also studied, for both end-of-batch quality prediction and R2R optimization. The smoothing techniques considered include the original filtering approach, the introduction of smoothing constraints into the PLS calibration (penalized PLS), and the use of functional analysis (functional PLS). Two fed-batch process simulators are used to illustrate the results. Copyright © 2015 John Wiley & Sons, Ltd.
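The original sliding-window filtering of the time-resolved regression-coefficient profile amounts to a moving average along the batch time axis. A minimal sketch of that smoothing step (edge padding and the function name are my assumptions, not the paper's exact filter):

```python
import numpy as np

def smooth_coefficients(b, window=5):
    """Moving-average smoothing of a time-resolved regression-coefficient
    profile b (1-D, odd window), the kind of smoothness constraint reported
    to improve R2R optimisation with u-PLS."""
    w = np.ones(window) / window
    pad = window // 2
    bp = np.pad(b, pad, mode='edge')          # pad so the output length matches
    return np.convolve(bp, w, mode='valid')
```

Penalized PLS and functional PLS achieve a similar effect inside the calibration itself, rather than by filtering the coefficients afterwards.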

16.
The adsorption of heavy metals (Pb, Zn, and Cu) on a new bioadsorbent, based on starch reinforced with cellulose modified with toluene diisocyanate, was studied using batch adsorption. The study was carried out to determine whether this biomaterial, designed for the manufacture of seedling pots, can act as a barrier between soil pollutants and plants. The influence of contact time, pH, adsorbent dose, and salt concentration was also evaluated. The data were examined using the Langmuir and Freundlich adsorption models. Optimal results were obtained at pH 5.0, a temperature of 25°C, a contact time of 120 minutes, and an adsorbent dose of 4 mg/mL. The experimental data, along with the computed Langmuir parameters, show that the adsorption process is favorable; the maximum adsorption capacities of the adsorbent for lead, zinc, and copper were 66.66, 58.82, and 47.61 mg/g, respectively.
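The Langmuir parameters reported above (q_max, K_L) are commonly obtained from the linearised isotherm Ce/q = Ce/q_max + 1/(K_L·q_max), fitted by least squares. A sketch of that standard fitting step (the function name is an assumption; the paper does not state which linearisation it used):

```python
import numpy as np

def fit_langmuir(Ce, q):
    """Estimate the Langmuir maximum capacity q_max and affinity K_L from
    equilibrium concentrations Ce and uptakes q via the linearised form
    Ce/q = Ce/q_max + 1/(K_L * q_max)."""
    slope, intercept = np.polyfit(Ce, Ce / q, 1)   # straight-line fit
    qmax = 1.0 / slope
    KL = slope / intercept
    return qmax, KL
```

With clean synthetic data the routine recovers the generating parameters exactly, which is a useful sanity check before applying it to experimental isotherms.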

17.
A new method for process endpoint analysis and control in the manufacture of traditional Chinese medicines, based on process analytical technology (PAT) and the quality-by-design (QbD) design space, is presented. Near-infrared (NIR) spectroscopy was used as the PAT tool to collect NIR spectra of multiple batches run under normal operating conditions. Principal component analysis combined with the moving-block relative standard deviation (PCA-MBRSD) method was used to determine the desired endpoint samples (DEPs) of each batch, and the spectral information of the DEPs from multiple batches defined the process endpoint design space. Within the range determined by this design space, a multivariate statistical process control (MSPC) model was established, and the process endpoint was judged using multivariate Hotelling T2 and SPE control charts. The method was applied to endpoint detection of the ethanol addition step in the alcohol precipitation of honeysuckle (Jinyinhua) extract; the results show that the method is sensitive and accurate, and suitable for endpoint detection in the manufacture of traditional Chinese medicines.
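The moving-block relative standard deviation used in the PCA-MBRSD endpoint criterion computes, for each sliding block of a score trajectory, the block standard deviation relative to the block mean; the endpoint is reached once this ratio settles below a threshold. A minimal sketch (block size and the function name are assumptions):

```python
import numpy as np

def mbrsd(t, block=5):
    """Moving-block relative standard deviation of a 1-D score trajectory t.
    The trajectory is considered settled (endpoint reached) once the MBRSD
    stays below a chosen threshold."""
    t = np.asarray(t, float)
    out = np.empty(len(t) - block + 1)
    for i in range(len(out)):
        w = t[i:i + block]
        out[i] = np.std(w, ddof=1) / abs(np.mean(w))
    return out
```

On a trajectory that converges to a plateau, the MBRSD decays toward zero, which is what makes it usable as an endpoint indicator.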

18.
We present a process monitoring scheme aimed at detecting changes in the networked structure of process data that is able to handle, simultaneously, three pervasive aspects of industrial systems: (i) their multivariate nature, with strong cross-correlations linking the variables; (ii) the dynamic behavior of processes, a consequence of inertial elements coupled with the high sampling rates of industrial acquisition systems; and (iii) the multiscale nature of systems, resulting from the superposition of multiple phenomena spanning different regions of the time-frequency domain. Contrary to current approaches, the multivariate structure is described through a local measure of association, the partial correlation, in order to improve the diagnostic features without compromising detection speed. It is also used to infer the causal structure active at each scale, providing a fine map of the complex behavior of the system. The scale-dependent causal networks are incorporated into multiscale monitoring through data-driven sensitivity enhancing transformations (SETs). The results demonstrate that the use of SETs is a major factor in detecting process upsets; in fact, even single-scale monitoring methodologies can achieve detection capabilities comparable to those of their multiscale counterparts as long as a proper SET is employed. The multiscale approach still proved useful, however, because it led to good results with a much simpler SET model of the system. The application of wavelet transforms is therefore advantageous for systems that are difficult to model, providing a good compromise between modeling complexity and monitoring performance. Copyright © 2015 John Wiley & Sons, Ltd.
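The local association measure used above, the partial correlation, can be computed for all variable pairs at once from the inverse covariance (precision) matrix: ρ_ij = −P_ij / sqrt(P_ii·P_jj). A minimal sketch (the function name is an assumption):

```python
import numpy as np

def partial_correlations(X):
    """Matrix of pairwise partial correlations (each pair conditioned on all
    remaining variables), obtained from the precision matrix of X (n x p)."""
    P = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(P))
    R = -P / np.outer(d, d)
    np.fill_diagonal(R, 1.0)
    return R
```

Unlike the marginal correlation, the partial correlation of two variables that interact only through an intermediate variable is near zero, which is what makes it a local measure suited to recovering network structure.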

19.
In this article, the batch settling of liquid-liquid dispersions in a vertical batch settler is studied comprehensively. The experimental results on phase separation in the batch settler are compared with the well-established physical model proposed by Jeelani and Hartland [1998, Ind. Eng. Chem. Res. 37: 547–554] using a semi-theoretical approach. The effects of initial dispersion height, initial hold-up volume, settler diameter, and mixing time on the separation of batch liquid-liquid dispersions were investigated experimentally in terms of the variation of the final settling time and the movement of the coalescing interfaces with time. It was found that the final separation time varies as a second-degree polynomial of the initial dispersion height and is constant with respect to the initial hold-up volume; it also varies with the completeness of dispersion achieved, which depends on the mixing time. The experimental data showed good agreement with the theoretically predicted values. The results support the use of this experimental procedure, together with the mathematical model, as a tool for monitoring dispersion behavior in commercial units of industrial importance.

20.
The within-device precision of a quantitative assay is the square root of the total variance, which is defined as the sum of the between-day, between-run, and within-run variances under a two-stage nested random-effects model. Methods for point and interval estimation have been proposed, but the literature on sample size determination for within-device precision is scarce. We propose an approach to sample size determination for within-device precision, based on the requirement that the probability that the 100(1 − α)% upper confidence bound for the within-device precision is smaller than the pre-specified claim is greater than 1 − β. We derive the asymptotic distribution of the upper confidence bound, based on the modified large-sample method, for sample size determination and allocation. Our study reveals that the dominant term in sample size determination is the between-day variance. Results of simulation studies are reported, and an example with real data is used to illustrate the proposed method. Copyright © 2014 John Wiley & Sons, Ltd.
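The quantity being sized here is just the square root of the sum of the three variance components named in the abstract, which can be written down directly (the function name is an assumption; estimating the components themselves requires the nested ANOVA the paper describes):

```python
import math

def within_device_precision(var_day, var_run, var_error):
    """Within-device precision: the square root of the total variance, i.e.
    the sum of the between-day, between-run and within-run variance
    components of the two-stage nested random-effects model."""
    return math.sqrt(var_day + var_run + var_error)
```

Since the between-day component typically dominates the total, it also dominates the sample size calculation, consistent with the paper's finding.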

