The machining process removes material using cutting tools. Any variation in tool state affects the quality of the finished job and introduces disturbances, so a tool monitoring scheme (TMS) for the categorization and supervision of failures has become a top priority. In response, this paper advocates a traditional TMS followed by machine learning (ML) analysis. Classification in ML is a supervised learning method in which the algorithm learns from the training data fed to it and then employs the resulting model to categorize new datasets, predicting a class for each observation. In the current study, a single-point cutting tool is investigated while turning a stainless steel (SS) workpiece on a manual lathe trainer. The vibrations developed during this activity are examined for the failure-free state and various failure states of the tool. Statistical modeling is then applied to extract vital signatures from the vibration signals. A multiple-binary-rule-based categorization model is designed using a decision tree. Lastly, various tree-based algorithms are compared for classifying tool conditions. Random Forest offered the highest classification accuracy, 92.6%.
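The statistical-feature step described above can be sketched as follows. This is a minimal illustration assuming numpy and a generic feature set (mean, RMS, kurtosis, crest factor) commonly used in vibration monitoring; the paper's exact features and tree-based classifiers are not reproduced here.

```python
import numpy as np

def vibration_features(signal):
    """Extract generic statistical features from one vibration record.

    These per-record features would form the input rows for a
    tree-based classifier of tool condition.
    """
    s = np.asarray(signal, dtype=float)
    mean = s.mean()
    std = s.std()
    rms = np.sqrt(np.mean(s ** 2))
    # Fourth standardized moment (kurtosis, non-excess form):
    # sensitive to the impulsive spikes typical of tool failure.
    kurtosis = np.mean((s - mean) ** 4) / std ** 4
    # Crest factor: peak amplitude relative to RMS level.
    crest = np.max(np.abs(s)) / rms
    return {"mean": mean, "rms": rms, "kurtosis": kurtosis, "crest": crest}
```

A table of such feature vectors, one row per recorded signal with a tool-condition label, is the natural input to a decision tree or Random Forest classifier.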
The aim of this work is to derive an accurate model of a two-dimensional heating system under switched control from data generated by a finite element solver. The nonintrusive approach should capture both the temperature field dynamics and the underlying switching control rule. To achieve this goal, the algorithm proposed in this paper combines three main ingredients: proper orthogonal decomposition (POD), dynamic mode decomposition (DMD), and artificial neural networks (ANN). Numerical results are presented and compared to high-fidelity numerical solutions to demonstrate the capability of the method to reproduce the dynamics.
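The DMD ingredient named above can be sketched in its standard "exact DMD" form: given snapshot matrices X and Y with Y one time step ahead of X, fit a rank-r linear operator and extract its eigenvalues and modes. This is a generic textbook formulation, not the paper's specific POD/DMD/ANN pipeline.

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: approximate Y ≈ A X with a rank-r operator.

    X, Y : (n_states, n_snapshots) snapshot matrices, Y shifted by one step.
    Returns the eigenvalues of the reduced operator and the DMD modes.
    """
    # Truncated SVD of the first snapshot matrix.
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    # Reduced-order operator in the POD subspace.
    Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(Atilde)
    # Lift eigenvectors back to full state space (exact DMD modes).
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes
```

In a switched-system setting such as the one above, one would typically fit a separate DMD operator per control mode and let a learned rule select among them.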
The aim of this paper is to present a new classification and regression algorithm based on artificial intelligence. The main feature of this algorithm, called Code2Vect, is the nature of the data it treats: qualitative or quantitative, and continuous or discrete. Contrary to other artificial intelligence techniques based on "Big Data," this new approach enables working with a reduced amount of data, within the so-called "Smart Data" paradigm. Moreover, the main purpose of this algorithm is to enable the representation of high-dimensional data and, more specifically, the grouping and visualization of this data according to a given target. For that purpose, the data are projected into a vector space equipped with an appropriate metric, able to group data according to their affinity with respect to a given output of interest. Another application of this algorithm lies in its prediction capability: as with most common data-mining techniques such as regression trees, given an input the output is inferred, in this case accounting for the nature of the data described above. To illustrate its potential, two different applications are addressed, one concerning the representation of high-dimensional and categorical data and another featuring the prediction capabilities of the algorithm.
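The projection-then-predict idea can be illustrated with a minimal sketch. Here the projection matrix `W` is assumed to be given (learning `W` so that the target induces the grouping is the core of Code2Vect and is not reproduced here), and a plain Euclidean distance stands in for the learned metric.

```python
import numpy as np

def project(W, x):
    # Map an encoded data point (qualitative fields one-hot encoded,
    # quantitative fields as-is) into the representation space.
    return W @ x

def predict(W, X_train, y_train, x_new):
    """Infer the output of x_new from its nearest neighbor in the
    projected space (Euclidean metric as a stand-in for the learned one)."""
    z = project(W, x_new)
    Z = (W @ X_train.T).T          # project every training point
    i = np.argmin(np.linalg.norm(Z - z, axis=1))
    return y_train[i]
```

With a well-learned `W`, points sharing similar outputs cluster together in the projected space, so this nearest-neighbor lookup doubles as both visualization and prediction.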
The present work proposes a new methodology for learning reduced models from a small amount of data. It is based on the fact that discrete models, or their transfer-function counterparts, have low rank and can therefore be expressed very efficiently using a few terms of a tensor decomposition. An efficient procedure is proposed, along with a way of extending it to nonlinear settings while limiting the impact of data noise. The proposed methodology is then validated on a nonlinear elastic problem by constructing the model relating tractions and displacements at the observation points.
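The low-rank premise above can be illustrated in the simplest (two-way, matrix) case by truncated SVD: a discrete operator of rank r is reproduced exactly by r terms, so only a few modes need to be identified from data. This is a generic stand-in, not the paper's tensor-decomposition procedure.

```python
import numpy as np

def low_rank_model(H, r):
    """Compress a discrete operator H to its best rank-r approximation.

    Each of the r retained terms is an outer product u_k s_k v_k^T,
    the matrix analogue of one term of a tensor decomposition.
    """
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    # Scale the leading r left singular vectors and recombine.
    return (U[:, :r] * s[:r]) @ Vh[:r, :]
```

If H genuinely has rank r (as a discrete traction-to-displacement map might, to good approximation), the reconstruction is exact; otherwise the truncation is optimal in the Frobenius norm.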
Motivated by applications to machine learning, we construct a reversible and irreducible Markov chain whose state space is a certain collection of measurable sets of a chosen locally compact Hausdorff (l.c.h.) space. We study the resulting network (a connected undirected graph), including transience, Royden and Riesz decompositions, and kernel factorization. We describe a construction of Hilbert spaces of signed measures which comes equipped with a new notion of reproducing kernels, and in which a regularized optimization problem, involving the approximation of functions by functions of finite energy, admits a unique solution. The latter has applications to machine learning (for Markov random fields, for example).
Prediction of the drag-reduction effect caused by pulsating pipe flows is examined using machine learning. First, a large set of flow-field data is obtained experimentally by measuring turbulent pipe flows with various pulsation patterns: more than 7000 waveforms are applied, yielding a maximum drag-reduction rate of 38.6% and a maximum energy-saving rate of 31.4%. The results indicate that the pulsating-flow effect can be characterized by the pulsation period and the pressure gradients during acceleration and deceleration. Subsequently, two machine learning models are tested to predict the drag-reduction rate. The results confirm that the model developed to predict the time variation of flow velocity and differential pressure with respect to the pump voltage can accurately capture the nonlinearity of the pressure gradients. Using this model, the drag-reduction effect can therefore be estimated with high accuracy.
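The regression task described above can be sketched minimally: map a few pulsation features (e.g. period and acceleration/deceleration pressure gradients; the column layout here is hypothetical) to a drag-reduction rate via least squares. The paper's actual models are more elaborate; this only illustrates the feature-to-rate fitting step.

```python
import numpy as np

def fit_dr_model(X, y):
    """Least-squares fit of drag-reduction rate y against feature matrix X.

    X : (n_samples, n_features) pulsation descriptors, e.g.
        [period, accel pressure gradient, decel pressure gradient].
    Returns coefficients with an appended intercept term.
    """
    A = np.hstack([X, np.ones((X.shape[0], 1))])  # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_dr(coef, X):
    """Predict drag-reduction rates for new pulsation patterns."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    return A @ coef
```

A linear fit cannot capture the nonlinear pressure-gradient effects noted above, which is precisely why the study turns to more expressive learned models.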