9 similar documents found
1.
2.
In this paper, a wind energy conversion system is studied to improve the conversion efficiency and maximize power output. Firstly, a nonlinear state-space model is established with respect to shaft current, turbine rotational speed and power output in the wind energy conversion system. Since the wind velocity acting on the system model can be described as a non-Gaussian variable, the survival information potential is adopted to measure the uncertainty of the stochastic tracking error between the actual wind turbine rotation speed and the reference one. Secondly, to minimize the stochastic tracking error, the control input is obtained by recursively optimizing a performance index function constructed with consideration of both the survival information potential and the control input constraints. To avoid complex probability formulations, a data-driven method is adopted to calculate the survival information potential. Finally, a simulation example is given to illustrate the effectiveness of the proposed maximum power point tracking control method. The results demonstrate that, by following this method, the actual wind turbine rotation speed can track the reference speed in less time, with less overshoot and higher precision, and thus the power output can still be guaranteed under the influence of non-Gaussian wind noise.
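For readers unfamiliar with the survival information potential (SIP), the sketch below shows one way a data-driven estimate can be obtained directly from tracking-error samples, using the plug-in definition S_alpha(X) = ∫ [P(|X| > x)]^alpha dx with the empirical survival function. The order alpha, the simulated mixture noise and all variable names are illustrative assumptions, not the controller design from the paper.

```python
import numpy as np

def survival_information_potential(samples, alpha=2.0):
    """Plug-in estimate of the survival information potential (SIP) of order alpha,
    S_alpha(X) = integral_0^inf [P(|X| > x)]^alpha dx,
    computed from the empirical survival function of |samples|."""
    x = np.sort(np.abs(np.asarray(samples, dtype=float)))
    n = x.size
    # Empirical survival function is 1 on [0, x_(1)) and (n - i)/n on [x_(i), x_(i+1)).
    sip = x[0]  # interval [0, x_(1)), where the empirical survival function equals 1
    surv = (n - np.arange(1, n)) / n          # survival values on each step interval
    sip += np.sum(surv**alpha * np.diff(x))   # piecewise-constant integration
    return sip

# Toy usage: SIP of a simulated non-Gaussian tracking error (Gaussian + uniform mixture).
rng = np.random.default_rng(0)
err = np.concatenate([rng.normal(0, 0.1, 800), rng.uniform(-1, 1, 200)])
print(survival_information_potential(err, alpha=2.0))
```

Because the estimate only requires sorting and summation over samples, it avoids any explicit probability-density modelling, which is the practical appeal of the data-driven route mentioned in the abstract.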
3.
Luca Spolladore, Michela Gelfusa, Riccardo Rossi, Andrea Murari. Entropy (Basel, Switzerland), 2021, 23(9)
Model selection criteria are widely used to identify the model that best represents the data among a set of potential candidates. Among the different model selection criteria, the Bayesian information criterion (BIC) and the Akaike information criterion (AIC) are the most popular and best understood. In the derivation of these indicators, it was assumed that the model’s dependent variables have already been properly identified and that the entries are not affected by significant uncertainties. These issues can become quite serious when investigating complex systems, especially when the variables are highly correlated and the measurement uncertainties associated with them are not negligible. More sophisticated versions of these criteria, capable of better detecting spurious relations between variables when non-negligible noise is present, are proposed in this paper. Their derivation starts from a Bayesian statistics framework and adds an a priori chi-squared probability distribution function of the model, dependent on a specifically defined information-theoretic quantity that takes into account the redundancy between the dependent variables. The performance of the proposed versions of these criteria is assessed through a series of systematic simulations, using synthetic data for various classes of functions and noise levels. The results show that the upgraded formulation of the criteria clearly outperforms the traditional ones in most of the cases reported.
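As background, the traditional criteria that the paper upgrades have the standard forms AIC = 2k − 2 ln L̂ and BIC = k ln n − 2 ln L̂. The sketch below computes them for a least-squares fit with Gaussian residuals, where −2 ln L̂ reduces (up to an additive constant) to n ln(RSS/n); it illustrates only the baseline criteria, not the modified versions proposed in the paper, and the polynomial example is an assumption for demonstration.

```python
import numpy as np

def aic_bic_least_squares(y, y_pred, k):
    """Standard AIC and BIC for a least-squares fit with k free parameters,
    using -2 ln L ~ n * ln(RSS / n) for Gaussian residuals."""
    n = len(y)
    rss = np.sum((np.asarray(y) - np.asarray(y_pred)) ** 2)
    neg2loglik = n * np.log(rss / n)
    return neg2loglik + 2 * k, neg2loglik + k * np.log(n)

# Toy comparison: fit polynomials of increasing degree; the lowest value wins.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 2.0 * x - 1.0 + rng.normal(0, 0.1, x.size)   # true model is linear
for degree in (1, 2, 5):
    coeffs = np.polyfit(x, y, degree)
    aic, bic = aic_bic_least_squares(y, np.polyval(coeffs, x), k=degree + 1)
    print(f"degree {degree}: AIC={aic:.1f}, BIC={bic:.1f}")
```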
4.
With the increase in massive digitized datasets of cultural artefacts, social and cultural scientists have an unprecedented opportunity for the discovery and expansion of cultural theory. The WikiArt dataset is one such example, with over 250,000 high-quality images of historically significant artworks by over 3000 artists, ranging from the 15th century to the present day; it is a rich source for the potential mining of patterns and differences among artists, genres, and styles. However, such datasets are often difficult to analyse and use for answering complex questions of cultural evolution and divergence because of their raw format as image files, which are represented as multi-dimensional tensors/matrices. Recent developments in machine learning, multi-modal data analysis and image processing, however, open the door to creating representations that extract important, domain-specific features from images. Art historians have long emphasised the importance of art style, and the colors used in art, as ways to characterise and retrieve art across genre, style, and artist. In this paper, we release a massive vector-based dataset of paintings (WikiArtVectors), with style representations and color distributions, which provides cultural and social scientists with a framework and database to explore relationships across these two vital dimensions. We use state-of-the-art deep learning and human perceptual color distributions to extract the representations for each painting, and aggregate them across artist, style, and genre. These vector representations and distributions can then be used in tandem with information-theoretic and distance metrics to identify large-scale patterns across art style, genre, and artist. We demonstrate the consistency of these vectors, and provide early explorations, while detailing future work and directions. All of our data and code are publicly available on GitHub.
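One of the information-theoretic metrics mentioned above can be illustrated with a short sketch: the Jensen-Shannon distance between two color distributions. This is a generic example, not the WikiArtVectors API; the bin counts and style labels are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def color_distribution_distance(hist_a, hist_b):
    """Jensen-Shannon distance between two color histograms.
    Inputs are nonnegative bin counts; they are normalized to probability distributions."""
    p = np.asarray(hist_a, dtype=float); p /= p.sum()
    q = np.asarray(hist_b, dtype=float); q /= q.sum()
    return jensenshannon(p, q, base=2)   # 0 = identical palettes, 1 = maximally different

# Toy usage with two hypothetical 8-bin color histograms aggregated per style.
impressionist = [120, 300, 80, 40, 10, 5, 200, 60]
baroque       = [10,  40, 60, 300, 250, 90, 20, 15]
print(color_distribution_distance(impressionist, baroque))
```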
5.
Robson P. Bonidia, Anderson P. Avila Santos, Breno L. S. de Almeida, Peter F. Stadler, Ulisses Nunes da Rocha, Danilo S. Sanches, André C. P. L. F. de Carvalho. Entropy (Basel, Switzerland), 2022, 24(10)
In recent years, there has been an exponential growth in sequencing projects due to accelerated technological advances, leading to a significant increase in the amount of data and resulting in new challenges for biological sequence analysis. Consequently, the use of techniques capable of analyzing large amounts of data has been explored, such as machine learning (ML) algorithms. ML algorithms are being used to analyze and classify biological sequences, despite the intrinsic difficulty of extracting representative features from biological sequences in a form suitable for these algorithms. Extracting numerical features to represent sequences therefore makes it statistically feasible to use universal concepts from Information Theory, such as Tsallis and Shannon entropy. In this study, we propose a novel Tsallis-entropy-based feature extractor to provide useful information for classifying biological sequences. To assess its relevance, we prepared five case studies: (1) an analysis of the entropic index q; (2) performance testing of the best entropic indices on new datasets; (3) a comparison with Shannon entropy and (4) with other generalized entropies; (5) an investigation of Tsallis entropy in the context of dimensionality reduction. As a result, our proposal proved to be effective: it was superior to Shannon entropy, robust in terms of generalization, and potentially representative for collecting information in fewer dimensions than methods such as Singular Value Decomposition and Uniform Manifold Approximation and Projection.
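The underlying quantity is the Tsallis entropy S_q = (1 − Σ_i p_i^q) / (q − 1), which recovers Shannon entropy in the limit q → 1. The sketch below computes it from k-mer frequencies of a nucleotide sequence; the k-mer featurization, the choice of k, and the q values are illustrative assumptions rather than the exact descriptor set used in the paper.

```python
import numpy as np
from collections import Counter

def tsallis_entropy(probs, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1); tends to Shannon entropy as q -> 1."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))          # Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def kmer_distribution(sequence, k=3):
    """Relative frequencies of overlapping k-mers in a nucleotide sequence."""
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    total = sum(counts.values())
    return np.array([c / total for c in counts.values()])

# Toy usage: one entropy value per (sequence, q) pair could serve as an ML feature.
seq = "ATGCGATACGCTTAGGCTAATCGGATCGTACGATCG"
for q in (0.5, 1.0, 2.0, 3.0):
    print(q, tsallis_entropy(kmer_distribution(seq, k=3), q))
```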
6.
Theo Steininger, Jait Dixit, Philipp Frank, Maksim Greiner, Sebastian Hutschenreuter, Jakob Knollmüller, Reimar Leike, Natalia Porqueres, Daniel Pumpe, Martin Reinecke, Matevž Šraml, Csongor Varady, Torsten Enßlin. Annalen der Physik, 2019, 531(3)
NIFTy, “Numerical Information Field Theory,” is a software framework designed to ease the development and implementation of field inference algorithms. Field equations are formulated independently of the underlying spatial geometry, allowing the user to focus on the algorithmic design. Under the hood, NIFTy ensures that the discretization of the implemented equations is consistent. This enables the user to prototype an algorithm rapidly in 1D and then apply it to high-dimensional real-world problems. This paper introduces NIFTy 3, a major upgrade to the original NIFTy framework. NIFTy 3 allows the user to run inference algorithms on massively parallel high-performance computing clusters without changing the implementation of the field equations. It supports n-dimensional Cartesian spaces, spherical spaces, power spaces, and product spaces as well as transforms to their harmonic counterparts. Furthermore, NIFTy 3 is able to handle non-scalar fields, such as vector or tensor fields. The functionality and performance of the software package are demonstrated with example code, which implements a mock inference inspired by a real-world algorithm from the realm of information field theory. NIFTy 3 is open-source software available under the GNU General Public License v3 (GPL-3) at https://gitlab.mpcdf.mpg.de/ift/NIFTy/tree/NIFTy_3.
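The claim that NIFTy keeps discretizations consistent can be illustrated with a plain-NumPy analogue (this is not NIFTy's API, only a sketch of the underlying idea): field operations such as inner products carry the pixel volume of the discretized space, so results converge to the continuum value regardless of the chosen resolution.

```python
import numpy as np

def weighted_dot(a, b, volume):
    """Discretized L2 inner product <a, b> = sum_i a_i * b_i * V_i, where V_i is the
    pixel volume. The volume factor makes the result nearly resolution-independent."""
    return np.sum(a * b) * volume

for npix in (64, 256, 1024):                 # three resolutions of the same 1D domain [0, 1)
    x = np.linspace(0.0, 1.0, npix, endpoint=False)
    volume = 1.0 / npix                      # pixel volume on the unit interval
    f = np.sin(2 * np.pi * x)
    # The naive sum grows with resolution, while the volume-weighted version converges
    # to the continuum value integral_0^1 sin^2(2 pi x) dx = 0.5.
    print(npix, np.sum(f * f), weighted_dot(f, f, volume))
```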
7.
Renormalization group techniques are widely used in modern physics to describe the relevant low-energy aspects of systems involving a large number of degrees of freedom. These techniques are thus expected to be a powerful tool for addressing open issues in data analysis when datasets are highly correlated. Signal detection and recognition for a covariance matrix with a nearly continuous spectrum is currently one of these open issues. First investigations in this direction have been proposed recently, based on an analogy between coarse-graining and principal component analysis (PCA) that regards the separation of sampling-noise modes as a UV cut-off for the small eigenvalues of the covariance matrix. The field-theoretical framework proposed in this paper is a synthesis of these complementary points of view, aiming to be a general and operational framework both for theoretical investigations and for experimental detection. Our investigations focus on signal detection. They provide numerical evidence in favor of a connection between symmetry breaking and the existence of an intrinsic detection threshold.
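The PCA-side intuition of treating small covariance eigenvalues as sampling noise can be sketched with a standard random-matrix cut-off. The example below uses the Marchenko-Pastur upper edge as the noise threshold; this is one conventional choice, not the field-theoretical construction of the paper, and the planted-signal setup is purely illustrative.

```python
import numpy as np

def signal_eigenvalues(data, sigma2=1.0):
    """Split the covariance spectrum of `data` (samples x variables) into a bulk of
    'sampling noise' modes and outliers, using the Marchenko-Pastur upper edge
    lambda_+ = sigma^2 * (1 + sqrt(p/n))^2 as a crude cut-off."""
    n, p = data.shape
    cov = np.cov(data, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)
    lambda_plus = sigma2 * (1.0 + np.sqrt(p / n)) ** 2
    return eigvals[eigvals > lambda_plus], lambda_plus

# Toy usage: pure noise plus one planted rank-1 signal direction.
rng = np.random.default_rng(2)
n, p = 2000, 200
noise = rng.normal(size=(n, p))
signal = np.outer(rng.normal(size=n), rng.normal(size=p)) * 0.15
outliers, edge = signal_eigenvalues(noise + signal)
print("MP edge:", edge, "eigenvalues above edge:", outliers)
```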
8.
Information theory can be used to analyze the cost–benefit of visualization processes. However, the current measure of benefit contains an unbounded term that is neither easy to estimate nor intuitive to interpret. In this work, we propose to revise the existing cost–benefit measure by replacing the unbounded term with a bounded one. We examine a number of bounded measures, including the Jensen–Shannon divergence, its square root, and a new divergence measure formulated as part of this work. We describe the rationale for proposing the new divergence measure. In the first part of this paper, we focus on a conceptual analysis of the mathematical properties of these candidate measures. We use visualization to support the multi-criteria comparison, narrowing the search down to several options with better mathematical properties. The theoretical discourse and conceptual evaluation in this part provide the basis for further data-driven evaluation based on synthetic and experimental case studies that are reported in the second part of this paper.
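The unboundedness problem can be seen in a few lines (a generic illustration, not the paper's specific visualization measure): the Kullback-Leibler divergence diverges as soon as one distribution assigns zero probability where the other does not, while the Jensen-Shannon divergence and its square root stay within [0, 1] in base 2.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.special import rel_entr

p = np.array([0.5, 0.5, 0.0])      # a distribution with an empty bin
q = np.array([0.4, 0.4, 0.2])

kl_qp = np.sum(rel_entr(q, p))     # D_KL(q || p) is infinite: q puts mass where p has none
js_div = jensenshannon(p, q, base=2) ** 2   # Jensen-Shannon divergence, bounded by 1 bit
js_dist = jensenshannon(p, q, base=2)       # its square root, a metric bounded by 1

print(kl_qp, js_div, js_dist)      # inf, then two finite values in [0, 1]
```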
9.
Many visual representations, such as volume-rendered images and metro maps, feature a noticeable amount of information loss due to a variety of many-to-one mappings. At a glance, there seem to be numerous opportunities for viewers to misinterpret the data being visualized, thereby undermining the benefits of these visual representations. In practice, there is little doubt that these visual representations are useful. The recently proposed information-theoretic measure for analyzing the cost–benefit ratio of visualization processes can explain the usefulness experienced in practice and postulates that the viewers’ knowledge can reduce the potential distortion (e.g., misinterpretation) due to information loss. This suggests that viewers’ knowledge can be estimated by comparing the potential distortion without any knowledge and the actual distortion with some knowledge. However, the existing cost–benefit measure contains an unbounded divergence term, making the numerical measurements difficult to interpret. This is the second part of a two-part paper, which aims to improve the existing cost–benefit measure. Part I of the paper provided a theoretical discourse about the problem of unboundedness, reported a conceptual analysis of nine candidate divergence measures for resolving the problem, and eliminated three from further consideration. In this Part II, we describe two groups of case studies for evaluating the remaining six candidate measures empirically. In particular, we obtained instance data for (i) supporting the evaluation of the remaining candidate measures and (ii) demonstrating their applicability in practical scenarios for estimating the cost–benefit of visualization processes as well as the impact of human knowledge in the processes. The real-world data about visualization provide practical evidence for evaluating the usability and intuitiveness of the candidate measures. The combination of the conceptual analysis in Part I and the empirical evaluation in this part allows us to select the most appropriate bounded divergence measure for improving the existing cost–benefit measure.