3 similar documents found; search time: 0 ms
1.
Arieh Ben-Naim 《Entropy (Basel, Switzerland)》2022,24(11)
In 2015, I wrote a book with the same title as this article. The book’s subtitle is: “What we know and what we do not know.” On the book’s dedication page, I wrote: “This book is dedicated to readers of popular science books who are baffled, perplexed, puzzled, astonished, confused, and discombobulated by reading about Information, Entropy, Life and the Universe.” In the first part of this article, I will present the definitions of two central concepts: the “Shannon measure of information” (SMI), in Information Theory, and “Entropy”, in Thermodynamics. Following these definitions, I will discuss the framework of their applicability. In the second part of the article, I will examine the question of whether living systems and the entire universe are, or are not, within the framework of applicability of the concepts of SMI and Entropy. I will show that much of the confusion that exists in the literature arises because of people’s ignorance about the framework of applicability of these concepts.
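The abstract does not define the SMI; for orientation, here is a minimal sketch of the standard Shannon measure of information (in bits) for a discrete probability distribution, which is the quantity the SMI refers to. The function name `smi` and the example distributions are illustrative, not taken from the article.

```python
import math

def smi(p):
    """Shannon measure of information (in bits) of a discrete distribution p:
    -sum_i p_i * log2(p_i), with 0 * log 0 taken as 0."""
    assert abs(sum(p) - 1.0) < 1e-9, "p must be a probability distribution"
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

print(smi([0.5, 0.5]))  # 1.0 bit: a fair coin is maximally uncertain
print(smi([1.0]))       # 0.0 bits: a certain outcome carries no information
```

Thermodynamic entropy is this same functional evaluated on a specific distribution (over microstates) and multiplied by Boltzmann's constant, which is the bridge the article's first part discusses.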
2.
Galen Reeves 《Entropy (Basel, Switzerland)》2020,22(11)
This paper explores some applications of a two-moment inequality for the integral of the rth power of a function. The first contribution is an upper bound on the Rényi entropy of a random vector in terms of the two different moments. When one of the moments is the zeroth moment, these bounds recover previous results based on maximum entropy distributions under a single moment constraint. More generally, evaluation of the bound with two carefully chosen nonzero moments can lead to significant improvements with a modest increase in complexity. The second contribution is a method for upper bounding mutual information in terms of certain integrals with respect to the variance of the conditional density. The bounds have a number of useful properties arising from the connection with variance decompositions.
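For readers unfamiliar with the quantity being bounded: the Rényi entropy of order α (α > 0, α ≠ 1) of a discrete distribution is H_α(p) = log(Σ p_i^α) / (1 − α). The sketch below is a plain reference computation, not the paper's bounding technique; the function name is illustrative.

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha (alpha > 0, alpha != 1), in nats,
    of a discrete probability mass function p."""
    assert alpha > 0 and alpha != 1
    assert abs(sum(p) - 1.0) < 1e-9, "p must be a probability distribution"
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

# Uniform distribution on 4 outcomes: every order alpha gives log(4),
# since sum of (1/4)^alpha = 4^(1 - alpha).
p = [0.25, 0.25, 0.25, 0.25]
print(renyi_entropy(p, 2.0))  # ≈ 1.386 (= ln 4)
```

As α → 1 this recovers the ordinary Shannon entropy, which is why Rényi entropy bounds generalize single-moment maximum-entropy results.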
3.
This paper shows if and how the predictability and complexity of stock market data changed over the last half-century and what influence the M1 money supply has. We use three different machine learning algorithms, i.e., a stochastic gradient descent linear regression, a lasso regression, and an XGBoost tree regression, to test the predictability of two stock market indices, the Dow Jones Industrial Average and the NASDAQ (National Association of Securities Dealers Automated Quotations) Composite. In addition, all data under study are discussed in the context of a variety of measures of signal complexity. The results of this complexity analysis are then linked with the machine learning results to discover trends and correlations between predictability and complexity. Our results show a decrease in predictability and an increase in complexity for more recent years. We find a correlation between approximate entropy, sample entropy, and the predictability of the employed machine learning algorithms on the data under study. This link between the predictability of machine learning algorithms and the mentioned entropy measures has not been shown before. It should be considered when analyzing and predicting complex time series data, e.g., stock market data, to, for example, identify regions of increased predictability.
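The abstract leans on sample entropy as a complexity measure without defining it. The following is a minimal textbook-style sketch of sample entropy, SampEn(m, r) = −log(A/B), where B counts pairs of length-m templates within Chebyshev tolerance r and A counts such pairs of length m + 1. It is a naive O(n²) implementation for illustration, not the authors' code; the tolerance is taken as an absolute value (in practice it is often 0.2 × the series' standard deviation).

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D series x: -log(A/B), where B is the number of
    template pairs of length m within Chebyshev tolerance r, and A is the same
    count for templates of length m + 1. Higher values = more irregular data."""
    n = len(x)

    def count_pairs(length):
        templates = [x[i:i + length] for i in range(n - length + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits

    b = count_pairs(m)      # matches at length m
    a = count_pairs(m + 1)  # matches at length m + 1
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A perfectly regular (constant) series has low sample entropy; a strictly
# increasing series with no matching templates has infinite sample entropy.
print(sample_entropy([1.0] * 10))                     # low: highly regular
print(sample_entropy([float(i) for i in range(10)]))  # inf: no template matches at r=0.2
```

The paper's finding is that series with higher sample (and approximate) entropy were also harder for the regression models to predict, which is what such a regularity statistic would suggest.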