Similar Documents
20 similar documents found.
1.
We discuss a mechanism through which inversion symmetry (i.e., invariance of a joint probability density function under the exchange of variables) and Gibrat’s law generate power-law distributions with different tail exponents. Using a dataset of firm size variables, that is, tangible fixed assets K, the number of workers L, and sales Y, we confirm that these variables have power-law tails with different exponents, and that inversion symmetry and Gibrat’s law hold. Based on these findings, we argue that there exists a plane in the three-dimensional space (log K, log L, log Y), with respect to which the joint probability density function for the three variables is invariant under the exchange of variables. We provide empirical evidence suggesting that this plane fits the data well, and argue that the plane can be interpreted as the Cobb–Douglas production function, which has been extensively used in various areas of economics since it was first introduced almost a century ago.
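The fitted-plane idea can be illustrated with a small synthetic sketch (not the authors' dataset or estimation procedure): generate firm variables obeying a Cobb–Douglas relation in logs and recover the plane by least squares. The exponents 0.3 and 0.7 and all noise parameters below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic firm data: log Y = alpha*log K + beta*log L + c + noise,
# a Cobb-Douglas relation with illustrative exponents alpha=0.3, beta=0.7.
n = 10_000
logK = rng.normal(5.0, 1.0, n)
logL = rng.normal(3.0, 1.0, n)
logY = 0.3 * logK + 0.7 * logL + 1.0 + rng.normal(0.0, 0.1, n)

# Fit the plane in (log K, log L, log Y) space by ordinary least squares.
X = np.column_stack([logK, logL, np.ones(n)])
(alpha, beta, c), *_ = np.linalg.lstsq(X, logY, rcond=None)
print(alpha, beta, c)
```

With 10,000 firms and small noise, the recovered coefficients land very close to the injected exponents.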

2.
Giulio Bottazzi, Physica A 2009, 388(7): 1133–1136
If business firms face a multiplicative growth process in which their growth rates are Laplace distributed and independent of their sizes, firm size cannot be distributed according to a stationary Pareto distribution. Recent contributions, using formal arguments, seem to contradict this statement. We prove that the proposed formal results are wrong.
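A minimal simulation (an illustration, not the paper's proof) of why a pure multiplicative process has no stationary size distribution: with size-independent Laplace growth shocks, log size is a random walk, so its variance grows linearly in time instead of settling into a Pareto law. All parameters here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

# Multiplicative growth: S_{t+1} = S_t * exp(g_t), with growth rates g_t
# Laplace-distributed and independent of size, so log S_t is a random walk.
n_firms, t_half, t_full = 50_000, 500, 1000
log_size = np.zeros(n_firms)
for t in range(1, t_full + 1):
    log_size += rng.laplace(0.0, 0.05, n_firms)
    if t == t_half:
        var_half = np.var(log_size)  # cross-sectional variance at mid-time
var_full = np.var(log_size)

# The variance of log size roughly doubles when the horizon doubles:
# there is no stationary distribution, Pareto or otherwise.
print(var_full / var_half)
```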

3.
We present a hypothetical argument against finite-state processes in statistical language modeling that is based on semantics rather than syntax. In this theoretical model, we suppose that the semantic properties of texts in a natural language could be approximately captured by a recently introduced concept of a perigraphic process. Perigraphic processes are a class of stochastic processes that satisfy a Zipf-law accumulation of a subset of factual knowledge, which is time-independent, compressed, and effectively inferrable from the process. We show that the classes of finite-state processes and of perigraphic processes are disjoint, and we present a new simple example of perigraphic processes over a finite alphabet called Oracle processes. The disjointness result makes use of the Hilberg condition, i.e., the almost sure power-law growth of algorithmic mutual information. Using a strongly consistent estimator of the number of hidden states, we show that finite-state processes do not satisfy the Hilberg condition whereas Oracle processes satisfy the Hilberg condition via the data-processing inequality. We discuss the relevance of these mathematical results for theoretical and computational linguistics.

4.
In the rate-distortion function and the Maximum Entropy (ME) method, Minimum Mutual Information (MMI) distributions and ME distributions are expressed by Bayes-like formulas, including Negative Exponential Functions (NEFs) and partition functions. Why do these non-probability functions appear in Bayes-like formulas? On the other hand, the rate-distortion function has three disadvantages: (1) the distortion function is subjectively defined; (2) the definition of the distortion function between instances and labels is often difficult; (3) it cannot be used for data compression according to the labels’ semantic meanings. The author has previously proposed the semantic information G measure, which involves both statistical probability and logical probability. We can now explain NEFs as truth functions, partition functions as logical probabilities, Bayes-like formulas as semantic Bayes’ formulas, MMI as Semantic Mutual Information (SMI), and ME as extreme ME minus SMI. To overcome the above disadvantages, this paper sets up the relationship between truth functions and distortion functions, obtains truth functions from samples by machine learning, and constructs constraint conditions with truth functions to extend rate-distortion functions. Two examples are used to help readers understand the MMI iteration and to support the theoretical results. Using truth functions and the semantic information G measure, we can combine machine learning and data compression, including semantic compression. Further studies are needed to explore general data compression and recovery according to semantic meaning.

5.
In the past decades, there has been a growing literature on the presence of an inertial energy cascade in interplanetary space plasma, interpreted as the signature of magnetohydrodynamic (MHD) turbulence for both fields and passive scalars. Here, we investigate the passive-scalar nature of the solar wind proton density and temperature by looking for scaling features in the mixed-scalar third-order structure functions, using measurements on board the Ulysses spacecraft during two different periods: an equatorial slow solar wind and a high-latitude fast solar wind. We find a linear scaling of the mixed third-order structure function, as predicted by Yaglom’s law for passive scalars, in the case of slow solar wind, while the results for fast solar wind suggest that the mixed fourth-order structure function displays a linear scaling. A simple empirical explanation of the observed difference is proposed and discussed.
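As a sketch of the quantity involved (not the Ulysses analysis itself), a mixed third-order structure function ⟨δv (δθ)²⟩ can be estimated from two time series as follows; the function name and the toy ramp signals are illustrative only.

```python
import numpy as np

def mixed_structure_function(v, theta, lag, order=2):
    """Mixed structure function <delta_v * (delta_theta)**order> at a given lag,
    where delta_x(r) = x(t + r) - x(t). order=2 gives the mixed third-order
    function appearing in Yaglom's law for passive scalars."""
    dv = v[lag:] - v[:-lag]
    dtheta = theta[lag:] - theta[:-lag]
    return np.mean(dv * dtheta**order)

# Deterministic check on linear ramps: dv = lag and dtheta = lag everywhere,
# so the mixed third-order function equals lag**3.
t = np.arange(1000, dtype=float)
print(mixed_structure_function(t, t, lag=2))  # 8.0
```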

6.
Zipf’s law of abbreviation, which posits a negative correlation between word frequency and length, is one of the most famous and robust cross-linguistic generalizations. At the same time, it has been shown that contextual informativity (average surprisal given the previous context) is more strongly correlated with word length, although this tendency is not observed consistently, depending on several methodological choices. The present study examines a more diverse sample of languages than previous studies (Arabic, Finnish, Hungarian, Indonesian, Russian, Spanish and Turkish). I use large web-based corpora from the Leipzig Corpora Collection to estimate word lengths in UTF-8 characters and in phonemes (for some of the languages), as well as word frequency, informativity given the previous word and informativity given the next word, applying different methods of bigram processing. The results show different correlations between word length and the corpus-based measures for different languages. I argue that these differences can be explained by the properties of noun phrases in a language, most importantly by the order of heads and modifiers and their relative morphological complexity, as well as by orthographic conventions.
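The frequency–length correlation at the heart of the law of abbreviation can be computed on a toy corpus as below; this is an illustration, not the study's Leipzig-corpora pipeline, and the tiny hand-built corpus deliberately makes frequent words short.

```python
import numpy as np
from collections import Counter

# Toy corpus: frequent function words are short, rare content words long.
corpus = ("the cat sat on the mat the dog ran to the considerable "
          "extraordinarily the a a of of of in in approximately").split()

counts = Counter(corpus)
words = list(counts)
freq = np.array([counts[w] for w in words], dtype=float)
length = np.array([len(w) for w in words], dtype=float)

# Pearson correlation between log frequency and word length; Zipf's law
# of abbreviation predicts a negative value.
r = np.corrcoef(np.log(freq), length)[0, 1]
print(r)
```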

7.
Wigner’s friend scenarios involve an Observer, or Observers, measuring a Friend, or Friends, who themselves make quantum measurements. In recent discussions, it has been suggested that quantum mechanics may not always be able to provide a consistent account of a situation involving two Observers and two Friends. We investigate this problem by invoking the basic rules of quantum mechanics as outlined by Feynman in the well-known “Feynman Lectures on Physics”. We show here that these “Feynman rules” constrain the a priori assumptions which can be made in generalised Wigner’s friend scenarios, because the existence of the probabilities of interest ultimately depends on the availability of physical evidence (material records) of the system’s past. With these constraints obeyed, an unambiguous and consistent account of all measurement outcomes is obtained for all agents taking part in various Wigner’s friend scenarios.

8.
It is well-known that the law of total probability does not generally hold in quantum theory. However, recent arguments on some of the fundamental assumptions in quantum theory based on the extended Wigner’s friend scenario show a need to clarify how the law of total probability should be formulated in quantum theory and under what conditions it still holds. In this work, the definition of conditional probability in quantum theory is extended to POVM measurements. A rule to assign two-time conditional probability is proposed for incompatible POVM operators, which leads to a more general and precise formulation of the law of total probability. Sufficient conditions under which the law of total probability holds are identified. Applying the theory developed here to analyze several quantum no-go theorems related to the extended Wigner’s friend scenario reveals logical loopholes in these no-go theorems. The loopholes exist as a consequence of taking for granted the validity of the law of total probability without verifying the sufficient conditions. Consequently, the contradictions in these no-go theorems only reconfirm the invalidity of the law of total probability in quantum theory rather than invalidating the physical statements that the no-go theorems attempt to refute.
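A standard textbook qubit example (not taken from this paper) shows numerically how the classical law of total probability fails when an intermediate measurement is incompatible with the final one: summing over the outcomes of an intervening Z measurement changes the probability of a subsequent X outcome.

```python
import numpy as np

# |+> state; Z-measurement projectors; final X-measurement projector onto |+>.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])   # Z outcome 0
P1 = np.array([[0.0, 0.0], [0.0, 1.0]])   # Z outcome 1
Pplus = np.outer(plus, plus)              # X outcome +

rho = np.outer(plus, plus)

# Direct probability of X = + (no intermediate measurement): equals 1.
p_direct = np.trace(Pplus @ rho)

# Classical "total probability" with an intermediate Z measurement:
# sum_a P(X=+ | Z=a) P(Z=a), using the post-measurement states.
p_total = 0.0
for P in (P0, P1):
    p_a = np.trace(P @ rho)
    rho_a = P @ rho @ P / p_a          # state after Z outcome a
    p_total += p_a * np.trace(Pplus @ rho_a)

print(p_direct, p_total)  # ~1.0 vs ~0.5: the law of total probability fails
```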

9.
Shannon’s entropy is one of the building blocks of information theory and an essential aspect of Machine Learning (ML) methods (e.g., Random Forests). Yet, it is only finitely defined for distributions with fast-decaying tails on a countable alphabet. The unboundedness of Shannon’s entropy over the general class of all distributions on an alphabet prevents its potential utility from being fully realized. To fill this void in the foundation of information theory, Zhang (2020) proposed generalized Shannon’s entropy, which is finitely defined everywhere. The plug-in estimator, adopted in almost all entropy-based ML method packages, is one of the most popular approaches to estimating Shannon’s entropy. The asymptotic distribution of the plug-in estimator of Shannon’s entropy is well studied in the existing literature. This paper studies the asymptotic properties of the plug-in estimator of generalized Shannon’s entropy on countable alphabets. The developed asymptotic properties require no assumptions on the original distribution. The proposed asymptotic properties allow for interval estimation and statistical tests with generalized Shannon’s entropy.
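A minimal sketch of the plug-in estimator with a delta-method confidence interval for ordinary Shannon entropy (the paper's object is the generalized entropy, which this sketch does not implement):

```python
import numpy as np

def plugin_entropy_ci(sample, conf_z=1.96):
    """Plug-in estimate of Shannon entropy (in nats) with an asymptotic
    normal confidence interval based on the delta method."""
    _, counts = np.unique(sample, return_counts=True)
    n = counts.sum()
    p = counts / n
    h = -np.sum(p * np.log(p))
    # Asymptotic variance of the plug-in estimator: Var[-log p(X)] / n.
    var = (np.sum(p * np.log(p) ** 2) - h**2) / n
    half = conf_z * np.sqrt(var)
    return h, h - half, h + half

# Sample from a known distribution; true entropy is 1.75 bits = 1.75*ln(2) nats.
rng = np.random.default_rng(1)
sample = rng.choice(4, size=100_000, p=[0.5, 0.25, 0.125, 0.125])
h, lo, hi = plugin_entropy_ci(sample)
print(h, lo, hi)
```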

10.
Image histograms exhibit various distributions, with regions forming symmetrically or asymmetrically depending on the frequency of the intensity levels in the image. In image processing, optimal thresholding aims to accurately separate the regions of the image histogram to obtain the segmented image. Otsu’s method is the most widely used technique for image segmentation. The Otsu algorithm performs automatic image thresholding and returns the optimal threshold by maximizing the between-class variance, using a sum of Gaussian distributions for the intensity levels in the histogram. However, many images have right-skewed intensity histograms that do not fit the between-class variance of the original Otsu algorithm. In this paper, we propose an improvement of the between-class variance based on the lognormal distribution, using the mean and variance of the lognormal. The proposed model aims to handle the drawbacks of asymmetric distributions, especially for images with right-skewed intensity levels. Several images were segmented with the proposed model in parallel with the original Otsu method and related work, including simulated images and Magnetic Resonance Imaging (MRI) scans of brain tumors. Two types of evaluation measures were used, based on unsupervised and supervised metrics. The proposed model showed superior results, and the segmented images indicated better threshold estimation than the original Otsu method and the related improvement.
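For reference, the original Otsu criterion that the paper modifies can be sketched as follows; the lognormal-based improvement itself is not reproduced here, and the synthetic bimodal "image" is purely illustrative.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Classical Otsu: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                      # class-0 weight for each candidate cut
    w1 = 1.0 - w0
    m = np.cumsum(p * centers)             # cumulative class-0 intensity mass
    mT = m[-1]                             # global mean intensity
    # Between-class variance: (mT*w0 - m)^2 / (w0*w1); guard empty classes.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mT * w0 - m) ** 2 / (w0 * w1)
    sigma_b = np.nan_to_num(sigma_b)
    return centers[np.argmax(sigma_b)]

# Bimodal synthetic "image": two well-separated Gaussian intensity clusters,
# so the optimal threshold should fall near the midpoint of the two modes.
rng = np.random.default_rng(7)
img = np.concatenate([rng.normal(60, 8, 5000), rng.normal(180, 8, 5000)])
t = otsu_threshold(img)
print(t)
```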

11.
Korean river design standards set general design standards for rivers and river-related projects in Korea, systematizing the technologies and methods involved in such projects. They include measurement methods for quantities needed in river design, but provide no information on shear stress. Shear stress is one of the most important hydraulic factors in water-related fields, especially for artificial channel design, and is needed for river design and operation. It arises from the frictional force caused by viscosity and fluctuating fluid velocity. In current practice, factors such as the boundary shear stress or the energy gradient are difficult to measure or estimate, and computing the velocity gradient requires the point velocity throughout the entire cross-section. For this reason, the current Korean river design standards use tractive force and critical tractive force instead of shear stress. However, the formula for tractive force makes it difficult to calculate exact values, and in practice tractive force relies on an empirically identified base value. The present study therefore focuses on modeling the shear-stress distribution in open-channel turbulent flow using entropy theory, and suggests a shear-stress distribution formula that can easily be used in practice once a river-specific factor T is calculated. The tractive force and critical tractive force in the Korean river design standards should then be replaced by the shear stress obtained from the proposed distribution method.
The shear-stress distribution model is tested using a wide range of forty-two experimental runs collected from the literature. An error analysis is then performed to further evaluate the accuracy of the proposed model. The results reveal a correlation coefficient of approximately 0.95–0.99, indicating that the proposed method can estimate the shear-stress distribution accurately. After calculating the river-specific factors, the distribution of shear stress shows a correlation coefficient of about 0.86 to 0.98, which suggests that the equation can be applied in practice.

12.
The aim of this work is to develop a qualitative picture of the personal income distribution. Treating an economy as a self-organized system, the key idea of the model is that the income distribution contains competitive and non-competitive contributions. The presented model distinguishes between three main income classes. (1) Capital income from private firms is shown to be the result of an evolutionary competition between products. A direct consequence of this competition is Gibrat’s law, suggesting a lognormal income distribution for small private firms. Taking into account an additional preferential attachment mechanism for large private firms, the income distribution is supplemented by a power-law (Pareto) tail. (2) Due to the division of labor, a diversified labor market is seen as a non-competitive market. In this case, wage income exhibits an exponential distribution. (3) Also included is income from a social insurance system, which can be approximated by a Gaussian peak. A consequence of this theory is that for short time intervals a fixed ratio of total labor (total capital) to net income exists (Cobb–Douglas relation). A comparison with empirical high-resolution income data confirms this pattern of the total income distribution. The theory suggests that competition is the ultimate origin of the uneven income distribution.
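The three-class decomposition can be illustrated by sampling an income population from the proposed component shapes; the shares, scales, and the Hill tail-index check below are all made-up illustration, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative mixture of the three income classes (shares/scales are invented):
n = 100_000
capital_small = rng.lognormal(10.0, 0.7, 9 * n // 100)   # Gibrat: lognormal body
capital_large = 1e6 * (1.0 + rng.pareto(1.5, n // 100))  # preferential attachment: Pareto tail
wages = rng.exponential(3e4, 8 * n // 10)                # non-competitive wages: exponential
insurance = rng.normal(1.2e4, 2e3, n // 10)              # social insurance: Gaussian peak
income = np.concatenate([capital_small, capital_large, wages, insurance])

def hill_estimator(x, k):
    """Hill estimate of the Pareto tail exponent from the top k order statistics."""
    xs = np.sort(x)[::-1][:k + 1]
    return 1.0 / np.mean(np.log(xs[:k] / xs[k]))

# The far tail is dominated by the Pareto capital-income component.
alpha = hill_estimator(income, 500)
print(alpha)  # close to the injected tail exponent 1.5
```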

13.
An analysis of the size distribution of Italian firms by age
Pasquale Cirillo, Physica A 2010, 389(3): 305-3843
In this paper we analyze the size distribution of Italian firms by age. In other words, we want to establish whether the way that firm size is distributed varies as firms become old. As a proxy for size we use capital. In [L.M.B. Cabral, J. Mata, On the evolution of the firm size distribution: Facts and theory, American Economic Review 93 (2003) 1075–1090], the authors study the distribution of Portuguese firms and find that, while the size distribution of all firms is fairly stable over time, the distributions of firms by age group are appreciably different. In particular, as the age of the firms increases, their size distribution on the log scale shifts to the right, the left tail becomes thinner and the right tail thicker, with a clear decrease in skewness. In this paper, we perform a similar analysis with Italian firms using the CEBI database, also considering firms’ growth rates. Although there are several papers dealing with Italian firms and their size distribution, to our knowledge a similar study concerning size and age has not yet been performed for Italy, especially with such a large panel.

14.
In 2015, I wrote a book with the same title as this article. The book’s subtitle is: “What we know and what we do not know.” On the book’s dedication page, I wrote: “This book is dedicated to readers of popular science books who are baffled, perplexed, puzzled, astonished, confused, and discombobulated by reading about Information, Entropy, Life and the Universe.” In the first part of this article, I will present the definitions of two central concepts: the “Shannon measure of information” (SMI) in Information Theory, and “Entropy” in Thermodynamics. Following these definitions, I will discuss the framework of their applicability. In the second part of the article, I will examine the question of whether living systems and the entire universe are, or are not, within the framework of applicability of the concepts of SMI and Entropy. I will show that much of the confusion that exists in the literature arises from ignorance about the framework of applicability of these concepts.

15.
We consider state changes in quantum theory due to “conditional action” and relate these to the discussion of entropy decrease due to interventions of “intelligent beings” and the principles of Szilard and Landauer/Bennett. The mathematical theory of conditional actions is a special case of the theory of “instruments”, which describes changes of state due to general measurements and will therefore be briefly outlined in the present paper. As a detailed example, we consider the imperfect erasure of a qubit that can also be viewed as a conditional action and will be realized by the coupling of a spin to another small spin system in its ground state.

16.
Living cells are complex systems characterized by fluids crowded by hundreds of different elements, including, in particular, a high density of polymers. They are an excellent and challenging laboratory for studying exotic emerging physical phenomena, where entropic forces emerge from the organization processes of many-body interactions. The competition between microscopic and entropic forces may generate complex behaviors, such as phase transitions, which living cells may use to accomplish their functions. In the era of big data, where biological information abounds but general principles and a precise understanding of the microscopic interactions are scarce, entropy methods may offer significant information. In this work, we developed a model in which a complex thermodynamic equilibrium results from the competition between an effective electrostatic short-range interaction and the entropic forces emerging in a fluid crowded by polymers of different sizes. The target audience for this article is interdisciplinary researchers in complex systems, particularly in thermodynamics and biophysics modeling.

17.
Many small biological objects, such as viruses, survive in a water environment and cannot remain active in dry air without condensation of water vapor. From a physical point of view, these objects belong to the mesoscale, where small thermal fluctuations with the characteristic kinetic energy of kBT (where kB is Boltzmann’s constant and T is the absolute temperature) play a significant role. The self-assembly of viruses, including protein folding and the formation of a protein capsid and lipid bilayer membrane, is controlled by hydrophobic forces (i.e., the repulsive forces between hydrophobic particles and regions of molecules) in a water environment. Hydrophobic forces are entropic, driven by a system’s tendency to attain the maximum disordered state. On the other hand, in information systems, entropic forces are responsible for erasing information if the energy barrier between two states of a switch is on the order of kBT, which is referred to as Landauer’s principle. We treat the hydrophobic interactions responsible for the self-assembly of viruses as an information-processing mechanism. We further show a similarity between these submicron-scale processes and self-assembly in colloidal crystals, droplet clusters, and liquid marbles.

18.
Landauer’s principle provides a fundamental lower bound on the energy dissipation accompanying information erasure in the quantum regime. While most studies have related the entropy reduction involved in the erasure to a lower bound (the entropic bound), recent efforts have also provided another lower bound associated with the thermal fluctuation of the dissipated energy (the thermodynamic bound). The coexistence of the two bounds has stimulated comparative studies of their properties; however, these studies were performed for systems where the time evolution of the diagonal (population) and off-diagonal (coherence) elements of the density matrix are decoupled. In this paper, we broaden the comparative study to include the influence of quantum coherence induced by a tilted system–reservoir interaction direction. By examining their dependence on the initial state of the information-bearing system, we find that the following properties of the bounds hold generically, whether or not the influence of coherence is present: the entropic bound serves as the tighter bound for a sufficiently mixed initial state, while the thermodynamic bound is tighter when the purity of the initial state is sufficiently high. The exception is the case where the system dynamics involve only phase relaxation; in this case, the two bounds coincide when the initial coherence is zero; otherwise, the thermodynamic bound serves as the tighter bound. We also find that quantum information erasure is inevitably accompanied by constant energy dissipation caused by the creation of system–reservoir correlation, which may be an additional source of energetic cost for the erasure.

19.
In this paper, the formulation of time-fractional (TF) electrodynamics is derived based on the Riemann–Silberstein (RS) vector. With the use of this vector and fractional-order derivatives, one can write TF Maxwell’s equations in a compact form, which allows for modelling of energy dissipation and the dynamics of electromagnetic systems with memory. We therefore formulate TF Maxwell’s equations using the RS vector and analyse their properties from the point of view of classical electrodynamics, i.e., energy and momentum conservation, reciprocity, and causality. Afterwards, we derive classical solutions for wave-propagation problems, assuming helical, spherical, and cylindrical symmetries of the solutions. The results are supported by numerical simulations and their analysis. A discussion of the relations between the TF Schrödinger equation and TF electrodynamics is included as well.

20.
We studied the prisoner’s dilemma game as applied to signed networks. In signed networks, there are two types of links: positive and negative. To establish a payoff matrix between players connected by a negative link, we multiplied the payoff matrix between players connected by a positive link by −1. To investigate the effect of negative links on cooperating behavior, we performed simulations for different negative-link densities. When the negative-link density is low, the cooperator density falls to zero as the temptation payoff b increases, where b is the payoff received by a defector from playing the game with a cooperator. Conversely, when the negative-link density is high, the cooperator density approaches 1 as b increases. This is because players connected by a negative link suffer more payoff damage if they do not cooperate with each other. The negative links force players to cooperate, so cooperating behavior is enhanced.
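The sign-flip construction for negative links can be written down directly. This toy snippet assumes the common weak prisoner's dilemma convention (R=1, S=P=0, T=b), which the abstract's parameter b suggests but does not fully specify; it shows why defecting against a cooperating neighbor on a negative link is heavily penalized.

```python
import numpy as np

# Weak prisoner's dilemma payoffs for the row player: R=1 (C,C), T=b (D vs C),
# S=0 (C vs D), P=0 (D,D). Rows: my move (0=C, 1=D); columns: opponent's move.
b = 1.5
payoff_pos = np.array([[1.0, 0.0],
                       [b,   0.0]])

# On a negative link the payoff matrix is multiplied by -1.
payoff_neg = -payoff_pos

# Against a cooperating neighbor on a NEGATIVE link:
coop_payoff = payoff_neg[0, 0]    # I cooperate: -R = -1
defect_payoff = payoff_neg[1, 0]  # I defect:    -T = -b, worse for any b > 1

print(coop_payoff, defect_payoff)
```

For b > 1 the defector's payoff −b is below the cooperator's −1, which is the mechanism by which a dense set of negative links pushes the population toward cooperation.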
