Similar Documents (20 results found)
1.
This paper examines relations between econophysics and the law of entropy as foundations of economic phenomena. Ontological entropy, where actual thermodynamic processes are involved in the flow of energy from the Sun through the biosphere and economy, is distinguished from metaphorical entropy, where the mathematics used for modeling entropy is employed to model economic phenomena. Areas considered include general equilibrium theory, growth theory, business cycles, ecological economics, urban–regional economics, income and wealth distribution, and financial market dynamics. It is emphasized that the power-law distributions studied by econophysicists can reflect anti-entropic forces, showing how entropic and anti-entropic forces can interact to drive economic dynamics, such as in the interplay between business cycles, financial markets, and income distributions.

2.
The lack of adequate indicators in digital-economy research may leave governments short of data support for decision making. To address this problem, we first establish a digital economy indicator evaluation system by dividing the digital economy into four types: “basic type”, “technology type”, “integration type” and “service type”, and selecting five indicators for each type. On this basis, the weight of each indicator is calculated with an improved entropy method to reveal deficiencies in the development of particular digital-economy fields. Drawing on the weighting idea of the Analytic Hierarchy Process, the improved entropy method first compares the difference coefficients of the indicators in pairs and maps the comparison results onto the 1–9 scale; a judgment matrix is then constructed from the information entropy, which mitigates the problem of the traditional entropy method producing weights that differ too widely across indicators. The results indicate that the development of the digital economy in Guangdong Province was relatively balanced from 2015 to 2018 and is expected to improve, whereas rural e-commerce in the province lags behind, with an obvious digital gap between urban and rural areas. Next, using principal component analysis and factor analysis from multivariate statistics, we extract two new variables to replace the 20 selected indicators, which retains the original information to the greatest extent and provides convenience for further research. Finally, we provide constructive comments on the digital economy in Guangdong Province from 2015 to 2018.
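For orientation, the classical entropy weight method that the improved approach builds on can be sketched as follows; the data shape, values, and function name are illustrative, and the paper's pairwise difference-coefficient comparison and 1–9 judgment matrix are not reproduced here.

```python
import numpy as np

def entropy_weights(X):
    """Classical entropy weight method (illustrative sketch).

    X: (n_samples, n_indicators) array of positive indicator values
    (rows: years or regions, columns: indicators; in practice the raw
    indicators are normalized first). Indicators whose values vary more
    across samples have lower information entropy and get larger weights.
    """
    P = X / X.sum(axis=0, keepdims=True)        # column-wise proportions
    k = 1.0 / np.log(X.shape[0])
    e = -k * (P * np.log(P)).sum(axis=0)        # information entropy per indicator
    d = 1.0 - e                                 # difference coefficients
    return d / d.sum()                          # normalized weights

# Toy example: 4 years x 5 indicators of synthetic positive data
rng = np.random.default_rng(0)
print(entropy_weights(rng.uniform(1.0, 10.0, size=(4, 5))).round(3))
```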

3.
The research analyzes the progress of Member States in the implementation of the Europe 2020 strategy targets and goals in 2016–2018. Multiple-criteria decision-making approaches were applied to this task. The set of headline indicators was divided into two logically motivated groups. Interval entropy is proposed as an effective tool to prioritize headline indicators within the separate groups; the sensitivity of interval entropy is its advantage over classical entropy. Indicator weights were calculated by applying the WEBIRA (weight-balancing indicator ranks accordance) method, which best harmonizes ranking results across different criteria groups and is thereby advantageous over other multiple-criteria methods. The final assessment and ranking of the 28 European Union countries (EU-28) were carried out through the α-cut approach, and a k-means clustering procedure was applied to the EU-28 countries by summarizing the ranking results for 2016–2018, as sketched below. The investigation revealed the leaders and outsiders of the Europe 2020 strategy implementation process: Sweden, Finland, Denmark, and Austria exhibited the greatest progress over the three-year period according to the interrelation of the two headline indicator groups. The cluster analysis results are largely consistent with the EU-28 country categorizations proposed by other authors.
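As a minimal illustration of the final clustering step only, the sketch below groups countries by summarized ranking scores with scikit-learn's k-means; the data are synthetic stand-ins rather than the study's WEBIRA or interval-entropy results, and the number of clusters is assumed.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in: summarized 2016-2018 ranking scores of 28 countries
# under two criteria groups (columns). Real inputs would come from WEBIRA.
rng = np.random.default_rng(5)
rank_summary = rng.uniform(1, 28, size=(28, 2))

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(rank_summary)
print(labels)          # cluster membership of each country
```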

4.
We consider brain activity from an information-theoretic perspective. We analyze information processing in the brain, considering the optimality of Shannon entropy transport within the Monge–Kantorovich framework. It is proposed that some of these processes satisfy an optimal-transport condition for informational entropy. This optimality condition allows us to derive a Monge–Ampère-type equation for the information flow, whose linearization accounts for the branching structure of neurons. On this basis, we discuss a version of Murray's law in this context.
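For background only (not the paper's derivation): in standard optimal transport with quadratic cost, a source density f is pushed to a target density g by a map T(x) = ∇u(x) whose potential u solves a Monge–Ampère equation,

$$ \det\!\big(D^2 u(x)\big)\, g\big(\nabla u(x)\big) = f(x), $$

and the informational-entropy version discussed in the abstract is a variant of this generic form.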

5.
In recent years, the identification of essential nodes in complex networks has attracted significant attention because of its theoretical and practical significance in many applications, such as preventing and controlling epidemic diseases and discovering essential proteins. Several importance measures have been proposed from diverse perspectives to identify crucial nodes more accurately. In this paper, we propose a novel importance metric called node propagation entropy, which combines the clustering coefficients of nodes with the influence of the first- and second-order neighbor numbers on node importance, identifying essential nodes from an entropy perspective while considering both the local and the global information of the network. Furthermore, the susceptible–infected–removed and susceptible–infected–removed–susceptible epidemic models, along with the Kendall coefficient, are used to reveal the relevant correlations among the various importance measures. Experiments conducted on several real networks from different domains show that the proposed metric is more accurate and stable in identifying significant nodes than many existing techniques, including degree centrality, betweenness centrality, closeness centrality, eigenvector centrality, and the H-index.
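The abstract does not give the propagation-entropy formula itself; the sketch below only illustrates the generic ingredients it combines (clustering coefficients plus first- and second-order neighborhood sizes, turned into an entropy-style score), with a scoring rule that is assumed for illustration rather than taken from the paper.

```python
import math
import networkx as nx

def neighborhood_entropy_scores(G):
    """Entropy-style node importance (illustrative, not the paper's exact metric)."""
    clustering = nx.clustering(G)
    influence = {}
    for v in G:
        first = set(G[v])                         # first-order neighbors
        second = set()
        for u in first:                           # second-order neighbors
            second |= set(G[u])
        second -= first | {v}
        # Assumed rule: larger neighborhoods and lower clustering -> larger raw influence
        influence[v] = (len(first) + 0.5 * len(second)) * (1.0 - 0.5 * clustering[v])
    total = sum(influence.values())
    # Shannon-style contribution of each node's share of the total influence
    return {v: -(x / total) * math.log(x / total) for v, x in influence.items() if x > 0}

scores = neighborhood_entropy_scores(nx.karate_club_graph())
print(sorted(scores, key=scores.get, reverse=True)[:5])   # five top-ranked nodes
```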

6.
Coronary heart disease (CHD) is the leading cause of cardiovascular death. This study aimed to propose an effective method for mining cardiac mechano-electric coupling information and to evaluate its ability to distinguish patients with varying degrees of coronary artery stenosis (VDCAS). Five-minute electrocardiogram and phonocardiogram signals were collected synchronously from 191 VDCAS patients to construct heartbeat interval (RRI)–systolic time interval (STI), RRI–diastolic time interval (DTI), HR-corrected QT interval (QTcI)–STI, QTcI–DTI, Tpeak–Tend interval (TpeI)–STI, TpeI–DTI, Tpe/QT interval (Tpe/QTI)–STI, and Tpe/QTI–DTI series. Then, the cross sample entropy (XSampEn), cross fuzzy entropy (XFuzzyEn), joint distribution entropy (JDistEn), magnitude-squared coherence function, cross power spectral density, and mutual information were applied to evaluate the coupling of the series. Subsequently, support vector machine recursive feature elimination and XGBoost were utilized for feature selection and classification, respectively. The results showed that the joint analysis of XSampEn, XFuzzyEn, and JDistEn had the best ability to distinguish patients with VDCAS. The classification accuracies for the severe CHD vs. mild-to-moderate CHD, severe CHD vs. chest pain with normal coronary angiography (CPNCA), and mild-to-moderate CHD vs. CPNCA groups were 0.8043, 0.7659, and 0.7500, respectively. The study indicates that the joint analysis of XSampEn, XFuzzyEn, and JDistEn can effectively capture the cardiac mechano-electric coupling information of patients with VDCAS, which can provide valuable information for clinicians diagnosing CHD.
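Of the coupling measures listed, cross sample entropy is the most standard; a minimal sketch of the usual XSampEn(m, r) computation is given below on synthetic data (the parameter choices and series are illustrative, and the paper's fuzzy and joint-distribution variants are not reproduced).

```python
import numpy as np

def cross_sample_entropy(u, v, m=2, r=0.2):
    """Cross sample entropy XSampEn(m, r) of two equal-length series (sketch).

    Both series are z-scored, so r is a fraction of one standard deviation.
    Returns -ln(A / B), where B and A are the mean probabilities that m- and
    (m+1)-point templates of u match templates of v within Chebyshev distance r.
    """
    u = (np.asarray(u, float) - np.mean(u)) / np.std(u)
    v = (np.asarray(v, float) - np.mean(v)) / np.std(v)

    def match_fraction(k):
        Xu = np.array([u[i:i + k] for i in range(len(u) - k)])
        Xv = np.array([v[i:i + k] for i in range(len(v) - k)])
        d = np.abs(Xu[:, None, :] - Xv[None, :, :]).max(axis=2)   # Chebyshev distances
        return np.mean(d <= r)

    return -np.log(match_fraction(m + 1) / match_fraction(m))

rng = np.random.default_rng(1)
x = rng.standard_normal(300)
y = 0.7 * x + 0.3 * rng.standard_normal(300)     # partially coupled surrogate pair
print(cross_sample_entropy(x, y))
```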

7.
This research article shows how the pricing of derivative securities can be viewed in the context of stochastic optimal control theory and information theory. The financial market is seen as an information-processing system that optimizes an information functional. An optimization problem is constructed for which the linearized Hamilton–Jacobi–Bellman equation is the Black–Scholes pricing equation for financial derivatives. The model suggests that one can define a reasonable Hamiltonian for the financial market, which results in an optimal transport equation for the market drift. It is shown that in such a framework, which supports Black–Scholes pricing, the market drift obeys a backward Burgers equation and that the market reaches a thermodynamic equilibrium, which minimizes the free energy and maximizes entropy.
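For reference, the Black–Scholes pricing equation to which the linearized Hamilton–Jacobi–Bellman equation is said to reduce has the standard form

$$ \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - r V = 0, $$

where V(S, t) is the derivative price, S the underlying price, σ the volatility, and r the risk-free rate; the paper's stochastic-control derivation itself is not reproduced here.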

8.
In previous research, we showed that ‘texts that tell a story’ exhibit a statistical structure that is not Maxwell–Boltzmann but Bose–Einstein. Our explanation is that this is due to the presence of ‘indistinguishability’ in human language, as a result of the same words in different parts of the story being indistinguishable from one another, in much the same way that ‘indistinguishability’ occurs in quantum mechanics, where it likewise leads to Bose–Einstein rather than Maxwell–Boltzmann statistics. In the current article, we set out to explain these Bose–Einstein statistics in human language. We show that it is the presence of ‘meaning’ in ‘texts that tell a story’ that gives rise to the lack of independence characteristic of Bose–Einstein statistics, and provides conclusive evidence that ‘words can be considered the quanta of human language’, structurally similar to how ‘photons are the quanta of electromagnetic radiation’. Using several studies on entanglement from our Brussels research group, and by introducing the von Neumann entropy for human language, we also show that it is likewise the presence of ‘meaning’ in texts that makes the entropy of a total text smaller than the entropy of the words composing it. We explain how these new insights fit into the research domain called ‘quantum cognition’, where quantum probability models and quantum vector spaces are used to model human cognition, and how they are also relevant to the use of quantum structures in information retrieval and natural language processing, where they introduce ‘quantization’ and ‘Bose–Einstein statistics’ as relevant quantum effects. Inspired by the conceptuality interpretation of quantum mechanics, and relying on these new insights, we put forward hypotheses about the nature of physical reality. In doing so, we note how this new type of decrease in entropy, and its explanation, may be important for the development of quantum thermodynamics. We likewise note how it can give rise to an original explanatory picture of the nature of physical reality on the surface of planet Earth, in which human culture emerges as a reinforcing continuation of life.
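For reference, the von Neumann entropy invoked above is the standard quantum generalization of Shannon entropy,

$$ S(\rho) = -\operatorname{Tr}\big(\rho \ln \rho\big), $$

and for a pure entangled state of a composite system one has S(ρ_AB) = 0 while S(ρ_A) = S(ρ_B) > 0, so the entropy of the whole can indeed be smaller than that of its parts; this is the generic mechanism behind the entropy decrease described in the abstract, not a reproduction of the paper's specific calculation.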

9.
Lithosphere–ionosphere non-linear interactions create a complex system in which links between different phenomena can remain hidden. The statistical correlation between strong West Pacific earthquakes and bursts of high-energy electrons escaping trapped conditions was demonstrated in past works; here, it is investigated from the point of view of information theory. Starting from the conditional probability statistical model deduced from that correlation, the Shannon entropy, the joint entropy, and the conditional entropy are calculated. Time-delayed mutual information and transfer entropy are also calculated analytically for binary events, including correlations between consecutive earthquake events and between consecutive earthquakes and electron bursts. These quantities are evaluated for the lithosphere–ionosphere complex dynamical system, although the probability-based expressions are valid for any pair of binary events. Peaks occurred at the same time delay as in the correlations, Δt = 1.5–3.5 h, as well as at a new time delay, Δt = −58.5 to −56.5 h, for the transfer entropy; the latter is linked to earthquake self-correlations in the analysis. Although the low number of self-correlated earthquakes makes this second peak insignificant in this case, it is of interest to separate the non-linear contribution of the transfer entropy of binary events in the study of a complex system.
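A minimal sketch of the information measures named above, computed empirically for two aligned binary event series, is given below; the series are synthetic, and the paper's time-delayed and transfer-entropy analysis amounts to applying the same formulas to lagged versions of the events.

```python
import numpy as np

def binary_info_measures(x, y):
    """Empirical entropies (in bits) for two aligned binary event series."""
    x, y = np.asarray(x, int), np.asarray(y, int)
    pxy = np.array([[np.mean((x == a) & (y == b)) for b in (0, 1)] for a in (0, 1)])
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def H(p):                                    # Shannon entropy of a distribution
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    Hx, Hy, Hxy = H(px), H(py), H(pxy.ravel())
    return {"H(X)": Hx, "H(Y)": Hy, "H(X,Y)": Hxy,
            "H(Y|X)": Hxy - Hx, "I(X;Y)": Hx + Hy - Hxy}

rng = np.random.default_rng(2)
eq = rng.random(5000) < 0.10                                        # synthetic earthquake series
eb = (rng.random(5000) < 0.05) | (eq & (rng.random(5000) < 0.3))    # correlated electron bursts
print(binary_info_measures(eq, eb))
```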

10.
In this paper, we quantitatively compare loss functions based on the parameterized Tsallis–Havrda–Charvat entropy and the classical Shannon entropy for training a deep network on the small datasets usually encountered in medical applications. Shannon cross-entropy is widely used as a loss function for most neural networks applied to the segmentation, classification and detection of images, and Shannon entropy is a particular case of Tsallis–Havrda–Charvat entropy. In this work, we compare these two entropies through a medical application for predicting recurrence in patients with head and neck and lung cancers after treatment. Based on both CT images and patient information, a multitask deep neural network is proposed that performs a recurrence prediction task, using cross-entropy as the loss function, together with an image reconstruction task. The Tsallis–Havrda–Charvat cross-entropy is parameterized by α, with Shannon entropy recovered for α = 1, and the influence of this parameter on the final prediction results is studied. The experiments are conducted on two datasets comprising 580 patients in total, of whom 434 had head and neck cancers and 146 had lung cancers. The results show that the Tsallis–Havrda–Charvat entropy can achieve better prediction accuracy for some values of α.
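A common parameterization of the Tsallis–Havrda–Charvat cross-entropy, given here for orientation and possibly differing in detail from the loss used in the paper, is

$$ H_\alpha(p, q) = \frac{1}{\alpha - 1}\Big(1 - \sum_i p_i\, q_i^{\,\alpha - 1}\Big), $$

which recovers the Shannon cross-entropy $-\sum_i p_i \ln q_i$ in the limit α → 1.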

11.
Based on analysis and measurement of the overall situation, the import and export structure, and the international competitiveness of the service trade sectors in the Guangdong–Hong Kong–Macao Greater Bay Area, a synergy degree model was established with MATLAB and Grey System Modeling software to quantitatively analyze the synergy level of service trade in the Greater Bay Area, using grey correlation analysis and the entropy weight method. The results show that the overall development trend of service trade in the Guangdong–Hong Kong–Macao Greater Bay Area is good. The service trade industries in different regions are highly complementary and strongly correlated, the potential for the coordinated development of internal service trade is excellent, and service trade in the Greater Bay Area as a whole is transitioning from a moderate to a high level of synergy. The Greater Bay Area can achieve industrial synergy by accelerating industrial integration and green transformation, establishing a coordinated development mechanism, sharing market platforms, strengthening personnel support, and further enhancing the international competitiveness of service trade. The established model reflects the current coordination of service trade in the Guangdong–Hong Kong–Macao Greater Bay Area well and has good applicability. In the future, more economic, technological, geographic, and policy data can be used to study the spatial pattern, evolution rules, and mechanisms of coordinated development in the broader area.

12.
Previous hotel performance studies have neglected the role of information entropy in feedback processes between input and output management. This paper addresses this gap by exploring the relationship between hotel performance at the industry level and the capability of learning by doing and adopting best practices, using a sample of 153 UK hotels over the 10-year period 2008–2017. The research also fills a literature gap by addressing the issue of measuring hotel performance in the presence of negative outputs. To achieve this, we apply a novel modified slack-based model for the efficiency analysis and the Least Absolute Shrinkage and Selection Operator (LASSO) to examine the influence of entropy-related variables on the efficiency scores. The results indicate that less can be learnt from inputs than from outputs to improve efficiency levels, and that resource allocation is more balanced than cash flow and liquidity. The findings suggest that market dynamics explain cash flow generation potential and liquidity, and that market conditions increasingly offer opportunities for learning and improving hotel efficiency. The distinctive characteristic of superior performance in hotel operations is the capability to match cash flow generation potential with market opportunities.
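As a minimal illustration of the second step only, the sketch below regresses efficiency scores on candidate explanatory variables with a cross-validated Lasso, which shrinks uninformative coefficients to zero; the data are synthetic and the variable layout is assumed, not taken from the study.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Synthetic stand-in: 153 hotels, 6 candidate regressors (e.g., an
# entropy-related variable plus controls) explaining an efficiency score.
rng = np.random.default_rng(6)
X = rng.standard_normal((153, 6))
efficiency = 0.6 + 0.2 * X[:, 0] - 0.1 * X[:, 2] + 0.05 * rng.standard_normal(153)

model = LassoCV(cv=5).fit(X, efficiency)
print(model.coef_.round(3))        # coefficients of irrelevant regressors shrink to zero
```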

13.
Entropy is a concept that emerged in the 19th century, when it was associated with the heat harnessed by a thermal machine to perform work during the Industrial Revolution. The 20th century then brought an unprecedented scientific revolution through one of its most essential innovations, information theory, which also encompasses a concept of entropy. A question therefore naturally arises: what is the difference, if any, between the concepts of entropy in each field of knowledge? Misconceptions persist, as there have been multiple attempts to reconcile the entropy of thermodynamics with that of information theory. Entropy is most commonly defined as “disorder”, although this is not a good analogy, since “order” is a subjective human concept and “disorder” cannot always be inferred from entropy. This paper therefore presents a historical background on the evolution of the term “entropy”, and provides mathematical evidence and logical arguments regarding its interconnections across scientific areas, with the objective of providing a theoretical review and reference material for a broad audience.

14.
With the rapid expansion of graphs and networks and the growing magnitude of data from all areas of science, effective treatment and compression schemes for context-dependent data are extremely desirable. A particularly interesting direction is to compress the data while keeping only the “structural information” and ignoring the concrete labelings. In this direction, Choi and Szpankowski introduced structures (unlabeled graphs), which allowed them to compute the structural entropy of the Erdős–Rényi random graph model; they also provided a compression algorithm that asymptotically achieves this entropy limit and runs in expected linear time. In this paper, we consider stochastic block models with an arbitrary number of parts. We define a partitioned structural entropy for stochastic block models, which generalizes the structural entropy of unlabeled graphs and also encodes the partition information. We then compute the partitioned structural entropy of the stochastic block models and provide a compression scheme that asymptotically achieves this entropy limit.
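For orientation, the Erdős–Rényi structural entropy referred to above is, roughly and under conditions on p stated in Choi and Szpankowski's work,

$$ H(S) \approx \binom{n}{2}\, h(p) - \log_2 n!, \qquad h(p) = -p \log_2 p - (1 - p) \log_2 (1 - p), $$

i.e., the labeled-graph entropy minus the log of the number of labelings; the partitioned structural entropy defined in the paper adds a term for the partition information and is not reproduced here.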

15.
In this work, we first consider the discrete version of the Fisher information measure and then propose the Jensen–Fisher information, developing some associated results. Next, we consider the Fisher information and Bayes–Fisher information measures for the mixing parameter vector of a finite mixture probability mass function and establish some results. We provide connections between these measures and known informational measures such as the chi-square divergence, Shannon entropy, and the Kullback–Leibler, Jeffreys and Jensen–Shannon divergences.

16.
The best-known and most widely used abstract model of the financial market is based on the concept of the informational efficiency of that market (the efficient market hypothesis, EMH). This paper proposes an alternative, which could be named the behavioural efficiency of the financial market, based on behavioural entropy instead of informational entropy. More specifically, the paper supports the idea that, in the financial market, the only measure (if any) of entropy is the set of available behaviours indicated by implicit information; behavioural entropy is therefore linked to the concept of behavioural efficiency. The paper argues that financial markets in fact exhibit not (real) informational efficiency but behavioural efficiency instead. The proposal is based both on a new typology of information in the financial market, which provides the concept of implicit information, that is, information “translated” by economic agents from observing actual behaviours, and on a non-linear (more exactly, logistic) curve linking the behavioural entropy to the behavioural efficiency of financial markets. Finally, the paper proposes a synergic overcoming of both the EMH and the AMH based on the new concept of behavioural entropy in the financial market.

17.
The pattern of financial cycles in the European Union has a direct impact on financial stability and economic sustainability in view of adoption of the euro. The purpose of the article is to identify the degree of coherence between the credit cycles of countries potentially seeking to adopt the euro and the credit cycle inside the Eurozone. We first estimate the credit cycles in the selected countries and in the euro area (at the aggregate level), filtering the series with the Hodrick–Prescott filter for the period 1999Q1–2020Q4. Based on these values, we compute indicators of credit cycle similarity and synchronicity in the selected countries, together with a set of entropy measures (block entropy, entropy rate, Bayesian entropy), to show the high degree of heterogeneity, noting that the global financial crisis changed the credit cycle patterns in some countries. Our novel approach provides analytical tools for euro adoption decisions, showing how the coherence of credit cycles can be increased among European countries and how national macroprudential policies can be better coordinated, especially in light of the changes caused by the pandemic crisis.
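A minimal sketch of the trend–cycle decomposition step with the Hodrick–Prescott filter is shown below on a synthetic quarterly series; statsmodels' hpfilter is used with the textbook quarterly smoothing parameter of 1600 (credit-gap studies often use a much larger value, e.g., 400,000, to isolate longer cycles).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

# Synthetic quarterly credit-type series over 1999Q1-2020Q4 (88 quarters)
idx = pd.period_range("1999Q1", "2020Q4", freq="Q")
rng = np.random.default_rng(3)
credit = pd.Series(100 + np.cumsum(rng.normal(0.5, 1.0, len(idx))), index=idx)

cycle, trend = hpfilter(credit, lamb=1600)   # returns (cyclical component, trend)
print(cycle.tail())
```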

18.
In this work, we introduce a generalized measure of cumulative residual entropy and study its properties. We show that several existing measures of entropy, such as the cumulative residual entropy, weighted cumulative residual entropy and cumulative residual Tsallis entropy, are all special cases of this generalized cumulative residual entropy. We also propose a measure of generalized cumulative entropy, which includes the cumulative entropy, weighted cumulative entropy and cumulative Tsallis entropy as special cases. We discuss a generating-function approach by which different entropy measures can be derived, and we provide residual and cumulative versions of the Sharma–Taneja–Mittal entropy, obtaining them as special cases of the generalized measure. Finally, using the newly introduced entropy measures, we establish some relationships between entropy and extropy measures.
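For reference, the (non-generalized) cumulative residual entropy that appears here as a special case is, for a nonnegative random variable X with survival function F̄(x) = 1 − F(x),

$$ \mathcal{E}(X) = -\int_0^{\infty} \bar F(x) \ln \bar F(x)\, dx, $$

with the weighted and Tsallis variants obtained by inserting a weight x or replacing the logarithm with its Tsallis analogue; the paper's generalized measure and generating-function construction are not reproduced here.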

19.
We provide a stochastic extension of the Baez–Fritz–Leinster characterization of the Shannon information loss associated with a measure-preserving function. This recovers the conditional entropy and a closely related information-theoretic measure that we call conditional information loss. Although not functorial, these information measures are semi-functorial, a concept we introduce that is definable in any Markov category. We also introduce the notion of an entropic Bayes’ rule for information measures, and we provide a characterization of conditional entropy in terms of this rule.

20.
Although the sizes of business firms have been a subject of intensive research, the definition of the “size” of a firm remains unclear. In this study, we empirically characterize in detail the scaling relations between size measures of business firms, analyzing them in terms of allometric scaling. Using a large dataset that tracked approximately one million Japanese firms annually for two decades (1994–2015), we examined up to trivariate relations between corporate size measures: annual sales, capital stock, total assets, and the numbers of employees and trading partners. The data were examined using a multivariate generalization of a previously proposed method for analyzing bivariate scalings. We found that the relations between measures other than capital stock are marked by allometric scaling. Power-law exponents for the scalings and distributions of multiple firm size measures were mostly robust across years but fluctuated in a way that appeared to correlate with national economic conditions. We established theoretical relations between the exponents. We expect these results to allow direct estimation of the effects of using alternative size measures of business firms in regression analyses, to facilitate the modeling of firms, and to enhance the current theoretical understanding of complex systems.
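A minimal sketch of estimating a single bivariate allometric exponent by ordinary least squares in log–log coordinates is given below; the firm data are synthetic and the multivariate generalization used in the study is not reproduced.

```python
import numpy as np

def allometric_exponent(x, y):
    """OLS estimate of b (and prefactor a) in the scaling relation y ~ a * x**b."""
    b, log_a = np.polyfit(np.log(x), np.log(y), deg=1)
    return b, np.exp(log_a)

# Synthetic firms: sales scaling roughly as employees**1.1 with lognormal noise
rng = np.random.default_rng(4)
employees = rng.lognormal(mean=3.0, sigma=1.0, size=5000)
sales = 50.0 * employees ** 1.1 * rng.lognormal(sigma=0.4, size=5000)

b, a = allometric_exponent(employees, sales)
print(f"estimated exponent: {b:.2f}")
```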
