Similar Documents
20 similar documents retrieved.
1.
In this paper, we quantify the statistical coherence between financial time series by means of the Rényi entropy. With the help of Campbell's coding theorem, we show that the Rényi entropy selectively emphasizes only certain sectors of the underlying empirical distribution while strongly suppressing others. This accentuation is controlled with Rényi's parameter q. To tackle the issue of the information flow between time series, we formulate the concept of Rényi's transfer entropy as a measure of information that is transferred only between certain parts of the underlying distributions. This is particularly pertinent for financial time series, where knowledge of marginal events such as spikes or sudden jumps is of crucial importance. We apply the Rényian information flow to stock market time series from 11 world stock indices sampled at a daily rate over the period 02.01.1990–31.12.2009. The corresponding heat maps and net information flows are represented graphically. A detailed discussion of the transfer entropy between the DAX and S&P500 indices based on minute tick data gathered in the period 02.04.2008–11.09.2009 is also provided. Our analysis shows that the bivariate information flow between world markets is strongly asymmetric, with a distinct information surplus flowing from the Asia–Pacific region to both the European and US markets. An important yet less dramatic excess of information also flows from Europe to the US. This is seen particularly clearly in a careful analysis of the Rényi information flow between the DAX and S&P500 indices.
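For orientation, here is a minimal sketch of the Rényi entropy of an empirical return distribution (Python). The histogram-based estimator, the bin count, and the toy data are illustrative assumptions, not the authors' estimator; the point is only that the parameter q controls which part of the distribution dominates (q < 1 weights the tails, q > 1 the centre).

```python
import numpy as np

def renyi_entropy(samples, q, bins=50):
    """Histogram-based Rényi entropy H_q = log(sum p_i^q) / (1 - q).

    For q -> 1 this converges to the Shannon entropy.
    """
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))          # Shannon limit
    return np.log(np.sum(p ** q)) / (1.0 - q)

# Illustrative heavy-tailed daily log-returns: tails are emphasised for q < 1
rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=5000) * 0.01
for q in (0.5, 1.0, 1.5):
    print(q, renyi_entropy(returns, q))
```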

2.
Oversampling is the most popular data preprocessing technique; it makes traditional classifiers applicable to learning from imbalanced data. Through an overall review of oversampling techniques (oversamplers), we find that some of them can be regarded as danger-information-based oversamplers (DIBOs), which create samples near danger areas so that these positive examples can be correctly classified, while others are safe-information-based oversamplers (SIBOs), which create samples near safe areas to increase the correct rate of predicted positive values. However, DIBOs cause misclassification of too many negative examples in the overlapped areas, and SIBOs cause incorrect classification of too many borderline positive examples. Based on their respective advantages and disadvantages, a boundary-information-based oversampler (BIBO) is proposed. First, a concept of boundary information that considers safe information and danger information at the same time is introduced, so that the created samples lie near decision boundaries. The experimental results show that DIBOs and BIBO perform better than SIBOs on the basic metrics of recall and negative-class precision; SIBOs and BIBO perform better than DIBOs on the basic metrics of specificity and positive-class precision; and BIBO is better than both DIBOs and SIBOs in terms of integrated metrics.
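As background on how oversamplers create synthetic positives, here is a minimal SMOTE-style interpolation sketch (Python). It is not the proposed BIBO: the function name, the neighbour count k, and the data are illustrative assumptions. Whether the synthetic points land in "danger", "safe", or boundary regions depends on which neighbours are allowed, which is exactly the design choice discussed above.

```python
import numpy as np

def interpolation_oversample(X_min, n_new, k=5, seed=0):
    """SMOTE-style oversampling: create n_new synthetic minority samples by
    interpolating each chosen sample with one of its k nearest minority-class
    neighbours (Euclidean distance)."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]

    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)                      # seed sample
        j = neighbours[i, rng.integers(k)]       # one of its nearest neighbours
        lam = rng.random()                       # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

X_minority = np.random.default_rng(1).normal(size=(20, 2))
print(interpolation_oversample(X_minority, n_new=10).shape)   # (10, 2)
```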

3.
Complexity measures are used in a number of applications, including extraction of information from data such as ecological time series, detection of non-random structure in biomedical signals, testing of random number generators, language recognition, and authorship attribution. Different complexity measures proposed in the literature, such as Shannon entropy, relative entropy, Lempel-Ziv, Kolmogorov and algorithmic complexity, are mostly ineffective in analyzing short sequences that are further corrupted with noise. To address this problem, we propose a new complexity measure, ETC, defined as the “Effort To Compress” the input sequence by a lossless compression algorithm. Here, we employ the lossless compression algorithm known as Non-Sequential Recursive Pair Substitution (NSRPS) and define ETC as the number of iterations needed for NSRPS to transform the input sequence into a constant sequence. We demonstrate the utility of ETC in two applications. ETC is shown to have better correlation with the Lyapunov exponent than Shannon entropy, even for relatively short and noisy time series. The measure also has a greater rate of success in automatic identification and classification of short noisy sequences, compared to entropy and a popular measure based on Lempel-Ziv compression (implemented by Gzip).
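The ETC measure is concrete enough to sketch directly. Below is a minimal Python implementation of NSRPS-based ETC as described above: at each iteration the most frequent pair of adjacent symbols is replaced by a new symbol, and ETC is the number of iterations until the sequence becomes constant. Details such as tie-breaking and the integer alphabet are illustrative assumptions.

```python
from collections import Counter

def etc(seq):
    """Effort To Compress: number of NSRPS iterations needed to reduce the
    sequence to a constant (or single-symbol) sequence."""
    seq = list(seq)
    iterations = 0
    while len(seq) > 1 and len(set(seq)) > 1:
        # most frequent pair of adjacent symbols (ties broken arbitrarily)
        pairs = Counter(zip(seq, seq[1:]))
        target = pairs.most_common(1)[0][0]
        new_symbol = max(seq) + 1              # assumes an integer alphabet
        out, i = [], 0
        while i < len(seq):
            # replace non-overlapping occurrences, scanning left to right
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == target:
                out.append(new_symbol)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        iterations += 1
    return iterations

print(etc([0, 1, 1, 0, 1, 0, 1, 1]))   # small binary example
```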

4.
Recent evidence suggests that spectral change, as measured by cochlea-scaled entropy (CSE), predicts speech intelligibility better than the information carried by vowels or consonants in sentences. Motivated by this finding, the present study investigates whether intelligibility indices implemented to include segments marked with significant spectral change better predict speech intelligibility in noise than measures that include all phonetic segments, paying no attention to vowels/consonants or spectral change. The prediction of two intelligibility measures [the normalized covariance measure (NCM) and the coherence-based speech intelligibility index (CSII)] is investigated using three sentence-segmentation methods: relative root-mean-square (RMS) levels, CSE, and traditional phonetic segmentation into obstruents and sonorants. While the CSE method makes no distinction between spectral changes occurring within vowels or consonants, the RMS-level segmentation method places more emphasis on the vowel-consonant boundaries, where the spectral change is often most prominent, and perhaps most robust, in the presence of noise. Higher correlation with intelligibility scores was obtained when including sentence segments containing a large number of consonant-vowel boundaries than when including segments with the highest entropy or segments based on obstruent/sonorant classification. These data suggest that, in the context of intelligibility measures, the type of spectral change captured by the measure is important.
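A hedged sketch of the relative-RMS segmentation idea (Python): the utterance is divided into short frames, each frame's RMS level is expressed in dB relative to the whole-utterance RMS, and frames are then grouped into high-, mid-, and low-level regions. The frame length, the dB cut-offs, and the toy signal are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def relative_rms_segments(x, fs, frame_ms=16, high_db=0.0, low_db=-10.0):
    """Label each frame as 'high', 'mid', or 'low' by its RMS level in dB
    relative to the RMS of the whole utterance."""
    frame_len = int(fs * frame_ms / 1000)
    n_frames = len(x) // frame_len
    frames = x[:n_frames * frame_len].reshape(n_frames, frame_len)
    overall_rms = np.sqrt(np.mean(x ** 2))
    rel_db = 20 * np.log10(np.sqrt(np.mean(frames ** 2, axis=1)) / overall_rms + 1e-12)
    labels = np.where(rel_db >= high_db, 'high',
                      np.where(rel_db >= low_db, 'mid', 'low'))
    return rel_db, labels

fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 200 * t) * np.abs(np.sin(2 * np.pi * 3 * t))
print(relative_rms_segments(speech_like, fs)[1][:10])
```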

5.
Near-infrared (NIR) spectroscopy is widely used to measure the physicochemical parameters of hydrogen-containing organic substances; it provides rich structural and compositional information and is extensively applied in the quantitative spectral analysis of complex solutions. However, in the NIR spectral analysis of complex solutions such as human blood, noise caused by strong background information and the presence of redundant variables seriously affect spectral measurement and analysis, degrading both the efficiency and the accuracy of the analysis. How to remove background noise and other interference so as to improve analytical accuracy has therefore attracted considerable attention, and in recent years researchers in China and abroad have proposed many approaches based on chemometric methods. Starting from traditional chemometric methods, this paper summarizes and analyzes the application and characteristics of these methods in the quantitative NIR spectral analysis of complex solutions such as human blood from the three aspects of spectral preprocessing, variable selection, and modeling analysis, providing a reference for research on improving the accuracy of quantitative spectral analysis.
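As an illustration of the preprocessing, variable selection, and modeling steps mentioned above, here is a minimal chemometric pipeline in Python on hypothetical data. The Savitzky-Golay derivative, the crude correlation-based variable selection, the number of PLS components, and the use of scipy/scikit-learn are illustrative choices, not a specific method from the reviewed literature.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Hypothetical NIR data: 100 samples x 500 wavelength variables,
# y is the concentration of the analyte of interest.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500)).cumsum(axis=1)      # smooth, spectrum-like curves
y = X[:, 200] * 0.5 + rng.normal(scale=0.1, size=100)

# 1) Preprocessing: Savitzky-Golay first derivative to suppress baseline drift
X_prep = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)

# 2) Variable selection (crudely illustrated): keep the variables most
#    correlated with the reference values
corr = np.abs([np.corrcoef(X_prep[:, j], y)[0, 1] for j in range(X_prep.shape[1])])
selected = np.argsort(corr)[-50:]

# 3) Modeling: partial least squares regression with cross-validation
pls = PLSRegression(n_components=5)
scores = cross_val_score(pls, X_prep[:, selected], y, cv=5, scoring='r2')
print("cross-validated R^2:", scores.mean())
```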

6.
The best-known and most widely used abstract model of the financial market is based on the concept of the informational efficiency of that market (the efficient market hypothesis, EMH). The paper proposes an alternative, which could be named the behavioural efficiency of the financial market, based on behavioural entropy instead of informational entropy. More specifically, the paper supports the idea that, in the financial market, the only measure (if any) of entropy is the set of available behaviours indicated by the implicit information. Therefore, behavioural entropy is linked to the concept of behavioural efficiency. The paper argues that, in fact, in financial markets there is no (real) informational efficiency, but rather a behavioural efficiency. The proposal is based both on a new typology of information in the financial market (which provides the concept of implicit information, that is, information "translated" by the economic agents from observing actual behaviours) and on a non-linear (more exactly, logistic) curve linking behavioural entropy to the behavioural efficiency of financial markets. Finally, the paper proposes a synergic overcoming of both EMH and AMH (the adaptive market hypothesis) based on the new concept of behavioural entropy in the financial market.

7.
The purpose of this paper is to propose a new entropy for Pythagorean fuzzy sets, extending the entropy defined for intuitionistic fuzzy sets. The Pythagorean fuzzy set generalizes the intuitionistic fuzzy set, with the additional advantage of being well equipped to overcome the latter's imperfections. Its entropy determines the quantity of information in the Pythagorean fuzzy set. Thus, the proposed entropy provides a new flexible tool that is particularly useful in complex multi-criteria problems where uncertain data and inaccurate information are considered. The performance of the introduced method is illustrated in a real-life case study, a multi-criteria company selection problem. In this example, we provide a numerical illustration to distinguish the proposed entropy measure from some existing entropies used for Pythagorean fuzzy sets and intuitionistic fuzzy sets. Statistical illustrations show that the proposed entropy measures are reliable for demonstrating the degree of fuzziness of both Pythagorean fuzzy sets (PFSs) and intuitionistic fuzzy sets (IFSs). In addition, the multi-criteria decision-making method COPRAS (complex proportional assessment) is applied with weights calculated based on the proposed new entropy measure. Finally, to validate the reliability of the results obtained using the proposed entropy, a comparative analysis was performed with a set of carefully selected reference methods containing other generally used entropy measurement methods. The illustrated numerical example shows that the results of the proposed new method are similar to those of several other up-to-date methods.
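The paper's specific Pythagorean fuzzy entropy is not reproduced here, but the way entropy-based weights feed a COPRAS-style ranking can be sketched with the classical Shannon entropy weighting method on crisp data (Python). The crisp decision matrix, the benefit/cost split, and the weighting scheme are illustrative assumptions standing in for the proposed fuzzy entropy.

```python
import numpy as np

def entropy_weights(X):
    """Classical Shannon-entropy criteria weights for a crisp decision matrix
    X (alternatives x criteria): more discriminating criteria get larger weights."""
    P = X / X.sum(axis=0)
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(X))
    d = 1.0 - E                                   # degree of divergence per criterion
    return d / d.sum()

def copras(X, weights, benefit):
    """COPRAS relative significance Q for each alternative.
    `benefit` is a boolean mask: True for benefit criteria, False for cost criteria."""
    R = weights * X / X.sum(axis=0)               # weighted normalised matrix
    S_plus = R[:, benefit].sum(axis=1)            # sums over benefit criteria
    S_minus = R[:, ~benefit].sum(axis=1)          # sums over cost criteria
    return S_plus + S_minus.sum() / (S_minus * (1.0 / S_minus).sum())

# Hypothetical company-selection data: 4 alternatives, 3 criteria
X = np.array([[250., 16., 12.],
              [200., 16., 8.],
              [300., 32., 16.],
              [275., 32., 8.]])
benefit = np.array([False, True, True])           # the first criterion is a cost
w = entropy_weights(X)
print("weights:", w)
print("ranking (best first):", np.argsort(-copras(X, w, benefit)))
```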

8.
In the machine learning literature, numerous methods exist to solve classification problems. We propose two new performance measures to analyze such methods. These measures are defined using the concept of proportional reduction of classification error with respect to three benchmark classifiers: the random classifier and two intuitive classifiers based on how a non-expert could perform classification simply by applying a frequentist approach. We show that these three simple methods are closely related to different aspects of the entropy of the dataset. Therefore, the measures account, to some extent, for the entropy of the dataset when evaluating the performance of classifiers. This allows us to measure the improvement in the classification results compared with simple methods and, at the same time, how entropy affects classification capacity. To illustrate how these new performance measures can be used to analyze classifiers while taking the entropy of the dataset into account, we carry out an intensive experiment using the well-known J48 algorithm and a UCI repository dataset on which we previously selected a subset of the most relevant attributes. We then carry out an extensive experiment considering four heuristic classifiers and 11 datasets.
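A minimal sketch of the "proportional reduction of classification error" idea against simple benchmarks (Python). The two frequentist benchmarks used here, a majority-class rule and a class-prior random guesser, are illustrative stand-ins for the paper's intuitive classifiers; their error rates depend only on the class distribution, and hence on the entropy of the dataset.

```python
import numpy as np

def proportional_error_reduction(y_true, y_pred):
    """PRE = (E_benchmark - E_model) / E_benchmark against two simple benchmarks
    whose errors depend only on the empirical class distribution."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    p = np.bincount(y_true) / len(y_true)          # empirical class priors

    e_model = np.mean(y_true != y_pred)
    e_majority = 1.0 - p.max()                      # always predict the modal class
    e_random = 1.0 - np.sum(p ** 2)                 # guess classes with prior probabilities

    return {"vs_majority": (e_majority - e_model) / e_majority,
            "vs_random": (e_random - e_model) / e_random}

y_true = np.array([0, 0, 0, 0, 1, 1, 2, 2, 2, 2])
y_pred = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 0])
print(proportional_error_reduction(y_true, y_pred))
```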

9.
It is well known that there may be significant individual differences in the physiological signal patterns of emotional responses. Emotion recognition based on electroencephalogram (EEG) signals therefore remains a challenging task when developing a subject-independent recognition method. In this paper, from the perspective of the spatial topology and temporal information of brain emotional patterns in the EEG, we exploit complex networks to characterize EEG signals and effectively extract information for emotion recognition. First, we use visibility graphs to construct complex networks from the EEG signals. Then, two kinds of network entropy measures (nodal degree entropy and clustering coefficient entropy) are calculated. Features selected with the AUC method are fed into an SVM classifier to perform emotion recognition across subjects. The experimental results showed that, out of 62 EEG channels, the features of the 18 channels selected by AUC were significant (p < 0.005). For the classification of positive and negative emotions, the average recognition rate was 87.26%; for the classification of positive, negative, and neutral emotions, the average recognition rate was 68.44%. Our method improves mean accuracy by an average of 2.28% compared with other existing methods. These results demonstrate that more accurate recognition of emotional EEG signals can be achieved relative to the available related studies, indicating that our method generalizes better in practical use.
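A hedged sketch of the feature-extraction step (Python): a natural visibility graph is built from one channel's EEG epoch and the Shannon entropy of its degree distribution is computed. The naive O(n²)-pair construction, the binning of degrees, and the toy data are illustrative; the clustering-coefficient entropy, AUC selection, and SVM stages are omitted.

```python
import numpy as np
from collections import Counter

def visibility_degrees(y):
    """Node degrees of the natural visibility graph of series y
    (Lacasa et al. visibility criterion, naive pairwise construction)."""
    n = len(y)
    deg = np.zeros(n, dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            ks = np.arange(a + 1, b)
            # a and b are mutually visible if every intermediate point lies
            # below the straight line connecting (a, y[a]) and (b, y[b])
            if np.all(y[ks] < y[b] + (y[a] - y[b]) * (b - ks) / (b - a)):
                deg[a] += 1
                deg[b] += 1
    return deg

def degree_entropy(y):
    """Shannon entropy of the degree distribution of the visibility graph."""
    deg = visibility_degrees(y)
    counts = np.array(list(Counter(deg).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
eeg_segment = rng.normal(size=200)        # stand-in for one channel's epoch
print(degree_entropy(eeg_segment))
```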

10.
We investigate high-frequency price dynamics in the foreign exchange market using data from the Reuters information system (the dataset was provided to us by Olsen and Associates). In our analysis we show that a naïve approach to the definition of price (for example, using the spot mid price) may lead to wrong conclusions about price behavior, such as the apparent presence of short-term correlations in returns. For this purpose we introduce an algorithm that uses only the no-arbitrage principle to estimate real prices from the spot ones. The new definition leads to returns that are not affected by spurious correlations. Furthermore, any apparent information (defined using Shannon entropy) contained in the data disappears. Received: 12 June 2003, Published online: 9 September 2003. PACS: 89.65.Gh Economics; econophysics, financial markets, business and management; 65.40.Gr Entropy and other thermodynamical quantities

11.
12.
H. Ebadi, G.R. Jafari. Physica A, 2010, 389(23): 5439–5446
Inverse statistics analysis studies the distribution of investment horizons needed to achieve a predefined level of return. The maximum of this distribution determines the most likely horizon for gaining a specific return. There is a significant difference between the inverse statistics of financial market data and those of a fractional Brownian motion (fBm) as an uncorrelated time series, and this difference is a suitable criterion for measuring the information content of financial data. In this paper we perform this analysis for the DJIA and S&P500, as two developed markets, and the Tehran Price Index (TEPIX), as an emerging market. We also compare these probability distributions with the fBm distribution to detect when the behavior of the stocks is the same as that of fBm.
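A minimal sketch of the inverse-statistics computation (Python): for every starting day, the waiting time until the log-price has gained a predefined return level rho is recorded, and the empirical distribution of these first-passage times is the one whose maximum defines the most likely investment horizon. The return level rho and the simulated price path are illustrative assumptions.

```python
import numpy as np

def investment_horizons(log_price, rho):
    """First-passage times: for each start t, the smallest tau such that
    log_price[t + tau] - log_price[t] >= rho (gain level rho)."""
    horizons = []
    for t in range(len(log_price) - 1):
        gains = log_price[t + 1:] - log_price[t]
        hit = np.argmax(gains >= rho)            # index of the first level crossing
        if gains[hit] >= rho:                    # argmax returns 0 even when never hit
            horizons.append(hit + 1)
    return np.array(horizons)

# Illustrative geometric-Brownian-like index path (daily log-returns)
rng = np.random.default_rng(0)
log_price = np.cumsum(rng.normal(0.0003, 0.01, size=5000))
tau = investment_horizons(log_price, rho=0.05)
values, counts = np.unique(tau, return_counts=True)
print("most likely horizon (days):", values[np.argmax(counts)])
```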

13.
Bipolar Disorder (BD) is an illness with high prevalence and a huge social and economic impact. It is recurrent, with a long-term evolution in most cases. Early treatment and continuous monitoring have proven very effective in mitigating its causes and consequences. However, no tools are currently available for massive, semi-automatic monitoring and control of BD patients. Taking advantage of recent technological developments in the field of wearables, this paper studies the feasibility of classifying BD episodes using entropy measures, an approach successfully applied in a myriad of other physiological frameworks. This is a very difficult task, since actigraphy records are highly non-stationary and corrupted with artifacts (periods of no activity). The method devised uses a preprocessing stage to extract epochs of activity and then applies Slope Entropy, a recently proposed quantification measure that outperforms the most common entropy measures used in biomedical time series. The results confirm the feasibility of the proposed approach, since the three states involved in BD (depression, mania, and remission) can be significantly distinguished.
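For reference, a hedged sketch of Slope Entropy as it is commonly defined (Python): the consecutive differences inside each embedded window are mapped to five slope symbols using two thresholds, and the Shannon entropy of the observed symbol-pattern frequencies is returned. The embedding length, thresholds, normalisation, and the stand-in actigraphy data are illustrative assumptions, not the configuration used in the study.

```python
import numpy as np
from collections import Counter

def slope_entropy(x, m=4, gamma=1.0, delta=0.001):
    """Slope Entropy (SlopEn)-style measure: each difference within a window of
    length m is mapped to one of five slope symbols (using thresholds gamma and
    delta), and the Shannon entropy of the pattern distribution is computed."""
    x = np.asarray(x, dtype=float)
    patterns = Counter()
    for i in range(len(x) - m + 1):
        diffs = np.diff(x[i:i + m])
        symbols = tuple(
            2 if d > gamma else
            1 if d > delta else
            0 if d >= -delta else
            -1 if d >= -gamma else
            -2
            for d in diffs)
        patterns[symbols] += 1
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
activity = np.abs(rng.normal(size=1440))       # stand-in for one day of actigraphy
print(slope_entropy(activity, m=4, gamma=0.5, delta=0.01))
```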

14.
The aim of this study is to investigate market depth as a dimension of stock market liquidity. A new methodology for measuring market depth, based directly on Shannon information entropy for high-frequency data, is introduced and applied. The proposed entropy-based market depth indicator is supported by an algorithm that infers the initiator of a trade. The new indicator appears to be a promising liquidity measure: both market entropy and market liquidity can be measured directly with it. The findings of empirical experiments on real data from the Warsaw Stock Exchange (WSE), time-stamped to the nearest second, confirm that the new proxy enables us to effectively compare market depth and liquidity across different equities. Robustness tests and statistical analyses are conducted, and an intra-day seasonality assessment is provided. The results indicate that the entropy-based approach can be considered a promising market depth and liquidity proxy, with an intuitive basis for both theoretical and empirical analyses in financial markets.

15.
Gerard Briscoe, Philippe De Wilde. Physica A, 2011, 390(21-22): 3732–3741
A measure called physical complexity is established and calculated for a population of sequences, based on statistical physics, automata theory, and information theory. It measures the quantity of information in an organism's genome. It is based on Shannon's entropy, measuring the information in a population evolved in its environment by using entropy to estimate the randomness of the genome. It is calculated as the difference between the maximal entropy of the population and the actual entropy of the population in its environment, estimated by counting the number of fixed loci in the sequences of the population. Until now, physical complexity has only been formulated for populations of sequences of the same length. Here, we investigate an extension that supports variable-length populations. We then build upon this to construct a measure for the efficiency of information storage, which we later use in understanding clustering within populations. Finally, we investigate our extended physical complexity through simulations, showing it to be consistent with the original.
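A minimal sketch of the fixed-length physical complexity described above (Python): per-locus Shannon entropies are estimated from column frequencies across the population, and the complexity is the maximal population entropy minus its actual (summed per-site) entropy, so fixed loci contribute the most. The four-letter alphabet and the toy population are illustrative; the variable-length extension proposed in the paper is not reproduced.

```python
import numpy as np
from collections import Counter

def physical_complexity(population, alphabet="ACGT"):
    """Fixed-length physical complexity: sum over loci of
    (maximal per-site entropy - observed per-site entropy), in bits."""
    length = len(population[0])
    h_max = np.log2(len(alphabet))
    complexity = 0.0
    for locus in range(length):
        counts = Counter(seq[locus] for seq in population)
        p = np.array(list(counts.values()), dtype=float) / len(population)
        h_locus = -np.sum(p * np.log2(p))
        complexity += h_max - h_locus          # fixed loci contribute ~h_max each
    return complexity

# Toy population: the first three loci are fixed, the remaining seven are random
rng = np.random.default_rng(0)
pop = ["AAA" + "".join(rng.choice(list("ACGT"), size=7)) for _ in range(50)]
print(physical_complexity(pop))
```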

16.
We investigate a stationary process's crypticity (a measure of the difference between its hidden-state information and its observed information) using the causal states of computational mechanics. Here, we motivate crypticity and cryptic order as physically meaningful quantities that monitor how hidden a hidden process is. This is done by recasting previous results on the convergence of block entropy and block-state entropy in a geometric setting, one that is more intuitive and that leads to a number of new results. For example, we connect crypticity to how an observer synchronizes to a process. We show that the block-causal-state entropy is a convex function of block length. We give a complete analysis of spin chains. We present a classification scheme that surveys stationary processes in terms of their possible cryptic and Markov orders. We illustrate the related entropy-convergence behaviors using a new form of foliated information diagram. Finally, along the way, we provide a variety of interpretations of crypticity and cryptic order to establish their naturalness and pervasiveness. This is also a first step in developing applications to spatially extended and network dynamical systems.

17.
The existence of memory in financial time series has been extensively studied for several stock markets around the world by means of different approaches. However, fixed income markets, i.e. those where corporate and sovereign bonds are traded, have been much less studied. We believe that, given the relevance of these markets not only from the investors' but also from the issuers' (governments and firms) point of view, it is necessary to fill this gap in the literature. In this paper, we study the efficiency of sovereign bond markets using thirty bond indices of both developed and emerging countries and a statistical tool that is innovative in the financial literature: the complexity-entropy causality plane. This representation space allows us to establish an efficiency ranking of different markets and to distinguish different bond market dynamics. We conclude that the classification derived from the complexity-entropy causality plane is consistent with the credit ratings assigned to the sovereign instruments by major rating agencies. Additionally, we find a correlation between permutation entropy, economic development, and market size that could be of interest to policy makers and investors.
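A hedged sketch of the permutation-entropy coordinate of the complexity-entropy causality plane (Python): ordinal patterns of a given embedding dimension are counted and their normalised Shannon entropy is returned, with values near 1 indicating an efficient, random-like series. The Jensen-Shannon statistical-complexity coordinate and the bond-index data are not reproduced; the embedding dimension and delay below are illustrative.

```python
import numpy as np
from collections import Counter
from math import factorial, log

def permutation_entropy(x, d=4, tau=1):
    """Normalised permutation entropy: Shannon entropy of ordinal patterns of
    embedding dimension d and delay tau, divided by log(d!)."""
    x = np.asarray(x)
    patterns = Counter()
    for i in range(len(x) - (d - 1) * tau):
        window = x[i:i + d * tau:tau]
        patterns[tuple(np.argsort(window))] += 1   # ordinal pattern of the window
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return float(-np.sum(p * np.log(p)) / log(factorial(d)))

rng = np.random.default_rng(0)
print(permutation_entropy(rng.normal(size=2000)))          # close to 1 for white noise
print(permutation_entropy(np.sin(np.arange(2000) / 5)))    # much lower for a regular series
```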

18.
Spectrum sensing is an important function in radio frequency spectrum management and cognitive radio networks. It is used by one wireless system (e.g., a secondary user) to detect the presence of a wireless service with higher priority (e.g., a primary user) with which it has to coexist in the radio frequency spectrum. If the higher-priority signal is detected, the secondary user system releases the given frequency so as not to interfere. This paper proposes a machine learning implementation of spectrum sensing that uses an entropy measure as the feature vector. In the training phase, information about the activity of the higher-priority wireless service is gathered and a model is formed. In the classification phase, the wireless system compares the current sensing report to the created model to calculate the posterior probability and classify the sensing report as either the presence or the absence of the higher-priority wireless service. The paper proposes the novel application of the recently introduced Fluctuation Dispersion Entropy (FDE) measure as the feature vector used to build the model and implement the classification. An improved implementation of FDE (IFDE) is used to enhance robustness to noise, and IFDE is further enhanced with an adaptive method (AIFDE) that automatically selects the hyper-parameter introduced in IFDE. The paper thus combines the machine learning approach with the entropy measure approach, both of which are recent developments in spectrum sensing research. The approach is compared to similar approaches in the literature and to the classical energy detection method using a generated radar signal data set under different SNR (dB) and fading conditions. The results show that the proposed approach consistently outperforms the approaches from the literature based on other entropy measures, as well as the energy detector (ED), across different SNR levels and fading conditions.
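A hedged sketch of a fluctuation dispersion entropy feature (Python), following the usual dispersion-entropy recipe: samples are mapped to c classes through the normal cumulative distribution function, embedded, differenced to obtain fluctuation patterns, and the normalised Shannon entropy of the pattern frequencies is returned. The improved and adaptive variants (IFDE, AIFDE) proposed in the paper are not reproduced; parameters and the toy signals are illustrative assumptions.

```python
import numpy as np
from collections import Counter
from scipy.stats import norm

def fluctuation_dispersion_entropy(x, m=3, c=6, tau=1):
    """Fluctuation dispersion entropy: classes via the normal CDF, embedding
    vectors of length m, adjacent-difference (fluctuation) patterns, and
    normalised Shannon entropy over the (2c-1)^(m-1) possible patterns."""
    x = np.asarray(x, dtype=float)
    # 1) map samples to classes 1..c through the normal CDF
    y = norm.cdf((x - x.mean()) / (x.std() + 1e-12))
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)
    # 2) embed and take adjacent differences -> fluctuation patterns
    patterns = Counter()
    for i in range(len(z) - (m - 1) * tau):
        emb = z[i:i + m * tau:tau]
        patterns[tuple(np.diff(emb))] += 1
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return float(-np.sum(p * np.log(p)) / np.log((2 * c - 1) ** (m - 1)))

rng = np.random.default_rng(0)
noise_only = rng.normal(size=4000)                        # channel with no primary signal
with_signal = noise_only + 0.8 * np.sin(np.arange(4000) / 3.0)
print(fluctuation_dispersion_entropy(noise_only),
      fluctuation_dispersion_entropy(with_signal))
```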

19.
Measuring the predictability and complexity of time series using entropy is an essential tool for designing and controlling nonlinear systems. However, existing methods have drawbacks related to the strong dependence of the entropy on the methods' parameters. To overcome these difficulties, this study proposes a new method for estimating the entropy of a time series using the LogNNet neural network model. The LogNNet reservoir matrix is filled with time series elements according to our algorithm. The classification accuracy on images from the MNIST-10 database is taken as the entropy measure, denoted NNetEn. The novelty of this entropy calculation is that the time series is involved in mixing the input information in the reservoir. Greater complexity in the time series leads to higher classification accuracy and higher NNetEn values. We also introduce a new time series characteristic, called time series learning inertia, that determines the learning rate of the neural network. The robustness and efficiency of the method are verified on chaotic, periodic, random, binary, and constant time series. A comparison of NNetEn with other entropy estimation methods demonstrates that our method is more robust and accurate and can be widely used in practice.

20.
Heart sound signals reflect valuable information about heart condition. Previous studies have suggested that the information contained in single-channel heart sound signals can be used to detect coronary artery disease (CAD), but the accuracy achieved with a single channel is not satisfactory. This paper proposes a method based on multi-domain feature fusion of multi-channel heart sound signals, in which entropy features and cross-entropy features are also included. A total of 36 subjects were enrolled in the data collection, including 21 CAD patients and 15 non-CAD subjects. For each subject, five-channel heart sound signals were recorded synchronously for 5 min. After data segmentation and quality evaluation, 553 samples remained in the CAD group and 438 samples in the non-CAD group. Time-domain, frequency-domain, entropy, and cross-entropy features were extracted, and after feature selection the optimal feature set was fed into a support vector machine for classification. The results showed that, moving from single-channel to multi-channel signals, the classification accuracy increased from 78.75% to 86.70%; after adding the entropy and cross-entropy features, it further increased to 90.92%. The study indicates that multi-domain feature fusion of multi-channel heart sound signals provides more information for CAD detection, and that entropy and cross-entropy features play an important role in it.
