Similar Articles
19 similar articles were retrieved.
1.
The 3-error linear complexity of 2^n-periodic binary sequences with linear complexity 2^n
Linear complexity and the k-error linear complexity are important measures of the cryptographic strength of key-stream sequences. By studying the linear complexity of binary sequences with period 2^n, the computation of the k-error linear complexity is reformulated as the search for an error sequence of minimum Hamming weight. Based on the Games-Chan algorithm, the distribution of the 3-error linear complexity of 2^n-periodic binary sequences with linear complexity 2^n is analyzed, and complete counting formulas are given for the sequences with a given k-error linear complexity for k = 3, 4. For general 2^n-periodic binary sequences with linear complexity 2^n - m, the same approach yields counting formulas for the corresponding k-error linear complexity.
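As a rough companion to the abstract above, the sketch below implements the standard Games-Chan recursion for the linear complexity of a 2^n-periodic binary sequence, together with a brute-force k-error linear complexity that minimizes over error patterns of Hamming weight at most k. It is a minimal illustration of the quantities involved, not the paper's counting method; the example sequence is an arbitrary assumption.

```python
from itertools import combinations

def games_chan(s):
    """Linear complexity of one period of a 2^n-periodic binary sequence (Games-Chan)."""
    s = list(s)
    c = 0
    while len(s) > 1:
        half = len(s) // 2
        left, right = s[:half], s[half:]
        if left == right:
            s = left                                  # complexity unchanged, recurse on one half
        else:
            c += half                                 # complexity gains 2^(n-1)
            s = [a ^ b for a, b in zip(left, right)]  # recurse on left XOR right
    return c + s[0]                                   # the last bit contributes 0 or 1

def k_error_linear_complexity(s, k):
    """Brute-force k-error linear complexity: minimize over all error patterns
    of Hamming weight <= k (only feasible for short periods)."""
    best = games_chan(s)
    for w in range(1, k + 1):
        for pos in combinations(range(len(s)), w):
            e = list(s)
            for p in pos:
                e[p] ^= 1
            best = min(best, games_chan(e))
    return best

if __name__ == "__main__":
    seq = [1, 0, 1, 1, 0, 1, 0, 0]    # period 2^3 = 8, an illustrative example
    print(games_chan(seq), k_error_linear_complexity(seq, 3))
```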

2.
Recently, He Bingsheng et al. proposed a prediction-correction algorithm for large-scale monotone variational inequalities; however, that method requires a projection operation for every trial point, which makes it computationally expensive. To overcome this drawback, we propose a new prediction-correction algorithm for general large-scale g-monotone variational inequalities. The method uses a highly effective rule for choosing the prediction step size, and each step-size selection requires only one projection, which greatly reduces the computational cost. Numerical experiments show that our algorithm is more efficient than the projection-type methods in the recent literature.
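The paper's prediction step-size rule is not given in the abstract, so the sketch below only shows a generic prediction-correction (extragradient) projection method for a monotone variational inequality over a box, as a baseline for comparison; the operator, box bounds, and step size are illustrative assumptions.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def extragradient(F, x0, lo=0.0, hi=10.0, gamma=0.1, tol=1e-8, max_iter=10_000):
    """Generic extragradient (prediction-correction) method for VI(F, C), C a box.
    This is a baseline sketch, not the step-size rule proposed in the paper."""
    x = x0.astype(float)
    for _ in range(max_iter):
        x_pred = project_box(x - gamma * F(x), lo, hi)        # prediction step
        x_next = project_box(x - gamma * F(x_pred), lo, hi)   # correction step
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

if __name__ == "__main__":
    # Illustrative affine monotone operator F(x) = M x + q, M positive semidefinite.
    M = np.array([[2.0, 1.0], [1.0, 2.0]])
    q = np.array([-1.0, -3.0])
    print(extragradient(lambda x: M @ x + q, np.zeros(2)))
```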

3.
A study of information entropy as a measure of risk
After analyzing the nature of risk, this paper argues that risk is a particular agent's perception of the uncertainty of both losses and returns in a financial investment. Among the many approaches to measuring risk, the entropy-function method has unique advantages, so the paper focuses on the rationale for using the entropy function as a risk measure. A new risk measurement model is also proposed; its main mathematical properties are analyzed, and it is shown that the model can effectively measure financial risk for different agents while remaining computationally light and easy to apply.
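A minimal sketch of the general idea, not the paper's specific model: estimate the empirical return distribution and use its Shannon entropy as a dispersion-style risk indicator, so that a more uncertain return distribution scores as riskier. The bin grid and simulated returns are assumptions for illustration.

```python
import numpy as np

def entropy_risk(returns, bin_edges):
    """Shannon entropy (in nats) of the empirical return distribution over fixed bins.
    Higher entropy means more spread-out, less predictable outcomes, i.e. higher 'risk'."""
    hist, _ = np.histogram(returns, bins=bin_edges)
    p = hist / hist.sum()
    p = p[p > 0]                          # convention: 0 * log 0 = 0
    return float(-np.sum(p * np.log(p)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    edges = np.linspace(-0.2, 0.2, 41)    # common bins so the two entropies are comparable
    calm = rng.normal(0.0, 0.01, 1000)    # low-volatility daily returns
    wild = rng.normal(0.0, 0.05, 1000)    # high-volatility daily returns
    print(entropy_risk(calm, edges), entropy_risk(wild, edges))
```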

4.
Discovering the centroid of a triangle. Liu Jianguo, Huogezhuang Middle School, Baodi County, Tianjin 301800. Main question (main prompt): draw a figure, look at it, measure it; have you discovered anything? In this way, for ×××× (the topic), what conjecture can we propose? To what problem can it be reduced? To prove this conclusion, what is it enough to prove? Pattern: observation and experiment, then conjecture…

5.
Drawing on the psychological finding that human shape perception favors uniformity and symmetry, this paper proposes an extremization method for recovering the orientation of the visible surfaces of a three-dimensional object from its contour. The existence and uniqueness of the solution are discussed, and a solution algorithm is given. The method correctly interprets both regularly and irregularly shaped objects, so it has a wide range of applicability, and it also has the advantage of low computational cost. Experimental results show that three-dimensional shape recovered with this method is satisfactory.

6.
The arc-length gauge is an instrument for measuring the length of circular arcs. For a long time, arc length has been computed from the arc-length formula; the appearance of the arc-length gauge changed this traditional approach, allowing an arc to be measured directly, just as a straight line is measured with a ruler. Exploring the geometric principle behind the arc-length gauge has therefore become a question of common interest.

7.
Lin Yanfang, Bao Lingxin. Acta Mathematica Sinica (Chinese Series), 2020, 63(5): 523-530
This paper studies statistical convergence in TVS-cone metric spaces and the statistical completeness of such spaces. Let (X, E, P, d) denote a TVS-cone metric space. Using the Minkowski functional ρ defined on the ordered Hausdorff topological vector space E, it is shown that there exists an ordinary metric d_ρ on X such that a sequence (x_n) in X statistically converges to x ∈ X with respect to the cone metric d if and only if (x_n) statistically converges to x with respect to the metric d_ρ. On this basis, it is proved that every TVS-cone statistically Cauchy sequence is almost everywhere TVS-cone Cauchy, and that every TVS-cone statistically convergent sequence is almost everywhere TVS-cone convergent. Consequently, a TVS-cone metric space (X, d) is d-complete if and only if it is d-statistically complete. From these results, many properties of statistical convergence in ordinary metric spaces carry over in parallel to statistical convergence in cone metric spaces.
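For orientation, the density-based definition of statistical convergence that the equivalence above transfers between the cone metric d and the ordinary metric d_ρ is the usual one; the display below states it for d_ρ (the standard definition, not a restatement of the paper's proofs).

```latex
% (x_n) statistically converges to x with respect to d_rho if, for every eps > 0,
% the set of indices where x_k stays eps-far from x has natural density zero:
\[
  \forall \varepsilon > 0:\qquad
  \lim_{n \to \infty} \frac{1}{n}\,
  \bigl|\{\, k \le n : d_\rho(x_k, x) \ge \varepsilon \,\}\bigr| = 0 .
\]
```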

8.
Application of the CVaR risk measurement model to portfolio selection
Value at Risk (VaR) is a risk measure that financial institutions have used widely in recent years. Conditional Value at Risk (CVaR), also called mean excess loss or tail VaR, is a refinement of VaR with better properties. In this paper, we use the risk measures VaR and CVaR to propose a new optimal portfolio model. The algorithm for the model is described, and an empirical analysis on the Chinese stock market verifies the effectiveness of the new model, offering a new approach to constructing reasonable portfolios.
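As a small illustration of the two measures themselves, not of the paper's portfolio optimization model, the sketch below estimates VaR and CVaR from a sample of losses by the historical method; the confidence level and the simulated loss data are assumptions.

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Historical VaR and CVaR at confidence level alpha.
    losses: 1-D array of portfolio losses (positive values are losses)."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)     # loss level exceeded with probability 1 - alpha
    tail = losses[losses >= var]         # losses at or beyond VaR
    cvar = tail.mean()                   # mean excess loss ("tail VaR")
    return var, cvar

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    losses = rng.standard_t(df=4, size=10_000) * 0.02   # fat-tailed synthetic daily losses
    print(var_cvar(losses, alpha=0.95))
```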

9.
This paper systematically analyzes the problems that arise, when applying the Lempel-Ziv complexity measure, in the various methods for converting a real signal (time series) into a symbol sequence, and proposes a more reasonable "compatible" symbolization method. The method can effectively characterize the complexity of various time series. Finally, the complexity of the Chinese securities market is analyzed dynamically.
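The "compatible" symbolization rule is not spelled out in the abstract; as a minimal stand-in, the sketch below symbolizes a series about its median and counts Lempel-Ziv phrases in an LZ78-style parsing of the resulting binary string, which is one common way such complexity measures are applied to time series. Both choices are assumptions for illustration.

```python
import numpy as np

def symbolize_by_median(x):
    """Map a real-valued series to a binary string: 1 above the median, 0 otherwise.
    (A stand-in for the paper's 'compatible' symbolization rule.)"""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    return "".join("1" if v > med else "0" for v in x)

def lz_phrase_count(s):
    """Complexity as the number of phrases in an LZ78-style incremental parsing."""
    seen, phrase, count = set(), "", 0
    for ch in s:
        phrase += ch
        if phrase not in seen:           # new phrase: record it and start over
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)  # count a trailing incomplete phrase

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    noise = rng.normal(size=1000)                 # irregular series: many phrases
    trend = np.sin(np.linspace(0, 20, 1000))      # regular series: few phrases
    print(lz_phrase_count(symbolize_by_median(noise)),
          lz_phrase_count(symbolize_by_median(trend)))
```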

10.
A machine-proof algorithm is proposed for differential geometry theorems expressed with indexed tensors. The algorithm converts a differential geometry theorem into a computation on indexed tensor polynomials, and then greatly reduces the number of equations in this polynomial system by applying rewrite rules, mining equivalent conditions, and selecting conditions by grading. The polynomial system itself, together with the equations on dummy indices, is then used to triangularize the system, and the resulting leading terms are substituted into the conclusion, yielding a machine proof of the theorem. The algorithm can not only prove differential geometry theorems based on indexed tensors, but can also be used to solve tensor equations.

11.
A general approach to non-stationary data from a non-linear dynamical time series is presented. As an application, the RR intervals extracted from the 24 h electrocardiograms of 60 healthy individuals 16–64 yr of age are analyzed with the use of a sliding time window of 100 intervals. This procedure maps the original time series into a time series of the given complexity measure. The state of the system is then given by the properties of the distribution of the complexity measure. The relation of the complexity measures to the level of the catecholamine hormones in the plasma, their dependence on the age of the subject, their mutual correlation and the results of surrogate data tests are discussed. Two different approaches to analyzing complexity are used: pattern entropy as a measure of statistical order and algorithmic complexity as a measure of sequential order in heart rate variability. These two complexity measures are found to reflect different aspects of the neuroregulation of the heart. Finally, in some subjects (usually younger persons) the two complexity measures depend on the subject's age while in others (mostly older subjects) they do not, in which case the correlation between them is lost.
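A minimal sketch of the windowing procedure described above, with a simple histogram-entropy stand-in for the study's pattern-entropy and algorithmic-complexity measures; the synthetic RR data and the stand-in measure are assumptions.

```python
import numpy as np

def window_complexity(rr, window=100, bins=10):
    """Map an RR-interval series to a series of per-window complexity values,
    here using the Shannon entropy of a histogram as a simple stand-in measure."""
    values = []
    for start in range(0, len(rr) - window + 1):
        seg = rr[start:start + window]
        hist, _ = np.histogram(seg, bins=bins)
        p = hist[hist > 0] / window
        values.append(float(-np.sum(p * np.log(p))))
    return np.array(values)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    rr = 0.8 + 0.05 * rng.standard_normal(5000)   # synthetic RR intervals in seconds
    cm = window_complexity(rr)
    print(cm.mean(), cm.std())                    # the "state": distribution of the measure
```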

12.
In this work, we are motivated by the observation that previous considerations of appropriate complexity measures have not directly addressed the fundamental issue that the complexity of any particular matter or thing has a significant subjective component in which the degree of complexity depends on available frames of reference. Any attempt to remove subjectivity from a suitable measure therefore fails to address a very significant aspect of complexity. Conversely, there has been justifiable apprehension toward purely subjective complexity measures, simply because they are not verifiable if the frame of reference being applied is in itself both complex and subjective. We address this issue by introducing the concept of subjective simplicity—although a justifiable and verifiable value of subjective complexity may be difficult to assign directly, it is possible to identify in a given context what is “simple” and, from that reference, determine subjective complexity as distance from simple. We then propose a generalized complexity measure that is applicable to any domain, and provide some examples of how the framework can be applied to engineered systems. © 2016 Wiley Periodicals, Inc. Complexity 21: 533–546, 2016

13.
To address the interdependence among the factors of NPD (new product development) project complexity and the limitations of traditional evaluation methods, an evaluation approach based on interacting multiple attributes and 2-additive fuzzy measures is proposed for NPD project complexity. After clarifying what project complexity means, an evaluation index system for NPD project complexity is built along four dimensions: product complexity, environmental complexity, organizational complexity, and technical complexity. Starting from the transformations among fuzzy measures, the Möbius transform, and interaction coefficients, and based on the maximum Marichal-entropy principle, a new method for determining 2-additive fuzzy measure values is proposed. The Choquet integral is used as the aggregation operator to compute, bottom-up, a comprehensive evaluation value for each candidate alternative. Finally, a numerical example demonstrates the feasibility and effectiveness of the method.
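The maximum-Marichal-entropy identification step is not reproduced here; the sketch below only illustrates the aggregation stage, computing a discrete Choquet integral of four complexity scores with respect to a small 2-additive fuzzy measure. The criterion names, scores, and Möbius masses are assumptions for illustration.

```python
from itertools import combinations

def choquet_integral(scores, mu):
    """Discrete Choquet integral of `scores` (dict criterion -> value in [0, 1])
    with respect to the fuzzy measure `mu` (dict frozenset -> value)."""
    items = sorted(scores.items(), key=lambda kv: kv[1])   # ascending scores
    total, prev = 0.0, 0.0
    for i, (_, val) in enumerate(items):
        coalition = frozenset(c for c, _ in items[i:])     # criteria scoring >= val
        total += (val - prev) * mu[coalition]
        prev = val
    return total

if __name__ == "__main__":
    criteria = ["product", "environment", "organization", "technology"]
    scores = {"product": 0.7, "environment": 0.4, "organization": 0.6, "technology": 0.8}
    # Illustrative 2-additive measure: Mobius masses on singletons plus one positive
    # interaction between product and technology complexity (masses sum to 1).
    m_single = {"product": 0.25, "environment": 0.20, "organization": 0.20, "technology": 0.25}
    m_pair = {frozenset({"product", "technology"}): 0.10}
    def mu_of(s):
        return sum(m_single[c] for c in s) + sum(v for pair, v in m_pair.items() if pair <= s)
    mu = {frozenset(s): mu_of(frozenset(s))
          for r in range(len(criteria) + 1) for s in combinations(criteria, r)}
    print(round(choquet_integral(scores, mu), 4))
```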

14.
15.
We previously introduced the concept of “set‐complexity,” based on a context‐dependent measure of information, and used this concept to describe the complexity of gene interaction networks. In a previous paper of this series we analyzed the set‐complexity of binary graphs. Here, we extend this analysis to graphs with multicolored edges that more closely match biological structures like the gene interaction networks. All highly complex graphs by this measure exhibit a modular structure. A principal result of this work is that for the most complex graphs of a given size the number of edge colors is equal to the number of “modules” of the graph. Complete multipartite graphs (CMGs) are defined and analyzed. The relation between complexity and structure of these graphs is examined in detail. We establish that the mutual information between any two nodes in a CMG can be fully expressed in terms of entropy, and present an explicit expression for the set complexity of CMGs (Theorem 3). An algorithm for generating highly complex graphs from CMGs is described. We establish several theorems relating these concepts and connecting complex graphs with a variety of practical network properties. In exploring the relation between symmetry and complexity we use the idea of a similarity matrix and its spectrum for highly complex graphs. © 2012 Wiley Periodicals, Inc. Complexity, 2012

16.
This paper offers some new results on randomness with respect to classes of measures, along with a didactic exposition of their context based on results that appeared elsewhere. We start with the reformulation of the Martin-Löf definition of randomness (with respect to computable measures) in terms of randomness deficiency functions. A formula that expresses the randomness deficiency in terms of prefix complexity is given (in two forms). Some approaches that go in another direction (from deficiency to complexity) are considered. The notion of Bernoulli randomness (independent coin tosses for an asymmetric coin with some probability p of head) is defined. It is shown that a sequence is Bernoulli if it is random with respect to some Bernoulli measure B_p. A notion of “uniform test” for Bernoulli sequences is introduced which allows a quantitative strengthening of this result. Uniform tests are then generalized to arbitrary measures. Bernoulli measures B_p have the important property that p can be recovered from each random sequence of B_p. The paper studies some important consequences of this orthogonality property (as well as most other questions mentioned above) also in the more general setting of constructive metric spaces.
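The abstract does not reproduce the formula it mentions; for orientation, the standard Levin-Schnorr characterization connecting Martin-Löf randomness with respect to a computable measure μ to prefix complexity K is stated below, where μ(ω_{1:n}) is the measure of the cylinder of sequences extending the prefix ω_{1:n}. Whether this matches the paper's exact formulation is an assumption.

```latex
% omega is Martin-Lof random with respect to a computable measure mu
% iff its randomness deficiency, expressed via prefix complexity K, is finite:
\[
  d_\mu(\omega) \;=\; \sup_{n}\Bigl( -\log \mu\bigl(\omega_{1:n}\bigr) - K\bigl(\omega_{1:n}\bigr) \Bigr) \;<\; \infty .
\]
```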

17.
To better analyze the complexity of continuous chaotic systems, the modified multiscale permutation entropy (MMPE) algorithm is proposed. Characteristics and parameter choices of the MMPE algorithm are investigated. A comparative study between MPE and MMPE shows that MMPE is more robust for identifying different chaotic systems when the scale factor τ takes large values. Compared with MPE, the MMPE algorithm is better suited to analyzing the complexity of time series, since it works with τ time series at each scale. As an application, the MMPE algorithm is used to calculate the complexity of multiscroll chaotic systems. Results show that the complexity of multiscroll chaotic systems does not increase as the scroll number increases. A discussion based on the first-order difference operation gives a reasonable explanation of why the complexity does not increase. This complexity analysis method lays a theoretical as well as experimental basis for applications of multiscroll chaotic systems. © 2014 Wiley Periodicals, Inc. Complexity 21: 52–58, 2016
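A minimal sketch of the ingredients named above: Bandt-Pompe permutation entropy applied to coarse-grained copies of a series. Reading "it works with τ time series" as averaging the entropy over the τ offset coarse-grainings at scale τ is an assumption about what "modified" means here; the embedding dimension and test signal are also assumptions.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=4, delay=1):
    """Normalized permutation entropy (Bandt-Pompe) with embedding dimension m
    and time delay `delay`; returns a value in [0, 1]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * delay
    patterns = {}
    for i in range(n):
        window = x[i:i + m * delay:delay]
        key = tuple(np.argsort(window))          # ordinal pattern of the window
        patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), dtype=float) / n
    return float(-np.sum(p * np.log(p)) / np.log(factorial(m)))

def mmpe(x, scale, m=4):
    """Assumed 'modified' multiscale PE: average PE over the `scale` shifted
    coarse-grained series obtained by non-overlapping means with offsets 0..scale-1."""
    x = np.asarray(x, dtype=float)
    values = []
    for offset in range(scale):
        y = x[offset:]
        k = len(y) // scale
        coarse = y[:k * scale].reshape(k, scale).mean(axis=1)
        values.append(permutation_entropy(coarse, m=m))
    return float(np.mean(values))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    noise = rng.normal(size=5000)
    print(permutation_entropy(noise), mmpe(noise, scale=5))
```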

18.
The balance between symmetry and randomness as a property of networks can be viewed as a kind of “complexity.” We use here our previously defined “set complexity” measure (Galas et al., IEEE Trans Inf Theory 2010, 56), which was used to approach the problem of defining biological information, in the mathematical analysis of networks. This information theoretic measure is used to explore the complexity of binary, undirected graphs. The complexities, Ψ, of some specific classes of graphs can be calculated in closed form. Some simple graphs have a complexity value of zero, but graphs with significant values of Ψ are rare. We find that the most complex of the simple graphs are the complete bipartite graphs (CBGs). In this simple case, the complexity, Ψ, is a strong function of the size of the two node sets in these graphs. We find the maximum Ψ binary graphs as well. These graphs are distinct from, but similar to CBGs. Finally, we explore directed and stochastic processes for growing graphs (hill-climbing and random duplication, respectively) and find that node duplication and partial node duplication conserve interesting graph properties. Partial duplication can grow extremely complex graphs, while full node duplication cannot do so. By examining the eigenvalue spectrum of the graph Laplacian we characterize the symmetry of the graphs and demonstrate that, in general, breaking specific symmetries of the binary graphs increases the set-based complexity, Ψ. The implications of these results for more complex, multiparameter graphs, and for physical and biological networks and the processes of network evolution are discussed. © 2011 Wiley Periodicals, Inc. Complexity 17: 51–64, 2011
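The set-complexity Ψ itself is not implemented here; as a small companion to the symmetry discussion, the sketch below builds a complete bipartite graph K_{p,q} and computes the eigenvalue spectrum of its graph Laplacian, whose repeated eigenvalues reflect the graph's high symmetry. The sizes p and q are illustrative.

```python
import numpy as np

def complete_bipartite_adjacency(p, q):
    """Adjacency matrix of the complete bipartite graph K_{p,q}."""
    n = p + q
    a = np.zeros((n, n))
    a[:p, p:] = 1.0
    a[p:, :p] = 1.0
    return a

def laplacian_spectrum(adj):
    """Eigenvalues of the graph Laplacian L = D - A, sorted ascending."""
    deg = np.diag(adj.sum(axis=1))
    return np.sort(np.linalg.eigvalsh(deg - adj))

if __name__ == "__main__":
    spec = laplacian_spectrum(complete_bipartite_adjacency(3, 4))
    # K_{p,q}: eigenvalue 0 once, p with multiplicity q-1, q with multiplicity p-1, p+q once
    print(np.round(spec, 6))
```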

19.
In an intraday high-frequency setting, this paper tests how efficiently complexity measures based on the "compatible" symbolization method, namely Kolmogorov entropy, sample entropy, and fuzzy entropy, measure the complexity of China's CSI 300 stock index, and then uses the algorithms that pass this screening to study and compare, stage by stage, how the complexity of the series evolves and by how much. The results show that fuzzy entropy is the more suitable and effective complexity measure for the CSI 300 index: it is less sensitive to the similarity tolerance and its measured values are more continuous. Over time, the complexity of the CSI 300 index shows an overall upward trend, yet it remains lower than that of developed markets and even of neighboring emerging markets.
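As a rough illustration of one of the measures compared above, not of the paper's calibration for the CSI 300 index, the sketch below implements sample entropy with the usual Chebyshev-distance matching; the embedding dimension m, the tolerance r = 0.2·std, and the test series are assumptions.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) of series x using the Chebyshev distance.
    r defaults to 0.2 * std(x); self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()

    def count_matches(dim):
        n_templates = len(x) - m                       # same count for dim = m and m + 1
        templates = np.array([x[i:i + dim] for i in range(n_templates)])
        count = 0
        for i in range(n_templates - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))            # pairs matching within tolerance r
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return float(-np.log(a / b)) if a > 0 and b > 0 else float("inf")

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    noise = rng.normal(size=2000)                      # irregular returns: higher SampEn
    regular = np.sin(np.linspace(0, 60, 2000))         # highly regular series: lower SampEn
    print(sample_entropy(noise), sample_entropy(regular))
```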
