Similar Articles
1.
Classical information systems are introduced in the framework of measure and integration theory. The measurable characteristic functions are identified with the exact events, while the fuzzy events are the real measurable functions whose range is contained in the unit interval. Two orthogonality relations are introduced on fuzzy events, the first linked to the fuzzy logic and the second to the fuzzy structure of a partial Baer*-ring. The fuzzy logic is then compared with the “empirical” fuzzy logic induced by the classical information system. In this context, quantum logics can be regarded as those empirical fuzzy logics in which no preparation procedure provides physical systems whose “microstate” is always exactly defined.

2.
In order to model the reasoning of intelligent agents represented by a poset T, H. Rasiowa introduced logic systems called “Approximation Logics”. In these systems the use of a set of constants constitutes a fundamental tool. In [8] we introduced a logic system without this kind of constants, but limited to the case where T is a finite poset, and we proved a completeness result for this system with respect to an algebraic semantics. In this paper we introduce a Kripke-style semantics for a subsystem for which a deduction theorem exists. The set of “possible worlds” is enriched by a family of functions indexed by the elements of T and satisfying certain conditions. We prove a completeness result for the system with respect to this Kripke semantics and define a finite Kripke structure that characterizes the propositional fragment of the logic. We also introduce a relational semantics (due to E. Orlowska), which has the advantage of allowing an interpretation of the propositional logic using only binary relations. Finally, we treat the computational complexity of the satisfiability problem for the propositional fragment of the logic.

3.
For many systems characterized as “complex” the patterns exhibited on different scales differ markedly from one another. For example, the biomass distribution in a human body “looks very different” depending on the scale at which one examines it. Conversely, the patterns at different scales in “simple” systems (e.g., gases, mountains, crystals) vary little from one scale to another. Accordingly, the degrees of self-dissimilarity between the patterns of a system at various scales constitute a complexity “signature” of that system. Here we present a novel quantification of self-dissimilarity. This signature can, if desired, incorporate a novel information-theoretic measure of the distance between probability distributions that we derive here. Whatever distance measure is chosen, our quantification of self-dissimilarity can be measured for many kinds of real-world data. This allows comparisons of the complexity signatures of wholly different kinds of systems (e.g., systems involving information density in a digital computer vs. species densities in a rain forest vs. capital density in an economy, etc.). Moreover, in contrast to many other suggested complexity measures, evaluating the self-dissimilarity of a system does not require one to already have a model of the system. These facts may allow self-dissimilarity signatures to be used as the underlying observational variables of an eventual overarching theory relating all complex systems. To illustrate self-dissimilarity, we present several numerical experiments. In particular, we show that the underlying structure of the logistic map is picked out by the self-dissimilarity signature of time series produced by that map. © 2007 Wiley Periodicals, Inc. Complexity 12: 77–85, 2007
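The idea of a scale-by-scale signature can be sketched generically: coarse-grain a series to several scales and measure how far each scale's value distribution lies from the scale-1 distribution. In the sketch below, the Jensen-Shannon divergence and the histogram binning are illustrative stand-ins, not the paper's own information-theoretic distance or procedure.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions
    (an illustrative stand-in for the paper's distance measure)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)

    def kl(a, b):
        nz = a > 0  # 0 * log 0 terms contribute nothing
        return float((a[nz] * np.log(a[nz] / b[nz])).sum())

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def scale_histogram(x, tau, bins=10):
    """Empirical value distribution after mean coarse-graining to
    scale tau (non-overlapping windows of length tau)."""
    x = np.asarray(x, float)
    n = len(x) // tau
    y = x[:n * tau].reshape(n, tau).mean(axis=1)
    h, _ = np.histogram(y, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

# A chaotic logistic-map series; the "signature" here is the distance
# between the value distribution at scale 1 and at coarser scales.
x = [0.4]
for _ in range(4000):
    x.append(4.0 * x[-1] * (1.0 - x[-1]))
signature = [js_divergence(scale_histogram(x, 1), scale_histogram(x, tau))
             for tau in (2, 4, 8)]
```

Coarse-graining a chaotic series concentrates its values toward the mean, so the distance from the scale-1 distribution grows with scale; a constant or purely periodic series would give a much flatter signature.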

4.
Jing Du, Complexity, 2016, 21(3): 21–35
This article introduces a way of measuring the intrinsic complexity of models. Unlike complication, complexity is an irreducible indication of the innate characteristics of models. Instead of a reductionist paradigm, complexity should be measured in a holistic way. This article redefines the relationship between models and data, and proposes the concept of the “weight” of models, that is, how “heavy” a model is. Based on this concept, this article further defines the complexity of a model as its ability to distort the space configuration. Three complexity indices are proposed to quantify the extent to which the input space is distorted by a model. It is recognized that a widely accepted definition or measure of model complexity is lacking. The answer provided by this article is an attempt to move the inquiry a step closer to that goal. © 2014 Wiley Periodicals, Inc. Complexity 21: 21–35, 2016

5.
In this work, we are motivated by the observation that previous considerations of appropriate complexity measures have not directly addressed the fundamental issue that the complexity of any particular matter or thing has a significant subjective component in which the degree of complexity depends on available frames of reference. Any attempt to remove subjectivity from a suitable measure therefore fails to address a very significant aspect of complexity. Conversely, there has been justifiable apprehension toward purely subjective complexity measures, simply because they are not verifiable if the frame of reference being applied is in itself both complex and subjective. We address this issue by introducing the concept of subjective simplicity—although a justifiable and verifiable value of subjective complexity may be difficult to assign directly, it is possible to identify in a given context what is “simple” and, from that reference, determine subjective complexity as distance from simple. We then propose a generalized complexity measure that is applicable to any domain, and provide some examples of how the framework can be applied to engineered systems. © 2016 Wiley Periodicals, Inc. Complexity 21: 533–546, 2016

6.
In order to model the reasoning of an intelligent agent represented by a poset T, H. Rasiowa introduced logic systems called “Approximation Logics”. In these systems a set of constants constitutes a fundamental tool. In this paper, we consider logic systems called LT without this kind of constants, but limited to the case where T is a finite poset. We prove a weak deduction theorem. We also introduce an algebraic semantics using Heyting algebras with operators. To prove the completeness theorem of the LT system with respect to the algebraic semantics, we use the method of H. Rasiowa and R. Sikorski for first-order logic. In the propositional case, a corollary allows us to assert that the validity of a propositional formula is decidable. We also study certain relations between the LT logic and the intuitionistic and classical logics.

7.
We discuss methodology for multidimensional scaling (MDS) and its implementation in two software systems, GGvis and XGvis. MDS is a visualization technique for proximity data, that is, data in the form of N × N dissimilarity matrices. MDS constructs maps (“configurations,” “embeddings”) in R^k by interpreting the dissimilarities as distances. Two frequent sources of dissimilarities are high-dimensional data and graphs. When the dissimilarities are distances between high-dimensional objects, MDS acts as an (often nonlinear) dimension-reduction technique. When the dissimilarities are shortest-path distances in a graph, MDS acts as a graph layout technique. MDS has received recent attention in machine learning motivated by image databases (“Isomap”). MDS is also of interest in view of the popularity of “kernelizing” approaches inspired by Support Vector Machines (SVMs; “kernel PCA”).

This article discusses the following general topics: (1) the stability and multiplicity of MDS solutions; (2) the analysis of structure within and between subsets of objects with missing value schemes in dissimilarity matrices; (3) gradient descent for optimizing general MDS loss functions (“Strain” and “Stress”); (4) a unification of classical (Strain-based) and distance (Stress-based) MDS.

Particular topics include the following: (1) blending of automatic optimization with interactive displacement of configuration points to assist in the search for global optima; (2) forming groups of objects with interactive brushing to create patterned missing values in MDS loss functions; (3) optimizing MDS loss functions for large numbers of objects relative to a small set of anchor points (“external unfolding”); and (4) a non-metric version of classical MDS.

We show applications to the mapping of computer usage data, to the dimension reduction of marketing segmentation data, to the layout of mathematical graphs and social networks, and finally to the spatial reconstruction of molecules.
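The classical (Strain-based) branch of MDS mentioned in this abstract has a closed-form solution that is easy to sketch: double-center the squared dissimilarities and read the configuration off the top eigenvectors. The code below is a generic textbook implementation, not GGvis/XGvis, and the unit-square example is illustrative.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Strain-based) MDS: double-center the squared
    dissimilarities and embed along the top-k eigenvectors."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # implied inner-product matrix
    vals, vecs = np.linalg.eigh(B)        # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]      # pick the k largest
    L = np.sqrt(np.clip(vals[idx], 0.0, None))
    return vecs[:, idx] * L               # n x k configuration

# Recover the four corners of a unit square from their distances alone.
s2 = np.sqrt(2.0)
D = np.array([[0, 1, 1, s2],
              [1, 0, s2, 1],
              [1, s2, 0, 1],
              [s2, 1, 1, 0]], dtype=float)
X = classical_mds(D)
fitted = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # matches D
```

Because these dissimilarities are exactly Euclidean, the recovered pairwise distances reproduce D (up to rotation and reflection of the configuration); distance (Stress-based) MDS instead minimizes a residual-sum-of-squares loss by iterative gradient descent.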

8.
We define two measures, γ and c, of complexity for Boolean functions. These measures are related to issues of functional decomposition which (for continuous functions) were studied by Arnol'd, Kolmogorov, Vituškin and others in connection with Hilbert's 13th Problem. This perspective was first applied to Boolean functions in [1]. Our complexity measures differ from those which were considered earlier [3, 5, 6, 9, 10] and which were used by Ehrenfeucht and others to demonstrate the great complexity of most decision procedures. In contrast to other measures, both γ and c (which range between 0 and 1) have a more combinatorial flavor and it is easy to show that both of them are close to 0 for literally all “meaningful” Boolean functions of many variables. It is not trivial to prove that there exist functions for which c is close to 1, and for γ the same question is still open. The same problem for all traditional measures of complexity is easily resolved by statistical considerations.

9.
Recently, I had a very interesting friendly e-mail discussion with Professor Parikh on vagueness and fuzzy logic. Parikh published several papers concerning the notion of vagueness. They contain critical remarks on fuzzy logic and its ability to formalize reasoning under vagueness [10,11]. On the other hand, for some years I have tried to advocate fuzzy logic (in the narrow sense, as Zadeh says, i.e. as formal logical systems formalizing reasoning under vagueness) and in particular, to show that such systems (of many-valued logic of a certain kind) offer a fully fledged and extremely interesting logic [4, 5]. But this leaves open the question of the intuitive adequacy of many-valued logic as a logic of vagueness. Below I shall try to isolate eight questions Parikh asks, add two more, and comment on all of them. Finally, I formulate a problem on truth (in)definability in Łukasiewicz logic which shows, in my opinion, that fuzzy logic is not just “applied logic” but rather belongs to systems commonly called “philosophical logic”, like modal logics, etc.

10.
A theory of approximation to measurable sets and measurable functions based on the concepts of recursion theory and discrete complexity theory is developed. The approximation method uses a model of oracle Turing machines, and so the computational complexity may be defined in a natural way. This complexity measure may be viewed as a formulation of the average-case complexity of real functions—in contrast to the more restrictive worst-case complexity. The relationship between these two complexity measures is further studied and compared with the notion of the distribution-free probabilistic computation. The computational complexity of the Lebesgue integral of polynomial-time approximable functions is studied and related to the question “FP = ♯P?”.

11.
Journal of Complexity, 2005, 21(1): 111–148
In this paper we study the rate of the best approximation of a given function by semialgebraic functions of a prescribed “combinatorial complexity”. We call this rate a “Semialgebraic Complexity” of the approximated function. By the classical Approximation Theory, the rate of a polynomial approximation is determined by the regularity of the approximated function (the number of its continuous derivatives, the domain of analyticity, etc.). In contrast, semialgebraic complexity (being always bounded from above in terms of regularity) may be small for functions not regular in the usual sense. We give various natural examples of functions of low semialgebraic complexity, including maxima of smooth families, compositions, series of a special form, etc. We show that certain important characteristics of the functions, in particular, the geometry of their critical values (Morse–Sard Theorem) are determined by their semialgebraic complexity, and not by their regularity.

12.
A local theory of weak solutions of first-order nonlinear systems of conservation laws is presented. In the systems considered, two of the characteristic speeds become complex for some achieved values of the dependent variable. The transonic “small disturbance” equation is an example of this class of systems. Some familiar concepts from the purely hyperbolic case are extended to such systems of mixed type, including genuine nonlinearity, classification of shocks into distinct fields and entropy inequalities. However, the associated entropy functions are not everywhere locally convex, shock and characteristic speeds are not bounded in the usual sense, and closed loops and disjoint segments are possible in the set of states which can be connected to a given state by a shock. With various assumptions, we show (1) that the state on one side of a shock plus the shock speed determine the state on the other side uniquely, as in the hyperbolic case; (2) that the “small disturbance” equation is a local model for a class of such systems; and (3) that entropy inequalities and/or the existence of viscous profiles can still be used to select the “physically relevant” weak solution of such a system.

13.
Because measurement results obtained under different measurement conditions are not linearly additive, the use of matrix multiplication in AHP to realize multi-path order conversion is open to question. Ever since membership degrees were extended from the two values “1 or 0” to all real numbers in the interval [0,1], so that every possible fuzzy state of “partly is” between “is” and “is not” can be represented, the study of two-valued logic has broadened into fuzzy logic for approximate reasoning. This is a new research direction in logic; its aim is to regulate the human faculty of approximate reasoning during membership-degree conversion so that the target value obtained is the optimal approximation of the “true value” under the current conditions. The quantitative method of fuzzy logic is numerical computation; its inference is based on the redundancy theory of discriminating-weight filtering; its substantive computation is the nonlinear de-redundancy algorithm, derived from that redundancy theory, which realizes the membership-degree conversion; and the membership-degree conversion model so constructed is also a nonlinearly additive model, over a high-dimensional state space, of measurement results obtained under different measurement conditions. By mapping one-dimensional measurement data into a high-dimensional state space and expressing them as membership-degree vectors, the membership-degree conversion model can be used to carry out the nonlinear computation required for multi-path order conversion in AHP.

14.
15.
The puzzle of origins and future of government and social complexity in human and social dynamics, arguably a characteristic feature of the emergence and long-term evolution of hierarchy and power in the history of civilizations, is an enduring topic that has challenged political scientists, anthropological archaeologists, and other social scientists and historians. This paper proposes a new computational theory for the emergence of social complexity that accounts for the earliest formation of systems of government (pristine polities) in prehistory and early antiquity, as well as present and future political development. This general social theory is based on a “fast process” of crisis and opportunistic decision-making through collective action, which feeds a “slow” process of political development or decay. The “fast” core iterative process is “canonical” in the sense that it undergoes variations on a recurring theme of signal detection, information-processing, problem-solving, successful adaptation and occasional failure. When a group is successful in managing or overcoming serious situational changes (stresses or opportunities, endogenous or exogenous, social or physical) a probabilistic phase transition may occur, under a specified set of conditions, yielding a long-term (slow) probabilistic accrual process of emergent sociopolitical complexity and development. A reverse process may account for decay. The canonical theory is being formally implemented through the “PoliGen” agent-based model (ABM), based on the new Multi-Agent Simulator of Networks and Neighborhoods (MASON). Empirically, the theory is testable with the datasets on polities developed by the Long-Range Analysis of War (LORANOW) Project. This paper focuses on the concepts, mechanisms, and basic formal structure that constitute the canonical theory and inform the subsequent simulation model.

16.
We consider a two-dimensional transport equation subject to small diffusive perturbations. The transport equation is given by a Hamiltonian flow near a compact and connected heteroclinic cycle. We investigate approximately harmonic functions corresponding to the generator of the perturbed transport equation. In particular, we investigate such functions in the boundary layer near the heteroclinic cycle; the space of these functions gives information about the likelihood of a particle moving a mesoscopic distance into one of the regions where the transport equation corresponds to periodic oscillations (i.e., a “well” of the Hamiltonian). We find that we can construct such approximately harmonic functions (which can be used as “corrector functions” in certain averaging questions) when certain macroscopic “gluing conditions” are satisfied. This provides a different perspective on some previous work of Freidlin and Wentzell on stochastic averaging of Hamiltonian systems. © 2004 Wiley Periodicals, Inc.

17.
To know the dynamic behavior of a system it is convenient to have a good dynamic model of it. However, in many cases this is not possible, either because of the system's complexity or because of the lack of knowledge of the laws involved in its operation. In these cases, obtaining models from input–output data proves a highly effective technique. Specifically, intelligent modeling techniques have become important in this field in recent years. Among these techniques, fuzzy logic is especially interesting because it allows the knowledge one has of the system to be incorporated into the model, besides offering a more interpretable model than other techniques. A fuzzy model is, formally speaking, a mathematical model. Therefore, it can be used to analyze the original system using known systems-analysis techniques. In this paper a methodology for extracting information from unknown systems using fuzzy logic is presented. More precisely, we present the exact linearization of a Takagi–Sugeno fuzzy model with no restrictions on the use or distribution of its membership functions, as well as the computation of its equilibrium states, the study of its local behavior, and the search for periodic orbits by the application of Poincaré maps.
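As background, a first-order Takagi–Sugeno model is a membership-weighted blend of local affine models. The Gaussian membership functions and parameter values in this sketch are illustrative assumptions, not taken from the paper (which places no restrictions on the membership functions).

```python
import numpy as np

def ts_eval(x, centers, widths, params):
    """First-order Takagi-Sugeno fuzzy model with one input:
    rule i has Gaussian membership exp(-((x - c_i)/s_i)^2 / 2)
    and affine consequent y_i = a_i * x + b_i; the output is the
    membership-weighted average of the consequents."""
    centers = np.asarray(centers, float)
    widths = np.asarray(widths, float)
    params = np.asarray(params, float)
    w = np.exp(-0.5 * ((x - centers) / widths) ** 2)  # rule firing strengths
    y_local = params[:, 0] * x + params[:, 1]         # local affine outputs
    return float((w * y_local).sum() / w.sum())

# Two rules: near x = -2 behave like y = -x, near x = +2 like y = x.
centers, widths = [-2.0, 2.0], [1.0, 1.0]
params = [[-1.0, 0.0], [1.0, 0.0]]
print(ts_eval(-2.0, centers, widths, params))  # close to 2 (nearby rule dominates)
print(ts_eval(0.0, centers, widths, params))   # 0 by symmetry
```

Near a rule center the model behaves like that rule's local affine model, which is what makes equilibrium and local-behavior analysis of the kind described above tractable.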

18.
Using provincial panel data for 2004-2017 and threshold regression techniques, this paper empirically analyzes the heterogeneous dynamic moderating effects of three types of environmental regulation (command-and-control, voluntary, and market-based) on industrial upgrading driven by green technological innovation. The study finds that green technological innovation significantly drives domestic industrial upgrading, but its industrial-upgrading effect evolves dynamically along a clear inverted "U" shape. Environmental regulation functions as a "regulator" in this process: it can not only reverse any adverse impact of green technological innovation on industrial upgrading but also help strengthen the upgrading effect. The dynamic impact of green technological innovation on industrial upgrading is moderated heterogeneously by the type of regulation: under command-and-control and market-based regulation it follows a positive "U" shape, while under voluntary regulation it follows a positive inverted "U" shape. At the present stage the moderating effects differ markedly: by policy instrument, command-and-control regulation works best, market-based regulation comes next, and voluntary regulation is weakest; by region, the effect is most pronounced in the eastern region, followed by the western region, and weakest in the central region.

19.
We investigate the construction of stable models of general propositional logic programs. We show that a forward-chaining technique, supplemented by a properly chosen safeguard, can be used to construct stable models of logic programs. Moreover, the proposed method has the advantage that if a program has no stable model, the result of the construction is a stable model of a subprogram. Further, in such a case the proposed method “isolates the inconsistency” of the program, that is, it points to the part of the program responsible for the inconsistency. The results of computations are called stable submodels. We prove that every stable model of a program is a stable submodel. We investigate the complexity issues associated with stable submodels. The number of steps required to construct a stable submodel is polynomial in the sum of the lengths of the rules of the program. In the infinite case the outputs of the forward-chaining procedure have much simpler complexity than those for general stable models. We show how to incorporate other techniques for finding models (e.g. the Fitting operator, the Van Gelder–Ross–Schlipf operator) into our construction.
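Forward chaining itself is easiest to see on definite (negation-free) programs, where it computes the least model in time polynomial in the program size, matching the complexity bound quoted above. The safeguard that extends the technique to general programs with negation is the paper's contribution and is not attempted in this generic sketch.

```python
def forward_chain(rules, facts=()):
    """Least-model (fixpoint) computation for a definite (Horn) logic
    program: rules are (head, body) pairs of atom names; repeatedly fire
    every rule whose body atoms are all derived, until nothing new is added."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

# b follows from a; c from a and b; d is never derivable (e is absent).
rules = [("b", ("a",)), ("c", ("a", "b")), ("d", ("e",))]
print(sorted(forward_chain(rules, facts=["a"])))  # ['a', 'b', 'c']
```

Each outer pass either adds an atom or terminates, so the loop runs at most (number of atoms + 1) times, giving the polynomial bound in the total rule length.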

20.
To analyze the complexity of continuous chaotic systems better, the modified multiscale permutation entropy (MMPE) algorithm is proposed. Characteristics and parameter choices of the MMPE algorithm are investigated. A comparative study between MPE and MMPE shows that MMPE is more robust for identifying different chaotic systems when the scale factor τ takes large values. Compared with MPE, the MMPE algorithm is better suited to analyzing the complexity of time series, as it produces τ coarse-grained time series at each scale. As an application, the MMPE algorithm is used to calculate the complexity of multiscroll chaotic systems. The results show that the complexity of multiscroll chaotic systems does not increase as the scroll number increases. A discussion based on the first-order difference operation gives a reasonable explanation of why the complexity does not increase. This complexity-analysis method lays a theoretical as well as an experimental basis for applications of multiscroll chaotic systems. © 2014 Wiley Periodicals, Inc. Complexity 21: 52–58, 2016
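The unmodified building blocks, ordinal-pattern (permutation) entropy and mean coarse-graining, can be sketched generically; the specific modification that distinguishes MMPE from MPE is the paper's contribution and is not reproduced here. Series lengths and parameters below are illustrative.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, delay=1):
    """Normalized permutation entropy: the Shannon entropy of the
    distribution of ordinal patterns of length m, divided by log(m!)."""
    x = np.asarray(x, float)
    n = len(x) - (m - 1) * delay
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + m * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), float) / n
    return float(-(p * np.log(p)).sum() / np.log(factorial(m)))

def coarse_grain(x, tau):
    """Mean coarse-graining at scale tau, as in multiscale entropy."""
    x = np.asarray(x, float)
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

# A chaotic series scores much higher than a regular one at scale 2.
x = np.empty(4000)
x[0] = 0.4
for i in range(3999):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])
pe_chaos = permutation_entropy(coarse_grain(x, 2))
pe_sine = permutation_entropy(coarse_grain(np.sin(np.linspace(0, 40 * np.pi, 4000)), 2))
```

A monotone series has a single ordinal pattern and entropy exactly 0; MPE repeats this computation over a range of scales τ, and the robustness issue at large τ discussed in the abstract comes from the coarse-grained series becoming short.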

