Similar Documents
20 similar documents found (search time: 15 ms)
1.
The main contribution of this paper is the demonstration that, contrary to conventional thinking, a measurable increase in the operational complexity of the production scheduling function between two companies can occur following closer supply chain integration. The paper presents the practical application of previous work carried out and validated by the authors in terms of (a) a methodology for measuring operational complexity, (b) the predicted implications of Supplier–Customer integration and (c) the derivation of an operational complexity measure applied before and after Supplier–Customer integration. This application is illustrated via a longitudinal case study. The analysis is based on information theory, whereby the operational complexity of a Supplier–Customer system is defined as the amount of information required to describe the state of that system. The results show that operational complexity can increase when companies decide to integrate more closely, a fact easily overlooked when deciding to pursue closer supply chain integration. In this study, operational complexity increases because of reduced buffering arising from a reduction in the Supplier's inventory capacity. The Customer did not change its operational practices to improve its schedule adherence post-integration and consequently suffered an increase in complexity due to complexity rebound. After the case study reported in this paper, both the Supplier's and the Customer's decision-making processes were enhanced: the operational complexity measure made it possible to quantify the most complex areas and to prioritise and direct managerial effort towards them. Future work could extend this study (situated in the ‘low product customisation’ and ‘low product value impact’ quadrant) to investigate Supplier–Customer integration in the other quadrants resulting from further combinations of ‘product customisation’ and ‘product value impact’ levels.
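The information-theoretic definition above lends itself to a compact illustration. The sketch below is a minimal reading of it, assuming the system state is logged as a sequence of discrete labels; the state names and figures are hypothetical, not data from the case study.

```python
import math
from collections import Counter

def operational_complexity(observed_states):
    """Shannon entropy of the observed state distribution, in bits.

    `observed_states` is a sequence of state labels recorded for a
    Supplier-Customer system. H = -sum(p * log2 p) is the average
    amount of information needed to describe the system state.
    """
    counts = Counter(observed_states)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical example: with a smaller buffer after integration, the
# schedule deviates more often, so the state distribution is less
# predictable and the entropy (complexity) rises.
before = ["on schedule"] * 18 + ["late"] * 2
after = ["on schedule"] * 10 + ["late"] * 6 + ["starved"] * 4
print(operational_complexity(before))  # ~0.47 bits
print(operational_complexity(after))   # ~1.49 bits
```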

2.
3.
The search for logical regularities of classes in recognition-by-precedents problems, and the use of such regularities for solving recognition and prediction problems, are considered. Logical regularities of classes are defined as conjunctions of one-place predicates, each of which tests whether the value of a feature lies in a certain interval of the real axis. The conjunctions are true on subsets of the reference objects of a certain class and are optimal. Various optimality criteria are considered, and the problem of finding logical regularities is formulated as an integer programming problem. A qualitative analysis of these problems is performed. Models for evaluating estimates on the basis of systems of logical regularities are considered. Modifications of linear decision rules for estimating how close the reference objects are to classes are proposed, based on the search for the maximum gap. An approximation of logical regularities of classes by smooth functions is proposed. The concept of the dynamic logical regularity of classes is introduced, an algorithm for finding dynamic logical regularities is proposed, and a prediction method is developed.
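As an illustration of the core definition, here is a minimal sketch of a logical regularity as a conjunction of one-place interval predicates, checked against reference objects. The features, intervals, and data are hypothetical, and the optimality criteria and integer-programming search from the paper are not reproduced.

```python
# A "logical regularity" as a conjunction of interval predicates
# a_j <= x_j <= b_j over selected features.

def regularity(intervals):
    """intervals: dict {feature_index: (low, high)}; returns a predicate."""
    def holds(x):
        return all(lo <= x[j] <= hi for j, (lo, hi) in intervals.items())
    return holds

# Hypothetical reference objects (feature vectors) of two classes.
class0 = [(1.0, 0.2), (1.2, 0.3), (0.9, 0.25)]
class1 = [(2.5, 0.8), (2.7, 0.9)]

r = regularity({0: (0.8, 1.3), 1: (0.1, 0.4)})
assert all(r(x) for x in class0)       # true on the class-0 references
assert not any(r(x) for x in class1)   # false on the other class
```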

4.
From a logical viewpoint, "object" is never defined, even by a negative definition. This paper is a theoretical contribution on objects using a new constructivist logical approach, the Logic of Determination of Objects, founded on a basic operation called determination. This new logic takes into account cognitive problems such as the inheritance of properties by non-typical occurrences or by indeterminate atypical objects, in opposition to prototypes, which are typical, completely determinate objects. We show how extensional classes, intensions, more and less determined objects, more or less typical representatives of a concept, and prototypes are defined and organized, using a determination operation that constructs a class of indeterminate objects from an object representation of a concept called the typical object.

5.
This study explored the use of student-constructed concept maps in conjunction with written interpretive essays as an additional method of assessment in three undergraduate mathematics courses. The primary objectives of this study were to evaluate the benefits of using concept maps and written essays to assess the “connectedness” of students' knowledge; to measure the correlation between students' scores on the concept maps and written essays, course exams, and final grade; and to document students' perception of the effect of this approach on their mathematical knowledge. Results indicated that concept maps, when combined with written essays, are viable tools for assessing students' organization of mathematical knowledge. In addition, students perceive this approach as enhancing their mathematical knowledge.

6.
This study examined students' accuracy of measurement estimation for linear distances, different units of measure, and task context, as well as the relationship between estimation accuracy and logical thinking. Middle school students completed a series of tasks that included estimating the length of various objects in different contexts, and completed a test of logical thinking ability. Results showed that the students were not able to give accurate estimates of the lengths of familiar objects. Students were also less accurate when estimating in metric units than in English or novel units. Estimation accuracy depended on the task context: there were significant differences in estimation accuracy for two- versus three-dimensional estimation tasks, but no significant differences for estimating objects with different orientations or embedded objects. For the tasks requiring the students to estimate in English units, the embedded task and the three-dimensional tasks were correlated with logical thinking. For estimation tasks with novel units, the three-dimensional and two-dimensional tasks were significantly correlated with logical thinking. "In order to interact effectively with our environment it is essential to possess an intuitive grasp of both dimension and scale and to be able to manipulate such information. Estimation, approximating and measuring are all components of such intuition" (Forrester, Latham, & Shire, 1990, p. 283).

7.
An interpretation of logical connectives as operations on sets of binary strings is considered; the complexity of a set is defined as the minimum of the Kolmogorov complexities of its elements. It is readily seen that the complexity of a set obtained by applying logical operations does not exceed the complexity of the conjunction of their arguments (up to an additive constant). In this paper, it is shown that the complexity of the set obtained by a formula Φ is small (bounded by a constant) if Φ is deducible in the logic of weak excluded middle, and attains the specified upper bound otherwise.
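The two statements above can be written compactly; the notation below (plain complexity C, sets A_1, ..., A_n) is assumed for illustration rather than taken from the paper.

```latex
% Complexity of a set A of binary strings (notation assumed):
\[ C(A) \;=\; \min_{x \in A} C(x). \]
% The upper bound referred to above, for sets A_1,\dots,A_n and a
% formula \Phi read as an operation on sets:
\[ C\bigl(\Phi(A_1,\dots,A_n)\bigr) \;\le\; C(A_1 \wedge \dots \wedge A_n) + O(1). \]
```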

8.
9.
We reconsider some classical natural semantics of integers (namely, iterators of functions, cardinals of sets, and indexes of equivalence relations) from the perspective of Kolmogorov complexity. To each such semantics one can attach a simple representation of integers, which we suitably effectivize in order to develop an associated Kolmogorov theory. Such effectivizations are particular instances of a general notion of “self-enumerated system” that we introduce in this paper. Our main result asserts that, with such effectivizations, Kolmogorov theory allows one to quantitatively distinguish the underlying semantics. We characterize the families obtained by such effectivizations and prove that the associated Kolmogorov complexities constitute a hierarchy which coincides with that of Kolmogorov complexities defined via jump oracles and/or infinite computations (cf. [6]). This contrasts with the well-known fact that the usual Kolmogorov complexity does not depend (up to a constant) on the chosen arithmetic representation of integers, be it in any base n ≥ 2 or in unary. From a conceptual point of view, our result can also be seen as a means of measuring the degree of abstraction of these diverse semantics. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
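A sketch of the complexity attached to an effectivized representation, under the assumption that a self-enumerated system is, roughly, a partial map F from binary programs onto the represented domain; the notation is illustrative.

```latex
% A self-enumerated system is (roughly) a partial map F from binary
% programs onto the represented domain X; the associated complexity is
\[ K_F(x) \;=\; \min \{\, |p| \;:\; p \in \operatorname{dom}(F),\ F(p) = x \,\}, \]
% with K_F(x) = +\infty when no such program exists. Distinct semantics
% of integers induce distinct maps F, and hence comparable but
% genuinely different complexities K_F -- which is what lets the theory
% quantitatively distinguish the semantics.
```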

10.
Direct-search algorithms form one of the main classes of algorithms for smooth unconstrained derivative-free optimization, due to their simplicity and their well-established convergence results. They proceed by iteratively looking for improvement along some vectors or directions. In the presence of smoothness, first-order global convergence comes from the ability of the vectors to approximate the steepest descent direction, which can be quantified by a first-order criticality (cosine) measure. Using a set of vectors with a positive cosine measure, together with a sufficient decrease condition for accepting new iterates, leads to a convergence result as well as a worst-case complexity bound. In this paper, we present a second-order study of a general class of direct-search methods. We start by proving a weak second-order convergence result related to a criticality measure defined along the directions used throughout the iterations. Extensions of this result to a true second-order optimality result are discussed, one possibility being a method using approximate Hessian eigenvectors as directions (which is proved to be truly second-order globally convergent). Guaranteeing such convergence numerically can be rather expensive, as the worst-case complexity analysis provided in this paper indicates, but turns out to be appropriate for some pathological examples.
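To make the ingredients concrete, here is a minimal directional direct-search sketch in which the polling set is a positive spanning set (hence has positive cosine measure) and new iterates must pass a sufficient-decrease test. The forcing function c·α² and all parameter choices are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def direct_search(f, x, alpha=1.0, c=1e-4, tol=1e-8, max_iter=10_000):
    n = len(x)
    D = np.vstack([np.eye(n), -np.eye(n)])  # positive spanning set: cm(D) > 0
    fx = f(x)
    for _ in range(max_iter):
        if alpha < tol:
            break
        for d in D:                           # poll step
            trial = x + alpha * d
            ft = f(trial)
            if ft < fx - c * alpha ** 2:      # sufficient decrease test
                x, fx = trial, ft
                alpha *= 2.0                  # expand after a successful poll
                break
        else:
            alpha *= 0.5                      # contract after an unsuccessful poll
    return x

# Example: minimise a smooth quadratic; converges to (1, -0.5).
print(direct_search(lambda z: (z[0] - 1) ** 2 + 2 * (z[1] + 0.5) ** 2,
                    np.array([5.0, 5.0])))
```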

11.
In this work, we are motivated by the observation that previous considerations of appropriate complexity measures have not directly addressed the fundamental issue that the complexity of any particular matter or thing has a significant subjective component, in which the degree of complexity depends on the available frames of reference. Any attempt to remove subjectivity from a suitable measure therefore fails to address a very significant aspect of complexity. Conversely, there has been justifiable apprehension toward purely subjective complexity measures, simply because they are not verifiable if the frame of reference being applied is itself both complex and subjective. We address this issue by introducing the concept of subjective simplicity: although a justifiable and verifiable value of subjective complexity may be difficult to assign directly, it is possible to identify in a given context what is “simple” and, from that reference, determine subjective complexity as the distance from simple. We then propose a generalized complexity measure that is applicable to any domain, and provide some examples of how the framework can be applied to engineered systems. © 2016 Wiley Periodicals, Inc. Complexity 21: 533–546, 2016
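One concrete way to operationalize "distance from simple" is a compression-based distance; this is our illustration, not the paper's generalized measure.

```python
import zlib

def c(s: bytes) -> int:
    """Approximate description length via zlib compression."""
    return len(zlib.compress(s, 9))

def distance_from_simple(x: bytes, simple: bytes) -> float:
    """Normalized compression distance between x and a reference
    'simple' object -- one way to realise 'complexity as distance
    from simple'. An illustration, not the paper's measure."""
    cx, cs, cxs = c(x), c(simple), c(simple + x)
    return (cxs - min(cx, cs)) / max(cx, cs)

simple = b"a" * 32
print(distance_from_simple(b"a" * 32, simple))              # near 0: still simple
print(distance_from_simple(bytes(range(256)) * 4, simple))  # closer to 1
```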

12.
13.
We use high-frequency data from the Nasdaq exchange to build a measure of volume imbalance in the limit order (LO) book. We show that our measure is a good predictor of the sign of the next market order (MO), i.e., buy or sell, and also helps to predict price changes immediately after the arrival of an MO. Based on these empirical findings, we introduce and calibrate a Markov chain-modulated pure jump model of price, spread, LO and MO arrivals, and volume imbalance. As an application of the model, we pose and solve a stochastic control problem for an agent who maximizes terminal wealth, subject to inventory penalties, by executing trades using LOs. We use in-sample data (January to June 2014) to calibrate the model to 11 equities traded on the Nasdaq exchange and out-of-sample data (July to December 2014) to test the performance of the strategy. We show that introducing our volume imbalance measure into the optimization problem considerably boosts the profits of the strategy: profits increase because employing the imbalance measure reduces adverse selection costs and positions LOs in the book to take advantage of favourable price movements.
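A common top-of-book definition of volume imbalance is sketched below; the paper's exact construction (depth levels, weighting, averaging windows) may differ, so treat this as an assumption-laden illustration.

```python
# Top-of-book volume imbalance rho = (V_bid - V_ask) / (V_bid + V_ask),
# which lies in [-1, 1].

def volume_imbalance(bid_volume: float, ask_volume: float) -> float:
    return (bid_volume - ask_volume) / (bid_volume + ask_volume)

# rho near +1: buy pressure, so the next market order is more likely a
# buy; rho near -1: sell pressure.
print(volume_imbalance(bid_volume=900, ask_volume=100))  # 0.8
```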

14.
In this paper, we illustrate a method (called the ECO method) for enumerating some classes of combinatorial objects. The basic idea of this method is the following: by means of an operator that performs a "local expansion" on the objects, we give recursive constructions of these classes. We use these constructions to deduce new functional equations satisfied by the classes' generating functions. By solving the functional equations, we enumerate the combinatorial objects according to various parameters. We show some applications of the method to classical combinatorial objects such as trees, paths, polyominoes, and permutations.
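The counting side of a local-expansion operator can be captured by a succession rule. The sketch below uses the classical rule with axiom (2) and production (k) → (2)(3)···(k+1), a textbook ECO example that generates the Catalan numbers; it is not necessarily one of this paper's constructions.

```python
from collections import Counter

def eco_level_sizes(axiom: int, levels: int):
    """Number of objects at each level of the generating tree for the
    succession rule: axiom (k0), production (k) -> (2)(3)...(k+1)."""
    sizes, labels = [], Counter({axiom: 1})
    for _ in range(levels):
        sizes.append(sum(labels.values()))
        nxt = Counter()
        for k, count in labels.items():
            for child in range(2, k + 2):  # (k) -> (2)(3)...(k+1)
                nxt[child] += count
        labels = nxt
    return sizes

print(eco_level_sizes(2, 6))  # [1, 2, 5, 14, 42, 132] -- Catalan numbers
```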

15.
In this paper, several methods for measuring similarity between objects are presented and their properties reviewed. The study proposes a new method based on a genetic algorithm to reduce the time complexity of finding the n most similar objects among a huge number of objects. This method is tested on two applications: the first aims at finding the most similar residents in a condominium, and the second deals with finding the n most similar groups of text documents in a large dataset. The simulation results show that the proposed method can efficiently improve the order of the time complexity, especially for the second application.
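A minimal sketch of the genetic-algorithm idea: individuals are candidate subsets of size n, and fitness is the sum of pairwise similarities within the subset. The representation, operators, and parameters below are illustrative guesses, not the method of the paper.

```python
import random

def fitness(subset, sim):
    """Sum of pairwise similarities within the chosen subset."""
    idx = list(subset)
    return sum(sim[i][j] for a, i in enumerate(idx) for j in idx[a + 1:])

def ga_most_similar(sim, n, pop_size=30, generations=200, mut_rate=0.3):
    m = len(sim)
    pop = [frozenset(random.sample(range(m), n)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, sim), reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = set(random.sample(list(a | b), n))  # crossover: resample union
            if random.random() < mut_rate:              # mutation: swap one member
                child.remove(random.choice(list(child)))
            while len(child) < n:                       # repair to size n
                child.add(random.randrange(m))
            children.append(frozenset(child))
        pop = survivors + children
    return max(pop, key=lambda s: fitness(s, sim))

# Tiny example: 6 objects, similarity 1 within a group, 0 across groups.
groups = [0, 0, 0, 1, 1, 1]
sim = [[1.0 if g1 == g2 else 0.0 for g2 in groups] for g1 in groups]
print(sorted(ga_most_similar(sim, 3)))  # typically [0, 1, 2] or [3, 4, 5]
```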

16.
Project managers readily adopted the concept of the critical path as an aid to identifying those activities most worthy of their attention and possible action. However, current project management packages do not offer a useful measure of criticality in resource-constrained projects. A revised method of calculating resource-constrained float is presented, together with a discussion of its use in project management. While resource-constrained criticality appears to be a practical and useful tool in the analysis of project networks, care is needed in its interpretation, as any calculation of such float is conditional on the particular resource allocation employed. A number of other measures of an activity's importance in a network are described and compared in an application to an aircraft development project. A quantitative comparison of the measures is developed, based on a simulation of management identifying the key activities and directing their control efforts. Resource-constrained float appears to be a useful single measure of an activity's importance, encapsulating several useful pieces of management information, although there are some circumstances in which other measures might be preferred.
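For context, the classical CPM float that resource-constrained float revises is computed with a forward and a backward pass; as the paper cautions, under resource constraints any such float is only valid relative to one particular allocation. The network data below are made up.

```python
# Classical CPM total float (the quantity that resource-constrained
# float generalises). Activities, durations, and precedences are
# hypothetical.
durations = {"A": 3, "B": 2, "C": 4, "D": 1}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]  # a topological order

early = {}
for t in order:                                   # forward pass
    early[t] = max((early[p] + durations[p] for p in preds[t]), default=0)
makespan = max(early[t] + durations[t] for t in order)

late = {t: makespan - durations[t] for t in order}
for t in reversed(order):                         # backward pass
    succ = [s for s in order if t in preds[s]]
    if succ:
        late[t] = min(late[s] for s in succ) - durations[t]

for t in order:
    print(t, "total float =", late[t] - early[t])  # 0 on the critical path A-C-D
```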

17.
The balance between symmetry and randomness as a property of networks can be viewed as a kind of “complexity.” We use here our previously defined “set complexity” measure (Galas et al., IEEE Trans Inf Theory 2010, 56), which was introduced to approach the problem of defining biological information, in the mathematical analysis of networks. This information-theoretic measure is used to explore the complexity of binary, undirected graphs. The complexities, Ψ, of some specific classes of graphs can be calculated in closed form. Some simple graphs have a complexity value of zero, but graphs with significant values of Ψ are rare. We find that the most complex of the simple graphs are the complete bipartite graphs (CBGs). In this simple case, the complexity Ψ is a strong function of the sizes of the two node sets in these graphs. We also find the binary graphs of maximum Ψ; these graphs are distinct from, but similar to, CBGs. Finally, we explore directed and stochastic processes for growing graphs (hill-climbing and random duplication, respectively) and find that node duplication and partial node duplication conserve interesting graph properties. Partial duplication can grow extremely complex graphs, while full node duplication cannot. By examining the eigenvalue spectrum of the graph Laplacian we characterize the symmetry of the graphs and demonstrate that, in general, breaking specific symmetries of the binary graphs increases the set-based complexity, Ψ. The implications of these results for more complex, multiparameter graphs, and for physical and biological networks and the processes of network evolution, are discussed. © 2011 Wiley Periodicals, Inc. Complexity 17: 51–64, 2011
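The Laplacian-spectrum diagnostic mentioned at the end can be reproduced in a few lines. The construction of K_{2,3} below illustrates the symmetry structure of CBGs; it is not a computation of Ψ itself, whose closed form is not restated here.

```python
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues of the graph Laplacian L = D - A, sorted ascending."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))

# Complete bipartite graph K_{m,n}: highly symmetric, so the spectrum
# has few distinct values (0, m, n, and m+n). Breaking the symmetry
# spreads the spectrum out.
m, n = 2, 3
adj = np.zeros((m + n, m + n))
adj[:m, m:] = 1
adj[m:, :m] = 1
print(laplacian_spectrum(adj))  # [0. 2. 2. 3. 5.] for K_{2,3}
```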

18.
We previously introduced the concept of “set-complexity,” based on a context-dependent measure of information, and used this concept to describe the complexity of gene interaction networks. In a previous paper of this series we analyzed the set-complexity of binary graphs. Here, we extend this analysis to graphs with multicolored edges, which more closely match biological structures like gene interaction networks. All graphs that are highly complex by this measure exhibit a modular structure. A principal result of this work is that, for the most complex graphs of a given size, the number of edge colors is equal to the number of “modules” of the graph. Complete multipartite graphs (CMGs) are defined and analyzed, and the relation between the complexity and structure of these graphs is examined in detail. We establish that the mutual information between any two nodes in a CMG can be fully expressed in terms of entropy, and present an explicit expression for the set complexity of CMGs (Theorem 3). An algorithm for generating highly complex graphs from CMGs is described. We establish several theorems relating these concepts and connecting complex graphs with a variety of practical network properties. In exploring the relation between symmetry and complexity we use the idea of a similarity matrix and its spectrum for highly complex graphs. © 2012 Wiley Periodicals, Inc. Complexity, 2012
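The entropy identity behind the claim preceding Theorem 3 is the standard decomposition of mutual information; how the node variables are derived from edge colors is our assumption, flagged as such.

```latex
% Standard decomposition of mutual information into entropies:
\[ I(X_i ; X_j) \;=\; H(X_i) + H(X_j) - H(X_i, X_j), \]
% where, as an assumption for illustration only, X_i denotes the
% empirical distribution of edge colors incident to node i.
```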

19.
Current comparative studies such as PISA assess individual achievement in an attempt to grasp the concept of competence, and working with mathematics is then made concrete in the area of application. Mathematical work is thereby understood as a process of modelling: first, a mathematical model is derived from a real problem; then the mathematical model is solved; finally, the mathematical solution is interpreted with respect to reality and validated against the original problem. During this cycle the main focus is on the transitions between reality and the mathematical level. Mental objects are necessary for these transitions; in German didactics they are described by the concept of ‘Grundvorstellungen’. In delimitation from related educational constructs, ‘Grundvorstellungen’ can be described as mental models of a mathematical concept.

20.
We consider the logical system of Boolean expressions built on the single connective of implication and on positive literals. Assuming all expressions of a given size to be equally likely, we prove that we can define a probability distribution on the set of Boolean functions expressible in this system. We then show how to approximate the probability of a function f as the number of variables grows to infinity, and that this asymptotic probability has a simple expression in terms of the complexity of f. We also prove that most expressions computing any given function in this system are “simple,” in a sense that we make precise. The probability of all read-once functions of a given complexity is also evaluated in this model. Finally, using the same techniques, the relation between the probability of a function and its complexity is also obtained when random expressions are drawn according to a critical branching process. © 2011 Wiley Periodicals, Inc. Random Struct. Alg., 2011
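The finite-size version of this distribution is easy to tabulate for small numbers of variables. The sketch below counts implication-only expressions over positive literals by size (taking size to be the number of literal leaves, an assumption) and estimates the probability of the constant-true function; it illustrates the model, not the paper's asymptotic analysis.

```python
from collections import Counter
from itertools import product

K = 2                        # number of variables (illustrative)
FULL = (1 << (1 << K)) - 1   # truth-table bitmask of the tautology

# Truth table of each positive literal x_v as a bitmask over the 2^K
# assignments.
literals = []
for v in range(K):
    tt = 0
    for a, bits in enumerate(product([0, 1], repeat=K)):
        tt |= bits[v] << a
    literals.append(tt)

# counts[n][f] = number of size-n expressions computing function f,
# built by splitting each expression as (left -> right).
counts = {1: Counter(literals)}
for n in range(2, 8):
    c = Counter()
    for i in range(1, n):
        for f, cf in counts[i].items():
            for g, cg in counts[n - i].items():
                c[((~f) & FULL) | g] += cf * cg  # truth table of f -> g
    counts[n] = c

total = sum(counts[7].values())
print("P(tautology | size 7) =", counts[7][FULL] / total)
```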
