Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
An information-theoretic approach is applied to measuring flexibility in flexible manufacturing systems (FMSs). The general relation between flexibility and entropy is discussed. The entropy of a Markovian process is obtained and then applied to closed queueing network models of FMSs to discuss loading flexibility, which arises from the ability to regulate how frequently a part visits different workstations. The concept of operations entropy is introduced as a measure of operations flexibility, which arises from the ability to choose the workstation and the corresponding operations. The operations entropy is decomposed into entropies within and between operations and entropies within and between groups of operations. This measure is used to determine the next operation to be performed on a part by applying the principle of least reduction of flexibility. The present paper is an improved version of "On measurement of flexibility in flexible manufacturing systems: An information-theoretic approach", presented at the Second ORSA/TIMS Special Conference on Flexible Manufacturing Systems, held in Ann Arbor in August 1986.
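A minimal sketch of the entropy bookkeeping described above: the within/between-group split is the standard chain rule for Shannon entropy, and the visit frequencies and operation groups are hypothetical, not taken from the paper.

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def decomposed_entropy(groups):
    """Split total entropy into between-group and within-group parts.

    `groups` maps a group name to the visit probabilities of its
    operations; probabilities over all groups must sum to 1.
    """
    group_mass = {g: sum(ps) for g, ps in groups.items()}
    between = entropy(group_mass.values())
    within = sum(m * entropy([p / m for p in groups[g]])
                 for g, m in group_mass.items() if m > 0)
    return between, within

# Hypothetical visit frequencies of a part over two groups of operations.
groups = {"milling": [0.3, 0.2], "turning": [0.25, 0.25]}
between, within = decomposed_entropy(groups)
total = entropy([p for ps in groups.values() for p in ps])
assert abs(total - (between + within)) < 1e-9   # chain rule for entropy
print(f"total={total:.3f} bits, between={between:.3f}, within={within:.3f}")
```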

2.
Despite extensive study of manufacturing-system flexibility over the last two decades, a unified measurement approach has not emerged. To this end, we integrate two strands of machine-flexibility models from the literature, operational capability-based machine flexibility and time- and cost-based machine flexibility, and propose a generic model to measure machine flexibility under system uncertainty. Our approach incorporates part characteristics such as processing time and processing cost, the number of operations a machine can perform, and uncertainties in demand and machine-part assignment. The resulting framework is a two-stage model: a super-efficiency Data Envelopment Analysis (DEA) model followed by a flexibility model. The results show that the marginal system machine flexibility does not always increase with the number of operations a machine can perform, and that system machine flexibility depends on demand uncertainty.
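For the first stage, a rough illustration of an input-oriented super-efficiency DEA score (Andersen-Petersen style) computed with scipy. The machine inputs and outputs are invented, and the LP is a generic formulation that need not match the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y):
    """Input-oriented CCR super-efficiency scores.

    X: (n, m) inputs, Y: (n, s) outputs, one row per DMU (machine).
    The DMU under evaluation is excluded from its own reference set,
    so efficient units can score above 1.
    """
    n = X.shape[0]
    scores = []
    for k in range(n):
        others = [j for j in range(n) if j != k]
        # Decision variables: theta, then lambda_j for j != k.
        c = np.zeros(1 + len(others))
        c[0] = 1.0                                   # minimize theta
        # Inputs:  sum_j lam_j * x_j <= theta * x_k
        A_in = np.hstack([-X[k][:, None], X[others].T])
        # Outputs: sum_j lam_j * y_j >= y_k
        A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y[others].T])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.concatenate([np.zeros(X.shape[1]), -Y[k]])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (1 + len(others)))
        scores.append(res.x[0] if res.success else np.nan)  # nan: infeasible
    return np.array(scores)

# Hypothetical machines: inputs = (processing time, cost), output = ops count.
X = np.array([[4.0, 2.0], [3.0, 3.0], [5.0, 1.5]])
Y = np.array([[6.0], [5.0], [4.0]])
print(super_efficiency(X, Y))
```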

3.
徐宣国, 张凯, 苏翔, 刘开. 《运筹与管理》 (Operations Research and Management Science), 2015, 24(6): 272-280
In cloud manufacturing, the dynamic composition of service resources inevitably faces uncertainty in both the internal and external environment, and these uncertainties directly affect the execution cost, efficiency, and quality of the manufacturing cloud service composition. To effectively improve the flexibility of such compositions, their flexibility must be measured. Assuming that the service resources in a candidate cloud service set can substitute for one another in completing a task, albeit with different efficiencies, we establish a four-dimensional measurement method for manufacturing cloud service composition flexibility covering efficiency flexibility, redundancy flexibility, path flexibility, and task flexibility. Finally, the application of the method is analyzed through a concrete numerical example.

4.
The paper considers a discrete stochastic multiple-criteria decision-making problem, defined by a finite set of actions A, a set of attributes X, and a set E of evaluations of actions with respect to attributes. In the stochastic case, the evaluation of each action with respect to each attribute takes the form of a probability distribution, so comparing two actions means comparing two vectors of probability distributions. The paper proposes a new procedure for solving this problem based on three concepts: stochastic dominance, an interactive approach, and preference thresholds. The idea comes from the interactive multiple-objective goal programming approach: the set of actions is progressively reduced as the decision maker specifies additional requirements. At the beginning, the decision maker defines a preference threshold for each attribute. Then, at each iteration, the decision maker is confronted with the current set of actions; if a final choice can be made, the procedure ends, otherwise the decision maker is asked to specify an aspiration level. A didactic example illustrates the proposed technique.
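As a building block, a sketch of checking first-order stochastic dominance between two discrete attribute evaluations; the distributions are hypothetical, and the paper's procedure layers thresholds and interaction on top of such pairwise tests.

```python
import numpy as np

def fsd(values_a, probs_a, values_b, probs_b):
    """True if distribution A first-order stochastically dominates B.

    A dominates B when A's CDF lies at or below B's CDF everywhere
    and strictly below somewhere, i.e. A is never worse.
    """
    grid = np.union1d(values_a, values_b)
    cdf = lambda v, p, t: np.sum(np.asarray(p)[np.asarray(v) <= t])
    Fa = np.array([cdf(values_a, probs_a, t) for t in grid])
    Fb = np.array([cdf(values_b, probs_b, t) for t in grid])
    return bool(np.all(Fa <= Fb + 1e-12) and np.any(Fa < Fb - 1e-12))

# Hypothetical evaluations of two actions on one attribute.
print(fsd([2, 4, 6], [0.2, 0.3, 0.5],    # action A
          [1, 4, 6], [0.4, 0.3, 0.3]))   # action B -> True
```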

5.
For the job-shop scheduling problem with process-route flexibility, we propose a description method for process-route flexibility based on OR-subgraphs and sub-routes; the method is simple in form and allows OR-subgraphs to be nested to arbitrary depth. On this basis, a genetic-algorithm-based flexible process-route scheduling algorithm is designed, using a three-dimensional chromosome encoding built from a process-route code, a machine code, and a job-scheduling code. The process-route code and machine code are generated randomly from the maximum number of sub-routes and the maximum number of machines, respectively. The advantage of this encoding is that every chromosome represents a feasible solution, and simple crossover and mutation operators can be applied whose offspring are also feasible. Finally, experiments demonstrate the optimization capability of the algorithm.
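A schematic sketch of the three-part chromosome idea, under the assumption described above that route and machine genes are drawn below stated maxima so any chromosome decodes to a feasible choice; names and bounds are illustrative, not the authors' code.

```python
import random

def random_chromosome(num_jobs, max_subroutes, max_machines):
    """One three-part chromosome: a process-route gene and a machine
    gene per job (drawn below the stated maxima, so every chromosome
    decodes to a feasible choice), plus a job dispatching order."""
    route = [random.randrange(max_subroutes) for _ in range(num_jobs)]
    machine = [random.randrange(max_machines) for _ in range(num_jobs)]
    order = random.sample(range(num_jobs), num_jobs)
    return route, machine, order

# Uniform crossover on the route/machine parts keeps offspring feasible,
# since any gene value below the maximum remains decodable.
print(random_chromosome(num_jobs=4, max_subroutes=3, max_machines=2))
```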

6.
A copula entropy approach to correlation measurement at the country level
The entropy optimization approach has long been applied in finance, notably in market simulation, risk measurement, and financial asset pricing. In this paper, we propose copula entropy models with two and three variables to measure dependence in stock markets; they extend copula theory and are based on Jaynes's information criterion, and both are typically applied under non-Gaussian distribution assumptions. Compared with the linear correlation coefficient and mutual information, the strengths and advantages of the copula entropy approach are revealed and confirmed. We also propose an algorithm for computing the copula entropy numerically. Experimental data analysis at the country level, together with economic-circle theory in international economics, confirms the validity of the proposed approach: it captures non-linear correlation, multi-dimensional correlation, and correlation comparisons without common variables. We emphasize that correlation illustrates dependence, but dependence is not synonymous with correlation: copulas can capture special types of dependence, such as tail dependence and asymmetric dependence, that conventional probability distributions such as the normal and Student's t densities cannot.
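Since two-variable copula entropy equals the negative mutual information of the rank-transformed margins, a hedged estimator can be sketched as below; scikit-learn's kNN mutual-information estimator stands in for the paper's algorithm, which it need not match.

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.feature_selection import mutual_info_regression

def copula_entropy(x, y, n_neighbors=5):
    """Two-variable copula entropy estimate: rank-transform each margin
    to pseudo-uniforms, then estimate mutual information with a kNN
    estimator; copula entropy is minus the mutual information."""
    u = rankdata(x) / (len(x) + 1.0)    # empirical copula coordinates
    v = rankdata(y) / (len(y) + 1.0)
    mi = mutual_info_regression(u.reshape(-1, 1), v,
                                n_neighbors=n_neighbors)[0]
    return -mi

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.tanh(2 * x) + 0.3 * rng.normal(size=2000)   # nonlinear dependence
print(copula_entropy(x, y))                        # clearly negative
print(copula_entropy(x, rng.normal(size=2000)))    # approximately zero
```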

7.
In today's manufacturing industry, more than one performance criterion is considered for optimization, to varying degrees, simultaneously. To cope with such competitive environments, it is essential to develop appropriate multicriteria scheduling approaches. This paper considers the problem of scheduling n independent jobs on a single machine with due dates, with the objective of simultaneously minimizing three performance criteria: total weighted tardiness (TWT), maximum tardiness, and maximum earliness. In the single-machine scheduling literature, no previous studies have examined test problems with these criteria simultaneously. After positioning the problem within the relevant research field, we present a new heuristic algorithm for its solution. The algorithm, termed hybrid non-dominated sorting differential evolution (h-NSDE), extends the author's previous algorithm for the single-machine mono-criterion TWT problem and searches for Pareto-optimal solutions. To let the decision maker evaluate a greater number of alternative non-dominated solutions, three multiobjective optimization approaches are implemented and tested within h-NSDE: a weighted-sum approach, a fuzzy-measures approach that accounts for interaction among the criteria, and a Pareto-based approach. Experiments on existing benchmark data sets show the effect of these approaches on h-NSDE's performance. Moreover, comparisons between h-NSDE and popular multiobjective metaheuristics, including SPEA2 and NSGA-II, show clear superiority for h-NSDE in both solution quality and solution diversity.
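A minimal sketch of the Pareto (non-dominated) filtering that non-dominated sorting relies on, applied to hypothetical (TWT, maximum tardiness, maximum earliness) vectors; this is the generic dominance test, not the full h-NSDE algorithm.

```python
def dominates(a, b):
    """a Pareto-dominates b when it is no worse on every criterion and
    strictly better on at least one (all criteria are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of objective vectors."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical (TWT, Tmax, Emax) vectors for four candidate schedules.
sols = [(120, 15, 8), (125, 16, 9), (100, 20, 10), (150, 10, 5)]
print(pareto_front(sols))   # (125, 16, 9) is dominated by (120, 15, 8)
```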

8.
In the statistics and machine learning communities, the last fifteen years have witnessed a surge of high-dimensional models backed by penalized methods and other state-of-the-art variable selection techniques. The high-dimensional models we refer to differ from conventional models in that the number of parameters p and the number of significant parameters s are both allowed to grow with the sample size T. When field-specific knowledge is preliminary, and in view of the recent and growing abundance of data from genetics, finance, on-line social networks, and elsewhere, such (s, T, p)-triply diverging models enjoy great modeling flexibility and can serve as a data-guided first step of investigation. However, model selection consistency and other theoretical properties had been addressed only for independent data, leaving time series largely uncovered. For a simple linear regression model endowed with a weakly dependent sequence, this paper applies a penalized least squares (PLS) approach. Under regularity conditions, we show sign consistency, derive a finite-sample bound holding with high probability for the estimation error, and prove that the PLS estimate is consistent in the L2 norm at rate \(\sqrt{s\log s/T}\).
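A toy check of the flavor of the result, not the paper's estimator or proof: a lasso-type PLS fit on a weakly dependent (AR(1)-in-time) design with p much larger than T, inspecting the recovered sign pattern. All parameters are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
T, p, s = 200, 500, 5
Z = np.empty((T, p))
Z[0] = rng.normal(size=p)
for t in range(1, T):                    # weakly dependent design: AR(1)
    Z[t] = 0.5 * Z[t - 1] + rng.normal(size=p)
beta = np.zeros(p)
beta[:s] = [2, -2, 1.5, -1.5, 1]         # sparse true signal
y = Z @ beta + rng.normal(size=T)

fit = Lasso(alpha=0.1).fit(Z, y)
print("true signs  :", np.sign(beta[:s]))
print("fitted signs:", np.sign(fit.coef_[:s]))
print("nonzeros outside true support:", int(np.sum(fit.coef_[s:] != 0)))
```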

9.
Density-dependent effects, whether positive or negative, can strongly influence the population dynamics of species by modifying their per-capita growth rates. An important class of such density-dependent factors is the so-called Allee effects, widely studied in theoretical and field population biology. In this study, we analyze two discrete single-population models with overcompensating density dependence and with Allee effects due to predator saturation and mating limitation, using symbolic dynamics theory. We focus on the scenarios of persistence and bistability, in which the species dynamics can be chaotic. For the chaotic regimes, we compute the topological entropy as well as the Lyapunov exponent under key ecological parameters and different initial conditions. We also provide co-dimension-two bifurcation diagrams for both systems, computing the periods of the orbits and characterizing the period-ordering routes toward the boundary crisis responsible for species extinction via transient chaos. Our results show that the topological entropy increases as we approach the parametric regions involving transient chaos, being maximal when the full shift on the symbols R and L occurs and the system enters the essential-extinction regime. Finally, we characterize, analytically using a complex-variable approach and numerically, the inverse square-root scaling law arising in the vicinity of the saddle-node bifurcation responsible for the extinction scenario in the two models. The results are discussed in the context of species fragility under different Allee effects.
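A numerical sketch of one ingredient, the Lyapunov exponent of a one-dimensional overcompensatory map with a mate-limitation Allee term; the map and parameters are illustrative stand-ins, not necessarily the paper's exact models.

```python
import numpy as np

def f(x, r=3.0, theta=0.2):
    """Ricker-type map with a mate-limitation Allee factor x/(x+theta):
    overcompensation via exp(r(1-x)), Allee effect via the saturating
    factor (an illustrative model, not necessarily the paper's)."""
    return x * np.exp(r * (1.0 - x)) * x / (x + theta)

def lyapunov(x0, n=20000, burn=1000, h=1e-7):
    """Largest Lyapunov exponent of the 1-D map: average of
    log|f'(x_t)| along an orbit, with f' by central differences."""
    x, acc = float(x0), 0.0
    for t in range(n):
        deriv = abs((f(x + h) - f(x - h)) / (2 * h))
        if t >= burn:
            acc += np.log(max(deriv, 1e-300))
        x = f(x)
        if x < 1e-12 or not np.isfinite(x):
            return -np.inf     # orbit collapsed: (essential) extinction
    return acc / (n - burn)

# Positive exponent signals chaos; -inf flags extinction after a
# possibly chaotic transient, as in the boundary-crisis scenario.
print(lyapunov(0.8))
```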

10.
The paper describes a methodology for developing autoregressive moving average (ARMA) models to represent the workpiece roundness error in the machine taper turning process. The method uses a two-stage approach to determine the AR and MA parameters of the ARMA model: it first calculates the parameters of the equivalent autoregressive model of the process, and then derives the AR and MA parameters from them. Akaike's Information Criterion (AIC) is used to find appropriate orders m and n for the AR and MA polynomials, respectively. Recursive algorithms are developed for on-line implementation on a laboratory turning machine. The effectiveness of ARMA models in error forecasting is evaluated using three time series obtained from the experimental machine. Analysis shows that ARMA(3,2) with a forgetting factor of 0.95 gives acceptable results for this lathe.
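An offline stand-in for the order-selection step: statsmodels' ARMA fit scored by AIC, not the paper's recursive, forgetting-factor implementation. The series is synthetic, constructed to carry both AR and MA structure.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
# Hypothetical stand-in for a roundness-error series.
e = rng.normal(size=503)
y = np.convolve(e, [1.0, 0.4, -0.3], mode="valid")  # MA structure
for t in range(2, len(y)):
    y[t] += 0.6 * y[t - 1] - 0.2 * y[t - 2]          # AR structure

# Grid-search ARMA(m, n) orders by AIC (d = 0: no differencing).
best = min(((ARIMA(y, order=(m, 0, n)).fit().aic, m, n)
            for m in range(1, 4) for n in range(0, 3)),
           key=lambda r: r[0])
print("best (AIC, m, n):", best)
```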

11.
This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, apply smoothing; and fourth, solve using an optimal first-order method. A merit of this approach is its flexibility: for example, all compressed sensing problems can be solved via it, including models with objective functionals such as the total-variation norm, ||Wx||_1 with arbitrary W, or a combination thereof. In addition, the paper introduces several technical contributions, such as a novel continuation scheme and a novel approach for controlling the step size, and applies results showing that the smoothed and unsmoothed problems are sometimes formally equivalent. Combined with the framework, these lead to novel, stable, and computationally efficient algorithms. For instance, the general implementation is competitive with state-of-the-art methods for intensively studied problems such as the LASSO. Further, numerical experiments show that one can solve the Dantzig selector problem, for which no efficient large-scale solvers previously existed, in a few hundred iterations. Finally, the paper is accompanied by a software release; rather than a single monolithic solver, the software is a suite of programs and routines designed as building blocks for constructing complete algorithms.
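As one example of the optimal first-order machinery such frameworks build on, a self-contained FISTA-style proximal-gradient sketch for the LASSO; this is the generic accelerated method, and the paper's solvers, continuation, and step-size control are more elaborate.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(A, b, lam, iters=500):
    """Accelerated proximal gradient for min 0.5||Ax-b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    tk = 1.0
    for _ in range(iters):
        x_new = soft_threshold(z - A.T @ (A @ z - b) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * tk * tk)) / 2
        z = x_new + (tk - 1) / t_new * (x_new - x)   # momentum step
        x, tk = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(80, 200))
x0 = np.zeros(200)
x0[:4] = [3, -2, 1.5, -1]
b = A @ x0 + 0.01 * rng.normal(size=80)
print(np.nonzero(fista_lasso(A, b, lam=1.0))[0][:10])   # sparse support
```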

12.
Attribute reduction is one of the key issues in rough set theory. Many heuristic attribute reduction algorithms, such as positive-region reduction, information entropy reduction, and discernibility matrix reduction, have been proposed. However, these methods are usually computationally time-consuming for large data. Moreover, a single attribute-significance measure cannot discriminate among several attributes that share the same greatest value. To overcome these shortcomings, we first introduce a counting-sort algorithm with time complexity O(|C||U|) for handling redundant and inconsistent data in a decision table and computing positive regions and core attributes, where |C| and |U| denote the cardinalities of the condition attribute set and the object set, respectively. Then, hybrid attribute measures are constructed that reflect the significance of an attribute in both positive regions and boundary regions. Finally, hybrid approaches to attribute reduction based on the indiscernibility and discernibility relations are proposed, with time complexity no more than max(O(|C|²|U/C|), O(|C||U|)), where |U/C| denotes the cardinality of the set of equivalence classes U/C. Experimental results show that the proposed hybrid algorithms are effective and feasible for large data.
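A minimal sketch of the linear-time idea: group objects by their condition-attribute tuples in one pass (hashing here plays the role of the counting sort, one pass over U per attribute, O(|C||U|) overall) and keep the decision-consistent classes as the positive region. The toy table is invented.

```python
from collections import defaultdict

def positive_region(table, condition_cols, decision_col):
    """Objects whose condition-equivalence class is decision-consistent."""
    classes = defaultdict(list)
    for i, row in enumerate(table):
        key = tuple(row[c] for c in condition_cols)
        classes[key].append(i)
    pos = []
    for members in classes.values():
        decisions = {table[i][decision_col] for i in members}
        if len(decisions) == 1:          # consistent equivalence class
            pos.extend(members)
    return sorted(pos)

# Hypothetical decision table: two condition attributes + one decision.
table = [(0, 1, 'y'), (0, 1, 'y'), (0, 1, 'n'), (1, 0, 'y'), (1, 1, 'n')]
print(positive_region(table, condition_cols=[0, 1], decision_col=2))
# rows 0-2 share conditions but disagree on the decision -> excluded
```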

13.
Traditionally, part dispatching has been done using static rules, rules that fail to take advantage of the dynamic nature of today's manufacturing systems. In modern manufacturing systems, machines carry multiple tools, so parts have the option of being machined on more than one machine. This flexibility, termed routing flexibility in the literature, opens up new possibilities for shop-floor planners in the scheduling and dispatching of parts.

14.
The multi-echelon inventory optimization literature treats the stochastic-service (SS) and guaranteed-service (GS) approaches as mutually exclusive frameworks. While the GS approach relies on flexibility measures at the stages to deal with stockouts, the SS approach relies on safety stock alone. Within a supply chain, flexibility levels may differ between stages, making some stages appropriate candidates for one approach and some for the other; the existing approaches, however, require selecting a single framework for the entire supply chain rather than making a stage-wise choice. We develop an integrated hybrid-service (HS) approach that endogenously determines the overall cost-optimal approach for each stage and computes the required inventory levels. We present a dynamic programming optimization algorithm for serial supply chains that partitions the entire system into subchains of different types. A numerical study shows that, besides implicitly choosing the better of the two pure frameworks, whose cost differences can be considerable, the HS approach enables additional pipeline and on-hand stock cost savings. We further identify drivers of the preferability of the HS approach.
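A schematic dynamic program over a serial chain of n stages, in the spirit of the partitioning described above; the subchain cost functions are placeholders, not the paper's SS/GS cost evaluations.

```python
def optimal_partition(n, subchain_cost):
    """Partition stages 0..n-1 into consecutive subchains [i, j) so the
    summed costs are minimal; subchain_cost(i, j) is a placeholder for
    evaluating one subchain under its best service framework."""
    best = [0.0] + [float("inf")] * n
    cut = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + subchain_cost(i, j)
            if c < best[j]:
                best[j], cut[j] = c, i
    parts, j = [], n                 # backtrack the chosen breakpoints
    while j > 0:
        parts.append((cut[j], j))
        j = cut[j]
    return best[n], parts[::-1]

# Hypothetical: each subchain is costed as the cheaper of an SS-style
# and a GS-style evaluation, mimicking the stage-wise framework choice.
cost_ss = lambda i, j: 3.0 + 1.5 * (j - i)
cost_gs = lambda i, j: 1.0 + 2.0 * (j - i)
print(optimal_partition(5, lambda i, j: min(cost_ss(i, j), cost_gs(i, j))))
```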

15.
Let A be a set of actions evaluated on a set of attributes. Two kinds of evaluations are considered in this paper: deterministic or stochastic with respect to each attribute. Multi-attribute stochastic dominance for a reduced number of attributes (MSDr) is suggested to model preferences in this kind of problem. The case of mixed data, where attributes are of different natures, is not well covered in the literature, although it is essential from a practical point of view. To apply MSDr, the subset R of attributes for which the approximation of the global preference is valid should be known. The theory of rough sets answers this question by determining a minimal subset of attributes that enables the same classification of objects as the whole attribute set; in our approach, these objects are pairs of actions. To represent preferential information, we use a pairwise comparison table. This table is built for a subset B ⊆ A described by stochastic dominance (SD) relations on particular attributes and a total order on the decision attribute given by the decision maker (DM). Applying the rough set approach to the analysis of this subset of preference relations yields a set of decision rules, which are then applied to the set A \ B of potential actions. The rough-set search for reductions of the attribute set is what makes operating with MSDr possible.

16.
We investigate the problem of scheduling n weighted jobs on m parallel machines with availability constraints. We consider two models of availability constraints: the preventive model, in which the unavailability is due to preventive machine maintenance, and the fixed-job model, in which the unavailability is due to a priori assignment of some of the n jobs to certain machines at certain times. Both models have applications such as turnaround scheduling and overlay computing. In both models, the objective is to minimize the total weighted completion time; we assume that m is a constant and that jobs are non-resumable. For the preventive model, it has been shown that there is no approximation algorithm if all machines have unavailable intervals, even if w_i = p_i for all jobs. In this paper, we assume that one machine is permanently available and that each job's processing time equals its weight. We develop the first polynomial-time approximation scheme (PTAS) for the case of a constant number of unavailable intervals. One main feature of our algorithm is that the classification of jobs as large or small is made with respect to each individual interval rather than fixed globally. This classification allows us (1) to enumerate the assignments of large jobs efficiently, and (2) to move small jobs around without increasing the objective value too much, from which the PTAS follows. Next, we show that there is no fully polynomial-time approximation scheme (FPTAS) in this case unless P = NP. For the fixed-job model, it has been shown that with arbitrary job weights there is no constant-factor approximation for a single machine with two fixed jobs, or for two machines with one fixed job on each machine, unless P = NP. In this paper, we assume that each job's weight equals its processing time and show that the PTAS for the preventive model extends to this problem when the number of fixed jobs and the number of machines are both constants.
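For concreteness, a small evaluator of the objective under availability constraints: total weighted completion time with non-resumable jobs pushed past downtime. The instance is hypothetical, and this is only the objective function, not the PTAS itself.

```python
def total_weighted_completion(downtime, schedule):
    """Total weighted completion time of a fixed schedule.

    downtime[m]: sorted (start, end) unavailable intervals on machine m.
    schedule[m]: ordered (p, w) jobs on machine m; jobs are non-resumable,
    so a job that would overlap an interval starts after it instead.
    """
    total = 0.0
    for m, jobs in schedule.items():
        t = 0.0
        for p, w in jobs:
            for s, e in downtime.get(m, []):
                if t < e and t + p > s:   # job would collide: push past
                    t = e
            total += w * (t + p)
            t += p
    return total

schedule = {0: [(2, 2), (3, 3)], 1: [(4, 4)]}   # (processing time, weight)
downtime = {1: [(2, 5)]}                        # machine 1 down on [2, 5)
print(total_weighted_completion(downtime, schedule))   # -> 55.0
```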

17.
This study addresses a single-machine scheduling problem with periodic maintenance, where the machine is stopped periodically for maintenance of constant duration w during the scheduling horizon. The maintenance period [u, v] is assumed to have been arranged in advance, and w is assumed not to exceed its length (i.e., w ≤ v − u); u (respectively v) is the earliest (latest) time at which the machine starts (stops) its maintenance. The objective is to minimize the makespan. Two mixed binary integer programming (BIP) models are provided for deriving the optimal solution. Additionally, an efficient heuristic is proposed for finding near-optimal solutions to large-sized problems. Finally, computational results demonstrate the efficiency of the models and the effectiveness of the heuristic: the mixed BIP model optimally solves instances of up to 100 jobs, while the average percentage error of the heuristic is below 1%.
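A sketch of a simple heuristic for a simplified variant of this setting: maintenance of length w after every working window of length T, non-resumable jobs no longer than T, packed first-fit decreasing. This is not the paper's BIP models or its exact heuristic.

```python
def schedule_with_maintenance(jobs, T, w):
    """First-fit-decreasing packing of jobs into working windows of
    length T separated by maintenance of length w (jobs non-resumable,
    each job assumed to fit in one window). Returns (makespan, windows)."""
    windows = []                     # each: {"cap": remaining, "jobs": [...]}
    for p in sorted(jobs, reverse=True):
        for win in windows:
            if win["cap"] >= p:      # fits in an existing window
                win["cap"] -= p
                win["jobs"].append(p)
                break
        else:                        # open a new window after maintenance
            windows.append({"cap": T - p, "jobs": [p]})
    makespan = (len(windows) - 1) * (T + w) + (T - windows[-1]["cap"])
    return makespan, [win["jobs"] for win in windows]

print(schedule_with_maintenance([6, 4, 4, 3, 2, 2], T=8, w=1))
# -> (23, [[6, 2], [4, 4], [3, 2]])
```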

18.
To date, all models reported in the literature for the flowshop sequencing problem with no in-process waiting have been based on single-objective optimization. This paper presents a mixed integer goal programming model of the generalized N-job, M-machine standard flowshop problem with no in-process waiting, i.e., no intermediate queues. Instead of optimizing a single objective, the most satisfactory sequence is derived subject to a user-specified selection of the preemptive goals: makespan, flowtime, and machine idle time. Computational results on sample problems illustrate the advantage of a multiple-criteria selection method.
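A small helper showing what "no in-process waiting" means computationally: each job's start is delayed just enough that it never queues between machines. The instance is illustrative; the goal programming model sits on top of evaluations like this one.

```python
def no_wait_makespan(sequence, p):
    """Makespan of a no-wait flowshop: each job, once started, flows
    through all machines without queueing, so its start time is the
    smallest value at which it meets every machine exactly when free.
    p[j][m] = processing time of job j on machine m."""
    M = len(p[0])
    finish_on = [0.0] * M              # completion time of each machine
    makespan = 0.0
    for j in sequence:
        s, offset = 0.0, 0.0
        for m in range(M):             # earliest feasible no-wait start
            s = max(s, finish_on[m] - offset)
            offset += p[j][m]
        t = s
        for m in range(M):             # commit the job to the machines
            t += p[j][m]
            finish_on[m] = t
        makespan = t
    return makespan

p = [[3, 2, 4], [2, 5, 1], [4, 1, 3]]  # 3 jobs x 3 machines
print(no_wait_makespan([0, 1, 2], p))  # -> 14.0
```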

19.
Minor mathematics refers to the mathematical practices that are often erased by state-sanctioned curricular images of mathematics. We use the idea of a minor mathematics to explore alternative measurement practices. We argue that minor measurement practices have been buried by a ‘major’ settler mathematics, a process of erasure that distributes ‘sensibility’ and formulates conditions of mathematics dis/ability. We emphasize how measuring involves the making and mixing of analogies, and that this involves attending to intensive relationships rather than extensive properties. Our philosophical and historical approach moves from the archeological origins of human measurement activity, to pivotal developments in modern mathematics, to configurations of curriculum. We argue that the project of proliferating multiple mathematics is required in order to disturb narrow (and perhaps white, western, male) images of mathematics—and to open up opportunities for a more pluralist and inclusive school mathematics.

20.
Rough set feature selection (RSFS) can be used to improve classifier performance. RSFS removes redundant attributes whilst retaining important ones that preserve the classification power of the original dataset. Reducts are the feature subsets selected by RSFS, and the core is the intersection of all reducts of a dataset. RSFS can only handle discrete attributes, so continuous attributes must be discretized before being input to RSFS; the discretization determines the core size of the resulting discrete dataset, yet current discretization methods do not consider the core size. Earlier work proposed the core-generating approximate minimum entropy discretization (C-GAME) algorithm, which selects the maximum number of minimum-entropy cuts capable of generating a non-empty core within a discrete dataset. The contributions of this paper are as follows: (1) the C-GAME algorithm is improved by adding a new type of constraint that eliminates the possibility of only a single reduct being present in a C-GAME-discretized dataset; (2) C-GAME is evaluated against C4.5, multi-layer perceptrons, RBF networks, and k-nearest-neighbour classifiers on ten datasets chosen from the UCI Machine Learning Repository; (3) C-GAME is compared with the Recursive Minimum Entropy Partition (RMEP), Chimerge, Boolean Reasoning, and Equal Frequency discretization algorithms on the same ten datasets; (4) the effects of C-GAME and the other four discretization methods on the sizes of reducts are evaluated; (5) an upper bound is defined on the total number of reducts within a dataset; (6) the effects of different discretization algorithms on the total number of reducts are analysed; (7) the performance of two RSFS algorithms (a genetic algorithm and Johnson’s algorithm) is analysed.
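A minimal sketch of the minimum-entropy cut selection that C-GAME builds on: the generic class-information-entropy criterion, without the core-generating constraints that are the paper's contribution. The data are invented.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def split_entropy(values, labels, cut):
    """Class-information entropy of cutting the attribute at `cut`,
    the quantity minimized in entropy-based discretization."""
    left = [l for v, l in zip(values, labels) if v <= cut]
    right = [l for v, l in zip(values, labels) if v > cut]
    n = len(labels)
    return len(left) / n * entropy(left) + len(right) / n * entropy(right)

def best_cut(values, labels):
    """Candidate cuts are midpoints between consecutive distinct
    attribute values; return the minimum-entropy one."""
    xs = sorted(set(values))
    candidates = [(a + b) / 2 for a, b in zip(xs, xs[1:])]
    return min(candidates, key=lambda c: split_entropy(values, labels, c))

values = [1.0, 1.2, 2.8, 3.0, 3.1, 4.5]
labels = ["a", "a", "a", "b", "b", "b"]
print(best_cut(values, labels))   # -> 2.9, a class-pure split
```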
