Similar Literature
 20 similar documents found
1.
In the last decade, the problem of deriving a consensus group ranking from all users' ranking data has received increased attention because of its widespread applications. Previous research solved this problem by consolidating the opinions of all users, thereby obtaining an ordering of all items that represents the achieved consensus. The weakness of this approach, however, is that it always produces a ranking of all items, regardless of how many conflicts exist among users. This work does not force agreement on all items. Instead, we define a new concept, maximum consensus sequences, which are the longest ranking lists of items that agree with the majority and disagree only with the minority. Based on this concept, algorithm MCS is developed to determine the maximum consensus sequences from users' ranking data and to identify the conflict items that need further negotiation. Extensive experiments on synthetic data sets indicate that the proposed method is computationally efficient. Finally, we discuss how the identified consensus sequences and conflict items can be used in practice.
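As a rough illustration of the concept (not of the paper's MCS algorithm), the Python sketch below enumerates subsets of items and keeps the longest ones whose pairwise orderings are all supported by a strict majority of users; it is exponential in the number of items and only suitable for tiny examples.

    from itertools import combinations

    def majority_order(rankings, a, b):
        # Return (a, b) if a strict majority ranks a before b, (b, a) for the
        # opposite strict majority, and None on a tie.
        votes_ab = sum(r.index(a) < r.index(b) for r in rankings)
        votes_ba = len(rankings) - votes_ab
        if votes_ab > votes_ba:
            return (a, b)
        if votes_ba > votes_ab:
            return (b, a)
        return None

    def maximum_consensus_sequences(rankings):
        items = list(rankings[0])
        for size in range(len(items), 0, -1):      # try the largest subsets first
            found = []
            for subset in combinations(items, size):
                # Order the subset by the number of majority "wins" inside it;
                # a transitive majority relation yields a unique such order.
                wins = {x: sum(majority_order(rankings, x, y) == (x, y)
                               for y in subset if y != x) for x in subset}
                seq = sorted(subset, key=lambda x: -wins[x])
                if all(majority_order(rankings, seq[i], seq[j]) == (seq[i], seq[j])
                       for i in range(size) for j in range(i + 1, size)):
                    found.append(seq)
            if found:
                return found                       # all longest consensus sequences
        return []

    # Two users agree on a, b, c but conflict on where d belongs.
    users = [['a', 'b', 'c', 'd'], ['d', 'a', 'b', 'c']]
    print(maximum_consensus_sequences(users))      # [['a', 'b', 'c']]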

2.
The well-known Happel-Vossieck list of tame concealed algebras, given in terms of quivers with relations, is compared with A. Seven's list of minimal infinite cluster quivers. There is a one-to-one correspondence between the items in these lists, and we explain how an item in one list naturally corresponds to an item in the other. A central tool for understanding this correspondence is the theory of cluster-tilted algebras.

3.
In this paper we propose a new discrete-time, discrete-state inventory model for perishable items of a single product. Items in stock are assumed to belong to one of a finite number of quality classes, ordered so that Class 1 contains the best quality and the last class contains the pre-perishable quality. By the end of each epoch, items in each inventory class either stay in the same class or lose quality and move to a lower class. The movement between classes is not observed. Samples are drawn from the inventory and, based on the observations of these samples, optimal estimates for the number of items in each quality class are derived.

4.
We consider the problem of determining lot sizes of multiple items that are manufactured by a single capacitated facility. The manufacturing facility may represent a bottleneck processing activity on the shop floor or a storeroom that provides components to the shop floor. Items flow from the facility to a downstream facility, where they are assembled according to a specified mix. Just-in-time (JIT) manufacturing requires a balanced flow of items, in the proper mix, between successive facilities. Our model determines lot sizes of the various items based on available capacity and four attributes of each item: demand rate, holding cost, set-up time and processing time. Holding costs for each item accrue until the appropriate mix of items is available for shipment downstream. We develop a lot-sizing heuristic that minimizes total holding cost per time unit over all items, subject to capacity availability and the required mix of items.

5.
We construct graphs with lists of available colors for each vertex, such that the size of every list exceeds the maximum vertex-color degree, but there exists no proper coloring from the lists. This disproves a conjecture of Reed. J Graph Theory 41: 106-109, 2002

6.
Given sales forecasts for a set of items, along with the standard deviation associated with each forecast, we propose a new method of combining forecasts using clustering. Clusters of items are identified based on the similarity of their sales forecasts, and a common forecast is then computed for each cluster of items. On a real dataset from a national retail chain, we found that the proposed method produces significantly better sales forecasts than either the individual forecasts (forecasts without combining) or an alternative method that uses a single combined forecast for all items in a product line sold by this retailer.
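A minimal Python sketch of this idea, under assumptions of our own (k-means on the forecast/standard-deviation pairs as the clustering step and a simple within-cluster average as the combined forecast); it is an illustration, not the study's actual procedure.

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_combined_forecasts(forecasts, stds, n_clusters=3, seed=0):
        # forecasts, stds: 1-D arrays, one entry per item.
        # Returns the cluster-level combined forecast per item and the labels.
        X = np.column_stack([forecasts, stds])
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=seed).fit_predict(X)
        combined = np.empty_like(forecasts, dtype=float)
        for c in range(n_clusters):
            members = labels == c
            # Simple average inside the cluster; a precision-weighted average
            # (weights 1/std**2) would be another reasonable choice.
            combined[members] = forecasts[members].mean()
        return combined, labels

    forecasts = np.array([100., 105., 98., 240., 250., 31.])
    stds      = np.array([ 12.,  15., 10.,  30.,  28.,  5.])
    # The three well-separated groups each receive one combined forecast.
    print(cluster_combined_forecasts(forecasts, stds, n_clusters=3))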

7.
A recently proposed method for the pairwise comparison of arbitrary independent random variables results in a probabilistic relation. When restricted to discrete random variables uniformly distributed on finite multisets of numbers, this probabilistic relation expresses the winning probabilities between pairs of hypothetical dice that carry these numbers, and it exhibits a particular type of transitivity called dice-transitivity. When these multisets have equal cardinality, two alternative methods for statistically comparing the ordered lists of the numbers on the faces of the dice have been studied recently: the comonotonic method, based on comparing numbers of the same rank when the lists are in increasing order, and the countermonotonic method, also based on comparing numbers of the same rank but with the lists in opposite order. In terms of the discrete random variables associated with these lists, each of these methods turns out to be related to a particular copula that joins the marginal cumulative distribution functions into a bivariate cumulative distribution function. The transitivity of the generated probabilistic relation has been completely characterized. In this paper, the list comparison methods are generalized for the purpose of comparing arbitrary random variables. The transitivity properties derived in the case of discrete uniform random variables are shown to be generic. Additionally, it is shown that for a collection of normal random variables, both comparison methods lead to a probabilistic relation that satisfies at least moderate stochastic transitivity.
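For two dice given as equal-length lists of face values, the three pairwise comparisons mentioned above can be made concrete with a short Python sketch; ties are counted as 1/2, and the function names are ours, not the paper's.

    from itertools import product

    def win_prob_independent(x, y):
        # P(X > Y) + 0.5 * P(X = Y) for independent uniform draws from x and y.
        score = sum(1.0 if a > b else 0.5 if a == b else 0.0
                    for a, b in product(x, y))
        return score / (len(x) * len(y))

    def win_prob_comonotonic(x, y):
        # Compare numbers of the same rank with both lists in increasing order.
        pairs = zip(sorted(x), sorted(y))
        score = sum(1.0 if a > b else 0.5 if a == b else 0.0 for a, b in pairs)
        return score / len(x)

    def win_prob_countermonotonic(x, y):
        # Compare numbers of the same rank with the lists in opposite order.
        pairs = zip(sorted(x), sorted(y, reverse=True))
        score = sum(1.0 if a > b else 0.5 if a == b else 0.0 for a, b in pairs)
        return score / len(x)

    die_a, die_b = [1, 4, 4, 4, 4, 4], [3, 3, 3, 3, 3, 6]
    print(win_prob_independent(die_a, die_b))       # 25/36 ~ 0.694
    print(win_prob_comonotonic(die_a, die_b))       # 4/6  ~ 0.667
    print(win_prob_countermonotonic(die_a, die_b))  # 5/6  ~ 0.833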

8.
This paper introduces a rather general technique for computing the average-case performance of dynamic data structures subjected to arbitrary sequences of insert, delete, and search operations. The method allows us to evaluate effectively the integrated cost of various data structure implementations for stacks, dictionaries, symbol tables, priority queues, and linear lists; it can thus be used as a basis for measuring the efficiency of each proposed implementation. For each data type, a specific continued fraction and a family of orthogonal polynomials are associated with sequences of operations: Tchebycheff for stacks, Laguerre for dictionaries, Charlier for symbol tables, Hermite for priority queues, and Meixner for linear lists. Our main result is an explicit expression, for each of the above data types, of the generating function for integrated costs as a linear integral transform of the generating functions for individual operation costs. We use this result to compute explicitly the integrated costs of various implementations of dictionaries and priority queues.

9.
The vehicle routing problem with backhauls involves the delivery and pickup of goods at different customer locations. In many practical situations, however, the same customer may require both a delivery of goods from the distribution centre and a pickup of recycled items simultaneously. In this paper, an insertion-based procedure for generating good initial solutions and a hybrid heuristic based on record-to-record travel, tabu lists, and route improvement procedures are proposed to solve the vehicle routing problem with simultaneous deliveries and pickups. Computational characteristics of the insertion-based procedure and the hybrid heuristic are evaluated through computational experiments. Computational results show that the insertion-based procedure obtained better solutions than those reported in the literature. The experiments also show that the proposed hybrid heuristic effectively reduces the gap between initial solutions and optimal solutions and obtains optimal solutions very efficiently for small-sized problems.
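The feature that distinguishes this variant from the classical problem is that the vehicle load is no longer monotone along a route (deliveries shrink it, pickups grow it), so every insertion move must re-check capacity at each stop. A small Python sketch of that feasibility check, with an assumed data layout (dictionaries of delivery and pickup quantities), is given below; it is not the authors' full insertion procedure.

    def route_is_feasible(route, delivery, pickup, capacity):
        # route: customer ids in visiting order; delivery/pickup: dicts mapping
        # customer id -> quantity. True if the load never exceeds capacity.
        load = sum(delivery[c] for c in route)   # everything to deliver is on board
        if load > capacity:
            return False
        for c in route:
            load = load - delivery[c] + pickup[c]
            if load > capacity:
                return False
        return True

    # Example: capacity 10; customer 2's large pickup makes one route infeasible.
    delivery = {1: 4, 2: 3, 3: 2}
    pickup   = {1: 1, 2: 8, 3: 1}
    print(route_is_feasible([1, 2, 3], delivery, pickup, capacity=10))  # False
    print(route_is_feasible([1, 3, 2], delivery, pickup, capacity=10))  # True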

10.
In this paper, three discrete-time, integer-valued inventory models for perishable items are introduced. In these models, each item in stock is assumed to perish in a given period with some probability. The dynamics of the models are affected by a demand process, a replenishment process, and perishability. However, perished items are not observed while in stock unless they are sold. Recursive estimates for the probability distribution of the number of perished items are derived.

11.
The knapsack problem (KP) is generalized to the case where items are partially ordered through a set of precedence relations. As in ordinary KPs, each item is associated with a profit and a weight, the knapsack has a fixed capacity, and the problem is to determine the set of items to be packed in the knapsack. However, each item can be accepted only when all of the preceding items have been included in the knapsack. The knapsack problem with these additional constraints is referred to as the precedence-constrained knapsack problem (PCKP). To solve PCKP exactly, we present a pegging approach in which the size of the original problem is reduced by applying a Lagrangian relaxation followed by a pegging test. Through this approach, we are able to solve PCKPs with thousands of items within a few minutes on an ordinary workstation.

12.
A lot of N items is produced on a randomly degrading facility. The lot is split into inspection sublots for the detection of faulty items, and the process is returned to as-new condition after each sublot. For exponentially distributed failure times, the mean, variance, and probability distribution of the total number of good items are considered. Stochastic optimality properties are developed for the equal-sublots policy in both the exponential and general failure-time cases.

13.
Teaching workload, teaching quality, research ability, and the amount of professional reading are the basic indicators that schools use to evaluate teachers' professional competence, and certain relationships exist among these indicators. This paper applies canonical correlation analysis to examine the correlations among these indicators and, based on the results of the analysis, offers the author's suggestions on the evaluation method.
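A hedged Python sketch of a canonical correlation analysis of this kind, using scikit-learn on synthetic data; the split of the four indicators into two blocks and the data themselves are assumptions made purely for illustration, not the paper's actual data or grouping.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    n = 200
    # Block X: teaching workload, teaching quality.  Block Y: research ability,
    # professional reading.  A shared latent factor induces correlation.
    latent = rng.normal(size=n)
    X = np.column_stack([latent + rng.normal(scale=1.0, size=n),
                         latent + rng.normal(scale=1.0, size=n)])
    Y = np.column_stack([latent + rng.normal(scale=1.0, size=n),
                         latent + rng.normal(scale=1.5, size=n)])

    cca = CCA(n_components=1)
    x_scores, y_scores = cca.fit_transform(X, Y)
    canon_corr = np.corrcoef(x_scores[:, 0], y_scores[:, 0])[0, 1]
    print(f"first canonical correlation: {canon_corr:.3f}")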

14.
Three-staged cutting patterns are often used to divide large plates into small rectangular items. Vertical cuts separate the plate into segments in the first stage, horizontal cuts split each segment into strips in the second stage, and vertical cuts divide each strip into items in the third stage. A heuristic algorithm for generating constrained three-staged patterns is presented in this paper. The optimization objective is to maximize the pattern value, that is, the total value of the included items, while the frequency of each item type must not exceed a specified upper bound. The algorithm uses an exact procedure to generate strips and two heuristic procedures to generate segments and the pattern. The pattern-generation procedure first determines an initial solution and then uses its information to generate more segments and extend the solution space. Computational results show that the algorithm is effective in improving solution quality.

15.
Of interest are the subgroups of various groups which have nonempty intersection with each class of conjugate elements of the group under study. We call these subgroups conjugately dense and study Neumann's problem of describing them in the Chevalley groups over a field. The main theorem lists all conjugately dense subgroups of the Chevalley groups of Lie rank 1 over a locally finite field.

16.
The problem of scheduling the production of new and recoverable defective items of the same product manufactured on the same facility is studied. Items are processed in batches. Each batch comprises two sub-batches processed consecutively. In the first sub-batch, all the items are newly manufactured. Some of them are of the required good quality and some are defective. The defective items are remanufactured in the second sub-batch. They deteriorate while waiting for rework. This results in increased time and cost for their remanufacturing. All the items in the same sub-batch complete at the same time, which is the completion time of the last item in the sub-batch. Each remanufactured defective item is of the required good quality. It is assumed that the percentage of defective items in each batch is the same. A setup time is required to start batch processing and to switch from manufacturing to remanufacturing. The demands for good quality items over time are given. The objective is to find batch sizes such that the total setup and inventory holding cost is minimized and all the demands are satisfied. Dynamic programming algorithms are presented for the general problem and some important special cases.

17.
One-dimensional bin-packing problems require the assignment of a collection of items to bins with the goal of optimizing some criterion related to the number of bins used or the ‘weights’ of the items assigned to the bins. In many instances, the number of bins is fixed and the goal is to assign the items such that the sums of the item weights for each bin are approximately equal. Among the possible applications of one-dimensional bin-packing in the field of psychology are the assignment of subjects to treatments and the allocation of students to groups. An especially important application in the psychometric literature pertains to splitting a set of test items into distinct subtests, each containing the same number of items, such that the maximum sum of item weights across all bins is minimized. In this context, the weights typically correspond to item statistics derived from difficulty and discrimination indices. We present a mixed zero-one integer linear programming (MZOILP) formulation of this one-dimensional minimax bin-packing problem and develop an approximate solution procedure based on the simulated annealing algorithm. In two comparisons focusing on 34 practically sized test problems (up to 6000 items and 300 bins), the simulated annealing heuristic generally provided better solutions than were obtained by using a commercial mathematical programming software package to solve the MZOILP formulation directly.
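A compact Python sketch of the minimax objective and a plain simulated annealing search over assignments (random single-item relocations); the cooling schedule, move set, and parameters are generic choices for illustration and not the authors' exact heuristic.

    import math
    import random

    def max_bin_weight(assign, weights, n_bins):
        sums = [0.0] * n_bins
        for item, b in enumerate(assign):
            sums[b] += weights[item]
        return max(sums)

    def anneal_minimax(weights, n_bins, iters=20000, t0=10.0, cooling=0.9995, seed=0):
        rng = random.Random(seed)
        n = len(weights)
        assign = [i % n_bins for i in range(n)]        # simple initial assignment
        cost = max_bin_weight(assign, weights, n_bins)
        best, best_cost, temp = assign[:], cost, t0
        for _ in range(iters):
            item = rng.randrange(n)
            new_bin = rng.randrange(n_bins)
            old_bin = assign[item]
            if new_bin == old_bin:
                continue
            assign[item] = new_bin
            new_cost = max_bin_weight(assign, weights, n_bins)
            # Accept improvements always, deteriorations with Boltzmann probability.
            if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
                cost = new_cost
                if cost < best_cost:
                    best, best_cost = assign[:], cost
            else:
                assign[item] = old_bin                  # undo the rejected move
            temp *= cooling
        return best, best_cost

    demo_rng = random.Random(1)
    weights = [demo_rng.uniform(0.2, 1.0) for _ in range(60)]
    print(anneal_minimax(weights, n_bins=6)[1])        # close to sum(weights)/6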

18.
A multi-item inventory system is considered which has the property that, for each single item, a reorder policy using the E.O.Q. formula would be appropriate. Holding costs are linear, and fixed ordering costs are assumed to consist of a major set-up cost reflecting the mere fact of placing an order, plus a sum of minor set-up costs corresponding to the items included in the order. If it is desirable to form a certain number of groups of items in which all items of one group share the same order cycle, it is shown that there is always an optimal grouping in which items are arranged in increasing order of their ratio of yearly holding cost to minor set-up cost. A heuristic for forming the groups is given which turns out to be an optimal algorithm when there are no major set-up costs. After an initial sorting of the ratios, the worst-case complexity of this procedure is linear in the number of items.
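Under standard EOQ-style assumptions (constant demand, linear holding cost, a common order cycle within each group), the yearly cost of a group with a cost-minimizing cycle has a closed form, and the structural result above means only contiguous groupings of the ratio-sorted items need to be examined. A small Python sketch with illustrative data of our own; it enumerates contiguous groupings by brute force rather than reproducing the paper's linear-time heuristic.

    import math
    from itertools import product

    def optimal_group_cost(group, major_setup):
        # group: list of (yearly_demand, unit_holding_cost, minor_setup).
        # Yearly cost with the cost-minimizing common cycle for this group:
        # sqrt(2 * (major + sum of minor setups) * sum of h*D).
        setup = major_setup + sum(s for _, _, s in group)
        holding_rate = sum(d * h for d, h, _ in group)
        return math.sqrt(2.0 * setup * holding_rate)

    def best_contiguous_grouping(items, major_setup):
        items = sorted(items, key=lambda it: it[0] * it[1] / it[2])  # h*D / s ratio
        n = len(items)
        best_cost, best_groups = float("inf"), None
        # Each of the n-1 gaps between consecutive items is either a cut or not.
        for cuts in product([False, True], repeat=n - 1):
            groups, start = [], 0
            for i, cut in enumerate(cuts):
                if cut:
                    groups.append(items[start:i + 1])
                    start = i + 1
            groups.append(items[start:])
            cost = sum(optimal_group_cost(g, major_setup) for g in groups)
            if cost < best_cost:
                best_cost, best_groups = cost, groups
        return best_cost, best_groups

    items = [(1200, 2.0, 10.0), (800, 1.5, 40.0), (500, 4.0, 15.0), (300, 0.5, 5.0)]
    cost, groups = best_contiguous_grouping(items, major_setup=100.0)
    print(round(cost, 2), [len(g) for g in groups])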

19.
We propose truthful approximation mechanisms for strategic variants of the generalized assignment problem (GAP) in a payment-free environment. In GAP, a set of items has to be optimally assigned to a set of bins without exceeding the capacity of any single bin. In our strategic variant, bins are held by strategic agents, and each agent may hide its willingness to receive some items in order to obtain items of higher value. The model has applications in auctions with budgeted bidders.

20.
The knapsack problem (KP) is generalized by taking into account a precedence relation between items. Such a relation can be represented by a directed acyclic graph whose nodes correspond one-to-one to items. As in ordinary KPs, each item is associated with a profit and a weight, the knapsack has a fixed capacity, and the problem is to determine the set of items to be included in the knapsack. However, each item can be adopted only when all of its predecessors have been included in the knapsack. The knapsack problem with this additional set of constraints is referred to as the precedence-constrained knapsack problem (PCKP). We present some dynamic programming algorithms that can solve small PCKPs to optimality, as well as a preprocessing method to reduce the size of the problem. Combining these, we are able to solve PCKPs with up to 2000 items within a few minutes of CPU time.
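To make the precedence constraint concrete, the brute-force Python sketch below enumerates all predecessor-closed subsets that fit the capacity and returns the most profitable one; it is exponential and only meant for tiny instances, not a reproduction of the paper's dynamic programming algorithms.

    from itertools import combinations

    def solve_pckp_bruteforce(profit, weight, capacity, predecessors):
        # profit, weight: dicts item -> value; predecessors: dict item -> set of
        # items that must also be packed. Returns (best_profit, best_subset).
        items = list(profit)
        best = (0, frozenset())
        for k in range(1, len(items) + 1):
            for subset in combinations(items, k):
                chosen = set(subset)
                if sum(weight[i] for i in chosen) > capacity:
                    continue
                if any(not predecessors[i] <= chosen for i in chosen):
                    continue            # some chosen item is missing a predecessor
                value = sum(profit[i] for i in chosen)
                if value > best[0]:
                    best = (value, frozenset(chosen))
        return best

    profit = {"a": 6, "b": 5, "c": 8, "d": 4}
    weight = {"a": 3, "b": 2, "c": 4, "d": 1}
    predecessors = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b"}}
    # Best packing: {a, b, c} with profit 19 at weight 9.
    print(solve_pckp_bruteforce(profit, weight, capacity=9, predecessors=predecessors))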
