Similar Documents
20 similar documents retrieved (search time: 500 ms)
1.
Crowdsourcing has become popular after a number of industries, such as food, consumer products, hotels, electronics, and other large retailers, bought into the idea of using it to serve customers. In this paper, we introduce a multi-server queueing model in the context of crowdsourcing. We assume that two types of customers, say Type 1 and Type 2, arrive at a c-server queueing system. A Type 1 customer must be served by one of the c servers, while a Type 2 customer may be served either by one of the c servers or by a Type 1 customer who, immediately after completing service, is available to act as a server. We assume that a Type 1 customer will be available to serve a Type 2 customer (provided at least one Type 2 customer is waiting in the queue at the moment that Type 1 customer's service is completed) with probability \(p, 0 \le p \le 1\); with probability \(q = 1 - p\), the Type 1 customer opts out of serving a Type 2 customer even when one is waiting. Upon completing a service, a free server offers service to a Type 1 customer on an FCFS basis; if no Type 1 customers are waiting, the server serves a Type 2 customer if one is present in the queue. If a Type 1 customer decides to serve a Type 2 customer, then for the purposes of our analysis that Type 2 customer is removed from the system, since the Type 1 customer leaves the system together with that Type 2 customer. Under the assumption of exponential service times for both customer types, we study the model in steady state using matrix-analytic methods and establish several results, including explicit expressions for the waiting time distributions. Some illustrative numerical examples are presented.
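As a rough illustration of the dynamics only (not the matrix-analytic solution studied in the paper), the sketch below simulates a simplified version of the model. One labeled simplification: it does not track which customer type occupies each server, so every service completion is treated as a potential Type 1 departure that may take a waiting Type 2 customer along with probability p. All rates and the seed are made-up values.

```python
import random

def simulate(lam1, lam2, mu, c, p, horizon, seed=1):
    """Event-driven simulation of the two-type crowdsourcing queue.
    Returns time-average numbers waiting (Type 1, Type 2).
    Simplification: the type of the customer in service is not tracked."""
    rng = random.Random(seed)
    t = 0.0
    busy = 0            # servers currently in use
    q1 = q2 = 0         # waiting customers of each type
    area1 = area2 = 0.0
    while t < horizon:
        total = lam1 + lam2 + busy * mu
        dt = rng.expovariate(total)
        area1 += q1 * dt
        area2 += q2 * dt
        t += dt
        u = rng.random() * total
        if u < lam1:                      # Type 1 arrival
            if busy < c:
                busy += 1
            else:
                q1 += 1
        elif u < lam1 + lam2:             # Type 2 arrival
            if busy < c:
                busy += 1
            else:
                q2 += 1
        else:                             # a service completion
            if q2 > 0 and rng.random() < p:
                q2 -= 1                   # departing customer takes a Type 2 along
            if q1 > 0:
                q1 -= 1                   # freed server serves next Type 1 (FCFS)
            elif q2 > 0:
                q2 -= 1                   # otherwise a waiting Type 2
            else:
                busy -= 1
    return area1 / horizon, area2 / horizon
```

Comparing p = 0 with p = 1 under the same load shows the crowdsourcing effect: the Type 2 queue is visibly shorter when Type 1 customers help out.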

2.
We consider a multi-server retrial queue with a Batch Markovian Arrival Process (BMAP). The servers are identical and independent of each other. The service time of a customer is of phase (PH) type. If a group of primary calls finds idle servers, the calls occupy the corresponding number of servers. If the number of idle servers is insufficient, the remaining calls go to an orbit of unlimited size and repeat their attempts to obtain service after exponentially distributed amounts of time, independently of each other. Busy servers are subject to breakdowns and repairs. The common flow of breakdowns is a MAP; an event of this flow causes a failure of any busy server with equal probability. When a server fails, its repair period starts immediately. This period has a PH-type distribution and is independent of the repair times of other broken-down servers and of the service times of customers occupying the working servers. A customer whose service is interrupted goes to the orbit with some probability and leaves the system with the complementary probability. We derive the ergodicity condition and calculate the stationary distribution and the main performance characteristics of the system. Illustrative numerical examples are presented.

3.
Decision-makers who face model or parameter risk may prefer to act prudently by identifying optimal contracts that are robust to such sources of uncertainty. In this paper, we tackle this issue under a finite uncertainty set that contains a number of probability models, each a candidate for the "true" but unknown model. Various robust optimisation models are proposed, some already known in the literature, and we show that all of them can be solved efficiently via Second-Order Conic Programming (SOCP). Numerical experiments are run for various risk preference choices. We find that for relatively large sample sizes, the modeller should focus on finding the best possible fit for the unknown probability model in order to achieve the most robust decision. If only small samples are available, the modeller should instead consider one of two robust optimisation models, the Weighted Average Model or the Weighted Worst-case Model, rather than relying on statistical tools that estimate the probability model; which of the two is the better choice depends on how much weight the modeller places on tail risk in the objective function. These findings suggest that one should be careful when seeking robust optimal decisions: the modeller should first understand the features of the objective function and the size of the available data, and only then decide whether robust optimisation or statistical inference is the better practical approach.
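A toy numerical sketch of the two robustness ideas over a finite model set (far simpler than the SOCP formulations in the paper; the contracts, scenario losses, candidate models, and weights are all invented for illustration):

```python
# Three candidate probability models over three loss scenarios,
# and two hypothetical contracts with scenario-dependent losses.
models = [
    [0.2, 0.5, 0.3],
    [0.1, 0.6, 0.3],
    [0.3, 0.4, 0.3],
]
losses = {"contract_A": [10.0, 4.0, 1.0], "contract_B": [7.0, 6.0, 2.0]}
weights = [0.5, 0.3, 0.2]   # modeller's confidence in each candidate model

def expected(p, l):
    """Expected loss of a contract under one probability model."""
    return sum(pi * li for pi, li in zip(p, l))

def worst_case(l):
    """Worst-case expected loss over all candidate models."""
    return max(expected(p, l) for p in models)

def weighted_avg(l):
    """Confidence-weighted average of expected losses across models."""
    return sum(w * expected(p, l) for w, p in zip(weights, models))

# The worst-case-robust choice minimises the worst-case expected loss.
robust_choice = min(losses, key=lambda k: worst_case(losses[k]))
```

Here contract_A has a worst-case expected loss of 4.9 against 5.1 for contract_B, so the worst-case criterion selects contract_A.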

4.
Predicting customer choice behaviour has been a major challenge for marketing researchers, who have adopted various models to represent customers' purchase patterns. Some researchers considered simple zero-order models; others proposed higher-order models to represent explicitly customers' tendency to seek variety or reinforcement as they make repetitive choices. Nevertheless, the question "Which model has the highest probability of representing some future data?" still prevails, and the objective of this paper is to address it. We assess the predictive effectiveness of the well-known customer choice models. In particular, we compare the predictive ability of the dynamic attribute satiation (DAS) model of McAlister (Journal of Consumer Research, 91, pp. 141–150, 1982) with that of the well-known stochastic variety-seeking and reinforcement behaviour models. We find that the stochastic beta-binomial model has the best predictive effectiveness on both simulated and real purchase data. Using simulations, we also assess how well the stochastic models represent various complex choice processes generated by the DAS; the beta-binomial model mimics the DAS processes best. We also propose, for the first time, a stochastic choice rule for the DAS model.
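The beta-binomial model, which the paper finds most predictive, treats each customer's purchase probability as a Beta-distributed random variable; its probability mass function has a closed form. A minimal sketch (parameter values are illustrative, not from the paper):

```python
from math import comb, exp, lgamma

def beta_binomial_pmf(k, n, a, b):
    """P(k purchases of the focal brand in n occasions) when the per-occasion
    purchase probability is drawn from a Beta(a, b) distribution."""
    def log_beta(x, y):
        return lgamma(x) + lgamma(y) - lgamma(x + y)
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))
```

For a = b = 1 the Beta mixing distribution is uniform, and a single occasion yields a purchase probability of exactly 1/2.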

5.
One of the latest developments in network revenue management (RM) is the incorporation of customer purchase behaviour via discrete choice models. Many authors have presented control policies for the booking process that are expressed in terms of which combination of products to offer at a given point in time, given the resource inventories. However, in many implemented RM systems, most notably in the hotel industry, bid price control is used, and this raises the problem that the product combination recommended by these policies might not be representable through bid price control. If demand were independent of the available product alternatives, an optimal choice of bid prices would be the marginal value of capacity for each resource in the network. Under dependent demand, however, this is not necessarily the case; such bid prices are typically not restrictive enough and result in buy-down effects. We propose (1) a simple and fast heuristic that iteratively improves an initial guess for the bid price vector, for example dynamic estimates of the marginal value of capacity. Moreover, (2) we demonstrate that using these dynamic marginal capacity values directly as bid prices can lead to significant revenue loss compared with using our heuristic to improve them. Finally, (3) we investigate numerically how much revenue performance is lost due to the confinement to product combinations that can be represented by a bid price. The heuristic is not restricted to a particular choice model and can be combined with any method that provides estimates of the marginal values of capacity. In our numerical experiments, we test the heuristic on some popular network examples taken from the literature, using a multinomial logit choice model that allows customers from different segments to have products in common that they consider purchasing. In most problem instances, our heuristic policy yields significant revenue gains over some currently available alternatives at low computational cost.
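A minimal sketch of the two ingredients this abstract combines: multinomial logit purchase probabilities over an offer set, and a bid-price rule that offers only products whose fare covers the bid price. The utilities, fares, and bid price below are invented for illustration; the paper's heuristic for improving the bid price is not reproduced here.

```python
from math import exp

def mnl_probs(utilities, offered):
    """Multinomial-logit purchase probabilities over an offer set.
    The no-purchase alternative has weight 1.0 in the denominator."""
    weights = {j: exp(u) for j, u in utilities.items() if j in offered}
    denom = 1.0 + sum(weights.values())
    return {j: w / denom for j, w in weights.items()}

# Illustrative product utilities and fares for one resource.
utilities = {"full_fare": 1.0, "discount": 2.0}
fares = {"full_fare": 200.0, "discount": 100.0}
bid_price = 120.0

# Bid-price control: offer every product whose fare covers the bid price.
# Here the cheap product is closed, blocking the buy-down to "discount".
offered = {j for j, f in fares.items() if f >= bid_price}
probs = mnl_probs(utilities, offered)
```

With a bid price of 120, only the full fare is offered, so the customer either buys it or leaves; lowering the bid price below 100 would reopen the discount product and trigger buy-down.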

6.
In this paper we analyze two single-server queueing-inventory systems in which the items in inventory have a random common life time. On realization of the common life time, all customers in the system are flushed out. The inventory then reaches its maximum level S through a replenishment (with positive lead time) for the next cycle, where the lead time follows an exponential distribution. Through cancellation of purchases, items are added back to the inventory until their expiry time; the cancellation time also follows an exponential distribution. Customers arrive according to a Poisson process and service times are exponentially distributed. If an arriving customer finds the server busy, he joins a buffer of varying size. If there is no inventory, the arriving customer first tries to queue up in a finite waiting room of capacity K; finding it full, he joins a pool of infinite capacity with probability γ (0 < γ < 1), and otherwise is lost to the system forever. We discuss two models based on the transfer of customers from the pool to the waiting room or buffer. In Model 1, when at a service completion epoch the waiting room size drops to a preassigned number L − 1 (1 < L < K) or below, a customer is transferred from the pool to the waiting room with probability p (0 < p < 1) and positioned last among the waiting customers. If at a departure epoch the waiting room turns out to be empty and there is at least one customer in the pool, then the customer at the head of the pool is transferred to the waiting room with probability one. We introduce a different transfer mechanism in Model 2: when at a service completion epoch the server turns idle with at least one item in inventory, a pooled customer is immediately taken into service. At the time of a cancellation, if the server is idle (with none, one, or more customers in the waiting room), the customer at the head of the pool goes directly to the buffer for service. We also assume that no customer joins the system when there is no item in inventory. Several system performance measures are obtained, a cost function is discussed for each model, and some numerical illustrations are presented. Finally, a comparison of the two models is made.

7.
Within the multicriteria aggregation-disaggregation framework, ordinal regression aims at inducing the parameters of a decision model, for example those of a utility function, that represent some holistic preference comparisons made by a Decision Maker (DM). Usually, among the many utility functions representing the DM's preference information, only one is selected. Since such a choice is arbitrary to some extent, robust ordinal regression has recently been proposed with the purpose of taking into account all sets of parameters compatible with the DM's preference information. Until now, robust ordinal regression has been implemented for additive utility functions under the assumption of criteria independence. In this paper we propose a non-additive robust ordinal regression on a set of alternatives A, whose utility is evaluated in terms of the Choquet integral, which permits representing the interaction among criteria, modelled by the fuzzy measures parameterizing our approach.
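The discrete Choquet integral used as the utility model here has a short computational form: sort the criterion scores in ascending order and weight each increment by the capacity of the coalition of criteria scoring at least that much. A minimal sketch with a made-up two-criterion fuzzy measure:

```python
def choquet(values, mu):
    """Discrete Choquet integral of criterion scores w.r.t. a fuzzy measure.
    values: {criterion: score}; mu: {frozenset of criteria: capacity},
    with mu(empty set) = 0 and mu(all criteria) = 1."""
    crit = sorted(values, key=values.get)       # criteria, ascending by score
    total, prev = 0.0, 0.0
    for i, c in enumerate(crit):
        coalition = frozenset(crit[i:])         # criteria scoring >= values[c]
        total += (values[c] - prev) * mu[coalition]
        prev = values[c]
    return total

# Illustrative scores and capacities (not from the paper).
values = {"g1": 0.4, "g2": 0.8}
mu = {frozenset(): 0.0, frozenset({"g1"}): 0.3,
      frozenset({"g2"}): 0.7, frozenset({"g1", "g2"}): 1.0}
score = choquet(values, mu)
```

Because mu({g1}) + mu({g2}) = 1.0 here, this particular measure is additive on singletons; setting the sum above or below 1 models positive or negative interaction between the criteria.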

8.
Stochastic loss networks are often very effective models for studying the random dynamics of systems requiring simultaneous resource possession. Given a stochastic network and a multi-class customer workload, the classical Erlang model yields the stationary probability that a customer will be lost because of insufficient capacity for at least one required resource type. Recently, a novel family of slice methods was proposed by Jung et al. (Proceedings of the ACM SIGMETRICS conference on measurement and modeling of computer systems, pp. 407–418, 2008) to approximate the stationary loss probabilities in the Erlang model, and was shown to outperform the classical Erlang fixed-point approximation in many regimes of interest. In this paper, we propose some new methods for loss probability calculation. First, we propose a refinement of the 3-point slice method of Jung et al. that exhibits improved accuracy, especially for heavily loaded networks, at comparable computational cost. Next, we exploit the structure of the stationary distribution to propose randomized algorithms for approximating both the stationary distribution and the loss probabilities. Whereas our refined slice method is exact in a certain scaling regime, and is therefore ideally suited to the asymptotic analysis of large networks, the latter algorithms borrow from volume-computation methods for convex polytopes to provide approximations for the unscaled network with error bounds as a function of the computational cost.
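The single-link building block behind both the Erlang fixed-point approximation and the slice methods is the classical Erlang B blocking probability, computable by a numerically stable recursion (this is the standard formula, not the slice method itself):

```python
def erlang_b(c, a):
    """Erlang B blocking probability for c servers and offered load a (in Erlangs),
    via the standard recursion B(k) = a*B(k-1) / (k + a*B(k-1)), B(0) = 1."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b
```

For one server at unit load the blocking probability is 1/2; adding a second server drops it to 1/5.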

9.
In this paper we propose a new model for the p-median problem. The standard p-median problem assumes that each demand point is served by the closest facility. In many situations (for example, when demand points are communities of customers and each customer makes his own selection of a facility), demand is divided among the facilities, and each customer selects a facility that is not necessarily the closest one. In the gravity p-median problem, customers divide their patronage among the facilities, with the probability that a customer patronizes a facility being proportional to the attractiveness of that facility and to a decreasing utility function of the distance to it.
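The gravity patronage rule described above can be sketched directly; here the decreasing utility of distance is taken to be exponential decay, which is one common choice rather than the paper's specific assumption:

```python
from math import exp

def patronage_probs(dists, attract, beta=1.0):
    """Probability that a customer patronizes each facility: proportional to
    attractiveness times exp(-beta * distance), then normalized."""
    w = [a * exp(-beta * d) for a, d in zip(attract, dists)]
    s = sum(w)
    return [x / s for x in w]
```

Equally attractive, equally distant facilities split patronage evenly; otherwise the nearer or more attractive facility captures the larger share.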

10.
In multistage cutting stock problems (CSPs), the cutting process is distributed over several successive stages. Every stage except the last produces intermediate products; the list of intermediate products may be given or arbitrary. The goal is to minimize the total amount of material taken out of stock to cut enough finished products to meet customer demands. If the intermediate sizes are given, the column generation technique can be applied to multistage cutting problems. If the intermediate sizes are not given, another dimension is added to the problem's complexity. We propose a special procedure for this case that dynamically generates both rows (intermediate sizes) and columns (patterns); we refer to this method as row-and-column generation. The method uses an auxiliary problem embedded in the framework of the revised simplex algorithm: a non-linear knapsack problem that can be solved efficiently. In contrast to the column generation method, the developed technique cannot guarantee an optimal solution. However, the results of computational experiments are very promising and show that the method is a valuable addition to the set of tools for modeling and solving multistage CSPs.

11.
This paper deals with an M/G/1 queue with vacations and multiple phases of operation. If there are no customers in the system at a service completion instant, a vacation commences; that is, the system moves to vacation phase 0. If no customer is found waiting at the end of a vacation, the server goes on another vacation. Otherwise, the system jumps from phase 0 to some operative phase i with probability \(q_i\), \(i = 1,2, \ldots ,n\). In operative phase i, \(i = 1,2, \ldots ,n\), the server serves customers according to the FCFS (first-come, first-served) discipline. Using the method of supplementary variables, we obtain the stationary system size distribution at an arbitrary epoch. The stationary sojourn time distribution of an arbitrary customer is also derived, and the stochastic decomposition property is investigated. Finally, we present some numerical results.
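For the special case of an ordinary M/G/1 queue with multiple vacations (a single operative phase), the stochastic decomposition property mentioned above yields a well-known closed form for the mean waiting time: the Pollaczek-Khinchine delay plus the mean residual vacation time. A sketch of that classical special case, not of the multi-phase model analyzed in the paper:

```python
def mg1_vacation_wait(lam, es, es2, ev, ev2):
    """Mean waiting time in an M/G/1 queue with multiple vacations.
    lam: arrival rate; es, es2: first two moments of service time;
    ev, ev2: first two moments of vacation time."""
    rho = lam * es
    assert rho < 1.0, "queue must be stable"
    pk_delay = lam * es2 / (2.0 * (1.0 - rho))   # Pollaczek-Khinchine term
    residual_vacation = ev2 / (2.0 * ev)          # stationary residual vacation
    return pk_delay + residual_vacation
```

With exponential service of mean 1 (second moment 2), arrival rate 0.5, and exponential vacations of mean 1, each term contributes 1.0, for a mean wait of 2.0.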

12.
In this paper we study a scheduling model that simultaneously considers production scheduling, material supply, and product delivery. One vehicle with limited loading capacity transports unprocessed jobs from the supplier's warehouse to the factory in a fixed travelling time; another capacitated vehicle travels between the factory and the customer to deliver finished jobs. The objective is to minimize the arrival time of the last job delivered to the customer. We show that the problem is NP-hard in the strong sense, and propose an O(n)-time heuristic with a tight performance bound of 2. We identify some polynomially solvable cases of the problem and develop heuristics with better performance bounds for some special cases. Computational results show that all the heuristics are effective in producing optimal or near-optimal solutions quickly.

13.
For firms in exceedingly competitive environments, where customers have access to an increasing array of information, the importance of accurately measuring consumer preference for service quality management cannot be overstated. There has been a resurgence of interest in consumer preference measurement and service quality management, specifically real-time service management, as more data about customer behavior, and the means to process these data into actionable policies, become available. Recent years have also witnessed the incorporation of Radio-Frequency Identification (RFID) tags in a wide variety of applications where item-level information can be leveraged for competitive advantage. We propose a knowledge-based framework for real-time service management incorporating RFID-generated item-level identification data. We consider the economic motivations for adopting RFID solutions for customer service management through an analysis of service quality, response speed, and service dependability. We conclude by providing managerial insights on when and where managers should consider using RFID-generated identification information to improve their customer services.

14.
We consider the inventory control problem of an independent supplier in a continuous review system. The supplier faces demand from a single customer who, in turn, faces Poisson demand and follows a continuous review (R, Q) policy. If no information about the customer's inventory levels is available, reviews and ordering are usually carried out by the supplier only at points in time when a customer demand occurs, and it is common to apply an installation-stock reorder point policy. However, since the demand faced by the supplier is not Markovian, this policy can be improved by allowing orders to be placed at any point in time. We develop a time delay policy for the supplier, wherein the supplier waits until a time t after the occurrence of a customer demand to place his next order; if the next customer demand occurs before this delay elapses, the supplier places an order immediately. We develop an algorithm to determine the optimal time delay policy, and then evaluate the value of information about the customer's inventory level. Our numerical study shows that if the supplier uses the optimal time delay policy instead of the installation-stock policy, the value of the customer's inventory information is not very significant.

15.
This paper studies the operating characteristics of an M[x]/G/1 queueing system under a variant vacation policy, where the server leaves for a vacation as soon as the system is empty. The server takes at most J vacations repeatedly until at least one customer is found waiting in the queue when the server returns from a vacation. If the server is busy or on vacation, an arriving batch balks (refuses to join the system) with probability 1 − b. We derive the system size distribution at different points in time, as well as the waiting time distribution in the queue. Finally, important system characteristics are derived, along with some numerical illustration.

16.
The paper considers a discrete stochastic multiple criteria decision making problem, defined by a finite set of actions A, a set of attributes X, and a set E of evaluations of actions with respect to attributes. In the stochastic case, the evaluation of each action with respect to each attribute takes the form of a probability distribution, so the comparison of two actions leads to the comparison of two vectors of probability distributions. We propose a new procedure for solving this problem, based on three concepts: stochastic dominance, an interactive approach, and preference thresholds. The idea of the procedure comes from the interactive multiple objective goal programming approach: the set of actions is progressively reduced as the decision maker specifies additional requirements. At the beginning, the decision maker is asked to define a preference threshold for each attribute. Then, at each iteration, the decision maker is confronted with the current set of actions. If the decision maker is able to make a final choice, the procedure ends; otherwise, he or she is asked to specify an aspiration level. A didactic example illustrates the proposed technique.
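First-order stochastic dominance, one of the three concepts the procedure relies on, reduces to a pointwise comparison of cumulative distribution functions on a common outcome grid. A minimal check (this is the standard FSD test, not the paper's full interactive procedure):

```python
def fsd(p, q, tol=1e-12):
    """True if distribution p first-order stochastically dominates q.
    p and q are probability vectors over the same increasing outcome grid;
    p dominates q iff its CDF is everywhere <= q's, strictly somewhere."""
    cp = cq = 0.0
    strict = False
    for pi, qi in zip(p, q):
        cp += pi
        cq += qi
        if cp > cq + tol:       # p puts more mass on low outcomes somewhere
            return False
        if cp < cq - tol:
            strict = True
    return strict
```

A distribution shifted toward higher outcomes dominates; a distribution never dominates itself, since the relation here is strict.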

17.
This paper studies a special class of differential information games with pre-play communication: games with "cheap play". We consider problems in which there are several rounds of payoff-irrelevant, publicly observable choice (or discussion) of actions, followed by a final round in which actions are binding and payoff-relevant. A natural focal subset of equilibria of such games is one that consists of equilibria involving no regret. Such games were first studied by Green and Laffont (1987), where a criterion called posterior implementability was introduced with the intention of identifying regret-free equilibria in games with cheap play; this is simply a restriction on the Bayesian equilibrium of the underlying one-shot game. If such a restriction did indeed characterize regret-freeness, then the analysis of such situations would be enormously simplified, since one could ignore the messy extended form of the cheap-play game; merely examining the one-shot game would be sufficient. We argue that regret-freeness of an equilibrium admits a subtle distinction: regret-freeness in moves and regret-freeness in assessments. We show that the former renders the extended form irrelevant; posterior implementability completely characterizes equilibria with regret-freeness in moves. The latter, on the other hand, does not yield a similar principle: the extended form cannot be ignored.

18.
The original rough set approach proved very useful in dealing with inconsistency problems arising from information granulation. It operates on a data table composed of a set U of objects (actions) described by a set Q of attributes. Its basic notions are: the indiscernibility relation on U, the lower and upper approximation of either a subset or a partition of U, dependence and reduction of attributes from Q, and decision rules derived from lower approximations and boundaries of subsets identified with decision classes. The original rough set idea fails, however, when preference orders on attribute domains (criteria) are to be taken into account: it cannot handle inconsistencies arising from violation of the dominance principle. This kind of inconsistency is characteristic of the preferential information used in multicriteria decision analysis (MCDA) problems such as sorting, choice, and ranking. To deal with it, a number of methodological changes to the original rough set theory are necessary. The main change is the substitution of a dominance relation for the indiscernibility relation, which permits the approximation of ordered sets in multicriteria sorting. To approximate preference relations in multicriteria choice and ranking problems, another change is necessary: substitution of a pairwise comparison table for the data table, where each row corresponds to a pair of objects described by binary relations on the particular criteria. In all these MCDA problems, the new rough set approach ends with a set of decision rules that plays the role of a comprehensive preference model. It is more general than the classical functional or relational model, and it is more understandable to users because of its natural syntax. In order to work out a recommendation in one of the MCDA problems, we propose procedures for exploiting the set of decision rules. Finally, some other recently obtained results are given: rough approximations by means of similarity relations, rough set handling of missing data, a comparison of the rough set model with the Sugeno and Choquet integrals, and results on the equivalence of a decision-rule preference model and a conjoint measurement model that is neither additive nor transitive.

19.
Guo Jie, Operations Research and Management Science (《运筹与管理》), 2013, 22(6): 105-109
This paper establishes a stochastic user network equilibrium model of supply chain versus supply chain competition with customer choice preferences. Based on stochastic user equilibrium theory and the logit model, and using variational inequalities, we derive, for the supply chain that prevails at the competitive equilibrium, parameters such as its market share and the market price of the products it offers. From the perspective of competition between supply chains, the model captures customers' choice preferences over similar products differentiated by price and other attributes, and provides a line of research, an applicable algorithm, and a reasonable economic interpretation.
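The logit choice component of such a model assigns market shares proportional to exponentiated utilities. A minimal sketch (the utility specification "quality minus price" and all numbers are illustrative assumptions, not the paper's model):

```python
from math import exp

def logit_shares(utilities, theta=1.0):
    """Market shares of competing supply chains' products under a logit
    customer-choice model; theta scales sensitivity to utility differences."""
    w = {k: exp(theta * u) for k, u in utilities.items()}
    s = sum(w.values())
    return {k: v / s for k, v in w.items()}

# Illustrative utilities: quality minus price for each chain's product.
shares = logit_shares({"chainA": 5.0 - 3.0, "chainB": 4.5 - 3.2})
```

The chain offering higher net utility captures the larger share, but under logit choice the rival retains a positive share, reflecting dispersed customer preferences.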

20.
The computational complexity of the bipartite popular matching problem depends on how indifference appears in the preference lists. If one side has strict preferences while the nodes on the other side are all indifferent (but prefer to be matched), then a popular matching can be found in polynomial time (Cseh et al., 2015). However, as the same paper points out, the problem becomes NP-complete if nodes with strict preferences are allowed on both sides and indifferent nodes are allowed on one side. We show that the problem of finding a strongly popular matching is polynomial-time solvable even in the latter case. More generally, we give a polynomial-time algorithm for the many-to-many version, i.e. the strongly popular b-matching problem, in the setting where one side has strict preferences while agents on the other side may have one tie of arbitrary length at the end of their preference lists.
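Popularity is decided by a head-to-head vote: matching M1 is more popular than M2 if more agents prefer their partner in M1 than prefer their partner in M2. A minimal sketch of the vote count for agents with strict preferences (feasibility of the matchings is assumed, not checked; the agents and posts are invented):

```python
def prefers(pref, a, b):
    """True if the agent's strict preference list ranks partner a over b.
    Being matched to anything is preferred to being unmatched (None)."""
    if b is None:
        return a is not None
    if a is None:
        return False
    return pref.index(a) < pref.index(b)

def vote(prefs, m1, m2):
    """Head-to-head vote between matchings m1 and m2 (agent -> post maps).
    Returns (#agents preferring m1, #agents preferring m2)."""
    v1 = v2 = 0
    for agent, pref in prefs.items():
        a, b = m1.get(agent), m2.get(agent)
        if prefers(pref, a, b):
            v1 += 1
        elif prefers(pref, b, a):
            v2 += 1
    return v1, v2

# Illustrative instance: a3 is matched in m1 but unmatched in m2.
prefs = {"a1": ["p1", "p2"], "a2": ["p1", "p2"], "a3": ["p3"]}
m1 = {"a1": "p1", "a2": "p2", "a3": "p3"}
m2 = {"a1": "p2", "a2": "p1"}
votes = vote(prefs, m1, m2)
```

Here a1 and a3 vote for m1 while only a2 votes for m2, so m1 wins the pairwise election; a strongly popular matching is one that strictly wins every such election.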
