Similar Documents
20 similar documents found (search time: 31 ms)
1.
We study a discrete-time, classified multi-server queue with a shared buffer. There are m servers and each server belongs to one of k classes (m ≥ k), so that k kinds of jobs can be served in the system. We characterize a bursty arrival process using bursts which consist of the same kind of jobs. Once the first job of a burst arrives at the queue, the successive jobs arrive in every time slot until the last job of the burst arrives. The numbers of jobs in a burst and the inter-arrival times of bursts are assumed to be i.i.d., respectively, and the service time is assumed to be equal to one slot. We propose an efficient numerical method to exactly obtain the job loss probability, the waiting time distribution and the mean queue length. We apply this model to the ATM switch with a shared buffer and obtain the performance measures. Numerical results show the advantage of the ATM switch with a shared buffer compared to the one with output buffers.
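As a concrete illustration of the arrival model in this abstract, the following Python sketch generates the slot indices at which jobs arrive: once a burst starts, one job arrives in every slot until the burst ends, with i.i.d. burst lengths and inter-burst gaps. The particular distributions and parameter values are placeholders, not those used in the paper.

```python
import random

def burst_arrival_slots(n_bursts=5, mean_burst=3.0, mean_gap=4.0, seed=2):
    """Sketch of the bursty arrival process described above: once the first
    job of a burst arrives, one job arrives in every slot until the burst
    ends; burst lengths and inter-burst gaps are i.i.d. The distributions and
    parameters used here are placeholders for the paper's general i.i.d. laws."""
    random.seed(seed)
    slots, t = [], 0
    for _ in range(n_bursts):
        t += 1 + int(random.expovariate(1.0 / mean_gap))    # idle gap before the next burst
        length = 1 + int(random.expovariate(1.0 / mean_burst))
        slots.extend(range(t, t + length))                  # one job per slot, back to back
        t += length
    return slots

print(burst_arrival_slots())    # slot indices at which jobs arrive
```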

2.
All studies in the admission control of a service station make decisions at arrival epochs. When arrivals are internal and are rejected from a queue, the rejected jobs have to be routed to other stations in the system. However, the system will not know whether a job will be admitted to a queue or not until its arrival epoch at that queue. Thus, the system has to react dynamically and agilely to the decisions made at a specific queue and may try several queues before finding one that admits the job. This paper remedies these difficulties by changing the decision epochs of the admission control from arrival epochs to departure epochs, with the actions of switching (keeping) the arrival stream on or off. Thus upstream stations have information on the admission status of their downstream stations at all times. It is proved that the optimal policy for this revised admission control system is of control-limit type for an M/G/1 queue. Comparisons of the optimal values and optimal policies for the admission controls made at arrival epochs and at departure epochs are included in the paper.
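To make the departure-epoch decision structure concrete, here is a minimal event-driven sketch, not the paper's model or optimal policy: an M/G/1-type queue whose arrival stream is switched on or off only at departure epochs via a control-limit rule. The Poisson rate, the uniform service times and the threshold are arbitrary assumptions.

```python
import heapq
import random

def departure_epoch_admission(lam=0.8, limit=3, horizon=10_000.0, seed=1):
    """Minimal event-driven sketch of admission control exercised at
    departure epochs: after every departure the arrival stream is switched
    on iff fewer than `limit` jobs remain (a control-limit rule). The Poisson
    rate, the uniform service times and the threshold are arbitrary
    assumptions, not the paper's optimal values."""
    random.seed(seed)
    queue, stream_on = 0, True
    accepted = rejected = 0
    events = [(random.expovariate(lam), "arrival")]
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            heapq.heappush(events, (t + random.expovariate(lam), "arrival"))
            if stream_on:
                accepted += 1
                queue += 1
                if queue == 1:                  # server was idle: start a service
                    heapq.heappush(events, (t + random.uniform(0.5, 1.5), "departure"))
            else:
                rejected += 1                   # the upstream station must route the job elsewhere
        else:                                   # departure epoch: the only decision point
            queue -= 1
            stream_on = queue < limit           # control-limit switching action
            if queue > 0:
                heapq.heappush(events, (t + random.uniform(0.5, 1.5), "departure"))
    return {"accepted": accepted, "rejected": rejected}

print(departure_epoch_admission())
```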

3.
In this paper, we model an open queueing network and analyze performance measures with and without feedback as two individual cases. The model comprises a single queue with a dedicated processor capable of handling two like jobs as a single job. Two different job arrivals with different processing times are considered with an internal timer. Performance measures such as the average queue length, the average response time and the average waiting time of the jobs are computed and plotted. The joint density function of the inter-arrival time and the arrival rate is derived. The probability mass function is derived for all possible cases that may arise in an interval (0, t], considering n job arrivals during that period, and an integer programming problem is formulated to obtain optimal sequence patterns which maximize the efficiency of the model.

4.
This paper presents a single-server queueing model with retrials in discrete time. The number of primary jobs arriving in a time slot follows a general probability distribution, and the numbers of primary arrivals in consecutive time slots are mutually independent. Each job requires from the server a generally distributed number of slots for its service, and the service times of different jobs are independent. Jobs arriving in a slot can start their service only at the beginning of the next slot. When jobs arrive and find the server busy, all incoming jobs are sent into orbit. When jobs arrive in a slot and find the server idle, one of the incoming jobs (randomly chosen) in that slot starts its service at the beginning of the next slot, whereas the other incoming jobs in that slot, if any, are sent into orbit. During each slot, jobs in the orbit try to re-enter the system individually, independently of each other, with a given retrial probability.
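The slot-level mechanics described in this abstract can be mimicked with a short simulation. The sketch below is only an illustration: the per-slot arrival distribution, the fixed two-slot service time and the retrial probability are placeholders for the paper's general distributions, and ties between primaries and retrials are resolved in favour of primaries for simplicity.

```python
import random

def simulate_discrete_retrial(slots=100_000, p_arrivals=(0.7, 0.2, 0.1),
                              retrial_p=0.2, service_slots=2, seed=7):
    """Slot-by-slot sketch of the retrial mechanism described above.
    p_arrivals[k] is the probability of k primary arrivals in a slot; the
    fixed service length and the retrial probability are illustrative
    placeholders for the paper's general distributions."""
    random.seed(seed)
    orbit, busy_remaining, orbit_sum = 0, 0, 0
    for _ in range(slots):
        arrivals = random.choices(range(len(p_arrivals)), weights=p_arrivals)[0]
        if busy_remaining > 0:
            busy_remaining -= 1
            orbit += arrivals                 # server busy: all incoming jobs join the orbit
        elif arrivals > 0:
            orbit += arrivals - 1             # one incoming job wins the idle server,
            busy_remaining = service_slots    # the others join the orbit
        elif any(random.random() < retrial_p for _ in range(orbit)):
            orbit -= 1                        # a retrial from the orbit seizes the idle server
            busy_remaining = service_slots
        orbit_sum += orbit
    return orbit_sum / slots

print("mean orbit size:", simulate_discrete_retrial())
```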

5.
Cruise, James; Jonckheere, Matthieu; Shneer, Seva. Queueing Systems (2020) 95(3-4): 271-279.
We consider Poisson streams of exponentially distributed jobs arriving at each edge of a hypergraph of queues. Upon arrival, an incoming job is routed to the shortest queue among...
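The routing rule in this abstract can be sketched in a few lines: each arriving job belongs to a hyperedge and joins the shortest of the queues forming that edge. The hypergraph, the uniform choice of edge per job and the crude geometric service step below are illustrative assumptions, not the paper's Poisson/exponential dynamics.

```python
import random

def hypergraph_jsq(edges, n_queues, n_jobs=10_000, serve_p=0.6, seed=3):
    """Sketch of join-the-shortest-queue routing within a hyperedge. All
    dynamics here are simplified stand-ins for the model in the abstract."""
    random.seed(seed)
    queues = [0] * n_queues
    for _ in range(n_jobs):
        edge = random.choice(edges)                  # the stream (hyperedge) of the next job
        target = min(edge, key=lambda q: queues[q])  # join the shortest queue on that edge
        queues[target] += 1
        for q in range(n_queues):                    # each queue completes at most one job
            if queues[q] and random.random() < serve_p:
                queues[q] -= 1
    return queues

print(hypergraph_jsq(edges=[(0, 1), (1, 2), (0, 2, 3)], n_queues=4))
```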

6.
We revisit the problem of job assignment to multiple heterogeneous servers in parallel. The system under consideration, however, has a few unique features. Specifically, repair jobs arrive at the queueing system in batches according to a Poisson process. In addition, the servers are heterogeneous and the service time distributions of the individual servers are general. The objective is to optimally assign each job within an arriving batch so as to minimize the long-run average number of jobs in the entire system. We focus on the class of static assignment policies, where jobs are routed to servers upon arrival according to pre-determined probabilities. We solve the model analytically and derive the structural properties of the optimal static assignment. We show that when the traffic is below a certain threshold, it is better not to assign any jobs to slower servers. As traffic increases (either due to an increase in the job arrival rate or in the batch size), more of the slower servers are utilized. We give an explicit formula for computing the threshold. Finally, we compare and evaluate the performance of the static assignment policy against two dynamic policies, namely the shortest expected completion policy and the shortest queue policy.
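To illustrate what a static assignment policy is, here is a minimal sketch: every job in an arriving batch is sent to server i with a fixed probability, independently of the current queue states. The probabilities in the example are arbitrary placeholders, not the optimal values derived in the paper.

```python
import random

def static_assignment(batch_sizes, probs, seed=11):
    """Sketch of the class of static policies studied above: each job in an
    arriving batch is routed to server i with a fixed probability probs[i],
    independently of queue states. The probabilities here are arbitrary
    placeholders, not the paper's optimal ones (which set probs[i] = 0 for
    slow servers while traffic stays below a threshold)."""
    random.seed(seed)
    servers = list(range(len(probs)))
    counts = [0] * len(probs)
    for batch in batch_sizes:
        for _ in range(batch):
            i = random.choices(servers, weights=probs)[0]
            counts[i] += 1
    return counts

# e.g. three heterogeneous servers; the slowest one is deliberately left unused
print(static_assignment(batch_sizes=[3, 1, 4, 2], probs=[0.7, 0.3, 0.0]))
```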

7.
This paper aims to develop an on-line Ant Colony Optimization (ACO) framework, where jobs arrive over time and at any time we lack knowledge concerning future jobs. A due date is determined upon job arrival, and jobs are sequenced on the machine to optimize the sum of weighted lead times with all due dates met. We propose that each ant is associated with a sequence of waiting jobs with quoted due dates. This waiting sequence is constantly updated over time (whenever a job is selected to be processed or a new job arrives). The on-line schedule is constructed by selecting the first job in the waiting list of the "best" ant to process (along with its due date) as the machine becomes available. However, for an ant where this job is not the first one in the list, processing it pushes back the waiting jobs positioned before it. If such a push-back results in a due-date violation, this ant is eliminated. Further, our ACO framework does not include the iterative procedure, owing to the characteristics of the on-line problem; this is one difference from the traditional ACO framework besides ant elimination. Computational testing on generated instances shows that our ACO algorithm outperforms an existing effective on-line algorithm in the literature. Also, with local search incorporated using the EDD (Earliest Due Date) rule, improvements can be obtained in both computational outcome and time.
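The ant-elimination rule lends itself to a small feasibility check. The sketch below is an illustrative reading of the abstract rather than the paper's exact procedure: given an ant's waiting list of (processing time, due date) pairs, it tests whether processing a job that is not first in the list pushes some earlier-positioned job past its due date.

```python
def violates_due_dates(waiting, chosen_idx, now=0):
    """Feasibility check behind the ant-elimination rule described above.
    `waiting` is a list of (processing_time, due_date) pairs in the ant's
    current order; the chosen job occupies the machine first, and the ant is
    eliminated if the resulting push-back makes any job miss its due date.
    This simple completion-time bookkeeping is an assumption for illustration."""
    t = now + waiting[chosen_idx][0]      # the chosen job occupies the machine first
    for i, (p, due) in enumerate(waiting):
        if i == chosen_idx:
            continue
        t += p                            # every other waiting job finishes later
        if t > due:
            return True                   # due-date violation: this ant is eliminated
    return False

# picking the second job (length 2) pushes the first job past its due date of 3
print(violates_due_dates([(2, 3), (2, 10), (4, 20)], chosen_idx=1))
```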

8.
We consider a 2-class queueing system, operating under a generalized processor-sharing discipline. The arrival rate to the secondary queue is much smaller than that to the primary queue, while the exponentially distributed service requirements have comparable parameters. The primary queue is assumed to be heavily loaded, so the processor-sharing factor for the secondary queue is assumed to be relatively small. We use singular perturbation analyses in a small parameter measuring the ratio of arrival rates, and the closeness of the system to instability. Two different regimes are analyzed, corresponding to a heavily loaded and a lightly loaded secondary queue, respectively. With suitable scaling of variables, lowest-order asymptotic approximations to the joint stationary distribution of the numbers of jobs in the two queues are derived, as well as to the marginal distributions.

9.
In this paper, we consider a MAP/G/1 queue in which each customer arrives with a service and a space requirement, which could be dependent. However, the space and service requirements of different customers are assumed to be independent. Each customer occupies its space requirement in a buffer until it has completely received its service, at which time it relinquishes the space it occupied. We study and solve the problem of finding the steady-state distribution of the total space requirement of all customers present in the system. In the process of doing so, we also generalize the solution of the MAP/G/1 queue and find the time-average joint distribution of the queue length, the state of the arrival process and the elapsed service time, conditioned on the server being busy. This problem has applications to the design of buffer requirements for a computer or communication system.

10.
The class of tandem queueing networks with job feedback is studied under stationarity conditions on the arrival and service time sequences. Each job, after completing service in the last queue, is fed back (rerouted) to the first one a random number of times before leaving the system. The average execution time per job is exactly computed, as the number of jobs becomes large, and is minimized under mild conditions. The degree of parallelism achieved in the processing is also computed. The issue of rate-stability of the system is then considered. The network is defined to be rate-stable iff the job departure rate is equal to the job arrival rate; this depends heavily on the dynamic feedback policy we employ to place rerouted jobs in specific places of the front queue buffer of the network. The condition under which the network is rate-stable is specified, and a dynamic feedback policy is constructed which rate-stabilizes the system under the maximum possible job arrival rate; thus, it maximizes the dynamic throughput of the network. Other related results concerning the performance of tandem networks with feedback are obtained. Research supported in part by grants NSF-DDM-RIA-9010778, NSF-NCR-9116268, NSF-NCR-NYI-9258507, by an AT&T Foundation grant and a GTE Fellowship.

11.
We consider a system where incoming jobs may be executed at different servers, each of which goes through alternating periods of being available and unavailable. Neither the states of the servers nor the relevant queue sizes are known at moments of arrival. Hence, a load balancing mechanism that relies on random time-out intervals and job transfers from one queue to another is adopted. The objective is to minimize a cost function which may include holding costs and transfer costs. A model of a single queue with an unreliable server and timeouts is analyzed first. The results are then used to obtain an approximate solution for an arbitrary number of queues. Several transfer policies are evaluated and compared.

12.
Congestion and memory occupancy in a computer system may be further reduced if new jobs are admitted only when the number of jobs queued at the CPU is below the CPU run-queue cutoff (RQ). In this paper, we prove that the response time of a job is invariant with respect to RQ if jobs do not communicate with each other. We also demonstrate this invariance property numerically using matrix-geometric methods and present an approximate method for the delay due to context switching under time slicing. The approximation suggests that time slicing with constant overhead yields a throughput similar to that of an FCFS system without overhead.

13.
Ishizaki, Fumio; Takine, Tetsuya. Queueing Systems (2000) 34(1-4): 67-100.
An efficient yet accurate estimation of the tail distribution of the queue length has been considered one of the most important issues in call admission and congestion control in ATM networks. The arrival process in ATM networks is essentially a superposition of sources which are typically bursty and periodic, either due to their origin or due to their periodic slot occupation after traffic shaping. In this paper, we consider a discrete-time queue where the arrival process is a superposition of general periodic Markov sources. The general periodic Markov source is rather general since it is assumed only to be irreducible, stationary and periodic. Note also that the source model can represent multiple time-scale correlations in arrivals. For this queue, we obtain upper and lower bounds for the asymptotic tail distribution of the queue length by bounding the asymptotic decay constant. The formulas can be applied to a queue having a huge number of states describing the arrival process. To show this, we consider an MPEG-like source, which is a special case of general periodic Markov sources. The MPEG-like source has three time-scale correlations: peak rate, frame length and a group of pictures. We then apply our bound formulas to a queue with a superposition of MPEG-like sources, and provide some numerical examples to show the numerical feasibility of our bounds. Note that the number of states in a Markov chain describing the superposed arrival process is more than 1.4 × 10^88. Even for such a queue, the numerical examples show that the order of magnitude of the tail distribution can be readily obtained.

14.
We study an online batch scheduling problem in which information about a job becomes known before the job itself is available; job information arrives over time, and the objective is to minimize the makespan. A job becomes available for processing a time $a$ after its information is revealed, the maximum processing time over all jobs is $p_{\rm max}$, several jobs can be processed simultaneously by the machine as one batch, and the processing time of a batch equals the longest processing time among the jobs in that batch. For the single-machine problem with unbounded batch capacity, we give an online algorithm $\gamma H^\infty$ and prove that both its competitive ratio and the lower bound of the problem equal $1+\gamma$, where $\gamma=\left(-1+\sqrt{1+\frac{4p_{{\rm max}}}{p_{{\rm max}}+a}}\right)/2$; hence the algorithm is optimal.
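The competitive-ratio bound can be evaluated directly; the short sketch below plugs sample (arbitrary) values of $p_{\rm max}$ and $a$ into the formula stated in the abstract.

```python
from math import sqrt

def competitive_ratio(p_max, a):
    """Evaluates the bound stated above: the online algorithm is
    (1 + gamma)-competitive, where
        gamma = (-1 + sqrt(1 + 4*p_max / (p_max + a))) / 2.
    The sample values below are arbitrary, chosen only to show how the ratio
    behaves."""
    gamma = (-1 + sqrt(1 + 4 * p_max / (p_max + a))) / 2
    return 1 + gamma

print(competitive_ratio(p_max=10, a=1))    # ~1.58; tends to (1 + sqrt(5))/2 as a -> 0
print(competitive_ratio(p_max=10, a=100))  # ~1.08; tends to 1 as a grows
```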

15.
Each day a facility commences service at time zero. All customers arriving prior to time T are served during that day. The queueing discipline is First-Come First-Served. Each day, each person in the population chooses whether or not to visit the facility that day. If he decides to visit, he arrives at an instant of time such that his expected waiting time in the queue is minimal. We investigate the arrival rate of customers in equilibrium, where each customer is fully aware of the characteristics of the system. We show that the arrival rate is constant before opening time, but that in general it is not constant between opening and closing time. For the case of exponentially distributed service times, we develop a set of equations from which the equilibrium queue size distribution and expected waiting time can be numerically computed as functions of time.

16.
We study infinite-server queues in which the arrival process is a Cox process (or doubly stochastic Poisson process), of which the arrival rate is given by a shot-noise process. A shot-noise rate emerges naturally in cases where the arrival rate tends to exhibit sudden increases (or shots) at random epochs, after which the rate is inclined to revert to lower values. Exponential decay of the shot noise is assumed, so that the queueing systems are amenable to analysis. In particular, we perform transient analysis on the number of jobs in the queue jointly with the value of the driving shot-noise process. Additionally, we derive heavy-traffic asymptotics for the number of jobs in the system by using a linear scaling of the shot intensity. First we focus on a one-dimensional setting in which there is a single infinite-server queue, which we then extend to a network setting.
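A discretized simulation makes the shot-noise-driven arrival mechanism easy to picture. The Euler-style sketch below is only illustrative: all parameter values are assumptions, and the paper derives the exact transient and heavy-traffic behaviour analytically.

```python
import math
import random

def shot_noise_infinite_server(T=50.0, dt=0.01, shot_rate=0.5, mean_shot=2.0,
                               decay=1.0, mean_service=1.0, seed=5):
    """Euler-style sketch of the model above: the arrival rate is a shot-noise
    process (upward jumps with mean `mean_shot` at Poisson epochs of rate
    `shot_rate`, exponential decay at rate `decay`), and it drives a Cox
    arrival stream into an infinite-server queue with exponential service.
    All parameter values are illustrative placeholders."""
    random.seed(seed)
    rate, departures = 0.0, []
    for step in range(int(T / dt)):
        t = step * dt
        rate *= math.exp(-decay * dt)                       # exponential decay of the shot noise
        if random.random() < shot_rate * dt:                # a new shot arrives
            rate += random.expovariate(1.0 / mean_shot)
        if random.random() < rate * dt:                     # Cox arrival driven by the current rate
            departures.append(t + random.expovariate(1.0 / mean_service))
        departures = [d for d in departures if d > t]       # infinite-server: jobs never wait
    return rate, len(departures)

print("final (shot-noise rate, jobs in system):", shot_noise_infinite_server())
```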

17.
In a line production system, if the sequencing at each work station is done according to the times that the jobs are due out of the system, then the sequencing is according to what is called the dynamic priority rule. The priorities are dynamic because the longer a job waits in a queue for service, the less the likelihood that a later arrival will precede it. In this paper interest is focused on the equilibrium probability distribution of the time that a job spends in such a system (called the flow time). Reported here are results of simulation studies which suggest a technique for locating these distributions graphically from theoretically derived flow time distributions for a similar system, but in which queue discipline is governed by a first-come, first-served rule.

18.
At regular times, a satellite launcher company has to plan the use of its launcher to get the maximum profit. In a more formal way, the problem consists of selecting and scheduling a subset of unit-length jobs constrained by capacitated time slots so that the overall cost is a minimum. The data associated with each job are its weight, its time window and its expected gain when it is performed. With each time slot are associated a setup cost and a capacity. The setup cost of a time slot is due when this time slot is used to perform at least one job. Moreover, the total weight of all jobs scheduled within a time slot is at most the time slot capacity. We first show that the general problem is hard and provide some easy special cases. We then propose a first dynamic-programming polynomial-time algorithm for the special case with unit weights. A second and more efficient dynamic programming algorithm is also provided for the special case of unit weights and agreeable time windows. This last algorithm is finally improved for the special case of equal gains.
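The objective described here (select and place jobs into capacitated slots, paying a setup cost for every slot used) can be made concrete with a brute-force enumeration over tiny instances. This sketch only illustrates the problem statement; it is not the paper's polynomial dynamic programs, and the instance data are invented.

```python
from itertools import product

def best_plan(jobs, slots):
    """Brute-force sketch (tiny instances only) of the selection/scheduling
    objective above. `jobs` maps a job to (weight, gain, allowed_slots) and
    `slots` maps a slot to (capacity, setup_cost); profit is total gain minus
    the setup cost of every slot that is used."""
    names = list(jobs)
    best = (0.0, {})
    # each job is either rejected (None) or assigned to one of its allowed slots
    for choice in product(*[[None] + list(jobs[j][2]) for j in names]):
        load, used, profit = {}, set(), 0.0
        feasible = True
        for j, s in zip(names, choice):
            if s is None:
                continue
            w, gain, _ = jobs[j]
            load[s] = load.get(s, 0) + w
            if load[s] > slots[s][0]:          # slot capacity exceeded
                feasible = False
                break
            used.add(s)
            profit += gain
        if feasible:
            profit -= sum(slots[s][1] for s in used)
            if profit > best[0]:
                best = (profit, dict(zip(names, choice)))
    return best

jobs = {"j1": (1, 5.0, ["t1"]), "j2": (1, 4.0, ["t1", "t2"]), "j3": (2, 3.0, ["t2"])}
slots = {"t1": (2, 2.0), "t2": (2, 2.5)}
print(best_plan(jobs, slots))
```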

19.
We study the behavior of a single-server discrete-time queue with batch arrivals, where the information on the queue length, and possibly on service completions, is delayed. Such a model describes situations arising in high-speed telecommunication systems, where information arrives in messages, each comprising a variable number of fixed-length packets, and it takes one unit of time (a slot) to transmit a packet. Since it is not desirable to attempt service when the system may be empty, we study a model where we assume that service is attempted only if, given the information available to the server, it is certain that there are messages in the queue. We characterize the probability distribution of the number of messages in the queue under some general stationarity assumptions on the arrival process, when information on the queue size is delayed K slots, and derive explicit expressions for the PGF of the queue length for the case of i.i.d. batch arrivals and general independent service times. We further derive the PGF of the queue size when information on both the queue length and service completions is delayed K=1 units of time. Finally, we extend the results to priority queues and show that when all messages are of unit length, the cμ rule remains optimal even in the case of delayed information.
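The "serve only when certainly non-empty" rule can be simulated at the slot level. The sketch below is one illustrative reading of the first model: the server sees the queue length with a delay of K slots, knows its own departures since then, and transmits only when the stale length minus those departures is positive (arrivals since then can only have made the queue longer). The Poisson batch law and the value of K are assumptions.

```python
import math
import random

def poisson(mean):
    """Tiny Poisson sampler (Knuth's method); stands in for a general batch law."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def delayed_info_queue(slots=50_000, batch_mean=0.5, K=2, seed=13):
    """Slot-level sketch: transmit only when the K-slot-old queue length,
    corrected by the departures the server knows it has made since, proves
    the queue is non-empty."""
    random.seed(seed)
    history = [0] * (K + 1)     # history[-1] = queue length K slots ago
    departures = [0] * K        # departures in each of the last K slots
    queue, busy_slots = 0, 0
    for _ in range(slots):
        certain_nonempty = history[-1] - sum(departures) > 0
        served = 1 if certain_nonempty else 0
        queue -= served                      # transmit one packet in this slot
        busy_slots += served
        queue += poisson(batch_mean)         # batch arrival at the end of the slot
        history = [queue] + history[:-1]
        departures = [served] + departures[:-1]
    return busy_slots / slots

print("fraction of busy slots:", delayed_info_queue())
```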

20.
We consider a two-node tandem queueing network in which the upstream queue is M/G/1 and each job reuses its upstream service requirement when moving to the downstream queue. Both servers employ the first-in-first-out policy. We investigate the amount of work in the second queue at certain embedded arrival time points, namely when the upstream queue has just emptied. We focus on the case of infinite-variance service times and obtain a heavy traffic process limit for the embedded Markov chain.
