Similar documents
20 similar documents found.
1.
We consider a problem of allocating limited quantities of M types of resources among N independent activities that evolve over T epochs. In each epoch, we assign to each activity a task which consumes resources, generates utility, and determines the subsequent state of the activity. We study the complexity of, and approximation algorithms for, maximizing average utility.
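A minimal brute-force sketch of the model described above (not the approximation algorithms studied in the paper); the per-task resource costs, utilities, state transitions, and per-epoch budget are invented for illustration:

```python
from itertools import product

# Toy instance (all numbers are illustrative assumptions, not from the paper):
# 2 activities, 2 resource types, horizon T = 3, per-epoch resource budget.
BUDGET = [3, 2]          # available amount of each of the M = 2 resources per epoch
T = 3

# TASKS[activity][state] = list of (resource_consumption, utility, next_state)
TASKS = {
    0: {0: [([1, 0], 2.0, 1), ([0, 0], 0.0, 0)],
        1: [([2, 1], 5.0, 0), ([0, 0], 0.0, 1)]},
    1: {0: [([1, 1], 3.0, 0), ([0, 0], 0.0, 0)]},
}

def best_plan(states=(0, 0), t=0):
    """Exhaustively search task assignments; return (total utility, plan)."""
    if t == T:
        return 0.0, []
    best = (float("-inf"), None)
    options = [TASKS[a][s] for a, s in enumerate(states)]
    for choice in product(*options):
        used = [sum(c[0][m] for c in choice) for m in range(len(BUDGET))]
        if any(u > b for u, b in zip(used, BUDGET)):
            continue                      # violates the per-epoch resource limits
        nxt = tuple(c[2] for c in choice)
        util = sum(c[1] for c in choice)
        rest, plan = best_plan(nxt, t + 1)
        if util + rest > best[0]:
            best = (util + rest, [choice] + plan)
    return best

total, plan = best_plan()
print("best average utility per epoch:", total / T)
```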

2.
We propose algorithms to perform two new operations on an arrangement of line segments in the plane, represented by a trapezoidal map: the split of the map along a given vertical line D, and the union of two trapezoidal maps computed in two vertical slabs of the plane that are adjacent through a vertical line D. The data structure we use is a modified Influence Graph, still allowing dynamic insertions and deletions of line segments in the map. The algorithms for both operations run in O(s_D log n + log² n) time, where n is the number of line segments in the map and s_D is the number of line segments intersected by D.

3.
We study an online unit-job scheduling problem arising in buffer management. Each job is specified by its release time, deadline, and a nonnegative weight. Due to overloading conditions, some jobs have to be dropped. The goal is to maximize the total weight of scheduled jobs. We present several competitive online algorithms for various versions of unit-job scheduling, as well as some lower bounds on the competitive ratios. We first give a randomized algorithm RMix with competitive ratio e/(e−1) ≈ 1.582. This is the first algorithm for this problem with competitive ratio smaller than 2. Then we consider s-bounded instances, where the span of each job (deadline minus release time) is at most s. We give a 1.25-competitive randomized algorithm for 2-bounded instances, matching the known lower bound. We also give a deterministic algorithm EDFα, whose competitive ratio on s-bounded instances is 2 − 2/s + o(1/s). For 3-bounded instances its ratio is ≈ 1.618, matching the known lower bound. In s-uniform instances, the span of each job is exactly s. We show that no randomized algorithm can be better than 1.25-competitive on s-uniform instances if the span s is unbounded. For s = 2, our proof gives a lower bound of . Also, in the 2-uniform case, we prove a lower bound of for deterministic memoryless algorithms, matching a known upper bound. Finally, we investigate the multiprocessor case and give a -competitive algorithm for m processors. We also show improved lower bounds for the general and s-uniform cases.
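A minimal sketch of the online setting, using a plain "schedule the heaviest pending job" rule rather than the paper's RMix or EDFα; the job data and the deadline semantics (a unit job with deadline d may run in any slot t with release ≤ t < d) are illustrative assumptions:

```python
import heapq

def greedy_schedule(jobs):
    """jobs: list of (release, deadline, weight) unit jobs; a job may run in any
    integer slot t with release <= t < deadline.  At each step run the heaviest
    pending job -- a simple greedy baseline, not the paper's RMix or EDFalpha."""
    jobs = sorted(jobs)                          # by release time
    horizon = max(d for _, d, _ in jobs)
    pending, total, i = [], 0.0, 0
    for t in range(horizon):
        while i < len(jobs) and jobs[i][0] <= t:
            _, d, w = jobs[i]
            heapq.heappush(pending, (-w, d))     # max-heap on weight
            i += 1
        pending = [(nw, d) for nw, d in pending if d > t]   # drop expired jobs
        heapq.heapify(pending)
        if pending:
            negw, _ = heapq.heappop(pending)
            total += -negw
    return total

print(greedy_schedule([(0, 2, 3.0), (0, 1, 1.0), (1, 3, 2.0)]))   # schedules weights 3 and 2
```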

4.
Two examples, Sampson's monks and Padgett and Ansell's Florentines, illustrate the viability approach to dynamic networks. Notably, the relationship with centrality is studied. Historical processes involving networks are discussed.

Networks are presented as controls in controlled dynamic systems. Viability is the property of a state x that there exists a trajectory starting from x and satisfying the constraints until the time horizon. To obtain this, connection matrices must be selected, at each time and at each visited state, from a specific set, the regulation map, which is carefully defined and built.

5.
In the majority problem, we are given n balls coloured black or white and we are allowed to query whether two balls have the same colour or not. The goal is to find a ball of majority colour in the minimum number of queries. The answer is known to be n − B(n), where B(n) is the number of 1's in the binary representation of n. In this paper we study randomized algorithms for determining majority, which are allowed to err with probability at most ε. We show that any such algorithm must have expected running time at least . Moreover, we provide a randomized algorithm which shows that this result is best possible. These results extend a result of De Marco and Pelc [G. De Marco, A. Pelc, Randomized algorithms for determining the majority on graphs, Combin. Probab. Comput. 15 (2006) 823–834].
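A minimal deterministic sketch of the query model (a Boyer–Moore style pass using at most n−1 "same colour?" queries; it is not the optimal n−B(n) scheme nor the randomized algorithms of the paper, and the example colouring is invented):

```python
def majority_ball(n, same_colour):
    """n balls 0..n-1; same_colour(i, j) -> True iff balls i and j share a colour.
    Returns an index of a majority-colour ball, assuming a strict majority exists.
    Uses at most n-1 queries (not the optimal n - B(n) of the deterministic bound)."""
    candidate, count = 0, 1
    for i in range(1, n):
        if count == 0:
            candidate, count = i, 1
        elif same_colour(candidate, i):
            count += 1
        else:
            count -= 1
    return candidate

# Tiny self-check with a hidden colouring (illustrative only).
colours = ['B', 'W', 'B', 'B', 'W', 'B', 'B']
print(majority_ball(len(colours), lambda i, j: colours[i] == colours[j]))  # index of a 'B' ball
```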

6.
Quantum algorithms and complexity have recently been studied not only for discrete, but also for some numerical problems. Most attention has been paid so far to the integration and approximation problems, for which a speed-up is shown in many important cases by quantum computers with respect to deterministic and randomized algorithms on a classical computer. In this paper, we deal with the randomized and quantum complexity of initial-value problems. For this nonlinear problem, we show that both randomized and quantum algorithms yield a speed-up over deterministic algorithms. Upper bounds on the complexity in the randomized and quantum settings are shown by constructing algorithms with a suitable cost, where the construction is based on integral information. Lower bounds result from the respective bounds for the integration problem.

7.
In this paper, a class of continuous Estimation of Distribution Algorithms (EDAs) based on Gaussian models is analyzed to investigate their potential for solving dynamic optimization problems, where the global optima may change dramatically over time. Experimental results on a number of dynamic problems show that the proposed strategy for dynamic optimization can significantly improve the performance of the original EDAs and that the optimal solutions can be consistently located.
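A minimal sketch of a continuous Gaussian EDA loop (a univariate-Gaussian model refit on the selected individuals); the paper's dynamic-optimization strategy is not reproduced, and the shifting sphere objective is only a toy test case:

```python
import numpy as np

def gaussian_eda(objective, dim, pop=60, elite=15, iters=50, seed=0):
    """Minimise `objective` with a univariate-Gaussian EDA (UMDA-c style sketch)."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim) * 2.0
    for _ in range(iters):
        sample = rng.normal(mean, std, size=(pop, dim))          # sample the model
        best = sample[np.argsort([objective(x) for x in sample])[:elite]]
        mean, std = best.mean(axis=0), best.std(axis=0) + 1e-6   # refit the model
    return mean

# Sphere function whose optimum shifts over "time" (a crude dynamic test case).
for shift in (0.0, 1.5, 3.0):
    sol = gaussian_eda(lambda x, s=shift: np.sum((x - s) ** 2), dim=3)
    print(shift, np.round(sol, 2))
```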

8.
In this paper, we present an overview of probabilistic techniques based on randomized algorithms for solving “hard” problems arising in performance verification and control of complex systems. This area is fairly recent, even though its roots lie in the robustness techniques for handling uncertain control systems developed in the 1980s. In contrast to these deterministic techniques, the main ingredient of the methods discussed in this survey is the use of probabilistic concepts. The introduction of probability and random sampling makes it possible to overcome the fundamental tradeoff between numerical complexity and conservatism that lies at the root of the worst-case deterministic methodology. The simplicity of implementation of randomized techniques may also help bridge the gap between theory and practical applications.
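A minimal sketch of the basic randomized-verification idea: estimate the probability that an uncertain system violates a performance specification by random sampling, with the sample size taken from the additive Chernoff/Hoeffding bound. The uncertain first-order system and the stability specification below are invented for illustration and are not from the survey:

```python
import math
import random

def estimate_violation_probability(violates, eps=0.02, delta=0.01, seed=1):
    """Monte Carlo estimate of P[violation] within +/- eps with confidence 1 - delta.
    Sample size from the additive Chernoff/Hoeffding bound: N >= ln(2/delta) / (2 eps^2)."""
    n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
    rng = random.Random(seed)
    hits = sum(violates(rng) for _ in range(n))
    return hits / n, n

# Illustrative uncertain plant 1/(s + a) with a uniform in [-0.1, 1]; the "spec" is
# stability (a > 0), so the true violation probability is about 0.0909.
spec_violated = lambda rng: rng.uniform(-0.1, 1.0) <= 0.0
p_hat, samples = estimate_violation_probability(spec_violated)
print(f"estimated violation probability {p_hat:.3f} using {samples} samples")
```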

9.
Iterated (repeated, successive) integration is used for integrating functions satisfying the Lipschitz condition. To construct an optimal (minimax) algorithm, it is necessary to optimally integrate functions whose values are evaluated with uncertain errors. Such a problem is solved by generalizing an algorithm from Ref. 1, which is sequentially optimal for one-dimensional problems.
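A minimal sketch of iterated integration for Lipschitz integrands: the composite midpoint rule (worst-case error at most L(b−a)²/(4n) on the class of L-Lipschitz functions) is applied along each coordinate in turn, so the outer rule integrates inner values that are themselves only approximate. This is illustrative only, not the sequentially optimal algorithm of the paper:

```python
def midpoint(f, a, b, n):
    """Composite midpoint rule; for an L-Lipschitz f the worst-case error is at
    most L*(b-a)**2 / (4*n), which is minimax-optimal for this class."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def iterated_2d(f, ax, bx, ay, by, n):
    """Iterated (repeated) integration: integrate over y for each fixed x, then
    integrate the resulting one-dimensional function over x.  The inner values
    are only approximate, so the outer rule sees function values contaminated
    by (bounded) errors."""
    inner = lambda x: midpoint(lambda y: f(x, y), ay, by, n)
    return midpoint(inner, ax, bx, n)

# Example: f(x, y) = |x - y| is 1-Lipschitz; its exact integral over [0,1]^2 is 1/3.
approx = iterated_2d(lambda x, y: abs(x - y), 0.0, 1.0, 0.0, 1.0, 200)
print(approx)   # close to 0.3333...
```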

10.
We consider the following problem: each processor of the network is assigned a (not necessarily unique) input value. Determine the multiplicity of each input value. Solving this problem means that any input-symmetric function (i.e., a function not sensitive to permutations of its input values) can be computed. We consider an anonymous synchronous network of arbitrary topology, in which dynamic link faults [P. Fraigniaud, C. Peyrat, Inform. Process. Lett. 71 (1999) 115–119; N. Santoro, P. Widmayer, in: Proc. SIGAL'90, Tokyo, 1990, in: LNCS, Springer, Berlin, 1990, pp. 358–369] may occur.

An instance of this problem was stated as an open problem by N. Santoro at the rump session of SIROCCO'98: is it possible to distributively compute the parity function (XOR) on anonymous hypercubes with dynamic faults?

We show that if the network size N (the number of processors) is known, the multiplicity of inputs (and thus any input-symmetric function) can be computed on any connected network. The time complexity depends on the details of the model and the amount of topological information, but it is always a low polynomial in N.


11.
This paper studies a multicast problem arising in wavelength division multiplexing single-hop lightwave networks, as well as in Video-on-Demand systems. In this problem, the same message of duration Δ has to be transmitted to a set of n receivers which are not all available simultaneously. The receivers can be partitioned into subsets, each served by a different transmission, with the objective of minimizing their overall waiting cost. When there is a single data channel available for transmission, a dynamic programming algorithm is devised which finds an optimal solution in O(n log n + min{n², nΔ²}) time, improving over a previously known O(n³) time algorithm. When multiple data channels are available for transmission, an O(n) time algorithm is proposed which finds an optimal solution when the message has constant transmission duration, whereas an NP-completeness proof is given when the message has arbitrary transmission duration.
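A toy sketch of the single-channel recurrence: receivers are sorted by availability, each transmission of duration Δ serves a consecutive group, and a transmission may start only when the channel is free and the group's last receiver is available. The plain recursion below enumerates group boundaries (exponential as written; memoisation over reachable channel-free times would turn it into a dynamic program) and is not the paper's O(n log n + min{n², nΔ²}) algorithm; the waiting-cost definition and the data are illustrative assumptions:

```python
def min_total_wait(ready, delta):
    """Single data channel: transmissions of length `delta` are sequential, each
    serves a consecutive group of receivers (sorted by availability), and a
    receiver waits from its ready time until its transmission completes.
    Plain recursion over group boundaries -- exponential as written, meant only
    to show the recurrence on a toy instance."""
    r = tuple(sorted(ready))
    n = len(r)

    def solve(i, channel_free):
        if i == n:
            return 0.0
        best = float("inf")
        for j in range(i, n):                   # receivers i..j form the next group
            start = max(r[j], channel_free)     # wait for last member and a free channel
            finish = start + delta
            cost = sum(finish - r[k] for k in range(i, j + 1))
            best = min(best, cost + solve(j + 1, finish))
        return best

    return solve(0, 0.0)

print(min_total_wait([0, 1, 2, 10, 11], delta=4))   # grouping {0,1,2} and {10,11} is best here
```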

12.
The structural and computational aspects of two decomposition algorithms suitable for dynamic optimization of nonlinear interconnected networks are examined. Both methods arise from a decomposition based on Lagrangian duality theory of the addressed dynamic optimization problem, which is the minimization of energy costs over a given time period, subject to the requirement that the network equations and inequality restrictions are satisfied. The first algorithm uses a spatial decomposition of the state space into subgroups of state variables associated with particular network zones. This leads to a number of lower-dimensional optimization problems which can be solved individually at one level and coordinated at a higher level to account for interactions between these zones. The second algorithm uses time decomposition to solve a sequence of static optimization problems, one for each time increment into which the interval is subdivided, which are then coordinated to take account of dynamic interaction between the time increments. Computational results from an actual network in the United Kingdom are presented for both methods.

13.
14.
This paper presents a novel approach to simulation metamodeling using dynamic Bayesian networks (DBNs) in the context of discrete event simulation. A DBN is a probabilistic model that represents the joint distribution of a sequence of random variables and enables the efficient calculation of their marginal and conditional distributions. In this paper, the construction of a DBN based on simulation data and its utilization in simulation analyses are presented. The DBN metamodel allows the study of the time evolution of the simulation by tracking the probability distribution of the simulation state over the duration of the simulation. This feature is unprecedented among existing simulation metamodels. The DBN metamodel also enables effective what-if analysis, which reveals the conditional evolution of the simulation. In such an analysis, the simulation state at a given time is fixed and the probability distributions representing the state at other time instants are updated. Simulation parameters can be included in the DBN metamodel as external random variables. Then, the DBN offers a way to study the effects of parameter values and their uncertainty on the evolution of the simulation. The accuracy of the analyses allowed by DBNs is studied by constructing appropriate confidence intervals. These analyses could be conducted based on raw simulation data, but the use of DBNs reduces the duration of repetitive analyses and is expedited by available Bayesian network software. The construction and analysis capabilities of DBN metamodels are illustrated with two example simulation studies.
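A hand-rolled miniature of the core idea, using a two-state Markov chain with invented numbers instead of a DBN learned from simulation data: track the state distribution over time, then perform a what-if analysis by conditioning on the state at one time instant and updating the distributions at the other instants with Bayes' rule:

```python
import numpy as np

# Hypothetical 2-state "simulation" (0 = queue short, 1 = queue long); the
# transition matrix stands in for a DBN metamodel learned from simulation runs.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
p0 = np.array([1.0, 0.0])          # start with a short queue
T = 6

# Unconditional time evolution of the state distribution.
marg = [p0]
for _ in range(T):
    marg.append(marg[-1] @ P)

# What-if analysis: condition on the state being 1 ("long") at time k = 3.
k, s = 3, 1
cond = []
for t in range(T + 1):
    if t >= k:                                         # forward: propagate from state s
        cond.append(np.linalg.matrix_power(P, t - k)[s])
    else:                                              # backward: Bayes' rule
        lik = np.linalg.matrix_power(P, k - t)[:, s]   # P(X_k = s | X_t = i)
        post = marg[t] * lik
        cond.append(post / post.sum())

for t in range(T + 1):
    print(t, np.round(marg[t], 3), np.round(cond[t], 3))
```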

15.
Optimal location of interconnected facilities on tree networks is considered in the case when some of the nodes of the network contain existing facilities. The distances between the facilities must satisfy maximum constraints. Polynomial algorithms for the solution of this problem are proposed.

16.
The topology of a mobile wireless network changes over time. Maintaining routes between all nodes requires the continuous transmission of control information, which consumes precious power and bandwidth resources. Many routing protocols have been developed, trading off control overhead and route quality. In this paper, we ask whether there exist low-overhead schemes that produce low-stretch routes, even in large networks where all the nodes are mobile. We present a scheme that maintains a hierarchical structure within which constant-stretch routes can be efficiently computed between every pair of nodes. The scheme rebuilds each level of the hierarchy periodically, at a rate that decreases exponentially with the level of the hierarchy. We prove that this scheme achieves constant stretch under a mild smoothness condition on the nodal mobility processes. Furthermore, we prove tight bounds for the network-wide control overhead under the additional assumption of the connectivity graph forming a doubling metric space. Specifically, we show that for a connectivity model combining the random geometric graph with obstacles, constant-stretch routes can be maintained with a total overhead of bits of control information per time unit.

17.
We consider a semi-dynamic setting for the Temporal Constraint Satisfaction Problem (TCSP), where we are requested to maintain the path-consistency of a network under a sequence of insertions of new (further) constraints between pairs of variables. We show how to maintain the path-consistency in O(nR³) amortized time over a sequence of Θ(n²) insertions, where n is the number of vertices of the network and R is its range, defined as the maximum size of the minimum interval containing all the intervals of a single constraint. Furthermore, we extend our algorithms to deal with more general temporal networks where variables can be points and/or intervals and constraints can also be defined on pairs of different kinds of variables. For such cases our algorithms maintain their performance. Finally, we adapt our algorithms to also maintain the arc-consistency of such generalized networks in O(R) amortized time for Θ(n²) insertions.
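A static (non-incremental) sketch of path consistency on a simple interval-based temporal network: every constraint is repeatedly intersected with the composition of the other two sides of each triangle until a fixpoint is reached. The amortised incremental machinery of the paper is not reproduced, and the example constraints are invented:

```python
INF = float("inf")

def path_consistency(n, constraints):
    """constraints: {(i, j): (lo, hi)} meaning lo <= X_j - X_i <= hi, a simple
    interval-based temporal network (a special case of the TCSP).  Classic
    path-consistency tightening: intersect C_ij with the composition of C_ik
    and C_kj, iterated to a fixpoint; returns the tightened constraints, or
    None if the network is inconsistent."""
    C = {(i, j): (-INF, INF) for i in range(n) for j in range(n)}
    for i in range(n):
        C[i, i] = (0, 0)
    for (i, j), (lo, hi) in constraints.items():
        C[i, j] = (max(C[i, j][0], lo), min(C[i, j][1], hi))
        C[j, i] = (max(C[j, i][0], -hi), min(C[j, i][1], -lo))
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for k in range(n):
                for j in range(n):
                    lo = max(C[i, j][0], C[i, k][0] + C[k, j][0])   # intersect with composition
                    hi = min(C[i, j][1], C[i, k][1] + C[k, j][1])
                    if lo > hi:
                        return None
                    if (lo, hi) != C[i, j]:
                        C[i, j] = (lo, hi)
                        changed = True
    return C

net = path_consistency(3, {(0, 1): (1, 3), (1, 2): (2, 4), (0, 2): (0, 5)})
print(net[0, 2])   # tightened from (0, 5) to (3, 5)
```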

18.
In Stolyar (Queueing Systems 50 (2005) 401–457) a dynamic control strategy, called the greedy primal-dual (GPD) algorithm, was introduced for the problem of maximizing queueing network utility subject to stability of the queues, and was proved to be (asymptotically) optimal. (The network utility is a concave function of the average rates at which the network generates several “commodities.”) Underlying the control problem of Stolyar (Queueing Systems 50 (2005) 401–457) is a convex optimization problem subject to a set of linear constraints. In this paper we introduce a generalized GPD algorithm, which applies to the network control problem with additional convex (possibly non-linear) constraints on the average commodity rates. The underlying optimization problem in this case is a convex problem subject to convex constraints. We prove asymptotic optimality of the generalized GPD algorithm. We illustrate key features and applications of the algorithm on simple examples. AMS Subject Classifications: 90B15 · 90C25 · 60K25 · 68M12
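A generic primal-dual (saddle-point) sketch for the kind of underlying problem, maximising a concave utility subject to convex constraints on the rates; this is only the static optimisation view with invented data, not Stolyar's queueing GPD scheme:

```python
import numpy as np

def primal_dual(grad_u, cons, grad_c, x0, step=0.01, iters=20000):
    """Maximise U(x) s.t. c_m(x) <= 0 (U concave, c_m convex) by projected gradient
    ascent on x and dual ascent on the multipliers -- a generic saddle-point
    iteration with a small constant step, shown only to illustrate the underlying
    convex problem with convex constraints."""
    x = np.array(x0, dtype=float)
    lam = np.zeros(len(cons))
    for _ in range(iters):
        g = grad_u(x) - sum(l * gc(x) for l, gc in zip(lam, grad_c))
        x = np.maximum(x + step * g, 0.0)          # keep the rates nonnegative
        lam = np.maximum(lam + step * np.array([c(x) for c in cons]), 0.0)
    return x, lam

# Toy instance: maximise log(1+x1) + log(1+x2) subject to x1 + 2*x2 <= 1, x >= 0;
# the optimum is x = (1, 0) with multiplier 0.5.
grad_u = lambda x: 1.0 / (1.0 + x)
cons   = [lambda x: x[0] + 2.0 * x[1] - 1.0]
grads  = [lambda x: np.array([1.0, 2.0])]
x, lam = primal_dual(grad_u, cons, grads, x0=[0.5, 0.5])
print(np.round(x, 3), np.round(lam, 3))
```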

19.
We apply stochastic dynamic programming to obtain a lower bound for the mean project completion time in a PERT network where the activity durations are exponentially distributed random variables. Moreover, these random variables are non-static in that the distributions themselves vary according to exogenous societal randomness, such as strikes or inflation. This social randomness is modelled as a function of a separate continuous-time Markov process over the time horizon. The results are verified by simulation.
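A minimal numeric check of the lower-bound idea on a tiny static PERT network with exponential activity durations (the socially modulated, continuous-time Markov rates of the paper are not modelled, and this uses the classical Jensen/critical-path bound rather than the paper's dynamic-programming bound); the network and rates are invented:

```python
import random

# Tiny activity-on-arc PERT network: 1 -> 2 -> 4 and 1 -> 3 -> 4, exponential durations.
# Rates (1/mean) per activity; all numbers are made up for illustration.
RATES = {("1", "2"): 1.0, ("2", "4"): 0.5, ("1", "3"): 0.8, ("3", "4"): 0.8}
PATHS = [[("1", "2"), ("2", "4")], [("1", "3"), ("3", "4")]]

# Classical lower bound (Jensen): E[completion] >= max over paths of the sum of mean durations.
lower_bound = max(sum(1.0 / RATES[a] for a in path) for path in PATHS)

def simulate(runs=200_000, seed=42):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        d = {a: rng.expovariate(r) for a, r in RATES.items()}
        total += max(sum(d[a] for a in path) for path in PATHS)
    return total / runs

print(f"lower bound {lower_bound:.3f}  vs  simulated mean {simulate():.3f}")
```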

20.
This paper presents an optimal fully dynamic recognition algorithm for directed cographs. Given the modular decomposition tree of a directed cograph G, the algorithm supports arc and vertex modification (insertion or deletion) in O(d) time where d is the number of arcs involved in the operation. Moreover, if the modified graph remains a directed cograph, the modular decomposition tree is updated; otherwise, a certificate is returned within the same complexity.
