Similar Documents (20 results found)
1.
The optimal path-finding algorithm, an important module in route guidance and traffic control systems, must provide correct paths that account for U-turns, P-turns, and no-left-turn restrictions in urban transportation networks.

Traditional methods for handling these regulations fall into two categories: network representation and algorithmic methods such as the vine-building algorithm. First, network representation methods apply traditional optimal path-finding algorithms to a modified network structure: for example, dummy nodes and links are added to the existing network so that constrained search becomes possible. Because this approach creates large networks, it is hard to implement and introduces considerable difficulties in network coding. The increased number of nodes and links inflates the memory requirement, which slows down processing. For these reasons, the method has not been widely adopted for incorporating turning regulations into optimal path-finding on transportation networks. Second, algorithmic methods, mainly based on the vine-building algorithm, have been suggested for determining optimal paths in networks with turn penalties and prohibitions. Although these algorithms nicely reflect the characteristics of urban transportation networks, they frequently produce infeasible or suboptimal solutions.

The algorithm suggested in this research is based on Dijkstra's algorithm [1] and the tree-building algorithm used to construct optimal paths. Unlike traditional node-labeling algorithms, which label each node with its minimum estimated cost, this algorithm labels each link with its minimum estimated cost.

Comparison shows that the solution of the link-labeling algorithm is better than that of the vine-building algorithm, which very frequently yields suboptimal solutions. As a result, the algorithm handles turning regulations while providing an optimal solution within a reasonable time limit.
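A minimal sketch of the link-labeling idea in Python (labels live on directed links rather than nodes, so penalties and prohibitions between consecutive links can be enforced); the graph encoding, the `turn_cost` convention, and all names are illustrative assumptions, not the paper's code.

```python
import heapq

def link_labeled_shortest_path(links, turn_cost, source, target):
    """Dijkstra over directed links instead of nodes.

    links: dict {(u, v): travel_cost}
    turn_cost: function (in_link, out_link) -> extra cost,
               or float('inf') for a prohibited turn (e.g. no-left-turn).
    Labels each link with the minimum cost of reaching its head via it.
    """
    out = {}                                 # outgoing links indexed by tail node
    for (u, v), c in links.items():
        out.setdefault(u, []).append((u, v))

    best = {}   # link -> best cost of arriving over that link
    pred = {}   # link -> predecessor link (to rebuild the path)
    pq = []
    for l in out.get(source, []):
        best[l] = links[l]
        heapq.heappush(pq, (links[l], l))

    while pq:
        cost, l = heapq.heappop(pq)
        if cost > best.get(l, float('inf')):
            continue
        u, v = l
        if v == target:                      # first popped link into target is optimal
            path = [l]
            while path[0] in pred:
                path.insert(0, pred[path[0]])
            return cost, path
        for nxt in out.get(v, []):
            t = turn_cost(l, nxt)            # inf blocks prohibited movements
            new = cost + t + links[nxt]
            if new < best.get(nxt, float('inf')):
                best[nxt] = new
                pred[nxt] = l
                heapq.heappush(pq, (new, nxt))
    return float('inf'), []
```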

2.
Local search methods are often used to reduce the power consumption of broadcast routing in wireless networks. For a classic method, sweep, the best available time complexity result is O(|V|^4). We present an O(|V|^2)-time method that exhaustively removes unnecessary transmissions, yielding a solution comparable to that of sweep.

3.
We consider the situation where two agents each try to solve their own task in a common environment. In particular, we study simple sequential Bayesian games with an unlimited time horizon in which two players share a visible scene, but the tasks (termed assignments) of the players are private information. We present an influence diagram framework for representing simple types of games where each player holds private information. The framework is used to model the analysis depth and time horizon of the opponent and to determine an optimal policy under various assumptions on the opponent's analysis depth. Not surprisingly, the framework turns out to have severe complexity problems even in simple scenarios, due to the size of the relevant past. We propose two approaches for approximation. One approach is to use Limited Memory Influence Diagrams (LIMIDs), in which we convert the influence diagram into a set of Bayesian networks and perform single policy update. The other approach is information enhancement, where it is assumed that the opponent will know your assignment within a few moves. Empirical results are presented using a simple board game.

4.
Structure analysis of signed networks with positive and negative links has received wide attention and is becoming a research focus in the area of network science. In recent years, many community detection methods for signed networks have been proposed to analyze their structure. However, current methods can only efficiently analyze signed networks with a single community structure; they are unable to analyze signed networks in which communities coexist with peripheral nodes, bipartite structures, or other structures. To address this problem, we present a mathematically principled method for the structure analysis of signed networks with positive and negative links. First, a probabilistic model is proposed to model signed networks with either a single community or a coexisting structure, and a variational Bayesian approach is derived to learn the approximate distribution of model parameters. To determine the optimal model, we also derive a model selection criterion based on evidence theory. In addition, to efficiently analyze large signed networks, we propose a fast learning version of our algorithm with time complexity O(k^2 E), where k is the number of groups and E is the number of links. In our experiments, the proposed method is validated on synthetic and real-world signed networks and compared with the state-of-the-art methods. The experimental results demonstrate that the proposed method analyzes the structure of signed networks more efficiently and accurately than the state-of-the-art methods.

5.
This paper discusses an alternative to conditioning that may be used when the probability distribution is not fully specified. It does not require any assumptions (such as CAR: coarsening at random) on the unknown distribution. The well-known Monty Hall problem is the simplest scenario where neither naive conditioning nor the CAR assumption suffices to determine an updated probability distribution. This paper thus addresses a generalization of that problem to arbitrary distributions on finite outcome spaces, arbitrary sets of ‘messages’, and (almost) arbitrary loss functions, and provides existence and characterization theorems for robust probability updating strategies. We find that for logarithmic loss, optimality is characterized by an elegant condition, which we call RCAR (reverse coarsening at random). Under certain conditions, the same condition also characterizes optimality for a much larger class of loss functions, and we obtain an objective and general answer to how one should update probabilities in the light of new information.
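A quick way to see the Monty Hall point is to simulate the update under two different host protocols; the simulation below is an illustrative aside, not the paper's method. The player always picks door 0, and we estimate the probability that staying wins given that the host opened door 2.

```python
import random

def monty_trial(host_opens_rightmost=False):
    """One Monty Hall round. Player picks door 0; host opens a goat door.

    If host_opens_rightmost, the host always opens the highest-numbered
    goat door he can; otherwise he picks uniformly among his options.
    Returns (door the host opened, whether the car is behind door 0).
    """
    car = random.randrange(3)
    options = [d for d in (1, 2) if d != car]   # host never opens door 0 or the car door
    opened = max(options) if host_opens_rightmost else random.choice(options)
    return opened, car == 0

def posterior_stay_wins(host_opens_rightmost, n=200_000):
    # Monte Carlo estimate of P(car behind door 0 | host opened door 2)
    trials = [monty_trial(host_opens_rightmost) for _ in range(n)]
    kept = [win for opened, win in trials if opened == 2]
    return sum(kept) / len(kept)

print(posterior_stay_wins(False))  # ~1/3: the textbook uniform-host answer
print(posterior_stay_wins(True))   # ~1/2: a biased host changes the update
```

The posterior depends on the host's (unknown) protocol, which is exactly why naive conditioning and the CAR assumption do not suffice here.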

6.
We present the interpolation search B-tree (ISB-tree), a new cache-aware indexing scheme that supports update operations (insertions and deletions) in O(1) worst-case block transfers and search operations in O(log_B log n) expected block transfers, where B represents the disk block size and n denotes the number of stored elements. The expected search bound holds with high probability for a large class of (unknown) input distributions. The worst-case search bound of our indexing scheme is O(log_B n) block transfers. Our update and expected search bounds constitute a considerable improvement over the O(log_B n) worst-case block transfer bounds for search and update operations achieved by the B-tree and its numerous variants. This is also verified by an accompanying experimental study.
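The doubly logarithmic search bound comes from interpolation search, which probes where a key should sit if the stored values were spread uniformly; a minimal in-memory sketch of that core idea (the ISB-tree itself is a cache-aware disk structure, which this does not capture):

```python
def interpolation_search(a, key):
    """Search a sorted list `a` for `key`.

    Instead of probing the middle element, probe where the key *should*
    sit if values were spread uniformly; for such inputs the expected
    number of probes is O(log log n). Returns an index of `key`, or -1.
    """
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= key <= a[hi]:
        if a[hi] == a[lo]:                   # avoid division by zero
            pos = lo
        else:
            # linear interpolation of the key's likely position
            pos = lo + (hi - lo) * (key - a[lo]) // (a[hi] - a[lo])
        if a[pos] == key:
            return pos
        if a[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1

print(interpolation_search(list(range(0, 1000, 7)), 707))  # 101
```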

7.
In this paper we analyze a queueing system with a general service scheduling function. There are two types of customers with different service requirements. The service order for customers of each type is determined by the service scheduling function α_k(i, j), the probability that a type-k customer is selected for service when there are i type-1 and j type-2 customers in the system. This model is motivated by traffic control to support traffic streams with different characteristics in telecommunication networks (in particular, ATM networks). Using the embedded Markov chain and supplementary variable methods, we obtain the queue-length distribution as well as the loss probability and the mean waiting time for each type of customer. We also apply our model to traffic control supporting diverse traffic in telecommunication networks. Finally, the performance measures of existing scheduling policies are compared. We expect this to help system designers select an appropriate scheduling policy for their systems.
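A rough discrete-time simulation sketch of how such a scheduling function drives service order; the example α_k(i, j), the arrival probabilities, and the one-service-per-tick dynamics are illustrative assumptions, far simpler than the paper's embedded Markov chain analysis.

```python
import random

def alpha(i, j):
    """Example scheduling function: probability of serving a type-1
    customer when i type-1 and j type-2 customers are waiting.
    Here type 1 is chosen in proportion to its queue share."""
    return i / (i + j) if i + j > 0 else 0.0

def simulate(n_ticks=100_000, p1=0.3, p2=0.3, seed=1):
    """Each tick sees at most one arrival per type and one service
    completion, with the served type chosen via alpha."""
    random.seed(seed)
    q1 = q2 = served1 = served2 = 0
    for _ in range(n_ticks):
        q1 += random.random() < p1
        q2 += random.random() < p2
        if q1 + q2 > 0:
            if random.random() < alpha(q1, q2):   # alpha(0, j) == 0, so q1 > 0 here
                q1 -= 1; served1 += 1
            elif q2 > 0:
                q2 -= 1; served2 += 1
    return served1, served2

print(simulate())
```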

8.
Researchers have emphasized that different factors have to be considered when discussing the flexibility of a single machine and that of a group of machines. The present research proposes methods for measuring machine-group flexibility as an extension of the model for measuring single-machine flexibility. The measurement of machine-group flexibility needs to take into account at least three attributes: efficiency, versatility, and redundancy. Measurement models for each of these attributes are demonstrated, and a combined measurement approach for machine-group flexibility is suggested. The entropy approach, under which the greater the number of available options, the larger the entropy value, is applied to the measurement of versatility and redundancy. Finally, an example illustrates the application of the flexibility measurement models developed in this research.
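The entropy idea can be made concrete with Shannon entropy over the share of work each option carries; the weighting below is an illustrative assumption, not the paper's exact measurement model.

```python
import math

def entropy(shares):
    """Shannon entropy of a discrete distribution; it grows with the
    number of evenly usable options, matching the 'more options, larger
    entropy' principle used for versatility and redundancy."""
    return -sum(p * math.log(p) for p in shares if p > 0)

# Versatility of a machine group: distribution of workload over the
# distinct operation types the group can perform (illustrative numbers).
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 4 equally usable operations: ~1.386
print(entropy([0.85, 0.05, 0.05, 0.05]))  # 4 options, one dominates: ~0.587
```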

9.
We discuss two issues in using mixtures of polynomials (MOPs) for inference in hybrid Bayesian networks. MOPs were proposed by Shenoy and West for mitigating the problem of integration in inference in hybrid Bayesian networks. First, in defining MOPs for multi-dimensional functions, one requirement is that the pieces where the polynomials are defined are hypercubes. We discuss relaxing this condition so that each piece is defined on a region called a hyper-rhombus. This relaxation means that MOPs are closed under the transformations required for multi-dimensional linear deterministic conditionals, such as Z = X + Y. It also allows us to construct MOP approximations of the probability density functions (PDFs) of multi-dimensional conditional linear Gaussian distributions using a MOP approximation of the PDF of the univariate standard normal distribution. Second, Shenoy and West suggest using the Taylor series expansion of differentiable functions for finding MOP approximations of PDFs. We describe a new method for finding MOP approximations based on Lagrange interpolating polynomials (LIP) with Chebyshev points, and show how the LIP method can be used to find efficient MOP approximations of PDFs. We illustrate our methods using conditional linear Gaussian PDFs in one, two, and three dimensions, and conditional log-normal PDFs in one and two dimensions. We compare the efficiency of the hyper-rhombus condition with the hypercube condition, and the LIP method with the Taylor series method.
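A minimal sketch of Lagrange interpolation at Chebyshev points for the univariate standard normal PDF; the interval [-3, 3] and the node count are illustrative choices, and the result is a plain interpolating polynomial rather than the paper's full MOP construction (which also enforces nonnegativity and normalization).

```python
import math

def chebyshev_points(a, b, n):
    """n Chebyshev points on [a, b]; clustering near the endpoints keeps
    the interpolation error small (no Runge oscillation)."""
    return [(a + b) / 2 + (b - a) / 2 * math.cos((2 * k + 1) * math.pi / (2 * n))
            for k in range(n)]

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

def phi(z):
    """Standard normal PDF."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

xs = chebyshev_points(-3, 3, 9)              # 9 nodes -> degree-8 polynomial
ys = [phi(x) for x in xs]
print(phi(1.0), lagrange_eval(xs, ys, 1.0))  # close agreement at a test point
```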

10.
We consider all-optical networks that use wavelength-division multiplexing and employ wavelength conversion at specific nodes in order to maximize their capacity usage. We investigate the effect of allowing reroutings on the number of necessary wavelength converters. We disprove a claim of Wilfong and Winkler [G. Wilfong, P. Winkler, Ring routing and wavelength translation, in: Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’98, 1998, pp. 333-341] according to which reroutings do not have any effect on the number of necessary wavelength converters on bidirected networks. We show that there exist (bidirected) networks on n nodes that require Θ(n) converters without reroutings, but only O(1) converters if reroutings are allowed. We also address the cases of undirected networks and networks with shortest-path routings. In each case, we resolve the complexity of computing optimal placements of converters.

11.
Let G=(V,E) be a graph with vertex set V and edge set E. The k-coloring problem is to assign a color (a number chosen in {1,…,k}) to each vertex of G so that no edge has both endpoints with the same color. The adaptive memory algorithm is a hybrid evolutionary heuristic that uses a central memory. At each iteration, the information contained in the central memory is used to produce an offspring solution, which is then possibly improved using a local search algorithm. The resulting solution is finally used to update the central memory. We describe an adaptive memory algorithm for the k-coloring problem. Computational experiments give evidence that this new algorithm is competitive with, and simpler and more flexible than, the best known graph coloring algorithms.
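A heavily simplified sketch of the adaptive-memory loop (central pool, offspring, local search, memory update); the uniform crossover and greedy repair used here stand in for the paper's more sophisticated recombination and local search, so this only illustrates the overall scheme.

```python
import random

def conflicts(adj, coloring):
    """Number of monochromatic edges (each edge counted once)."""
    return sum(1 for u in adj for v in adj[u] if v > u and coloring[u] == coloring[v])

def local_search(adj, coloring, k, steps=2000):
    """Greedy repair: move a random conflicted vertex to its least-conflicting color."""
    for _ in range(steps):
        bad = [u for u in adj if any(coloring[u] == coloring[v] for v in adj[u])]
        if not bad:
            break
        u = random.choice(bad)
        coloring[u] = min(range(k),
                          key=lambda c: sum(coloring[v] == c for v in adj[u]))
    return coloring

def adaptive_memory_coloring(adj, k, pool_size=8, iters=50):
    """Keep a pool of solutions (the central memory), build an offspring
    from two parents, improve it, then write it back into the memory."""
    nodes = list(adj)
    pool = [local_search(adj, {u: random.randrange(k) for u in nodes}, k)
            for _ in range(pool_size)]
    for _ in range(iters):
        p1, p2 = random.sample(pool, 2)
        # offspring: inherit each vertex's color from a random parent
        child = {u: (p1 if random.random() < 0.5 else p2)[u] for u in nodes}
        child = local_search(adj, child, k)
        worst = max(range(pool_size), key=lambda i: conflicts(adj, pool[i]))
        if conflicts(adj, child) <= conflicts(adj, pool[worst]):
            pool[worst] = child          # memory update: replace the worst solution
    return min(pool, key=lambda s: conflicts(adj, s))

# 5-cycle: 3-colorable but not 2-colorable
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(conflicts(adj, adaptive_memory_coloring(adj, k=3)))  # expect 0
```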

12.
Block coordinate update (BCU) methods enjoy low per-update computational complexity because at each step only one or a few block variables need to be updated among a possibly large number of blocks. They are also easily parallelized and have thus been particularly popular for solving problems involving large-scale datasets and/or many variables. In this paper, we propose a primal–dual BCU method for solving linearly constrained convex programs with multi-block variables. The method is an accelerated version of a primal–dual algorithm proposed by the authors, which applies randomization in selecting the block variables to update and establishes an O(1/t) convergence rate under a convexity assumption. We show that the rate can be accelerated to O(1/t^2) if the objective is strongly convex. In addition, if one block variable is independent of the others in the objective, we show that the algorithm can be modified to achieve a linear rate of convergence. Numerical experiments show that the accelerated method performs stably with a single set of parameters, while the original method needs parameter tuning for different datasets to achieve a comparable level of performance.
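A minimal sketch of the randomized block-update idea on an unconstrained least-squares toy problem; the paper's method additionally handles linear constraints through a primal-dual step, which this omits, and all problem data, block partition, and step size are illustrative assumptions.

```python
import random

def randomized_bcd(A, b, n_blocks, iters=5000, seed=0):
    """Minimize ||Ax - b||^2 by updating one randomly chosen coordinate
    block per iteration -- each step touches only part of the variables."""
    random.seed(seed)
    m, n = len(A), len(A[0])
    blocks = [list(range(i, n, n_blocks)) for i in range(n_blocks)]
    x = [0.0] * n
    # conservative step: 1 / ||A||_F^2 (a crude bound on block Lipschitz constants)
    step = 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    for _ in range(iters):
        blk = random.choice(blocks)
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        for j in blk:                         # gradient step on this block only
            g = 2.0 * sum(A[i][j] * r[i] for i in range(m))
            x[j] -= step * g
    return x

A = [[2.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 0.0]]
b = [3.0, 2.0, 2.0]
print(randomized_bcd(A, b, n_blocks=2))       # approaches the solution x = (1, 1, 1)
```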

13.
Given n planar existing facility locations, a planar new facility location X is called efficient if there is no other location Y for which the rectilinear distance between Y and each existing facility is at least as small as between X and each existing facility, and strictly smaller for at least one existing facility. Rectilinear distances are typically used to measure travel distances between points via rectilinear aisles or street networks. We first present a simple arrow algorithm, based entirely on geometrical analysis, that constructs all efficient locations. We then present a row algorithm of order O(n log n) that constructs all efficient locations, and establish that no alternative algorithm can have a lower order.
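Efficiency here is Pareto dominance on the vector of rectilinear distances; below is a brute-force checker over a candidate grid (nothing like the paper's O(n log n) construction, just the definition made executable, with illustrative facility coordinates):

```python
def rect(p, q):
    """Rectilinear (L1) distance between two planar points."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def dominates(y, x, facilities):
    """Y dominates X if Y is at least as close to every facility
    and strictly closer to at least one."""
    dy = [rect(y, f) for f in facilities]
    dx = [rect(x, f) for f in facilities]
    return all(a <= b for a, b in zip(dy, dx)) and any(a < b for a, b in zip(dy, dx))

def is_efficient(x, facilities, candidates):
    """X is efficient (w.r.t. a candidate set) if no candidate dominates it."""
    return not any(dominates(y, x, facilities) for y in candidates)

facilities = [(0, 0), (4, 0), (2, 3)]
grid = [(i / 2, j / 2) for i in range(-2, 11) for j in range(-2, 9)]
print(is_efficient((2, 1), facilities, grid))   # between the facilities: True
print(is_efficient((9, 9), facilities, grid))   # far outside: dominated -> False
```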

14.
In this paper, an approximate method for the analysis of open networks of queues in tandem and with blocking is proposed. The network consists of M single-server queueing stations with exogenous Poisson arrival processes and exponentially distributed service times. The analysis is based on the method of decomposition, where the total network is broken down into individual queues which are analyzed as M/C2/1/N queues, assuming Poisson arrival and departure processes, to find the steady-state probabilities of the number of customers at each station. The procedure reduces the problem to a number of elementary operations which can be performed efficiently with the aid of a computer. We also compare different definitions of blocking. Numerical results are given to demonstrate the accuracy of the new method.
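For intuition, the building block of such decompositions is a finite-capacity single-server queue; here is a minimal sketch for the simpler M/M/1/N case (the paper's subsystems are M/C2/1/N with two-phase Coxian service, which this does not capture):

```python
def mm1n_steady_state(lam, mu, N):
    """Steady-state probabilities p_0..p_N for an M/M/1/N queue
    (a birth-death chain): p_n proportional to rho^n, normalized."""
    rho = lam / mu
    weights = [rho ** n for n in range(N + 1)]
    total = sum(weights)
    return [w / total for w in weights]

p = mm1n_steady_state(lam=0.8, mu=1.0, N=4)
print(p)        # queue-length distribution at the station
print(p[-1])    # p_N: blocking (loss) probability seen by Poisson arrivals
```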

15.
We consider a new class of huge-scale problems: problems with sparse subgradients. The most important functions of this type are piecewise linear. For optimization problems with uniform sparsity of the corresponding linear operators, we suggest a very efficient implementation of subgradient iterations whose total cost depends logarithmically on the dimension. This technique is based on a recursive update of the results of matrix/vector products and the values of symmetric functions. It works well, for example, for matrices with few nonzero diagonals and for max-type functions. We show that the updating technique can be efficiently coupled with the simplest subgradient methods: the unconstrained minimization method of B. Polyak and the constrained minimization scheme of N. Shor. Similar results can be obtained for a new nonsmooth random variant of a coordinate descent scheme. We also present promising results of preliminary computational experiments.
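A minimal sketch of subgradient iterations with Polyak's step on a small max-type function; the dense updates below ignore the paper's sparse recursive machinery, and knowing the optimal value f* is the standard assumption of Polyak's rule.

```python
def subgradient_polyak(A, b, x, f_star, iters=500):
    """Minimize f(x) = max_i (a_i . x + b_i) with Polyak's step
    (f(x) - f*) / ||g||^2, where g is a row achieving the max."""
    for _ in range(iters):
        vals = [sum(ai * xi for ai, xi in zip(row, x)) + bi
                for row, bi in zip(A, b)]
        i = max(range(len(vals)), key=vals.__getitem__)
        f, g = vals[i], A[i]                  # active piece gives a subgradient
        gg = sum(gi * gi for gi in g)
        if f - f_star <= 1e-12 or gg == 0:
            break
        t = (f - f_star) / gg                 # Polyak step length
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

# f(x) = max(x1 + x2, -x1, -x2); minimum value 0, attained at the origin
A = [[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
b = [0.0, 0.0, 0.0]
print(subgradient_polyak(A, b, x=[3.0, -2.0], f_star=0.0))  # -> near (0, 0)
```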

16.
We present algorithms for maintaining data structures supporting fast (polylogarithmic) point-location and ray-shooting queries in arrangements of hyperplanes. This data structure allows for deletion and insertion of hyperplanes. Our algorithms use random bits in the construction of the data structure but do not make any assumptions about the update sequence or the hyperplanes in the input. The query bound for our data structure is Õ(polylog(n)), where n is the number of hyperplanes at any given time, and the Õ notation indicates that the bound holds with high probability, where the probability is solely with respect to randomization in the data structure. By high probability we mean that the probability of error is inversely proportional to a large-degree polynomial in n. The space requirement is Õ(n^d). The cost of update is Õ(n^(d-1) log n). The expected cost of update is O(n^(d-1)); the expectation is again solely with respect to randomization in the data structure. Our algorithm is extremely simple. We also give a related algorithm with optimal Õ(log n) query time, expected O(n^d) space requirement, and amortized O(n^(d-1)) expected cost of update. Moreover, our approach has a versatile quality which is likely to have further applications to other dynamic algorithms. For d = 2, 3 we also show how to obtain polylogarithmic update time in the CRCW PRAM model so that the processor-time product matches (within a polylogarithmic factor) the sequential update time.

17.
In this paper, we provide a unified iteration complexity analysis for a family of general block coordinate descent methods, covering popular methods such as block coordinate gradient descent and block coordinate proximal gradient, under various coordinate update rules. We unify these algorithms under the so-called block successive upper-bound minimization (BSUM) framework, and show that for a broad class of multi-block nonsmooth convex problems, all algorithms covered by the BSUM framework achieve a global sublinear iteration complexity of O(1/r), where r is the iteration index. Moreover, for the case of block coordinate minimization where each block is minimized exactly, we establish the sublinear convergence rate of O(1/r) without a per-block strong convexity assumption.

18.
We show that Pearl's causal networks can be described using causal compositional models (CCMs) in the valuation-based systems (VBS) framework. One major advantage of the VBS framework is that, because VBS generalizes several uncertainty theories (e.g., probability theory, a version of possibility theory where combination is the product t-norm, Spohn's epistemic belief theory, and Dempster–Shafer belief function theory), CCMs, initially described in probability theory, become available in all uncertainty calculi that fit in the VBS framework. We describe conditioning and interventions in CCMs. Another advantage of using CCMs in the VBS framework is that both conditioning and intervention can be described in an elegant and unifying algebraic way for the same CCM, without any graphical manipulations of the causal network. We describe how conditioning and intervention can be computed for a simple example with a hidden (unobservable) variable, and illustrate the algebraic results using numerical examples in some of the specific uncertainty calculi mentioned above.
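The difference between conditioning and intervention can be made concrete with a tiny confounded model evaluated by brute-force enumeration; the network and all numbers are an illustrative toy in the probabilistic calculus, not the paper's example.

```python
# Toy model with a confounder: U -> S, U -> C, and S -> C (all binary).
p_u = {1: 0.5, 0: 0.5}
p_s_given_u = {1: {1: 0.8, 0: 0.2}, 0: {1: 0.2, 0: 0.8}}      # P(S=s | U=u)
# P(C=1 | S=s, U=u) = 0.2 + 0.3*s + 0.4*u (illustrative numbers)
p_c1 = {(s, u): 0.2 + 0.3 * s + 0.4 * u for s in (0, 1) for u in (0, 1)}

def joint(u, s, c):
    pc1 = p_c1[(s, u)]
    return p_u[u] * p_s_given_u[u][s] * (pc1 if c == 1 else 1 - pc1)

# Conditioning: observing S=1 is also evidence about the confounder U.
num = sum(joint(u, 1, 1) for u in (0, 1))
den = sum(joint(u, 1, c) for u in (0, 1) for c in (0, 1))
print(num / den)                                   # P(C=1 | S=1)      = 0.82

# Intervention: do(S=1) cuts the arc U -> S, so U keeps its prior.
print(sum(p_u[u] * p_c1[(1, u)] for u in (0, 1)))  # P(C=1 | do(S=1)) = 0.70
```

The two answers differ precisely because observation carries information backward through the confounder while intervention does not.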

19.
The metric space model abstracts many proximity or similarity problems, where the most frequently considered primitives are range and k-nearest neighbor search, leaving out the similarity join, an extremely important primitive. In fact, despite the great attention this primitive has received in traditional and even multidimensional databases, little has been done for general metric databases.

We solve two variants of the similarity join problem: (1) range joins: given two sets of objects and a distance threshold r, find all object pairs (one from each set) at distance at most r; and (2) k-closest pair joins: find the k closest object pairs (one from each set). To this end, we devise a new metric index, coined List of Twin Clusters (LTC), which indexes both sets jointly, instead of the natural approach of indexing one or both sets independently. Finally, we show how to use the LTC to solve classical range queries. Our results show significant speedups over the basic quadratic-time naive alternative for both join variants, and that the LTC is competitive with the original list of clusters when solving range queries. Furthermore, we show that our technique has great potential for further improvements.
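The quadratic-time naive alternative mentioned above is just a nested-loop join; a minimal sketch for the range-join variant under an arbitrary metric (Euclidean chosen here purely for illustration):

```python
import math

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def naive_range_join(A, B, r, dist=euclidean):
    """Quadratic-time similarity join: all pairs (a, b), a in A, b in B,
    with dist(a, b) <= r. This is the O(|A||B|) baseline the LTC index
    is designed to beat by indexing both sets jointly."""
    return [(a, b) for a in A for b in B if dist(a, b) <= r]

A = [(0, 0), (1, 1), (5, 5)]
B = [(0, 1), (4, 5), (9, 9)]
print(naive_range_join(A, B, r=1.5))
# [((0, 0), (0, 1)), ((1, 1), (0, 1)), ((5, 5), (4, 5))]
```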

20.
Motivated by questions related to a fragmentation process which has been studied by Aldous, Pitman, and Bertoin, we use the continuous-time ballot theorem to establish some results regarding the lengths of the excursions of Brownian motion and related processes. We show that the distribution of the lengths of the excursions below the maximum for Brownian motion conditioned to first hit λ>0 at time t is not affected by conditioning the Brownian motion to stay below a line segment from (0,c) to (t,λ). We extend a result of Bertoin by showing that the length of the first excursion below the maximum for a negative Brownian excursion plus drift is a size-biased pick from all of the excursion lengths, and we describe the law of a negative Brownian excursion plus drift after this first excursion. We then use the same methods to prove similar results for the excursions of more general Markov processes.
