Similar Articles
20 similar articles found
1.
In recent years we have witnessed remarkable progress in providing efficient algorithmic solutions to the problem of computing best journeys (or routes) in schedule-based public transportation systems. We now have models to represent timetables that allow us to answer queries for optimal journeys in a few milliseconds, even at very large scale. Such models can be classified into two types: those representing the timetable as an array, and those representing it as a graph. Array-based models have been shown to be very effective in terms of query time, while graph-based ones usually answer queries by computing shortest paths, and hence are suitable to be combined with the speed-up techniques developed for road networks. In this paper, we study the behavior of graph-based models in the prominent case of dynamic scenarios, i.e., when delays may occur to the original timetable. In particular, we make the following contributions. First, we consider the graph-based reduced time-expanded model and give a simplified and optimized routine for handling delays, along with a re-engineered and fine-tuned query algorithm. Second, we propose a new graph-based model, namely the dynamic timetable model, natively tailored to efficiently incorporate dynamic updates, along with a query algorithm and a routine for handling delays. Third, we show how to adapt the ALT algorithm to such graph-based models. We chose this speed-up technique because it supports dynamic changes, and a careful implementation of it can significantly boost its performance. Finally, we provide an experimental study to assess the effectiveness of all proposed models and algorithms, and to compare them with the state-of-the-art array-based solution for the dynamic case. We evaluate both new and existing approaches by implementing and testing them on real-world timetables subject to synthetic delays. Our experimental results show that: (i) the dynamic timetable model is the best model for handling delays; (ii) graph-based models are competitive with array-based models with respect to query time in the dynamic case; (iii) the dynamic timetable model compares favorably with both the original and the reduced time-expanded model regarding space; (iv) combining the graph-based models with speed-up techniques designed for road networks, such as ALT, is a very promising approach.
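As a concrete illustration of the ideas above, the sketch below shows a minimal dynamic timetable graph: each edge stores a sorted list of departure/arrival connections, a delay is handled by shifting one connection, and earliest-arrival queries run a Dijkstra-like label-setting search. The names and data layout are illustrative assumptions, not the paper's actual DTM data structures.

```python
import heapq
from bisect import insort

class DynamicTimetable:
    """Toy dynamic timetable graph (illustrative, not the paper's model)."""

    def __init__(self):
        self.connections = {}   # (u, v) -> sorted [(dep, arr), ...]
        self.neighbors = {}     # u -> set of v

    def add_connection(self, u, v, dep, arr):
        self.neighbors.setdefault(u, set()).add(v)
        insort(self.connections.setdefault((u, v), []), (dep, arr))

    def apply_delay(self, u, v, dep, arr, delay):
        # Delay handling: remove the affected connection and re-insert
        # it with shifted times, keeping the list sorted.
        conns = self.connections[(u, v)]
        conns.remove((dep, arr))
        insort(conns, (dep + delay, arr + delay))

    def earliest_arrival(self, source, target, start_time):
        # Dijkstra-like label setting: a station's label is the earliest
        # known arrival time there. Assumes no overtaking, i.e. later
        # departures on an edge never arrive earlier.
        best = {source: start_time}
        heap = [(start_time, source)]
        while heap:
            t, u = heapq.heappop(heap)
            if u == target:
                return t
            if t > best.get(u, float("inf")):
                continue
            for v in self.neighbors.get(u, ()):
                # First connection departing at or after arrival at u.
                for dep, arr in self.connections[(u, v)]:
                    if dep >= t:
                        if arr < best.get(v, float("inf")):
                            best[v] = arr
                            heapq.heappush(heap, (arr, v))
                        break
        return None   # target unreachable after start_time
```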

2.
Detecting low-diameter clusters is an important graph-based data mining technique used in social network analysis, bioinformatics and text-mining. Low pairwise distances within a cluster can facilitate fast communication or good reachability between vertices in the cluster. Formally, a subset of vertices that induces a subgraph of diameter at most k is called a k-club. For low values of the parameter k, this model offers a graph-theoretic relaxation of the clique model that formalizes the notion of a low-diameter cluster. Using a combination of graph decomposition and model decomposition techniques, we demonstrate how the fundamental optimization problem of finding a maximum-size k-club can be solved optimally on large-scale benchmark instances that are available in the public domain. Our approach circumvents the use of complicated formulations of the maximum k-club problem in favor of a simple relaxation based on necessary conditions, combined with canonical hypercube cuts introduced by Balas and Jeroslow.
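For concreteness, the core feasibility test behind the model is easy to state in code: a vertex subset is a k-club iff every pairwise distance in the induced subgraph is at most k. Below is a minimal sketch assuming an adjacency-set representation; the paper's contribution is the decomposition and cutting-plane machinery built around such tests, not this check itself.

```python
from collections import deque

def is_k_club(adj, subset, k):
    """Check whether `subset` induces a subgraph of diameter <= k.

    adj: dict mapping each vertex to the set of its neighbors
    (simple undirected graph).
    """
    S = set(subset)
    for s in S:
        # BFS restricted to the induced subgraph, truncated at depth k.
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            if dist[u] == k:
                continue
            for v in adj[u] & S:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        if len(dist) < len(S):   # some vertex farther than k, or unreachable
            return False
    return True
```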

3.
Given a simple undirected graph, the problem of finding a maximum subset of vertices that satisfies a nontrivial, interesting property Π hereditary on induced subgraphs is known to be NP-hard. Many well-known graph properties meet the above conditions, making the problem widely applicable. This paper proposes a general-purpose exact algorithmic framework to solve this problem and investigates key algorithm design and implementation issues that are helpful in tailoring the general framework to specific graph properties. The performance of the algorithms so derived for the maximum s-plex and the maximum s-defective clique problems, which arise in network-based data mining applications, is assessed through a computational study.
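As an illustration of properties the framework can be tailored to, here are the two membership tests named above in a minimal adjacency-set sketch (helper names are illustrative):

```python
def is_s_plex(adj, subset, s):
    # S is an s-plex if every vertex of S has >= |S| - s neighbors in S.
    # adj: dict vertex -> set of neighbors.
    S = set(subset)
    return all(len(adj[v] & S) >= len(S) - s for v in S)

def is_s_defective_clique(adj, subset, s):
    # S is an s-defective clique if its induced subgraph misses at most
    # s edges of the complete graph on S.
    S = list(subset)
    edges = sum(1 for i in range(len(S)) for j in range(i + 1, len(S))
                if S[j] in adj[S[i]])
    return edges >= len(S) * (len(S) - 1) // 2 - s
```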

4.
The Markov Decision Process (MDP) framework is a tool for the efficient modelling and solving of sequential decision-making problems under uncertainty. However, it reaches its limits when state and action spaces are large, as can happen for spatially explicit decision problems. Factored MDPs and dedicated solution algorithms have been introduced to deal with large factored state spaces, but the case of large action spaces remains an issue. In this article, we define graph-based Markov Decision Processes (GMDPs), a particular Factored MDP framework that exploits the factorization of both the state space and the action space of a decision problem. Both spaces are assumed to have the same dimension. Transition probabilities and rewards are factored according to a single graph structure, whose nodes represent pairs of state/decision variables of the problem. The complexity of this representation grows only linearly with the size of the graph, whereas the complexity of exact resolution grows exponentially. We propose an approximate solution algorithm that exploits the structure of a GMDP and whose complexity grows only quadratically with the size of the graph and exponentially with the maximum number of neighbours of any node. This algorithm, referred to as MF-API, belongs to the family of Approximate Policy Iteration (API) algorithms. It relies on a mean-field approximation of the value function of a policy and on a search limited to the (suboptimal) set of local policies. We compare it, in terms of performance, with two state-of-the-art algorithms for Factored MDPs: SPUDD and Approximate Linear Programming (ALP). Our experiments show that SPUDD is not generally applicable to solving GMDPs, due to the size of the action spaces we want to tackle. On the other hand, ALP can be adapted to solve GMDPs. We show that ALP is faster than MF-API and provides solutions of similar quality for most problems. However, for some problems MF-API provides significantly better policies, and in all cases it provides a better approximation of the value function of approximate policies. These promising results show that the GMDP model offers a convenient framework for modelling and solving a large range of spatial and structured planning problems that can arise in many different domains where processes are managed over networks: natural resources, agriculture, computer networks, etc.
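A minimal sketch of the GMDP factorization, plus Monte-Carlo evaluation of a local policy (the policy class MF-API searches over). The interfaces `local_transition`, `local_reward` and `policy` are assumptions for illustration; the paper's MF-API uses a mean-field approximation of the value function rather than sampling.

```python
def step(state, action, neighbors, local_transition):
    # GMDP factorization: each node's next state depends only on its own
    # state/action and on the states of its graph neighbors.
    return {
        i: local_transition(state[i], action[i],
                            [state[j] for j in neighbors[i]])
        for i in state
    }

def evaluate_local_policy(policy, neighbors, local_transition, local_reward,
                          init, gamma=0.95, horizon=50, runs=200):
    # Monte-Carlo value estimate of a *local* policy: each node's action
    # depends only on its own neighborhood.
    total = 0.0
    for _ in range(runs):
        state = dict(init)
        discount = 1.0
        for _ in range(horizon):
            action = {i: policy(i, state, neighbors[i]) for i in state}
            total += discount * sum(local_reward(state[i], action[i])
                                    for i in state)
            state = step(state, action, neighbors, local_transition)
            discount *= gamma
    return total / runs
```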

5.
IDEA (Imprecise Data Envelopment Analysis) extends DEA so it can simultaneously treat exact and imprecise data, where the latter are known only to obey ordinal relations or to lie within prescribed bounds. AR-IDEA extends this further to include AR (Assurance Region) and similar approaches to constraints on the variables. In order to provide one unified approach, a further extension also includes cone-ratio envelopment approaches to simultaneous transformations of the data and constraints on the variables. The present paper removes a limitation of IDEA and AR-IDEA that requires access to the actually attained maximum values in the data. This is accomplished by introducing a dummy variable that supplies the needed normalizations on maximal values, in a way that continues to provide linear programming equivalents to the original problems. This dummy variable can be regarded as a new DMU (Decision Making Unit), referred to as a CMD (Column Maximum DMU).
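For background, the sketch below solves the classical CCR DEA multiplier LP with scipy; this is plain DEA only. The paper's IDEA/AR-IDEA machinery adds imprecise-data and assurance-region constraints on the weights, and the CMD dummy variable removes the need for the maximal values to be known in advance.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Classical CCR DEA efficiency of DMU j0 (multiplier form).

    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs), all data positive.
    Maximize u . y_j0 subject to v . x_j0 = 1 and u . y_j <= v . x_j.
    """
    n, m = X.shape
    _, s = Y.shape
    # Variables: input weights v, then output weights u.
    c = np.concatenate([np.zeros(m), -Y[j0]])          # maximize u . y_j0
    A_eq = np.concatenate([X[j0], np.zeros(s)])[None, :]
    b_eq = [1.0]                                       # v . x_j0 = 1
    A_ub = np.hstack([-X, Y])                          # u.Y_j - v.X_j <= 0
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (m + s))
    return -res.fun

# Toy usage: 3 DMUs, 2 inputs, 1 output.
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0]])
Y = np.array([[1.0], [1.0], [1.0]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])
```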

6.
A study of the worst-case performance of Wong's heuristic for the Steiner problem in directed networks (SPDN) is presented in this paper. SPDN is a classic combinatorial optimization problem; it is NP-hard and is known as an optimization model for a broad class of problems in networks. Several exact and heuristic approaches have been designed for SPDN over the last twenty-five years. Some papers analyze the theoretical and experimental behavior of heuristics for SPDN, especially for undirected networks, but none of these has studied the worst-case performance of Wong's heuristic. In this paper, we find a lower bound for that performance and show that this bound is consistent with comparable results in the literature on SPDN and its undirected version.

7.
We consider a clique relaxation model based on the concept of relative vertex connectivity. It extends the classical definition of a k-vertex-connected subgraph by requiring that the minimum number of vertices whose removal results in a disconnected (or a trivial) graph be proportional to the size of this subgraph, rather than fixed at k. We then further generalize the proposed approach to require the vertex connectivity of a subgraph to be some function f of its size. We discuss connections of the proposed models with other clique relaxation ideas from the literature and demonstrate that our generalized framework, referred to as f-vertex-connectivity, encompasses other known vertex-connectivity-based models, such as s-bundle and k-block. We study related computational complexity issues and show that finding maximum subgraphs with relatively large vertex connectivity is NP-hard. An interesting special case, which extends the R-robust 2-club model recently introduced in the literature, is also considered. In terms of solution techniques, we first develop general linear mixed integer programming (MIP) formulations. Then we describe an effective exact algorithm that iteratively solves a series of simpler MIPs, along with some enhancements, in order to obtain an optimal solution for the original problem. Finally, we perform computational experiments on several classes of random and real-life networks to demonstrate the performance of the developed solution approaches and illustrate some properties of the proposed clique relaxation models.
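A minimal feasibility check for the generalized model, assuming networkx is available: the induced subgraph's vertex connectivity must be at least f of its size, and relative vertex connectivity is the special case f(n) = α·n. This is a sketch of the definition only, not the paper's MIP-based solution approach.

```python
import networkx as nx

def satisfies_f_connectivity(G, subset, f):
    """Check the f-vertex-connectivity condition: the subgraph induced
    by `subset` has vertex connectivity at least f(len(subset)).
    Uses networkx's max-flow based connectivity routine.
    """
    H = G.subgraph(subset)
    return nx.node_connectivity(H) >= f(len(subset))

# Relative vertex connectivity with ratio alpha: kappa(H) >= alpha * |H|.
alpha = 0.5
G = nx.complete_graph(6)
print(satisfies_f_connectivity(G, range(6), lambda n: alpha * n))  # True
```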

8.
Limited memory influence diagrams are graph-based models that describe decision problems with limited information, such as planning with teams and/or agents with imperfect recall. Solving a (limited memory) influence diagram is an NP-hard problem, often approached through local search. In this work we take a closer look at k-neighborhood local search approaches. We show that finding a k-neighboring strategy that improves on the current solution is W[1]-hard and hence unlikely to be polynomial-time tractable. We also show that finding a strategy that resembles an optimal strategy (but may have low expected utility) is NP-hard. We then develop fast schemes to perform approximate k-local search; experiments show that our methods improve on current local search algorithms with respect to both time and accuracy.
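A generic brute-force k-neighborhood local search sketch over a table-based strategy (the interfaces are illustrative, not the authors' accelerated schemes). Its inner loops are exponential in k, which is consistent with the W[1]-hardness result above.

```python
from itertools import combinations, product

def k_local_search(strategy, domains, utility, k=2):
    """First-improvement k-neighborhood local search.

    strategy: dict mapping decision variables to chosen values.
    domains:  dict mapping each variable to its candidate values.
    utility:  function scoring a full strategy (higher is better).
    """
    best, best_u = dict(strategy), utility(strategy)
    improved = True
    while improved:
        improved = False
        for vars_ in combinations(best, k):          # choose k decisions
            for vals in product(*(domains[x] for x in vars_)):
                cand = dict(best)
                cand.update(zip(vars_, vals))        # perturb k entries
                u = utility(cand)
                if u > best_u:                       # first-improvement move
                    best, best_u, improved = cand, u, True
    return best, best_u
```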

9.
One consequence of the graph minor theorem is that for every k there exists a finite obstruction set Obs(TW ≤ k). However, relatively little is known about these sets, and very few general obstructions are known. The ones that are known are the cliques, and graphs which are formed by removing a few edges from a clique. This paper gives several general constructions of minimal forbidden minors which are sparse in the sense that the ratio of the treewidth to the number of vertices n does not approach 1 as n approaches infinity. We accomplish this by a novel combination of using brambles to provide lower bounds and achievable sets to demonstrate upper bounds. Additionally, we determine the exact treewidth of other basic graph constructions which are not minimal forbidden minors.

10.
11.
In this paper, the exact solution for the average path length of the Barabási–Albert model is given. The average path length is an important property of networks and attracts much attention in many areas. The Barabási–Albert model, also called the scale-free model, is a popular model for real systems, so it is valuable to examine its average path length. Two answers regarding the exact solution for the average path length of scale-free networks have already been provided, by Newman and by Bollobás. Newman proposed that the average path length grows as log(n) with the network size n. However, Bollobás suggested that while this is true when m = 1, the answer changes to log(n)/log(log(n)) when m > 1. In this paper, we propose that the exact solution for the average path length of the BA model should approach log(n)/log(log(n)) regardless of the value of m. Finally, a simulation is presented to show the validity of our result.
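A quick empirical check in the spirit of the simulation mentioned above, assuming networkx is available: it contrasts measured average path lengths of BA graphs with the two candidate growth laws.

```python
import math
import networkx as nx

# Compare measured average path length L against log(n) and
# log(n)/log(log(n)) for growing BA graphs and both m regimes.
for n in (1000, 2000, 4000):
    for m in (1, 3):
        G = nx.barabasi_albert_graph(n, m, seed=42)  # connected by construction
        L = nx.average_shortest_path_length(G)
        print(f"n={n:5d} m={m}  L={L:6.3f}  "
              f"log n={math.log(n):5.2f}  "
              f"log n/log log n={math.log(n) / math.log(math.log(n)):5.2f}")
```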

12.
A relevant financial planning problem is the periodical rebalancing of a portfolio of assets so that the portfolio's total value exhibits certain characteristics. This problem can be modelled using a transition graph G to represent the future state-space evolution of the corresponding economy, and mathematically formulated as a linear programming problem. We present two different mathematical formulations of the problem. The first considers explicitly the set of possible scenarios (scenario-based approach), while the second considers implicitly the whole set of scenarios provided by the graph G (graph-based approach). Unfortunately, for both formulations the size of the corresponding linear programs can be huge even for simple financial problems. However, the graph-based approach seems to be the more powerful model, since it allows one to consider a huge number of scenarios in a very compact formulation. The purpose of this paper is to present both heuristic and exact methods for the solution of large-scale multi-period financial planning problems using the graph-based model. In particular, we propose lower and upper bounds and three exact methods based on column, row and column/row generation, respectively. Since the method based on column/row generation simultaneously exploits both the primal and the dual structure of the problem, we call it the Criss-Cross generation method. Computational results are given to prove the effectiveness of the proposed methods.

13.
Spearman's Footrule, D, is the sum of the absolute values of the differences between the ranks in two rankings of n objects. For the case of equally likely permutations, tables of the exact cumulative distribution function (c.d.f.) of D are given for 11 ⩽ n ⩽ 18. The maximum difference between the exact c.d.f. of D and the normal approximation is given, as well as the maximum difference between the exact c.d.f. of D and the normal approximation with a continuity correction.
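For small n, the exact distribution of D is easy to compute by brute-force enumeration, as sketched below; the paper's tables for 11 ⩽ n ⩽ 18 require smarter counting, since n! enumeration is only feasible up to roughly n = 10.

```python
from itertools import permutations

def footrule_cdf(n):
    """Exact c.d.f. of Spearman's Footrule D for equally likely
    permutations of n objects (one ranking fixed as identity).
    Feasible only for small n: enumerates all n! permutations.
    """
    counts = {}
    for p in permutations(range(1, n + 1)):
        d = sum(abs(r - i) for i, r in enumerate(p, start=1))
        counts[d] = counts.get(d, 0) + 1
    total = sum(counts.values())
    cdf, acc = {}, 0
    for d in sorted(counts):
        acc += counts[d]
        cdf[d] = acc / total
    return cdf

print(footrule_cdf(4))   # e.g. {0: 1/24, 2: ..., ...}
```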

14.
The stochastic uncapacitated single allocation p-hub center problem is an extension of the deterministic version, which aims to minimize the longest origin–destination path in a hub-and-spoke network. Considering the stochastic nature of travel times on links is important when designing a network to guarantee the quality of service, measured by a maximum delivery time, for a proportion of all deliveries. We propose an efficient reformulation of the stochastic p-hub center problem and develop exact solution approaches based on variable reduction and a separation algorithm. We report numerical results showing the effectiveness of our new reformulations and approaches in finding global solutions of small- to medium-sized problems. The combination of model reformulation and a separation algorithm is particularly noteworthy in terms of computational speed.

15.
The computation of Global Climate Models (GCMs) presents significant numerical challenges. This paper presents new algorithms based on sparse occupancy trees for learning and emulating the long wave radiation parameterization in the NCAR CAM climate model. This emulation occupies by far the most significant portion of the computational time in the implementation of the model. From the mathematical point of view, this parameterization can be considered as a mapping R^220 → R^33 which is to be learned from scattered data samples (x_i, y_i), i = 1, …, N. Hence, the problem represents a typical application of high-dimensional statistical learning. The goal is to develop learning schemes that are not only accurate and reliable but also computationally efficient and capable of adapting to time-varying environmental states. The algorithms developed in this paper are compared with other approaches such as neural networks, nearest neighbor methods, and regression trees as to how these various goals are met.
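The paper's sparse occupancy trees are not reproduced here; as a hedged illustration of the learning task's shape, below is a minimal version of one of the baseline families it compares against, a k-nearest-neighbour emulator for a map R^220 → R^33 (toy data, illustrative names).

```python
import numpy as np

def knn_emulate(X_train, Y_train, X_query, k=8):
    """Minimal kNN regression emulator: average the outputs of the k
    training points nearest to each query point.
    X_train: (N, 220), Y_train: (N, 33), X_query: (Q, 220).
    """
    # Pairwise squared distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = (X_query ** 2).sum(1)[:, None] + (X_train ** 2).sum(1)[None, :] \
         - 2.0 * X_query @ X_train.T
    idx = np.argpartition(d2, k, axis=1)[:, :k]   # k nearest per query
    return Y_train[idx].mean(axis=1)              # (Q, 33)

# Toy usage with the paper's dimensions:
rng = np.random.default_rng(0)
Xtr, Ytr = rng.normal(size=(5000, 220)), rng.normal(size=(5000, 33))
print(knn_emulate(Xtr, Ytr, rng.normal(size=(10, 220))).shape)  # (10, 33)
```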

16.
In the k-partition problem (k-PP), one is given an edge-weighted undirected graph and must partition the node set into at most k subsets so as to minimise (or maximise) the total weight of the edges that have both end-nodes in the same subset. Various hierarchical variants of this problem have been studied in the context of data mining. We consider a 'two-level' variant that arises in mobile wireless communications. We show that an exact algorithm based on intelligent preprocessing, cutting planes and symmetry-breaking is capable of solving small- and medium-size instances to proven optimality, and of providing strong lower bounds for larger instances.
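For concreteness, the k-PP objective and a one-pass local-search baseline for the minimisation variant are easy to state in code; this is a sketch, not the paper's branch-and-cut algorithm.

```python
def within_weight(edges, partition):
    """k-PP objective: total weight of edges with both endpoints in the
    same subset. edges: list of (u, v, w); partition: node -> label."""
    return sum(w for u, v, w in edges if partition[u] == partition[v])

def greedy_improve(edges, nodes, partition, k):
    # One pass of a simple local search: move each node to the subset
    # that minimises its within-subset edge weight.
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    for u in nodes:
        cost = [0.0] * k
        for v, w in adj.get(u, []):
            cost[partition[v]] += w
        partition[u] = min(range(k), key=cost.__getitem__)
    return partition
```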

17.
Association rule mining from a transaction database (TDB) requires the detection of frequently occurring patterns, called frequent itemsets (FIs), whose number may be huge. Recent approaches to FI mining use the closed itemset paradigm to limit the mining effort to a subset of the entire FI family, the frequent closed itemsets (FCIs). We show here how FCIs can be mined incrementally yet efficiently whenever a new transaction is added to a database whose mining results are already available. Our approach to mining FIs in dynamic databases relies on recent results on incremental lattice restructuring and lattice construction. The fundamentals of the incremental FCI mining task are discussed, and its reduction to the problem of lattice update, via the CI family, is made explicit. The related structural results underlie two algorithms for updating the set of FCIs of a given TDB upon the insertion of a new transaction. A straightforward method searches for necessary completions throughout the entire CI family, whereas a second method exploits lattice properties to limit the search to CIs that share at least one item with the new transaction. Efficient implementations of the parsimonious method are discussed in the paper, together with results from a preliminary study of the method's practical performance.
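A sketch of the lattice-update step described above, with support/frequency bookkeeping omitted: after inserting a transaction t, the closed itemsets of the updated database are (assuming the standard Galois-lattice update property) the old CIs plus their intersections with t, plus t itself; the parsimonious method scans only CIs sharing at least one item with t.

```python
def update_closed_itemsets(cis, new_t):
    """Update the closed-itemset family after inserting transaction new_t.

    cis: set of frozensets (current closed itemsets).
    Support counting, and hence the frequent/infrequent split, is omitted.
    """
    t = frozenset(new_t)
    new_cis = set(cis) | {t}
    for c in cis:
        if c & t:                 # parsimonious pruning: share an item
            new_cis.add(c & t)    # candidate new closed itemset
    return new_cis

cis = {frozenset({"a", "b"}), frozenset({"b", "c", "d"})}
print(update_closed_itemsets(cis, {"b", "d"}))
```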

18.
Hub-and-spoke networks are used to switch and transfer commodities between terminal nodes in distribution systems at minimum cost and/or time. The p-hub center allocation problem is to minimize the maximum travel time in a network by locating p hubs from a set of candidate hub locations and allocating demand and supply nodes to them; the capacities of the hubs are given. In previous studies, authors usually considered only quantitative parameters, such as cost and time, when choosing the optimum locations. This is often insufficient: qualitative parameters such as quality of service, zone traffic, environmental issues and potential for future development, which are critical for decision makers (DMs), have typically not been incorporated into the models, even though in many real-world situations qualitative parameters are as important as quantitative ones. We present a hybrid approach to the p-hub center problem in which the location of hub facilities is determined by both kinds of parameters simultaneously. Fuzzy systems are well suited to qualitative and uncertain data, and they form the basis of this work: we use fuzzy VIKOR to build a hybrid solution to the hub location problem, and its results feed a genetic algorithm that successfully solves a number of problem instances. Furthermore, this method can easily take into account desired quantitative variables other than cost and time, such as future market size and potential customers.
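The abstract leans on fuzzy VIKOR for scoring candidate locations. The sketch below implements the classical (crisp) VIKOR compromise ranking; the fuzzy variant used in the paper replaces the crisp scores with fuzzy numbers and defuzzifies. Names and the toy data are illustrative.

```python
import numpy as np

def vikor_rank(F, weights, v=0.5):
    """Crisp VIKOR ranking (all criteria treated as benefit criteria).

    F: (n_alternatives, n_criteria) score matrix; weights sum to 1.
    Returns alternative indices, best (lowest Q) first.
    """
    f_best, f_worst = F.max(axis=0), F.min(axis=0)
    norm = (f_best - F) / np.where(f_best > f_worst, f_best - f_worst, 1.0)
    S = (weights * norm).sum(axis=1)          # group utility
    R = (weights * norm).max(axis=1)          # individual regret

    def scale(x):
        span = x.max() - x.min()
        return (x - x.min()) / span if span else np.zeros_like(x)

    Q = v * scale(S) + (1 - v) * scale(R)     # compromise index
    return np.argsort(Q)

w = np.array([0.4, 0.3, 0.3])
F = np.array([[7.0, 8.0, 6.0], [9.0, 5.0, 7.0], [6.0, 9.0, 8.0]])
print(vikor_rank(F, w))
```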

19.
20.
An important problem in the context of wireless sensor networks is the Maximum Network Lifetime Problem (MLP): find a collection of subsets of sensors (covers), each covering the whole set of targets, and assign each an activation time so that network lifetime is maximized. In this paper we consider a variant of MLP in which each cover is allowed to neglect a certain fraction (1 − α) of the targets. We analyze the problem and show that total network lifetime can be hugely improved by neglecting a very small portion of the targets. An exact approach, based on a column generation scheme, is presented, and a heuristic algorithm is provided to initialize it. The proposed approaches are tested on a wide set of instances. The experiments show the effectiveness of both the proposed problem variant and the solution algorithms in extending network lifetime and improving target coverage time when some regularity conditions are taken into account.
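A sketch of the restricted master LP at the heart of such a column generation scheme, assuming scipy is available: given a fixed pool of candidate covers, maximize total activation time under each sensor's battery budget. The pricing step that generates new (α-)covers from the dual values is omitted, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def max_lifetime(covers, battery):
    """Restricted master LP of the column generation scheme.

    covers:  list of sets of sensor ids (each a candidate cover).
    battery: dict sensor -> battery budget.
    Returns (total lifetime, activation time per cover).
    """
    sensors = sorted(battery)
    # A[s][c] = 1 if sensor s belongs to cover c.
    A = np.array([[1.0 if s in c else 0.0 for c in covers] for s in sensors])
    b = np.array([battery[s] for s in sensors])
    res = linprog(-np.ones(len(covers)), A_ub=A, b_ub=b,
                  bounds=[(0, None)] * len(covers))
    return -res.fun, res.x

# Toy usage: 3 sensors, 3 pairwise covers, unit batteries -> lifetime 1.5.
covers = [{0, 1}, {1, 2}, {0, 2}]
print(max_lifetime(covers, {0: 1.0, 1: 1.0, 2: 1.0}))
```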
