Similar Documents
 20 similar documents found (search time: 15 ms)
1.
The time-constrained shortest path problem is an important generalisation of the classical shortest path problem and has attracted much research interest in recent years. We consider a time-schedule network, where every node has a list of pre-specified departure times, and departure from a node may take place only at one of these times. The objective of this paper is to find the first K minimum-cost simple paths subject to a total time constraint. An efficient polynomial-time algorithm is developed. It is also demonstrated that the algorithm can be modified to find the first K paths for all possible values of total time.
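The abstract leaves the polynomial-time algorithm unstated. As a minimal sketch of the problem setting only, here is a brute-force path enumeration over a toy time-schedule network; the dictionary format, `k_min_cost_paths`, and all names are illustrative inventions, not the paper's method:

```python
def k_min_cost_paths(graph, departures, source, target, time_limit, k):
    """Enumerate simple source->target paths in a time-schedule network.

    graph: {u: {v: (cost, travel_time)}}; departures: {u: sorted list of
    allowed departure times}.  A traveller arriving at u at time t leaves
    at the earliest allowed departure time >= t.  Returns up to k
    (cost, path) pairs whose total time stays within time_limit.
    """
    results = []

    def dfs(u, t, cost, path):
        if u == target:
            results.append((cost, path))
            return
        for v, (c, dt) in graph.get(u, {}).items():
            if v in path:
                continue                      # simple paths only
            dep = next((d for d in departures[u] if d >= t), None)
            if dep is None:
                continue                      # no feasible departure left
            if dep + dt <= time_limit:
                dfs(v, dep + dt, cost + c, path + (v,))

    dfs(source, 0, 0, (source,))
    return sorted(results)[:k]
```

The departure-time lists are what distinguish this setting from ordinary shortest paths: waiting at a node for the next allowed departure consumes the time budget without adding cost.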

2.
We use tools and methods from real algebraic geometry (spaces of ultrafilters, elimination of quantifiers) to formulate a theory of convexity in K^N over an arbitrary ordered field K. By defining certain ideal points (which can be viewed as generalizations of recession cones) we obtain a generalized notion of polar set. These satisfy a form of polar duality that applies to general convex sets and does not reduce to classical duality even if K is the field of real numbers. As an application we give a partial classification of total orderings of Artinian local rings and two applications to ordinary convex geometry over the real numbers.

3.
Parallel to Cox's proportional hazards model [JRSS B 34 (1972) 187-230], generalized logistic models have been discussed by Anderson [Bull. Int. Statist. Inst. 48 (1979) 35-53] and others. The essential assumption is that the ratio of the two densities has a known parametric form. A nice property of this model is that it relates naturally to the logistic regression model for categorical data. In astronomical, demographic, epidemiological, and other studies, the variable of interest is often truncated by an associated variable. This paper studies generalized logistic models for the two-sample truncated-data problem, where the ratio of the two lifetime densities is assumed to have the form exp{α+φ(x;β)}. Here φ is a known function of x and β, and the baseline density is unspecified. We develop a semiparametric maximum likelihood method for the case where the two samples have a common truncation distribution. It is shown that inferences for β do not depend on the nonparametric components. We also derive an iterative algorithm to maximize the semiparametric likelihood for the general case, where different truncation distributions are allowed. We further discuss how to check the goodness of fit of the generalized logistic model. The developed methods are illustrated and evaluated using both simulated and real data.

4.
We present a novel, simple and easily implementable algorithm to report all intersections in an embedding of a complete graph. For graphs with N vertices and complexity K, measured as the number of segments of the embedding, the running time of the algorithm is Θ(K+NM), where M is the maximum number of edges cut by any vertical line. Our algorithm handles degeneracies, such as vertical edges or multiply intersecting edges, without requiring numerical perturbations to achieve general position.

The algorithm is based on the sweep-line technique, one of the most fundamental techniques in computational geometry, in which an imaginary line passes through a given set of geometric objects, usually from left to right. The algorithm sweeps the graph using a topological line, borrowing the concept of horizon trees from the topological sweep method [H. Edelsbrunner, L.J. Guibas, Topologically sweeping an arrangement, J. Comput. Syst. Sci. 38 (1989) 165-194; J. Comput. Syst. Sci. 42 (1991) 249-251 (corrigendum)].

The novelty in our approach is to control the topological line through the use of a moving wall that at any time separates the graph into two regions: the region of known structure, in front of the moving wall, and the region behind the wall, which may contain intersections generated by edges that have not yet been registered in the sweep process.

Our method has applications to graph drawing and to depth-based statistical analysis, such as computing the simplicial depth median for a set of N data points [G. Aloupis, S. Langerman, M. Soss, G. Toussaint, Algorithms for bivariate medians and a Fermat-Torricelli problem for lines, Comp. Geom. Theory Appl. 26 (1) (2003) 69-79].

We present the algorithm, its analysis, experimental results and an extension of the method to general graphs.
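A topological sweep is beyond a short sketch, but the geometric primitive every intersection reporter builds on — deciding whether two segments intersect via orientation tests — can be shown with a hedged brute-force baseline (O(K²) pairwise checks; the function names are illustrative and this is not the paper's algorithm):

```python
def orient(p, q, r):
    """Sign of the cross product (q-p) x (r-p): +1 left turn, -1 right, 0 collinear."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def on_segment(p, q, r):
    """True if collinear point r lies within the bounding box of segment pq."""
    return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0])
            and min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

def segments_intersect(s1, s2):
    p1, q1 = s1
    p2, q2 = s2
    d1 = orient(p1, q1, p2)
    d2 = orient(p1, q1, q2)
    d3 = orient(p2, q2, p1)
    d4 = orient(p2, q2, q1)
    if d1 != d2 and d3 != d4:        # proper crossing
        return True
    # collinear / shared-endpoint degeneracies
    if d1 == 0 and on_segment(p1, q1, p2): return True
    if d2 == 0 and on_segment(p1, q1, q2): return True
    if d3 == 0 and on_segment(p2, q2, p1): return True
    if d4 == 0 and on_segment(p2, q2, q1): return True
    return False

def report_intersections(segments):
    """Report all intersecting pairs by index (brute force)."""
    return [(i, j)
            for i in range(len(segments))
            for j in range(i + 1, len(segments))
            if segments_intersect(segments[i], segments[j])]
```

Because the orientation test is exact on integer coordinates, degeneracies such as collinear overlaps are handled without numerical perturbation, which mirrors the robustness goal stated in the abstract.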

5.
An ordered set P is called K-free if it does not contain a four-element subset {a, b, c, d} in which a < b is the only comparability among these elements. In this paper we present a polynomial algorithm for finding the jump number of K-free ordered sets.

6.
Longest-edge (nested) algorithms for triangulation refinement in two dimensions are able to produce hierarchies of quality, nested, irregular triangulations, as needed both for adaptive finite element methods and for multigrid methods. They can be formulated in terms of the longest-edge propagation path (Lepp) and terminal-edge concepts, refining the target triangles and some related neighbors. We discuss a parallel multithreaded algorithm, where every thread is in charge of refining a triangle t and its associated Lepp neighbors. The thread manages a changing Lepp(t) (an ordered list of triangles with increasing longest edges) both to find a last longest (terminal) edge and to refine the pair of triangles sharing this edge. The process is repeated until triangle t is destroyed. We discuss the algorithm, related synchronization issues, and the properties inherited from the serial algorithm. We present an empirical study showing that a reasonably efficient parallel method with good scalability was obtained.
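Assuming a toy mesh representation (triangles as vertex-index triples, neighbors recovered through a shared-edge map; all names invented for illustration), the serial Lepp computation that each thread would perform can be sketched as:

```python
from collections import defaultdict

def sq_len(points, edge):
    a, b = tuple(edge)
    (ax, ay), (bx, by) = points[a], points[b]
    return (ax - bx) ** 2 + (ay - by) ** 2

def edges_of(tri):
    a, b, c = tri
    return [frozenset((a, b)), frozenset((b, c)), frozenset((a, c))]

def lepp(points, triangles, t):
    """Follow the longest-edge propagation path from triangle t.

    Returns (list of triangle indices, terminal edge).  The terminal
    edge is either a boundary edge or the longest edge of both
    triangles sharing it.
    """
    edge_map = defaultdict(list)
    for i, tri in enumerate(triangles):
        for e in edges_of(tri):
            edge_map[e].append(i)
    path = [t]
    while True:
        cur = path[-1]
        e = max(edges_of(triangles[cur]), key=lambda f: sq_len(points, f))
        nbrs = [i for i in edge_map[e] if i != cur]
        if not nbrs:
            return path, e                # boundary edge: terminal
        nxt = nbrs[0]
        e2 = max(edges_of(triangles[nxt]), key=lambda f: sq_len(points, f))
        path.append(nxt)
        if e2 == e:                       # shared longest edge: terminal
            return path, e
```

In the parallel algorithm each thread would walk such a path and refine the pair of triangles at its terminal edge, so synchronization is needed precisely where two threads' paths meet.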

7.
Storing XML documents in relational databases has drawn much attention in recent years because it can leverage existing investments in relational database technologies. Different algorithms have been proposed to map XML DTD/Schema to relational schema in order to store XML data in relational databases. However, most work defines mapping rules based on heuristics without considering application characteristics, and hence fails to produce efficient relational schema for various applications. In this paper, we propose a workload-aware approach to generate relational schema from XML data and a user-specified workload. Our approach adopts the genetic algorithm to find optimal mappings. An elegant encoding method and related operations are proposed to manipulate mappings using bit strings. Various optimization techniques can be applied to the XML-to-relational mapping problem based on this representation. We implemented the proposed algorithm, and our experimental results show that our algorithm is more robust and produces better mappings than existing work.
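The paper's bit-string encoding of XML-to-relational mappings is not given in the abstract. The generic genetic-algorithm skeleton it plugs into can still be sketched, with a stand-in "one-max" fitness in place of the workload-aware schema cost; all parameter names and values here are illustrative assumptions:

```python
import random

def genetic_search(fitness, n_bits, pop_size=30, generations=60,
                   crossover_p=0.9, mutation_p=0.02, seed=0):
    """Minimal genetic algorithm over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_p:
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]          # one-point crossover
            else:
                child = p1[:]
            child = [b ^ (rng.random() < mutation_p) for b in child]  # bit flips
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best

# Toy fitness: count of set bits; a real workload-aware cost function
# would instead score the relational schema that a bit string encodes.
best = genetic_search(lambda bits: sum(bits), n_bits=20)
```

The point of the bit-string representation, as the abstract notes, is that standard crossover and mutation operators apply unchanged once the mapping is encoded.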

8.
Earlier, one of the authors introduced the concept of generalized bundle spaces [2, 7]. This term refers to a structure similar to a principal bundle, in which the group acting on a leaf depends on the leaf. In this paper, we develop this idea as applied to G-structures and find the structure equations of a K-generalized G-structure.

9.
We generalize notions and results obtained by Amice for regular compact subsets S of a local field K, and extended by Bhargava to general compact subsets of K. Considering any ultrametric valued field K and subsets S that are regular in a generalized sense (but not necessarily compact), we show that they still have strong properties, such as admitting v-orderings {a_n}_{n≥0} that satisfy a generalized Legendre formula, that are very well ordered and well distributed sequences in the sense of Helsmoortel, and that remain v-orderings when a finite number of the initial terms of the sequence are deleted.

10.
We extend the well-known Birkhoff operation of cardinal power from partially ordered sets to n-ary relational systems. The extended power is then studied not only for n-ary relational systems but also for some of their special cases, namely partial algebras and total algebras. It turns out that the concept of diagonality plays an important role in studying the powers.

11.
This paper deals with a particular type of Markov decision process in which the state space takes the form I = S × Z, where S is countable and Z = {1, 2}, and the action space is K = Z, independently of s ∈ S. The state space I is ordered by a partial order ≼, which is specified in terms of an integer-valued function on S. The action space K has the natural order ≤. Under certain monotonicity and submodularity conditions it is shown that isotone optimal policies exist with respect to ≼ and ≤ on I and K respectively. The paper examines how this isotone structure may be used to simplify the usual policy-space algorithm. A brief discussion is given of the usual successive approximation (value iteration) method, and of the extension of these ideas to semi-Markov decision processes.
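As a hedged illustration of the successive approximation (value iteration) method mentioned at the end, here is a generic sketch for a finite discounted-cost MDP; the dictionary format and names are invented, and nothing here encodes the paper's isotone structure:

```python
def value_iteration(states, actions, P, cost, beta=0.9, tol=1e-8):
    """Successive approximation for a finite discounted MDP.

    P[s][a] is a dict {next_state: probability}; cost[s][a] is the
    one-step cost.  Returns (value function, greedy policy).
    """
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: min(cost[s][a]
                        + beta * sum(p * V[t] for t, p in P[s][a].items())
                        for a in actions)
                 for s in states}
        done = max(abs(V_new[s] - V[s]) for s in states) < tol
        V = V_new
        if done:
            break
    policy = {s: min(actions,
                     key=lambda a: cost[s][a]
                     + beta * sum(p * V[t] for t, p in P[s][a].items()))
              for s in states}
    return V, policy
```

An isotone optimal policy, in the paper's sense, is one where this greedy policy is monotone with respect to the partial order on states; the structural results let the policy-space algorithm search only monotone policies.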

12.
Summary. In the present work a common basis of convergence analysis is given for a large class of iterative procedures which we call general approximation methods. The concept of strong uniqueness is seen to play a fundamental role. The broad range of applications of the proposed classification is made clear by means of examples from various areas of numerical mathematics. Included in this classification are methods for solving systems of equations, the Remes algorithm, methods for nonlinear Chebyshev approximation, and the classical Newton method along with its variants, such as Newton's method for partially ordered spaces and for degenerate tangent spaces. As an example of the latter, approximation with exponential sums having coalescing frequencies is discussed, that is, the case where the tangent space is degenerate.

13.
Efficient algorithms are given to find the maximum length n of an ordered list into which 4 elements can be merged using exactly k comparisons. A top-down algorithm for the (2,n) merge problem is discussed and is shown to attain the optimal merge length first reported by Hwang and Lin. Our algorithms combine this top-down approach with strong heuristics, some of which are derived from Hwang's optimal algorithm for the (3,n) problem, and produce a length n which is close to the optimal length f_4(k).

14.
Generalized eigenvalue problems can be viewed as systems of polynomials. The homotopy continuation method is used to find all the isolated zeros of the polynomial system, which correspond to the eigenpairs of the generalized eigenvalue problem. A special homotopy is constructed in such a way that there are exactly n distinct smooth curves connecting trivial solutions to the desired eigenpairs. Since the curves followed by the general homotopy curve-following scheme are computed independently of one another, the algorithm is a likely candidate for exploiting the advantages of parallel processing for generalized eigenvalue problems.

15.
Calmness of multifunctions is a well-studied concept of generalized continuity in which single-valued selections from the image sets of the multifunction exhibit a restricted type of local Lipschitz continuity, where the base point is fixed as one point of comparison. Generalized continuity properties of multifunctions such as calmness can be applied to convergence analysis when the multifunction appropriately represents the iterates generated by an algorithm. Since it involves an essentially linear relationship between input and output, calmness gives essentially linear convergence results when applied directly to convergence analysis. We introduce a new continuity concept called 'supercalmness', under which arbitrarily small calmness constants can be obtained near the base point, leading to essentially superlinear convergence results. We also explore partial supercalmness and use a well-known generalized derivative to characterize both when a multifunction is supercalm and when it is partially supercalm. To illustrate the value of such characterizations, we explore in detail a new example of a general primal sequential quadratic programming method for nonlinear programming and obtain verifiable conditions that ensure convergence at a superlinear rate.

16.
For the algebraic Riccati equation whose four coefficient matrices form a nonsingular M-matrix or an irreducible singular M-matrix K, the minimal nonnegative solution can be found by Newton's method and the doubling algorithm. When the two diagonal blocks of the matrix K have both large and small diagonal entries, the doubling algorithm often requires many more iterations than Newton's method. In those cases, Newton's method may be more efficient than the doubling algorithm. This has motivated us to study Newton-like methods that have higher-order convergence and are not much more expensive per iteration. We find that the Chebyshev method of order three and a two-step modified Chebyshev method of order four can be more efficient than Newton's method. For the Riccati equation, these two Newton-like methods are actually special cases of the Newton–Shamanskii method. We show that, starting with zero initial guess or some other suitable initial guess, the sequence generated by the Newton–Shamanskii method converges monotonically to the minimal nonnegative solution. We also explain that the Newton-like methods can be used to great advantage when solving some Riccati equations involving a parameter.
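In the scalar case, an M-matrix algebraic Riccati equation of the form XCX − XD − AX + B = 0 reduces to the quadratic c·x² − (a+d)·x + b = 0, and the monotone Newton iteration from the zero initial guess can be sketched as follows (an illustrative scalar toy only; the matrix case requires solving a Sylvester equation at each Newton step):

```python
def newton_minimal_root(a, b, c, d, tol=1e-12, max_iter=100):
    """Newton iteration for the scalar analogue of the M-matrix
    algebraic Riccati equation:  c*x**2 - (a + d)*x + b = 0.

    Starting from x0 = 0, the iterates increase monotonically to the
    minimal nonnegative root (assuming real roots exist and a + d > 0).
    """
    f = lambda x: c * x * x - (a + d) * x + b
    fp = lambda x: 2 * c * x - (a + d)        # derivative of f
    x = 0.0
    for _ in range(max_iter):
        step = f(x) / fp(x)
        x -= step
        if abs(step) < tol:
            break
    return x
```

The Chebyshev and Newton–Shamanskii variants discussed in the abstract reuse the same derivative (Fréchet derivative, in the matrix case) across several correction steps, which is what makes them cheaper per unit of convergence order.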

17.
This paper proposes a comparative appraisal of four fuzzy classification methods: Fuzzy C-Means, K-Nearest Neighbours, a method based on fuzzy rules, and the Fuzzy Pattern Matching method. It presents the results we obtained by applying those methods to the three types of data presented in the second part of this article. The classification rate and the computing times are compared across methods. The paper describes the advantages of fuzzy classifiers for application to a diagnosis problem. Finally, it proposes a synthesis of our study, which can serve as a basis for choosing an algorithm for real-time process diagnosis, and shows how unsupervised and supervised methods can be combined in a diagnosis algorithm.
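Of the four classifiers compared, Fuzzy C-Means is the easiest to sketch. The following 1-D implementation uses the standard FCM membership and center updates with an assumed deterministic initialization; it is a generic illustration, not the paper's experimental setup:

```python
def fuzzy_c_means(xs, c, m=2.0, iters=100):
    """Fuzzy C-Means on 1-D data; returns (centers, memberships).

    memberships[i][j] is the degree to which point i belongs to
    cluster j; each row sums to 1.  Assumes c >= 2.
    """
    xs_sorted = sorted(xs)
    # deterministic init: spread the initial centers over the sorted data
    centers = [xs_sorted[i * (len(xs) - 1) // (c - 1)] for i in range(c)]
    u = [[0.0] * c for _ in xs]
    for _ in range(iters):
        # membership update from distances to current centers
        for i, x in enumerate(xs):
            d = [abs(x - v) for v in centers]
            if 0.0 in d:                      # point coincides with a center
                u[i] = [1.0 if dj == 0.0 else 0.0 for dj in d]
                continue
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / dk) ** (2.0 / (m - 1.0)) for dk in d)
        # center update as membership-weighted means
        for j in range(c):
            w = [u[i][j] ** m for i in range(len(xs))]
            centers[j] = sum(wi * x for wi, x in zip(w, xs)) / sum(w)
    return centers, u
```

For diagnosis, the soft memberships are the useful output: a sample lying between two fault classes keeps a graded membership in both instead of being forced into one.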

18.
Clustering is a popular data analysis and data mining technique. Since the clustering problem is NP-complete, the larger the problem, the harder it is to find the optimal solution and the longer it takes to reach reasonable results. A popular clustering technique is based on K-means, in which the data is partitioned into K clusters. In this method, the number of clusters is predefined, and the technique is highly dependent on the initial identification of elements that represent the clusters well. A large area of clustering research has focused on improving the clustering process so that the clusters do not depend on the initial identification of cluster representatives. Another problem in clustering is the local-minimum problem. Although approaches such as K-Harmonic Means clustering solve the initialization problem, trapping in local minima remains an issue. In this paper we develop a new algorithm for solving this problem based on a tabu search technique: Tabu K-Harmonic Means (TabuKHM). Experimental results on the Iris dataset and other well-known data illustrate the robustness of the TabuKHM clustering algorithm.
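For contrast with K-Harmonic Means and TabuKHM, here is a sketch of the plain K-means (Lloyd's) baseline whose initialization sensitivity and local minima those methods address; 1-D data and a deterministic initialization are assumptions made for illustration:

```python
def k_means(xs, k, iters=50):
    """Lloyd's K-means on 1-D data, returning sorted cluster centers.

    This is the baseline that K-Harmonic Means and TabuKHM improve on:
    its result depends heavily on the initial centers, here chosen
    deterministically by spreading them over the sorted data.
    """
    xs_sorted = sorted(xs)
    centers = [xs_sorted[i * (len(xs) - 1) // max(k - 1, 1)] for i in range(k)]
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for x in xs:
            j = min(range(k), key=lambda c_idx: abs(x - centers[c_idx]))
            clusters[j].append(x)
        # update step: centers become cluster means (empty clusters keep theirs)
        centers = [sum(cl) / len(cl) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return sorted(centers)
```

K-Harmonic Means replaces the hard nearest-center assignment with a harmonic-mean performance function so every point influences every center, and TabuKHM adds a tabu list over candidate moves to escape the remaining local minima.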

19.
In this paper we consider equilibrium problems in vector metric spaces in which the function f and the set K are perturbed by the parameters ε and η. We study the stability of the solutions, providing some results in the particular framework of generalized monotone functions, first in the special case where K is fixed, and then under perturbation of both data.

20.
This paper presents a simulated annealing algorithm for resource-constrained project scheduling problems with the objective of minimising makespan. In the search algorithm, a solution is represented by a priority list: a vector of numbers, each of which denotes the priority of an activity. A priority scheduling method is used to build a complete schedule from a given priority list (hence a project schedule is defined by a priority list), and the search algorithm looks for a priority list that corresponds to a good project schedule. Unlike most priority scheduling methods, the suggested algorithm delays some activities on purpose so as to extend the search space. Solutions can be further improved by delaying certain activities, since non-delay schedules are not dominant for this problem (the set of non-delay schedules does not always include an optimal solution). The suggested algorithm is flexible in that it can easily be applied to problems with an objective function of a general form and/or complex constraints. The performance of the simulated annealing algorithm is compared with existing heuristics on problems prepared by Patterson and on randomly generated test problems. Computational results show that the suggested algorithm outperforms existing ones.
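A hedged sketch of this approach follows, with a serial priority-based schedule builder and a swap-neighbourhood annealer. The data format and parameters are invented for illustration, and the builder schedules greedily at earliest feasible times, so it omits the paper's key extension of deliberately delaying activities:

```python
import math
import random
from collections import defaultdict

def makespan(priority, durs, preds, req, cap):
    """Decode a priority list into a schedule with a serial scheme:
    repeatedly pick the eligible activity of highest priority and start
    it at the earliest precedence- and resource-feasible time."""
    start, usage = {}, defaultdict(int)      # usage[t]: resource used at unit time t
    unscheduled = set(durs)
    while unscheduled:
        eligible = [a for a in unscheduled if all(p in start for p in preds[a])]
        a = max(eligible, key=lambda x: priority[x])
        t = max((start[p] + durs[p] for p in preds[a]), default=0)
        while any(usage[u] + req[a] > cap for u in range(t, t + durs[a])):
            t += 1
        start[a] = t
        for u in range(t, t + durs[a]):
            usage[u] += req[a]
        unscheduled.remove(a)
    return max(start[a] + durs[a] for a in durs)

def anneal(durs, preds, req, cap, iters=300, temp=4.0, cooling=0.98, seed=1):
    """Simulated annealing over priority lists (swap two priorities per move)."""
    rng = random.Random(seed)
    acts = sorted(durs)
    prio = {a: rng.random() for a in acts}
    cur = best = makespan(prio, durs, preds, req, cap)
    for _ in range(iters):
        a, b = rng.sample(acts, 2)
        prio[a], prio[b] = prio[b], prio[a]      # propose a swap
        new = makespan(prio, durs, preds, req, cap)
        if new <= cur or rng.random() < math.exp((cur - new) / temp):
            cur = new
            best = min(best, new)
        else:
            prio[a], prio[b] = prio[b], prio[a]  # reject: undo the swap
        temp *= cooling
    return best
```

Because the decoder turns any priority vector into a feasible schedule, the annealer can explore freely with simple swaps; extending the search space to delayed schedules, as the paper does, only changes the decoder.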
