Similar Articles
1.
It is now well-documented that the structure of evolutionary relationships between a set of present-day species is not necessarily tree-like. The reason for this is that reticulation events such as hybridizations mean that species are a mixture of genes from different ancestors. Since such events are relatively rare, a fundamental problem for biologists is to determine the smallest number of hybridization events required to explain a given (input) set of data in a single (hybrid) phylogeny. The main results of this paper show that computing this smallest number is APX-hard, and thus NP-hard, in the case the input is a collection of phylogenetic trees on sets of present-day species. This answers a problem which was raised at a recent conference (Phylogenetic Combinatorics and Applications, Uppsala University, 2004). As a consequence of these results, we also correct a previously published NP-hardness proof in the case the input is a collection of binary sequences, where each sequence represents the attributes of a particular present-day species. The APX-hardness of these problems means that it is unlikely that there is an efficient algorithm for either computing the result exactly or approximating it to any arbitrary degree of accuracy.

2.
In this paper we propose a nonmonotone trust region algorithm for optimization with simple bound constraints. Under mild conditions, we prove the global convergence of the algorithm. For the monotone case it is also proved that the correct active set can be identified in a finite number of iterations if the strict complementary slackness condition holds, and so the proposed algorithm finally reduces to an unconstrained minimization method in a finite number of iterations, allowing a fast asymptotic rate of convergence. Numerical experiments show that the method is efficient. Accepted 5 September 2000. Online publication 4 December 2000.

3.
A trust region method for nonlinear programming problems with simple bound constraints on the variables
陈中文  韩继业 《计算数学》1997,19(3):257-266
1. Introduction. This paper considers the following nonlinear programming problem with simple bound constraints on the variables. Problem (1.1) is not only a simply constrained optimization problem that arises in practical applications; in a considerable number of optimization problems it is also natural to restrict the variables to meaningful intervals [8]. It is therefore worthwhile, both in theory and in practice, to study this kind of problem and to provide simple and effective algorithms for it. Some papers have proposed special-purpose methods: for example, [1] and [2], and [4] and [6], proposed a class of trust region methods; these all rely on certain auxiliary points and prove global convergence of the algorithm. In the analysis of the convergence rate, besides strict complementary slackness at the K-T point, they require a further condition, namely that at each iteration the active constraints at the auxiliary point must…

4.
A VLSI sorter of size O(n) can sort n elements in linear time when the input and output time are taken into account. If the input contains more than n elements, some preprocessing has to be performed. A VLSI partition algorithm that provides a solution to this problem is presented. The algorithm partitions the input data into two smaller parts as the quicksort algorithm does. That is, the elements of the first part will be smaller than the elements of the second part. The partition is repeated until the parts are small enough to fit in the sorter. It is shown that the average number of times each element must go through the partitioner is O(log k) for a data file of size kn, where n is the size of the sorter. In the worst case, where the partitioner fails to divide the input evenly, the elements must go O(k) times through the partitioner, as in the quicksort algorithm. The partitioner can also be used, with simple modifications, as a sorter, a stack, a queue, or as a priority queue. Other advantages of the VLSI algorithm are also discussed.
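A minimal software sketch of the partition idea described above (illustrative code with our own function names; the hardware aspects are not modeled): the data are split around a pivot, quicksort-style, until each piece fits in a sorter of size n.

```python
def partition_until_fits(data, sorter_size):
    """Recursively split data around a pivot (as in quicksort) until each
    part is small enough to fit in a sorter of size `sorter_size`."""
    if len(data) <= sorter_size:
        return [sorted(data)]          # this piece fits; the VLSI sorter would handle it
    pivot = data[len(data) // 2]
    low = [x for x in data if x < pivot]
    high = [x for x in data if x >= pivot]
    if not low or not high:            # degenerate split (e.g. all keys equal)
        return [sorted(data)]
    return partition_until_fits(low, sorter_size) + partition_until_fits(high, sorter_size)

# Example: a file of 20 keys and a "sorter" of size 4.
pieces = partition_until_fits([17, 3, 9, 12, 1, 20, 5, 8, 14, 2,
                               19, 7, 11, 4, 16, 6, 13, 10, 18, 15], 4)
print([x for piece in pieces for x in piece])   # concatenation is fully sorted
```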

5.
On the performance of the ICP algorithm
We present upper and lower bounds for the number of iterations performed by the Iterative Closest Point (ICP) algorithm. This algorithm has been proposed by Besl and McKay as a successful heuristic for matching of point sets in d-space under translation, but so far it seems not to have been rigorously analyzed. We consider two standard measures of resemblance that the algorithm attempts to optimize: The RMS (root mean squared distance) and the (one-sided) Hausdorff distance. We show that in both cases the number of iterations performed by the algorithm is polynomial in the number of input points. In particular, this bound is quadratic in the one-dimensional problem, under the RMS measure, for which we present a lower bound construction of Ω(n log n) iterations, where n is the overall size of the input. Under the Hausdorff measure, this bound is only O(n) for input point sets whose spread is polynomial in n, and this is tight in the worst case. We also present several structural geometric properties of the algorithm under both measures. For the RMS measure, we show that at each iteration of the algorithm the cost function monotonically and strictly decreases along the vector Δt of the relative translation. As a result, we conclude that the polygonal path π, obtained by concatenating all the relative translations that are computed during the execution of the algorithm, does not intersect itself. In particular, in the one-dimensional problem all the relative translations of the ICP algorithm are in the same (left or right) direction. For the Hausdorff measure, some of these properties continue to hold (such as monotonicity in one dimension), whereas others do not.
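As an illustration of the one-dimensional, translation-only setting analyzed above, here is a minimal ICP sketch under the RMS measure (our own illustrative code, not the authors'): it alternates nearest-neighbour matching with the RMS-optimal shift for the current matching.

```python
import numpy as np

def icp_1d(source, target, max_iter=100):
    """One-dimensional ICP under translation and the RMS measure:
    alternate nearest-neighbour matching with the RMS-optimal shift
    (the mean of the matched differences)."""
    source = np.asarray(source, dtype=float)
    target = np.sort(np.asarray(target, dtype=float))
    t = 0.0
    prev_match = None
    for _ in range(max_iter):
        shifted = source + t
        # nearest neighbour of each shifted source point in the sorted target set
        idx = np.clip(np.searchsorted(target, shifted), 1, len(target) - 1)
        left, right = target[idx - 1], target[idx]
        match = np.where(shifted - left <= right - shifted, left, right)
        if prev_match is not None and np.array_equal(match, prev_match):
            break                        # matching stabilized
        prev_match = match
        t = np.mean(match - source)      # RMS-optimal translation for this matching
    return t

print(icp_1d([0.0, 1.0, 2.0], [10.1, 11.0, 12.2]))
```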

6.
At each iteration, the algorithm determines a feasible descent direction by minimizing a linear or quadratic approximation to the cost on the feasible set. The algorithm is easy to implement if the approximation is easy to minimize on the feasible set, which happens in some important cases. Convergence rate information is obtained, which is sufficient to enable deduction of the number of iterations needed to achieve a specified reduction in the distance from the optimum (measured in terms of the cost). Existing convergence rates for algorithms for solving such convex problems are either asymptotic (and so do not enable the required number of iterations to be deduced) or decrease as the number of constraints increases. The convergence rate information obtained here, however, is independent of the number of constraints. For the case where the quadratic approximation to the cost is not strictly convex (which includes the linear approximation case), the diameter is the only property of the feasible set which affects the convergence rate information. If the quadratic approximation is strictly convex, the convergence rate is independent of the size and geometry of the feasible set. An application to a control-constrained optimal control problem is outlined.
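The step described above, which minimizes a linear approximation to the cost over the feasible set, can be illustrated by a conditional-gradient-style sketch on a box feasible set, a simple case where the linearized subproblem is easy to solve (illustrative code under these assumptions; not the paper's algorithm).

```python
import numpy as np

def conditional_gradient_box(grad_f, x0, lower, upper, n_iter=200):
    """Conditional-gradient-style sketch on a box feasible set: the linear
    approximation of the cost is minimized over the box (coordinate-wise),
    and a step is taken toward that minimizer."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        g = grad_f(x)
        s = np.where(g > 0, lower, upper)   # minimizer of <g, s> over the box
        gamma = 2.0 / (k + 2.0)             # standard open-loop step size
        x = x + gamma * (s - x)
    return x

# Example: minimize ||x - c||^2 over the box [0, 1]^3 with c outside the box.
c = np.array([1.5, -0.3, 0.4])
grad = lambda x: 2.0 * (x - c)
print(conditional_gradient_box(grad, np.zeros(3), np.zeros(3), np.ones(3)))
```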

7.
Genetic algorithms are stochastic search algorithms that have been applied to optimization problems. In this paper we analyze the run-time complexity of a genetic algorithm when we are interested in one of a set of distinguished solutions. One such case occurs when multiple optima exist. We define the worst case scenario and derive a probabilistic worst case bound on the number of iterations required to find one of these multiple solutions of interest.

8.
In a wide range of applications it is required to compute the nearest correlation matrix in the Frobenius norm to a given symmetric but indefinite matrix. Of the available methods with guaranteed convergence to the unique solution of this problem the easiest to implement, and perhaps the most widely used, is the alternating projections method. However, the rate of convergence of this method is at best linear, and it can require a large number of iterations to converge to within a given tolerance. We show that Anderson acceleration, a technique for accelerating the convergence of fixed-point iterations, can be applied to the alternating projections method and that in practice it brings a significant reduction in both the number of iterations and the computation time. We also show that Anderson acceleration remains effective, and indeed can provide even greater improvements, when it is applied to the variants of the nearest correlation matrix problem in which specified elements are fixed or a lower bound is imposed on the smallest eigenvalue. Alternating projections is a general method for finding a point in the intersection of several sets and ours appears to be the first demonstration that this class of methods can benefit from Anderson acceleration.
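For reference, a small sketch of the underlying fixed-point iteration: alternating projections with Dykstra's correction for the nearest correlation matrix (our own illustrative code; the Anderson-accelerated variant studied in the paper is not shown).

```python
import numpy as np

def proj_psd(A):
    """Project a symmetric matrix onto the positive semidefinite cone."""
    w, V = np.linalg.eigh(A)
    return (V * np.maximum(w, 0.0)) @ V.T

def proj_unit_diag(A):
    """Project onto the set of symmetric matrices with unit diagonal."""
    B = A.copy()
    np.fill_diagonal(B, 1.0)
    return B

def nearest_correlation(A, n_iter=200):
    """Alternating projections with Dykstra's correction (a sketch of the
    standard scheme, without acceleration)."""
    Y = A.copy()
    dS = np.zeros_like(A)
    for _ in range(n_iter):
        R = Y - dS                 # remove the previous correction
        X = proj_psd(R)
        dS = X - R                 # Dykstra correction for the PSD projection
        Y = proj_unit_diag(X)
    return Y

A = np.array([[2.0, 0.9, 0.1],
              [0.9, 1.0, 0.9],
              [0.1, 0.9, 1.0]])
print(nearest_correlation(A))
```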

9.
A new descent algorithm for solving quadratic bilevel programming problems
1. Introduction. A bilevel programming problem (BLPP) involves two sequential optimization problems where the constraint region of the upper one is implicitly determined by the solution of the lower. It is proved in [1] that even to find an approximate solution of a linear BLPP is strongly NP-hard. A number of algorithms have been proposed to solve BLPPs. Among them, the descent algorithms constitute an important class of algorithms for nonlinear BLPPs. However, it is assumed for almost all…

10.
The sequential minimal optimization (SMO) algorithm is a simple and efficient decomposition method for training support vector machines (SVMs). In this paper, an improved working set selection and a simplified minimization step are proposed for the SMO-type decomposition method, which reduce the learning time for SVMs and increase the efficiency of SMO. Because the working set is selected directly according to the Karush–Kuhn–Tucker (KKT) conditions, the minimization step for the subproblem is simplified; accordingly, the learning time is reduced and convergence is accelerated. Following Keerthi's method, the convergence of the proposed algorithm is analyzed. It is proven that, within a finite number of iterations, the improved algorithm obtains a solution satisfying the KKT conditions.
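A sketch of the KKT-based working-set selection idea, shown here as the standard maximal-violating-pair rule for orientation (our own illustrative code and data; the paper's improved selection rule and simplified minimization step are not reproduced).

```python
import numpy as np

def max_violating_pair(alpha, y, grad, C, tol=1e-6):
    """Working-set selection for an SMO-type method: pick the pair of dual
    variables that most violates the KKT conditions of the SVM dual."""
    up = ((y == 1) & (alpha < C)) | ((y == -1) & (alpha > 0))
    low = ((y == 1) & (alpha > 0)) | ((y == -1) & (alpha < C))
    F = -y * grad                      # KKT scores
    i = np.where(up, F, -np.inf).argmax()
    j = np.where(low, F, np.inf).argmin()
    if F[i] - F[j] < tol:              # no violating pair: KKT conditions hold
        return None
    return i, j

# Tiny example with 4 dual variables.
alpha = np.array([0.0, 0.5, 1.0, 0.2])
y = np.array([1, -1, 1, -1])
grad = np.array([-1.0, 0.3, 0.1, -0.4])
print(max_violating_pair(alpha, y, grad, C=1.0))
```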

11.
Summary. We present an algorithm which combines standard active set strategies with the gradient projection method for the solution of quadratic programming problems subject to bounds. We show, in particular, that if the quadratic is bounded below on the feasible set then termination occurs at a stationary point in a finite number of iterations. Moreover, if all stationary points are nondegenerate, termination occurs at a local minimizer. A numerical comparison of the algorithm based on the gradient projection algorithm with a standard active set strategy shows that on mildly degenerate problems the gradient projection algorithm requires considerably fewer iterations and less time than the active set strategy. On nondegenerate problems the number of iterations typically decreases by at least a factor of 10. For strongly degenerate problems, the performance of the gradient projection algorithm deteriorates, but it still performs better than the active set method. Work supported in part by the Applied Mathematical Sciences subprogram of the Office of Energy Research of the U.S. Department of Energy under Contract W-31-109-Eng-38.
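For orientation, a bare projected-gradient sketch for a bound-constrained quadratic (our own illustrative code and data; the combination with active-set strategies described above is not shown).

```python
import numpy as np

def projected_gradient_qp(A, b, lower, upper, x0, step, n_iter=500):
    """Projected-gradient sketch for min 0.5*x'Ax - b'x subject to bounds:
    a gradient step followed by projection onto the box."""
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    for _ in range(n_iter):
        g = A @ x - b                            # gradient of the quadratic
        x = np.clip(x - step * g, lower, upper)  # project back onto the bounds
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -4.0])
print(projected_gradient_qp(A, b, lower=np.zeros(2), upper=np.ones(2),
                            x0=np.array([0.5, 0.5]), step=0.2))
```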

12.
As part of a heuristic for the fast detection of new word combinations in text streams, we consider the NP-hard Partial Set Cover of Pairs problem. There we wish to cover a maximum number of pairs of elements by a prescribed number of sets from a given set family. While the approximation ratio of the greedy algorithm for the classic Partial Set Cover problem is completely understood, the same question for covering of pairs is intrinsically more complicated, since the pairs introduce some graph-theoretic structure. The best approximation guarantee for the first greedy step can be rephrased as a problem in extremal combinatorics: assuming that we may place a fixed number of subsets of fixed and equal size in a ground set, how many different pairs of elements can we cover? In this paper we introduce a method to calculate optimal approximation guarantees, and we demonstrate its use on the smallest set families.
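A small sketch of the greedy heuristic whose guarantee is analyzed above (illustrative code; the set family and the budget k are made up).

```python
from itertools import combinations

def greedy_pair_cover(sets, k):
    """Greedy sketch for Partial Set Cover of Pairs: repeatedly pick the set
    that covers the most not-yet-covered pairs of elements."""
    covered = set()
    chosen = []
    for _ in range(k):
        def gain(s):
            return sum(1 for p in combinations(sorted(s), 2)
                       if frozenset(p) not in covered)
        best = max(range(len(sets)), key=lambda i: gain(sets[i]))
        chosen.append(best)
        covered |= {frozenset(p) for p in combinations(sorted(sets[best]), 2)}
    return chosen, len(covered)

family = [{1, 2, 3}, {3, 4, 5, 6}, {1, 4}, {2, 5, 6}]
print(greedy_pair_cover(family, k=2))   # indices of chosen sets, pairs covered
```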

13.
Journal of Complexity, 2002, 18(1): 375-391
The process of partitioning a large set of patterns into disjoint and homogeneous clusters is fundamental in knowledge acquisition. It is called clustering in the literature and is applied in various fields, including data mining, statistical data analysis, compression, and vector quantization. The k-means algorithm is very popular and one of the best for implementing the clustering process. Its time complexity is dominated by the product of the number of patterns, the number of clusters, and the number of iterations. Also, it often converges to a local minimum. In this paper, we present an improvement of the k-means clustering algorithm, aiming at a better time complexity and partitioning accuracy. Our approach reduces the number of patterns that need to be examined for similarity in each iteration, using a windowing technique. The latter is based on a well-known spatial data structure, namely the range tree, which allows fast range searches.
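For reference, plain Lloyd's k-means (illustrative code and synthetic data; the windowing/range-tree refinement proposed in the paper is not reproduced, so every pattern is examined in each iteration).

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's k-means: alternate nearest-center assignment and
    center recomputation until the centers stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assignment step: nearest center for every pattern
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update step: each center becomes the mean of its cluster
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break                        # converged (possibly to a local minimum)
        centers = new_centers
    return labels, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(30, 2)) for m in (0.0, 3.0, 6.0)])
labels, centers = kmeans(X, k=3)
print(centers)
```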

14.
In this paper, by means of a new efficient identification technique for active constraints and the method of strongly sub-feasible directions, we propose a new sequential system of linear equations (SSLE) algorithm for solving inequality constrained optimization problems, in which the initial point is arbitrary. At each iteration, we first obtain the working set by a pivoting operation and a generalized projection; then, three or four reduced linear equations with the same coefficient matrix are solved to obtain the search direction. After a finite number of iterations, the algorithm produces a feasible iteration point and becomes a method of feasible directions. Moreover, after finitely many iterations, the working set becomes independent of the iterates and is essentially the same as the active set of the KKT point. Under some mild conditions, the proposed algorithm is proved to be globally, strongly, and superlinearly convergent. Finally, some preliminary numerical experiments are reported to show that the algorithm is practicable and effective.

15.
We consider a class of convex programming problems whose objective function is given as a linear function plus a convex function whose arguments are linear functions of the decision variables and whose feasible region is a polytope. We show that there exists an optimal solution to this class of problems on a face of the constraint polytope of dimension not more than the number of arguments of the convex function. Based on this result, we develop a method to solve this problem that is inspired by the simplex method for linear programming. It is shown that this method terminates in a finite number of iterations in the special case that the convex function has only a single argument. We then use this insight to develop a second algorithm that solves the problem in a finite number of iterations for an arbitrary number of arguments in the convex function. A computational study illustrates the efficiency of the algorithm and suggests that the average-case performance of these algorithms is a polynomial of low order in the number of decision variables. The work of T. C. Sharkey was supported by a National Science Foundation Graduate Research Fellowship. The work of H. E. Romeijn was supported by the National Science Foundation under Grant No. DMI-0355533.

16.
In a Hilbert space, we study the finite termination of iterative methods for solving a monotone variational inequality under a weak sharpness assumption. Most results to date require that the sequence generated by the method converges strongly to a solution. In this paper, we show that the proximal point algorithm for solving the variational inequality terminates at a solution in a finite number of iterations if the solution set is weakly sharp. Consequently, we derive finite convergence results for the gradient projection and extragradient methods. Our results show that the assumption of strong convergence of sequences can be removed in the Hilbert space case.
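A minimal sketch of the gradient projection method mentioned above, for a monotone variational inequality over a box (illustrative code and data; the weak-sharpness analysis itself is not reproduced).

```python
import numpy as np

def gradient_projection_vi(F, project, x0, tau=0.1, n_iter=1000, tol=1e-10):
    """Gradient projection for a variational inequality VI(F, C):
    x_{k+1} = P_C(x_k - tau * F(x_k))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x_new = project(x - tau * F(x))
        if np.linalg.norm(x_new - x) <= tol:
            break
        x = x_new
    return x

# Example: F(x) = x - a on the box [0, 1]^2 (the solution is the projection of a).
a = np.array([2.0, -0.5])
F = lambda x: x - a
project = lambda z: np.clip(z, 0.0, 1.0)
print(gradient_projection_vi(F, project, x0=np.array([0.5, 0.5])))
```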

17.
An algorithm is described for finding a feasible point for a system of linear inequalities. If the solution set has nonempty interior, termination occurs after a finite number of iterations. The algorithm is a projection-type method, similar to the relaxation methods of Agmon, Motzkin, and Schoenberg. It differs from the previous methods in that it solves for a certain “dual” solution in addition to a primal solution.
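A sketch of a relaxation/projection-type iteration of the Agmon–Motzkin–Schoenberg kind referred to above (illustrative code and data; the dual bookkeeping of the paper's algorithm is not shown).

```python
import numpy as np

def relaxation_method(A, b, x0, lam=1.0, n_iter=1000, tol=1e-9):
    """Relaxation-method sketch for finding a feasible point of Ax <= b:
    repeatedly project onto the most violated halfspace."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        residual = A @ x - b
        i = np.argmax(residual)
        if residual[i] <= tol:
            return x                   # all inequalities hold (within tol)
        # project onto the halfspace a_i . x <= b_i (relaxation parameter lam)
        x = x - lam * residual[i] / np.dot(A[i], A[i]) * A[i]
    return x

A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])          # region x >= 0, x1 + x2 <= 1
print(relaxation_method(A, b, x0=np.array([3.0, 3.0])))
```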

18.
In this paper, the problem of minimizing a function of several variables subject to inequality constraints is considered. The method employed is the sequential gradient-restoration algorithm with complete restoration. In this algorithm, the inequality constraints are transformed into equality constraints by suitable transformations. A modification of the sequential gradient-restoration algorithm, designed to improve the convergence characteristics, is presented. It consists of inserting a prerestorative step prior to any iteration of the algorithm. The aim of this prerestorative step is to reduce the constraint violation. Eight numerical examples are presented. They show the considerable beneficial effects associated with the above prerestorative step: on the average, the number of iterations of the modified algorithm is less than 50% of the number of iterations of the standard algorithm. An analogous remark holds for the computer time.

19.
Monotonic (isotonic) regression is a powerful tool used for solving a wide range of important applied problems. One of its features, which poses a limitation on its use in some areas, is that it produces a piecewise constant fitted response. For smoothing the fitted response, we introduce a regularization term in the monotonic regression, formulated as a least distance problem with monotonicity constraints. The resulting smoothed monotonic regression is a convex quadratic optimization problem. We focus on the case where the set of observations is completely (linearly) ordered. Our smoothed pool-adjacent-violators algorithm is designed for solving the regularized problem. It belongs to the class of dual active-set algorithms. We prove that it converges to the optimal solution in a finite number of iterations that does not exceed the problem size. One of its advantages is that the active set is enlarged progressively by including one or, typically, several constraints per iteration. As a result, large-scale test problems were solved in a few iterations, whereas their size was prohibitively large for conventional quadratic optimization solvers. Although the complexity of our algorithm grows quadratically with the problem size, we found its running time to grow almost linearly in our computational experiments.
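For reference, the classic pool-adjacent-violators algorithm on a completely ordered data set (illustrative code; the smoothed, regularized variant introduced in the paper is not reproduced).

```python
def pav(y):
    """Classic pool-adjacent-violators algorithm for isotonic regression of a
    completely (linearly) ordered sequence of observations."""
    blocks = []                          # each block stores [sum of values, count]
    for v in y:
        blocks.append([v, 1])
        # merge while the last two blocks violate monotonicity of their means
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    fit = []
    for s, n in blocks:
        fit.extend([s / n] * n)
    return fit

print(pav([1.0, 3.0, 2.0, 2.0, 5.0, 4.0]))   # piecewise-constant increasing fit
```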

20.
Problems of partitioning a finite set of Euclidean points (vectors) into clusters are considered. The criterion is to minimize the sum, over all clusters, of (1) squared norms of the sums of cluster elements normalized by the cardinality, (2) squared norms of the sums of cluster elements, and (3) norms of the sum of cluster elements. It is proved that all these problems are strongly NP-hard if the number of clusters is a part of the input and are NP-hard in the ordinary sense if the number of clusters is not a part of the input (is fixed). Moreover, the problems are NP-hard even in the case of dimension 1 (on a line).
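A small worked evaluation of the three criteria for a given partition (illustrative code and data only; it evaluates the objectives, it does not solve the NP-hard partitioning problems).

```python
import numpy as np

def criteria(clusters):
    """Evaluate the three partition criteria above:
    (1) sum over clusters of ||sum of elements||^2 / cardinality,
    (2) sum over clusters of ||sum of elements||^2,
    (3) sum over clusters of ||sum of elements||."""
    sums = [np.sum(np.asarray(c, dtype=float), axis=0) for c in clusters]
    c1 = sum(np.dot(s, s) / len(c) for s, c in zip(sums, clusters))
    c2 = sum(np.dot(s, s) for s in sums)
    c3 = sum(np.linalg.norm(s) for s in sums)
    return c1, c2, c3

# Two clusters of points on a line (dimension 1), as in the last remark above.
clusters = [[[1.0], [2.0]], [[-3.0], [4.0], [5.0]]]
print(criteria(clusters))
```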
