Similar Articles
A total of 20 similar articles were found (search time: 46 ms).
1.
An infeasible-interior-point algorithm for linear complementarity problems
We modify the algorithm of Zhang to obtain an O(n^2 L) infeasible-interior-point algorithm for monotone linear complementarity problems that has an asymptotic Q-subquadratic convergence rate. The algorithm requires the solution of at most two linear systems with the same coefficient matrix at each iteration. This research was supported by the Office of Scientific Computing, U.S. Department of Energy, under Contract W-31-109-Eng-38.
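
The abstract above concerns interior-point iterations for the monotone linear complementarity problem (LCP): find x >= 0, s >= 0 with s = Mx + q and x^T s = 0. Below is a minimal sketch of a generic damped-Newton infeasible interior-point loop for a small dense LCP. The function name `lcp_infeasible_ip`, the centering parameter, and the step rule are my own illustrative choices; this is not the paper's O(n^2 L) algorithm or its two-solve iteration.

```python
import numpy as np

def lcp_infeasible_ip(M, q, tol=1e-8, max_iter=100, sigma=0.3):
    """Sketch of an infeasible interior-point method for the monotone LCP:
    find x >= 0, s >= 0 with s = M x + q and x^T s = 0.  Illustrative only."""
    n = len(q)
    x = np.ones(n)
    s = np.ones(n)
    for _ in range(max_iter):
        r = M @ x + q - s          # infeasibility residual
        mu = x @ s / n             # duality measure
        if np.linalg.norm(r) < tol and mu < tol:
            break
        # Newton step for F(x, s) = (M x + q - s, x*s - sigma*mu*e),
        # after eliminating ds = M dx + r:
        rhs = sigma * mu - x * s - x * r
        J = np.diag(s) + np.diag(x) @ M
        dx = np.linalg.solve(J, rhs)
        ds = M @ dx + r
        # fraction-to-boundary step to keep (x, s) strictly positive
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.99 * np.min(-v[neg] / dv[neg]))
        x, s = x + alpha * dx, s + alpha * ds
    return x, s

# Example: M positive definite, so the LCP is monotone and solvable.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-4.0, -3.0])
x, s = lcp_infeasible_ip(M, q)
print(x, s, x @ s)
```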

2.
Two algorithms for reordering sparse, symmetric matrices or undirected graphs to reduce envelope and wavefront are considered. The first is a combinatorial algorithm introduced by Sloan and further developed by Duff, Reid, and Scott; we describe enhancements to the Sloan algorithm that improve its quality and reduce its run time. Our test problems fall into two classes with differing asymptotic behavior of their envelope parameters as a function of the weights in the Sloan algorithm. We describe an efficient O(n log n + m) time implementation of the Sloan algorithm, where n is the number of rows (vertices), and m is the number of nonzeros (edges). On a collection of test problems, the improved Sloan algorithm required, on the average, only twice the time required by the simpler RCM algorithm while improving the mean square wavefront by a factor of three. The second algorithm is a hybrid that combines a spectral algorithm for envelope and wavefront reduction with a refinement step that uses a modified Sloan algorithm. The hybrid algorithm reduces the envelope size and mean square wavefront obtained from the Sloan algorithm at the cost of greater running times. We illustrate how these reductions translate into tangible benefits for frontal Cholesky factorization and incomplete factorization preconditioning. This work was partially supported by the U.S. National Science Foundation grants CCR-9412698, DMS-9505110, and ECS-9527169, by U.S. Department of Energy grant DE-FG05-94ER25216, and by the National Aeronautics and Space Administration under NASA Contract NAS1-19480 while the second author was in residence at the Institute for Computer Applications in Science and Engineering (ICASE), NASA Langley Research Center, Hampton, VA.
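
The abstract uses RCM (reverse Cuthill-McKee) as the simpler baseline against which the improved Sloan algorithm is measured. Here is a minimal RCM sketch for an adjacency-list graph; the helper name `rcm_order` and the choice of a minimum-degree start vertex (instead of a true pseudo-peripheral vertex) are simplifications of mine, not the implementation studied in the paper.

```python
from collections import deque

def rcm_order(adj):
    """Reverse Cuthill-McKee ordering of an undirected graph given as an
    adjacency list {vertex: set(neighbours)}.  A simple envelope-reduction
    baseline; the Sloan algorithm refines this with a priority function."""
    order, visited = [], set()
    # iterate over start vertices by increasing degree (handles disconnected graphs)
    for start in sorted(adj, key=lambda v: len(adj[v])):
        if start in visited:
            continue
        visited.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            # enqueue unnumbered neighbours in order of increasing degree
            for w in sorted(adj[v] - visited, key=lambda u: len(adj[u])):
                visited.add(w)
                queue.append(w)
    return order[::-1]   # reversing the Cuthill-McKee order gives RCM

# Tiny example graph
adj = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2, 4}, 4: {3}}
print(rcm_order(adj))
```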

3.
Towards a cost-effective ILU preconditioner with high level fill
There has been increased interest in the effect of the ordering of the unknowns on Preconditioned Conjugate Gradient (PCG) methods. A recently proposed Minimum Discarded Fill (MDF) ordering technique is effective in finding good ILU(ℓ) preconditioners, especially for problems arising from unstructured finite element grids. This algorithm can identify anisotropy in complicated physical structures and orders the unknowns in an appropriate direction. The MDF technique may be viewed as an analogue of the minimum deficiency algorithm in sparse matrix technology, and hence is expensive to compute for high-level ILU(ℓ) preconditioners. In this work, several less expensive variants of the MDF technique are explored to produce cost-effective ILU preconditioners. The Threshold MDF ordering combines MDF ideas with drop tolerance techniques to identify the sparsity pattern in the ILU preconditioners. The Minimum Update Matrix (MUM) ordering technique is a simplification of the MDF ordering and is an analogue of the minimum degree algorithm. The MUM ordering method is especially effective for large matrices arising from Navier-Stokes problems. This work was supported by the Natural Sciences and Engineering Research Council of Canada, by the Information Technology Research Centre, which is funded by the Province of Ontario, and by the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy under contract DE-AC05-84OR21400 with Martin Marietta Energy Systems, Inc., through an appointment to the U.S. Department of Energy Postgraduate Research Program administered by Oak Ridge Associated Universities.
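
For reference, the level-0 member of the ILU(ℓ) family discussed above is easy to write down: fill is allowed only where the matrix itself is nonzero. The dense-storage sketch below (the function name `ilu0` and the test matrix are mine) illustrates that factorization; the paper's contribution is the ordering (the MDF/MUM variants), which this sketch does not implement.

```python
import numpy as np

def ilu0(A):
    """Minimal dense-storage ILU(0) sketch: incomplete LU factorization that
    allows fill only at positions where A itself is nonzero (natural order)."""
    A = A.astype(float)
    n = A.shape[0]
    LU = A.copy()
    pattern = A != 0
    for i in range(1, n):
        for k in range(i):
            if not pattern[i, k]:
                continue
            LU[i, k] /= LU[k, k]            # multiplier L[i, k]
            for j in range(k + 1, n):
                if pattern[i, j]:           # update only inside the pattern
                    LU[i, j] -= LU[i, k] * LU[k, j]
    L = np.tril(LU, -1) + np.eye(n)
    U = np.triu(LU)
    return L, U

A = np.array([[4., 1., 0., 1.],
              [1., 4., 1., 0.],
              [0., 1., 4., 1.],
              [1., 0., 1., 4.]])
L, U = ilu0(A)
print(np.round(L @ U - A, 3))   # mismatch appears only outside the pattern of A
```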

4.
We consider the complexity of finding a local minimum for the nonconvex Quadratic Knapsack Problem. Global minimization for this example of quadratic programming is NP-hard. Moré and Vavasis have investigated the complexity of local minimization for the strictly concave case of QKP; here we extend their algorithm to the general indefinite case. Our main result is an algorithm that computes a local minimum in O(n(log n)^2) steps. Our approach involves eliminating all but one of the convex variables through parametrization, yielding a nondifferentiable problem. We use a technique from computational geometry to address the nondifferentiable problem. Supported in part by the Applied Mathematical Sciences subprogram of the Office of Energy Research, Department of Energy, under contract W-31-109-Eng-38, in part by a Fannie and John Hertz Foundation graduate fellowship, and in part by Department of Energy grant DE-FG02-86ER25013.A000.

5.
We apply Megiddo's parametric searching technique to several geometric optimization problems and derive significantly improved solutions for them. We obtain, for any fixed ε>0, an O(n^{1+ε}) algorithm for computing the diameter of a point set in 3-space, an O(n^{8/5+ε}) algorithm for computing the width of such a set, and an O(n^{8/5+ε}) algorithm for computing the closest pair in a set of n lines in space. All these algorithms are deterministic. Work by Bernard Chazelle was supported by NSF Grant CCR-90-02352. Work by Herbert Edelsbrunner was supported by NSF Grant CCR-89-21421. Work by Leonidas Guibas and Micha Sharir was supported by a grant from the U.S.-Israeli Binational Science Foundation. Work by Micha Sharir was also supported by ONR Grant N00014-90-J-1284, by NSF Grant CCR-89-01484, and by grants from the Fund for Basic Research administered by the Israeli Academy of Sciences, and the G.I.F., the German-Israeli Foundation for Scientific Research and Development.
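
To make the O(n^{1+ε}) diameter bound concrete by contrast, the naive baseline is quadratic: check all pairs. The sketch below (the function name `diameter_bruteforce` is mine) is that brute-force check, useful only for validating results on small 3-D inputs; it is not the parametric-searching algorithm of the paper.

```python
import numpy as np

def diameter_bruteforce(points):
    """Naive O(n^2) diameter of a 3-D point set, as a correctness baseline."""
    P = np.asarray(points, dtype=float)
    # pairwise squared distances via broadcasting
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(axis=-1)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    return np.sqrt(d2[i, j]), (i, j)

rng = np.random.default_rng(0)
pts = rng.random((200, 3))
print(diameter_bruteforce(pts))
```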

6.
We consider a version of the knapsack problem which gives rise to a separable concave minimization problem subject to bounds on the variables and one equality constraint. We characterize strict local minimizers of concave minimization problems subject to linear constraints, and use this characterization to show that although the problem of determining a global minimizer of the concave knapsack problem is NP-hard, it is possible to determine a local minimizer of this problem with at most O(n log n) operations and 1 + [log n] evaluations of the function. If the function is quadratic this algorithm requires at most O(n log n) operations. Work supported in part by the Applied Mathematical Sciences subprogram of the Office of Energy Research of the U.S. Department of Energy under Contract W-31-109-Eng-38. Work supported in part by a Fannie and John Hertz Foundation graduate fellowship.

7.
This paper describes a method for computing all the eigenvalues in a user-supplied interval [a,b], and their associated eigenvectors, of the symmetric definite quadratic matrix problem (λ^2 M + λC + K)x = 0, where the matrices are sufficiently sparse that methods based on similarity transformations are inappropriate. Only the triangular factorization of one n × n matrix need be computed. Research sponsored in part by the Mathematics Department, University of Linköping, Sweden, and the Applied Mathematical Sciences Research Program, Office of Energy Research, U.S. Department of Energy under contract W-7405-eng-26 with the Union Carbide Corporation.
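
A compact way to see what is being computed: the quadratic matrix problem (λ^2 M + λC + K)x = 0 can be linearized to a generalized eigenproblem of twice the size, solved densely, and then filtered to the interval [a,b]. The sketch below does exactly that; the companion linearization, the helper name `quadeig_in_interval`, and the example matrices are my choices, and the dense approach ignores sparsity, which is precisely what the paper's factorization-based method is designed to avoid.

```python
import numpy as np
from scipy.linalg import eig

def quadeig_in_interval(M, C, K, a, b):
    """Real eigenvalues of (lambda^2 M + lambda C + K) x = 0 lying in [a, b],
    via the companion linearization A z = lambda B z with z = (x, lambda*x)."""
    n = M.shape[0]
    Z, I = np.zeros((n, n)), np.eye(n)
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    vals, vecs = eig(A, B)
    keep = (np.abs(vals.imag) < 1e-10) & (vals.real >= a) & (vals.real <= b)
    return vals[keep].real, vecs[:n, keep]   # top block of z recovers x

# Small overdamped example: all eigenvalues are real and negative.
M = np.eye(2)
C = 4.0 * np.eye(2)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
print(quadeig_in_interval(M, C, K, -5.0, 0.0)[0])
```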

8.
Recently, various interior point algorithms related to the Karmarkar algorithm have been developed for linear programming. In this paper, we first show how this interior point philosophy can be adapted to the linear ℓ₁ problem (in which there are no feasibility constraints) to yield a globally and linearly convergent algorithm. We then show that the linear algorithm can be modified to provide a globally and ultimately quadratically convergent algorithm. This modified algorithm appears to be significantly more efficient in practice than a more straightforward interior point approach via a linear programming formulation: we present numerical results to support this claim. This paper was presented at the Third SIAM Conference on Optimization, in Boston, April 1989. Research partially supported by the Applied Mathematical Sciences Research Program (KC-04-02) of the Office of Energy Research of the U.S. Department of Energy under grant DE-FG02-86ER25013.A000, by the U.S. Army Research Office through the Mathematical Sciences Institute, Cornell University, and by the Computational Mathematics Program of the National Science Foundation under grant DMS-8706133. Research partially supported by the U.S. Army Research Office through the Mathematical Sciences Institute, Cornell University and by the Computational Mathematics Program of the National Science Foundation under grant DMS-8706133.
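
The linear ℓ₁ problem referred to above is min_x ||Ax - b||_1. The sketch below solves it through the standard LP reformulation using SciPy's general-purpose `linprog`; the function name `l1_fit` and the outlier example are mine, and a stock LP solver here merely stands in for the specialised, ultimately quadratically convergent interior-point method of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def l1_fit(A, b):
    """Solve min_x ||A x - b||_1 via the LP reformulation with residual
    bounds t: minimise sum(t) subject to -t <= A x - b <= t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])          # objective: sum(t)
    # A x - t <= b   and   -A x - t <= -b
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)] * m)
    return res.x[:n], res.fun

# Four collinear observations plus one gross outlier.
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0], [1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0, 4.0, 20.0])
x, err = l1_fit(A, b)
print(x, err)   # approximately x = [0, 1], total error 15: the fit ignores the outlier
```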

9.
Existing implementations of Munkres' algorithm for the optimal assignment problem are shown to require O(n^4) time in the worst n×n case. A new implementation is presented which runs in worst-case time O(n^3) and compares favorably in performance with the algorithm of Edmonds and Karp for this problem. The results of this paper were obtained by the author while at the Department of Computer Science, Cornell University. This work was supported in part by a Vanderbilt University Research Council Grant.
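
For readers who simply need to solve the optimal assignment problem discussed above, SciPy ships a solver with O(n^3) worst-case complexity; the tiny example below (cost matrix chosen by me) shows the call.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Solve a 3x3 optimal assignment problem: pick one entry per row and column
# minimising the total cost.
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)), cost[rows, cols].sum())   # optimal total cost 5
```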

10.
We investigate algorithmic questions that arise in the statistical problem of computing lines or hyperplanes of maximum regression depth among a set of n points. We work primarily with a dual representation and find points of maximum undirected depth in an arrangement of lines or hyperplanes. An O(n^d) time and O(n^{d-1}) space algorithm computes undirected depth of all points in d dimensions. Properties of undirected depth lead to an O(n log^2 n) time and O(n) space algorithm for computing a point of maximum depth in two dimensions, which has been improved to an O(n log n) time algorithm by Langerman and Steiger (Discrete Comput. Geom. 30(2):299–309, 2003). Furthermore, we describe the structure of depth in the plane and higher dimensions, leading to various other geometric and algorithmic results. A preliminary version of this paper appeared in the proceedings of the 15th Annual ACM Symposium on Computational Geometry (1999). M. van Kreveld partially funded by the Netherlands Organization for Scientific Research (NWO) under FOCUS/BRICKS grant number 642.065.503. J.S.B. Mitchell's research largely conducted while the author was a Fulbright Research Scholar at Tel Aviv University. The author is partially supported by NSF (CCR-9504192, CCR-9732220), Boeing, Bridgeport Machines, Sandia, Seagull Technology, and Sun Microsystems. M. Sharir supported by NSF Grants CCR-97-32101 and CCR-94-24398, by grants from the U.S.-Israeli Binational Science Foundation, the G.I.F., the German-Israeli Foundation for Scientific Research and Development, and the ESPRIT IV LTR project No. 21957 (CGAL), and by the Hermann Minkowski-MINERVA Center for Geometry at Tel Aviv University. J. Snoeyink supported in part by grants from NSERC, the Killam Foundation, and CIES while at the University of British Columbia.

11.
In this paper, we describe an algorithm to stably sort an array of n elements using only a linear number of data movements and constant extra space, albeit in quadratic time. It was not known previously whether such an algorithm existed. When the input contains only a constant number of distinct values, we present a sequence of in situ stable sorting algorithms making O(n lg^{(k+1)} n + kn) comparisons (lg^{(k)} means lg iterated k times, and lg* is the number of times the logarithm must be taken to give a result ≤ 0) and O(kn) data movements for any fixed value k, culminating in one that makes O(n lg* n) comparisons and data movements. Stable versions of quicksort follow from these algorithms. Research supported by Natural Sciences and Engineering Research Council of Canada grant No. A-8237 and the Information Technology Research Centre of Ontario. Supported in part by a Research Initiation Grant from the Virginia Engineering Foundation.

12.
Quadratic programming with one negative eigenvalue is NP-hard
We show that the problem of minimizing a concave quadratic function with one concave direction is NP-hard. This result can be interpreted as an attempt to understand exactly what makes nonconvex quadratic programming problems hard. Sahni in 1974 [8] showed that quadratic programming with a negative definite quadratic term (n negative eigenvalues) is NP-hard, whereas Kozlov, Tarasov and Khachiyan [2] showed in 1979 that the ellipsoid algorithm solves the convex quadratic problem (no negative eigenvalues) in polynomial time. This report shows that even one negative eigenvalue makes the problem NP-hard. This author's work supported by the Applied Mathematical Sciences Program (KC-04-02) of the Office of Energy Research of the U.S. Department of Energy under grant DE-FG02-86ER25013.A000 and in part by the National Science Foundation, the Air Force Office of Scientific Research, and the Office of Naval Research, through NSF grant DMS 8920550.

13.
We give a combinatorial definition of the notion of a simple orthogonal polygon being k-concave, where k is a nonnegative integer. (A polygon is orthogonal if its edges are only horizontal or vertical.) Under this definition an orthogonal polygon which is 0-concave is convex, that is, it is a rectangle, and one that is 1-concave is orthoconvex in the usual sense, and vice versa. Then we consider the problem of computing an orthoconvex orthogonal polygon of maximal area contained in a simple orthogonal polygon. This is the orthogonal version of the potato peeling problem. An O(n^2) algorithm is presented, which is a substantial improvement over the O(n^7) time algorithm for the general problem. The work of the first author was supported under a Natural Sciences and Engineering Research Council of Canada Grant No. A-5692 and the work of the second author was partially supported by NSF Grants Nos. DCR-84-01898 and DCR-84-01633.

14.
Enumerating Constrained Non-crossing Minimally Rigid Frameworks
In this paper we present an algorithm for enumerating without repetitions all the non-crossing generically minimally rigid bar-and-joint frameworks under edge constraints, which we call constrained non-crossing Laman frameworks, on a given set of n points in the plane. Our algorithm is based on the reverse search paradigm of Avis and Fukuda. It generates each output graph in O(n^4) time and O(n) space, or, with a slightly different implementation, in O(n^3) time and O(n^2) space. In particular, we obtain that the set of all the constrained non-crossing Laman frameworks on a given point set is connected by flips which preserve the Laman property. D. Avis's research was supported by NSERC and FQRNT grants. N. Katoh's, M. Ohsaki's and S.-i. Tanigawa's research was supported by the MEXT Grant-in-Aid for Scientific Research on priority areas of New Horizons in Computing. I. Streinu's research was supported by NSF grant CCF-0430990 and NSF-DARPA CARGO CCR-0310661.
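
The frameworks being enumerated are Laman graphs: |E| = 2n - 3, with every vertex subset S of size at least 2 spanning at most 2|S| - 3 edges. The sketch below (function name `is_laman` is mine) checks this condition by brute-force subset enumeration, which is feasible only for very small n and ignores the non-crossing embedding aspect; the paper's reverse-search enumeration relies on far more efficient machinery.

```python
from itertools import combinations

def is_laman(n, edges):
    """Naive exponential check of Laman's counting condition for generic
    minimal rigidity in the plane: |E| = 2n - 3 and every vertex subset S
    with |S| >= 2 spans at most 2|S| - 3 edges."""
    if len(edges) != 2 * n - 3:
        return False
    for k in range(2, n + 1):
        for S in combinations(range(n), k):
            S = set(S)
            spanned = sum(1 for u, v in edges if u in S and v in S)
            if spanned > 2 * k - 3:
                return False
    return True

# K4 minus one edge is a Laman graph on 4 vertices (2*4 - 3 = 5 edges).
print(is_laman(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)]))              # True
# K4 itself has 6 edges and violates the count.
print(is_laman(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))      # False
```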

15.
Nested dissection is an algorithm invented by Alan George for preserving sparsity in Gaussian elimination on symmetric positive definite matrices. Nested dissection can be viewed as a recursive divide-and-conquer algorithm on an undirected graph; it uses separators in the graph, which are small sets of vertices whose removal divides the graph approximately in half. George and Liu gave an implementation of nested dissection that used a heuristic to find separators. Lipton and Tarjan gave an algorithm to find n^{1/2}-separators in planar graphs and two-dimensional finite element graphs, and Lipton, Rose, and Tarjan used these separators in a modified version of nested dissection, guaranteeing bounds of O(n log n) on fill and O(n^{3/2}) on operation count. We analyze the combination of the original George-Liu nested dissection algorithm and the Lipton-Tarjan planar separator algorithm. This combination is interesting because it is easier to implement than the Lipton-Rose-Tarjan version, especially in the framework of existing sparse matrix software. Using some topological graph theory, we prove O(n log n) fill and O(n^{3/2}) operation count bounds for planar graphs, two-dimensional finite element graphs, graphs of bounded genus, and graphs of bounded degree with n^{1/2}-separators. For planar and finite element graphs, the leading constant factor is smaller than that in the Lipton-Rose-Tarjan analysis. We also construct a class of graphs with n^{1/2}-separators for which our algorithm does not achieve an O(n log n) bound on fill. The work of this author was supported in part by the Hertz Foundation under a graduate fellowship and by the National Science Foundation under Grant MCS 82-02948. The work of this author was supported in part by the National Science Foundation under Grant MCS 78-26858 and by the Office of Naval Research under Contract N00014-76-C-0688.
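
A minimal sketch of the recursive structure of nested dissection, using the obvious middle-line separator of a rows × cols grid graph rather than the Lipton-Tarjan planar separator analyzed in the paper; the function name and the base-case threshold are my own choices. The key point is that each separator is numbered after both halves, which is what limits fill in the subsequent factorization.

```python
def nested_dissection_grid(rows, cols):
    """Sketch of nested dissection ordering for the vertices of a rows x cols
    grid graph, splitting along a middle row or column at each level."""
    def dissect(r0, r1, c0, c1):
        h, w = r1 - r0, c1 - c0
        if h <= 2 and w <= 2:                     # small block: order directly
            return [(r, c) for r in range(r0, r1) for c in range(c0, c1)]
        if w >= h:                                # split along a middle column
            mid = (c0 + c1) // 2
            sep = [(r, mid) for r in range(r0, r1)]
            left = dissect(r0, r1, c0, mid)
            right = dissect(r0, r1, mid + 1, c1)
        else:                                     # split along a middle row
            mid = (r0 + r1) // 2
            sep = [(mid, c) for c in range(c0, c1)]
            left = dissect(r0, mid, c0, c1)
            right = dissect(mid + 1, r1, c0, c1)
        return left + right + sep                 # separator numbered last
    order = dissect(0, rows, 0, cols)
    return {v: i for i, v in enumerate(order)}    # vertex -> elimination index

numbering = nested_dissection_grid(7, 7)
print(numbering[(3, 3)])   # (3, 3) lies on the top-level separator: index 45 of 0..48
```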

16.
We consider the following problem: given a rectangle containing n points, find the largest perimeter subrectangle whose sides are parallel to those of the original rectangle, whose aspect ratio is below a given bound, and which does not contain any of the given points. Chazelle, Drysdale and Lee have studied a variant of this problem with areas as the quantity to be maximized. They gave an O(n log^3 n) algorithm for that problem. We adopt a similar divide-and-conquer approach and are able to use the simpler properties of the perimeter measure to obtain an O(n log^2 n) algorithm for our problem. The work of the first author was supported by the Academy of Finland and that of the second by the Natural Sciences and Engineering Research Council of Canada Grant No. A-5692.

17.
Construction of optimal triangular meshes for controlling the errors in gradient estimation for piecewise linear interpolation of data functions in the plane is discussed. Using an appropriate linear coordinate transformation, rigorously optimal meshes for controlling the error in quadratic data functions are constructed. It is shown that the transformation can be generated as a curvilinear coordinate transformation for any C data function with nonsingular Hessian matrix. Using this transformation, a construction of nearly optimal meshes for general data functions is described and the error equilibration properties of these meshes discussed. In particular, it is shown that equilibration of errors is not a sufficient condition for optimality. A comparison of meshes generated under several different criteria is made, and their equilibrating properties illustrated. This work was supported by the Natural Sciences and Engineering Research Council of Canada, by the Information Technology Research Centre, which is funded by the Province of Ontario, by the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy under contract DE-AC05-84OR21400 with Martin Marietta Energy Systems, Inc., and through an appointment to the U.S. Department of Energy Postgraduate Research Program administered by Oak Ridge Associated Universities.

18.
We propose a very simple ray-shooting algorithm, whose only data structure is a triangulation. The query algorithm, after locating the triangle containing the origin of the ray, walks along the ray, advancing from one triangle to a neighboring one until the polygon boundary is reached. The key result of the paper is a Steiner triangulation of a simple polygon with the property that a ray can intersect at most O(log n) triangles before reaching the polygon boundary. We are able to compute such a triangulation in linear sequential time, or in O(log n) parallel time using O(n/log n) processors. This gives a simple, yet optimal, ray-shooting algorithm for a simple polygon. Using a well-known technique, we can extend our triangulation procedure to a multiconnected polygon with k components and n vertices, so that a ray intersects at most O(√k log n) triangles.
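
The query described above is easy to sketch once a triangulation with neighbour pointers is available: starting in the triangle containing the ray origin, step across the edge the ray exits through until an edge with no neighbour (the polygon boundary) is hit. The code below is such a walking query for generic input; the data layout (`tris`, `nbrs`), the tolerance, and the function name `walk_ray` are assumptions of mine, and the paper's contribution, the Steiner triangulation that bounds the walk length by O(log n), is not constructed here.

```python
def cross(ax, ay, bx, by):
    return ax * by - ay * bx

def walk_ray(points, tris, nbrs, start_tri, origin, direction):
    """Walk a ray through a triangulation, triangle by triangle, until the
    polygon boundary is reached.  `nbrs[t][i]` is assumed to be the triangle
    across edge (tris[t][i], tris[t][(i+1) % 3]), or None on the boundary.
    Degenerate cases (ray through a vertex, zero-area triangles) are ignored."""
    px, py = origin
    dx, dy = direction
    t_cur, tri, visited = 0.0, start_tri, 0
    while True:
        visited += 1
        best = None
        for i in range(3):
            a = points[tris[tri][i]]
            b = points[tris[tri][(i + 1) % 3]]
            ex, ey = b[0] - a[0], b[1] - a[1]
            wx, wy = a[0] - px, a[1] - py
            denom = cross(dx, dy, ex, ey)
            if abs(denom) < 1e-12:
                continue                        # edge parallel to the ray
            t_hit = cross(wx, wy, ex, ey) / denom   # ray parameter at the edge
            s_hit = cross(wx, wy, dx, dy) / denom   # position along the edge
            if 0.0 <= s_hit <= 1.0 and t_hit > t_cur + 1e-12:
                if best is None or t_hit < best[0]:
                    best = (t_hit, i)
        t_cur, exit_edge = best
        nxt = nbrs[tri][exit_edge]
        if nxt is None:                         # reached the polygon boundary
            hit = (px + t_cur * dx, py + t_cur * dy)
            return hit, visited
        tri = nxt

# Two triangles forming the unit square, split along the diagonal (0,0)-(1,1).
points = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
tris = [(0, 1, 2), (0, 2, 3)]
nbrs = [(None, None, 1), (0, None, None)]
print(walk_ray(points, tris, nbrs, 0, (0.6, 0.2), (0.0, 1.0)))   # hits (0.6, 1.0) after 2 triangles
```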

19.
We consider ε-approximation schemes for indefinite quadratic programming. We argue that such an approximation can be found in polynomial time for fixed ε and t, where t denotes the number of negative eigenvalues of the quadratic term. Our algorithm is polynomial in 1/ε for fixed t, and exponential in t for fixed ε. We next look at the special case of knapsack problems, showing that a more efficient (polynomial in t) approximation algorithm exists. Part of this work was done while the author was visiting Sandia National Laboratories, Albuquerque, New Mexico, supported by the U.S. Department of Energy under contract DE-AC04-76DP00789. Part of this work was also supported by the Applied Mathematical Sciences Program (KC-04-02) of the Office of Energy Research of the U.S. Department of Energy under grant DE-FG02-86ER25013.A000 and in part by the National Science Foundation, the Air Force Office of Scientific Research, and the Office of Naval Research, through NSF grant DMS 8920550.

20.
B-trees have been popular data structures since their definition. Their success is due to the fact that a B-tree containing n keys is balanced with a height of O(log n). However, for a given set of elements and their access frequencies, one can construct many B-trees (possibly with different heights). The average access costs of these trees may vary significantly. An algorithm to construct a weighted time-optimal B-tree is presented. A weighted time-optimal B-tree is one in which the weighted access cost is minimized. A dynamic programming algorithm is used to construct a weighted time-optimal B-tree given n elements and their weights. The algorithm runs in time O(mn^3 log n) and has a storage requirement of O(mn^2 log n), where m is the order of the B-tree and n is the number of keys. This research was supported in part by Texas Advanced Research Grant 1080 and NASA Grant NAG 5-739.
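
The order-2 relative of this construction is the classical optimal binary search tree dynamic program, which makes the weighted-access-cost recurrence easy to see. The sketch below (function name `optimal_bst_cost` and the example weights are mine) is that simpler DP, not the O(mn^3 log n) B-tree algorithm of the paper.

```python
def optimal_bst_cost(weights):
    """Classical O(n^3) dynamic program for a weight-optimal binary search
    tree: cost[i][j] is the minimum weighted access cost (root at depth 1)
    of a BST over keys i..j with access weights `weights`."""
    n = len(weights)
    # prefix sums so that the total weight of keys i..j is available in O(1)
    pref = [0] * (n + 1)
    for i, w in enumerate(weights):
        pref[i + 1] = pref[i] + w
    cost = [[0] * n for _ in range(n)]
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            total = pref[j + 1] - pref[i]
            best = float("inf")
            for r in range(i, j + 1):          # try every key as the root
                left = cost[i][r - 1] if r > i else 0
                right = cost[r + 1][j] if r < j else 0
                best = min(best, left + right)
            cost[i][j] = best + total          # every key moves one level deeper
    return cost[0][n - 1]

# Keys with access weights (frequencies)
print(optimal_bst_cost([4, 2, 6, 3]))   # -> 26
```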
