Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
The linear complementarity problem LCP(M, q) is to find w and z in R^n such that w − Mz = q, w ≥ 0, z ≥ 0, w^T z = 0, given M in R^{n×n} and q in R^n. Murty's Bard-type algorithm for solving the LCP is modeled as a digraph. Murty's original convergence proof considered all q in R^n and M in R^{n×n} a P-matrix. We show how to solve more LCPs by restricting the set of q vectors and enlarging the class of M matrices beyond P-matrices. The effect is that the graph contains an embedded graph of the type considered by Stickney and Watson whenever M is a matrix containing a principal submatrix which is a P-matrix. Examples are presented which show what can happen when the hypotheses are further weakened.
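As a point of reference only (not part of the paper), the stated LCP conditions can be checked directly for a candidate pair (w, z). A minimal sketch, assuming numpy and illustrative example data for M, q, w, and z:

```python
import numpy as np

def is_lcp_solution(M, q, w, z, tol=1e-9):
    """Check the LCP(M, q) conditions: w - M z = q, w >= 0, z >= 0, w^T z = 0."""
    M, q, w, z = (np.asarray(a, dtype=float) for a in (M, q, w, z))
    feasible = np.allclose(w - M @ z, q, atol=tol)         # w - M z = q
    nonnegative = (w >= -tol).all() and (z >= -tol).all()  # w >= 0, z >= 0
    complementary = abs(w @ z) <= tol                      # w^T z = 0
    return feasible and nonnegative and complementary

# Tiny 2x2 example (made-up data, not from the paper).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
z = np.array([0.5, 0.0])            # candidate solution
w = M @ z + q                       # w = (0, 1.5)
print(is_lcp_solution(M, q, w, z))  # True
```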

2.
We give a short proof of the finiteness of Murty's principal pivoting algorithm for solving the linear complementarity problem y = Mz + q, y^T z = 0, y ≥ 0, z ≥ 0 with a P-matrix M.
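A minimal sketch of a least-index principal pivoting scheme in the spirit of Murty's algorithm, assuming M is a P-matrix (so every principal submatrix is nonsingular and the least-index rule terminates finitely); names and example data are illustrative, not taken from the paper:

```python
import numpy as np

def principal_pivot(M, q, max_iter=1000):
    """Least-index principal pivoting sketch for y = M z + q, y >= 0, z >= 0, y^T z = 0.

    B holds the indices whose z-variable is currently basic (there y_i = 0)."""
    M = np.asarray(M, dtype=float)
    q = np.asarray(q, dtype=float)
    n = len(q)
    B = []                                    # start with z = 0, y = q
    for _ in range(max_iter):
        z = np.zeros(n)
        if B:
            Bi = np.array(B)
            z[Bi] = np.linalg.solve(M[np.ix_(Bi, Bi)], -q[Bi])
        y = M @ z + q
        # least index whose currently basic variable is negative
        bad = [i for i in range(n) if (z[i] if i in B else y[i]) < -1e-12]
        if not bad:
            return y, z
        B = sorted(set(B) ^ {min(bad)})       # principal pivot: toggle that index
    raise RuntimeError("no termination -- is M really a P-matrix?")

M = np.array([[2.0, 1.0], [0.0, 3.0]])        # a small P-matrix (example data)
y, z = principal_pivot(M, np.array([-4.0, -5.0]))
print(np.round(y, 6), np.round(z, 6))
```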

3.
Given an n-vertex simple polygon P, the problem of computing the shortest weakly visible subedge of P is that of finding a shortest line segment s on the boundary of P such that P is weakly visible from s (if s exists). In this paper, we present new geometric observations that are useful for solving this problem. Based on these geometric observations, we obtain optimal sequential and parallel algorithms for solving this problem. Our sequential algorithm runs in O(n) time, and our parallel algorithm runs in O(log n) time using O(n/log n) processors in the CREW PRAM computational model. Using the previously best known sequential algorithms to solve this problem would take O(n^2) time. We also give geometric observations that lead to extremely simple and optimal algorithms for solving, both sequentially and in parallel, the case of this problem where the polygons are rectilinear.

4.
Cottle and Dantzig (Ref. 1) showed that the generalized linear complementarity problem has a solution for any q ∈ R^m if M is a vertical block P-matrix of type (m_1, ..., m_n). They also extended known characterizations of square P-matrices to vertical block P-matrices. Here we show, using a technique similar to Murty's (Ref. 2), that there exists a unique solution for any q ∈ R^m if and only if M is a vertical block P-matrix of type (m_1, ..., m_n). To obtain this characterization, we employ a generalization of Tucker's theorem (Ref. 3) and a generalization of a theorem initially introduced by Gale and Nikaido (Ref. 4).
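For orientation only: the sketch below brute-forces the square P-matrix test (all principal minors positive) and extends it to the block case under the commonly used characterization that a vertical block matrix of type (m_1, ..., m_n) is a vertical block P-matrix when every representative submatrix (one row chosen from each block) is a P-matrix. The characterization as stated here and the example data are assumptions for illustration, not taken from Ref. 1.

```python
import numpy as np
from itertools import combinations, product

def is_P_matrix(A, tol=1e-12):
    """True if every principal minor of the square matrix A is positive."""
    n = A.shape[0]
    return all(np.linalg.det(A[np.ix_(idx, idx)]) > tol
               for k in range(1, n + 1) for idx in combinations(range(n), k))

def is_vertical_block_P(M, block_sizes, tol=1e-12):
    """Brute-force check for a vertical block matrix of type (m_1, ..., m_n).

    Checks that every representative submatrix (one row from each block) is a
    P-matrix.  Exponential in n; intended only as an illustration."""
    M = np.asarray(M, dtype=float)
    starts = np.cumsum([0] + list(block_sizes))
    row_choices = [range(starts[j], starts[j + 1]) for j in range(len(block_sizes))]
    return all(is_P_matrix(M[list(rows), :], tol) for rows in product(*row_choices))

M = np.array([[2.0, 1.0],
              [1.0, 3.0],
              [0.5, 2.0]])              # two vertical blocks of sizes (2, 1)
print(is_vertical_block_P(M, [2, 1]))   # True
```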

5.
In this paper, the authors develop a new direct method for the solution of a BLCP, that is, a linear complementarity problem (LCP) with upper bounds, when its matrix is a symmetric or an unsymmetric P-matrix. The convergence of the algorithm is established by extending Murty's principal pivoting method to an LCP which is equivalent to the BLCP. Computational experience shows that the basic-set method can efficiently solve large-scale BLCPs with a symmetric or an unsymmetric P-matrix.

6.
Kohlberg (1972) has shown how the nucleolus for an n-person game with side payments may be found by solving a single minimization LP in case the imputation space is a polytope. However, the coefficients in the LP have a very wide range even for problems with 3 or 4 players. Therefore the method is computationally viable only for small problems on machines with finite precision. Maschler et al. (1979) find the nucleolus by solving a sequence of minimization LPs with constraint coefficients of either −1, 0 or 1. However, the number of LPs to be solved is o(4^n). In this paper, we show how to find the nucleolus by solving a sequence of o(2^n) LPs whose constraint coefficients are −1, 0 or 1.
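A hedged illustration of the kind of LP involved: the first LP in such sequential schemes maximizes the minimum excess x(S) − v(S) over all proper coalitions subject to efficiency, and its constraints have coefficients in {−1, 0, 1}. The sketch below sets up and solves only this first LP with scipy for an assumed 3-player game; it is not the paper's o(2^n) sequence.

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def first_nucleolus_lp(v, n):
    """Maximize the minimum excess x(S) - v(S) over proper coalitions S,
    subject to x(N) = v(N).  v maps frozenset coalitions to their worth."""
    players = range(n)
    coalitions = [frozenset(c) for r in range(1, n)        # proper, nonempty S
                  for c in combinations(players, r)]
    c = np.zeros(n + 1); c[-1] = -1.0                      # minimize -eps
    A_ub = np.zeros((len(coalitions), n + 1))              # -x(S) + eps <= -v(S)
    b_ub = np.array([-v[S] for S in coalitions])
    for k, S in enumerate(coalitions):
        A_ub[k, list(S)] = -1.0
        A_ub[k, -1] = 1.0
    A_eq = np.ones((1, n + 1)); A_eq[0, -1] = 0.0          # x(N) = v(N)
    b_eq = [v[frozenset(players)]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * (n + 1))
    return res.x[:n], res.x[-1]

# Assumed 3-player game: singletons worth 0, pairs worth 0.6, grand coalition worth 1.
v = {frozenset(s): 0.0 for s in combinations(range(3), 1)}
v.update({frozenset(s): 0.6 for s in combinations(range(3), 2)})
v[frozenset(range(3))] = 1.0
x, eps = first_nucleolus_lp(v, 3)
print(np.round(x, 3), round(eps, 3))
```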

7.
In this paper we present an algorithm for finding an optimum assignment for an n×n matrix M in n iterations. The method uses systematic permutations on the rows of M and is based on the properties of optimum assignments. The implementation presented in the paper requires at most O(n^3) time and n^2 + 6n memory locations for solving a dense n×n problem. This work was supported by the National Science Foundation Grant NSF ENG 74-19788.
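The paper's own row-permutation method is not reproduced here. As a hedged point of comparison, the snippet below solves a dense assignment problem with scipy's linear_sum_assignment, which can serve as a correctness reference for a custom implementation; the cost matrix is made-up example data.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

M = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])          # example cost matrix

rows, cols = linear_sum_assignment(M)    # optimal row -> column matching
print(list(zip(rows.tolist(), cols.tolist())), M[rows, cols].sum())
```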

8.
In Part 1 of this paper, we introduced a (2K+1)n-dimensional portfolio optimization problem with variable transaction costs taken into account. We presented a method for solving the (2K+1)n-dimensional problem by solving a sequence of n-dimensional optimization problems accounting for the transaction costs implicitly rather than explicitly. In Part 2, we propose a degeneracy-resolving rule, present computational results comparing our method with the interior-point optimizer of Mosek, well known for its speed and efficient use of sparsity, and also address the efficiency of the new method. This research was supported by the National Science and Engineering Research Council of Canada and the Austrian National Bank. The authors wish to acknowledge the valuable assistance of Jivendra Kale, Zhengzheng Zhou and Associate Editor Franco Giannessi for thoughtful comments and suggestions.

9.
In this paper we compare the efficiency of several simplicial variable dimension algorithms. To do so, we first treat the issues of degeneracy and acceleration. We present a device for resolving degeneracy. Furthermore, we compare several acceleration techniques. The technique of iterated quasi-Newton steps after each major cycle of the simplicial algorithm is implemented in a computer code, which is used to compare the efficiency of the (n+1)-ray, 2n-ray, 2^n-ray and (3^n − 1)-ray algorithms. Except for the (n+1)-ray algorithm, the number of function evaluations does not differ very much between the various algorithms. It appeared, however, that the 2^n-ray algorithm needs considerably less computation time.

10.
Recently, É. Tardos gave a strongly polynomial algorithm for the minimum-cost circulation problem and solved the open problem posed in 1972 by J. Edmonds and R.M. Karp. Her algorithm runs in O(m^2 T(m, n) log m) time, where m is the number of arcs, n is the number of vertices, and T(m, n) is the time required for solving a maximum flow problem in a network with m arcs and n vertices. In the present paper, taking an approach that is a dual of Tardos's, we also give a strongly polynomial algorithm for the minimum-cost circulation problem. Our algorithm runs in O(m^2 S(m, n) log m) time and reduces the computational complexity, where S(m, n) is the time required for solving a shortest path problem with a fixed origin in a network with m arcs, n vertices, and a nonnegative arc length function. The complexity is the same as that of Orlin's algorithm, recently developed by efficiently implementing the Edmonds-Karp scaling algorithm.

11.
M. Reza Peyghami, PAMM 2007, 7(1): 2060081–2060082
One of the main ingredients of interior point methods is the proximity function used to measure the distance of the iterates from the central path of linear optimization problems. In this paper, an interior point method for solving the P*(κ)-linear complementarity problem, κ ≥ 0, is proposed. For this version, we use a new class of proximity functions induced by new kernel functions. Using some mild and easy-to-check conditions, we show that the large-update primal-dual interior point methods for solving the P*(κ)-linear complementarity problem enjoy the so far best worst-case theoretical complexity, namely an O(κn log n log(n/ε)) iteration bound, with special choices of the parameters p, q ≥ 1. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
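The paper's new kernel functions and the role of the parameters p, q are not reproduced here. As a minimal, hedged illustration of the general mechanism, the sketch below evaluates the classical logarithmic kernel ψ(t) = (t^2 − 1)/2 − ln t and the induced proximity Ψ(v) = Σ_i ψ(v_i), which vanishes exactly on the central path v = e:

```python
import numpy as np

def log_kernel(t):
    """Classical logarithmic kernel: psi(1) = 0, psi(t) -> infinity as t -> 0+ or t -> inf."""
    t = np.asarray(t, dtype=float)
    return (t**2 - 1.0) / 2.0 - np.log(t)

def proximity(v):
    """Kernel-induced proximity Psi(v) = sum_i psi(v_i)."""
    return float(np.sum(log_kernel(v)))

print(proximity(np.ones(5)))                   # 0.0 on the central path
print(proximity(np.array([0.5, 1.0, 2.0])))    # positive away from it
```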

12.
In this article, we introduce two new asynchronous multisplitting methods for solving the system of weakly nonlinear equations Ax = G(x), in which A is an n × n real matrix and G(x) = (g_1(x), g_2(x), ..., g_n(x))^T is a P-bounded mapping. First, by the generalized accelerated overrelaxation (GAOR) technique, we introduce the asynchronous parallel multisplitting GAOR method (including the synchronous parallel multisplitting AOR method as a special case) for solving the system of weakly nonlinear equations. Second, an asynchronous parallel multisplitting method based on symmetric successive overrelaxation (SSOR) multisplitting is introduced, which is called the asynchronous parallel multisplitting SSOR method. Then, under suitable conditions, we establish the convergence of the two introduced methods. The given results contain synchronous multisplitting iterations as a special case.
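The asynchronous multisplitting schemes themselves are not reproduced here. The sketch below shows only a single-splitting, synchronous AOR-type iteration for Ax = G(x) with the splitting A = D − L − U, i.e. the kind of iteration the multisplitting methods generalize; the parameters r, ω and the example data are illustrative assumptions, and convergence is assumed rather than verified.

```python
import numpy as np

def aor_weakly_nonlinear(A, G, x0, r=1.0, omega=1.0, tol=1e-10, max_iter=500):
    """Synchronous AOR iteration for A x = G(x) with the splitting A = D - L - U.
    r = omega = 1 reduces to a Gauss-Seidel-type sweep."""
    A = np.asarray(A, dtype=float)
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    lhs = D - r * L
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        rhs = ((1 - omega) * D + (omega - r) * L + omega * U) @ x + omega * G(x)
        x_new = np.linalg.solve(lhs, rhs)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# Example: A diagonally dominant, G a mild contraction (made-up data).
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
G = lambda x: 0.5 * np.cos(x)
x = aor_weakly_nonlinear(A, G, np.zeros(3), r=0.9, omega=1.1)
print(x, np.linalg.norm(A @ x - G(x)))
```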

13.
This paper proposes a simple heuristic that generates a solution for echelon (r, nQ, T) policies by sequentially solving a deterministic demand problem, a subproblem with fixed reorder intervals, and a subproblem with fixed batch sizes. For each of these problems, we further simplify the computation by solving a series of single-stage systems whose parameters are obtained directly from the original problem data. In a numerical study, we find that this heuristic outperforms an existing one in the literature.

14.
A portfolio optimization problem consists of maximizing an expected utility function of n assets. At the end of a typical time period, the portfolio will be modified by buying and selling assets in response to changing conditions. Associated with this buying and selling are variable transaction costs that depend on the size of the transaction. A straightforward way of incorporating these costs can be interpreted as the reduction of portfolios' expected returns by transaction costs if the utility function is the mean-variance or the power utility function. This results in a substantially higher-dimensional problem than the original n-dimensional one, namely a (2K+1)n-dimensional optimization problem with (4K+1)n additional constraints, where 2K is the number of different transaction cost functions. The higher-dimensional problem is computationally expensive to solve. This two-part paper presents a method for solving the (2K+1)n-dimensional problem by solving a sequence of n-dimensional optimization problems, which account for the transaction costs implicitly rather than explicitly. The key idea of the new method in Part 1 is to formulate the optimality conditions for the higher-dimensional problem and enforce them by solving a sequence of lower-dimensional problems under the nondegeneracy assumption. In Part 2, we propose a degeneracy-resolving rule, address the efficiency of the new method and present the computational results comparing our method with the interior-point optimizer of Mosek. This research was supported by the National Science and Engineering Research Council of Canada and the Austrian National Bank. The authors acknowledge the valuable assistance of Rob Grauer and Associate Editor Franco Giannessi for thoughtful comments and suggestions.
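Not the authors' sequential method: as a hedged baseline, the sketch below solves the "straightforward" formulation the abstract alludes to, a mean-variance objective in which proportional transaction costs on the trade |x − x0| directly reduce the expected return. All data, the single cost rate, and the long-only constraint are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.10, 0.06, 0.12])      # expected returns (example data)
Sigma = np.diag([0.04, 0.09, 0.02, 0.16])    # covariance, diagonal for simplicity
x0 = np.array([0.25, 0.25, 0.25, 0.25])      # current portfolio weights
cost_rate, risk_aversion = 0.01, 4.0

def negative_utility(x):
    # Mean-variance utility with expected return reduced by proportional transaction costs.
    traded = np.abs(x - x0)
    return -(mu @ x - cost_rate * traded.sum() - 0.5 * risk_aversion * x @ Sigma @ x)

res = minimize(negative_utility, x0, method="SLSQP",
               bounds=[(0.0, 1.0)] * 4,
               constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}])
print(np.round(res.x, 4), -res.fun)
```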

15.
Computing a maximum independent set, weighted or unweighted, is NP-hard for general as well as planar graphs. However, polynomial time algorithms do exist for solving this problem on special classes of graphs. In this paper we present an efficient algorithm for computing a maximum weight independent set in trees. A divide and conquer approach based on centroid decomposition of trees is used to compute a maximum weight independent set within O(n log n) time, where n is the number of vertices in the tree. We introduce a notion of an alternating tree which is crucial in obtaining a new independent set from the previous one.
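The paper's centroid-decomposition divide and conquer is not reproduced here. For orientation, the sketch below solves the same problem with the standard rooted dynamic program on trees (include or exclude each vertex), on an assumed small example:

```python
from collections import defaultdict

def max_weight_independent_set(n, edges, weight):
    """Standard tree DP: for each vertex, best subtree value with the vertex
    included (incl) versus excluded (excl)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    incl = [0.0] * n
    excl = [0.0] * n
    stack = [(0, -1, False)]          # iterative post-order rooted at vertex 0
    while stack:
        u, parent, processed = stack.pop()
        if not processed:
            stack.append((u, parent, True))
            stack.extend((v, u, False) for v in adj[u] if v != parent)
        else:
            children = [v for v in adj[u] if v != parent]
            incl[u] = weight[u] + sum(excl[v] for v in children)
            excl[u] = sum(max(incl[v], excl[v]) for v in children)
    return max(incl[0], excl[0])

# Path 0-1-2-3 with weights 3, 2, 5, 4: the optimum picks vertices 0 and 2.
print(max_weight_independent_set(4, [(0, 1), (1, 2), (2, 3)], [3.0, 2.0, 5.0, 4.0]))  # 8.0
```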

16.
In this paper we study the use of the Fourier, Sine and Cosine Transforms for solving or preconditioning linear systems which arise from the discretization of elliptic problems. Recently, R. Chan and T. Chan considered circulant matrices for solving such systems. Instead of using circulant matrices, which are based on the Fourier Transform, we apply the Fourier and the Sine Transform directly. It is shown that tridiagonal matrices arising from the discretization of a one-dimensional elliptic PDE are connected with circulant matrices by congruence transformations with the Fourier or the Sine matrix. Therefore, we can solve such linear systems directly, using only Fast Fourier Transforms and the Sherman-Morrison-Woodbury formula. The Fast Fourier Transform is highly parallelizable, and thus such an algorithm is interesting on a parallel computer. Moreover, similar relations hold between block tridiagonal matrices and block Toeplitz-plus-Hankel matrices of order n^2 × n^2 in the 2D case. This can be used to define in some sense natural approximations to the given matrix which lead to preconditioners for solving such linear systems.
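A minimal concrete instance of the idea, under the assumption of the standard 1D model matrix tridiag(−1, 2, −1) with Dirichlet boundary conditions, which the discrete sine transform diagonalizes; the general Toeplitz-plus-Hankel machinery and the Sherman-Morrison-Woodbury step of the paper are not shown.

```python
import numpy as np
from scipy.fft import dst, idst

def solve_1d_laplacian(f):
    """Solve T u = f for T = tridiag(-1, 2, -1), using that DST-I diagonalizes T
    with eigenvalues 2 - 2 cos(k*pi/(n+1)), k = 1..n."""
    n = len(f)
    k = np.arange(1, n + 1)
    eigenvalues = 2.0 - 2.0 * np.cos(k * np.pi / (n + 1))
    f_hat = dst(f, type=1)           # transform the right-hand side
    u_hat = f_hat / eigenvalues      # solve in the eigenbasis
    return idst(u_hat, type=1)       # transform back

n = 64
f = np.random.default_rng(0).standard_normal(n)
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
print(np.allclose(T @ solve_1d_laplacian(f), f))   # True
```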

17.
In [3], a general recursive construction for optical orthogonal codes (OOCs) is presented that guarantees to approach the optimum asymptotically if the original families are asymptotically optimal. A challenging problem on OOCs is to obtain optimal OOCs, in particular with λ > 1. Recently we developed an algorithmic scheme based on the maximal clique problem (MCP) to search for optimal (n, 4, 2)-OOCs for orders up to n = 44. In this paper, we concentrate on recursive constructions for optimal (n, 4, 2)-OOCs. While “most” of the codewords can be constructed by general recursive techniques, there remains a gap in general between this and the optimal OOC. In some cases, this gap can be closed, giving recursive constructions for optimal (n, 4, 2)-OOCs. This is predicated on reducing a series of recursive constructions for optimal (n, 4, 2)-OOCs to a single, finite maximal clique problem. By solving these finite MCP problems, we can extend the general recursive construction for OOCs in [3] to obtain new recursive constructions that give an optimal (n · 2^x, 4, 2)-OOC with x ≥ 3, if there exists a CSQS(n). © 2004 Wiley Periodicals, Inc.
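A hedged reminder of the definitions in play: the checker below verifies the cyclic auto- and cross-correlation conditions that make a family of weight-w binary sequences (given by their support sets) an (n, w, λ)-OOC. It is only a verifier on made-up example data, not the paper's recursive construction or the MCP search.

```python
from itertools import combinations

def correlation(a, b, n):
    """Maximum cyclic correlation of two codewords given as support sets.
    The zero shift is excluded only when the codewords are identical."""
    shifts = range(1, n) if a == b else range(n)
    return max(len(a & {(x + t) % n for x in b}) for t in shifts)

def is_ooc(codewords, n, w, lam):
    """Check whether the support sets form an (n, w, lam) optical orthogonal code."""
    if any(len(c) != w for c in codewords):
        return False
    auto_ok = all(correlation(a, a, n) <= lam for a in codewords)
    cross_ok = all(correlation(a, b, n) <= lam for a, b in combinations(codewords, 2))
    return auto_ok and cross_ok

# Tiny illustration (assumed example, not an optimal code from the paper).
print(is_ooc([frozenset({0, 1, 3, 7})], 15, 4, 2))   # True
```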

18.
We present an efficient algorithm for generating an n × n nonsingular matrix uniformly over a finite field. This algorithm is useful for several cryptographic and checking applications. Over GF[2] our algorithm runs in expected time M(n) + O(n^2), where M(n) is the time needed to multiply two n × n matrices, and the expected number of random bits it uses is n^2 + 3. (Over other finite fields we use n^2 + O(1) random field elements on average.) This is more efficient than the standard method for solving this problem, both in terms of expected running time and the expected number of random bits used. The standard method is to generate random n × n matrices until we produce one with nonzero determinant. In contrast, our technique directly produces a random matrix guaranteed to have nonzero determinant. We also introduce efficient algorithms for related problems such as uniformly generating singular matrices or matrices with fixed determinant. © 1993 John Wiley & Sons, Inc.
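The paper's direct-generation algorithm is not reproduced here; the sketch below implements only the standard baseline it improves on, drawing uniform random n × n matrices over GF(2) until one is nonsingular, with rank computed by Gaussian elimination mod 2.

```python
import numpy as np

def gf2_rank(A):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    A = A.copy() % 2
    rank = 0
    for col in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]
        rank += 1
    return rank

def random_nonsingular_gf2(n, rng=None):
    """Rejection sampling: the per-trial success probability tends to about 0.2888."""
    rng = rng or np.random.default_rng()
    while True:
        A = rng.integers(0, 2, size=(n, n), dtype=np.uint8)
        if gf2_rank(A) == n:
            return A

A = random_nonsingular_gf2(8)
print(gf2_rank(A) == 8)   # True
```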

19.
In this paper, we will show that Lagrange interpolatory polynomials are optimal for solving some approximation theory problems concerning the finding of linear widths. In particular, we will show an extremal relation in which the infimum is taken over the set of linear operators of finite rank n+1 defined on [−1, 1] and over the set of polynomials p = ∑_{i=0}^{n+1} a_i x^i of degree n+1 whose leading coefficient a_{n+1} is suitably normalized. The infimum is achieved for the Lagrange interpolatory polynomial at appropriately chosen nodes.
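The displayed extremal relation did not survive extraction above. As a small, hedged illustration of the object involved, the snippet below builds a Lagrange interpolatory polynomial on Chebyshev-type nodes in [−1, 1] with scipy and reports the uniform interpolation error for a sample function; the node choice and the sample function are assumptions for illustration only.

```python
import numpy as np
from scipy.interpolate import lagrange

f = lambda x: np.exp(x) * np.sin(3 * x)            # sample function on [-1, 1]
m = 9                                              # number of nodes -> degree m - 1
nodes = np.cos((2 * np.arange(m) + 1) * np.pi / (2 * m))   # Chebyshev-type nodes
p = lagrange(nodes, f(nodes))                      # Lagrange interpolatory polynomial

xs = np.linspace(-1.0, 1.0, 2001)
print(np.max(np.abs(f(xs) - p(xs))))               # uniform interpolation error
```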

20.
M. Oswald, G. Reinelt, H. Seitz, TOP 2009, 17(1): 158–170
The linear ordering problem consists of finding an ordering of the nodes of the weighted complete digraph on n nodes such that the sum of the weights of the arcs compatible with the ordering is maximized. In this paper, we report on the usefulness of mod-k cuts in a branch-and-cut algorithm for solving linear ordering problems to optimality.
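Mod-k cuts and branch-and-cut are beyond a short sketch. To make the objective concrete, the code below evaluates an ordering's weight and brute-forces the optimum for a tiny assumed instance of the linear ordering problem:

```python
import numpy as np
from itertools import permutations

def ordering_weight(W, order):
    """Sum of weights of arcs (i, j) with i placed before j in the ordering."""
    pos = {node: k for k, node in enumerate(order)}
    return sum(W[i][j] for i in pos for j in pos if i != j and pos[i] < pos[j])

def brute_force_lop(W):
    """Exact optimum by enumeration -- only viable for very small n."""
    return max(permutations(range(len(W))),
               key=lambda order: ordering_weight(W, order))

W = np.array([[0, 5, 1, 2],        # W[i][j] is the weight of arc i -> j (example data)
              [2, 0, 4, 1],
              [3, 1, 0, 6],
              [1, 3, 2, 0]])
best = brute_force_lop(W)
print(best, ordering_weight(W, best))
```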
