Similar Documents
20 similar documents found.
1.
We present an improvement in the implementation of the Leverrier-Faddeev algorithm for symbolic computation of the Moore-Penrose inverse of one-variable polynomial matrices, introduced in Linear Algebra Appl. 252, 35–60 (1997). A complexity analysis of the original algorithm and of its improvement is presented. Both the algorithm and its improvement are implemented and compared in the symbolic computation package MATHEMATICA, and we compare the CPU time each requires to compute a set of test matrices.
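As background, the sketch below (a minimal illustration assuming sympy, not the paper's improved algorithm) shows the classical Leverrier-Faddeev recursion on which such symbolic methods build: it produces the characteristic polynomial coefficients and, when the matrix is invertible, the ordinary inverse; the Moore-Penrose extension for polynomial matrices applies a similar recursion to products such as \(AA^{T}\), which is not shown here.

```python
# A minimal sketch of the classical Leverrier-Faddeev recursion using sympy.
# It returns the characteristic polynomial coefficients c_n, ..., c_0 and the
# final matrix M_n; when c_0 != 0 the ordinary inverse is A**-1 = -M_n / c_0.
import sympy as sp

def leverrier_faddeev(A):
    """Run the Leverrier-Faddeev recursion on a square sympy Matrix A."""
    n = A.shape[0]
    I = sp.eye(n)
    M = sp.zeros(n)                  # M_0 = 0
    coeffs = [sp.Integer(1)]         # c_n = 1
    for k in range(1, n + 1):
        M = (A * M + coeffs[-1] * I).applyfunc(sp.expand)      # M_k
        coeffs.append(sp.simplify(-(A * M).trace() / k))        # c_{n-k}
    return coeffs, M

# Tiny usage example with a one-variable polynomial matrix.
x = sp.symbols('x')
A = sp.Matrix([[x, 1], [0, x + 1]])
coeffs, M = leverrier_faddeev(A)
A_inv = sp.simplify(-M / coeffs[-1])   # valid where det(A) = x*(x+1) != 0
```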

2.
We propose and analyze an inexact version of the modified subgradient (MSG) algorithm, which we call the IMSG algorithm, for nonsmooth and nonconvex optimization over a compact set. We prove that under approximate, i.e. inexact, minimization of the sharp augmented Lagrangian, the main convergence properties of the MSG algorithm are preserved for the IMSG algorithm. Inexact minimization may allow problems to be solved with less computational effort. We illustrate this through test problems, including an optimal bang-bang control problem, under several different inexactness schemes.

3.
We present an algorithm that determines Sequential Tail Value at Risk (STVaR) for path-independent payoffs in a binomial tree. STVaR is a dynamic version of Tail-Value-at-Risk (TVaR) characterized by the property that risk levels at any moment must be in the range of risk levels later on. The algorithm consists of a finite sequence of backward recursions that is guaranteed to arrive at the solution of the corresponding dynamic optimization problem. The algorithm makes concrete how STVaR differs from TVaR over the remaining horizon, and from recursive TVaR, which amounts to Dynamic Programming. Algorithmic aspects are compared with the cutting-plane method. Time consistency and comonotonicity properties are illustrated by applying the algorithm to elementary examples.
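As a static reference point (not the STVaR recursion itself), the sketch below computes the single-period Tail-Value-at-Risk of a discrete loss distribution via the Rockafellar-Uryasev representation TVaR_alpha(L) = min_t { t + E[(L - t)^+] / (1 - alpha) }; the example losses, probabilities, and level alpha are assumptions.

```python
# Single-period TVaR of a discrete loss distribution.  The minimization over t
# can be restricted to the support points, because the objective is piecewise
# linear and convex with breakpoints at those points.
import numpy as np

def tvar_discrete(losses, probs, alpha):
    losses = np.asarray(losses, dtype=float)
    probs = np.asarray(probs, dtype=float)
    assert np.isclose(probs.sum(), 1.0)
    candidates = [t + np.sum(probs * np.maximum(losses - t, 0.0)) / (1.0 - alpha)
                  for t in losses]
    return min(candidates)

# Example: equally likely losses on a small lattice, alpha = 0.9 (both assumed).
print(tvar_discrete([0.0, 1.0, 2.0, 10.0], [0.25] * 4, 0.9))   # -> 10.0
```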

4.
One of the challenging optimization problems is determining the minimizer of a nonlinear programming problem that has binary variables. A vexing difficulty is how rapidly the work required to solve such problems grows as the number of discrete variables increases. Any such problem with bounded discrete variables, especially binary variables, may be transformed into one of finding a global optimum of a problem in continuous variables. However, the transformed problems usually have astronomically large numbers of local minimizers, making them harder to solve than typical global optimization problems. Despite this apparent disadvantage, we show that the approach is not futile if we use smoothing techniques. The method we advocate first convexifies the problem and then solves a sequence of subproblems, whose solutions form a trajectory that leads to the solution. To illustrate how well the algorithm performs, we show the computational results of applying it to problems taken from the literature and to new test problems with known optimal solutions.
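A hedged sketch of the kind of transformation the abstract alludes to (not the paper's algorithm): relax x in {0,1}^n to x in [0,1]^n, add a penalty lam * sum_i x_i(1 - x_i) that vanishes exactly at binary points, and solve a sequence of subproblems with increasing lam so that the subproblem solutions trace a trajectory toward a (near-)binary point. The quadratic objective, the penalty schedule, and the use of L-BFGS-B are illustrative assumptions.

```python
# Relax binary variables to [0,1] and penalize non-binary values; solve a
# sequence of subproblems with increasing penalty (a simple homotopy).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 10
Q = rng.standard_normal((n, n))
Q = Q + Q.T                                            # indefinite quadratic objective
c = rng.standard_normal(n)

def subproblem(lam, x0):
    obj = lambda x: x @ Q @ x + c @ x + lam * np.sum(x * (1.0 - x))
    res = minimize(obj, x0, bounds=[(0.0, 1.0)] * n, method="L-BFGS-B")
    return res.x

x = np.full(n, 0.5)                                    # start at the relaxed center
for lam in [0.0, 1.0, 10.0, 100.0]:                    # increasing penalty weight
    x = subproblem(lam, x)
print(np.round(x))                                     # (near-)binary candidate solution
```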

5.
6.
In this paper we present a parallel algorithm for computing the solution of the general restricted linear equations \(Ax=b\), \(x\in T\), where \(T\) is a subspace of \(\mathbb{R}^n\) and \(b\in AT\). By this algorithm the solution \(x=A_{T,S}^{(2)}b\) is obtained in \(n(\log_2 m+\log_2(n-s+1)+7)+\log_2 m+1\) steps with \(P=mn\) processors when \(m\ge 2(n-1)\), and with \(P=2n(n-1)\) processors otherwise.

7.
A primal-dual path-following algorithm that applies directly to a linear program of the form \(\min\{c^{t}x \mid Ax=b,\ Hx\le u,\ x\ge 0,\ x\in\mathbb{R}^n\}\) is presented. This algorithm explicitly handles upper bounds, generalized upper bounds, variable upper bounds, and block diagonal structure. We also show how the structure of time-staged problems and network flow problems can be exploited, especially on a parallel computer. Finally, using our algorithm, we obtain a complexity bound of \(O(ds^{2}\log(nk))\) for transportation problems with \(s\) origins, \(d\) destinations (\(s<d\)), and \(n\) arcs, where \(k\) is the maximum absolute value of the input data. This research was supported in part by NSF Grants DMS-85-12277 and CDR-84-21402 and by ONR Contract N00014-87-K-0214.
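For orientation, the sketch below shows one common form of a primal-dual path-following iteration for a standard-form LP \(\min\{c^{T}x \mid Ax=b,\ x\ge 0\}\); the upper bounds, generalized upper bounds, and block structure that the paper's algorithm handles directly are omitted, and the centering parameter and step rule are assumptions.

```python
# A basic infeasible primal-dual path-following iteration for a standard-form LP.
import numpy as np

def pd_path_following(A, b, c, tol=1e-8, sigma=0.1, max_iter=100):
    m, n = A.shape
    x = np.ones(n); s = np.ones(n); y = np.zeros(m)     # strictly positive start
    for _ in range(max_iter):
        r_b = A @ x - b                                 # primal residual
        r_c = A.T @ y + s - c                           # dual residual
        mu = x @ s / n                                  # duality measure
        if max(np.linalg.norm(r_b), np.linalg.norm(r_c), mu) < tol:
            break
        # Newton system for the perturbed KKT conditions (target sigma*mu).
        K = np.block([
            [np.zeros((n, n)), A.T,              np.eye(n)],
            [A,                np.zeros((m, m)), np.zeros((m, n))],
            [np.diag(s),       np.zeros((n, m)), np.diag(x)],
        ])
        rhs = np.concatenate([-r_c, -r_b, -x * s + sigma * mu])
        d = np.linalg.solve(K, rhs)
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
        # Fraction-to-the-boundary step keeps (x, s) strictly positive.
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.9995 * np.min(-v[neg] / dv[neg]))
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s

# Tiny example:  min -x1 - 2*x2  s.t.  x1 + x2 + x3 = 4,  x2 + x4 = 3,  x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 3.0]); c = np.array([-1.0, -2.0, 0.0, 0.0])
x, y, s = pd_path_following(A, b, c)
print(np.round(x, 4))   # expect roughly [1, 3, 0, 0]
```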

8.
In this paper we present some non-interior path-following methods for linear complementarity problems. Instead of using the standard central path we use a scaled central path. Based on this new central path, we first give a feasible non-interior path-following method for linear complementarity problems, and then extend it to an infeasible method. After proving the boundedness of the neighborhood, we prove the convergence of our method. We also prove the local quadratic convergence of the feasible method without the assumption of strict complementarity at the solution.

9.
10.
We introduce an entropy-like proximal algorithm for the problem of minimizing a closed proper convex function subject to symmetric cone constraints. The algorithm is based on a distance-like function that extends the Kullback-Leibler relative entropy to the setting of symmetric cones. As with the proximal algorithms for convex programming with nonnegative orthant cone constraints, we show that, under some mild assumptions, the sequence generated by the proposed algorithm is bounded and every accumulation point is a solution of the considered problem. In addition, we present a dual application of the proposed algorithm to the symmetric cone linear program, leading to a multiplier method which is shown to possess properties similar to those of the exponential multiplier method (Tseng and Bertsekas in Math. Program. 60:1–19, 1993).
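For intuition in the special case of nonnegative-orthant constraints (not the symmetric-cone algorithm of the paper), the sketch below shows the multiplicative update obtained when the objective is linearized and the Kullback-Leibler distance is used as the proximal term, i.e. mirror descent with the relative entropy; the quadratic objective and step size are assumptions.

```python
# Linearizing f and using D(x, y) = sum_i x_i*log(x_i/y_i) - x_i + y_i as the
# proximal term gives the multiplicative update x_i <- x_i * exp(-t * grad_i),
# which keeps the iterates in the open positive orthant.
import numpy as np

def entropic_descent(grad_f, x0, t=0.1, iters=200):
    x = np.asarray(x0, dtype=float)        # must start strictly positive
    for _ in range(iters):
        x = x * np.exp(-t * grad_f(x))     # KL mirror-descent step
    return x

# Example (assumed): minimize ||x - a||^2 over x >= 0 with a negative target.
a = np.array([1.0, -2.0, 0.5])
grad = lambda x: 2.0 * (x - a)
print(np.round(entropic_descent(grad, np.ones(3)), 4))   # ~[1, 0, 0.5]
```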

11.
12.
We present an algorithm to decompose a polynomial system into a finite set of normal ascending sets such that the set of the zeros of the polynomial system is the union of the sets of the regular zeros of the normal ascending sets. If the polynomial system is zero-dimensional, the set of the zeros of the polynomials is the union of the sets of the zeros of the normal ascending sets.
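As a related illustration (not the normal ascending set decomposition itself): for a zero-dimensional system, a lexicographic Gröbner basis is already triangular-like, containing a univariate polynomial in the last variable, so its zeros can be read off by back-substitution. The small system below and the use of sympy are assumptions.

```python
# Lexicographic Groebner basis of a small zero-dimensional system.
import sympy as sp

x, y = sp.symbols('x y')
system = [x**2 + y**2 - 1, x - y**2]
G = sp.groebner(system, x, y, order='lex')
print(list(G))                      # last element is univariate in y
print(sp.solve(system, [x, y]))     # the common zeros, by back-substitution
```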

13.
In this paper, we introduce an additional constraint to the one-dimensional variable sized bin packing problem. In practice, some items have to be packed separately in different bins due to specific requirements, and these items are therefore labelled as different types. A bin can be used to pack any type of item if it is originally empty, or only the same type of item as it already contains. We model the problem as a type-constrained and variable sized bin packing problem (TVSBPP) and solve it via a branch and bound method. An efficient backtracking procedure is proposed to improve the efficiency of the algorithm.
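To make the type constraint concrete, the sketch below gives a first-fit-decreasing style heuristic for variable sized bins in which a bin accepts any type while empty and only its own type afterwards; this is an illustrative heuristic, not the branch and bound algorithm of the paper, and the rule of opening the smallest fitting bin is an assumption.

```python
# First-fit-decreasing style heuristic for the type-constrained problem.
def pack(items, bin_sizes):
    """items: list of (size, type); bin_sizes: available capacities (unlimited
    supply of each).  Returns open bins as dicts {capacity, free, type, items}."""
    bins = []
    for size, typ in sorted(items, reverse=True):           # decreasing size
        for b in bins:                                       # first fit
            if b["free"] >= size and b["type"] in (None, typ):
                b["free"] -= size
                b["type"] = typ
                b["items"].append((size, typ))
                break
        else:                                                # open a new bin
            cap = min(c for c in bin_sizes if c >= size)     # smallest fitting size
            bins.append({"capacity": cap, "free": cap - size,
                         "type": typ, "items": [(size, typ)]})
    return bins

example = [(4, "A"), (3, "B"), (3, "A"), (2, "B"), (2, "A")]
for b in pack(example, bin_sizes=[5, 10]):
    print(b["capacity"], b["type"], b["items"])
```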

14.
INDSCAL (INdividual Differences SCALing) is a useful technique for investigating both common and unique aspects of K similarity data matrices. The model postulates a common stimulus configuration in a low-dimensional Euclidean space, while representing differences among the K data matrices by differential weighting of dimensions by different data sources. Since Carroll and Chang proposed their algorithm for INDSCAL, several issues have been raised: non-symmetric solutions, negative saliency weights, and the degeneracy problem. Orthogonal INDSCAL (O-INDSCAL), which imposes orthogonality constraints on the matrix of stimulus configuration, has been proposed to overcome some of these difficulties. Two algorithms have been proposed for O-INDSCAL, one by Ten Berge, Knol, and Kiers, and the other by Trendafilov. In this paper, an acceleration technique called minimal polynomial extrapolation is incorporated into Ten Berge et al.’s algorithm. Simulation studies are conducted to compare the performance of the three algorithms (Ten Berge et al.’s original algorithm, the accelerated algorithm, and Trendafilov’s). Possible extensions of the accelerated algorithm to similar situations are also suggested.
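For readers unfamiliar with the acceleration device, the sketch below applies minimal polynomial extrapolation to a generic fixed-point iteration x_{j+1} = F(x_j); the contractive linear map is a toy assumption standing in for the O-INDSCAL update it would accelerate.

```python
# Minimal polynomial extrapolation (MPE) from a handful of fixed-point iterates.
import numpy as np

def mpe(xs):
    """xs: list of k+2 iterates x_0, ..., x_{k+1}.  Returns the MPE extrapolation."""
    X = np.column_stack(xs)                    # n x (k+2)
    U = np.diff(X, axis=1)                     # differences u_j = x_{j+1} - x_j
    k = U.shape[1] - 1
    c, *_ = np.linalg.lstsq(U[:, :k], -U[:, k], rcond=None)
    c = np.append(c, 1.0)                      # c_k = 1 by convention
    gamma = c / c.sum()
    return X[:, :k + 1] @ gamma                # weighted combination of x_0..x_k

# Toy fixed-point map x <- M x + b with spectral radius < 1 (assumed).
M = 0.9 * np.array([[0.8, 0.3], [-0.2, 0.7]])
b = np.array([1.0, -1.0])
F = lambda x: M @ x + b
xs = [np.zeros(2)]
for _ in range(3):
    xs.append(F(xs[-1]))
x_star = np.linalg.solve(np.eye(2) - M, b)     # true fixed point for comparison
print(np.linalg.norm(xs[-1] - x_star), np.linalg.norm(mpe(xs) - x_star))
```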

15.
In this paper, we propose a new hybrid algorithm for the Hamiltonian cycle problem by synthesizing the cross-entropy method and Markov decision processes. In particular, the new algorithm assigns a random length to each arc, turning the Hamiltonian cycle problem into a travelling salesman problem. Each arc then carries a probability denoting the probability of the event “this arc is located on the shortest tour.” Those probabilities are updated as in the cross-entropy method and used to set up a suitable linear programming model. If the solution of the latter yields any tour, the graph is Hamiltonian. Numerical results reveal that when the graph is small, say fewer than 50 nodes, there is a high chance the algorithm will terminate in its cross-entropy component by simply generating a Hamiltonian cycle at random. For larger graphs, however, in most of the tests the algorithm terminated in its optimization component (by solving the proposed linear program).
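For flavour, a generic cross-entropy sketch for the travelling salesman reformulation: tours are sampled from a matrix of arc probabilities, an elite fraction of the shortest tours updates the matrix, and a Hamiltonian cycle may be generated along the way. The elite fraction, smoothing parameter, and random arc lengths are assumptions, and the linear programming component of the paper's hybrid is omitted.

```python
# Cross-entropy method for a small random TSP instance.
import numpy as np

rng = np.random.default_rng(0)
n = 12
dist = rng.uniform(1.0, 10.0, size=(n, n))             # random arc lengths (assumed)
np.fill_diagonal(dist, np.inf)

def sample_tour(P):
    tour, remaining = [0], set(range(1, n))
    while remaining:
        cur = tour[-1]
        cand = np.array(sorted(remaining))
        w = P[cur, cand]; w = w / w.sum()
        tour.append(rng.choice(cand, p=w))
        remaining.remove(tour[-1])
    return tour

def tour_length(t):
    return sum(dist[t[i], t[(i + 1) % n]] for i in range(n))

P = np.ones((n, n)) / (n - 1)                           # initial arc probabilities
for _ in range(30):
    tours = [sample_tour(P) for _ in range(200)]
    tours.sort(key=tour_length)
    elite = tours[:20]                                  # elite fraction = 10%
    freq = np.zeros((n, n))
    for t in elite:
        for i in range(n):
            freq[t[i], t[(i + 1) % n]] += 1.0 / len(elite)
    P = 0.7 * freq + 0.3 * P                            # smoothed probability update
print(tour_length(tours[0]))                            # best sampled tour length
```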

16.
We consider an algorithm called FEMWARP for warping triangular and tetrahedral finite element meshes that computes the warping using the finite element method itself. The algorithm takes as input a two- or three-dimensional domain defined by a boundary mesh (segments in one dimension or triangles in two dimensions) that has a volume mesh (triangles in two dimensions or tetrahedra in three dimensions) in its interior. It also takes as input a prescribed movement of the boundary mesh. It computes as output updated positions of the vertices of the volume mesh. The first step of the algorithm is to determine from the initial mesh a set of local weights for each interior vertex that describes each interior vertex in terms of the positions of its neighbors. These weights are computed using a finite element stiffness matrix. After a boundary transformation is applied, a linear system of equations based upon the weights is solved to determine the final positions of the interior vertices. The FEMWARP algorithm has been considered in the previous literature (e.g., in a 2001 paper by Baker). FEMWARP has been successful in computing deformed meshes for certain applications. However, sometimes FEMWARP reverses elements; this is our main concern in this paper. We analyze the causes for this undesirable behavior and propose several techniques to make the method more robust against reversals. The most successful of the proposed methods includes combining FEMWARP with an optimization-based untangler.
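To convey the weights-then-solve structure of such a warp, the sketch below uses uniform graph-Laplacian weights in place of the finite element stiffness weights that FEMWARP actually computes: each interior vertex is written as a weighted combination of its neighbors, the boundary is moved, and the linear system A_II x_I = -A_IB x_B is solved for the interior positions. The toy mesh and the uniform weights are assumptions, and no element-reversal safeguard is included.

```python
# Laplacian-based mesh warping: weights from the mesh graph, then a linear solve
# for the interior vertex positions after the boundary has been moved.
import numpy as np

def warp(vertices, edges, boundary, new_boundary_pos):
    n = len(vertices)
    L = np.zeros((n, n))                       # graph Laplacian as weight matrix
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    interior = [i for i in range(n) if i not in boundary]
    new_pos = np.array(vertices, dtype=float)
    new_pos[boundary] = new_boundary_pos       # prescribed boundary movement
    A_II = L[np.ix_(interior, interior)]
    A_IB = L[np.ix_(interior, boundary)]
    new_pos[interior] = np.linalg.solve(A_II, -A_IB @ new_pos[boundary])
    return new_pos

# Toy 2-D example: a square of boundary vertices with one interior vertex.
vertices = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]
boundary = [0, 1, 2, 3]
stretched = np.array([(0, 0), (2, 0), (2, 1), (0, 1)], dtype=float)   # stretch in x
print(warp(vertices, edges, boundary, stretched))   # interior vertex moves to ~(1, 0.5)
```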

17.
Faugère and Rahmany have presented the invariant F5 algorithm to compute SAGBI-Gröbner bases of ideals of invariant rings. This algorithm has an incremental structure, and it is based on the matrix version of the F5 algorithm, using the F5 criterion to remove part of the useless reductions. Although this algorithm is more efficient than the Buchberger-like algorithm, it does not use all the existing criteria (for an incremental structure) to detect superfluous reductions. In this paper, we consider a new algorithm, namely the invariant G2V algorithm, to compute SAGBI-Gröbner bases of ideals of invariant rings using more criteria. This algorithm has a new structure and is based on the G2V algorithm, a variant of the F5 algorithm for computing Gröbner bases. We have implemented our new algorithm in Maple, and we give an experimental comparison, via some examples, of the performance of this algorithm with that of the invariant F5 algorithm.

18.
The Flexible Job-Shop Scheduling Problem is concerned with determining a sequence of jobs, each consisting of many operations, on different machines, while satisfying several parallel goals. We introduce a Memetic Algorithm, based on the NSGAII (Non-Dominated Sorting Genetic Algorithm II) acting on two chromosomes, to solve this problem. The algorithm adds a local search procedure (Simulated Annealing) to the genetic stage. We have assessed its efficiency by running the algorithm on multiple-objective instances of the problem. Statistics drawn from those runs indicate that this Memetic Algorithm yields good, low-cost solutions.
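To show the local-search ingredient in isolation, the sketch below runs simulated annealing on a simplified single-objective surrogate, minimizing the makespan of a permutation flow shop; the swap neighborhood, geometric cooling schedule, and random instance are assumptions, and the NSGA-II multi-objective stage of the memetic algorithm is not shown.

```python
# Simulated annealing over job permutations for a small permutation flow shop.
import math
import random

random.seed(0)
n_jobs, n_machines = 8, 3
proc = [[random.randint(1, 9) for _ in range(n_machines)] for _ in range(n_jobs)]

def makespan(perm):
    comp = [0.0] * n_machines                  # completion time on each machine
    for job in perm:
        for m in range(n_machines):
            ready = comp[m] if m == 0 else max(comp[m], comp[m - 1])
            comp[m] = ready + proc[job][m]
    return comp[-1]

def simulated_annealing(perm, t0=10.0, cooling=0.99, iters=5000):
    best = cur = list(perm)
    t = t0
    for _ in range(iters):
        i, j = random.sample(range(n_jobs), 2)
        cand = list(cur)
        cand[i], cand[j] = cand[j], cand[i]    # swap two jobs
        delta = makespan(cand) - makespan(cur)
        if delta < 0 or random.random() < math.exp(-delta / t):
            cur = cand
            if makespan(cur) < makespan(best):
                best = list(cur)
        t *= cooling                           # geometric cooling
    return best

start = list(range(n_jobs))
best = simulated_annealing(start)
print(makespan(start), makespan(best))
```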

19.
20.
We present a predictor-corrector path-following interior-point algorithm for the \(P_*(\kappa)\) horizontal linear complementarity problem based on new search directions. In each iteration, the algorithm performs two kinds of steps: a predictor (damped Newton) step and a corrector (full Newton) step. The full Newton step is generated from an algebraic reformulation of the centering equation, which defines the central path, and seeks directions in a small neighborhood of the central path, while the damped Newton step is used to move toward the optimal solution and reduce the duality gap. We derive the complexity of the algorithm, which coincides with the best known iteration bound for \(P_*(\kappa)\)-horizontal linear complementarity problems.
