Similar documents
20 similar documents were retrieved.
1.
This paper discusses plausible explanations of the somewhat folkloric 'tailing off' convergence behavior of the Dantzig-Wolfe decomposition algorithm for linear programs. It is argued that such behavior may be due to numerical inaccuracy. Procedures to identify and mitigate such difficulties are outlined.
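As background for the tailing-off discussion, here is the standard Dantzig-Wolfe column-generation setup in generic notation (a hedged sketch; the symbols are ours, not the paper's):

```latex
% Original LP: \min c^{\top}x \ \text{s.t.}\ Ax=b,\ x\in X,\ \ X=\{x : Dx=d,\ x\ge 0\}
% Restricted master over a subset J of extreme points x_j of X:
\min_{\lambda \ge 0}\ \sum_{j\in J}\lambda_j\, c^{\top}x_j
\quad\text{s.t.}\quad \sum_{j\in J}\lambda_j\, A x_j = b,\qquad \sum_{j\in J}\lambda_j = 1.
% Pricing subproblem for master duals (\pi,\sigma):
\min_{x\in X}\ (c - A^{\top}\pi)^{\top}x \;-\; \sigma .
```

A negative pricing optimum yields a new column for the master; tailing off shows up as slow improvement of the master objective over the final iterations, which the paper links to numerical inaccuracy.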

2.
We show that a modified variant of the interior point method can solve linear programs (LPs) whose coefficients are real numbers from a subring of the algebraic integers. By defining the encoding size of such numbers to be the bit size of the integers that represent them in the subring, we prove the modified algorithm runs in time polynomial in the encoding size of the input coefficients, the dimension of the problem, and the order of the subring. We then extend the Tardos scheme to our case, obtaining a running time which is independent of the objective and right-hand side data. As a consequence of these results, we are able to show that LPs with real circulant coefficient matrices can be solved in strongly polynomial time. Finally, we show how the algorithm can be applied to LPs whose coefficients belong to the extension of the integers by a fixed set of square roots.

3.
Central European Journal of Operations Research - The maximum clique problem calls for determining the size of the largest clique in a given graph. This graph problem affords a number of zero-one...

4.
5.
A simple approach is presented to study the asymptotic behavior of some algorithms with an underlying tree structure. It is shown that some asymptotic oscillating behaviors can be precisely analyzed without resorting to complex analysis techniques, as is usually done in this context. A new explicit representation of the periodic functions involved is obtained at the same time. © 2005 Wiley Periodicals, Inc. Random Struct. Alg., 2005

6.
We present numerical algorithms calculating compositions of the left and right fractional integrals. We apply quadratic interpolation and obtain the fractional Simpson's rule. We estimate the local truncation error of the proposed approximations, calculate errors generated by presented algorithms, and determine the experimental rate of convergence. Finally, we show examples of numerical evaluation of these operators.
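For illustration, here is a minimal product-rectangle approximation of the left Riemann-Liouville fractional integral on a uniform grid. This is a simpler first-order scheme, not the quadratic-interpolation (fractional Simpson) rule developed in the paper; the function names and test functions are ours.

```python
import math

def left_fractional_integral(f, a, t, alpha, n):
    """Approximate (I_a^alpha f)(t) = 1/Gamma(alpha) * int_a^t (t-s)^(alpha-1) f(s) ds
    by a product-rectangle rule on n uniform subintervals (first-order accurate)."""
    h = (t - a) / n
    coef = h**alpha / math.gamma(alpha + 1)
    total = 0.0
    for j in range(n):
        # f is frozen at the left endpoint of [a+j*h, a+(j+1)*h];
        # the singular kernel (t-s)^(alpha-1) is integrated exactly on that piece
        w = (n - j)**alpha - (n - j - 1)**alpha
        total += w * f(a + j * h)
    return coef * total

# sanity checks: for f = 1, the exact value is t^alpha / Gamma(alpha+1)
print(left_fractional_integral(lambda s: 1.0, 0.0, 2.0, 1.0, 100))   # ~2.0 (alpha=1: ordinary integral)
print(left_fractional_integral(lambda s: 1.0, 0.0, 1.0, 0.5, 400))   # ~1/Gamma(1.5) ≈ 1.128
```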

7.
The purpose of this paper is twofold. First, in the framework of the strategic groups' literature, it analyzes changes in productivity and efficiency of Spanish private and savings banks over an eight-year period (1998-2006). Second, by adapting the decomposition of the Malmquist productivity indices suggested by Färe et al. (1994), it proposes similar components decomposing the Luenberger productivity indicator. Initially, productivity is decomposed into technological and efficiency changes. Thereafter, this efficiency change is decomposed into pure efficiency, scale and congestion changes. Empirical results demonstrate that productivity improvements are partially due to technological innovation. Furthermore, it is shown how the competition between private and savings banks develops in terms of the analyzed productivity and efficiency components. While private banks enjoy better efficiency change, savings banks contribute more to technological progress. Consequently, the Luenberger components are used as cluster analysis inputs. Thus, economic interpretations of the resulting performance groups are made via key differences in productivity components. Finally, following the strategic groups' literature, supplementary insights are gained by linking these performance groups with banking ratios.
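For reference, the Chambers-type Luenberger productivity indicator and its usual additive split into efficiency change (EC) and technical change (TC), written with directional distance functions D^t. This is a hedged sketch of the standard formulas; the paper's further split into pure efficiency, scale and congestion changes is not reproduced here.

```latex
L \;=\; \tfrac{1}{2}\Big[\big(D^{t}(x^{t},y^{t}) - D^{t}(x^{t+1},y^{t+1})\big)
      + \big(D^{t+1}(x^{t},y^{t}) - D^{t+1}(x^{t+1},y^{t+1})\big)\Big]
  \;=\; EC + TC, \\[4pt]
EC \;=\; D^{t}(x^{t},y^{t}) - D^{t+1}(x^{t+1},y^{t+1}), \\[4pt]
TC \;=\; \tfrac{1}{2}\Big[\big(D^{t+1}(x^{t+1},y^{t+1}) - D^{t}(x^{t+1},y^{t+1})\big)
      + \big(D^{t+1}(x^{t},y^{t}) - D^{t}(x^{t},y^{t})\big)\Big].
```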

8.
Auxiliary problem principle and decomposition of optimization problems
The auxiliary problem principle allows one to find the solution of a problem (minimization problem, saddle-point problem, etc.) by solving a sequence of auxiliary problems. There is a wide range of possible choices for these problems, so that one can give special features to them in order to make them easier to solve. We introduced this principle in Ref. 1 and showed its relevance to decomposing a problem into subproblems and to coordinating the subproblems. Here, we derive several basic or abstract algorithms, already given in Ref. 1, and we study their convergence properties in the framework of infinite-dimensional convex programming.
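The core iteration of the auxiliary problem principle, in the form usually attributed to Cohen, is the following (a hedged sketch in generic notation): to minimize J over a closed convex set U, choose a strongly convex differentiable auxiliary function K and a step ε > 0, and solve a sequence of auxiliary problems

```latex
u^{k+1} \;\in\; \arg\min_{u \in U}\;\Big\{\, K(u) \;+\; \big\langle\, \varepsilon\, J'(u^{k}) - K'(u^{k}),\; u \,\big\rangle \Big\}.
```

Choosing K additively separable makes each auxiliary problem split into independent subproblems, which is the decomposition/coordination mechanism the abstract refers to; the choice K(u) = ½‖u‖² recovers the projected-gradient method.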

9.
Convergence behavior of interior-point algorithms
We show that most interior-point algorithms for linear programming generate a solution sequence in which every limit point satisfies the strict complementarity condition. These algorithms include all path-following algorithms and some potential reduction algorithms. The result also holds for the monotone complementarity problem if a strict complementarity solution exists. In general, the limit point is a solution that maximizes the number of its nonzero components among all solutions. Research supported in part by NSF Grant DDM-8922636, the Iowa Business School Summer Grant, and the Interdisciplinary Research Grant of the University of Iowa Center for Advanced Studies.
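For a primal-dual pair with optimal primal solution x* and optimal dual slacks s*, the strict complementarity condition referred to above reads (standard notation, not taken from the paper):

```latex
x^{*}_{j}\, s^{*}_{j} = 0 \quad\text{and}\quad x^{*}_{j} + s^{*}_{j} > 0 \qquad \text{for every } j,
```

i.e. exactly one of x*_j and s*_j vanishes for each index, which is why such a limit point maximizes the number of nonzero components among all optimal solutions.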

10.
This paper provides a structural analysis of decomposition algorithms using a generalization of linear splitting methods. This technique is used to identify explicitly the essential similarities and differences between several classical algorithms. Similar concepts can be used to analyze a large class of multilevel hierarchical structures. This research was supported in part by ONR Contract No. N00014-76-C-0346, in part by the US Department of Energy, Division of Electric Energy Systems, Contract No. ERDA-E(49-18)-2087 at the Massachusetts Institute of Technology, and in part by the Joint Services Electronics Program, Contract No. DAAG-29-78-C-0016. The authors would like to thank Dr. P. Varaiya, University of California at Berkeley, and Dr. D. Bertsekas for their comments and suggestions.

11.
A data assimilation problem for unsteady models is considered as a sequence of coupled inverse problems of reconstruction of the space-time structure of the state functions with various sets of measurement data. The data assimilation is carried out jointly with the identification of an additional unknown function, which is interpreted as a function of model uncertainty. A variational principle serves as the basis for constructing the algorithms. Various versions of the algorithms are presented and analyzed. Based on the principle of the residual, a computationally efficient algorithm for data assimilation in a locally one-dimensional case is constructed. A theoretical estimate of its performance is obtained. This algorithm is one of the main components of a data assimilation system based on a splitting scheme for unsteady three-dimensional transport and transformation models of atmospheric chemistry.

12.
While semidefinite relaxations are known to deliver good approximations for combinatorial optimization problems like graph bisection, their practical scope is mostly associated with small dense instances. For large sparse instances, cutting plane techniques are considered the method of choice. These are also applicable for semidefinite relaxations via the spectral bundle method, which allows one to exploit structural properties like sparsity. In order to evaluate the relative strengths of linear and semidefinite approaches for large sparse instances, we set up a common branch-and-cut framework for linear and semidefinite relaxations of the minimum graph bisection problem. It incorporates separation algorithms for valid inequalities of the bisection cut polytope described in a recent study by the authors. While the problem-specific cuts help to strengthen the linear relaxation significantly, the semidefinite bound profits much more from separating the cycle inequalities of the cut polytope on a slightly enlarged support. Extensive numerical experiments show that this semidefinite branch-and-cut approach without problem-specific cuts is a superior choice to the classical simplex approach exploiting bisection-specific inequalities on a clear majority of our large sparse test instances from VLSI design and numerical optimization.
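For orientation, the basic semidefinite relaxation of graph bisection that such bundle and branch-and-cut methods start from, written for the equal-size (equicut) special case with graph Laplacian L (a hedged sketch; the paper's capacity-constrained bisection and its bisection-cut and cycle inequalities are not shown):

```latex
\min_{X}\ \tfrac{1}{4}\,\langle L, X\rangle
\quad\text{s.t.}\quad \operatorname{diag}(X) = e,\qquad \langle ee^{\top}, X\rangle = 0,\qquad X \succeq 0.
```

This is obtained from the exact formulation min ¼ x^T L x over x in {-1,1}^n with e^T x = 0 by substituting X = x x^T and dropping the rank-one constraint.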

13.
The paper compares the numerical performance of the LDL decomposition of the BFGS variable-metric algorithm, the Dennis-Mei dogleg algorithm on the BFGS update, and Davidon's projections with the BFGS update against the straight BFGS update on a number of standard test problems. Numerical results indicate that the standard BFGS algorithm is superior to all of the more complex strategies. This research was supported by the National Science Foundation under Research Grant No. MCS77-07327.
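For reference, the straight BFGS update that serves as the baseline in this comparison, with s_k = x_{k+1} - x_k and y_k = grad f(x_{k+1}) - grad f(x_k):

```latex
B_{k+1} \;=\; B_k \;-\; \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k} \;+\; \frac{y_k y_k^{\top}}{y_k^{\top} s_k}.
```

The LDL variant propagates a factorization B_k = L_k D_k L_k^T instead of B_k itself; it is one of the more complex strategies compared above.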

14.
Tensor decompositions such as the canonical format and the tensor train format have been widely utilized to reduce storage costs and operational complexities for high-dimensional data, achieving linear scaling with the input dimension instead of exponential scaling. In this paper, we investigate even lower storage-cost representations in the tensor ring format, which is an extension of the tensor train format with variable end-ranks. Firstly, we introduce two algorithms for converting a tensor in full format to tensor ring format with low storage cost. Secondly, we detail a rounding operation for tensor rings and show how this requires new definitions of common linear algebra operations in the format to obtain storage-cost savings. Lastly, we introduce algorithms for transforming the graph structure of graph-based tensor formats, with orders of magnitude lower complexity than existing literature. The efficiency of all algorithms is demonstrated on a number of numerical examples, and in certain cases, we demonstrate significantly higher compression ratios when compared to previous approaches to using the tensor ring format.
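A minimal sketch of the element-wise definition of the tensor ring format, reconstructing a full tensor from cores G_k of shape (r_k, n_k, r_{k+1}) with r_{d+1} = r_1. The code is illustrative only; the function name and toy sizes are ours, and the conversion, rounding, and graph-transformation algorithms of the paper are not reproduced.

```python
import itertools
import numpy as np

def tr_to_full(cores):
    """Reconstruct a full tensor from tensor-ring cores.

    Each core G_k has shape (r_k, n_k, r_{k+1}) with r_{d+1} = r_1, and
    T(i_1, ..., i_d) = trace(G_1[:, i_1, :] @ G_2[:, i_2, :] @ ... @ G_d[:, i_d, :]).
    """
    dims = [G.shape[1] for G in cores]
    T = np.empty(dims)
    for idx in itertools.product(*[range(n) for n in dims]):
        M = cores[0][:, idx[0], :]
        for G, i in zip(cores[1:], idx[1:]):
            M = M @ G[:, i, :]
        T[idx] = np.trace(M)
    return T

# toy example: a random tensor ring with ranks (2, 3, 2) representing a 4x5x6 tensor
ranks, dims = [2, 3, 2], [4, 5, 6]
cores = [np.random.rand(ranks[k], dims[k], ranks[(k + 1) % len(dims)]) for k in range(len(dims))]
print(tr_to_full(cores).shape)  # (4, 5, 6)
```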

15.
Given the n×p orthogonal matrix A and the convex function f: R^n → R, we find two orthogonal matrices P and Q such that f is almost constant on the convex hull of ± the columns of P, f is sufficiently nonconstant on the column space of Q, and the column spaces of P and Q provide an orthogonal direct sum decomposition of the column space of A. This provides a numerically stable algorithm for calculating the cone of directions of constancy, at a point x, of a convex function. Applications to convex programming are discussed. This work was supported by the National Science and Engineering Research Council of Canada (Grant No. A3388 and Summer Grant).
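One common definition of the cone of directions of constancy of a convex function f at a point x, which is the object the algorithm computes (a hedged sketch; the paper's exact definition may differ in detail):

```latex
D^{=}_{f}(x) \;=\; \big\{\, d \in \mathbb{R}^{n} \;:\; \exists\, \bar{\alpha} > 0 \ \text{such that}\ f(x + \alpha d) = f(x)\ \ \forall\, \alpha \in [0, \bar{\alpha}] \,\big\}.
```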

16.
Stochastic programming usually represents uncertainty discretely by means of a scenario tree. This representation leads to an exponential growth of the size of stochastic mathematical problems when better accuracy is needed. Trying to solve the problem as a whole, considering all scenarios together, leads to huge memory requirements that surpass the capabilities of current computers. Thus, decomposition algorithms are employed to divide the problem into several smaller subproblems and to coordinate their solution in order to obtain the global optimum. This paper analyzes several decomposition strategies based on the classical Benders decomposition algorithm, and applies them in the emerging computational grid environments. Most decomposition algorithms are not able to take full advantage of all the computing power available in a grid system because of unavoidable dependencies inherent to the algorithms. However, a special decomposition method presented in this paper aims at reducing dependency among subproblems, to the point where all the subproblems can be sent simultaneously to the grid. All algorithms have been tested in a grid system, measuring execution times required to solve standard optimization problems and a real-size hydrothermal coordination problem. Numerical results are shown to confirm that this new method outperforms the classical ones when used in grid computing environments.
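For orientation, the classical Benders (L-shaped) decomposition of a two-stage stochastic LP that such grid strategies build on, in generic notation (a hedged sketch; the paper's dependency-reducing variant is not shown):

```latex
\text{Master: }\ \min_{x \ge 0,\ \theta}\ c^{\top}x + \sum_{s} p_s\,\theta_s
\quad\text{s.t.}\quad Ax = b,\qquad
\theta_s \;\ge\; \pi_s^{\top}\,(h_s - T_s x)\ \ \text{for each stored dual } \pi_s; \\[4pt]
\text{Subproblem (scenario } s\text{, master iterate } \hat{x}\text{): }\ \min_{y \ge 0}\ q_s^{\top} y
\quad\text{s.t.}\quad W y = h_s - T_s \hat{x}.
```

The optimal dual π_s of each subproblem supplies a new optimality cut for the master (feasibility cuts are handled analogously), and the iteration stops when the master's θ_s match the subproblem values. The dependency between master and subproblem solves is exactly what limits naive parallelization on a grid.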

17.
In this paper we construct and analyze new non-overlapping domain decomposition preconditioners for the solution of second-order elliptic and parabolic boundary value problems. The preconditioners are developed using uniform preconditioners on the subdomains instead of exact solves. They exhibit the same asymptotic condition number growth as the corresponding preconditioners with exact subdomain solves and are much more efficient computationally. Moreover, this asymptotic condition number growth is bounded independently of jumps in the operator coefficients across subdomain boundaries. We also show that our preconditioners fit into the additive Schwarz framework with appropriately chosen subspace decompositions. Condition numbers associated with the new algorithms are computed numerically in several cases and compared with those of the corresponding algorithms in which exact subdomain solves are used.
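The additive Schwarz form of such preconditioners, with inexact subdomain solves B_i^{-1} ≈ A_i^{-1} replacing exact ones, can be sketched as follows (generic notation; R_0, R_i denote restrictions to the coarse and subdomain spaces, and the B_i are the uniform subdomain preconditioners mentioned above):

```latex
M^{-1} \;=\; R_0^{\top} B_0^{-1} R_0 \;+\; \sum_{i=1}^{N} R_i^{\top} B_i^{-1} R_i ,
\qquad B_i \;\approx\; A_i = R_i A R_i^{\top}.
```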



18.
In this paper a new general approach for the so-called "zero-one principle" for sorting algorithms is described. A theorem from propositional logic that states the connection between two-valued logic and many-valued logic is used to prove this zero-one principle. As a consequence a zero-one principle for a more general class of sorting algorithms is derived.
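A small illustration of the classical zero-one principle for comparator networks, the special case the paper generalizes: a network sorts every input of length n if and only if it sorts every 0/1 input of length n, so correctness can be checked with 2^n binary tests instead of all n! orderings. The helper names and the example network below are ours; the paper's many-valued-logic proof is not reproduced.

```python
from itertools import product

def apply_network(network, values):
    """Apply a comparator network (a list of index pairs) to a sequence."""
    v = list(values)
    for i, j in network:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

def sorts_all_zero_one(network, n):
    """Zero-one principle: checking all 0/1 inputs suffices to certify the network."""
    return all(apply_network(network, bits) == sorted(bits)
               for bits in product((0, 1), repeat=n))

# 5-comparator sorting network for 4 inputs (Batcher odd-even merge)
net4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
print(sorts_all_zero_one(net4, 4))  # True
```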

19.
Optimization, 2012, 61(5): 675-682
We consider the problem of how a convex optimal-value function arising in primal decomposition can be finitely continued beyond its domain. By a suitable presentation of the exact penalty method, an implementable continuation can be obtained which does not change the set of optimal solutions. If the problem has separability and partial linearity properties, we manage to obtain a complete continuation of the optimal-value function.

20.
The dual simplex algorithm has become a strong contender in solving large scale LP problems. One key problem of any dual simplex algorithm is to obtain a dual feasible basis as a starting point. We give an overview of methods which have been proposed in the literature and present new stable and efficient ways to combine them within a state-of-the-art optimization system for solving real world linear and mixed integer programs. Furthermore, we address implementation aspects and the connection between dual feasibility and LP-preprocessing. Computational results are given for a large set of large scale LP problems, which show our dual simplex implementation to be superior to the best existing research and open-source codes and competitive to the leading commercial code on many of our most difficult problem instances.
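For reference, the dual feasibility condition a starting basis must satisfy, in standard notation for min c^T x subject to Ax = b, x >= 0, with basis matrix B and nonbasic columns N (a hedged sketch, not the paper's specific construction):

```latex
y = B^{-\top} c_B, \qquad d_N = c_N - N^{\top} y \;\ge\; 0,
```

i.e. all nonbasic reduced costs are nonnegative. The methods surveyed in the paper differ in how they construct, or cheaply repair, a basis with this property before the dual simplex iterations start.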

