Similar Articles
20 similar articles found (search time: 484 ms)
1.
This paper presents two new approximate versions of the alternating direction method of multipliers (ADMM), derived by modifying the original “Lagrangian splitting” convergence analysis of Fortin and Glowinski. They require neither strong convexity of the objective function nor any restrictions on the coupling matrix. The first method uses an absolutely summable error criterion and resembles methods that may readily be derived from earlier work on the relationship between the ADMM and the proximal point method, but without any need for restrictive assumptions to make it practically implementable. It permits both subproblems to be solved inexactly. The second method uses a relative error criterion and the same kind of auxiliary iterate sequence that has recently been proposed to enable relative-error approximate implementation of non-decomposition augmented Lagrangian algorithms. It also allows both subproblems to be solved inexactly, although ruling out “jamming” behavior requires a somewhat complicated implementation. The convergence analyses of the two methods share extensive underlying elements.
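
As a point of reference for the exact (error-free) ADMM iteration that such approximate variants relax, here is a minimal sketch on a scalar consensus problem, min ½(x−a)² + ½(z−b)² subject to x = z; the problem, data, and penalty ρ are illustrative choices, not taken from the paper.

```python
# Minimal ADMM sketch (scaled form) for:
#   min 0.5*(x - a)**2 + 0.5*(z - b)**2   subject to   x = z
# The exact solution is x = z = (a + b) / 2.
def admm(a, b, rho=1.0, iters=100):
    x = z = u = 0.0
    for _ in range(iters):
        # x-update: argmin_x 0.5*(x - a)^2 + (rho/2)*(x - z + u)^2
        x = (a + rho * (z - u)) / (1.0 + rho)
        # z-update: argmin_z 0.5*(z - b)^2 + (rho/2)*(x - z + u)^2
        z = (b + rho * (x + u)) / (1.0 + rho)
        # scaled dual (multiplier) update
        u += x - z
    return x, z

x, z = admm(1.0, 3.0)
```

In the approximate versions discussed in the paper, the two argmin steps would be replaced by inexact solves subject to a summable or relative error test.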

2.
A new Lagrangian relaxation (LR) approach is developed for job shop scheduling problems. In this approach, operation precedence constraints rather than machine capacity constraints are relaxed. The relaxed problem decomposes into single- or parallel-machine scheduling subproblems. These subproblems, which are NP-complete in general, are solved approximately by fast heuristic algorithms. The dual problem is solved using a recently developed “surrogate subgradient method” that allows approximate optimization of the subproblems. Since the algorithms for the subproblems do not depend on the time horizon of the scheduling problem and are very fast, the new LR approach is efficient, particularly for large problems with long time horizons. For such problems, numerical testing demonstrates that the machine decomposition-based LR approach requires much less memory and computation time than a part decomposition-based approach.
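
To show the basic dual-ascent mechanics behind Lagrangian relaxation (with a plain projected subgradient step rather than the paper's surrogate subgradient method, and on a toy covering problem rather than job shop scheduling), a hedged sketch:

```python
# Lagrangian relaxation sketch: relax the coupling constraint A x >= b of
#   min c.x   s.t.   A x >= b,  x binary,
# giving L(lam) = min_x (c - lam*A).x + lam*b, solvable component-wise.
def dual_ascent(c, a, rhs, iters=200):
    lam = 0.0                 # single relaxed constraint, multiplier >= 0
    best = float("-inf")
    for k in range(1, iters + 1):
        # subproblem: set x_j = 1 exactly when its reduced cost is negative
        x = [1 if c[j] - lam * a[j] < 0 else 0 for j in range(len(c))]
        value = sum((c[j] - lam * a[j]) * x[j] for j in range(len(c))) + lam * rhs
        best = max(best, value)                          # best lower bound so far
        g = rhs - sum(a[j] * x[j] for j in range(len(c)))  # subgradient of L
        lam = max(0.0, lam + g / k)                      # projected diminishing step
    return best, lam

bound, lam = dual_ascent(c=[2.0, 3.0], a=[1.0, 1.0], rhs=1.0)
```

For this tiny instance the primal optimum is 2 (select the first item), and the dual bound converges to the same value; the surrogate subgradient method of the paper additionally tolerates only approximate subproblem optimization.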

3.
We propose a novel class of Sequential Monte Carlo (SMC) algorithms, appropriate for inference in probabilistic graphical models. This class of algorithms adopts a divide-and-conquer approach based upon an auxiliary tree-structured decomposition of the model of interest, turning the overall inferential task into a collection of recursively solved subproblems. The proposed method is applicable to a broad class of probabilistic graphical models, including models with loops. Unlike a standard SMC sampler, the proposed divide-and-conquer SMC employs multiple independent populations of weighted particles, which are resampled, merged, and propagated as the method progresses. We illustrate empirically that this approach can outperform standard methods in terms of the accuracy of the posterior expectation and marginal likelihood approximations. Divide-and-conquer SMC also opens up novel parallel implementation options and the possibility of concentrating the computational effort on the most challenging subproblems. We demonstrate its performance on a Markov random field and on a hierarchical logistic regression problem. Supplementary materials including proofs and additional numerical results are available online.

4.
A widespread and successful approach to tackling unit-commitment problems is constraint decomposition: by dualizing the linking constraints, the large-scale nonconvex problem decomposes into smaller independent subproblems. The dual problem then consists in finding the best Lagrangian multiplier (the optimal “price”); it is solved by a convex nonsmooth optimization method. Realistic modeling of technical production constraints makes the subproblems themselves difficult to solve exactly. Nonsmooth optimization algorithms can cope with inexact solutions of the subproblems. In this case, however, we observe that the computed dual solutions show noisy and unstable behaviour that could prevent their use as price indicators. In this paper, we present a simple and easy-to-implement way to stabilize dual optimal solutions, by penalizing the noisy behaviour of the prices in the dual objective. After studying the impact of a general stabilization term on the model and the resolution scheme, we focus on penalization by discrete total variation, showing the consistency of the approach. We illustrate the stabilization on a synthetic example and on real-life problems from EDF (the French Electricity Board).
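
To make the discrete total-variation penalty concrete, here is a hedged sketch that smooths a noisy price vector by solving min ½‖p − p̂‖² + γ·TV(p) via projected gradient on the dual (a Chambolle-style scheme); the data, weight γ, and solver are illustrative and not the paper's resolution scheme.

```python
# Reconstruct p = p_hat - D^T z, where D is the forward-difference matrix
# and z are the dual variables of the total-variation term.
def primal(p_hat, z):
    p = list(p_hat)
    for i, zi in enumerate(z):
        p[i] += zi
        p[i + 1] -= zi
    return p

# Stabilize a noisy "price" vector p_hat by solving
#   min_p 0.5*||p - p_hat||^2 + gamma * sum_i |p[i+1] - p[i]|
# via projected gradient on the box-constrained dual (|z_i| <= gamma).
def tv_stabilize(p_hat, gamma, iters=2000, tau=0.25):
    z = [0.0] * (len(p_hat) - 1)
    for _ in range(iters):
        p = primal(p_hat, z)
        # dual step: z <- clip(z + tau * D p, [-gamma, gamma]); tau <= 1/4 is safe
        z = [max(-gamma, min(gamma, z[i] + tau * (p[i + 1] - p[i])))
             for i in range(len(z))]
    return primal(p_hat, z)

p = tv_stabilize([1.0, 3.0, 2.0, 4.0], gamma=10.0)
```

With a large weight γ the penalty flattens the vector completely (here to its mean, 2.5); in the paper the weight would instead be tuned so that only the spurious oscillations of the dual prices are removed.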

5.
Schwarz domain decomposition methods are developed for the numerical solution of singularly perturbed elliptic problems. Three variants of a two-level Schwarz method with interface subproblems are investigated both theoretically and from the point of view of their computer realization on a distributed memory multiprocessor computer. Numerical experiments illustrate their parallel performance as well as their behavior with respect to the critical parameters such as the perturbation parameter, the size of the interface subdomains and the number of parallel processors. Application of one of the methods to a model problem with an interior layer of complex geometry is also discussed. This revised version was published online in June 2006 with corrections to the Cover Date.
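
The core alternating-Schwarz idea can be sketched in one dimension: solve u'' = 0 on [0,1] with u(0)=0, u(1)=1 on two overlapping subdomains, exchanging interface values. Since harmonic functions on an interval are linear interpolants, each subdomain solve is closed-form here; the overlap [α, β] controls the geometric convergence rate. This is a hedged toy analogue, not the paper's singularly perturbed or two-level setting.

```python
# Alternating Schwarz sketch for u'' = 0 on [0,1], u(0)=0, u(1)=1,
# with overlapping subdomains [0, beta] and [alpha, 1], alpha < beta.
# On each subdomain the harmonic solution is the linear interpolant of its
# boundary values, so the subdomain "solves" are closed-form expressions.
def schwarz(alpha=0.4, beta=0.6, iters=50):
    u_beta = 0.0   # current guess for u(beta): boundary data for subdomain 1
    u_alpha = 0.0  # current guess for u(alpha): boundary data for subdomain 2
    for _ in range(iters):
        # solve on [0, beta] with u(0)=0, u(beta)=u_beta; evaluate at alpha
        u_alpha = u_beta * alpha / beta
        # solve on [alpha, 1] with u(alpha)=u_alpha, u(1)=1; evaluate at beta
        u_beta = u_alpha + (1.0 - u_alpha) * (beta - alpha) / (1.0 - alpha)
    return u_alpha, u_beta

ua, ub = schwarz()
```

The exact solution is u(x) = x, so the interface values converge to 0.4 and 0.6; shrinking the overlap β − α slows the iteration, which is one of the critical-parameter effects the paper studies numerically.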

6.
Utilizing the Tikhonov regularization method together with extragradient and linesearch techniques, new extragradient and linesearch algorithms are introduced in the framework of Hilbert spaces. In the presented algorithms, only convexity of the optimization subproblems is assumed, which is weaker than the strong convexity assumption usually made in the literature; moreover, no auxiliary equilibrium problem is used. Strong convergence theorems are proven for the sequences generated by these algorithms, and it is shown that the limit point of the generated sequences is a common element of the solution set of an equilibrium problem and the solution set of a split feasibility problem in Hilbert spaces. To illustrate the applicability of the results, some numerical examples are given; the optimization subproblems in these examples are solved with the FMINCON solver in MATLAB.
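
A hedged sketch of the bare extragradient (predictor-corrector) step, without the Tikhonov term or linesearch of the paper: on the monotone but not strongly monotone operator F(x) = (x₂, −x₁), a plain forward step spirals outward, while the extragradient pair contracts to the solution x* = 0. Operator and step size are illustrative choices.

```python
# Extragradient sketch for the monotone VI with F(x) = (x2, -x1); x* = 0.
# A plain step x <- x - tau*F(x) increases ||x|| here, but the
# predictor-corrector extragradient pair contracts for any tau < 1.
def extragradient(x, tau=0.5, iters=200):
    for _ in range(iters):
        fx = (x[1], -x[0])                               # F at x
        y = (x[0] - tau * fx[0], x[1] - tau * fx[1])     # predictor step
        fy = (y[1], -y[0])                               # F at the predictor
        x = (x[0] - tau * fy[0], x[1] - tau * fy[1])     # corrector step
    return x

x = extragradient((1.0, 1.0))
```

The paper's algorithms add a Tikhonov regularization term and a linesearch to this basic scheme and work in general Hilbert spaces.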

7.
8.
This paper develops a new error criterion for the approximate minimization of augmented Lagrangian subproblems. This criterion is practical since it is readily testable given only a gradient (or subgradient) of the augmented Lagrangian. It is also “relative” in the sense of relative error criteria for proximal point algorithms: in particular, it uses a single relative tolerance parameter, rather than a summable parameter sequence. Our analysis first describes an abstract version of the criterion within Rockafellar’s general parametric convex duality framework, and proves a global convergence result for the resulting algorithm. Specializing this algorithm to a standard formulation of convex programming produces a version of the classical augmented Lagrangian method with a novel inexact solution condition for the subproblems. Finally, we present computational results drawn from the CUTE test set—including many nonconvex problems—indicating that the approach works well in practice.
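
In the spirit of such relative error criteria (though not the paper's exact condition), here is a hedged sketch of an inexact augmented Lagrangian method for min ½‖x‖² subject to aᵀx = b, where the inner gradient descent stops once the augmented Lagrangian gradient norm falls below a single tolerance σ times the current constraint violation; the stopping rule, problem, and parameters are illustrative assumptions.

```python
# Inexact augmented Lagrangian sketch for  min 0.5*||x||^2  s.t.  a.x = b.
# The inner loop stops on a *relative* test  ||grad L_rho|| <= sigma*|a.x - b|,
# a single-parameter rule in the spirit of relative-error criteria
# (capped at 1000 inner steps as a safeguard).
def inexact_alm(a, b, rho=10.0, sigma=0.5, outer=50):
    x = [0.0] * len(a)
    y = 0.0                                           # Lagrange multiplier
    step = 1.0 / (1.0 + rho * sum(ai * ai for ai in a))
    for _ in range(outer):
        for _ in range(1000):
            viol = sum(ai * xi for ai, xi in zip(a, x)) - b
            grad = [xi + (y + rho * viol) * ai for xi, ai in zip(x, a)]
            gnorm = sum(g * g for g in grad) ** 0.5
            if gnorm <= sigma * abs(viol):            # relative stopping test
                break
            x = [xi - step * g for xi, g in zip(x, grad)]
        viol = sum(ai * xi for ai, xi in zip(a, x)) - b
        y += rho * viol                               # multiplier update
    return x

x = inexact_alm(a=[1.0, 1.0], b=2.0)
```

For this instance the exact solution is x* = (1, 1) with multiplier y* = −1; the paper's contribution is a rigorously analyzed criterion of this relative type within Rockafellar's duality framework.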

9.
Summary. A method is given for the solution of the linear equations arising when the finite element method is applied to a general elliptic problem. The method reduces the original problem to several subproblems (of the same form) on subregions, plus an auxiliary problem. Highly efficient iterative methods, using a preconditioning operator and the FFT, are developed for the auxiliary problem.

10.
In this article, we propose a multiphysics mixed finite element method with Nitsche's technique for the Stokes–poroelasticity problem. First, we reformulate the poroelasticity part of the original problem by introducing two pseudo-pressures, turning it into a “fluid–fluid” coupled problem, so that classical stable finite element pairs can be used conveniently. We then prove the existence and uniqueness of the weak solution of the reformulated problem. Using Nitsche's technique to approximate the coupling condition at the interface, we propose a loosely coupled time-stepping method that solves three subproblems at each time step: a Stokes problem, a generalized Stokes problem, and a mixed diffusion problem. The proposed method does not require any restriction on the choice of the discrete approximation spaces on each side of the interface, provided that appropriate quadrature methods are adopted. We also give the stability analysis and error estimates of the loosely coupled time-stepping method. Finally, numerical tests show that the proposed method has good stability and exhibits no “locking” phenomenon.

11.
Lower Bounds for Fixed Spectrum Frequency Assignment
Determining lower bounds for the sum of weighted constraint violations in fixed spectrum frequency assignment problems is important in order to evaluate the performance of heuristic algorithms. It is well known that, when adopting a binary constraints model, clique and near-clique subproblems play a dominant role in the theory of lower bounds for minimum span problems. In this paper we highlight their importance for fixed spectrum problems. We present a method based on the linear relaxation of an integer programming formulation of the problem, reinforced with constraints derived from clique-like subproblems. The results obtained are encouraging both in terms of quality and in terms of computation time.

12.
A price model in which the variance and correlation coefficients are random processes is analyzed. Parametric analysis is carried out by means of “direct” and “inverse” trade algorithms. Results of numerical experiments, obtained with the INVERT program, are reported.

13.
Maximization of submodular functions on a ground set is an NP-hard combinatorial optimization problem. Data correcting algorithms are among the several algorithms suggested for solving this problem exactly and approximately. From the point of view of Hasse diagrams, data correcting algorithms use information from only one level of the Hasse diagram adjacent to the level of the solution at hand. In this paper, we propose a data correcting algorithm that looks at multiple levels of the Hasse diagram, making the algorithm more efficient. Our computations with quadratic cost partition problems show that this multilevel search yields an 8- to 10-fold reduction in computation times, so that some dense quadratic cost partition instances of size 500, currently considered among the most difficult and far beyond the capabilities of existing exact methods, are solvable within 10 minutes on a personal computer running at 300 MHz.

14.
We consider the problem of estimating the optimal steady effort level from a time series of catch and effort data, taking account of errors in the observation of the “effective effort” as well as randomness in the stock-production function. The “total least squares” method ignores the time series nature of the data, while the “approximate likelihood” method takes it into account. We compare estimation schemes based upon these two methods by applying them to artificial data for which the “correct” parameters are known. We use a similar procedure to compare the effectiveness of a “power model” for stock and production with the “Ricker model.” We apply these estimation methods to some sets of real data, and obtain an interval estimate of the optimal effort.
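
For orientation, the simplest estimation route for the Ricker stock-production model R = S·exp(a − bS) is the classical log-linear trick: taking logs makes the model linear in (a, b), so ordinary least squares applies. This hedged sketch uses synthetic, error-free data with assumed parameters a = 1.2, b = 0.01; it is not the paper's total least squares or approximate likelihood scheme, both of which additionally model observation error in effort.

```python
import math

# Fit the Ricker model R = S * exp(a - b*S) by ordinary least squares on
# the linearized form log(R/S) = a - b*S.
def fit_ricker(stocks, recruits):
    ys = [math.log(r / s) for s, r in zip(stocks, recruits)]
    n = len(stocks)
    sx, sy = sum(stocks), sum(ys)
    sxx = sum(s * s for s in stocks)
    sxy = sum(s * y for s, y in zip(stocks, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept, -slope          # (a, b)

# synthetic data from assumed parameters a = 1.2, b = 0.01 (illustration only)
S = [20.0, 50.0, 80.0, 120.0]
R = [s * math.exp(1.2 - 0.01 * s) for s in S]
a, b = fit_ricker(S, R)
```

On noiseless data this recovers the generating parameters exactly; the paper's comparison concerns precisely how such schemes degrade once observation error in effort enters.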

15.
High-order optimization algorithms exploit higher-order derivative information of the objective function and form an emerging research direction in optimization. High-order methods enjoy lower iteration complexity, but require solving a harder subproblem at each iteration. This paper mainly introduces three high-order algorithms: the accelerated high-order tensor algorithm for convex problems, the optimal tensor algorithm within the A-HPE framework, and the ARp algorithm for nonconvex problems. It also discusses how to solve the subproblems arising in high-order algorithms. We hope this introduction draws more researchers' attention to high-order methods.

16.
The stabilized sequential quadratic programming (SQP) method has nice local convergence properties: it possesses local superlinear convergence under very mild assumptions, not including any constraint qualifications. However, attempts to globalize convergence of this method inevitably face principal difficulties stemming from intrinsic deficiencies of the steps it produces when relatively far from solutions; specifically, it tends to produce long sequences of short steps before entering the region where its superlinear convergence shows up. In this paper, we propose a modification of the stabilized SQP method possessing better “semi-local” behavior, and hence more suitable for the development of practical realizations. The key features of the new method are identification of the so-called degeneracy subspace and dual stabilization along this subspace only; hence the name “subspace-stabilized SQP”. We consider two versions of this method, their local convergence properties, and a practical procedure for approximating the degeneracy subspace. Even though we do not consider here any specific algorithms with theoretically justified global convergence properties, subspace-stabilized SQP can be a relevant substitute for stabilized SQP in algorithms that use the latter in the “local phase”. Numerical results demonstrate that stabilization along the degeneracy subspace is indeed crucially important for the success of dual stabilization methods.

17.
For solving inverse gravimetry problems, efficient stable parallel algorithms based on iterative gradient methods are proposed. For solving systems of linear algebraic equations with block-tridiagonal matrices arising in geoelectrics problems, a parallel matrix sweep algorithm, a square root method, and a preconditioned conjugate gradient method are proposed. The algorithms are implemented numerically on the parallel computing system of the Institute of Mathematics and Mechanics (PCS-IMM), on NVIDIA graphics processors, and on an Intel multi-core CPU, using modern computing technologies. The parallel algorithms are incorporated into a system for remote computation entitled “Specialized Web-Portal for Solving Geophysical Problems on Multiprocessor Computers.” Problems with “quasi-model” and real data are solved.

18.
Tensor ring (TR) decomposition has been widely applied as an effective approach in a variety of applications to discover hidden low-rank patterns in multidimensional, higher-order data. A well-known method for computing the TR decomposition is alternating least squares (ALS). However, solving the ALS subproblems often incurs high computational cost, especially for large-scale tensors. In this paper, we provide two strategies to tackle this issue and design three ALS-based algorithms. The first strategy simplifies the calculation of the coefficient matrices of the normal equations for the ALS subproblems; it takes full advantage of the structure of these coefficient matrices and hence makes the corresponding algorithm much faster than the regular ALS method in terms of computing time. The second strategy stabilizes the ALS subproblems by QR factorizations of the TR-cores, making the corresponding algorithms more numerically stable than our first algorithm. Extensive numerical experiments on synthetic and real data illustrate and confirm these results. In addition, we present complexity analyses of the proposed algorithms.
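
The ALS mechanics are easiest to see on the rank-one matrix analogue X ≈ a bᵀ, where both least-squares subproblems have closed-form solutions; normalizing one factor each sweep plays the same stabilizing role that QR factorization of the TR-cores plays in the paper. This is a hedged toy analogue with synthetic data, not the TR algorithm itself.

```python
# ALS sketch on the rank-one matrix analogue of tensor-ring ALS:
#   X ~= outer(a, b), with alternating closed-form least-squares updates.
# Normalizing b each sweep mimics the stabilizing QR step on TR-cores.
def als_rank1(X, iters=30):
    m, n = len(X), len(X[0])
    a, b = [1.0] * m, [1.0] * n
    for _ in range(iters):
        # a-update: a_i = <X[i, :], b> / <b, b>
        bb = sum(bj * bj for bj in b)
        a = [sum(X[i][j] * b[j] for j in range(n)) / bb for i in range(m)]
        # b-update: b_j = <X[:, j], a> / <a, a>
        aa = sum(ai * ai for ai in a)
        b = [sum(X[i][j] * a[i] for i in range(m)) / aa for j in range(n)]
        # stabilization: normalize b, pushing its scale into a
        nb = sum(bj * bj for bj in b) ** 0.5
        b = [bj / nb for bj in b]
        a = [ai * nb for ai in a]
    return a, b

# exactly rank-one synthetic data: X[i][j] = u[i] * v[j]
u, v = [1.0, 2.0, 3.0], [2.0, 5.0]
X = [[ui * vj for vj in v] for ui in u]
a, b = als_rank1(X)
```

In the TR setting each core update is a structured least-squares problem of this kind, and the paper's first strategy exploits the structure of the resulting normal-equation coefficient matrices.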

19.
This paper discusses solution techniques for the morning commute problem, formulated as a discrete variational inequality (VI). Various heuristics have been proposed to solve this problem, largely because the analytical properties of the path travel time function are not yet well understood. Two groups of “non-heuristic” algorithms for general VIs, namely projection-type algorithms and ascent direction algorithms, are examined. In particular, a new ascent direction method is introduced and implemented with a heuristic line search procedure. The performance of these algorithms is compared on simple instances of the morning commute problem, and the implications of the numerical results are discussed.

20.
By introducing auxiliary variables, the traditional Markov chain Monte Carlo method can be improved in certain cases by implementing a “slice sampler.” In the current literature, this sampling technique is used to sample from multivariate distributions with both single and multiple auxiliary variables. When the latter is employed, it generally updates one component at a time.

In this article, we propose two variations of a new multivariate normal slice sampling method that uses multiple auxiliary variables to perform multivariate updating. These methods are flexible enough to allow for truncation to a rectangular region and/or exclusion of any n-dimensional hyper-quadrant. We compare our methods with existing state-of-the-art slice samplers in terms of efficiency and accuracy, and find that they generate approximately i.i.d. samples at a rate more efficient than other methods that update all dimensions at once. Supplemental materials are available online.
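
The auxiliary-variable ("slice") idea itself is easiest to see in one dimension. Below is a hedged sketch of a univariate slice sampler with stepping-out and shrinkage, targeting a standard normal; the paper's multivariate, multiple-auxiliary-variable scheme is considerably more involved. Width, seed, and target are illustrative choices.

```python
import math, random

# Univariate slice sampler sketch (stepping-out + shrinkage) targeting an
# unnormalized standard normal density.
def density(x):
    return math.exp(-0.5 * x * x)

def slice_sample(n, x0=0.0, w=1.0, seed=0):
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        y = rng.random() * density(x)     # auxiliary "height" under the curve
        # stepping out: grow the bracket until both ends leave the slice
        lo = x - w * rng.random()
        hi = lo + w
        while density(lo) > y:
            lo -= w
        while density(hi) > y:
            hi += w
        # shrinkage: sample uniformly in the bracket, shrink on rejection
        while True:
            cand = lo + (hi - lo) * rng.random()
            if density(cand) > y:
                x = cand
                break
            if cand < x:
                lo = cand
            else:
                hi = cand
        out.append(x)
    return out

xs = slice_sample(5000)
```

Drawing (x, y) uniformly under the density and keeping x is exactly what makes the auxiliary variable "free": the marginal of x is the target. The multivariate samplers of the article replace the interval bracket with region updates driven by several auxiliary variables at once.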
