Similar Documents
20 similar documents found (search time: 218 ms)
1.
In this paper, we first demonstrate that positive semidefiniteness of a large well-structured sparse symmetric matrix can be represented via positive semidefiniteness of a collection of smaller matrices linked, in a linear fashion, to the matrix. We also derive the “dual counterpart” of this representation, which expresses the possibility of positive semidefinite completion of a well-structured partially defined symmetric matrix in terms of positive semidefiniteness of a specific collection of fully defined submatrices of the matrix. Using these representations, we then reformulate well-structured large-scale semidefinite problems as smooth convex–concave saddle point problems, which can be solved efficiently by the Prox-method developed in [6]. Implementations and some numerical results for large-scale Lovász capacity and MAXCUT problems are finally presented.
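In the special case of a chordal sparsity pattern, the dual counterpart reduces to the classical Grone–Johnson–Sá–Wolkowicz characterization: a partially defined symmetric matrix admits a PSD completion exactly when every fully defined principal submatrix indexed by a maximal clique is positive semidefinite. A minimal numpy sketch of that clique-wise check, with the maximal cliques supplied by hand as an assumption of the illustration:

```python
import numpy as np

def psd_completable(partial, cliques, tol=1e-10):
    """Clique-wise condition for PSD completability.

    `partial` holds the specified entries of a symmetric matrix whose
    sparsity pattern is assumed chordal; `cliques` lists the maximal
    cliques (index sets) of that pattern. The condition: every principal
    submatrix indexed by a maximal clique must be positive semidefinite.
    """
    for c in cliques:
        sub = partial[np.ix_(c, c)]
        if np.linalg.eigvalsh(sub).min() < -tol:
            return False
    return True

# A 3x3 arrow pattern with maximal cliques {0,1} and {0,2};
# entry (1,2) is unspecified (free), stored here as a placeholder 0.
M = np.array([[2.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
print(psd_completable(M, [[0, 1], [0, 2]]))  # True: a PSD completion exists
```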

2.
Classical stochastic gradient methods are well suited for minimizing expected-value objective functions. However, they do not apply to the minimization of a nonlinear function of expected values, i.e., a composition of two expected-value functions: the problem \(\min _x \mathbf{E}_v\left[ f_v\big (\mathbf{E}_w [g_w(x)]\big ) \right] .\) In order to solve this stochastic composition problem, we propose a class of stochastic compositional gradient descent (SCGD) algorithms that can be viewed as stochastic versions of the quasi-gradient method. SCGD updates the solution based on noisy sample gradients of \(f_v,g_{w}\) and uses an auxiliary variable to track the unknown quantity \(\mathbf{E}_w\left[ g_w(x)\right] \). We prove that SCGD converges almost surely to an optimal solution for convex optimization problems, as long as such a solution exists. The convergence involves the interplay of two iterations with different time scales. For nonsmooth convex problems, SCGD achieves a convergence rate of \(\mathcal {O}(k^{-1/4})\) in the general case and \(\mathcal {O}(k^{-2/3})\) in the strongly convex case, after taking k samples. For smooth convex problems, SCGD can be accelerated to converge at a rate of \(\mathcal {O}(k^{-2/7})\) in the general case and \(\mathcal {O}(k^{-4/5})\) in the strongly convex case. For nonconvex problems, we prove that any limit point generated by SCGD is a stationary point, for which we also provide a convergence rate analysis. Indeed, the stochastic setting in which one wants to optimize compositions of expected-value functions is very common in practice. The proposed SCGD methods find wide applications in learning, estimation, dynamic programming, etc.
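A minimal sketch of the basic two-time-scale SCGD update on a toy instance; the step-size exponents and the toy functions f(y) = ||y||^2 and g_w(x) = x + w are illustrative assumptions, not the paper's tuned choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of min_x f(E_w[g_w(x)]) with f(y) = ||y||^2 and
# g_w(x) = x + w, E[w] = 0, so the optimum is x = 0.
def sample_g(x):               # noisy inner-function value g_w(x)
    return x + rng.normal(scale=0.1, size=x.shape)

def sample_jac_g(x):           # noisy Jacobian of g_w at x (here: identity)
    return np.eye(x.size)

def grad_f(y):                 # gradient of the outer function f
    return 2.0 * y

x = np.ones(3)
y = sample_g(x)                # auxiliary variable tracking E_w[g_w(x)]
for k in range(1, 5001):
    alpha, beta = 1.0 / k, 1.0 / k**0.5            # two time scales
    y = (1 - beta) * y + beta * sample_g(x)        # fast averaging step
    x = x - alpha * sample_jac_g(x).T @ grad_f(y)  # quasi-gradient step
print(x)  # close to the optimum 0
```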

3.
In this paper, we propose a two-parameter preconditioned variant of the deteriorated PSS iteration method (J. Comput. Appl. Math., 273, 41–60, 2015) for solving singular saddle point problems. Semi-convergence analysis shows that the new iteration method is unconditionally semi-convergent. The new iteration method can also be regarded as a preconditioner to accelerate the convergence of Krylov subspace methods. The eigenvalue distribution of the corresponding preconditioned matrix is presented, which is instructive for Krylov subspace acceleration. Note that, when the leading block of the saddle point matrix is symmetric, the new iteration method reduces to the preconditioned accelerated HSS iteration method (Numer. Algor., 63(3), 521–535, 2013), whose semi-convergence conditions can be simplified by the results in this paper. To further improve the effectiveness of the new iteration method, a relaxed variant is given, which has much better convergence and spectral properties. Numerical experiments are presented to investigate the performance of the new iteration methods for solving singular saddle point problems.

4.
In this paper we consider additive Schwarz-type iteration methods for saddle point problems as smoothers in a multigrid method. Each iteration step of the additive Schwarz method requires the solution of several small local saddle point problems. This method can be viewed as an additive version of a (multiplicative) Vanka-type iteration, well known as a smoother for multigrid methods in computational fluid dynamics. It is shown that, under suitable conditions, the iteration can be interpreted as a symmetric inexact Uzawa method. In the case of symmetric saddle point problems the smoothing property, an important part of a multigrid convergence proof, is analyzed for symmetric inexact Uzawa methods, including the special case of the additive Schwarz-type iterations. As an example the theory is applied to the Crouzeix-Raviart mixed finite element for the Stokes equations and some numerical experiments are presented. Mathematics Subject Classification (1991): 65N22, 65F10, 65N30. Supported by the Austrian Science Foundation (FWF) under grant SFB F013.
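For orientation, a minimal sketch of the classical (non-symmetric) inexact Uzawa iteration into which the smoother is interpreted; the approximate solvers Ah_solve and Sh_solve stand in for the preconditioners, and the tiny demo data are assumptions of this sketch (the paper's symmetric variant applies the corrections in a symmetrized sweep):

```python
import numpy as np

def inexact_uzawa(A, B, f, g, Ah_solve, Sh_solve, iters=100):
    """Classical inexact Uzawa iteration for the saddle point system
        [A  B ] [u]   [f]
        [B' 0 ] [p] = [g],
    with approximate solvers Ah_solve ~ A^{-1} and Sh_solve ~ S^{-1}
    (S a Schur complement preconditioner)."""
    u = np.zeros(A.shape[0])
    p = np.zeros(B.shape[1])
    for _ in range(iters):
        u = u + Ah_solve(f - A @ u - B @ p)   # primal update
        p = p + Sh_solve(B.T @ u - g)         # multiplier correction
    return u, p

# Tiny demo with A = I and exact "approximate" solvers.
A = np.eye(3); B = np.array([[1.0], [0.0], [0.0]])
f = np.array([1.0, 2.0, 3.0]); g = np.array([0.5])
u, p = inexact_uzawa(A, B, f, g, lambda r: r, lambda r: r)
print(u, p)  # satisfies B.T @ u = g
```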

5.
In the last two decades, augmented linear systems and saddle point problems have been solved by many researchers using the conjugate gradient method, the generalized SOR iterative method, and variants of them. In the latter class of methods, when the block \(A \in {\mathrm{I\!R}}^{m\times m}\) of the coefficient matrix \(\mathcal {A} = \left[ \begin{array}{cc} A & B \\ -B^T & 0\end{array}\right] \in {\mathrm{I\!R}}^{(m+n) \times (m+n)}\), \(m \ge n,\) of the linear system to be solved is symmetric positive definite and \({\mathrm{rank}}(B) = r \le n\), convergence regions and optimal values of the parameters involved have been determined. In this work, we consider the block A to be nonsymmetric positive definite, with \({\mathrm{rank}}(B) = r < n\), and use a two-level stationary iterative method whose main step is the linear second-order stationary iterative method for the solution of this class of problems. This method leads to the singular Manteuffel algorithm and the determination of its optimal parameters. As a byproduct, the optimal parameters of the Generalized Modified SSOR method in a particular case are also determined. Numerical examples verify our theoretical findings.
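One standard form of a linear second-order (two-step) stationary iteration is sketched below; the particular parameterization and the starting step are assumptions of this illustration, while the paper's contribution is the determination of the optimal parameter values:

```python
import numpy as np

def second_order_stationary(A, b, omega, gamma, iters=200):
    """One standard form of the linear second-order (two-step)
    stationary iteration:
        x_{k+1} = x_{k-1} + omega*(x_k - x_{k-1}) + omega*gamma*(b - A x_k).
    With omega = 1 it reduces to first-order Richardson iteration."""
    x_prev = np.zeros_like(b)
    x = gamma * b                      # one first-order step to start
    for _ in range(iters):
        r = b - A @ x
        x_prev, x = x, x_prev + omega * (x - x_prev) + omega * gamma * r
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]]); b = np.array([1.0, 2.0])
print(second_order_stationary(A, b, omega=1.2, gamma=2/7),
      np.linalg.solve(A, b))          # the two should agree
```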

6.
In this paper we propose a variant of the random coordinate descent method for solving linearly constrained convex optimization problems with composite objective functions. If the smooth part of the objective function has a Lipschitz continuous gradient, then we prove that our method obtains an $\epsilon$-optimal solution in $\mathcal{O}(n^{2}/\epsilon)$ iterations, where n is the number of blocks. For the class of problems with cheap coordinate derivatives we show that the new method is faster than methods based on full-gradient information. An analysis of the rate of convergence in probability is also provided. For strongly convex functions our method converges linearly. Extensive numerical tests confirm that on very large problems, our method is much more numerically efficient than methods based on full-gradient information.
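A sketch of the constraint-preserving two-coordinate step at the heart of such methods: for a single linear constraint aᵀx = b, the direction a_j e_i − a_i e_j is always feasible. The projection example and the handling of the Lipschitz constant below are illustrative assumptions, not the paper's method verbatim:

```python
import numpy as np

rng = np.random.default_rng(1)

def rcd_linear_constraint(grad, x0, a, L, iters=20000):
    """Random 2-coordinate descent preserving a.T x = const: the
    direction d = a[j]*e_i - a[i]*e_j is feasible for any pair (i, j).
    L is an assumed Lipschitz constant of the gradient."""
    x = x0.copy()
    n = x.size
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        g = grad(x)
        dg = a[j] * g[i] - a[i] * g[j]        # derivative along d
        step = dg / (L * (a[j]**2 + a[i]**2))
        x[i] -= step * a[j]
        x[j] += step * a[i]
    return x

# min ||x - c||^2 s.t. sum(x) = 0; solution projects c onto that plane.
c = np.array([3.0, -1.0, 2.0, 0.0])
x = rcd_linear_constraint(lambda x: 2 * (x - c), np.zeros(4),
                          np.ones(4), L=2.0)
print(x, x.sum())   # ~ [2, -2, 1, -1], sum stays 0
```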

7.
In this paper, we take a quasi-Newton approach to nonlinear eigenvalue problems (NEPs) of the type M(λ)v = 0, where \(M:\mathbb {C}\rightarrow \mathbb {C}^{n\times n}\) is a holomorphic function. We investigate which types of approximations of the Jacobian matrix lead to competitive algorithms, and provide convergence theory. The convergence analysis is based on the theory of quasi-Newton methods and Keldysh's theorem for NEPs. We derive new algorithms and also show that several well-established methods for NEPs can be interpreted as quasi-Newton methods, thereby providing insight into their convergence behavior. In particular, we establish quasi-Newton interpretations of Neumaier's residual inverse iteration and Ruhe's method of successive linear problems.
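As a baseline, a sketch of the exact Newton iteration on the bordered system for M(λ)v = 0 with normalization cᴴv = 1; quasi-Newton NEP methods of the kind studied here replace this Jacobian with cheaper approximations. The toy quadratic eigenvalue problem below is an assumption of the sketch:

```python
import numpy as np

def newton_nep(M, dM, v0, lam0, c, iters=30, tol=1e-12):
    """Textbook Newton iteration for M(lam) v = 0 with c^H v = 1:
        [ M(lam)   M'(lam) v ] [dv  ]   [-M(lam) v]
        [ c^H      0        ] [dlam] = [ 0       ]."""
    v = v0 / (c.conj() @ v0)
    lam = lam0
    for _ in range(iters):
        r = M(lam) @ v
        if np.linalg.norm(r) < tol:
            break
        J = np.block([[M(lam), (dM(lam) @ v)[:, None]],
                      [c.conj()[None, :], np.zeros((1, 1))]])
        d = np.linalg.solve(J, np.concatenate([-r, [0.0]]))
        v, lam = v + d[:-1], lam + d[-1]
    return v, lam

# Toy quadratic eigenvalue problem M(lam) = A0 + lam*A1 + lam^2*A2.
n = 4
rng = np.random.default_rng(2)
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))
M  = lambda l: A0 + l * A1 + l**2 * A2
dM = lambda l: A1 + 2 * l * A2
v, lam = newton_nep(M, dM, np.ones(n, dtype=complex), 0.1 + 0.1j,
                    np.ones(n, dtype=complex))
print(lam, np.linalg.norm(M(lam) @ v))   # residual near 0 on convergence
```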

8.
We use the inverted finite element method (IFEM) for computing three-dimensional vector potentials and for solving div-curl systems in the whole space \(\mathbb {R}^{3}\). IFEM is substantially different from the existing approaches since it is a non-truncation method which preserves the unboundedness of the domain. After developing the method, we analyze its convergence in terms of weighted norms. We then give some three-dimensional numerical results which demonstrate the efficiency and accuracy of the method and confirm its convergence.

9.
We propose a class of parametric smooth functions that approximate the fundamental plus function, (x)_+ = max{0, x}, by twice integrating a probability density function. This leads to classes of smooth parametric nonlinear equation approximations of nonlinear and mixed complementarity problems (NCPs and MCPs). For any solvable NCP or MCP, existence of an arbitrarily accurate solution to the smooth nonlinear equations, as well as to the NCP or MCP, is established for sufficiently large values of the smoothing parameter. Newton-based algorithms are proposed for the smooth problem. For strongly monotone NCPs, global convergence and local quadratic convergence are established. For solvable monotone NCPs, each accumulation point of the proposed algorithms solves the smooth problem. Exact solutions of our smooth nonlinear equation for various values of the smoothing parameter generate an interior path, which is different from the central path for interior point methods. Computational results for 52 test problems compare favorably with those for another Newton-based method. The smoothing technique is capable of solving efficiently the test problems solved by Dirkse and Ferris [6], Harker and Xiao [11] and Pang and Gabriel [28]. This material is based on research supported by Air Force Office of Scientific Research Grant F49620-94-1-0036 and National Science Foundation Grant CCR-9322479.
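One classical member of such a family arises from the sigmoid density ρ(t) = e^{-t}/(1+e^{-t})²: integrating it twice yields p(x, β) = x + (1/β) log(1 + e^{-βx}), which converges to (x)_+ as the smoothing parameter β grows. A small sketch; this specific density is an illustrative choice, while the paper treats a general class:

```python
import numpy as np

def plus_smooth(x, beta):
    """Smooth approximation of (x)_+ = max{0, x} obtained by twice
    integrating the sigmoid density rho(t) = e^{-t}/(1+e^{-t})^2:
        p(x, beta) = x + (1/beta) * log(1 + exp(-beta*x)),
    written via np.logaddexp for numerical stability."""
    return np.logaddexp(0.0, beta * x) / beta

x = np.linspace(-2, 2, 5)
for beta in (1.0, 10.0, 100.0):
    print(beta, np.max(np.abs(plus_smooth(x, beta) - np.maximum(x, 0.0))))
# the gap shrinks like log(2)/beta as the smoothing parameter grows
```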

10.
Gradient methods for minimizing composite functions
In this paper we analyze several new methods for solving optimization problems in which the objective function is formed as a sum of two terms: one is smooth and given by a black-box oracle, and the other is a simple general convex function with known structure. Despite the absence of good properties of the sum, such problems, both in convex and nonconvex cases, can be solved with efficiency typical for the first part of the objective. For convex problems of the above structure, we consider primal and dual variants of the gradient method (with convergence rate $O\left({1 \over k}\right)$), and an accelerated multistep version with convergence rate $O\left({1 \over k^2}\right)$, where $k$ is the iteration counter. For nonconvex problems with this structure, we prove convergence to a point from which there is no descent direction. In contrast, we show that for general nonsmooth, nonconvex problems, even resolving the question of whether a descent direction exists from a point is NP-hard. For all methods, we suggest some efficient “line search” procedures and show that the additional computational work necessary for estimating the unknown problem class parameters can only multiply the complexity of each iteration by a small constant factor. We also present the results of preliminary computational experiments, which confirm the superiority of the accelerated scheme.
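A common instantiation of this setting takes the simple convex part to be an ℓ1 norm, so the gradient step becomes soft-thresholding. The sketch below shows a basic composite gradient step and a multistep (FISTA-style) accelerated variant under that assumption, with the line-search machinery omitted:

```python
import numpy as np

def prox_grad_l1(grad_f, L, lam, x0, iters=500, accelerated=True):
    """Composite gradient step for min f(x) + lam*||x||_1:
        x+ = prox_{(lam/L)||.||_1}(x - grad_f(x)/L)  (soft-thresholding).
    With `accelerated`, a standard FISTA-style multistep scheme with
    the O(1/k^2) rate; L is a known Lipschitz constant of grad f."""
    soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    x = y = x0.copy()
    t = 1.0
    for _ in range(iters):
        x_new = soft(y - grad_f(y) / L, lam / L)
        if accelerated:
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            y = x_new + ((t - 1) / t_new) * (x_new - x)
            t = t_new
        else:
            y = x_new
        x = x_new
    return x

# Toy LASSO: min 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 20)); b = rng.standard_normal(40)
L = np.linalg.norm(A, 2) ** 2
x = prox_grad_l1(lambda x: A.T @ (A @ x - b), L, lam=0.5, x0=np.zeros(20))
print(np.count_nonzero(x))   # the l1 term yields a sparse solution
```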

11.
In this paper we are concerned with the Waterloo variant of the index calculus method for the discrete logarithm problem in finite fields. We provide a rigorous proof of the heuristic arguments for the running time of the Waterloo algorithm. This involves studying the behavior of pairs of coprime smooth polynomials over finite fields. Our proof uses a double saddle point method, and it is similar in nature to Odlyzko's rigorous analysis of the basic index calculus.

12.
Denote the integer lattice points in the \(N\)-dimensional Euclidean space by \(\mathbb {Z}^N\) and assume that \(X_\mathbf{n}\), \(\mathbf{n} \in \mathbb {Z}^N\), is a linear random field. Sharp rates of convergence of histogram estimates of the marginal density of \(X_\mathbf{n}\) are obtained. Histograms can achieve the optimal rate of convergence \(({\hat{\mathbf{n}}}^{-1} \log {\hat{\mathbf{n}}})^{1/3}\), where \({\hat{\mathbf{n}}}=n_1 \times \cdots \times n_N\). The assumptions involved can easily be checked. Histograms appear to be very simple and good estimators from the point of view of uniform convergence.
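A toy sketch of the estimator with the bin width implied by the rate above, h ≈ (log n̂ / n̂)^{1/3}; i.i.d. Gaussian draws stand in for the linear random field here, which is an assumption of the illustration:

```python
import numpy as np

# Histogram marginal-density estimate with the rate-suggested bin width.
rng = np.random.default_rng(4)
sample = rng.normal(size=10_000)          # observations X_n, flattened
n = sample.size
h = (np.log(n) / n) ** (1 / 3)            # bin width from the rate
edges = np.arange(sample.min(), sample.max() + h, h)
counts, edges = np.histogram(sample, bins=edges)
density = counts / (n * h)                # histogram estimator values
print(h, density.max())                   # peak near 1/sqrt(2*pi) ~ 0.40
```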

13.
This paper presents a homotopy interior point method for solving a semi-infinite programming (SIP) problem. For algorithmic purposes, based on a bilevel strategy, we first derive appropriate necessary conditions for a solution in the framework of standard nonlinear programming (NLP), which can be solved by the homotopy method. Under suitable assumptions, we prove that the method determines a smooth interior path from a given interior point to a point w*, at which the necessary conditions are satisfied. Numerically tracing this path gives a globally convergent algorithm for the SIP. Lastly, several preliminary computational results illustrating the method are given.

14.
In this paper, we prove new complexity bounds for methods of convex optimization based only on computation of the function value. The search directions of our schemes are normally distributed random Gaussian vectors. It appears that such methods usually need at most n times more iterations than standard gradient methods, where n is the dimension of the space of variables. This conclusion is true for both nonsmooth and smooth problems. For the latter class, we also present an accelerated scheme with the expected rate of convergence \(O\Big ({n^2 \over k^2}\Big )\), where k is the iteration counter. For stochastic optimization, we propose a zero-order scheme and justify its expected rate of convergence \(O\Big ({n \over k^{1/2}}\Big )\). We also give some bounds for the rate of convergence of the random gradient-free methods to stationary points of nonconvex functions, for both smooth and nonsmooth cases. Our theoretical results are supported by preliminary computational experiments.
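A minimal sketch of a random gradient-free step with a Gaussian search direction; the test function, smoothing parameter, and step-size schedule are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def gaussian_free_step(f, x, mu, alpha):
    """One gradient-free step with a Gaussian direction u:
        g = (f(x + mu*u) - f(x)) / mu * u
    estimates the gradient of the Gaussian-smoothed version of f,
    using only function values."""
    u = rng.standard_normal(x.shape)
    g = (f(x + mu * u) - f(x)) / mu * u
    return x - alpha * g

f = lambda x: np.sum(np.abs(x))             # nonsmooth test function
x = np.full(10, 5.0)                        # initial norm ~ 15.8
for k in range(20000):
    x = gaussian_free_step(f, x, mu=1e-4, alpha=0.1 / (k + 1) ** 0.5)
print(np.linalg.norm(x))                    # far below the initial norm
```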

15.
16.
An improvement on the generalized preconditioned Hermitian and skew-Hermitian splitting method (GPHSS), originally presented by Pan and Wang (J. Numer. Methods Comput. Appl. 32, 174–182, 2011), for saddle point problems is proposed in this paper and referred to as IGPHSS for simplicity. After adding a matrix to the coefficient matrix on both sides of the first equation of the GPHSS iterative scheme, both the number of iterations required for convergence and the computational time are significantly decreased. The convergence analysis is provided here. As saddle point problems are indefinite systems, the conjugate gradient method is unsuitable for them. The IGPHSS is compared with the Gauss-Seidel method, which requires partial pivoting due to some zero diagonal entries, and with the Uzawa and GPHSS methods. The numerical experiments show that the IGPHSS method outperforms the original GPHSS and the other two methods.
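For context, a sketch of the classical HSS iteration on which the GPHSS/IGPHSS variants build; the added preconditioning matrices are omitted, and the saddle point demo data are assumptions of the sketch:

```python
import numpy as np

def hss(A, b, alpha, iters=1000):
    """Classical Hermitian/skew-Hermitian splitting (HSS) iteration:
        (alpha*I + H) x^{k+1/2} = (alpha*I - S) x^k      + b
        (alpha*I + S) x^{k+1}   = (alpha*I - H) x^{k+1/2} + b,
    with H = (A + A^T)/2 and S = (A - A^T)/2. Convergence can be
    slow for saddle point systems, hence the generous iteration cap."""
    n = A.shape[0]
    H, S = (A + A.T) / 2, (A - A.T) / 2
    I = np.eye(n)
    x = np.zeros(n)
    for _ in range(iters):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
    return x

# Saddle point demo in the nonsymmetric form [[D, B], [-B^T, 0]].
D = np.diag([4.0, 3.0]); B = np.array([[1.0], [2.0]])
A = np.block([[D, B], [-B.T, np.zeros((1, 1))]])
b = np.array([1.0, 1.0, 1.0])
print(hss(A, b, alpha=1.0), np.linalg.solve(A, b))  # the two should agree
```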

17.
The majority of first-order methods for large-scale convex–concave saddle point problems and variational inequalities with monotone operators are proximal algorithms. To make such an algorithm practical, the problem's domain should be proximal-friendly: it should admit a strongly convex function whose linear perturbations are easy to minimize. As a by-product, such a domain admits a computationally cheap linear minimization oracle (LMO) capable of minimizing linear forms. There are, however, important situations where a cheap LMO is indeed available but the problem domain is not proximal-friendly, which motivates the search for algorithms based solely on an LMO. For smooth convex minimization, there exists a classical algorithm using an LMO: conditional gradient. In contrast, the similar techniques known to us for other problems with convex structure (nonsmooth convex minimization, convex–concave saddle point problems, even ones as simple as bilinear, and variational inequalities with monotone operators, even ones as simple as affine) are quite recent and utilize a common approach based on Fenchel-type representations of the associated objectives/vector fields. The goal of this paper is to develop alternative (and seemingly much simpler) decomposition techniques based on an LMO for bilinear saddle point problems and for variational inequalities with affine monotone operators.
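The classical LMO-based algorithm mentioned for smooth minimization, conditional gradient (Frank–Wolfe), in a minimal sketch; the ℓ1-ball domain and the standard 2/(k+2) step size are illustrative assumptions:

```python
import numpy as np

def frank_wolfe(grad_f, lmo, x0, iters=200):
    """Classical conditional gradient: the only access to the domain is
    a linear minimization oracle lmo(g) = argmin_{s in X} <g, s>."""
    x = x0.copy()
    for k in range(iters):
        s = lmo(grad_f(x))                 # cheap LMO call
        x = x + (2.0 / (k + 2)) * (s - x)  # convex combination step
    return x

# Domain: l1 ball of radius 1; the LMO returns a signed vertex.
def lmo_l1(g, radius=1.0):
    i = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[i] = -radius * np.sign(g[i])
    return s

c = np.array([0.3, -2.0, 0.5])
x = frank_wolfe(lambda x: 2 * (x - c), lmo_l1, np.zeros(3))
print(x)  # ~ projection of c onto the unit l1 ball: (0, -1, 0)
```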

18.
In this paper, we develop iteration techniques for Galerkin and collocation methods for linear Volterra integral equations of the second kind with a smooth kernel, using piecewise constant functions. We prove that the convergence rate improves by order \({\mathcal {O}}(h^{2})\) at every iteration step for the Galerkin method, whereas for the collocation method it improves by \({\mathcal {O}}(h)\) in the infinity norm. We also show that the system to be inverted remains the same at every iteration as in the original projection methods. We illustrate our results by numerical examples.

19.
We consider non-linear wavelet-based estimators of spatial regression functions with (known) random design on strictly stationary random fields, which are indexed by the integer lattice points in the \(N\)-dimensional Euclidean space and are assumed to satisfy some mixing conditions. We investigate their asymptotic rates of convergence based on thresholding of empirical wavelet coefficients and show that these estimators achieve nearly optimal convergence rates, within a logarithmic term, over a large range of Besov function classes \(B^{s}_{p,q}\). Wavelet estimators therefore still achieve nearly optimal convergence rates for random fields and explicitly provide extraordinary local adaptability.

20.
We analyze the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method's iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values, the SAG method achieves a faster convergence rate than black-box SG methods. The convergence rate is improved from \(O(1/\sqrt{k})\) to O(1/k) in general, and when the sum is strongly convex the convergence rate is improved from the sublinear O(1/k) to a linear rate of the form \(O(\rho ^k)\) for \(\rho < 1\). Further, in many cases the convergence rate of the new method is also faster than that of black-box deterministic gradient methods, in terms of the number of gradient evaluations. This extends our earlier work (Le Roux et al., Adv Neural Inf Process Syst, 2012), which only led to a faster rate for well-conditioned strongly convex problems. Numerical experiments indicate that the new algorithm often dramatically outperforms existing SG and deterministic gradient methods, and that the performance may be further improved through the use of non-uniform sampling strategies.
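A minimal sketch of the SAG update with a gradient memory, on a least-squares toy problem; the 1/(16L) step size follows the usual SAG recommendation, and the demo data are assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(6)

def sag(grad_i, n, x0, step, iters=5000):
    """Stochastic average gradient: keep the last gradient seen for each
    of the n terms and step along their running average, so the per-
    iteration cost matches SG while the rate matches full-gradient
    methods."""
    x = x0.copy()
    memory = np.zeros((n, x.size))        # last gradient of each f_i
    avg = np.zeros_like(x)
    for _ in range(iters):
        i = rng.integers(n)
        g = grad_i(i, x)
        avg += (g - memory[i]) / n        # update the running average
        memory[i] = g
        x = x - step * avg
    return x

# Least squares: f_i(x) = 0.5*(a_i . x - b_i)^2
A = rng.standard_normal((100, 5)); b = rng.standard_normal(100)
grad_i = lambda i, x: (A[i] @ x - b[i]) * A[i]
L = np.max(np.sum(A * A, axis=1))         # max per-term Lipschitz constant
x = sag(grad_i, 100, np.zeros(5), step=1.0 / (16 * L))
print(np.linalg.norm(A.T @ (A @ x - b)))  # near the least-squares optimum
```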
