Similar Literature
20 similar documents found (search time: 78 ms)
1.
We propose a general alternative regularization algorithm for solving the split equality fixed point problem for the class of quasi-pseudocontractive mappings in Hilbert spaces. We also illustrate the performance of our algorithm with a numerical example and compare the results with those of other algorithms in the literature. We find that our algorithm requires fewer iterations and less CPU time to converge than some existing algorithms. Our results extend and generalize several existing results in this direction.

2.
邵新慧, 祁猛. 《计算数学》 2022, 44(2): 206-216
Multilinear systems have many practical applications in engineering computation, data mining, and related fields, and many problems can be reduced to solving a multilinear system. In this paper, we first propose a new iterative algorithm for solving multilinear systems whose coefficient tensor is an M-tensor, and then derive an improved algorithm on this basis; the convergence of both algorithms is analyzed. Numerical examples show that both algorithms are effective and that the improved algorithm requires less iteration time.

3.
Approximations to continuous functions by linear splines can generally be greatly improved if the knot points are free variables. In this paper we address the problem of computing a best linear spline L2-approximant to a given continuous function on a given closed real interval with a fixed number of free knots. We describe an algorithm that is currently available and establish the theoretical basis for two new algorithms that we have developed and tested. We show that one of these new algorithms has good local convergence properties by comparison with the other techniques, though its convergence is quite slow. The second new algorithm is not so robust but is quicker, and so is used to aid efficiency. A starting procedure based on a dynamic programming approach is introduced to give more reliable global convergence properties. We thus propose a hybrid algorithm which is both robust and reasonably efficient for this problem.

4.
This paper introduces an algorithm for the nonnegative matrix factorization-and-completion problem, which aims to find nonnegative low-rank matrices X and Y so that the product XY approximates a nonnegative data matrix M whose elements are partially known (to a certain accuracy). This problem aggregates two existing problems: (i) nonnegative matrix factorization, where all entries of M are given, and (ii) low-rank matrix completion, where nonnegativity is not required. By taking advantage of both nonnegativity and low-rankness, one can generally obtain better results than by using only one of the two properties. We propose to solve the resulting non-convex constrained least-squares problem with an algorithm based on the classical alternating direction augmented Lagrangian method. Preliminary convergence properties of the algorithm and numerical simulation results are presented. Compared to a recent algorithm for nonnegative matrix factorization, the proposed algorithm produces factorizations of similar quality using only about half of the matrix entries. On tasks of recovering incomplete grayscale and hyperspectral images, the proposed algorithm yields overall better quality than two recent matrix-completion algorithms that do not exploit nonnegativity.
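The factorization-and-completion objective can be illustrated with a toy sketch. The example below uses simple alternating projected-gradient steps on the observed entries (our own simplification for illustration, not the paper's ADMM-based method), minimizing ||P_Ω(XY − M)||_F² subject to X, Y ≥ 0, with a 0/1 mask standing in for the known-entry set Ω:

```python
import numpy as np

def nmf_complete(M, mask, rank, steps=5000, lr=0.02, seed=0):
    """Toy nonnegative factorization-and-completion:
    minimize ||mask * (X @ Y - M)||_F^2  subject to  X, Y >= 0,
    via alternating projected gradient steps (illustrative only)."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    X = rng.random((m, rank))
    Y = rng.random((rank, n))
    for _ in range(steps):
        R = mask * (X @ Y - M)                    # residual on observed entries only
        X = np.maximum(X - lr * (R @ Y.T), 0.0)   # gradient step, project onto X >= 0
        R = mask * (X @ Y - M)
        Y = np.maximum(Y - lr * (X.T @ R), 0.0)
    return X, Y

# Recover a random rank-2 nonnegative matrix from ~60% of its entries.
rng = np.random.default_rng(1)
M_true = rng.random((20, 2)) @ rng.random((2, 15))
mask = (rng.random(M_true.shape) < 0.6).astype(float)
X, Y = nmf_complete(M_true, mask, rank=2)
rel_err = np.linalg.norm(mask * (X @ Y - M_true)) / np.linalg.norm(mask * M_true)
```

The mask keeps unobserved entries out of the residual, so the factors are fitted only to the known data, while the rank constraint and nonnegativity regularize the completion.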

5.
We propose a novel iterative algorithm for solving a large sparse linear system. The method is based on the EM algorithm. If the system has a unique solution, the algorithm guarantees convergence with a geometric rate. Otherwise, convergence to a minimal Kullback–Leibler divergence point is guaranteed. The algorithm is easy to code and competitive with other iterative algorithms.
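For intuition, here is a minimal sketch of an EM-style multiplicative update for a nonnegative linear system A x = b. The nonnegativity of A, b, and x is an assumption of this example (it is the classical EM/Richardson–Lucy iteration for minimizing the Kullback–Leibler divergence, not necessarily the paper's exact scheme):

```python
import numpy as np

def em_solve(A, b, iters=20000):
    """Multiplicative EM-style update for A x = b with A, b >= 0.
    Each step decreases KL(b || A x); if a consistent nonnegative
    solution exists, A x converges to b."""
    x = np.ones(A.shape[1])
    col_sums = A.sum(axis=0)              # the normalizer A^T 1
    for _ in range(iters):
        x *= (A.T @ (b / (A @ x))) / col_sums
    return x

# Consistent nonnegative system: b is generated from a known x.
rng = np.random.default_rng(0)
A = rng.random((10, 10)) + 0.1            # strictly positive matrix
x_true = rng.random(10) + 0.1
b = A @ x_true
x = em_solve(A, b)
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

Note that the update is multiplicative, so an iterate started strictly positive stays strictly positive, which is what makes the KL-divergence interpretation well defined.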

6.
In this paper, we present an extrapolated parallel subgradient projection method with a centering technique for the convex feasibility problem. The centering technique improves convergence by reducing the oscillation of the iterate sequence. To simplify the convergence proof, we recast the parallel algorithm in the original space as a sequential one in a newly constructed product space; the convergence of the parallel algorithm then follows from that of the sequential one under suitable conditions. Numerical results show that the new algorithm converges faster than existing algorithms.

7.
In this paper, we propose three iteration schemes to compute approximate solutions of variational inequalities in the setting of Banach spaces. First, we suggest a Mann-type steepest-descent iterative algorithm, which combines two well-known methods: the Mann iterative method and the steepest-descent method. Second, we introduce a modified hybrid steepest-descent iterative algorithm. Third, we propose a modified hybrid steepest-descent iterative algorithm based on the resolvent operator. For the first two schemes, we prove that the sequences generated by the proposed algorithms converge to a solution of a variational inequality in the Banach-space setting. For the third, we prove that the iterative sequence converges to a zero of an operator, which is also a solution of a variational inequality.
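As a point of reference for the Hilbert-space special case, a Mann-type projection iteration for a variational inequality VI(F, C) looks as follows. The choice of a strongly monotone affine F and a box constraint C is our own simplifying assumption for illustration, not the paper's Banach-space schemes:

```python
import numpy as np

def mann_projected(F, project, x0, eta=0.2, alpha=0.5, iters=2000):
    """Mann-type iteration x_{k+1} = (1-alpha) x_k + alpha P_C(x_k - eta F(x_k))
    for VI(F, C): find x* in C with <F(x*), y - x*> >= 0 for all y in C."""
    x = x0.astype(float)
    for _ in range(iters):
        x = (1 - alpha) * x + alpha * project(x - eta * F(x))
    return x

# F(x) = A x + q with A symmetric positive definite (strongly monotone);
# C is the box [0, 1]^n, whose projection is a coordinate-wise clip.
rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T / n + np.eye(n)
q = rng.standard_normal(n)
F = lambda x: A @ x + q
project = lambda x: np.clip(x, 0.0, 1.0)
x = mann_projected(F, project, np.zeros(n))
# A solution is exactly a fixed point of the projected step.
fixed_point_gap = np.linalg.norm(x - project(x - 0.2 * F(x)))
```

The averaging parameter alpha is what makes this a Mann iteration; alpha = 1 recovers the plain projected-gradient step.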

8.
The alternating direction method of multipliers (ADMM) is a widely used method for solving many convex minimization models arising in signal and image processing. In this paper, we propose an inertial ADMM for solving a two-block separable convex minimization problem with linear equality constraints. The algorithm is obtained by applying the inertial Douglas-Rachford splitting algorithm to the dual of the primal problem. We study the convergence of the proposed algorithm in infinite-dimensional Hilbert spaces. Furthermore, we apply the proposed algorithm to the robust principal component analysis problem and compare it with other state-of-the-art algorithms. Numerical results demonstrate the advantage of the proposed algorithm.
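For readers new to the method, the baseline (non-inertial) two-block ADMM looks as follows on a lasso instance, min_x ½||Ax − b||² + λ||z||₁ subject to x − z = 0. This is a standard textbook sketch; the paper's inertial variant additionally extrapolates the iterates, which is not shown here:

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=500):
    """Two-block ADMM for min 0.5||Ax - b||^2 + lam*||z||_1  s.t.  x = z."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                       # scaled dual variable
    # Factor once: the x-update solves (A^T A + rho I) x = A^T b + rho (z - u).
    Q = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(Q, Atb + rho * (z - u))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft-threshold
        u += x - z                        # dual ascent on the constraint x = z
    return x, z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
beta = np.zeros(10); beta[:3] = [2.0, -1.5, 1.0]   # sparse ground truth
b = A @ beta + 0.01 * rng.standard_normal(30)
x, z = admm_lasso(A, b)
primal_gap = np.linalg.norm(x - z)
```

The two blocks are the smooth least-squares term (handled by a linear solve) and the nonsmooth l1 term (handled by its proximal map); the dual update enforces the coupling constraint.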

9.
We propose a new stochastic first-order algorithm for solving sparse regression problems. In each iteration, our algorithm utilizes a stochastic oracle for a subgradient of the objective function. The algorithm is based on a stochastic version of the estimate sequence technique introduced by Nesterov (Introductory Lectures on Convex Optimization: A Basic Course, Kluwer, Amsterdam, 2003). Its convergence rate depends continuously on the noise level of the gradient; in the limiting case of a noiseless gradient, the rate matches that of optimal deterministic gradient algorithms. We also establish some large deviation properties of the algorithm. Unlike existing stochastic gradient methods with optimal convergence rates, our algorithm readily enforces sparsity at all iterations, which is a critical property for applications of sparse regression.
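The "sparsity at all iterations" property can be illustrated with a plain stochastic proximal subgradient method on a lasso problem (an assumed example, without the paper's estimate-sequence acceleration): applying the l1 proximal map, i.e. soft-thresholding, after every stochastic gradient step keeps each iterate exactly sparse.

```python
import numpy as np

def prox_sgd_lasso(X, y, lam=0.2, eta=0.05, epochs=50, batch=10, seed=0):
    """Stochastic proximal gradient method for
    min_w (1/2n)||Xw - y||^2 + lam*||w||_1.
    The soft-threshold prox step after every minibatch step keeps
    the iterate exactly sparse at all iterations."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for _ in range(n // batch):
            idx = rng.integers(0, n, size=batch)              # sample a minibatch
            g = X[idx].T @ (X[idx] @ w - y[idx]) / batch      # stochastic gradient
            w = w - eta * g
            w = np.sign(w) * np.maximum(np.abs(w) - eta * lam, 0.0)  # prox of lam*||.||_1
    return w

rng = np.random.default_rng(1)
n, d = 200, 50
w_true = np.zeros(d); w_true[:5] = 2.0
X = rng.standard_normal((n, d))
y = X @ w_true + 0.1 * rng.standard_normal(n)
w = prox_sgd_lasso(X, y)
mse = np.mean((X @ w - y) ** 2)
```

By contrast, methods that only average subgradients produce dense iterates and recover sparsity, if at all, only in the limit.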

10.
In this paper, we consider the discrete optimization via simulation problem with a single stochastic constraint. We present two genetic-algorithm-based methods that adopt different sampling rules and search mechanisms, and thus deliver different statistical guarantees. The first algorithm offers global convergence as the simulation effort goes to infinity; however, its finite-time efficiency may be sacrificed to maintain this theoretically appealing property. We therefore propose a second, heuristic algorithm that exploits the desirable mechanics of the genetic algorithm and may be better able to find near-optimal solutions in a reasonable amount of time. Empirical studies compare the efficiency of the proposed algorithms with that of existing ones.

11.
A Class of Trust-Region Algorithms with Nonmonotone Line Search
By combining the nonmonotone Wolfe line search technique with the traditional trust-region method, we propose a new class of trust-region algorithms for unconstrained optimization. The new algorithm solves the trust-region subproblem only once per iteration, and at every iteration the approximate Hessian satisfies the quasi-Newton condition and remains positive definite. Under certain conditions, the global convergence and strong convergence of the algorithm are proved. Numerical experiments show that the new algorithm inherits the advantages of the nonmonotone technique and, for solving certain…

12.
In this article, we present a fast and stable algorithm for solving a class of optimization problems that arise in many statistical estimation procedures, such as the sparse fused lasso over a graph, convex clustering, and trend filtering. We propose a so-called augmented alternating direction method of multipliers (ADMM) algorithm to solve this class of problems. Compared to a standard ADMM algorithm, our proposal significantly reduces the computational cost at each iteration while maintaining roughly the same overall convergence speed. We also consider a new varying penalty scheme for the ADMM algorithm, which can further accelerate convergence, especially when solving a sequence of problems with tuning parameters of different scales. Extensive numerical experiments on the sparse fused lasso problem show that the proposed algorithm is more efficient than the standard ADMM and two other state-of-the-art specialized algorithms. Finally, we discuss a possible extension and some interesting connections to two well-known algorithms. Supplementary materials for the article are available online.

13.
We consider the problem of finding the most probable explanation (also known as the MAP assignment) on probabilistic graphical models. Dual decomposition algorithms based on coordinate descent are efficient approximate techniques for this problem, in which local dual functions are constructed and optimized so that the dual objective improves monotonically. In this paper, we present a unifying framework for constructing and optimizing these local dual functions, and introduce an energy-distribution view to analyze the convergence rates of such algorithms. To optimize the local dual functions, we first propose a new concept, the energy distribution ratio, to describe the features of the solutions, and then derive an explicit optimal solution, which covers most monotonic dual decomposition algorithms. We show that these algorithms differ in both the forms of their local dual functions and the settings of their energy distribution ratios, and that existing algorithms mainly focus on constructing compact and solvable local dual functions. In contrast, we study the impact of the energy distribution ratios and introduce two energy-distribution criteria for fast convergence. Moreover, we exploit dynamic energy distribution ratios to optimize the local dual functions, and propose a series of improved algorithms. Experimental results on synthetic and real problems show that the improved algorithms outperform the existing ones in convergence performance.

14.
Optimization, 2012, 61(4): 1011-1031
This article deals with the conjugate gradient method on a Riemannian manifold with interest in global convergence analysis. The existing conjugate gradient algorithms on a manifold endowed with a vector transport need the assumption that the vector transport does not increase the norm of tangent vectors, in order to confirm that generated sequences have a global convergence property. In this article, the notion of a scaled vector transport is introduced to improve the algorithm so that the generated sequences may have a global convergence property under a relaxed assumption. In the proposed algorithm, the transported vector is rescaled in case its norm has increased during the transport. The global convergence is theoretically proved and numerically observed with examples. In fact, numerical experiments show that there exist minimization problems for which the existing algorithm generates divergent sequences, but the proposed algorithm generates convergent sequences.

15.
We propose two restricted-memory level-bundle-like algorithms for minimizing a convex function over a convex set. If the memory is restricted to one linearization of the objective function, both algorithms are variations of the projected subgradient method. The first algorithm, proposed in Hilbert space, is a conceptual one. It is shown to be strongly convergent to the solution that lies closest to the initial iterate; furthermore, the entire sequence of iterates generated by the algorithm is contained in a ball with diameter equal to the distance between the initial point and the solution set. The second algorithm is an implementable version that mimics the conceptual one as closely as possible so as to retain its convergence properties. The implementable algorithm is validated by numerical results on several two-stage stochastic linear programs.
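The one-linearization limiting case mentioned above reduces to the classical projected subgradient method, which in finite dimensions can be sketched as follows. The choice of problem, f(x) = ||Ax − b||₁ over a box, is an assumption of this example:

```python
import numpy as np

def projected_subgradient(f, subgrad, project, x0, iters=2000):
    """Projected subgradient method with diminishing steps; returns the
    best iterate seen, since the method is not monotone in f."""
    x = project(x0.astype(float))
    best_x, best_f = x, f(x)
    for k in range(iters):
        g = subgrad(x)
        x = project(x - (0.1 / np.sqrt(k + 1)) * g)   # diminishing step size
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Minimize the nonsmooth f(x) = ||Ax - b||_1 over the box [0, 1]^n;
# b is built from a feasible point, so the optimal value is 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_star = rng.random(10)
b = A @ x_star
f = lambda x: np.abs(A @ x - b).sum()
subgrad = lambda x: A.T @ np.sign(A @ x - b)
project = lambda x: np.clip(x, 0.0, 1.0)
x_best, f_best = projected_subgradient(f, subgrad, project, np.zeros(10))
```

Tracking the best iterate is essential here: unlike gradient descent on a smooth function, individual subgradient steps can increase the objective.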

16.
We propose a modification of the classical extragradient and proximal point algorithms for finding a zero of a maximal monotone operator in a Hilbert space. At each iteration of the method, an approximate extragradient-type step is performed using information obtained from an approximate solution of a proximal point subproblem. The algorithm is of a hybrid type, as it combines steps of the extragradient and proximal methods. Furthermore, the algorithm uses elements in the enlargement (proposed by Burachik, Iusem and Svaiter) of the operator defining the problem. One of the important features of our approach is that it allows significant relaxation of tolerance requirements imposed on the solution of proximal point subproblems. This yields a more practical proximal-algorithm-based framework. Weak global convergence and local linear rate of convergence are established under suitable assumptions. It is further demonstrated that the modified forward-backward splitting algorithm of Tseng falls within the presented general framework.

17.
Convergence of the BFGS Algorithm for Nonconvex Optimization Problems
The BFGS algorithm is one of the best-known numerical algorithms in unconstrained optimization. Whether the BFGS algorithm is globally convergent for nonconvex functions is an open problem. In this paper we consider the BFGS algorithm with Wolfe line search applied to nonconvex objective functions, and give a sufficient condition under which the algorithm converges.
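For context, a self-contained BFGS sketch follows, tried on the nonconvex Rosenbrock function. A simple Armijo backtracking line search stands in for the Wolfe search that the convergence theory assumes, so this is an illustration of the method, not of the paper's sufficient condition:

```python
import numpy as np

def bfgs(f, grad, x0, iters=500, tol=1e-8):
    """BFGS with Armijo backtracking. The inverse-Hessian update is
    skipped whenever the curvature condition s^T y > 0 fails, which
    keeps H positive definite (the property the abstract refers to)."""
    x = x0.astype(float)
    n = x.size
    H = np.eye(n)                        # inverse-Hessian approximation
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                       # quasi-Newton search direction
        t, fx, slope = 1.0, f(x), g @ p
        while t > 1e-12 and f(x + t * p) > fx + 1e-4 * t * slope:
            t *= 0.5                     # backtrack until Armijo holds
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                   # curvature condition
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# The (nonconvex) Rosenbrock function and its gradient.
rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
rosen_grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
    200 * (x[1] - x[0] ** 2),
])
x_min = bfgs(rosen, rosen_grad, np.array([-1.2, 1.0]))
```

With a genuine Wolfe search, the curvature condition holds automatically at every accepted step; with Armijo only, the skip rule is the standard safeguard.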

18.
We propose an algorithm, semismooth Newton coordinate descent (SNCD), for the elastic-net penalized Huber loss regression and quantile regression in high dimensional settings. Unlike existing coordinate descent type algorithms, the SNCD updates a regression coefficient and its corresponding subgradient simultaneously in each iteration. It combines the strengths of the coordinate descent and the semismooth Newton algorithm, and effectively solves the computational challenges posed by dimensionality and nonsmoothness. We establish the convergence properties of the algorithm. In addition, we present an adaptive version of the “strong rule” for screening predictors to gain extra efficiency. Through numerical experiments, we demonstrate that the proposed algorithm is very efficient and scalable to ultrahigh dimensions. We illustrate the application via a real data example. Supplementary materials for this article are available online.
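For comparison with the coordinate-descent baseline that SNCD improves on, here is the familiar cyclic coordinate descent with soft-thresholding on the squared-error elastic net. Using the smooth squared loss is an assumption of this sketch; SNCD's contribution is precisely handling the nonsmooth Huber and quantile losses, which this baseline cannot do directly:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def elastic_net_cd(X, y, lam=0.1, alpha=0.9, sweeps=200):
    """Cyclic coordinate descent for the squared-error elastic net:
    min_b (1/2n)||y - Xb||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||_2^2)."""
    n, d = X.shape
    beta = np.zeros(d)
    r = y.copy()                          # running residual y - X @ beta
    for _ in range(sweeps):
        for j in range(d):
            xj = X[:, j]
            # Partial correlation with coordinate j removed from the fit.
            rho = xj @ r / n + (xj @ xj / n) * beta[j]
            new = soft_threshold(rho, lam * alpha) / (xj @ xj / n + lam * (1 - alpha))
            r += xj * (beta[j] - new)     # keep the residual consistent
            beta[j] = new
    return beta

rng = np.random.default_rng(0)
n, d = 100, 20
beta_true = np.zeros(d); beta_true[:3] = [1.5, -2.0, 1.0]
X = rng.standard_normal((n, d))
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta = elastic_net_cd(X, y)
```

Each coordinate update is an exact one-dimensional minimization, and maintaining the residual in place keeps a full sweep at O(nd) cost.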

19.
We deal with zero-sum two-player stochastic games with perfect information. We propose two algorithms to find the uniform optimal strategies and one method to compute the optimality range of discount factors. We prove the convergence in finite time for one algorithm. The uniform optimal strategies are also optimal for the long run average criterion and, in transient games, for the undiscounted criterion as well.

20.
In this paper, we propose a non-interior continuation algorithm for solving the P0-matrix linear complementarity problem (LCP), which is conceptually simpler than most existing non-interior continuation algorithms in the sense that the proposed algorithm only needs to solve at most one linear system of equations at each iteration. We show that the proposed algorithm is globally convergent under a common assumption. In particular, we show that the proposed algorithm is globally linearly and locally quadratically convergent under some assumptions which are weaker than those required in many existing non-interior continuation algorithms. It should be pointed out that the assumptions used in our analysis of both global linear and local quadratic convergence do not imply the uniqueness of the solution to the LCP concerned. To the best of our knowledge, such a convergence result has not been reported in the literature.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号