Similar Documents (20 results)
1.
In this paper, we study a nonlinear multigrid method for solving a general image denoising model with two L1-regularization terms. Unlike previous studies, we give a simpler derivation of the dual formulation of the general model via the augmented Lagrangian method. To improve the convergence rate of the proposed multigrid method, an improved dual iteration is proposed as its smoother. Furthermore, we apply the proposed method to the anisotropic ROF model and the anisotropic LLT model. We also give local Fourier analyses (LFA) of Chambolle's dual iteration and of a modified smoother for solving these two models, respectively. Numerical results illustrate the efficiency of the proposed method and indicate that such a multigrid method is better suited to large images.

2.
We apply the dual algorithm of Chambolle to the minimization of the LLT model. A convergence theorem is given for the proposed algorithm, which overcomes the numerical difficulties caused by the non-differentiability of the LLT model. The dual algorithm is faster than the original gradient descent algorithm. Numerical experiments are provided to demonstrate its efficiency.
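For orientation, here is a minimal NumPy sketch of Chambolle's dual fixed-point iteration for the classical ROF model; the LLT case replaces the first-order gradient/divergence pair with second-order difference operators, but the update has the same projected fixed-point structure. This is an illustrative sketch, not the paper's LLT implementation; the step bound tau <= 1/8 is the one from Chambolle's 2004 convergence proof.

```python
import numpy as np

def grad(u):
    # forward differences with Neumann (replicate) boundary
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # discrete divergence, the negative adjoint of grad
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_rof(f, lam, tau=0.125, iters=200):
    """Denoise f by min_u TV(u) + ||u - f||^2 / (2*lam) via the dual iteration."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - f / lam)
        denom = 1.0 + tau * np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / denom
        py = (py + tau * gy) / denom
    return f - lam * div(px, py)  # recover the primal (denoised) image
```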

3.
A family of new conjugate gradient methods is proposed based on Perry's idea; it satisfies the descent property or the sufficient descent property for any line search. In addition, based on a scaling technique and a restarting strategy, a family of scaling symmetric Perry conjugate gradient methods with restarting procedures is presented. The memoryless BFGS method and the SCALCG method are special cases of the two families, respectively. Moreover, several concrete new algorithms are suggested. Under Wolfe line searches, the global convergence of the two families is proved by spectral analysis, for both uniformly convex and nonconvex functions. Preliminary numerical comparisons with the CG_DESCENT and SCALCG algorithms show that the new algorithms are very effective for large-scale unconstrained optimization problems. Finally, a remark on further research is given.
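The abstract notes that the memoryless BFGS method arises as a special case of the proposed family. As a point of reference (this is the classical memoryless BFGS direction, not the paper's scaled Perry variant), the direction can be computed from the latest step alone, with no stored matrix:

```python
import numpy as np

def memoryless_bfgs_direction(g, s, y):
    """d = -H g, where H is the BFGS update of the identity built from
    s = x_{k+1} - x_k and y = g_{k+1} - g_k only."""
    sy = s @ y
    if sy <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
        return -g  # curvature condition fails: restart with steepest descent
    Hg = (g - (s * (y @ g) + y * (s @ g)) / sy
          + (1.0 + (y @ y) / sy) * ((s @ g) / sy) * s)
    return -Hg
```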

4.
In 2004, Chambolle proposed an algorithm for mean curvature flow based on a variational problem. Since then, the convergence, extensions, and applications of his algorithm have been studied by many authors. In this paper we prove the convergence of an anisotropic version of Chambolle's algorithm by means of the signed distance function. An application of our scheme to the approximation of nonlocal curvature flows, such as the crystalline one, is also discussed.

5.
New accelerated nonlinear conjugate gradient algorithms, mainly modifications of the Dai-Yuan method for unconstrained optimization, are proposed. Under exact line search the algorithms reduce to the Dai-Yuan conjugate gradient scheme; under inexact line search they satisfy the sufficient descent condition. Since the step lengths in conjugate gradient algorithms may differ from 1 by two orders of magnitude and tend to vary unpredictably, the algorithms are equipped with an acceleration scheme that improves their efficiency. Computational results on a set of 750 unconstrained optimization test problems show that these new conjugate gradient algorithms substantially outperform the Dai-Yuan conjugate gradient algorithm and its hybrid variants, the Hestenes-Stiefel, Polak-Ribière-Polyak, and CONMIN conjugate gradient algorithms, and the limited-memory quasi-Newton algorithm L-BFGS, and compare favorably with CG_DESCENT. Within this numerical study, the accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm ASCALCG proved to be the most robust.
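As background, a sketch of the unaccelerated Dai-Yuan scheme that the new algorithms modify, using SciPy's Wolfe line search; the acceleration step described in the abstract is omitted here.

```python
import numpy as np
from scipy.optimize import line_search

def dai_yuan_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    x, g = x0.copy(), grad(x0)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]  # Wolfe conditions
        if alpha is None:
            alpha = 1e-4                               # crude fallback step
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (d @ (g_new - g))     # Dai-Yuan beta
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```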

6.
Image restoration models based on total variation (TV) have become popular since their introduction by Rudin, Osher, and Fatemi (ROF) in 1992. The dual formulation of this model has a quadratic objective with separable constraints, making projections onto the feasible set easy to compute. This paper proposes applying gradient projection (GP) algorithms to the dual formulation. We test variants of GP with different step-length selection and line search strategies, including techniques based on the Barzilai-Borwein method. Global convergence can in some cases be proved by appealing to existing theory. We also propose a sequential quadratic programming (SQP) approach that takes account of the curvature of the boundary of the dual feasible set. Computational experiments show that the proposed approaches perform well in a wide range of applications and that some are significantly faster than previously proposed methods, particularly when only modest accuracy in the solution is required.
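A minimal sketch of the basic idea: gradient projection on the dual ROF problem with a Barzilai-Borwein (BB1) step length. The paper's safeguarded step selection, line searches, and SQP variant are not reproduced here.

```python
import numpy as np

def tv_dual_gp_bb(f, lam, iters=200):
    """Gradient projection with a BB step on the dual ROF problem
    min_p 0.5*||lam*div(p) - f||^2  subject to  |p_ij| <= 1 pointwise."""
    grad_ = lambda u: (np.diff(u, axis=0, append=u[-1:, :]),
                       np.diff(u, axis=1, append=u[:, -1:]))
    def div_(px, py):  # negative adjoint of grad_
        dx = np.vstack([px[:1], px[1:-1] - px[:-2], -px[-2:-1]])
        dy = np.hstack([py[:, :1], py[:, 1:-1] - py[:, :-2], -py[:, -2:-1]])
        return dx + dy
    def proj(px, py):  # project onto the pointwise unit discs
        n = np.maximum(1.0, np.sqrt(px**2 + py**2))
        return px / n, py / n
    def dual_grad(px, py):
        r = lam * div_(px, py) - f
        gx, gy = grad_(r)
        return -lam * gx, -lam * gy
    px, py = np.zeros_like(f), np.zeros_like(f)
    gx, gy = dual_grad(px, py)
    alpha = 1.0 / (8.0 * lam**2)  # safe first step: grad is (8*lam^2)-Lipschitz
    for _ in range(iters):
        px1, py1 = proj(px - alpha * gx, py - alpha * gy)
        gx1, gy1 = dual_grad(px1, py1)
        dpx, dpy, dgx, dgy = px1 - px, py1 - py, gx1 - gx, gy1 - gy
        denom = (dpx * dgx + dpy * dgy).sum()
        if denom > 0:  # BB1 step: s's / s'y
            alpha = (dpx**2 + dpy**2).sum() / denom
        px, py, gx, gy = px1, py1, gx1, gy1
    return f - lam * div_(px, py)  # primal (restored) image
```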

7.
In this paper, we suggest another accelerated conjugate gradient algorithm for which both the descent and the conjugacy conditions are guaranteed. The search direction is a linear combination of the gradient and the previous direction, with coefficients chosen so that both conditions are satisfied at every iteration. The algorithm introduces a modified Wolfe line search in which the parameter in the second Wolfe condition is adjusted at every iteration. It is shown that, both for uniformly convex functions and for general nonlinear functions, the algorithm with strong Wolfe line search generates directions that remain bounded. The algorithm uses an acceleration scheme that modifies the step length so as to improve the reduction of the function values along the iterations. Numerical comparisons on a set of 75 unconstrained optimization problems of different dimensions show that this computational scheme outperforms known conjugate gradient algorithms such as Hestenes-Stiefel; Polak-Ribière-Polyak; Dai-Yuan and hybrid Dai-Yuan; and CG_DESCENT with Wolfe line search, as well as the quasi-Newton method L-BFGS.

8.
Non-negative matrix factorization (NMF) is the problem of obtaining a representation of data under non-negativity constraints. Since NMF was first proposed by Lee and Seung, it has attracted much attention for over a decade, has been successfully applied to numerous data analysis problems, and has spawned many variants in recent years. Common methods include iterative multiplicative update algorithms, gradient descent methods, and alternating non-negative least squares (ANLS). Since ANLS has nice optimization properties, various optimization methods can be used to solve its subproblems. In this paper, we propose a modified subspace Barzilai-Borwein method for the ANLS subproblems, together with a modified strategy for ANLS itself. Global convergence results for our algorithm are established, and numerical experiments are reported to show its effectiveness.
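For contrast with the ANLS approach, here is the classical Lee-Seung multiplicative update scheme that the abstract cites as a common baseline (not the proposed subspace Barzilai-Borwein solver):

```python
import numpy as np

def nmf_multiplicative(V, r, iters=500, eps=1e-10):
    """Lee-Seung multiplicative updates for min ||V - W H||_F^2 with W, H >= 0."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # eps guards against division by zero
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```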

9.
A new conjugate gradient method is proposed in this paper by applying Powell's symmetrization technique to conjugate gradient methods. Under Wolfe line searches, the global convergence of the method is analyzed via spectral analysis of the conjugate gradient iteration matrix together with Zoutendijk's condition. On this basis, some concrete descent algorithms are developed. Numerical experiments are presented to verify their performance, and the results show that these algorithms are competitive with the PRP+ algorithm. Finally, a brief discussion of the proposed method is given.

10.
We present an analytical study of gradient descent algorithms applied to a classification problem in machine learning based on artificial neural networks. Our approach is based on entropy-entropy dissipation estimates that yield explicit rates. Specifically, as long as the neural nets remain within a set of “good classifiers”, we establish a striking feature of the algorithm: it mathematically diverges as the number of gradient descent iterations (“time”) goes to infinity, but this divergence is only logarithmic, while the loss function vanishes polynomially. As a consequence, the algorithm still yields a classifier that exhibits good numerical performance and may even appear to converge.

11.
Descent methods with linesearch in the presence of perturbations
We consider the class of descent algorithms for unconstrained optimization with an Armijo-type stepsize rule in the case when the gradient of the objective function is computed inexactly. An important novel feature of our theoretical analysis is that the perturbations associated with the gradient are not assumed to be relatively small or to tend to zero in the limit (as a practical matter, we expect them to be reasonably small, so that a meaningful approximate solution can be obtained). This feature makes our analysis applicable to various difficult problems encountered in practice. We propose a modified Armijo-type rule for computing the stepsize which guarantees that the algorithm obtains a reasonable approximate solution. Furthermore, if the perturbations are small relative to the size of the gradient, then our algorithm retains all the standard convergence properties of descent methods.
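For concreteness, a sketch of a plain Armijo backtracking descent loop driven by an inexact gradient oracle, which is the setting of the abstract; the paper's modified rule differs in its details.

```python
import numpy as np

def armijo_descent(f, grad_approx, x0, sigma=1e-4, beta=0.5,
                   alpha0=1.0, alpha_min=1e-12, max_iter=1000):
    """Steepest descent with Armijo backtracking; grad_approx may return a
    perturbed gradient that is not assumed to vanish in the limit."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad_approx(x)
        d = -g
        alpha, fx = alpha0, f(x)
        while f(x + alpha * d) > fx + sigma * alpha * (g @ d):
            alpha *= beta            # backtrack until sufficient decrease
            if alpha < alpha_min:
                return x             # step collapsed: accept current point
        x = x + alpha * d
    return x
```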

12.
We present a linear-time approximation scheme for the trust region subproblem (TRS). It employs Nesterov's accelerated gradient descent algorithm to solve a convex programming reformulation of the TRS. The total time complexity is lower than that of the recent linear-time algorithm. The scheme is further extended to the two-sided trust region subproblem.
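The workhorse referred to in the abstract is Nesterov's accelerated gradient descent; a generic sketch for a convex objective with an L-Lipschitz gradient follows (the TRS-specific convex reformulation is not reproduced).

```python
import numpy as np

def nesterov_agd(grad, x0, L, iters=500):
    """Accelerated gradient descent with the standard momentum schedule."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = y - grad(y) / L                        # gradient step at the extrapolated point
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```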

13.
In this paper we present a derivative-free optimization algorithm for finite minimax problems. The algorithm calculates an approximate gradient for each of the active functions of the finite max function and uses these to generate an approximate subdifferential. The negative projection of 0 onto this set is used as a descent direction in an Armijo-like line search. We also present a robust version of the algorithm, which uses the ‘almost active’ functions of the finite max function in the calculation of the approximate subdifferential. Convergence results are presented for both algorithms, showing that either f(x_k) → −∞ or every cluster point is a Clarke stationary point. Theoretical and numerical results are presented for three specific approximate gradients: the simplex gradient, the centered simplex gradient, and the Gupal estimate of the gradient of the Steklov averaged function. A performance comparison is made between the regular and robust algorithms, the three approximate gradients, and a regular and a robust stopping condition.
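Of the three approximate gradients studied, the simplex gradient is the simplest; a sketch over the canonical simplex {x, x + h*e_1, ..., x + h*e_n} follows, for which it reduces to forward differences.

```python
import numpy as np

def simplex_gradient(f, x, h=1e-5):
    """Solve S^T g = [f(x + s_i) - f(x)]_i for the simplex gradient g;
    with S = h*I this coincides with the forward-difference gradient."""
    n = x.size
    S = h * np.eye(n)  # simplex directions as columns
    df = np.array([f(x + S[:, i]) - f(x) for i in range(n)])
    return np.linalg.solve(S.T, df)
```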

14.
Optimization, 2012, 61(12): 1457-1471
A modified Polak–Ribière–Polyak conjugate gradient algorithm that satisfies both the sufficient descent condition and the conjugacy condition is presented; these properties hold independently of the line search. The algorithm uses the standard Wolfe line search. Under standard assumptions, we show its global convergence. Numerical comparisons with other conjugate gradient algorithms on a set of 750 unconstrained optimization problems, some of them from the CUTE library, show that this computational scheme outperforms the known Polak–Ribière–Polyak algorithm, as well as some other unconstrained optimization algorithms.

15.
The molecular similarity index measures the similarity between two molecules. Computing the optimal similarity index is a hard global optimization problem. Since the objective function value is very hard to compute and its gradient vector is usually not available, previous research has been based on non-gradient algorithms such as random search and the simplex method. In a recent paper, McMahon and King introduced a Gaussian approximation so that both the function value and the gradient vector can be computed analytically. They then proposed a steepest descent algorithm for computing the optimal similarity index of small molecules. In this paper, we consider a similar problem. Instead of computing atom-based derivatives, we directly compute the derivatives with respect to the six free variables describing the relative positions of the two molecules. We show that both the function value and the gradient vector can be computed analytically, and we apply the more advanced BFGS method in addition to the steepest descent algorithm. The algorithms are applied to compute the similarities among the 20 amino acids and among biomolecules such as proteins. Our computational results show that our algorithm achieves greater accuracy than previous methods and has a 6-fold speedup over the steepest descent method.
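A sketch of the optimization layer only, using SciPy's BFGS. The names `similarity` and `similarity_grad` are hypothetical placeholders for the analytic Gaussian-overlap similarity and its gradient in the six rigid-body variables (three translations, three rotation angles); they are not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def align(similarity, similarity_grad, x0=np.zeros(6)):
    """Maximize a molecular similarity index over the six relative-position
    variables by minimizing its negative with BFGS."""
    res = minimize(lambda x: -similarity(x), x0,
                   jac=lambda x: -similarity_grad(x), method="BFGS")
    return res.x, -res.fun  # optimal pose and similarity value
```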

16.
We propose an adaptive smoothing algorithm based on Nesterov's smoothing technique (Nesterov, Math Program 103(1):127-152, 2005) for solving “fully” nonsmooth composite convex optimization problems. Our method combines Nesterov's accelerated proximal gradient scheme with a new homotopy strategy for the smoothness parameter. By an appropriate choice of smoothing functions, we develop a new algorithm that has O(1/ε) worst-case iteration complexity while preserving the same per-iteration complexity as Nesterov's method, and that automatically updates the smoothness parameter at each iteration. We then customize the algorithm to four special cases covering various applications, and specialize it to constrained convex optimization problems with a convergence guarantee on a primal sequence of iterates. We demonstrate the algorithm on three numerical examples and compare it with related algorithms.
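To illustrate the smoothing-plus-homotopy idea on a concrete instance, consider min_x ||Ax - b||_1 with |t| replaced by the Huber function (Nesterov smoothing with the Euclidean prox). The sketch below halves the smoothness parameter on a fixed schedule, whereas the paper updates it adaptively at every iteration.

```python
import numpy as np

def smoothed_l1_min(A, b, mu0=1.0, stages=8, inner=200):
    """Homotopy smoothing sketch for min_x ||A x - b||_1."""
    x = np.zeros(A.shape[1])
    mu = mu0
    L0 = np.linalg.norm(A, 2) ** 2  # gradient of smoothed objective is (L0/mu)-Lipschitz
    for _ in range(stages):
        y, t = x.copy(), 1.0
        for _ in range(inner):                        # accelerated gradient at fixed mu
            r = A @ y - b
            g = A.T @ np.clip(r / mu, -1.0, 1.0)      # gradient of the Huber-smoothed objective
            x_new = y - (mu / L0) * g
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)
            x, t = x_new, t_new
        mu /= 2.0                                     # tighten the smoothing
    return x
```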

17.
A modification of Karmarkar's algorithm for linear programming successively generates column-scaled equivalents of the original linear program and, in the scaled coordinates, obtains improvements along steepest descent directions. As an interior-feasibility-preserving algorithm, termination requires a purification algorithm to obtain a dual basic optimal solution; together the two algorithms comprise a ‘hybrid’ procedure for solving linear programs. In this paper we present convergence results for the Phase II and Phase I portions of the scaling algorithm. We also present results of numerical experiments on examples of Klee-Minty type, which show sensitivity to the starting interior-feasible point and to the steepest descent step size.
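A sketch of the scaled steepest-descent (affine-scaling) step the abstract describes, for min c^T x subject to Ax = b, x > 0, taken from a strictly interior point; the purification phase is omitted.

```python
import numpy as np

def affine_scaling_step(A, c, x, gamma=0.9):
    """One primal affine-scaling step from an interior point x > 0."""
    X2 = x ** 2
    w = np.linalg.solve(A @ (X2[:, None] * A.T), A @ (X2 * c))  # dual estimate
    r = c - A.T @ w             # reduced costs
    d = -X2 * r                 # steepest descent in the scaled coordinates
    neg = d < 0
    if not neg.any():
        return x                # no blocking component: optimal face (or unbounded)
    alpha = gamma * np.min(-x[neg] / d[neg])  # damped ratio test keeps x interior
    return x + alpha * d
```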

18.
In this paper, a stochastic approximation (SA) algorithm with a new adaptive step size scheme is proposed. The new scheme uses a fixed number of previous noisy function values to adjust the step at every iteration. The algorithm is formulated for a general descent direction, and almost sure convergence is established; the case when the negative gradient is chosen as the search direction is also considered. The algorithm is tested on a set of standard test problems. Numerical results show good performance and verify the efficiency of the algorithm compared with some existing algorithms with adaptive step sizes.
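As a loose illustration of the idea only (the paper's concrete rule differs), a step size can be steered by a fixed window of recent noisy function values: grow it when they trend downward, shrink it otherwise.

```python
import numpy as np

def sa_adaptive(grad_noisy, f_noisy, x0, a0=0.1, window=6,
                up=1.05, down=0.7, max_iter=1000):
    """Stochastic approximation with a heuristic window-based adaptive step."""
    x, a = x0.copy(), a0
    hist = [f_noisy(x)]
    for _ in range(max_iter):
        x = x - a * grad_noisy(x)        # negative-gradient search direction
        hist.append(f_noisy(x))
        if len(hist) > window:
            hist.pop(0)
            k = window // 2              # compare older vs. newer half of the window
            a *= up if np.mean(hist[k:]) < np.mean(hist[:k]) else down
    return x
```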

19.
The paper is devoted to the convergence properties of finite-difference local descent algorithms in global optimization problems with a special -convex structure. It is assumed that the objective function can be closely approximated by some smooth convex function. Stability properties of the perturbed gradient descent and coordinate descent methods are investigated. Based on these results, some global optimization properties of finite-difference local descent algorithms, in particular the coordinate descent method, are discovered; these properties are not inherent in methods using exact gradients. The paper was presented at the II. IIASA-Workshop on Global Optimization, Sopron (Hungary), December 9–14, 1990.

20.

We construct new sequence transformations based on Wynn's epsilon and rho algorithms. The recursions of the new algorithms include those of Wynn's epsilon and rho algorithms and of Osada's generalized rho algorithm as special cases. We demonstrate the performance of our algorithms numerically by applying them to some linearly and logarithmically convergent sequences as well as to some divergent series.
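For reference, a compact implementation of the classical Wynn epsilon algorithm that the new transformations generalize; the even-order columns of the table hold the accelerated approximants. (Exactly equal neighboring entries would cause a division by zero; a production version needs a guard.)

```python
import numpy as np

def wynn_epsilon(s):
    """Accelerate the sequence s_0..s_N via the recursion
    eps_{k+1}^{(n)} = eps_{k-1}^{(n+1)} + 1/(eps_k^{(n+1)} - eps_k^{(n)})."""
    cols = [np.zeros(len(s) + 1), np.asarray(s, dtype=float)]  # eps_{-1}, eps_0
    while len(cols[-1]) > 1:
        prev, cur = cols[-2], cols[-1]
        cols.append(prev[1:len(cur)] + 1.0 / (cur[1:] - cur[:-1]))
    k = len(cols) - 1
    best = cols[k] if k % 2 == 1 else cols[k - 1]  # odd list index = even epsilon order
    return best[0]

# Example: partial sums of the alternating harmonic series, whose limit is ln 2.
s = np.cumsum([(-1) ** k / (k + 1) for k in range(10)])
print(wynn_epsilon(s))  # close to 0.693147..., far beyond the raw partial sums
```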
