1.
Hideaki Iiduka 《Applied mathematics and computation》2011,217(13):6315-6327
Many constrained sets in problems such as signal processing and optimal control can be represented as the fixed point set of a certain nonexpansive mapping, and a number of iterative algorithms have been presented for solving a convex optimization problem over a fixed point set. This paper presents a novel gradient method whose search direction is a three-term conjugate gradient direction of the kind used to accelerate conjugate gradient methods for unconstrained optimization problems. Under standard assumptions, the algorithm is guaranteed to converge strongly to the solution of the problem. Numerical comparisons with existing gradient methods demonstrate the effectiveness and fast convergence of this algorithm.
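For illustration, a minimal sketch of the general idea (not Iiduka's actual scheme): the constraint set is taken to be the fixed point set of a box projection, a simple nonexpansive mapping, and the search direction augments steepest descent with the previous direction and the gradient difference. The test problem, step sizes, and three-term coefficients below are illustrative assumptions.

```python
import numpy as np

# Sketch only (not Iiduka's actual algorithm): minimize a convex quadratic over
# Fix(T), where T is the projection onto a box -- a nonexpansive mapping whose
# fixed point set is the box itself.  The direction d combines steepest descent
# with the previous direction and the gradient difference ("three-term").

A = np.diag([1.0, 4.0, 9.0])
b = np.array([1.0, 1.0, 1.0])
grad = lambda x: A @ x - b                  # gradient of f(x) = 0.5*x'Ax - b'x

T = lambda x: np.clip(x, -0.2, 0.2)         # nonexpansive mapping, Fix(T) = box

x = np.zeros(3)
g = grad(x)
d = -g
for k in range(1, 300):
    lam = 1.0 / (k + 1)                     # diminishing step size (illustrative)
    x_new = T(x + lam * d)                  # move along d, then map back into Fix(T)
    g_new = grad(x_new)
    gg = max(g @ g, 1e-16)
    beta = (g_new @ g_new) / gg             # CG-like coefficient (illustrative choice)
    theta = (g_new @ d) / gg
    d = -g_new + beta * d - theta * (g_new - g)   # three-term direction
    x, g = x_new, g_new

print("approximate constrained minimizer:", x)
```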
2.
Adjoint techniques are a common tool in the numerical treatment of optimal control problems. They are used for efficient evaluation of the gradient of the objective in gradient-based optimization algorithms. Different adjoint techniques for the optimal control of Burgers equation with Neumann boundary control are studied. The methods differ in the point in the numerical algorithm at which the adjoints are incorporated. Discretization methods for the continuous adjoint are discussed and compared with methods applying the discrete adjoint. Using the example of the implicit Euler method and the Crank-Nicolson method, it is shown that a discretization of the adjoint problem that is adjoint to the discretized optimal control problem avoids additional errors in gradient-based optimization algorithms. The approach of discrete adjoints coincides with that of automatic differentiation (AD) tools, which provide exact gradient calculations on the discrete level.
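A toy sketch of the discretize-then-differentiate (discrete adjoint) idea, with Burgers' equation replaced by a scalar linear ODE so the example stays short; all names and parameter values are chosen here, not taken from the paper. The discrete adjoint recursion reproduces the gradient of the discretized cost exactly, as the finite-difference check confirms.

```python
import numpy as np

# Toy discrete-adjoint example: implicit Euler for u' = -a*u + q with a constant
# control q and cost J(q) = 0.5*(u_N - u_target)^2.  Differentiating the
# discretized recursion (reverse mode) gives the gradient of the discrete cost
# exactly, which the central finite-difference check confirms.

a, u0, u_target = 2.0, 1.0, 0.3
h, N = 0.05, 40

def forward(q):
    u = np.empty(N + 1)
    u[0] = u0
    for n in range(N):
        u[n + 1] = (u[n] + h * q) / (1.0 + h * a)   # implicit Euler step
    return u

def cost(q):
    return 0.5 * (forward(q)[-1] - u_target) ** 2

def grad_discrete_adjoint(q):
    u = forward(q)
    p = u[-1] - u_target                 # adjoint value dJ/du_N
    dJdq = 0.0
    for n in reversed(range(N)):
        dJdq += p * h / (1.0 + h * a)    # sensitivity of step n with respect to q
        p = p / (1.0 + h * a)            # propagate the adjoint backwards
    return dJdq

q, eps = 0.7, 1e-6
print("discrete adjoint gradient :", grad_discrete_adjoint(q))
print("finite-difference check   :", (cost(q + eps) - cost(q - eps)) / (2 * eps))
```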
3.
In this paper, we consider a general class of nonlinear mixed discrete programming problems. By introducing continuous variables to replace the discrete variables, the problem is first transformed into an equivalent nonlinear continuous optimization problem subject to the original constraints and additional linear and quadratic constraints. Then, an exact penalty function is employed to construct a sequence of unconstrained optimization problems, each of which can be solved effectively by unconstrained optimization techniques such as conjugate gradient or quasi-Newton methods. It is shown that any local optimal solution of the unconstrained optimization problem is a local optimal solution of the transformed nonlinear constrained continuous optimization problem when the penalty parameter is sufficiently large. Numerical experiments are carried out to test the efficiency of the proposed method.
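A minimal sketch of the relaxation-plus-penalty idea (not the paper's exact construction): a binary variable y in {0, 1} is replaced by a continuous z in [0, 1], integrality is encoded as z(1 - z) = 0, and that constraint is folded into a penalty term whose weight is increased. The toy objective, the absolute-value penalty, and the use of Nelder-Mead (to sidestep the nonsmoothness of this toy penalty) are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch only (not the paper's exact penalty): one continuous variable x and one
# binary variable y in {0, 1}.  The binary variable is relaxed to z in [0, 1];
# integrality is encoded as z*(1 - z) = 0 and folded into a penalty term whose
# weight rho is increased.

def penalized(v, rho):
    x, z = v
    obj = (x - 1.5) ** 2 + 2.0 * z * x            # toy objective
    integrality = abs(z * (1.0 - z))              # zero iff z is 0 or 1
    box = max(0.0, -z) + max(0.0, z - 1.0)        # keep z inside [0, 1]
    return obj + rho * (integrality + box)

v = np.array([0.0, 0.5])
for rho in (10.0, 100.0, 1000.0):                 # increase the penalty parameter
    v = minimize(penalized, v, args=(rho,), method="Nelder-Mead").x

print("x = %.3f, z = %.3f" % (v[0], v[1]))        # expect x near 1.5 and z near 0
```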
4.
Convex optimization methods are used for many machine learning models, such as the support vector machine. However, the requirement of a convex formulation can place limitations on machine learning models. In recent years, a number of machine learning methods not requiring convexity have emerged. In this paper, we study non-convex optimization problems on the Stiefel manifold, in which the feasible set consists of rectangular matrices with orthonormal columns. We present examples of non-convex optimization problems in machine learning and apply three nonlinear optimization methods for finding a local optimal solution: the geometric gradient descent method, the augmented Lagrangian method of multipliers, and the alternating direction method of multipliers. Although the geometric gradient method is often used to solve non-convex optimization problems on the Stiefel manifold, we show that the alternating direction method of multipliers generally produces higher-quality numerical solutions within a reasonable computation time.
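A short sketch of one of the three approaches, geometric (Riemannian) gradient descent on the Stiefel manifold, using tangent-space projection and a QR retraction on an assumed test problem (dominant invariant subspace of a symmetric matrix); it is not the paper's implementation.

```python
import numpy as np

# Sketch of geometric (Riemannian) gradient descent on the Stiefel manifold
# St(n, p) = {X : X^T X = I_p}: project the Euclidean gradient onto the tangent
# space and retract with a QR factorization.  The assumed test problem
# max trace(X^T A X) recovers a dominant invariant subspace of a symmetric A.

rng = np.random.default_rng(0)
n, p, step = 8, 2, 0.05
A = rng.standard_normal((n, n))
A = A + A.T                                     # symmetric test matrix

def retract(X):
    Q, R = np.linalg.qr(X)                      # Q has orthonormal columns
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)   # fix column signs

X = retract(rng.standard_normal((n, p)))
for _ in range(500):
    G = -2.0 * A @ X                            # Euclidean gradient of -trace(X^T A X)
    W = G - X @ ((X.T @ G + G.T @ X) / 2.0)     # projection onto the tangent space
    X = retract(X - step * W)                   # descent step followed by retraction

print("recovered value :", np.trace(X.T @ A @ X))
print("best possible   :", np.sort(np.linalg.eigvalsh(A))[-p:].sum())
```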
5.
6.
A. V. Penenko 《Numerical Analysis and Applications》2012,5(4):326-341
A method for constructing numerical schemes for an inverse coefficient heat conduction problem with boundary measurement data and piecewise-constant coefficients is considered. Some numerical schemes for a gradient optimization algorithm to solve the inverse problem are presented. The method is based on locally-adjoint problems in combination with approximation methods in Hilbert spaces.
7.
A method is proposed for finding the global minimum of a multivariate polynomial via sum of squares (SOS) relaxation over its gradient variety. That variety consists of all points where the gradient is zero, and it need not be finite. A polynomial which is nonnegative on its gradient variety is shown to be SOS modulo its gradient ideal, provided the gradient ideal is radical or the polynomial is strictly positive on the real gradient variety. This opens up the possibility of solving previously intractable polynomial optimization problems. The related problem of constrained minimization is also considered, and numerical examples are discussed. Experiments show that our method using the gradient variety outperforms prior SOS methods.
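A small worked example of the gradient-variety viewpoint (not the SOS relaxation itself), assuming sympy is available: when the gradient variety is finite and the minimum is attained, the global minimum is the smallest value of the polynomial at a real critical point.

```python
import sympy as sp

# Illustration of the gradient-variety viewpoint (not the SOS relaxation itself):
# for this small polynomial the gradient variety is finite and the minimum is
# attained, so the global minimum is the smallest value at a real critical point.

x, y = sp.symbols("x y", real=True)
f = x**4 + y**4 - 4*x*y + 1

critical = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
real_values = [f.subs(s) for s in critical
               if all(v.is_real for v in s.values())]
print("minimum over the gradient variety:", min(real_values))   # prints -1
```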
8.
9.
The conjugate gradient method is an effective method for solving unconstrained optimization problems. In this paper, a parameter is introduced into β_k on the basis of β_k^DY, giving a new class of conjugate gradient methods, and these methods are shown to possess the sufficient descent property and global convergence under the strong Wolfe line search.
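For reference, a minimal sketch of a conjugate gradient iteration using the standard Dai-Yuan parameter beta_DY = ||g_{k+1}||^2 / (d_k^T (g_{k+1} - g_k)); the parameterized modification proposed in the paper is not reproduced. An exact line search on a convex quadratic stands in for the strong Wolfe search.

```python
import numpy as np

# Sketch of a conjugate gradient iteration with the standard Dai-Yuan parameter
# beta_DY = ||g_{k+1}||^2 / (d_k^T (g_{k+1} - g_k)); the parameterized variant
# proposed in the paper is not reproduced.  On a strictly convex quadratic an
# exact line search stands in for the strong Wolfe search.

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
grad = lambda x: A @ x - b

x = np.zeros(3)
g = grad(x)
d = -g
for _ in range(50):
    alpha = -(g @ d) / (d @ A @ d)          # exact line search on the quadratic
    x = x + alpha * d
    g_new = grad(x)
    if np.linalg.norm(g_new) < 1e-12:       # stop once the gradient vanishes
        break
    beta_dy = (g_new @ g_new) / (d @ (g_new - g))
    d = -g_new + beta_dy * d
    g = g_new

print("residual norm:", np.linalg.norm(A @ x - b))
```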
10.
11.
12.
A class of globally convergent conjugate gradient methods
Conjugate gradient methods are very important methods for solving nonlinear optimization problems, especially large-scale problems. However, unlike quasi-Newton methods, conjugate gradient methods have usually been analyzed individually. In this paper, we propose a class of conjugate gradient methods which can be regarded as a kind of convex combination of the Fletcher-Reeves method and the method proposed by Dai et al. To analyze this class of methods, we introduce some unified tools that concern a general method whose scalar β_k has the form φ_k/φ_{k-1}. Consequently, the class of conjugate gradient methods can be analyzed in a uniform way.
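A quick numerical check (an illustration, not taken from the paper) that a classical choice fits the unified form β_k = φ_k/φ_{k-1}: Fletcher-Reeves corresponds to φ_k = ||g_k||^2 by definition, and the Dai-Yuan parameter satisfies the same form with φ_k = -g_k^T d_k.

```python
import numpy as np

# Numerical check (illustration, not taken from the paper) of the unified form
# beta_k = phi_k / phi_{k-1}: Fletcher-Reeves uses phi_k = ||g_k||^2 by
# definition, and the Dai-Yuan parameter satisfies the same form with
# phi_k = -g_k^T d_k, for any gradients and previous direction.

rng = np.random.default_rng(1)
g_prev, d_prev, g = rng.standard_normal((3, 5))    # arbitrary test vectors

beta_dy = (g @ g) / (d_prev @ (g - g_prev))        # Dai-Yuan formula
d = -g + beta_dy * d_prev                          # resulting search direction
phi, phi_prev = -(g @ d), -(g_prev @ d_prev)
print("beta_DY          :", beta_dy)
print("phi_k / phi_{k-1}:", phi / phi_prev)        # matches beta_DY
```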
13.
New formulations of the optimal control problem for metal solidification in a furnace are proposed and studied. The underlying mathematical model of the process is based on a three-dimensional two-phase initial-boundary value problem of the Stefan type. The formulated problems are solved numerically with the help of gradient optimization methods. The gradient of the cost function is computed by applying the fast automatic differentiation technique, which yields the exact value of the cost function gradient for a chosen discrete version of the optimal control problem. The research results are described and analyzed. Some of the results are illustrated.
14.
Shu-Tian Liu 《Linear algebra and its applications》2010,432(7):1851-1863
Recently, a continuous method was proposed by Golub and Liao as an alternative way to solve the minimum and interior eigenvalue problems. According to their numerical results, the method seems promising. This article is an extension along this line. In this article, we first convert an eigenvalue problem to an equivalent constrained optimization problem. Second, using the Karush-Kuhn-Tucker conditions of this equivalent optimization problem, we obtain a variant of the Rayleigh quotient gradient flow, which is formulated as a system of differential-algebraic equations. Third, based on the Rayleigh quotient gradient flow, we give a practical numerical method for the minimum and interior eigenvalue problems. Finally, we give some numerical experiments of our method, the Golub and Liao method, and EIGS (a Matlab implementation for computing eigenvalues using the restarted Arnoldi method) for some typical eigenvalue problems. Our numerical experiments indicate that our method seems promising for most test problems.
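A rough sketch of the Rayleigh quotient gradient flow idea, integrated with explicit Euler steps and renormalization rather than the paper's differential-algebraic formulation; the matrix, step size, and iteration count are assumptions.

```python
import numpy as np

# Rough sketch of the Rayleigh quotient gradient flow x' = -(A x - r(x) x),
# r(x) = (x^T A x)/(x^T x), integrated with explicit Euler steps and
# renormalization (the paper uses a differential-algebraic formulation instead).
# The flow drives x toward an eigenvector of the minimum eigenvalue.

rng = np.random.default_rng(0)
n, dt = 6, 0.05
A = rng.standard_normal((n, n))
A = A + A.T

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
for _ in range(2000):
    r = x @ A @ x                        # Rayleigh quotient (x has unit norm)
    x = x - dt * (A @ x - r * x)         # explicit Euler step of the flow
    x /= np.linalg.norm(x)               # stay on the unit sphere

print("flow estimate :", x @ A @ x)
print("exact minimum :", np.linalg.eigvalsh(A).min())
```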
15.
A new approximation method is presented for directly minimizing a composite nonsmooth function that is locally Lipschitzian. This method approximates only the generalized gradient vector, enabling us to directly use well-developed smooth optimization algorithms for solving composite nonsmooth optimization problems. This generalized gradient vector is approximated on each design variable coordinate by using only the active components of the subgradient vectors; its usability is then validated numerically via the Pareto optimum concept. In order to show the performance of the proposed method, we solve four academic composite nonsmooth optimization problems and two dynamic response optimization problems with multiple criteria. Specifically, the optimization results of the two dynamic response optimization problems are compared with those obtained by three typical multicriteria optimization strategies: the weighting method, the distance method, and the min-max method, which introduces an artificial design variable in order to replace the max-value cost function with additional inequality constraints. The comparisons show that the proposed approximation method gives more accurate and efficient results than the other methods.
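For the min-max strategy mentioned at the end of the abstract, a small sketch of the standard reformulation: min_x max_i f_i(x) becomes min_{x,t} t subject to f_i(x) <= t, which replaces the nonsmooth max-value cost by an artificial variable and inequality constraints. The two quadratic criteria and the SciPy SLSQP solver are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the min-max reformulation:
#   min_x max_i f_i(x)   becomes   min_{x,t} t  subject to  f_i(x) <= t,
# so the nonsmooth max-value cost is replaced by an artificial variable t and
# inequality constraints.  The criteria f1, f2 are illustrative.

def f1(x): return (x[0] - 1.0) ** 2 + x[1] ** 2
def f2(x): return (x[0] + 1.0) ** 2 + x[1] ** 2

objective = lambda v: v[2]                         # v = (x1, x2, t); minimize t
constraints = [{"type": "ineq", "fun": lambda v: v[2] - f1(v[:2])},
               {"type": "ineq", "fun": lambda v: v[2] - f2(v[:2])}]

res = minimize(objective, x0=np.array([0.5, 0.5, 5.0]),
               constraints=constraints, method="SLSQP")
x_opt = res.x[:2]
print("x:", x_opt, " worst-case cost:", max(f1(x_opt), f2(x_opt)))   # about (0,0), 1.0
```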
16.
《Optimization》2012,61(4):993-1009
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and they have been studied extensively in recent years. In this paper, we propose a new two-parameter family of conjugate gradient methods for unconstrained optimization. The two-parameter family not only includes three already existing practical nonlinear conjugate gradient methods, but also contains other families of conjugate gradient methods as subfamilies. The two-parameter family of methods with the Wolfe line search is shown to ensure the descent property of each search direction. Some general convergence results are also established for the two-parameter family of methods. The numerical results show that the method is efficient for the given test problems. In addition, the methods related to this family are discussed in a uniform way.
17.
Sindhu Narayanan & P. Kaelo 《高等学校计算数学学报(英文版)》2021,14(2):527-539
Conjugate gradient methods are iterative methods for solving large-scale unconstrained optimization problems. Much recent research has therefore focused on developing more effective conjugate gradient methods. In this paper, we propose another hybrid conjugate gradient method, formed as a linear combination of the Dai-Yuan (DY) method and the Hestenes-Stiefel (HS) method. The sufficient descent condition and the global convergence of this method are established using the generalized Wolfe line search conditions. Compared to other conjugate gradient methods, the proposed method gives good numerical results and is effective.
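A sketch of how such a hybrid β might be formed as a convex combination of the Hestenes-Stiefel and Dai-Yuan parameters; the fixed mixing weight below is purely illustrative and is not the combination derived in the paper.

```python
import numpy as np

# Sketch of one way to hybridize the Hestenes-Stiefel and Dai-Yuan parameters
# with a convex combination; the fixed weight theta is purely illustrative and
# is not the combination derived in the paper.

def hybrid_beta(g_new, g, d, theta=0.5):
    y = g_new - g
    denom = d @ y
    beta_hs = (g_new @ y) / denom            # Hestenes-Stiefel
    beta_dy = (g_new @ g_new) / denom        # Dai-Yuan
    return (1.0 - theta) * beta_hs + theta * beta_dy

rng = np.random.default_rng(2)
g, d, g_new = rng.standard_normal((3, 4))    # arbitrary test vectors
print("hybrid beta:", hybrid_beta(g_new, g, d))
```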
18.
Optimization problems involving total variation frequently appear in image analysis models because total variation preserves the sharp edges of images. Direct gradient descent methods usually yield very slow convergence when used for such optimization problems. Recently, many duality-based gradient projection methods have been proposed to accelerate the speed of convergence. In this dual formulation, the cost function of the optimization problem is singular, and the constraint set is not a polyhedral set. In this paper, we establish two inequalities related to projected gradients and show that, under some non-degeneracy conditions, the rate of convergence is linear.
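A minimal sketch of a duality-based gradient projection method for one-dimensional total-variation denoising (an assumed model problem, not the paper's setting): the dual objective is a smooth quadratic and each iteration takes a gradient step followed by a projection onto the constraint set. In the two-dimensional image setting described in the abstract the constraint set is a product of Euclidean balls and is not polyhedral; in this 1D sketch the projection reduces to a clip. Signal, noise level, regularization weight, step size, and iteration count are all assumptions.

```python
import numpy as np

# Sketch of a duality-based gradient projection method for 1D total-variation
# denoising  min_u 0.5*||u - f||^2 + lam*||D u||_1  (an assumed model problem).
# The dual objective 0.5*||lam*D^T p - f||^2 is smooth; each iteration takes a
# gradient step in p and projects onto the constraint set |p_i| <= 1.

rng = np.random.default_rng(0)
n, lam, tau = 200, 0.5, 0.8
truth = np.where(np.arange(n) < n // 2, 0.0, 1.0)          # piecewise-constant signal
f = truth + 0.1 * rng.standard_normal(n)                   # noisy observation

D = lambda u: np.diff(u)                                   # forward differences
Dt = lambda p: np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))   # adjoint of D

p = np.zeros(n - 1)
for _ in range(500):
    g = lam * D(lam * Dt(p) - f)                           # gradient of the dual objective
    p = np.clip(p - tau * g, -1.0, 1.0)                    # projection onto |p_i| <= 1

u = f - lam * Dt(p)                                        # recover the primal solution
print("noisy error   :", np.linalg.norm(f - truth))
print("denoised error:", np.linalg.norm(u - truth))
```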
19.
A note on the entropy barrier dual method for linear programming problems
Linear programming is the most commonly used model for objective optimization problems, and the efficient solution of large-scale linear programming problems has long attracted attention. The entropy barrier dual method is, after the interior point method, another new algorithm for solving linear programming problems. This paper discusses a generalized form of the entropy barrier dual method and the convergence of its gradient-type algorithms.
20.
El-Sayed M. E. Mostafa 《Journal of Applied Mathematics and Computing》2012,40(1-2):529-549
In this paper, three nonlinear conjugate gradient methods are analyzed and studied for solving the matrix optimization problem arising in static output feedback control design for continuous-time systems. The problem structure is exploited, and the methods are tested numerically on a wide range of benchmark test problems.