Similar Articles
20 similar articles found (search time: 31 ms)
1.
In this paper, an algorithm based on a barrier objective penalty function for inequality constrained optimization is studied, and a new concept, the stability of the barrier objective penalty function, is introduced. It is proved that an approximate optimal solution to an inequality constrained optimization problem can be obtained by minimizing a barrier objective penalty function when that function is stable. Under some conditions, the stability of the barrier objective penalty function is proved for convex programming; in particular, the logarithmic barrier function of convex programming is stable. Based on the barrier objective penalty function, an algorithm is developed for finding an approximate optimal solution to an inequality constrained optimization problem, and its convergence is proved under some conditions. Finally, numerical experiments show that the barrier objective penalty function algorithm converges better than the classical barrier function algorithm.
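The interior-barrier mechanism described above can be illustrated on a one-dimensional toy problem. The sketch below is an illustrative assumption, not the paper's algorithm: it minimizes f(x) = x² subject to x ≥ 1 with the logarithmic barrier B(x, μ) = x² − μ ln(x − 1), for which the barrier minimizer happens to have a closed form, and drives μ toward zero.

```python
import math

def barrier_minimizer(mu):
    # Minimize B(x, mu) = x**2 - mu * log(x - 1) over x > 1.
    # Stationarity: 2x - mu / (x - 1) = 0, i.e. 2x**2 - 2x - mu = 0,
    # whose root with x > 1 is:
    return (1.0 + math.sqrt(1.0 + 2.0 * mu)) / 2.0

# As mu -> 0, the barrier minimizers approach the constrained optimum x* = 1.
path = [barrier_minimizer(mu) for mu in (1.0, 0.1, 0.01, 1e-6)]
```

Each barrier minimizer is strictly feasible (x > 1), and the path approaches the constrained optimum from the interior as μ shrinks.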

2.
In this article, a smoothing objective penalty function for inequality constrained optimization problems is presented. It is proved that this type of smoothing objective penalty function has properties that are useful for solving inequality constrained optimization problems. Moreover, based on the penalty function, an algorithm for solving such problems is presented, and its convergence under some conditions is proved. Two numerical experiments show that the proposed algorithm obtains a satisfactory approximate optimal solution.
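As a contrast to the interior barrier approach, a classical exterior quadratic penalty (again an illustrative textbook sketch, not the smoothing objective penalty function of the article) approaches the same toy optimum from the infeasible side as the penalty weight grows:

```python
def penalty_minimizer(rho):
    # Exterior quadratic penalty P(x, rho) = x**2 + rho * max(0, 1 - x)**2
    # for min x**2 s.t. x >= 1.  For x < 1, stationarity gives
    # 2x - 2*rho*(1 - x) = 0, i.e. x = rho / (1 + rho).
    return rho / (1.0 + rho)

# As rho -> infinity, the penalty minimizers approach x* = 1 from below.
approx = [penalty_minimizer(r) for r in (1.0, 10.0, 1e4)]
```

Note the characteristic difference: penalty iterates are infeasible (x < 1) and approach the optimum from outside, whereas barrier iterates stay strictly feasible.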

3.
In this work we study an interior penalty method for a finite-dimensional large-scale linear complementarity problem (LCP), which often arises from the discretization of stochastic optimal control problems in financial engineering. In this approach, we approximate the LCP by a nonlinear algebraic equation containing a penalty term linked to the logarithmic barrier function for constrained optimization problems. We show that the penalty equation has a solution and establish a convergence theory for the approximate solutions. A smooth Newton method is proposed for solving the penalty equation, and properties of the Jacobian matrix in the Newton method are investigated. Numerical results on three non-trivial test examples demonstrate the rates of convergence, efficiency and usefulness of the method for solving practical problems.

4.
Efficient and reliable integrators are indispensable for the design of sequential solvers for optimal control problems involving continuous dynamics, especially for real-time applications. In this paper, optimal control problems for systems represented by index-1 differential-algebraic equations are investigated. On the basis of a time-scaling transformation, the control is parameterized as a piecewise constant function with variable heights and switching time instants. Compared with control parameterization on fixed time grids, the flexibility of adjusting the switching time instants increases the chance of finding the optimal solution. Furthermore, error constraints are introduced into the optimization procedure so that the optimal control obtained comes with a guarantee of integration accuracy. For the derived approximate nonlinear programming problem, a function evaluation and forward sensitivity propagation algorithm is proposed with an embedded implicit Runge–Kutta integrator, which executes one Newton iteration in the limit by employing a predictor-corrector strategy. This algorithm is combined with the nonlinear programming solver Ipopt to construct the optimal control solver. Numerical experiments on the optimal control of a Delta robot demonstrate that the computational speed of this solver increases by a factor of 0.5–2 compared with the same solver without the predictor-corrector strategy, and by a factor of 20–40 compared with a solver embedding IDAS, the implicit differential-algebraic solver with sensitivity capabilities developed by Lawrence Livermore National Laboratory. Meanwhile, the accuracy loss relative to IDAS is small and admissible.

5.
We propose and analyze an inexact version of the modified subgradient (MSG) algorithm, which we call the IMSG algorithm, for nonsmooth and nonconvex optimization over a compact set. We prove that, under approximate (i.e. inexact) minimization of the sharp augmented Lagrangian, the main convergence properties of the MSG algorithm are preserved for the IMSG algorithm. Inexact minimization may allow problems to be solved with less computational effort. We illustrate this on test problems, including an optimal bang-bang control problem, under several different inexactness schemes.

6.
The penalty function is an important tool for solving many constrained optimization problems in areas such as industrial design and management. In this paper, we study the exactness of an objective penalty function for inequality constrained optimization, together with algorithms based on it. In terms of exactness, this objective penalty function is at least as good as traditional exact penalty functions; in particular, in the case of a global solution, its exactness offers a significant advantage. A necessary and sufficient stability condition for the objective penalty function to be exact at a global solution is proved. Based on the objective penalty function, an algorithm is developed for finding a global solution to an inequality constrained optimization problem, and its global convergence is proved under some conditions. Furthermore, a necessary and sufficient calmness condition for the exactness of the objective penalty function at a local solution is proved. An algorithm for finding a local solution is also presented, with its convergence proved under some conditions. Finally, numerical experiments show that the proposed algorithm obtains a satisfactory approximate optimal solution.

7.
Value-Estimation Function Method for Constrained Global Optimization (cited 5 times: 0 self-citations, 5 by others)
A novel value-estimation function method for global optimization problems with inequality constraints is proposed in this paper. The value-estimation function formulation is an auxiliary unconstrained optimization problem with a univariate parameter that represents an estimated optimal value of the objective function of the original problem. A solution is optimal for the original problem if and only if it is also optimal for the auxiliary problem with the parameter set at the optimal objective value of the original problem, which turns out to be the unique root of a basic value-estimation function. A logarithmic-exponential value-estimation function formulation is further developed for computational tractability and efficiency. The optimal objective value of the original problem, as well as the optimal solution, is sought iteratively by applying either a generalized Newton method or a bisection method to the logarithmic-exponential formulation. The convergence properties of the solution algorithms guarantee the identification of an approximate optimal solution of the original problem, to any predetermined degree of accuracy, within a finite number of iterations.
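The bisection option mentioned above relies only on the value-estimation function having a unique root and a sign change over the search bracket. A generic bisection sketch, with a hypothetical stand-in function (the paper's logarithmic-exponential formulation is not reproduced here):

```python
def bisect_root(v, lo, hi, tol=1e-10):
    # Assumes v is continuous on [lo, hi] with v(lo), v(hi) of opposite signs.
    vlo = v(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        vm = v(mid)
        if vm == 0.0:
            return mid
        if (vm > 0.0) == (vlo > 0.0):
            lo, vlo = mid, vm      # root lies in the upper half
        else:
            hi = mid               # root lies in the lower half
    return 0.5 * (lo + hi)

# Hypothetical stand-in with a unique root p* = 2 on [0, 3].
root = bisect_root(lambda p: p**3 - 8.0, 0.0, 3.0)
```

Each iteration halves the bracket, so the error after n iterations is at most (hi − lo) / 2ⁿ, matching the finite-iteration accuracy guarantee described in the abstract.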

8.
Existing algorithms for solving unconstrained optimization problems are generally only optimal in the short term. It is desirable to have algorithms which are long-term optimal. To achieve this, the problem of computing the minimum point of an unconstrained function is formulated as a sequence of optimal control problems. Some qualitative results are obtained from the optimal control analysis. These qualitative results are then used to construct a theoretical iterative method and a new continuous-time method for computing the minimum point of a nonlinear unconstrained function. New iterative algorithms which approximate the theoretical iterative method and the proposed continuous-time method are then established. For convergence analysis, it is useful to note that the numerical solution of an unconstrained optimization problem is none other than an inverse Lyapunov function problem. Convergence conditions for the proposed continuous-time method and iterative algorithms are established by using the Lyapunov function theorem.
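The continuous-time viewpoint above can be made concrete with the simplest such method, the gradient flow ẋ(t) = −∇f(x(t)), whose forward-Euler discretization recovers gradient descent; V(x) = f(x) − f(x*) then serves as a Lyapunov function since V̇ = −‖∇f(x)‖² ≤ 0. The objective and step size below are illustrative assumptions:

```python
def gradient_flow(grad, x0, dt=0.01, steps=4000):
    # Forward-Euler discretization of the gradient flow x'(t) = -grad_f(x(t)).
    # Each step decreases f, mirroring the decrease of the Lyapunov function
    # V(x) = f(x) - f(x*) along the continuous trajectory.
    x = x0
    for _ in range(steps):
        x -= dt * grad(x)
    return x

# Toy objective f(x) = (x - 3)**2 with unique minimizer x* = 3.
x_min = gradient_flow(lambda x: 2.0 * (x - 3.0), 0.0)
```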

9.
Global convergence is proved for a partitioned BFGS algorithm applied to a partially separable problem with a convex decomposition. This case covers a known practical optimization method for large-dimensional unconstrained problems. Inexact solution of the linear system defining the search direction and variants of the steplength rule are also shown to be acceptable without affecting the global convergence properties.

10.
A pseudospectral method for generating optimal trajectories of linear and nonlinear constrained dynamic systems is proposed. The method consists of representing the solution of the optimal control problem by an mth degree interpolating polynomial, using Chebyshev nodes, and then discretizing the problem using a cell-averaging technique. The optimal control problem is thereby transformed into an algebraic nonlinear programming problem. Due to its dynamic nature, the proposed method avoids many of the numerical difficulties typically encountered in solving standard optimal control problems. Furthermore, for discontinuous optimal control problems, we develop and implement a Chebyshev smoothing procedure which extracts the piecewise smooth solution from the oscillatory solution near the points of discontinuities. Numerical examples are provided, which confirm the convergence of the proposed method. Moreover, a comparison is made with optimal solutions obtained by closed-form analysis and/or other numerical methods in the literature.
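The cell-averaging pseudospectral scheme itself is not reproduced here, but the Chebyshev-node interpolation it builds on is easy to sketch. With nodes clustered near ±1, polynomial interpolation of Runge's function stays accurate where equispaced nodes would oscillate wildly; the test function and degree are illustrative choices:

```python
import math

def chebyshev_nodes(m):
    # Roots of the degree-m Chebyshev polynomial on [-1, 1].
    return [math.cos((2 * k + 1) * math.pi / (2 * m)) for k in range(m)]

def lagrange_eval(xs, ys, x):
    # Evaluate the interpolating polynomial through (xs, ys) at x.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (x - xj) / (xi - xj)
        total += yi * w
    return total

def runge(x):
    # Classic example where equispaced interpolation diverges.
    return 1.0 / (1.0 + 25.0 * x * x)

xs = chebyshev_nodes(21)
ys = [runge(x) for x in xs]
err = max(abs(lagrange_eval(xs, ys, t / 100.0) - runge(t / 100.0))
          for t in range(-100, 101))
```

With 21 Chebyshev nodes the maximum interpolation error over [−1, 1] is small, whereas 21 equispaced nodes would give errors orders of magnitude larger near the endpoints (the Runge phenomenon).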

11.
We discuss several optimization procedures for solving finite element approximations of linear-quadratic Dirichlet optimal control problems governed by an elliptic partial differential equation posed on a 2D or 3D Lipschitz domain. The control is discretized explicitly using continuous piecewise linear approximations. Unconstrained, control-constrained, state-constrained and control-and-state constrained problems are analysed. A preconditioned conjugate gradient method for a reduced problem in the control variable is proposed to solve the unconstrained problem, whereas semismooth Newton methods are discussed for the solution of constrained problems. State constraints are treated via a Moreau–Yosida penalization. Convergence is studied for both the continuous problems and the finite-dimensional approximations. In the finite-dimensional case, we are able to show convergence of the optimization procedures even in the absence of a Tikhonov regularization parameter. Computational aspects are also treated and several numerical examples are included to illustrate the theoretical results.

12.
In this article, a novel objective penalty function and its second-order smoothing are introduced for constrained optimization problems (COP). It is shown that an optimal solution to the second-order smoothed objective penalty optimization problem is an optimal solution to the original optimization problem under some mild conditions. Based on the second-order smoothed objective penalty function, an algorithm that has better convergence is introduced. Numerical examples illustrate that this algorithm is efficient in solving COP.

13.
Infinite-dimensional optimization problems occur in various applications such as optimal control problems and parameter identification problems. If these problems are solved numerically, the methods require a discretization, which can be viewed as a perturbation of the data of the optimization problem. In this case the expected convergence behavior of the numerical method does not depend only on the discretized problem but also on the original one. The algorithms analyzed include the gradient projection method, the conditional gradient method, Newton's method and quasi-Newton methods for unconstrained and constrained problems with simple constraints.

14.
A smoothing algorithm for a lower-order exact penalty function is proposed for inequality constrained optimization problems. First, error estimates among the objective function values of the smoothed penalty problem, the nonsmooth penalty problem and the original problem are given; under weak assumptions, it is then proved that a global optimal solution of the smoothed penalty problem is an approximate global optimal solution of the original problem. Finally, an algorithm based on the smoothed penalty function is given for solving the original problem, its convergence is proved, and numerical examples illustrate the feasibility of the algorithm.
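The lower-order exact penalty of the abstract is not reproduced here, but the smoothing idea can be sketched on the simplest penalty kernel max(0, t), using the standard approximation p_ε(t) = (t + √(t² + ε²))/2, whose uniform error is exactly ε/2 (attained at t = 0):

```python
import math

def plus(t):
    # Nonsmooth penalty kernel max(0, t), used to penalize constraint violation.
    return max(0.0, t)

def plus_smooth(t, eps):
    # Smooth (C-infinity) approximation of max(0, t).
    # For t >= 0 the error is (sqrt(t^2 + eps^2) - t) / 2, maximal at t = 0,
    # and symmetric reasoning bounds the t < 0 side, so the uniform gap is eps/2.
    return 0.5 * (t + math.sqrt(t * t + eps * eps))

eps = 1e-3
grid = [i / 1000.0 for i in range(-2000, 2001)]
gap = max(abs(plus_smooth(t, eps) - plus(t)) for t in grid)
```

This uniform bound is what drives error estimates of the kind described in the abstract: the smoothed penalty problem's objective differs from the nonsmooth one by at most the penalty weight times ε/2.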

15.
We consider an inverse quadratic programming (QP) problem in which the parameters in both the objective function and the constraint set of a given QP problem must be adjusted as little as possible so that a known feasible solution becomes the optimal one. We formulate this problem as a linear complementarity constrained minimization problem with a positive semidefinite cone constraint. With the help of duality theory, we reformulate it as a linear complementarity constrained semismoothly differentiable (SC1) optimization problem with fewer variables than the original one. We propose a perturbation approach to solve the reformulated problem and demonstrate its global convergence. An inexact Newton method is constructed to solve the perturbed problem, and its global convergence and local quadratic convergence rate are shown. As the objective function of the problem is an SC1 function involving the projection operator onto the cone of positive semidefinite symmetric matrices, the analysis requires an implicit function theorem for semismooth functions as well as properties of the projection operator in the symmetric-matrix space. Since an approximate proximal point is required in the inexact Newton method, we also give a Newton method to obtain it. Finally we report numerical results showing that the proposed approach is quite effective.
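The projection onto the cone of positive semidefinite symmetric matrices used above has an explicit spectral formula: eigendecompose and clip the negative eigenvalues at zero. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

def project_psd(A):
    # Frobenius-norm projection of a symmetric matrix onto the PSD cone:
    # A = V diag(w) V^T  ->  P = V diag(max(w, 0)) V^T.
    w, V = np.linalg.eigh(A)
    return (V * np.clip(w, 0.0, None)) @ V.T

A = np.array([[1.0, 2.0], [2.0, 1.0]])   # eigenvalues 3 and -1
P = project_psd(A)                        # equals [[1.5, 1.5], [1.5, 1.5]]
```

Here the negative eigenvalue −1 is clipped, leaving only the rank-one component 3·vvᵀ with v = (1, 1)/√2.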

16.
A new method for nonlinearly constrained optimization problems is proposed. The method consists of two steps. In the first step, we obtain a search direction from linearly constrained subproblems based on conic functions. In the second step, we use a differentiable penalty function and regard it as the merit function of the problem. From this, a new approximate solution is obtained. The global convergence of the method is also proved.

17.
A modified multiplier method for optimization problems with equality constraints is suggested, and its application to constrained optimal control problems is described. For optimal control problems with free terminal time, a gradient descent technique for updating the control functions as well as the terminal time is developed. The modified multiplier method, together with a simplified conjugate gradient method, is used to compute the solution of a time-optimal control problem for a V/STOL aircraft.
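The abstract's modified multiplier method is not reproduced here; the sketch below is the textbook first-order method of multipliers for a single equality constraint, on an illustrative problem (min x² + y² s.t. x + y = 1, whose solution is (0.5, 0.5) with multiplier λ* = −1), with illustrative penalty weight and step sizes:

```python
def multiplier_method(outer=30, inner=200, rho=10.0, lr=0.05):
    # Augmented Lagrangian L(x, y, lam) = x^2 + y^2 + lam*h + (rho/2)*h^2
    # for the equality constraint h(x, y) = x + y - 1 = 0.
    x = y = lam = 0.0
    for _ in range(outer):
        for _ in range(inner):            # approximate inner minimization in (x, y)
            h = x + y - 1.0
            gx = 2.0 * x + lam + rho * h
            gy = 2.0 * y + lam + rho * h
            x, y = x - lr * gx, y - lr * gy
        lam += rho * (x + y - 1.0)        # first-order multiplier update
    return x, y, lam

x, y, lam = multiplier_method()
```

Each outer iteration shrinks the multiplier error by roughly 1/(1 + ρ), so moderate ρ already gives fast linear convergence without the ill-conditioning of a pure penalty method.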

18.
In this paper, we consider an optimal control problem for a bioprocess system. Generally speaking, it is very difficult to solve this problem analytically. To obtain a numerical solution, the problem is transformed into a parameter optimization problem with variable bounds, which can be solved efficiently by conventional optimization algorithms, e.g. the improved Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm. However, although the improved BFGS algorithm is very efficient for local search, the solution obtained is usually a local extremum for non-convex optimal control problems. In order to escape from such local extrema, we develop a novel stochastic search method. A large number of numerical experiments show that this stochastic search method is excellent at exploration but poor at exploitation. To improve the exploitation, we propose a hybrid numerical optimization algorithm that combines the stochastic search method with the improved BFGS algorithm. Convergence results indicate that any global optimal solution of the approximate problem is also a global optimal solution of the original problem. Finally, two bioprocess optimal control problems illustrate that the proposed hybrid algorithm is less time-consuming and obtains a better cost function value than existing approaches.
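The exploration/exploitation split described above can be sketched generically: a stochastic global search supplies a starting point, and a local descent refines it. The details below (random search over a box, central-difference gradients, a 1-D Rastrigin test function) are illustrative assumptions, not the paper's stochastic search or improved BFGS method:

```python
import math, random

def rastrigin(x):
    # Many local minima near the integers; global minimum value 0 at x = 0.
    return x * x + 10.0 - 10.0 * math.cos(2.0 * math.pi * x)

def hybrid_minimize(f, trials=2000, steps=500, lr=2e-3, seed=0):
    rng = random.Random(seed)
    # Exploration: pure random search over [-5, 5] keeps the best sample,
    # which with high probability lands in the basin of the global minimum.
    x = min((rng.uniform(-5.0, 5.0) for _ in range(trials)), key=f)
    # Exploitation: local descent with a central-difference gradient.
    h = 1e-6
    for _ in range(steps):
        g = (f(x + h) - f(x - h)) / (2.0 * h)
        x -= lr * g
    return x

x_best = hybrid_minimize(rastrigin)
```

Local descent alone from a random start would usually stall at a local minimum with value ≥ 1; the stochastic phase makes the combined method reach the global basin.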

19.
In this article, we study convergence of the extragradient method for constrained convex minimization problems in a Hilbert space. Our goal is to obtain an ε-approximate solution of the problem in the presence of computational errors, where ε is a given positive number. Most results known in the literature establish convergence of optimization algorithms when computational errors are summable. In this article, convergence of the extragradient method for solving convex minimization problems is established for nonsummable computational errors. We show that the extragradient method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant.
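The extragradient iteration itself is short: a predictor step using the gradient at the current point, then a corrector step that reuses the projection with the gradient evaluated at the predictor. A minimal sketch on an illustrative one-dimensional problem (min (x − 2)² over [0, 1], solution x* = 1):

```python
def extragradient(grad, proj, x0, tau=0.1, iters=200):
    x = x0
    for _ in range(iters):
        y = proj(x - tau * grad(x))      # predictor step
        x = proj(x - tau * grad(y))      # corrector step uses gradient at y
    return x

grad = lambda x: 2.0 * (x - 2.0)          # f(x) = (x - 2)**2
proj = lambda x: min(1.0, max(0.0, x))    # projection onto [0, 1]
x_star = extragradient(grad, proj, 0.0)
```

The corrector's use of the gradient at the predicted point is what distinguishes the method from plain projected gradient descent and underlies its robustness for monotone problems.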

20.
Many constrained sets in problems such as signal processing and optimal control can be represented as the fixed point set of a certain nonexpansive mapping, and a number of iterative algorithms have been presented for solving a convex optimization problem over a fixed point set. This paper presents a novel gradient method with a three-term conjugate gradient direction, of the kind used to accelerate conjugate gradient methods for unconstrained optimization problems. The algorithm is guaranteed to converge strongly to the solution of the problem under standard assumptions. Numerical comparisons with existing gradient methods demonstrate the effectiveness and fast convergence of this algorithm.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · 京ICP备09084417号