Similar documents
20 similar documents found (search time: 15 ms)
1.
The convergence of two-phase methods for approximating the Edgeworth-Pareto hull (EPH) in nonlinear multicriteria optimization problems is analyzed. The methods are based on the iterative supplementation of the finite set of feasible criteria vectors (approximation basis) whose EPH approximates the desired set. A feature of two-phase methods is that the criteria images of randomly generated points of the decision space approach the Pareto frontier via local optimization of adaptively chosen convolutions of criteria. The convergence of two-phase methods is proved for both an abstract form of the algorithm and for a two-phase method based on the Germeier convolution.
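As an illustration, the two-phase scheme can be sketched on a toy bi-criteria problem: random decision points (phase one) are improved by local descent on a Germeier (weighted Chebyshev) convolution (phase two). The problem, the derivative-free descent routine, and the weight choice below are illustrative assumptions, not the paper's algorithm.

```python
import random

# Toy bi-criteria problem: minimize f1(x) = x^2 and f2(x) = (x - 2)^2
# over [-1, 3]; the Pareto-optimal decisions are x in [0, 2].
def criteria(x):
    return (x * x, (x - 2.0) ** 2)

def germeier(x, lam):
    # Germeier (weighted Chebyshev) convolution: max_i lam_i * f_i(x)
    f1, f2 = criteria(x)
    return max(lam[0] * f1, lam[1] * f2)

def local_descent(x, lam, step=0.1, shrink=0.98, iters=300):
    # Phase 2: crude derivative-free local descent on the convolution
    for _ in range(iters):
        for cand in (x - step, x + step):
            cand = min(3.0, max(-1.0, cand))
            if germeier(cand, lam) < germeier(x, lam):
                x = cand
        step *= shrink
    return x

random.seed(0)
basis = []  # approximation basis: criteria images of the improved points
for _ in range(20):
    x0 = random.uniform(-1.0, 3.0)                        # phase 1: random decision
    lam = (0.1 + random.random(), 0.1 + random.random())  # weights (here: random)
    basis.append(criteria(local_descent(x0, lam)))
```

Every image in `basis` should land near the Pareto frontier, on which sqrt(f1) + sqrt(f2) = 2 for this problem.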

2.
Methods for approximating the Edgeworth-Pareto hull (EPH) of the set of feasible criteria vectors in nonlinear multicriteria optimization problems are examined. The relative efficiency of two EPH approximation methods based on classical methods of searching for local extrema of convolutions of criteria is experimentally studied for a large-scale applied problem (with several hundred variables). A hybrid EPH approximation method combining classical and genetic approximation methods is considered.

3.
The problem of approximating the Pareto frontier (nondominated frontier) of the feasible set of criteria vectors in nonlinear multicriteria optimization problems is considered. The problem is solved by approximating the Edgeworth-Pareto hull (EPH), i.e., the maximum set with the same Pareto frontier as the original feasible set of criteria vectors. An EPH approximation method is studied that is based on the statistical accuracy estimation of the current approximation and on adaptive supplement of a metric net whose EPH approximates the desired set. The convergence of the method is proved, estimates for the convergence rate are obtained, and the efficiency of the method is studied in the case of a compact feasible set and continuous criteria functions. It is shown that the convergence rate of the method with respect to the number k of iterations is no lower than $o(k^{1/\overline{dm}\,Y})$, where $\overline{dm}\,Y$ is the upper metric dimension of the feasible set of criteria vectors.

4.
The Newton method is one of the most widely used methods for solving nonlinear systems of equations when the Jacobian matrix is nonsingular. The method converges to a solution with Q-order two for initial points sufficiently close to the solution. The methods of Halley and Chebyshev are among the methods with a local and cubic rate of convergence. Combined with a backtracking and curvilinear strategy for unconstrained optimization problems, these methods have been shown to be globally convergent. The backtracking forces a strict decrease of the objective function of the unconstrained optimization problem. It is shown that no damping of the step in the backtracking routine is needed close to a strict local minimizer, so the global method behaves as a local method. The local behavior for the unconstrained optimization problem is investigated on problems with two unknowns, and it is shown that there are no significant differences in the region where the global method turns into a local method for the second- and third-order methods. Further, the final steps needed to reach a predefined tolerance are investigated. It is shown that the region where the higher-order methods terminate in one or two iterations is significantly larger than the corresponding region for Newton's method.
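The gap between second- and third-order local methods can be seen already on a scalar equation; the sketch below contrasts Newton's method with Halley's method on f(x) = x^2 - 2 (the paper's globalizing backtracking and curvilinear strategies are omitted).

```python
def newton(f, df, x, tol=1e-12, itmax=50):
    # Q-order 2 for simple roots
    n = 0
    while abs(f(x)) > tol and n < itmax:
        x -= f(x) / df(x)
        n += 1
    return x, n

def halley(f, df, d2f, x, tol=1e-12, itmax=50):
    # Q-order 3; uses second derivatives
    n = 0
    while abs(f(x)) > tol and n < itmax:
        fx, dfx = f(x), df(x)
        x -= 2.0 * fx * dfx / (2.0 * dfx * dfx - fx * d2f(x))
        n += 1
    return x, n

f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
d2f = lambda x: 2.0

xn, kn = newton(f, df, 5.0)    # converges to sqrt(2)
xh, kh = halley(f, df, d2f, 5.0)
```

From the same starting point, the cubic method reaches the tolerance in fewer iterations than Newton's quadratic method.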

5.
In this paper, a novel multivariate fractional-order (FO) gradient-based extremum seeking control (ESC) approach is developed for the optimization of multivariable dynamical systems. The proposed gradient-based ESC uses FO operators not only to speed up the convergence rate and enhance the control accuracy but also to improve the search efficiency for the extrema by tuning the fractional order. For multivariable dynamical systems, the stability analysis of the proposed multivariate FO gradient-based ESC is presented in detail to guarantee the convergence performance on multi-input optimization problems. Simulation and experimental results demonstrate the effectiveness and advantages of the proposed approach in comparison with the corresponding integer-order (IO) gradient-based ESC.
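A minimal integer-order baseline of gradient-based ESC on a static map illustrates the dither-demodulate-integrate loop that the paper accelerates with fractional-order operators; the map, the gains, and the omission of a washout filter are simplifying assumptions, and the fractional-order operator itself is not reproduced.

```python
import math

# Toy static map with a single minimizer at u* = 3.
J = lambda u: (u - 3.0) ** 2

dt, a, w, k = 0.01, 0.2, 5.0, 0.4   # step, dither amplitude/frequency, gain
u_hat = 1.0                          # initial parameter estimate
for i in range(120000):
    t = i * dt
    y = J(u_hat + a * math.sin(w * t))   # probe the map with a dither
    g = y * math.sin(w * t)              # demodulated gradient estimate
    u_hat -= k * g * dt                  # integer-order gradient update
```

On average the demodulated signal equals a*J'(u)/2, so the estimate drifts toward the minimizer u* = 3.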

6.
Two of the main approaches in multiple criteria optimization are optimization over the efficient set and the utility function approach. These are nonconvex optimization problems in which local optima can differ from global optima. Existing global optimization methods for such problems work well only for problems of moderate dimension. In this article, we propose ways to reduce the number of criteria and the dimension of a linear multiple criteria optimization problem. Using the concept of so-called representative and extreme criteria, which is motivated by the concept of redundant (or nonessential) objective functions of Gal and Leberling, we can reduce the number of criteria without altering the set of efficient solutions. Furthermore, by using linearly independent criteria, the linear multiple criteria optimization problem under consideration can be transformed into an equivalent one in the space of linearly independent criteria. This equivalence is understood in the sense that the efficient solutions of each problem can be derived from those of the other by an affine transformation. As a result, such criteria and dimension reduction techniques can increase the efficiency of existing algorithms and aid the development of new methods for global optimization problems arising from multiple objective optimization.
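The reduction to linearly independent criteria can be illustrated by a rank test on the criteria matrix; the greedy selection below is a generic construction, not the paper's representative/extreme-criteria procedure, and the example rows are hypothetical.

```python
import numpy as np

# Hypothetical linear criteria matrix C: row i defines criterion c_i^T x.
# Rows 2 and 3 are nonnegative combinations of rows 0 and 1, so only two
# criteria are linearly independent.
C = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0],
              [1.0, 1.0, 1.0],     # row 0 + row 1
              [2.0, 0.5, 3.5]])    # 2*row 0 + 0.5*row 1

# Greedily keep a maximal linearly independent subset of the criteria.
independent = []
for i in range(C.shape[0]):
    if np.linalg.matrix_rank(C[independent + [i]]) == len(independent) + 1:
        independent.append(i)
# The reduced problem is posed with the criteria C[independent] only.
```

Here the reduced problem keeps two of the four criteria; the paper's equivalence additionally supplies the affine map between the efficient sets of the two problems.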

7.
A new version is presented of the necessary and sufficient condition for the convergence, to a normal law, of sums of independent variables in a nonclassical situation (i.e., in the absence of limiting negligibility of the variables). The condition obtained differs from previously known conditions in that it does not use Levy's metric and is closer to the classical formulations. A similar condition is sufficient for the closeness of two convolutions when the number of components of the convolutions increases without bound.

8.
Two generalized trajectory methods are combined to provide a novel and powerful numerical procedure for systematically finding multiple local extrema of a multivariable objective function. This procedure can form part of a strategy for global optimization in which the greatest local maximum and least local minimum in the interior of a specified region are compared to the largest and smallest values of the objective function on the boundary of the region. The first trajectory method, a homotopy scheme, provides a globally convergent algorithm to find a stationary point of the objective function. The second trajectory method, a relaxation scheme, starts at one stationary point and systematically connects other stationary points in the specified region by a network of trajectories. It is noted that both generalized trajectory methods actually solve the stationarity conditions, and so they can also be used to find multiple roots of a set of nonlinear equations.

9.
Based on a modified quasi-Newton equation and the Goldstein-Levitin-Polyak (GLP) projection technique, a nonmonotone variable-metric gradient projection algorithm with a two-stage step size is developed for optimization problems with convex-set constraints. Global convergence of the algorithm is proved, together with a Q-superlinear convergence rate under certain conditions. Numerical results show that the new algorithm is effective and suitable for large-scale problems.
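For box constraints the GLP projection is componentwise clipping; a minimal sketch of projected gradient descent (with a fixed step in place of the paper's nonmonotone two-stage step-size rule and variable metric) looks as follows.

```python
# Minimize f(x) = (x0 - 3)^2 + (x1 + 1)^2 over the box [0, 2] x [0, 2];
# the constrained minimizer is (2, 0).
def grad(x):
    return [2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)]

def project(x, lo=0.0, hi=2.0):
    # GLP projection onto a box = componentwise clipping
    return [min(hi, max(lo, xi)) for xi in x]

x = [0.0, 2.0]
step = 0.1
for _ in range(200):
    g = grad(x)
    x = project([xi - step * gi for xi, gi in zip(x, g)])
```

The iterates hit the active bounds after a few steps and then stay on the constrained minimizer.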

10.
We investigate the convergence of finite-difference local descent algorithms for the solution of global optimization problems with a multi-extremum objective function. Application of noise-tolerant local descent algorithms to the class of so-called -regular problems makes it possible to bypass minor extrema and thus identify the global structure of the objective function. Experimental data presented in the article confirm the efficiency of the parallel gradient and coordinate descent algorithms for the solution of some test problems.
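A finite-difference local descent step approximates each partial derivative from function values only; the sketch below uses central differences and a fixed step on a smooth toy function, without the noise tolerance and parallelism studied in the article.

```python
# Finite-difference coordinate descent: only function values are used;
# each partial derivative is approximated by a central difference.
def fd_coordinate_descent(f, x, h=1e-6, step=0.1, iters=500):
    x = list(x)
    for _ in range(iters):
        for i in range(len(x)):
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            g = (f(xp) - f(xm)) / (2.0 * h)   # central difference
            x[i] -= step * g
    return x

f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2
sol = fd_coordinate_descent(f, [4.0, 4.0])   # minimizer is (1, -0.5)
```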

11.
Local convergence analysis for partitioned quasi-Newton updates
Summary. This paper considers local convergence properties of inexact partitioned quasi-Newton algorithms for the solution of certain non-linear equations and, in particular, the optimization of partially separable objective functions. Using the bounded deterioration principle, one obtains local and linear convergence, which implies Q-superlinear convergence under the usual conditions on the quasi-Newton updates. For the optimization case, these conditions are shown to be satisfied by any sequence of updates within the convex Broyden class, even if some Hessians are singular at the minimizer. Finally, local and Q-superlinear convergence is established for an inexact partitioned variable metric method under mild assumptions on the initial Hessian approximations. Work supported by a research grant of the Deutsche Forschungsgemeinschaft, Bonn, and carried out at the Department of Applied Mathematics and Theoretical Physics, Cambridge (United Kingdom).

12.
Stabilized sequential quadratic programming (sSQP) methods for nonlinear optimization generate a sequence of iterates with fast local convergence regardless of whether or not the active-constraint gradients are linearly dependent. This paper concerns the local convergence analysis of an sSQP method that uses a line search with a primal-dual augmented Lagrangian merit function to enforce global convergence. The method is provably well-defined and is based on solving a strictly convex quadratic programming subproblem at each iteration. It is shown that the method has superlinear local convergence under assumptions that are no stronger than those required by conventional stabilized SQP methods. The fast local convergence is obtained by allowing a small relaxation of the optimality conditions for the quadratic programming subproblem in the neighborhood of a solution. In the limit, the line search selects the unit step length, which implies that the method does not suffer from the Maratos effect. The analysis indicates that the method has the same strong first- and second-order global convergence properties that have been established for augmented Lagrangian methods, yet is able to transition seamlessly to sSQP with fast local convergence in the neighborhood of a solution. Numerical results on some degenerate problems are reported.

13.
The Wiener process is a widely used statistical model for stochastic global optimization. One of the first optimization algorithms based on a statistical model, the so-called P-algorithm, was based on the Wiener process. Despite many advantages, this process does not give a realistic model for many optimization problems, particularly from the point of view of local behavior. In the present paper, a version of the P-algorithm is constructed based on a stochastic process with smooth sampling functions. It is shown that, in such a case, the algorithm has a better convergence rate than in the case of the Wiener process. A similar convergence rate is proved for a combination of the Wiener model-based P-algorithm with quadratic fit-based local search.
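One step of the Wiener-model P-algorithm can be sketched directly: with the Wiener posterior mean given by linear interpolation and posterior variance (x-a)(b-x)/(b-a) inside each interval of evaluated points, the next evaluation point maximizes the probability of falling below the record value minus a threshold. The grid search and the data below are illustrative.

```python
import math

def p_algorithm_next(xs, ys, eps=0.1, grid=200):
    # One step of a Wiener-model P-algorithm (sketch). Inside each
    # interval [a, b] the Wiener posterior mean is the linear
    # interpolant of (a, ya), (b, yb) and the posterior variance is
    # (x - a) * (b - x) / (b - a). The next evaluation point maximizes
    # the probability that the process drops below min(ys) - eps.
    target = min(ys) - eps
    best_x, best_p = None, -1.0
    for a, b, ya, yb in zip(xs, xs[1:], ys, ys[1:]):
        for j in range(1, grid):
            x = a + (b - a) * j / grid
            m = ya + (yb - ya) * (x - a) / (b - a)
            s = math.sqrt((x - a) * (b - x) / (b - a))
            p = 0.5 * (1.0 + math.erf((target - m) / (s * math.sqrt(2.0))))
            if p > best_p:
                best_x, best_p = x, p
    return best_x

xs = [0.0, 0.5, 1.0]   # evaluated points (sorted)
ys = [1.0, 0.2, 1.0]   # observed values; current record 0.2 at x = 0.5
x_next = p_algorithm_next(xs, ys)
```

For these data the criterion peaks inside the intervals adjacent to the current best point, so the next evaluation lands near x = 0.5.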

14.
Discretization in semi-infinite programming: the rate of convergence
The discretization approach for solving semi-infinite optimization problems is considered. We are interested in the rate of convergence of the error between the solution of the semi-infinite problem and the solution of the discretized program as a function of the discretization mesh size. It is shown how this rate depends on whether the minimizer is a strict minimizer of order one or two and on whether the discretization includes boundary points of the index set in a specific way. This is done for ordinary and for generalized semi-infinite problems. Received: November 21, 2000 / Accepted: May 2001 / Published online: September 17, 2001
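The dependence of the discretization error on the mesh can be seen on a one-dimensional toy problem (an assumption of this sketch, not an example from the paper) in which the active constraint has a strict maximizer of order two: refining the mesh tenfold shrinks the error roughly a hundredfold.

```python
import math

# Toy semi-infinite problem: minimize x subject to x >= sin(t) for all
# t in [0, pi]; the exact solution is x* = 1, attained at t* = pi/2.
def discretized_solution(n):
    # replace the index set [0, pi] by a uniform grid of n points
    grid = [math.pi * i / (n - 1) for i in range(n)]
    return max(math.sin(t) for t in grid)

errors = [1.0 - discretized_solution(n) for n in (10, 100, 1000)]
# quadratic decay: error is about (mesh / 2)^2 / 2 at the midpoint gap
```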

15.
Summary. We discuss the problem of reconstructing a function f from a finite set of moments. Problems of this kind typically arise as discretizations of integral equations of the first kind. We propose an algorithm based on a pointwise optimization of the point-spread function, which makes it particularly suitable for local reconstructions. The method is compared with known methods such as Backus-Gilbert and projection methods. Convergence of the method is proved and the rate of convergence is determined. The influence of noisy data is examined, and numerical examples show the usefulness of the method.

16.
The paper is devoted to the problem of approximating reachable sets of a nonlinear control system with state constraints given as the solution set of a nonlinear inequality. A state-constraint elimination procedure based on the introduction of an auxiliary constraint-free control system is proposed. The equations of the auxiliary system depend on a small parameter. It is shown that the reachable set of the original system can be approximated in the Hausdorff metric by reachable sets of the auxiliary control system as the small parameter tends to zero. Estimates of the convergence rate are given.

17.
The sensitivity function induced by a convex programming problem is examined. Its monotonicity, subdifferentiability, and closure properties are analyzed. A relation to the Pareto optimal solution set of the multicriteria convex optimization problem is established. The role of the sensitivity function in systems describing optimization problems is clarified. It is shown that the solution of these systems can often be reduced to the minimization of the sensitivity function on a convex set. Numerical methods for solving such problems are proposed, and their convergence is proved.

18.
Several papers in the scientific literature use metaheuristics to solve continuous global optimization problems. To perform this task, some metaheuristics originally proposed for combinatorial optimization problems, such as the Greedy Randomized Adaptive Search Procedure (GRASP), Tabu Search, and Simulated Annealing, among others, have been adapted to continuous global optimization. Proposed by Hirsch et al., Continuous-GRASP (C-GRASP) is one example of this group of metaheuristics. C-GRASP is an adaptation of GRASP for continuous global optimization problems under box constraints. It is a simple-to-implement, derivative-free, and widely applicable method. However, according to Hedar, due to its random construction, C-GRASP may fail to detect promising search directions, especially in the vicinity of minima, which may result in slow convergence. To mitigate this problem, in this paper we propose a set of methods to direct the search in C-GRASP, called Directed Continuous-GRASP (DC-GRASP). The proposal is to combine the ability of C-GRASP to diversify the search over the space with efficient local search strategies that accelerate its convergence. We compare DC-GRASP with C-GRASP and other metaheuristics from the literature on a set of standard test problems whose global minima are known. Computational results show the effectiveness and efficiency of the proposed methods, as well as their ability to accelerate the convergence of C-GRASP.
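A stripped-down GRASP-style method for box-constrained problems (randomized multistart construction plus a mesh-refining local search) conveys the idea; it is a simplification, not Hirsch et al.'s C-GRASP nor the proposed DC-GRASP strategies, and the test function is an assumption of this sketch.

```python
import math
import random

def local_search(f, x, lo, hi, h=0.1, sweeps=300):
    # Pattern-style local search; the mesh h is refined when no
    # coordinate move improves the incumbent (C-GRASP refines similarly).
    x = list(x)
    while sweeps > 0 and h > 1e-6:
        sweeps -= 1
        improved = False
        for i in range(len(x)):
            for d in (-h, h):
                y = list(x)
                y[i] = min(hi, max(lo, y[i] + d))
                if f(y) < f(x):
                    x, improved = y, True
        if not improved:
            h *= 0.5
    return x

def grasp(f, dim, lo, hi, starts=50, seed=1):
    # Randomized multistart construction + local improvement.
    random.seed(seed)
    best = None
    for _ in range(starts):
        x0 = [random.uniform(lo, hi) for _ in range(dim)]
        x = local_search(f, x0, lo, hi)
        if best is None or f(x) < f(best):
            best = x
    return best

# Multimodal test function; global minimum value 0 at the origin.
f = lambda x: (x[0] ** 2 + x[1] ** 2
               + 10.0 * (2.0 - math.cos(3.0 * x[0]) - math.cos(3.0 * x[1])))
best = grasp(f, 2, -2.0, 2.0)
```

The random restarts diversify the search across basins, and the shrinking mesh sharpens the best incumbent.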

19.
In this paper, we analyse the convergence rate of the sequence of objective function values of a primal-dual proximal-point algorithm recently introduced in the literature for solving a primal convex optimization problem, whose objective is the sum of linearly composed infimal convolutions and of nonsmooth and smooth convex functions, together with its Fenchel-type dual problem. The theoretical part is illustrated by numerical experiments in image processing.

20.
A new class of super-memory gradient methods for solving unconstrained optimization problems is studied, and the global convergence and linear convergence rate of the algorithm are analyzed. The algorithm generates new iterates by a multi-step curvilinear search rule that determines the descent direction and the step size simultaneously at each iteration; it requires no matrix computation or storage and is therefore suitable for large-scale optimization problems. Numerical experiments show that the algorithm is effective.
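A memory gradient step can be sketched with a single memory term, d_k = -g_k + beta_k * d_{k-1}, so no matrices are computed or stored; the multi-step curvilinear search of the paper is replaced here by plain Armijo backtracking, and beta_k is capped so that the direction remains a descent direction. All parameter choices are illustrative.

```python
def memory_gradient(f, grad, x, iters=3000, c=0.3):
    # d_k = -g_k + beta_k * d_{k-1}; beta_k = 0.5*||g_k|| / ||d_{k-1}||
    # guarantees d_k . g_k <= -0.5*||g_k||^2 (a descent direction).
    d_prev = None
    for _ in range(iters):
        g = grad(x)
        gnorm = sum(gi * gi for gi in g) ** 0.5
        if gnorm < 1e-12:
            break
        if d_prev is None:
            d = [-gi for gi in g]
        else:
            dnorm = sum(di * di for di in d_prev) ** 0.5
            beta = 0.5 * gnorm / dnorm
            d = [-gi + beta * di for gi, di in zip(g, d_prev)]
        slope = sum(di * gi for di, gi in zip(d, g))
        t, fx = 1.0, f(x)
        # Armijo backtracking in place of the multi-step curvilinear search
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + c * t * slope:
            t *= 0.5
        x = [xi + t * di for xi, di in zip(x, d)]
        d_prev = d
    return x

f = lambda x: (x[0] - 1.0) ** 2 + 5.0 * x[1] ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 10.0 * x[1]]
sol = memory_gradient(f, grad, [5.0, 5.0])   # minimizer is (1, 0)
```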
