Similar Literature
20 similar documents found.
1.
In this paper, we investigate the use of DC (Difference of Convex functions) models and algorithms in the application of trust-region methods to the solution of a class of nonlinear optimization problems where the constraint set is closed and convex (and, from a practical point of view, where projecting onto the feasible region is computationally affordable). We consider DC local models for the quadratic model of the objective function used to compute the trust-region step, and apply a primal-dual subgradient method to the solution of the corresponding trust-region subproblems. We prove that the resulting scheme is globally convergent to first-order stationary points. The theory requires the use of exact second-order derivatives but, in turn, the computation of the trust-region step requires only one projection onto the feasible region (whereas computing the generalized Cauchy point may require more). The numerical efficiency and robustness of the proposed new scheme when applied to bound-constrained problems are measured by comparing its performance against some of the current state-of-the-art nonlinear programming solvers on a vast collection of test problems.
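The DC decomposition and the primal-dual subgradient solver for the trust-region subproblem are specific to the paper and are not reproduced here. As a minimal, hedged illustration of the one-projection idea for the bound-constrained case, the following Python sketch takes a gradient step of at most the trust-region radius and maps it back onto the box by componentwise clipping; the step rule and all names are illustrative assumptions, not the authors' method.

    import numpy as np

    def project_onto_box(x, lower, upper):
        # Euclidean projection onto {z : lower <= z <= upper} is componentwise clipping.
        return np.minimum(np.maximum(x, lower), upper)

    def projected_step(x, grad, lower, upper, radius):
        # Move along the negative gradient by at most 'radius', then use the
        # single projection onto the feasible box mentioned in the abstract.
        step = -radius * grad / max(np.linalg.norm(grad), 1e-12)
        return project_onto_box(x + step, lower, upper)

    # toy usage on f(x) = ||x - c||^2 with feasible box [0, 1]^2
    c = np.array([1.5, -0.5])
    x = np.array([0.2, 0.8])
    x_new = projected_step(x, 2.0 * (x - c), np.zeros(2), np.ones(2), radius=0.5)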

2.
A new auxiliary function method for global optimization, based on a two-stage deterministic search, is proposed. Specifically, a local minimum of the original function is first obtained, and a stretching function technique is then used to modify the objective function with respect to this local minimum. The transformed function stretches the function values higher than the obtained minimum upward while keeping those with lower values unchanged. Next, an auxiliary function is constructed on the stretched function; it always descends in the region where the function values are higher than the obtained minimum and has a stationary point in the lower region. We optimize the auxiliary function and use the stationary point found as the starting point for a new round of the first stage, repeating the procedure until termination. A theoretical analysis is also given. The main feature of the new method is that it significantly relaxes the requirements on the parameters. Numerical experiments on benchmark functions with different dimensions (up to 50) demonstrate that the new algorithm converges more rapidly, has a higher success rate, and finds higher-quality solutions than some other existing similar algorithms, consistent with the theoretical analysis.
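The paper's own stretching transformation and auxiliary function are not reproduced here. The sketch below illustrates one common form of function stretching from the literature (lifting values above the best known minimum while leaving the lower region untouched); the formula, parameters, and names are assumptions for illustration only and may differ from the construction used in the paper.

    import numpy as np

    def make_stretched(f, x_star, gamma1=1e3, gamma2=1.0, mu=1e-6):
        # Return a stretched version of f around a known local minimizer x_star:
        # values above f(x_star) are lifted, values at or below it are unchanged.
        x_star = np.asarray(x_star, dtype=float)
        f_star = f(x_star)
        def g(x):
            x = np.asarray(x, dtype=float)
            fx = f(x)
            if fx <= f_star:
                return fx                                   # lower region untouched
            G = fx + gamma1 * np.linalg.norm(x - x_star)    # lift the higher region
            return G + gamma2 / np.tanh(mu * (G - f_star))  # steepen around x_star
        return g

    # usage: x_star is the local minimizer returned by the first (local search) stage
    f = lambda x: np.sin(3 * x[0]) + 0.1 * x[0] ** 2        # a function with many local minima
    g = make_stretched(f, x_star=np.array([-0.5]))          # -0.5 is an illustrative point
    print(g(np.array([-0.5])), g(np.array([2.0])))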

3.
This paper presents the results obtained by applying the cell-to-cell mapping method to solve the problem of time-optimal trajectory planning for coordinated multiple robotic arms handling a common object along a specified geometric path. Based on the structure of the time-optimal trajectory control law, the continuous dynamic model of multiple arms is first approximated by a discrete and finite cell-to-cell mapping on a two-dimensional cell space over a phase plane. The optimal trajectory and the corresponding control are then determined by using the cell-to-cell mapping and a simple search algorithm. To further improve the computational efficiency and to allow for parallel computation, a hierarchical search algorithm consisting of a multiple-variable optimization on the top level and a number of cell-to-cell searches on the bottom level is proposed and implemented in the paper. Besides its simplicity, another distinguishing feature of the cell-to-cell mapping method is the generation of all optimal trajectories for a given final state and all possible initial states through a single search process. For most of the existing trajectory planning methods, the planning process can be started only when both the initial and final states have been specified. The cell-to-cell method can be generalized to any optimal trajectory planning problem for a multiple-robotic-arm system.

4.
Lagrangian methods are popular in solving continuous constrained optimization problems. In this paper, we address three important issues in applying Lagrangian methods to solve optimization problems with inequality constraints.

First, we study methods to transform inequality constraints into equality constraints. An existing method, called the slack-variable method, adds a slack variable to each inequality constraint in order to transform it into an equality constraint. Its disadvantage is that when the search trajectory is inside a feasible region, some satisfied constraints may still exert an effect on the Lagrangian function, leading to possible oscillations and divergence when a local minimum lies on the boundary of the feasible region. To overcome this problem, we propose the MaxQ method, which has no effect on satisfied constraints. Hence, minimizing the Lagrangian function in a feasible region always leads to a local minimum of the objective function. We also study some strategies to speed up its convergence.

Second, we study methods to improve the convergence speed of Lagrangian methods without affecting the solution quality. This is done by an adaptive-control strategy that dynamically adjusts the relative weights between the objective and the Lagrangian part, leading to a better balance between the two and faster convergence.

Third, we study a trace-based method to pull the search trajectory from one saddle point to another in a continuous fashion without restarts. This overcomes one of the problems in existing Lagrangian methods, which converge only to one saddle point and require random restarts to look for new saddle points, often missing good saddle points in the vicinity of the saddle points already found.

Finally, we describe a prototype, Novel (Nonlinear Optimization via External Lead), that implements our proposed strategies, and present improved solutions in solving a collection of benchmarks.
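As a minimal, hedged illustration of the difference between the two transforms discussed above, the sketch below contrasts a slack-variable rewrite of an inequality constraint with a MaxQ-style rewrite; the exact definition, exponent, and update rules used in the paper are not reproduced, and the names are illustrative.

    def slack_equality(g_val, s):
        # slack-variable transform: g(x) <= 0 becomes g(x) + s**2 == 0;
        # even when g(x) < 0 the term is generally nonzero for the current s,
        # so satisfied constraints can still perturb the Lagrangian.
        return g_val + s ** 2

    def maxq_equality(g_val, q=2):
        # MaxQ-style transform: g(x) <= 0 becomes max(0, g(x))**q == 0;
        # the term (and for q >= 2 its derivative) vanishes whenever the
        # constraint is satisfied, so it has no effect inside the feasible region.
        return max(0.0, g_val) ** q

    # inside the feasible region (g(x) = -0.3) the MaxQ term is exactly zero,
    # while the slack form still depends on the current slack value s
    print(slack_equality(-0.3, s=0.7))   # 0.19, not 0
    print(maxq_equality(-0.3))           # 0.0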

5.
A fast descent algorithm for large-scale global optimization is proposed; it resorts to a "stretching" function technique and is built on a hybrid method (GRSA) that combines the simulated annealing (SA) algorithm with gradient-based methods. Unlike previously proposed methods, in which the original objective function remains unchanged during the whole course of optimization, the new method first constructs an auxiliary function at a local minimizer obtained by gradient-based methods; SA is then executed on this auxiliary function instead of the original objective function, improving the ability of SA to jump from the currently discovered local minimum to a better one, from which the gradient-based methods restart a new local search. The above procedure is repeated until a global minimum is detected. In addition, a new scheme for generating the next trial point is designed to match the adopted "stretching" technique. Simulations, especially on large-scale problems, verify that the convergence speed is greatly accelerated; this is the main difference from many other reported methods, which mostly cope with functions of fewer than 50 variables and do not apply to large-scale optimization problems. Furthermore, the new algorithm functions as a global optimization procedure with a high success probability and high solution precision.

6.
Interval Newton methods in conjunction with generalized bisection are important elements of algorithms which find, with mathematical certainty and even in finite-precision arithmetic, the global optimum within a specified box X ⊂ R^n of an objective function whose critical points are solutions of the system of nonlinear equations F(X) = 0. The overall efficiency of such a scheme depends on the power of the interval Newton method to reduce the widths of the coordinate intervals of the box. Thus, though the generalized bisection method will still converge in a box which contains a critical point at which the Jacobian matrix is singular, the process is much more costly in that case. Here, we propose modifications which make the generalized bisection method isolate singular solutions more efficiently. These modifications are based on an observation about the verification property of interval Newton methods and on techniques for detecting the singularity and removing the region containing it. The modifications assume no special structure for F. Additionally, one of the observations should also make the algorithm more efficient when finding nonsingular solutions. We present results of computational experiments.
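The sketch below shows one contraction step of a scalar interval Newton method, the basic building block that the abstract refers to; it assumes the derivative enclosure does not contain zero and ignores directed (outward) rounding, so it is an illustration only and does not implement the paper's modifications for singular solutions.

    def interval_newton_step(f, df_interval, lo, hi):
        # One interval Newton step on [lo, hi] for a scalar equation f(x) = 0.
        # df_interval(lo, hi) must return an enclosure (dlo, dhi) of f' on [lo, hi].
        m = 0.5 * (lo + hi)
        fm = f(m)
        dlo, dhi = df_interval(lo, hi)
        if dlo <= 0.0 <= dhi:
            raise ValueError("derivative enclosure contains 0; extended division needed")
        # N(X) = m - f(m) / F'(X), then intersect with the original interval
        q_lo, q_hi = sorted((fm / dlo, fm / dhi))
        new_lo, new_hi = max(lo, m - q_hi), min(hi, m - q_lo)
        return (new_lo, new_hi) if new_lo <= new_hi else None   # None: no root in box

    # contract an enclosure of the root of x**2 - 2 = 0 on [1, 2]
    f = lambda x: x * x - 2.0
    df_interval = lambda lo, hi: (2.0 * lo, 2.0 * hi)            # f'(x) = 2x on [1, 2]
    print(interval_newton_step(f, df_interval, 1.0, 2.0))        # roughly (1.375, 1.4375)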

7.
Optimization, 2012, 61(6): 563-577
In this article, we first propose an unconstrained optimization reformulation of the generalized nonlinear complementarity problem (GNCP) over a polyhedral cone, and then discuss conditions under which any stationary point of the reformulation is a solution of the GNCP. Conditions which guarantee the nonsingularity and positive definiteness of the Hessian matrix of the objective function are also given. Finally, we design a Newton-type method to solve the GNCP and show the global and local quadratic convergence of the proposed method under certain assumptions.

8.
Researchers apply scan statistics to test for unusually large clusters of events within a time window of specified length w, or alternatively for an unusually small window w that contains a specified number of events. In some cases, the researcher is interested in testing over a range of specified window lengths, or over a set of specified numbers of events k (cluster sizes). In this paper, we derive accurate approximations for the joint distributions of scan statistics over a range of values of w, or of k, that can be used to set an experiment-wide level of significance that takes the multiple comparisons involved into account. We use these methods to compare different ways of choosing the window sizes for the different cluster sizes. One special case is a multiple comparison procedure based on a generalized likelihood ratio test (GLRT) for a range of window sizes. We compare the power of the GLRT with another method for allocating the window sizes. We find that the GLRT is sensitive for very small window sizes at the expense of moderate and larger window sizes. We illustrate these results on two examples, one involving clustering of translocation breakpoints in DNA, and the other involving disease clusters.
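As a hedged, brute-force illustration of the two basic quantities the abstract works with (the largest event count in any window of length w, and the smallest window containing k events), the following sketch computes both from a list of event times; the distributional approximations and multiple-comparison adjustments derived in the paper are not shown.

    import numpy as np

    def scan_statistic(event_times, w):
        # largest number of events falling in any window [t, t + w]
        t = np.sort(np.asarray(event_times, dtype=float))
        counts = np.searchsorted(t, t + w, side="right") - np.arange(len(t))
        return int(counts.max())

    def min_window_for_k(event_times, k):
        # smallest window length containing at least k events (the dual view)
        t = np.sort(np.asarray(event_times, dtype=float))
        return float(np.min(t[k - 1:] - t[: len(t) - k + 1]))

    rng = np.random.default_rng(0)
    times = rng.uniform(0.0, 100.0, size=200)
    print(scan_statistic(times, w=5.0), min_window_for_k(times, k=15))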

9.
Researchers rely on the distance function to model the production of multiple products using multiple inputs. A stochastic directional distance function (SDDF) allows for noise in potentially all input and output variables. Yet, when estimated, the direction selected will affect the functional estimates because deviations from the estimated function are minimized in the specified direction. Specifically, the parameters of the parametric SDDF are point identified when the direction is specified; we show that the parameters of the parametric SDDF are set identified when multiple directions are considered. Further, the set of identified parameters can be narrowed via data-driven approaches to restrict the directions considered. We demonstrate a similar narrowing of the identified parameter set for a shape-constrained nonparametric method, where the shape constraints impose standard features of a cost function such as monotonicity and convexity. Our Monte Carlo simulation studies reveal significant improvements, as measured by out-of-sample radial mean squared error, in functional estimates when we use a directional distance function with an appropriately selected direction and the errors are uncorrelated across variables. We show that these benefits increase as the correlation in error terms across variables increases. This correlation is a type of endogeneity that is common in production settings. From our Monte Carlo simulations we conclude that selecting a direction that is approximately orthogonal to the estimated function in the central region of the data gives significantly better estimates relative to the directions commonly used in the literature. For practitioners, our results imply that selecting a direction vector that has non-zero components for all variables that may have measurement error provides a significant improvement in the estimator's performance. We illustrate these results using cost and production data from samples of approximately 500 US hospitals per year operating in 2007, 2008, and 2009, respectively, and find that the shape-constrained nonparametric methods provide a significant increase in flexibility over second-order local-approximation parametric methods.

10.
This paper presents a global error bound for the projected gradient and a local error bound for the distance from a feasible solution to the optimal solution set of a nonlinear programming problem, using characteristic quantities such as the value function and the trust-region radius which appear in the trust-region method. As applications of these error bounds, we obtain sufficient conditions under which a sequence of feasible solutions converges to a stationary point or to an optimal solution, respectively, and a necessary and sufficient condition under which a sequence of feasible solutions converges to a Kuhn–Tucker point. Other applications involve finite termination of a sequence of feasible solutions. For general optimization problems, when the optimal solution set is generalized non-degenerate or gives generalized weak sharp minima, we give a necessary and sufficient condition for a sequence of feasible solutions to terminate finitely at a Kuhn–Tucker point, and a sufficient condition which guarantees that a sequence of feasible solutions terminates finitely at a stationary point. This research was supported by the National Natural Science Foundation of China (10571106) and a CityU Strategic Research Grant.
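For readers unfamiliar with the projected gradient, the residual below is a standard way to quantify how far a feasible point is from stationarity for minimization over a convex set; it is the kind of quantity such error bounds relate to the distance to the solution set. The example uses a box projection and is purely illustrative, not taken from the paper.

    import numpy as np

    def projected_gradient_residual(x, grad, project, t=1.0):
        # ||x - P(x - t * grad)||: zero exactly at stationary points of
        # min f(x) subject to x in a convex set with Euclidean projection P
        return np.linalg.norm(x - project(x - t * grad))

    # example: f(x) = ||x - c||^2 minimized over the box [0, 1]^2
    c = np.array([2.0, -1.0])
    project = lambda z: np.clip(z, 0.0, 1.0)
    x = np.array([1.0, 0.0])                    # the projection of c onto the box
    print(projected_gradient_residual(x, 2.0 * (x - c), project))   # 0.0: x is stationary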

11.
A prominent difficulty in solving constrained systems of nonlinear equations via optimization methods is that the computed point may be only a stationary point or a local minimizer of the optimization problem rather than a solution of the system. The question this raises is how to proceed from such a stationary point to a point that is better with respect to solving the system. This paper adopts a projection-type algorithm to extend the Lagrangian Global (LG) method of Nazareth and Qi [8,9] for unconstrained systems of nonlinear equations to constrained systems; it is proved theoretically that, starting from a stationary point of the optimization problem, the projected LG method can find a better point. Numerical experiments demonstrate the effectiveness of the LG method.

12.
In this paper we consider Weber-like location problems. The objective function is a sum of terms, each a function of the Euclidean distance from a demand point. We prove that a Weiszfeld-like iterative procedure for the solution of such problems converges to a local minimum (or a saddle point) when three conditions are met. Many location problems can be solved by the generalized Weiszfeld algorithm. There are many problem instances for which convergence is observed empirically. The proof in this paper shows that many of these algorithms indeed converge.
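The classical Weiszfeld iteration for the Euclidean Weber problem (minimize the weighted sum of distances to demand points) is sketched below as background; the paper treats more general distance-based objectives, so this is an illustration of the base algorithm only, with illustrative names and a simple stopping rule.

    import numpy as np

    def weiszfeld(points, weights=None, iters=200, tol=1e-9):
        # minimize sum_i w_i * ||x - a_i||; each iterate is a weighted average
        # of the demand points with weights w_i / ||x - a_i||
        a = np.asarray(points, dtype=float)
        w = np.ones(len(a)) if weights is None else np.asarray(weights, dtype=float)
        x = np.average(a, axis=0, weights=w)          # start at the weighted centroid
        for _ in range(iters):
            d = np.linalg.norm(a - x, axis=1)
            if np.any(d < 1e-12):                     # iterate landed on a demand point
                break
            coef = w / d
            x_new = coef @ a / coef.sum()
            if np.linalg.norm(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    print(weiszfeld([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]]))   # geometric median of a triangle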

13.
This paper presents a new sequential method for constrained nonlinear optimization problems. The principal characteristics of these problems are very time-consuming function evaluations and the absence of derivative information. Such problems are common in design optimization, where time-consuming function evaluations are carried out by simulation tools (e.g., FEM, CFD). Classical optimization methods, based on derivatives, are not applicable because derivative information is often not available and is too expensive to approximate through finite differencing. The algorithm first creates an experimental design. The underlying functions are evaluated at the design points. Local linear approximations of the real model are obtained with the help of weighted regression techniques. The approximating model is then optimized within a trust region to find the best feasible objective-improving point. This trust region moves along the most promising direction, which is determined on the basis of the evaluated objective values and constraint violations combined in a filter criterion. If the geometry of the points that determine the local approximations becomes bad, i.e. the points are located in such a way that they result in a bad approximation of the actual model, then we evaluate a geometry-improving point instead of an objective-improving one. In each iteration a new local linear approximation is built, and either a new point is evaluated (objective- or geometry-improving) or the trust region is decreased. Convergence of the algorithm is guided by the size of this trust region. The focus of the approach is on getting good solutions with a limited number of function evaluations.
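The sketch below illustrates only the "local linear approximation by weighted regression" ingredient of such a method: it fits an affine model around the current trust-region center, down-weighting points that lie far from it. The Gaussian weight choice, names, and data are assumptions for illustration; the filter, the geometry-improving step, and the trust-region management described in the abstract are omitted.

    import numpy as np

    def weighted_linear_model(X, y, center, radius):
        # fit y ~ b0 + b @ (x - center) by weighted least squares, giving less
        # weight to evaluated points far from the current trust-region center
        X = np.asarray(X, dtype=float)
        d = np.linalg.norm(X - center, axis=1)
        sw = np.sqrt(np.exp(-(d / max(radius, 1e-12)) ** 2))   # illustrative weights
        A = np.hstack([np.ones((len(X), 1)), X - center])      # design matrix [1, x - center]
        coef, *_ = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)
        return coef[0], coef[1:]                               # intercept, slope estimate

    rng = np.random.default_rng(1)
    X = rng.uniform(-1.0, 1.0, size=(20, 2))                   # 20 evaluated design points
    y = 3.0 + X @ np.array([2.0, -1.0]) + 0.01 * rng.standard_normal(20)
    print(weighted_linear_model(X, y, center=np.zeros(2), radius=1.0))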

14.
Structural equivalence (Lorrain and White, 1971) and automorphic equivalence (Everett, 1985) are generalized to define neighborhood‐ and ego‐centered equivalences. It is shown that local versions of these equivalences can then be formulated quite naturally. In addition to these natural localizations, a generalized procedure capable of localizing any model of role equivalence is presented. From a theoretical point of view, local roles are recommended by the notion that network influences on ego diminish with distance. From a practical point of view, local roles help find structure in graphs where global equivalences find no two actors equivalent.
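For context, the classical (global) notion of structural equivalence declares two actors equivalent when they are tied to exactly the same other actors. The sketch below checks this on a small adjacency structure; the neighborhood- and ego-centered local versions introduced in the paper restrict which part of the graph is compared and are not implemented here.

    def structurally_equivalent(adj, u, v):
        # classical structural equivalence: u and v have identical neighbourhoods
        # (ties between u and v themselves are ignored in this simple check)
        return (adj[u] - {u, v}) == (adj[v] - {u, v})

    # adjacency as node -> set of neighbours
    adj = {
        "a": {"c", "d"},
        "b": {"c", "d"},
        "c": {"a", "b"},
        "d": {"a", "b"},
    }
    print(structurally_equivalent(adj, "a", "b"))   # True: same neighbourhood
    print(structurally_equivalent(adj, "a", "c"))   # False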

15.
1. Introduction. In unconstrained optimization the basic problem considered is min f(x) (1.1), where f(x): R^n → R is a real differentiable function. Many algorithms have been proposed for solving (1.1). The supermemory descent method is one of them. Its main idea is to combine a descent direction with the displacements generated by previous iterations to obtain a new search direction. The typical form of the method is shown by Wolfe and Viazminsky [14]. That is, for the kth iteration, calculate α_k, p^(k), s_k and x_{k+1} from … 1) The Project Supported by National Natural Sciences Foun…

16.
The problem of globally minimizing a convex function subject to general continuous inequality constraints is investigated. A convergent outer approximation method is proposed which systematically exploits the convexity of the objective function in order to transcend local optimality. Also the question of finding a good starting point by using a local approach is discussed.

17.
A general class of variational models with concave priors is considered for obtaining certain sparse solutions, for which nonsmoothness and non-Lipschitz continuity of the objective functions pose significant challenges from an analytical as well as numerical point of view. For computing a stationary point of the underlying variational problem, a Newton-type scheme with provable convergence properties is proposed. The possible non-positive definiteness of the generalized Hessian is handled by a tailored regularization technique, which is motivated by reweighting as well as the classical trust-region method. Our numerical experiments demonstrate selected applications in image processing, support vector machines, and optimal control of partial differential equations.

18.
Yin Jianyuan, Yu Bing, Zhang Lei. Science China Mathematics, 2021, 64(8): 1801-1816
We introduce a generalized numerical algorithm to construct the solution landscape, which is a pathway map consisting of all the stationary points and their connections. Based on the high-index optimization-based shrinking dimer (HiOSD) method for gradient systems, a generalized high-index saddle dynamics (GHiSD) is proposed to compute any-index saddles of dynamical systems. Linear stability of the index-k saddle point can be proved for the GHiSD system. A combination of the downward search algorithm and the upward search algorithm is applied to systematically construct the solution landscape, which not only provides a powerful and efficient way to compute multiple solutions without tuning initial guesses, but also reveals the relationships between different solutions. Numerical examples, including a three-dimensional example and the phase field model, demonstrate the novel concept of the solution landscape by showing the connected pathway maps.

19.
A new method for continuous global minimization problems, with the acronym SCM, is introduced. This method gives a simple transformation to convert the objective function into an auxiliary function with gradually fewer local minimizers. All local minimizers of the auxiliary function, except a prefixed one, lie in the region where the value of the objective function is lower than its current minimal value. Based on this method, an algorithm is designed which uses a local optimization method to minimize the auxiliary function in order to find a local minimizer at which the value of the objective function is lower than its current minimal value. The algorithm converges asymptotically with probability one to a global minimizer of the objective function. Numerical experiments on a set of standard test problems, with dimensions up to 50 for several problems, show that the algorithm is very efficient compared with other global optimization methods.

20.
The direct kinematics problem for parallel robots can be stated as follows: given values of the joint variables, the corresponding Cartesian variable values (the pose of the end-effector) must be found. Most of the time, the direct kinematics problem involves the solution of a system of non-linear equations. The most efficient methods for solving such equations assume convexity of a cost function whose minimum is the solution of the non-linear system. Consequently, the capacity of such methods depends on knowledge of a starting point whose neighboring region is convex, so that the method can find the global minimum. This article proposes a method based on probabilistic learning of an adequate starting point for the Dogleg method, which assumes local convexity of the function. The proposed method efficiently avoids local minima without the need for human intervention or a priori knowledge, and thus shows more robust performance than the simple Dogleg method or other gradient-based methods. To demonstrate the performance of the proposed hybrid method, numerical experiments and the respective discussion are presented. The proposal can be extended to other structures of closed kinematic chains, to the general solution of systems of non-linear equations, and to the minimization of non-linear functions.
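To make the setting concrete, the sketch below solves a toy system of kinematic-style equations with a dogleg-type least-squares solver started from several candidate points and keeps the best root found. It stands in for the baseline that the paper improves upon: random multi-start is used here purely as a placeholder for the paper's probabilistic learning of a good starting point, and the toy residual, SciPy's dogbox variant, and all names are assumptions for illustration.

    import numpy as np
    from scipy.optimize import least_squares

    def multi_start_dogleg(residual, sample_start, n_starts=20, seed=0):
        # run a dogleg-type least-squares solver from several starting points and
        # keep the solution with the smallest residual cost
        rng = np.random.default_rng(seed)
        best = None
        for _ in range(n_starts):
            sol = least_squares(residual, sample_start(rng), method="dogbox")
            if best is None or sol.cost < best.cost:
                best = sol
        return best

    # toy stand-in for the kinematic equations: intersect two circles
    # (two distance constraints on the planar position (x, y))
    def residual(p):
        x, y = p
        return np.array([x ** 2 + y ** 2 - 4.0, (x - 3.0) ** 2 + y ** 2 - 4.0])

    sol = multi_start_dogleg(residual, lambda rng: rng.uniform(-3.0, 3.0, size=2))
    print(sol.x, sol.cost)   # a root near (1.5, ±1.32) with cost close to zero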

