Similar articles
20 similar articles found.
1.
This article proposes a variable selection method termed “subtle uprooting” for linear regression. In this proposal, variable selection is formulated as a single optimization problem by approximating the cardinality term in the information criterion with a smooth function. A technical maneuver is then employed to enforce sparsity of parameter estimates while maintaining smoothness of the objective function. To solve the resulting smooth nonconvex optimization problem, a modified Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm with established global and superlinear convergence is adopted. Both simulated experiments and an empirical example are provided for assessment and illustration. Supplementary materials for this article are available online.
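A minimal sketch of the general idea, not the authors' exact "subtle uprooting" construction: replace the cardinality (ℓ0) term of a BIC-like criterion with a smooth surrogate and hand the resulting smooth objective to a BFGS solver. The surrogate, penalty weight, and data below are illustrative assumptions.

```python
# Sketch: smooth surrogate for the l0 penalty in a BIC-like criterion,
# minimized with BFGS.  Illustrative only; the paper's formulation and its
# sparsity-enforcing maneuver differ in detail.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.5, 0, 0, 1.0, 0, 0, 0, 0, 0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

tau = 0.05          # smoothing width: sum of 1 - exp(-(b/tau)^2) ~ ||b||_0
lam = np.log(n)     # BIC-like penalty weight (assumed, not from the paper)

def objective(beta):
    rss = np.sum((y - X @ beta) ** 2)
    approx_card = np.sum(1.0 - np.exp(-(beta / tau) ** 2))
    return n * np.log(rss / n) + lam * approx_card

res = minimize(objective, np.zeros(p), method="BFGS")
print(np.round(res.x, 3))   # near-zero entries flag variables to drop
```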

2.
In this paper, we propose a BFGS (Broyden–Fletcher–Goldfarb–Shanno)-SQP (sequential quadratic programming) method for nonlinear inequality constrained optimization. At each step, the method generates a direction by solving a quadratic programming subproblem. A good feature of this subproblem is that it is always consistent. Moreover, we propose a practical update formula for the quasi-Newton matrix. Under mild conditions, we prove the global and superlinear convergence of the method. We also present some numerical results.

3.
We present a superlinearly convergent exact penalty method for solving constrained nonlinear least squares problems, in which the projected exact penalty Hessian is approximated by using a structured secant updating scheme. We give general conditions for the two-step superlinear convergence of the algorithm and prove that the projected structured Broyden–Fletcher–Goldfarb–Shanno (BFGS), Powell-symmetric-Broyden (PSB), and Davidon–Fletcher–Powell (DFP) update formulas satisfy these conditions. Then we extend the results to the projected structured convex Broyden family update formulas. Extensive testing results obtained by an implementation of our algorithms, as compared to the results obtained by several other competent algorithms, demonstrate the efficiency and robustness of the proposed approach.

4.
A new family of conjugate gradient methods is proposed by minimizing the distance between two particular directions. It is a subfamily of the Dai–Liao family that contains the Hager–Zhang family and the Dai–Kou method. The direction of the proposed method is an approximation to that of the memoryless Broyden–Fletcher–Goldfarb–Shanno method. With suitable parameter intervals, the direction of the proposed method possesses the sufficient descent property independent of the line search. Under mild assumptions, we analyze the global convergence of the method for strongly convex functions and for general functions when the stepsize is obtained by the standard Wolfe rules. Numerical results indicate that the proposed method is promising and outperforms CGOPT and CG_DESCENT on a set of unconstrained optimization test problems.
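A minimal sketch of one Dai–Liao-type direction update, which for a suitable parameter approximates the memoryless BFGS direction; the specific parameter intervals derived in the paper are not reproduced here, and the choice t=1.0 is only illustrative.

```python
# Sketch of a Dai-Liao type conjugate gradient direction update.
import numpy as np

def dai_liao_direction(g_new, g_old, d_old, s_old, t=1.0):
    """d_{k+1} = -g_{k+1} + beta_k * d_k with the Dai-Liao beta.

    s_old is the previous step x_{k+1} - x_k, t > 0 the Dai-Liao parameter.
    """
    y = g_new - g_old                      # gradient difference y_k
    denom = d_old @ y
    beta = (g_new @ y - t * (g_new @ s_old)) / denom
    return -g_new + beta * d_old
```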

5.
This paper extends a damped technique, originally developed for the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method, to the limited-memory BFGS method for large-scale unconstrained optimization. It is shown that the proposed technique maintains the global convergence property of the limited-memory BFGS method on uniformly convex functions. Some numerical results are described to illustrate the important role of the damped technique. Since this technique safely enforces the positive definiteness of the BFGS update for any value of the steplength, we also consider only the first Wolfe–Powell condition on the steplength. Then, as in a backtracking framework, only one gradient evaluation is performed per iteration. It is reported that the proposed damped methods work much better than the limited-memory BFGS method in several cases.
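A minimal sketch of Powell-style damping applied to a correction pair (s, y) before it is stored for limited-memory BFGS. The true damped L-BFGS technique works with the full implicit B_k s product; here B_k is replaced by a scaled identity purely for illustration.

```python
# Sketch of damping an (s, y) pair so that s^T y stays safely positive.
import numpy as np

def damp_pair(s, y, gamma, sigma=0.2):
    """Return a damped y for the L-BFGS memory.

    gamma is the usual L-BFGS scaling s^T y / y^T y from the previous pair,
    so B ~ (1/gamma) * I is a crude stand-in for the Hessian approximation.
    """
    Bs = s / gamma
    sBs = s @ Bs
    sy = s @ y
    if sy >= sigma * sBs:
        return y                            # curvature already acceptable
    theta = (1.0 - sigma) * sBs / (sBs - sy)
    return theta * y + (1.0 - theta) * Bs   # damped correction
```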

6.
In this paper, the first two terms on the right-hand side of the Broyden–Fletcher–Goldfarb–Shanno update are scaled with a positive parameter, while the third one is scaled with another positive parameter. These scaling parameters are determined by minimizing the measure function introduced by Byrd and Nocedal (SIAM J Numer Anal 26:727–739, 1989). The obtained algorithm is close to the algorithm based on clustering the eigenvalues of the Broyden–Fletcher–Goldfarb–Shanno approximation of the Hessian and shifting its large eigenvalues to the left, but it is not superior to it. Under classical assumptions, convergence is proved by using the trace and the determinant of the iteration matrix. On a set of 80 unconstrained optimization test problems, it is shown that the algorithm minimizing the measure function of Byrd and Nocedal is more efficient and more robust than some other scaling Broyden–Fletcher–Goldfarb–Shanno algorithms, including the variants of Biggs (J Inst Math Appl 12:337–338, 1973), Yuan (IMA J Numer Anal 11:325–332, 1991), Oren and Luenberger (Manag Sci 20:845–862, 1974) and of Nocedal and Yuan (Math Program 61:19–37, 1993). However, it is less efficient than the algorithms based on clustering the eigenvalues of the iteration matrix and shifting its large eigenvalues to the left, as shown by Andrei (J Comput Appl Math 332:26–44, 2018; Numer Algorithms 77:413–432, 2018).
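The Byrd–Nocedal measure function that the scaling parameters are chosen to minimize is psi(A) = trace(A) − ln det(A). A minimal sketch of evaluating it is given below; how the two scaling parameters enter the BFGS update is specific to the paper and not reproduced.

```python
# Sketch: the Byrd-Nocedal measure function psi(A) = trace(A) - ln det(A).
import numpy as np

def byrd_nocedal_measure(A):
    sign, logdet = np.linalg.slogdet(A)
    if sign <= 0:
        return np.inf                      # not positive definite
    return np.trace(A) - logdet
```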

7.
Based on an eigenvalue analysis, the condition number of the scaled memoryless BFGS (Broyden–Fletcher–Goldfarb–Shanno) updating formula is obtained. A modified scaling parameter is then proposed for this updating formula, minimizing the given condition number. The suggested scaling parameter can be considered a modified version of the self-scaling parameter proposed by Oren and Spedicato. Numerical experiments demonstrate the practical effectiveness of the proposed scaling parameter.
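A minimal sketch of a scaled memoryless BFGS inverse-Hessian approximation with the classical Oren–Spedicato self-scaling parameter theta = s^T y / y^T y; the modified, condition-number-minimizing parameter proposed in the paper is not shown.

```python
# Sketch of the scaled memoryless BFGS update of (theta * I).
import numpy as np

def scaled_memoryless_bfgs(s, y, theta=None):
    """Return H, the scaled memoryless BFGS inverse-Hessian approximation."""
    n = s.size
    rho = 1.0 / (s @ y)
    if theta is None:
        theta = (s @ y) / (y @ y)          # Oren-Spedicato self-scaling
    V = np.eye(n) - rho * np.outer(s, y)
    return theta * V @ V.T + rho * np.outer(s, s)

# Search direction at the next iterate: d = -H @ grad
```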

8.
Acceleration schemes can dramatically improve existing optimization procedures. In most of the work on these schemes, such as nonlinear generalized minimal residual (N-GMRES), acceleration is based on minimizing the ℓ2 norm of some target on subspaces of R^n. There are many numerical examples that show how accelerating general-purpose and domain-specific optimizers with N-GMRES results in large improvements. We propose a natural modification to N-GMRES, which significantly improves the performance in a testing environment originally used to advocate N-GMRES. Our proposed approach, which we refer to as O-ACCEL (objective acceleration), is novel in that it minimizes an approximation to the objective function on subspaces of R^n. We prove that O-ACCEL reduces to the full orthogonalization method for linear systems when the objective is quadratic, which differentiates our proposed approach from existing acceleration methods. Comparisons with the limited-memory Broyden–Fletcher–Goldfarb–Shanno and nonlinear conjugate gradient methods indicate the competitiveness of O-ACCEL. As it can be combined with domain-specific optimizers, it may also be beneficial in areas where limited-memory Broyden–Fletcher–Goldfarb–Shanno and nonlinear conjugate gradient methods are not suitable.

9.
Steepest descent preconditioning is considered for the recently proposed nonlinear generalized minimal residual (N-GMRES) optimization algorithm for unconstrained nonlinear optimization. Two steepest descent preconditioning variants are proposed. The first employs a line search, whereas the second employs a predefined small step. A simple global convergence proof is provided for the N-GMRES optimization algorithm with the first steepest descent preconditioner (with line search), under mild standard conditions on the objective function and the line search processes. Steepest descent preconditioning for N-GMRES optimization is also motivated by relating it to standard non-preconditioned GMRES for linear systems in the case of a standard quadratic optimization problem with symmetric positive definite operator. Numerical tests on a variety of model problems show that the N-GMRES optimization algorithm is able to very significantly accelerate convergence of stand-alone steepest descent optimization. Moreover, performance of steepest-descent preconditioned N-GMRES is shown to be competitive with standard nonlinear conjugate gradient and limited-memory Broyden–Fletcher–Goldfarb–Shanno methods for the model problems considered. These results serve to theoretically and numerically establish steepest-descent preconditioned N-GMRES as a general optimization method for unconstrained nonlinear optimization, with performance that appears promising compared with established techniques. In addition, it is argued that the real potential of the N-GMRES optimization framework lies in the fact that it can make use of problem-dependent nonlinear preconditioners that are more powerful than steepest descent (or, equivalently, N-GMRES can be used as a simple wrapper around any other iterative optimization process to seek acceleration of that process), and this potential is illustrated with a further application example.

10.
Nonsmooth optimization via quasi-Newton methods
We investigate the behavior of quasi-Newton algorithms applied to minimize a nonsmooth function f, not necessarily convex. We introduce an inexact line search that generates a sequence of nested intervals containing a set of points of nonzero measure that satisfy the Armijo and Wolfe conditions if f is absolutely continuous along the line. Furthermore, the line search is guaranteed to terminate if f is semi-algebraic. It seems quite difficult to establish a convergence theorem for quasi-Newton methods applied to such general classes of functions, so we give a careful analysis of a special but illuminating case, the Euclidean norm, in one variable using the inexact line search and in two variables assuming that the line search is exact. In practice, we find that when f is locally Lipschitz and semi-algebraic with bounded sublevel sets, the BFGS (Broyden–Fletcher–Goldfarb–Shanno) method with the inexact line search almost always generates sequences whose cluster points are Clarke stationary and with function values converging R-linearly to a Clarke stationary value. We give references documenting the successful use of BFGS in a variety of nonsmooth applications, particularly the design of low-order controllers for linear dynamical systems. We conclude with a challenging open question.
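A minimal sketch of a bracketing line search for the Armijo and weak Wolfe conditions, in the spirit of the nested-interval search described above; the tolerances, expansion factor, and termination rule here are illustrative, not the paper's exact procedure. The callables f and grad are assumed to take and return NumPy arrays.

```python
# Sketch of a bisection/expansion search for Armijo + weak Wolfe conditions.
def armijo_wolfe_search(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
    f0, g0 = f(x), grad(x) @ d                    # value and slope along d
    lo, hi, t = 0.0, float("inf"), 1.0
    for _ in range(max_iter):
        if f(x + t * d) > f0 + c1 * t * g0:       # Armijo fails: too long
            hi = t
        elif grad(x + t * d) @ d < c2 * g0:       # weak Wolfe fails: too short
            lo = t
        else:
            return t
        t = 2.0 * lo if hi == float("inf") else 0.5 * (lo + hi)
    return t
```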

11.
The formulation of a particular fluid–structure interaction as an optimal control problem is the departure point of this work. The control is the vertical component of the force acting on the interface, and the observation is the vertical component of the velocity of the fluid on the interface. This approach permits us to solve the coupled fluid–structure problem by partitioned procedures. The analytic expression for the gradient of the cost function is obtained in order to devise accurate numerical methods for the minimization problem. Numerical results arising from blood flow in arteries are presented. To solve the optimal control problem numerically, we use a quasi-Newton method that employs the analytic gradient of the cost function and updates the approximation of the inverse Hessian by the Broyden–Fletcher–Goldfarb–Shanno (BFGS) scheme. This algorithm is faster than fixed point with relaxation or block Newton methods.

12.
We present Nesterov-type acceleration techniques for alternating least squares (ALS) methods applied to canonical tensor decomposition. While Nesterov acceleration turns gradient descent into an optimal first-order method for convex problems by adding a momentum term with a specific weight sequence, a direct application of this method and weight sequence to ALS results in erratic convergence behavior, because it is ALS, not gradient descent, that is being accelerated in our nonconvex problem. Instead, we consider various restart mechanisms and suitable choices of momentum weights that enable effective acceleration. Our extensive empirical results show that the Nesterov-accelerated ALS methods with restart can be dramatically more efficient than stand-alone ALS or Nesterov's accelerated gradient methods when problems are ill-conditioned or accurate solutions are desired. The resulting methods perform competitively with or superior to existing acceleration methods for ALS, including ALS acceleration by nonlinear conjugate gradient, the nonlinear generalized minimal residual method, or limited-memory Broyden–Fletcher–Goldfarb–Shanno, and additionally enjoy the benefit of being much easier to implement. We also compare with Nesterov-type updates where the momentum weight is determined by a line search (LS), which are equivalent or closely related to existing LS methods for ALS. On a large and ill-conditioned 71×1,000×900 tensor consisting of readings from chemical sensors used to track hazardous gases, the restarted Nesterov-ALS method shows desirable robustness properties and outperforms any of the existing methods we compare with by a large factor. There is clear potential for extending our Nesterov-type acceleration approach to accelerating optimization algorithms other than ALS applied to other nonconvex problems, such as Tucker tensor decomposition.
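A minimal sketch of Nesterov-style momentum with a simple function-value restart, wrapped around an arbitrary inner update map (one ALS sweep, one gradient step, and so on). The momentum-weight sequence and restart test below are generic assumptions; the specific mechanisms studied in the paper are not reproduced.

```python
# Sketch of restarted Nesterov acceleration around a generic update map.
import numpy as np

def nesterov_with_restart(update, loss, x0, iters=100):
    """update: one inner-solver sweep; loss: objective; x0: NumPy array."""
    x_prev, x = x0.copy(), update(x0)
    k = 1
    for _ in range(iters):
        beta = (k - 1.0) / (k + 2.0)       # standard momentum weight
        y = x + beta * (x - x_prev)        # extrapolated point
        x_next = update(y)                 # accelerate the inner solver
        if loss(x_next) > loss(x):         # restart: momentum hurt progress
            x_next, k = update(x), 0
        x_prev, x, k = x, x_next, k + 1
    return x
```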

13.
Filter methods were initially designed for nonlinear programming problems by Fletcher and Leyffer. In this paper we propose a secant algorithm with a line-search filter method for nonlinear equality constrained optimization. The algorithm achieves global convergence under some reasonable conditions. By using the Lagrangian function value in the filter, we establish that the proposed algorithm can overcome the Maratos effect without a second-order correction step, so that fast local superlinear convergence to a second-order sufficient local solution is achieved. Preliminary numerical results are presented to confirm the robustness and efficiency of our approach.
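A minimal sketch of the basic filter acceptance test used by filter line-search methods: a trial point with constraint violation h and merit value f is accepted only if no stored filter entry dominates it. The margin parameter is illustrative, and note that this paper uses the Lagrangian function value in place of the plain objective.

```python
# Sketch of filter dominance test on (constraint violation, merit value) pairs.
def acceptable(h, f, filter_entries, gamma=1e-5):
    """Accept (h, f) if it is not dominated by any (h_j, f_j) in the filter."""
    return all(h <= (1.0 - gamma) * h_j or f <= f_j - gamma * h_j
               for (h_j, f_j) in filter_entries)
```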

14.
The artificial bee colony (ABC) algorithm, recently invented by Karaboga, is a biologically inspired optimization algorithm that has been shown to be competitive with other biologically inspired algorithms, such as the genetic algorithm (GA), differential evolution (DE) and particle swarm optimization (PSO). However, the ABC algorithm still has a deficiency in its solution search equation, which is good at exploration but poor at exploitation. Inspired by PSO, we propose an improved ABC algorithm called the gbest-guided ABC (GABC) algorithm, which incorporates the information of the global best (gbest) solution into the solution search equation to improve exploitation. Experimental results on a set of numerical benchmark functions show that the GABC algorithm outperforms the ABC algorithm in most of the experiments.
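A minimal sketch of the gbest-guided candidate-generation step: the standard ABC search equation plus a term pulling the perturbed dimension toward the global best solution. The constant C controls the strength of the gbest term; values around 1.5 are commonly used in this line of work, but the exact setting here is an assumption.

```python
# Sketch of the GABC solution search equation.
import numpy as np

def gabc_candidate(x_i, x_k, gbest, rng, C=1.5):
    """New candidate from food source x_i using a random neighbour x_k."""
    j = rng.integers(x_i.size)                  # perturb one random dimension
    phi = rng.uniform(-1.0, 1.0)                # standard ABC coefficient
    psi = rng.uniform(0.0, C)                   # gbest-guidance coefficient
    v = x_i.copy()
    v[j] = x_i[j] + phi * (x_i[j] - x_k[j]) + psi * (gbest[j] - x_i[j])
    return v
```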

15.
A fruit fly optimization algorithm based on the normal cloud model (NCMFOA) is proposed. By assigning the fruit fly position directly to the smell concentration judgment value and introducing the normal cloud model to characterize the randomness and fuzziness of the flies' smell-based search behavior, the algorithm removes the defect that the fruit fly optimization algorithm (FOA) cannot search negative-valued spaces, and it effectively overcomes FOA's tendency to fall into local optima on complex optimization problems. Through dynamic adjustment of the entropy of the normal cloud model, NCMFOA keeps strong randomness and fuzziness in the early stage of evolution, which improves its global exploration ability; as the iterations proceed, the randomness and fuzziness of the search behavior gradually weaken, so its local exploitation ability strengthens and the convergence accuracy improves. In addition, a real-time vision update scheme further accelerates convergence. The feasibility and effectiveness of NCMFOA are verified on classical benchmark functions; the results show that the algorithm converges quickly, attains high accuracy, and is robust, and it also performs well on high-dimensional complex optimization problems. Applying NCMFOA to parameter estimation for chaotic systems further confirms its ability to solve practical engineering optimization problems.
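A minimal sketch of the normal cloud model drop generator that such an algorithm uses to randomize the smell-based search around the current best position; the entropy schedule and vision-update details of the paper are not reproduced, and the numerical values below are illustrative.

```python
# Sketch of normal cloud model sampling: x ~ N(Ex, En'^2), En' ~ N(En, He^2).
import numpy as np

def cloud_drop(Ex, En, He, rng):
    """One cloud drop around expectation Ex with entropy En, hyper-entropy He."""
    En_prime = rng.normal(En, He)
    return rng.normal(Ex, abs(En_prime))

# Example: candidate positions scattered around a current best of 3.2
rng = np.random.default_rng(1)
candidates = [cloud_drop(3.2, En=0.5, He=0.05, rng=rng) for _ in range(5)]
```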

16.
In the optimization problem for pseudo-Boolean functions we consider a local search algorithm with a generalized neighborhood. This neighborhood is constructed for a locally optimal solution and includes nearby locally optimal solutions. We present some results of simulations for pseudo-Boolean functions whose optimization is equivalent to the problems of facility location, set covering, and competitive facility location. The goal of these experiments is to obtain a comparative estimate for the locally optimal solutions found by the standard local search algorithm and the local search algorithm using a generalized neighborhood.
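A minimal sketch of the plain one-flip local search that serves as the baseline against which the generalized-neighborhood variant is compared; constructing the generalized neighborhood itself is specific to the paper and not shown.

```python
# Sketch of greedy one-flip local search maximizing a pseudo-Boolean function.
def local_search(f, x):
    """f maps a 0/1 vector (list or NumPy array) to a value; x is modified."""
    best = f(x)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            x[i] = 1 - x[i]                 # flip bit i
            val = f(x)
            if val > best:
                best, improved = val, True  # keep the improving flip
            else:
                x[i] = 1 - x[i]             # undo the flip
    return x, best
```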

17.
This paper considers the routing of vehicles with limited capacity from a central depot to a set of geographically dispersed customers where actual demand is revealed only when the vehicle arrives at the customer. The solution to this vehicle routing problem with stochastic demand (VRPSD) involves the optimization of complete routing schedules with minimum travel distance, driver remuneration, and number of vehicles, subject to a number of constraints such as time windows and vehicle capacity. To solve such a multiobjective and multi-modal combinatorial optimization problem, this paper presents a multiobjective evolutionary algorithm that incorporates two VRPSD-specific heuristics for local exploitation and a route simulation method to evaluate the fitness of solutions. A new way of assessing the quality of solutions to the VRPSD on top of comparing their expected costs is also proposed. It is shown that the algorithm is capable of finding useful tradeoff solutions for the VRPSD and the solutions are robust to the stochastic nature of the problem. The developed algorithm is further validated on a few VRPSD instances adapted from Solomon’s vehicle routing problem with time windows (VRPTW) benchmark problems.

18.
A practical industrial process is usually a dynamic process involving uncertainty. Stochastic constraints can be used for industrial process modeling when system state and/or control input constraints cannot be strictly satisfied. Thus, optimal control of switched systems with stochastic constraints can address practical industrial process problems with different modes. In general, obtaining an analytical solution of the optimal control problem is very difficult because of the discrete nature of the switching law and the complexity of the stochastic constraints. To obtain a numerical solution, this problem is formulated as a constrained nonlinear parameter selection problem (CNPSP) based on a relaxation transformation (RT) technique, an adaptive sample approximation (ASA) method, a smooth approximation (SA) technique, and a control parameterization (CP) method. Following that, a penalty function-based random search (PFRS) algorithm is designed for solving the CNPSP, based on a novel search rule-based penalty function (NSRPF) method and a novel random search (NRS) algorithm. The convergence results show that the proposed method is globally convergent. Finally, an optimal control problem in automobile test-driving with gear shifts (ATGS) is extended to illustrate the effectiveness of the proposed method by taking into account some stochastic constraints. Numerical results show that, compared with other typical methods, the proposed method is less conservative and attains stable and robust performance under small perturbations of the initial system state. In addition, to balance the computational effort and the accuracy of the numerical solution, a tolerance setting method is also provided through numerical analysis.

19.
The inverse problem of an optimization problem is the following: given a feasible solution of the problem, minimize the change (under some norm) to the parameters of the objective function so that the given feasible solution becomes optimal for the modified problem. For the uncapacitated facility location problem, which is itself NP-hard, we prove that the inverse problem remains NP-hard. We study computing the inverse uncapacitated facility location problem with the classical row generation algorithm and give heuristic methods for obtaining upper and lower bounds on the inverse problem. The two methods provide, respectively, an upper bound based on solving the linear relaxation of the subproblems, and a lower bound based on neighborhood search with a prescribed number of iterations. Numerical results show that the upper bound from the linear relaxation is close to the optimal value but yields little gain in efficiency, whereas the lower bound from the heuristic is extremely close to the optimal value and greatly improves the efficiency of solving this inverse problem.

20.
A new filter-line-search algorithm for unconstrained nonlinear optimization problems is proposed. Based on the filter technique introduced by Fletcher and Leyffer (Math. Program. 91:239–269, 2002), it extends an existing technique of Wächter and Biegler (SIAM J. Comput. 16:1–31, 2005) for nonlinear equality constrained problems to the fully general unconstrained optimization problem. The proposed method, which differs from their approach, does not depend on any external restoration procedure. Global and local quadratic convergence is established under some reasonable conditions. The results of numerical experiments indicate that it is very competitive with the classical line search algorithm.
