Similar Documents
A total of 20 similar documents were found.
1.
A new approximation method is presented for directly minimizing a composite nonsmooth function that is locally Lipschitzian. The method approximates only the generalized gradient vector, enabling well-developed smooth optimization algorithms to be used directly for solving composite nonsmooth optimization problems. The generalized gradient vector is approximated on each design-variable coordinate using only the active components of the subgradient vectors; its usability is then validated numerically through the Pareto optimum concept. To show the performance of the proposed method, we solve four academic composite nonsmooth optimization problems and two dynamic response optimization problems with multiple criteria. Specifically, the optimization results of the two dynamic response optimization problems are compared with those obtained by three typical multicriteria optimization strategies: the weighting method, the distance method, and the min–max method, which introduces an artificial design variable to replace the max-value cost function with additional inequality constraints. The comparisons show that the proposed approximation method gives more accurate and efficient results than the other methods.
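To make the min–max reformulation mentioned above concrete, the following minimal sketch replaces the max-value cost with an artificial variable beta and inequality constraints, and solves the resulting smooth problem with SciPy's SLSQP. The two criteria f1 and f2 are illustrative placeholders, not the paper's test problems.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative min-max problem: minimize max(f1(x), f2(x)).
# Introducing an artificial variable beta and constraints f_i(x) <= beta
# turns the nonsmooth max into a smooth constrained NLP.
def f1(x):
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f2(x):
    return x[0] ** 2 + (x[1] + 2.0) ** 2

def objective(z):            # z = (x1, x2, beta); minimize beta
    return z[2]

constraints = [
    {"type": "ineq", "fun": lambda z: z[2] - f1(z[:2])},  # beta >= f1(x)
    {"type": "ineq", "fun": lambda z: z[2] - f2(z[:2])},  # beta >= f2(x)
]

z0 = np.array([0.0, 0.0, 10.0])
res = minimize(objective, z0, method="SLSQP", constraints=constraints)
print("x* =", res.x[:2], " max-value =", res.x[2])
```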

2.
A flight accident rate prediction model based on support vector machines   Cited by: 1 (self-citations: 0; others: 1)
The flight accident rate is an important indicator of flight safety, and its prediction is a typical small-sample problem. To address the low prediction accuracy of current flight accident rate forecasting, a modeling method based on support vector regression (SVR) is proposed. Using a practical example, an SVR model for flight accident rate prediction is built and its predictions are compared with those of grey prediction and grey Markov chain prediction. Simulation results show that SVR offers high modeling accuracy and strong generalization ability, which validates the soundness and advancement of using SVR to predict aviation flight accident rates.
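A minimal sketch of the SVR modeling step described above, assuming a small hypothetical yearly accident-rate series; the data, kernel, and hyperparameters are illustrative, not from the paper.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler

# Hypothetical small-sample series: year index -> accident rate (illustrative units).
years = np.arange(1, 11).reshape(-1, 1).astype(float)
rates = np.array([4.1, 3.8, 3.9, 3.5, 3.2, 3.3, 2.9, 2.7, 2.6, 2.4])

scaler = StandardScaler()
X = scaler.fit_transform(years)

# epsilon-SVR with an RBF kernel; C and epsilon would normally be tuned,
# e.g. by leave-one-out cross-validation given the small sample.
model = SVR(kernel="rbf", C=10.0, epsilon=0.05, gamma="scale").fit(X, rates)

next_year = scaler.transform([[11.0]])
print("predicted accident rate:", model.predict(next_year)[0])
```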

3.
Vector optimization problems are a significant extension of multiobjective optimization, which has a large number of real-life applications. In vector optimization the preference order is induced by an arbitrary closed and convex cone, rather than the nonnegative orthant. We consider extensions of the projected gradient method to vector optimization, which work directly with vector-valued functions, without using scalar-valued objectives. We provide a direction which adequately substitutes for the projected gradient, and establish results which mirror those available for the scalar-valued case, namely stationarity of the cluster points (if any) without convexity assumptions, and convergence of the full sequence generated by the algorithm to a weakly efficient optimum in the convex case, under mild assumptions. We also prove that our results still hold when the search direction is only approximately computed.
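For the special case in which the ordering cone is the nonnegative orthant (plain multiobjective optimization), the sketch below computes the direction-finding subproblem that plays the role of the projected gradient. It is an illustration under that assumption, not the paper's general cone setting or exact subproblem.

```python
import numpy as np
from scipy.optimize import minimize

# Direction subproblem for unconstrained multiobjective steepest descent:
#   min_{d,t}  t + 0.5*||d||^2   s.t.  grad_i^T d <= t  for every objective i.
# At a stationary point the optimal d is 0; otherwise d is a common descent direction.
def descent_direction(grads):
    m, n = grads.shape

    def obj(z):
        d, t = z[:n], z[n]
        return t + 0.5 * d @ d

    cons = [{"type": "ineq", "fun": lambda z, g=g: z[n] - g @ z[:n]} for g in grads]
    res = minimize(obj, np.zeros(n + 1), method="SLSQP", constraints=cons)
    return res.x[:n]

# Example: gradients of two quadratic objectives evaluated at x = (1, 1).
x = np.array([1.0, 1.0])
grads = np.array([2 * (x - np.array([0.0, 0.0])),      # grad of ||x||^2
                  2 * (x - np.array([2.0, 0.0]))])     # grad of ||x - (2,0)||^2
print("common descent direction:", descent_direction(grads))
```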

4.
Molecular similarity index measures the similarity between two molecules. Computing the optimal similarity index is a hard global optimization problem. Since the objective function value is very hard to compute and its gradient vector is usually not available, previous research has been based on non-gradient algorithms such as random search and the simplex method. In a recent paper, McMahon and King introduced a Gaussian approximation so that both the function value and the gradient vector can be computed analytically. They then proposed a steepest descent algorithm for computing the optimal similarity index of small molecules. In this paper, we consider a similar problem. Instead of computing atom-based derivatives, we directly compute the derivatives with respect to the six free variables describing the relative positions of the two molecules. We show that both the function value and the gradient vector can be computed analytically and apply the more advanced BFGS method in addition to the steepest descent algorithm. The algorithms are applied to compute the similarities among the 20 amino acids and among biomolecules such as proteins. Our computational results show that our algorithm achieves more accuracy than previous methods and has a 6-fold speedup over the steepest descent method.
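The sketch below illustrates the idea of optimizing a Gaussian-overlap similarity over the six rigid-body variables (three translations, three rotation angles) with SciPy's BFGS. The atom coordinates, the overlap form, and the use of a numerical gradient are placeholders and assumptions, not the paper's data or its analytic derivatives.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

# Toy atom coordinates for two small "molecules" (placeholders).
A = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.2, 0.0]])
B = np.array([[0.3, 0.1, 0.2], [1.8, 0.2, 0.1], [0.2, 1.4, 0.3]])

def gaussian_overlap(P, Q, alpha=0.5):
    # Sum of pairwise Gaussian overlaps between the two atom sets.
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return np.exp(-alpha * d2).sum()

def neg_similarity(p):
    # p = (tx, ty, tz, rx, ry, rz): translation plus rotation-vector angles.
    R = Rotation.from_rotvec(p[3:]).as_matrix()
    moved = B @ R.T + p[:3]
    return -gaussian_overlap(A, moved)

res = minimize(neg_similarity, np.zeros(6), method="BFGS")
print("optimal similarity:", -res.fun, "parameters:", res.x)
```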

5.
A similarity algorithm based on optimal data partition intervals is defined. Unit similarity vectors are obtained from the training samples, and the optimal partition interval for each data dimension is derived. Using these optimal intervals, unit similarity vectors between the prediction samples and the training samples are computed, from which the predicted values of the prediction samples are obtained. Examples show that the relative error of the predicted results can reach the percent level, and the algorithm can be applied to other data-processing tasks, giving it fairly broad generality.

6.
Boosting in the context of linear regression has become more attractive with the invention of least angle regression (LARS), where the connection between the lasso and forward stagewise fitting (boosting) has been established. Earlier it was found that boosting is a functional gradient optimization. Instead of the gradient, we propose a conjugate direction method (CDBoost). As a result, we obtain a fast forward stepwise variable selection algorithm. The conjugate direction of CDBoost is analogous to the constrained gradient in boosting. Using this analogy, we generalize CDBoost to: (1) include small step sizes (shrinkage), which often improves prediction accuracy; and (2) the nonparametric setting with fitting methods such as trees or splines, where least angle regression and the lasso seem to be infeasible. The step size in CDBoost tends to govern the degree of penalization between L0 and L1. This makes CDBoost surprisingly flexible. We compare the different methods on simulated and real datasets. CDBoost achieves the best predictions mainly in complicated settings with correlated covariates, where it is difficult to determine the contribution of a given covariate to the response. The gain of CDBoost over boosting is especially high in sparse cases with high signal-to-noise ratio and few effective covariates.
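To make the "boosting as functional gradient descent" connection concrete, here is a minimal componentwise L2-boosting loop with shrinkage for linear regression, i.e., the gradient-based baseline that CDBoost replaces with conjugate directions. It is an illustrative sketch, not the CDBoost algorithm itself.

```python
import numpy as np

def componentwise_l2_boost(X, y, steps=200, nu=0.1):
    # Forward stagewise linear regression: the residual is the negative
    # functional gradient of the squared loss; each step fits it on the
    # single best covariate and moves a small (shrunken) step nu.
    n, p = X.shape
    intercept = y.mean()
    resid = y - intercept
    beta = np.zeros(p)
    col_norms = (X ** 2).sum(axis=0)
    for _ in range(steps):
        coefs = X.T @ resid / col_norms          # univariate least-squares fits
        gains = coefs ** 2 * col_norms           # RSS reduction per covariate
        j = int(np.argmax(gains))
        beta[j] += nu * coefs[j]
        resid -= nu * coefs[j] * X[:, j]
    return intercept, beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = 3 * X[:, 0] - 2 * X[:, 3] + 0.1 * rng.standard_normal(100)
print(componentwise_l2_boost(X, y)[1].round(2))
```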

7.
This article combines techniques from two fields of applied mathematics: optimization theory and inverse problems. We investigate a generalized conditional gradient method and its connection to an iterative shrinkage method, which has been recently proposed for solving inverse problems. The iterative shrinkage method aims at the solution of non-quadratic minimization problems where the solution is expected to have a sparse representation in a known basis. We show that it can be interpreted as a generalized conditional gradient method. We prove the convergence of this generalized method for a general class of functionals, which includes non-convex functionals. This also gives a deeper understanding of the iterative shrinkage method.
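For concreteness, here is a minimal iterative soft-thresholding (ISTA) loop for the sparse-representation problem min_x 0.5*||Ax - b||^2 + lam*||x||_1, the kind of iterative shrinkage iteration that the article reinterprets as a generalized conditional gradient step. This is an illustrative sketch, not the authors' code.

```python
import numpy as np

def ista(A, b, lam, iters=500):
    # Iterative shrinkage/thresholding for  0.5*||Ax - b||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        z = x - grad / L                        # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold (shrinkage)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120)
x_true[[3, 40, 77]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(60)
print("recovered support:", np.nonzero(np.abs(ista(A, b, lam=0.1)) > 0.1)[0])
```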

8.
The spectral gradient method has proved to be effective for solving large-scale unconstrained optimization problems. It has been recently extended and combined with the projected gradient method for solving optimization problems on convex sets. This combination includes the use of nonmonotone line search techniques to preserve the fast local convergence. In this work we further extend the spectral choice of steplength to accept preconditioned directions when a good preconditioner is available. We present an algorithm that combines the spectral projected gradient method with preconditioning strategies to increase the local speed of convergence while keeping the global properties. We discuss implementation details for solving large-scale problems.
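A minimal sketch of the unpreconditioned spectral projected gradient iteration on a box-constrained problem, combining the spectral (Barzilai–Borwein) steplength, projection onto the feasible set, and a GLL-style nonmonotone line search. The test problem and parameter values are illustrative assumptions.

```python
import numpy as np

def spg_box(f, grad, x0, lo, hi, iters=200, M=10, sigma=1e-4):
    # Spectral projected gradient on a box: BB (spectral) steplength,
    # projection onto [lo, hi], and a nonmonotone Armijo test.
    proj = lambda x: np.clip(x, lo, hi)
    x = proj(x0); g = grad(x); alpha = 1.0
    hist = [f(x)]
    for _ in range(iters):
        d = proj(x - alpha * g) - x                  # projected spectral direction
        if np.linalg.norm(d) < 1e-10:
            break
        fmax, lam = max(hist[-M:]), 1.0
        while f(x + lam * d) > fmax + sigma * lam * (g @ d):
            lam *= 0.5                               # nonmonotone backtracking
        x_new = x + lam * d
        g_new = grad(x_new)
        s, yv = x_new - x, g_new - g
        alpha = (s @ s) / (s @ yv) if s @ yv > 1e-12 else 1.0   # BB steplength
        x, g = x_new, g_new
        hist.append(f(x))
    return x

# Example: a convex quadratic restricted to the box [0, 2]^2.
Q = np.array([[4.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, 3.0])
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b
print(spg_box(f, grad, np.array([2.0, 0.0]), 0.0, 2.0))
```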

9.
The spectral gradient method has proved to be effective for solving large-scale unconstrained optimization problems. It has been recently extended and combined with the projected gradient method for solving optimization problems on convex sets. This combination includes the use of nonmonotone line search techniques to preserve the fast local convergence. In this work we further extend the spectral choice of steplength to accept preconditioned directions when a good preconditioner is available. We present an algorithm that combines the spectral projected gradient method with preconditioning strategies to increase the local speed of convergence while keeping the global properties. We discuss implementation details for solving large-scale problems.

10.
Vector Variational Inequality and Vector Pseudolinear Optimization   Cited by: 7 (self-citations: 0; others: 7)
The study of a vector variational inequality has been advanced because it has many applications in vector optimization problems and vector equilibrium flows. In this paper, we discuss relations between a solution of a vector variational inequality and a Pareto solution or a properly efficient solution of a vector optimization problem. We show that a vector variational inequality is a necessary and sufficient optimality condition for an efficient solution of the vector pseudolinear optimization problem.
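For orientation, the Stampacchia-type (weak) vector variational inequality usually considered in this setting can be written as follows; this is the standard formulation for a differentiable objective f = (f_1, ..., f_m) on a feasible set K, stated here as background rather than quoted from the paper.

```latex
\text{(VVI)}\quad \text{find } x^{*}\in K \text{ such that, for every } y\in K,\quad
\bigl(\nabla f_{1}(x^{*})^{\mathsf T}(y-x^{*}),\,\dots,\,\nabla f_{m}(x^{*})^{\mathsf T}(y-x^{*})\bigr)\notin -\operatorname{int}\mathbb{R}^{m}_{+}.
```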

11.
简金宝 《数学研究》1996,29(4):72-78
By means of a new basis-pivoting operation, this paper establishes a new generalized reduced gradient method for optimization problems with nonlinear inequality constraints. The algorithm introduces no slack variables, which would enlarge the size of the problem, nor does it require prior estimates of bounds on the constraint functions and variables. Another important feature is that the search direction is no longer determined via implicit function theory but is given by a simple explicit formula. The method therefore has a small computational cost and a simple structure and is easy to apply. For a non-KKT point x, the direction we construct is a feasible descent direction. Global convergence of the algorithm is proved.
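As background for the reduced gradient idea, with the active constraints linearized as B*dx_B + N*dx_N = 0 under a basic/nonbasic partition of the variables, the reduced gradient that drives the search direction is the textbook quantity below; the paper's new explicit direction is a modification of this, not reproduced here.

```latex
r \;=\; \nabla_{N} f(x) \;-\; N^{\mathsf T} B^{-\mathsf T}\,\nabla_{B} f(x).
```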

12.
A mechanistic model for road traffic accident prediction based on the least squares method   Cited by: 1 (self-citations: 0; others: 1)
Based on similarity theory, a new method for road traffic accident prediction is proposed and a new nonlinear mechanistic model for traffic accident prediction is established, as a preliminary exploration of road traffic accident forecasting. Motor vehicle ownership is used as the model's input variable, and the model parameters are obtained by nonlinear least squares. Calculations show that the new prediction model achieves relatively high accuracy and has practical value, and it also provides a new theoretical approach to traffic accident prediction.
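A sketch of the parameter-fitting step described above, using SciPy's nonlinear least squares with motor vehicle ownership as the input variable. The saturating functional form and the data are illustrative assumptions, not the paper's actual mechanistic model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: motor vehicle ownership (10,000 vehicles) vs. yearly accidents.
vehicles = np.array([80, 95, 110, 130, 155, 180, 210, 240], dtype=float)
accidents = np.array([520, 570, 600, 650, 690, 710, 730, 745], dtype=float)

# Assumed saturating nonlinear form:  y = a * (1 - exp(-b * x)) + c.
def model(x, a, b, c):
    return a * (1.0 - np.exp(-b * x)) + c

params, _ = curve_fit(model, vehicles, accidents, p0=(800.0, 0.01, 0.0))
print("fitted parameters:", params)
print("prediction at ownership 260:", model(260.0, *params))
```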

13.
The gradient projection method is an effective class of algorithms for constrained optimization and occupies an important place in the optimization field. However, the projection it uses is an orthogonal projection that contains no second-order derivative information about the objective and constraint functions, so its convergence rate is not very satisfactory. This paper introduces the concept of a conjugate projection and uses it to construct a conjugate-projection variable-metric algorithm for general linear or nonlinear constraints, and proves that the algorithm is globally convergent under certain conditions. Because the conjugate projection appropriately incorporates second-order derivative information of the objective and constraint functions, a faster convergence rate can be expected. Numerical experiments show that the algorithm is effective.

14.
To fully exploit probabilistic neural networks (PNNs) in corporate financial distress early warning, and to overcome the difficulty of determining the PNN smoothing parameter and the network's high spatial complexity, this paper proposes a new particle swarm optimization (PSO) algorithm with dynamically adjusted parameters to optimize the PNN smoothing parameter. A fuzzy clustering method whose initial membership matrix is optimized by the improved PSO is then used to select the samples, resolving both the determination of the smoothing parameter and the complexity of the network structure. A fuzzy-clustering PNN early-warning model for corporate financial distress based on the improved PSO is proposed and tested empirically on Chinese listed companies. The results show that the PNN optimized by fuzzy clustering and the improved PSO achieves better predictive performance and is of some use for long-term early warning of corporate financial distress.
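A compact sketch of the core idea, tuning the PNN smoothing parameter with a plain particle swarm; the paper's dynamic parameter adjustment and fuzzy-clustering sample selection are omitted, and the two-class "financial ratio" data are synthetic.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma):
    # Probabilistic neural network: each class score is a sum of Gaussian
    # kernels centred at that class's training points; predict the argmax.
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        k = np.exp(-((X_train - x) ** 2).sum(axis=1) / (2.0 * sigma ** 2))
        preds.append(classes[int(np.argmax([k[y_train == c].sum() for c in classes]))])
    return np.array(preds)

def pso_tune_sigma(X, y, Xv, yv, n_particles=15, iters=30):
    # Basic PSO over the smoothing parameter sigma in (0.01, 3].
    rng = np.random.default_rng(0)
    pos = rng.uniform(0.01, 3.0, n_particles)
    vel = np.zeros(n_particles)
    fitness = lambda s: (pnn_predict(X, y, Xv, s) == yv).mean()
    pbest, pbest_fit = pos.copy(), np.array([fitness(s) for s in pos])
    gbest = pbest[np.argmax(pbest_fit)]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.01, 3.0)
        fit = np.array([fitness(s) for s in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmax(pbest_fit)]
    return gbest

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 4)), rng.normal(1.5, 1, (40, 4))])
y = np.array([0] * 40 + [1] * 40)
idx = rng.permutation(80)
train, val = idx[:60], idx[60:]
print("tuned smoothing parameter:", round(float(pso_tune_sigma(X[train], y[train], X[val], y[val])), 3))
```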

15.
Geometric consideration of duality in vector optimization   Cited by: 1 (self-citations: 0; others: 1)
Recently, duality in vector optimization has been attracting the interest of many researchers. In order to derive duality in vector optimization, it seems natural to introduce some vector-valued Lagrangian functions with matrix (or linear operator, in some cases) multipliers. This paper gives an insight into the geometry of vector-valued Lagrangian functions and duality in vector optimization. It is observed that supporting cones for convex sets play a key role, as well as supporting hyperplanes, traditionally used in single-objective optimization. The author would like to express his sincere gratitude to Prof. T. Tanino of Tohoku University and to some anonymous referees for their valuable comments.
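The vector-valued Lagrangian alluded to above typically has the following form, with a matrix multiplier acting on the constraint map; this is a common formulation in the vector duality literature, given here for orientation rather than quoted from the paper (f maps R^n to R^p, g maps R^n to R^m, and the orthant ordering is assumed).

```latex
L(x,\Lambda) \;=\; f(x) + \Lambda\, g(x),
\qquad \Lambda \in \mathbb{R}^{p\times m}\ \text{with}\ \Lambda\,\mathbb{R}^{m}_{+} \subseteq \mathbb{R}^{p}_{+}.
```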

16.
In this paper we propose a new Riemannian conjugate gradient method for optimization on the Stiefel manifold. We introduce two novel vector transports associated with the retraction constructed by the Cayley transform. Both of them satisfy the Ring-Wirth nonexpansive condition, which is fundamental for convergence analysis of Riemannian conjugate gradient methods, and one of them is also isometric. It is known that the Ring-Wirth nonexpansive condition does not hold for traditional vector transports as the differentiated retractions of QR and polar decompositions. Practical formulae of the new vector transports for low-rank matrices are obtained. Dai’s nonmonotone conjugate gradient method is generalized to the Riemannian case and global convergence of the new algorithm is established under standard assumptions. Numerical results on a variety of low-rank test problems demonstrate the effectiveness of the new method.
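To illustrate the Cayley-transform retraction that the new vector transports are built around, here is a small numerical check that it keeps an iterate on the Stiefel manifold. The sketch uses the Wen–Yin-style curve as an assumption; it is not the paper's exact formulae.

```python
import numpy as np

def cayley_retraction(X, G, tau):
    # X: n-by-p with orthonormal columns, G: Euclidean gradient at X.
    # Build the skew-symmetric matrix W = G X^T - X G^T and move along
    # Y(tau) = (I + tau/2 * W)^{-1} (I - tau/2 * W) X,
    # which stays on the Stiefel manifold for every tau.
    n = X.shape[0]
    W = G @ X.T - X @ G.T
    I = np.eye(n)
    return np.linalg.solve(I + 0.5 * tau * W, (I - 0.5 * tau * W) @ X)

rng = np.random.default_rng(0)
n, p = 8, 3
X, _ = np.linalg.qr(rng.standard_normal((n, p)))     # a point on St(n, p)
G = rng.standard_normal((n, p))                       # a stand-in gradient
Y = cayley_retraction(X, G, tau=0.3)
print("orthonormality error:", np.linalg.norm(Y.T @ Y - np.eye(p)))
```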

17.
Adaptive Two-Point Stepsize Gradient Algorithm   Cited by: 7 (self-citations: 0; others: 7)
Combined with the nonmonotone line search, the two-point stepsize gradient method has successfully been applied to large-scale unconstrained optimization. However, the numerical performance of the algorithm depends heavily on M, one of the parameters in the nonmonotone line search, even for ill-conditioned problems. This paper proposes an adaptive nonmonotone line search. The two-point stepsize gradient method is shown to be globally convergent with this adaptive nonmonotone line search. Numerical results show that the adaptive nonmonotone line search is especially suitable for the two-point stepsize gradient method.
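A minimal sketch of the two-point stepsize (Barzilai–Borwein) gradient method with a fixed-window nonmonotone line search, illustrating the role of the parameter M discussed above; this is the baseline iteration, not the paper's adaptive rule.

```python
import numpy as np

def bb_nonmonotone(f, grad, x0, M=10, iters=500, sigma=1e-4, tol=1e-8):
    # Two-point stepsize gradient method: the BB steplength alpha = s^T s / s^T y
    # mimics a quasi-Newton scaling, safeguarded by a nonmonotone Armijo test
    # against the largest of the last M objective values.
    x = x0.copy(); g = grad(x); alpha = 1.0
    hist = [f(x)]
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = -alpha * g
        fmax, lam = max(hist[-M:]), 1.0
        while f(x + lam * d) > fmax + sigma * lam * (g @ d):
            lam *= 0.5
        x_new = x + lam * d
        g_new = grad(x_new)
        s, yv = x_new - x, g_new - g
        alpha = (s @ s) / (s @ yv) if s @ yv > 1e-12 else 1.0
        x, g = x_new, g_new
        hist.append(f(x))
    return x

# Example: an ill-conditioned convex quadratic.
D = np.diag([1.0, 50.0, 400.0])
f = lambda x: 0.5 * x @ D @ x
grad = lambda x: D @ x
print(bb_nonmonotone(f, grad, np.array([1.0, 1.0, 1.0])))
```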

18.
Convex optimization methods are used for many machine learning models such as the support vector machine. However, the requirement of a convex formulation can place limitations on machine learning models. In recent years, a number of machine learning methods not requiring convexity have emerged. In this paper, we study non-convex optimization problems on the Stiefel manifold, in which the feasible set consists of rectangular matrices with orthonormal column vectors. We present examples of non-convex optimization problems in machine learning and apply three nonlinear optimization methods for finding a local optimal solution: the geometric gradient descent method, the augmented Lagrangian method of multipliers, and the alternating direction method of multipliers. Although the geometric gradient method is often used to solve non-convex optimization problems on the Stiefel manifold, we show that the alternating direction method of multipliers generally produces higher quality numerical solutions within a reasonable computation time.
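For orientation, here is a sketch of one geometric (Riemannian) gradient step on the Stiefel manifold using a QR-based retraction, applied to the classical trace maximization max tr(X^T A X); the test matrix, step size, and iteration count are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

def stiefel_gradient_ascent(A, n, p, steps=300, eta=0.05, seed=0):
    # Maximize tr(X^T A X) over the Stiefel manifold (X^T X = I_p).
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))   # random feasible start
    for _ in range(steps):
        G = 2.0 * A @ X                                # Euclidean gradient
        sym = 0.5 * (X.T @ G + G.T @ X)
        xi = G - X @ sym                               # Riemannian gradient (tangent vector)
        X, _ = np.linalg.qr(X + eta * xi)              # QR retraction back onto the manifold
    return X

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = 0.5 * (M + M.T)                                    # symmetric test matrix
X = stiefel_gradient_ascent(A, 6, 2)
print("objective:", np.trace(X.T @ A @ X))
print("top-2 eigenvalue sum:", np.sort(np.linalg.eigvalsh(A))[-2:].sum())
```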

19.
An improved hybrid adjoint method for the viscous, compressible Reynolds-Averaged Navier-Stokes (RANS) equations is developed for the computation of objective function gradients and demonstrated for external aerodynamic design optimization. The main idea is to extend the previous coupling of the discrete and continuous adjoint methods with a grid-node coordinate variation technique for computing the variation in the gradients of the flow variables. This approach, in combination with the Jacobian matrices of the flow fluxes, frees the objective function gradient from field integrals and the coordinate transformation matrix. It thus opens up the possibility of employing the hybrid adjoint method to evaluate the objective function gradient with respect to many shape parameters using only boundary integrals. This avoids regenerating the grid for every surface perturbation, in both structured and unstructured grids, and therefore reduces the overall CPU cost. Moreover, the new hybrid adjoint method has been successfully applied to the computation of accurate sensitivity derivatives. Finally, to investigate the presented numerical method, simulations are carried out on the NACA0012 airfoil in the transonic regime, and the accuracy and effectiveness of the new gradient equation are verified against the finite difference method (FDM). The analysis reveals that the presented optimization methodology provides the designer with a CPU-cost-effective tool for reshaping complex airfoil geometries in less computing time than the state of the art.
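For readers unfamiliar with adjoint-based sensitivities, the gradient structure that such a method evaluates can be summarized by the standard discrete adjoint relations below, written generically rather than quoted from the article: J is the objective, w the flow variables, R(w, alpha) = 0 the discretized RANS residual, alpha the shape parameters, and psi the adjoint variables. The cost of the gradient is then essentially independent of the number of design parameters.

```latex
\left(\frac{\partial R}{\partial w}\right)^{\!\mathsf T}\!\psi
  = -\left(\frac{\partial J}{\partial w}\right)^{\!\mathsf T},
\qquad
\frac{\mathrm{d}J}{\mathrm{d}\alpha}
  = \frac{\partial J}{\partial \alpha}
  + \psi^{\mathsf T}\,\frac{\partial R}{\partial \alpha}.
```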

20.
During the last few years, conjugate-gradient methods have been found to be the best available tool for large-scale minimization of nonlinear functions occurring in geophysical applications. While vectorization techniques have been applied to linear conjugate-gradient methods designed to solve symmetric linear systems of algebraic equations, arising mainly from discretization of elliptic partial differential equations, due to their suitability for vector or parallel processing, no such effort was undertaken for the nonlinear conjugate-gradient method for large-scale unconstrained minimization. Computational results are presented here using a robust memoryless quasi-Newton-like conjugate-gradient algorithm by Shanno and Phua applied to a set of large-scale meteorological problems. These results point to the vectorization of the conjugate-gradient code inducing a significant speed-up in the function and gradient evaluation for the nonlinear conjugate-gradient method, resulting in a sizable reduction in the CPU time for minimizing nonlinear functions of 10^4 to 10^5 variables. This is particularly true for many real-life problems where the gradient and function evaluation take the bulk of the computational effort. It is concluded that vector computers are advantageous for large-scale numerical optimization problems where local minima of nonlinear functions are to be found using the nonlinear conjugate-gradient method. This research was supported by the Florida State University Supercomputer Computations Research Institute, which is partially funded by the US Department of Energy through Contract No. DE-FC05-85ER250000.
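As a minimal illustration of the nonlinear conjugate-gradient iteration whose function and gradient evaluations dominate the cost (and therefore benefit most from vectorization), here is a Polak–Ribière sketch on a large separable test function; it is not the Shanno–Phua memoryless quasi-Newton code referred to above.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, iters=200, tol=1e-6):
    # Polak-Ribiere+ nonlinear conjugate gradient with backtracking line search.
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                                   # safeguard: restart with steepest descent
            d = -g
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):    # Armijo backtracking
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ update
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Large separable test problem: every operation is plain vector arithmetic,
# which is exactly what vector hardware accelerates.
n = 100_000
a = np.linspace(1.0, 10.0, n)
f = lambda x: 0.5 * np.sum(a * x ** 2)
grad = lambda x: a * x
print("final objective:", f(nonlinear_cg(f, grad, np.ones(n))))
```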
