Similar Documents
20 similar documents found (search time: 146 ms)
1.
Monotone optimization refers to global optimization problems in which both the objective function and the constraint functions are monotone. This paper proposes a new convexification transformation that turns monotone functions into convex functions, thereby reducing a monotone optimization problem to an equivalent convex maximization or concave minimization problem, which is then solved for a global optimum by Hoffman's outer approximation method. The proposed convexification is compared with Tuy's polyblock outer approximation method; numerical comparisons show that it converges markedly faster than the polyblock method.
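As a rough illustration of the convexification idea (a minimal numerical sketch under my own assumptions; the power substitution below is a standard device in the monotone-optimization literature, not necessarily the exact transform proposed in the paper): after the change of variables x = y^(1/p), a strictly increasing function becomes concave in y once p is large enough, so the monotone problem can be rewritten in concave-minimization/convex-maximization form.

```python
import numpy as np

def f(x):
    # strictly increasing on [1, 6] (f'(x) = 1 + 0.9 cos x >= 0.1), but neither convex nor concave
    return x + 0.9 * np.sin(x)

def max_second_difference(g, lo, hi, n=2001):
    # a positive maximal second difference on the grid means g is not concave there
    y = np.linspace(lo, hi, n)
    v = g(y)
    return np.max(v[2:] - 2.0 * v[1:-1] + v[:-2])

for p in (1, 5, 12):
    g = lambda y: f(y ** (1.0 / p))          # f after the substitution x = y**(1/p)
    print(p, max_second_difference(g, 1.0, 6.0 ** p))
# the printed value flips from positive to negative as p grows:
# f(y**(1/p)) is concave in y for sufficiently large p
```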

2.
This paper studies the globally optimal control of separable non-convex large-scale systems and proposes a three-level hierarchical optimization algorithm. The algorithm first transforms the original problem into a separable multi-objective optimization problem, then convexifies the non-inferior frontier, and finally picks the global optimum of the original problem out of the non-inferior solution set. The theoretical foundations of the algorithm are established and its convergence is proved. Simulation results show that the algorithm is effective.

3.
郭科  韩德仁 《计算数学》2018,40(4):418-435
This paper reviews the basic concepts and results of monotone operator theory and operator splitting algorithms, focusing on the convergence theory and applications of the Forward-Backward and Douglas-Rachford splitting algorithms. It also surveys recent progress in applying these methods to non-convex optimization problems, along with several frontier and hot topics, and closes with a number of directions for future research.
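For orientation, a minimal Forward-Backward (proximal gradient) iteration for the lasso problem min_x ½||Ax−b||² + λ||x||₁, a standard textbook instance chosen here for illustration, not code from the surveyed papers:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal map of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L with L = ||A||_2^2

x = np.zeros(100)
for _ in range(500):
    grad = A.T @ (A @ x - b)                            # forward (gradient) step
    x = soft_threshold(x - step * grad, step * lam)     # backward (proximal) step
print(np.round(x[:8], 3))   # approximately recovers the sparse support of x_true
```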

4.
Properties of generalized metric S-KKM mappings and their applications to variational inequalities   (cited 3 times: 0 self-citations, 3 by others)
Hyper S-γ-generalized quasi-convex (quasi-concave) functions are introduced, and the relationship between generalized metric S-KKM mappings and hyper S-γ-generalized quasi-convex (quasi-concave) functions is established. As applications, a new Ky Fan minimax inequality and a new saddle point theorem in hyperconvex metric spaces are obtained.
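For reference, the classical topological-vector-space form of the Ky Fan minimax inequality that such results generalize (the paper's version replaces the convexity assumptions by hyper S-γ-generalized quasi-concavity in hyperconvex metric spaces):

$$\text{If } X \text{ is nonempty, compact and convex, } \varphi(\cdot,y) \text{ is quasiconcave and } \varphi(x,\cdot) \text{ is l.s.c., then}\qquad \min_{y\in X}\,\sup_{x\in X}\varphi(x,y)\;\le\;\sup_{x\in X}\varphi(x,x).$$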

5.
郦旭东 《计算数学》2020,42(4):385-404
In the era of big data, with ever-improving means of data acquisition, large-scale composite convex optimization problems arise widely in applications including statistical data analysis, machine and statistical learning, and signal and image processing. This paper presents a class of fast proximal point algorithms for large-scale composite convex optimization. Under easily computable approximation criteria and rather weak stationarity conditions, global convergence and local asymptotically superlinear convergence results are established for the algorithm. A semismooth Newton method based on duality is designed to solve the key subproblems arising in the proximal point algorithm efficiently and stably. Finally, the paper discusses how to exploit the non-smooth second-order information induced by the non-smooth regularizers in composite convex problems to greatly reduce the work required to solve the Newton linear systems inside the semismooth Newton algorithm, thereby further accelerating the proximal point method.
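A minimal proximal point iteration in one dimension (an illustrative sketch only; the paper's method is an inexact variant for composite problems whose subproblems are solved by a semismooth Newton method):

```python
import numpy as np

def prox_abs(v, c):
    # proximal map of c * |.| : prox(v) = sign(v) * max(|v| - c, 0)
    return np.sign(v) * np.maximum(np.abs(v) - c, 0.0)

# proximal point algorithm x_{k+1} = argmin_z |z| + (1/(2c)) * (z - x_k)^2
x, c = 5.0, 1.0
for k in range(8):
    x = prox_abs(x, c)
    print(k, x)   # reaches the minimizer x* = 0 after five steps and then stays there
```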

6.
Several generic uniqueness theorems for set-valued mappings are first given and then applied to the uniqueness of solutions of minimax problems, vector optimization problems, and fixed point problems. It is proved that, in the sense of Baire category, most minimax problems, vector optimization problems, and fixed point problems have a unique solution.

7.
By introducing the concept of generalized arcwise connectedness, sufficient optimality conditions and duality are studied for minimax non-convex fractional programming problems in R^n. Sufficient optimality conditions are obtained first; a dual model for the fractional program is then formulated, and weak, strong, and converse duality theorems are established.
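The problems in question typically take the following minimax fractional form (a generic statement given here for orientation; the paper's precise assumptions involve generalized arcwise connectedness of the data):

$$\min_{x\in S}\ \max_{1\le i\le p}\ \frac{f_i(x)}{g_i(x)},\qquad g_i(x)>0\ \text{ for all } x\in S.$$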

8.
Global optimal solutions of some classes of mathematical programming problems   (cited 4 times: 0 self-citations, 4 by others)
This paper gives several convexification and concavification methods for strictly monotone functions, by which a strictly monotone programming problem can be converted into an equivalent standard D.C. program or concave minimization problem. For non-monotone programming problems with a single strictly monotone constraint, a convexification and concavification method for the objective function is also given, converting such problems into equivalent concave minimization problems. Existing algorithms for D.C. programming and concave minimization can then be applied to find a global optimal solution of the original problem.
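For context, the two canonical target forms mentioned in the abstract are (standard definitions, not the paper's specific constructions):

$$\text{D.C. programming: } \min_{x\in D}\ g(x)-h(x)\ \ (g,h\ \text{convex});\qquad \text{concave minimization: } \min_{x\in D}\ c(x)\ \ (c\ \text{concave},\ D\ \text{convex}).$$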

9.
焦红伟  陈永强 《应用数学》2008,21(2):270-276
This paper gives a deterministic global optimization algorithm for a class of non-convex programming problems (NP). The class includes minimizing, over a non-convex feasible region, sums and differences of products of finitely many linear functions raised to powers, as well as generalized linear multiplicative programming and polynomial programming. By means of an equivalent problem and linearization techniques, the proposed algorithm converges to a global minimum of problem (NP).

10.
Properties of generalized metric S-KKM mappings and their applications to saddle point problems   (cited 3 times: 0 self-citations, 3 by others)
Generalized metric S-KKM mappings with a set-valued mapping S, together with hyper S-γ-generalized quasi-convex (quasi-concave) functions, are introduced; a generalized metric S-KKM principle and the relationship between generalized metric S-KKM mappings and hyper S-γ-generalized quasi-convex (quasi-concave) functions are established. As applications, a new Ky Fan minimax inequality and a new saddle point theorem in hyperconvex metric spaces are obtained.

11.
This article is concerned with a numerical simulation of shape optimization of the Oseen flow around a solid body. The shape gradient for shape optimization problem in a viscous incompressible flow is computed by the velocity method. The flow is governed by the Oseen equations with mixed boundary conditions containing the pressure. The structure of continuous shape gradient of the cost functional is derived by using the differentiability of a minimax formulation involving a Lagrange functional with a function space parametrization technique. A gradient type algorithm is applied to the shape optimization problem. Numerical examples show that our theory is useful for practical purpose and the proposed algorithm is feasible. © 2009 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2010
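For reference, the Oseen system governing the flow, stated in its standard strong form with a given convection field w (the paper's mixed boundary conditions involving the pressure are omitted here):

$$-\nu\,\Delta u + (w\cdot\nabla)u + \nabla p = f,\qquad \nabla\cdot u = 0\quad\text{in }\Omega.$$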

12.
The minimax theorem for a convex-concave bifunction is a fundamental theorem in optimization and convex analysis, and has a lot of applications in economics. In the last two decades, a nonconvex extension of this minimax theorem has been well studied under various generalized convexity assumptions. In this note, by exploiting the hidden convexity (joint range convexity) of separable homogeneous polynomials, we establish a nonconvex minimax theorem involving separable homogeneous polynomials. Our result complements the existing study of nonconvex minimax theorem by obtaining easily verifiable conditions for the nonconvex minimax theorem to hold.
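The classical convex-concave minimax theorem being extended reads, in its standard form:

$$\min_{x\in X}\,\max_{y\in Y} F(x,y)\;=\;\max_{y\in Y}\,\min_{x\in X} F(x,y),$$

valid when X and Y are compact convex sets and F is continuous, convex in x and concave in y; the present note obtains an analogue in which the hidden (joint-range) convexity of separable homogeneous polynomials substitutes for these convexity assumptions.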

13.
An algorithm for selecting features in the classification learning problem is considered. The algorithm is based on a modification of the standard criterion used in the support vector machine method. The new criterion adds to the standard criterion a penalty function that depends on the selected features. The solution of the problem is reduced to finding the minimax of a convex-concave function. As a result, the initial set of features is decomposed into three classes: unconditionally selected, weighted selected, and eliminated features. Original Russian Text © Yu.V. Goncharov, I.B. Muchnik, L.V. Shvartser, 2008, published in Zhurnal Vychislitel’noi Matematiki i Matematicheskoi Fiziki, 2008, Vol. 48, No. 7, pp. 1318–1336.

14.
In this article, an approach for solving finite minimax problems is proposed. This approach is based on the use of hyperbolic smoothing functions. In order to apply the hyperbolic smoothing, we reformulate the objective function in the minimax problem and study the relationship between the original minimax and reformulated problems. We also study the main properties of the hyperbolic smoothing function. Based on these results, an algorithm for solving the finite minimax problem is proposed and implemented in the General Algebraic Modelling System (GAMS). We present preliminary results of numerical experiments with well-known nonsmooth optimization test problems. We also compare the proposed algorithm with an algorithm that uses the exponential smoothing function, as well as with an algorithm based on a nonlinear programming reformulation of the finite minimax problem.
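A minimal sketch of the hyperbolic smoothing device on a tiny two-function minimax in one variable (illustration only, with my own test functions; the paper reformulates the objective first and implements the method in GAMS rather than Python):

```python
import numpy as np
from scipy.optimize import minimize

def smooth_max(vals, tau):
    # max(a, b) = a + max(0, b - a), with max(0, s) ~ (s + sqrt(s^2 + tau^2)) / 2
    m = vals[0]
    for v in vals[1:]:
        s = v - m
        m = m + 0.5 * (s + np.sqrt(s * s + tau * tau))
    return m

# minimax problem: minimize max((x-1)^2, (x+1)^2); the exact solution is x* = 0
F = lambda z: smooth_max([(z[0] - 1.0) ** 2, (z[0] + 1.0) ** 2], tau=1e-3)
res = minimize(F, x0=[2.0], method="BFGS")
print(res.x)   # ~ [0.], the minimax point
```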

15.
High-dimensional constrained matrix regression refers to multi-response, multi-predictor statistical regression problems with non-convex constraints in high-dimensional settings; the underlying mathematical model is an NP-hard matrix optimization problem with wide applications in machine learning and artificial intelligence, medical imaging diagnosis and treatment, gene expression analysis, brain neural networks, risk management, and other fields. This paper surveys and reviews these new results from the two aspects of optimization theory and algorithms for high-dimensional constrained matrix regression, and lists the corresponding key references.
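One representative instance of such a model, assumed here for concreteness (the survey covers a broader family of non-convex constraints):

$$\min_{B\in\mathbb{R}^{p\times q}}\ \tfrac12\,\|Y-XB\|_F^2\qquad\text{s.t.}\quad \operatorname{rank}(B)\le r,$$

where Y is the n×q matrix of responses, X the n×p matrix of predictors, and the rank constraint is the source of non-convexity and NP-hardness.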

16.
During the last years, kernel based methods proved to be very successful for many real-world learning problems. One of the main reasons for this success is their efficiency on large data sets, which results from the fact that kernel methods like support vector machines (SVM) are based on a convex optimization problem. Solving a new learning problem can now often be reduced to the choice of an appropriate kernel function and kernel parameters. However, even the most powerful kernel methods can still fail on quite simple data sets when the feature space induced by the kernel function is not sufficient. In these cases, an explicit feature space transformation or detection of latent variables proved to be more successful. Since such explicit feature construction is often not feasible for large data sets, the ultimate goal for efficient kernel learning would be the adaptive creation of new and appropriate kernel functions. It cannot, however, be guaranteed that such a kernel function still leads to a convex optimization problem for support vector machines. Therefore, the optimization core of the learning method itself must be enhanced before it can be used with arbitrary, i.e., non-positive semidefinite, kernel functions. This article motivates the usage of appropriate feature spaces and discusses the possible consequences, which lead to non-convex optimization problems. We show that these new non-convex SVM variants are at least as accurate as their quadratic programming counterparts on eight real-world benchmark data sets in terms of generalization performance, and that they always outperform traditional approaches in terms of the original optimization problem. Additionally, the proposed algorithm is more generic than existing traditional solutions, since it also works for non-positive semidefinite or indefinite kernel functions.
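A quick numerical check of the core point: a popular kernel need not be positive semidefinite, which is exactly what breaks the convex QP behind standard SVMs (generic illustration with arbitrary parameters, not code from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
K = np.tanh(0.5 * (X @ X.T) - 1.0)        # sigmoid kernel k(x, z) = tanh(a <x, z> + c)
eigs = np.linalg.eigvalsh((K + K.T) / 2)  # symmetrize against rounding before eigvalsh
print(eigs.min(), eigs.max())             # a negative smallest eigenvalue => K is indefinite
```

For such parameter choices the Gram matrix typically has negative eigenvalues, so the dual SVM problem is no longer a convex QP.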

17.
Using general techniques for handling semi-infinite optimization problems, this paper extends a class of trust region algorithms for finite minimax problems to semi-infinite minimax problems, and proves the global convergence and superlinear convergence of the new algorithm.

18.
Convex optimization methods are used for many machine learning models such as the support vector machine. However, the requirement of a convex formulation can place limitations on machine learning models. In recent years, a number of machine learning methods not requiring convexity have emerged. In this paper, we study non-convex optimization problems on the Stiefel manifold, in which the feasible set consists of rectangular matrices with orthonormal column vectors. We present examples of non-convex optimization problems in machine learning and apply three nonlinear optimization methods for finding a local optimal solution: the geometric gradient descent method, the augmented Lagrangian method of multipliers, and the alternating direction method of multipliers. Although the geometric gradient method is often used to solve non-convex optimization problems on the Stiefel manifold, we show that the alternating direction method of multipliers generally produces higher quality numerical solutions within a reasonable computation time.
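A minimal sketch of the first of these methods, geometric (Riemannian) gradient descent on the Stiefel manifold with a QR retraction (a generic textbook scheme assumed here, not the paper's code):

```python
import numpy as np

def stiefel_grad_step(X, G, t):
    # project the Euclidean gradient G onto the tangent space at X, step, then retract
    xi = G - X @ ((X.T @ G + G.T @ X) / 2)
    Q, _ = np.linalg.qr(X - t * xi)       # QR retraction back onto {X : X^T X = I}
    return Q

# example: minimize -tr(X^T A X) over 8x2 orthonormal X => top-2 eigenvectors of A
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8)); A = (A + A.T) / 2
X, _ = np.linalg.qr(rng.normal(size=(8, 2)))
for _ in range(2000):
    X = stiefel_grad_step(X, -2 * A @ X, t=0.01)   # Euclidean gradient of -tr(X^T A X)
print(np.linalg.eigvalsh(X.T @ A @ X))   # ~ the two largest eigenvalues of A
print(np.linalg.eigvalsh(A)[-2:])        # reference values
```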

19.
20.
A stochastic approximation algorithm for minimax optimization problems is analyzed. At each iterate, it performs one random experiment, based on which it computes a direction vector. It is shown that, under suitable conditions, it converges almost surely to the set of points satisfying necessary optimality conditions. The algorithm and its analysis bring together ideas from stochastic approximation and nondifferentiable optimization.
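A generic sketch in the spirit of this scheme (one random experiment per iterate, then a step along an estimated direction); the test problem, function names, and step sizes here are my own illustration, not the analyzed algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x, i):
    # two noisy convex pieces; the minimax point of max_i E[f_i(x)] is x* = 0
    target = (-1.0, 1.0)[i]
    return (x - target) ** 2 + 0.1 * rng.normal()

x, h = 3.0, 0.1
for k in range(1, 5001):
    # one random experiment per iterate: estimate the maximizing index,
    # then take a noisy central-difference gradient step on that piece
    i = int(np.argmax([noisy_f(x, 0), noisy_f(x, 1)]))
    g = (noisy_f(x + h, i) - noisy_f(x - h, i)) / (2.0 * h)
    x -= g / k                 # diminishing Robbins-Monro step sizes
print(x)                       # ~ 0, up to stochastic error
```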
