Similar Literature
Found 20 similar documents (search time: 31 ms).
1.
We study the problem of maximizing the Choquet integral over a convex set. The problem is shown to be generally non-convex (and non-differentiable). We analyze the problem structure and propose local and global search algorithms. The special case in which the problem becomes concave is analyzed separately. For the non-convex case we propose a decomposition scheme which reduces a non-convex problem to several concave ones. Decomposition is performed by finding the coarsest partition of a capacity into a disjunction of totally monotone measures. We discuss its effectiveness and prove that the scheme is optimal for 2-additive capacities. An application of the developed methods to resource allocation problems concludes the article.
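For readers unfamiliar with the objective, the sketch below computes the discrete Choquet integral of a value vector with respect to a capacity supplied as a set function. The dictionary-based interface and helper names are illustrative assumptions, not taken from the paper.

    import numpy as np

    def choquet_integral(x, capacity):
        # Discrete Choquet integral of values x w.r.t. a capacity:
        # capacity maps frozensets of indices to [0, 1], is monotone
        # under inclusion, and capacity[frozenset()] == 0.
        x = np.asarray(x, dtype=float)
        order = np.argsort(x)          # indices sorted by increasing value
        total, prev = 0.0, 0.0
        for k, i in enumerate(order):
            A = frozenset(order[k:])   # criteria whose value is >= x[i]
            total += (x[i] - prev) * capacity[A]
            prev = x[i]
        return total

    # Two-criterion example: mu({0}) = 0.3, mu({1}) = 0.5, mu({0,1}) = 1
    cap = {frozenset(): 0.0, frozenset({0}): 0.3,
           frozenset({1}): 0.5, frozenset({0, 1}): 1.0}
    print(choquet_integral([0.2, 0.8], cap))  # 0.2*1.0 + 0.6*0.5 = 0.5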

2.
In this paper we present a robust duality theory for generalized convex programming problems in the face of data uncertainty within the framework of robust optimization. We establish robust strong duality for an uncertain nonlinear programming primal problem and its uncertain Lagrangian dual by showing strong duality between the deterministic counterparts: the robust counterpart of the primal model and the optimistic counterpart of its dual problem. A robust strong duality theorem is given whenever the Lagrangian function is convex. We provide classes of uncertain non-convex programming problems for which robust strong duality holds under a constraint qualification. In particular, we show that robust strong duality is guaranteed for non-convex quadratic programming problems with a single quadratic constraint under spectral-norm uncertainty and a generalized Slater condition. Numerical examples are given to illustrate the nature of robust duality for uncertain nonlinear programming problems. We further show that robust duality continues to hold under a weakened convexity condition.
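To fix ideas, here is a hedged sketch of the two deterministic counterparts that are paired (generic notation, not taken verbatim from the paper). For an uncertain program with objective $f$ and constraints $g_i(x,u_i)\le 0$, where each uncertainty parameter $u_i$ ranges over a set $\mathcal{U}_i$:

\[
\text{robust counterpart:}\quad \min_{x}\; f(x)\quad \text{s.t.}\quad g_i(x,u_i)\le 0\ \ \forall\, u_i\in\mathcal{U}_i,
\]
\[
\text{optimistic dual:}\quad \max_{\lambda\ge 0,\; u_i\in\mathcal{U}_i}\;\inf_{x}\Big(f(x)+\sum_i \lambda_i\, g_i(x,u_i)\Big).
\]

Robust strong duality in the sense above means these two optimal values coincide and the dual maximum is attained.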

3.
In this paper, we propose a robust support vector regression with a novel generic nonconvex quadratic ε-insensitive loss function. The proposed method is robust to outliers and noise since it can adaptively control the loss value and reduce their negative influence on the decision function by adjusting the elastic interval parameter and the adaptive robustification parameter. Given the non-convexity of the optimization problem, a concave-convex programming procedure is employed to solve it. Experimental results on two artificial data sets and three real-world data sets indicate that the proposed method outperforms support vector regression, L1-norm support vector regression, least squares support vector regression, robust least squares support vector regression, and support vector regression with the Huber loss function in both robustness and generalization ability.
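The paper's exact loss is not reproduced here; as a minimal sketch, the snippet below implements one generic truncated (hence non-convex) quadratic ε-insensitive loss, where eps stands in for the elastic interval parameter and theta for the robustification cap: both names are illustrative assumptions.

    import numpy as np

    def trunc_quad_eps_loss(r, eps=0.1, theta=1.0):
        # Residuals inside [-eps, eps] cost nothing; outside the tube the
        # cost grows quadratically but is capped at theta**2, so a single
        # large outlier cannot dominate the fit.
        excess = np.maximum(np.abs(r) - eps, 0.0)
        return np.minimum(excess**2, theta**2)

The cap is precisely what makes the problem non-convex, and it is why a concave-convex procedure, which writes the loss as a convex part minus a convex correction, is a natural solver.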

4.
In this paper, we present a novel sequential convex bilevel programming algorithm for the numerical solution of structured nonlinear min–max problems which arise in the context of semi-infinite programming. Our main motivation here is nonlinear inequality-constrained robust optimization problems. In the first part of the paper, we propose a conservative approximation strategy for such nonlinear and non-convex robust optimization problems: under the assumption that an upper bound for the curvature of the inequality constraints with respect to the uncertainty is given, we show how to formulate a lower-level concave min–max problem which approximates the robust counterpart in a conservative way. This approximation turns out to be exact in some relevant special cases and can be proven to be less conservative than existing approximation techniques that are based on linearization with respect to the uncertainties. In the second part of the paper, we review existing theory on optimality conditions for nonlinear lower-level concave min–max problems which arise in the context of semi-infinite programming. Treating the optimality conditions for the concave lower-level maximization problems as a constraint of the upper-level minimization problem, we end up with a structured mathematical program with complementarity constraints (MPCC). The special hierarchical structure of this MPCC can be exploited in a novel sequential convex bilevel programming algorithm. We discuss the surprisingly strong global and locally quadratic convergence properties of this method, which in this form can be obtained neither with existing SQP methods nor with interior-point relaxation techniques for general MPCCs. Finally, we discuss the application fields and implementation details of the new method and demonstrate its performance with a numerical example.

5.
Robust extreme learning machine for regression estimation based on an exponential Laplace loss function
Real-world data sets are usually affected by various kinds of noise, and when the extreme learning machine (ELM) learns from such data sets it exhibits low prediction accuracy and large fluctuations in its predictions. To overcome this drawback, an exponential Laplace loss function that weakens the influence of noise is adopted. This loss function is built on the Gaussian kernel function and is differentiable, non-convex, bounded, and able to approach the Laplace function. Introducing it into the extreme learning machine, we propose the exponential Laplace loss function based robust ELM for regression (ELRELM) model. An iteratively reweighted algorithm is used to solve the model's optimization problem. In each iteration, noisy sample points are assigned smaller weights, which effectively improves prediction accuracy. Experiments on real-world data sets verify that the proposed model has better learning performance and robustness than the comparison algorithms.
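A hedged reading of the loss described above (Gaussian-based, differentiable, non-convex, bounded): one function with these properties is the Welsch-type loss $L(r)=\tfrac{\sigma}{2}\,(1-e^{-r^2/\sigma})$; the exact form used in ELRELM may differ, so treat this and the reweighting rule below as an illustrative sketch.

    import numpy as np

    def welsch_type_loss(r, sigma=1.0):
        # Bounded, smooth, non-convex loss built from a Gaussian kernel.
        return (sigma / 2.0) * (1.0 - np.exp(-r**2 / sigma))

    def irls_weights(r, sigma=1.0):
        # Iteratively reweighted least-squares weight w(r) = L'(r) / r:
        # large residuals (likely noise) receive weights near zero.
        return np.exp(-r**2 / sigma)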

6.
When a radial basis function network (RBFN) is used for identification of a nonlinear multi-input multi-output (MIMO) system, the number of hidden-layer nodes, the initial parameters of the kernel, and the initial weights of the network must be determined first. For this purpose, a systematic approach that integrates support vector regression (SVR) and least squares regression (LSR) is proposed to construct the initial structure of the RBFN. The first step of the proposed method is to determine the number of hidden-layer nodes and the initial parameters of the kernel by the SVR method. Then the weights of the RBFN are determined by solving a simple minimization problem based on the concept of LSR. After initialization, an annealing robust learning algorithm (ARLA) is applied to train the RBFN. With the proposed initialization approach, the designed RBFN has few hidden-layer nodes while maintaining good performance. To show the feasibility and superiority of the annealing robust radial basis function networks (ARRBFNs) for identification of MIMO systems, several illustrative examples are included.
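A minimal single-output sketch of the two initialization steps (SVR to pick the kernel centers and width, least squares for the output weights), using scikit-learn and NumPy; the function name and parameter defaults are mine, and the MIMO case would repeat this per output.

    import numpy as np
    from sklearn.svm import SVR

    def init_rbfn(X, y, C=10.0, epsilon=0.1, gamma=0.5):
        # Step 1: SVR determines the hidden layer -- every support
        # vector becomes an RBF center, gamma fixes the kernel width.
        svr = SVR(kernel="rbf", C=C, epsilon=epsilon, gamma=gamma).fit(X, y)
        centers = svr.support_vectors_

        # Step 2: least squares regression determines the output weights.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        Phi = np.exp(-gamma * d2)                    # hidden-layer outputs
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # LSR for the weights
        return centers, gamma, w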

7.
When the follower's optimality conditions are both necessary and sufficient, the nonlinear bilevel program can be solved as a global optimization problem. The complementary slackness condition is usually the complicating constraint in such problems. We show how this constraint can be replaced by an equivalent system of convex and separable quadratic constraints. In this paper, we propose different methods for finding the global minimum of a concave function subject to quadratic separable constraints. The first method is of the branch-and-bound type and is based on rectangular partitions to obtain upper and lower bounds. Convergence of the proposed algorithm is also proved. For computational purposes, different procedures that accelerate the convergence of the proposed algorithm are analysed. The second method is based on piecewise linear approximations of the constraint functions. When the constraints are convex, the problem is reduced to global concave minimization subject to linear constraints. In the case of non-convex constraints, we use zero-one integer variables to linearize the constraints. The number of integer variables depends only on the concave parts of the constraint functions. Parts of the present paper were prepared while the second author was visiting Georgia Tech and the University of Florida.
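The key substitution can be sketched with a standard identity (the paper's exact construction may differ in detail). With multipliers $u\ge 0$ and follower constraints $g_i(x)\le 0$, every product $u_i\,g_i(x)$ is nonpositive, so complementary slackness is equivalent to $\sum_i u_i\,g_i(x)\ge 0$, and each bilinear product separates as

\[
u_i\, g_i(x) \;=\; \tfrac{1}{4}\Big[\big(u_i+g_i(x)\big)^2-\big(u_i-g_i(x)\big)^2\Big],
\]

turning the complicating condition into a system of separable quadratic constraints with convex square terms on each side.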

8.
We consider the problem of deleting bad influential observations (outliers) in linear regression models. The problem is formulated as a Quadratic Mixed Integer Programming (QMIP) problem, where penalty costs for discarding outliers are incorporated into the objective function. The optimal solution defines a robust regression estimator called penalized trimmed squares (PTS). Due to the high computational complexity of the resulting QMIP problem, the proposed robust procedure is computationally suitable for small-sample data. The computational performance and effectiveness of the new procedure are improved significantly by using the idea of the ε-insensitive loss function from support vector machine regression. Small errors are ignored, and the mathematical formulation gains the sparseness property. The good performance of the ε-insensitive PTS (IPTS) estimator allows identification of multiple outliers while avoiding masking or swamping effects. The computational effectiveness and successful outlier detection of the proposed method are demonstrated via simulated experiments. This research has been partially funded by the Greek Ministry of Education under the program Pythagoras II.

9.
This paper proposes a robust procedure for solving multiphase regression problems that is efficient enough to deal with data contaminated by atypical observations, whether due to measurement errors or drawn from heavy-tailed distributions. Combining the expectation-maximization (EM) algorithm with the M-estimation technique, we simultaneously derive robust estimates of the change-points and regression parameters. However, since the proposed method is still not resistant to high-leverage outliers, we further suggest a modified version that first moderately trims those outliers and then applies the new procedure to the trimmed data. This study sets up two robust algorithms, using the Huber loss function and Tukey's biweight function respectively to replace the least squares criterion in the normality-based EM algorithm, and illustrates the effectiveness and superiority of the proposed algorithms through extensive simulations and sensitivity analyses. Experimental results show the ability of the proposed method to withstand outliers and heavy-tailed distributions. Moreover, as resistance to high-leverage outliers is particularly important due to their devastating effect on fitting a regression model, various real-world applications show the practicability of this approach.
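For reference, the two loss functions named above in a minimal NumPy sketch; the tuning constants 1.345 and 4.685 are the usual defaults for 95% efficiency under normal errors, and the paper's choices may differ.

    import numpy as np

    def huber_loss(r, c=1.345):
        # Quadratic near zero, linear in the tails.
        a = np.abs(r)
        return np.where(a <= c, 0.5 * r**2, c * a - 0.5 * c**2)

    def tukey_biweight_loss(r, c=4.685):
        # Bounded: residuals beyond c all contribute the constant c**2 / 6,
        # so gross outliers exert no further pull on the fit.
        rho = (c**2 / 6.0) * (1.0 - (1.0 - (r / c)**2)**3)
        return np.where(np.abs(r) <= c, rho, c**2 / 6.0)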

10.
王亮, 《经济数学》, 2011, 28(2): 16-20
This paper demonstrates the formation mechanism of the nonlinear Taylor rule from two perspectives: the nonlinearity of the Phillips curve and the asymmetry of the central bank's loss function. The mathematical results show that whether the Phillips curve is convex or concave, inflation responds asymmetrically to the output gap, and in either case the Taylor rule becomes nonlinear; the asymmetry of the central bank's loss function is another cause of the nonlinearity of the Taylor rule. When there is a cautious demand for expansion, the interest-rate curve is concave; when there is a cautious demand for price stability…

11.
Deleting Outliers in Robust Regression with Mixed Integer Programming
In robust regression we often have to decide how many unusual observations there are, which should be removed from the sample in order to obtain a better fit for the rest of the observations. Generally, we use the basic principle of LTS, which is to fit the majority of the data, identifying as outliers those points that cause the biggest damage to the robust fit. However, in the LTS regression method the choice of default values for the high breakdown point seriously affects the efficiency of the estimator. In the proposed approach we introduce a penalty cost for discarding an outlier; consequently, the best fit for the majority of the data is obtained by discarding only catastrophic observations. This penalty cost is based on robust design weights and a high breakdown-point residual scale taken from the LTS estimator. The robust estimate is obtained by solving a convex quadratic mixed integer programming problem, in which the objective minimizes the sum of the squared residuals and the penalties for discarded observations. The proposed mathematical programming formulation is suitable for small-sample data. Moreover, we conduct a simulation study to compare other robust estimators with our approach in terms of their efficiency and robustness.
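As a hedged sketch of the mixed-integer formulation (notation assumed, not copied from the paper): with binaries $z_i=1$ when observation $i$ is discarded, a big-$M$ constant, and penalties $p_i$ built from the LTS-based weights and residual scale,

\[
\min_{\beta,\,r,\,z}\ \sum_{i=1}^{n} r_i^2+\sum_{i=1}^{n} p_i\,z_i
\quad\text{s.t.}\quad
-Mz_i \;\le\; y_i-x_i^{\top}\beta-r_i \;\le\; Mz_i,\qquad z_i\in\{0,1\}.
\]

When $z_i=0$ the variable $r_i$ must equal the residual and is charged quadratically; when $z_i=1$ it can be driven to zero and only the fixed penalty $p_i$ is paid, so only catastrophic observations are worth discarding.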

12.
Many important classes of decision models give rise to the problem of finding a global maximum of a convex function over a convex set. This problem is also known as concave minimization, concave programming, or convex maximization. Such problems can have many local maxima, so finding the global maximum is computationally difficult and standard nonlinear programming procedures fail. In this article, we provide a very simple and practical approach to finding the global solution of quadratic convex maximization problems over a polytope. A convex function achieves its global maximum at extreme points of the feasible domain. Since an inscribed ball does not contain any extreme points of the domain, we use the largest inscribed ball for an inner approximation, while a minimal enclosing box is exploited for an outer approximation of the domain. The approach is based on the use of these approximations along with a standard local search algorithm and cutting plane techniques.
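The largest inscribed ball of a polytope $\{x : Ax \le b\}$ (its Chebyshev ball) can be found with a single linear program: maximize the radius $r$ subject to $a_i^{\top}x + r\|a_i\| \le b_i$. A SciPy sketch of just this subroutine, independent of the paper's full algorithm:

    import numpy as np
    from scipy.optimize import linprog

    def chebyshev_ball(A, b):
        # Variables (center x, radius r); maximize r by minimizing -r.
        norms = np.linalg.norm(A, axis=1)
        A_lp = np.hstack([A, norms[:, None]])   # a_i^T x + ||a_i|| r <= b_i
        c = np.zeros(A.shape[1] + 1)
        c[-1] = -1.0
        res = linprog(c, A_ub=A_lp, b_ub=b,
                      bounds=[(None, None)] * A.shape[1] + [(0, None)])
        return res.x[:-1], res.x[-1]            # center, radius

    # Unit square [0, 1]^2: center (0.5, 0.5), radius 0.5
    A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
    b = np.array([1., 0., 1., 0.])
    print(chebyshev_ball(A, b))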

13.
Optimization, 2012, 61(5): 1131-1151
We present a bundle-type method for minimizing non-convex non-smooth functions. Our approach is based on the partition of the bundle into two sets, taking into account the local convex or concave behaviour of the objective function. Termination at a point satisfying an approximate stationarity condition is proved and numerical results are provided.

14.
Classical robust statistical methods for dealing with noisy data are often based on modifications of convex loss functions. In recent years, nonconvex loss-based robust methods have become increasingly popular. A nonconvex loss can provide robust estimation for data contaminated with outliers. The significant challenge is that a nonconvex loss can be numerically difficult to optimize. This article proposes a quadratic majorization algorithm for nonconvex loss (QManc). QManc decomposes a nonconvex loss into a sequence of simpler optimization problems. Subsequently, QManc is applied to a powerful machine learning algorithm: the quadratic majorization boosting algorithm (QMBA). We develop QMBA for robust classification (binary and multi-category) and regression. On high-dimensional cancer genetics data and in simulations, QMBA is comparable with convex loss-based boosting algorithms for clean data, and outperforms the latter for data contaminated with outliers. QMBA is also superior to boosting when the latter is directly implemented to optimize nonconvex loss functions. Supplementary material for this article is available online.
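The majorization step can be sketched in generic MM notation (not the paper's exact surrogate): if the residual loss satisfies $L''(r)\le M$ everywhere, then at the current iterate $r^{(t)}$

\[
Q\big(r \mid r^{(t)}\big)\;=\;L\big(r^{(t)}\big)+L'\big(r^{(t)}\big)\big(r-r^{(t)}\big)+\tfrac{M}{2}\big(r-r^{(t)}\big)^2\;\ge\;L(r),
\]

with equality at $r=r^{(t)}$, so minimizing the quadratic surrogate is a penalized least-squares problem and each iteration monotonically decreases the nonconvex loss.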

15.
The purpose of this paper is to give new formulations for the unconstrained 0–1 nonlinear problem. The problem is reduced to nonlinear continuous problems whose objective functions are piecewise linear. In the first formulation, the objective function is a difference of two convex functions, while the other formulations lead to concave problems. It is shown that the concave problems we obtain have fewer integer local minima than the classical concave formulation of the unconstrained 0–1 nonlinear problem.
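The standard device behind such continuous reformulations, sketched here as a textbook identity rather than the paper's specific construction: over the box $x\in[0,1]^n$,

\[
x\in\{0,1\}^n \iff \sum_{i=1}^{n} x_i\,(1-x_i)=0,
\]

and since $\sum_i x_i(1-x_i)$ is concave and vanishes exactly at the binary points, adding it to the objective with a sufficiently large multiplier yields an equivalent continuous problem whose added term is concave.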

16.
高岳林, 张博, 《计算数学》, 2020, 42(2): 207-222
This paper proposes a new global optimization algorithm for the linear sum-of-ratios programming problem, an NP-hard nonlinear programming problem. First, by introducing p auxiliary variables, the original problem is equivalently transformed into a nonlinear programming problem whose objective function is a sum of products and which adds p new nonlinear constraints to the original problem. Then, using the technique of constructing convex and concave envelopes, the objective function and constraints of the equivalent problem are linearly relaxed, yielding a lower-bounding linear relaxation of the equivalent problem. On this basis, a branch-and-bound algorithm for the original problem is proposed and its convergence is proved. Finally, numerical comparisons show that the proposed algorithm is feasible and effective.
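A hedged sketch of the equivalent transformation (notation assumed): for a linear sum-of-ratios objective with affine numerators $n_i$ and positive affine denominators $d_i$, introducing auxiliary variables $t_i$ gives

\[
\max_{x}\ \sum_{i=1}^{p}\frac{n_i(x)}{d_i(x)}
\;\Longleftrightarrow\;
\max_{x,\,t}\ \sum_{i=1}^{p} t_i\,n_i(x)
\quad\text{s.t.}\quad t_i\,d_i(x)=1,\ \ i=1,\dots,p,
\]

which matches the description above: a sum-of-products objective together with $p$ new nonlinear (bilinear) constraints, whose terms can then be bounded by convex and concave envelopes to obtain the linear relaxation.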

17.
This paper presents the multiple instance classification problem, which can be used for drug and molecular activity prediction, text categorization, image annotation, and object recognition. In order to model outliers more robustly, hard margin loss formulations that minimize the number of misclassified instances are proposed. Although the problem is $\mathcal{NP}$-hard, computational studies show that medium-sized problems can be solved to optimality in reasonable time using integer programming and constraint programming formulations. A three-phase heuristic algorithm is proposed for larger problems. Furthermore, different loss functions such as hinge loss, ramp loss, and hard margin loss are empirically compared in the context of multiple instance classification. The proposed heuristic and robust support vector machines with hard margin loss demonstrate superior generalization performance compared to other approaches for multiple instance learning.

18.
Least squares support vector machine (LS-SVM) for nonlinear regression is sensitive to outliers. Weighted LS-SVM (WLS-SVM) overcomes this drawback by adding a weight to each training sample. However, as the number of outliers increases, the accuracy of WLS-SVM may decrease. In order to improve the robustness of WLS-SVM, a new robust regression method based on WLS-SVM and penalized trimmed squares (WLSSVM–PTS) is proposed. The algorithm comprises three main stages. First, the initial parameters are obtained by least trimmed squares. Then, the significant outliers are identified and eliminated by the Fast-PTS algorithm. Finally, the remaining samples, which contain few outliers, are fitted by WLS-SVM. Statistical tests of experimental results carried out on numerical datasets and real-world datasets show that the proposed WLSSVM–PTS is significantly more robust than LS-SVM, WLS-SVM and LSSVM–LTS.

19.
Feature selection consists of choosing a subset of available features that captures the relevant properties of the data. In supervised pattern classification, a good choice of features is fundamental for building compact and accurate classifiers. In this paper, we develop an efficient feature selection method using the zero-norm $\ell_0$ in the context of support vector machines (SVMs). The discontinuity of $\ell_0$ at the origin makes the corresponding optimization problem difficult to solve. To overcome this drawback, we use a robust DC (difference of convex functions) programming approach, a general framework for non-convex continuous optimisation. We consider an appropriate continuous approximation to $\ell_0$ such that the resulting problem can be formulated as a DC program. Our DC algorithm (DCA) has finite convergence and requires solving one linear program at each iteration. Computational experiments on standard datasets, including challenging feature-selection problems from the NIPS 2003 feature selection challenge and gene selection for cancer classification, show that the proposed method is promising: while it suppresses more than 99% of the features, it can still provide good classification. Moreover, the comparative results illustrate the superiority of the proposed approach over standard methods such as classical SVMs and concave feature selection.
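One common continuous surrogate of the kind described (the paper may use a different approximant; this is a representative sketch) is the exponential concave approximation, which admits an explicit DC decomposition:

\[
\|w\|_0 \;\approx\; \sum_{j=1}^{d}\big(1-e^{-\alpha|w_j|}\big)
\;=\; \underbrace{\alpha\sum_{j}|w_j|}_{g(w)\ \text{convex}}
\;-\; \underbrace{\sum_{j}\big(\alpha|w_j|-1+e^{-\alpha|w_j|}\big)}_{h(w)\ \text{convex}}.
\]

DCA then linearizes $h$ at the current iterate and minimizes the resulting convex model, which for this surrogate reduces to one linear program per iteration.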

20.
We study approaches for obtaining convex relaxations of global optimization problems containing multilinear functions. Specifically, we compare the concave and convex envelopes of these functions with the relaxations that are obtained with a standard relaxation approach, due to McCormick. The standard approach reformulates the problem to contain only bilinear terms and then relaxes each term independently. We show that for a multilinear function having a single product term, this approach yields the convex and concave envelopes if the bounds on all variables are symmetric around zero. We then review and extend some results on conditions when the concave envelope of a multilinear function can be written as a sum of concave envelopes of its individual terms. Finally, for bilinear functions we prove that the difference between the concave upper bounding and convex lower bounding functions obtained from the McCormick relaxation approach is always within a constant of the difference between the concave and convex envelopes. These results, along with numerical examples we provide, give insight into how to construct strong relaxations of multilinear functions.
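For reference, the McCormick relaxation of a single bilinear term $w=xy$ over the box $x\in[x^L,x^U]$, $y\in[y^L,y^U]$ consists of the standard inequalities

\[
\begin{aligned}
w &\ge x^L y + x\,y^L - x^L y^L, &\qquad w &\ge x^U y + x\,y^U - x^U y^U,\\
w &\le x^U y + x\,y^L - x^U y^L, &\qquad w &\le x^L y + x\,y^U - x^L y^U,
\end{aligned}
\]

and for a single bilinear term these four inequalities are exactly the convex and concave envelopes; this is the baseline against which the relaxation gaps discussed above are measured.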
