Similar Documents
1.
Discrete approximation, which has been the prevailing scheme in stochastic programming in the past decade, has recently been extended to distributionally robust optimization (DRO). In this paper, we conduct a rigorous quantitative stability analysis of discrete approximation schemes for DRO, which measures the approximation error in terms of the discretization sample size. For the ambiguity set defined through equality and inequality moment conditions, we quantify the discrepancy between the discretized ambiguity sets and the original set with respect to the Wasserstein metric. To establish the quantitative convergence, we develop a Hoffman error bound theory with Hoffman constant calculation criteria in an infinite-dimensional space, which can be regarded as a byproduct of independent interest. For the ambiguity set defined by a Wasserstein ball, and for moment conditions combined with a Wasserstein ball, we present a similar quantitative stability analysis by taking full advantage of the convexity inherent in the Wasserstein metric. Efficient numerical methods for solving discrete approximation DRO problems with thousands of samples are also designed. In particular, we reformulate different types of discrete approximation problems into a class of saddle point problems with completely separable structures. The stochastic primal-dual hybrid gradient (PDHG) algorithm, in which each iteration updates a random subset of the sampled variables, is then amenable as a solution method for the reformulated saddle point problems. Some preliminary numerical tests are reported.
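The (deterministic) PDHG iteration underlying the abstract's solution method can be illustrated on a tiny saddle-point reformulation. The sketch below is not the paper's DRO setup; the problem data, step sizes, and iteration count are illustrative assumptions. It solves the lasso problem min_x 0.5||Ax−b||² + λ||x||₁ through its saddle form min_x max_y ⟨Ax, y⟩ − f*(y) + λ||x||₁:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pdhg_lasso(A, b, lam, n_iter=2000):
    """PDHG (Chambolle-Pock) for min_x 0.5*||Ax-b||^2 + lam*||x||_1.

    Saddle form: min_x max_y <Ax, y> - f*(y) + lam*||x||_1 with
    f*(y) = 0.5*||y||^2 + <b, y>, whose prox is (v - sigma*b)/(1 + sigma).
    """
    m, n = A.shape
    L = np.linalg.norm(A, 2)       # operator norm of A
    tau = sigma = 0.9 / L          # step sizes satisfy tau*sigma*L^2 < 1
    x = np.zeros(n)
    x_bar = x.copy()
    y = np.zeros(m)
    for _ in range(n_iter):
        y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma)  # dual prox
        x_new = soft_threshold(x - tau * (A.T @ y), tau * lam)     # primal prox
        x_bar = 2.0 * x_new - x    # extrapolation step
        x = x_new
    return x

A = np.eye(2)
b = np.array([1.0, 0.1])
x = pdhg_lasso(A, b, lam=0.5)
# For identity A the exact minimizer is the soft-thresholded data, [0.5, 0.0].
```

The stochastic variant mentioned in the abstract would update only a random subset of the dual (scenario) coordinates per iteration; the deterministic loop above shows the basic primal-dual structure.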

2.

This work deals with a broad class of convex optimization problems under uncertainty. The approach is to pose the original problem as one of finding a zero of the sum of two appropriate monotone operators, which is solved by the celebrated Douglas-Rachford splitting method. The resulting algorithm, suitable for risk-averse stochastic programs and distributionally robust optimization with fixed support, separates the random cost mapping from the risk function composing the problem’s objective. Such a separation is exploited to compute iterates by alternating projections onto different convex sets. Scenario subproblems, free from the risk function and thus parallelizable, are projections onto the cost mappings’ epigraphs. The risk function is handled in an independent and dedicated step consisting of evaluating its proximal mapping that, in many important cases, amounts to projecting onto a certain ambiguity set. Variables get updated by straightforward projections on subspaces through independent computations for the various scenarios. The investigated approach enjoys significant flexibility and opens the way to handle, in a single algorithm, several classes of risk measures and ambiguity sets.
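The Douglas-Rachford iteration the abstract builds on can be sketched in a few lines. The toy instance below is an illustrative assumption, not the paper's risk-averse program: it minimizes f(x) + g(x) for two simple convex functions whose proximal mappings are closed-form, mirroring how the paper alternates a (parallelizable) scenario step and a dedicated risk step.

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t*|.| in one dimension."""
    return np.sign(v) * max(abs(v) - t, 0.0)

def douglas_rachford(prox_f, prox_g, z0=0.0, n_iter=60):
    """Douglas-Rachford splitting for min_x f(x) + g(x).

    z is the governing sequence; x = prox_f(z) converges to a minimizer.
    """
    z = z0
    for _ in range(n_iter):
        x = prox_f(z)               # step on f (scenario-type subproblem)
        y = prox_g(2.0 * x - z)     # reflected step on g (risk-type step)
        z = z + y - x               # governing-sequence update
    return prox_f(z)

# Toy instance: f(x) = |x|, g(x) = 0.5*(x - 3)^2; the minimizer is x* = 2.
prox_f = lambda v: soft_threshold(v, 1.0)   # prox with step gamma = 1
prox_g = lambda v: (v + 3.0) / 2.0
x_star = douglas_rachford(prox_f, prox_g)
```

In the paper's setting, prox_f would decompose into independent projections onto the scenario epigraphs, and prox_g would be the proximal mapping of the risk function (often a projection onto an ambiguity set).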


3.
This paper considers a special class of minimax problems, namely distributionally robust optimization (DRO) problems. DRO is an approach distinct from both stochastic programming and robust optimization: the probability distribution of the uncertain variables is not known exactly, and only some conditions that the distribution satisfies are available, such as first-moment information, second-moment information, or support-set information. A DRO problem then seeks the solution corresponding to the worst-case distribution among all distributions satisfying these conditions. In general, such optimization problems are NP-hard. This paper considers a simple case in which the distribution of the uncertain variables satisfies only first-moment information, support-set information, and affine first-order information. By applying duality for semi-infinite programming, we show that this class of DRO problems is equivalent to linear programming problems, so the original DRO problem can be solved with off-the-shelf linear programming methods. To verify the effectiveness of the method, the new approach is applied to an interest-rate management problem with transaction costs under uncertainty.
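For a distribution restricted to a finite support, the worst-case expectation under first-moment and support information is itself a small linear program, which gives a feel for the LP equivalence described above. The support points, costs, and mean below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

# Worst-case expected cost over all distributions supported on a finite set
# {xi_1, ..., xi_n} with prescribed mean mu (first-moment information):
#     max_p  sum_i p_i * c_i   s.t.  sum_i p_i = 1,
#                                    sum_i p_i * xi_i = mu,  p >= 0.
# This is a linear program; linprog minimizes, so we negate the cost.
support = np.array([0.0, 1.0, 2.0])   # support-set information (illustrative)
cost = np.array([0.0, 1.0, 4.0])      # scenario costs (illustrative)
mu = 1.0                              # first-moment information

res = linprog(
    c=-cost,                                  # maximize => minimize the negative
    A_eq=np.vstack([np.ones(3), support]),    # probabilities sum to 1; mean = mu
    b_eq=np.array([1.0, mu]),
    bounds=[(0.0, None)] * 3,                 # p >= 0
)
worst_case = -res.fun
# The worst case puts mass 1/2 on xi = 0 and 1/2 on xi = 2, giving value 2.0.
```

The paper's continuous-support case is handled through semi-infinite programming duality, but the resulting finite problem has this same LP flavor.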

4.
In this paper, the option pricing problem is formulated as a distributionally robust optimization problem, which seeks to minimize the worst-case replication error for a given distributional uncertainty set (DUS) of the random underlying asset returns. The DUS is defined as a Wasserstein ball centred at the empirical distribution of the underlying asset returns. It is proved that the proposed model can be reformulated as a computationally tractable linear programming problem. Finally, the results of empirical tests are presented to show the significance of the proposed approach.

5.
On Distributionally Robust Chance-Constrained Linear Programs
In this paper, we discuss linear programs in which the data that specify the constraints are subject to random uncertainty. A usual approach in this setting is to enforce the constraints up to a given level of probability. We show that, for a wide class of probability distributions (namely, radial distributions) on the data, the probability constraints can be converted explicitly into convex second-order cone constraints; hence, the probability-constrained linear program can be solved exactly with great efficiency. Next, we analyze the situation where the probability distribution of the data is not completely specified, but is only known to belong to a given class of distributions. In this case, we provide explicit convex conditions that guarantee the satisfaction of the probability constraints for any possible distribution belonging to the given class. Communicated by B. T. Polyak. This work was supported by FIRB funds from the Italian Ministry of University and Research.
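The Gaussian distribution is the canonical radial distribution, and in that case the conversion to a second-order cone constraint is the classical one: P(aᵀx ≤ b) ≥ 1−ε becomes āᵀx + Φ⁻¹(1−ε)·||Σ^{1/2}x||₂ ≤ b. The sketch below (mean, covariance, and decision vector are illustrative assumptions) verifies the equivalence by Monte Carlo:

```python
import numpy as np
from scipy.stats import norm

# Chance constraint P(a^T x <= b) >= 1 - eps with a ~ N(a_bar, Sigma)
# is equivalent to the second-order cone constraint
#     a_bar^T x + Phi^{-1}(1 - eps) * || Sigma^{1/2} x ||_2 <= b.
eps = 0.05
a_bar = np.array([1.0, 1.0])
Sigma = 0.1 * np.eye(2)            # illustrative covariance
x = np.array([1.0, 1.0])           # a candidate decision

kappa = norm.ppf(1.0 - eps)        # safety factor Phi^{-1}(0.95) ~ 1.645
soc_lhs = a_bar @ x + kappa * np.sqrt(x @ Sigma @ x)

# Choose b so the SOC constraint holds with equality: the chance
# constraint is then satisfied at probability exactly 1 - eps.
b = soc_lhs

# Monte Carlo sanity check of the equivalence.
rng = np.random.default_rng(0)
a = rng.multivariate_normal(a_bar, Sigma, size=200_000)
coverage = np.mean(a @ x <= b)     # should be close to 0.95
```

For other radial distributions only the safety factor κ changes, which is what makes the reformulation in the paper tractable across the whole class.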

6.
We combine a robust criterion with the lasso penalty for the high-dimensional threshold model. The method estimates the regression coefficients as well as the threshold parameter robustly, so that it is resistant to outliers or heavy-tailed noise, and performs variable selection simultaneously. We illustrate our approach with the absolute loss, Huber's loss, and Tukey's loss; it can also be extended to any other robust loss. Simulation studies are conducted to demonstrate the usefulness of o...
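Of the robust losses named above, Huber's loss is the simplest to state: quadratic for small residuals and linear in the tails, which is what caps the influence of outliers. A minimal definition (the threshold delta is an illustrative default):

```python
import numpy as np

def huber_loss(r, delta=1.0):
    """Huber's loss: 0.5*r^2 for |r| <= delta, delta*(|r| - delta/2) beyond.

    The linear tails bound the gradient, so large residuals (outliers)
    cannot dominate the fit the way they do under squared loss.
    """
    r = np.abs(np.asarray(r, dtype=float))
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))

# Small residuals are penalized like least squares, large ones only linearly:
# huber_loss(0.5) == 0.125, while huber_loss(5.0) == 4.5 (vs 12.5 for 0.5*r^2).
```

Replacing squared loss by such a function inside the penalized criterion is what makes the coefficient and threshold estimates resistant to heavy-tailed noise.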

7.
In this article, we consider nonparametric smoothing and variable selection in varying-coefficient models. Varying-coefficient models are commonly used for analyzing the time-dependent effects of covariates on responses measured repeatedly (such as longitudinal data). We present the P-spline estimator in this context and show its estimation consistency for a diverging number of knots (or B-spline basis functions). The combination of P-splines with nonnegative garrote (which is a variable selection method) leads to good estimation and variable selection. Moreover, we consider APSO (additive P-spline selection operator), which combines a P-spline penalty with a regularization penalty, and show its estimation and variable selection consistency. The methods are illustrated with a simulation study and real-data examples. The proofs of the theoretical results as well as one of the real-data examples are provided in the online supplementary materials.

8.
We investigate a robust penalized logistic regression algorithm based on a minimum distance criterion. Influential outliers are often associated with the explosion of parameter vector estimates, but in the context of standard logistic regression, the bias due to outliers always causes the parameter vector to implode, that is, shrink toward the zero vector. Thus, using LASSO-like penalties to perform variable selection in the presence of outliers can result in missed detections of relevant covariates. We show that by choosing a minimum distance criterion together with an elastic net penalty, we can simultaneously find a parsimonious model and avoid estimation implosion even in the presence of many outliers in the important small n large p situation. Minimizing the penalized minimum distance criterion is a challenging problem due to its nonconvexity. To meet the challenge, we develop a simple and efficient MM (majorization–minimization) algorithm that can be adapted gracefully to the small n large p context. Performance of our algorithm is evaluated on simulated and real datasets. This article has supplementary materials available online.

9.
Variable selection methods using a penalized likelihood have been widely studied in various statistical models. However, in semiparametric frailty models, these methods have been studied relatively little because the marginal likelihood function involves analytically intractable integrals, particularly when modeling multicomponent or correlated frailties. In this article, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of semiparametric frailty models, in which random effects may be shared, nested, or correlated. We consider three penalty functions (least absolute shrinkage and selection operator [LASSO], smoothly clipped absolute deviation [SCAD], and HL) in our variable selection procedure. We show that the proposed method can be easily implemented via a slight modification to existing HL estimation approaches. Simulation studies show that the procedure using the SCAD or HL penalty performs well. The usefulness of the new method is also illustrated using three practical datasets. Supplementary materials for the article are available online.

10.
Journal of Optimization Theory and Applications - We consider a distributionally robust formulation of stochastic optimization problems arising in statistical learning, where robustness is with...

11.
This article presents a Markov chain Monte Carlo algorithm for both variable and covariance selection in the context of logistic mixed effects models. This algorithm allows us to sample solely from standard densities with no additional tuning. We apply a stochastic search variable selection approach to select explanatory variables as well as to determine the structure of the random effects covariance matrix.

Prior determination of explanatory variables and random effects is not a prerequisite because the definite structure is chosen in a data-driven manner in the course of the modeling procedure. To illustrate the method, we give two bank data examples.

12.
Journal of Optimization Theory and Applications - Optimization of low-thrust trajectories is necessary in the design of space missions using electric propulsion systems. We consider the problem of...

13.
Zellner's g-prior and its recent hierarchical extensions are the most popular default prior choices in the Bayesian variable selection context. These prior setups can be expressed as power-priors with a fixed set of imaginary data. In this article, we borrow ideas from the power-expected-posterior (PEP) priors to introduce, under the g-prior approach, an extra hierarchical level that accounts for the imaginary-data uncertainty. For normal regression variable selection problems, the resulting power-conditional-expected-posterior (PCEP) prior is a conjugate normal-inverse gamma prior that provides a consistent variable selection procedure and gives support to more parsimonious models than the ones supported using the g-prior and the hyper-g prior for finite samples. Detailed illustrations and comparisons of the variable selection procedures using the proposed method, the g-prior, and the hyper-g prior are provided using both simulated and real data examples. Supplementary materials for this article are available online.

14.
15.
Variable selection is a fundamental problem in regression modeling: a regression model should retain only the variables whose effects on the response are most significant. Variable selection is widely applied in the analysis of practical economic problems. Building on mixture-experiment models, this paper studies the variable selection problem in such models.

16.
This article proposes a variable selection method termed “subtle uprooting” for linear regression. In this proposal, variable selection is formulated into a single optimization problem by approximating cardinality involved in the information criterion with a smooth function. A technical maneuver is then employed to enforce sparsity of parameter estimates while maintaining smoothness of the objective function. To solve the resulting smooth nonconvex optimization problem, a modified Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm with established global and super-linear convergence is adopted. Both simulated experiments and an empirical example are provided for assessment and illustration. Supplementary materials for this article are available online.

17.
Test-based variable selection algorithms in regression are often based on sequential comparison of test statistics to cutoff values. A predetermined α level typically is used to determine the cutoffs based on an assumed probability distribution for the test statistic. For example, backward elimination or forward stepwise selection involves comparisons of test statistics to prespecified t or F cutoffs in Gaussian linear regression, while a likelihood ratio, Wald, or score statistic is typically used with standard normal or chi-square cutoffs in nonlinear settings. Although such algorithms enjoy widespread use, their statistical properties are not well understood, either theoretically or empirically. Two inherent problems with these methods are that (1) as in classical hypothesis testing, the value of α is arbitrary, while (2) unlike hypothesis testing, there is no simple analog of the type I error rate corresponding to application of the entire algorithm to a data set. In this article we propose a new method, backward elimination via cross-validation (BECV), for test-based variable selection in regression. It is implemented by first finding the empirical level α* that minimizes a cross-validation estimate of squared prediction error, then selecting the model by running backward elimination on the entire data set using α* as the nominal level for each test. We present results of an extensive computer simulation to evaluate BECV and compare its performance to standard backward elimination and forward stepwise selection.
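The backward elimination step that BECV builds on can be sketched directly; the cross-validation search for α* is omitted here and the nominal level is fixed, and the synthetic data are an illustrative assumption:

```python
import numpy as np
from scipy import stats

def backward_elimination(X, y, alpha=0.05):
    """Test-based backward elimination: repeatedly refit OLS and drop the
    predictor with the largest t-test p-value until all p-values <= alpha.
    (BECV would choose alpha by cross-validation; here it is fixed.)"""
    keep = list(range(X.shape[1]))
    while keep:
        Xk = X[:, keep]
        n, p = Xk.shape
        beta, _, _, _ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ beta
        sigma2 = resid @ resid / (n - p)                     # residual variance
        se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xk.T @ Xk)))
        pvals = 2 * stats.t.sf(np.abs(beta / se), df=n - p)  # two-sided t-tests
        worst = int(np.argmax(pvals))
        if pvals[worst] <= alpha:
            break                  # every surviving predictor is significant
        keep.pop(worst)
    return keep

# Synthetic check: only column 0 carries signal.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))
y = 3.0 * X[:, 0] + 0.5 * rng.standard_normal(200)
selected = backward_elimination(X, y)
```

BECV wraps this routine in a cross-validation loop over candidate α values and reruns it on the full data at the selected α*.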

18.
A Class of Distributionally Robust Stochastic Optimization Problems with Linear Decision Rules
Stochastic optimization is widely applied in economics, management, engineering, national defense, and other fields, and distributionally robust optimization, as an approach to stochastic optimization under ambiguous distributional information, has become a focus of academic research in recent years. Based on φ-divergence ambiguity sets and linear decision rules, this paper studies the modeling and computation of a class of distributionally robust stochastic optimization problems, and constructs computationally tractable upper- and lower-bound problems. Numerical examples verify the effectiveness of the model analysis.

19.
When the data have heavy tails or contain outliers, conventional variable selection methods based on penalized least squares or likelihood functions perform poorly. Based on Bayesian inference, we study the Bayesian variable selection problem for median linear models. A Bayesian estimation method is proposed, using Bayesian model selection theory together with a spike-and-slab prior on the regression coefficients, and an efficient posterior Gibbs sampling procedure is also given. Extensive numerical simulations and an analysis of the Boston house price data illustrate the effectiveness of the proposed method.

20.
In this article, we advocate the ensemble approach for variable selection. We point out that the stochastic mechanism used to generate the variable-selection ensemble (VSE) must be picked with care. We construct a VSE using a stochastic stepwise algorithm and compare its performance with numerous state-of-the-art algorithms. Supplemental materials for the article are available online.
