Similar Articles (20 results)
1.
Optimality functions define, in a certain sense, stationarity in nonlinear programming, semi-infinite optimization, and optimal control. In this paper, we consider optimality functions for stochastic programs with nonlinear, possibly nonconvex, expected-value objective and constraint functions. We show that an optimality function relates directly to the difference in function values at a candidate point and a local minimizer. We construct confidence intervals for the value of the optimality function at a candidate point and hence provide a quantitative measure of solution quality. Based on sample average approximations, we develop an algorithm for classes of stochastic programs that include CVaR problems, and we use optimality functions to select sample sizes.

2.
Determining whether a solution is of high quality (optimal or near optimal) is fundamental in optimization theory and algorithms. In this paper, we develop Monte Carlo sampling-based procedures for assessing solution quality in stochastic programs. Quality is defined via the optimality gap, and our procedures' output is a confidence interval on this gap. We review a multiple-replications procedure that requires the solution of, say, 30 optimization problems, and then present a result that justifies a computationally simplified single-replication procedure requiring the solution of only one optimization problem. Even though the single-replication procedure is significantly less demanding computationally, the resulting confidence interval may have low coverage probability for small sample sizes on some problems. We provide variants of this procedure that require two replications instead of one and that perform better empirically. We present computational results for a newsvendor problem and for two-stage stochastic linear programs from the literature. We also discuss when the procedures perform well and when they fail, and we propose using ɛ-optimal solutions to strengthen their performance.
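The multiple-replications idea can be sketched on a toy newsvendor model. Everything below (the cost parameters, uniform demand, enumeration grid, replication counts, and the normal quantile used in place of a t quantile) is an illustrative assumption, not the paper's actual setup:

```python
import math
import random
import statistics

random.seed(0)

C, P = 1.0, 2.0  # unit order cost and selling price (illustrative choices)

def cost(x, demand):
    # newsvendor cost: pay C per unit ordered, earn P per unit sold
    return C * x - P * min(x, demand)

def sampled_optimum(grid, sample):
    # optimal value of the sample average approximation, found by enumeration
    return min(sum(cost(x, d) for d in sample) / len(sample) for x in grid)

def mrp_gap_upper_bound(x_hat, n=200, k=30):
    # Multiple Replications Procedure: k independent gap estimates, each from
    # its own sample of size n; returns an approximate 95% upper bound on the gap
    grid = [0.5 * i for i in range(41)]  # candidate orders 0, 0.5, ..., 20
    gaps = []
    for _ in range(k):
        sample = [random.uniform(0.0, 20.0) for _ in range(n)]
        f_hat = sum(cost(x_hat, d) for d in sample) / n
        gaps.append(f_hat - sampled_optimum(grid, sample))  # nonnegative by construction
    return statistics.mean(gaps) + 1.645 * statistics.stdev(gaps) / math.sqrt(k)
```

With demand uniform on (0, 20), the true optimal order is the 1 - C/P = 0.5 demand quantile, i.e. 10, so the bound for the candidate x_hat = 10 should sit near zero while a poor candidate such as x_hat = 2 yields a visibly larger bound.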

3.
The sample average approximation (SAA) method is an approach for solving stochastic optimization problems by using Monte Carlo simulation. In this technique the expected objective function of the stochastic problem is approximated by a sample average estimate derived from a random sample. The resulting sample average approximating problem is then solved by deterministic optimization techniques. The process is repeated with different samples to obtain candidate solutions along with statistical estimates of their optimality gaps. We present a detailed computational study of the application of the SAA method to three classes of stochastic routing problems. These problems involve an extremely large number of scenarios and first-stage integer variables. For each of the three problem classes, we use decomposition and branch-and-cut to solve the approximating problem within the SAA scheme. Our computational results indicate that the proposed method successfully solves problems with up to 21,694 scenarios to within an estimated 1.0% of optimality. Furthermore, a surprising observation is that the number of optimality cuts required to solve the approximating problem to optimality does not increase significantly with the size of the sample. As a result, the observed computation times needed to find optimal solutions to the approximating problems grow only linearly with the sample size, and we are able to find provably near-optimal solutions to these difficult stochastic programs using only a moderate amount of computation time.
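The SAA workflow described above — solve several sampled instances, then screen the resulting candidates on an independent evaluation sample — can be sketched on a toy problem. The quadratic objective and discrete demand below are stand-ins for the routing models, not the paper's instances:

```python
import random
import statistics

random.seed(1)

# Toy stochastic program: minimise E[(x - D)^2] with D uniform on {0,...,9};
# the true minimiser is x = E[D] = 4.5.

def saa_candidate(n):
    # one SAA replication: the sample-average objective is minimised
    # in closed form by the sample mean
    sample = [random.randrange(10) for _ in range(n)]
    return statistics.mean(sample)

def saa(m=10, n=50, n_eval=5000):
    # m replications produce candidate solutions; a larger independent
    # sample then estimates their objectives and picks the best candidate
    candidates = [saa_candidate(n) for _ in range(m)]
    eval_sample = [random.randrange(10) for _ in range(n_eval)]
    estimate = lambda x: sum((x - d) ** 2 for d in eval_sample) / n_eval
    best = min(candidates, key=estimate)
    return best, estimate(best)
```

In the routing setting each `saa_candidate` call would instead solve an integer program by decomposition and branch-and-cut; the replication-and-screening structure is the same.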

4.
In the nonlinear regression model we consider the optimal design problem with a second-order design D-criterion. Our purpose is to present a general approach to this problem, which includes the asymptotic second-order bias and variance criterion of the least squares estimator and criteria based on the volume of confidence regions for different statistics. Under regularity assumptions on these statistics, a second-order approximation of the volume of these regions is derived and proposed as a quadratic optimality criterion. These criteria include volumes of confidence regions based on the u_n-representable statistics. An important difference between the criteria presented in this paper and the second-order criteria commonly employed in the recent literature is that the former are independent of the vector of residuals. Moreover, a refined version of the commonly applied criteria is obtained, which also includes effects of nonlinearity caused by third derivatives of the response function.

5.
Monte Carlo sampling-based estimators of optimality gaps for stochastic programs are known to be biased. When bias is a prominent factor, estimates of optimality gaps tend to be large on average even for high-quality solutions. This diminishes our ability to recognize high-quality solutions. In this paper, we present a method for reducing the bias of the optimality gap estimators for two-stage stochastic linear programs with recourse via a probability metrics approach, motivated by stability results in stochastic programming. We apply this method to the Averaged Two-Replication Procedure (A2RP) by partitioning the observations in an effort to reduce bias, which can be done in polynomial time in sample size. We call the resulting procedure the Averaged Two-Replication Procedure with Bias Reduction (A2RP-B). We provide conditions under which A2RP-B produces strongly consistent point estimators and an asymptotically valid confidence interval. We illustrate the effectiveness of our approach analytically on a newsvendor problem and test the small-sample behavior of A2RP-B on a number of two-stage stochastic linear programs from the literature. Our computational results indicate that the procedure effectively reduces bias. We also observe variance reduction in certain circumstances.

6.
In this work we consider a stochastic optimal control problem with either convex control constraints or finitely many equality and inequality constraints on the final state. Using the variational approach, we obtain first- and second-order expansions of the state and cost function around a local minimum. This allows us to prove a general first-order necessary condition and, under a geometrical assumption on the constraint set, second-order necessary conditions as well. We end by giving second-order optimality conditions for problems with constraints on expectations of the final state.

7.
We present two randomized entropy-based algorithms for approximating quite general #P-complete counting problems, such as the number of Hamiltonian cycles in a graph, the permanent, the number of self-avoiding walks, and the satisfiability problem. In our algorithms we first cast the underlying counting problem into an associated rare-event probability estimation problem, and then apply dynamic importance sampling (IS) to estimate the desired counting quantity efficiently. We construct the IS distribution by two different approaches: one based on the cross-entropy (CE) method and the other on a stochastic version of the well-known minimum entropy (MinxEnt) method. We also establish convergence of our algorithms and confidence intervals for some special settings, and present supportive numerical results, which strongly suggest that both algorithms (CE and MinxEnt) have polynomial running time in the size of the problem.
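A stripped-down cross-entropy sketch for one rare-event estimation task — estimating P(sum of 20 fair coin flips >= 18) — illustrates the two-phase structure: adapt an importance-sampling distribution toward the rare event, then estimate with likelihood ratios. The independent Bernoulli parametrization, sample sizes, elite fraction, and clamping are all illustrative choices, not those of the paper:

```python
import random

random.seed(2)

def ce_rare_event(n_dim=20, p0=0.5, gamma=18, n=2000, rho=0.1, iters=5):
    # Phase 1: cross-entropy updates tilt the Bernoulli parameters p
    # toward samples in the elite (top rho fraction) until the rare
    # event {sum(X) >= gamma} is reached.
    p = [p0] * n_dim
    for _ in range(iters):
        xs = [[1 if random.random() < p[i] else 0 for i in range(n_dim)]
              for _ in range(n)]
        scores = sorted((sum(x) for x in xs), reverse=True)
        level = min(gamma, scores[int(rho * n)])   # elite threshold
        elite = [x for x in xs if sum(x) >= level]
        # CE parameter update = elite component means, clamped for stability
        p = [max(0.01, min(0.99, sum(x[i] for x in elite) / len(elite)))
             for i in range(n_dim)]
        if level >= gamma:
            break
    # Phase 2: importance-sampling estimate under the tilted parameters
    est = 0.0
    for _ in range(n):
        x = [1 if random.random() < p[i] else 0 for i in range(n_dim)]
        if sum(x) >= gamma:
            w = 1.0
            for i in range(n_dim):  # likelihood ratio nominal/tilted
                w *= (p0 if x[i] else 1 - p0) / (p[i] if x[i] else 1 - p[i])
            est += w
    return est / n
```

The exact probability here is 211/2^20, roughly 2e-4, which crude Monte Carlo with the same budget would estimate poorly; the tilted sampler hits the event on most draws.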

8.
We develop sufficient conditions for optimality in the generalized problem of Bolza. The basis of our approach is the dual Hamilton–Jacobi inequality leading to a new sufficient criterion for optimality in which we assume the existence of a function satisfying, together with the Hamiltonian, a certain inequality. Consequently, using this criterion, we derive new sufficient conditions for optimality of first and second order for a relative minimum.

9.
We consider stochastic optimization problems where risk-aversion is expressed by a stochastic ordering constraint. The constraint requires that a random vector depending on our decisions stochastically dominates a given benchmark random vector. We identify a suitable multivariate stochastic order and describe its generator in terms of von Neumann–Morgenstern utility functions. We develop necessary and sufficient conditions of optimality and duality relations for optimization problems with this constraint. Assuming convexity we show that the Lagrange multipliers corresponding to dominance constraints are elements of the generator of this order, thus refining and generalizing earlier results for optimization under univariate stochastic dominance constraints. Furthermore, we obtain necessary conditions of optimality for non-convex problems under additional smoothness assumptions.

10.

This paper proposes two algorithms for solving stochastic control problems with deep learning, with a focus on the utility maximisation problem. The first algorithm solves Markovian problems via the Hamilton–Jacobi–Bellman (HJB) equation. We solve this highly nonlinear partial differential equation (PDE) with a second order backward stochastic differential equation (2BSDE) formulation. The convex structure of the problem allows us to describe a dual problem that can either verify the original primal approach or bypass some of the complexity. The second algorithm utilises the full power of the duality method to solve non-Markovian problems, which are often beyond the scope of stochastic control solvers in the existing literature. We solve an adjoint BSDE that satisfies the dual optimality conditions. We apply these algorithms to problems with power, log and non-HARA utilities in the Black-Scholes, the Heston stochastic volatility, and path dependent volatility models. Numerical experiments show highly accurate results with low computational cost, supporting our proposed algorithms.


11.
Stochastic dominance relations are well studied in statistics, decision theory and economics. Recently, there has been significant interest in introducing dominance relations into stochastic optimization problems as constraints. In the discrete case, stochastic optimization models involving second order stochastic dominance constraints can be solved by linear programming. However, problems involving first order stochastic dominance constraints are potentially hard due to the non-convexity of the associated feasible regions. In this paper we consider a mixed 0–1 linear programming formulation of a discrete first order constrained optimization model and present a relaxation based on second order constraints. We derive some valid inequalities and restrictions by employing the probabilistic structure of the problem. We also generate cuts that are valid inequalities for the disjunctive relaxations arising from the underlying combinatorial structure of the problem by applying the lift-and-project procedure. We describe three heuristic algorithms to construct feasible solutions, based on conditional second order constraints, variable fixing, and conditional value at risk. Finally, we present numerical results for several instances of a real world portfolio optimization problem. This research was supported by the NSF awards DMS-0603728 and DMI-0354678.
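What makes the second-order case linear is that, for discrete distributions, the dominance constraint reduces to finitely many linear expected-shortfall inequalities. A small checker makes this concrete (equal-probability scenarios are assumed here for simplicity):

```python
def expected_shortfall(values, eta):
    # E[(eta - V)_+] for an equiprobable discrete random variable V
    return sum(max(eta - v, 0.0) for v in values) / len(values)

def ssd_dominates(x_vals, y_vals):
    # X dominates Y in second order iff E[(eta - X)_+] <= E[(eta - Y)_+]
    # for every eta; over discrete supports it suffices to test eta in supp(Y)
    return all(expected_shortfall(x_vals, eta)
               <= expected_shortfall(y_vals, eta) + 1e-12
               for eta in y_vals)
```

For example, the outcomes [2, 3] dominate [1, 4] in second order (same mean, less spread), but not conversely. Each inequality is linear in the scenario outcomes of X, which is exactly what the LP formulation exploits; the first-order analogue involves indicator functions and loses this convexity.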

12.
Dependence between the minimum and maximum order statistics of discrete random variables
This paper studies measures of dependence between discrete random variables. We discuss the asymptotic independence of the minimum and maximum order statistics, and we give formulas for computing the Kendall and Spearman rank correlation coefficients between the minimum and maximum order statistics.
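The asymptotic-independence result can be illustrated by estimating the Kendall rank correlation between the minimum and maximum of die rolls by simulation (the die distribution, sample sizes, and replication count are arbitrary illustrative choices, and the simulation stands in for the paper's exact formulas):

```python
import random

random.seed(3)

def kendall_tau_b(pairs):
    # Kendall's tau-b with tie correction, by direct pair counting
    conc = disc = tx_only = ty_only = 0
    n = len(pairs)
    for i in range(n):
        for j in range(i + 1, n):
            dx = pairs[i][0] - pairs[j][0]
            dy = pairs[i][1] - pairs[j][1]
            if dx == 0 and dy == 0:
                continue                 # tied in both coordinates
            elif dx == 0:
                tx_only += 1
            elif dy == 0:
                ty_only += 1
            elif dx * dy > 0:
                conc += 1
            else:
                disc += 1
    denom = ((conc + disc + tx_only) * (conc + disc + ty_only)) ** 0.5
    return (conc - disc) / denom if denom else 0.0

def min_max_tau(sample_size, reps=300):
    # estimate tau between min and max of sample_size fair-die rolls
    pairs = []
    for _ in range(reps):
        rolls = [random.randrange(1, 7) for _ in range(sample_size)]
        pairs.append((min(rolls), max(rolls)))
    return kendall_tau_b(pairs)
```

For a sample of size 2 the minimum and maximum are strongly positively associated; for size 20 the minimum is almost surely 1 and the maximum almost surely 6, and the estimated tau is near zero, in line with asymptotic independence.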

13.
We consider risk-averse convex stochastic programs expressed in terms of extended polyhedral risk measures. We derive computable confidence intervals on the optimal value of such stochastic programs using the Robust Stochastic Approximation and the Stochastic Mirror Descent (SMD) algorithms. When the objective functions are uniformly convex, we also propose a multistep extension of the Stochastic Mirror Descent algorithm and obtain confidence intervals on both the optimal values and optimal solutions. Numerical simulations show that our confidence intervals are much less conservative and are quicker to compute than previously obtained confidence intervals for SMD and that the multistep Stochastic Mirror Descent algorithm can obtain a good approximate solution much quicker than its nonmultistep counterpart.

14.
First-order optimality conditions have been extensively studied for the development of algorithms for identifying locally optimal solutions. In this work, we propose two novel methods that directly exploit these conditions to expedite the solution of box-constrained global optimization problems. These methods carry out domain reduction by applying bounds tightening to the optimality conditions. The scheme is implicit and avoids the explicit generation of optimality conditions through symbolic differentiation, which can be memory- and time-intensive. The proposed bounds tightening methods are implemented in the global solver BARON. Computational results on a test library of 327 problems demonstrate the value of our approach in reducing the computational time and the number of nodes required to solve these problems to global optimality.
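The flavour of optimality-based domain reduction can be shown in one dimension: discard sub-boxes on which an interval bound proves f' has constant sign (hence no stationary point), while the original box endpoints remain candidates separately. This toy bisection scheme and the quadratic example are illustrative only, not BARON's actual implementation:

```python
def reduce_stationary_region(df_bounds, lo, hi, depth=30):
    # Keep only sub-intervals of [lo, hi] that may contain a stationary point.
    # df_bounds(a, b) returns an interval enclosure (dlo, dhi) of f' on [a, b].
    # The box endpoints lo, hi stay candidate minimisers outside this routine.
    stack, kept = [(lo, hi, 0)], []
    while stack:
        a, b, d = stack.pop()
        dlo, dhi = df_bounds(a, b)
        if dlo > 0 or dhi < 0:
            continue          # f' has constant sign: no stationary point here
        if d == depth:
            kept.append((a, b))
        else:
            mid = (a + b) / 2
            stack.append((a, mid, d + 1))
            stack.append((mid, b, d + 1))
    return min(a for a, _ in kept), max(b for _, b in kept)
```

For f(x) = (x - 3)^2 on [0, 10], f'(x) = 2(x - 3) is enclosed exactly by (2(a - 3), 2(b - 3)), and the reduction shrinks the stationary region to a tiny interval around 3 without ever forming the optimality conditions symbolically.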

15.
16.
We consider a stochastically forced epidemic model with medical-resource constraints. In the deterministic case, the model can exhibit two types of bistability, i.e., bistability between an endemic equilibrium or an interior limit cycle and the disease-free equilibrium, which means that whether the disease persists in the population is sensitive to the initial values of the model. In the stochastic case, noise-induced state transitions between two stochastic attractors occur: under random disturbances, a stochastic trajectory near the endemic equilibrium or the interior limit cycle can approach the disease-free equilibrium. Based on the stochastic sensitivity function method, we analyze the dispersion of random states in the stochastic attractors and construct confidence domains (a confidence ellipse or confidence band) to estimate the threshold noise intensity for the transition from endemicity to disease eradication.

17.
We analyze nonlinear stochastic optimization problems with probabilistic constraints on nonlinear inequalities with random right hand sides. We develop two numerical methods with regularization for their numerical solution. The methods are based on first order optimality conditions and successive inner approximations of the feasible set by progressive generation of p-efficient points. The algorithms yield an optimal solution for problems involving α-concave probability distributions. For arbitrary distributions, the algorithms solve the convex hull problem and provide upper and lower bounds for the optimal value and nearly optimal solutions. The methods are compared numerically to two cutting plane methods.

18.
Urban rail planning is extremely complex, mainly because it is a decision problem under multiple uncertainties. In practice, travel demand is generally uncertain, and timetabling decisions must therefore be based on accurate estimation. This research addresses the optimization of train timetables at public transit terminals of an urban rail system in a stochastic setting. To cope with stochastic fluctuations of arrival rates, a two-stage stochastic programming model is developed. The objective is to construct a daily train schedule that minimizes the expected waiting time of passengers. Because evaluating the expected-value objective is computationally expensive, the sample average approximation method is applied. The method provides statistical estimates of the optimality gap as well as lower and upper bounds and the associated confidence intervals. Numerical experiments evaluate the performance of the proposed model and solution method.

19.
《Optimization》2012,61(5):671-685
The paper concerns a necessary optimality condition in the form of a Pontryagin minimum principle for a system governed by a linear two-point boundary value problem with homogeneous Dirichlet conditions, in which the control vector occurs in all coefficients of the differential equation. Without any convexity assumption, the optimality condition is derived using a needle-like variation of the optimal control. In the case of convex local control constraints, the optimality condition implies the linearized minimum principle, which we proved in [2]. An example shows that for this linearized optimality condition the convexity of the set of all admissible controls is essential.

20.
In this paper, we deal with two-person zero-sum stochastic games for discrete-time Markov processes. The optimality criterion studied is the discounted payoff during a first passage time to some target set, where the discount factor is state-dependent. The state and action spaces are Borel spaces, and the payoff functions are allowed to be unbounded. Under suitable conditions, we first establish the optimality equation. Then, using dynamic programming techniques, we obtain the existence of the value of the game and of a pair of optimal stationary policies. Moreover, we show the exponential convergence of value iteration and give a 'martingale characterization' of a pair of optimal policies. Finally, we illustrate the applications of our main results with an inventory system.
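Shapley-style value iteration for discounted zero-sum stochastic games illustrates the exponential convergence mentioned above. The tiny 2x2 stage games, constant discount factor, and transition data here are invented for illustration; the paper's setting, with first-passage stopping and state-dependent discounting, is more general:

```python
def matrix_game_value(M):
    # value of a 2x2 zero-sum matrix game (row player maximises)
    (a, b), (c, d) = M
    lower = max(min(a, b), min(c, d))   # best guaranteed row payoff
    upper = min(max(a, c), max(b, d))   # best guaranteed column payoff
    if lower == upper:                  # pure saddle point
        return lower
    return (a * d - b * c) / (a + d - b - c)  # mixed-strategy value

def shapley_iteration(R, P, beta=0.9, iters=200):
    # V_{k+1}(s) = val[ R[s][a][b] + beta * sum_t P[s][a][b][t] * V_k(t) ],
    # a beta-contraction, so the error shrinks geometrically in k
    nS = len(R)
    V = [0.0] * nS
    for _ in range(iters):
        V = [matrix_game_value(
                [[R[s][a][b] + beta * sum(P[s][a][b][t] * V[t] for t in range(nS))
                  for b in range(2)] for a in range(2)])
             for s in range(nS)]
    return V
```

For a single state with matching-pennies payoffs the value is 0; replacing the payoffs with a constant 2 gives 2/(1 - beta) = 20, and 200 iterations land within about beta^200 of it.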
