Similar Articles
20 similar articles found (search time: 812 ms)
1.
A scenario tree is an efficient way to represent a stochastic data process in decision problems under uncertainty. This paper addresses how to efficiently generate appropriate scenario trees. A knowledge-based scenario tree generation method is proposed; the new method is further improved by accounting for subjective judgements or expectations about the random future. Compared with existing approaches, our knowledge-based algorithms avoid complicated mathematical models and time-consuming estimation, simulation and optimization, and can quickly generate large-scale scenario trees. To show the advantages of the new algorithms, a multiperiod portfolio selection problem is considered, and a dynamic risk measure is adopted to control the intermediate risk, which is superior to the single-period risk measure used in the existing literature. A series of numerical experiments is carried out using real trading data from the Shanghai stock market. The results show that the scenarios generated by our algorithms properly represent the underlying distribution, and that the algorithms perform well: for example, a scenario tree with up to 10,000 scenarios can be generated in less than half a minute. The applications in the multiperiod portfolio management problem demonstrate that our scenario tree generation methods are stable, and the optimal trading strategies obtained with the generated scenario tree are reasonable, efficient and robust. Copyright © 2013 John Wiley & Sons, Ltd.
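The paper's knowledge-based construction is not given in the abstract; purely as an illustration of what a scenario tree generator produces, here is a minimal bootstrap sketch that grows a tree by stage-wise resampling of historical return vectors. The branching factors and the synthetic historical data are assumptions for the example, not the authors' method or data.

```python
import numpy as np

def build_scenario_tree(returns, branching, rng=None):
    """Grow a scenario tree by stage-wise resampling of historical returns.

    returns   : (T, n_assets) array of observed return vectors (assumed data)
    branching : list of branching factors, one per stage, e.g. [10, 5, 5]
    Returns a list of scenarios, each an (n_stages, n_assets) array, and
    their (equal) probabilities.  This is a plain bootstrap sketch, not the
    knowledge-based method of the paper.
    """
    rng = rng or np.random.default_rng(0)
    paths = [np.empty((0, returns.shape[1]))]            # root: empty path
    for b in branching:
        new_paths = []
        for path in paths:
            idx = rng.integers(0, len(returns), size=b)  # resample b children per node
            for i in idx:
                new_paths.append(np.vstack([path, returns[i]]))
        paths = new_paths
    probs = np.full(len(paths), 1.0 / len(paths))        # equiprobable leaves
    return paths, probs

# usage with synthetic "historical" data for three assets
rng = np.random.default_rng(1)
hist = rng.normal(0.0005, 0.01, size=(500, 3))
scenarios, probs = build_scenario_tree(hist, branching=[10, 5, 5], rng=rng)
print(len(scenarios), "scenarios of", scenarios[0].shape[0], "stages")
```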

2.
Approximate Bayesian inference by importance sampling derives probabilistic statements from a Bayesian network, an essential part of evidential reasoning with the network and an important aspect of many Bayesian methods. A critical problem in importance sampling on Bayesian networks is the selection of a good importance function for sampling a network's prior and posterior probability distributions. Importance functions that are initially optimal eventually deviate from the optimal function when sampling a network's posterior distribution given evidence, even when adaptive methods are used that adjust the importance function to the evidence by learning. In this article we propose a new family of Refractor Importance Sampling (RIS) algorithms for adaptive importance sampling under evidential reasoning. RIS applies “arc refractors” to a Bayesian network by adding new arcs and refining the conditional probability tables. The goal of RIS is to optimize the importance function for the posterior distribution and to reduce the error variance of sampling. Our experimental results show a significant improvement of RIS over state-of-the-art adaptive importance sampling algorithms.
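The arc-refractor construction of RIS is not reproduced here; as a hedged illustration of the role the importance function plays, the sketch below runs plain likelihood weighting on a hypothetical two-node network (Rain → WetGrass), with all probabilities invented for the example.

```python
import random

# Hypothetical two-node network: P(Rain) and P(WetGrass | Rain)
P_RAIN = 0.2
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}

def likelihood_weighting(evidence_wet=True, n=100_000, seed=0):
    """Estimate P(Rain | WetGrass = evidence_wet) by importance sampling.

    The importance function here is simply the prior over the unobserved
    node (Rain); evidence nodes are clamped and each sample is weighted by
    the likelihood of the evidence.  Adaptive schemes such as RIS instead
    modify the importance function so that it tracks the posterior.
    """
    random.seed(seed)
    num = den = 0.0
    for _ in range(n):
        rain = random.random() < P_RAIN                # sample Rain from the prior
        p_wet = P_WET_GIVEN_RAIN[rain]
        w = p_wet if evidence_wet else 1.0 - p_wet     # weight = evidence likelihood
        num += w * rain
        den += w
    return num / den

print("P(Rain | WetGrass=true) ≈", likelihood_weighting())
# exact value: 0.2*0.9 / (0.2*0.9 + 0.8*0.1) ≈ 0.692
```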

3.
Two noniterative algorithms for computing posteriors
In this paper, we first propose a noniterative sampling method to obtain an approximately i.i.d. sample from posteriors by combining the inverse Bayes formula (IBF), sampling/importance resampling and posterior mode estimates. We then propose a new exact algorithm for computing posteriors that improves PMDA-Exact by using the sampling-wise IBF. If the posterior mode is available from the EM algorithm, these two algorithms compute posteriors well and eliminate the convergence problems of Markov chain Monte Carlo methods. We demonstrate the good performance of our methods on several examples.
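As a concrete illustration of the sampling/importance resampling step that the method combines with the inverse Bayes formula, here is a minimal SIR sketch: draw from a proposal, weight by target over proposal, then resample in proportion to the weights. The normal "posterior" and Student-t proposal are assumptions for the example, not the paper's setup.

```python
import numpy as np
from scipy import stats

def sir(log_target, proposal, n_draw=20_000, n_keep=2_000, seed=0):
    """Sampling/importance resampling: draw from a proposal, weight by
    target/proposal, then resample in proportion to the weights."""
    rng = np.random.default_rng(seed)
    xs = proposal.rvs(size=n_draw, random_state=rng)
    logw = log_target(xs) - proposal.logpdf(xs)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    keep = rng.choice(n_draw, size=n_keep, replace=True, p=w)
    return xs[keep]            # approximately i.i.d. draws from the target

# usage: target assumed N(1.5, 0.5^2), proposal a heavy-tailed t centred at 0
post = stats.norm(1.5, 0.5)
sample = sir(post.logpdf, stats.t(df=3, loc=0.0, scale=2.0))
print(sample.mean(), sample.std())   # should be close to 1.5 and 0.5
```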

4.
The bond portfolio management problem is formulated as a multiperiod two-stage or multistage stochastic program based on interest rate scenarios. These scenarios depend on the available market data, on the applied estimation and sampling techniques, etc., and are used to evaluate the coefficients of the resulting large-scale mathematical program. The aim of this contribution is to analyze the stability and sensitivity of this program with respect to small changes in the coefficients – the (scenario-dependent) values of future interest rates and prices. We prove that, under sensible assumptions, the scenario subproblems are stable linear programs and that the optimal first-stage decisions and the optimal value of the considered stochastic program also possess acceptable continuity properties.

5.
A special constraint structure and large dimension are characteristic of multistage stochastic optimization; they result from modeling future uncertainty via a branching process or scenario tree. Most efficient algorithms for solving this type of problem use some decomposition scheme, and often only part of the whole set of scenarios is taken into account in order to make the problem tractable. We propose a primal–dual method based on constraint aggregation, which constructs a sequence of iterates converging to a solution of the initial problem. At each iteration, however, only a reduced subproblem with a smaller number of aggregate constraints has to be solved. The number of aggregates and their composition are determined by the user, and the procedure for calculating aggregates can be parallelized. The method provides a posteriori estimates of the quality of the current solution approximation in terms of the objective function value and the residual. Results of numerical tests for a portfolio allocation problem with a quadratic utility function are presented.

6.
Scenario tree modeling for multistage stochastic programs
An important issue for solving multistage stochastic programs consists in the approximate representation of the (multivariate) stochastic input process in the form of a scenario tree. In this paper, we develop (stability) theory-based heuristics for generating scenario trees out of an initial set of scenarios. They are based on forward or backward algorithms for tree generation consisting of recursive scenario reduction and bundling steps. Conditions are established that imply closeness of the optimal values of the original process and its tree approximation, relying on a recent stability result in Heitsch, Römisch and Strugarek (SIAM J Optim 17:511–525, 2006) for multistage stochastic programs. Numerical experience is reported for constructing multivariate scenario trees in electricity portfolio management.

7.
The class of logconcave functions in ℝ^n is a common generalization of Gaussians and of indicator functions of convex sets. Motivated by the problem of sampling from logconcave density functions, we study their geometry and introduce a technique for “smoothing” them out. These results are applied to analyze two efficient algorithms for sampling from a logconcave distribution in n dimensions, with no assumptions on the local smoothness of the density function. Both algorithms, the ball walk and the hit-and-run walk, use a random walk (Markov chain) to generate a random point. After appropriate preprocessing, they produce a point from approximately the right distribution in time O*(n^4) and in amortized time O*(n^3) if n or more sample points are needed (where the asterisk indicates that dependence on the error parameter and factors of log n are not shown). These bounds match previous bounds for the special case of sampling from the uniform distribution over a convex body. © 2006 Wiley Periodicals, Inc. Random Struct. Alg., 2007
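The O*(n^4) and amortized O*(n^3) analysis belongs to the paper; the sketch below is only a toy hit-and-run chain targeting a correlated Gaussian, a logconcave density chosen so the one-dimensional conditional along each random chord can be sampled exactly. It omits the preprocessing and carries none of the paper's guarantees.

```python
import numpy as np

def hit_and_run_gaussian(precision, x0, n_steps=5_000, seed=0):
    """Hit-and-run sampler for the density proportional to exp(-0.5 x' A x).

    At each step: pick a uniformly random direction d, then sample the new
    point exactly from the (one-dimensional, Gaussian) restriction of the
    target to the line {x + t d}.  For a general logconcave density this
    one-dimensional step would be done numerically instead.
    """
    rng = np.random.default_rng(seed)
    A = np.asarray(precision, dtype=float)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)                 # uniform random direction
        a = d @ A @ d                          # conditional precision along the line
        b = d @ A @ x                          # cross term
        t = rng.normal(loc=-b / a, scale=1.0 / np.sqrt(a))
        x = x + t * d
        samples.append(x.copy())
    return np.array(samples)

# usage: 2-d correlated Gaussian
A = np.array([[2.0, 0.8], [0.8, 1.0]])
chain = hit_and_run_gaussian(A, x0=np.zeros(2))
print(np.cov(chain.T))                         # ≈ inverse of A
```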

8.
This article proposes a method for approximating integrated likelihoods in finite mixture models. We formulate the model in terms of the unobserved group memberships, z, and make them the variables of integration. The integral is then evaluated using importance sampling over the z. We propose an adaptive importance sampling function which is itself a mixture, with two types of component distributions, one concentrated and one diffuse. The more concentrated type of component serves the usual purpose of an importance sampling function, sampling mostly group assignments of high posterior probability. The less concentrated type of component allows for the importance sampling function to explore the space in a controlled way to find other, unvisited assignments with high posterior probability. Components are added adaptively, one at a time, to cover areas of high posterior probability not well covered by the current importance sampling function. The method is called incremental mixture importance sampling (IMIS).

IMIS is easy to implement and to monitor for convergence. It scales easily for higher dimensional mixture distributions when a conjugate prior is specified for the mixture parameters. The simulated values on which the estimate is based are independent, which allows for straightforward estimation of standard errors. The self-monitoring aspects of the method make it easier to adjust tuning parameters in the course of estimation than standard Markov chain Monte Carlo algorithms. With only small modifications to the code, one can use the method for a wide variety of mixture distributions of different dimensions. The method performed well in simulations and in mixture problems in astronomy and medical research.
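A minimal one-dimensional sketch of the incremental idea follows: start from a diffuse importance function and repeatedly add a concentrated component at the highest-weight draw. The target, component scale and sample sizes are illustrative assumptions rather than the settings used in the article.

```python
import numpy as np

def norm_logpdf(x, mu, sd):
    return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))

def imis_sketch(log_target, n0=1_000, n_add=250, n_iter=8, comp_sd=1.0, seed=0):
    """Incremental mixture importance sampling, stripped to its core loop."""
    rng = np.random.default_rng(seed)
    diffuse = (0.0, 10.0)                       # diffuse initial component (assumed)
    comps = [diffuse]
    xs = rng.normal(*diffuse, size=n0)

    def log_q(x):                               # current mixture importance density
        parts = np.stack([norm_logpdf(x, m, s) for m, s in comps])
        return np.logaddexp.reduce(parts, axis=0) - np.log(len(comps))

    for _ in range(n_iter):
        logw = log_target(xs) - log_q(xs)
        centre = xs[np.argmax(logw)]            # add a concentrated component here
        comps.append((centre, comp_sd))
        xs = np.concatenate([xs, rng.normal(centre, comp_sd, size=n_add)])

    logw = log_target(xs) - log_q(xs)
    w = np.exp(logw - logw.max())
    return xs, w / w.sum()

# usage: a bimodal target that the diffuse component alone covers poorly
def log_f(x):
    return np.logaddexp(norm_logpdf(x, -3.0, 0.4), norm_logpdf(x, 4.0, 0.6)) - np.log(2)

xs, w = imis_sketch(log_f)
print("estimated mean:", np.sum(w * xs))        # true mean is 0.5
```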

9.
We study the maximum weighted independent-set problem on interval graphs with uncertainty on the vertex weights. We use the absolute robustness criterion and the min–max regret criterion to evaluate solutions. For a discrete scenario set, we find that the problem is NP-hard for each of the robustness criteria; we also provide pseudo-polynomial time algorithms when there is a constant number of scenarios and show that the problem is strongly NP-hard when the set of scenarios is unbounded. When the scenario set is a Cartesian product, we prove that the problem is equivalent to a maximum weighted independent-set problem on the same interval graph but without uncertainty for the first objective function and that the scenario set can be reduced for the second objective function.
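For the deterministic subproblem that this reduction leads to (maximum-weight independent set on an interval graph with fixed weights), the standard dynamic program is short; the sketch below uses hypothetical intervals and weights and does not implement the robust min–max regret machinery of the paper.

```python
import bisect

def max_weight_independent_set(intervals):
    """Max-weight set of pairwise disjoint intervals (treated as half-open [start, end)).

    intervals: list of (start, end, weight) with start < end.
    Classic O(k log k) weighted interval scheduling DP: sort by end point,
    and for each interval either skip it or take it plus the best solution
    among intervals ending no later than its start.
    """
    ivs = sorted(intervals, key=lambda iv: iv[1])
    ends = [iv[1] for iv in ivs]
    best = [0] * (len(ivs) + 1)                      # best[i] = optimum over first i intervals
    for i, (s, e, w) in enumerate(ivs, start=1):
        j = bisect.bisect_right(ends, s, 0, i - 1)   # number of earlier intervals ending <= s
        best[i] = max(best[i - 1], best[j] + w)
    return best[-1]

# usage under one fixed weight scenario (hypothetical data)
print(max_weight_independent_set([(0, 3, 5), (2, 5, 6), (4, 7, 5), (6, 9, 4)]))  # -> 10
```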

10.
Many numerical optimization methods use scenario trees as a discrete approximation of the true (multi-dimensional) probability distributions of the problem’s random variables. Realistic specifications in financial optimization models can lead to tree sizes that quickly become computationally intractable. In this paper we focus on the two main approaches proposed in the literature to deal with this problem: scenario reduction and state aggregation. We first state necessary conditions for the node structure of a tree to rule out arbitrage. Currently available scenario reduction algorithms, however, do not take these conditions explicitly into account. State aggregation excludes arbitrage opportunities by relying on the risk-neutral measure; this is appropriate for pricing purposes but not for optimization. Both limitations are illustrated by numerical examples. We conclude that neither of these methods is suitable for solving financial optimization models in asset–liability or portfolio management.

11.
Solving a discrete stochastic optimization problem involves two efforts: one is to sample and search the design space, and the other is to estimate the performance values of the sampled designs. When computational resources are limited, we need to know how to balance these two efforts in order to obtain the best result. In this paper, two performance measures are proposed that quantify the marginal contributions of the search process and the performance-evaluation process. Using these two measures, we recommend a framework that dynamically allocates computational resources between the search process and the performance-evaluation process. Two algorithms based on the proposed framework are tested on several scenarios, and the results are very promising.

12.
When solving scenario-based stochastic programming problems, it is imperative that the employed solution methodology be based on some form of problem decomposition: mathematical, stochastic, or scenario decomposition. In particular, the scenario decomposition resulting from scenario approximations has perhaps the least tendency to become computationally burdensome as the number of scenarios increases. The scenario approximations discussed in this paper utilize the second-moment information of the given scenarios to iteratively construct a (relatively) small number of representative scenarios that are used to derive bounding approximations on the stochastic program. While the sizes of these approximations grow only linearly in the number of random parameters, their refinement is performed by exploiting the behavior of the value function in the most effective manner. The implementation SMART discussed here demonstrates the aptness of the scheme for solving two-stage stochastic programs described with a large number of scenarios. This paper was presented at the IFIP Workshop on Stochastic Programming: Algorithms and Models, Lillehammer, Norway, January 1994.

13.
Optimization, 2012, 61(8): 1039–1073
This article deals with multicriteria optimization models and algorithms for scheduling the movement of many objects so as to synchronize their movement (the 2CMSS problem). The model consists of two parts: (1) node-disjoint path planning through specified nodes for K objects, with a given vector of intermediate nodes for each object (the NDSP problem); (2) movement synchronization at some intermediate nodes (the MS problem). For synchronous movement, two categories of criteria are defined: time of movement and the ‘distance’ of the K moved objects from the movement pattern. We define the problem as a discrete-continuous, non-linear, two-criteria mathematical programming problem and propose a two-stage algorithm that solves the 2CMSS problem lexicographically: first, we find the vector of node-disjoint shortest paths for the K objects visiting the intermediate nodes, assuming the maximal possible velocity is used on each arc of each object's path (solution of the NDSP problem); then we try to decrease the velocities to optimize the second criterion (synchronization, solution of the MS problem). Experimental analyses of the effectiveness and complexity of the algorithms are presented.

14.
We address the problem of designing a multi-layer network with survivability requirements. We are given a two-layer network: the lower layer represents the potential physical connections that can be activated, and the upper layer consists of logical connections that can be set up using physical links. We are given origin-destination demands (commodities) to be routed at the upper layer. We are also given a set of failure scenarios and, for every scenario, an associated subset of commodities. The goal is to install minimum-cost integer capacities on the links of both layers so as to ensure that the commodities can be routed simultaneously on the network. In addition, in every failure scenario the routing of the associated commodities must be guaranteed. We consider two variants of the problem and develop a branch-and-cut scheme based on the capacity formulation. Computational results on instances derived from the SNDLib for single-node failure scenarios are discussed.

15.
Scenario Reduction Algorithms in Stochastic Programming
We consider convex stochastic programs with an (approximate) initial probability distribution P having finite support supp P, i.e., finitely many scenarios. The behaviour of such stochastic programs is stable with respect to perturbations of P measured in terms of a Fortet-Mourier probability metric. The problem of optimal scenario reduction consists in determining a probability measure that is supported by a subset of supp P of prescribed cardinality and is closest to P in terms of such a probability metric. Two new versions of forward and backward type algorithms are presented for computing such optimally reduced probability measures approximately. Compared to earlier versions, the computational performance (accuracy, running time) of the new algorithms has been improved considerably. Numerical experience is reported for different instances of scenario trees with computable optimal lower bounds. The test examples also include a ternary scenario tree representing the weekly electrical load process in a power management model.
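A minimal forward-selection sketch in the spirit of such algorithms: greedily add the scenario that most reduces a transportation-type distance (plain Euclidean here rather than a Fortet-Mourier metric) to the original discrete distribution, then redistribute each deleted scenario's probability to its nearest kept scenario. The data and the number of scenarios to keep are assumptions; this is not the authors' implementation.

```python
import numpy as np

def forward_reduction(scenarios, probs, n_keep):
    """Greedy forward scenario selection with probability redistribution.

    scenarios: (N, d) array, probs: (N,) probabilities summing to 1.
    Returns the indices of the kept scenarios and their new probabilities.
    """
    scenarios = np.asarray(scenarios, float)
    probs = np.asarray(probs, float)
    dist = np.linalg.norm(scenarios[:, None, :] - scenarios[None, :, :], axis=2)
    kept = []
    for _ in range(n_keep):
        best_j, best_cost = None, np.inf
        for j in range(len(scenarios)):
            if j in kept:
                continue
            cand = kept + [j]
            # cost = expected distance of every scenario to its closest kept one
            cost = float(np.sum(probs * dist[:, cand].min(axis=1)))
            if cost < best_cost:
                best_j, best_cost = j, cost
        kept.append(best_j)
    # redistribute each deleted scenario's probability to its nearest kept scenario
    nearest = np.array(kept)[np.argmin(dist[:, kept], axis=1)]
    new_probs = np.array([probs[nearest == k].sum() for k in kept])
    return kept, new_probs

# usage: reduce 200 equiprobable scenarios to 10
rng = np.random.default_rng(0)
scen = rng.normal(size=(200, 3))
p = np.full(200, 1 / 200)
idx, q = forward_reduction(scen, p, n_keep=10)
print(idx, q.sum())   # kept indices; redistributed probabilities still sum to 1
```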

16.
The quality of multi-stage stochastic optimization models as they appear in asset liability management, energy planning, transportation, supply chain management, and other applications depends heavily on the quality of the underlying scenario model, describing the uncertain processes influencing the profit/cost function, such as asset prices and liabilities, the energy demand process, demand for transportation, and the like. A common approach to generating scenarios is based on estimating an unknown distribution and matching its moments with moments of a discrete scenario model. This paper demonstrates that the problem of finding valuable scenario approximations can be viewed as the problem of optimally approximating a given distribution with respect to some distance function. We show that for Lipschitz continuous cost/profit functions it is best to employ the Wasserstein distance. The resulting optimization problem can be viewed as a multi-dimensional facility location problem, for which at least good heuristic algorithms exist. For multi-stage problems, a scenario tree is constructed as a nested facility location problem. Numerical convergence results for financial mean-risk portfolio selection conclude the paper.

17.
Strongly polynomial dual simplex methods for the maximum flow problem
This paper presents dual network simplex algorithms that require at most 2nm pivots and O(n^2 m) time for solving a maximum flow problem on a network of n nodes and m arcs. Refined implementations of these algorithms and a related simplex variant that is not strictly speaking a dual simplex algorithm are shown to have a complexity of O(n^3). The algorithms are based on the concept of a preflow and depend upon the use of node labels that are underestimates of the distances from the nodes to the sink node in the extended residual graph associated with the current flow. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V. Research was supported by NSF Grants DMS 91-06195, DMS 94-14438 and CDR 84-21402 and DOE Grant DE-FG02-92ER25126. Research was supported by NSF Grant CDR 84-21402 at Columbia University.
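The dual network simplex algorithms themselves are not reproduced here. Purely to make concrete the two ingredients the abstract names (preflows, and node labels that underestimate distances to the sink), the sketch below is a generic push-relabel routine in the Goldberg-Tarjan style, a related but different algorithm.

```python
def max_flow_push_relabel(capacity, s, t):
    """Max flow via preflows and distance labels (generic push-relabel).

    capacity: n x n matrix of arc capacities; s, t: source and sink indices.
    height[v] always underestimates the distance from v to t in the residual
    graph (plus n once v is cut off from the sink).
    """
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    excess = [0] * n
    height = [0] * n
    height[s] = n
    for v in range(n):                      # initial preflow: saturate arcs out of s
        flow[s][v] = capacity[s][v]
        flow[v][s] = -capacity[s][v]
        excess[v] = capacity[s][v]
    active = [v for v in range(n) if v not in (s, t) and excess[v] > 0]
    while active:
        u = active[0]
        pushed = False
        for v in range(n):
            if excess[u] == 0:
                break
            residual = capacity[u][v] - flow[u][v]
            if residual > 0 and height[u] == height[v] + 1:   # admissible arc: push
                delta = min(excess[u], residual)
                flow[u][v] += delta
                flow[v][u] -= delta
                excess[u] -= delta
                excess[v] += delta
                if v not in (s, t) and v not in active:
                    active.append(v)
                pushed = True
        if not pushed:                      # no admissible arc: relabel u
            height[u] = 1 + min(height[v] for v in range(n)
                                if capacity[u][v] - flow[u][v] > 0)
        if excess[u] == 0:
            active.pop(0)
    return sum(flow[s][v] for v in range(n))

# usage on a small 4-node network (0 = source, 3 = sink)
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
print(max_flow_push_relabel(cap, 0, 3))    # -> 4
```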

18.
Piecewise affine inverse problems form a general class of nonlinear inverse problems. In particular inverse problems obeying certain variational structures, such as Fermat's principle in travel time tomography, are of this type. In a piecewise affine inverse problem a parameter is to be reconstructed when its mapping through a piecewise affine operator is observed, possibly with errors. A piecewise affine operator is defined by partitioning the parameter space and assigning a specific affine operator to each part. A Bayesian approach with a Gaussian random field prior on the parameter space is used. Both problems with a discrete finite partition and a continuous partition of the parameter space are considered.

The main result is that the posterior distribution is decomposed into a mixture of truncated Gaussian distributions, and the expression for the mixing distribution is partially analytically tractable. The general framework has, to the authors' knowledge, not previously been published, although the result for the finite partition is generally known.

Inverse problems are currently of great interest in many fields. The Bayesian approach is popular and most often highly computer-intensive. The posterior distribution is frequently concentrated close to high-dimensional nonlinear spaces, resulting in slow mixing for generic sampling algorithms. Inverse problems are, however, often highly structured. In order to develop efficient sampling algorithms for a problem at hand, the problem structure must be exploited.

The decomposition of the posterior distribution that is derived in the current work can be used to develop specialized sampling algorithms. The article contains examples of such sampling algorithms. The proposed algorithms are also applicable to problems with exact observations, a case in which generic sampling algorithms tend to fail.

19.
This paper supplies algorithms for the best approximation to a real-valued function, defined as a table of values, by a linear approximating function in both the L1 and L∞ norms. The algorithms are modified simplex algorithms which, owing to the particular structures of the tableaux, have been condensed and require minimal storage space. Both algorithms are given as Algol procedures, and sample times are noted for several examples.
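The condensed modified-simplex tableaux are the paper's contribution and are not reproduced here; as a hedged illustration, the L1 (least-absolute-deviations) fit they compute can be stated as an ordinary linear program and handed to a general solver, as below. The data and the affine basis are assumptions for the example.

```python
import numpy as np
from scipy.optimize import linprog

def l1_fit(x, y):
    """Best L1 affine fit y ≈ a + b*x via a linear program.

    Variables: a, b (free) and one error bound e_i >= |y_i - a - b*x_i| per point;
    minimize the sum of the e_i.  A modified simplex method exploiting the
    tableau structure (as in the paper) solves the same LP with far less storage.
    """
    m = len(x)
    # decision vector: [a, b, e_1, ..., e_m]
    c = np.concatenate([[0.0, 0.0], np.ones(m)])
    A_basis = np.column_stack([np.ones(m), x])           # rows: [1, x_i]
    # e_i >=  (y_i - a - b x_i)  ->  -a - b x_i - e_i <= -y_i
    # e_i >= -(y_i - a - b x_i)  ->   a + b x_i - e_i <=  y_i
    A_ub = np.vstack([np.hstack([-A_basis, -np.eye(m)]),
                      np.hstack([ A_basis, -np.eye(m)])])
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None), (None, None)] + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0], res.x[1]

# usage on a small table of values with one outlier
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.1, 4.9, 7.2, 20.0])                 # last point is an outlier
print(l1_fit(x, y))                                      # close to intercept 1, slope 2
```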

20.
This paper studies the following line balancing problem with uncertain operation execution times. Operations on the same product have to be assigned to the stations of a transfer line. The product moves along the stations in the same direction, and operations assigned to the same station are executed sequentially. Exclusion, inclusion and precedence relations are given on the set of operations. Operation execution times are uncertain in the sense that the vector of execution times belongs to a given set of scenarios. The objective is to minimize the line cycle time, which equals the maximum over stations of the total execution time of the operations assigned to that station, under the worst-case scenario. An approach to reducing the scenario set is described. Several special cases of the problem are proved NP-hard or strongly NP-hard. Enumerative dynamic programming algorithms and problem-specific polynomial-time algorithms are suggested for some cases.

