Similar Articles
20 similar articles found.
1.
Stochastic linear programs can be solved approximately by drawing a subset of all possible random scenarios and solving the problem based on this subset, an approach known as sample average approximation (SAA). The value of the objective function at the optimal solution obtained via SAA provides an estimate of the true optimal objective function value. This estimator is known to be optimistically biased; the expected optimal objective function value for the sampled problem is lower (for minimization problems) than the optimal objective function value for the true problem. We investigate how two alternative sampling methods, antithetic variates (AV) and Latin Hypercube (LH) sampling, affect both the bias and variance, and thus the mean squared error (MSE), of this estimator. For a simple example, we analytically express the reductions in bias and variance obtained by these two alternative sampling methods. For eight test problems from the literature, we computationally investigate the impact of these sampling methods on bias and variance. We find that both sampling methods are effective at reducing mean squared error, with Latin Hypercube sampling outperforming antithetic variates. For our analytic example and the eight test problems we derive or estimate the condition number as defined in Shapiro et al. (Math. Program. 94:1–19, 2002). We find that for ill-conditioned problems, bias plays a larger role in MSE, and AV and LH sampling methods are more likely to reduce bias.
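As a hedged illustration of the optimistic bias discussed above, the following sketch compares crude Monte Carlo and Latin Hypercube sampling of the SAA optimal-value estimator on a one-dimensional newsvendor-style stochastic program; the cost parameters, the exponential demand and the sample sizes are illustrative assumptions, not one of the paper's test problems.

import numpy as np

rng = np.random.default_rng(0)
c, q = 1.0, 3.0                         # order cost and shortage penalty (assumed data)

def saa_optimal_value(u):
    """SAA value of  min_x  c*x + mean_i q*max(D_i - x, 0)  with D_i = F^{-1}(u_i)."""
    d = np.sort(-np.log1p(-u))               # exponential(1) demands via the inverse CDF
    k = int(np.ceil(len(d) * (1 - c / q)))   # the SAA minimizer is a sample quantile
    x = d[max(k - 1, 0)]
    return c * x + q * np.maximum(d - x, 0).mean()

n, reps = 50, 2000
mc = [saa_optimal_value(rng.random(n)) for _ in range(reps)]
# Latin Hypercube: one uniform point per stratum [j/n, (j+1)/n), in random order.
lh = [saa_optimal_value((rng.permutation(n) + rng.random(n)) / n) for _ in range(reps)]

print("true optimal value:", c * np.log(q / c) + c)    # closed form for this instance
print("MC  mean, variance:", np.mean(mc), np.var(mc))  # mean below the true value: optimistic bias
print("LHS mean, variance:", np.mean(lh), np.var(lh))  # typically smaller bias and variance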

2.
The single row facility layout problem (SRFLP) is the problem of arranging n departments with given lengths on a straight line so as to minimize the total weighted distance between all department pairs. We present a polyhedral study of the triplet formulation of the SRFLP introduced by Amaral [A.R.S. Amaral, A new lower bound for the single row facility layout problem, Discrete Applied Mathematics 157 (1) (2009) 183-190]. For any number of departments n, we prove that the dimension of the triplet polytope is n(n−1)(n−2)/3 (this is also true for the projections of this polytope presented by Amaral). We then prove that several valid inequalities presented by Amaral for this polytope are facet-defining. These results provide theoretical support for the fact that the linear program solved over these valid inequalities gives the optimal solution for all instances studied by Amaral.
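For orientation, the objective these polyhedral results concern can be stated in a few lines. The sketch below (department lengths and flow weights are made up, not Amaral's instances) evaluates the total weighted centre-to-centre distance of an arrangement and checks a tiny instance by brute force.

from itertools import permutations

lengths = [3, 5, 2, 4]                        # department lengths (assumed)
weights = {(0, 1): 2, (0, 2): 4, (0, 3): 1,   # pairwise flow weights (assumed)
           (1, 2): 3, (1, 3): 2, (2, 3): 5}

def srflp_cost(order):
    # Centre of a department = sum of the lengths placed before it + half its own length.
    pos, centre = 0.0, {}
    for d in order:
        centre[d] = pos + lengths[d] / 2.0
        pos += lengths[d]
    return sum(w * abs(centre[i] - centre[j]) for (i, j), w in weights.items())

best = min(permutations(range(len(lengths))), key=srflp_cost)
print(best, srflp_cost(best))                 # brute force is only viable for tiny n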

3.
Interval linear programming addresses problems with uncertain coefficients where the only available information is that the true values lie somewhere within prescribed intervals. For the inequality-constrained problem, computing the worst case scenario and the corresponding optimal value is an easy task, but the best case optimal value calculation is known to be NP-hard. In this paper, we discuss lower and upper bound approximations for the best case optimal value and propose suitable methods for both of them. We also propose an algorithm for computing the best case optimal value that is not a priori exponential. The presented techniques are tested on randomly generated data and applied to a simple data classification problem.
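The sketch below illustrates the optimal value range under an extra assumption the abstract does not make: the variables are nonnegative and the constraints are of covering type, so both extreme scenarios can be read off by interval monotonicity and solved as single LPs (in the general setting of the abstract, only the worst case is this easy). All interval data are made up.

import numpy as np
from scipy.optimize import linprog

A_lo, A_hi = np.array([[1.0, 1.5], [2.5, 0.5]]), np.array([[1.4, 2.0], [3.0, 1.0]])
b_lo, b_hi = np.array([9.0, 11.0]), np.array([10.0, 12.0])
c_lo, c_hi = np.array([2.0, 2.5]), np.array([2.5, 3.5])

def scenario_value(c, A, b):
    # min c^T x  s.t.  A x >= b,  x >= 0  (covering-type inequality LP)
    res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * len(c))
    return res.fun if res.success else np.inf

worst = scenario_value(c_hi, A_lo, b_hi)   # tightest feasible set, dearest objective
best = scenario_value(c_lo, A_hi, b_lo)    # loosest feasible set, cheapest objective
print(f"optimal value range over the intervals: [{best:.3f}, {worst:.3f}]")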

4.
5.
We consider the NP-hard problem of minimizing a convex quadratic function over the integer lattice \({\mathbf{Z}}^n\). We present a simple semidefinite programming (SDP) relaxation for obtaining a nontrivial lower bound on the optimal value of the problem. By interpreting the solution to the SDP relaxation probabilistically, we obtain a randomized algorithm for finding good suboptimal solutions, and thus an upper bound on the optimal value. The effectiveness of the method is shown for numerical problem instances of various sizes.
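A simplified stand-in for the approach described above (it rounds the continuous minimizer rather than the SDP solution, and uses random illustrative data): the continuous relaxation gives a lower bound, and randomized rounding whose expectation equals the relaxed minimizer gives feasible integer points and hence an upper bound.

import numpy as np

rng = np.random.default_rng(1)
n = 8
B = rng.standard_normal((n, n))
P = B @ B.T + n * np.eye(n)          # positive definite quadratic term
q = rng.standard_normal(n)

f = lambda x: x @ P @ x + 2 * q @ x
x_cont = np.linalg.solve(P, -q)      # continuous minimizer
lower = f(x_cont)                    # valid lower bound on the integer optimum

best = np.round(x_cont)              # deterministic nearest rounding as a starting point
frac = x_cont - np.floor(x_cont)
for _ in range(1000):
    x_int = np.floor(x_cont) + (rng.random(n) < frac)   # E[x_int] = x_cont
    if f(x_int) < f(best):
        best = x_int

print("lower bound:", lower)
print("upper bound:", f(best), "at", best.astype(int))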

6.
We make a conjecture that the number of isolated local minimum points of a 2n-degree or (2n+1)-degree r-variable polynomial is not greater than n^r when n ≥ 2. We show that this conjecture is the minimal estimate, and is true in several cases. In particular, we show that a cubic polynomial of r variables may have at most one local minimum point though it may have 2^r critical points. We then study the global minimization problem of an even-degree multivariate polynomial whose leading order coefficient tensor is positive definite. We call such a multivariate polynomial a normal multivariate polynomial. By constructing a one-variable polynomial that bounds a normal multivariate polynomial from below, we show the existence of a global minimum of a normal multivariate polynomial, and give an upper bound on the norm of the global minimizer and a lower bound on the global minimization value. We show that the quartic multivariate polynomial arising from broad-band antenna array signal processing is a normal polynomial, and give a computable upper bound on the norm of the global minimizer and a computable lower bound on the global minimization value of this normal quartic multivariate polynomial. We give some sufficient and necessary conditions for an even order tensor to be positive definite. Several challenging questions remain open.
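A worked sketch (in my own notation, following the outline in the abstract) of how a one-variable minorant yields the stated bounds: write f = f_{2n} + lower-order terms, where f_{2n} is the leading form with positive definite coefficient tensor, and set

\alpha := \min_{\|y\|=1} f_{2n}(y) > 0, \qquad M_k := \max_{\|y\|=1} |f_k(y)| \quad (1 \le k \le 2n-1),

so that

f(x) = f_{2n}(x) + \sum_{k=1}^{2n-1} f_k(x) + f(0) \;\ge\; \alpha\|x\|^{2n} - \sum_{k=1}^{2n-1} M_k \|x\|^{k} + f(0) =: p(\|x\|).

Since p has positive leading coefficient, f is bounded below and attains a global minimum; any global minimizer x* satisfies p(\|x^*\|) \le f(x^*) \le f(0), so \|x^*\| is at most the largest root of p(t) = f(0), and \min_x f(x) \ge \min_{t \ge 0} p(t). Both quantities are computable from the one-variable polynomial p.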

7.
In this paper, two meta-strategies for computing lower bounds (for minimization problems) are described. Constructive (direct) methods directly calculate a bound value by relaxing a problem and solving this relaxation. Destructive improvement techniques restrict a problem by setting a maximal objective function value F and try to contradict (destruct) the feasibility of this reduced problem. In case of success, F or even F + 1 is a valid lower bound value. The fundamental properties and differences of both meta-strategies are explained by applying them to the well-known resource-constrained project scheduling problem (RCPSP). For this problem, only a few constructive bound arguments are available in the literature. We present a number of extensions and new methods, as well as techniques for reducing problem data, which can be exploited within the destructive improvement framework. Comprehensive numerical experiments show that the new constructive bound arguments clearly provide better bounds than the former ones and that further significant improvements are obtained through an appropriate application of destructive improvement.
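A minimal sketch of the destructive-improvement loop on a toy single-resource RCPSP instance; the data are invented and the two contradiction tests (critical path and total resource work) are only the simplest bound arguments, not the stronger ones developed in the paper.

from functools import cache

cap = 4                                      # capacity of the single renewable resource
acts = {                                     # activity: (duration, resource demand, predecessors)
    1: (3, 2, []), 2: (4, 3, []), 3: (2, 2, [1]), 4: (5, 1, [1, 2]), 5: (2, 3, [3, 4]),
}

@cache
def earliest_finish(a):
    dur, _, preds = acts[a]
    return dur + max((earliest_finish(p) for p in preds), default=0)

def contradicted(F):
    if max(earliest_finish(a) for a in acts) > F:          # critical-path argument
        return True
    if sum(d * r for d, r, _ in acts.values()) > cap * F:  # resource-work argument
        return True
    return False

F = 0
while contradicted(F):     # every contradicted trial value F proves the bound F + 1
    F += 1
print("destructive-improvement lower bound on the makespan:", F)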

8.
We return to a classic problem of structural optimization whose solution requires microstructure. It is well-known that perimeter penalization assures the existence of an optimal design. We are interested in the regime where the perimeter penalization is weak; i.e., in the effect of perimeter as a selection mechanism in structural optimization. To explore this topic in a simple yet challenging example, we focus on a two-dimensional elastic shape optimization problem involving the optimal removal of material from a rectangular region loaded in shear. We consider the minimization of a weighted sum of volume, perimeter, and compliance (i.e., the work done by the load), focusing on the behavior as the weight ε of the perimeter term tends to 0. Our main result concerns the scaling of the optimal value with respect to ε. Our analysis combines an upper bound and a lower bound. The upper bound is proved by finding a near-optimal structure, which resembles a rank-2 laminate except that the approximate interfaces are replaced by branching constructions. The lower bound, which shows that no other microstructure can be much better, uses arguments based on the Hashin-Shtrikman variational principle. The regime being considered here is particularly difficult to explore numerically due to the intrinsic nonconvexity of structural optimization and the spatial complexity of the optimal structures. While perimeter has been considered as a selection mechanism in other problems involving microstructure, the example considered here is novel because optimality seems to require the use of two well-separated length scales.

9.
We propose asymptotically optimal algorithms for the job shop scheduling and packet routing problems. We propose a fluid relaxation for the job shop scheduling problem in which we replace discrete jobs with the flow of a continuous fluid. We compute an optimal solution of the fluid relaxation in closed form, obtain a lower bound Cmax on the job shop scheduling problem, and construct a feasible schedule from the fluid relaxation whose objective value exceeds Cmax by an additive O(·) term, where the constant in the O(·) notation is independent of the number of jobs but depends on the processing times of the jobs, thus producing an asymptotically optimal schedule as the total number of jobs tends to infinity. If the initially present jobs increase proportionally, then our algorithm produces a schedule with value at most Cmax + O(1). For the packet routing problem with fixed paths the previous algorithm applies directly. For the general packet routing problem we propose a linear programming relaxation that provides a lower bound Cmax and an asymptotically optimal algorithm that uses the optimal solution of the relaxation, again with an objective value within an additive O(·) term of the lower bound. Unlike asymptotically optimal algorithms that rely on probabilistic assumptions, our proposed algorithms make no probabilistic assumptions and are asymptotically optimal for all instances with a large number of jobs (packets). In computational experiments our algorithms produce schedules which are within 1% of optimality even for moderately sized problems.
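For reference, the lower bound used above can be illustrated on a toy instance: the makespan of any schedule is at least the total work assigned to the busiest machine (the congestion-type bound delivered by the fluid relaxation, as I read the abstract) and at least the total processing time of any single job. The instance below is made up.

jobs = [                    # each job: ordered (machine, processing time) operations
    [(0, 3), (1, 2), (2, 4)],
    [(1, 4), (0, 1), (2, 3)],
    [(2, 2), (1, 3), (0, 5)],
]

machine_load = {}
for job in jobs:
    for m, p in job:
        machine_load[m] = machine_load.get(m, 0) + p

congestion = max(machine_load.values())                  # busiest machine's total work
longest_job = max(sum(p for _, p in job) for job in jobs)
print("congestion bound :", congestion)
print("longest-job bound:", longest_job)
print("valid lower bound on the makespan:", max(congestion, longest_job))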

10.
In this paper, we discuss an application of a Stochastic Dual Dynamic Programming (SDDP) type algorithm to nested risk-averse formulations of Stochastic Optimal Control (SOC) problems. We propose a construction of a statistical upper bound for the optimal value of risk-averse SOC problems. This outlines an approach to the solution of a long-standing problem in that area of research. The bound holds for a large class of convex and monotone conditional risk mappings. Finally, we demonstrate the validity of the statistical upper bound on a real-life stochastic hydro-thermal planning problem.

11.
This paper is concerned with the development of an algorithm for general bilinear programming problems. Such problems find numerous applications in economics and game theory, location theory, nonlinear multi-commodity network flows, dynamic assignment and production, and various risk management problems. The proposed approach develops a new Reformulation-Linearization Technique (RLT) for this problem and embeds it within a provably convergent branch-and-bound algorithm. The method first reformulates the problem by constructing a set of nonnegative variable factors using the problem constraints, and suitably multiplies combinations of these factors with the original problem constraints to generate additional valid nonlinear constraints. The resulting nonlinear program is subsequently linearized by defining a new set of variables, one for each nonlinear term. This RLT process yields a linear programming problem whose optimal value provides a tight lower bound on the optimal value of the bilinear programming problem. Various implementation schemes and constraint generation procedures are investigated for the purpose of further tightening the resulting linearization. The lower bound thus produced theoretically dominates, and in practice is far tighter than, the bound obtained by using convex envelopes over hyper-rectangles. In fact, for some special cases, this process is shown to yield an exact linear programming representation. For the associated branch-and-bound algorithm, various admissible branching schemes are discussed, including one in which branching is performed by partitioning the intervals for only one set of variables, x or y, whichever are fewer in number. Computational experience is provided to demonstrate the viability of the algorithm. For a large number of test problems from the literature, the initial bounding linear program itself solves the underlying bilinear programming problem. This paper was presented at the II. IIASA Workshop on Global Optimization, Sopron (Hungary), December 9–14, 1990.
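A minimal sketch of the RLT step on a single bilinear term, using bound factors only (the paper also multiplies general problem constraints): the four nonnegative bound-factor products are linearized with a surrogate variable w for x·y, and the resulting LP bounds min x·y over a box. The bounds are illustrative.

from scipy.optimize import linprog

lx, ux, ly, uy = -1.0, 2.0, 0.5, 3.0

# Variables (x, y, w) with w standing in for x*y.  Linearized bound-factor products:
#   (x-lx)(y-ly) >= 0  ->  ly*x + lx*y - w <=  lx*ly
#   (ux-x)(uy-y) >= 0  ->  uy*x + ux*y - w <=  ux*uy
#   (x-lx)(uy-y) >= 0  -> -uy*x - lx*y + w <= -lx*uy
#   (ux-x)(y-ly) >= 0  -> -ly*x - ux*y + w <= -ux*ly
A_ub = [[ ly,  lx, -1],
        [ uy,  ux, -1],
        [-uy, -lx,  1],
        [-ly, -ux,  1]]
b_ub = [lx * ly, ux * uy, -lx * uy, -ux * ly]
c = [0, 0, 1]                                 # minimize w, the surrogate for x*y

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(lx, ux), (ly, uy), (None, None)])
print("RLT lower bound on min x*y over the box:", res.fun)   # exact here: lx*uy = -3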

12.
This paper deals with the theory of sample approximation techniques applied to stochastic programming problems with expected value constraints. We extend the results of Branda (Optimization 61(8):949–968, 2012c) and Wang and Ahmed (Oper Res Lett 36:515–519, 2008) on the rates of convergence to problems with a mixed-integer bounded set of feasible solutions and several expected value constraints. Moreover, we enable non-iid sampling and consider Hölder-calmness of the constraints. We derive estimates on the sample size necessary to get a feasible solution or a lower bound on the optimal value of the original problem using the sample approximation. We present an application of the estimates to an investment problem with Conditional Value at Risk constraints, integer allocations and transaction costs.

13.

Multi-stage stochastic linear programs (MSLPs) are notoriously hard to solve in general. Linear decision rules (LDRs) yield an approximation of an MSLP by restricting the decisions at each stage to be an affine function of the observed uncertain parameters. Finding an optimal LDR is a static optimization problem that provides an upper bound on the optimal value of the MSLP, and, under certain assumptions, can be formulated as an explicit linear program. Similarly, as proposed by Kuhn et al. (Math Program 130(1):177–209, 2011) a lower bound for an MSLP can be obtained by restricting decisions in the dual of the MSLP to follow an LDR. We propose a new approximation approach for MSLPs, two-stage LDRs. The idea is to require only the state variables in an MSLP to follow an LDR, which is sufficient to obtain an approximation of an MSLP that is a two-stage stochastic linear program (2SLP). We similarly propose to apply LDR only to a subset of the variables in the dual of the MSLP, which yields a 2SLP approximation of the dual that provides a lower bound on the optimal value of the MSLP. Although solving the corresponding 2SLP approximations exactly is intractable in general, we investigate how approximate solution approaches that have been developed for solving 2SLP can be applied to solve these approximation problems, and derive statistical upper and lower bounds on the optimal value of the MSLP. In addition to potentially yielding better policies and bounds, this approach requires many fewer assumptions than are required to obtain an explicit reformulation when using the standard static LDR approach. A computational study on two example problems demonstrates that using a two-stage LDR can yield significantly better primal policies and modestly better dual policies than using policies based on a static LDR.


14.
We consider the problem of finding a smallest set of edges whose addition four-connects a triconnected graph. This is a fundamental graph-theoretic problem that has applications in designing reliable networks and improving statistical database security. We present an O(n · α(m, n) + m)-time algorithm for four-connecting an undirected graph G that is triconnected by adding the smallest number of edges, where n and m are the number of vertices and edges in G, respectively, and α(m, n) is the inverse Ackermann function. This is the first polynomial time algorithm to solve this problem exactly. In deriving our algorithm, we present a new lower bound for the number of edges needed to four-connect a triconnected graph. The form of this lower bound is different from the form of the lower bound known for biconnectivity augmentation and triconnectivity augmentation. Our new lower bound applies for arbitrary k and gives a tighter lower bound than the one known earlier for the number of edges needed to k-connect a (k − 1)-connected graph. For k = 4, we show that this lower bound is tight by giving an efficient algorithm to find a set of edges whose size equals the new lower bound and whose addition four-connects the input triconnected graph.

15.
16.
Up to this time, the only known method to solve the discrete-time mixed sensitivity minimization problem in l1 has been to use a certain infinite-dimensional linear programming approach, presented by Dahleh and Pearson in 1988 and later modified by Mendlovitz. That approach does not in general give true optimal solutions; only suboptimal ones are obtained. Here, for the first time, the true l1-optimal solutions are found for some mixed sensitivity minimization problems. In particular, Dahleh and Pearson construct an 11th order suboptimal compensator for a certain second-order plant with first-order weight functions; it is shown that the unique optimal compensator for that problem is rational and of order two. The author discovered this fact when trying out a new scheme for solving the infinite-dimensional linear programming system. This scheme is of independent interest, because when it is combined with the Dahleh-Pearson-Mendlovitz scheme, it gives both an upper bound and a lower bound on the optimal performance; hence, it provides the missing error bound that enables one to truncate the solution. Of course, truncation is appropriate only if the order of the optimal compensator is too high. This may indeed be the case, as is shown with an example where the order of the optimal compensator can be arbitrarily high.

17.
It is widely accepted that next-generation networks will provide guaranteed services, in contrast to the “best effort” approach today. We study and analyze queueing policies for network switches that support the QoS (Quality of Service) feature. One realization of the QoS feature is that packets are not necessarily all equal, with some having higher priorities than the others. We model this situation by assigning an intrinsic value to each packet. In this paper we are concerned with three different queueing policies: the nonpreemptive model, the FIFO preemptive model, and the bounded delay model. We concentrate on the situation where the incoming traffic overloads the queue, resulting in packet loss. The objective is to maximize the total value of packets transmitted by the queueing policy. The difficulty lies in the unpredictable nature of the future packet arrivals. We analyze the performance of the online queueing policies via competitive analysis, providing upper and lower bounds for the competitive ratios. We develop practical yet sophisticated online algorithms (queueing policies) for the three queueing models. The algorithms in many cases have provably optimal worst-case bounds. For the nonpreemptive model, we devise an optimal online algorithm for the common 2-value model. We provide a tight logarithmic bound for the general nonpreemptive model. For the FIFO preemptive model, we improve the general lower bound to 1.414, while showing a tight bound of 1.434 for the special case of queue size 2. We prove that the bounded delay model with uniform delay 2 is equivalent to a modified FIFO preemptive model with queue size 2. We then give improved upper and lower bounds on the 2-uniform bounded delay model. We also show an improved lower bound of 1.618 for the 2-variable bounded delay model, matching the previously known upper bound.
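As an illustration of the FIFO preemptive model only (this is the natural greedy policy, not the paper's algorithms or competitive ratios), the sketch below keeps a bounded FIFO buffer and lets an arriving packet preempt the least valuable queued packet.

def greedy_fifo_preemptive(arrivals, B):
    """arrivals[t] is the list of packet values arriving at time step t."""
    queue, gained = [], 0.0
    for batch in arrivals:
        for v in batch:
            if len(queue) < B:
                queue.append(v)
            else:
                worst = min(range(len(queue)), key=lambda i: queue[i])
                if v > queue[worst]:
                    del queue[worst]          # preempt the least valuable queued packet
                    queue.append(v)
        if queue:
            gained += queue.pop(0)            # transmit the head-of-line packet
    while queue:                              # drain the buffer after arrivals stop
        gained += queue.pop(0)
    return gained

print(greedy_fifo_preemptive([[5, 1, 1], [9, 2], [], [7, 7, 3]], B=2))   # prints 30.0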

18.
The topological complexity of algorithms is studied in a general context in the first part and for zero-finding in the second part. In the first part the level of discontinuity of a function f is introduced and it is proved that it is a lower bound for the total number of comparisons plus 1 in any algorithm computing f that uses only continuous operations and comparisons. This lower bound is proved to be sharp if arbitrary continuous operations are allowed. Then there exists even a balanced optimal computation tree for f. In the second part we use these results in order to determine the topological complexity of zero-finding for continuous functions f on the unit interval with f(0) · f(1) < 0. It is proved that roughly log₂log₂ ε⁻¹ comparisons are optimal during a computation in order to approximate a zero up to ε. This is true regardless of whether one allows arbitrary continuous operations or just function evaluations, the arithmetic operations {+, −, *, /}, and the absolute value. It is true also for the subclass of nondecreasing functions. But for the subclass of increasing functions the topological complexity drops to zero even for the smaller class of operations.

19.
In this paper, we investigate scenario generation methods to establish lower bounds on the optimal objective value for stochastic scheduling problems that contain random parameters with continuous distributions. In contrast to the Sample Average Approximation (SAA) approach, which yields probabilistic bound values, we use an alternative bounding method that relies on the ideas of discrete bounding and recursive stratified sampling. Theoretical support is provided for deriving exact lower bounds for both expectation and conditional value-at-risk objectives. We illustrate the use of our method on the single machine total weighted tardiness problem. The results of our numerical investigation demonstrate good properties of our bounding method, compared with the SAA method and an earlier discrete bounding method.
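For concreteness, the two objectives mentioned above can be estimated from any set of sampled scenario costs as follows (synthetic costs here, not the tardiness experiments of the paper); the CVaR estimate uses the standard Rockafellar-Uryasev representation.

import numpy as np

rng = np.random.default_rng(7)
costs = rng.gamma(shape=2.0, scale=50.0, size=10_000)   # stand-in scenario costs
alpha = 0.95

var = np.quantile(costs, alpha)                         # value-at-risk at level alpha
cvar = var + np.mean(np.maximum(costs - var, 0.0)) / (1 - alpha)
print("sample expectation:", costs.mean())
print("sample CVaR(0.95) :", cvar)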

20.
In this paper we show that Ω(n) variables are needed for first-order logic with counting to identify graphs on n vertices. The k-variable language with counting is equivalent to the (k−1)-dimensional Weisfeiler-Lehman method. We thus settle a long-standing open problem. Previously it was an open question whether or not 4 variables suffice. Our lower bound remains true over a set of graphs of color class size 4. This contrasts sharply with the fact that 3 variables suffice to identify all graphs of color class size 3, and 2 variables suffice to identify almost all graphs. Our lower bound is optimal up to multiplication by a constant because n variables obviously suffice to identify graphs on n vertices.
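A minimal sketch of 1-dimensional Weisfeiler-Lehman colour refinement, which by the equivalence stated above corresponds to the 2-variable counting logic that identifies almost all graphs; colours are kept as explicit nested tuples so the resulting multisets are comparable across graphs (a real implementation would compress them each round). The classic pair below, a 6-cycle versus two triangles, is not distinguished.

from collections import Counter

def wl1_colours(adj, rounds=None):
    rounds = rounds if rounds is not None else len(adj)   # n rounds always suffice
    colour = {v: 0 for v in adj}                          # start from a uniform colouring
    for _ in range(rounds):
        # New colour = (own old colour, multiset of neighbours' old colours).
        colour = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v]))) for v in adj}
    return Counter(colour.values())

c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}            # the 6-cycle
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl1_colours(c6) == wl1_colours(two_triangles))              # True: not distinguished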
