Similar Documents (20 results)
1.
We consider the problem of the non-sequential detection of a change in the drift coefficient of a stochastic differential equation when a misspecified model is used. We formulate the generalized likelihood ratio (GLR) test for this problem, and we study the behaviour of the associated error probabilities (false alarm and non-detection) in the small-noise asymptotics. We obtain the following robustness result: even though a wrong model is used, the error probabilities go to zero with exponential rate, and the maximum likelihood estimator (MLE) of the change time is consistent, provided the change to be detected is larger (in some sense) than the misspecification error. We also give computable bounds for selecting the threshold of the test so as to achieve these exponential rates.
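For orientation, the generalized likelihood ratio statistic for a change at an unknown time typically takes the following textbook form (this is the standard construction, not necessarily the exact statistic of the paper): with observations on [0, T], a nominal drift parameter \theta_0 and a post-change parameter \theta_1,
\[
\Lambda_T \;=\; \sup_{0\le \tau\le T}\;\sup_{\theta_1}\;\log\frac{L_T(\theta_0,\theta_1,\tau)}{L_T(\theta_0)},
\qquad \text{declare a change when } \Lambda_T \ge h,
\]
where L_T(\theta_0,\theta_1,\tau) is the likelihood of a drift switch from \theta_0 to \theta_1 at time \tau, L_T(\theta_0) is the no-change likelihood (both Girsanov functionals for an SDE with small noise), and h is the threshold whose calibration drives the false-alarm/non-detection trade-off discussed above.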

2.
We study a semi-discretisation scheme for stochastic optimal control problems whose dynamics are given by controlled stochastic delay (or functional) differential equations with bounded memory. Performance is measured in terms of expected costs. By discretising time in two steps, we construct a sequence of approximating finite-dimensional Markovian optimal control problems in discrete time. The corresponding value functions converge to the value function of the original problem, and we derive an upper bound on the discretisation error or, equivalently, a worst-case estimate for the rate of convergence.
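As background only: a minimal Euler-type time discretisation of a controlled delay SDE with a single discrete delay, sketched under the assumed dynamics dX(t) = b(X(t), X(t-r), u(t)) dt + sigma dW(t). This is a generic sketch, not the two-step scheme analysed in the paper; all names are illustrative.

import numpy as np

def euler_delay_sde(b, sigma, u, x_hist, r, T, n, rng=None):
    """Generic Euler scheme for dX = b(X(t), X(t-r), u(t)) dt + sigma dW(t).

    b, u     : callables giving the controlled drift and the control u(t)
    x_hist   : initial segment on [-r, 0], one value per grid point (m+1 values)
    r, T, n  : delay length, horizon, number of time steps
    Returns the simulated path on [0, T] (n+1 values).  Illustration only.
    """
    rng = np.random.default_rng() if rng is None else rng
    h = T / n                        # step size
    m = int(round(r / h))            # delay expressed in steps
    assert len(x_hist) == m + 1, "history must cover [-r, 0] on the same grid"
    x = np.concatenate([np.asarray(x_hist, dtype=float), np.zeros(n)])
    for k in range(n):
        xk, xk_delayed = x[m + k], x[k]          # X(t_k) and X(t_k - r)
        dw = np.sqrt(h) * rng.standard_normal()  # Brownian increment
        x[m + k + 1] = xk + b(xk, xk_delayed, u(k * h)) * h + sigma * dw
    return x[m:]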

3.
In this paper, we study a data-driven risk-averse stochastic optimization approach with the Wasserstein metric for the general distribution case. Using the Wasserstein metric, we reformulate the risk-averse two-stage stochastic optimization problem with distributional ambiguity as a traditional two-stage robust optimization problem. In addition, we derive the worst-case distribution and perform convergence analysis to show that the risk aversion of the proposed formulation vanishes as the size of the historical data grows to infinity.
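For readers new to the setting, the generic data-driven two-stage problem with a Wasserstein ambiguity ball has the following form (standard notation, not quoted from the paper): given N historical samples with empirical distribution \hat P_N, one solves
\[
\min_{x\in X}\; c^{\top}x \;+\; \sup_{P:\,W(P,\hat P_N)\le \theta}\ \mathbb{E}_{P}\big[Q(x,\xi)\big],
\]
where Q(x,\xi) is the second-stage (recourse) cost, W(\cdot,\cdot) is the Wasserstein distance, and \theta\ge 0 is the radius of the ambiguity set; the convergence statement in the abstract corresponds to the radius shrinking as the sample size N grows.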

4.
We show that for even quasi-concave objective functions the worst-case distribution, with respect to a family of unimodal distributions, of a stochastic programming problem is a uniform distribution. This extends the so-called "Uniformity Principle" of Barmish and Lagoa (1997), where the objective function is the indicator function of a convex symmetric set.

5.
Minimum bounded edge-partition divides the edge set of a tree into the minimum number of disjoint connected components given a maximum weight for any component. It is an adaptation of the uniform edge-partition of a tree. An optimization algorithm is developed for this NP-hard problem, based on repeated bin packing of inter-related instances. The algorithm has linear running time for the class of ‘balanced trees’ common for the stochastic programming application which motivated investigation of this problem. Fast 2-approximation algorithms are formed for general instances by replacing the optimal bin packing with almost any bin packing heuristic. The asymptotic worst-case ratio of these approximation algorithms is never better than the absolute worst-case ratio of the bin packing heuristic used.
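To illustrate the plug-in bin packing step, the following is a first-fit decreasing heuristic of the kind that could stand in for the optimal packing (a generic sketch; the paper's actual packing routine and tree data structures are not reproduced here).

def first_fit_decreasing(weights, capacity):
    """Pack item weights into bins of size `capacity` using first-fit decreasing.
    Per the abstract, substituting almost any such heuristic for the optimal
    bin packing yields a 2-approximation for bounded edge-partition, and the
    resulting worst-case ratio is no better than the heuristic's own ratio.
    """
    bins, loads = [], []
    for w in sorted(weights, reverse=True):
        for i, load in enumerate(loads):
            if load + w <= capacity:   # first open bin with enough room
                bins[i].append(w)
                loads[i] += w
                break
        else:                          # no open bin fits: open a new one
            bins.append([w])
            loads.append(w)
    return bins

# e.g. first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10) -> [[8, 2], [4, 4, 1, 1]]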

6.
The paper presents a stochastic method for optimizing a curriculum, based on a new model of the education process in which sequential teaching may cause non-sequential learning.

7.
We study a vehicle routing problem in which vehicles are dispatched multiple times a day for product delivery. In this problem, some customer orders are known in advance while others are uncertain but are progressively realized during the day. The key decisions include determining which known orders should be delivered in the first dispatch and which should be delivered in a later dispatch, and finding the routes and schedules for customer orders. This problem is formulated as a two-stage stochastic programming problem with the objective of minimizing the expected total cost. A worst-case analysis is performed to evaluate the potential benefit of the stochastic approach against a deterministic approach. Furthermore, a sample-based heuristic is proposed. Computational experiments are conducted to assess the effectiveness of the model and the heuristic.

8.
We study the problem of scheduling n non-preemptable jobs on a single machine which is not available for processing during a given time period. The objective is to minimize the sum of the job completion times. The best known approximation algorithm for this NP-hard problem has a relative worst-case error bound of 17.6%. We present a parametric O(n log n) algorithm H with which better worst-case error bounds can be obtained. The best error bound calculated for the algorithm in the paper is 7.4%. In a computational experiment, we test the algorithm with the performance guarantee set to 10.2%. It turns out that randomly generated instances with up to 1000 jobs can be solved with a mean (maximum) error of 0.31% (3.18%) and a mean (maximum) computation time of 0.8 (9.7) seconds.
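The parametric algorithm H is not described in the abstract. As a point of reference, the natural SPT-based baseline for a single unavailable interval [s, t] (non-resumable jobs that do not fit before s are deferred until after t) can be sketched as follows; the 7.4% and 10.2% figures above refer to the paper's algorithm, not to this baseline.

def spt_with_unavailability(p, s, t):
    """SPT baseline for minimizing total completion time on a machine that is
    down during [s, t].  p: list of processing times.  Illustration only;
    this is not the paper's parametric algorithm H.
    Returns (job order, total completion time).
    """
    before, after, used = [], [], 0.0
    for j in sorted(range(len(p)), key=lambda j: p[j]):   # SPT order
        if used + p[j] <= s:          # job still fits before the down-time
            before.append(j)
            used += p[j]
        else:                         # otherwise defer it past the down-time
            after.append(j)
    order, clock, total = [], 0.0, 0.0
    for start, block in ((0.0, before), (t, after)):
        clock = max(clock, start)
        for j in block:
            clock += p[j]
            total += clock
            order.append(j)
    return order, total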

9.
Solving large-scale p-median problems is usually time consuming. A common remedy is to aggregate the demand points of a large-scale p-median problem to reduce its size and make it easier to solve. Most traditional research on demand point aggregation is either experimental or assumes uniformly distributed demand points in analytical studies. In this paper, we study demand point aggregation for the planar p-median problem when demand points are arbitrarily distributed. Efficient demand aggregation approaches are proposed, with the corresponding attainable worst-case aggregation error bounds derived. We demonstrate that these demand aggregation approaches introduce smaller worst-case aggregation error bounds than those of the honeycomb heuristic [Papadimitriou, C.H., 1981. Worst-case and probabilistic analysis of a geometric location problem. SIAM Journal on Computing 10, 542–557] when demand points are arbitrarily distributed. We also conduct numerical experiments to show their effectiveness.

10.
New theoretical foundations for analyzing the newsboy problem under incomplete information about the probability distribution of random demand are presented. Firstly, we reveal that the distribution-free newsboy problem under the worst-case and best-case demand scenarios actually reduces to the standard newsboy problem with demand distributions that bound the allowable distributions in the sense of increasing concave order. Secondly, we provide a theoretical tool for seeking the best-case and worst-case order quantities when merely the support and the first k moments of the demand are known. Using this tool we derive closed-form formulas for such quantities in the case of known support, mean and variance, i.e., k = 2. Consequently, we generalize all results presented so far in the literature for the worst-case and best-case scenarios, and present some new ones. Extensions of our findings to the cases of the known mode of a unimodal demand distribution, the known median, and to other stochastic inventory problems are indicated.
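For context, the classical benchmark for the k = 2 case is Scarf's worst-case order quantity, derived from the mean and variance alone; the closed-form formulas referred to above additionally exploit the known support. A small numerical sketch of Scarf's interior formula, in my own notation with underage cost c_u and overage cost c_o (the zero-order condition of the full rule is omitted):

from math import sqrt

def scarf_order_quantity(mu, sigma, c_u, c_o):
    """Interior case of Scarf's worst-case (max-min) order quantity for the
    distribution-free newsboy with known mean `mu` and standard deviation
    `sigma`.  c_u / c_o are the per-unit underage / overage costs.
    """
    r = c_u / c_o                               # cost ratio
    return mu + 0.5 * sigma * (sqrt(r) - 1.0 / sqrt(r))

# Example: mu = 100, sigma = 30, c_u = 5, c_o = 2  ->  roughly 114.2 units
print(scarf_order_quantity(100, 30, 5, 2))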

11.
Locating flow-intercepting facilities: New approaches and results
The problem of locating flow-intercepting facilities on a network with probabilistic customer flows and with facility set-up costs is studied in this paper. Two types of models are investigated, namely the double-counting model and the no-double-counting model (double-counting refers to multiple interceptions of the same customer). For each model, a nonlinear integer programming formulation is first obtained via the theory of Markov chains, and an equivalent linear integer program is then derived. A simple greedy heuristic is proposed for solving both models and a worst-case bound is established, which is shown to be tight under certain conditions. This work is in part supported by the NSERC grants of O. Berman and D. Krass.
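The greedy heuristic itself is not spelled out in the abstract; a generic marginal-gain greedy for flow interception with set-up costs, of the kind the description suggests, might look like the sketch below. The interception function is treated as a black box, and nothing here is taken from the paper.

def greedy_interception(candidates, intercepted_flow, setup_cost):
    """Generic greedy: repeatedly open the candidate site with the largest
    marginal intercepted flow net of its set-up cost; stop when no candidate
    improves the objective.  `intercepted_flow(S)` returns the flow captured
    by the open set S (with or without double-counting, per the model).
    """
    open_sites = set()
    while True:
        best_gain, best_site = 0.0, None
        for site in candidates - open_sites:
            gain = (intercepted_flow(open_sites | {site})
                    - intercepted_flow(open_sites)
                    - setup_cost[site])
            if gain > best_gain:
                best_gain, best_site = gain, site
        if best_site is None:
            return open_sites
        open_sites.add(best_site)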

12.
This paper deals with the optimal solution of ill-posed linear problems, i.e., linear problems for which the solution operator is unbounded. We consider worst-case and average-case settings. Our main result is that algorithms having finite error (for a given setting) exist if and only if the solution operator is bounded (in that setting). In the worst-case setting, this means that there is no algorithm for solving ill-posed problems having finite error. In the average-case setting, this means that algorithms having finite error exist if and only if the solution operator is bounded on the average. If the solution operator is bounded on the average, we find average-case optimal information of cardinality n and optimal algorithms using this information, and show that the average error of these algorithms tends to zero as n→∞. These results are then used to determine the ε-complexity, i.e., the minimal cost of finding an ε-accurate approximation. In the worst-case setting, the ε-complexity of an ill-posed problem is infinite for all ε>0; that is, we cannot find an approximation having finite error and finite cost. In the average-case setting, the ε-complexity of an ill-posed problem is infinite for all ε>0 iff the solution operator is not bounded on the average; moreover, if the solution operator is bounded on the average, then the ε-complexity is finite for all ε>0.
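For readers outside information-based complexity, the ε-complexity mentioned above is the standard quantity (generic definition, not specific to this paper):
\[
\mathrm{comp}(\varepsilon)\;=\;\inf\{\,\mathrm{cost}(A)\;:\;A\ \text{is an algorithm with error}\ e(A)\le\varepsilon\,\},
\]
where e(A) is measured in the worst-case or average-case setting as appropriate and cost(A) counts information and arithmetic operations; the results above state exactly when this infimum is finite.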

13.
We present a robust model for determining the optimal order quantity and market selection for short-life-cycle products in a single-period newsvendor setting. Because information about the demand distribution is limited, in particular for short-life-cycle products, stochastic modeling approaches may not be suitable. We propose the minimax regret multi-market newsvendor model, where the demands are only known to be bounded within some given interval. In the basic version of the problem, a linear-time solution method is developed. For the capacitated case, we establish some structural results to reduce the problem size, and then propose an approximation solution algorithm based on integer programming. Finally, we compare the performance of the proposed minimax regret model against the typical average-case and worst-case models. Our test results demonstrate that the proposed minimax regret model outperforms the average-case and worst-case models in terms of risk-related criteria and mean profit, respectively.
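To make the minimax regret criterion concrete, consider the single-market, uncapacitated special case (an elementary back-of-envelope derivation, not the paper's multi-market model): with demand known only to lie in [l, u], underage cost c_u and overage cost c_o, the worst-case regret of ordering Q is \max\{c_u(u-Q),\,c_o(Q-l)\}; the first term decreases and the second increases in Q, so the minimax regret order quantity equates them:
\[
Q^{\ast}\;=\;\frac{c_u\,u + c_o\,l}{c_u + c_o}.
\]
The linear-time method and the capacitated integer-programming approximation described above handle the multi-market generalisation of this logic.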

14.
In this paper, we deal with a special case of the two-machine flow shop scheduling problem with several availability constraints on the second machine, under the resumable scenario. We develop an improved algorithm with a relative worst-case error bound of 4/3.

15.
We consider consumption-investment problems in a financial market with general random coefficients where the market price of risk process is unknown. The investor tries to maximize his expected utility under the worst-case parameter configuration. To solve robust consumption-investment problems, we make use of stochastic Bellman–Isaacs equations. These equations can be explicitly solved for power, exponential and logarithmic utility. This enables us to characterize a robust optimal consumption-investment strategy and a worst-case market price of risk process in terms of the solution of a backward stochastic differential equation.

16.
We introduce a new construction algorithm for digital nets for integration in certain weighted tensor product Hilbert spaces. The first weighted Hilbert space we consider is based on Walsh functions. Dick and Pillichshammer calculated the worst-case error for integration using digital nets for this space. Here we extend this result to a special construction method for digital nets based on polynomials over finite fields. This result allows us to find polynomials which yield a small worst-case error by computer search. We prove an upper bound on the worst-case error for digital nets obtained by such a search algorithm which shows that the convergence rate is best possible and that strong tractability holds under some condition on the weights.
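Here (and in item 19 below) the worst-case error has its usual quasi-Monte Carlo meaning; for convenience, the standard definition for an equal-weight rule with points x_1, …, x_N in the unit cube and a normed space H of integrands is
\[
e(Q_N,H)\;=\;\sup_{f\in H,\ \|f\|_H\le 1}\left|\int_{[0,1]^s} f(\boldsymbol{x})\,d\boldsymbol{x}\;-\;\frac{1}{N}\sum_{n=1}^{N} f(\boldsymbol{x}_n)\right|,
\]
and the bounds discussed in this item control how fast e(Q_N,H) decays in N for digital nets in the weighted Walsh and Sobolev spaces.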

We extend the results for the weighted Hilbert space based on Walsh functions to weighted Sobolev spaces. In this case we use randomly digitally shifted digital nets. The construction principle is the same as before; only the worst-case error is slightly different. Again, digital nets obtained from our search algorithm yield a worst-case error achieving the optimal rate of convergence and, as before, strong tractability holds under some condition on the weights. These results show that this construction of digital nets yields the best results of this kind known to date and that our construction methods are comparable to those known for lattice rules.

We conclude the article with numerical results comparing the expected worst-case error for randomly digitally shifted digital nets with those for randomly shifted lattice rules.



17.
In this paper we deal with the two-machine flow shop scheduling problem with several availability constraints on the first machine, under the resumable scenario. We first develop an improved algorithm with a relative worst-case error bound of 4/3. We then present a polynomially solvable case.

18.
In this paper we study the problem of scheduling n jobs on a single machine with availability constraints. The objective is to minimize the total weighted job completion time. We show that the problem is NP-hard in the strong sense. We then consider two intractable special cases, namely the proportional-weight case and the single-availability-constraint case. We propose two heuristics for these cases and analyze their worst-case error bounds.
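The two heuristics are not described in the abstract; the natural building block for this objective is Smith's weighted-shortest-processing-time (WSPT) rule, which is optimal when the machine is always available and serves only as a heuristic ingredient under availability constraints. A minimal sketch:

def wspt_order(jobs):
    """Smith's rule: sequence jobs in non-increasing order of weight/processing
    time.  Optimal for 1 || sum w_j C_j without availability constraints; with
    an unavailable period it is only a starting point for heuristics.
    `jobs` is a list of (weight, processing_time) pairs; returns job indices.
    """
    return sorted(range(len(jobs)),
                  key=lambda j: jobs[j][0] / jobs[j][1],
                  reverse=True)

# Example: wspt_order([(3, 2), (1, 4), (5, 5)]) -> [0, 2, 1]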

19.
In this paper we consider numerical integration of smooth functions lying in a particular reproducing kernel Hilbert space. We show that the worst-case error of numerical integration in this space converges at the optimal rate, up to some power of a log N factor. A similar result is shown for the mean square worst-case error, where the bound for the latter is always better than the bound for the square worst-case error. Finally, bounds for integration errors of functions lying in the reproducing kernel Hilbert space are given. The paper concludes by illustrating the theory with numerical results.

20.
A multi-item inventory system is considered which has the property that, for each single item, a reorder policy using the E.O.Q. formula would be appropriate. Holding costs are linear, and fixed ordering costs are assumed to be composed of a major set-up cost, reflecting the pure fact of placing an order, and a sum of minor set-up costs corresponding to the items included in the order. If it is desirable to form a certain number of groups of items where all items of one group share the same order cycle, it is shown that there is always an optimal grouping in which items are arranged in increasing order of their ratio of yearly holding costs to minor set-up costs. A heuristic for forming the groups is given which turns out to be an optimal algorithm for the case that there are no major set-up costs. After an initial sorting of the ratios, the worst-case complexity of this procedure is linear in the number of items.
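To make the cost structure explicit, under the standard joint-cycle cost model a group of items sharing cycle length T incurs yearly cost (K + sum k_i)/T + (T/2) sum h_i D_i, minimised at T* = sqrt(2(K + sum k_i)/sum h_i D_i) with minimum cost sqrt(2(K + sum k_i) sum h_i D_i). Here K is the major set-up cost, k_i the minor set-up costs, h_i the unit holding costs and D_i the yearly demands; this notation and the brute-force check below are my own illustration of the structural result, not the paper's heuristic.

from itertools import combinations
from math import sqrt

def group_cost(items, K):
    """Minimal yearly cost of one group sharing a common order cycle.
    items: list of (h_i * D_i, k_i) pairs; K: major set-up cost per order.
    """
    hold = sum(hd for hd, _ in items)         # sum of h_i * D_i
    setup = K + sum(k for _, k in items)      # major + minor set-up costs
    return sqrt(2.0 * setup * hold)

def best_consecutive_grouping(items, K, n_groups):
    """Exhaustively evaluate consecutive groupings of the ratio-sorted items;
    by the structural result above, this search contains an optimal grouping."""
    items = sorted(items, key=lambda it: it[0] / it[1])   # sort by h_i D_i / k_i
    n, best = len(items), float("inf")
    for cuts in combinations(range(1, n), n_groups - 1):
        bounds = [0, *cuts, n]
        cost = sum(group_cost(items[a:b], K) for a, b in zip(bounds, bounds[1:]))
        best = min(best, cost)
    return best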
