20 similar documents found.
1.
Conclusion. By numerically integrating a two-dimensional normal density function within limits established by constraints, we constructed a family of isolines of constant probability (0.01–0.99) for the appearance of the desired realizations near the compromise solution of the problem determined from the mean values. In combination with a graphic image of the Pareto subdomain of optimal solutions, this provides a clear characterization of the selected compromise project. It becomes possible to follow numerically how the probability of realizing the project decreases as the quality requirements become more stringent. It was shown that the probability P* of obtaining project characteristics no poorer than the deterministic ones (according to mean values) is very sensitive to the geometry of the scattering domain of the properties, and varies within the limits 0 &lt; P* &lt; 0.5 independently of the dimension n_y of the property space. Translated from Mekhanika Kompozitnykh Materialov, Vol. 30, No. 3, pp. 391–397, May–June, 1994.
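The construction above rests on numerically integrating a two-dimensional normal density over a region. A minimal sketch of that quadrature step, assuming independent unit-variance components and a rectangular region (the paper's limits are constraint-defined and its normal is general, so this is only illustrative):

```python
import math

SQRT2PI = math.sqrt(2.0 * math.pi)

def phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def rect_prob_quadrature(ax, bx, ay, by, n=400):
    """Midpoint-rule quadrature of the bivariate normal density over the
    rectangle [ax, bx] x [ay, by] (independent, unit-variance components)."""
    hx, hy = (bx - ax) / n, (by - ay) / n
    total = 0.0
    for i in range(n):
        x = ax + (i + 0.5) * hx
        fx = math.exp(-0.5 * x * x) / SQRT2PI
        for j in range(n):
            y = ay + (j + 0.5) * hy
            total += fx * math.exp(-0.5 * y * y) / SQRT2PI
    return total * hx * hy

def rect_prob_exact(ax, bx, ay, by):
    # Closed form for the independent case, used as a sanity check
    return (phi(bx) - phi(ax)) * (phi(by) - phi(ay))
```

For a correlated normal the density would carry the full covariance matrix, and the integration region would be defined by the constraints rather than a box.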
2.
Quoc Tran-Dinh, Nhan H. Pham, Dzung T. Phan, Lam M. Nguyen 《Mathematical Programming》2022,191(2):1005-1071
We introduce a new approach to develop stochastic optimization algorithms for a class of stochastic composite and possibly nonconvex optimization problems. The main idea...
3.
An optimal method for stochastic composite optimization
Guanghui Lan 《Mathematical Programming》2012,133(1-2):365-397
This paper considers an important class of convex programming (CP) problems, namely stochastic composite optimization (SCO), whose objective function is given by the summation of general nonsmooth and smooth stochastic components. Since SCO covers nonsmooth, smooth, and stochastic CP as special cases, a valid lower bound on the rate of convergence for solving these problems is known from the classic complexity theory of convex programming. Note, however, that optimization algorithms achieving this lower bound had not previously been developed. In this paper, we show that the simple mirror-descent stochastic approximation method exhibits the best-known rate of convergence for solving these problems. Our major contribution is to introduce the accelerated stochastic approximation (AC-SA) algorithm based on Nesterov's optimal method for smooth CP (Nesterov in Doklady AN SSSR 269:543–547, 1983; Nesterov in Math Program 103:127–152, 2005), and to show that the AC-SA algorithm achieves the aforementioned lower bound on the rate of convergence for SCO. To the best of our knowledge, it is also the first universally optimal algorithm in the literature for solving nonsmooth, smooth, and stochastic CP problems. We illustrate the significant advantages of the AC-SA algorithm over existing methods in the context of solving a special but broad class of stochastic programming problems.
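To fix ideas, here is a toy sketch of plain stochastic approximation with a Euclidean prox (projected SGD with iterate averaging) on a one-dimensional problem; this is the baseline mirror-descent SA mentioned in the abstract, not Lan's accelerated AC-SA scheme, and the problem, stepsizes, and constants are made up for illustration:

```python
import random

def mirror_descent_sa(sample, steps=20000, radius=10.0, c=1.0, seed=0):
    """Euclidean-prox stochastic mirror descent for min_x E[(x - xi)^2 / 2]
    over |x| <= radius.  The minimizer is E[xi]; the stochastic gradient at
    x for a draw xi is (x - xi)."""
    rng = random.Random(seed)
    x, avg = 0.0, 0.0
    for t in range(1, steps + 1):
        xi = sample(rng)
        g = x - xi                        # unbiased gradient estimate
        x -= c / t ** 0.5 * g             # O(1/sqrt(t)) stepsize
        x = max(-radius, min(radius, x))  # projection onto the feasible box
        avg += (x - avg) / t              # running average of iterates
    return avg

# Noisy draws centered at 3.0; the averaged iterate should approach 3.0
x_hat = mirror_descent_sa(lambda rng: 3.0 + rng.gauss(0, 1))
```

AC-SA would additionally maintain an aggregated and a middle iterate in the style of Nesterov's method to reach the optimal rate on the smooth component.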
4.
Xiao Wang Shuxiong Wang Hongchao Zhang 《Computational Optimization and Applications》2017,68(3):579-618
We study an inexact proximal stochastic gradient (IPSG) method for convex composite optimization, whose objective function is the sum of an average of a large number of smooth convex functions and a convex, but possibly nonsmooth, function. Variance reduction techniques are incorporated in the method to reduce the stochastic gradient variance. The main feature of the IPSG algorithm is that it allows the proximal subproblems to be solved inexactly while still retaining global convergence with desirable complexity bounds. Different subproblem stopping criteria are proposed. Global convergence and component gradient complexity bounds are derived both when the objective function is strongly convex and when it is merely convex. Preliminary numerical experiments show the overall efficiency of the IPSG algorithm.
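A minimal sketch of the variance-reduced proximal stochastic gradient idea, on a 1-D lasso-type objective; note this solves the prox exactly in closed form, so it illustrates the prox-SVRG-style inner loop rather than the paper's inexact-subproblem machinery, and all constants are illustrative:

```python
import random

def soft_threshold(x, t):
    # Proximal operator of t * |.|: shrink x toward zero by t
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def prox_svrg(a, b, lam=0.1, eta=0.1, epochs=30, inner=50, seed=0):
    """Prox-SVRG-style sketch for
    min_x (1/n) sum_i 0.5*(a_i*x - b_i)^2 + lam*|x|.
    Each epoch takes a full-gradient snapshot, then runs inner
    variance-reduced proximal stochastic gradient steps."""
    rng = random.Random(seed)
    n = len(a)
    x = 0.0
    for _ in range(epochs):
        snap = x
        full = sum(ai * (ai * snap - bi) for ai, bi in zip(a, b)) / n
        for _ in range(inner):
            i = rng.randrange(n)
            # variance-reduced estimate: grad_i(x) - grad_i(snap) + full grad
            g = a[i] * (a[i] * x - b[i]) - a[i] * (a[i] * snap - b[i]) + full
            x = soft_threshold(x - eta * g, eta * lam)
    return x

# With a_i = 1 the minimizer is soft_threshold(mean(b), lam) = 2.0 - 0.1 = 1.9
x_lasso = prox_svrg([1.0] * 5, [1.0, 2.0, 3.0, 2.0, 2.0])
```

The variance-reduced gradient is exact at the snapshot, which is what allows a constant stepsize and linear convergence in the strongly convex case.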
5.
6.
7.
Service organizations that operate outside the normal 8-hour day and face wide fluctuations in demand constantly struggle to optimize the size and composition of their workforce. Recent research has shown that improved personnel scheduling methods that take demand uncertainty into account can lead to significant reductions in labor costs. This paper addresses a staff planning and scheduling problem that arises at United States Postal Service (USPS) mail processing & distribution centers (P&DCs) and develops a two-stage stochastic integer program with recourse for the analysis. In the first stage, before the demand is known, the number of full-time and part-time employees in the permanent workforce is determined. In the second stage, the demand is revealed and workers are assigned to specific shifts during the week. When necessary, overtime and casual labor are used to satisfy demand.
This paper consists of two parts: (1) the analysis of the demand distribution in light of historical data, and (2) the development and analysis of the stochastic integer programming model. Using weekly demand for a three-year period, we first investigate the possibility of an end-of-month effect, i.e., that the week at the end of the month has a larger volume than the other weeks. We show that the data fail to indicate that this is the case.
In the computational phase of the work, three scenarios are considered: high, medium, and low demand. The resulting stochastic optimization problem is a large-scale integer program that embodies the full set of contractual agreements and labor rules governing the design of the workforce at a P&DC. The usefulness of the model is evaluated by solving a series of instances constructed from data provided by the Dallas facility. The results indicate that significant savings are likely when the recourse problem is used to help structure the workforce.
This work was supported in part by the National Science Foundation under grants DMI-0218701 and DMI-0217927.
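The two-stage structure can be sketched with a toy recourse model: pick the permanent workforce size first, then cover each revealed demand scenario with overtime. All cost figures, the productivity rate, and the scenario data below are invented for illustration; the paper's model is a large-scale integer program with shifts and labor rules:

```python
def best_staffing(scenarios, wage=100.0, overtime=150.0, productivity=40.0):
    """Tiny two-stage recourse sketch: choose a permanent workforce size w
    (first stage) before demand is known; once a demand scenario is revealed
    (second stage), cover any shortfall with overtime."""
    def expected_cost(w):
        cost = w * wage                      # first-stage staffing cost
        for prob, demand in scenarios:
            shortfall = max(demand - w * productivity, 0.0)
            # second-stage recourse: overtime priced per worker-equivalent
            cost += prob * overtime * (shortfall / productivity)
        return cost
    return min(range(0, 100), key=expected_cost)

# High / medium / low demand scenarios, echoing the paper's computational phase
w_star = best_staffing([(0.25, 2400.0), (0.5, 2000.0), (0.25, 1600.0)])
```

The optimizer staffs for the medium scenario and buys overtime only in the high one, because overtime costs more per unit than regular wages but less than staffing for the peak.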
8.
A new concept of (normalized) convergence of random variables is introduced. This convergence is preserved under Lipschitz transformations, follows from convergence in mean, and itself implies convergence in probability. If a sequence of random variables satisfies a limit theorem, then it is a normalized convergent sequence. The introduced concept is applied to study the convergence rate of a statistical approach in stochastic optimization.
9.
We consider a multiperiod mean-variance model where the model parameters change according to a stochastic market. The mean vector and covariance matrix of the random returns of risky assets all depend on the state of the market during any period, where the market process is assumed to follow a Markov chain. Dynamic programming is used to solve an auxiliary problem which, in turn, gives the efficient frontier of the mean-variance formulation. An explicit expression is obtained for the efficient frontier, and an illustrative example is given to demonstrate the application of the procedure.
10.
Georg Bechler Claudius Steinhardt Jochen Mackert Robert Klein 《European Journal of Operational Research》2021,288(3):902-917
Recent advances in customer choice analysis have demonstrated the strong impact of compromise alternatives on the behaviour of decision-makers in a wide range of decision situations. Compromise alternatives are characterized by an intermediate performance on some of the relevant attributes. For instance, price compromises are well known in the sense that customers tend to buy neither the cheapest nor the most expensive alternative, but the mid-priced one. Thus far, however, the literature on product line optimization has not considered such context effects.
In this paper, we propose a model-based approach for optimal product line selection which incorporates customers' preferences for compromise alternatives. We model customer choice in a realistic, sophisticated fashion by applying an established utility model that integrates compromise variables into a multinomial logit model. We formulate the resulting optimization problem as a mixed-integer linear program. The challenging modelling feature, which makes the formulation substantially more complicated than existing ones without compromises, is the endogenous effect of selected products on other alternatives' utilities, which must be adequately captured via compromise variables. Based on data collected in a stated choice experiment in a retail setting, we perform a computational study and demonstrate the superiority of our product line selection approach over a reference model that does not take compromises into account. Even under uncertainty in the estimated utility parameters, profit gains of, on average, 23% can be achieved in our experimental setting.
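The endogeneity is easy to see in a toy multinomial logit with a crude compromise term: which alternatives count as "intermediate" depends on the offered set itself. The additive bonus form and all numbers below are illustrative, not the utility model estimated in the paper:

```python
import math

def mnl_shares(prices, utilities, compromise_bonus=0.0):
    """Multinomial-logit choice shares with a simple 'compromise' variable:
    alternatives that are neither the cheapest nor the most expensive in the
    offered set receive an extra utility term."""
    lo, hi = min(prices), max(prices)
    v = [u + (compromise_bonus if lo < p < hi else 0.0)
         for p, u in zip(prices, utilities)]
    z = sum(math.exp(x) for x in v)
    return [math.exp(x) / z for x in v]

# Equal base utilities: without the compromise term all shares are 1/3;
# with it, the mid-priced alternative gains share
base = mnl_shares([10, 15, 20], [1.0, 1.0, 1.0])
with_comp = mnl_shares([10, 15, 20], [1.0, 1.0, 1.0], compromise_bonus=0.5)
```

Adding or removing a product changes which alternatives are intermediate, so the compromise utilities of the other products shift, which is what complicates the mixed-integer formulation.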
11.
We consider multistage stochastic optimization models containing nonconvex constraints, e.g., due to logical or integrality requirements. We study three variants of Lagrangian relaxations and of the corresponding decomposition schemes, namely scenario, nodal, and geographical decomposition. Based on convex equivalents for the Lagrangian duals, we compare the duality gaps for these decomposition schemes. The first main result states that scenario decomposition provides a duality gap smaller than or equal to that of nodal decomposition. The second group of results concerns large stochastic optimization models with loosely coupled components; it provides conditions relating the duality gap of geographical decomposition to the duality gaps of scenario and nodal decomposition, respectively. Mathematics Subject Classification (1991): 90C15. Acknowledgments: This work was supported by the Priority Programme Online Optimization of Large Scale Systems of the Deutsche Forschungsgemeinschaft. The authors wish to thank Andrzej Ruszczyński (Rutgers University) for helpful discussions.
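A duality gap from scenario decomposition can be seen in a miniature example: duplicating the first-stage decision per scenario and dropping the nonanticipativity coupling (i.e., fixing the multipliers at zero, the crudest Lagrangian bound) lets each scenario optimize separately. The toy problem and finite decision set below are invented for illustration:

```python
def scenario_bounds(scenarios, choices):
    """Nonanticipative primal optimum vs. the scenario (dual) lower bound for
    a toy problem min_x E[(x - d_s)^2] with x restricted to a finite set.
    Relaxing nonanticipativity lets each scenario pick its own x, giving a
    lower bound; the difference is a duality gap."""
    primal = min(sum(p * (x - d) ** 2 for p, d in scenarios) for x in choices)
    dual = sum(p * min((x - d) ** 2 for x in choices) for p, d in scenarios)
    return primal, dual

# Two equally likely demand scenarios; the here-and-now x cannot match both
primal, dual = scenario_bounds([(0.5, 1.0), (0.5, 4.0)], choices=range(6))
```

With nonzero multipliers the scenario dual can be tightened, and the paper's results compare how large the remaining gap is across the three decomposition schemes.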
12.
《European Journal of Operational Research》2006,171(3):723-724
The shifting bottleneck (SB) heuristic is among the most successful approximation methods for solving the job shop problem. It is essentially a machine-based decomposition procedure in which a series of one-machine sequencing problems (OMSPs) are solved. However, such a procedure has been reported to be highly ineffective for flow shop problems. In particular, we show that for the 2-machine flow shop problem, the SB heuristic delivers the optimal solution in only a small number of instances. We examine the reason behind the failure of the machine-based decomposition method for the flow shop. An optimal machine-based decomposition procedure is formulated for the 2-machine flow shop, whose time complexity is worse than that of the celebrated Johnson's rule. The contribution of the present study lies in showing that the same machine-based decomposition procedures that are so successful in solving complex job shops can also be suitably modified to optimally solve the simpler flow shops.
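Johnson's rule, the benchmark the abstract refers to, solves the 2-machine flow shop to optimality in O(n log n). A sketch with an illustrative instance (the job data are made up):

```python
def johnson_sequence(jobs):
    """Johnson's rule for the 2-machine flow shop (minimizes makespan):
    jobs with a < b go first in increasing order of a; the remaining jobs
    go last in decreasing order of b.  `jobs` is a list of (a, b) times."""
    first = sorted((j for j in range(len(jobs)) if jobs[j][0] < jobs[j][1]),
                   key=lambda j: jobs[j][0])
    last = sorted((j for j in range(len(jobs)) if jobs[j][0] >= jobs[j][1]),
                  key=lambda j: -jobs[j][1])
    return first + last

def makespan(jobs, seq):
    # Completion-time recursion for a 2-machine permutation flow shop
    t1 = t2 = 0
    for j in seq:
        t1 += jobs[j][0]
        t2 = max(t2, t1) + jobs[j][1]
    return t2

jobs = [(3, 6), (5, 2), (1, 2), (6, 6), (7, 5)]
seq = johnson_sequence(jobs)   # [2, 0, 3, 4, 1]
```

On this instance the rule attains makespan 24, which is optimal; the SB heuristic, by contrast, sequences each machine via one-machine subproblems and can miss this permutation.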
13.
14.
A. M. Kagan 《Journal of Mathematical Sciences》1981,16(5):1379-1385
Wide-sense analogs of a series of well-known characteristic properties of the normal distribution are presented. Translated from Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova AN SSSR, Vol. 61, pp. 59–67, 1976.
15.
B Desmet E-H Aghezzaf H Vanmaele 《The Journal of the Operational Research Society》2010,61(1):156-163
This paper presents an approximation model for retailer replenishment lead times in a two-echelon distribution system, and discusses its implementation for safety stock optimization in a one-warehouse, N-identical-retailers system. The model assumes normality of demand and of nominal lead times, and takes into account not only the averages of these parameters but also their variances. The approximation model is first tested on a two-echelon, one-warehouse, N-identical-retailers system using discrete-event simulation. It is then applied to optimize the safety stock in a two-echelon distribution system of a European market leader in the production and distribution of air-conditioning equipment. Results of this implementation are analysed and discussed in detail.
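The role of lead-time variance can be seen in the textbook single-echelon safety-stock formula, which combines demand and lead-time moments exactly as the abstract describes (averages plus variances). This is the standard approximation under normality, not the paper's two-echelon model, and the numbers are illustrative:

```python
import math

def safety_stock(z, mu_d, var_d, mu_l, var_l):
    """Classical safety-stock approximation under normal demand and lead time:
    demand over the random lead time has variance mu_l*var_d + mu_d^2*var_l,
    so SS = z * sqrt(mu_l*var_d + mu_d^2*var_l)."""
    return z * math.sqrt(mu_l * var_d + mu_d ** 2 * var_l)

# z ~ 1.645 (95% service), daily demand mean 100 / variance 400,
# lead time mean 4 days, with and without lead-time variance
fixed_lt = safety_stock(1.645, 100.0, 400.0, 4.0, 0.0)
random_lt = safety_stock(1.645, 100.0, 400.0, 4.0, 1.0)
```

Even a lead-time variance of one day dominates here because it is scaled by the squared mean demand, which is why modelling lead-time variability at the retailers matters.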
16.
Under the reliability NBU/NWU conditions, the exponential distribution is characterized by stochastic ordering properties which link the geometric compound with minimum order statistics or spacings of order statistics. This partially answers a question posed by Kakosyan, Klebanov and Melamed (1984, Characterization of Distributions by the Method of Intensively Monotone Operators, Springer, New York). We also show related results based on the residual life in a renewal process and on record values. Finally, some fundamental properties of the NBUC/NWUC classes of life distributions are investigated.
17.
18.
19.
We consider nonlinear multistage stochastic optimization problems in the spaces of integrable functions. We allow for nonlinear dynamics and general objective functionals, including dynamic risk measures. We study causal operators describing the dynamics of the system and derive the Clarke subdifferential for a penalty function involving such operators. Then we introduce the concept of subregular recourse in nonlinear multistage stochastic optimization and establish subregularity of the resulting systems in two formulations: with built-in nonanticipativity and with explicit nonanticipativity constraints. Finally, we derive optimality conditions for both formulations and study their relations.
20.
Vincent Leclère 《Operations Research Letters》2019,47(6):553-559
We consider the relaxation of almost-sure constraints in dynamic stochastic optimization problems and their convergence. We show an epiconvergence result relying on the Kudo convergence of σ-algebras and the continuity of the objective and constraint operators. We present classical constraints and objective functions with conditions ensuring their continuity. We are motivated by a Lagrangian decomposition algorithm, known as Dual Approximate Dynamic Programming, that relies on such relaxation and can also be understood as a decision rule approach in the dual.