Similar Documents
20 similar documents found.
1.
This contribution gives an overview of the state of the art and recent advances in mixed integer optimization for solving planning and design problems in the process industry. Several case studies stress specific aspects and address the typical difficulties of real-world problems. Mixed integer linear optimization is widely used to solve supply chain planning problems; some of the complicating features, such as origin tracing and shelf-life constraints, are discussed in more detail. If properly constructed, the planning models can also be used for product and customer portfolio analysis. We also stress the importance of multi-criteria optimization and of correct modeling for optimization under uncertainty. Stochastic programming for continuous LP problems is now part of most optimization packages, and there is encouraging progress in the fields of stochastic MILP and robust MILP. Process and network design problems often lead to nonconvex mixed integer nonlinear programming models. If the time to compute the solution is not bounded, commercial solvers are already available that can compute the global optima of such problems within hours; if time is more restricted, tailored solution techniques are required.
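A shelf-life restriction of the kind mentioned above can be encoded in a MILP by forcing cumulative production to lag cumulative demand under FIFO consumption. The following minimal sketch is my own illustration, not from the paper; the demand data, cost figures, and the use of the PuLP modelling library are all assumptions:

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, PULP_CBC_CMD

T = range(6)
demand = [30, 40, 20, 50, 10, 40]          # hypothetical demand per period
cap, prod_cost, hold_cost = 60, 2.0, 1.0
L = 2                                      # shelf life in periods

m = LpProblem("plan_with_shelf_life", LpMinimize)
p = [LpVariable(f"p{t}", 0, cap, cat="Integer") for t in T]   # production
inv = [LpVariable(f"i{t}", 0) for t in T]                     # end inventory

m += lpSum(prod_cost * p[t] + hold_cost * inv[t] for t in T)
for t in T:
    prev = inv[t - 1] if t else 0
    m += prev + p[t] - demand[t] == inv[t]       # inventory flow balance
    # shelf life under FIFO: everything produced up to period t-L must be
    # consumed by period t, so cumulative production lags cumulative demand
    if t >= L:
        m += lpSum(p[u] for u in range(t - L + 1)) <= sum(demand[: t + 1])

m.solve(PULP_CBC_CMD(msg=0))
print([v.value() for v in p])
```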

2.
As a synchronous parallel framework, the parallel variable transformation (PVT) algorithm is effective for solving unconstrained optimization problems. In this paper, based on the idea that a constrained optimization problem can be converted into an equivalent differentiable unconstrained problem by introducing the Fischer function, we propose an asynchronous PVT algorithm for solving large-scale linearly constrained convex minimization problems. The new algorithm can terminate as soon as any processor satisfies the termination condition, without waiting for the other processors, which enhances its practical efficiency on large-scale problems. Global convergence of the new algorithm is established under suitable assumptions; in particular, the linear rate of convergence does not depend on the number of processors.
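The Fischer function referred to here is, in its usual form, the Fischer–Burmeister function φ(a, b) = √(a² + b²) − a − b, whose zeros characterize complementarity: φ(a, b) = 0 exactly when a ≥ 0, b ≥ 0 and ab = 0, which is what lets constraint conditions be folded into a differentiable unconstrained problem. A minimal numerical check (my own sketch, not the paper's code):

```python
import numpy as np

def fischer_burmeister(a, b):
    """phi(a, b) = sqrt(a^2 + b^2) - a - b; zero exactly on the
    complementarity set {a >= 0, b >= 0, a*b = 0}."""
    return np.sqrt(a**2 + b**2) - a - b

# phi vanishes on complementary pairs ...
print(fischer_burmeister(0.0, 3.0))   # 0.0
print(fischer_burmeister(2.0, 0.0))   # 0.0
# ... and is nonzero otherwise, so complementarity conditions become
# a system of equations that is smooth away from the origin
print(fischer_burmeister(1.0, 1.0))   # sqrt(2) - 2 < 0
```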

3.
Stochastic programming is recognized as a powerful tool to support decision making under uncertainty in financial planning. The deterministic equivalent formulations of these stochastic programs have huge dimensions even for moderate numbers of assets, time stages, and scenarios per time stage. So far, models treated by mathematical programming approaches have been limited to simple linear or quadratic forms, owing to the inability of currently available solvers to handle NLP problems of typical size. However, stochastic programming problems are highly structured, and the key to their efficient solution is the ability to exploit that structure. Interior point methods are well suited to the solution of very large non-linear optimization problems. In this paper we exploit this feature and show how portfolio optimization problems with sizes measured in millions of constraints and decision variables, featuring constraints on semi-variance or skewness, or non-linear utility functions in the objective, can be solved with a state-of-the-art solver.
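Semi-variance, one of the nonlinear risk measures mentioned, penalizes only returns below the mean, which is what makes the model non-quadratic in the classical sense. A small sketch of how such terms are evaluated on scenario data (the scenario matrix and weights below are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(0.05, 0.1, size=(1000, 4))   # hypothetical scenario returns, 4 assets
w = np.array([0.4, 0.3, 0.2, 0.1])          # portfolio weights

port = R @ w                                # portfolio return per scenario
downside = np.minimum(port - port.mean(), 0.0)
semi_variance = np.mean(downside**2)        # E[ min(r - E[r], 0)^2 ]
skewness = np.mean((port - port.mean())**3) / port.std()**3
print(semi_variance, skewness)
```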

4.
The Advantages of Fuzzy Optimization Models in Practical Use
Classical mathematical programming models require well-defined coefficients and right-hand sides; to avoid unsatisfactory models, broad information gathering and processing is usually necessary. For real problems, however, some model parameters can only be roughly estimated. While classical models replace vague data with "average" data, fuzzy models offer the opportunity to model the subjective perceptions of the decision maker as precisely as he or she can describe them. Thus the risk of applying a wrong model of reality and selecting solutions that do not reflect the real problem can be clearly reduced. Modeling real problems with deterministic and stochastic models requires extensive information processing; on the other hand, we know that an optimal solution is ultimately defined by only a few restrictions, and especially for larger systems we notice afterwards that most of the information gathered was useless. This dilemma of data processing stems from the fact that we must first compute the solution in order to determine which information must be well-defined and where vague data would suffice. Using multicriteria programming problems, we demonstrate that the dilemma can be handled adequately by modeling real problems as fuzzy systems combined with interactive problem solving. When describing the real problem as a fuzzy system, only the information that is available or can be obtained easily is considered at first, and we then try to develop an optimal solution. Depending on the cost-benefit relation, further information can be gathered in order to describe the solution more precisely. Furthermore, some interactive fuzzy solution algorithms, e.g. FULPAL, also provide the opportunity to solve mixed integer multicriteria programming models.
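Vague coefficients of the kind discussed are typically modelled by membership functions, the simplest being a triangular fuzzy number. A minimal sketch (the values are hypothetical; the paper does not prescribe this specific shape):

```python
import numpy as np

def triangular(x, lo, peak, hi):
    """Membership of x in the triangular fuzzy number (lo, peak, hi):
    0 outside [lo, hi], 1 at the peak, linear in between."""
    x = np.asarray(x, dtype=float)
    up = (x - lo) / (peak - lo)
    down = (hi - x) / (hi - peak)
    return np.clip(np.minimum(up, down), 0.0, 1.0)

# a cost coefficient described as "roughly 100, surely between 80 and 130"
print(triangular([80, 95, 100, 120, 130], 80, 100, 130))
```

A decision maker only has to supply the three anchor points, which is exactly the kind of easily available information the interactive procedure starts from.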

5.
The parallel solution of multiple systems of initial-value problems (IVPs) in ordinary differential equations is challenging because the amount of computation involved in solving a given IVP is generally not well correlated with that of solving another. In this paper, we describe how to efficiently solve multiple systems of stiff IVPs in parallel through a single-instruction, multiple-data (SIMD) implementation of the RODAS stiff IVP solver on the Cell Broadband Engine (CBE). We solve two systems of stiff IVPs simultaneously on each of the eight synergistic processing elements per CBE chip, for a total of 16 systems of IVPs. We demonstrate a speedup of 1.89 (a parallel efficiency of over 94%) over the corresponding serial code on a realistic example involving the operation of a chemical reactor. The techniques described apply to other multi-core processors besides the CBE and can be expected to grow in importance as computer architectures evolve towards larger word sizes.
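The essence of the SIMD approach is that every solver step is executed in lockstep on several IVP systems at once. The sketch below is not the RODAS code; it is a batched backward Euler step on the hypothetical test problems y' = λy, showing how per-system data can be laid out so that one vectorized statement advances all 16 systems together:

```python
import numpy as np

# 16 independent stiff test problems y' = lam * y, advanced in lockstep
lam = -np.logspace(1, 4, 16)          # one stiffness parameter per system
y = np.ones(16)                       # initial conditions
t, dt = 0.0, 0.01

for _ in range(100):
    # backward Euler: y_{n+1} = y_n + dt*lam*y_{n+1}, solved per system;
    # a single SIMD-style statement updates all 16 systems at once
    y = y / (1.0 - dt * lam)
    t += dt

print(y[:4])                          # all decay towards 0 without instability
```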

6.
Many practical large-scale optimization problems are not only sparse but also display some form of block structure, such as primal or dual block-angular structure. Often these structures are nested: each block of the coarse top-level structure is block-structured itself. Problems with these characteristics appear frequently in stochastic programming, but also in other areas such as telecommunication network modelling. We present a linear algebra library, tailored for problems with such structure, that is used inside an interior point solver for convex quadratic programming problems. Due to its object-oriented design it can be used to exploit virtually any nested block structure arising in practical problems, eliminating the need to write highly specialised linear algebra modules for every type of problem separately. Through a careful implementation we achieve almost automatic parallelisation of the linear algebra. The efficiency of the approach is illustrated on several problems arising in financial planning, namely in asset and liability management. The problems are modelled as multistage decision processes and by nature lead to nested block-structured problems. By taking the variance of the random variables into account, the problems become non-separable quadratic programs. A reformulation of the problem is proposed which reduces the density of the matrices involved and thereby significantly simplifies its solution by an interior point method. The object-oriented parallel solver achieves high efficiency by careful exploitation of the block sparsity of these problems. As a result, a problem with over 50 million decision variables is solved in just over 2 hours on a parallel computer with 16 processors. The approach is by nature scalable, and the parallel implementation achieves nearly perfect speed-ups on a range of problems. Supported by the Engineering and Physical Sciences Research Council of UK, EPSRC grant GR/R99683/01.
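The object-oriented idea can be pictured as a tree of blocks in which a node is either a dense leaf matrix or a list of child blocks, and a solve recurses over the structure. The toy sketch below covers only the simplest nested block-diagonal case (real block-angular structures also carry linking border blocks) and is my own illustration, not the library's actual interface:

```python
import numpy as np

class Leaf:
    def __init__(self, A):
        self.A = np.asarray(A, float)
        self.n = self.A.shape[0]
    def solve(self, b):
        return np.linalg.solve(self.A, b)

class BlockDiag:
    """A node whose children may themselves be Leaf or BlockDiag:
    the same nested structure the solver exploits."""
    def __init__(self, children):
        self.children = children
        self.n = sum(c.n for c in children)
    def solve(self, b):
        out, i = [], 0
        for c in self.children:                 # each child solve is
            out.append(c.solve(b[i:i + c.n]))   # independent, hence parallel
            i += c.n
        return np.concatenate(out)

K = BlockDiag([Leaf([[4.0, 1.0], [1.0, 3.0]]),
               BlockDiag([Leaf([[2.0]]), Leaf([[5.0]])])])
print(K.solve(np.array([1.0, 2.0, 3.0, 4.0])))
```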

7.
Demand fluctuations that cause variations in output levels will affect a firm's technical inefficiency. To assess this demand effect, a demand-truncated production function is developed and an "effectiveness" measure is proposed. Often a firm can adjust some input resources that influence the output level in an attempt to match demand. We propose a short-run capacity planning method, termed proactive data envelopment analysis, which quantifies the effectiveness of a firm's production system under demand uncertainty. Using a stochastic programming DEA approach, we improve upon short-run capacity expansion planning models by accounting for the decreasing marginal benefit of inputs and estimating the expected value of effectiveness given demand. The law of diminishing marginal returns is an important property of production functions; however, constant marginal productivity is usually assumed in capacity expansion problems, resulting in biased capacity estimates. Applying the proposed model in an empirical study of convenience stores in Japan demonstrates the actionable advice the model provides about the levels of variable inputs in uncertain demand environments. We conclude that the method is most suitable for characterizing production systems with perishable goods, or service systems that cannot store inventories.
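For reference, the deterministic building block being extended here is the standard output-oriented DEA envelopment LP: for firm o, maximize φ subject to a convex combination of peers using no more input while producing at least φ times firm o's output. A small sketch with made-up data (PuLP, the data, and the single-input/single-output setting are my assumptions; the paper's stochastic extension is not reproduced):

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, PULP_CBC_CMD

# hypothetical data: 4 stores, 1 input (staff hours), 1 output (sales)
X = [100, 120, 80, 150]
Y = [300, 420, 250, 400]
o = 0                                   # store under evaluation

m = LpProblem("dea_output_oriented", LpMaximize)
phi = LpVariable("phi", lowBound=0)
lam = [LpVariable(f"lam{j}", lowBound=0) for j in range(4)]
m += phi
m += lpSum(lam[j] * X[j] for j in range(4)) <= X[o]        # input side
m += lpSum(lam[j] * Y[j] for j in range(4)) >= phi * Y[o]  # output side
m += lpSum(lam) == 1    # variable returns to scale (diminishing returns)
m.solve(PULP_CBC_CMD(msg=0))
print(phi.value())      # phi > 1 means store o could expand output
```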

8.
Based on multiparametric linear programming theory, the bilevel linear programming problem under uncertainty is transformed into a family of linear programming problems parameterized by the uncertain parameters. Using the pessimistic criterion from decision making under uncertainty, the most favourable outcome is then selected among the least favourable ones, which yields the optimal solution of the uncertain bilevel linear program. Simulation results on numerical examples show that the proposed pessimistic decision method is an effective decision support tool for problems such as the planning and operation of supply chains under uncertainty.  相似文献
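The pessimistic criterion itself can be illustrated independently of the bilevel structure: score each candidate decision by its worst outcome over all realizations of the uncertain parameter, then pick the decision with the best such score. A toy sketch (the payoff data are hypothetical):

```python
import numpy as np

# payoff[i, s]: profit of decision i under uncertain-parameter scenario s
payoff = np.array([[8.0, 2.0, 5.0],
                   [6.0, 4.0, 4.0],
                   [9.0, 1.0, 3.0]])

worst = payoff.min(axis=1)        # least favourable outcome per decision
best_i = int(worst.argmax())      # most favourable among the worst cases
print(best_i, worst[best_i])      # decision 1 with guaranteed payoff 4.0
```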

9.
Power flow calculations are one of the most important computational tools for planning and operating electric power systems. After deterministic power flow calculation methods stabilized, the need to capture uncertainty in the load definition led first to the development of probabilistic models, and later to fuzzy approaches able to deal with qualitative declarations and other non-probabilistic information about the value of the loads. Present fuzzy power flow (FPF) calculations typically use incremental techniques in order to obtain a good approximation of the fuzzy state variables. However, these models and procedures are not entirely satisfactory for evaluating the adequacy of the electric transmission system, since they are not completely symmetric. In this paper, we show how to perform the detailed calculation of the state variables of the FPF problem in an exact and symmetrical way, by solving multiple optimization problems. The procedure is illustrated using the IEEE 118 test system.
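On a linearized (DC) model the idea of recovering exact, symmetric ranges of the state variables by optimization can be shown directly: for each α-cut of the fuzzy loads, minimize and maximize each bus angle subject to the flow equations and the load intervals. The sketch below uses a hypothetical 3-bus network and scipy; the paper itself works with the full FPF model and the IEEE 118 system, so this is only an analogy:

```python
import numpy as np
from scipy.optimize import linprog

# 3-bus DC power flow, bus 0 slack; reduced susceptance matrix (per unit)
B = np.array([[15.0, -5.0],
              [-5.0, 10.0]])
# alpha-cut intervals of the fuzzy injections at buses 1 and 2
P_lo, P_hi = [-1.2, -0.8], [-0.8, -0.4]

# variables z = (theta1, theta2, P1, P2);  flow equations: B @ theta - P = 0
A_eq = np.hstack([B, -np.eye(2)])
b_eq = np.zeros(2)
bounds = [(None, None)] * 2 + list(zip(P_lo, P_hi))

for k, name in [(0, "theta1"), (1, "theta2")]:
    c = np.zeros(4); c[k] = 1.0
    lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
    hi = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
    print(f"{name} in [{lo:.4f}, {hi:.4f}]")  # exact alpha-cut of the state
```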

10.
Parallel computation offers a challenging opportunity to speed up the time-consuming enumerative procedures that are necessary to solve hard combinatorial problems. Theoretical analysis of such a parallel branch and bound algorithm is very hard, and empirical analysis is not straightforward, because the performance of a parallel algorithm cannot be evaluated simply by executing it on a few parallel systems. Among the difficulties encountered are the noise produced by other users on the system, the limited variation in parallelism (the number of processors in the system is strictly bounded), and the waste of resources involved: most of the time the outcomes of all computations are already known, and the only issue of interest is when these outcomes are produced. We describe a way to simulate the execution of parallel branch and bound algorithms on arbitrary parallel systems such that the memory and CPU requirements are very reasonable. The use of simulation has only minor consequences for the formulation of the algorithm.
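The core of such a simulation is that, once the outcome and running time of every subproblem are known, the run can be replayed on a simulated clock with as many virtual processors as desired; no parallel hardware is involved. A minimal event-list sketch (my own illustration; the work-pool discipline and timing data are assumptions):

```python
import heapq

def simulate(node_times, num_procs):
    """Replay a work-pool parallel branch and bound run on a simulated
    clock: the nodes' outcomes and running times are already known, so
    only the times at which outcomes appear need to be reproduced."""
    pending = list(node_times)                   # nodes still to process
    free = [(0.0, p) for p in range(num_procs)]  # (time processor is idle, id)
    heapq.heapify(free)
    while pending:
        clock, p = heapq.heappop(free)           # next processor to go idle
        heapq.heappush(free, (clock + pending.pop(0), p))
    return max(t for t, _ in free)               # simulated makespan

print(simulate([3.0, 1.0, 4.0, 1.0, 5.0], num_procs=2))  # -> 9.0
```

The same replay works for an arbitrary number of virtual processors, which is exactly what makes the empirical analysis independent of the physical machine.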

11.
Most applied models written in an algebraic modeling language simultaneously involve several dimensions, such as materials, location, time, or uncertainty. The information about dimensions available in the algebraic formulation is usually sufficient to retrieve different block structures from mathematical programs, and these structured problems can then be solved by adequate solution techniques. To illustrate this idea we focus on stochastic programming problems with recourse. Taking into account both the time and the uncertainty dimensions of these problems, we are able to retrieve different customized structures in their constraint matrices. We applied the Structure Exploiting Tool to retrieve the structure from models built with the GAMS modeling language. The underlying mathematical programs are solved with a decomposition algorithm that applies interior point methods, and the optimization algorithm is run in both sequential and parallel computing environments.

12.
System reliability, especially for series-parallel systems, has attracted much attention in recent years. Redundancy allocation is a technique to increase the reliability of series-parallel systems; supplying redundant components is subject to restrictions such as the available budget, weight, and space. This paper proposes a new model for redundancy allocation problems (RAPs) that considers discount policies. The proposed model attempts to maximize the reliability of a system by combining various components under budget limitations. We present two models with different assumptions, covering all-unit discount and incremental discount strategies. The resulting formulations are nonlinear integer models and are NP-hard; therefore, heuristics and meta-heuristics are designed to solve them efficiently.
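The underlying objective is the classical series-parallel reliability function: a subsystem with n_i parallel copies of a component with reliability r_i survives with probability 1 − (1 − r_i)^{n_i}, and the system reliability is the product over subsystems. A sketch of this objective together with a budget check under an all-unit discount rule (the data and the discount breakpoint are hypothetical, not the paper's):

```python
import numpy as np

r = np.array([0.80, 0.90, 0.70])     # component reliabilities per subsystem
n = np.array([3, 2, 4])              # redundancy levels (decision variables)
unit_cost = np.array([5.0, 8.0, 4.0])

def system_reliability(n):
    # series arrangement of parallel subsystems
    return float(np.prod(1.0 - (1.0 - r) ** n))

def cost_all_unit_discount(n):
    # all-unit discount: 10% off *every* unit once 3 or more are bought
    qty = n.astype(float)
    rate = np.where(qty >= 3, 0.9, 1.0)
    return float(np.sum(rate * unit_cost * qty))

print(system_reliability(n))         # objective to maximize
print(cost_all_unit_discount(n))     # must stay within the budget
```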

13.
The channel-assignment problem is central to the integrated circuit fabrication process. Given a two-sided printed circuit board, we wish to make n pairs of components electrically equivalent. The connections are made using two vertical runs along with a horizontal one; each horizontal run lies in a channel. The channel-assignment problem involves minimizing the total number of channels used. Recent advances in VLSI have made it possible to build massively parallel machines. To overcome the inefficiency of long-distance communication among processors, some parallel architectures have been augmented by bus systems; if such a bus system can be dynamically changed to suit communication needs among processors, it is referred to as reconfigurable. The reconfigurable mesh is one of the practical models featuring a reconfigurable bus system. In this paper, we propose a constant-time algorithm to solve the channel-assignment problem of size n on a reconfigurable mesh of size n×n. This work was supported by NASA under grant NCC1-99. Additional support by the National Science Foundation under grant CCR-8909996 is gratefully acknowledged.
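In the interval view of this problem, each connection occupies a horizontal interval, the minimum number of channels equals the maximum number of intervals alive at any point, and a greedy left-edge scan attains it. A sequential reference sketch for contrast (the paper's contribution is the constant-time reconfigurable-mesh algorithm, not this loop):

```python
import heapq

def assign_channels(intervals):
    """Greedy left-edge algorithm: process nets by left end and reuse the
    channel that frees up earliest; optimal for interval conflicts."""
    chan = {}
    free_at = []                       # heap of (right end, channel id)
    next_chan = 0
    for left, right in sorted(intervals):
        if free_at and free_at[0][0] < left:
            _, c = heapq.heappop(free_at)      # reuse a freed channel
        else:
            c, next_chan = next_chan, next_chan + 1
        chan[(left, right)] = c
        heapq.heappush(free_at, (right, c))
    return chan

print(assign_channels([(0, 3), (1, 5), (4, 8), (6, 9)]))  # 2 channels suffice
```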

14.
In this study, a two-stage fuzzy robust integer programming (TFRIP) method is developed for planning environmental management systems under uncertainty. The approach integrates techniques of robust programming and two-stage stochastic programming within a mixed integer linear programming framework, and facilitates dynamic analysis of capacity-expansion planning for waste management facilities within a multi-stage context. In the modeling formulation, uncertainties can be expressed in terms of both possibilistic and probabilistic distributions, so that the robustness of the optimization process is enhanced. In the solution process, the fuzzy decision space is delimited into a more robust one by specifying the uncertainties through dimensional enlargement of the original fuzzy constraints. The TFRIP method is applied to a case study of long-term waste-management planning under uncertainty. The generated solutions for continuous and binary variables provide the desired waste-flow-allocation and capacity-expansion plans with a minimized system cost and a maximized system feasibility.

15.
A new decomposition method for multistage stochastic linear programming problems is proposed. A multistage stochastic problem is represented in a tree-like form, and with each node of the decision tree a certain linear or quadratic subproblem is associated. The subproblems generate proposals for their successors and backward information for their predecessors; they can be solved in parallel and exchange information asynchronously through special buffers. After a finite time the method either finds an optimal solution to the problem or discovers its inconsistency. An analytical illustrative example shows that parallelization can speed up computation over any sequential method. Computational experiments indicate that for large problems substantial gains in efficiency can be obtained with moderate numbers of processors. This work was partly supported by the International Institute for Applied Systems Analysis, Laxenburg, Austria.

16.
The convergence of the simulated annealing algorithm is accelerated by a probabilistic feedback control scheme. The scheme uses two or more parallel processors that solve the same or related combinatorial optimization problems and are coupled by a probabilistic measure of quality (PMQ). The PMQ is used to generate an error signal for use in feedback control, and control over the search process is achieved by using the error signal to modulate the temperature parameter. Other aspects of control theory, such as the system gain and its effects on system performance, are also described. Theoretical and experimental results show that such a scheme increases the steady-state probability of the globally optimal solutions.
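One minimal way to see the control loop: run two annealing chains in parallel on the same problem, use the gap between their current costs as the error signal, and let that signal modulate the temperature. The sketch below is my own much-simplified illustration of the idea (the toy objective, gain, and cooling schedule are all assumptions, not the paper's PMQ formulation):

```python
import math, random

random.seed(1)

def cost(x):                         # toy 1-D multimodal objective
    return x * x + 10.0 * math.sin(3.0 * x)

def sa_step(x, T):
    y = x + random.gauss(0.0, 0.5)
    d = cost(y) - cost(x)
    if d <= 0 or random.random() < math.exp(-d / T):
        return y                     # Metropolis acceptance rule
    return x

x1, x2, T = 4.0, -4.0, 5.0
for _ in range(500):
    x1, x2 = sa_step(x1, T), sa_step(x2, T)
    err = abs(cost(x1) - cost(x2))   # quality gap between the two chains
    # feedback: cool slowly while the chains disagree, faster once they agree
    T = max(0.01, T * (0.99 + 0.009 * math.tanh(err)))

print(min((cost(x1), x1), (cost(x2), x2)))
```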

17.
Effective methods are required to solve three-dimensional solidification problems with fine meshes. This paper considers some of the problems associated with mapping control-volume-based enthalpy algorithms onto vector processors and transputer-based parallel networks. Although vector processors may bring speedups of a factor of 5 or so, transputer-based networks appear to have the facility to yield speedups that increase linearly with the number of transputer chips used.
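The control-volume enthalpy method updates a single conserved field H per cell and recovers temperature from it through the phase relation, so the sweep is a uniform, branch-free array operation and a natural fit for vector processors. A 1-D explicit sketch, my own illustration with made-up material constants (the paper treats the 3-D problem):

```python
import numpy as np

rho, c, k, L, Tm = 1000.0, 4200.0, 0.6, 3.34e5, 0.0  # water/ice-like values
dx, dt, N = 1e-3, 0.5, 100

# enthalpy per unit volume relative to solid at Tm:
# H < 0: solid,  0 <= H <= rho*L: mushy (T = Tm),  H > rho*L: liquid
H = np.full(N, rho * L + rho * c * 5.0)   # liquid at Tm + 5 everywhere
H[0] = -rho * c * 10.0                    # chilled boundary cell, Tm - 10

for _ in range(2000):
    # temperature recovered from enthalpy without branching
    T = Tm + np.where(H < 0, H, np.where(H > rho * L, H - rho * L, 0.0)) / (rho * c)
    Hn = H.copy()
    # explicit control-volume energy balance, fully vectorized
    Hn[1:-1] += dt * k * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    H = Hn                                # boundary cells held fixed

print(np.sum(H < 0))                      # cells fully solidified so far
```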

18.
The problem of optimally scheduling n tasks in a parallel processor system is studied. The tasks are malleable, i.e., a task may be executed by several processors simultaneously, and the processing speed of a task is a nonlinear function of the number of processors allocated to it. The total number of processors, m, is an upper bound on the number of processors that can be used by all the tasks simultaneously. It is assumed that the number of processors is sufficient to process all the tasks simultaneously, i.e. n ≤ m. The objective is to find a task schedule and a processor allocation such that the overall task completion time, i.e. the makespan, is minimized. The problem is motivated by real-life applications of parallel computer systems in scientific computing of highly parallelizable tasks. An O(n) algorithm is presented to solve this problem when all the processing speed functions are convex. If these functions are all concave and the number of tasks is constant, the problem can be solved in polynomial time. A relaxed problem, in which the number of processors allocated to each task is not required to be integer, can be solved in O(n·max{m, n log₂ m}) time. It is proved that the minimum makespan values for the original and relaxed problems coincide. For n = 2 or n = 3, an optimal solution for the relaxed problem can be converted into an optimal solution for the original problem in constant time.
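If, as I understand the result this abstract refers to, convexity of the speed functions makes it optimal to run the tasks one after another with each task using all m processors, then the O(n) algorithm amounts to a single pass summing work divided by full-machine speed. A sketch under that assumption (the speed functions and work amounts are hypothetical):

```python
# work[j]: processing work of task j; speed[j](k): speed on k processors,
# assumed convex in k. Under the convexity result -- as I read it -- running
# tasks sequentially, each on all m processors, is optimal, so the makespan
# is a single O(n) sum.
m = 8
work = [10.0, 4.0, 7.0]
speed = [lambda k: k * k / 8.0,            # convex speed functions (assumed)
         lambda k: k * (k + 1) / 10.0,
         lambda k: 2.0 ** k / 32.0]

makespan = sum(w / s(m) for w, s in zip(work, speed))
print(makespan)
```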

19.
Probability constraints play a key role in optimization problems involving uncertainty. These constraints require that an inequality system depending on a random vector be satisfied with a sufficiently high probability. In specific settings, copulæ can be used to model probabilistic constraints with uncertainty on the left-hand side. In this paper, we provide eventual convexity results for the feasible set of decisions under local generalized concavity properties of the constraint mappings and the copulæ involved; the results cover all Archimedean copulæ. We consider probabilistic constraints wherein the decision and the random vector are separated, i.e. left/right-hand side uncertainty. In order to solve the underlying optimization problem, we propose and analyse the convergence of a regularized supporting hyperplane method: a stabilized variant of generalized Benders decomposition. The algorithm is tested on a large set of instances involving several copulæ, among them the Gaussian copula. A numerical comparison with a (pure) supporting hyperplane algorithm and a general-purpose solver for non-linear optimization is also presented.
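An Archimedean copula is generated by a function ψ via C(u₁,…,u_d) = ψ⁻¹(ψ(u₁) + ⋯ + ψ(u_d)); for the Clayton family this gives C(u) = (Σ u_i^{−θ} − d + 1)^{−1/θ}. A separated probabilistic constraint P[ξ_i ≤ g_i(x), i = 1,…,d] ≥ p then becomes C(F₁(g₁(x)),…,F_d(g_d(x))) ≥ p. A small feasibility check (the margin values, θ, and the probability level are hypothetical):

```python
def clayton(u, theta=2.0):
    """Clayton copula C(u1,...,ud) = (sum u_i^-theta - d + 1)^(-1/theta)."""
    s = sum(ui ** -theta for ui in u) - len(u) + 1
    return s ** (-1.0 / theta)

# margins F_i(g_i(x)) evaluated at some candidate decision x (hypothetical)
margins = (0.97, 0.95)
p_level = 0.90
joint = clayton(margins)
print(joint, joint >= p_level)   # x is feasible iff the copula value >= p
```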

20.
Standard LPs, widely used for planning in the processing industry, are powerful tools for making economic decisions, but they do not cover time relationships inside the planning timeframe. To include sequence-dependent issues within the optimization process, an algorithmic extension to the LP, DPX (Dynamic Programming Extension), was developed; it is used to solve a whole class of problems that are dynamic in time.
