Similar Literature
20 similar documents found.
1.
2.
An important aspect of wind energy integration into the electrical power system is the fluctuation of the generated power due to the stochastic variations of the wind speed across the area where the wind turbines are installed. Simulation models are useful tools to evaluate the impact of wind power on power system stability and on power quality. Aggregate models reduce the simulation time required by detailed dynamic models of multiturbine systems. In this paper, a new behavioral model representing the aggregate contribution of several variable-speed pitch-controlled wind turbines is introduced. It is particularly suitable for simulating short-term power fluctuations due to wind turbulence, where steady-state models are not applicable. The model relies on rescaling the output of a single-turbine dynamic model. The single-turbine output is divided into its steady-state and dynamic components, which are then multiplied by different scaling factors. The smoothing effect due to wind incoherence at different locations inside a wind farm is taken into account by filtering the steady-state power curve with a Gaussian filter and by applying a proper damping to the dynamic part. The model has been developed to be one of the building blocks of a model of a large electrical system, so a significant reduction of simulation time has been pursued. Comparison against a full model, obtained by repeating a detailed single-turbine model, shows that a proper trade-off between accuracy and computational speed has been achieved.
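As a rough illustration of the rescaling idea (not the authors' code), the sketch below smooths a single-turbine steady-state power curve with a Gaussian filter and damps the dynamic component before scaling up to the farm size; the turbine curve, farm size, filter width and damping factor are all assumptions chosen for the demo, not values from the paper.

```python
# Minimal sketch of the output-rescaling idea: the aggregate farm output is built
# from a single-turbine model by scaling a smoothed steady-state power curve and
# damping the turbulent (dynamic) part. All numerical values are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter1d

N_TURBINES = 20                       # assumed farm size
SIGMA_WS = 1.0                        # assumed wind-speed spread across the farm (m/s)
DAMPING = 1.0 / np.sqrt(N_TURBINES)   # assumed damping of incoherent fluctuations

def single_turbine_curve(v):
    """Very rough placeholder steady-state power curve (MW) of one turbine."""
    return np.clip(0.5 * 1.225 * np.pi * 45**2 * 0.45 * v**3 * 1e-6, 0.0, 2.0)

# Smooth the steady-state curve with a Gaussian filter to mimic the spatial
# incoherence of the wind speed over the farm area.
v_grid = np.linspace(0.0, 30.0, 601)
dv = v_grid[1] - v_grid[0]
smoothed_curve = gaussian_filter1d(single_turbine_curve(v_grid), SIGMA_WS / dv)

def aggregate_power(v_mean, p_single_dynamic):
    """Combine the scaled steady-state part and the damped dynamic part."""
    p_ss = np.interp(v_mean, v_grid, smoothed_curve)          # steady-state component
    return N_TURBINES * (p_ss + DAMPING * p_single_dynamic)   # rescaled farm output

print(aggregate_power(9.0, 0.05))   # farm output for a 9 m/s mean wind speed (MW)
```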

3.
This paper presents a case study of railway timetable optimization for the very dense Simplon corridor, a major railway connection in the Alps between Switzerland and Italy. The key to dealing with the complexity of this scenario is the use of a novel aggregation-disaggregation method. Starting from a detailed microscopic representation, as used in railway simulation, the data is transformed by an automatic procedure into a less detailed macroscopic representation that is sufficient for the purpose of capacity planning and amenable to state-of-the-art integer programming optimization methods. This macroscopic railway network is saturated with trains. Finally, the optimized timetable is re-transformed to the microscopic level in such a way that it can be operated without any conflicts among the train paths. Using this micro-macro aggregation-disaggregation approach in combination with integer programming methods, it becomes possible for the first time to generate a profit-maximal and conflict-free timetable for the complete Simplon corridor over an entire day by simultaneously optimizing all train requests. In addition, this also allows us to undertake a sensitivity analysis of various problem parameters.

4.
Deep neural networks (DNNs) have emerged as a state-of-the-art tool in very different research fields due to their ability to adapt to the decision space, since they do not presuppose any linear relationship between data. Some of the main disadvantages of these trending models are that the choice of the network's underlying architecture profoundly influences the performance of the model, and that the architecture design requires prior knowledge of the field of study. The use of questionnaires is widespread in the social and behavioral sciences. The main contribution of this work is to automate the process of DNN architecture design by using an agglomerative hierarchical algorithm that mimics the conceptual structure of such surveys. Although the training was carried out for regression purposes, the method is easily convertible to classification tasks. Our proposed methodology is tested on a database containing socio-demographic data and the responses to five psychometric Likert scales related to the prediction of happiness. These scales have already been used to design a DNN architecture based on the subdimensions of the scales. We show that our new network configurations outperform the previously existing DNN architectures.
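A hedged sketch of the general idea follows: questionnaire items are grouped by agglomerative hierarchical clustering and each group is mapped to a hidden block of a network blueprint. The random responses, the correlation-based distance, the cluster count and the block sizes are illustrative assumptions, not the paper's procedure.

```python
# Sketch: derive a grouping/architecture blueprint from an agglomerative
# hierarchical clustering of questionnaire items. All data and thresholds
# below are illustrative assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 25)).astype(float)  # 200 respondents, 25 Likert items

# Cluster items by (1 - correlation) distance as a stand-in for the
# conceptual sub-dimensions of the questionnaire.
corr = np.corrcoef(responses.T)
dist = 1.0 - corr[np.triu_indices_from(corr, k=1)]   # condensed distance matrix
tree = linkage(dist, method="average")
groups = fcluster(tree, t=5, criterion="maxclust")   # assumed 5 sub-dimensions

# Translate each item group into one hidden block of the network: the inputs
# of a group feed a small dense block, and the blocks are concatenated later.
architecture = [
    {"items": np.where(groups == g)[0].tolist(), "hidden_units": 8}
    for g in sorted(set(groups))
]
print(architecture)   # blueprint that a Keras/PyTorch model builder could consume
```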

5.
Peter Eberhard, Pascal Ziegler. PAMM, 2007, 7(1): 4010017-4010018.

6.
Optimal power flow problems arise in the context of the optimization and secure exploitation of electrical power in alternating current (AC) networks. This optimization problem evaluates the flow on each line and ensures that it stays within the line's thermal limits. To improve the reliability of the power supply, a secure network is necessary, i.e., one able to cope with some contingencies. Nowadays, high-performance solution methods based on nonlinear programming algorithms search for an optimal state while considering certain contingencies. The problem size increases linearly with the number of contingencies. As the base case can already be large-scale, the computation time increases quickly. Parallelization seems to be a good way to solve this kind of problem quickly. This paper considers the minimization of an objective function subject to at least two constraints at each node. This optimization problem is solved by IPOPT, an interior point method, coupled with ADOL-C, an algorithmic differentiation tool, and MA27, a linear solver. Several results on the employed parallelization strategies are discussed. (© 2011 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
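The toy analogue below is not the IPOPT/ADOL-C setup of the paper; it only illustrates the structure of a security-constrained dispatch, where the same constrained minimization is solved once for the base case and once per contingency, so the problem grows with the number of contingencies. Costs, demand and the line limit are assumed values.

```python
# Toy security-constrained dispatch: minimise generation cost subject to a
# power balance and a line thermal limit, solved for the base case and for a
# contingency with a derated line. All numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

DEMAND = 1.5                  # p.u. load (assumed)
LINE_LIMIT = 1.0              # p.u. thermal limit of the tie line (assumed)
COST = np.array([1.0, 2.0])   # linear costs of the two generators (assumed)

def solve_case(line_limit):
    cons = [
        {"type": "eq",   "fun": lambda p: p.sum() - DEMAND},     # power balance
        {"type": "ineq", "fun": lambda p: line_limit - p[0]},    # line flow <= limit
    ]
    res = minimize(lambda p: COST @ p, x0=[0.5, 0.5],
                   bounds=[(0.0, 1.2), (0.0, 1.2)], constraints=cons, method="SLSQP")
    return res.x, res.fun

base = solve_case(LINE_LIMIT)
contingency = solve_case(0.6 * LINE_LIMIT)   # e.g. a derated line after an outage
print("base dispatch:", base, "contingency dispatch:", contingency)
```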

7.
The power system is a complex interconnected network which can be subdivided into three components: generation, distribution, and transmission. Capacitors of specific sizes are placed in the distribution network so that losses in transmission and distribution are minimized. Deciding the size and position of capacitors in this network, however, is a complex optimization problem. In this paper, a Limaçon-curve-inspired local search strategy (LLS) is proposed and incorporated into the spider monkey optimization (SMO) algorithm to deal with the optimal placement and sizing problem of capacitors. The proposed strategy is named Limaçon-inspired SMO (LSMO). In the proposed local search strategy, the Limaçon curve equation is modified by incorporating the persistence and social-learning components of the SMO algorithm. The performance of LSMO is tested over 25 benchmark functions. Further, it is applied to solve the optimal capacitor placement and sizing problem in IEEE 14-, 30-, and 33-bus test systems with the proper allocation of 3 and 5 capacitors. The reported results are compared with the uncompensated network (no capacitors) and with other existing methods.
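For flavour only, the sketch below runs a generic local search that samples candidate solutions on Limaçon-shaped rings (r = a + b·cos θ) around the current best point; the exact modification with SMO's persistence and social-learning terms is defined in the paper, so the weights, objective and parameters here are placeholders.

```python
# Illustrative Limaçon-style local search step (not the paper's exact LLS):
# candidates are drawn on rings of radius r = a + b*cos(theta) around the
# current best solution. Objective and weights are assumptions for the demo.
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                        # benchmark objective (assumed for the demo)
    return float(np.sum(x**2))

def limacon_local_search(best, a=0.5, b=0.3, w_p=0.8, w_s=0.2, n_samples=30):
    """Return an improved point found on Limaçon-shaped rings around `best`."""
    dim = best.size
    candidate_best, f_best = best.copy(), sphere(best)
    for _ in range(n_samples):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        r = a + b * np.cos(theta)                      # Limaçon radius
        direction = rng.normal(size=dim)
        direction /= np.linalg.norm(direction)
        step = w_p * r * direction + w_s * r * rng.uniform(-1, 1, dim)
        cand = best + step
        if sphere(cand) < f_best:
            candidate_best, f_best = cand, sphere(cand)
    return candidate_best, f_best

print(limacon_local_search(np.array([1.0, -1.5, 0.5])))
```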

8.
An electrical power system is a large-scale system composed of a complicated and sophisticated combination of multiple electronic and electromechanical components. In general, these components are nonlinear. The power system is also characterized by a wide range of continuously varying normal operating conditions. To help the designer study voltage control problems in power systems, a simulation tool is presented in this paper. It is based on decomposing the overall system simulation task into three subtasks so as to attain both computational efficiency and flexibility. The use of the proposed simulation tool in a voltage control problem is also presented.

9.
A generalised equilibrium solution to the stochastic two-echelon newsvendor problem is achievable when formulated in the context of some cooperation and coordination between the primal (retailer) and dual (manufacturer) operators. We build on previous work detailing this equilibrium solution and apply it to the newspaper business. The solution incorporates changes in variability encountered due to promotional activity, which extends the efficient frontier. We also consider the consequences for profit and goodwill costs of identifying an equilibrium solution when additional income is generated from a source outside of the supply chain, such as advertising. We generalise to the supply chain network where there is some knowledge of demand or supply distributions further up or down the supply chain. We find that the primal-dual formulation and equilibrium solution apply to interactions between components of supply chain networks and illustrate this with the transition to the direct distribution of newspapers.
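As a much-simplified point of reference, the sketch below computes a single-echelon newsvendor critical-fractile order quantity, with an assumed per-copy advertising income added to the margin of a sold newspaper; it is not the authors' two-echelon equilibrium model, and every number is an assumption.

```python
# Single-echelon newsvendor sketch: critical-fractile order quantity with an
# assumed advertising income from outside the supply chain added to the
# underage cost. All parameters are illustrative assumptions.
from scipy.stats import norm

price, cost, salvage = 1.50, 0.90, 0.10   # per copy (assumed)
ad_income_per_sale = 0.20                 # income from outside the chain (assumed)
mu, sigma = 1000.0, 200.0                 # normally distributed demand (assumed)

underage = price + ad_income_per_sale - cost   # margin lost per unit of unmet demand
overage = cost - salvage                       # loss per unsold copy
critical_fractile = underage / (underage + overage)

q_star = norm.ppf(critical_fractile, loc=mu, scale=sigma)
print(f"critical fractile = {critical_fractile:.3f}, order quantity ~ {q_star:.0f} copies")
```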

10.
We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R code and a dataset, are available online.
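To show only the minimum-distance principle, the toy below fits a parametric (normal) family by minimising a Cramér-von Mises-type distance between the empirical CDF and the model CDF; the article's estimator is nonparametric, so this is a deliberately simplified, assumption-laden stand-in.

```python
# Toy minimum-distance estimation: choose parameters minimising a
# Cramér-von Mises-type distance between the empirical CDF and a normal CDF.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(2)
data = rng.normal(loc=3.0, scale=1.5, size=300)   # simulated data (assumed)
x_sorted = np.sort(data)
ecdf = (np.arange(1, data.size + 1) - 0.5) / data.size

def cvm_distance(params):
    mu, log_sigma = params
    model_cdf = norm.cdf(x_sorted, loc=mu, scale=np.exp(log_sigma))
    return np.sum((model_cdf - ecdf) ** 2)

fit = minimize(cvm_distance, x0=[0.0, 0.0], method="Nelder-Mead")
print("minimum-distance estimates:", fit.x[0], np.exp(fit.x[1]))
```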

11.
The problem of determining the number of protection devices and their locations on an electrical tree network with subtree dependency is investigated. The aim is to reduce the amount of inconvenience caused to customers that are affected by any given fault on the network. A constructive heuristic and an appropriate implementation of tabu search are proposed and compared against a method currently used by the electrical supply companies. Computational tests are performed on randomly generated electrical tree networks varying in size and branch complexity. Both proposed methods outperformed the one used in practice. In particular, our tabu search implementation was found to produce the best results without taking an excessive amount of computational time.
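A generic tabu-search skeleton for a fixed number of devices on a set of candidate edges is sketched below; the paper's actual objective (customer inconvenience under subtree dependency) is replaced by a random-weight surrogate, and the move structure, tenure and iteration count are assumptions.

```python
# Generic tabu search over binary placement vectors: swap one protected edge
# with one unprotected edge, keep recent moves tabu, and track the best found.
# The cost function is a toy surrogate, not the paper's inconvenience model.
import numpy as np

rng = np.random.default_rng(3)
N_EDGES, N_DEVICES, TABU_TENURE, ITERS = 15, 4, 5, 200
weights = rng.uniform(1.0, 10.0, N_EDGES)        # surrogate "inconvenience" per edge

def cost(placement):
    return float(weights[placement == 0].sum())  # inconvenience of unprotected edges

def neighbours(placement):
    ones, zeros = np.where(placement == 1)[0], np.where(placement == 0)[0]
    for i in ones:
        for j in zeros:
            cand = placement.copy()
            cand[i], cand[j] = 0, 1
            yield (i, j), cand

current = np.zeros(N_EDGES, dtype=int)
current[rng.choice(N_EDGES, N_DEVICES, replace=False)] = 1
best, best_cost, tabu = current.copy(), cost(current), {}

for it in range(ITERS):
    # Best admissible neighbour: not tabu, or tabu but better than the incumbent.
    move, nxt = min(
        ((m, c) for m, c in neighbours(current) if tabu.get(m, -1) < it or cost(c) < best_cost),
        key=lambda mc: cost(mc[1]),
    )
    current, tabu[move] = nxt, it + TABU_TENURE
    if cost(current) < best_cost:
        best, best_cost = current.copy(), cost(current)

print("best placement:", best, "cost:", round(best_cost, 2))
```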

12.
In studying the supply pattern of goods delivered to a depot by a fleet of vehicles all operating from a common source of supply on an identical route, it is necessary to assess the statistical properties of the times between the arrivals of the vehicles at the depot. This would seem to depend critically on the journey-time distribution, i.e. the distribution of times taken from the depot to collect the goods and return to the depot. This paper demonstrates, however, that this is not necessarily true, and that very often the interarrival-time distribution is essentially independent of the detailed form of the journey-time distribution. The only knowledge required in such situations is the mean and the minimum possible journey time, two quantities which are usually quite well known.
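A quick simulation along these lines can be used to compare interarrival statistics at the depot under two very different journey-time shapes that share the same minimum and mean; the fleet size and journey-time parameters below are assumptions for the demo, not figures from the paper.

```python
# Compare depot interarrival times when the journey-time distribution changes
# shape but keeps the same minimum and mean. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(4)
N_VEHICLES, N_TRIPS, T_MIN, T_MEAN = 6, 20000, 2.0, 3.0

def interarrivals(journey_sampler):
    # Each vehicle's arrival times are cumulative sums of its journey times;
    # the depot sees the merged, sorted stream of all vehicles' arrivals.
    arrivals = np.sort(np.concatenate([
        np.cumsum(journey_sampler(N_TRIPS)) for _ in range(N_VEHICLES)
    ]))
    return np.diff(arrivals)

# Two very different journey-time shapes with the same minimum and mean.
uniform_like = lambda n: rng.uniform(T_MIN, 2 * T_MEAN - T_MIN, n)
exponential_like = lambda n: T_MIN + rng.exponential(T_MEAN - T_MIN, n)

for name, sampler in [("uniform", uniform_like), ("shifted exponential", exponential_like)]:
    gaps = interarrivals(sampler)
    print(f"{name:>20}: mean gap {gaps.mean():.3f}, std {gaps.std():.3f}")
```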

13.
This article presents a method for generating samples from an unnormalized posterior distribution f(·) using Markov chain Monte Carlo (MCMC) in which the evaluation of f(·) is very difficult or computationally demanding. Commonly, a less computationally demanding, perhaps local, approximation to f(·) is available, say f*x(·). An algorithm is proposed to generate an MCMC that uses such an approximation to calculate acceptance probabilities at each step of a modified Metropolis–Hastings algorithm. Once a proposal is accepted using the approximation, f(·) is calculated with full precision ensuring convergence to the desired distribution. We give sufficient conditions for the algorithm to converge to f(·) and give both theoretical and practical justifications for its usage. Typical applications are in inverse problems using physical data models where computing time is dominated by complex model simulation. We outline Bayesian inference and computing for inverse problems. A stylized example is given of recovering resistor values in a network from electrical measurements made at the boundary. Although this inverse problem has appeared in studies of underground reservoirs, it has primarily been chosen for pedagogical value because model simulation has precisely the same computational structure as a finite element method solution of the complete electrode model used in conductivity imaging, or “electrical impedance tomography.” This example shows a dramatic decrease in CPU time, compared to a standard Metropolis–Hastings algorithm.
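The sketch below shows a two-stage ("delayed acceptance") Metropolis-Hastings step in this spirit: a cheap approximation screens proposals, and the expensive density is evaluated only for proposals that pass the first test, with a second acceptance ratio that corrects for the approximation. The target, the approximation and the random-walk proposal are illustrative assumptions, not the article's inverse-problem model.

```python
# Two-stage Metropolis-Hastings with a cheap screening approximation.
# Stage 1 accepts/rejects with the approximation; stage 2 corrects with the
# exact (expensive) density so the chain still targets f. Toy densities only.
import numpy as np

rng = np.random.default_rng(5)

def log_f(x):          # "expensive" exact unnormalised log-posterior (assumed)
    return -0.5 * x**2 - 0.1 * x**4

def log_f_approx(x):   # cheap approximation, e.g. a surrogate model (assumed)
    return -0.5 * x**2

def two_stage_mh(n_steps=5000, step=1.0):
    x, lf_x, lfa_x = 0.0, log_f(0.0), log_f_approx(0.0)
    chain, n_exact = [], 0
    for _ in range(n_steps):
        y = x + step * rng.normal()
        lfa_y = log_f_approx(y)
        # Stage 1: screen the proposal with the approximation only.
        if np.log(rng.uniform()) < lfa_y - lfa_x:
            # Stage 2: evaluate the exact density and correct the acceptance.
            lf_y = log_f(y)
            n_exact += 1
            if np.log(rng.uniform()) < (lf_y - lf_x) - (lfa_y - lfa_x):
                x, lf_x, lfa_x = y, lf_y, lfa_y
        chain.append(x)
    return np.array(chain), n_exact

samples, exact_evals = two_stage_mh()
print("exact evaluations used:", exact_evals, "of", samples.size, "steps")
```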

14.
We consider the problem of finding an optimal replacement policy for a system which has many components. The main difficulty in this problem is that there is an interaction among the items in the system. Thus, the optimal replacement decision for each item depends not only on its state, but also on those of the other items in the system. This interaction is due to the fact that both the stock size and the supply of replacement items are limited (instead of the unlimited supply of standard replacement items implicitly assumed by most replacement models). In our application to dairy herd management, the problem is further complicated by the fact that this limited supply is not exogenous to the process but is actually generated by it, because almost all the replacement young cows are home grown. The traits of these young cows are genetically dependent on those of their parents. The dairy herd management problem is actually a special case of the joint replacement and inventory problem, where the groups of cows are the stock of replacement items. At each point in time, the decision problem is to find the optimal composition of items from the available population of items. An exact derivation of the optimal replacement policy for such problems is very complicated because the optimal decisions for each period depend on the state of the whole stock and of all the available replacement components. This leads to a dynamic programming problem with a very large number of state variables, which is not feasible to solve numerically due to the great amount of computer time involved. This paper presents a practical method for obtaining an approximate solution to the above problem. The computational difficulty caused by the tremendously large dimensionality of the state variable is overcome by means of an iterative method which combines simulation and dynamic programming to compute successive linear approximations of the value function.

15.
This paper examines structural change in the power and heat producing sector (energy supply) and its implications for the economy. An integrated approach is used to describe the interactions between this sector and the rest of the economy: a very detailed model of the sector for Denmark has been linked to a macroeconometric model of the Danish economy. It is argued that analysing sectors that undergo radical changes, for example the energy supply sector, should be undertaken using a model that describes the technological and organisational changes in production along with the implications for the demand for the produced goods. Environmental priorities and targets for emission reductions are important for defining energy policy in Denmark. As the energy supply sector is at present a major contributor to emissions of CO2 and SO2, knowledge of this sector is vital for reducing these emissions. It is shown that quite substantial emission reductions are possible without encountering a substantial negative impact on the economy. The reduction potential through such economic incentives as fuel taxes is shown to be very sensitive to the technology used at present and in the future. This study also emphasises that the large reduction potential of emissions from the energy supply sector is a one-time gain: fuel switching and the increasing use of wind power cannot be repeated. Scenarios carried out with the combined model show that emission reduction in the energy supply sector will decrease the share of this sector in total emissions remarkably, and that the importance of the sector as a key element in any overall emission reduction strategy will decline. This revised version was published online in June 2006 with corrections to the cover date.

16.
A computer simulation of a rail segment is presented. The goal is to provide a capability for scheduling and routing with respect to predetermined objectives. The simulation is founded on a decomposition of the given line segment into fundamental units representing node-to-node subsegments, with each node being an interlocking of the real system. A decision subroutine is activated every time a train reaches a node; all feasible options are then examined with respect to the current configuration of the system. Ultimately, it is hoped the simulation will have on-line monitoring capabilities.

17.
In this paper, we demonstrate how a new network performance/efficiency measure, which captures demands, flows, costs, and behavior on networks, can be used to assess the importance of network components and their rankings. We provide new results regarding the measure, which we refer to as the Nagurney–Qiang measure, or, simply, the N–Q measure, and a previously proposed one, which did not explicitly consider demands and flows. We apply both measures to such critical infrastructure networks as transportation networks and the Internet, and further explore the new measure through an application to an electric power generation and distribution network in the form of a supply chain. The Nagurney–Qiang network performance/efficiency measure that captures flows and behavior can identify which network components, that is, nodes and links, have the greatest impact in terms of their removal and, hence, are important from both vulnerability and security standpoints.
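For orientation only: a commonly cited form of this measure is E = (1/|W|) Σ_w d_w/λ_w, with d_w the equilibrium demand and λ_w the equilibrium disutility of O/D pair w, and a component's importance is the relative drop in E after its removal. The sketch below computes it on made-up inputs; the exact definition and any re-equilibration should be taken from the paper, and all numbers here are assumptions.

```python
# Hedged sketch of an efficiency measure of the N-Q type and the derived
# component importance. Inputs are made-up equilibrium demands and costs,
# not a solved network equilibrium.

def efficiency(od_pairs):
    """od_pairs: dict mapping O/D pair -> (equilibrium demand, equilibrium cost)."""
    return sum(d / lam for d, lam in od_pairs.values()) / len(od_pairs)

base = {"w1": (100.0, 20.0), "w2": (80.0, 25.0)}
without_link_a = {"w1": (100.0, 32.0), "w2": (80.0, 25.0)}   # assumed re-equilibrated values

E_base = efficiency(base)
E_removed = efficiency(without_link_a)
importance = (E_base - E_removed) / E_base
print(f"E = {E_base:.2f}, E without link a = {E_removed:.2f}, importance = {importance:.3f}")
```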

18.
To study the impact of carbon emission reduction policies on the equilibrium decisions of a multi-period supply chain network, the optimality conditions of each tier of the supply chain network structure are analysed, and a multi-period supply chain network equilibrium model with carbon emission reduction is established. The model is first transformed into an equivalent variational inequality problem, which is then solved with a projection-contraction algorithm for variational inequalities. Simulations of the model analyse the effects of different carbon caps and unit carbon emissions on the supply chain network equilibrium over different periods. The results show that firms face a conflict between environmental and economic performance, and that appropriately controlling the carbon tax and adjusting the unit carbon emissions of products can alleviate this conflict. Moreover, if the government sets the carbon cap too loosely, it has no noticeable effect on the implementation of carbon emission reduction.
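As a minimal illustration of solving a variational inequality by projection, the sketch below uses a fixed-step projection iteration on the nonnegative orthant with a small monotone affine operator; the paper uses a projection-contraction variant and a much richer supply-chain mapping, so the operator, step size and dimensions here are assumptions.

```python
# Fixed-step projection method for a variational inequality VI(F, K) with
# K = R^n_+ and an assumed monotone affine operator F(x) = A x + b.
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # assumed positive-definite operator
b = np.array([-2.0, -1.0])

def F(x):
    return A @ x + b

def project(x):                           # projection onto the nonnegative orthant
    return np.maximum(x, 0.0)

x = np.zeros(2)
tau = 0.1                                 # assumed step size
for _ in range(500):
    x_new = project(x - tau * F(x))       # basic projection step
    if np.linalg.norm(x_new - x) < 1e-10:
        x = x_new
        break
    x = x_new

print("approximate VI solution:", x, "F(x):", F(x))
```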

19.
Decision making in modern supply chains can be extremely daunting due to their complex nature. Discrete-event simulation is a technique that can support decision making by providing what-if analysis and evaluation of quantitative data. However, modelling supply chain systems can result in massively large and complicated models that can take a very long time to run even with today's powerful desktop computers. Distributed simulation has been suggested as a possible solution to this problem, by enabling the use of multiple computers to run models. To investigate this claim, this paper presents experiences in implementing a simulation model with a ‘conventional’ approach and with a distributed approach. This study takes place in a healthcare setting, the supply chain of blood from donor to recipient. The study compares conventional and distributed model execution times of a supply chain model simulated in the simulation package Simul8. The results show that the execution time of the conventional approach increases almost linearly with the size of the system and also with the simulation run period. The distributed approach, however, scales more favourably with system size and run period and appears to offer a practical alternative. On this basis, the paper concludes that distributed simulation can be successfully applied in certain situations.

20.
In hybrid electric vehicles, the electrical powertrain system has multiple energy sources from which it can gather power to satisfy the propulsion power requested by the vehicle at each instant. This paper focuses on the minimization of the fuel consumption of such a vehicle, taking advantage of the different energy sources. Based on global optimization approaches, the proposed heuristics find solutions that best split the requested power among the multiple electrical sources available. A lower bounding procedure is introduced to validate the quality of the solutions. Computational results show a significant improvement over previous results from the literature in both the computing time and the quality of the solutions.
