Related Articles
A total of 20 related articles were retrieved.
1.
Car pooling is one method that can be easily instituted and can help to resolve a variety of problems that continue to plague urban areas, ranging from energy demands and traffic congestion to environmental pollution. Although car pooling is becoming more common, in practice participant matching is still carried out by an inefficient manual approach, which may yield inferior solutions. Past car pooling studies have treated the problem either as a to-work problem (from different origins to the same destination) or as a return-from-work problem (from the same origin to different destinations). In this study, however, we employ a time-space network flow technique to develop a model for the many-to-many car pooling problem with multiple vehicle types and person types. The model is formulated as an integer multiple-commodity network flow problem. Since real problem sizes can be huge, it can be difficult to find optimal solutions within a reasonable period of time. We therefore develop a solution algorithm based on Lagrangian relaxation, a subgradient method, and a heuristic for the upper bound solution. To test how well the model and the solution algorithm apply to the real world, we randomly generated several examples based on data reported in a past study carried out in northern Taiwan and performed numerical tests on them. The test results show the effectiveness of the proposed model and solution algorithm.
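The abstract does not give the algorithmic details, but a generic Lagrangian-relaxation/subgradient loop of the kind it mentions can be sketched as follows. The relaxed-problem solver, step-size rule, and feasibility heuristic (`solve_relaxed`, `repair_to_feasible`) are placeholders for illustration, not the authors' actual procedures.

```python
import numpy as np

def lagrangian_subgradient(solve_relaxed, repair_to_feasible, num_multipliers,
                           max_iters=200, step0=2.0):
    """Generic Lagrangian relaxation / subgradient skeleton.

    solve_relaxed(lmbda) -> (lower_bound, subgradient) of the relaxed problem.
    repair_to_feasible(lmbda) -> objective of a feasible (upper-bound) solution.
    """
    lmbda = np.zeros(num_multipliers)        # Lagrange multipliers
    best_lb, best_ub = -np.inf, np.inf
    step = step0
    for _ in range(max_iters):
        lb, g = solve_relaxed(lmbda)         # solve the relaxed subproblems
        best_lb = max(best_lb, lb)
        ub = repair_to_feasible(lmbda)       # heuristic upper bound
        best_ub = min(best_ub, ub)
        if best_ub - best_lb < 1e-6 or np.allclose(g, 0.0):
            break                            # gap closed or no constraint violated
        # standard subgradient step with a normalized step size
        t = step * (best_ub - lb) / (np.dot(g, g) + 1e-12)
        lmbda = np.maximum(0.0, lmbda + t * g)   # project onto multipliers >= 0
        step *= 0.98                         # slowly shrink the step parameter
    return best_lb, best_ub, lmbda
```

The gap between `best_ub` and `best_lb` gives a bound on how far the heuristic solution can be from the optimum, which is how such schemes are typically assessed on large instances.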

2.
Network flows over time form a fascinating area of research. They model the temporal dynamics of network flow problems occurring in a wide variety of applications. Research in this area has been pursued in two different and mainly independent directions with respect to time modeling: discrete and continuous time models. In this paper we deploy measure theory in order to introduce a general model of network flows over time combining both discrete and continuous aspects into a single model. Here, the flow on each arc is modeled as a Borel measure on the real line (time axis) which assigns to each suitable subset a real value, interpreted as the amount of flow entering the arc over that subset. We focus on the maximum flow problem formulated in a network where the capacities on arcs are also given as Borel measures and storage may be allowed at the nodes of the network. We generalize the concept of cuts to the case of these Borel flows and extend the famous MaxFlow-MinCut theorem.

3.
Many real-life systems typically exhibit sequence-dependent failure behaviors. Such systems can be modeled by dynamic fault trees (DFTs) with priority-AND gates, in which the occurrence of the top event depends not only on combinations of basic events but also on their failure sequences. To the authors' knowledge, the existing methods for reliability assessment of DFTs with priority-AND gates are mainly Markov-state-space-based, inclusion-exclusion-based, Monte Carlo simulation-based, or sequential binary decision diagram-based approaches. Unfortunately, all of these methods have shortcomings: they either suffer from state space explosion, are restricted to exponential component time-to-failure distributions, or need a long computation time to obtain a solution of high accuracy. In this article, a novel method based on the dynamic binary decision tree (DBDT) is first proposed. To build the DBDT model of a given DFT, we present an adapted form of the traditional Shannon decomposition theorem. Considering that the chosen variable index strongly affects the final number of disjoint calculable cut sequences generated from the built DBDT, which to some extent determines the computational efficiency of the proposed method, some heuristic branching rules are presented. To validate the proposed method, a case study is analyzed. The results indicate that the proposed method is reasonable and efficient.

4.
The key issue in decision making for stock trading is selecting the right stock at the right time. In order to select superior stocks (alternatives) for investment, a finite number of alternatives have to be ranked according to several, sometimes conflicting, criteria, so we face a special multicriteria decision-making problem. The purpose of this paper is to develop a decision-making model for selecting superior stocks on a stock exchange and to provide a model for structuring this problem. The proposed model is built around two pillars: industry evaluation and company evaluation. The preference ranking organization method for enrichment evaluation (PROMETHEE) is used to solve the problem. The model has been applied to the Tehran Stock Exchange (TSE) as a real case, and a survey of experts was conducted to determine the effective criteria for industry evaluation and company evaluation.
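PROMETHEE itself is a standard outranking procedure, so the ranking step can be illustrated independently of the paper's criteria. The sketch below is a minimal PROMETHEE II ranking with a linear preference function; the weights, preference thresholds, and example scores are illustrative assumptions, not values taken from the TSE case study.

```python
import numpy as np

def promethee_ii(scores, weights, p_thresholds):
    """Rank alternatives by net outranking flow (PROMETHEE II).

    scores: (n_alternatives, n_criteria) matrix, larger = better on every criterion.
    weights: criterion weights summing to 1.
    p_thresholds: strict-preference thresholds of the linear preference function.
    """
    n, _ = scores.shape
    pi = np.zeros((n, n))                      # aggregated preference of a over b
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = scores[a] - scores[b]
            # linear ("V-shape") preference function on each criterion
            pref = np.clip(d / p_thresholds, 0.0, 1.0)
            pi[a, b] = np.dot(weights, pref)
    phi_plus = pi.sum(axis=1) / (n - 1)        # positive (leaving) flow
    phi_minus = pi.sum(axis=0) / (n - 1)       # negative (entering) flow
    return phi_plus - phi_minus                # net flow: higher = better

# illustrative use: 4 stocks scored on 3 hypothetical criteria
scores = np.array([[0.7, 0.4, 0.9],
                   [0.5, 0.8, 0.6],
                   [0.9, 0.3, 0.4],
                   [0.2, 0.9, 0.8]])
weights = np.array([0.5, 0.3, 0.2])
print(promethee_ii(scores, weights, p_thresholds=np.array([0.3, 0.3, 0.3])))
```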

5.
Probabilistic decision graphs (PDGs) are a class of graphical models that can naturally encode some context-specific independencies that cannot always be efficiently captured by other popular models, such as Bayesian networks. Furthermore, inference can be carried out efficiently over a PDG, in time linear in the size of the model. The problem of learning PDGs from data has been studied in the literature, but only for the case of complete data. We propose an algorithm for learning PDGs in the presence of missing data. The proposed method is based on the Expectation-Maximisation principle for estimating both the structure and the parameters of the model. We test our proposal on artificially generated data with different rates of missing cells as well as on real incomplete data, and we compare the PDG models learnt by our approach to the commonly used Bayesian network (BN) model. The results indicate that the PDG model is less sensitive to the rate of missing data than the BN model. Moreover, although the BN models usually attain a higher likelihood, the PDGs are close to them and are also comparable in size, which makes the learnt PDGs preferable for probabilistic inference purposes.

6.
This study shows how a Kohonen map can be used to improve the temporal stability of the accuracy of a financial failure model. Most models lose a significant part of their ability to generalize when the data used for estimation and prediction are collected over different time periods. As their lifespan is fairly short, this becomes a real problem if a model is still in use when re-estimation appears necessary. To overcome this drawback, we introduce a new way of using a Kohonen map as a prediction model. The results of our experiments show that the generalization error achieved with a map remains more stable over time than that achieved with conventional methods used to design failure models (discriminant analysis, logistic regression, Cox's method, and neural networks). They also show that the type-I error, the economically costliest error, benefits most from this gain in stability.
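The abstract does not describe how the map is trained or turned into a predictor; the sketch below is only the standard Kohonen learning rule (best-matching unit plus neighbourhood update), with the grid size and the learning-rate and radius schedules chosen arbitrarily for illustration.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, radius0=3.0, seed=0):
    """Train a basic Kohonen self-organising map on row-vector samples."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.normal(size=(h, w, dim))            # codebook vectors
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)                    # decaying learning rate
            radius = radius0 * (1.0 - frac) + 1e-3     # decaying neighbourhood radius
            # best-matching unit: node whose codebook vector is closest to x
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood around the BMU on the grid
            grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            h_neigh = np.exp(-grid_dist2 / (2.0 * radius ** 2))
            weights += lr * h_neigh[..., None] * (x - weights)
            step += 1
    return weights
```

A failure model can then be obtained, for example, by labelling each trained node with the majority class of the firms mapped to it; that labelling step is one possible reading of the abstract, not the authors' stated procedure.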

7.
The haplotype assembly problem and its algorithms   (total citations: 1, self-citations: 0, other citations: 1)
The single nucleotide polymorphism (SNP) haplotype assembly problem is to remove errors from a given set of SNP fragments coming from an individual's chromosomes and to reconstruct haplotypes that are as consistent as possible with the original fragments. The problem has several different models: the minimum fragment removal (MFR) problem, the minimum SNP removal (MSR) problem, and the minimum error correction (MEC) problem. The complexity and algorithms of the first two have already been studied by several researchers. The third has been proven NP-complete, but no practical algorithm for it was previously available. This paper gives a branch-and-bound algorithm for the MEC problem that obtains the global optimum. Computations on real data with this algorithm demonstrate the soundness of the MEC model: under certain conditions, the haplotypes reconstructed by correcting the minimum number of errors are indeed the true haplotypes. Since a branch-and-bound algorithm cannot solve large instances of such an NP-complete problem within acceptable time, the paper also presents two algorithms for the MEC problem based on dynamic clustering, so that near-optimal solutions to larger instances can be obtained within acceptable time. Numerical experiments show that both algorithms are fast and effective, and that they consistently find near-optimal solutions very close to the global optima found by branch and bound. Given that the MEC problem is NP-complete, these two algorithms are efficient and practical.
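For concreteness, the MEC objective can be illustrated as follows: given a fragment matrix over SNP sites (entries 0, 1, or missing) and a partition of the fragments into two groups, the correction cost is the number of entries that disagree with the majority value of their group at each site. This small sketch only evaluates that objective for a given partition; it is not the branch-and-bound or dynamic clustering algorithm from the paper.

```python
def mec_score(fragments, partition):
    """Minimum-error-correction cost of a given 2-partition of SNP fragments.

    fragments: list of dicts {site_index: allele in {0, 1}} (missing sites omitted).
    partition: list of 0/1 labels assigning each fragment to one haplotype.
    Returns the number of entries that must be flipped so that, within each
    group, every covered site carries a single consistent allele.
    """
    cost = 0
    for group in (0, 1):
        counts = {}                              # site -> [count of 0s, count of 1s]
        for frag, label in zip(fragments, partition):
            if label != group:
                continue
            for site, allele in frag.items():
                counts.setdefault(site, [0, 0])[allele] += 1
        # at each site the consensus keeps the majority allele; the rest are errors
        cost += sum(min(c0, c1) for c0, c1 in counts.values())
    return cost

# toy example: three fragments over three SNP sites
frags = [{0: 0, 1: 0, 2: 1}, {1: 1, 2: 0}, {0: 0, 2: 1}]
print(mec_score(frags, [0, 1, 0]))   # -> 0: this partition explains all fragments
```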

8.
A scenario tree is an efficient way to represent a stochastic data process in decision problems under uncertainty. This paper addresses how to efficiently generate appropriate scenario trees. A knowledge-based scenario tree generation method is proposed; the new method is further improved by accounting for subjective judgements or expectations about the random future. Compared with existing approaches, complicated mathematical models and time-consuming estimation, simulation, and optimization are avoided in our knowledge-based algorithms, and large-scale scenario trees can be generated quickly. To show the advantages of the new algorithms, a multiperiod portfolio selection problem is considered, and a dynamic risk measure is adopted to control the intermediate risk, which is superior to the single-period risk measure used in the existing literature. A series of numerical experiments is carried out using real trading data from the Shanghai stock market. The results show that the scenarios generated by our algorithms properly represent the underlying distribution and that our algorithms have high performance: for example, a scenario tree with up to 10,000 scenarios can be generated in less than half a minute. The applications to the multiperiod portfolio management problem demonstrate that our scenario tree generation methods are stable, and that the optimal trading strategies obtained with the generated scenario trees are reasonable, efficient, and robust.

9.
In this study, a dual-interval vertex analysis (DIVA) method is developed by incorporating the vertex method within an interval-parameter programming framework. The developed DIVA method can tackle uncertainties presented as dual intervals that exist in the objective function and in the left- and right-hand sides of the modeling constraints. An interactive algorithm and a vertex analysis approach are proposed for solving the DIVA model. Solutions under an associated α-cut level can be generated by solving a series of deterministic submodels. They help quantify relationships between the objective function value and the membership grade, which is meaningful for supporting in-depth analyses of the tradeoffs between environmental and economic objectives as well as those between system optimality and reliability. A regional air pollution control management problem is studied to illustrate the applicability of the proposed approach. The results indicate that useful solutions for planning air quality management practices have been generated; they can help decision makers identify desired pollution-abatement strategies with minimized costs and maximized environmental efficiency.

10.
In some formulations of the vehicle replacement problem, in particular those leading to repair-limit models, the alternative policies are evaluated and compared over a fixed planning horizon. Although it is widely recognised that the optimal policies derived under these formulations depend critically on the length of the horizon, no method has so far been presented for setting this parameter appropriately. In this paper, the authors describe a method which overcomes this shortcoming. Once the best policy has been derived for a given finite horizon of length H, that policy is repeated indefinitely over time and an equivalent annual rent is computed. Parametrising H leads to the definition of an annual rent function with a sequence of nearly equidistant local minima. It is suggested that, in practice, the second local minimum of this function leads to an adequate choice of the parameter H. The method can be applied to both stochastic and deterministic cost modelling situations, and it was tested using both real data from large samples of different types of passenger vehicles and artificially generated data.
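The "equivalent annual rent" used here is the standard annuity transformation of a horizon's discounted cost, which lets policies of different lengths be compared on a per-year basis. A minimal version, with the discount rate and cash flows as illustrative inputs rather than data from the paper, is:

```python
def equivalent_annual_rent(cash_flows, rate):
    """Spread the net present value of a finite-horizon policy over its horizon.

    cash_flows: cost incurred in each year 1..H of one repetition of the policy.
    rate: annual discount rate (e.g. 0.08 for 8%).
    Returns the constant annual payment with the same present value, so that
    policies with different horizon lengths H can be compared directly.
    """
    H = len(cash_flows)
    npv = sum(c / (1.0 + rate) ** t for t, c in enumerate(cash_flows, start=1))
    annuity_factor = (1.0 - (1.0 + rate) ** (-H)) / rate
    return npv / annuity_factor

# illustrative: a 4-year replacement cycle with rising maintenance costs
print(equivalent_annual_rent([3000, 3500, 4200, 5100], rate=0.08))
```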

11.
Tabu search for a class of scheduling problems   (total citations: 1, self-citations: 0, other citations: 1)
Scheduling problems are often modeled as resource-constrained problems in which critical resource assignments to tasks are known and the best assignment of resource time must be made subject to these constraints. Generalization to resource scheduling, where resource assignments are chosen concurrently with times, results in a problem which is much more difficult. A simplified model of the general resource scheduling model is possible, however, in which tasks must be assigned a single primary resource, subject to constraints resulting from the preassignment of secondary, or auxiliary, resources. This paper describes extensions and enhancements of tabu search for this special case of the resource scheduling problem. The class of problems is further restricted to those where it is reasonable to enumerate both feasible time and primary resource assignments. Potential applications include shift-oriented production and manpower scheduling problems as well as course scheduling, where classrooms (instructors) are primary and instructors (rooms) and students are secondary resources. The underlying model is a type of quadratic multiple choice problem which we call multiple choice quadratic vertex packing (MCQVP). Results for strategic oscillation and biased candidate sampling strategies are shown for reasonably sized real and randomly generated synthetic problem instances. The strategies are compared with other variations using consistent measures of solution time and quality developed for this study.
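As a point of reference for the tabu search extensions discussed here, a bare-bones tabu search loop looks like the following. The neighbourhood, move attributes, and aspiration test are generic placeholders and do not reproduce the strategic oscillation or biased candidate sampling strategies of the paper.

```python
from collections import deque

def tabu_search(initial, neighbours, cost, tenure=7, max_iters=500):
    """Minimal tabu search: best admissible neighbour, fixed-length tabu list.

    neighbours(solution) -> iterable of (move_attribute, candidate_solution).
    cost(solution) -> value to minimise.
    """
    current = initial
    best, best_cost = current, cost(current)
    tabu = deque(maxlen=tenure)                  # recently used move attributes
    for _ in range(max_iters):
        chosen, chosen_cost, chosen_move = None, float("inf"), None
        for move, cand in neighbours(current):
            c = cost(cand)
            aspiration = c < best_cost           # tabu move allowed if it beats the best
            if (move in tabu and not aspiration) or c >= chosen_cost:
                continue
            chosen, chosen_cost, chosen_move = cand, c, move
        if chosen is None:                       # every move is tabu and non-improving
            break
        current = chosen
        tabu.append(chosen_move)
        if chosen_cost < best_cost:
            best, best_cost = chosen, chosen_cost
    return best, best_cost
```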

12.
To address the transition of wind power equipment manufacturers towards service-oriented manufacturing, this paper proposes that such manufacturers jointly establish wind farm maintenance service bases to provide professional maintenance services for wind farms. Taking maintenance time, transportation time, and service capacity constraints into account, a mathematical model is built to optimize the dispatching of maintenance crews so as to minimize maintenance cost. The model is solved with a genetic algorithm; a mixed non-negative integer segmented encoding based on wind farms and maintenance crews is proposed, which avoids generating infeasible solutions during the genetic operations. A numerical example is solved with a MATLAB program, and satisfactory results are obtained.

13.
This paper investigates a large-scale scheduling problem in the iron and steel industry, called color-coating production scheduling (CCPS). The problem is to generate multiple production turns for the galvanized coils that dynamically arrive from upstream lines within a given scheduling horizon, and at the same time to determine the sequence of these turns so that productivity and product quality are maximized while the production cost and the number of generated turns are minimized. We formulate this problem as a mixed integer nonlinear program and propose a tabu search heuristic to obtain satisfactory solutions. Results on real production instances show that the presented model and heuristic are more effective and efficient in comparison with manual scheduling. A practical scheduling system for CCPS combining the model and heuristic has been developed and successfully implemented in a major iron and steel enterprise in China.

14.
The paper presents fast algorithms for designing finite impulse response (FIR) notch filters. The aim is to design a digital FIR notch filter so that the magnitude response has a deep notch at a specified frequency and, as the notch frequency changes, the filter coefficients can track the notch quickly in real time. The filter design problem is first converted into a convex optimization problem in the autocorrelation domain. The frequency response of the autocorrelation of the filter impulse response is compared with the desired filter response, and the integral square error is minimized with respect to the unknown autocorrelation coefficients. Spectral factorization is then used to calculate the coefficients of the filter. In the optimization process, a computational advantage is obtained by exploiting the structure of the Hessian matrix, which consists of a Toeplitz plus a Hankel matrix. Two methods have been used for solving the Toeplitz-plus-Hankel system of equations. In the first, the computational time is reduced by using Block-Levinson recursion to solve the Toeplitz-plus-Hankel system; in the second, the conjugate gradient method with different preconditioners is used. Comparative studies demonstrate the computational advantages of the latter. Both algorithms have been used to obtain the autocorrelation coefficients of notch filters of different orders. The original filter coefficients are found by spectral factorization, and each of these filters has been tested for filtering synthetic as well as real-life signals.
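The computational point behind the second method is that a Toeplitz-plus-Hankel matrix never has to be factorized: a conjugate gradient solver only needs matrix-vector products, and those products can be computed quickly (via FFT-based convolution/correlation) rather than with a dense matrix. The toy sketch below wires a dense symmetric positive definite Toeplitz-plus-Hankel example into SciPy's CG through a `LinearOperator`; the matrices, sizes, and the dense matvec are illustrative stand-ins, not the filter-design system from the paper.

```python
import numpy as np
from scipy.linalg import toeplitz, hankel
from scipy.sparse.linalg import LinearOperator, cg

n = 8
# symmetric Toeplitz part, diagonally dominant so that T + H stays positive
# definite in this toy example; the Hankel part is a small perturbation
t = np.r_[4.0, 1.0, 0.5, np.zeros(n - 3)]
T = toeplitz(t)
h = 0.1 * np.exp(-np.arange(2 * n - 1) / 4.0)
H = hankel(h[:n], h[n - 1:])

A = T + H
b = np.random.default_rng(0).normal(size=n)

# In a fast implementation the matvec would be an O(n log n) FFT-based product;
# here a dense product stands in for it so the CG call itself stays identical.
op = LinearOperator((n, n), matvec=lambda x: A @ x)
x, info = cg(op, b)
print(info, np.linalg.norm(A @ x - b))   # info == 0 means CG converged
```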

15.
This paper is dedicated to a study of different extensions of the classical knapsack problem to the case where different elements of the problem formulation are subject to a degree of uncertainty described by random variables. This brings the knapsack problem into the realm of stochastic programming. Two model formulations are proposed, based on the introduction of probability constraints. The first is a static quadratic knapsack with a probability constraint on the capacity of the knapsack. The second is a two-stage quadratic knapsack model with recourse, in which we introduce a probability constraint on the capacity of the knapsack in the second stage. As far as we know, this is the first time such a constraint has been used in a two-stage model. The solution techniques are based on semidefinite relaxations, which allows large instances to be solved, for which exact methods cannot be used. Numerical experiments on a set of randomly generated instances are discussed.
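To make the probability constraint concrete, consider the common special case (an illustrative assumption, not necessarily the distribution used in the paper) in which the item weights $w_j$ are independent Gaussians, $w_j \sim \mathcal{N}(\mu_j, \sigma_j^2)$. Requiring the knapsack capacity $c$ to be respected with probability at least $1-\varepsilon$ then has the standard deterministic equivalent

$$\Pr\Big(\sum_j w_j x_j \le c\Big) \ge 1-\varepsilon \quad\Longleftrightarrow\quad \sum_j \mu_j x_j \;+\; \Phi^{-1}(1-\varepsilon)\,\sqrt{\textstyle\sum_j \sigma_j^2 x_j^2} \;\le\; c,$$

where $\Phi^{-1}$ is the standard normal quantile and $\varepsilon \le 1/2$. For binary variables $x_j^2 = x_j$, so the constraint has a conic/quadratic structure, which is the kind of structure that semidefinite relaxations can exploit.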

16.
Hub-and-spoke networks are used to switch and transfer commodities between terminal nodes in distribution systems at minimum cost and/or time. The p-hub center allocation problem is to minimize the maximum travel time in a network by locating p hubs from a set of candidate hub locations and allocating demand and supply nodes to the hubs; the capacities of the hubs are given. Previous studies have usually considered only quantitative parameters such as cost and time when finding the optimum locations. This is often not sufficient: qualitative parameters such as quality of service, zone traffic, environmental issues, and capability for future development, which are critical for decision makers (DMs), have generally not been incorporated into the models, even though in many real-world situations qualitative parameters are as important as quantitative ones. We present a hybrid approach to the p-hub center problem in which the location of hub facilities is determined by both kinds of parameters simultaneously. To deal with qualitative and uncertain data, fuzzy systems are used as the basis of this work; specifically, fuzzy VIKOR is used to model a hybrid solution to the hub location problem. The results are used by a genetic algorithm to successfully solve a number of problem instances. Furthermore, this method can easily take into account desired quantitative variables other than cost and time, such as future market and potential customers.

17.
陈玲俐 (Chen Lingli), 于洁 (Yu Jie). 《应用数学和力学》(Applied Mathematics and Mechanics), 2008, 29(12): 1486-1494
Since the computation of network connectivity reliability is an NP-hard problem, when the system reliability cannot be expressed explicitly, the optimization of large complex networks based on connectivity reliability can usually only be tackled with heuristic optimization algorithms. Through an analysis of the structure of connectivity reliability algorithms for complex networks, a Taylor expansion of the system connectivity reliability is derived. A genetic algorithm is applied in which the population fitness values are determined from this Taylor expansion, yielding an optimal reliability allocation scheme for the system. The optimal solution is then substituted into an improved Minty algorithm or a recursive decomposition algorithm to compute its exact connectivity reliability and the corresponding Taylor expansion of the connectivity reliability, and the genetic algorithm is run again to obtain a new optimal solution. When the error between the exact reliability of the optimal solution and the approximation given by the Taylor expansion is smaller than a specified precision, this optimal solution is taken as the final optimal reliability allocation scheme for the system; this optimization process is called the iterative genetic algorithm. Numerical examples show that the iterative genetic algorithm is not only applicable to the optimal allocation of connectivity reliability in large networks, but also yields several groups of intermediate optimal solutions during the iterations; these solutions all lie near the optimum and form a set of near-optimal alternatives, which broadens the range of choices in practical engineering optimization.

18.
Naive Bayes (NB) is one of the most popular classification methods. It is particularly useful when the dimension of the predictor is high and the data are generated independently. Meanwhile, social network data are becoming increasingly accessible, owing to the fast development of various social network services and websites. By contrast, data generated by a social network are most likely to be dependent, and the dependency is mainly determined by the social network relationships. How to extend the classical NB method to social network data therefore becomes a problem of great interest. To this end, we propose here a network-based naive Bayes (NNB) method, which generalizes the classical NB model to social network data. The key advantage of the NNB method is that it takes the network relationships into consideration. Its computational efficiency makes the NNB method feasible even in large-scale social networks. The statistical properties of the NNB model are theoretically investigated, simulation studies have been conducted to demonstrate its finite sample performance, and a real data example is analyzed for illustration purposes.
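As a baseline for what the NNB generalizes, a minimal (network-free) Bernoulli naive Bayes classifier can be written as below. How the network relationships enter the NNB model is specific to the paper and is not reproduced here; the toy data are purely illustrative.

```python
import numpy as np

def fit_naive_bayes(X, y, alpha=1.0):
    """Bernoulli naive Bayes with Laplace smoothing.

    X: (n_samples, n_features) binary matrix; y: integer class labels.
    Returns class labels, log priors, and per-class log feature probabilities.
    """
    classes = np.unique(y)
    log_prior = np.log(np.array([(y == c).mean() for c in classes]))
    # P(feature = 1 | class), smoothed so no probability is exactly 0 or 1
    theta = np.array([(X[y == c].sum(axis=0) + alpha) / ((y == c).sum() + 2 * alpha)
                      for c in classes])
    return classes, log_prior, np.log(theta), np.log(1.0 - theta)

def predict(X, model):
    classes, log_prior, log_theta, log_1m_theta = model
    # log P(class | x) up to a constant:
    # log prior + sum_j [ x_j log(theta_j) + (1 - x_j) log(1 - theta_j) ]
    scores = log_prior + X @ log_theta.T + (1 - X) @ log_1m_theta.T
    return classes[np.argmax(scores, axis=1)]

# toy use
X = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 0]])
y = np.array([1, 1, 0, 0])
print(predict(X, fit_naive_bayes(X, y)))
```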

19.
With the development of modern technology (communication, transportation, etc.), many new social networks have formed and influence our lives, and research on mining them has found use in many areas. Compared with traditional networks, however, these new social networks are usually very large, and because of this complexity few models can be adapted to mine them effectively. In this paper, we mine these new social networks using a wave propagation process and mainly discuss two applications of our model: the message broadcasting problem and the rumor spreading problem. Our model has the following advantages: (1) it can simulate the message-transmitting process of real networks over time, since a time factor is included in the model; (2) our message broadcasting algorithm can mine the underlying relationships of real networks and exhibit some clustering properties; (3) we also provide an algorithm to examine a social network and find the rumor makers. Complexity analysis shows that our algorithms are scalable to large social networks, and stability analysis shows that they are stable.

20.
Mean-risk models have been widely used in portfolio optimization. However, such models may produce portfolios that are dominated with respect to second order stochastic dominance and are therefore not optimal for rational, risk-averse investors. This paper considers the problem of constructing a portfolio which is non-dominated with respect to second order stochastic dominance and whose return distribution has specified desirable properties. The problem is multi-objective and is transformed into a single objective problem by using the reference point method, in which target levels, known as aspiration points, are specified for the objective functions. A model is proposed in which the aspiration points relate to ordered outcomes of the portfolio return. The concept is extended by additionally specifying reservation points, which act pre-emptively in the optimization model. The theoretical properties of the models are studied, and their performance is investigated on real data drawn from the Hang Seng index.
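For reference, second order stochastic dominance of a portfolio return $R_x$ over a benchmark $Y$ can be stated through expected shortfalls; this is a standard formulation sketched under the usual discrete-scenario assumption, not the paper's exact model:

$$R_x \succeq_{\mathrm{SSD}} Y \;\iff\; \mathbb{E}\big[(\eta - R_x)_+\big] \;\le\; \mathbb{E}\big[(\eta - Y)_+\big] \quad \text{for all } \eta \in \mathbb{R}.$$

When the benchmark takes finitely many values $y_1,\dots,y_T$ under equally likely scenarios, it suffices to impose these inequalities at $\eta = y_1,\dots,y_T$, which turns the dominance requirement into finitely many constraints that can sit inside an optimization model.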
