Similar documents
20 similar documents found (search time: 62 ms)
1.
Generally, in the portfolio selection problem the Decision Maker (DM) simultaneously considers conflicting objectives such as rate of return, liquidity and risk. Multi-objective programming techniques such as goal programming (GP) and compromise programming (CP) are used to choose the portfolio that best satisfies the DM's aspirations and preferences. In this article, we assume that the parameters associated with the objectives are random and normally distributed. We propose a chance constrained compromise programming (CCCP) model as a deterministic transformation of the multi-objective stochastic programming portfolio model. CCCP is based on the CP and chance constrained programming (CCP) models. The proposed program is illustrated by means of a portfolio selection problem from the Tunisian stock exchange.
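The deterministic transformation behind CCP can be sketched as follows: for normally distributed returns, the chance constraint P(rᵀx ≥ R) ≥ α is equivalent to μᵀx − z_α·σ(x) ≥ R, where z_α is the standard normal α-quantile. A minimal illustration (the function name and example data are ours, not from the article):

```python
import math
from statistics import NormalDist

def chance_constraint_holds(mu, cov, x, target, alpha):
    """Check the deterministic equivalent of P(r'x >= target) >= alpha
    for returns r ~ N(mu, cov) and portfolio weights x."""
    mean = sum(m * xi for m, xi in zip(mu, x))
    var = sum(x[i] * cov[i][j] * x[j]
              for i in range(len(x)) for j in range(len(x)))
    z = NormalDist().inv_cdf(alpha)  # standard normal alpha-quantile
    return mean - z * math.sqrt(var) >= target
```

For an equal-weight two-asset portfolio with mean return 7.5%, a 5% return target is met at the 50% confidence level but not at the 95% level, since the z-term subtracts a full standard deviation and more.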

2.
In this paper, we propose a genetic programming (GP) based approach to evolve fuzzy rule based classifiers. For a c-class problem, a classifier consists of c trees. Each tree, T_i, of the multi-tree classifier represents a set of rules for class i. During the evolutionary process, the inaccurate/inactive rules of the initial set are removed by a cleaning scheme. This allows good rules to survive, and it eventually determines the number of rules. In the beginning, our GP scheme uses a randomly selected subset of features and then evolves the features to be used in each rule. The initial rules are constructed using prototypes, which are generated randomly as well as by the fuzzy k-means (FKM) algorithm. Experiments are conducted in three different ways: using only randomly generated rules, using a mixture of randomly generated and FKM prototype based rules, and using exclusively FKM prototype based rules. The performance of the classifiers is comparable irrespective of the type of initial rules, which underscores the robustness of the proposed evolutionary scheme. In this context, we propose a new mutation operation to alter the rule parameters. The GP scheme optimizes both the structure of the rules and the parameters involved. The method is validated on six benchmark data sets, and the performance of the proposed scheme is found to be satisfactory.
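The firing of prototype-based fuzzy rules can be sketched with Gaussian membership functions (a minimal illustration of the general idea; the function names, the product t-norm, and the fixed spread are our assumptions, not details from the paper):

```python
import math

def rule_strength(x, prototype, spread):
    # product of per-feature Gaussian memberships centred at the prototype
    return math.prod(math.exp(-((xi - pi) ** 2) / (2 * spread ** 2))
                     for xi, pi in zip(x, prototype))

def classify(x, rules):
    # rules: (class_label, prototype, spread) triples; winner-take-all firing
    return max(rules, key=lambda r: rule_strength(x, r[1], r[2]))[0]
```

A point near a class prototype fires that class's rule most strongly, which is the behaviour an evolutionary scheme then refines by mutating prototypes and spreads.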

3.
Goal programming (GP) is one of the most commonly used mathematical programming tools for modelling multiple objective optimisation (MOO) problems, and the literature contains numerous MOO problems of varying complexity modelled using GP. One of the main difficulties in GP is solving its mathematical formulations optimally. Owing to the difficulties imposed by classical solution techniques, there is a trend in the literature towards solving mathematical programming formulations, including goal programmes, using modern heuristic optimisation techniques, namely genetic algorithms (GA), tabu search (TS) and simulated annealing (SA). This paper uses the multiple objective tabu search (MOTS) algorithm, previously proposed by the author, to solve GP models. In the proposed approach, GP models are first converted to their classical MOO equivalents using some simple conversion procedures; the problem is then solved using the MOTS algorithm. The results of the computational experiment show that MOTS is a promising candidate tool for solving GP models.
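The conversion step can be sketched like this: each goal "f_i(x) should reach target t_i" becomes a MOO objective measuring the unwanted deviation from t_i. This is our own minimal illustration of such a conversion procedure, not the author's code:

```python
def gp_to_moo(goal_fns, targets, senses):
    """Turn goals into deviation-minimisation objectives.
    senses[i]: '<=' penalises overshoot, '>=' penalises undershoot,
    '=' penalises both (two-sided deviation)."""
    def make(f, t, s):
        if s == '<=':
            return lambda x: max(f(x) - t, 0.0)
        if s == '>=':
            return lambda x: max(t - f(x), 0.0)
        return lambda x: abs(f(x) - t)
    return [make(f, t, s) for f, t, s in zip(goal_fns, targets, senses)]
```

Each returned objective is then minimised by the multi-objective heuristic; a solution achieving 8 against a "reach at least 10" goal scores a deviation of 2.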

4.
We consider the problem of choosing the ‘best choice’ among a certain number of objects that are presented to a decision-maker in sequential order. Such a sequential selection problem is commonly referred to as the ‘best choice problem’, and its optimal stopping rule has been obtained either via the dynamic programming approach or via the Markovian approach. Based on the theory of information economics, we propose in the paper the third approach to a generalized version of the best choice problem that is intuitively more appealing. Various types of the best choice problem, such as (1) the classical secretary problem, (2) no information group interview problem, and (3) full information best choice problem with a random walk process, are shown to be special cases of the generalized best choice problem. The modelling framework of information economics has potential for building theory that ultimately would produce practical stopping rules.  相似文献   
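For special case (1), the classical secretary problem, the optimal stopping rule can be computed exactly: skip the first r − 1 candidates, then accept the first candidate better than all seen so far, choosing r to maximise the success probability. A small sketch (our own illustration, not code from the paper):

```python
def best_cutoff(n):
    """Optimal r for the classical secretary problem with n candidates:
    skip the first r-1, then take the first candidate better than all so far."""
    def success_prob(r):
        if r == 1:
            return 1.0 / n          # accepting the first candidate
        return (r - 1) / n * sum(1.0 / (k - 1) for k in range(r, n + 1))
    return max(range(1, n + 1), key=success_prob)
```

For n = 100 this gives r = 38, i.e. skip 37 candidates, matching the familiar 1/e heuristic (100/e ≈ 36.8).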

5.
A Dual-Objective Evolutionary Algorithm for Rules Extraction in Data Mining
This paper presents a dual-objective evolutionary algorithm (DOEA) for extracting multiple decision rule lists in data mining, aimed at satisfying the classification criteria of high accuracy and ease of user comprehension. Unlike existing approaches, the algorithm incorporates the concept of Pareto dominance to evolve a set of non-dominated decision rule lists, each having a different classification accuracy and number of rules over a specified range. The classification results of the DOEA are analyzed and compared with those of existing rule-based and non-rule-based classifiers on 8 test problems from the UCI Machine Learning Repository. It is shown that the DOEA produces comprehensible rules with classification accuracy competitive with many methods in the literature. Results from box plots and t-tests further examine its invariance to random partitioning of the datasets. An erratum to this article is available.
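The Pareto-dominance relation driving this kind of dual-objective selection can be sketched on the two criteria named above, accuracy (maximised) and rule count (minimised); the function names are ours:

```python
def dominates(a, b):
    # a, b: (accuracy, rule_count); higher accuracy and fewer rules are better
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    strictly_better = a[0] > b[0] or a[1] < b[1]
    return no_worse and strictly_better

def pareto_front(points):
    # keep only the rule lists not dominated by any other candidate
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

A 90%-accurate list with 8 rules dominates a 90%-accurate list with 10 rules, so only the former survives on the front.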

6.
In the last 40 years, there has been a marked transformation in the development of new methodologies to assist the decision-making process, especially in multi-criterion decision-making and in multi-objective programming (MOP). Goal programming (GP) is the best-known MOP model, and it is today more alive than ever, supported by a network of researchers and practitioners who continually feed it with theoretical developments and applications, all with resounding success. This paper paints a picture summarizing the history of GP and suggests a few areas of research in this era of globalization.

7.
This paper studies the global optimization of polynomial programming problems using Reformulation-Linearization Technique (RLT)-based linear programming (LP) relaxations. We introduce a new class of bound-grid-factor constraints that can be judiciously used to augment the basic RLT relaxations in order to improve the quality of lower bounds and enhance the performance of global branch-and-bound algorithms. Certain theoretical properties are established that shed light on the effect of these valid inequalities in driving the discrepancies between RLT variables and their associated nonlinear products to zero. To preserve computational expediency while promoting efficiency, we propose certain concurrent and sequential cut generation routines and various grid-factor selection rules. The results indicate a significant tightening of lower bounds, which yields an overall reduction in computational effort for solving a test-bed of polynomial programming problems to global optimality in comparison with the basic RLT procedure as well as the commercial software BARON.

8.
This paper attempts to consolidate over 15 years of attempts at designing algorithms for geometric programming (GP) and its extensions. The pitfalls encountered when solving GP problems and some proposed remedies are discussed in detail. A comprehensive summary of published software for the solution of GP problems is included. Also included is a numerical comparison of some of the more promising recently developed computer codes for geometric programming on a specially chosen set of GP test problems. The relative performance of these codes is measured in terms of their robustness as well as speed of computation. The performance of some general nonlinear programming (NLP) codes on the same set of test problems is also given and compared with the results for the GP codes. The paper concludes with some suggestions for future research.

An earlier version of this paper was presented at the ORSA/TIMS Conference, Chicago, 1975.

This work was supported in part by the National Research Council of Canada, Grant No. A-3552, Canada Council Grant No. S74-0418, and a research grant from the School of Organization and Management, Yale University. The author wishes to thank D. Himmelblau, T. Jefferson, M. Rijckaert, X. M. Martens, A. Templeman, J. J. Dinkel, G. Kochenberger, M. Ratner, L. Lasdon, and A. Jain for their cooperation in making the comparative study possible.

9.
By applying ideas from option pricing theory, this paper models the estimation of the firm value distribution function as an entropy optimization problem subject to correlation constraints. It is shown that the problem can be converted to the dual of a computationally attractive primal geometric programming (GP) problem and easily solved using publicly available software. A numerical example involving stock price data from a Japanese company demonstrates the practical value of the GP approach. Given the use of Monte Carlo simulation in option pricing and risk analysis, and its difficulties in handling distribution functions subject to correlations, the GP based method discussed here may have computational advantages in wider areas of computational finance beyond the application discussed here.

10.
The controlled Markov chains (CMC) approach to software testing treats software testing as a control problem: the software under test serves as the controlled object, modeled as a controlled Markov chain, and the software testing strategy serves as the corresponding controller. In this paper we extend the CMC approach to the case that the number of tests that can be applied to the software under test is limited. The optimal testing strategy is derived when the true values of all the software parameters of concern are known a priori. An adaptive testing strategy is employed when the true values are not known a priori and must be estimated on-line from testing data during software testing. A random testing strategy ignores all the related information (true values or estimates) of the software parameters of concern and selects a possible test case according to a uniform probability distribution. Simulation results show that the performance of an adaptive testing strategy cannot match that of the optimal testing strategy, but is better than that of a random testing strategy. This paper further justifies the idea of software cybernetics, which aims to explore the interplay between software theory/engineering and control theory/engineering.
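The gap between random and adaptive strategies can be illustrated with a toy simulation (entirely our own construction, far simpler than the paper's CMC model): an adaptive strategy that greedily picks the test case with the highest estimated failure rate uncovers more failures than uniform random selection when failure rates are skewed.

```python
import random

def run_strategy(fail_probs, budget, pick, seed=0):
    """Simulate `budget` test executions; return the number of failures found."""
    rng = random.Random(seed)
    counts = [[0, 0] for _ in fail_probs]  # [failures, runs] per test case
    found = 0
    for _ in range(budget):
        i = pick(counts, rng)
        failed = rng.random() < fail_probs[i]
        counts[i][0] += failed
        counts[i][1] += 1
        found += failed
    return found

def random_pick(counts, rng):
    # uniform selection, ignoring all observed information
    return rng.randrange(len(counts))

def adaptive_pick(counts, rng):
    # greedy on the failure rate estimated with a uniform (Laplace) prior
    return max(range(len(counts)),
               key=lambda i: (counts[i][0] + 1) / (counts[i][1] + 2))
```

With one failure-prone test case among four, the adaptive picker quickly concentrates its budget on it, while the random picker spreads effort uniformly.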

11.
12.
In this paper we analyze how the optimal consumption, investment and life insurance rules are modified by the introduction of a class of time-inconsistent preferences. In particular, we account for the fact that an agent’s preferences evolve along the planning horizon according to her increasing concern about the bequest left to her descendants and about her welfare at retirement. To this end, we consider a stochastic continuous time model with random terminal time for an agent with a known distribution of lifetime under heterogeneous discounting. In order to obtain the time-consistent solution, we solve a non-standard dynamic programming equation. For the case of CRRA and CARA utility functions we compare the explicit solutions for the time-inconsistent and the time-consistent agent. The results are illustrated numerically.

13.
In this research, we investigate stopping rules for software testing and propose two stopping rules for software reliability testing based on the impartial reliability model. The impartial reliability difference (IRD-MP) rule considers the difference between the impartial transition-probability reliabilities estimated for both the software developer and the consumers at their predetermined prior information levels. The empirical–impartial reliability difference (EIRD-MP) rule suggests stopping a software test when the computed empirical transition reliability tends toward its estimated impartial transition reliability. To ensure the high-standard requirements of safety-critical software, both rules take the maximum probability (MP) of untested paths into account.

14.
Disruptions in airline operations can result in infeasibilities in aircraft and passenger schedules. Airlines typically recover aircraft schedules and passenger itineraries sequentially; however, passengers are severely affected by disruptions and recovery decisions. In this paper, we present a mathematical formulation for the integrated aircraft and passenger recovery problem that considers aircraft and passenger related costs simultaneously. Using the superimposition of aircraft and passenger itinerary networks, passengers are explicitly modeled in order to use realistic passenger related costs. In addition to the common routing recovery actions, we integrate several passenger recovery actions and cruise speed control in our solution approach. Cruise speed control is a very beneficial action for mitigating delays; on the other hand, it adds complexity to the problem due to the nonlinearity of the fuel cost function. The problem is formulated as a mixed integer nonlinear programming (MINLP) model. We show that the problem can be reformulated as a conic quadratic mixed integer programming (CQMIP) problem, which can be solved with commercial optimization software such as IBM ILOG CPLEX. Our computational experiments show that we could handle several simultaneous disruptions optimally on a four-hub network of a major U.S. airline in less than a minute on average. We conclude that the proposed approach is able to find the optimal tradeoff between operating and passenger-related costs in real time.

15.
Modular programming is a development paradigm that emphasizes self-contained, flexible, and independent pieces of functionality. This practice allows new features to be seamlessly added when desired, and unwanted features to be removed, thus simplifying the software's user interface. The recent rise of web-based software applications has presented new challenges for designing an extensible, modular software system. In this article, we outline a framework for designing such a system, with a focus on reproducibility of the results. As a case study, we present a Shiny-based web application called intRo, which allows the user to perform basic data analyses and statistical routines. Finally, we highlight some challenges we encountered when combining modular programming concepts with reactive programming as used by Shiny, and how to address them. Supplementary material for this article is available online.

16.
Since their appearance, new technologies have raised many expectations about their potential for innovating teaching and learning practices; in particular, didactical software such as a Dynamic Geometry System (DGS) or a Computer Algebra System (CAS) has been considered an innovative element suited to enhancing mathematical learning and supporting teachers' classroom practice. This paper shows how the teacher can exploit the potential of a DGS to overcome crucial difficulties in moving from an intuitive to a deductive approach to geometry. A specific intervention is presented and discussed through examples drawn from a long-term teaching experiment carried out in the 9th and 10th grades of a scientific high school. Focusing on an episode through the lens of a semiotic analysis, we show how the teacher's intervention develops, exploiting the semiotic potential offered by the DGS Cabri-Géomètre. The semiotic lens highlights specific patterns in the teacher's action that make students' personal meanings evolve towards the mathematical meanings that are the objective of the intervention.

17.
Collective intelligence is defined as the ability of a group to solve more problems than its individual members. It is argued that the obstacles created by individual cognitive limits and the difficulty of coordination can be overcome by using a collective mental map (CMM). A CMM is defined as an external memory with shared read/write access, that represents problem states, actions and preferences for actions. It can be formalized as a weighted, directed graph. The creation of a network of pheromone trails by ant colonies points us to some basic mechanisms of CMM development: averaging of individual preferences, amplification of weak links by positive feedback, and integration of specialised subnetworks through division of labor. Similar mechanisms can be used to transform the World-Wide Web into a CMM, by supplementing it with weighted links. Two types of algorithms are explored: 1) the co-occurrence of links in web pages or user selections can be used to compute a matrix of link strengths, thus generalizing the technique of "collaborative filtering"; 2) learning web rules extract information from a user's sequential path through the web in order to change link strengths and create new links. The resulting weighted web can be used to facilitate problem-solving by suggesting related links to the user, or, more powerfully, by supporting a software agent that discovers relevant documents through spreading activation.
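The first algorithm type, co-occurrence based link strengths, can be sketched in a few lines (our own minimal version of the idea, not the author's implementation):

```python
from collections import Counter
from itertools import combinations

def link_strengths(selections):
    """selections: iterable of sets of pages co-selected in one session.
    Returns normalised pair weights, a tiny co-occurrence 'link matrix'."""
    counts = Counter()
    for pages in selections:
        for a, b in combinations(sorted(pages), 2):
            counts[(a, b)] += 1
    total = sum(counts.values()) or 1
    return {pair: c / total for pair, c in counts.items()}
```

Pages selected together in many sessions accumulate weight, so the strongest links can be surfaced as suggestions, the averaging-plus-amplification mechanism described above.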

18.
An approach to linear programs with random requirements is suggested. The procedure involves choosing actions which minimize the expected value of a certain loss function. These actions are then taken as goals, and optimal values of the decision variables are found by solving a simple linear goal programming problem.
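For one classical loss function the first step has a closed form: with a normally distributed requirement and piecewise-linear under/over-achievement costs, the expected-loss-minimising action (the goal) is a quantile of the requirement distribution. A hedged sketch (the specific loss is our illustrative choice, not necessarily the paper's):

```python
from statistics import NormalDist

def optimal_goal(mu, sigma, under_cost, over_cost):
    """Minimiser of E[cu*max(B-b, 0) + co*max(b-B, 0)] for B ~ N(mu, sigma):
    the critical-fractile quantile cu / (cu + co)."""
    q = under_cost / (under_cost + over_cost)
    return NormalDist(mu, sigma).inv_cdf(q)
```

With symmetric costs the goal is simply the mean requirement; costlier undershooting pushes the goal above the mean before the goal programme is solved.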

19.
This paper considers several probability maximization models for multi-scenario portfolio selection problems in which future returns in possible scenarios are multi-dimensional random variables. In order to account for occurrence probabilities and decision makers' predictions with respect to all scenarios, a portfolio selection problem that flexibly assigns a weight to each scenario is proposed. Furthermore, by introducing aspiration levels for occurrence probabilities or future target profit and maximizing the minimum aspiration level, a robust portfolio selection problem is considered. Since these problems are formulated as stochastic programming problems due to the inclusion of random variables, they are transformed into deterministic equivalent problems by introducing chance constraints based on the stochastic programming approach. Then, using a relation between the variance and the absolute deviation of random variables, the proposed models are transformed into linear programming problems, and efficient solution methods are developed to obtain the global optimal solution. Furthermore, a numerical example of a portfolio selection problem is provided to compare the proposed models with the basic model.

20.
This paper is concerned with computational experimentation leading to the design of effective branch and bound algorithms for an important class of nonlinear integer programming problems, namely linearly constrained problems, which are used to model several real-world situations. The main contributions are a study of the effect of node selection, branching variable selection and storage reduction strategies on overall computational effort for this class of problems, as well as the generation of a set of adequate test problems. Several node and branching variable strategies are compared in the context of a pure breadth-first enumeration, as well as in a special combined breadth- and depth-enumeration approach presented herein. The effect of using updated pseudocosts is also briefly addressed. Computational experience is presented on a set of eighteen suitably sized nonlinear test problems, as well as on some random linear integer programs. Some of the new rules proposed are demonstrated to be significantly superior to previously suggested strategies, interestingly, even for linear integer programming problems.
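The interplay of node selection and bounding that such studies examine can be illustrated with a best-first branch and bound on a tiny 0-1 knapsack instance (our own toy example, far smaller than the paper's test problems): nodes sit in a priority queue ordered by an LP-relaxation bound, and any node whose bound cannot beat the incumbent is pruned.

```python
import heapq

def knapsack_bb(values, weights, capacity):
    """Best-first branch and bound for the 0-1 knapsack problem."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]

    def bound(level, value, weight):
        # optimistic bound: fill remaining capacity greedily, last item fractional
        for i in range(level, n):
            if weight + w[i] <= capacity:
                weight += w[i]
                value += v[i]
            else:
                return value + v[i] * (capacity - weight) / w[i]
        return value

    best = 0
    heap = [(-bound(0, 0, 0), 0, 0, 0)]  # (-bound, level, value, weight)
    while heap:
        neg_b, level, value, weight = heapq.heappop(heap)
        if -neg_b <= best or level == n:
            continue  # prune: this subtree cannot improve on the incumbent
        if weight + w[level] <= capacity:  # branch: include item `level`
            nv, nw = value + v[level], weight + w[level]
            best = max(best, nv)
            heapq.heappush(heap, (-bound(level + 1, nv, nw), level + 1, nv, nw))
        # branch: exclude item `level`
        heapq.heappush(heap, (-bound(level + 1, value, weight),
                              level + 1, value, weight))
    return best
```

Swapping the heap for a stack or a FIFO queue turns the same search into depth-first or breadth-first enumeration, which is exactly the kind of node-selection tradeoff such experiments measure.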
