Similar Documents
 20 similar documents retrieved (search time: 921 ms)
1.
This paper describes sensitivity analysis in multiple objective linear programming (MOLP) in which the coefficients of one criterion function are parameterized. Parametric linear programming (LP) is used to analyze the range set: the set of parameters for which a given feasible solution is efficient for the MOLP. The main theoretical result establishes the convexity of the range set. Moreover, an algorithm based on a sequence of LP problems is presented for generating the range set.
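As a hedged illustration of the range-set idea (a toy single-objective parametric LP, not the paper's MOLP algorithm), one can probe the parameter on a grid with `scipy.optimize.linprog` and observe that the set of parameter values for which a fixed vertex stays optimal is an interval, mirroring the convexity result:

```python
import numpy as np
from scipy.optimize import linprog

# Toy parametric LP: maximise x + t*y over {x + y <= 3, 0 <= x, y <= 2}.
# We probe t on a grid and record where the vertex (2, 1) stays optimal.
A_ub, b_ub = [[1, 1]], [3]
bounds = [(0, 2), (0, 2)]

def optimal_at(t, point=(2.0, 1.0)):
    # linprog minimises, so negate the objective to maximise x + t*y
    res = linprog(c=[-1.0, -t], A_ub=A_ub, b_ub=b_ub, bounds=bounds,
                  method="highs")
    # the vertex is optimal iff its objective value matches the LP optimum
    return abs((point[0] + t * point[1]) - (-res.fun)) < 1e-9

grid = np.linspace(-1.0, 3.0, 41)
in_range = [t for t in grid if optimal_at(t)]
print(min(in_range), max(in_range))   # the range set is the interval [0, 1]
```

The recorded parameter values form a single contiguous interval, which is what the convexity of the range set predicts.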

2.
The aim of this paper is to develop an effective method for solving matrix games whose payoffs are arbitrary triangular fuzzy numbers (TFNs). The method guarantees that the players' gain-floor and loss-ceiling share a common TFN-type fuzzy value, so that any matrix game with TFN payoffs has a TFN-type fuzzy value. Based on the duality theorem of linear programming (LP) and the representation theorem for fuzzy sets, the mean and the lower and upper limits of the TFN-type fuzzy value are easily computed by solving LP models derived from the 1-cut and 0-cut sets of the fuzzy payoffs. The TFN-type fuzzy value of any matrix game with TFN payoffs can thus be obtained explicitly. Moreover, the upper and lower bounds of any α-cut of the fuzzy value, together with the players' optimal mixed strategies, can be computed by solving the derived LP models at any specified confidence level α. The proposed method is demonstrated with a numerical example and compared with other methods to show its validity, applicability, and superiority.
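The LP computations behind such cut-based methods can be sketched as follows: for a crisp zero-sum game with positive payoffs, the value comes from the standard LP duality formulation, and solving it at the endpoints of the 0-cut and at the 1-cut of some made-up TFN payoffs yields the limits and the mean of the fuzzy value. This is a minimal sketch, not the authors' exact models:

```python
import numpy as np
from scipy.optimize import linprog

def game_value(A):
    """Value of the zero-sum matrix game with payoff matrix A (entries > 0),
    via the standard LP formulation: minimise sum(x) s.t. A.T @ x >= 1, x >= 0;
    the game value is 1 / sum(x)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    res = linprog(c=np.ones(m), A_ub=-A.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m, method="highs")
    return 1.0 / res.fun

# Hypothetical TFN payoffs (l, m, u): the 0-cut endpoints give crisp games
# whose values bound the fuzzy game value; the 1-cut gives its mean.
lower = [[1, 3], [4, 1]]   # lower endpoints of the 0-cuts
mean  = [[2, 4], [5, 2]]   # 1-cut (modal) payoffs
upper = [[3, 5], [6, 3]]   # upper endpoints of the 0-cuts
v_lo, v_mid, v_up = map(game_value, (lower, mean, upper))
print(v_lo, v_mid, v_up)   # 2.2  3.2  4.2
```

The three crisp values are ordered as expected, sketching the TFN-type fuzzy value (v_lo, v_mid, v_up) of the fuzzy game.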

3.
Robustness of the efficient DMUs in data envelopment analysis
Using modified versions of the CCR model, in which a decision making unit (DMU) is evaluated relative to a reference set formed by all other DMUs, this paper studies sensitivity analysis of the CCR model in data envelopment analysis (DEA). The methods for sensitivity analysis are linear programming problems whose optimal values yield particular regions of stability. Necessary and sufficient conditions are provided for the upward variations of inputs and downward variations of outputs under which an (extreme) efficient DMU remains efficient. The approach requires neither calculation of basic solutions nor the inverse of the corresponding optimal basis matrix, and is illustrated by two numerical examples.
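A minimal sketch of the input-oriented CCR envelopment LP that such sensitivity analyses build on (toy single-input/single-output data, not the modified models of the paper):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (envelopment form):
    min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate(([1.0], np.zeros(n)))     # variables: [theta, lam_1..lam_n]
    A_in = np.hstack((-X[:, [o]], X))            # X lam - theta * x_o <= 0
    A_out = np.hstack((np.zeros((s, 1)), -Y))    # -Y lam <= -y_o
    A_ub = np.vstack((A_in, A_out))
    b_ub = np.concatenate((np.zeros(m), -Y[:, o]))
    bnds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bnds, method="highs")
    return res.fun

# Toy data: 3 DMUs, one input, one output (made up for illustration).
X = [[2.0, 4.0, 8.0]]   # inputs
Y = [[1.0, 2.0, 2.0]]   # outputs
effs = [ccr_efficiency(X, Y, o) for o in range(3)]
print(effs)             # DMUs 0 and 1 are efficient; DMU 2 scores 0.5
```

Efficiency equal to 1 marks a DMU on the frontier; the paper's sensitivity results ask how far such a DMU's data may move before that status is lost.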

4.
We discuss an approach for solving the Bilinear Matrix Inequality (BMI) based on its connections with certain problems defined over matrix cones. These problems include the cone generalizations of the linear programming (LP) problem and the linear complementarity problem (LCP), referred to as the Cone-LP and the Cone-LCP, respectively. Specifically, we show that solving a given BMI is equivalent to examining the solution set of a suitably constructed Cone-LP or Cone-LCP. This approach facilitates our understanding of the geometry of the BMI and opens up new avenues for the development of computational procedures for its solution. Research supported in part by the National Science Foundation under Grant CCR-9222734.

5.
In the tolerance approach to sensitivity analysis of matrix coefficients in a linear programme, the maximum tolerance percentage characterizes the largest percentage within which selected matrix coefficients may vary simultaneously and independently while retaining the same set of optimal basic variables. While it is difficult to calculate exactly the maximum tolerance percentage under perturbations in multiple rows or columns, we show herein how it can be approximated.

6.
We study convex conic optimization problems in which the right-hand side and the cost vectors vary linearly as functions of a scalar parameter. We present a unifying geometric framework that subsumes the concept of the optimal partition in linear programming (LP) and semidefinite programming (SDP) and extends it to conic optimization. As in the optimal-partition approach to sensitivity analysis in LP and SDP, the range of perturbations for which the optimal partition remains constant can be computed by solving two conic optimization problems. Under a weaker notion of nondegeneracy, this range is given simply by a minimum ratio test. We briefly discuss the properties of the optimal value function under such perturbations.

7.
Sensitivity analysis of linear programming (LP) has been widely studied in the literature. However, only very simple and special cases have been considered when the right-hand side (RHS) parameters or objective function coefficients (OFC) are correlated. In the presence of correlation, when one parameter changes, the others vary as well. Here, principal component analysis is used to convert the correlation among homogeneous LP parameters into functional relations. Then, using the derivatives of these functional relations, classical sensitivity analysis can be performed for an LP with correlated RHS parameters or OFC. The validity of the devised method is corroborated on examples from the open literature with correlation among homogeneous parameters.
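The core step, extracting a functional relation between correlated parameters from the dominant principal component, can be sketched with synthetic data (the parameter values and noise level here are made up for illustration):

```python
import numpy as np

# Hypothetical sample of two correlated RHS parameters (rows = observations):
# b1 = 10 + 2t, b2 = 5 - t for a latent factor t, plus small noise.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
rhs = np.hstack((10 + 2 * base + 0.1 * rng.normal(size=(200, 1)),
                 5 - base + 0.1 * rng.normal(size=(200, 1))))

# PCA: eigendecomposition of the covariance matrix gives uncorrelated
# components; the leading eigenvector expresses the functional relation.
centered = rhs - rhs.mean(axis=0)
cov = np.cov(centered, rowvar=False)
_, eigvecs = np.linalg.eigh(cov)
v = eigvecs[:, -1]          # dominant direction of joint variation
ratio = v[1] / v[0]         # d(b2)/d(b1) along the principal component
print(ratio)                # close to -0.5: b2 moves -0.5 per unit of b1
```

The recovered derivative (here about -0.5) is what classical one-parameter sensitivity analysis would then use in place of independent perturbations.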

8.
Global sensitivity analysis is a widely used tool for uncertainty apportionment and is very useful for decision making, risk assessment, model simplification, optimal design of experiments, and related tasks. Density-based sensitivity analysis and regional sensitivity analysis are two widely used approaches, and both can work with a given sample set of model input-output pairs. One significant difference between them is that density-based sensitivity analysis studies output distributions conditional on input values (forward), while regional sensitivity analysis studies input distributions conditional on output values (reverse). In this paper, we study the relationship between these two approaches and show that regional sensitivity analysis (reverse), when focusing on probability density functions of the input, converges to density-based sensitivity analysis (forward) as the number of classes used to condition the model outputs in the reverse method increases. Similar to the existing general form of forward sensitivity indices, we derive a general form of the reverse sensitivity indices and provide the corresponding reverse given-data method. Owing to the shown equivalence, the reverse given-data method provides an efficient way to approximate density-based sensitivity indices. Two test examples are used to verify this connection and compare the results. Finally, we use the reverse given-data method to perform sensitivity analysis in a carbon dioxide storage benchmark problem with multiple outputs, where a forward analysis of density-based indices would be infeasible due to the high dimensionality of the model outputs.
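A rough sketch of a reverse given-data estimator (a simple histogram-distance separation measure of my own construction, not the exact index derived in the paper): condition the input sample on equiprobable output classes and compare the conditional against the unconditional input histograms:

```python
import numpy as np

def reverse_given_data(x, y, n_classes=20, n_bins=30):
    """Sketch of a reverse given-data index: partition the OUTPUT into
    equiprobable classes and average the L1 (total-variation) distance
    between conditional and unconditional input histograms."""
    edges = np.quantile(y, np.linspace(0, 1, n_classes + 1))
    bins = np.histogram_bin_edges(x, bins=n_bins)
    p_unc, _ = np.histogram(x, bins=bins)
    p_unc = p_unc / p_unc.sum()
    total = 0.0
    for k in range(n_classes):
        mask = (y >= edges[k]) & (y <= edges[k + 1])
        p_c, _ = np.histogram(x[mask], bins=bins)
        p_c = p_c / max(p_c.sum(), 1)
        total += 0.5 * np.abs(p_c - p_unc).sum()
    return total / n_classes

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(size=5000), rng.uniform(size=5000)
y = x1                          # the output depends only on x1
s1 = reverse_given_data(x1, y)  # influential input: large index
s2 = reverse_given_data(x2, y)  # irrelevant input: index near zero
print(s1, s2)
```

Conditioning on output classes concentrates the distribution of the influential input while leaving the irrelevant one unchanged, which is the reverse-direction intuition the paper formalizes.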

9.
We propose to compute the search direction at each interior-point method (IPM) iteration for a linear program via a reduced augmented system that typically has a much smaller dimension than the original augmented system. This reduced system is potentially less susceptible to the ill-conditioning effect of the elements in the (1,1) block of the augmented matrix. A preconditioner is then designed by approximating the block structure of the inverse of the transformed matrix to further improve the spectral properties of the transformed system. The resulting preconditioned system is likely to become better conditioned toward the end of the interior-point algorithm. Capitalizing on the special spectral properties of the transformed matrix, we further propose a two-phase iterative algorithm that starts by solving the normal equations with the preconditioned conjugate gradient (PCG) method in each IPM iteration, and then switches to solving the preconditioned reduced augmented system with the symmetric quasi-minimal residual (SQMR) method when it is advantageous to do so. Experimental results demonstrate that our method is competitive with direct methods in solving large-scale LP problems and a set of highly degenerate LP problems. Research supported in part by NUS Research Grant R146-000-076-112 and SMA IUP Research Grant.

10.
Invented in the 1960s, permutation codes have reemerged in recent years as a topic of great interest because of properties that make them attractive for certain modern technological applications, especially flash memory. In 2011, a polynomial-time algorithm called linear programming (LP) decoding was introduced for a class of permutation codes in which the feasible set of codewords is a subset of the vertex set of a code polytope. In this paper we investigate a new class of linear constraints for matrix polytopes with no fractional vertices through a new concept called "consolidation." We then introduce a necessary and sufficient condition under which LP decoding methods, originally designed for the Euclidean metric, can be extended to provide an efficient decoding algorithm for permutation codes under the Kendall tau metric.

11.
In this paper we discuss instances where dense matrix techniques can be utilized within a sparse simplex linear programming solver. The main emphasis is on the use of the Schur complement matrix as part of the basis matrix representation. This approach enables the basis matrix to be represented as an easily invertible sparse matrix plus one or more dense Schur complement matrices. We describe our variant of this method, which updates the QR factorization of the Schur complement matrix, and discuss implementation issues of the LP software package based on this approach.
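The basic Schur complement solve underlying such a basis representation can be sketched as follows (a dense toy example; in the solver the A block would be a sparse, easily factorized matrix and S would be kept in QR form):

```python
import numpy as np

# Solve M z = r with M = [[A, C], [D, E]] via the Schur complement
# S = E - D A^{-1} C: only the "easy" block A and the small dense S are factored.
rng = np.random.default_rng(0)
A = np.diag(rng.uniform(1, 2, size=6))           # easily invertible sparse part
C, D = rng.normal(size=(6, 2)), rng.normal(size=(2, 6))
E = rng.normal(size=(2, 2)) + 5 * np.eye(2)      # shift keeps S well conditioned
M = np.block([[A, C], [D, E]])
r = rng.normal(size=8)

Ainv_C = np.linalg.solve(A, C)
S = E - D @ Ainv_C                               # dense Schur complement
r1, r2 = r[:6], r[6:]
z2 = np.linalg.solve(S, r2 - D @ np.linalg.solve(A, r1))
z1 = np.linalg.solve(A, r1) - Ainv_C @ z2
z = np.concatenate((z1, z2))
print(np.allclose(M @ z, r))
```

Block elimination replaces one large factorization by a cheap solve with A plus a small dense solve with S, which is exactly why updating a factorization of S across simplex iterations pays off.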

12.
In this paper, a notion of Levitin-Polyak (LP, for short) well-posedness is introduced for a vector optimization problem in terms of minimizing sequences and efficient solutions. Sufficient conditions for LP well-posedness are studied under the assumptions of compactness of the feasible set, closedness of the set of minimal solutions, and continuity of the objective function. The continuity assumption is then weakened to cone lower semicontinuity for vector-valued functions. A notion of an LP minimizing sequence of sets is studied to establish another set of sufficient conditions for the LP well-posedness of the vector problem. For a quasiconvex vector optimization problem, sufficient conditions are obtained by weakening the compactness of the feasible set to a certain level-boundedness condition; this in turn leads to the equivalence of LP well-posedness and compactness of the set of efficient solutions. Some characterizations of LP well-posedness are given in terms of the upper Hausdorff convergence of the sequence of sets of approximate efficient solutions and the upper semicontinuity of an approximate efficient map, assuming compactness of the set of efficient solutions, even when the objective function is not necessarily quasiconvex. Finally, a characterization of LP well-posedness in terms of the closedness of the approximate efficient map is provided under compactness of the feasible set.

13.
Hydrologic parameters in reservoir management models are inherently uncertain. In mathematical programming models, the uncertainties are dealt with either indirectly (through sensitivity analysis of a deterministic model) or directly, by applying a chance-constrained formulation or one of the stochastic programming techniques (LP- and DP-based models). Various approaches are reviewed in the paper. Moran's theory of storage offers a stochastic modelling alternative to mathematical programming techniques; the basis of the approach and its application are presented. Reliability programming is a stochastic technique based on the chance-constrained approach, in which the reliabilities of the chance constraints are treated as additional decision variables in the model. The paper addresses the treatment of random events in reservoir management model formulations using reliability programming.

14.
To address the imprecision often introduced by widening operators in static analysis, policy iteration based on min-computations characterizes the reachable value set of a program through an iterative computation of policies, starting from a post-fixpoint. Computing each policy and the associated invariant relies on a sequence of numerical optimizations. While early research efforts relied on linear programming (LP) to address linear properties of linear programs, the current state of the art is still limited to the analysis of linear programs with at most quadratic invariants, relying on semidefinite programming (SDP) solvers to compute policies and LP solvers to refine invariants. We propose to extend the class of programs considered through the use of Sums-of-Squares (SOS) based optimization. Our approach enables the precise analysis of switched systems with polynomial updates and guards. The analysis has been implemented in Matlab and applied to existing programs from the system control literature, improving both the range of analyzable systems and the precision of previously handled ones.

15.
The focus of this paper is a regional GDP analysis of Croatian counties, part of an extensive ongoing research effort on Croatia's economic challenges within the global recession environment. Although, as an EU accession country, Croatia is divided into three NUTS 2 regions, its twenty-one counties show significant economic and social disparities. A multiple regression model investigates the extent to which regional GDP per capita depends on a set of regional variables (employment, gross investment, production of major agricultural products, GVA per person employed, value of construction works, exports, imports, foreign tourist arrivals, foreign tourist nights, ecology, and others). The parameters are then evaluated by Monte Carlo simulations, used here for the first time in comparative regional analysis. Croatian counties are also classified using cluster analysis, enabling a comparison with the official division into three NUTS 2 regions, which are geographical and political areas rather than genuinely homogeneous socio-economic areas.

16.
In this paper we report validation efforts around the finite-to-finite strand of a provisional learning progression (LP) for the concept of function. We regard an LP as an empirically verified account of how student understandings form over time and in response to instruction. The finite-to-finite strand of the LP was informed by the literature on students' thinking and learning related to functions, as well as by the Algebra Project's curricular approach, which is designed for students who are traditionally underserved by mathematics education. Developing and validating an LP is a multi-step, cyclic process. Here we report on one step in this process: an item and response analysis. Data sources include 680 students' responses to 13 multipart computer-delivered tasks. Results suggest that revisions to the items, the associated scoring rubrics, and in some instances the LP itself are warranted. We illustrate this task, rubric, and LP revision process through an item analysis for a selected task.

17.
We introduce a combined facility location/network design problem in which facilities have constraining capacities on the amount of demand they can serve. This model has applications in regional planning, distribution, telecommunications, energy management, and other areas, and includes the classical capacitated facility location problem (CFLP) on a network as a special case. We present a mixed integer programming formulation of the problem and derive several classes of valid inequalities to strengthen its LP relaxation. Computational experience with problems of up to 40 nodes and 160 candidate links is reported, and a sensitivity analysis provides insight into the behavior of the model in response to changes in key problem parameters.
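A minimal CFLP-style instance of such a mixed integer formulation (two facilities, two customers, made-up data; solved here with `scipy.optimize.milp` rather than the authors' valid-inequality machinery) might look like:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variables: [x11, x12, x21, x22, y1, y2], where x_ij is the fraction of
# customer i's demand served by facility j, and y_j = 1 if facility j opens.
# Demands d = [3, 4]; capacities u = [5, 10]; fixed costs f = [3, 10];
# c_ij = cost of serving all of customer i from facility j.
c = np.array([1, 4, 2, 5, 3, 10], float)
A_eq = np.array([[1, 1, 0, 0, 0, 0],        # x11 + x12 = 1
                 [0, 0, 1, 1, 0, 0]], float)  # x21 + x22 = 1
A_cap = np.array([[3, 0, 4, 0, -5, 0],      # d.x served by 1 <= u1*y1
                  [0, 3, 0, 4, 0, -10]], float)
cons = [LinearConstraint(A_eq, 1, 1), LinearConstraint(A_cap, -np.inf, 0)]
integrality = np.array([0, 0, 0, 0, 1, 1])  # only y1, y2 are integer
res = milp(c=c, constraints=cons, integrality=integrality,
           bounds=Bounds(0, np.ones(6)))
print(res.fun, res.x)   # optimum 17.5: open both, split customer 2
```

The capacity rows are exactly the linking constraints whose LP relaxation the paper's valid inequalities strengthen.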

18.
We consider an approach for ex post evaluation of approximate solutions obtained by a well-known simple greedy method for set packing. A performance bound is derived as a function of the highest average reward per item over subsets, as well as the number of allocated subsets and ground items. This a posteriori bound can certify a high degree of optimality when the solution is near optimal. One advantage of the ex post analysis is that it does not require computing the optimal solution of the LP relaxation. The ex post bound is not guaranteed to reveal substantial levels of optimality for all problem instances, but it can be a useful tool that complements other traditional methods of ex post evaluation for the set packing problem.
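A sketch of the setting, assuming a reward-greedy variant of the method (the paper's exact greedy rule and bound may differ): pack disjoint subsets greedily, then compare against a crude a posteriori bound built from the highest average reward per item:

```python
def greedy_set_packing(sets, rewards):
    """Simple greedy for weighted set packing: repeatedly take the highest-reward
    subset that does not overlap the items already packed."""
    order = sorted(range(len(sets)), key=lambda i: -rewards[i])
    used, chosen = set(), []
    for i in order:
        if not (sets[i] & used):
            chosen.append(i)
            used |= sets[i]
    return chosen

sets = [{1, 2}, {2, 3}, {3, 4}, {5}]
rewards = [5, 4, 3, 1]
sol = greedy_set_packing(sets, rewards)
total = sum(rewards[i] for i in sol)

# A crude a posteriori bound (NOT the paper's sharper bound): packed subsets
# are disjoint, so OPT <= (highest average reward per item) * (ground items).
ground = set().union(*sets)
r_bar = max(rewards[i] / len(sets[i]) for i in range(len(sets)))
bound = r_bar * len(ground)
print(sol, total, bound)   # sets {0, 2, 3}, reward 9, bound 12.5
```

Comparing the achieved reward with such a bound gives an optimality gap after the fact, without ever solving the LP relaxation.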

19.
The linear programming (LP) approach has commonly been proposed for joint cost allocation purposes. Within an LP framework, the allocation rules are based on a marginal analysis. Unfortunately, the additivity property required to completely allocate joint costs fails in the presence of capacity, institutional, or environmental constraints.

20.
Based on interval mathematics, an interval analysis method for the sensitivity analysis of structures is developed in this paper. The method yields upper and lower bounds on the eigenvalues of structures with uncertain-but-bounded (interval) parameters. The stiffness and mass matrices of the structure, whose elements carry initial errors, are unknown except for the fact that they belong to given bounded matrix sets; the set of possible matrices can be described by an interval matrix. In terms of the structural parameters, the stiffness and mass matrices admit a non-negative decomposition. By means of interval extension, the generalized interval eigenvalue problem for structures with uncertain-but-bounded parameters can be divided into two generalized eigenvalue problems for pairs of real symmetric matrices, using real analysis. Unlike standard sensitivity analysis methods, the interval analysis method obtains information on the response of structures as structural parameters (or design variables) change, without any partial-derivative computations. Low computational effort and a wide range of application are characteristics of the proposed method. Two numerical examples illustrate its efficiency.
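The two-eigenproblem bounding idea can be sketched on a 2-DOF spring-mass chain, where each eigenvalue is monotone increasing in the stiffnesses and decreasing in the masses (by the Courant-Fischer characterization), so evaluating the two extreme parameter combinations encloses every eigenvalue (illustrative numbers, not taken from the paper):

```python
import numpy as np
from scipy.linalg import eigh

def chain_eigs(k1, k2, m1, m2):
    """Natural eigenvalues (omega^2) of a 2-DOF spring-mass chain, K x = lam M x."""
    K = np.array([[k1 + k2, -k2], [-k2, k2]], float)
    M = np.diag([m1, m2]).astype(float)
    return eigh(K, M, eigvals_only=True)

# Interval parameters: nominal values with +/-10% uncertainty (made up).
k1, k2, m1, m2, r = 100.0, 80.0, 1.0, 0.5, 0.10
lo = chain_eigs(k1*(1-r), k2*(1-r), m1*(1+r), m2*(1+r))  # soft springs, heavy masses
hi = chain_eigs(k1*(1+r), k2*(1+r), m1*(1-r), m2*(1-r))  # stiff springs, light masses
print(lo, hi)   # per-mode lower and upper eigenvalue bounds
```

Two ordinary symmetric generalized eigenproblems thus bound the whole interval family, with no derivatives of the eigenvalues required.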
