Similar documents
1.
An interesting new partitioning and bounded variable algorithm (PBVA) is proposed for solving linear programming problems. The PBVA is a variant of the simplex algorithm which uses a modified form of the simplex method followed by the dual simplex method for bounded variables. In contrast to the two-phase method and the big M method, the PBVA does not introduce artificial variables. In the PBVA, a reduced linear program is formed by eliminating as many variables as there are equality constraints. A subproblem containing one ‘less than or equal to’ constraint is solved by executing the simplex method modified such that an upper bound is placed on an unbounded entering variable. The remaining constraints of the reduced problem are added to the optimal tableau of the subproblem to form an augmented tableau, which is solved by applying the dual simplex method for bounded variables. Lastly, the variables that were eliminated are restored by substitution. Differences between the PBVA and two other variants of the simplex method are identified. The PBVA is applied to solve an example problem with five decision variables, two equality constraints, and two inequality constraints. In addition, three other types of linear programming problems are solved to justify the advantages of the PBVA.
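The elimination step described above can be made concrete with a small sketch. The NumPy fragment below (hypothetical data; the choice of which columns to eliminate is ours, not the paper's) forms the reduced problem by solving the equality constraints for as many variables as there are equalities, and shows how those variables are later restored by substitution; the modified simplex and dual simplex phases of the PBVA are not reproduced here.

```python
import numpy as np

# Hypothetical data: minimize c @ x subject to A_eq @ x = b_eq, x >= 0
c    = np.array([3.0, 2.0, 4.0, 1.0, 5.0])
A_eq = np.array([[1.0, 1.0, 0.0, 2.0, 0.0],
                 [0.0, 1.0, 1.0, 0.0, 1.0]])
b_eq = np.array([4.0, 3.0])

m, n = A_eq.shape
basic = [0, 2]                      # eliminate as many variables as equality constraints
nonbasic = [j for j in range(n) if j not in basic]

B = A_eq[:, basic]                  # must be nonsingular for the elimination to work
N = A_eq[:, nonbasic]
B_inv = np.linalg.inv(B)

# x_basic = B^{-1} b_eq - B^{-1} N x_nonbasic, so the reduced LP involves x_nonbasic only
reduced_cost  = c[nonbasic] - c[basic] @ B_inv @ N
constant_term = c[basic] @ B_inv @ b_eq
print("reduced objective coefficients:", reduced_cost, "+ constant", constant_term)

# After the reduced LP has been solved for x_nonbasic, the eliminated
# variables are restored by substitution:
x_nonbasic = np.zeros(n - m)        # placeholder for the reduced-LP solution
x_basic = B_inv @ (b_eq - N @ x_nonbasic)
print("restored basic variables:", x_basic)
```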

2.
In bound constrained global optimization problems, partitioning methods utilizing Interval Arithmetic are powerful techniques that produce reliable results. Subdivision direction selection is a major component of partitioning algorithms and it plays an important role in convergence speed. Here, we propose a new subdivision direction selection scheme that uses symbolic computing in interpreting interval arithmetic operations. We call this approach symbolic interval inference approach (SIIA). SIIA targets the reduction of interval bounds of pending boxes directly by identifying the major impact variables and re-partitioning them in the next iteration. This approach speeds up the interval partitioning algorithm (IPA) because it targets the pending status of sibling boxes produced. The proposed SIIA enables multi-section of two major impact variables at a time. The efficiency of SIIA is illustrated on well-known bound constrained test functions and compared with established subdivision direction selection methods from the literature.
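As a rough illustration of multi-section on two variables at a time, the sketch below splits an interval box along its two widest coordinates into four sub-boxes. This is only a stand-in heuristic: SIIA identifies the major impact variables by symbolic interpretation of the interval operations, whereas here box width is used as a crude proxy, and all names are hypothetical.

```python
import itertools

def multisect(box, k=2):
    """Split an interval box along its k widest coordinates (2**k sub-boxes).

    `box` is a list of (lo, hi) pairs.  Width is used here as a crude proxy
    for the 'major impact' variables that SIIA would identify symbolically.
    """
    widths = [hi - lo for lo, hi in box]
    dirs = sorted(range(len(box)), key=lambda i: widths[i], reverse=True)[:k]
    halves = {i: [(box[i][0], sum(box[i]) / 2), (sum(box[i]) / 2, box[i][1])]
              for i in dirs}
    children = []
    for choice in itertools.product(*(halves[i] for i in dirs)):
        child = list(box)
        for i, interval in zip(dirs, choice):
            child[i] = interval
        children.append(child)
    return children

# Example: bisect the two widest of three coordinates -> 4 sub-boxes
print(multisect([(-2.0, 2.0), (0.0, 1.0), (-3.0, 5.0)]))
```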

3.
The scheduling and rostering of personnel is a problem that occurs in many organizations. Aircrew scheduling has attracted considerable attention with many heuristic methods being proposed, but in recent times set partitioning optimization methods have become more popular. The aircrew rostering problem is discussed and formulated as a generalized set partitioning model. Because of the extremely large optimization models that are generated in practical situations, some special computational techniques have been developed to produce solutions efficiently. These techniques are used to solve problems arising from an airline application in which set partitioning models with more than 650 constraints and 200 000 binary variables are generated. The solutions are produced on a Motorola 68020 microprocessor in little more than three hours.
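For reference, the set partitioning model underlying such rostering formulations can be written in the standard form below; the notation is generic rather than taken from the paper.

\[
\min \; \sum_{j=1}^{n} c_j x_j
\quad \text{subject to} \quad
\sum_{j=1}^{n} a_{ij} x_j = 1 \;\; (i = 1,\dots,m), \qquad x_j \in \{0,1\},
\]

where each column j represents one feasible roster line for a crew member, a_{ij} = 1 if that roster covers duty or pairing i, and c_j is its cost; in a generalized set partitioning model some rows may carry right-hand sides other than 1 or be relaxed to inequalities.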

4.
Regularization techniques, i.e., modifications of the diagonal elements of the scaling matrix, are considered to be important methods in interior point implementations. So far, regularization in interior point methods has been described for linear programming problems, in which case the scaling matrix is diagonal. It was shown that by regularization, free variables can be handled in a numerically stable way by avoiding column splitting that makes the set of optimal solutions unbounded. Regularization also proved to be efficient for increasing the numerical stability of the computations during the solution of ill-posed linear programming problems. In this paper, we study the factorization of the augmented system arising in interior point methods. In our investigation, we generalize the methods developed and used in linear programming to the case when the scaling matrix is positive semidefinite, but not diagonal. We show that regularization techniques may be applied beyond the linear programming case.
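A common way of writing the system in question (the notation Θ, R_p, R_d is ours and may differ from the paper's): the augmented system of an interior point iteration is

\[
\begin{pmatrix} -(Q + \Theta^{-1}) & A^{\mathsf T} \\ A & 0 \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}
=
\begin{pmatrix} r_d \\ r_p \end{pmatrix},
\]

and primal-dual regularization replaces the diagonal blocks by \(-(Q + \Theta^{-1} + R_p)\) and \(R_d\) with small positive semidefinite R_p and R_d, which keeps the factorization stable. In linear programming Q = 0 and the scaling matrix Θ is diagonal (and Θ^{-1} degenerates for free variables, which is where regularization helps); the setting studied in the paper allows the scaling matrix to be positive semidefinite but not diagonal.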

5.
Moving (weighted) average techniques provide a flexible tool for the graduation (smoothing) of vital rates. As is shown in the present paper, one may construct linear graduation methods which are superior with respect to some formal evaluation criteria, but such ‘improved’ methods lack an important robustness property which the moving average graduation techniques possess and which is a main motivation for their use. A linear graduation technique is an estimation method, and an important consideration in the evaluation of its properties is the class for which it is unbiased, i.e., the class of functions which it reproduces unchanged. A classical test usually said to be a chi-square test of the goodness-of-fit of a graduation is revealed to be a test of the hypothesis that the graduation method is indeed unbiased for the set of theoretical rates actually estimated, and its properties are investigated along with those of alternative chi-square tests.
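For orientation, a symmetric moving-average graduation of crude rates and the usual way of describing its unbiasedness class (our notation) are

\[
\hat u_x = \sum_{j=-m}^{m} w_j\, u_{x+j},
\qquad
\sum_{j=-m}^{m} w_j = 1, \quad \sum_{j=-m}^{m} j^{\,r} w_j = 0 \;\; (r = 1,\dots,p),
\]

the moment conditions on the right being equivalent to the statement that the formula reproduces every polynomial of degree at most p unchanged, i.e., that the graduation method is unbiased on that class of functions.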

6.
It is often possible (and profitable) to reduce or ‘Presolve’ linear programs. In particular, there are frequently constraints which force many of the variables to be at bound. Unfortunately, the solution found by the simplex method for such reduced models is not usually ‘formally’ optimal, in the sense that nonoptimal dual values may be present when the original problem is restored. Furthermore, the restored (full) problem is now totally degenerate, and may require many iterations to achieve formal optimality. We describe an efficient ‘Postsolve’ procedure for attaining the formal optimum solution, and give computational results.

7.
We study the structure of dual optimization problems associated with linear constraints, bounds on the variables, and separable cost. We show how the separability of the dual cost function is related to the sparsity structure of the linear equations. As a result, techniques for ordering sparse matrices based on nested dissection or graph partitioning can be used to decompose a dual optimization problem into independent subproblems that could be solved in parallel. The performance of a multilevel implementation of the Dual Active Set algorithm is compared with CPLEX Simplex and Barrier codes using Netlib linear programming test problems.
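The link between sparsity and dual separability can be seen from the generic form of the dual function (our notation, for separable costs f_i, linear constraints Ax = b, and bounds l ≤ x ≤ u):

\[
q(\lambda) \;=\; b^{\mathsf T}\lambda \;+\; \sum_{i=1}^{n} \; \min_{\,l_i \le x_i \le u_i} \bigl( f_i(x_i) - \lambda^{\mathsf T} A_{\cdot i}\, x_i \bigr),
\]

so the i-th term depends only on the multipliers of rows with a nonzero entry in column i. If a reordering of the constraint matrix (nested dissection, graph partitioning) exposes blocks coupled only through a small separator, the corresponding groups of terms share few multipliers and the dual maximization decomposes into subproblems that can be treated in parallel.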

8.
The supervised classification of fuzzy data obtained from a random experiment is discussed. The data generation process is modelled through random fuzzy sets which, from a formal point of view, can be identified with certain function-valued random elements. First, one of the most versatile discriminant approaches in the context of functional data analysis is adapted to the specific case of interest. In this way, discriminant analysis based on nonparametric kernel density estimation is discussed. In general, this criterion is shown not to be optimal and to require large sample sizes. To avoid such inconveniences, a simpler approach which eludes the density estimation by considering conditional probabilities on certain balls is introduced. The approaches are applied to two experiments: one concerning fuzzy perceptions and linguistic labels and another one concerning flood analysis. The methods are tested against linear discriminant analysis and random K-fold cross validation.
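A minimal sketch of the ball-based idea on vectorized observations (for instance, membership functions evaluated on a fixed grid): classify a new observation by the class frequencies among training points lying in a ball around it. All data and names below are hypothetical, the metric is plain Euclidean rather than a distance between fuzzy sets, and the fragment is only meant to convey the rule, not the paper's estimator.

```python
import numpy as np

def ball_classifier(X_train, y_train, x_new, radius):
    """Assign the majority class among training points whose distance to
    x_new is at most `radius` (falls back to the nearest neighbour if the
    ball is empty).  For fuzzy data one would plug in a metric between
    fuzzy sets instead of the Euclidean distance used here."""
    d = np.linalg.norm(X_train - x_new, axis=1)
    in_ball = d <= radius
    if not in_ball.any():
        return y_train[np.argmin(d)]
    labels, counts = np.unique(y_train[in_ball], return_counts=True)
    return labels[np.argmax(counts)]

# Hypothetical example: two classes of discretized membership functions
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 5)),
               rng.normal(1.0, 0.3, size=(20, 5))])
y = np.array([0] * 20 + [1] * 20)
print(ball_classifier(X, y, np.full(5, 0.9), radius=0.8))
```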

9.
We study Gevrey properties and summability of power series in two variables that are formal solutions of a Cauchy problem for general linear partial differential equations with constant coefficients. In doing so, we extend earlier results in two articles, one by Balser and one by Lutz, Miyake, and Schäfke, for the complex heat equation, as well as in a paper of Balser and Miyake, who have investigated the same questions for a certain class of linear PDE with constant coefficients subject to some restrictive assumptions. Moreover, we also present an example of a PDE where the formal solution of the Cauchy problem is not k-summable for any value of k, but instead is multisummable with two levels under corresponding conditions on the Cauchy data. That this can occur has not been observed up to now.
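The heat-equation case mentioned above is the classical model example, which may help fix ideas (standard notation, not taken from the paper): for

\[
\partial_t u = \partial_z^2 u, \qquad u(0,z) = \varphi(z),
\]

the unique formal power-series solution is

\[
\hat u(t,z) = \sum_{n \ge 0} \frac{\varphi^{(2n)}(z)}{n!}\, t^n ,
\]

and for generic analytic φ the coefficients grow essentially like n!, so the series is Gevrey of order 1 in t; its 1-summability in a given direction depends on the behaviour of the Cauchy data, which is the kind of correspondence the cited articles establish.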

10.
Penalized least squares is one of the main methods for estimating both the parametric and the nonparametric part of a partially linear model, that is, a model in which the response is linear in one or more covariates but depends nonlinearly on the remaining covariates. For this estimator, generalized cross-validation (GCV) provides a way of choosing the smoothing parameter; however, the optimality of choosing the smoothing parameter by GCV had not been established in the partially linear setting. This paper proves the optimality of selecting the smoothing parameter by generalized cross-validation when a partially linear model is estimated by penalized least squares. Simulations confirm that the proposed GCV selection of the smoothing parameter performs well, and the simulation study also compares generalized cross-validation with least-squares cross-validation.
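For reference, the generalized cross-validation criterion takes its usual form here; with A(λ) denoting the influence (hat) matrix of the penalized least squares fit with smoothing parameter λ (notation ours), the GCV choice minimizes

\[
\mathrm{GCV}(\lambda) \;=\; \frac{\tfrac{1}{n}\,\lVert (I - A(\lambda))\, y \rVert^{2}}{\bigl(\tfrac{1}{n}\,\mathrm{tr}\,(I - A(\lambda))\bigr)^{2}} ,
\]

which replaces the leave-one-out residuals of ordinary cross-validation by a single trace correction and is therefore much cheaper to evaluate over a grid of λ values.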

11.
In recent years, it has been shown that strategies based on an interval-Newton approach can be used to reliably solve a variety of nonlinear equation solving and optimization problems in chemical process engineering, including problems in parameter estimation and in the computation of phase behavior. These strategies provide a mathematical and computational guarantee either that all solutions have been located in an equation solving problem or that the global optimum has been found in an optimization problem. The primary drawback to this approach is the potentially high computational cost. In this paper, we consider strategies for bounding the solution set of the linear interval equation system that must be solved in the context of the interval-Newton method. Recent preconditioning techniques for this purpose are reviewed, and a new bounding approach based on the use of linear programming (LP) techniques is presented. Using this approach it is possible to determine the desired bounds exactly (within outward rounding), leading to significant overall improvements in computational efficiency. These techniques are demonstrated using several global optimization problems, with a focus on problems arising in chemical engineering, including parameter estimation and molecular modeling. These problems range in size from under ten variables to over two hundred, and are solved deterministically using the interval methodology.
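To fix ideas, here is a minimal one-dimensional interval-Newton contraction step in Python. It ignores outward rounding and assumes the derivative enclosure does not contain zero, so it only illustrates the structure of the method; the multivariate linear interval system and the LP-based bounding discussed in the paper are not reproduced, and all names are ours.

```python
def interval_newton_step(f, df_bounds, lo, hi):
    """One interval-Newton contraction for a single equation f(x) = 0.

    df_bounds(lo, hi) must return an enclosure (dlo, dhi) of f' on [lo, hi]
    with 0 not inside it.  Rounding is ignored here, so this is only a
    structural sketch, not a rigorous implementation.
    """
    m = 0.5 * (lo + hi)
    dlo, dhi = df_bounds(lo, hi)
    q = [f(m) / dlo, f(m) / dhi]            # enclosure of f(m)/f'(X)
    n_lo, n_hi = m - max(q), m - min(q)     # N(X) = m - f(m)/f'(X)
    return max(lo, n_lo), min(hi, n_hi)     # intersect N(X) with the current box

# Example: enclose the root of x**2 - 2 inside [1, 2]
f = lambda x: x * x - 2.0
dfb = lambda lo, hi: (2.0 * lo, 2.0 * hi)   # f'(x) = 2x is increasing on [1, 2]
box = (1.0, 2.0)
for _ in range(5):
    box = interval_newton_step(f, dfb, *box)
print(box)   # tightens around sqrt(2) ~ 1.41421356
```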

12.
One of the main problems in quantitative dynamic modelling of complex systems is due to the huge number of interacting subsystems. This gives rise not only to large sets of coupled differential equations, whose results are difficult to understand, but makes it almost impossible to give a meaningful set of initial conditions. It is therefore extremely important to investigate projection techniques which allow us to single out a limited subset of relevant variables (order parameters). The effect of the irrelevant variables should then be modelled as some sort of noise, with suitable statistical properties, perturbing the evolution of the order parameters. Such formal projection techniques have been developed for the case where the time evolution of the relevant variables is slower than the time evolution of the irrelevant degrees of freedom, which is a case frequently encountered in practice. We first discuss the simplest of such adiabatic elimination procedures (direct adiabatic elimination). We then discuss a more refined formal procedure, which can be applied to cases where the time scale separation between fast and slow variables is not exceedingly large. We also illustrate the application of adiabatic elimination procedures to a problem which has been stimulated by previous work undertaken by our group (the tourist-environment interaction in a tourist village in Sardinia). However, we greatly simplified the original model assumptions, so the example is to be regarded as illustrative of the adiabatic technique, and should not be considered a case study. We focus our attention on such interesting phenomena as noise-induced phase transitions, and also discuss the choice of the correct type of stochastic calculus.
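A textbook caricature of the simplest (direct) adiabatic elimination, with notation of our own choosing rather than the paper's: couple a slow order parameter x to a fast, strongly damped variable y,

\[
\dot x = f(x, y), \qquad \dot y = -\gamma\, y + g(x) + \xi(t), \qquad \gamma \gg 1 .
\]

Because y relaxes much faster than x evolves, one sets \(\dot y \approx 0\) and substitutes \(y \approx (g(x) + \xi(t))/\gamma\), obtaining a closed, stochastically perturbed equation \(\dot x \approx f\bigl(x, (g(x)+\xi(t))/\gamma\bigr)\) for the order parameter alone. When the eliminated noise enters multiplied by a state-dependent factor, the reduced dynamics depend on the stochastic calculus adopted, which is typically the Itô-versus-Stratonovich question referred to above.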

13.
In econometrics it is common for variables to be related together in a set of linear, multilateral and causal interdependencies. This type of system generally has properties which are unsatisfactory for application of classical regression techniques. Consequently, alternative estimation methods have been developed. This paper explores the relations between several such methods in terms of symmetric idempotents of predetermined variables and their orthogonal complements. Generalizations of two- and three-stage least squares and instrumental variables are considered, including Wicken's estimator. The relative efficiencies of the estimators are also discussed.
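The symmetric idempotents in question are projection matrices onto spaces spanned by the predetermined variables. The familiar special case (standard notation, not taken from the paper) is two-stage least squares: with Z the matrix of predetermined (instrumental) variables,

\[
\hat\beta_{\mathrm{2SLS}} \;=\; \bigl(X^{\mathsf T} P_Z X\bigr)^{-1} X^{\mathsf T} P_Z\, y,
\qquad P_Z = Z \bigl(Z^{\mathsf T} Z\bigr)^{-1} Z^{\mathsf T}, \quad M_Z = I - P_Z ,
\]

where P_Z is the symmetric idempotent projecting onto the column space of Z and M_Z is its orthogonal complement; three-stage least squares and the instrumental-variable estimators compared in the paper can be expressed with the same building blocks.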


14.
One of the main services of National Statistical Agencies (NSAs) for the current Information Society is the dissemination of large amounts of tabular data, which is obtained from microdata by crossing one or more categorical variables. NSAs must guarantee that no confidential individual information can be obtained from the released tabular data. Several statistical disclosure control methods are available for this purpose. These methods result in large linear, mixed integer linear, or quadratic mixed integer linear optimization problems. This paper reviews some of the existing approaches, with an emphasis on two of them: cell suppression problem (CSP) and controlled tabular adjustment (CTA). CSP and CTA have concentrated most of the recent research in the tabular data protection field. The particular focus of this work is on methods and results of practical interest for end-users (mostly, NSAs). Therefore, in addition to the resulting optimization models and solution approaches, computational results comparing the main optimization techniques - both optimal and heuristic - using real-world instances are also presented.
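As a rough indication of why these problems become mixed integer linear programs, a schematic CTA formulation (simplified, with our own notation; the paper's models contain additional detail) is

\[
\min_{z} \;\sum_i w_i\,\lvert z_i - a_i\rvert
\quad \text{s.t.} \quad
M z = b, \;\; \ell \le z \le u, \;\;
z_i \ge a_i + \mathrm{upl}_i \ \text{ or }\ z_i \le a_i - \mathrm{lpl}_i \;\; (i \in S),
\]

where a is the original table, Mz = b encodes the additivity (marginal) relations, and S is the set of sensitive cells with lower and upper protection levels. The either-or protection requirement is modelled with one binary variable per sensitive cell, which is what turns the otherwise linear adjustment problem into a mixed integer linear one; CSP similarly uses binary variables to decide which complementary cells to suppress.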

15.
The well known DIRECT (DIviding RECTangles) algorithm for global optimization requires bound constraints on variables and does not naturally address additional linear or nonlinear constraints. A feasible region defined by linear constraints may be covered by simplices, therefore simplicial partitioning may tackle linear constraints in a very subtle way. In this paper we demonstrate this advantage of simplicial partitioning by applying the recently proposed deterministic simplicial-partitions-based DISIMPL algorithm to optimization problems defined by general linear constraints (Lc-DISIMPL). An extensive experimental investigation reveals the advantages of this approach to such problems compared with different constraint-handling methods proposed for use with DIRECT. Furthermore, the Lc-DISIMPL algorithm gives very competitive results compared to a derivative-free particle swarm algorithm (PSwarm), which was previously shown to give very promising results. Moreover, DISIMPL guarantees convergence to the global solution, whereas the PSwarm algorithm sometimes fails to converge to the global minimum.

16.
In this paper, we develop a simultaneous column-and-row generation algorithm that could be applied to a general class of large-scale linear programming problems. These problems typically arise in the context of linear programming formulations with exponentially many variables. The defining property for these formulations is a set of linking constraints, which are either too many to be included in the formulation directly, or the full set of linking constraints can only be identified if all variables are generated explicitly. Due to this dependence between columns and rows, we refer to this class of linear programs as problems with column-dependent-rows. To solve these problems, we need to be able to generate both columns and rows on-the-fly within an efficient solution approach. We emphasize that the generated rows are structural constraints and distinguish our work from the branch-and-cut-and-price framework. We first characterize the underlying assumptions for the proposed column-and-row generation algorithm. These assumptions are general enough and cover, to the best of our knowledge, all problems with column-dependent-rows studied in the literature up until now. We then introduce in detail a set of pricing subproblems, which are used within the proposed column-and-row generation algorithm. This is followed by a formal discussion on the optimality of the algorithm. To illustrate our approach, the paper is concluded by applying the proposed framework to the multi-stage cutting stock and the quadratic set covering problems.

17.
KmL: k-means for longitudinal data
Cohort studies are becoming essential tools in epidemiological research. In these studies, measurements are not restricted to single variables but can be seen as trajectories. Statistical methods used to determine homogeneous patient trajectories can be separated into two families: model-based methods (like Proc Traj) and partitional clustering (non-parametric algorithms like k-means). KmL is a new implementation of k-means designed to work specifically on longitudinal data. It provides scope for dealing with missing values and runs the algorithm several times, varying the starting conditions and/or the number of clusters sought; its graphical interface helps the user to choose the appropriate number of clusters when the classic criterion is not efficient. To check KmL efficiency, we compare its performances to Proc Traj both on artificial and real data. The two techniques give very close clustering when trajectories follow polynomial curves. KmL gives much better results on non-polynomial trajectories.
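The core idea, clustering whole trajectories with k-means restarted from several starting conditions, can be sketched in a few lines. The fragment below uses scikit-learn as a stand-in (KmL itself is distributed as an R package and adds features such as missing-value handling and cluster-number diagnostics); all data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical longitudinal data: 60 patients, 8 measurement times.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 8)
rising = 2 * t + rng.normal(0, 0.2, size=(30, 8))      # one trajectory shape
flat   = 0.5 + rng.normal(0, 0.2, size=(30, 8))        # another shape
traj = np.vstack([rising, flat])

# Each trajectory is treated as one point in R^8; several random restarts
# (n_init) mimic KmL's repeated runs from different starting conditions.
km = KMeans(n_clusters=2, n_init=20, random_state=0).fit(traj)
print(km.labels_[:5], km.labels_[-5:])                  # cluster memberships
print(km.cluster_centers_.round(2))                     # mean trajectories
```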

18.
Several features and interrelations of projector methods and reduction techniques for the analysis of linear time-varying differential-algebraic equations (DAEs) are addressed in this work. The application of both procedures to regular index-1 problems is reviewed, leading to some new results which extend the scope of reduction techniques through a projector approach. Certain singular points are well accommodated by reduction methods; the projector framework is adapted in this paper to handle (not necessarily isolated) singularities in an index-1 context. The inherent problem can be described in terms of a scalarly implicit ODE with continuous operators, in which the leading coefficient function does not depend on the choice of projectors. The nice properties of projectors concerning smoothness assumptions are carried over to the singular setting. In analytic problems, the kind of singularity arising in the scalarly implicit inherent ODE is also proved independent of the choice of projectors. The discussion is driven by a simple example coming from electrical circuit theory. Higher index cases and index transitions are the scope of future research.
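As a schematic illustration (not the paper's general time-varying setting), the semi-explicit index-1 case shows what the inherent problem looks like:

\[
x' = f(x, y), \qquad 0 = g(x, y),
\]

with \(g_y\) nonsingular along the solution, the constraint can be solved locally as y = h(x) and the dynamics are governed by the inherent ODE \(x' = f(x, h(x))\). Singular points of the kind treated in the paper arise, roughly, where this invertibility degenerates, so that the inherent equation only survives in a scalarly implicit form \(\omega\, u' = \varphi(u, t)\) with a leading coefficient ω that vanishes at the singularity; as stated above, this leading coefficient function does not depend on the choice of projectors.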

19.
Suppose that a large-scale block-diagonal linear programming problem has been solved by the Dantzig-Wolfe decomposition algorithm and that an optimal solution has been attained. Suppose further that it is desired to perform a post-optimality analysis or a complete parametric analysis on the cost coefficients or the RHS of the linking constraints. Efficient techniques for performing these analyses in the ordinary simplex case have not been easily applied to this case, as one operation involves computing a minimizing ratio over all columns of two rows of the tableau. As the columns are not readily known in Dantzig-Wolfe decomposition, other techniques must be used. To date, suggested methods involve solving small linear programs to find these minimizing ratios. In this paper a method is presented which requires solving no linear programs (except possibly in the case of degeneracy of a subproblem), utilizing only the information typically stored for Dantzig-Wolfe decomposition.

20.
We introduce and study the notions of computable formal context and computable formal concept. We give some examples of computable formal contexts in which the computable formal concepts fail to form a lattice and study the complexity aspects of formal concepts in computable contexts. In particular, we give some sufficient conditions under which the computability or noncomputability of a formal concept could be recognized from its lattice-theoretic properties. We prove the density theorem showing that in a Cantor-like topology every formal concept can be approximated by computable ones. We also show that not all formal concepts have lattice-theoretic approximations as suprema or infima of families of computable formal concepts.
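For readers outside formal concept analysis, the objects involved are the standard ones: a formal context is a triple (G, M, I) of objects, attributes, and an incidence relation I ⊆ G × M, with derivation operators

\[
A' = \{\, m \in M : g\, I\, m \ \text{for all } g \in A \,\}, \qquad
B' = \{\, g \in G : g\, I\, m \ \text{for all } m \in B \,\}
\quad (A \subseteq G,\; B \subseteq M),
\]

and a formal concept is a pair (A, B) with A' = B and B' = A. For an arbitrary context the set of all formal concepts, ordered by inclusion of extents, is a complete lattice; the point of the paper is that, for a computable context, the subfamily of computable concepts need not inherit this lattice structure.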
