Similar Literature
 20 similar documents found (search time: 15 ms)
1.
This paper considers the problem of simple linear regression with interval-censored data. That is, \(n\) pairs of intervals are observed instead of the \(n\) pairs of precise values for the two variables (dependent and independent). Each of these intervals is closed but possibly unbounded, and contains the corresponding (unobserved) value of the dependent or independent variable. The goal of the regression is to describe the relationship between (the precise values of) these two variables by means of a linear function. Likelihood-based Imprecise Regression (LIR) is a recently introduced, very general approach to regression for imprecisely observed quantities. The result of a LIR analysis is in general set-valued: it consists of all regression functions that cannot be excluded on the basis of likelihood inference. These regression functions are said to be undominated. Since the interval data can be unbounded, a robust regression method is necessary. Hence, we consider the robust LIR method based on the minimization of the residuals’ quantiles. For this method, we prove that the set of all the intercept-slope pairs corresponding to the undominated regression functions is the union of finitely many polygons. We give an exact algorithm for determining this set (i.e., for determining the set-valued result of the robust LIR analysis), and show that it has worst-case time complexity \(O(n^{3}\log n)\). We have implemented this exact algorithm as part of the R package linLIR.

2.
We aim to construct suitable tests when we have imprecise information about a sample. More specifically, we assume that we get a collection of n sets of values, each one characterizing an imprecise measurement. Each set specifies where the true sample value is (and where it is not) with full confidence, but it does not provide any additional information. Our main objectives are twofold: first, we will review different kinds of tests in the literature on inferential statistics with random sets and discuss the approach that best suits our definition of imprecise data. Second, we will show that we can take advantage of mark-and-recapture techniques to improve the accuracy of our decisions. These techniques are especially important when the population is small enough (with respect to the sample size) that recaptures are common. They also seem to be useful when resampling techniques are involved in the decision process.
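As background for the mark-and-recapture techniques mentioned above, the following minimal sketch shows the classical Lincoln–Petersen estimator that such techniques build on. It is not the paper's test procedure, and the numbers are purely illustrative.

```python
# Classical Lincoln-Petersen estimator from two capture occasions; background
# for mark-and-recapture methods, not the paper's testing procedure.

def lincoln_petersen(n_marked_first, n_caught_second, n_recaptured):
    """Estimate population size N from two capture occasions.

    n_marked_first  : individuals caught and marked on the first occasion
    n_caught_second : individuals caught on the second occasion
    n_recaptured    : marked individuals among the second catch
    """
    if n_recaptured == 0:
        raise ValueError("no recaptures: estimator is undefined")
    return n_marked_first * n_caught_second / n_recaptured

# Illustrative example: 40 marked, 45 caught later, 9 of them already marked.
print(lincoln_petersen(40, 45, 9))  # ~200 individuals
```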

3.
This paper extends the Log-robust portfolio management approach to the case with short sales, i.e., the case where the manager can sell shares he does not yet own. We model the continuously compounded rates of return, which have been established in the literature as the true drivers of uncertainty, as uncertain parameters belonging to polyhedral uncertainty sets, and maximize the worst-case portfolio wealth over that set in a one-period setting. The degree of the manager’s aversion to ambiguity is incorporated through a single, intuitive parameter, which determines the size of the uncertainty set. The presence of short-selling requires the development of problem-specific techniques, because the optimization problem is not convex. In the case where assets are independent, we show that the robust optimization problem can be solved exactly as a series of linear programming problems; as a result, the approach remains tractable for large numbers of assets. We also provide insights into the structure of the optimal solution. In the case of correlated assets, we develop and test a heuristic where correlation is maintained only between assets invested in. In computational experiments, the proposed approach exhibits superior performance to that of the traditional robust approach.
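The sketch below is a simplified, scenario-based stand-in for the worst-case wealth maximization described above: a finite sample of log-return scenarios replaces the paper's polyhedral uncertainty set, and short positions are allowed only through variable bounds. It is a toy approximation, not the paper's exact series-of-LPs reformulation; all data are illustrative.

```python
# Scenario-based toy: maximize the worst-case end-of-period wealth over a finite
# sample of log-return scenarios (a stand-in for the polyhedral set), with short
# positions allowed via the bounds.  Illustrative data only.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_assets, n_scen = 4, 50
log_returns = rng.normal(0.05, 0.2, size=(n_scen, n_assets))  # sampled scenarios
growth = np.exp(log_returns)                                  # gross returns exp(r)

# Variables: weights x (may be negative = short) and worst-case wealth t.
# maximize t  <=>  minimize -t
c = np.concatenate([np.zeros(n_assets), [-1.0]])

# t <= growth[s] @ x for every scenario s   ->   -growth[s] @ x + t <= 0
A_ub = np.hstack([-growth, np.ones((n_scen, 1))])
b_ub = np.zeros(n_scen)

A_eq = np.concatenate([np.ones(n_assets), [0.0]]).reshape(1, -1)  # weights sum to 1
b_eq = np.array([1.0])

bounds = [(-0.5, 1.5)] * n_assets + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("worst-case wealth:", -res.fun)
print("weights:", res.x[:n_assets].round(3))
```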

4.
In this paper, we propose a distribution-free model instead of considering a particular distribution for multiple objective games with incomplete information. We assume that each player does not know the exact value of the uncertain payoff parameters, but only knows that they belong to an uncertainty set. In our model, the players use a robust optimization approach for each of their objectives to contend with payoff uncertainty. To formulate such a game, named “robust multiple objective games” here, we introduce three kinds of robust equilibrium under different preference structures. Then, by using a scalarization method and an existing result on the solutions of generalized quasi-vector equilibrium problems, we obtain the existence of these robust equilibria. Finally, we give an example to illustrate our model and the existence theorems. Our results are new and fill a gap in the game theory literature.

5.
In this study, we establish a bilevel electricity trading model in which fuzzy set theory is applied to address future load uncertainty, system reliability, and imprecise human knowledge. In the literature, some studies have focused on this bilevel problem, but few of them consider future load uncertainty and unit commitment optimization, which handles the collaboration of generation units. Our study makes the following contributions. First, the future load uncertainty is characterized by fuzzy set theory, as the various factors that affect load forecasting are often assessed with non-statistical uncertainties. Second, the generation costs are obtained by solving complicated unit commitment problems, rather than the approximate calculations used in existing studies. Third, the model handles the optimization of both the generation companies and the market operator, and the unexpected load risk is analyzed by using fuzzy value-at-risk as a quantitative risk measure. Fourth, a mechanism to encourage the convergence of the bilevel model is proposed based on a fuzzy max-min approach, and a bilevel particle swarm optimization algorithm is developed to solve the problem within a reasonable runtime. To illustrate the effectiveness of this research, we provide a numerical example based on a test system and discuss the experimental results according to the principle of social welfare maximization. Finally, we also compare the model and algorithm with conventional methods.

6.
In this work, the problem of allocating a set of production lots to satisfy customer orders is considered. This research is relevant to lot-to-order matching problems in semiconductor supply chain settings. We consider that lot-splitting is not allowed during the allocation process due to standard practices. Furthermore, lot sizes are regarded as uncertain planning data when making the allocation decisions due to potential yield loss. In order to minimize the total penalties of demand unfulfillment and over-fulfillment, a robust mixed-integer optimization model is proposed for allocating a set of work-in-process lots to customer orders, where lot sizes are modeled using ellipsoidal uncertainty sets. To solve the optimization problem efficiently, we apply branch-and-price and Benders decomposition techniques. The advantages of our model are that it represents uncertainty in a straightforward manner with few distributional assumptions, and that it produces solutions that effectively hedge against the uncertainty in the lot sizes with a reasonable amount of computational effort.
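The modelling device behind the ellipsoidal uncertainty sets mentioned above is the standard robust counterpart of a linear constraint, which becomes a second-order cone constraint. The sketch below shows that device for a single covering constraint with a continuous relaxation of the lot assignment; it is not the paper's full mixed-integer model with branch-and-price and Benders decomposition, and all data are illustrative.

```python
# Robust counterpart of one covering constraint under an ellipsoidal set:
#   a' x >= d for all a = a0 + P u, ||u||_2 <= 1   <=>   a0' x - ||P' x||_2 >= d.
# Continuous relaxation of the lot-assignment variables, illustrative data.
import cvxpy as cp
import numpy as np

n_lots = 5
a0 = np.array([10.0, 12.0, 8.0, 9.0, 11.0])   # nominal lot sizes
P = np.diag([1.0, 1.5, 0.8, 1.0, 1.2])        # shape of the ellipsoid
demand = 30.0
cost = np.array([1.0, 1.2, 0.9, 1.1, 1.0])

x = cp.Variable(n_lots)                       # relaxed assignment of lots to the order
constraints = [x >= 0, x <= 1,
               a0 @ x - cp.norm(P.T @ x, 2) >= demand]  # robust covering constraint

prob = cp.Problem(cp.Minimize(cost @ x), constraints)
prob.solve()
print(np.round(x.value, 3), round(prob.value, 3))
```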

7.
We address the problem of determining a robust maximum flow value in a network with uncertain link capacities taken from a polyhedral uncertainty set. Besides a few polynomial cases, we focus on the case where the uncertainty set is the solution set of an associated (continuous) knapsack problem. This class of problems is shown to be polynomially solvable for planar graphs, but NP-hard for graphs without special structure. The latter result provides evidence that the problem investigated here has a structure fundamentally different from the robust network flow models proposed in various other published works.

8.
In many real-life problems one has to base decisions on information that is both fuzzily imprecise and probabilistically uncertain. Although consistency indexes providing a nexus between possibilistic and probabilistic representations of uncertainty exist, there are no reliable transformations between them. This calls for new paradigms for incorporating the two kinds of uncertainty into mathematical models. Fuzzy stochastic linear programming is an attempt to fulfill this need. It deals with modelling and problem-solving issues related to situations where randomness and fuzziness co-occur in a linear programming framework. In this paper we provide a survey of the essential elements, methods and algorithms for this class of linear programming problems, along with promising research directions. Being a survey, the paper includes many references, both to give due credit to results in the field and to help readers obtain more detailed information on issues of interest.

9.
Applications of traditional data envelopment analysis (DEA) models require knowledge of crisp input and output data. However, real-world problems often involve imprecise or ambiguous data. In this paper, the problem of uncertainty in the equality constraints is analyzed and, by using the equivalent form of the CCR model, a suitable robust DEA model is derived in order to analyze the efficiency of decision-making units (DMUs) under the assumption of uncertainty in both the input and output spaces. The new model is based on the robust optimization approach. Using the proposed model, it is possible to evaluate the efficiency of the DMUs in the presence of uncertainty in fewer steps than with other models. In addition, using the new robust DEA model and the envelopment form of the CCR model, two linear robust super-efficiency models for the complete ranking of DMUs are proposed. Two case studies from different contexts are taken as numerical examples in order to compare the proposed model with other approaches. The examples also illustrate various possible applications of the new models.
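For orientation, the sketch below shows the nominal (crisp) input-oriented CCR multiplier model for a single DMU, the classical starting point that the robust DEA model above perturbs. The data are illustrative and the robustification itself is not shown.

```python
# Nominal input-oriented CCR multiplier model for one DMU (the crisp baseline
# the robust DEA model builds on).  Rows of X/Y are DMUs; data are illustrative.
import cvxpy as cp
import numpy as np

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])   # inputs of 3 DMUs
Y = np.array([[1.0], [1.0], [1.5]])                  # outputs of 3 DMUs
o = 0                                                # DMU under evaluation

u = cp.Variable(Y.shape[1], nonneg=True)             # output weights
v = cp.Variable(X.shape[1], nonneg=True)             # input weights

constraints = [v @ X[o] == 1]                        # Charnes-Cooper normalization
constraints += [Y[j] @ u - X[j] @ v <= 0 for j in range(X.shape[0])]

prob = cp.Problem(cp.Maximize(Y[o] @ u), constraints)
prob.solve()
print("CCR efficiency of DMU", o, "=", round(prob.value, 3))
```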

10.
Rational approximation of vertical segments
In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.
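The sketch below illustrates the linearized interval constraints for a rational approximant, l_i·q(x_i) ≤ p(x_i) ≤ u_i·q(x_i). The denominator normalization q(x_i) ≥ 1 and the strictly convex surrogate objective (squared coefficient norm) are simplifying assumptions, not the paper's exact formulation; the data are synthetic.

```python
# Interval-intersection constraints for a rational approximant p/q, linearized as
#   l_i * q(x_i) <= p(x_i) <= u_i * q(x_i),  q(x_i) >= 1,
# with a simple strictly convex surrogate objective.  Assumed, simplified setup.
import cvxpy as cp
import numpy as np

xs = np.linspace(0.0, 1.0, 8)
lo = np.sin(xs) - 0.05            # lower ends of the uncertainty intervals
hi = np.sin(xs) + 0.05            # upper ends

deg_p, deg_q = 2, 1
a = cp.Variable(deg_p + 1)        # numerator coefficients
b = cp.Variable(deg_q + 1)        # denominator coefficients

Vp = np.vander(xs, deg_p + 1, increasing=True)   # rows: [1, x, x^2]
Vq = np.vander(xs, deg_q + 1, increasing=True)   # rows: [1, x]
p_vals, q_vals = Vp @ a, Vq @ b

constraints = [q_vals >= 1,                      # keep the denominator positive
               p_vals >= cp.multiply(lo, q_vals),
               p_vals <= cp.multiply(hi, q_vals)]

prob = cp.Problem(cp.Minimize(cp.sum_squares(a) + cp.sum_squares(b)), constraints)
prob.solve()
print("numerator:", a.value.round(3), "denominator:", b.value.round(3))
```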

11.
An uncertainty set is a crucial component in robust optimization. Unfortunately, it is often unclear how to specify it precisely. Thus it is important to study sensitivity of the robust solution to variations in the uncertainty set, and to develop a method which improves stability of the robust solution. In this paper, to address these issues, we focus on uncertainty in the price impact parameters in an optimal portfolio execution problem. We first illustrate that a small variation in the uncertainty set may result in a large change in the robust solution. We then propose a regularized robust optimization formulation which yields a solution with a better stability property than the classical robust solution. In this approach, the uncertainty set is regularized through a regularization constraint, defined by a linear matrix inequality using the Hessian of the objective function and a regularization parameter. The regularized robust solution is then more stable with respect to variation in the uncertainty set specification, in addition to being more robust to estimation errors in the price impact parameters. The regularized robust optimal execution strategy can be computed by an efficient method based on convex optimization. Improvement in the stability of the robust solution is analyzed. We also study implications of the regularization on the optimal execution strategy and its corresponding execution cost. Through the regularization parameter, one can adjust the level of conservatism of the robust solution.

12.
In this paper we present a duality approach for finding a robust best approximation from a set involving interpolation constraints and uncertain inequality constraints in a Hilbert space, immunized against the data uncertainty, using a nonsmooth Newton method. Following the framework of robust optimization, we assume that the input data of the inequality constraints are not known exactly but belong to an ellipsoidal data uncertainty set. We first show that finding a robust best approximation is equivalent to solving a second-order cone complementarity problem, by establishing a strong duality theorem under a strict feasibility condition. We then examine a nonsmooth version of Newton’s method and present its convergence analysis in terms of the metric regularity condition.

13.
A previous approach to robust intensity-modulated radiation therapy (IMRT) treatment planning for moving tumors in the lung involves solving a single planning problem before the start of treatment and using the resulting solution in all of the subsequent treatment sessions. In this paper, we develop an adaptive robust optimization approach to IMRT treatment planning for lung cancer, where information gathered in prior treatment sessions is used to update the uncertainty set and guide the reoptimization of the treatment for the next session. Such an approach allows for the estimate of the uncertain effect to improve as the treatment goes on and represents a generalization of existing robust optimization and adaptive radiation therapy methodologies. Our method is computationally tractable, as it involves solving a sequence of linear optimization problems. We present computational results for a lung cancer patient case and show that using our adaptive robust method, it is possible to attain an improvement over the traditional robust approach in both tumor coverage and organ sparing simultaneously. We also prove that under certain conditions our adaptive robust method is asymptotically optimal, which provides insight into the performance observed in our computational study. The essence of our method – solving a sequence of single-stage robust optimization problems, with the uncertainty set updated each time – can potentially be applied to other problems that involve multi-stage decisions to be made under uncertainty.
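The essence described above, re-solving a single-stage robust problem each session with an uncertainty set updated from observations, is sketched schematically below. The one-dimensional "dose scaling" parameter, the interval update rule and the inner solve are hypothetical stand-ins chosen for illustration, not the paper's IMRT model.

```python
# Schematic adaptive robust loop: before each session, re-solve a robust plan
# against an interval uncertainty set that shrinks as observations accumulate.
# The parameter, update rule and inner solve are illustrative stand-ins only.
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.8                      # unknown parameter the set tries to capture
interval = [0.5, 1.2]                  # initial uncertainty set (an interval)

def solve_robust_plan(interval):
    # Toy inner problem: pick the intensity that delivers a target dose of 1.0
    # under the worst case of the uncertain effect (its lower end).
    lo, _ = interval
    return 1.0 / lo

for session in range(5):
    intensity = solve_robust_plan(interval)
    observed = true_effect + rng.normal(0.0, 0.02)      # noisy per-session estimate
    # Update: shrink the interval toward the observation, keeping a margin.
    interval = [max(interval[0], observed - 0.05),
                min(interval[1], observed + 0.05)]
    print(f"session {session}: intensity={intensity:.3f}, set={np.round(interval, 3)}")
```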

14.
Refinery operation planning is a complex task since refinery processes and inventories are tightly interconnected. We study refinery planning when ships are loaded with a blend of components and the arrival times of the ships are uncertain. Any delay in ship arrival may result in overfull component tanks, which leads to less efficient blending alternatives, reduced process operations or even shutdowns. We propose a planning approach in which we use robust optimization as a decision tool. By using robust optimization, uncertainty in arrival times is dealt with explicitly and the resulting plan and schedule will always be feasible. The approach includes a flexible way to describe and model uncertainties. To compare the robust approach with a traditional deterministic approach, we use a simulation process. Computational results from a case study and simulations show that the proposed methodology is substantially better than a deterministic approach.

15.
Dokka, Trivikram; Goerigk, Marc; Roy, Rahul. Optimization Letters, 2020, 14(6): 1323-1337

In robust optimization, the uncertainty set is used to model all possible outcomes of the uncertain parameters. In the classic setting, one assumes that this set is provided by the decision maker based on the data available to her. Only recently has it been recognized that the process of building useful uncertainty sets is in itself a challenging task that requires mathematical support. In this paper, we propose an approach that goes beyond the classic setting by assuming that multiple uncertainty sets are prepared, each with a weight expressing the degree of belief that the set is a “true” model of uncertainty. We consider theoretical aspects of this approach and show that it is as easy to model as the classic setting. In an extensive computational study using a shortest path problem based on real-world data, we auto-tune uncertainty sets to the available data and show that, with regard to out-of-sample performance, the combination of multiple sets can give better results than each set on its own.
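The brute-force toy below illustrates the weighted combination of uncertainty sets on a tiny shortest-path instance: each set is taken to be a finite bundle of cost scenarios with a belief weight, a path is scored by the weighted sum of its worst-case cost over each set, and all simple paths are enumerated. This is only a reading of the idea on made-up data, not the paper's formulation or its real-world study.

```python
# Weighted combination of uncertainty sets, brute force: score each s-t path by
#   sum_k w_k * (worst-case cost of the path over scenario set U_k)
# and pick the best path.  Tiny illustrative graph and scenarios.
import networkx as nx

G = nx.DiGraph()
edges = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t"), ("a", "b")]
G.add_edges_from(edges)

# Two uncertainty sets, each a list of cost scenarios (dict: edge -> cost).
U1 = [dict(zip(edges, costs)) for costs in ([1, 2, 2, 1, 1], [2, 2, 1, 2, 1])]
U2 = [dict(zip(edges, costs)) for costs in ([1, 4, 1, 1, 3],)]
sets, weights = [U1, U2], [0.7, 0.3]

def path_cost(path, scenario):
    return sum(scenario[(u, v)] for u, v in zip(path, path[1:]))

best = min(
    (sum(w * max(path_cost(p, s) for s in U) for U, w in zip(sets, weights)), p)
    for p in nx.all_simple_paths(G, "s", "t")
)
print("weighted worst-case cost:", best[0], "path:", best[1])
```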


16.
The Markowitz mean-variance (MMV) model and its variants are widely used for portfolio selection. The mean and covariance matrix used in the model originate from probability distributions that need to be determined empirically. It is well known that these parameters are notoriously difficult to estimate. In addition, the model is very sensitive to these parameter estimates. As a result, the performance and composition of MMV portfolios can vary significantly with the specification of the mean and covariance matrix. In order to address this issue, we propose a one-period mean-variance model in which the mean and covariance matrix are only assumed to belong to an exogenously specified uncertainty set. The robust mean-variance portfolio selection problem is then written as a conic program that can be solved efficiently with standard solvers. Both second-order cone program (SOCP) and semidefinite program (SDP) formulations are discussed. Using numerical experiments with real data, we show that the portfolios generated by the proposed robust mean-variance model can be computed efficiently and are not as sensitive to input errors as the classical MMV portfolios.
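A minimal SOCP instance of this idea is sketched below, with an ellipsoidal uncertainty set on the mean only and the covariance treated as known. This is narrower than the paper's joint uncertainty set and uses illustrative data; it only shows how the worst-case mean turns into a second-order cone term.

```python
# Robust mean-variance with an ellipsoidal set on the mean (covariance known):
#   max_x  min_{mu = mu0 + S u, ||u|| <= kappa}  mu' x - lam * x' Sigma x
#        =  mu0' x - kappa * ||S' x||_2 - lam * x' Sigma x.
# Simplified SOCP sketch with illustrative data.
import cvxpy as cp
import numpy as np

mu0 = np.array([0.08, 0.06, 0.04])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.03, 0.01],
                  [0.00, 0.01, 0.02]])
S = np.linalg.cholesky(Sigma)      # assumed shape of the mean's ellipsoid
kappa, lam = 1.0, 3.0

x = cp.Variable(3)
worst_case_mean = mu0 @ x - kappa * cp.norm(S.T @ x, 2)
objective = cp.Maximize(worst_case_mean - lam * cp.quad_form(x, Sigma))
prob = cp.Problem(objective, [cp.sum(x) == 1, x >= 0])
prob.solve()
print("robust weights:", x.value.round(3))
```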

17.
In this research, a robust optimization approach applied to multiclass support vector machines (SVMs) is investigated. Two new kernel-based methods are developed to address data with input uncertainty, where each data point lies inside a sphere of uncertainty. The models are called the robust SVM and the robust feasibility approach (Robust-FA) model, respectively. The two models are compared in terms of robustness and generalization error. They are also compared to the robust Minimax Probability Machine (MPM) in terms of generalization behavior on several data sets. It is shown that the robust SVM performs better than the robust MPM.
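The core of the spherical-uncertainty idea is sketched below for the linear, two-class case: requiring the margin constraint to hold for every perturbation of radius r yields a second-order cone constraint. The paper's models are kernel-based and multiclass, so this is only the simplest variant of the idea, on synthetic data.

```python
# Linear binary SVM robust to spherical input uncertainty of radius r:
#   y_i (w'x_i + b) >= 1 - xi_i for all ||dx|| <= r
#   <=>  y_i (w'x_i + b) - r * ||w||_2 >= 1 - xi_i.
# Simplified two-class, linear sketch on synthetic data.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2, 2], 0.5, size=(20, 2)),
               rng.normal([-2, -2], 0.5, size=(20, 2))])
y = np.concatenate([np.ones(20), -np.ones(20)])
r, C = 0.3, 1.0                                 # uncertainty radius, slack penalty

w = cp.Variable(2)
b = cp.Variable()
xi = cp.Variable(len(y), nonneg=True)

margins = cp.multiply(y, X @ w + b)
constraints = [margins - r * cp.norm(w, 2) >= 1 - xi]
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)), constraints)
prob.solve()
print("w =", w.value.round(3), " b =", round(float(b.value), 3))
```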

18.
We consider a problem where a company must decide the order in which to launch new products within a given time horizon and budget constraints, and where the parameters of the adoption rates of these new products are subject to uncertainty. This uncertainty can bring significant change to the optimal launch sequence. We present a robust optimization approach that incorporates such uncertainty in the Bass diffusion model for new products as well as in the price response function of partners that collaborate with the company to bring its products to market. The decision-maker optimizes his worst-case profit over an uncertainty set where nature chooses the time periods in which (integer) units of the budgets of uncertainty are used for worst impact. This leads to uncertainty sets with binary variables. We show that a conservative approximation of the robust problem can nonetheless be reformulated as a mixed-integer linear programming problem; it therefore has the same structure as the deterministic problem and can be solved in a tractable manner. Finally, we illustrate our approach through numerical experiments. Our model also incorporates contracts with potential commercialization partners. The key output of our work is a sequence of product launch times that protects the decision-maker against parameter uncertainty in the adoption rates of the new products and in the response of potential partners to partnership offers.
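As a reminder of the model whose parameters are treated as uncertain above, the snippet below evaluates the nominal Bass cumulative-adoption curve. The parameter values (innovation p, imitation q, market size m) are illustrative only.

```python
# Nominal Bass diffusion curve; its parameters (p, q, m) are the uncertain
# adoption-rate quantities the robust launch model protects against.
#   N(t) = m * (1 - exp(-(p+q) t)) / (1 + (q/p) * exp(-(p+q) t))
import numpy as np

def bass_cumulative(t, p=0.03, q=0.38, m=100_000):
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

t = np.arange(0, 11)                     # periods since launch (illustrative)
adopters = bass_cumulative(t)
print(np.round(adopters).astype(int))    # cumulative adopters over time
```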

19.
In our study, we integrate the data uncertainty of real-world models into regulatory systems and robustify them. We introduce and analyse robust time-discrete target–environment regulatory systems under polyhedral uncertainty through robust optimization. Robust optimization has gained great importance as a modelling framework for immunizing against parametric uncertainties, and the integration of uncertain data is of considerable importance for the reliability of a model of such a highly interconnected system. We then present a numerical example to demonstrate the efficiency of our new robust regression method for regulatory networks. The results indicate that our approach can successfully approximate the target–environment interaction, based on the expression values of all targets and environmental factors.
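For flavor only, the sketch below shows the classical norm-bounded robust least-squares counterpart, a different uncertainty model than the polyhedral sets used above, where protecting the fit against all perturbations of the data matrix of bounded norm adds a norm-of-coefficients term to the objective. It is background on how robustifying a regression problem looks, not the paper's method; data are synthetic.

```python
# Classical norm-bounded robust least squares (background, not the paper's model):
#   min_x  max_{||dA||_F <= rho}  ||(A + dA) x - b||_2  =  min_x ||A x - b||_2 + rho * ||x||_2.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 5))                # expression data (synthetic)
b = A @ np.array([1.0, -0.5, 0.0, 2.0, 0.3]) + rng.normal(scale=0.1, size=30)
rho = 0.5                                   # size of the norm-bounded perturbation

x = cp.Variable(5)
prob = cp.Problem(cp.Minimize(cp.norm(A @ x - b, 2) + rho * cp.norm(x, 2)))
prob.solve()
print("robust regression coefficients:", x.value.round(3))
```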

20.
Robust optimization is a popular methodology for dealing with optimization problems that have uncertain data and hard constraints. So far, this methodology has been applied to various convex conic optimization problems in which only the inequality constraints are subject to uncertainty. In this paper, the robust optimization methodology is applied to the general nonlinear programming (NLP) problem involving both uncertain inequality and equality constraints. The uncertainty set is defined by conic-representable sets and is general enough to include, as special cases, many uncertainty sets that have been used in the literature. The robust counterpart (RC) of the general NLP problem is approximated under this uncertainty set. It is shown that the resulting approximate RC of the general NLP problem is valid in a small neighborhood of the nominal value. Furthermore, a rather general class of programming problems is identified for which the robust counterparts can be derived exactly under the proposed uncertainty set. Our results show the applicability of robust optimization to a wider range of real applications and theoretical problems, with more general uncertainty sets than those considered so far. The resulting robust counterparts, which are traditional optimization problems, make it possible to use existing algorithms of mathematical optimization to solve more complicated and general robust optimization problems.
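To ground the notion of a robust counterpart, the sketch below works through its best-known special case: a single linear constraint under a polyhedral uncertainty set, where LP duality turns "a'x ≤ b for all a with Da ≤ d" into finitely many linear constraints with a dual multiplier. This is only the linear special case with illustrative data, not the paper's general NLP setting with uncertain equality constraints.

```python
# Robust counterpart of a'x <= b over the polyhedron {a : D a <= d}.  By LP duality,
#   max_a { a'x : D a <= d } <= b   <=>   exists y >= 0 with D'y = x and d'y <= b,
# so the robust constraint is itself a small set of linear constraints.
import cvxpy as cp
import numpy as np

# Uncertainty set {a : 0 <= a <= a_bar, sum(a) <= s} written as D a <= d.
a_bar, s = np.array([1.0, 2.0, 1.5]), 3.0
D = np.vstack([np.eye(3), -np.eye(3), np.ones((1, 3))])
d = np.concatenate([a_bar, np.zeros(3), [s]])
b = 4.0
cost = np.array([-1.0, -2.0, -1.5])           # maximize x1 + 2*x2 + 1.5*x3

x = cp.Variable(3, nonneg=True)
y = cp.Variable(len(d), nonneg=True)          # dual multipliers of the inner LP
constraints = [D.T @ y == x, d @ y <= b]      # robust counterpart of a'x <= b
prob = cp.Problem(cp.Minimize(cost @ x), constraints)
prob.solve()
print("x =", x.value.round(3))
```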

