Similar Articles
1.
The phenomenon of a limit of detection (LoD) arises in many practical situations because of technique and instrument limitations. Reports in the literature show that, in general, applying conventional methods to evaluate the diagnostic power of variables while ignoring the LoD can yield seriously biased results. Although the area under the receiver operating characteristic (ROC) curve can be estimated consistently when the distribution of the variables is known, in practice such information is usually not available. On the other hand, it has been proved that, without distributional assumptions, the estimated area under the ROC curve of a variable subject to a LoD is usually biased no matter what replacement strategy is used. However, there is a lack of similar studies on the partial area under the ROC curve (pAUC), and because this measure is often preferred in practice, it is of interest to examine whether the estimate of the pAUC of a variable measured with a LoD behaves the same way. In this study, we found that for some LoD scenarios, even without distributional assumptions, a consistent estimate of the pAUC can be constructed. When a consistent estimate cannot be obtained, the bias is often negligible in practical situations, and the proposed estimator can serve as a good approximation of the pAUC. Numerical studies using simulated data sets and real data examples are reported.
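A minimal numpy sketch of a nonparametric pAUC estimate under a LoD, using the common substitute-below-LoD strategy; the function, the substitution constant, and the lognormal toy data are illustrative and not the paper's estimator:

```python
import numpy as np

def empirical_pauc(cases, controls, fpr_max, lod=None, fill=None):
    """Nonparametric partial AUC over false-positive rates in [0, fpr_max].

    Values below the limit of detection `lod` are replaced by `fill`,
    mimicking a common substitution strategy.
    """
    cases = np.asarray(cases, float)
    controls = np.asarray(controls, float)
    if lod is not None:
        if fill is None:
            fill = lod / 2.0                 # common substitution: half the LoD
        cases = np.where(cases < lod, fill, cases)
        controls = np.where(controls < lod, fill, controls)
    thr = np.unique(np.concatenate([cases, controls]))[::-1]  # high to low
    fpr = np.array([(controls >= t).mean() for t in thr])
    tpr = np.array([(cases >= t).mean() for t in thr])
    fpr = np.concatenate([[0.0], fpr])       # start the curve at (0, 0)
    tpr = np.concatenate([[0.0], tpr])
    keep = np.concatenate([np.diff(fpr) > 0, [True]])  # drop duplicate FPRs
    grid = np.linspace(0.0, fpr_max, 201)
    return np.trapz(np.interp(grid, fpr[keep], tpr[keep]), grid)

rng = np.random.default_rng(0)
controls = rng.lognormal(0.0, 0.5, size=500)   # biomarker in healthy subjects
cases = rng.lognormal(0.7, 0.5, size=500)      # biomarker in diseased subjects
print(empirical_pauc(cases, controls, fpr_max=0.2, lod=0.8))
```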

2.
Existing stochastic dominance rules apply to variables such as income, wealth and rates of return, all of which are measured on cardinal scales. This study develops and applies stochastic dominance rules for ordinal data. It is shown that the new rules are consistent with the traditional von Neumann-Morgenstern expected utility approach, and that they are applicable and relevant in a wide variety of managerial decision making situations, where existing stochastic dominance rules fail to apply. We apply ordinal SD rules to the transformation of random variables.
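For reference, the classical first-degree stochastic dominance condition compares cumulative probabilities category by category; the small check below is illustrative of that baseline, not of the paper's generalized ordinal rules:

```python
import numpy as np

def fsd_ordinal(p, q):
    """First-degree stochastic dominance for ordinal distributions.

    p, q: probability vectors over the same ordered categories
    (worst category first). Returns True if p dominates q, i.e. p puts
    weakly less cumulative mass on every lower set of categories.
    """
    Fp, Fq = np.cumsum(p), np.cumsum(q)
    return bool(np.all(Fp <= Fq + 1e-12) and np.any(Fp < Fq - 1e-12))

# Example: two rating distributions over {poor, fair, good, excellent}
print(fsd_ordinal([0.1, 0.2, 0.4, 0.3], [0.2, 0.3, 0.3, 0.2]))  # True
```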

3.
《Optimization》2012,61(8):949-968
If the constraints in an optimization problem depend on a random parameter, we would like to ensure that they are fulfilled with a high level of reliability. The most natural way is to employ chance constraints. However, the resulting problem is very hard to solve. We propose an alternative formulation of stochastic programs using penalty functions. The expectations of the penalties can either be kept as constraints, leading to generalized integrated chance constraints, or incorporated into the objective as a penalty term. We show that the penalty problems are asymptotically equivalent under quite mild conditions. We discuss applications of sample-approximation techniques to the problems with generalized integrated chance constraints and propose rates of convergence for the set of feasible solutions. We direct our attention to the case where the set of feasible solutions is finite, which can appear in integer programming, and then extend the results to bounded sets with continuous variables. Additional binary variables are necessary to solve sample-approximated chance-constrained problems, leading to a large mixed-integer nonlinear program; the problems with penalties, by contrast, can be solved without adding binary variables, since continuous variables suffice to model the penalties. The introduced approaches are applied to a blending problem and lead to comparably reliable solutions.
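A toy sample-approximation sketch contrasting the empirical chance constraint P(g(x, ξ) > 0) with the expected-penalty quantity E[max(g(x, ξ), 0)] that underlies integrated chance constraints; the constraint function and distribution here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.normal(loc=1.0, scale=0.3, size=10_000)   # sampled random parameter

def g(x, xi):            # constraint function: g(x, xi) <= 0 is desired
    return xi * x - 1.0  # e.g. random resource usage must stay below 1

def chance_violation(x):             # sample estimate of P(g > 0)
    return np.mean(g(x, xi) > 0)

def expected_penalty(x):             # sample estimate of E[max(g, 0)]
    return np.mean(np.maximum(g(x, xi), 0.0))

for x in (0.6, 0.8, 1.0):
    print(f"x={x}: P(viol) ~ {chance_violation(x):.3f}, "
          f"E[penalty] ~ {expected_penalty(x):.4f}")
```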

4.
Methods of scoring multiple-choice examinations are outlined in which the examinee's expected change in score due to random guessing is negative in every situation of partial knowledge. Thus, it can be argued that the guessing component in multiple-choice examinations can be completely removed. The examinee is instructed to tick the correct box or, if the correct box is unknown, to cross boxes known to be incorrect, or to leave the question blank. Under the proposed scoring system, it is argued that an examinee will not attempt to demonstrate more knowledge on any particular question than the examinee in fact has. The examiner also has considerable freedom in the size of the rewards for correctly crossed boxes, which need not be equal. There is likewise a range of possible values for the penalties for incorrect responses, and appropriate recommendations are made.
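A small sketch of the kind of check involved, for a hypothetical reward/penalty scheme; the parameters p, c, d and the scheme itself are illustrative rather than the paper's exact scoring system. Random guessing, whether by ticking or crossing one box, must have negative expected score:

```python
def guessing_is_unprofitable(k, p, c, d):
    """Check that random guessing has negative expected score.

    k: number of boxes; reward 1 for ticking the correct box, penalty p
    for ticking a wrong one; reward c per correctly crossed box,
    penalty d for crossing the correct box. (Illustrative scheme.)
    """
    ev_tick = (1 / k) * 1 + ((k - 1) / k) * (-p)   # tick one box at random
    ev_cross = ((k - 1) / k) * c + (1 / k) * (-d)  # cross one box at random
    return ev_tick < 0 and ev_cross < 0

# p > 1/(k-1) and d > (k-1)*c make blind guessing a losing strategy
print(guessing_is_unprofitable(k=5, p=0.5, c=0.2, d=1.0))  # True
```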

5.
We develop fixed-point algorithms for the approximation of structured matrices with rank penalties. In particular, we use these fixed-point algorithms for making approximations by sums of exponentials, i.e., frequency estimation. For the basic formulation of the fixed-point algorithm, we show that it converges to the solution of a related minimization problem, namely the one obtained by replacing the original objective function with its convex envelope while keeping the structured matrix constraint unchanged. It often happens that this solution agrees with the solution to the original minimization problem, and we provide a simple criterion for when this is true. We also provide more general fixed-point algorithms that can be used to treat the problems of making weighted approximations by sums of exponentials given equally or unequally spaced sampling. We apply the method to the case of missing data; although the above-mentioned convergence results do not hold in this case, it turns out that the method often gives perfect reconstruction (up to machine precision). We also discuss multidimensional extensions and illustrate how the proposed algorithms can be used to recover sums of exponentials in several variables when samples are available only along a curve.
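A related classical fixed-point scheme, Cadzow-style alternation between a hard rank truncation of a Hankel matrix and anti-diagonal averaging back to Hankel structure, gives the flavor of such iterations. This is a sketch of that classical relative; the paper's algorithm instead works with the convex envelope of the rank penalty:

```python
import numpy as np

def hankel(f):
    n = (len(f) + 1) // 2
    return np.array([f[i:i + n] for i in range(len(f) - n + 1)])

def dehankel(H):
    """Average anti-diagonals back to a signal (projection onto Hankel)."""
    m, n = H.shape
    f = np.zeros(m + n - 1)
    for k in range(m + n - 1):
        idx = [(i, k - i) for i in range(max(0, k - n + 1), min(m, k + 1))]
        f[k] = np.mean([H[i, j] for i, j in idx])
    return f

def cadzow(f, rank, iters=50):
    """Alternate rank truncation and Hankel projection to a fixed point."""
    for _ in range(iters):
        H = hankel(f)
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        s[rank:] = 0.0                      # hard rank truncation step
        f = dehankel(U @ np.diag(s) @ Vt)
    return f

# Denoise a sum of two sinusoids (a rank-4 Hankel structure)
t = np.arange(64)
clean = np.cos(0.8 * t) + 0.5 * np.cos(1.9 * t)
noisy = clean + 0.2 * np.random.default_rng(1).normal(size=t.size)
rec = cadzow(noisy, rank=4)
print(np.linalg.norm(rec - clean) / np.linalg.norm(clean))
```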

6.
Several types of regulations limit the amounts of various emissions that a firm may create through its production processes. Depending on the emission, these regulations may include threshold values, penalties and taxes, and/or emission allowances that can be traded. However, many firms try to comply with these regulations without a systematic plan, often leading not only to emission violations and high penalties, but also to high costs. In this paper, we present two mathematical models that firms can use to determine their optimal product mix and production quantities in the presence of several different types of environmental constraints, in addition to typical production constraints. Both models are comprehensive and incorporate several diverse production and environmental issues. The first model, which assumes that each product has just one operating procedure, is a linear program, while the second model, which assumes that the firm can produce each product using more than one operating procedure, is a mixed-integer linear program. The solutions of both models identify the products that the firm should produce along with their production quantities. Firms can use these models to quickly analyze several “what if” scenarios, such as the impact of changes in emission threshold values, emission taxes, trading allowances, and trading transaction costs.
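A minimal scipy sketch in the spirit of the first (linear programming) model type, for two hypothetical products under a machine-hour constraint, an emission threshold, and a per-unit emission tax; all data are invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data for two products (not from the paper)
profit = np.array([40.0, 30.0])        # unit profit
tax = 1.5                              # emission tax per kg
emis = np.array([5.0, 2.0])            # kg emitted per unit produced
hours = np.array([2.0, 1.0])           # machine hours per unit

c = -(profit - tax * emis)             # linprog minimizes, so negate net profit
A_ub = np.vstack([hours, emis])
b_ub = np.array([100.0, 180.0])        # hour capacity and emission threshold

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, -res.fun)                 # optimal quantities and net profit
```

Re-solving with a different threshold in `b_ub` or a different `tax` answers exactly the kind of “what if” question the abstract mentions.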

7.
One of the most important tasks in service and manufacturing systems is scheduling arriving jobs so that given criteria are satisfied. A great variety of scheduling problems, together with corresponding models and solution approaches, have been defined to date. Most models rest on more or less restrictive assumptions, such as a single machine, uniform processing times, zero set-up times, or a single criterion. On the other hand, some classical approaches such as linear or dynamic programming are practicable only for small problems. Hence, recent years have seen increasing use of heuristic search methods, yet scheduling problems with multiple machines, forbidden setups, and multiple objectives are scarcely considered. In this paper we apply a genetic algorithm to such a problem, which arose at a continuous casting plant. Because of the forbidden setups, the probability that a randomly generated schedule is feasible is nearly zero. To resolve this problem we use three kinds of penalties: a global, a local, and a combined approach. To investigate the performance of these penalty types, we applied our approaches to a real-world test instance with 96 jobs, three machines, and two objectives, testing five different penalty levels with 51 independent runs each to evaluate the impact of the penalties.
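A sketch of how the three penalty types might enter a GA fitness evaluation; the weights and the structure are illustrative, not the paper's calibration:

```python
def penalized_fitness(objectives, violations, mode="combined",
                      w_global=1000.0, w_local=50.0):
    """Penalty handling for infeasible schedules in a GA (minimization).

    objectives: weighted sum of the schedule's objective values
    violations: number of forbidden setups contained in the schedule
    """
    penalty = 0.0
    if mode in ("global", "combined") and violations > 0:
        penalty += w_global                 # flat penalty: infeasible at all
    if mode in ("local", "combined"):
        penalty += w_local * violations     # graded penalty per violation
    return objectives + penalty             # lower fitness is better

print(penalized_fitness(120.0, 3))          # 120 + 1000 + 3*50 = 1270.0
```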

8.
This paper considers several probability maximization models for multi-scenario portfolio selection problems in the case where future returns in possible scenarios are multi-dimensional random variables. In order to account for occurrence probabilities and decision makers’ predictions with respect to all scenarios, a portfolio selection problem that assigns a flexible weight to each scenario is proposed. Furthermore, by introducing aspiration levels for the occurrence probabilities or the future target profit and maximizing the minimum aspiration level, a robust portfolio selection problem is considered. Since the inclusion of random variables makes these problems stochastic programming problems, they are transformed into deterministic equivalents by introducing chance constraints based on the stochastic programming approach. Then, using a relation between the variance and the absolute deviation of random variables, our proposed models are transformed into linear programming problems, and efficient solution methods are developed to obtain the global optimal solution. Finally, a numerical example of a portfolio selection problem is provided to compare our proposed models with the basic model.

9.
Most variable-selection methods based on linear mixed-effects models select the fixed effects and the random effects in separate stages, which is cumbersome and prone to model bias; moreover, most nonparametric and semiparametric linear mixed-effects models address only the smoothness of the nonparametric part or the selection of fixed effects, not the selection of nonparametric variables or random effects. In this paper, the nonparametric function is approximated by B-spline basis functions, which turns the semiparametric linear mixed-effects model into a linear mixed-effects model with an approximation error. The covariance matrix of the random effects is handled with a modified Cholesky decomposition and the model is reparameterized; two penalties, a group ALASSO penalty and an ALASSO penalty, are then imposed on the maximum likelihood function of the model. This approach achieves joint selection of nonparametric variables, fixed effects, and random effects, and the resulting estimators satisfy consistency, sparsity, and the oracle property. A numerical simulation at the end of the paper shows that the proposed estimation method performs well in terms of both variable-selection accuracy and the precision of parameter estimation.

10.
Classification of items as good or bad can often be achieved more economically by examining the items in groups rather than individually. If the result of a group test is good, all items within the group can be classified as good, whereas in the opposite case one or more items are bad. Whether it is necessary to identify the bad items, and if so how, is described by the screening policy. Over time, a spectrum of group screening models has been studied, each including some policy. The majority, however, ignore that in real-life situations items may arrive at the testing center at random time epochs. This dynamic aspect leads to two decision variables: the minimum and maximum group size. In this paper, we analyze a discrete-time batch-service queueing model with a general dependency between the service time of a batch and the number of items within it. We deduce several important quantities by which the decision variables can be optimized. In addition, we highlight that every possible screening policy can, in principle, be studied by defining the dependency between the service time of a batch and the number of items within it appropriately.
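For intuition about the economics of group testing, the classical Dorfman calculation below finds the group size minimizing the expected number of tests per item; this is a textbook result that the paper's queueing analysis generalizes:

```python
import numpy as np

def tests_per_item(k, p):
    """Expected tests per item under Dorfman screening: one group test
    per k items, plus k individual retests if the group turns out bad."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

p = 0.02                                    # fraction of bad items
ks = np.arange(2, 21)
best = ks[np.argmin([tests_per_item(k, p) for k in ks])]
print(best, tests_per_item(best, p))        # optimal group size is about 8
```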

11.
The conventional model structures presented in the data envelopment analysis (DEA) literature view all variables as behaving in a linear fashion, meaning that regardless of the amounts, large or small, of a variable held by the set of decision-making units, we apply the same multiplier to those various amounts. In certain situations this linearity assumption is not appropriate, and the conventional models need to be altered to accommodate nonlinear representations. In the current paper, we propose a modified DEA structure that captures certain forms of nonlinear behaviour within the additive DEA model, namely those that exhibit diminishing marginal value.

12.
In many biomedical studies, identifying effects of covariate interactions on survival is a major goal. Important examples are treatment–subgroup interactions in clinical trials, and gene–gene or gene–environment interactions in genomic studies. A common problem when implementing a variable selection algorithm in such settings is the requirement that the model must satisfy the strong heredity constraint, wherein an interaction may be included in the model only if the interaction’s component variables are included as main effects. We propose a modified Lasso method for the Cox regression model that adaptively selects important single covariates and pairwise interactions while enforcing the strong heredity constraint. The proposed method is based on a modified log partial likelihood including two adaptively weighted penalties, one for main effects and one for interactions. A two-dimensional tuning parameter for the penalties is determined by generalized cross-validation. Asymptotic properties are established, including consistency and rate of convergence, and it is shown that the proposed selection procedure has oracle properties, given proper choice of regularization parameters. Simulations illustrate that the proposed method performs reliably across a range of different scenarios.

13.
The classical theory of random dynamical systems is a pathwise theory based on a skew-product system consisting of a measure theoretic autonomous system that represents the driving noise and a topological cocycle mapping for the state evolution. This theory does not, however, apply to nonlocal dynamics such as when the dynamics of a sample path depends on other sample paths through an expectation or when the evolution of random sets depends on nonlocal properties such as the diameter of the sets. The authors showed recently in terms of stochastic morphological evolution equations that such nonlocal random dynamics can be characterized by a deterministic two-parameter process from the theory of nonautonomous dynamical systems acting on a state space of random variables or random sets with the mean-square topology. This observation is exploited here to provide a definition of mean-square random dynamical systems and their attractors. The main difficulty in applying the theory is the lack of useful characterizations of compact sets of mean-square random variables. It is illustrated through simple but instructive examples how this can be avoided in strictly contractive cases or circumvented by using weak compactness. The existence of a pullback attractor then follows from the much more easily determined mean-square ultimate boundedness of solutions.

14.
We apply stochastic dynamic programming to obtain a lower bound for the mean project completion time in a PERT network whose activity durations are exponentially distributed random variables. Moreover, these random variables are non-static in that the distributions themselves vary according to some randomness in society, such as strikes or inflation. This social randomness is modelled as a function of a separate continuous-time Markov process over the time horizon. The results are verified by simulation.
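A Monte Carlo cross-check of the kind used for verification, on a toy two-path network with fixed (static) exponential rates; the paper's setting additionally lets the rates switch with a continuous-time Markov environment, and all numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy PERT network: completion time is the longest of the two paths
# A -> B -> D and A -> C -> D, with exponential activity durations.
rates = {"AB": 1.0, "BD": 0.5, "AC": 0.8, "CD": 0.7}   # illustrative rates

def simulate_completion(n=100_000):
    d = {k: rng.exponential(1.0 / r, size=n) for k, r in rates.items()}
    return np.maximum(d["AB"] + d["BD"], d["AC"] + d["CD"])

t = simulate_completion()
print(f"mean completion time ~ {t.mean():.3f}")  # compare with the DP lower bound
```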

15.
In this paper we analyze the longest increasing contiguous sequence, or maximal ascending run, of random variables that share a common uniform distribution but are not independent. Their dependence is characterized by the fact that two successive random variables cannot take the same value. Using a Markov chain approach, we study the distribution of the maximal ascending run and develop an algorithm to compute it. This problem comes from the analysis of several self-organizing protocols designed for large-scale wireless sensor networks, and we show how our results apply to this domain.
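A simulation sketch of the quantity studied: draws are uniform on {1, ..., m} with the stated dependence that consecutive values never coincide, and we record the longest strictly increasing contiguous run. The paper computes this distribution exactly via a Markov chain; this Monte Carlo only approximates it:

```python
import numpy as np

rng = np.random.default_rng(7)

def max_ascending_run(n, m):
    """Longest strictly increasing contiguous run in n dependent
    uniform draws on {1..m} where consecutive draws never repeat."""
    x = int(rng.integers(1, m + 1))
    best = run = 1
    for _ in range(n - 1):
        y = x
        while y == x:                    # dependence: no immediate repeat
            y = int(rng.integers(1, m + 1))
        run = run + 1 if y > x else 1
        best = max(best, run)
        x = y
    return best

samples = [max_ascending_run(n=100, m=10) for _ in range(5000)]
vals, counts = np.unique(samples, return_counts=True)
print(dict(zip(vals.tolist(), (counts / len(samples)).round(3))))
```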

16.
Linear stochastic programming provides a flexible toolbox for analyzing real-life decision situations, but it can become computationally cumbersome when recourse decisions are involved. The latter are usually modeled as decision rules, i.e., functions of the uncertain problem data. It has recently been argued that stochastic programs can quite generally be made tractable by restricting the space of decision rules to those that exhibit a linear data dependence. In this paper, we propose an efficient method to estimate the approximation error introduced by this rather drastic means of complexity reduction: we apply the linear decision rule restriction not only to the primal but also to a dual version of the stochastic program. By employing techniques that are commonly used in modern robust optimization, we show that both arising approximate problems are equivalent to tractable linear or semidefinite programs of moderate sizes. The gap between their optimal values estimates the loss of optimality incurred by the linear decision rule approximation. Our method remains applicable if the stochastic program has random recourse and multiple decision stages. It also extends to cases involving ambiguous probability distributions.

17.
Probabilistic Methods in the Proof of Inequalities
Proofs of inequalities are often rather involved, their intuitive meaning can be abstract, and purely algebraic methods may be of little help. If a suitable probability model can be constructed, giving the quantities involved concrete interpretations as random events or random variables, a proof based on probability theory often simplifies the argument considerably. At the same time, this supplies abstract mathematical problems with a concrete probabilistic background and connects different branches of mathematics. This paper illustrates the commonly used probabilistic ideas through the proofs of several inequalities.
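A standard illustration of the technique (not necessarily one of the paper's examples): a probabilistic proof that for positive numbers the product of the sum and the sum of reciprocals is at least n².

```latex
% Claim: for a_1,\dots,a_n > 0, \big(\sum_i a_i\big)\big(\sum_i 1/a_i\big) \ge n^2.
% Probabilistic proof: let X take the value a_i with probability 1/n each.
% By the Cauchy--Schwarz inequality for expectations,
\[
  1 = \Big(\mathbb{E}\Big[\sqrt{X}\cdot \tfrac{1}{\sqrt{X}}\Big]\Big)^2
    \le \mathbb{E}[X]\,\mathbb{E}\!\left[\frac{1}{X}\right]
    = \frac{1}{n}\sum_{i=1}^n a_i \cdot \frac{1}{n}\sum_{i=1}^n \frac{1}{a_i},
\]
% and multiplying through by n^2 gives the claim.
```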

18.
A Heuristic for Moment-Matching Scenario Generation
In stochastic programming models we always face the problem of how to represent the random variables. This is particularly difficult with multidimensional distributions. We present an algorithm that produces a discrete joint distribution consistent with specified values of the first four marginal moments and correlations. The joint distribution is constructed by decomposing the multivariate problem into univariate ones, and using an iterative procedure that combines simulation, Cholesky decomposition and various transformations to achieve the correct correlations without changing the marginal moments. With the algorithm, we can generate 1000 one-period scenarios for 12 random variables in 16 seconds, and for 20 random variables in 48 seconds, on a Pentium III machine.
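A stripped-down sketch of the Cholesky step: matching means, standard deviations, and correlations exactly. This covers the first two moments only; the full heuristic additionally matches skewness and kurtosis through iterated cubic transformations:

```python
import numpy as np

rng = np.random.default_rng(3)

def scenarios(mu, sigma, corr, n):
    """Generate n scenarios whose sample means, standard deviations and
    correlations match the targets exactly (first two moments only)."""
    k = len(mu)
    z = rng.normal(size=(n, k))
    z = (z - z.mean(0)) / z.std(0)            # standardize the sample
    # remove the realized correlation, then impose the target correlation
    L_hat = np.linalg.cholesky(np.corrcoef(z, rowvar=False))
    L = np.linalg.cholesky(corr)
    z = z @ np.linalg.inv(L_hat).T @ L.T
    return mu + z * sigma

mu = np.array([0.05, 0.02]); sigma = np.array([0.2, 0.1])
corr = np.array([[1.0, 0.3], [0.3, 1.0]])
s = scenarios(mu, sigma, corr, 1000)
print(s.mean(0), np.corrcoef(s, rowvar=False)[0, 1])  # matches the targets
```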

19.
Adly, Samir; Attouch, Hedy. Mathematical Programming 2022, 191(1):405-444

We present a Branch-and-Cut algorithm for a class of nonlinear chance-constrained mathematical optimization problems with a finite number of scenarios. Unsatisfied scenarios can enter a recovery mode. This class corresponds to problems that can be reformulated as deterministic convex mixed-integer nonlinear programming problems with indicator variables and continuous scenario variables, but the size of the reformulation is large and quickly becomes impractical as the number of scenarios grows. The Branch-and-Cut algorithm is based on an implicit Benders decomposition scheme, where we generate cutting planes as outer approximation cuts from the projection of the feasible region on suitable subspaces. The size of the master problem in our scheme is much smaller than the deterministic reformulation of the chance-constrained problem. We apply the Branch-and-Cut algorithm to the mid-term hydro scheduling problem, for which we propose a chance-constrained formulation. A computational study using data from ten hydroplants in Greece shows that the proposed methodology solves instances faster than applying a general-purpose solver for convex mixed-integer nonlinear programming problems to the deterministic reformulation, and scales much better with the number of scenarios.


20.
In problems of portfolio selection, the reward-risk ratio criterion is optimized to search for a risky portfolio offering the maximum increase of the mean return compared to the risk-free investment opportunities. In the classical Markowitz-style model the risk is measured by the variance, which yields Sharpe ratio optimization and leads to quadratic optimization problems. Several polyhedral risk measures, which are linear programming (LP) computable in the case of discrete random variables represented by their realizations under specified scenarios, have been introduced and applied in portfolio optimization. Reward-risk ratio optimization with polyhedral risk measures can be transformed into LP formulations. The LP models typically contain a number of constraints proportional to the number of scenarios, while the number of variables (matrix columns) is proportional to the total of the number of scenarios and the number of instruments. Real-life financial decisions are usually based on more advanced simulation models employed for scenario generation, where one may get several thousand scenarios. This may lead to LP models with a huge number of variables and constraints, decreasing their computational efficiency and making them hardly solvable by general-purpose LP tools. We show that the computational efficiency can then be dramatically improved by alternative models based on inverse ratio minimization that take advantage of LP duality. In the introduced models the number of structural constraints (matrix rows) is proportional to the number of instruments; hence the efficiency of the simplex method is not seriously affected by the number of scenarios, which guarantees easy solvability.
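A toy sketch of the inverse-ratio idea with conditional value-at-risk (CVaR) as the polyhedral risk measure and a zero risk-free rate: scale the portfolio so that its mean return equals one and minimize the CVaR of the scaled losses, so the optimal objective value is the reciprocal of the optimal reward-risk ratio. The scenario data, the CVaR level, and the scipy formulation are illustrative; note that this primal form still has one row per scenario, whereas the dual models advocated in the paper keep the number of structural rows proportional to the number of instruments.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(11)
S, n, alpha = 500, 4, 0.95                       # scenarios, assets, CVaR level
R = 0.02 + 0.05 * rng.standard_normal((S, n))    # scenario returns (toy data)
mu = R.mean(axis=0)

# Inverse-ratio trick: variables v = [y (n), eta, u (S)], with the
# portfolio scaled so that mu' y = 1; minimize CVaR of the scaled loss.
c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - alpha) * S))])
A_ub = np.hstack([-R, -np.ones((S, 1)), -np.eye(S)])   # -R y - eta - u <= 0
b_ub = np.zeros(S)
A_eq = np.concatenate([mu, [0.0], np.zeros(S)])[None]  # mu' y = 1
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S

res = linprog(c, A_ub, b_ub, A_eq, [1.0], bounds)
x = res.x[:n] / res.x[:n].sum()                  # recover portfolio weights
print(x.round(3), "reward-risk ratio =", 1.0 / res.fun)
```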
