Similar literature
20 similar documents found.
1.
The accelerated failure time model is a useful alternative to the Cox proportional hazards model. We investigate whether a misspecified accelerated failure time model provides a valid test of the hypothesis of no treatment effect in randomized clinical trials. We show that the minimum dispersion statistic based on the rank regression of Wei et al. (1990) must be modified in order to yield valid tests under misspecification, whereas the resampling-based methods of Jin et al. (2003) are valid without any modification. Numerical studies are conducted to examine the small-sample behavior of the modified minimum dispersion statistic and the resampling-based method. Finally, an illustration is given with a dataset from a clinical trial.

2.
We studied the part-type selection problem in a flexible manufacturing environment. We formulated the problem as a bicriteria mathematical programming problem whose objectives are maximisation of the number of part-types selected and minimisation of a measure of total tardiness. Taking the vector optimisation approach, we found all of the efficient solutions to the problem; the efficient set for this problem portrays the trade-off between the two objectives. We suggest that the decision maker choose one of the efficient solutions for implementation, depending on the dynamics of the system. We report computational results that highlight some characteristics of the efficient solutions, and we compare the quality of solutions obtained through the bicriteria model with those of a more traditional approach: maximising the weighted sum of part-types selected, with weights assigned based on due dates. Our results imply a strong possibility of significant improvement in due-date performance by taking a bicriteria approach to the part-type selection problem.

3.
The Fisher information matrix summarizes the amount of information in the data relative to the quantities of interest. There are many applications of the information matrix in modeling, systems analysis, and estimation, including confidence region calculation, input design, prediction bounds, and “noninformative” priors for Bayesian analysis. This article reviews some basic principles associated with the information matrix, presents a resampling-based method for computing the information matrix together with some new theory related to efficient implementation, and presents some numerical results. The resampling-based method relies on an efficient technique for estimating the Hessian matrix, introduced as part of the adaptive (“second-order”) form of the simultaneous perturbation stochastic approximation (SPSA) optimization algorithm.
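As a rough illustration of the resampling idea, the sketch below averages simultaneous-perturbation estimates of the Hessian of a negative log-likelihood; the function names, step size `c`, and the Gaussian example are illustrative assumptions, not the article's implementation.

```python
import numpy as np

def sp_hessian_estimate(loss, theta, c=1e-2, n_rep=200, rng=None):
    """Average simultaneous-perturbation estimates of the Hessian of `loss`
    at `theta` (the core step behind a 2SPSA-style resampling approach).
    All names and settings here are illustrative."""
    rng = np.random.default_rng(rng)
    p = len(theta)
    H = np.zeros((p, p))
    for _ in range(n_rep):
        delta = rng.choice([-1.0, 1.0], size=p)    # Rademacher perturbation
        delta2 = rng.choice([-1.0, 1.0], size=p)   # second perturbation for gradients
        # One-sided SP gradient approximations at theta +/- c*delta
        g_plus = (loss(theta + c * delta + c * delta2) - loss(theta + c * delta)) / (c * delta2)
        g_minus = (loss(theta - c * delta + c * delta2) - loss(theta - c * delta)) / (c * delta2)
        # Rank-one Hessian estimate, symmetrised
        dG = (g_plus - g_minus)[:, None]
        Hk = dG / (2.0 * c * delta)[None, :]
        H += 0.5 * (Hk + Hk.T)
    return H / n_rep

# Example: information matrix of N(mu, sigma^2) from the negative log-likelihood
data = np.random.default_rng(0).normal(2.0, 1.5, size=500)
negloglik = lambda th: 0.5 * np.sum(np.log(2 * np.pi * th[1] ** 2)
                                    + (data - th[0]) ** 2 / th[1] ** 2)
info = sp_hessian_estimate(negloglik, np.array([2.0, 1.5]))
print(info / len(data))   # per-observation information, roughly diag(1/sigma^2, 2/sigma^2)
```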

4.
5.
In this paper we consider the problem of selecting an object or a course of action from a set of possible alternatives. To give the paper focus, we concentrate initially on an object recognition problem in which the characteristic features of the object are reported by remote sensors. We then extend the method to a more general class of selection problems and consider several different scenarios.

Information is provided by a set of knowledge-system reports on a single feature; the output from these systems is not totally explicit but provides possible values for the observed feature along with a degree of certitude. We use fuzzy sets to represent this vague information. Information from independent sources is combined using the Dempster-Shafer approach, adapted to the situation in which the focal elements are fuzzy as in the recent paper by J. Yen [7]. We base our selection rule on the belief and plausibility functions generated by this approach to assessing evidence.

For situations in which the information is too sparse and/or too vague to make a single selection, we construct a preference relationship based on the concept of averaged subsethood for fuzzy sets, as discussed by B. Kosko [4]. We also define an explicit metric upon which to base our selection mechanism for situations in which the Dempster-Shafer rule of combination is inappropriate.
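A minimal sketch of the averaged-subsethood idea, assuming fuzzy sets given as membership vectors over a finite universe; the `ideal` set and the way subsethood is turned into a preference score here are purely illustrative, not the paper's construction.

```python
import numpy as np

def subsethood(a, b):
    """Kosko-style averaged subsethood S(A, B) = |A intersect B| / |A| for fuzzy
    sets given as membership vectors over the same finite universe."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.minimum(a, b).sum() / a.sum()

# Membership of each alternative's fuzzy evidence set in an "ideal" selection set
ideal = np.array([1.0, 0.9, 0.8, 0.2])
alt_A = np.array([0.7, 0.95, 0.3, 0.4])
alt_B = np.array([0.4, 0.9, 0.9, 0.6])

# Preference: rank alternatives by how strongly their evidence is contained
# in the ideal set (higher subsethood preferred).
scores = {name: subsethood(m, ideal) for name, m in [("A", alt_A), ("B", alt_B)]}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```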

6.
This article deals with constructing a confidence interval for the reliability parameter using ranked set sampling. Some asymptotic and resampling-based intervals are suggested and compared with their simple random sampling counterparts using Monte Carlo simulations. Finally, the methods are applied to a real data set from agriculture.
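A hedged stand-in for the resampling-based intervals: a percentile bootstrap for the reliability parameter R = P(X < Y) under simple random sampling. The ranked-set-sampling refinements from the article are not reproduced, and the Weibull stress/strength data are invented for illustration.

```python
import numpy as np

def bootstrap_ci_reliability(x, y, n_boot=2000, alpha=0.05, rng=None):
    """Percentile-bootstrap interval for R = P(X < Y) from two independent samples."""
    rng = np.random.default_rng(rng)
    x, y = np.asarray(x), np.asarray(y)
    stats = []
    for _ in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True)
        yb = rng.choice(y, size=len(y), replace=True)
        stats.append(np.mean(xb[:, None] < yb[None, :]))  # U-statistic estimate of R
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

rng = np.random.default_rng(1)
stress = rng.weibull(2.0, 60)          # hypothetical stress sample
strength = 1.3 * rng.weibull(2.0, 60)  # hypothetical strength sample
print(bootstrap_ci_reliability(stress, strength, rng=2))
```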

7.
Model selection strategies have routinely been employed to determine a model for data analysis in statistics, and further study and inference then often proceed as though the selected model were the true model known a priori. Model averaging approaches, on the other hand, combine estimators from a set of candidate models: instead of deciding which model is the 'right' one, a model averaging approach fits a set of candidate models and averages over their estimators using data-adaptive weights. In this paper we establish a general frequentist model averaging framework that places no restrictions on the set of candidate models, broadening the scope of existing frequentist model averaging methodologies. Assuming the data come from an unknown model, we derive the model averaging estimator and study its limiting distributions and related predictions while taking possible modeling biases into account. We propose a set of optimal weights to combine the individual estimators so that the expected mean squared error of the averaged estimator is minimized. Simulation studies are conducted to compare the performance of the estimator with that of existing methods; the results show the benefits of the proposed approach over traditional model selection approaches as well as existing model averaging methods.
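The sketch below illustrates the weighting idea with a jackknife (leave-one-out) criterion over nested linear candidate models; this is a generic model-averaging variant under simulated data, not the paper's exact optimal-MSE weights.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data and four nested linear candidate models (illustrative only)
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))
y = 1.0 + X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(size=n)

def loo_predictions(Xk, y):
    """Leave-one-out OLS predictions via the hat-matrix shortcut."""
    H = Xk @ np.linalg.solve(Xk.T @ Xk, Xk.T)
    fitted = H @ y
    return y - (y - fitted) / (1.0 - np.diag(H))

designs = [np.column_stack([np.ones(n)] + [X[:, :k]]) for k in range(1, 5)]
P = np.column_stack([loo_predictions(D, y) for D in designs])  # n x M LOO predictions

def cv_risk(w):
    """Cross-validated squared error of the weighted combination."""
    return np.mean((y - P @ w) ** 2)

M = P.shape[1]
res = minimize(cv_risk, np.full(M, 1.0 / M), method="SLSQP",
               bounds=[(0.0, 1.0)] * M,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print("model-averaging weights:", np.round(res.x, 3))
```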

8.
Clustered interval-censored failure time data often arise in medical studies when study subjects come from the same cluster; furthermore, the failure time may be related to the cluster size. A simple and common approach is to simplify the interval-censored data, owing to the lack of proper inference procedures for direct analysis. For this reason, we propose a within-cluster resampling-based method for case II interval-censored data under the additive hazards model. Within-cluster resampling is simple but computationally intensive. A major advantage of the proposed approach is that the estimator can be easily implemented when the cluster size is informative. Asymptotic properties and simulation results are provided and indicate that the proposed approach works well.
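A generic sketch of within-cluster resampling for an informative cluster size, applied to a simple mean rather than the paper's interval-censored additive-hazards setting; the data-generating mechanism is an assumption for illustration.

```python
import numpy as np

def within_cluster_resampling(clusters, estimator, n_resample=500, rng=None):
    """Generic within-cluster resampling: repeatedly draw one subject per cluster,
    apply `estimator` to the resulting independent sample, and average the
    resample estimates. The proper WCR variance estimator (mean of per-resample
    variances minus the between-resample variance) is omitted for brevity."""
    rng = np.random.default_rng(rng)
    ests = []
    for _ in range(n_resample):
        sample = np.array([c[rng.integers(len(c))] for c in clusters])
        ests.append(estimator(sample))
    return float(np.mean(ests))

# Informative cluster size: larger clusters tend to have larger outcomes,
# so a naive pooled mean over all subjects is biased upward.
rng = np.random.default_rng(3)
sizes = rng.integers(2, 9, size=200)
clusters = [rng.normal(loc=0.3 * s, scale=1.0, size=s) for s in sizes]
print("pooled mean:", np.mean(np.concatenate(clusters)))
print("WCR mean:   ", within_cluster_resampling(clusters, np.mean, rng=4))
```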

9.
This paper presents an approach to the portfolio selection problem based on Sharpe's single-index model and on fuzzy sets theory. Expert estimations of the future betas of each financial asset are included in the portfolio selection model, denoted as 'Expert Betas' and modelled as trapezoidal fuzzy numbers. Value, ambiguity and fuzziness are three basic concepts involved in the model; they provide enough information about the fuzzy numbers representing the 'Expert Betas' and are simple to handle. In order to select an optimal portfolio, a goal programming model is proposed that includes the investor's imprecise aspirations concerning the proportions of both high- and low-risk assets. The semantics of these goals are based on the fuzzy membership of a goal-satisfaction set. A real portfolio selection problem is presented to illustrate the proposed model.
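A small sketch of the 'value' and 'ambiguity' of a trapezoidal fuzzy number, computed numerically from alpha-cuts with the common reducing function s(alpha) = alpha; the exact definitions and constants used in the paper may differ, and the example beta is invented.

```python
import numpy as np

def value_ambiguity_trapezoid(a, b, c, d, n_grid=2001):
    """Value and ambiguity of a trapezoidal fuzzy number (a, b, c, d), computed
    numerically from its alpha-cuts with reducing function s(alpha) = alpha."""
    alphas = np.linspace(0.0, 1.0, n_grid)
    lower = a + alphas * (b - a)          # left end of each alpha-cut
    upper = d - alphas * (d - c)          # right end of each alpha-cut
    value = np.mean(alphas * (lower + upper))      # approximates the integral over [0, 1]
    ambiguity = np.mean(alphas * (upper - lower))
    return value, ambiguity

# "Expert Beta" of an asset elicited as a trapezoidal fuzzy number
print(value_ambiguity_trapezoid(0.8, 1.0, 1.1, 1.4))
```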

10.
Empirical model selection in generalized linear mixed effects models
This paper focuses on model selection in generalized linear mixed models using an information criterion approach. In these models the marginal distribution of the response generally cannot be derived analytically; thus, for parameter estimation, two approximations are revisited, both leading to iterative model linearizations. We propose simple model selection criteria adapted from information criteria and based on the linearized model obtained at convergence of the algorithm. The quality of the derived criteria is evaluated through simulations.

11.
We consider the problem of performing matrix completion with side information on row-by-row and column-by-column similarities. We build upon recent proposals for matrix estimation with smoothness constraints with respect to row and column graphs. We present a novel iterative procedure for directly minimizing an information criterion to select an appropriate amount of row and column smoothing, namely, to perform model selection. We also discuss how to exploit the special structure of the problem to scale up the estimation and model selection procedure via the Hutchinson estimator, combined with a stochastic Quasi-Newton approach. Supplementary material for this article is available online.
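The Hutchinson estimator the abstract refers to can be illustrated in a few lines: it approximates a trace from matrix-vector products alone, which is what makes the model-selection criterion scalable. The Laplacian-like example matrix below is only a stand-in, not the paper's graph smoother.

```python
import numpy as np

def hutchinson_trace(matvec, n, n_probe=100, rng=None):
    """Hutchinson's stochastic trace estimator: tr(A) is approximated by the
    average of z^T A z over random Rademacher probes z, needing only
    matrix-vector products with A."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_probe):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ matvec(z)
    return total / n_probe

# Example: trace of a Laplacian-like smoother accessed only through matvecs
n = 500
A = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
print(hutchinson_trace(lambda v: A @ v, n, rng=0), "vs exact", np.trace(A))
```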

12.
This article suggests a method for variable and transformation selection based on posterior probabilities. Our approach allows for consideration of all possible combinations of untransformed and transformed predictors along with transformed and untransformed versions of the response. To transform the predictors in the model, we use a change-point model, or “change-point transformation,” which can yield more interpretable models and transformations than the standard Box–Tidwell approach. We also address the problem of model uncertainty in the selection of models. By averaging over models, we account for the uncertainty inherent in inference based on a single model chosen from the set of models under consideration. We use a Markov chain Monte Carlo model composition (MC3) method, which allows us to average over linear regression models when the space of models under consideration is very large and considers the selection of variables and transformations at the same time. In an example, we show that model averaging improves predictive performance as compared with any single model that might reasonably be selected, both in terms of overall predictive score and of the coverage of prediction intervals. Software to apply the proposed methodology is available via StatLib.
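A toy sketch of an MC3-style search over variable-inclusion indicators, using exp(-BIC/2) as an approximate marginal likelihood; the change-point and response transformations from the article, and the StatLib software, are not reproduced here.

```python
import numpy as np

def bic_linear(X, y, subset):
    """BIC of the OLS model using the predictor columns in `subset` (plus intercept)."""
    n = len(y)
    Xs = np.column_stack([np.ones(n)] + ([X[:, list(subset)]] if subset else []))
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    return n * np.log(rss / n) + Xs.shape[1] * np.log(n)

def mc3(X, y, n_iter=2000, rng=None):
    """Toy MC3 over inclusion indicators: propose flipping one variable, accept
    with a Metropolis ratio based on exp(-BIC/2) as an approximate marginal
    likelihood, and report posterior inclusion probabilities."""
    rng = np.random.default_rng(rng)
    p = X.shape[1]
    current, visits = set(), np.zeros(p)
    bic_cur = bic_linear(X, y, current)
    for _ in range(n_iter):
        j = int(rng.integers(p))
        proposal = current ^ {j}                   # flip inclusion of variable j
        bic_prop = bic_linear(X, y, proposal)
        if np.log(rng.random()) < (bic_cur - bic_prop) / 2.0:
            current, bic_cur = proposal, bic_prop
        for k in current:
            visits[k] += 1
    return visits / n_iter

rng = np.random.default_rng(5)
X = rng.normal(size=(150, 6))
y = 2 * X[:, 0] - X[:, 3] + rng.normal(size=150)
print(np.round(mc3(X, y, rng=6), 2))   # high values for variables 0 and 3
```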

13.
Product design and selection using fuzzy QFD and fuzzy MCDM approaches
Quality function deployment (QFD) is a useful analysis tool in product design and development. To deal with the uncertainty and imprecision in QFD, numerous researchers have applied fuzzy set theory to QFD and developed various fuzzy QFD models. Three issues arise from examining these models. First, the existing studies focus on identifying important engineering characteristics and seldom explore the subsequent prototype product selection. Second, previous studies usually use fuzzy-number algebraic operations to calculate the fuzzy sets in QFD, which may cause the results to deviate greatly from the correct values. Third, few studies pay attention to competitive analysis in QFD, even though it can provide product developers with a large amount of valuable information. To address these three issues, this study integrates fuzzy QFD with a prototype product selection model to develop a product design and selection (PDS) approach. In the fuzzy QFD stage, the α-cut operation is adopted to calculate the fuzzy set of each component; competitive analysis and the correlations among engineering characteristics are also considered. In prototype product selection, engineering characteristics and the factors involved in product development are considered, and a fuzzy multi-criteria decision making (MCDM) approach is proposed to select the best prototype product. A case study illustrates the research steps of the proposed PDS method. The method provides product developers with more useful information and more precise analysis results, and can thus serve as a helpful decision aid in product design.
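A minimal sketch of the α-cut computation: at each α level, interval arithmetic combines relationship strengths and requirement weights into an interval for an engineering characteristic's fuzzy importance. The triangular shapes and the example numbers are assumptions for illustration, not the paper's data or exact procedure.

```python
def alpha_cut_tfn(a, b, c, alpha):
    """Alpha-cut [L(alpha), U(alpha)] of a triangular fuzzy number (a, b, c)."""
    return a + alpha * (b - a), c - alpha * (c - b)

def fuzzy_weighted_importance(rels, weights, alphas=(0.0, 0.5, 1.0)):
    """Alpha-cut based fuzzy importance of one engineering characteristic:
    sum over customer requirements of (relationship strength x weight),
    using interval arithmetic at each alpha level (nonnegative inputs assumed)."""
    cuts = {}
    for alpha in alphas:
        lo = hi = 0.0
        for (ra, rb, rc), (wa, wb, wc) in zip(rels, weights):
            rl, ru = alpha_cut_tfn(ra, rb, rc, alpha)
            wl, wu = alpha_cut_tfn(wa, wb, wc, alpha)
            lo += rl * wl          # lower bound of the interval product
            hi += ru * wu          # upper bound of the interval product
        cuts[alpha] = (lo, hi)
    return cuts

# Two customer requirements with triangular relationship strengths and weights
rels = [(3, 5, 7), (1, 3, 5)]
weights = [(0.3, 0.4, 0.5), (0.5, 0.6, 0.7)]
print(fuzzy_weighted_importance(rels, weights))
```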

14.
Conditional probabilities are a promising and widely used approach to modeling uncertainty in information systems. This paper discusses the DUCK-calculus, which is founded on the cautious approach to uncertain probabilistic inference. Based on a set of sound inference rules, derived probabilistic information is gained by local bounds-propagation techniques. Precision always being a central point of criticism of such systems, we demonstrate that DUCK need not suffer from these problems. We show that the popular Bayesian networks are subsumed by DUCK, implying that precise probabilities can be deduced by local propagation techniques even in the multiply connected case. A comparative study with INFERNO and with inference techniques based on global operations-research techniques yields quite favorable results for our approach. Since conditional probabilities are also suited to modeling nonmonotonic situations by considering different contexts, we investigate the problems of maximal and relevant contexts needed to draw default conclusions about individuals.

15.
We introduce the concept of Bayesian networks in the compound Poisson model, which provides a graphical modeling framework encoding the joint probability distribution of a set of random variables within a directed acyclic graph. We propose an approach that offers a new mixed implicit estimator, and show that the implicit approach applied to the compound Poisson model is very attractive for its ability to understand data and does not require any prior information. A comparative study between the estimates learned by the implicit and by standard Bayesian approaches is established. Under some conditions, and based on minimum squared error calculations, we show that the mixed implicit estimator is better than the standard Bayesian and maximum likelihood estimators. We illustrate the approach with a simulation study in the context of mobile communication networks.

16.
Taking emergency decision making for unexpected crisis events as the application background, this paper discusses the basic theory of fuzzy rough sets over two universes and establishes a fuzzy rough set model over two universes based on a fuzzy compatibility relation. On this basis, emergency decision making for crisis events is transformed into a rough approximation problem, with a fuzzy decision object, on a decision approximation space over two universes, and an emergency decision-making model based on fuzzy rough sets over two universes is constructed. First, the upper and lower approximations of the fuzzy decision object are computed in the two-universe approximation space; then, combining the ideas of classical decision making under uncertainty, decision rules for crisis-event emergency response are derived, and an algorithm for the model is given. The model provides a method for emergency decision making under incomplete information, yielding a decision confidence level and optimal decision rules that fully take the decision maker's personal preference information into account. The approach matches the basic characteristics of emergency decision making, namely insufficient information, limited resources and time pressure, and thus provides a scientific theoretical basis and a practical decision method for crisis-event emergency response. Finally, an illustrative example demonstrates how the model is applied, and the results verify its effectiveness.
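A minimal numerical sketch of the two-universe approximations, assuming a fuzzy compatibility matrix `R` between emergency situations and response plans and a fuzzy decision object `A` over the plans; the min-max forms used here are one standard implicator/t-norm choice and may differ from the paper's exact model, and the numbers are invented.

```python
import numpy as np

# R[i, j]: fuzzy compatibility between situation i (universe U) and plan j (universe V)
R = np.array([[0.9, 0.3, 0.5],
              [0.2, 0.8, 0.6],
              [0.4, 0.7, 0.9]])
# A[j]: membership of plan j in the fuzzy decision object
A = np.array([0.8, 0.4, 0.6])

lower = np.min(np.maximum(1.0 - R, A[None, :]), axis=1)   # lower approximation per situation
upper = np.max(np.minimum(R, A[None, :]), axis=1)         # upper approximation per situation
confidence = (lower + upper) / 2.0                        # one simple decision-confidence index
print(lower, upper, confidence)
```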

17.
Decision making challenges foreign exchange (FX) market brokers because of the volatility of the foreign exchange market and the unmanageable flood of possibly relevant information. Decision making in this complex and dynamically changing environment is therefore a difficult task requiring automated decision support systems. In this contribution, we describe an econometric decision support approach that enables the extraction of the essential information needed to set up accurate forecasting models. Our approach is based on a genetic algorithm (GA) and applies the resulting models to forecast daily EUR/USD exchange rates; the genetic algorithm optimizes single-equation regression forecast models. The approach is new in the literature and, moreover, allows flexibility in automated model selection within a reasonably short time.
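A bare-bones sketch of GA-based regressor selection for a forecasting regression: chromosomes are 0/1 inclusion vectors and fitness is hold-out RMSE. The data, operators and parameters are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def fitness(bits, X, y):
    """Hold-out RMSE of an OLS model using the predictors flagged in `bits`
    (last 20% of observations held out); lower is better."""
    cols = np.flatnonzero(bits)
    n_train = int(0.8 * len(y))
    Xs = np.column_stack([np.ones(len(y))] + ([X[:, cols]] if cols.size else []))
    beta, *_ = np.linalg.lstsq(Xs[:n_train], y[:n_train], rcond=None)
    resid = y[n_train:] - Xs[n_train:] @ beta
    return np.sqrt(np.mean(resid ** 2))

def ga_select(X, y, pop=30, gens=40, p_mut=0.05, rng=None):
    """Bare-bones GA over predictor-inclusion chromosomes: tournament selection,
    one-point crossover, bit-flip mutation."""
    rng = np.random.default_rng(rng)
    p = X.shape[1]
    popn = rng.integers(0, 2, size=(pop, p))
    for _ in range(gens):
        fit = np.array([fitness(ind, X, y) for ind in popn])
        new = []
        for _ in range(pop):
            i, j = rng.integers(pop, size=2)            # tournament of size 2
            a = popn[i] if fit[i] < fit[j] else popn[j]
            k, l = rng.integers(pop, size=2)
            b = popn[k] if fit[k] < fit[l] else popn[l]
            cut = int(rng.integers(1, p))               # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= (rng.random(p) < p_mut)            # bit-flip mutation
            new.append(child)
        popn = np.array(new)
    fit = np.array([fitness(ind, X, y) for ind in popn])
    return popn[fit.argmin()]

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 8))                           # stand-in for lagged FX indicators
y = 0.6 * X[:, 1] - 0.4 * X[:, 5] + rng.normal(scale=0.5, size=300)
print(ga_select(X, y, rng=8))                           # best inclusion vector found
```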

18.
We present a new approach that enables investors to seek a reasonably robust policy for portfolio selection in the presence of rare but high-impact realization of moment uncertainty. In practice, portfolio managers face difficulty in seeking a balance between relying on their knowledge of a reference financial model and taking into account possible ambiguity of the model. Based on the concept of Distributionally Robust Optimization (DRO), we introduce a new penalty framework that provides investors flexibility to define prior reference models using the distributional information of the first two moments and accounts for model ambiguity in terms of extreme moment uncertainty. We show that in our approach a globally optimal portfolio can in general be obtained in a computationally tractable manner. We also show that for a wide range of specifications our proposed model can be recast as semidefinite programs. Computational experiments show that our penalized moment-based approach outperforms classical DRO approaches in terms of both average and downside-risk performance using historical data.

19.
The selection of an optimal ensemble of classifiers in the multiple-classifier selection technique is undecidable in many cases and is potentially subject to trial-and-error search. This paper introduces a quantitative meta-learning approach based on neural networks and rough set theory for selecting the best predictive model. The approach depends directly on characteristic meta-features of the input data sets: the degree of discreteness and the distribution of the features, the fuzziness of these features relative to the target class labels, and the correlation and covariance between the different features. Experiments considering these criteria are applied to twenty-nine data sets using different classification techniques, including support vector machines, decision tables and Bayesian belief models. The measures of these criteria, together with the best-performing classification technique for each data set, are used to build a meta data set. The role of the neural network is to perform a black-box prediction of the optimal (best fitting) classification technique, while rough set theory generates the decision rules that control this prediction. Finally, formal concept analysis is applied to visualize the generated rules.
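A few of the listed meta-features can be computed directly; the sketch below shows simple illustrative versions of discreteness, class entropy and average feature correlation. The precise definitions used in the paper (and its fuzziness measure) may differ.

```python
import numpy as np

def meta_features(X, y):
    """Simple illustrative meta-features of a labelled data set: average
    discreteness of the features, entropy of the class labels, and the mean
    absolute pairwise correlation between features."""
    n, p = X.shape
    discreteness = np.mean([len(np.unique(X[:, j])) / n for j in range(p)])
    _, counts = np.unique(y, return_counts=True)
    probs = counts / counts.sum()
    class_entropy = -np.sum(probs * np.log2(probs))
    corr = np.corrcoef(X, rowvar=False)
    avg_abs_corr = np.mean(np.abs(corr[np.triu_indices(p, k=1)]))
    return {"discreteness": discreteness,
            "class_entropy": class_entropy,
            "avg_abs_corr": avg_abs_corr}

rng = np.random.default_rng(9)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(meta_features(X, y))   # one row of a meta data set for the meta-learner
```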

20.
This paper introduces a novel methodology for the global optimization of general constrained grey-box problems. A grey-box problem may contain a combination of black-box constraints and constraints with a known functional form. The novel features of this work include (i) the selection of initial samples through a subset selection optimization problem from a large number of faster low-fidelity model samples (when a low-fidelity model is available), (ii) the exploration of a diverse set of interpolating and non-interpolating functional forms for representing the objective function and each of the constraints, (iii) the global optimization of the parameter estimation of surrogate functions and the global optimization of the constrained grey-box formulation, and (iv) the updating of variable bounds based on a clustering technique. The performance of the algorithm is presented for a set of case studies representing an expensive non-linear algebraic partial differential equation simulation of a pressure swing adsorption system for CO2. We address three significant sources of variability and their effects on the consistency and reliability of the algorithm: (i) the initial sampling variability, (ii) the type of surrogate function, and (iii) global versus local optimization of the surrogate function parameter estimation and overall surrogate constrained grey-box problem. It is shown that globally optimizing the parameters in the parameter estimation model and globally optimizing the constrained grey-box formulation have a significant impact on the performance. The effect of sampling variability is mitigated by a two-stage sampling approach which exploits information from reduced-order models. Finally, the proposed global optimization approach is compared to existing constrained derivative-free optimization algorithms.
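A compressed sketch of the surrogate idea: sample the expensive black box, fit one surrogate form (an RBF here, chosen arbitrarily as one of the "diverse set" of functional forms mentioned), and globally optimise the cheap surrogate. Constraints, bound updating and the two-stage low-fidelity sampling from the paper are omitted; the black-box function is a stand-in.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

def expensive_black_box(x):
    """Stand-in for an expensive simulation response (e.g. a PSA model output)."""
    return np.sin(3 * x[0]) + (x[1] - 0.5) ** 2 + 0.1 * x[0] * x[1]

# Stage 1: sample the black box and fit a surrogate to the samples.
rng = np.random.default_rng(10)
X = rng.uniform(0.0, 1.0, size=(40, 2))
f = np.array([expensive_black_box(x) for x in X])
surrogate = RBFInterpolator(X, f, smoothing=1e-6)

# Stage 2: globally optimise the cheap surrogate instead of the simulation.
res = differential_evolution(lambda x: float(surrogate(x[None, :])[0]),
                             bounds=[(0.0, 1.0), (0.0, 1.0)], seed=0)
print(res.x, res.fun)
```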
