Similar Literature
A total of 20 similar documents were retrieved.
1.
Andrea Saltelli  Michaela Saisana 《PAMM》2007,7(1):2140013-2140014
Global sensitivity analysis offers a set of tools tailored to assessing the impact of certain assumptions on a model's output. A recent book on the topic covers these issues [1]. Given the limited space for discussing any of these methods thoroughly, we summarize the main conclusions that derive from the application of various global sensitivity analysis methods to chemical models [2], econometric studies [3], financial models [4] and composite indicators [5, 6]. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
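As an illustration of what a variance-based global sensitivity analysis involves (the abstract only names the application areas), the sketch below estimates first-order Sobol indices with a Saltelli-type Monte Carlo sampling scheme. It is not the authors' code; the Ishigami-style test function, the sample sizes and the uniform input ranges are assumptions chosen purely for demonstration.

```python
import numpy as np

def sobol_first_order(model, d, n=4096, seed=None):
    """Monte Carlo estimate of first-order Sobol indices via Saltelli-type sampling.

    model : vectorized function taking an (n, d) array of U(0, 1) inputs
    d     : number of independent input factors
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    total_var = np.var(np.concatenate([fA, fB]), ddof=1)
    S1 = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                              # column i replaced by the B sample
        fABi = model(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / total_var    # Saltelli (2010) estimator
    return S1

# Ishigami-style test function, rescaled from [0, 1]^3 to [-pi, pi]^3
def ishigami(X):
    X = -np.pi + 2.0 * np.pi * X
    return np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1])**2 + 0.1 * X[:, 2]**4 * np.sin(X[:, 0])

print(sobol_first_order(ishigami, d=3, n=8192, seed=0))
```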

2.
We assessed the ability of several penalized regression methods for linear and logistic models to identify outcome-associated predictors, and the impact of predictor selection on parameter inference for practical sample sizes. We studied effect estimates obtained directly from penalized methods (Algorithm 1), or by refitting selected predictors with standard regression (Algorithm 2). For linear models, penalized linear regression, elastic net, smoothly clipped absolute deviation (SCAD), least angle regression and LASSO had low false negative (FN) predictor selection rates but false positive (FP) rates above 20 % for all sample and effect sizes. Partial least squares regression had few FPs but many FNs. Only relaxo had low FP and FN rates. For logistic models, LASSO and penalized logistic regression had many FPs and few FNs for all sample and effect sizes. SCAD and adaptive logistic regression had low or moderate FP rates but many FNs. 95 % confidence interval coverage of predictors with null effects was approximately 100 % for Algorithm 1 for all methods, and 95 % for Algorithm 2 for large sample and effect sizes. Coverage was low only for penalized partial least squares (linear regression). For outcome-associated predictors, coverage was close to 95 % for Algorithm 2 for large sample and effect sizes for all methods except penalized partial least squares and penalized logistic regression. Coverage was sub-nominal for Algorithm 1. In conclusion, many methods performed comparably, and while Algorithm 2 is preferred to Algorithm 1 for estimation, it yields valid inference only for large effect and sample sizes.
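To make the distinction between the two algorithms concrete, here is a minimal sketch of the "select, then refit" strategy (Algorithm 2): LASSO chooses the predictors, and an ordinary least-squares refit on the selected columns supplies estimates and textbook confidence intervals. The simulated data, sample size and use of sklearn's LassoCV are assumptions, and the intervals deliberately ignore the selection step, which is exactly the source of the coverage problems the study quantifies.

```python
import numpy as np
from scipy.stats import t
from sklearn.linear_model import LassoCV

def select_then_refit(X, y, alpha=0.05):
    """'Algorithm 2' sketch: LASSO selects predictors, then an ordinary
    least-squares refit on the selected columns gives estimates and
    naive (selection-ignoring) confidence intervals."""
    selected = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_ != 0)
    Xs = np.column_stack([np.ones(len(y)), X[:, selected]])     # intercept + selected
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    dof = len(y) - Xs.shape[1]
    cov = (resid @ resid / dof) * np.linalg.inv(Xs.T @ Xs)
    half = t.ppf(1 - alpha / 2, dof) * np.sqrt(np.diag(cov))
    return selected, beta, np.column_stack([beta - half, beta + half])

# Toy data: 5 true signals among 50 candidate predictors
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = X[:, :5] @ np.array([1.5, -1.0, 0.8, 0.5, -0.7]) + rng.standard_normal(200)
selected, beta, ci = select_then_refit(X, y)
print("selected predictors:", selected)
```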

3.
Classification Ratemaking Models in Non-Life Insurance and Their Parameter Estimation
In non-life insurance classification ratemaking, a variety of models are available, such as additive models, multiplicative models, mixed models and generalized linear models, and for estimating the parameters of these models there are in turn various methods, such as least squares, maximum likelihood, minimum chi-square, the direct method and the method of marginal totals. These models and estimation methods are scattered across the actuarial literature. This paper systematically compares and analyses them and reveals several equivalence relations that exist among them.
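The method of marginal totals mentioned above has a particularly compact form for a two-factor multiplicative tariff: the relativities are updated alternately until the fitted and observed marginal claim totals agree. The numpy sketch below is only an illustration on invented toy data, not an example from the paper; note that the factors are determined only up to a common scale, so in practice one of them would be normalized.

```python
import numpy as np

def marginal_totals(claims, exposure, n_iter=200, tol=1e-10):
    """Method of marginal totals for a two-factor multiplicative tariff:
    fitted claims = exposure * f_i * g_j, with f and g chosen so that fitted
    and observed claim totals agree on every row and column margin."""
    f = np.ones(claims.shape[0])
    g = np.ones(claims.shape[1])
    for _ in range(n_iter):
        f_new = claims.sum(axis=1) / (exposure * g).sum(axis=1)        # row factors
        g_new = claims.sum(axis=0) / (exposure.T * f_new).sum(axis=1)  # column factors
        if np.max(np.abs(f_new - f)) + np.max(np.abs(g_new - g)) < tol:
            f, g = f_new, g_new
            break
        f, g = f_new, g_new
    return f, g

# Invented 2x3 tariff cells (rows: driver class, columns: vehicle class)
claims   = np.array([[120.0,  80.0,  40.0],
                     [200.0, 150.0,  90.0]])
exposure = np.array([[100.0,  90.0,  60.0],
                     [110.0, 100.0,  95.0]])
f, g = marginal_totals(claims, exposure)
print("row factors:", f.round(4), "column factors:", g.round(4))
```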

4.
In this article, we deal with sparse high-dimensional multivariate regression models. The models distinguish themselves from ordinary multivariate regression models in two aspects: (1) the dimension of the response vector and the number of covariates diverge to infinity; (2) the nonzero entries of the coefficient matrix and the precision matrix are sparse. We develop a two-stage sequential conditional selection (TSCS) approach to the identification and estimation of the nonzeros of the coefficient matrix and the precision matrix. It is established that TSCS is selection consistent for the identification of the nonzeros of both the coefficient matrix and the precision matrix. Simulation studies are carried out to compare TSCS with existing state-of-the-art methods, and they demonstrate that the TSCS approach outperforms the existing methods. As an illustration, the TSCS approach is also applied to a real dataset.

5.
Dimension reduction is a well-known pre-processing step in text clustering that removes irrelevant, redundant and noisy features without sacrificing the performance of the underlying algorithm. Dimension reduction methods are primarily classified as feature selection (FS) methods and feature extraction (FE) methods. Though FS methods are robust against irrelevant features, they occasionally fail to retain important information present in the original feature space. On the other hand, though FE methods reduce dimensions in the feature space without losing much information, they are significantly affected by irrelevant features. The one-stage models (FS or FE methods) and the two-stage models (a combination of FS and FE methods) proposed in the literature are not sufficient to fulfil all the above-mentioned requirements of dimension reduction. Therefore, we propose three-stage dimension reduction models that remove irrelevant, redundant and noisy features from the original feature space without losing much valuable information. These models incorporate the advantages of the FS and FE methods to create a low-dimensional feature subspace. Experiments over three well-known benchmark text datasets with different characteristics show that the proposed three-stage models significantly improve the performance of the clustering algorithm as measured by micro F-score, macro F-score, and total execution time.
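Purely to illustrate the FS-then-FE idea described above (the paper's specific three-stage models are not spelled out in the abstract), here is a hypothetical sklearn pipeline on a toy corpus: an unsupervised term-variance selection stage followed by a truncated-SVD extraction stage and k-means clustering. The corpus, the variance-based selector and all thresholds are assumptions, not the authors' method.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

# Toy corpus with two obvious topics, standing in for a benchmark text collection
docs = [
    "the striker scored a late goal in the football match",
    "the goalkeeper saved a penalty during the cup match",
    "the team won the league after a dramatic final game",
    "the central bank raised interest rates to curb inflation",
    "stock markets fell sharply after the inflation report",
    "the bank announced new interest rates for savings accounts",
]

# Stage 1: vectorize, dropping stop words and overly common terms
X = TfidfVectorizer(stop_words="english", max_df=0.9).fit_transform(docs)

# Stage 2 (feature selection): keep the terms with the highest variance
var = np.asarray(X.power(2).mean(axis=0) - np.square(X.mean(axis=0))).ravel()
X_fs = X[:, np.argsort(var)[::-1][:20]]

# Stage 3 (feature extraction): project the selected terms to a low-dimensional space
X_low = TruncatedSVD(n_components=2, random_state=0).fit_transform(X_fs)

print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_low))
```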

6.
Recently developed small-sample asymptotics provide nearly exact inference for parametric statistical models. One approach is via approximate conditional and marginal inference, respectively, in multiparameter exponential families and regression-scale models. Although the theory is well developed, these methods are under-used in practical work. This article presents a set of S-Plus routines for approximate conditional inference in logistic and loglinear regression models. It represents the first step of a project to create a library for small-sample inference that will include methods for some of the most widely used statistical models. Details of how the methods have been implemented are discussed. An example illustrates the code.

7.
C. Kuhn  B. Eidel 《PAMM》2007,7(1):2090019-2090020
For the numerical treatment of inelastic material behavior within the finite element method, a partitioned ansatz is standard in most software frameworks: the weak form of equilibrium is discretized in space and solved on a global level, whereas the initial value problem for the evolution equations of the internal state variables is solved separately on a local, i.e. Gauss-point, level, where strains derived from the global displacements serve as input [1]. When higher-order methods (p > 2) are applied to the time integration of plasticity models, an order reduction is reported: Runge-Kutta schemes have shown hardly more than order two at best [2, 3]. In the present contribution, we analyze the reason for this order reduction and, in doing so, introduce an improved strain approximation and switching-point detection, which play a crucial role for the convergence order of multi-stage methods used in this context. We apply Runge-Kutta methods of the Radau IIa class to the evolution equations of viscoelastic and elastoplastic material models and show their improved performance in numerical examples. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
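As a small illustration of the local, Gauss-point-level time integration described above, the sketch below integrates the evolution equation of a single Maxwell (viscoelastic) element with SciPy's "Radau" solver, a Radau IIA method of order 5, with the driving strain supplied as given data. The material parameters and strain history are invented; the strain-approximation and switching-point issues analyzed in the paper are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical Maxwell-element parameters (illustrative, not from the paper)
E, eta = 200.0, 50.0                   # spring stiffness and dashpot viscosity

def strain(t):                         # prescribed driving strain history
    return 1e-3 * t

def evolution(t, eps_v):
    # Local evolution equation for the viscous strain of one Maxwell element;
    # the total strain enters as given data, as in the partitioned setting.
    return (E / eta) * (strain(t) - eps_v)

sol = solve_ivp(evolution, (0.0, 10.0), y0=[0.0], method="Radau",
                rtol=1e-10, atol=1e-12, dense_output=True)
t = np.linspace(0.0, 10.0, 5)
print("viscous strain:", sol.sol(t).ravel())
```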

8.
Using methods of perturbation theory, some applied models of interactions of strongly nonhomogeneous layered bases of the following forms are constructed: a three-layered stack that lies on an absolutely rigid base and is composed of smooth (Problem 1) and linked (Problem 2) layers; and two smooth layers that lie on an elastic half-space. Applied models of deformation for different relations of the elastic parameters of the layers are obtained. Translated from Dinamicheskie Sistemy, No. 8, pp. 37–40, 1989.

9.
10.
Digital inpainting is a fundamental problem in image processing, and many variational models for this problem have appeared recently in the literature. Among them are the very successful Total Variation (TV) model [11], designed for local inpainting, and its improved version for large-scale inpainting: the Curvature-Driven Diffusion (CDD) model [10]. For the above two models, the associated Euler-Lagrange equations are highly nonlinear partial differential equations. For the TV model there exists a relatively fast and easy-to-implement fixed-point method, so adapting the multigrid method of [24] to it is immediate. For the CDD model, however, so far only the well-known but usually very slow explicit time-marching method has been reported, and we explain why the implementation of a fixed-point method for the CDD model is not straightforward. Consequently the multigrid method as in [Savage and Chen, Int. J. Comput. Math., 82 (2005), pp. 1001-1015] will not work here. This fact represents a strong limitation to the range of applications of this model, since fast solutions are usually expected. In this paper, we introduce a modification designed to enable a fixed-point method to work while preserving the features of the original CDD model. As a result, a fast and efficient multigrid method is developed for the modified model. Numerical experiments are presented to show the very good performance of the fast algorithm.
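The fixed-point and multigrid solvers discussed above are beyond a short excerpt, but the TV inpainting flow itself is easy to illustrate. The following is a minimal explicit time-marching sketch on a synthetic image, i.e. the slow baseline that the fast methods are meant to replace; the step size, the smoothing parameter eps and the toy image are assumptions chosen only so that the iteration stays stable.

```python
import numpy as np

def tv_inpaint(f, mask, n_iter=500, dt=0.01, eps=0.1):
    """Explicit time marching for TV inpainting: evolve
    u_t = div( grad u / |grad u| ) on the missing region (mask == True)
    while keeping the known pixels fixed. Slow, but simple."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        mag = np.sqrt(ux**2 + uy**2 + eps**2)           # regularized |grad u|
        div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
        u[mask] += dt * div[mask]                       # update missing pixels only
    return u

# Toy example: a smooth ramp image with a square hole to be filled in
ramp = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
mask = np.zeros_like(ramp, dtype=bool)
mask[24:40, 24:40] = True
damaged = ramp.copy()
damaged[mask] = 0.0
restored = tv_inpaint(damaged, mask)
print("max error inside the hole:", np.abs(restored - ramp)[mask].max().round(4))
```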

11.
A type-2 fuzzy variable is a map from a fuzzy possibility space to the real number space; it is an appropriate tool for describing type-2 fuzziness. This paper first presents three kinds of critical values (CVs) for a regular fuzzy variable (RFV), and proposes three novel methods of reduction for a type-2 fuzzy variable. Secondly, this paper applies the reduction methods to data envelopment analysis (DEA) models with type-2 fuzzy inputs and outputs, and develops a new class of generalized credibility DEA models. According to the properties of generalized credibility, when the inputs and outputs are mutually independent type-2 triangular fuzzy variables, we can turn the proposed fuzzy DEA model into its equivalent parametric programming problem, in which the parameters can be used to characterize the degree of uncertainty about type-2 fuzziness. For any given parameters, the parametric programming model becomes a linear programming one that can be solved using standard optimization solvers. Finally, one numerical example is provided to illustrate the modeling idea and the efficiency of the proposed DEA model.
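For fixed parameters, the paper's generalized credibility DEA model reduces to an ordinary linear program. Purely to show what such a DEA linear program looks like in the crisp (non-fuzzy) case, here is a minimal input-oriented CCR sketch using scipy.optimize.linprog on invented data; it is not the paper's type-2 fuzzy formulation.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA efficiency scores via linear programming.
    X: (n_inputs, n_dmus), Y: (n_outputs, n_dmus); all data crisp."""
    m, n = X.shape
    s = Y.shape[0]
    scores = np.empty(n)
    for o in range(n):
        # decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.zeros(1 + n)
        c[0] = 1.0                                     # minimize theta
        A_in = np.hstack([-X[:, [o]], X])              # sum_j lam_j x_ij <= theta * x_io
        A_out = np.hstack([np.zeros((s, 1)), -Y])      # sum_j lam_j y_rj >= y_ro
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([np.zeros(m), -Y[:, o]]),
                      bounds=[(None, None)] + [(0, None)] * n,
                      method="highs")
        scores[o] = res.x[0]
    return scores

# Invented crisp data: 2 inputs, 1 output, 4 decision-making units
X = np.array([[4.0, 7.0, 8.0, 4.0],
              [3.0, 3.0, 1.0, 2.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
print(dea_ccr_input(X, Y).round(3))
```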

12.
Variational registration models are non-rigid, deformable imaging techniques for the accurate registration of two images. As with other models for inverse problems using Tikhonov regularization, they must have a suitably chosen regularization term as well as a data fitting term. One distinct feature of registration models is that their fitting term is always highly nonlinear, and this nonlinearity restricts the class of numerical methods that are applicable. This paper first reviews the current state-of-the-art numerical methods for such models and observes that the nonlinear fitting term is mostly ‘avoided’ in developing fast multigrid methods. It then proposes a unified approach for designing fixed-point-type smoothers for multigrid methods. The diffusion registration model (second-order equations) and a curvature model (fourth-order equations) are used to illustrate our robust methodology. Analysis of the proposed smoothers and comparisons to other methods are given. As expected of a multigrid method, being many orders of magnitude faster than the unilevel gradient descent approach, the proposed numerical approach delivers fast and accurate results for a range of synthetic and real test images.

13.
This study examines the paper of Cárdenas-Barrón, entitled "The derivation of EOQ/EPQ inventory models with two backorders costs using analytic geometry and algebra," published in Applied Mathematical Modelling in 2011, and points out that it contains questionable results. The analytic geometry and algebra applied by Cárdenas-Barrón [2] violate the arithmetic-geometric mean (AGM) rule mentioned by Cárdenas-Barrón [74]. Moreover, we point out that Sphicas [1] had already solved this kind of EOQ and EPQ model using algebraic methods.
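For context, the AGM (arithmetic-geometric mean) argument at issue is the classical calculus-free derivation of the basic EOQ. With annual demand D, ordering cost K and unit holding cost h (the textbook special case, not the two-backorder-cost model debated above), the total relevant cost is bounded below as follows:

```latex
TC(Q) \;=\; \frac{DK}{Q} + \frac{hQ}{2}
\;\ge\; 2\sqrt{\frac{DK}{Q}\cdot\frac{hQ}{2}}
\;=\; \sqrt{2DKh},
\qquad
\text{with equality iff } \frac{DK}{Q} = \frac{hQ}{2}
\;\Longleftrightarrow\;
Q^{*} = \sqrt{\frac{2DK}{h}} .
```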

14.
Hidden Markov models are used as tools for pattern recognition in a number of areas, ranging from speech processing to biological sequence analysis. Profile hidden Markov models represent a class of so-called “left–right” models that have an architecture that is specifically relevant to classification of proteins into structural families based on their amino acid sequences. Standard learning methods for such models employ a variety of heuristics applied to the expectation-maximization implementation of the maximum likelihood estimation procedure in order to find the global maximum of the likelihood function. Here, we compare maximum likelihood estimation to fully Bayesian estimation of parameters for profile hidden Markov models with a small number of parameters. We find that, relative to maximum likelihood methods, Bayesian methods assign higher scores to data sequences that are distantly related to the pattern consensus, show better performance in classifying these sequences correctly, and continue to perform robustly with regard to misspecification of the number of model parameters. Though our study is limited in scope, we expect our results to remain relevant for models with a large number of parameters and other types of left–right hidden Markov models.
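Profile HMM architectures are too large to reproduce here, but the quantity that both the maximum likelihood and the Bayesian treatments work with, the sequence likelihood, is computed by the forward algorithm. Below is a minimal scaled forward-algorithm sketch for a tiny discrete HMM whose states, symbols and parameters are invented for illustration.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a symbol sequence under a discrete HMM, computed with
    the scaled forward algorithm (the quantity maximized in ML training and
    integrated over the prior in a Bayesian treatment)."""
    alpha = pi * B[:, obs[0]]
    loglik = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()
        loglik += np.log(c)
        alpha = (alpha / c) @ A * B[:, obs[t]]   # scale, propagate, emit
    return loglik + np.log(alpha.sum())

# Invented 2-state, 3-symbol model (parameters are for illustration only)
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3],
               [0.2, 0.8]])
B  = np.array([[0.5, 0.4, 0.1],
               [0.1, 0.3, 0.6]])
print(forward_loglik([0, 1, 2, 2, 1, 0], pi, A, B))
```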

15.
16.
We briefly describe the theory of root transfer matrices for four-line models with a field in the new indexless form. We use theoretical and numerical methods to reveal new effects in the theory of singular points and phase transitions. A substantial part of the results is obtained using a numerical algorithm that drastically (at least exponentially) reduces the computational complexity of Ising-type models by exploiting the extremely sparse root transfer matrix. Translated from Teoreticheskaya i Matematicheskaya Fizika, Vol. 149, No. 2, pp. 281–298, November, 2006.
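The root-transfer-matrix machinery of the paper is considerably more elaborate, but the basic transfer-matrix idea it builds on can be shown in a few lines: for the 1D Ising chain in a field, the free energy per spin follows from the largest eigenvalue of a 2x2 transfer matrix. The parameter values below are arbitrary.

```python
import numpy as np

def ising_free_energy(beta, J=1.0, h=0.0):
    """Free energy per spin of the 1D Ising chain in a field, from the largest
    eigenvalue of the 2x2 transfer matrix: f = -log(lambda_max) / beta."""
    T = np.array([[np.exp(beta * (J + h)), np.exp(-beta * J)],
                  [np.exp(-beta * J),      np.exp(beta * (J - h))]])
    lam_max = np.linalg.eigvalsh(T).max()      # T is symmetric
    return -np.log(lam_max) / beta

print(ising_free_energy(beta=1.0, J=1.0, h=0.2))
```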

17.
In actuarial practice, regression models serve as a popular statistical tool for analyzing insurance data and for tariff ratemaking. In this paper, we consider classical credibility models that can be embedded within the framework of mixed linear models. For inference about fixed effects and variance components, likelihood-based methods such as (restricted) maximum likelihood estimators are commonly pursued. However, it is well known that these standard and fully efficient estimators are extremely sensitive to small deviations from the hypothesized normality of the random components, as well as to the occurrence of outliers. To obtain better estimators for premium calculation and for the prediction of future claims, various robust methods have been successfully adapted to credibility theory in the actuarial literature. The objective of this work is to develop robust and efficient methods for credibility when heavy-tailed claims are approximately log-location-scale distributed. To accomplish that, we first show how to express additive credibility models, such as the Bühlmann-Straub and Hachemeister ones, as mixed linear models with symmetric or asymmetric errors. Then, we adjust adaptively truncated likelihood methods and compute highly robust credibility estimates for the ordinary but heavy-tailed part of the claims. Finally, we treat the identified excess claims separately and find robust-efficient credibility premiums. Practical performance of this approach is examined, via simulations, under several contamination scenarios. A widely studied real-data set from workers’ compensation insurance is used to illustrate the functional capabilities of the new robust credibility estimators.
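As a point of reference for the robust estimators developed in the paper, the sketch below implements the classical (non-robust) Bühlmann-Straub credibility premiums with the standard nonparametric variance estimators; the three-contract portfolio is hypothetical data invented for illustration.

```python
import numpy as np

def buhlmann_straub(X, W):
    """Classical (non-robust) Bühlmann-Straub credibility premiums.

    X, W : (n_contracts, n_periods) arrays of loss ratios and volume weights."""
    I, T = X.shape
    w_i = W.sum(axis=1)
    Xbar_i = (W * X).sum(axis=1) / w_i                        # contract means
    w = w_i.sum()
    Xbar = (w_i * Xbar_i).sum() / w                           # overall weighted mean
    s2 = (W * (X - Xbar_i[:, None])**2).sum() / (I * (T - 1))              # within
    a = ((w_i * (Xbar_i - Xbar)**2).sum() - (I - 1) * s2) / (w - (w_i**2).sum() / w)
    a = max(a, 0.0)                                           # truncate if negative
    Z = w_i * a / (w_i * a + s2) if a > 0 else np.zeros(I)    # credibility factors
    mu = (Z * Xbar_i).sum() / Z.sum() if a > 0 else Xbar      # credibility-weighted mean
    return Z * Xbar_i + (1.0 - Z) * mu, Z

# Hypothetical portfolio: 3 contracts observed over 4 periods
X = np.array([[0.90, 1.10, 0.95, 1.05],
              [1.40, 1.60, 1.55, 1.45],
              [0.70, 0.80, 0.75, 0.85]])
W = np.array([[ 80.0, 100.0,  90.0,  95.0],
              [ 40.0,  45.0,  50.0,  55.0],
              [120.0, 130.0, 125.0, 135.0]])
premiums, Z = buhlmann_straub(X, W)
print("credibility factors:", Z.round(3))
print("credibility premiums:", premiums.round(3))
```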

18.
This paper uses data envelopment analysis (DEA) to assess the operational effectiveness of the UK Coastguard (Maritime Rescue) coordination centres over the period 1995–1998. Based on the development of a performance measurement framework that is considerably more realistic and complex than the one apparently used by the Government, the main grounds for the latter's decision—confirmed in 1999—to close the Oban, Pentland and Tyne Tees coordination centres are called into question. The paper aims to contribute to the relevant academic literature in two ways: (1) by using formal analysis methods to measure the operational performance of a vital government-supplied service where such methods have not been applied before; and (2) by demonstrating how the results of suitable regression models can be used to inform the specification of appropriate DEA models, particularly with respect to the incorporation of relevant environmental factors.

19.
We introduce discrete automaton models of gene networks with weight functions of vertices that account for the various forms of regulatory interaction among agents. We study the discrete mapping that describes the operation of a fragment of the gene network of the bacterium E. coli. For this mapping, we find its fixed points (stationary states) using the SAT approach. We also study the mappings defined by random graphs of the network, which we generate in accordance with the Gilbert-Erdős-Rényi and Watts-Strogatz models. For these mappings, we find the fixed points and the cycles of lengths 2 and 3. This article can be regarded as a survey of our results on discrete models of gene networks and the numerical methods for studying their operation.
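The paper relies on a SAT solver to cope with realistic network sizes; for a toy Boolean network the fixed points and short cycles can simply be enumerated, as in the sketch below. The three-gene rule set is invented for illustration and is not the E. coli fragment studied in the paper.

```python
from itertools import product

# Invented 3-gene Boolean network: each gene's next state is a Boolean
# function (weightless here, for simplicity) of the current state vector.
rules = [
    lambda s: s[1] and not s[2],   # gene 0: activated by gene 1, repressed by gene 2
    lambda s: s[0] or s[2],        # gene 1: activated by gene 0 or gene 2
    lambda s: not s[0],            # gene 2: repressed by gene 0
]

def step(state):
    """One synchronous update of the whole network."""
    return tuple(int(rule(state)) for rule in rules)

def iterate(state, k):
    for _ in range(k):
        state = step(state)
    return state

states = list(product((0, 1), repeat=len(rules)))

# Fixed points: states mapped to themselves by one update
print("fixed points:", [s for s in states if step(s) == s])

# States lying on cycles of length 2 and 3 (fixed points excluded)
for k in (2, 3):
    cyc = [s for s in states if iterate(s, k) == s and step(s) != s]
    print(f"states on cycles of length {k}:", cyc)
```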

20.
High-order variational models are powerful methods for image processing and analysis, but they can lead to complicated high-order nonlinear partial differential equations that are difficult to discretise and solve computationally. In this paper, we present some representative high-order variational models, provide detailed discretisations of these models, and describe a numerical implementation of the split Bregman algorithm for solving them using the fast Fourier transform. We demonstrate the advantages and disadvantages of these high-order models in the context of image denoising through extensive experiments. The methods and techniques can also be used for other applications, such as image decomposition, inpainting and segmentation. Copyright © 2016 John Wiley & Sons, Ltd.
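The split Bregman iteration itself is too long to reproduce here, but its key ingredient for high-order models, an FFT solve of a screened linear subproblem with a squared-Laplacian term, can be shown compactly. The sketch below solves a purely quadratic high-order model in one FFT solve (so no Bregman updates are needed); the regularization weight and the toy image are assumptions.

```python
import numpy as np

def high_order_quadratic_denoise(f, lam=10.0):
    """Solve min_u 0.5*||u - f||^2 + 0.5*lam*||Lap u||^2 with one FFT solve.
    The same kind of screened linear system (I + lam * Lap^2) u = rhs appears
    as the u-subproblem of an FFT-based split Bregman iteration for high-order
    models; here the model is purely quadratic, so a single solve suffices."""
    M, N = f.shape
    k1 = np.arange(M)[:, None]
    k2 = np.arange(N)[None, :]
    # DFT symbol of the 5-point discrete Laplacian with periodic boundaries
    lap = 2.0 * np.cos(2.0 * np.pi * k1 / M) + 2.0 * np.cos(2.0 * np.pi * k2 / N) - 4.0
    u_hat = np.fft.fft2(f) / (1.0 + lam * lap**2)
    return np.real(np.fft.ifft2(u_hat))

# Toy example: a smooth image corrupted by Gaussian noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 128)
clean = np.sin(x)[:, None] * np.cos(x)[None, :]
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = high_order_quadratic_denoise(noisy, lam=10.0)
print("residual std before/after:",
      np.std(noisy - clean).round(3), np.std(denoised - clean).round(3))
```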
