Similar Literature (20 results)
1.
This paper considers clustered doubly-censored data that occur when there exist several correlated survival times of interest and only doubly censored data are available for each survival time. In this situation, one approach is to model the marginal distribution of failure times using semiparametric linear transformation models while leaving the dependence structure completely arbitrary. We demonstrate that the approach of Cai et al. (Biometrika 87:867–878, 2000) can be extended to clustered doubly censored data. We propose two estimators by using two different estimated censoring weights. A simulation study is conducted to investigate the proposed estimators.

2.
This article is concerned with the problem of the alignment of multiple sets of curves. We analyze two real examples arising from the biomedical area for which we need to test whether there are any statistically significant differences between two subsets of subjects. To synchronize a set of curves, we propose a new nonparametric landmark-based registration method based on the alignment of the structural intensity of the zero-crossings of a wavelet transform. The structural intensity is a recently proposed multiscale technique that highlights the main features of a signal observed with noise. We conduct a simulation study to compare our landmark-based registration approach with some existing methods for curve alignment. For the two real examples, we compare the registered curves with FANOVA techniques, and a detailed analysis of the warping functions is provided.

3.
We consider the use of B-spline nonparametric regression models estimated by the maximum penalized likelihood method for extracting information from data with complex nonlinear structure. Crucial points in B-spline smoothing are the choices of a smoothing parameter and the number of basis functions, for which several selectors have been proposed based on cross-validation and the Akaike information criterion (AIC). It should be noted, however, that AIC is a criterion for evaluating models estimated by the maximum likelihood method, and it was derived under the assumption that the true distribution belongs to the specified parametric model. In this paper we derive information criteria for evaluating B-spline nonparametric regression models estimated by the maximum penalized likelihood method in the context of generalized linear models under model misspecification. We use Monte Carlo experiments and real data examples to examine the properties of our criteria, including various selectors proposed previously.
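As an illustration of the type of estimator and selector being discussed, here is a minimal sketch for Gaussian responses only, with an illustrative second-difference penalty in the spirit of P-splines and an AIC-type score based on effective degrees of freedom; this is not the criterion derived in the paper, and all data are simulated.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, n_basis, degree=3):
    """Evaluate an open-uniform B-spline basis on [min(x), max(x)]."""
    n_knots = n_basis + degree + 1
    inner = np.linspace(x.min(), x.max(), n_knots - 2 * degree)
    knots = np.r_[[inner[0]] * degree, inner, [inner[-1]] * degree]
    return BSpline(knots, np.eye(n_basis), degree)(x)        # shape (n, n_basis)

def penalized_fit(x, y, n_basis=20, lam=1.0):
    """Penalized least squares (Gaussian case) with a second-difference penalty."""
    B = bspline_basis(x, n_basis)
    D = np.diff(np.eye(n_basis), n=2, axis=0)                # second differences
    H = B.T @ B + lam * D.T @ D
    coef = np.linalg.solve(H, B.T @ y)
    fitted = B @ coef
    edf = np.trace(np.linalg.solve(H, B.T @ B))              # effective degrees of freedom
    n = len(y)
    rss = np.sum((y - fitted) ** 2)
    aic = n * np.log(rss / n) + 2 * edf                      # AIC-type score
    return fitted, aic

# choose the smoothing parameter by minimizing the AIC-type score over a grid
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=200)
best = min((penalized_fit(x, y, lam=l)[1], l) for l in 10.0 ** np.arange(-4, 3))
print("selected lambda:", best[1])
```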

4.
This work exploits links between Data Envelopment Analysis (DEA) and multicriteria decision analysis (MCDA), with decision making units (DMUs) playing the role of decision alternatives. A novel perspective is suggested on the use of the additive DEA model in order to overcome some of its shortcomings, using concepts from multiattribute utility models with imprecise information. The underlying idea is to convert input and output factors into utility functions that are aggregated using a weighted sum (additive model of multiattribute utility theory), and then let each DMU choose the weights associated with these functions that minimize the difference of utility to the best DMU. The resulting additive DEA model with oriented projections has a clear rationale for its efficiency measures, and allows meaningful introduction of constraints on factor weights.
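A minimal sketch of the underlying idea, assuming the input and output factors have already been converted into utilities in [0, 1]; the matrix U below is hypothetical illustrative data. For each DMU we choose nonnegative weights summing to one that minimize its utility gap to the best DMU, formulated as a small linear program.

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical utilities: rows = DMUs, columns = (already normalized) factors
U = np.array([[0.9, 0.4, 0.7],
              [0.6, 0.8, 0.5],
              [0.3, 0.9, 0.9]])
n_dmu, n_fac = U.shape

def min_gap_to_best(o):
    """min_w  t - U[o] @ w   s.t.  U[j] @ w <= t for all j,  sum(w) = 1,  w >= 0.

    Variables are (w_1..w_K, t); at the optimum t equals the best achievable
    weighted utility, so the objective is DMU o's smallest utility gap."""
    c = np.r_[-U[o], 1.0]                                    # minimize t - U[o] @ w
    A_ub = np.hstack([U, -np.ones((n_dmu, 1))])              # U[j] @ w - t <= 0
    b_ub = np.zeros(n_dmu)
    A_eq = np.r_[np.ones(n_fac), 0.0].reshape(1, -1)         # weights sum to one
    b_eq = [1.0]
    bounds = [(0, None)] * n_fac + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.fun, res.x[:n_fac]

for o in range(n_dmu):
    gap, w = min_gap_to_best(o)
    print(f"DMU {o}: utility gap to best = {gap:.3f}, weights = {np.round(w, 3)}")
```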

5.
Variational registration models are non-rigid and deformable imaging techniques for accurate registration of two images. As with other models for inverse problems using the Tikhonov regularization, they must have a suitably chosen regularization term as well as a data fitting term. One distinct feature of registration models is that their fitting term is always highly nonlinear and this nonlinearity restricts the class of numerical methods that are applicable. This paper first reviews the current state-of-the-art numerical methods for such models and observes that the nonlinear fitting term is mostly ‘avoided’ in developing fast multigrid methods. It then proposes a unified approach for designing fixed point type smoothers for multigrid methods. The diffusion registration model (second-order equations) and a curvature model (fourth-order equations) are used to illustrate our robust methodology. Analysis of the proposed smoothers and comparisons to other methods are given. As expected of a multigrid method, being many orders of magnitude faster than the unilevel gradient descent approach, the proposed numerical approach delivers fast and accurate results for a range of synthetic and real test images.

6.
We provide analytic pricing formulas for Fixed and Floating Range Accrual Notes within the multifactor Wishart affine framework, which significantly extends the standard affine model. Using estimates for three short-rate models, two of which are based on the Wishart process while the third belongs to the standard affine framework, we price these structured products using the FFT methodology. Thanks to the tractability of the Wishart process, the hedge ratios are also easily computed. As the models are estimated on the same dataset, our results illustrate how fit discrepancies (i.e., differences in the likelihood functions) between models translate into derivative pricing errors, and we show that the models can produce different price evolutions for the Range Accrual Notes. The differences can be substantial and underline the importance of model risk from both a static and a dynamic perspective. These results are confirmed by an analysis performed at the level of the hedge ratios.

7.
We study the numerical integration of functions depending on an infinite number of variables. We provide lower error bounds for general deterministic algorithms and provide matching upper error bounds with the help of suitable multilevel algorithms and changing-dimension algorithms. More precisely, the spaces of integrands we consider are weighted, reproducing kernel Hilbert spaces with norms induced by an underlying anchored function space decomposition. Here the weights model the relative importance of different groups of variables. The error criterion used is the deterministic worst-case error. We study two cost models for function evaluations that depend on the number of active variables of the chosen sample points, and we study two classes of weights, namely product and order-dependent weights and the newly introduced finite projective dimension weights. We show for these classes of weights that multilevel algorithms achieve the optimal rate of convergence in the first cost model while changing-dimension algorithms achieve the optimal convergence rate in the second model. As an illustrative example, we discuss the anchored Sobolev space with smoothness parameter \(\alpha \) and provide new optimal quasi-Monte Carlo multilevel algorithms and quasi-Monte Carlo changing-dimension algorithms based on higher-order polynomial lattice rules.

8.
Scoring rules are an important and much-debated subject in data envelopment analysis (DEA). Various organizations use voting systems whose main objective is to rank alternatives. In these methods, the ranks of alternatives are obtained from their associated weights. How to determine the ranks of alternatives from their weights is an important issue, and this problem has been addressed by several authors. We suggest a three-stage method for the ranking of alternatives. In the first stage, the rank position of each alternative is computed based on the best and worst weights in the optimistic and pessimistic cases, respectively. The vector of weights obtained in the first stage is not unique. Hence, to deal with this problem, a secondary goal is used in the second stage. In the third stage of our method, the ranks of the alternatives approach the optimistic or pessimistic case. Note that the model proposed in the third stage is a multi-criteria decision making (MCDM) model and there are several methods for solving it; we use the weighted sum method in this paper. The model is solved by mixed integer programming. We also obtain an interval for the rank of each alternative. Finally, we present two models based on the average of the ranks in the optimistic and pessimistic cases; the aim of these models is to compute the ranks using common weights.
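For the optimistic side of the first stage, a standard preference-voting DEA formulation can serve as an illustration (the paper's exact constraints may differ, and the vote matrix below is hypothetical): each alternative picks monotone non-increasing rank-position weights that maximize its own score while no alternative's score exceeds one, and sorting the resulting scores gives the optimistic rank positions.

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical vote profile: votes[i, j] = number of j-th place votes for alternative i
votes = np.array([[10, 4, 6],
                  [ 7, 9, 4],
                  [ 3, 7, 10]])
n_alt, n_pos = votes.shape

def optimistic_score(i):
    """max votes[i] @ w  s.t.  votes[l] @ w <= 1 for all l,
    w_1 >= w_2 >= ... >= w_k >= 0 (monotone rank-position weights)."""
    c = -votes[i].astype(float)                     # linprog minimizes
    A_ub = [votes[l].astype(float) for l in range(n_alt)]   # every score bounded by one
    b_ub = [1.0] * n_alt
    for j in range(n_pos - 1):                      # w_{j+1} - w_j <= 0
        row = np.zeros(n_pos)
        row[j], row[j + 1] = -1.0, 1.0
        A_ub.append(row)
        b_ub.append(0.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * n_pos)
    return -res.fun

scores = [optimistic_score(i) for i in range(n_alt)]
order = np.argsort(scores)[::-1]
print("optimistic scores:", np.round(scores, 3))
print("optimistic ranking (best first):", order.tolist())
```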

9.
Monitoring population trends in harbor seals (Phoca vitulina) generally involves two steps: (i) a census obtained from aerial surveys of haul‐out sites, and (ii) an upward correction based on the proportion of seals hauled out as estimated from a sample of telemetry‐tagged seals. Here we present a mathematical method for obtaining site‐specific correction factors without telemetry. The method also determines site‐specific environmental factors associated with haulout and provides algebraic equations that predict diurnal haul‐out numbers and correction factors as functions of these variables. We applied the method at a haul‐out site on Protection Island, Washington, USA. The haul‐out model and correction factor model were functions of tide height, current velocity, and time of day, and the haul‐out model explained 46% of the observed variability in diurnal haul‐out dynamics. Although the particular models are site‐specific, the general model and methods are portable. A suite of such models for haul‐out sites of a regional stock would allow managers to monitor long‐term population trends without telemetry.
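A minimal sketch of the general idea, not the authors' specific model: fit an illustrative logistic model for the probability of being hauled out as a function of tide height, current velocity, and time of day (all data below are simulated), then invert the predicted haul-out proportion into a correction factor for aerial counts.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# hypothetical covariates recorded at survey times
n = 500
tide = rng.normal(0, 1, n)            # standardized tide height
current = rng.normal(0, 1, n)         # standardized current velocity
hour = rng.uniform(0, 24, n)          # time of day
X = np.column_stack([tide, current,
                     np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24)])

# hypothetical seal-level outcome: 1 = hauled out, 0 = in the water
logit = 0.5 - 0.8 * tide - 0.4 * current + 0.6 * np.sin(2 * np.pi * hour / 24)
hauled = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

model = LogisticRegression().fit(X, hauled)

def correction_factor(tide_h, current_v, hr):
    """Total abundance = aerial count * correction factor = count / predicted haul-out proportion."""
    x = np.array([[tide_h, current_v,
                   np.sin(2 * np.pi * hr / 24), np.cos(2 * np.pi * hr / 24)]])
    p = model.predict_proba(x)[0, 1]
    return 1.0 / p

print("correction factor at high tide, slack current, noon:",
      round(correction_factor(1.5, 0.0, 12.0), 2))
```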

10.
This article proposes a probability model for k-dimensional ordinal outcomes, that is, it considers inference for data recorded in k-dimensional contingency tables with ordinal factors. The proposed approach is based on full posterior inference, assuming a flexible underlying prior probability model for the contingency table cell probabilities. We use a variation of the traditional multivariate probit model, with latent scores that determine the observed data. In our model, a mixture of normals prior replaces the usual single multivariate normal model for the latent variables. By augmenting the prior model to a mixture of normals we generalize inference in two important ways. First, we allow for varying local dependence structure across the contingency table. Second, inference in ordinal multivariate probit models is plagued by problems related to the choice and resampling of cutoffs defined for these latent variables. We show how the proposed mixture model approach entirely removes these problems. We illustrate the methodology with two examples, one simulated dataset and one dataset of interrater agreement.
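A minimal sketch of the data-generating side of such a model (hypothetical dimensions, cutoffs, and mixture components; the posterior inference developed in the paper is omitted): latent scores drawn from a mixture of multivariate normals are discretized by cutoffs into a k-dimensional ordinal contingency table, and the two mixture components carry different local dependence.

```python
import numpy as np

rng = np.random.default_rng(5)

# hypothetical 2-dimensional ordinal outcome (e.g., two raters, categories 1..4)
n = 300
cutoffs = np.array([-1.0, 0.0, 1.0])          # illustrative shared cutoffs per dimension

# latent scores from a two-component mixture of bivariate normals
means = np.array([[-0.8, -0.8], [0.8, 0.8]])
covs = np.array([[[1.0, 0.2], [0.2, 1.0]],
                 [[1.0, 0.7], [0.7, 1.0]]])   # the two components differ in local dependence
comp = rng.integers(0, 2, size=n)
Z = np.array([rng.multivariate_normal(means[c], covs[c]) for c in comp])

# observed ordinal data: category = 1 + number of cutoffs below the latent score
Y = 1 + (Z[:, :, None] > cutoffs[None, None, :]).sum(axis=2)

# 2-dimensional contingency table of the simulated outcomes
table = np.zeros((4, 4), dtype=int)
for a, b in Y:
    table[a - 1, b - 1] += 1
print(table)
```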

11.
We propose two nutrient-phytoplankton models, with instantaneous and with time-delayed nutrient recycling, investigate their dynamics, and examine their responses to model complexities. Instead of the familiar specific uptake-rate and growth-rate functions, we assume only that the nutrient uptake and phytoplankton growth rate functions are positive, increasing, and bounded above. We use geometrical and analytical methods to find conditions for the existence of no, one, or at most two positive steady states and analyze the stability properties of each of these equilibria. As the parameters vary, the system may lose its stability and bifurcation may occur. We study the occurrence of Hopf bifurcation and the possibility of stability switching. Numerical simulations illustrate the analytical results and provide further insight into the dynamics of the models, and biological interpretations are given.
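A minimal sketch of an instantaneous-recycling nutrient-phytoplankton model, using Monod-type functions as one example of the positive, increasing, bounded functions assumed in the paper; the parameter values are illustrative, and whether the trajectory settles to a steady state or keeps oscillating for a given parameter set can be inspected numerically.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Monod-type functions: positive, increasing, bounded above
uptake = lambda N: N / (1.0 + N)              # nutrient uptake per unit phytoplankton
growth = lambda N: 0.6 * N / (1.0 + N)        # growth rate (a fixed yield times uptake)

def np_model(t, y, N0=4.0, D=0.1, m=0.1, r=0.5):
    """Instantaneous recycling: a fraction r of phytoplankton losses returns as nutrient."""
    N, P = y
    dN = D * (N0 - N) - uptake(N) * P + r * m * P
    dP = (growth(N) - D - m) * P
    return [dN, dP]

sol = solve_ivp(np_model, (0.0, 500.0), [2.0, 0.1], rtol=1e-8)
print(f"state at t=500: N={sol.y[0, -1]:.3f}, P={sol.y[1, -1]:.3f}")

# the amplitude of P over the tail of the run indicates whether the trajectory has
# settled to a steady state or still oscillates for this parameter set
tail = sol.y[1, sol.t > 400]
print("P amplitude over t>400:", round(float(tail.max() - tail.min()), 4))
```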

12.
Functional regression modeling via regularized Gaussian basis expansions
We consider the problem of constructing functional regression models for scalar responses and functional predictors, using Gaussian basis functions along with the technique of regularization. An advantage of our regularized Gaussian basis expansions for functional data analysis is that they provide a much more flexible instrument for transforming each individual's observations into functional form. In constructing functional regression models there remains the problem of how to determine the number of basis functions and an appropriate value of the regularization parameter. We present model selection criteria for evaluating models estimated by the method of regularization in the context of functional regression models. The proposed functional regression models are applied to Canadian temperature data, and Monte Carlo simulations are conducted to examine the efficiency of our modeling strategies. The simulation results show that the proposed procedure performs well, especially in terms of flexibility and stability of the estimates.
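A minimal sketch of the two ingredients, assuming curves observed on a common grid and using hypothetical data and regularization parameters: a regularized Gaussian-basis expansion of each functional predictor, followed by a ridge-type regularized fit of the scalar response on the basis coefficients.

```python
import numpy as np

rng = np.random.default_rng(2)
t_grid = np.linspace(0, 1, 100)          # common observation grid
n, K = 60, 12                            # number of curves, number of Gaussian basis functions

# Gaussian basis functions with centers spread over the domain
centers = np.linspace(0, 1, K)
width = centers[1] - centers[0]
Phi = np.exp(-0.5 * ((t_grid[:, None] - centers[None, :]) / width) ** 2)   # (100, K)

# hypothetical functional predictors and scalar responses
X_curves = rng.normal(size=(n, t_grid.size)).cumsum(axis=1) / 10
y = X_curves[:, :50].mean(axis=1) + rng.normal(scale=0.1, size=n)

# step 1: represent each curve by regularized Gaussian-basis coefficients
lam_smooth = 1e-3
C = np.linalg.solve(Phi.T @ Phi + lam_smooth * np.eye(K), Phi.T @ X_curves.T).T   # (n, K)

# step 2: ridge-regularized regression of the scalar response on the basis coefficients
lam_reg = 1e-2
Z = np.column_stack([np.ones(n), C])
beta = np.linalg.solve(Z.T @ Z + lam_reg * np.eye(K + 1), Z.T @ y)
print("fitted coefficients:", np.round(beta, 3))
```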

14.
Our experiment shows that dividing attributes in a value tree can either increase or decrease the weight of an attribute. Structural variation of a value tree may also change the rank of attributes. We propose that our new findings related to the splitting bias, some other phenomena that appear in attribute weighting in value trees, and the number-of-attribute-levels effect in conjoint analysis may have the same origins. One origin of these phenomena is that decision makers' responses mainly reflect the rank order of attributes and not, to the full extent that value theory assumes, the strength of their preferences. We call this the unadjustment phenomenon. A procedural source of biases is the normalization of attribute weights. One consequence of these two factors is that attribute weights change if attributes are divided in a value tree. We also discuss how biases in attribute weighting could be avoided in practice.

15.
Data Envelopment Analysis (DEA) is a linear programming-based technique used for measuring the relative performance of organizational units, referred to as Decision Making Units (DMUs). The flexibility in selecting the weights in standard DEA models hinders the comparison of DMUs on a common basis. Moreover, these weights are not suitable for measuring the preferences of a decision maker (DM). To deal with the first difficulty, the concept of common weights was proposed in the DEA literature, but none of the common-weights approaches addresses the second difficulty. This paper proposes an alternative approach, which we term ‘preference common weights’, that is both practical and intellectually consistent with the DEA philosophy. To do this, we introduce a multiple objective linear programming model whose objective functions are the input/output variables, subject to constraints similar to the equations that define the production possibility set of standard DEA models. By using the Zionts–Wallenius method, we can then generate common weights reflecting the DM's underlying value structure over the objective functions.

16.
There are often two important types of variation in functional data: the horizontal (or phase) variation and the vertical (or amplitude) variation. These two types of variation have been appropriately separated and modeled through a domain warping method (or curve registration) based on the Fisher–Rao metric. This article focuses on the analysis of the horizontal variation, captured by the domain warping functions. The square-root velocity function representation transforms the manifold of the warping functions to a Hilbert sphere. Motivated by recent results on manifold analogs of principal component analysis, we propose to analyze the horizontal variation via a principal nested spheres approach. Compared with earlier approaches, such as approximating tangent plane principal component analysis, this is seen to be an efficient and interpretable approach to decompose the horizontal variation in both simulated and real data examples.
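A minimal sketch of the representation step, under simplifying assumptions: a warping function γ on [0, 1] is mapped to its square-root velocity function sqrt(γ'), which has unit L2 norm and hence lies on the Hilbert sphere. For illustration we then run ordinary PCA in a tangent space at a projected mean, a simplified stand-in for the principal nested spheres analysis of the paper; the warping functions below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 201)

def srvf(gamma):
    """Square-root velocity function of a warping function: sqrt(gamma')."""
    dgamma = np.gradient(gamma, t)
    return np.sqrt(np.maximum(dgamma, 0.0))

def inner(f, g):
    return np.trapz(f * g, t)             # L2 inner product on the grid

# synthetic warping functions gamma_a(s) = s + a*sin(pi*s)/pi (increasing for |a| < 1)
warps = [t + a * np.sin(np.pi * t) / np.pi for a in rng.uniform(-0.6, 0.6, 30)]
Q = np.array([srvf(g) for g in warps])    # points on (a discretized) Hilbert sphere

# extrinsic mean projected back to the sphere (illustrative stand-in for a Karcher mean)
mu = Q.mean(axis=0)
mu /= np.sqrt(inner(mu, mu))

def log_map(q):
    """Inverse exponential map on the sphere at mu."""
    c = np.clip(inner(mu, q), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-10:
        return np.zeros_like(q)
    return (theta / np.sin(theta)) * (q - c * mu)

V = np.array([log_map(q) for q in Q])     # tangent-space coordinates
# ordinary PCA in the tangent space (simplified stand-in for principal nested spheres)
U, s, _ = np.linalg.svd(V - V.mean(axis=0), full_matrices=False)
print("share of horizontal variation explained by the first component:",
      round(s[0] ** 2 / np.sum(s ** 2), 3))
```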

17.
When modeling the heteroscedasticity in a broad class of partially linear models, we allow the variance function to be a partially linear model as well and the parameters in the variance function to differ from those in the mean function. We develop a two-step estimation procedure in which initial estimates of the parameters in both the mean and variance functions are obtained in the first step and then updated in the second step using weights calculated from the initial estimates. The resulting weighted estimators of the linear coefficients in both the mean and variance functions are shown to be asymptotically normal, more efficient than the initial unweighted estimators, and most efficient in the sense of semiparametric efficiency in some special cases. Simulation experiments are conducted to examine the numerical performance of the proposed procedure, which is also applied to data from an air pollution study in Mexico City.
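A minimal sketch of the two-step idea in a simplified, purely linear setting (the paper's models also contain nonparametric components, which are omitted, and all data below are simulated): fit the mean by unweighted least squares, fit a working log-linear model for the variance from the squared residuals, and refit the mean by weighted least squares.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # mean-model design
Z = np.column_stack([np.ones(n), rng.uniform(size=n)])       # variance-model design

beta_true, theta_true = np.array([1.0, 2.0, -1.0]), np.array([-1.0, 2.0])
sigma2 = np.exp(Z @ theta_true)                               # log-linear variance model
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2))

# step 1: initial (unweighted) estimates of the mean parameters
beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
resid2 = (y - X @ beta0) ** 2

# working variance model: regress log squared residuals on the variance covariates
theta_hat = np.linalg.lstsq(Z, np.log(resid2 + 1e-12), rcond=None)[0]
w = 1.0 / np.exp(Z @ theta_hat)                               # estimated inverse variances

# step 2: weighted least squares update of the mean parameters
sw = np.sqrt(w)
beta1 = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]

print("unweighted estimate:", np.round(beta0, 3))
print("weighted estimate:  ", np.round(beta1, 3))
```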

18.
The aim of this paper is to investigate approximation operators built from a logarithmic sigmoidal function, for a class of neural networks with two weights and a class of quasi-interpolation operators. Using these operators as approximation tools, upper bounds on the approximation errors for continuous functions are estimated.
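A minimal sketch of a quasi-interpolation operator of this general type, using a standard construction that is not necessarily one of the specific operators analyzed in the paper: a bell-shaped kernel built from differences of the logistic sigmoid combines sampled function values, and the sup-norm error on a continuous target can be checked numerically.

```python
import numpy as np

sigma = lambda x: 1.0 / (1.0 + np.exp(-x))           # logistic sigmoidal function
phi = lambda x: sigma(x + 0.5) - sigma(x - 0.5)       # bell-shaped kernel from sigmoid differences

def quasi_interpolant(f, n, x):
    """Q_n f(x) = sum_k f(k/n) * phi(n*x - k), with nodes covering [0, 1] plus a margin."""
    k = np.arange(-5, n + 6)
    nodes = k / n
    return (f(nodes)[None, :] * phi(n * x[:, None] - k[None, :])).sum(axis=1)

f = lambda x: np.sin(2 * np.pi * x) + 0.3 * np.abs(x - 0.5)   # a continuous target function
x = np.linspace(0, 1, 1001)
for n in (8, 32, 128):
    err = np.max(np.abs(quasi_interpolant(f, n, x) - f(x)))
    print(f"n = {n:4d}   sup-norm error = {err:.4f}")
```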

19.
An underlying assumption in DEA is that the weights coupled with the ratio scales of the inputs and outputs imply linear value functions. In this paper, we present a general modeling approach for dealing with outputs and/or inputs that are characterized by nonlinear value functions. To this end, we represent the nonlinear virtual outputs and/or inputs in a piecewise linear fashion. We give the CCR model that can assess the efficiency of the units in the presence of nonlinear virtual inputs and outputs. Further, we extend the models with the assurance region approach to deal with concave output and convex input value functions. Our formulations amount to a transformation of the original data set into an augmented data set on which standard DEA models can then be applied, thus remaining within the grounds of the standard DEA methodology. To underline the usefulness of this development, we revisit previous work by one of the authors on assessing the human development index in the light of DEA.
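A minimal sketch of the data-transformation view described above, with hypothetical breakpoints and output levels: a single output with a piecewise linear value function is expanded into one column per linear segment, and the augmented data can then be fed to a standard DEA model in which each segment receives its own weight.

```python
import numpy as np

def expand_piecewise(values, breakpoints):
    """Split an output into per-segment columns: column j holds how much of the
    value falls inside segment [breakpoints[j], breakpoints[j+1]]."""
    values = np.asarray(values, dtype=float)
    cols = []
    for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
        cols.append(np.clip(values, lo, hi) - lo)
    return np.column_stack(cols)

# hypothetical output levels for five DMUs and breakpoints of the value function
output = [12.0, 35.0, 58.0, 74.0, 95.0]
breaks = [0.0, 25.0, 50.0, 100.0]

augmented = expand_piecewise(output, breaks)
print(augmented)
# For a concave value function, assurance-region constraints would additionally force
# the weight of each later segment to be no larger than that of the previous one
# (decreasing marginal value); a standard CCR model is then run on the augmented columns.
```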

20.
Recently there has been much effort to model extremes of spatially dependent data. These efforts fall into two distinct groups: the study of max-stable processes, together with the development of statistical models within this framework; and the use of more pragmatic, flexible models based on Bayesian hierarchical models (BHM) and simulation-based inference techniques. Each modeling strategy has its strong and weak points. While max-stable models capture the local behavior of spatial extremes correctly, hierarchical models based on the conditional independence assumption lack the asymptotic arguments that max-stable models enjoy. On the other hand, they are very flexible in allowing the introduction of physical plausibility into the model. When the objective of the data analysis is to estimate return levels or to krige extreme values in space, capturing the correct dependence structure between the extremes is crucial, and max-stable processes are better suited for these purposes. However, when the primary interest is to explain the sources of variation in extreme events, Bayesian hierarchical modeling is a very flexible tool due to the ease with which random effects are incorporated into the model. In this paper we model a data set on Portuguese wildfires to show the flexibility of BHM in incorporating spatial dependencies acting at different resolutions.
