Similar Documents
5 similar documents found (search time: 0 ms)
1.
We propose a new class of models for image restoration and decomposition by functional minimization. Following ideas of Y. Meyer in the total variation minimization framework of L. Rudin, S. Osher, and E. Fatemi, our model decomposes a given (degraded or textured) image u_0 into a sum u + v. Here u ∈ BV is a function of bounded variation (a cartoon component), while the noisy (or textured) component v is modeled by tempered distributions belonging to the negative Hilbert-Sobolev space H^{-s}. The proposed models can be seen as generalizations of a model proposed by S. Osher, A. Solé, and L. Vese, and they have also been motivated by work of D. Mumford and B. Gidas. We present existence, uniqueness, and two characterizations of minimizers using duality and the notion of convex functions of measures with linear growth, following I. Ekeland and R. Temam, and F. Demengel and R. Temam. We also give a numerical algorithm for solving the minimization problem, and we present numerical results of denoising, deblurring, and decomposition of both synthetic and real images.
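A minimal sketch of the type of energy described above, written in the Rudin-Osher-Fatemi style the abstract cites; the weight λ, the (optional) blur operator K, and the squared H^{-s} norm of the residual are assumptions used for illustration, not the paper's exact functional:

    % Assumed form of the decomposition/restoration energy (illustrative only):
    % u in BV carries the cartoon part, v = u_0 - K u is measured in H^{-s}.
    \inf_{u \in BV(\Omega)} E(u)
        = \int_{\Omega} |Du|
        + \lambda \,\bigl\| u_0 - K u \bigr\|_{H^{-s}(\Omega)}^{2},
        \qquad \lambda > 0, \quad s > 0.

Taking K to be the identity corresponds to pure decomposition u_0 ≈ u + v, while a convolution operator K would correspond to the deblurring case mentioned at the end of the abstract.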

2.
When using linguistic approaches to solve decision problems, we need linguistic representation models. The symbolic model, the 2-tuple fuzzy linguistic representation model, and the continuous linguistic model are three existing linguistic representation models based on position indexes. Together with these three linguistic models, the corresponding ordered weighted averaging operators, namely the linguistic ordered weighted averaging operator, the 2-tuple ordered weighted averaging operator, and the extended ordered weighted averaging operator, have been developed, respectively. In this paper, we analyze the internal relationships among these operators and propose a consensus operator under the continuous linguistic model (or the 2-tuple fuzzy linguistic representation model). The proposed consensus operator is based on the ordered weighted averaging operator and deviation measures. Some desired properties of the consensus operator are also presented. In particular, the consensus operator provides an alternative consensus model for group decision making. This consensus model preserves the original preference information given by the decision makers as much as possible and supports the consensus process automatically, without a moderator.
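To make the position-index machinery concrete, here is a minimal Python sketch of an ordered weighted averaging step over linguistic position indexes, followed by a simple deviation-based consensus degree. The label scale, the OWA weights, and the consensus measure are assumptions for illustration, not the operator defined in the paper:

    # Minimal OWA sketch over linguistic position indexes (illustrative only).
    def owa(values, weights):
        """Ordered weighted average: weights are applied to the sorted values."""
        assert abs(sum(weights) - 1.0) < 1e-9
        ordered = sorted(values, reverse=True)          # descending reordering
        return sum(w * v for w, v in zip(weights, ordered))

    # Hypothetical example: four experts rate one alternative on a 7-label scale
    # (position indexes 0..6); continuous indexes are allowed, as in the
    # 2-tuple / continuous linguistic models.
    positions = [5.0, 4.5, 3.0, 6.0]
    weights = [0.1, 0.4, 0.4, 0.1]                      # assumed OWA weights
    collective = owa(positions, weights)

    # A simple deviation-based consensus degree (an assumption, not the paper's
    # measure): closer individual opinions give a higher consensus value.
    max_dev = 6.0                                       # diameter of the scale
    consensus = 1.0 - sum(abs(p - collective) for p in positions) / (len(positions) * max_dev)
    print(round(collective, 3), round(consensus, 3))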

3.
In the Property and Casualty (P&C) ratemaking process, it is critical to understand the effect of policyholders’ risk profiles on the number and amount of claims, the dependence among various business lines, and the claim distributions. To include all of the above features, it is essential to develop a regression model that is flexible and theoretically justified. Motivated by these issues, we propose a class of logit-weighted reduced mixture of experts (LRMoE) models for multivariate claim frequency or severity distributions. The LRMoE is interpretable, as it has two components: gating functions, which classify policyholders into various latent sub-classes, and expert functions, which govern the distributional properties of the claims. Also, building on a denseness theory in the regression setting, we can heuristically interpret the LRMoE as a “fully flexible” model that can capture any distributional, dependence, and regression structures subject to a denseness condition. Further, the mathematical tractability of the LRMoE is guaranteed since it satisfies various marginalization and moment properties. Finally, we discuss some special choices of expert functions that make the corresponding LRMoE “fully flexible”. In the subsequent paper (Fung et al., 2019b), we will focus on the estimation and application aspects of the LRMoE.
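As a sketch of what a logit-weighted mixture of experts density looks like, assuming softmax (multinomial-logit) gating on a linear score of the covariates x and g latent sub-classes; the notation below is an assumption based on the model's name, not the paper's stated definition:

    % Assumed LRMoE-style density for a claim vector y given covariates x:
    f(\mathbf{y} \mid \mathbf{x})
        = \sum_{j=1}^{g} \pi_j(\mathbf{x}) \, f_j(\mathbf{y}; \boldsymbol{\theta}_j),
    \qquad
    \pi_j(\mathbf{x})
        = \frac{\exp(\alpha_j + \boldsymbol{\beta}_j^{\top} \mathbf{x})}
               {\sum_{k=1}^{g} \exp(\alpha_k + \boldsymbol{\beta}_k^{\top} \mathbf{x})}.

Here the π_j play the role of the gating functions that assign policyholders to latent sub-classes, and the f_j are the expert densities that govern the claim distributions within each class.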

4.
We propose a multinomial probit (MNP) model that is defined by a factor analysis model with covariates for analyzing unordered categorical data, and we discuss its identification. Some useful MNP models are special cases of the proposed model. To obtain maximum likelihood estimates, we use the EM algorithm, with its M-step greatly simplified under conditional maximization and its E-step made feasible by Monte Carlo simulation. Standard errors are calculated by inverting a Monte Carlo approximation of the information matrix using Louis’s method. The methodology is illustrated with a simulated data set.
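To give a feel for the Monte Carlo ingredient, here is a generic frequency simulator for MNP choice probabilities under a factor-analytic utility covariance. This is only an illustrative sketch, not the authors' algorithm; the dimensions, the loading matrix, and the use of argmax over simulated utilities are assumptions:

    import numpy as np

    def mnp_choice_probs(mean_utils, loadings, psi, n_draws=20000, seed=0):
        """Monte Carlo choice probabilities for a multinomial probit model whose
        utility covariance has the factor-analytic form  L L' + diag(psi)."""
        rng = np.random.default_rng(seed)
        J, q = loadings.shape                  # J alternatives, q latent factors
        factors = rng.standard_normal((n_draws, q))
        noise = rng.standard_normal((n_draws, J)) * np.sqrt(psi)
        utils = mean_utils + factors @ loadings.T + noise   # simulated utilities
        choices = utils.argmax(axis=1)                      # highest utility wins
        return np.bincount(choices, minlength=J) / n_draws

    # Hypothetical 3-alternative example with a single latent factor.
    probs = mnp_choice_probs(mean_utils=np.array([0.2, 0.0, -0.1]),
                             loadings=np.array([[0.8], [0.5], [0.3]]),
                             psi=np.array([1.0, 1.0, 1.0]))
    print(probs.round(3))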

5.
This is the second part of two papers concerned with generalized Petrov-Galerkin schemes for elliptic periodic pseudodifferential equations in ℝ^n. This setting covers classical Galerkin methods, collocation, and quasi-interpolation. The numerical methods are based on a general framework of multiresolution analysis, i.e. on sequences of nested spaces generated by refinable functions. In this part, we analyse compression techniques for the resulting stiffness matrices relative to wavelet-type bases. We show that, although these stiffness matrices are generally not sparse, the order of the overall computational work needed to realize a certain accuracy is of the form O(N (log N)^b), where N is the number of unknowns and b ≥ 0 is some real number. Dedicated to Charles A. Micchelli on the occasion of his fiftieth birthday. The third author has been supported by a grant of the Deutsche Forschungsgemeinschaft under Grant No. Ko 634/32-1.
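The following Python sketch illustrates the general idea of matrix compression: dropping small entries of a (dense) wavelet-basis stiffness matrix and storing the remainder in sparse form. The plain magnitude cutoff and the synthetic decaying matrix are assumptions for illustration; the paper's compression uses level-dependent criteria designed to keep the overall work at O(N (log N)^b):

    import numpy as np
    from scipy.sparse import csr_matrix

    def compress_stiffness(A, tol=1e-8):
        """Drop entries with |A_ij| < tol and return a sparse (CSR) matrix.
        Illustrative magnitude cutoff only, not the paper's compression rule."""
        B = A.copy()
        B[np.abs(B) < tol] = 0.0
        return csr_matrix(B)

    # Hypothetical example: a matrix whose entries decay away from the diagonal,
    # mimicking the near-sparsity of wavelet-basis stiffness matrices.
    N = 256
    i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    A = 1.0 / (1.0 + np.abs(i - j)) ** 3
    A_sparse = compress_stiffness(A, tol=1e-4)
    print(A_sparse.nnz, "of", N * N, "entries kept")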
