Similar Documents
20 similar documents found
1.
In this paper we derive some alternative estimators of the superparameter and predictors of the population total for a multinormal superpopulation model. The estimators and predictors obtained are better than the maximum likelihood predictor near the “natural origin”, though possibly worse farther away. The technique employed is to shrink the maximum likelihood predictor towards a “natural origin”. With a numerical example, it is shown that the shrunken predictor of the population total works better than the maximum likelihood predictor for small sample sizes and near the “natural origin”. A combined predictor is introduced which is not as good as the shrunken predictor near the origin but does not perform disastrously far away from it.
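As a minimal sketch of the shrinkage idea in Python (not the paper's estimator: the weight formula below is an ad hoc illustration, and the natural origin t0 and constant c are hypothetical inputs):

    import numpy as np

    def shrunken_total(y_sample, N, t0, c=1.0):
        # Expansion (ML-type) predictor of the population total.
        n = len(y_sample)
        t_ml = N * np.mean(y_sample)
        s2 = np.var(y_sample, ddof=1)
        # Ad hoc shrinkage weight: close to 1 (pull toward t0) when the
        # sample is noisy or t_ml already sits near the natural origin.
        w = c * s2 / (c * s2 + n * ((t_ml - t0) / N) ** 2 + 1e-12)
        return w * t0 + (1.0 - w) * t_ml

Near t0 and for small n the shrunken value tends to beat t_ml in mean squared error, at the price of worse behaviour far from the origin, mirroring the trade-off described above.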

2.
The two-sided deconvolution problem, for a certain class of bandlimited kernels, is reduced to a discrete deconvolution problem by the sampling theorem, yielding a bandlimited solution. For this solution, in addition, a Galerkin-type approximation is given. In general, the solution of the convolution equation for bandlimited kernels is not bandlimited; this follows from a characterization of the general solution.

3.
In this paper, we propose a new kernel-based fuzzy clustering algorithm that seeks the best clustering results by using optimal parameters for the kernel of each cluster. It is known that data with nonlinear relationships can be separated using one of the kernel-based fuzzy clustering methods. Two common fuzzy clustering approaches are clustering with a single kernel and clustering with multiple kernels. While clustering with a single kernel does not work well with “multiple-density” clusters, multiple kernel-based fuzzy clustering tries to find an optimal linear weighted combination of kernels with initial fixed (not necessarily the best) parameters. Our algorithm extends both the single kernel-based fuzzy c-means and the multiple kernel-based fuzzy clustering algorithms: there is no need to supply “good” parameters for each kernel, nor an initial “good” number of kernels. Every cluster is characterized by a Gaussian kernel with optimal parameters. To demonstrate its clustering performance, we compare it with similar clustering algorithms on several databases under different clustering validity measures.
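For orientation, a sketch of the plain single-Gaussian-kernel fuzzy c-means baseline that the proposed algorithm extends (one fixed bandwidth sigma for all clusters; the paper's method instead optimizes a Gaussian kernel per cluster):

    import numpy as np

    def kfcm(X, c, m=2.0, sigma=1.0, iters=100, seed=0):
        # X: (n, p) data matrix; c: number of clusters; m: fuzzifier.
        rng = np.random.default_rng(seed)
        V = X[rng.choice(len(X), c, replace=False)]      # initial centers
        for _ in range(iters):
            K = np.exp(-((X[:, None] - V[None, :]) ** 2).sum(-1)
                       / (2.0 * sigma ** 2))
            d2 = np.maximum(2.0 * (1.0 - K), 1e-12)      # feature-space dist.
            U = 1.0 / (d2 ** (1.0 / (m - 1.0)))
            U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships
            W = (U ** m) * K                             # center-update weights
            V = (W.T @ X) / W.sum(axis=0)[:, None]
        return U, V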

4.
The objective of this paper is to introduce a multi-resolution approximation (MRA) approach to the study of continuous function extensions, with emphasis on surface completion and image inpainting. Along the line of the notion of diffusion maps introduced by Coifman and Lafon, with certain “heat kernels” as integral kernels of the operators formulating the diffusion maps, we apply the directional derivatives of the heat kernels with respect to the inner normal vectors (on the boundary of the hole to be filled in) as integral kernels of the “propagation” operators. The extension operators, defined by propagations followed by the corresponding subsequent diffusion processes, provide the MRA continuous function extensions discussed in this paper. As a case study, Green's functions of certain “anisotropic” differential operators are used as heat kernels, and the corresponding extension operators provide a vehicle to transport the surface or image data, along with some mixed derivatives, from the exterior of the hole to recover the missing data in the hole in an MRA fashion, with the propagated mixed-derivative data providing the surface or image “details” in the hole. An error formula in terms of the heat kernels is formulated, and this formula is applied to give the exact order of approximation in the isotropic setting.

5.
For interpolation of smooth functions by smooth kernels having an expansion into eigenfunctions (e.g., on the circle, the sphere, and the torus), good results including error bounds are known, provided that the smoothness of the function is closely related to that of the kernel. The latter fact is usually quantified by the requirement that the function should lie in the “native” Hilbert space of the kernel, but this assumption rules out the treatment of less smooth functions by smooth kernels. For the approximation of functions from “large” Sobolev spaces W by functions generated by smooth kernels, this paper shows that one gets at least the known order for interpolation with a less smooth kernel that has W as its native space.
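A small numerical illustration of the phenomenon (a generic Gaussian radial kernel on an interval, not the eigenfunction expansions on the circle, sphere, or torus; the target below is deliberately less smooth than the kernel):

    import numpy as np

    def gaussian_interpolant(x, y, eps=10.0):
        # Solve the symmetric positive definite kernel system K c = y.
        K = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
        c = np.linalg.solve(K, y)
        return lambda t: np.exp(-(eps * (t[:, None] - x[None, :])) ** 2) @ c

    x = np.linspace(0.0, 1.0, 25)
    f = lambda t: np.abs(t - 0.5)        # Lipschitz only: lies outside the
    u = gaussian_interpolant(x, f(x))    # Gaussian kernel's native space
    t = np.linspace(0.0, 1.0, 500)
    err = np.max(np.abs(u(t) - f(t)))    # decays at the rougher-kernel rate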

6.
The usual mathematical way to represent uncertain quantities, for example the state of a dynamical system with uncertain initial conditions, is through random variables (RVs). In many problems the space of elementary events Ω, on which the RVs are defined as functions of these events, is not concretely accessible, so that the usual idea of a function (e.g. given as a formula) loses much of its meaning. The representation of RVs is therefore often strikingly different from what is used for “normal” functions. With the help of RVs one can formulate Bayesian estimators for the uncertain quantity when additional information (usually noisy, incomplete measurements) becomes available. A common way to derive such an estimator is to use an instance of the projection theorem for Hilbert spaces. In this work we present a linear Bayesian estimation method which results from using a recently popular representation of an RV, the polynomial chaos expansion (PCE), also known as “white noise analysis”. The resulting method is completely deterministic, as well as computationally efficient. (© 2011 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
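A sketch of the linear Bayesian update itself, here applied to an ensemble of samples as a stand-in for the PCE coefficient vectors used in the paper (scalar state and observation; the observation map h and noise variance r are hypothetical placeholders):

    import numpy as np

    def linear_bayes_update(x_prior, h, y_obs, r, seed=0):
        # Forecast the observable, then apply the projection-theorem
        # (Kalman-type) gain K = Cov(x, y) / (Var(y) + r).
        y = h(x_prior)
        gain = np.cov(x_prior, y)[0, 1] / (np.var(y) + r)
        # Perturbed-observation form; applying the same update to PCE
        # coefficients instead of samples makes it fully deterministic.
        noise = np.random.default_rng(seed).normal(0.0, np.sqrt(r),
                                                   len(x_prior))
        return x_prior + gain * (y_obs + noise - y)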

7.

In this paper, our aim is to revisit the nonparametric estimation of a square integrable density f on \({\mathbb {R}}\) by using projection estimators on a Hermite basis. These estimators are studied from the point of view of their mean integrated squared error on \({\mathbb {R}}\). A model selection method is described and proved to perform an automatic bias-variance compromise. Then, we present another collection of estimators, of deconvolution type, for which we define another model selection strategy. Although the minimax asymptotic rates of these two types of estimators are largely equivalent, the complexity of the Hermite estimators is usually much lower than that of their deconvolution (or kernel) counterparts. These results are illustrated through a small simulation study.
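A minimal sketch of the Hermite projection estimator with a fixed dimension K (the paper's contribution, the penalized data-driven choice of K, is omitted here):

    import numpy as np
    from math import factorial, pi, sqrt
    from numpy.polynomial.hermite import hermval

    def hermite_fn(k, x):
        # Orthonormal Hermite function h_k(x) = H_k(x) e^{-x^2/2} / norm.
        coef = np.zeros(k + 1)
        coef[k] = 1.0
        norm = sqrt(2.0 ** k * factorial(k) * sqrt(pi))
        return hermval(x, coef) * np.exp(-x ** 2 / 2.0) / norm

    def hermite_density(sample, K=15):
        # Empirical projection coefficients a_k = (1/n) sum_i h_k(X_i).
        a = [np.mean(hermite_fn(k, sample)) for k in range(K)]
        return lambda x: sum(ak * hermite_fn(k, x) for k, ak in enumerate(a))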

8.
In this paper, we address some fundamental issues concerning “time marching” numerical schemes for computing steady state solutions of boundary value problems for nonlinear partial differential equations. Simple examples are used to illustrate that even theoretically convergent schemes can produce numerical steady state solutions that do not correspond to steady state solutions of the boundary value problem. This phenomenon must be considered in any computational study of nonunique solutions to partial differential equations that govern physical systems such as fluid flows. In particular, numerical calculations have been used to “suggest” that certain Euler equations do not have a unique solution. For Burgers' equation on a finite spatial interval with Neumann boundary conditions the only steady state solutions are constant (in space) functions. Moreover, according to recent theoretical results, for any initial condition the corresponding solution to Burgers' equation must converge to a constant as t → ∞. However, we present a convergent finite difference scheme that produces false nonconstant numerical steady state “solutions.” These erroneous solutions arise out of the necessary finite floating point arithmetic inherent in every digital computer. We suggest the resulting numerical steady state solution may be viewed as a solution to a “nearby” boundary value problem with high sensitivity to changes in the boundary conditions. Finally, we close with some comments on the relevance of this paper to some recent “numerical based proofs” of the existence of nonunique solutions to Euler equations and to aerodynamic design.
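A sketch of the experiment's ingredients, under assumptions not stated in the abstract (an explicit central/second-order scheme, a particular viscosity, and a sine initial profile): march viscous Burgers' equation with Neumann boundary conditions to a numerical steady state and test whether it is spatially constant, as the theory demands.

    import numpy as np

    # u_t + u u_x = nu * u_xx on [0, 1], homogeneous Neumann conditions.
    nu, nx = 0.01, 201
    dx = 1.0 / (nx - 1)
    dt = 0.2 * dx ** 2 / nu                        # stable explicit step
    u = np.sin(np.pi * np.linspace(0.0, 1.0, nx))  # nonconstant start
    for _ in range(200_000):
        un = u.copy()
        u[1:-1] = (un[1:-1]
                   - dt * un[1:-1] * (un[2:] - un[:-2]) / (2.0 * dx)
                   + nu * dt * (un[2:] - 2.0 * un[1:-1] + un[:-2]) / dx ** 2)
        u[0], u[-1] = u[1], u[-2]                  # discrete Neumann BCs
    flatness = np.max(np.abs(u - u.mean()))        # 0 for a true steady state

In exact arithmetic the profile must flatten; the paper's point is that in floating point the iteration can stall on a visibly nonconstant profile.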

9.
The so-called “Rao-Blackwellized” estimators proposed by Gelfand and Smith do not always reduce variance in Markov chain Monte Carlo when the dependence in the Markov chain is taken into account. An illustrative example is given, and a theorem characterizing the necessary and sufficient condition for such an estimator to always reduce variance is proved.
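An illustrative comparison in the standard bivariate-normal Gibbs sampler (not the paper's example): estimate E[X] both by the plain ergodic average and by averaging the conditional means E[X|Y] = ρY, then compare variances over replications.

    import numpy as np

    def compare_estimators(rho=0.95, T=2000, reps=200, seed=1):
        rng = np.random.default_rng(seed)
        s = np.sqrt(1.0 - rho ** 2)
        plain, rb = np.empty(reps), np.empty(reps)
        for r in range(reps):
            x = y = 0.0
            sx = sy = 0.0
            for _ in range(T):
                x = rho * y + s * rng.standard_normal()  # draw X | Y = y
                y = rho * x + s * rng.standard_normal()  # draw Y | X = x
                sx += x
                sy += rho * y          # Rao-Blackwell term E[X | Y = y]
            plain[r], rb[r] = sx / T, sy / T
        # With strong chain dependence the RB variance need not be smaller.
        return np.var(plain), np.var(rb)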

10.
This paper reviews and extends some of the known results on estimation in “errors-in-variables” models, treating the structural and the functional cases on a unified basis. The generalized least-squares method proposed by some previous authors is extended to the case where the error covariance matrix contains an unknown vector parameter. This alleviates the difficulty of multiple roots that arises from defining estimators as roots of a set of unbiased estimating equations. An alternative method is also considered for cases with both known and unknown error covariance matrices. The relationship between this method and the usual maximum-likelihood and generalized least-squares approaches is also investigated, and it is shown that in a special case they do not necessarily give identical results in finite samples. Finally, asymptotic results are presented.

11.
Traditional means of studying environmental economics and management problems consist of optimal control and dynamic game models that are solved for optimal or equilibrium strategies. Notwithstanding the possibility of multiple equilibria, the models' users (managers or planners) will usually be provided with a single optimal or equilibrium strategy, no matter how reliable, or unreliable, the underlying models and their parameters are. In this paper we follow an alternative approach to policy making that is based on viability theory. It establishes “satisficing” (in the sense of Simon), or viable, policies that keep the dynamic system in a constraint set and are, generically, multiple and amenable to each manager's own prioritisation. Moreover, they can depend on fewer parameters than the optimal or equilibrium strategies and hence be more robust. For the determination of these (viable) policies, computation of “viability kernels” is crucial. We introduce a MATLAB application, under the name of VIKAASA, which allows us to compute approximations to viability kernels. We discuss two algorithms implemented in VIKAASA. One approximates the viability kernel by the locus of state-space positions for which solutions to an auxiliary cost-minimising optimal control problem can be found; if no solution exists, the value function is infinite, indicating an evolution that leaves the constraint set in finite time, so the point from which that evolution originates belongs to the kernel's complement. The other algorithm accepts a point as viable if the system's dynamics can be stabilised from this point. We comment on the pros and cons of each algorithm. We apply viability theory and the VIKAASA software to a problem of by-catch fisheries exploited by one or two fleets and provide rules concerning the proportion of fish biomass and the fishing effort that a sustainable fishery's exploitation should follow.
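Neither VIKAASA algorithm is reproduced here; as a conceptual reference point, a basic grid sweep in the spirit of Saint-Pierre's viability algorithm (the dynamics step, control set, and constraint test in_K are user-supplied placeholders):

    import numpy as np

    def viability_kernel(grid, step, controls, in_K, max_sweeps=100):
        # grid: (n, d) array of states; step(x, u): one-step dynamics;
        # in_K(x): membership test for the constraint set.
        def nearest(s):
            return int(np.argmin(np.linalg.norm(grid - s, axis=1)))
        viable = np.array([in_K(x) for x in grid])
        for _ in range(max_sweeps):
            new = viable.copy()
            for i, x in enumerate(grid):
                if viable[i]:
                    # Keep x only if some control sends it into the
                    # constraint set AND onto a still-viable cell.
                    new[i] = any(in_K(s) and viable[nearest(s)]
                                 for s in (step(x, u) for u in controls))
            if np.array_equal(new, viable):
                break                  # fixed point: kernel approximation
            viable = new
        return viable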

12.
In this note, we give a positive answer to a question addressed in Nadin et al. (2011) [7]. To be precise, we prove that, for any kernel and any slope at the origin, there exist traveling wave solutions (actually those which are “rapid”) of the nonlocal Fisher equation that connect the two homogeneous steady states 0 (dynamically unstable) and 1. In particular, this allows situations where 1 is unstable in the sense of Turing. Our proof does not involve any maximum principle argument and applies to kernels with fat tails.

13.
In the framework of mathematical optimization procedures or parameter fitting, the same problem, modeled with partial differential equations depending on a parameter, has to be solved many times for different sets of parameters. The reduced basis method may be successful in this setting, and recent progress has made the computations reliable thanks to a posteriori estimators and has extended the method to nonlinear problems thanks to the “magic points” interpolation. However, in an industrial context, it may not be possible to use the code (for example, one of finite element type that allows for evaluating the elements of the reduced basis) to perform all the “off-line” computations necessary for an efficient performance of the reduced basis method. We propose here an alternative approach based on a coarse-grid finite element method whose convergence is accelerated through the reduced basis. To cite this article: R. Chakir, Y. Maday, C. R. Acad. Sci. Paris, Ser. I 347 (2009).

14.
15.
In this paper, we consider the estimation problem for partially linear models with additive measurement errors in the nonparametric part. Two kinds of estimators are proposed. The first is an integral moment-based estimator using deconvolution kernel techniques, for which strong consistency is established. The second is a simulation-based estimator that avoids the integrals involved in the integral moment-based estimator. Simulation studies are conducted to examine the performance of the proposed estimators.

16.
Among the forces affecting the course of mathematical evolution is what the author has called “hereditary stress.” The term designates a cultural, not a psychological force, and it is internal to mathematics, not environmental. It appears to be synonymous with what A. L. Kroeber called “potentialities” and G. Sarton termed “growth pressure.” Neither of these scholars gave any analysis of it. The present article attempts to do this in the restricted context of mathematics. Its chief components seem to be: (A) Capacity. The quantity and intrinsic interest of the results that the basic theory and methodology of a field are capable of yielding. (B) Significance. The field's promise of yielding results significant for the advancement of mathematics or related fields. (C) Challenge. The emergence of problems whose solutions require an ingenuity and/or methodology which distinguish them from those problems whose solutions are of a more routine character. (D) Status. The esteem in which the field is held. (E) Conceptual Stress. The stresses created by the need for new conceptual materials to furnish a logical basis for explaining phenomena; outstanding among these is symbolic stress. (F) Paradox. Emergence of paradoxes or inconsistencies. These are discussed individually. Analysis of Component (E), for example, shows that it evidently stems from several sources; e.g., the necessity for a new concept which will afford means of solving problems previously inaccessible; stresses created by the need for introducing order into a chaos of materials recognizably related; and the need for new attitudes toward mathematical existence and mathematical “reality.”

17.
Simple and multiple linear regression models are considered between variables whose “values” are convex compact random sets in ${\mathbb{R}^p}$ (that is, hypercubes, spheres, and so on). We analyze such models within a set-arithmetic approach. Contrary to what happens for random variables, the least squares optimal solutions for the basic affine transformation model do not produce suitable estimates for the linear regression model. First, we derive least squares estimators for the simple linear regression model and examine them from a theoretical perspective. Moreover, the multiple linear regression model is dealt with, and a stepwise algorithm is developed in order to find the estimates in this case. The particular problem of linear regression with interval-valued data is also considered and illustrated by means of a real-life example.

18.
A large part of European natural gas imports originates from unstable regions exposed to the risk of supply failure for economic and political reasons. This has increased concerns about the security of supply in the European natural gas market. In this paper, we analyze the security of external supply of the Italian gas market, which mainly relies on natural gas imports to cover its internal demand. To this end, we develop an optimization problem that describes the equilibrium state of a gas supply chain where producers, mid-streamers, and final consumers exchange natural gas and liquefied natural gas. Both long-term contracts (LTCs) and spot pricing systems are considered. Mid-streamers are assumed to be exposed to the external supply risk, which is estimated with indicators that we develop starting from those already existing in the literature. In addition, we investigate different degrees of mid-streamers' flexibility by comparing a situation where mid-streamers fully satisfy the LTC volume clause (the “No FLEX” assumption) with a case where fulfillment of this volume clause is not compulsory (the “FLEX” assumption). Our analysis shows that, in the “No FLEX” case, mid-streamers do not significantly change their supply choices even when the external supply risk is considered. Under this assumption, they face significant profit losses, which disappear in the “FLEX” case, where mid-streamers are more flexible and can modify their supply mix. However, the “FLEX” strategy limits the gas availability in the supply chain, leading to a curtailment of social welfare.

19.
Cognitive level in problem segments and theory segments
Problems play an important role in mathematics instruction and are therefore frequently seen as central points of application for measures of instructional development. The research project “Quality of instruction and mathematical understanding in different cultures” examines the cognitive level of practice problems and theory problems in a three-lesson unit introducing the Pythagorean theorem. Analogously to the TIMSS 1999 video study, a differentiation was made between the cognitive level of problem statement and the cognitive level of problem implementation. Additionally, the lesson time was divided into practice and theory segments. The results show that teachers with a high proportion of connection activities in practice segments do not necessarily spend a comparably large proportion of theory time at an analogous cognitive level.

20.
The problem of determining relaxation and retardation spectra (RRS) is considered from the viewpoint of up-to-date signal processing. It is shown that the recovery of RRS represents a Mellin deconvolution problem, which transforms into a Fourier deconvolution problem for data on a logarithmic time or frequency scale, where it can also be treated as an inverse filtering problem. On this basis, discrete deconvolution (inverse) filters operating on geometrically sampled data are proposed as RRS estimators. Appropriate frequency responses and algorithms are derived for estimating RRS from eight different material functions. The noise amplification coefficient is suggested as a measure for quantifying the degree of ill-posedness and ill-conditioning of the RRS recovery problem and algorithms. A methodology is developed for designing RRS estimators with a desired noise amplification, producing maximally accurate spectra for the available limited input data. Practical algorithms for determining RRS are proposed, and their performance is studied. The algorithms suggested are compared with the so-called moving-average formulae. It is demonstrated that the minimum frequency range for recovering the point estimate of a relaxation spectrum depends on the allowable noise amplification (the degree of ill-conditioning) and is in no way limited to 1.36 decades, as stated by the sampling localization theorem.
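A sketch of the inverse-filtering step on a logarithmic grid (a Tikhonov-damped Fourier deconvolution; the damping constant alpha is a placeholder that caps the noise amplification discussed above, not one of the paper's calibrated filter designs):

    import numpy as np

    def inverse_filter(data, kernel, alpha=1e-2):
        # On a geometric (log-spaced) grid the Mellin deconvolution of
        # spectrum recovery becomes ordinary discrete deconvolution.
        D = np.fft.rfft(data)
        H = np.fft.rfft(kernel, len(data))
        # Regularized inverse response H* / (|H|^2 + alpha); smaller
        # alpha means higher resolution but larger noise amplification.
        S = D * np.conj(H) / (np.abs(H) ** 2 + alpha)
        return np.fft.irfft(S, len(data))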
