20 similar documents found (search time: 10 ms)
1.
2.
This paper establishes a central limit theorem and an invariance principle for a wide class of stationary random fields under natural and easily verifiable conditions. More precisely, we deal with random fields of the form Xk=g(εk−s,s∈Zd), k∈Zd, where (εi)i∈Zd are iid random variables and g is a measurable function. Such spatial processes provide a general framework for stationary ergodic random fields. Under a short-range dependence condition, we show that the central limit theorem holds without any assumption on the underlying domain on which the process is observed. A limit theorem for the sample auto-covariance function is also established.
3.
Wolfgang Weil 《Acta Appl Math》1987,9(1-2):103-136
Point processes X of cylinders, compact sets (particles), or flats in R^d are mathematical models for fields of sets as they occur, e.g., in practical problems of image analysis and stereology. For the estimation of geometric quantities of such fields, mean value formulas for X are important. By a systematic approach, integral geometric formulas for curvature measures are transformed into density formulas for geometric point processes. In particular, a number of results which are known for stationary and isotropic Poisson processes of convex sets are generalized to nonisotropic processes, to non-Poissonian processes, and to processes of nonconvex sets. The integral geometric background (including recent results from translative integral geometry), the fundamentals of geometric point processes, and the resulting density formulas are presented in detail. Generalizations of the theory and applications in image analysis and stereology are mentioned briefly.
4.
Let ηt be a Poisson point process of intensity t≥1 on some state space Y and let f be a non-negative symmetric function on Yk for some k≥1. Applying f to all k-tuples of distinct points of ηt generates a point process ξt on the positive real half-axis. The scaling limit of ξt as t tends to infinity is shown to be a Poisson point process with explicitly known intensity measure. From this, a limit theorem for the m-th smallest point of ξt is concluded. This is strengthened by providing a rate of convergence. The technical background includes Wiener–Itô chaos decompositions and the Malliavin calculus of variations on the Poisson space as well as the Chen–Stein method for Poisson approximation. The general result is accompanied by a number of examples from geometric probability and stochastic geometry, such as k-flats, random polytopes, random geometric graphs and random simplices. They are obtained by combining the general limit theorem with tools from convex and integral geometry.
5.
In this paper we carry over the concept of reverse probabilistic representations developed in Milstein, Schoenmakers, Spokoiny [G.N. Milstein, J.G.M. Schoenmakers, V. Spokoiny, Transition density estimation for stochastic differential equations via forward–reverse representations, Bernoulli 10 (2) (2004) 281–312] for diffusion processes, to discrete time Markov chains. We outline the construction of reverse chains in several situations and apply this to processes which are connected with jump–diffusion models and finite state Markov chains. By combining forward and reverse representations we then construct transition density estimators for chains which have root-N accuracy in any dimension and consider some applications.
6.
We consider an insurance company in the case when the premium rate is a bounded non-negative random function ct and the capital of the insurance company is invested in a risky asset whose price follows a geometric Brownian motion with mean return a and volatility σ>0. If β := 2a/σ^2 − 1 > 0, we find exact asymptotic upper and lower bounds for the ruin probability Ψ(u) as the initial endowment u tends to infinity, i.e. we show that C_*u^(−β) ≤ Ψ(u) ≤ C^*u^(−β) for sufficiently large u. Moreover, if ct = c_*e^(γt) with γ ≥ 0, we find the exact asymptotics of the ruin probability, namely Ψ(u) ∼ u^(−β). If β ≤ 0, we show that Ψ(u) = 1 for any u ≥ 0.
7.
8.
We study the error induced by the time discretization of decoupled forward–backward stochastic differential equations (X,Y,Z). The forward component X is the solution of a Brownian stochastic differential equation and is approximated by an Euler scheme XN with N time steps. The backward component is approximated by a backward scheme. Firstly, we prove that the errors (YN−Y,ZN−Z) measured in the strong Lp-sense (p≥1) are of order N−1/2 (this generalizes the results by Zhang [J. Zhang, A numerical scheme for BSDEs, The Annals of Applied Probability 14 (1) (2004) 459–488]). Secondly, an error expansion is derived: surprisingly, the first term is proportional to XN−X while residual terms are of order N−1.
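The N^(−1/2) strong rate for the forward Euler scheme can be illustrated numerically. The sketch below (an illustration, not the paper's scheme: it uses geometric Brownian motion, where the exact solution is known in closed form, and all function names are hypothetical) couples the Euler path and the exact path through the same Brownian increments and compares the mean strong error at coarse and fine resolutions.

```python
import math
import random

def euler_vs_exact(n_steps, n_paths=200, x0=1.0, mu=0.05, sigma=0.2, T=1.0, seed=0):
    """Mean strong error |X^N_T - X_T| of the Euler scheme for dX = mu X dt + sigma X dW,
    measured against the exact GBM solution driven by the same Brownian increments."""
    rng = random.Random(seed)
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        x, w = x0, 0.0
        for _ in range(n_steps):
            dw = rng.gauss(0.0, math.sqrt(dt))
            x += mu * x * dt + sigma * x * dw   # one Euler step
            w += dw                             # accumulate the Brownian path
        exact = x0 * math.exp((mu - 0.5 * sigma ** 2) * T + sigma * w)
        total += abs(x - exact)
    return total / n_paths

coarse = euler_vs_exact(8)     # N = 8 time steps
fine = euler_vs_exact(512)     # N = 512 time steps
```

With 64 times more steps the mean strong error shrinks markedly, consistent with a polynomial rate in N.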
9.
We study a random design regression model generated by dependent observations, when the regression function itself (or its ν-th derivative) may have a change or discontinuity point. A method based on local polynomial fits with one-sided kernels to estimate the location and the jump size of the change point is applied in this paper. When the jump location is known, a central limit theorem for the estimator of the jump size is established; when the jump location is unknown, we first obtain a functional limit theorem for a local dilated-rescaled version estimator of the jump size and then give the asymptotic distributions for the estimators of the location and the jump size of the change point. The asymptotic results obtained in this paper can be viewed as extensions of corresponding results for independent observations. Furthermore, a simulated example is given to show that our theory and method perform well in practice.
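The one-sided-kernel idea can be sketched in a few lines. The toy version below is a simplification of the abstract's method: it uses local constants (degree-zero fits) with a uniform one-sided kernel instead of local polynomials, and all names, the grid, and the bandwidth are illustrative choices, not the paper's.

```python
import random

def estimate_change_point(x, y, h):
    """Scan candidate locations t and compare one-sided local averages over
    [t-h, t) and [t, t+h); the largest gap estimates the jump location and size.
    (Local-constant simplification of one-sided local polynomial fits.)"""
    best_t, best_gap = None, 0.0
    for i in range(11, 190):            # candidate grid t = 0.055, ..., 0.945
        t = i / 200.0
        left = [yi for xi, yi in zip(x, y) if t - h <= xi < t]
        right = [yi for xi, yi in zip(x, y) if t <= xi < t + h]
        if not left or not right:
            continue
        gap = sum(right) / len(right) - sum(left) / len(left)
        if abs(gap) > abs(best_gap):
            best_t, best_gap = t, gap
    return best_t, best_gap

# Synthetic data: slope 1, jump of size 1 at x = 0.6, Gaussian noise.
rng = random.Random(1)
x = [rng.random() for _ in range(2000)]
y = [xi + (1.0 if xi >= 0.6 else 0.0) + rng.gauss(0.0, 0.1) for xi in x]
loc, size = estimate_change_point(x, y, h=0.05)
```

On this simulated example the estimated location lands close to the true change point 0.6 and the estimated jump size close to 1.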
10.
11.
Annette M. Molinaro Sandrine Dudoit Mark J. van der Laan 《Journal of multivariate analysis》2004,90(1):154-177
We propose a unified strategy for estimator construction, selection, and performance assessment in the presence of censoring. This approach is entirely driven by the choice of a loss function for the full (uncensored) data structure and can be stated in terms of the following three main steps. (1) First, define the parameter of interest as the minimizer of the expected loss, or risk, for a full data loss function chosen to represent the desired measure of performance. Map the full data loss function into an observed (censored) data loss function having the same expected value and leading to an efficient estimator of this risk. (2) Next, construct candidate estimators based on the loss function for the observed data. (3) Then, apply cross-validation to estimate risk based on the observed data loss function and to select an optimal estimator among the candidates. A number of common estimation procedures follow this approach in the full data situation, but depart from it when faced with the obstacle of evaluating the loss function for censored observations. Here, we argue that one can, and should, also adhere to this estimation road map in censored data situations. Tree-based methods, where the candidate estimators in Step 2 are generated by recursive binary partitioning of a suitably defined covariate space, provide a striking example of the chasm between estimation procedures for full data and censored data (e.g., regression trees as in CART for uncensored data and adaptations to censored data). Common approaches for regression trees bypass the risk estimation problem for censored outcomes by altering the node splitting and tree pruning criteria in manners that are specific to right-censored data. This article describes an application of our unified methodology to tree-based estimation with censored data.
The approach encompasses univariate outcome prediction, multivariate outcome prediction, and density estimation, simply by defining a suitable loss function for each of these problems. The proposed method for tree-based estimation with censoring is evaluated using a simulation study and the analysis of CGH copy number and survival data from breast cancer patients.
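Step (1) of the road map, mapping a full-data loss into an observed-data loss with the same expected value, is commonly done with inverse-probability-of-censoring weighting (IPCW). The sketch below is a minimal, simplified illustration of that idea (not the authors' implementation): squared-error loss, a Kaplan–Meier estimate of the censoring distribution, and hypothetical function names throughout.

```python
def censoring_survival_before(t, times, events):
    """Kaplan-Meier estimate of G(t-) = P(C >= t), treating the censorings
    (event == 0) as the 'events' of interest."""
    g = 1.0
    for s in sorted(set(times)):
        if s >= t:
            break
        at_risk = sum(1 for u in times if u >= s)
        d = sum(1 for u, e in zip(times, events) if u == s and e == 0)
        if at_risk > 0:
            g *= 1.0 - d / at_risk
    return g

def ipcw_squared_error_risk(times, events, preds):
    """Observed-data estimate of the full-data risk E[(pred - T)^2]:
    only uncensored observations contribute, each reweighted by 1/G(T_i-)."""
    total = 0.0
    for t, e, p in zip(times, events, preds):
        if e == 1:
            total += (p - t) ** 2 / censoring_survival_before(t, times, events)
    return total / len(times)

times = [2.0, 3.5, 1.0, 4.0]
events = [1, 1, 1, 1]          # no censoring: all weights equal 1
preds = [2.5, 3.0, 1.0, 5.0]
risk = ipcw_squared_error_risk(times, events, preds)
```

A useful sanity check of the construction: in the absence of censoring the weights are all 1, so the observed-data risk reduces exactly to the ordinary empirical mean squared error.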
12.
We propose a generic framework for the analysis of Monte Carlo simulation schemes of backward SDEs. The general results are used to revisit the convergence of the algorithm suggested by Bouchard and Touzi (2004) [6]. By keeping the higher order terms in the expansion of the Skorohod integrals resulting from the Malliavin integration by parts in [6], we introduce a variant of the latter algorithm which allows for a significant reduction of the numerical complexity. We prove the convergence of this improved Malliavin-based algorithm, and derive a bound on the induced error. In particular, we show that the price to pay for our simplification is to use a more accurate localizing function.
13.
Volker Krätschmer 《Journal of multivariate analysis》2006,97(5):1044-1069
Linear regression models with vague concepts extend the classical single equation linear regression models by admitting observations in the form of fuzzy subsets instead of real numbers. They were recently introduced (cf. [V. Krätschmer, Induktive Statistik auf Basis unscharfer Meßkonzepte am Beispiel linearer Regressionsmodelle, unpublished postdoctoral thesis, Faculty of Law and Economics of the University of Saarland, Saarbrücken, 2001; V. Krätschmer, Least squares estimation in linear regression models with vague concepts, Fuzzy Sets and Systems, accepted for publication]) to improve the empirical meaningfulness of the relationships between the involved items by a more sensitive attention to the problems of data measurement, in particular, the fundamental problem of adequacy. The parameters of such models are still real numbers, and a method of estimation can be applied which extends directly the ordinary least squares method. In another recent contribution (cf. [V. Krätschmer, Strong consistency of least squares estimation in linear regression models with vague concepts, J. Multivar. Anal., accepted for publication]) strong consistency and √n-consistency of this generalized least squares estimation have been shown. The aim of the paper is to complete these results by an investigation of the limit distributions of the estimators. It turns out that the classical results can be transferred, in some cases even asymptotic normality holds.
14.
This paper is concerned with the parameter estimation problem for the three-parameter Weibull density which is widely employed as a model in reliability and lifetime studies. Our approach is a combination of nonparametric and parametric methods. The basic idea is to start with an initial nonparametric density estimate which needs to be as good as possible, and then apply the nonlinear least squares method to estimate the unknown parameters. As a main result, a theorem on the existence of the least squares estimate is obtained. Some simulations are given to show that our approach is satisfactory if the initial density is of good enough quality.
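The nonparametric-then-least-squares idea can be prototyped cheaply. The sketch below deviates from the abstract's method on purpose and says so: it fits the three-parameter Weibull CDF to the empirical CDF (instead of a density estimate) and replaces the nonlinear least squares solver with a coarse grid search; all names and grid choices are illustrative.

```python
import math
import random

def weibull_cdf(x, k, lam, gamma):
    """CDF of the three-parameter Weibull(shape k, scale lam, location gamma)."""
    if x <= gamma:
        return 0.0
    return 1.0 - math.exp(-(((x - gamma) / lam) ** k))

def fit_weibull3(data, k_grid, lam_grid, g_grid):
    """Least-squares fit of the Weibull CDF to the empirical CDF
    (a coarse grid search standing in for a nonlinear least squares solver)."""
    xs = sorted(data)
    n = len(xs)
    step = max(1, n // 250)                       # subsample points for speed
    pts = [(xs[i], (i + 1) / n) for i in range(0, n, step)]
    best, best_sse = None, float("inf")
    for k in k_grid:
        for lam in lam_grid:
            for g in g_grid:
                sse = sum((weibull_cdf(x, k, lam, g) - p) ** 2 for x, p in pts)
                if sse < best_sse:
                    best, best_sse = (k, lam, g), sse
    return best, best_sse / len(pts)

# Simulated sample from Weibull(shape 2, scale 1, location 0.5) via inverse transform.
rng = random.Random(7)
data = [0.5 + (-math.log(1.0 - rng.random())) ** 0.5 for _ in range(500)]
(kh, lh, gh), mse = fit_weibull3(
    data,
    [i / 5 for i in range(5, 16)],     # shape grid 1.0 .. 3.0
    [i / 10 for i in range(5, 16)],    # scale grid 0.5 .. 1.5
    [i / 10 for i in range(0, 11)])    # location grid 0.0 .. 1.0
```

As the abstract stresses, the quality of the final fit is limited by the quality of the initial nonparametric stage; here that stage is the empirical CDF, whose accuracy grows with the sample size.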
15.
C. Robinson Edward Raja 《Bulletin des Sciences Mathématiques》2003,127(4):283-291
A Markov operator P on a σ-finite measure space (X,Σ,m) with invariant measure m is said to have Krengel-Lin decomposition if L2(X)=E0⊕L2(X,Σd) where E0={f∈L2(X)∣‖Pn(f)‖→0} and Σd is the deterministic σ-field of P. We consider convolution operators and we show that a measure λ on a hypergroup has Krengel-Lin decomposition if and only if the sequence of convolution powers converges to an idempotent or λ is scattered. We verify this condition for probabilities on Tortrat groups, on commutative hypergroups and on central hypergroups. We give a counter-example to show that the decomposition is not true for measures on discrete hypergroups.
16.
Paavo Salminen 《Annales de l'Institut Henri Poincaré (B) Probabilités et Statistiques》2007,43(6):655
The joint distribution of maximum increase and decrease for Brownian motion up to an independent exponential time is computed. This is achieved by decomposing the Brownian path at the hitting times of the infimum and the supremum before the exponential time. It is seen that an important element in our formula is the distribution of the maximum decrease for the three-dimensional Bessel process with drift started from 0 and stopped at the first hitting of a given level. From the joint distribution of the maximum increase and decrease it is possible to calculate the correlation coefficient between these at a fixed time; its explicit value is derived in the paper.
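The quantities in question are easy to approximate by simulation. The sketch below (a Monte Carlo illustration under assumptions of my choosing, not the paper's analytic method: a discretized Brownian path up to a fixed time rather than an exponential time, with hypothetical function names) estimates the correlation between the maximum increase and the maximum decrease, which comes out negative, as intuition suggests.

```python
import math
import random

def max_increase_decrease(path):
    """Maximum increase (drawup) and maximum decrease (drawdown) of a path."""
    run_min = run_max = path[0]
    inc = dec = 0.0
    for v in path:
        run_min = min(run_min, v)
        run_max = max(run_max, v)
        inc = max(inc, v - run_min)    # rise from the running minimum
        dec = max(dec, run_max - v)    # fall from the running maximum
    return inc, dec

def sample_corr(n_paths=2000, n_steps=200, T=1.0, seed=3):
    """Sample correlation of (max increase, max decrease) for discretized BM on [0, T]."""
    rng = random.Random(seed)
    sd = math.sqrt(T / n_steps)
    incs, decs = [], []
    for _ in range(n_paths):
        w, path = 0.0, [0.0]
        for _ in range(n_steps):
            w += rng.gauss(0.0, sd)
            path.append(w)
        i, d = max_increase_decrease(path)
        incs.append(i)
        decs.append(d)
    mi, md = sum(incs) / n_paths, sum(decs) / n_paths
    cov = sum((a - mi) * (b - md) for a, b in zip(incs, decs)) / n_paths
    vi = sum((a - mi) ** 2 for a in incs) / n_paths
    vd = sum((b - md) ** 2 for b in decs) / n_paths
    return cov / math.sqrt(vi * vd)

rho = sample_corr()
```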
17.
Christian Hering 《Journal of multivariate analysis》2010,101(6):1428-1433
A probabilistic interpretation for hierarchical Archimedean copulas based on Lévy subordinators is given. Independent exponential random variables are divided by group-specific Lévy subordinators which are evaluated at a common random time. The resulting random vector has a hierarchical Archimedean survival copula. This approach suggests an efficient sampling algorithm and allows one to easily construct several new parametric families of hierarchical Archimedean copulas.
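The one-level special case of this construction is the classical Marshall–Olkin algorithm: divide i.i.d. exponentials by a single common mixing variable. The sketch below shows that special case for the Clayton family (a Gamma frailty playing the role of the subordinator value); it is an illustration of the sampling idea, not the paper's full hierarchical algorithm, and the function name is hypothetical.

```python
import random

def sample_clayton(n, d, theta, seed=42):
    """Marshall-Olkin sampling of a d-dimensional Clayton copula:
    U_j = (1 + E_j / V)^(-1/theta) with V ~ Gamma(1/theta, 1) and E_j i.i.d.
    standard exponential. (1 + t)^(-1/theta) is the Laplace transform of V,
    i.e. the Archimedean generator of the Clayton family."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        v = rng.gammavariate(1.0 / theta, 1.0)   # common mixing variable
        out.append([(1.0 + rng.expovariate(1.0) / v) ** (-1.0 / theta)
                    for _ in range(d)])
    return out

u = sample_clayton(500, 2, theta=2.0)
```

In the hierarchical version the single V is replaced by group-specific Lévy subordinators evaluated at a common random time, which induces stronger dependence within groups than between them.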
18.
19.
The paper presents a unified approach to local likelihood estimation for a broad class of nonparametric models, including e.g. the regression, density, Poisson and binary response models. The method extends the adaptive weights smoothing (AWS) procedure introduced in Polzehl and Spokoiny (2000) in the context of image denoising. The main idea of the method is to describe the largest possible local neighborhood of every design point Xi in which the local parametric assumption is justified by the data. The method is especially powerful for model functions having large homogeneous regions and sharp discontinuities. The performance of the proposed procedure is illustrated by numerical examples for density estimation and classification. We also establish some remarkable theoretical nonasymptotic results on properties of the new algorithm. This includes the "propagation" property which particularly yields the root-n consistency of the resulting estimate in the homogeneous case. We also state an "oracle" result which implies rate optimality of the estimate under usual smoothness conditions, and a "separation" result which explains the sensitivity of the method to structural changes.
20.
Colm Art O'Cinneide 《Numerische Mathematik》1993,65(1):109-120
Grassmann, Taksar, and Heyman introduced a variant of Gaussian elimination for computing the steady-state vector of a Markov chain. In this paper we prove that their algorithm is stable, and that the problem itself is well-conditioned, in the sense of entrywise relative error. Thus the algorithm computes each entry of the steady-state vector with low relative error. Even the small steady-state probabilities are computed accurately. The key to our analysis is to focus on entrywise relative error in both the data and the computed solution, rather than making the standard assessments of error based on norms. Our conclusions do not depend on any condition numbers for the problem. This work was supported by NSF under grants DMS-9106207 and DDM-9203134
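A minimal sketch of the Grassmann–Taksar–Heyman elimination, assuming a small irreducible chain (the function name and test matrix are illustrative). The distinctive feature is that every operation combines nonnegative quantities with additions, multiplications, and divisions only, with no subtractions, which is what makes the small probabilities come out with low relative error.

```python
def gth_steady_state(P):
    """Stationary vector of an irreducible stochastic matrix P by GTH elimination.
    States n-1, ..., 1 are censored out one at a time; back-substitution then
    recovers the unnormalized stationary weights. No subtractions occur."""
    n = len(P)
    P = [row[:] for row in P]              # work on a copy
    s = [0.0] * n
    for k in range(n - 1, 0, -1):          # eliminate state k
        s[k] = sum(P[k][j] for j in range(k))        # mass leaving k downward
        for i in range(k):
            for j in range(k):
                P[i][j] += P[i][k] * P[k][j] / s[k]  # censored-chain update
    x = [0.0] * n
    x[0] = 1.0
    for k in range(1, n):                  # back-substitution
        x[k] = sum(x[i] * P[i][k] for i in range(k)) / s[k]
    total = sum(x)
    return [v / total for v in x]

# Small birth-death chain; by detailed balance its stationary vector is
# proportional to (1, 2, 1).
P = [[0.50, 0.50, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]
pi = gth_steady_state(P)
```

On this example the routine returns (0.25, 0.5, 0.25), matching the detailed-balance calculation.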