20 similar records found; search took 31 ms
1.
A. G. Belov 《Computational Mathematics and Modeling》2009,20(4):383-396
We investigate OLS parameter estimation for a linear paired model in the case of a passive experiment with errors in both
variables. The explicit form of the OLS estimates is obtained, their equivalence to maximum likelihood estimates is demonstrated
in the presence of normal errors, and estimate consistency is proved. The OLS estimates are compared analytically and numerically
with known parameter estimates of “direct,” “orthogonal,” and “diagonal” regression models.
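As a rough numerical companion to this abstract (not the author's estimator), the following sketch contrasts the "direct" OLS slope with the "orthogonal" (total least squares) slope for a paired linear model; the closed form for the orthogonal slope assumes equal error variances in both variables.

```python
import numpy as np

def direct_slope(x, y):
    """Ordinary ("direct") regression slope of y on x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

def orthogonal_slope(x, y):
    """Orthogonal (total least squares) regression slope, minimizing
    perpendicular distances; assumes equal error variance in x and y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx, syy = np.var(x), np.var(y)
    sxy = np.cov(x, y, bias=True)[0, 1]
    d = syy - sxx
    return (d + np.sqrt(d * d + 4.0 * sxy ** 2)) / (2.0 * sxy)
```

On exact linear data the two slopes coincide; with errors in both variables they diverge, which is the regime the abstract studies.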
2.
Estimating Functions for Nonlinear Time Series Models (cited by: 1; self-citations: 0; citations by others: 1)
S. Ajay Chandra Masanobu Taniguchi 《Annals of the Institute of Statistical Mathematics》2001,53(1):125-141
This paper discusses the problem of estimation for two classes of nonlinear models, namely random coefficient autoregressive (RCA) and autoregressive conditional heteroskedasticity (ARCH) models. For the RCA model, first assuming that the nuisance parameters are known, we construct an estimator for the parameters of interest based on Godambe's asymptotically optimal estimating function. Then, using the conditional least squares (CLS) estimator given by Tjøstheim (1986, Stochastic Process. Appl., 21, 251–273) and classical moment estimators for the nuisance parameters, we propose an estimated version of this estimator. These results are then extended to the vector-parameter case. Next, we turn to the problem of estimating the ARCH model with unknown parameter vector. We construct an estimator for the parameters of interest based on Godambe's optimal estimator, allowing part of the estimator to depend on unknown parameters. Then, substituting the CLS estimators for the unknown parameters, the estimated version is proposed. Simulation studies compare the CLS estimator with the estimated optimal estimator for the RCA model, and the CLS estimator with the estimated version for the ARCH model.
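The CLS idea for an ARCH(1) model can be sketched concretely. Since E[r_t² | past] = ω + α·r_{t−1}², conditional least squares reduces to a linear regression of r_t² on r_{t−1}²; this is a minimal illustration of CLS only, not the paper's optimal estimating-function estimator.

```python
import numpy as np

def cls_arch1(r):
    """Conditional least squares for ARCH(1): since
    E[r_t^2 | past] = w + a * r_{t-1}^2, the pair (w, a) is obtained
    by regressing r_t^2 on r_{t-1}^2."""
    r = np.asarray(r, float)
    y = r[1:] ** 2
    X = np.column_stack([np.ones(len(y)), r[:-1] ** 2])
    (w, a), *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, a
```

On a series whose squares satisfy the conditional-mean recursion exactly, the regression recovers (ω, α) exactly; on real data it gives the CLS estimate.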
3.
Using computationally efficient wavelet methods, we study two nonlinear models of financial returns {r_t}: linear ARCH (LARCH) and fractionally integrated GARCH (FIGARCH). We estimate the tail index α and the long memory parameter d of the squared returns X_t = r_t^2 of LARCH, and of the powers X_t = |r_t|^p of FIGARCH. We find that the X_t have infinite variance and long memory, and show how the estimates of α and d depend on the model parameters. These relationships are determined empirically, as the setting is quite complex and no suitable theory has been developed so far. In particular, we provide empirical relationships between the estimates d̂ and the difference parameters in LARCH and FIGARCH. Our computational work uncovers tail and memory properties of LARCH and FIGARCH for practically relevant parameter ranges, and provides some guidance on modeling returns on speculative assets, including FX rates, stocks and market indices.
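The tail index α of heavy-tailed powers X_t is commonly estimated with the Hill estimator; the sketch below is a generic illustration of that standard estimator, not the wavelet-based procedure of the paper.

```python
import numpy as np

def hill_alpha(x, k):
    """Hill estimator of the tail index alpha, using the k largest
    observations above the (k+1)-th largest as threshold:
    alpha_hat = k / sum(log(X_(n-i+1) / X_(n-k)))."""
    x = np.sort(np.asarray(x, float))
    tail = x[-k:]            # k largest order statistics
    threshold = x[-k - 1]    # (k+1)-th largest observation
    return k / np.sum(np.log(tail / threshold))
```

For exact Pareto-tailed data the estimate concentrates around the true α; the choice of k is the usual bias/variance trade-off.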
4.
The score tests of independence in multivariate extreme values derived by Tawn (Tawn, J.A., “Bivariate extreme value theory:
models and estimation,” Biometrika 75, 397–415, 1988) and Ledford and Tawn (Ledford, A.W. and Tawn, J.A., “Statistics for near independence in multivariate
extreme values,” Biometrika 83, 169–187, 1996) have non-regular properties that arise due to violations of the usual regularity conditions of maximum
likelihood. Two distinct types of regularity violation are encountered in each of their likelihood frameworks: independence
within the underlying model corresponding to a boundary point of the parameter space and the score function having an infinite
second moment. For applications, the second form of regularity violation has the more important consequences, as it results
in score statistics with non-standard normalisation and poor rates of convergence. The corresponding tests are difficult to
use in practical situations because their asymptotic properties are unrepresentative of their behaviour for the sample sizes
typical of applications, and extensive simulations may be needed in order to evaluate adequately their null distribution.
Overcoming this difficulty is the primary focus of this paper.
We propose a modification to the likelihood-based approaches of Tawn (1988) and Ledford and Tawn (1996) that provides asymptotically normal score tests of independence with regular normalisation and rapid convergence. The resulting tests are straightforward to implement and are beneficial in practical situations with realistic amounts of data.
AMS 2000 Subject Classification: Primary 60G70; Secondary 62H15
5.
Shaul K. Bar-Lev 《Annals of the Institute of Statistical Mathematics》1984,36(1):217-222
Summary. Consider a truncated exponential family of absolutely continuous distributions with natural parameter θ and truncation parameter γ. Strong consistency and asymptotic normality are shown to hold for the maximum likelihood and maximum conditional likelihood estimates of θ with γ unknown. Moreover, these two estimates are shown to have the same limiting distribution, which coincides with that of the maximum likelihood estimate of θ when γ is assumed to be known.
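As a concrete, much simpler instance of a density with both a natural and a truncation parameter, the shifted exponential θ·exp(−θ(x−γ)) for x ≥ γ has closed-form joint MLEs. This sketch illustrates only that special case, not the paper's general truncated exponential family.

```python
def shifted_exp_mle(xs):
    """Joint MLE for the shifted (left-truncated) exponential density
    f(x) = theta * exp(-theta * (x - gamma)), x >= gamma:
    gamma_hat is the sample minimum, theta_hat = 1 / (mean - gamma_hat)."""
    gamma_hat = min(xs)
    mean = sum(xs) / len(xs)
    return gamma_hat, 1.0 / (mean - gamma_hat)
```

The truncation parameter is estimated by the sample minimum (superefficiently), while θ̂ behaves asymptotically as if γ were known, which mirrors the abstract's conclusion.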
6.
Ram C. Tripathi Ramesh C. Gupta John Gurland 《Annals of the Institute of Statistical Mathematics》1994,46(2):317-331
This paper contains some alternative methods for estimating the parameters in the beta binomial and truncated beta binomial models. These methods are compared with maximum likelihood on the basis of Asymptotic Relative Efficiency (ARE). For the beta binomial distribution, a simple estimator based on moments or ratios of factorial moments has high ARE for most of the parameter space, and it is an attractive and viable alternative to computing the maximum likelihood estimator. It is also simpler to compute than an estimator based on the mean and zeros, proposed by Chatfield and Goodhart (1970, Appl. Statist., 19, 240–250), and has much higher ARE for most of the parameter space. For the truncated beta binomial, the simple estimator based on two moment relations does not behave quite as well as for the BB distribution, but a simple estimator based on two linear relations involving the first three moments and the frequency of ones has extremely high ARE. Some examples are provided to illustrate the procedure for the two models.
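A minimal sketch of a moment estimator for the beta-binomial, using the standard mean/variance relations with intraclass correlation ρ = 1/(α+β+1); this is a generic method-of-moments illustration, not necessarily the specific estimator compared in the paper.

```python
def beta_binomial_mom(mean, var, n):
    """Moment estimator for Beta-Binomial(n, alpha, beta), using
    mean = n*p and var = n*p*(1-p)*(1 + (n-1)*rho), where p is the
    mean success probability and rho = 1/(alpha+beta+1)."""
    p = mean / n
    rho = (var / (n * p * (1.0 - p)) - 1.0) / (n - 1)
    theta = (1.0 - rho) / rho          # theta = alpha + beta
    return p * theta, (1.0 - p) * theta
```

Feeding in the exact population moments recovers (α, β) exactly; with sample moments it gives the moment estimate.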
7.
This work studies a proportional hazards model for survival data with "long-term survivors", in which covariates are subject to linear measurement error. It is well known that the naive estimators from both partial and full likelihood methods are inconsistent under this measurement error model. For measurement error models, methods of unbiased estimating functions and corrected likelihood have been proposed in the literature. In this paper, we apply the corrected partial and full likelihood approaches to estimate the model and obtain statistical inference from survival data with long-term survivors. The asymptotic properties of the estimators are established. Simulation results illustrate that the proposed approaches provide useful tools for the models considered.
8.
We prove a general functional central limit theorem for weakly dependent time series. A very large variety of models (for instance, causal or non-causal linear, ARCH(∞), LARCH(∞), and Volterra processes) satisfies this theorem. Moreover, it provides numerous applications, both for bounding the distance between the empirical mean and the Gaussian measure and for obtaining central limit theorems for sample moments and cumulants.
C. José Rafael León: partially supported by the ECOS-NORD program of Fonacit, Venezuela.
9.
We give an “excluded minor” and a “structural” characterization of digraphs D that have the property that for every subdigraph H of D, the maximum number of disjoint circuits in H is equal to the minimum cardinality of a set T ⊆ V(H) such that H\T is acyclic.
10.
Christian N. Brinch 《Computational Statistics》2012,27(1):13-28
There exists an overall negative assessment of the performance of the simulated maximum likelihood algorithm in the statistics
literature, founded on both theoretical and empirical results. At the same time, there also exist a number of highly successful
applications. This paper explains the negative assessment by the coupling of the algorithm with “simple importance samplers”,
samplers that are not explicitly parameter dependent. The successful applications in the literature are based on explicitly
parameter dependent importance samplers. Simple importance samplers may efficiently simulate the likelihood function value,
but fail to efficiently simulate the score function, which is the key to efficient simulated maximum likelihood. The theoretical
points are illustrated by applying Laplace importance sampling in both variants to the classic salamander mating model.
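The contrast can be illustrated on a toy Gaussian random-effects likelihood, where L(θ) = ∫ N(y; u, 1) N(u; θ, 1) du has the closed form N(y; θ, 2). The sketch below estimates it with a "simple" importance sampler q(u) = N(0, 3²) that does not depend on θ; all numerical choices here are illustrative assumptions, not the salamander model of the paper.

```python
import math, random

def normpdf(x, mu, var):
    """Density of N(mu, var) at x."""
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def simulated_likelihood(y, theta, n=200_000, seed=0):
    """Simulated likelihood L(theta) = integral N(y; u, 1) N(u; theta, 1) du,
    estimated by importance sampling from the parameter-independent
    'simple' sampler q(u) = N(0, 9). Exact value: N(y; theta, 2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.gauss(0.0, 3.0)
        total += normpdf(y, u, 1.0) * normpdf(u, theta, 1.0) / normpdf(u, 0.0, 9.0)
    return total / n

exact = normpdf(1.0, 0.5, 2.0)      # closed-form likelihood value
approx = simulated_likelihood(1.0, 0.5)
```

A simple sampler like this estimates the likelihood *value* adequately; the paper's point is that it can still fail badly for the score, where parameter-dependent samplers are needed.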
11.
Liudas Giraitis Piotr Kokoszka Remigijus Leipus Gilles Teyssière 《Acta Appl Math》2003,78(1-3):285-299
The paper deals with the power and robustness of R/S-type tests under contiguous alternatives. We briefly review some long memory models in levels and volatility, and describe the R/S-type tests used to test for the presence of long memory. The empirical power of the tests is investigated when the fractional difference operator (1−L)^d is replaced by the operator (1−rL)^d, with r < 1 close to 1, in the FARIMA, LARCH and ARCH time series models. We also investigate the Gegenbauer process with a pole of the spectral density at a frequency close to zero.
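The classical R/S statistic underlying these tests is the range of the cumulative mean-adjusted partial sums divided by the sample standard deviation; a minimal sketch of that raw statistic (without the test's normalisation or critical values) follows.

```python
import numpy as np

def rescaled_range(x):
    """Classical R/S statistic: range of the cumulative mean-adjusted
    partial sums (including the empty partial sum Z_0 = 0) divided by
    the sample standard deviation."""
    x = np.asarray(x, float)
    z = np.cumsum(x - x.mean())
    r = max(z.max(), 0.0) - min(z.min(), 0.0)
    return r / x.std()
```

Under long memory the statistic grows faster with sample size than under short memory, which is what R/S-type tests exploit.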
12.
In this article, we introduce a conditional marginal model for longitudinal data, in which the residuals form a martingale difference sequence. This model allows us to consider a rich class of estimating equations which contains several estimating equations proposed in the literature. A particular sequence of estimating equations in this class contains a random matrix R_{i−1}^*(β) as a replacement for the “true” conditional correlation matrix of the ith individual. Using the approach of [12], we identify some sufficient conditions under which this particular sequence of equations is asymptotically optimal (in our class). In the second part of the article, we identify a second set of conditions under which we prove the existence and strong consistency of a sequence of estimators of β defined as roots of estimating equations which are martingale transforms (in particular, roots of the sequence of asymptotically optimal equations).
13.
For the regression parameter β_0 in the Cox model, several estimators have been constructed based on various types of approximated likelihood, but none of them has demonstrated a small-sample advantage over Cox’s partial likelihood estimator. In this article, we derive the full likelihood function for (β_0, F_0), where F_0 is the baseline distribution in the Cox model. Using the empirical likelihood parameterization, we explicitly profile out the nuisance parameter F_0 to obtain the full-profile likelihood function for β_0 and the maximum likelihood estimator (MLE) for (β_0, F_0). The relation between the MLE and Cox’s partial likelihood estimator for β_0 is made clear by showing that a Taylor expansion gives Cox’s partial likelihood estimating function as the leading term of the full-profile likelihood estimating function. We show that the log full-likelihood ratio has an asymptotic chi-squared distribution, while simulation studies indicate that for small or moderate sample sizes the MLE performs favorably compared with Cox’s partial likelihood estimator. In a real-data example, our full likelihood ratio test and Cox’s partial likelihood ratio test lead to statistically different conclusions.
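Cox's log partial likelihood, whose estimating function is the benchmark in this abstract, can be sketched for a one-dimensional covariate with no censoring and no ties (simplifying assumptions, not made in the paper):

```python
import math

def log_partial_likelihood(beta, times, covariates):
    """Cox log partial likelihood for a scalar covariate, assuming all
    subjects experience the event (no censoring) and event times are
    distinct (no ties): sum over events of
    beta*x_i - log(sum over the risk set of exp(beta*x_j))."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    total = 0.0
    for k, i in enumerate(order):
        risk = order[k:]  # subjects still at risk at the i-th event time
        total += beta * covariates[i] - math.log(
            sum(math.exp(beta * covariates[j]) for j in risk))
    return total
```

At β = 0 the value is simply minus the sum of the logarithms of the risk-set sizes, a useful sanity check.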
14.
Analysis of rounded data from dependent sequences (cited by: 1; self-citations: 0; citations by others: 1)
Baoxue Zhang Tianqing Liu Z. D. Bai 《Annals of the Institute of Statistical Mathematics》2010,62(6):1143-1173
Observations on continuous populations are often rounded when recorded, owing to the precision of the recording mechanism, yet classical statistical approaches have ignored the effect of the resulting rounding errors. When the observations are independent and identically distributed, exact maximum likelihood estimation (MLE) can be employed. However, if the rounded data come from a dependent structure, the MLE of the parameters is difficult to calculate, since the integral involved in the likelihood equation is intractable. This paper presents and examines a new approach to parameter estimation, termed “short, overlapping series” (SOS), to deal with α-mixing models in the presence of rounding errors. We establish the asymptotic properties of the SOS estimators when the innovations are normally distributed. Comparisons of this new approach with other existing techniques in the literature are also made by simulation with samples of moderate size.
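The effect of rounding on the likelihood can be illustrated in the i.i.d. normal case, where each recorded integer r stands for an underlying value in [r−0.5, r+0.5); the grid-search MLE below is a crude illustration of the exact rounded-data likelihood, not the SOS method for dependent data.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def rounded_loglik(mu, sigma, rounded):
    """Exact log likelihood of integer-rounded observations from
    N(mu, sigma^2): each recorded value r corresponds to an underlying
    value in the interval [r - 0.5, r + 0.5)."""
    return sum(math.log(phi((r + 0.5 - mu) / sigma) - phi((r - 0.5 - mu) / sigma))
               for r in rounded)

def mle_mu(rounded, sigma=1.0):
    """Crude grid-search MLE of mu over [-3, 3], sigma held fixed."""
    grid = [i / 100.0 for i in range(-300, 301)]
    return max(grid, key=lambda m: rounded_loglik(m, sigma, rounded))
```

For dependent sequences this interval-probability integral becomes multivariate and intractable, which is exactly the difficulty the SOS approach addresses.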
15.
We introduce in this paper the concept of “impulse evolutionary game”. Examples of evolutionary games are usual differential
games, differentiable games with history (path-dependent differential games), mutational differential games, etc. Impulse
evolutionary systems and games cover in particular “hybrid systems” as well as “qualitative systems”. The conditional viability
kernel of a constrained set (with a target) is the set of initial states such that for all strategies (regarded as continuous
feedbacks) played by the second player, there exists a strategy of the first player such that the associated run starting
from this initial state satisfies the constraints until it hits the target. This paper characterizes the concept of conditional
viability kernel for “qualitative games” and of conditional valuation function for “qualitative games” maximinimizing an intertemporal
criterion. The theorems obtained so far about viability/capturability issues for evolutionary systems, conditional viability
for differential games and about impulse and hybrid systems are used to provide characterizations of conditional viability
under impulse evolutionary games.
16.
Milan Stehlík Rastislav Potocký Helmut Waldl Zdeněk Fabián 《Computational Statistics》2010,25(3):485-503
Assessment of heavy tailed data and its compound sums has many applications in insurance, auditing and operational risk capital
assessment among others. In this paper, we compare the classical estimators (maximum likelihood, QQ and moment estimators)
with the recently introduced robust estimators of “generalized median”, “trimmed mean” and estimators based on t-score moments.
We derive the exact distribution of the likelihood ratio tests of homogeneity and simple hypothesis on the tail index of a
two-parameter Pareto model. Such exact tests support the assessment of estimator performance. In particular, we discuss some problems that can arise when methods based on the log-normal assumption, supported by the Basel II framework, are misapplied. Real-data and simulated examples illustrate the methods.
17.
The ordinary least squares estimation is based on minimization of the squared distance of the response variable to its conditional
mean given the predictor variable. We extend this method by including in the criterion function the distance of the squared
response variable to its second conditional moment. It is shown that this “second-order” least squares estimator is asymptotically
more efficient than the ordinary least squares estimator if the third moment of the random error is nonzero, and both estimators
have the same asymptotic covariance matrix if the error distribution is symmetric. Simulation studies show that the variance
reduction of the new estimator can be as high as 50% for sample sizes below 100. As a by-product, the joint asymptotic covariance matrix of the ordinary least squares estimators for the regression parameter and for the random error variance is also derived; this matrix is available in the literature only for very special cases, e.g. when the random error has a normal distribution. The results apply to both linear and nonlinear regression models, where the random error distributions are not necessarily known.
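A minimal sketch of the second-order least squares idea for a one-parameter model y = βx + ε with identity weighting: the criterion matches both the conditional mean (y against βx) and the second conditional moment (y² against (βx)² + σ²). The grid search is purely illustrative; the paper's estimator is defined by minimizing such a criterion in general.

```python
def sls_fit(x, y, beta_grid, s2_grid):
    """Second-order least squares for y = beta*x + eps (identity weights):
    jointly fit the first moment, y ~ beta*x, and the second moment,
    y^2 ~ (beta*x)^2 + sigma^2, by a crude grid search."""
    def q(beta, s2):
        return sum((yi - beta * xi) ** 2
                   + (yi ** 2 - (beta * xi) ** 2 - s2) ** 2
                   for xi, yi in zip(x, y))
    return min(((b, s) for b in beta_grid for s in s2_grid), key=lambda p: q(*p))
```

On noiseless data the criterion is exactly zero at the true (β, σ² = 0), so the grid search recovers it.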
18.
T. V. Panchapagesan 《Rendiconti del Circolo Matematico di Palermo》1995,44(3):417-440
The concept of an orthogonal spectral representation (OTSR) of a Hilbert space H relative to a spectral measure E(·) is introduced, and it is shown that every Hilbert space admits an OTSR relative to a given spectral measure. Apart from
the various results obtained about OTSRs, the principal result of Allan Brown (1974) is deduced as an easy consequence of
this study. A new complete system of unitary invariants called the “equivalence of OTSRs”, is given for spectral measures.
Two special types of OTSRs called “BOTSR” and “COBOTSR” are introduced and characterized respectively in terms of the “GCGS-property”
and “CGS-property” of the associated spectral measure. Various complete systems of unitary invariants are given for spectral
measures with the GCGS-property. Finally, the Wecken-Plesner-Rohlin theorem on hermitian operators with simple spectra is
generalized to arbitrary spectral measures.
19.
For an ARMA model, we test the hypothesis that the coefficients of the model remain constant in time and satisfy the stationarity condition against the alternative that the coefficients change (“drift”) in time. We propose asymptotically distribution-free tests for this hypothesis based on sequential residual processes. A similar problem is solved for the ARCH model.
20.
Reuven Rubinstein 《Methodology and Computing in Applied Probability》2009,11(4):491-549
We present a randomized algorithm, called the cloning algorithm, for approximating the solutions of quite general NP-hard combinatorial optimization problems, counting, rare-event estimation and uniform sampling on complex regions. Similar to the algorithms of Diaconis–Holmes–Ross and Botev–Kroese, the cloning algorithm is based on the MCMC (Gibbs) sampler equipped with an importance sampling pdf and, as usual for randomized algorithms, it uses a sequential sampling plan to decompose a “difficult” problem into a sequence of “easy” ones. The cloning algorithm combines the best features of the Diaconis–Holmes–Ross and Botev–Kroese algorithms. In addition to some other enhancements, it has a special mechanism, called the “cloning” device, which makes the cloning algorithm (also called the Gibbs cloner) fast and accurate. We believe it is the fastest and most accurate randomized algorithm for counting known so far. In addition, it is well suited to solving problems associated with the Boltzmann distribution, such as estimating partition functions in an Ising model. We also present a combined version of the cloning and cross-entropy (CE) algorithms. We prove the polynomial complexity of a particular version of the Gibbs cloner for counting. We finally present efficient numerical results with the Gibbs cloner and the combined version, solving quite general integer and combinatorial optimization problems as well as counting problems, such as SAT.