1.
We propose a method for estimating nonstationary spatial covariance functions by representing a spatial process as a linear combination of local basis functions with uncorrelated random coefficients and some stationary processes, based on spatial data sampled with repeated measurements. By incorporating a large collection of local basis functions with various scales at various locations, together with stationary processes of various degrees of smoothness, the model is flexible enough to represent a wide variety of nonstationary spatial features. Covariance estimation and model selection are formulated as a regression problem with the sample covariances as the response and the covariances corresponding to the local basis functions and the stationary processes as the predictors. A constrained least squares approach is applied to select appropriate basis functions and stationary processes and to estimate parameters simultaneously. In addition, a constrained generalized least squares approach is proposed to further account for the dependencies among the response variables. A simulation experiment shows that our method performs well in both covariance function estimation and spatial prediction. The methodology is applied to a U.S. precipitation dataset for illustration. Supplemental materials relating to the application are available online.
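As an illustration of the regression formulation, here is a minimal Python sketch (not the authors' code) that fits a sample covariance matrix over a small hypothetical dictionary of local-basis and exponential covariance components, using nonnegativity as the constraint in the least squares fit; coefficients driven to zero play the role of model selection:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical setup: n sites on a line, T repeated measurements.
rng = np.random.default_rng(0)
n, T = 30, 200
s = np.linspace(0.0, 1.0, n)                      # site locations
D = np.abs(s[:, None] - s[None, :])               # pairwise distances

# Candidate covariance components ("predictors"):
#  - local basis functions: triangular bumps at several centers/scales
#  - stationary processes: exponential covariances with several ranges
def bump(center, scale):
    w = np.maximum(1.0 - np.abs(s - center) / scale, 0.0)
    return np.outer(w, w)                         # rank-1 local covariance

candidates = [bump(c, sc) for c in (0.2, 0.5, 0.8) for sc in (0.1, 0.3)]
candidates += [np.exp(-D / r) for r in (0.05, 0.2, 0.5)]

# Simulate a "true" nonstationary field: one bump plus one stationary part.
Sigma_true = 1.5 * bump(0.5, 0.3) + 0.8 * np.exp(-D / 0.2)
X = rng.multivariate_normal(np.zeros(n), Sigma_true, size=T)
S = np.cov(X, rowvar=False)                       # sample covariance (response)

# Regression with nonnegativity constraints: vec(S) ~ sum_k beta_k vec(C_k).
Phi = np.column_stack([C.ravel() for C in candidates])
beta, _ = nnls(Phi, S.ravel())
Sigma_hat = sum(b * C for b, C in zip(beta, candidates))
print("selected coefficients:", np.round(beta, 3))
print("relative error:",
      np.linalg.norm(Sigma_hat - Sigma_true) / np.linalg.norm(Sigma_true))
```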

2.
In this paper, we report the results of a series of experiments on a version of the centipede game in which the total payoff to the two players is constant. Standard backward induction arguments lead to a unique Nash equilibrium outcome prediction, which coincides with the prediction made by theories of “fair” or “focal” outcomes. We find that subjects frequently fail to select the unique Nash outcome. While this behavior was also observed by McKelvey and Palfrey (1992) in the “growing pie” version of the game they studied, there the Nash outcome was not “fair” and Pareto improvements were possible by deviating from Nash play, so their findings could be explained by small amounts of altruistic behavior. No Pareto improvements are available in the constant-sum games we examine; hence explanations based on altruism cannot account for these new data. We examine and compare two classes of models to explain the data. The first consists of non-equilibrium modifications of the standard “Always Take” model. The second, the Quantal Response Equilibrium model, describes an equilibrium in which subjects make mistakes in implementing their best replies and assume other players do so as well. Among the models we test, one specification of this model fits the experimental data best and accounts for all the main features we observe in the data.
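To make the quantal response idea concrete, the following sketch computes a logit QRE for a toy 2x2 constant-sum game by damped fixed-point iteration; the payoffs are hypothetical and the normal form is a simplification of the centipede game's extensive form analysed in the paper. As the precision parameter grows, play approaches the Nash prediction:

```python
import numpy as np

# Toy 2x2 constant-sum game: payoffs of the two players sum to 4.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])      # row player's payoffs
B = 4.0 - A                     # column player's payoffs

def logit_qre(lam, iters=5000):
    """Logit quantal response equilibrium by damped fixed-point iteration."""
    p = np.full(2, 0.5)         # row player's mixed strategy
    q = np.full(2, 0.5)         # column player's mixed strategy
    for _ in range(iters):
        u_row = A @ q           # expected payoff of each row action
        u_col = B.T @ p         # expected payoff of each column action
        p_new = np.exp(lam * u_row); p_new /= p_new.sum()
        q_new = np.exp(lam * u_col); q_new /= q_new.sum()
        p = 0.9 * p + 0.1 * p_new       # damping for stable convergence
        q = 0.9 * q + 0.1 * q_new
    return p, q

for lam in (0.1, 1.0, 10.0):    # lam -> infinity approaches Nash play
    print(lam, *(np.round(s, 3) for s in logit_qre(lam)))
```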

3.
The present paper deals with the identification and maximum likelihood estimation of systems of linear stochastic differential equations using panel data, so that only a sample of discrete observations over time of the relevant variables is available for each individual. A popular approach in the social sciences advocates estimating the “exact discrete model,” after a reparameterization, with LISREL or similar programs for structural equation models. The “exact discrete model” corresponds to the continuous time model in the sense that observations at equidistant points in time generated by the latter system also satisfy the former. In the LISREL approach the reparameterized discrete time model is estimated first, without taking into account the nonlinear mapping from the continuous to the discrete time parameters. In a second step, using the inverse mapping, the fundamental parameters of the continuous time system in which we are interested are inferred. However, several severe problems arise with this “indirect approach.” First, an identification problem may arise in multiple equation systems, since the matrix exponential function defining some of the new parameters is in general not one-to-one, and hence the inverse mapping mentioned above does not exist. Second, some sort of approximation of the time paths of the exogenous variables is usually necessary before the structural parameters of the system can be estimated with discrete data. Two simple approximation methods are discussed. In both methods the resulting new discrete time parameters are connected in a complicated way, so estimating the reparameterized discrete model by OLS without restrictions does not yield maximum likelihood estimates of the desired continuous time parameters, as claimed by some authors. Third, a further limitation of estimating the reparameterized model with programs for structural equation models is that even simple restrictions on the original fundamental parameters of the continuous time system cannot be handled. This issue is also discussed in some detail. For these reasons the “indirect method” cannot be recommended; in many cases it leads to misleading inferences. We strongly advocate direct estimation of the continuous time parameters. This approach is more involved, because the exact discrete model is nonlinear in the original parameters. A computer program by Hermann Singer that provides appropriate maximum likelihood estimates is described.
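The core of the mapping, and of the identification problem, can be seen in a few lines. The sketch below (ours, with an assumed 2x2 drift matrix) uses scipy's expm to form the exact discrete transition matrix and shows that two different continuous time drift matrices yield identical discrete models, so the principal matrix logarithm implicit in the indirect approach can recover only one of them:

```python
import numpy as np
from scipy.linalg import expm, logm

dt = 1.0  # observation interval

# Drift matrix of dx = A x dt + noise, with oscillation frequency omega.
def drift(omega):
    return np.array([[-0.1, -omega],
                     [omega, -0.1]])

A1 = drift(1.0)
A2 = drift(1.0 + 2.0 * np.pi / dt)   # aliased frequency

# Exact discrete transition matrices: x(t + dt) = expm(A * dt) x(t) + noise.
print(np.allclose(expm(A1 * dt), expm(A2 * dt)))   # True: same discrete model

# The principal matrix logarithm picks one branch, so the indirect approach
# cannot distinguish A1 from A2 on the basis of discrete-time data alone.
print(np.round(np.real_if_close(logm(expm(A2 * dt)) / dt), 4))  # A1, not A2
```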

4.
Maximum a Posteriori Sequence Estimation Using Monte Carlo Particle Filters
We develop methods for performing maximum a posteriori (MAP) sequence estimation in non-linear, non-Gaussian dynamic models. The methods rely on a particle cloud representation of the filtering distribution which evolves through time using importance sampling and resampling ideas. MAP sequence estimation is then performed using a classical dynamic programming technique applied to the discretised version of the state space. In contrast with standard approaches to the problem, which essentially compare only the trajectories generated directly during the filtering stage, our method efficiently computes the optimal trajectory over all combinations of the filtered states. A particular strength of the method is that MAP sequence estimation is performed sequentially in a single forward pass through the data, without requiring an additional backward sweep. An application to estimation of a non-linear time series model and to spectral estimation for time-varying autoregressions is described.
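A minimal sketch of the idea, on a toy nonlinear model common in the particle filtering literature (our choice, not necessarily the paper's example): a bootstrap filter propagates the cloud while a Viterbi-style dynamic program tracks the best trajectory into every particle. For brevity this sketch stores backpointers and backtracks at the end, whereas the paper's sequential variant avoids a separate backward sweep:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 50                       # particles, time steps
sq, sr = 1.0, 1.0                    # state / observation noise std (assumed)

def f(x):      return 0.5 * x + 25 * x / (1 + x**2)        # transition mean
def lt(xn, x): return -0.5 * ((xn - f(x)) / sq) ** 2       # log transition
def lo(y, x):  return -0.5 * ((y - x**2 / 20) / sr) ** 2   # log observation

# simulate data from the model
x = 0.0; ys = []
for t in range(T):
    x = f(x) + sq * rng.standard_normal()
    ys.append(x**2 / 20 + sr * rng.standard_normal())

# bootstrap particle filter; Viterbi-style DP over the particle clouds
parts = rng.standard_normal(N)
logw = lo(ys[0], parts)              # filter log-weights
delta = lo(ys[0], parts)             # best log posterior ending at each particle
back = np.zeros((T, N), dtype=int)
clouds = [parts]
for t in range(1, T):
    w = np.exp(logw - logw.max()); w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                  # multinomial resampling
    new = f(parts[idx]) + sq * rng.standard_normal(N)
    logw = lo(ys[t], new)                             # post-resampling weights
    # DP step over ALL old particles, not just the resampled ancestors
    trans = lt(new[:, None], parts[None, :])          # (N_new, N_old)
    back[t] = np.argmax(delta[None, :] + trans, axis=1)
    delta = (delta[None, :] + trans).max(axis=1) + lo(ys[t], new)
    parts = new; clouds.append(parts)

k = int(np.argmax(delta)); path = [clouds[-1][k]]     # backtrack the MAP path
for t in range(T - 1, 0, -1):
    k = back[t, k]; path.append(clouds[t - 1][k])
path.reverse()
print("MAP state estimates (first 5):", np.round(path[:5], 2))
```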

5.
Wai Kwong Cheang 《PAMM》2007,7(1):2080015-2080016
Trends in population have important implications for a government in formulating its manpower-related policies, and a business may also need to adjust its long-term market strategies according to these trends. This paper analyses the Singapore population data using a time series regression model with an autoregressive error term. In addition to a linear trend, the regressors included in the model are (i) a seasonal component of period 12 to account for the effect of the auspicious “dragon” years in the Chinese calendar, and (ii) level-shift interventions to account for the effect of government campaigns on family planning. Two methods of estimating the autoregressive parameter are considered: maximum likelihood (ML) and restricted maximum likelihood (REML). For a time series of short or moderate length, Cheang and Reinsel (2000) show that the REML estimator is generally much less biased than the ML estimator, so the REML approach leads to more accurate inferences for the regression parameters. This paper compares the ML and REML estimation results and examines the implications for the nature of the nonstationarity (deterministic or stochastic trend component) exhibited by the Singapore population series.
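The ML/REML contrast can be reproduced in a few lines. The following sketch (simulated linear-trend data, not the Singapore series) profiles out the regression and variance parameters and compares the ML and REML criteria for the AR(1) coefficient; the REML criterion differs by the log-determinant of the whitened design matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40                                     # short series, where REML bias matters
t = np.arange(n, dtype=float)
X = np.column_stack([np.ones(n), t])       # intercept + linear trend
phi_true, sigma = 0.6, 1.0

# simulate stationary AR(1) errors and the observed series
e = np.zeros(n)
e[0] = sigma * rng.standard_normal() / np.sqrt(1 - phi_true**2)
for i in range(1, n):
    e[i] = phi_true * e[i - 1] + sigma * rng.standard_normal()
y = X @ np.array([10.0, 0.5]) + e

def criteria(phi):
    """Return (-2 profile ML, -2 profile REML), up to constants."""
    # whitening transform W with W Cov(e) W' proportional to the identity
    Wy = np.r_[np.sqrt(1 - phi**2) * y[0], y[1:] - phi * y[:-1]]
    WX = np.vstack([np.sqrt(1 - phi**2) * X[0], X[1:] - phi * X[:-1]])
    beta, *_ = np.linalg.lstsq(WX, Wy, rcond=None)
    rss = np.sum((Wy - WX @ beta) ** 2)
    logdetK = -np.log(1 - phi**2)          # log det of the AR(1) covariance factor
    p = X.shape[1]
    ml   = n * np.log(rss / n) + logdetK
    reml = (n - p) * np.log(rss / (n - p)) + logdetK \
           + np.linalg.slogdet(WX.T @ WX)[1]
    return ml, reml

phis = np.linspace(-0.95, 0.95, 381)
vals = np.array([criteria(p) for p in phis])
print("ML estimate of phi:  ", phis[vals[:, 0].argmin()])
print("REML estimate of phi:", phis[vals[:, 1].argmin()])
```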

6.
We propose a novel “tree-averaging” model that uses an ensemble of classification and regression trees (CART), each constituent tree estimated on a subset of similar data. We treat this grouping of subsets as Bayesian ensemble trees (BET) and model it as a Dirichlet process. We show that BET determines the optimal number of trees by adapting to the data heterogeneity. Compared with other ensemble methods, BET requires far fewer trees and shows equivalent prediction accuracy using weighted averaging. Moreover, each tree in BET provides a variable selection criterion and an interpretation for its subset. We develop an efficient estimation procedure with improved estimation strategies in both CART and mixture models. We demonstrate these advantages of BET with simulations and illustrate the approach with a real-world data example involving regression of lung function measurements obtained from patients with cystic fibrosis. Supplementary materials for this article are available online.

7.
An Algorithm for Combined Code and Carrier Phase Based GPS Positioning
The Global Positioning System (GPS) is a satellite based navigation system. GPS satellites transmit signals that allow one to estimate the location of GPS receivers quite accurately. A typical technique for kinematic position estimation in GPS is relative positioning, where two receivers are used: one is stationary with exactly known position, while the other roves and its position is to be estimated. We describe the physical situation and give the mathematical model based on the difference of the measurements at the stationary and roving receivers. The model we consider combines both the code and carrier phase measurements. We then present a recursive least squares approach for position estimation. We take full account of the structure of the problem to make our algorithm efficient, and use orthogonal transformations to ensure its numerical reliability. Real data test results suggest the algorithm is effective. An additional benefit of this approach is that the drawbacks of double differencing are avoided. The paper could also serve as a straightforward introduction for numerical analysts to an interesting area of GPS.
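For readers unfamiliar with recursive least squares, here is a generic sketch in the plain covariance-update form; the paper instead uses orthogonal transformations for numerical reliability, and real GPS geometry rather than the random toy measurement rows assumed here:

```python
import numpy as np

def rls_update(theta, P, a, y):
    """One recursive least squares step for a measurement y = a @ theta + noise."""
    Pa = P @ a
    k = Pa / (1.0 + a @ Pa)            # gain vector
    theta = theta + k * (y - a @ theta)
    P = P - np.outer(k, Pa)            # covariance update
    return theta, P

rng = np.random.default_rng(3)
pos_true = np.array([1.0, -2.0, 0.5])  # unknown parameter vector (toy stand-in)
theta = np.zeros(3)
P = 1e6 * np.eye(3)                    # large initial uncertainty
for _ in range(500):
    a = rng.standard_normal(3)         # toy measurement row
    y = a @ pos_true + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, a, y)
print(np.round(theta, 3))              # close to pos_true
```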

8.
We consider a class of distribution-free regression models defined only in terms of moments, which can be used to model separately reported-but-not-settled reserves and incurred-but-not-reported reserves. These regression models can be estimated using standard least squares and method of moments techniques, similar to those used in the distribution-free chain-ladder model. Further, they are closely related to double chain-ladder type models, and the suggested estimation techniques could serve as alternative estimation procedures for those models. Due to the simple structure of the models it is possible to obtain Mack-type mean squared error of prediction estimators. Moreover, the analysed regression models can be used on different levels of detail in the data, and using the least squares estimation techniques it is possible to show that the precision of the reserve predictor improves with more detailed data. The models can be seen as a sequence of linear models and are therefore also easy to bootstrap non-parametrically.

9.
ON THE ACCURACY OF THE LEAST SQUARES AND THE TOTAL LEAST SQUARES METHODS
Consider solving an overdetermined system of linear algebraic equations by both the least squares method (LS) and the total least squares method (TLS). Extensive published computational evidence shows that when the original system is consistent, one often obtains more accurate solutions by using the TLS method rather than the LS method. These numerical observations contrast with existing analytic perturbation theories for the LS and TLS methods, which show that the upper bounds for the LS solution are always smaller than the corresponding upper bounds for the TLS solution. In this paper we derive a new upper bound for the TLS solution and indicate when the TLS method can be more accurate than the LS method. Many applied problems in signal processing lead to overdetermined systems of linear equations where the matrix and right hand side are determined by experimental observations, usually in the form of a time series.
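The two estimators are easy to compare numerically. This sketch (our toy example) solves a consistent system perturbed by noise in both the matrix and the right hand side, using lstsq for LS and the classical SVD construction for TLS:

```python
import numpy as np

def tls(A, b):
    """Classical total least squares via the SVD of the augmented matrix."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                         # right singular vector of smallest sigma
    return -v[:n] / v[n]

rng = np.random.default_rng(4)
m, n = 200, 3
x_true = np.array([1.0, -1.0, 2.0])
A0 = rng.standard_normal((m, n))
b0 = A0 @ x_true                       # consistent underlying system
A = A0 + 0.05 * rng.standard_normal((m, n))   # errors in the matrix too
b = b0 + 0.05 * rng.standard_normal(m)

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
x_tls = tls(A, b)
print("LS  error:", np.linalg.norm(x_ls - x_true))
print("TLS error:", np.linalg.norm(x_tls - x_true))
```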

10.
We present a computational approach to the method of moments using Monte Carlo simulation. Simple algebraic identities are used so that all computations can be performed directly from simulation draws and computation of the derivative of the log-likelihood. We present a simple implementation using the Newton-Raphson algorithm, with the understanding that other optimization methods may be used in more complicated problems. The method can be applied to families of distributions with unknown normalizing constants, and can be extended to least squares fitting when the number of moments observed exceeds the number of parameters in the model. It can be further generalized to allow “moments” that are any function of data and parameters, including as a special case maximum likelihood for models with unknown normalizing constants or missing data. In addition to being used for estimation, our method may be useful for setting the parameters of a Bayes prior distribution by specifying moments of a distribution using prior information. We present two examples: specification of a multivariate prior distribution in a constrained-parameter family, and estimation of parameters in an image model. The former example, used for an application in pharmacokinetics, motivated this work. This work is similar to Ruppert's method in stochastic approximation; it combines Monte Carlo simulation and the Newton-Raphson algorithm as in Penttinen, uses computational ideas and importance sampling identities developed for Monte Carlo maximum likelihood by Gelfand and Carlin, Geyer, and Geyer and Thompson, and has some similarities to the maximum likelihood methods of Wei and Tanner.
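A minimal sketch of the approach for a one-parameter exponential model (our example, with an assumed observed moment): the moment and its derivative are both estimated from the same simulation draws, the derivative via the score identity d/dθ E_θ[h(X)] = E_θ[h(X) ∂ log f/∂θ], and Newton-Raphson iterates to match the observed moment:

```python
import numpy as np

rng = np.random.default_rng(5)
m_obs = 2.5                    # observed first moment to be matched (assumed)
U = rng.random(100_000)        # common random numbers across iterations

lam = 1.0                      # starting value for the exponential rate
for it in range(20):
    X = -np.log(U) / lam                      # Exp(lam) draws
    m_hat = X.mean()                          # simulated moment E_lam[X]
    # derivative via the score identity; the score of Exp(lam) is 1/lam - x
    d_hat = np.mean(X * (1.0 / lam - X))
    step = (m_hat - m_obs) / d_hat
    lam -= step                               # Newton-Raphson update
    if abs(step) < 1e-10:
        break
print(lam, 1.0 / m_obs)        # simulated estimate vs the exact answer 1/m_obs
```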

11.
In a total least squares (TLS) problem, we estimate an optimal set of model parameters X so that (A + ΔA)X = B + ΔB, where A is the model matrix, B is the observed data, and ΔA and ΔB are the corresponding corrections. When B is a single vector, Rao (1997) and Paige and Strakoš (2002) suggested formulating standard least squares problems, for which ΔA = 0, and data least squares problems, for which ΔB = 0, as weighted and scaled TLS problems. In this work we define an implicitly-weighted TLS formulation (ITLS) that reparameterizes these formulations to make computation easier. We derive asymptotic properties of the estimates as the number of rows in the problem approaches infinity, handling the rank-deficient case as well. We discuss the role of the ratio between the variances of the errors in A and B in choosing an appropriate parameter in ITLS. We also propose methods for computing the family of solutions efficiently and for choosing the appropriate solution when the ratio of variances is unknown. We provide experimental results on the usefulness of the ITLS family of solutions.

12.
In parameter estimation, selecting a single “best model” by some criterion is not a good choice when there is model uncertainty; model averaging is commonly used in this circumstance. In this paper, a transformation-based model averaged tail area method is proposed for constructing confidence intervals, extending the model averaged tail area method in the literature. The transformation-based method can be used for general parametric models and even non-parametric models, and it asymptotically has a simple formula when a certain transformation function is applied. Simulation studies are carried out to examine the performance of our method and compare it with existing methods. A real data set is also analyzed to illustrate the methods.

13.
Using change-point statistics and the golden section method, this paper studies change-point estimation and parameter estimation for discrete regression equations with multiple change points. A least squares algorithm is proposed that searches for the best change-point estimates by golden section while obtaining the parameter estimates simultaneously; applications of the algorithm in the field of control are also discussed. Numerical simulation results show that the algorithm gives good estimates of both the change points and the parameters.
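As a rough illustration (single change point only, whereas the paper treats multiple change points), the sketch below evaluates the two-segment least squares error at candidate split indices and narrows the search by golden section, assuming the error curve is roughly unimodal in the split index:

```python
import numpy as np

rng = np.random.default_rng(6)
n, k_true = 120, 70
t = np.arange(n, dtype=float)
# two-segment linear regression with one change point (toy data)
y = np.where(t < k_true, 1.0 + 0.05 * t, 4.5 - 0.02 * t) \
    + 0.2 * rng.standard_normal(n)

def sse(k):
    """Least squares error of separate line fits before/after split index k."""
    total = 0.0
    for idx in (t < k, t >= k):
        Xs = np.column_stack([np.ones(idx.sum()), t[idx]])
        _, res, *_ = np.linalg.lstsq(Xs, y[idx], rcond=None)
        total += res[0] if res.size else 0.0
    return total

# golden section search over the split index
g = (np.sqrt(5.0) - 1.0) / 2.0
lo, hi = 5, n - 5
while hi - lo > 2:
    k1 = int(round(hi - g * (hi - lo)))
    k2 = int(round(lo + g * (hi - lo)))
    if sse(k1) < sse(k2):
        hi = k2
    else:
        lo = k1
k_hat = min(range(lo, hi + 1), key=sse)
print("estimated change point:", k_hat, " true:", k_true)
```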

14.
This paper develops a method of adaptive modeling that may be applied to forecast non-stationary time series. The starting point is the time-varying coefficient models introduced in statistics, econometrics, and engineering. The basic modeling step is the implementation of adaptive recursive estimators for tracking parameters. This is achieved by unifying basic algorithms, such as recursive least squares (RLS) and the extended Kalman filter (EKF), into a general scheme, and then selecting its coefficients by minimizing the sum of squared prediction errors. This defines a non-linear estimation problem that may be analyzed in the context of conditional least squares (CLS) theory. A numerical application to the IBM stock price series of Box and Jenkins illustrates the method and shows its good forecasting ability.
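A stripped-down version of the tracking step: recursive least squares with a forgetting factor on a toy AR(1) series whose coefficient drifts, with the forgetting factor chosen by minimizing the sum of squared one-step prediction errors over a grid (a simplification of the paper's CLS treatment):

```python
import numpy as np

rng = np.random.default_rng(7)
T = 600
# AR(1) data with a slowly drifting coefficient (toy non-stationarity)
phi_t = 0.3 + 0.5 * np.sin(np.linspace(0, 3 * np.pi, T))
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_t[t] * y[t - 1] + 0.5 * rng.standard_normal()

def rls_pred_errors(lam):
    """Sum of squared one-step prediction errors of RLS with forgetting lam."""
    theta, P, sse = 0.0, 1e4, 0.0
    for t in range(1, T):
        a = y[t - 1]
        e = y[t] - a * theta              # one-step prediction error
        sse += e * e
        k = P * a / (lam + a * P * a)
        theta += k * e
        P = (P - k * a * P) / lam         # forgetting inflates the covariance
    return sse

# choose the forgetting factor by minimizing the prediction error sum
grid = np.linspace(0.90, 1.00, 21)
best = min(grid, key=rls_pred_errors)
print("selected forgetting factor:", round(best, 3))
```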

15.
In modeling and forecasting fuzzy time series, how the universe of discourse is partitioned affects the forecasting performance of the constructed model. In this paper, a novel method of partitioning the universe of discourse based on interval information granules is proposed to improve forecasting accuracy. In this method, the universe of discourse is first pre-divided into intervals according to a predefined number of intervals, and information granules are then constructed in the amplitude-change space from the time series data belonging to each interval and their corresponding changes (trends). Optimal intervals are then formed by continually adjusting the interval widths so that the information granules associated with the intervals become most “informative.” Three benchmark time series are used in experiments to validate the feasibility and effectiveness of the proposed method. The experimental results clearly show that the method produces more reasonable intervals with sound semantics, and that when the proposed partitioning is used to determine intervals for fuzzy time series modeling, the forecasting accuracy of the constructed model is markedly improved.

16.
17.
We propose a primal-dual “layered-step” interior point (LIP) algorithm for linear programming with data given by real numbers. This algorithm follows the central path, either with short steps or with a new type of step called a “layered least squares” (LLS) step. The algorithm returns an exact optimum after a finite number of steps; in particular, after O(n^3.5 c(A)) iterations, where c(A) is a function of the coefficient matrix A. The LLS steps can be thought of as accelerating a classical path-following interior point method. One consequence of the new method is a new characterization of the central path: we show that it is composed of at most n^2 alternating straight and curved segments. If the LIP algorithm is applied to integer data, we obtain as another corollary a new proof of a well-known theorem of Tardos that linear programming can be solved in strongly polynomial time provided that A contains small-integer entries.

18.
We introduce a mixed regression model for mortality data which can be decomposed into a deterministic trend component explained by the covariates age and calendar year, a multivariate Gaussian time series part not explained by the covariates, and binomial risk. The data can be analyzed by means of a simple logistic regression model when the multivariate Gaussian time series component is absent and there is no overdispersion. In this paper we instead allow for overdispersion, and the mixed regression model is fitted to mortality data from the United States and Sweden with the aim of providing predictions and intervals for future mortality and annuity premiums, as well as smoothing historical data, using the best linear unbiased predictor. We find that the form of the Gaussian time series has a large impact on the width of the prediction intervals, which poses some new questions about proper model selection.

19.
In many statistical applications, data are collected over time and are likely to be correlated. In this paper, we investigate how to incorporate this correlation information into local linear regression. Under the assumption that the error process is autoregressive, a new estimation procedure for the nonparametric regression is proposed that combines the local linear regression method with profile least squares techniques. We further propose the SCAD-penalized profile least squares method to determine the order of the autoregressive process. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed procedure and to compare it with the existing one. In our empirical studies, the newly proposed procedures dramatically improve the accuracy of naive local linear regression with a working-independence error structure. We illustrate the proposed methodology with an analysis of a real data set.
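For reference, a bare local linear estimator looks as follows (our sketch; the paper's profile least squares procedure additionally exploits the AR error structure, which this naive working-independence fit ignores):

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of m(x0) with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)        # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]                                # intercept = fitted m(x0)

rng = np.random.default_rng(8)
n = 300
x = np.sort(rng.random(n))
eps = np.zeros(n)                                 # AR(1) errors, phi = 0.5
for i in range(1, n):
    eps[i] = 0.5 * eps[i - 1] + 0.1 * rng.standard_normal()
y = np.sin(2 * np.pi * x) + eps

grid = np.linspace(0.05, 0.95, 10)
fit = [local_linear(x0, x, y, h=0.05) for x0 in grid]
print(np.round(fit, 2))
print(np.round(np.sin(2 * np.pi * grid), 2))      # truth, for comparison
```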

20.
Considerable progress has been made in recent years in the analysis of time series arising from chaotic systems. In particular, a variety of schemes for the short-term prediction of such time series has been developed. Hitherto, however, all such algorithms have used batch processing and have not been able to continuously update their estimate of the dynamics as new observations are made. This severely limits their usefulness in real time signal processing applications. In this paper we present a continuous-update prediction scheme for chaotic time series that overcomes this difficulty. It is based on radial basis function approximation combined with a recursive least squares estimation algorithm. We test the scheme using simulated data and comment on its relationship to adaptive transversal filters, which are widely used in conventional signal processing.
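A compact sketch of the scheme's two ingredients, using the logistic map as the chaotic series (our choice of example): fixed radial basis function features combined with a recursive least squares update, so the dynamics estimate is refreshed at every new observation rather than by batch refitting:

```python
import numpy as np

T = 1000
y = np.empty(T); y[0] = 0.3
for t in range(1, T):                      # chaotic logistic map
    y[t] = 4.0 * y[t - 1] * (1.0 - y[t - 1])

centers = np.linspace(0.0, 1.0, 15)        # fixed RBF centers on [0, 1]
width = 0.1

def phi(x):
    """Radial basis function features (plus a bias term) for scalar state x."""
    return np.r_[1.0, np.exp(-((x - centers) / width) ** 2)]

d = centers.size + 1
w = np.zeros(d)
P = 1e4 * np.eye(d)                        # RLS covariance (no forgetting)
errs = []
for t in range(1, T):
    a = phi(y[t - 1])
    errs.append(y[t] - a @ w)              # one-step prediction error
    Pa = P @ a
    k = Pa / (1.0 + a @ Pa)
    w += k * errs[-1]                      # continuous update, no batch refit
    P -= np.outer(k, Pa)
print("mean squared prediction error, last 100 steps:",
      np.mean(np.square(errs[-100:])))
```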
