Similar Articles
20 similar articles found
1.

As an alternative to traditional parametric approaches, we suggest nonparametric methods for analyzing spatial and temporal data on earthquake occurrences. Nonparametric techniques adapt readily to anomalous behavior in the data and provide a new way of accessing many different types of information about how both the intensity and the magnitude of events evolve in time. They can be employed to estimate the spatial trajectory of event clusters as a function of time, and to define quiescent and active periods. The latter application suggests new approaches to forecasting high-magnitude events. Our methods are founded on multivariate techniques for curve and surface estimation, particularly in contexts where curves or surfaces are unbounded at points or along lines.
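A minimal sketch of one ingredient of such an approach: a Gaussian-kernel estimate of the temporal intensity of events, which can then be thresholded to flag active versus quiescent periods. This is a generic illustration, not the authors' estimator; the event times, bandwidth, and above-average threshold are all hypothetical.

```python
import numpy as np

def kernel_intensity(event_times, grid, bandwidth):
    """Gaussian-kernel estimate of the temporal intensity of a point
    process: each event contributes a bump, and the sum approximates
    the event rate per unit time at each grid point."""
    t = np.asarray(event_times, dtype=float)[:, None]   # shape (n, 1)
    g = np.asarray(grid, dtype=float)[None, :]          # shape (1, m)
    bumps = np.exp(-0.5 * ((g - t) / bandwidth) ** 2)
    return bumps.sum(axis=0) / (bandwidth * np.sqrt(2 * np.pi))

# Hypothetical event times: a cluster near t = 1 and a pair near t = 8.
times = [1.0, 1.2, 1.5, 8.0, 8.1]
grid = np.linspace(0.0, 10.0, 101)
lam = kernel_intensity(times, grid, bandwidth=0.5)
# "Active" periods: grid points where the estimated rate is above average.
active = grid[lam > lam.mean()]
```

The estimated intensity integrates to roughly the number of events, so the same machinery extends to per-magnitude-class intensities.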

2.
The widespread availability of digital spatial data and the capabilities of Geographic Information Systems (GIS) make it easy to synthesize spatial data from a variety of sources. More often than not, the data have been collected at different geographic scales, and each of these scales may differ from the scale of interest. GIS effortlessly handle such problems through raster and geoprocessing operations based on proportional allocation and centroid-smoothing techniques. However, these techniques provide no measure of uncertainty in the estimates and cannot incorporate important covariate information that might improve them; they also often ignore the different spatial supports (e.g., shape and orientation) of the data. On the other hand, statistical solutions to change-of-support problems are rather specific and difficult to implement. In this article, we present a general geostatistical framework for linking geographic data from different sources. The framework handles aggregation and disaggregation of spatial data, as well as prediction problems involving overlapping geographic units. It explicitly incorporates the supports of the data, can adjust for covariate values measured on different spatial units at different scales, provides a measure of uncertainty for the resulting predictions, and is computationally feasible within a GIS. It also includes a new approach to simultaneous estimation of mean and covariance functions from aggregated data using generalized estimating equations.
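The proportional-allocation baseline the abstract refers to can be sketched in a few lines. The zone values and overlap areas below are made up for illustration; a real GIS workflow would compute the intersection areas from polygon geometry.

```python
import numpy as np

def area_weighted_allocate(source_values, intersection_areas):
    """Proportional (area-weighted) reallocation of a spatially extensive
    variable (e.g., population counts) from source zones to target zones.
    intersection_areas[i][j] is the overlap area of source zone i and
    target zone j; each source value is split among the targets in
    proportion to overlap.  This is the simple GIS baseline the abstract
    contrasts with geostatistical methods: note it yields no uncertainty
    measure and ignores covariates and support shape."""
    A = np.asarray(intersection_areas, dtype=float)
    shares = A / A.sum(axis=1, keepdims=True)   # each row sums to 1
    return np.asarray(source_values, dtype=float) @ shares

# Hypothetical example: two source zones split across three target zones.
alloc = area_weighted_allocate(
    [100.0, 50.0],
    [[2.0, 2.0, 0.0],    # source 0: half to target 0, half to target 1
     [0.0, 1.0, 3.0]])   # source 1: a quarter to target 1, rest to target 2
# Mass is preserved: the allocations sum to the source total (150).
```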

3.
The primary aim of this paper is to demonstrate the use and value of spatial statistical analysis in business, and especially in designing economic policies for rural areas. Specifically, we present, under a unified framework, both point-based and area-based methods for analyzing economic data in depth and for drawing conclusions from the results. The motivating problem concerns the establishment of women-run enterprises in a rural area of Greece. The spatial scan statistic is applied to the economic data at hand to detect possible clusters of small women-run enterprises in a rural, mountainous, and disadvantaged region of Greece. It is then combined with a GIS-based Local Indicator of Spatial Autocorrelation analysis to further explore and interpret the spatial patterns. Rejecting the hypothesis that women-run enterprises are established at random, and interpreting the clustering patterns, are necessary steps toward assisting government in designing policies for rural development. Copyright © 2014 John Wiley & Sons, Ltd.

4.
5.
This paper considers an attribute acceptance sampling problem in which inspection errors can occur. Unlike many common situations, the source of the inspection errors is the uncertainty associated with statistical sampling. Consider a lot consisting of N containers, each holding a large number of units. It is desired to sample some of the containers and inspect a sample of units from each selected container to determine the proper disposition of the entire lot. Results presented in the paper demonstrate significant shortcomings of traditional sampling plans in this context. Alternative sampling plans designed to address the risk of statistical classification error are presented; these plans estimate the rate of classification errors and set plan parameters to reduce the potential impact of such errors. Results comparing traditional plans with the proposed alternatives are provided, and limitations of the new plans are discussed.
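A toy simulation of the two-stage container/unit setting can illustrate the lot-disposition decision. The plan parameters below (3 containers, 10 units each, acceptance number c = 2) and the lot composition are illustrative choices, not the paper's plans.

```python
import random

def accept_lot(lot, n_containers, n_units, c):
    """Two-stage attribute plan sketch: sample n_containers containers
    from the lot, inspect n_units random units from each (units are 0/1,
    1 = defective), and accept the lot if total defectives <= c."""
    chosen = random.sample(lot, n_containers)
    defectives = sum(sum(random.sample(container, n_units))
                     for container in chosen)
    return defectives <= c

random.seed(0)
# Hypothetical lot: 8 clean containers and 2 fully defective ones.
lot = [[0] * 200 for _ in range(8)] + [[1] * 200 for _ in range(2)]
# Monte Carlo estimate of the plan's acceptance probability for this lot.
p_accept = sum(accept_lot(lot, 3, 10, c=2) for _ in range(2000)) / 2000
```

For this extreme lot, acceptance happens exactly when all three sampled containers are clean, so the simulated probability should be near C(8,3)/C(10,3) = 7/15.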

6.
李素芳  张虎  吴芳 《运筹与管理》2019,28(10):89-99
To address the susceptibility of traditional panel cointegration tests to outliers and the subjectivity involved in choosing the null hypothesis, this paper uses dynamic common factors to characterize the latent cross-sectional dependence structure of panel data and proposes a Bayesian quantile panel cointegration test based on a dynamic-factor cross-sectional dependence structure. Combining the conditional posterior distributions of the parameters at the main quantile levels, a Gibbs sampling algorithm incorporating Kalman filtering is designed to carry out the Bayesian quantile panel cointegration test, and Monte Carlo simulations verify the test's feasibility and effectiveness. An empirical study using panel data on financial development and economic growth across Chinese provinces finds a cointegration relationship between the two at all main quantile levels. The results show that the Bayesian quantile panel cointegration test avoids the misjudgments that traditional panel cointegration methods can make under different null hypothesis settings, overcomes the influence of outliers, and provides comprehensive and accurate parameter estimates and cointegration test results.

7.
Several economic applications require considering different data sources and integrating the information coming from them. This paper focuses on statistical matching; in particular, we deal with incoherence. When logical constraints among the variables are present, incoherence in the probability evaluations can arise. The aim of this paper is to remove such incoherence using different methods based on distance minimization or least-commitment imprecise probability extensions. An illustrative example shows the peculiarities of the different correction methods. Finally, restricted to pseudo-distance minimization, we perform a systematic comparison through a simulation study.

8.
We report a new optimal solution for the statistical stratification problem under proportional sampling allocation among strata. Consider a finite population of N units, a random sample of n units selected from this population, and a number L of strata. We must define which units belong to each stratum so as to minimize the variance of a total estimator for one desired variable of interest in each stratum, and consequently reduce the overall variance of that quantity. To solve this problem, an optimal algorithm based on the concept of a minimal path in a graph is proposed and assessed. Computational results using real data from IBGE (the Brazilian Central Statistical Office) are provided.
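The minimal-path idea can be sketched as a small dynamic program: nodes are cut positions in the sorted population, an arc (i, j) carries the variance contribution of the stratum it spans, and the optimal strata correspond to a cheapest path from node 0 to node N with exactly L arcs. This is a small-scale illustration of the idea under proportional allocation, not the authors' algorithm or data.

```python
import numpy as np

def stratify(y, L):
    """Univariate stratification as a minimal-path problem: sort the
    population, treat cut positions 0..N as graph nodes, and give arc
    (i, j) the cost N_h * Var_h of the stratum y[i:j] -- the term that
    stratum contributes to the variance of the total estimator under
    proportional allocation.  A cheapest path from node 0 to node N
    using exactly L arcs yields optimal stratum boundaries."""
    y = np.sort(np.asarray(y, dtype=float))
    N = len(y)

    def cost(i, j):                        # arc cost for stratum y[i:j]
        seg = y[i:j]
        return len(seg) * seg.var()

    best = {(0, 0): (0.0, [0])}            # (node, arcs used) -> (cost, path)
    for h in range(1, L + 1):
        for j in range(h, N + 1):
            cands = [(best[(i, h - 1)][0] + cost(i, j),
                      best[(i, h - 1)][1] + [j])
                     for i in range(h - 1, j) if (i, h - 1) in best]
            if cands:
                best[(j, h)] = min(cands)
    return best[(N, L)]                    # (total cost, cut positions)

# Two well-separated groups: the optimal 2-stratum cut falls between them.
total, cuts = stratify([1, 2, 3, 101, 102, 103], L=2)
```

Here `cuts` delimits the strata: stratum k is `y[cuts[k]:cuts[k+1]]`.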

9.
Investigations of spatial statistics computed from lattice data in the plane can lead to a special lattice-point counting problem. The statistical goal is to expand the asymptotic expectation, or large-sample bias, of certain spatial covariance estimators, where this bias typically depends on the shape of the spatial sampling region. In particular, such bias expansions often require approximating the difference between two lattice-point counts, where the counts correspond to an increasing-domain set (the sampling region) and the intersection of this set with a vector translate of itself. Non-trivially, the approximation error needs to be of smaller order than the spatial region's perimeter length. For all convex regions in 2-dimensional Euclidean space, and certain unions of convex sets, we show that a difference in areas can approximate a difference in lattice-point counts to this required accuracy, even though area alone can poorly measure the lattice-point count of any single set involved in the difference. When investigating large-sample properties of spatial estimators, this approximation result facilitates direct calculation of limiting bias because, unlike counts, differences in areas are often tractable to compute even for non-rectangular regions. We illustrate the counting approximations with two statistical examples.

10.
We pioneer the use of a "fuzzy coordinate system", together with orthogonal experimental design, to survey sawfly larva damage in Schima superba firebreak belts, analyze the survey results with fuzzy methods, and determine the larvae's spatial distribution pattern by several methods. The study shows that, compared with traditional rectangular, curvilinear, or surface coordinates, the fuzzy coordinate system makes the survey results agree better with reality and is more flexible and convenient to use; that fuzzy analysis of the orthogonal survey results yields richer information than range analysis or analysis of variance; and that the spatial distribution of sawfly larvae in the firebreak belts follows a negative binomial distribution, its basic component being sparse individual colonies. These conclusions agree with comprehensive field observations of the surveyed belts.
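Aggregation of the kind reported (negative binomial rather than Poisson) is commonly diagnosed with the variance-to-mean ratio of quadrat counts. A minimal sketch with hypothetical counts, not the study's data:

```python
def dispersion_index(counts):
    """Variance-to-mean ratio of quadrat counts: about 1 for a random
    (Poisson) pattern, well above 1 for aggregated patterns such as the
    negative binomial reported for the sawfly larvae."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

# Hypothetical clumped quadrat counts: most quadrats empty, a few crowded.
clumped = [0, 0, 1, 0, 7, 8, 0, 1, 0, 9]
di = dispersion_index(clumped)     # well above 1, indicating aggregation
```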

11.
Ranked set sampling (RSS) is a statistical technique that uses auxiliary ranking information on unmeasured sample units in an attempt to select a more representative sample, providing better estimation of population parameters than simple random sampling. However, the use of RSS can be hampered by the fact that a complete ranking of the units in each set must be specified when implementing it. Recently, to allow ties to be declared as needed, Frey (Environ Ecol Stat 19(3):309–326, 2012) proposed a modification of RSS: break ties at random so that a standard ranked set sample is obtained, while recording the tie structure for use in estimation. Under this RSS variation, several mean estimators were developed and their performance compared via simulation, with a focus on continuous outcome variables. We extend the work of Frey (2012) to binary outcomes and investigate three nonparametric and three likelihood-based proportion estimators (with and without tie information), four of which are direct extensions of existing estimators while the other two are novel. Under different tie-generating mechanisms, we compare the performance of these estimators and draw conclusions based on both simulation and a data example on breast cancer prevalence. Suggestions are made about the choice of proportion estimator in general.
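A minimal sketch of balanced RSS with random tie-breaking and a naive proportion estimator, for intuition only: the population, set size k, cycle count m, and estimator are all illustrative, and none of the six estimators studied in the paper is reproduced here.

```python
import random

def rss_sample(population, k, m, rank_key):
    """Balanced ranked set sample: for each of m cycles and each rank
    r = 1..k, draw a set of k units, rank them by the cheap auxiliary
    rank_key, and measure only the unit holding rank r.  Ties in
    rank_key are broken at random via a random secondary sort key."""
    measured = []
    for _ in range(m):
        for r in range(k):
            s = random.sample(population, k)
            s.sort(key=lambda u: (rank_key(u), random.random()))
            measured.append(s[r])
    return measured

random.seed(1)
# Hypothetical units (auxiliary score, binary outcome); the outcome is
# correlated with the score, so ranking on the score is informative.
pop = [(x, int(x > 60)) for x in range(100)]     # true proportion 0.39
sample = rss_sample(pop, k=3, m=50, rank_key=lambda u: u[0])
p_hat = sum(y for _, y in sample) / len(sample)  # naive proportion estimator
```

Balanced RSS keeps the sample mean unbiased for the population proportion while the informative ranking reduces its variance relative to simple random sampling.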

12.
This paper provides a new structure in data envelopment analysis (DEA) for assessing the performance of decision making units (DMUs). It proposes a technique for estimating the DEA efficient frontier based on the Arash Method, in a way different from statistical inference. The technique allows decisions over target regions, rather than points, when benchmarking DMUs, without requiring the additional information demanded by interval/fuzzy DEA methods. It suggests three efficiency indexes for each DMU, called the lowest, technical, and highest efficiency scores, for the case where small errors occur in both the input and output components of the Farrell frontier, even if the data are accurate. These indexes provide a sensitivity index for each DMU and rank inefficient and technically efficient DMUs together, while simultaneously detecting and benchmarking outliers. Two numerical examples demonstrate the validity of the proposed method.

13.
In geotechnical engineering, the values of soil-layer parameters are determined from field and laboratory test data by classical statistical methods, which neglect the role of prior information. Unlike classical statistics, the Bayesian method derives a posterior distribution by combining a prior distribution with the sample distribution, providing a new way to analyze the values of geotechnical parameters. A geotechnical site investigation can be viewed as random sampling of the overall stratum; once sampling is complete, the sample density function is fixed, so the posterior distribution in the Bayesian method depends on the prior. Two different priors are therefore derived: a prior determined from prior information, and a conjugate prior. By computing the hyperparameters of the prior and posterior distributions, and assuming the population follows a normal distribution N(μ, σ²), the unknown parameters μ and σ are analyzed; comparing the lengths of the posterior intervals under the different priors indicates which prior yields the best posterior for Bayesian inference of geotechnical parameters. The results show that the posterior interval under the conjugate prior is always shorter than under the non-informative prior, with a more concentrated probability density, making parameter values easier to select. In the normal-population case, maximizing the joint posterior distribution of μ and σ yields the maximum-probability mean μmax and variance σmax to be adopted in engineering design, providing a new path for determining geotechnical parameter values, with good engineering significance.
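The conjugate route can be illustrated with the standard Normal-Inverse-Gamma update for a normal population with both μ and σ² unknown. This is the generic textbook update, not the paper's specific derivation, and the prior hyperparameters and readings below are hypothetical.

```python
def nig_update(mu0, kappa0, alpha0, beta0, data):
    """Conjugate Normal-Inverse-Gamma posterior update for a normal
    population N(mu, sigma^2) with both parameters unknown -- the
    conjugate-prior route mentioned in the abstract.  Returns the
    posterior hyperparameters; the posterior mean of mu is mu_n."""
    n = len(data)
    xbar = sum(data) / n
    ss = sum((x - xbar) ** 2 for x in data)
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2
    beta_n = (beta0 + 0.5 * ss
              + 0.5 * kappa0 * n * (xbar - mu0) ** 2 / kappa_n)
    return mu_n, kappa_n, alpha_n, beta_n

# Hypothetical soil-parameter readings with a prior belief centered at 30.
mu_n, kappa_n, alpha_n, beta_n = nig_update(30.0, 1.0, 2.0, 2.0,
                                            [31.0, 32.0, 33.0])
```

As more data accumulate, kappa_n and alpha_n grow and the posterior concentrates, which is the interval-shortening effect the abstract compares across priors.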

14.
An Intelligent Forecasting Method for Regional Economic Development
肖健华 《经济数学》2005,22(1):57-63
This paper analyzes the various factors influencing regional economic development and points out that, because these factors constrain and influence one another, traditional economic forecasting methods are increasingly unable to meet the needs of regional economic forecasting. It discusses the advantages of kernel methods in handling nonlinear, uncertain, and imprecise data, builds three kernel-based forecasting models, and combines them with two other forecasting methods to produce a combination forecast of regional economic development. Finally, data fusion is used to integrate the predictions of the individual models into the final output. Empirical results show that the kernel-based combination forecasting technique achieves satisfactory forecasting performance.

15.
Bayesian approaches to prediction and the assessment of predictive uncertainty in generalized linear models are often based on averaging predictions over different models, and this requires methods for accounting for model uncertainty. When there are linear dependencies among potential predictor variables, existing Markov chain Monte Carlo algorithms for sampling from the posterior distribution on the model and parameter space in Bayesian variable selection problems may not work well. This article describes a sampling algorithm, based on the Swendsen-Wang algorithm for the Ising model, that works well when the predictors are far from orthogonality. In variable selection for generalized linear models we can index models by a binary parameter vector, where each binary variable indicates whether a given predictor is included in the model. The posterior distribution over models is then a distribution on this collection of binary strings; by viewing this posterior as a binary spatial field, we can sample from it with a scheme inspired by the Swendsen-Wang algorithm. The algorithm extends a similar algorithm for variable selection problems in linear models. Its benefits are demonstrated for both real and simulated data.

16.
Since many decision making units (DMUs) are classified as efficient by traditional data envelopment analysis (DEA) models, a large number of methods for fully ranking both efficient and inefficient DMUs have been proposed. This paper suggests a ranking method that differs fundamentally from previous methods but whose models resemble traditional DEA models such as BCC and the additive model. In this method, DMUs are compared against a full-inefficient frontier, which is defined in the paper. From this point of view many models can be designed; we present a radial one and a slacks-based one. The method can be used to rank all DMUs, yielding analytic information about the system, or to rank only the efficient DMUs in order to discriminate among them.

17.
The elaboration of optimal monetary policy strategies, and the statistical estimation of the monetary policy rules followed by the European Central Bank (ECB) in the new euro currency area, are difficult to capture with standard statistical models. For this reason we have developed an adaptive fuzzy expert system to mimic the framework on which the ECB's monetary policy strategy is based. The expert system's knowledge base consists of fuzzy and crisp rules located at two hierarchical levels: the high level receives intermediate output values from the low level and processes this information with a set of crisp rules, while the low level produces those intermediate values by applying a fuzzy inference engine to economic input variables. The use of an expert system allows the ECB's behaviour to be modelled with a wider scope of knowledge than more traditional computational techniques, and rules at different hierarchical levels and in different intra-level groups accommodate the potentially contradictory structure of the ECB strategy. The system was tested on economic and financial time series from January 1999 to September 2000. Its predictions were correct roughly 70% of the time overall and, considering the complexity of the task, the results obtained are promising.

18.
Recent scientific applications produce data that are too large to store or render for further statistical analysis. This motivates constructing an optimal mechanism for retaining only a subset of the available information and drawing inferences about the parent population using only the stored subset. This paper addresses parameter estimation from such filtered data. Instead of all the observations, we observe only a few chosen linear combinations of them and treat the remaining information as missing. From the observed linear combinations we estimate the parameter using an EM-based technique, under the assumption that the parameter is sparse. We propose two related methods, called ASREM and ESREM, which are also used for hypothesis testing and the construction of confidence intervals. A similar data-filtering approach already exists in the signal sampling paradigm, for example compressive sampling, introduced by Candes et al. (Commun Pure Appl Math 59(8):1207–1223, 2006) and Donoho (IEEE Trans Inf Theory 52:1289–1306, 2006). The methods proposed here are not claimed to outperform all available signal-recovery techniques; rather, they are suggested as an alternative way of compressing data with the EM algorithm. We do, however, compare our methods to one standard algorithm, namely robust signal recovery from noisy data using min-\(\ell _{1}\) with quadratic constraints. Finally, we apply one of our methods to a real-life dataset.

19.
To use a control chart, the quality engineer must specify three decision variables: the sample size, the sampling interval, and the critical region of the chart. A significant part of recent research has relaxed the constraint of fixed design parameters, opening the way to adaptive control charts in which at least one of the decision variables may change in real time based on the latest data. These adaptive schemes have proven effective from both economic and statistical points of view. In this paper, the economic design of an attribute np control chart with a variable sampling interval (VSI) is treated. A sensitivity analysis is conducted to search for the optimal design parameters minimizing the expected total cost per hour and to reveal the impact of the process and cost parameters on the behavior of the optimal solutions. An economic comparison between the classical np chart, the variable sample size (VSS) np control chart, and the VSI chart is conducted. Switching from the classical attribute chart to the VSI sampling strategy is found to yield notable cost savings and to reduce both the average time to signal and the average number of false alarms. In most cases of the sensitivity analysis, the VSI np chart outperforms the VSS np chart on both economic and statistical grounds. Copyright © 2014 John Wiley & Sons, Ltd.
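The np chart and the VSI rule can be sketched as follows. The warning-band width and the two interval lengths are illustrative values, not the economically optimal design the paper searches for.

```python
import math

def np_chart_limits(n, p0):
    """Three-sigma control limits for an np chart monitoring the count
    of nonconforming units in samples of size n, in-control rate p0."""
    center = n * p0
    sigma = math.sqrt(n * p0 * (1 - p0))
    return max(0.0, center - 3 * sigma), center + 3 * sigma

def next_interval(count, n, p0, w=1.0, h_short=0.25, h_long=2.0):
    """VSI rule sketch: sample again soon (h_short hours) when the count
    lands in the warning band beyond w sigma from the center line, and
    wait h_long hours when it is close to the center line."""
    center = n * p0
    sigma = math.sqrt(n * p0 * (1 - p0))
    return h_short if abs(count - center) > w * sigma else h_long

lcl, ucl = np_chart_limits(n=100, p0=0.05)   # center line at n*p0 = 5
```

Tightening the interval only when the chart statistic drifts toward a limit is what shortens the average time to signal without inflating the sampling cost.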

20.
Compact regular frames are always spatial. In this note we present a method for constructing non-spatial frames. As an application we show that there is a countably compact (and hence pseudocompact) completely regular frame which is not spatial.
