Similar Documents
20 similar documents found.
1.
An efficient and flexible algorithm for the spherical interpolation of large scattered data sets is proposed. It is based on a partition of unity method on the sphere and uses spherical radial basis functions as local approximants. The technique relies on a suitable partition of the sphere into spherical zones, the construction of cells that cover the sphere with some mild overlap among them, and an optimized spherical-zone searching procedure. Numerical experiments show the good accuracy of the spherical partition of unity method and the high efficiency of the algorithm.
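For orientation only, the following is a minimal sketch of the partition-of-unity idea, not the authors' optimized algorithm. It assumes Gaussian kernels of chordal distance as the local spherical RBFs, Fibonacci-lattice cap centres, and Wendland blending weights, and it uses a naive linear scan in place of the optimized zone-searching procedure the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

def fibonacci_sphere(n):
    """Roughly uniform cap centres on the unit sphere via the Fibonacci lattice."""
    i = np.arange(n) + 0.5
    phi = np.arccos(1 - 2 * i / n)
    theta = np.pi * (1 + 5 ** 0.5) * i
    return np.column_stack([np.sin(phi) * np.cos(theta),
                            np.sin(phi) * np.sin(theta),
                            np.cos(phi)])

def random_sphere_points(n):
    """Scattered nodes: uniformly distributed points on the unit sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def geodesic(a, b):
    """Great-circle distances between the rows of a and the rows of b."""
    return np.arccos(np.clip(a @ b.T, -1.0, 1.0))

def chordal(a, b):
    """Euclidean (chordal) distances, used for the positive definite local kernel."""
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)

def wendland(r, rho):
    """Compactly supported Wendland C^2 weight with support radius rho."""
    t = np.clip(r / rho, 0.0, 1.0)
    return (1 - t) ** 4 * (4 * t + 1)

f = lambda p: np.sin(3 * p[:, 0]) * np.cos(2 * p[:, 1]) + p[:, 2]   # test function
nodes = random_sphere_points(400)
values = f(nodes)

centers = fibonacci_sphere(25)    # cap centres; the caps must cover the sphere
rho, eps = 0.7, 3.0               # cap radius (radians) and kernel shape parameter

def pu_interpolate(x):
    """Blend local RBF interpolants with Wendland partition-of-unity weights."""
    num, den = np.zeros(len(x)), np.zeros(len(x))
    for c in centers:
        in_cap = geodesic(nodes, c[None, :]).ravel() < rho   # naive zone search
        if in_cap.sum() < 5:
            continue
        xk, fk = nodes[in_cap], values[in_cap]
        A = np.exp(-(eps * chordal(xk, xk)) ** 2)            # local Gaussian kernel
        coef = np.linalg.solve(A + 1e-10 * np.eye(len(xk)), fk)
        local = np.exp(-(eps * chordal(x, xk)) ** 2) @ coef
        w = wendland(geodesic(x, c[None, :]).ravel(), rho)
        num += w * local
        den += w
    return num / np.maximum(den, 1e-15)

test = random_sphere_points(1000)
print("max abs error:", np.max(np.abs(pu_interpolate(test) - f(test))))
```

The efficiency claims in the abstract hinge on replacing the linear scan over the nodes with an optimized spherical-zone search; the blending formula itself is unchanged.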

2.
We study phase coexistence (separation) phenomena in Ising, Potts and random cluster models in dimensions d ≥ 3 below the critical temperature. The simultaneous occurrence of several phases is typical for systems with appropriately arranged (mixed) boundary conditions or for systems satisfying certain physically natural constraints (canonical ensembles). The various phases emerging in these models define a partition of space, called the empirical phase partition. Our main results are large deviation principles for (the shape of) the empirical phase partition. More specifically, we establish a general large deviation principle for the partition induced by large (macroscopic) clusters in the Fortuin–Kasteleyn model and transfer it to the Ising–Potts model, where we obtain a large deviation principle for the empirical phase partition induced by the various phases. The rate function turns out to be the total surface free energy (associated with the surface tension of the model and with the boundary conditions) which can be naturally assigned to each reasonable partition. These LDPs imply a weak law of large numbers: asymptotically, the law of the phase partition is determined by an appropriate variational problem. More precisely, the empirical phase partition will be close to some partition which is compatible with the constraints imposed on the system and which minimizes the total surface free energy. A general compactness argument guarantees the existence of at least one such minimizing partition. Our results are valid for temperatures T below a limit of slab thresholds conjectured to agree with the critical point T_c. Moreover, T should be such that there exists only one translation-invariant infinite-volume state in the corresponding Fortuin–Kasteleyn model; this property can fail for at most countably many values of T and is conjectured to hold for every T ≠ T_c.

3.
In the traditional DEA framework, the best relative efficiency model evaluates the efficiency of decision-making units over the range not exceeding 1, while the worst relative efficiency model evaluates it over the range not below 1; when projection problems are studied with these two models, the analysis is carried out over different ranges and is therefore somewhat one-sided. This paper studies the projection of decision-making units in the interval DEA model, in which the best and worst relative efficiency models are evaluated over the same constraint region, so the resulting conclusions are more comprehensive. Two theorems give the projection expression of a non-DEA-efficient decision-making unit onto the DEA-efficient surface and the projection expression of a non-DEA-inefficient decision-making unit onto the DEA-inefficient surface. Finally, an example compares the projections of decision-making units under the interval DEA model with those under the traditional DEA model, and the projections obtained from the interval model are found to provide stronger guidance for actual production.
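As background, here is a minimal sketch (hypothetical data) of the standard input-oriented CCR envelopment model and the usual radial projection of an inefficient unit onto the efficient frontier; it is not the interval DEA model or the projection theorems of the paper, only the classical construction they extend.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical inputs X (m x n) and outputs Y (s x n) for n decision-making units.
X = np.array([[2.0, 3.0, 4.0, 5.0],
              [3.0, 2.0, 5.0, 6.0]])
Y = np.array([[1.0, 1.5, 1.8, 2.0]])
m, n = X.shape
s = Y.shape[0]

def ccr_efficiency(j):
    """Envelopment form: min theta s.t. X lam <= theta * x_j, Y lam >= y_j, lam >= 0."""
    c = np.r_[1.0, np.zeros(n)]                      # variables: [theta, lam_1..lam_n]
    A_ub = np.block([[-X[:, [j]], X],                # X lam - theta x_j <= 0
                     [np.zeros((s, 1)), -Y]])        # -Y lam <= -y_j
    b_ub = np.r_[np.zeros(m), -Y[:, j]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0], res.x[1:]

for j in range(n):
    theta, lam = ccr_efficiency(j)
    x_proj = theta * X[:, j]                         # radial projection of the inputs
    print(f"DMU {j}: efficiency = {theta:.3f}, projected inputs = {np.round(x_proj, 3)}")
```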

4.
佟毅 《运筹与管理》1998,7(4):30-33
The paper studies partial orderings of the efficiency of several least squares estimators (LSE) and proves that, among the efficiency measures considered, the mean squared error ratio (MSER) efficiency is optimal.

5.
Summary. Local powers of two- and k-sample rank tests under alternatives of contaminated distributions are investigated. It is shown that the rank tests based on normal scores and Wilcoxon scores are superior to the t-test or the F-test for many choices of contaminated-distribution alternatives, and that the asymptotic relative efficiency of the rank test based on Wilcoxon scores with respect to normal scores is about one in all the investigated cases.
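To illustrate the kind of comparison involved, here is a small simulation sketch under assumed settings (10% contamination with a five-fold standard deviation and a location shift of 0.5); it is not the paper's local-power analysis, only an empirical power comparison of the Wilcoxon rank-sum test and the two-sample t-test.

```python
import numpy as np
from scipy.stats import mannwhitneyu, ttest_ind

rng = np.random.default_rng(2)

def contaminated_normal(size, shift=0.0, eps=0.1, scale=5.0):
    """N(shift, 1) with probability 1 - eps, N(shift, scale^2) with probability eps."""
    sd = np.where(rng.uniform(size=size) < eps, scale, 1.0)
    return shift + sd * rng.normal(size=size)

n, reps, alpha = 50, 2000, 0.05
rejections = {"wilcoxon": 0, "t-test": 0}
for _ in range(reps):
    x = contaminated_normal(n)              # control sample
    y = contaminated_normal(n, shift=0.5)   # shifted sample
    if mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha:
        rejections["wilcoxon"] += 1
    if ttest_ind(x, y).pvalue < alpha:
        rejections["t-test"] += 1

for name, count in rejections.items():
    print(f"{name}: empirical power ~ {count / reps:.3f}")
```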

6.
Increasingly, fuzzy partitions are being used in multivariate classification problems as an alternative to the crisp classification procedures commonly used. One such fuzzy partition, the grade of membership model, partitions individuals into fuzzy sets using multivariate categorical data. Although the statistical methods used to estimate fuzzy membership for this model are based on maximum likelihood methods, large sample properties of the estimation procedure are problematic for two reasons. First, the number of incidental parameters increases with the size of the sample. Second, estimated parameters fall on the boundary of the parameter space with non-zero probability. This paper examines the consistency of the likelihood approach when estimating the components of a particular probability model that gives rise to a fuzzy partition. The results of the consistency proof are used to determine the large sample distribution of the estimates. Common methods of classifying individuals based on multivariate observations attempt to place each individual into crisply defined sets. The fuzzy partition allows for individual-to-individual heterogeneity, beyond simple measurement error, by defining a set of pure-type characteristics and determining each individual's distance from these pure types. Both the profiles of the pure types and the heterogeneity of the individuals must be estimated from data; these estimates empirically define the fuzzy partition. In the current paper, these data are assumed to be categorical. Because of the large number of parameters to be estimated and the limitations of categorical data, one may be concerned about whether the fuzzy partition can be estimated consistently. This paper shows that if heterogeneity is measured with respect to a fixed number of moments of the grade of membership scores of each individual, the estimated fuzzy partition is consistent.

7.
External auditors have the responsibility of estimating the level of error in accounts presented to them by their clients, and establishing whether or not this exceeds materiality. Usually the large volume of accounts necessitates the use of sample information to estimate the error amount, which is obtained by randomly choosing a subset of the line items for auditing. Since the account amounts may vary considerably, it is desirable to select them with probability proportional to the book value size, and while numerous such procedures exist, most have implementation problems which prevent their widespread application to practical situations: some select items with replacement, others become too complex when the sample size is large, and yet others return a variable sample size. This paper presents a selection method that mitigates these problems. It returns a fixed-size sample of distinct line items, and is easy to implement no matter how large the sample. The results of a series of simulation experiments with a variety of audit conditions indicate that the new method provides reliable bound estimates of the total error amount, which are more precise than methods currently in use.
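For context, the following is a minimal sketch of classical fixed-interval monetary-unit sampling, with hypothetical book values; it is not the paper's new selection procedure, only the familiar baseline in which line items are selected with probability proportional to book value by a systematic pass with a random start.

```python
import numpy as np

rng = np.random.default_rng(1)

def systematic_pps(book_values, n):
    """Return indices of n line items selected proportional to book value.

    Items whose book value exceeds the sampling interval would have to be
    removed as certainty items first; with the values below, none do.
    """
    book_values = np.asarray(book_values, dtype=float)
    interval = book_values.sum() / n
    start = rng.uniform(0.0, interval)
    hits = start + interval * np.arange(n)     # sampling points on the monetary scale
    cum = np.cumsum(book_values)
    return np.searchsorted(cum, hits)          # item containing each sampling point

book = rng.uniform(100.0, 5000.0, size=500)    # hypothetical ledger of 500 line items
sample = systematic_pps(book, 60)
print("first selected items:", sample[:10], "| distinct items:", len(np.unique(sample)))
```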

8.
The pseudoachromatic number of a graph G is the maximum size of a vertex partition of G (where the sets of the partition may or may not be independent) such that, between any two distinct parts, there is at least one edge of G. This parameter is determined for graphs such as cycles, paths, wheels, certain complete multipartite graphs, and for other classes of graphs. Some open problems are raised. AMS Subject Classification (1991): primary 05C75; secondary 05C85.
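As a concrete reading of the definition, here is a brute-force sketch (exponential time, suitable only for very small graphs and not a method from the paper) that computes the pseudoachromatic number by testing vertex-to-part assignments directly.

```python
from itertools import product

def pseudoachromatic(n_vertices, edges):
    """Largest k admitting a partition into k non-empty parts with an edge
    between every pair of distinct parts; parts need not be independent sets."""
    for k in range(n_vertices, 0, -1):                    # try the largest k first
        for assign in product(range(k), repeat=n_vertices):
            if len(set(assign)) < k:                      # every part must be used
                continue
            seen = {(assign[u], assign[v]) for u, v in edges}
            seen |= {(b, a) for a, b in seen}
            if all((i, j) in seen for i in range(k) for j in range(k) if i != j):
                return k
    return 0

# Example: the 5-cycle has pseudoachromatic number 3, e.g. parts {0,2}, {1,3}, {4}.
c5_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(pseudoachromatic(5, c5_edges))
```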

9.
For a diffeomorphism of a smooth compact Riemannian manifold preserving a measure equivalent to the Riemannian volume, a special invariant partition is constructed on the set where at least one of the characteristic Lyapunov exponents is nonzero. This partition possesses properties analogous to those of the partition into global condensing sheets for Y-diffeomorphisms, while on the complement of this set the partition is into single points. It is proven that the measurable hull of this partition coincides with the π-partition of the diffeomorphism.

10.
The rates of convergence of two Schwarz alternating methods are analyzed for the iterative solution of a discrete problem which arises when orthogonal spline collocation with piecewise Hermite bicubics is applied to the Dirichlet problem for Poisson's equation on a rectangle. In the first method, the rectangle is divided into two overlapping subrectangles, while three overlapping subrectangles are used in the second method. Fourier analysis is used to obtain explicit formulas for the convergence factors by which the H^1-norm of the errors is reduced in one iteration of the Schwarz methods. It is shown numerically that while these factors depend on the size of overlap, they are independent of the partition stepsize. Results of numerical experiments are presented which confirm the established rates of convergence of the Schwarz methods. This research was supported in part by funds from the National Science Foundation grant CCR-9103451.
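To make the alternating procedure concrete, here is a minimal one-dimensional finite-difference sketch, not the Hermite bicubic collocation discretization of the paper: the two-subdomain Schwarz method for -u'' = f on (0,1) with zero Dirichlet data, where each sweep solves on one subdomain using boundary values taken from the current iterate on the other, and the error reduction per sweep grows with the amount of overlap.

```python
import numpy as np

N = 199                                    # interior grid points
h = 1.0 / (N + 1)
x = (np.arange(N) + 1) * h
f = np.pi ** 2 * np.sin(np.pi * x)         # manufactured right-hand side
u_exact = np.sin(np.pi * x)

def subdomain_solve(rhs, left_bc, right_bc):
    """Solve -u'' = rhs on a subgrid with given Dirichlet values at its two ends."""
    n = len(rhs)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2
    b = rhs.copy()
    b[0] += left_bc / h ** 2
    b[-1] += right_bc / h ** 2
    return np.linalg.solve(A, b)

# Overlapping subdomains: indices [0, r1] and [l2, N-1], overlapping on [l2, r1].
r1, l2 = 120, 80
u = np.zeros(N)
for sweep in range(10):
    # Left subdomain: right boundary value taken from the current iterate.
    u[:r1 + 1] = subdomain_solve(f[:r1 + 1], 0.0, u[r1 + 1])
    # Right subdomain: left boundary value taken from the freshly updated iterate.
    u[l2:] = subdomain_solve(f[l2:], u[l2 - 1], 0.0)
    print(f"sweep {sweep + 1}: max error = {np.max(np.abs(u - u_exact)):.2e}")
```

Shrinking the overlap (e.g. r1, l2 = 102, 98) slows the observed reduction per sweep, which is the dependence on overlap size that the paper quantifies via Fourier analysis for the collocation setting.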

11.
A Data Envelopment Analysis Model Incorporating a Time Variable
Most real production processes are multi-stage, yet the traditional data envelopment analysis model can only evaluate single-stage production processes, which severely limits its applicability. Building on the traditional data envelopment analysis model, this paper introduces a discrete time variable to construct a data envelopment analysis model that evaluates the whole multi-stage production process.

12.
The output distance function is a key concept in economics. However, its empirical estimation often violates properties dictated by neoclassical production theory. In this paper, we introduce the neural distance function (NDF), which constitutes a global approximation to any arbitrary production technology with multiple outputs, given by a neural network (NN) specification. The NDF imposes all theoretical properties, such as monotonicity, curvature and homogeneity, for all economically admissible values of outputs and inputs. Fitted to a large data set for all US commercial banks (1989–2000), the NDF explains a very high proportion of the variance of output while keeping the number of parameters to a minimum and satisfying the relevant theoretical properties. All measures, such as total factor productivity (TFP) and technical efficiency (TE), are computed routinely. Next, the NDF is compared with the popular Translog specification and is found to provide very satisfactory results, as it possesses the properties considered desirable in neoclassical production theory in a way not matched by the competing specification.

13.
Relative Efficiency of Parameter Estimation in Linear Models for Aggregated Data and the Generalized Correlation Coefficient
For linear models with aggregated data, this paper gives a relative efficiency of the Peter–Karsten estimator with respect to the best linear unbiased estimator, obtains a lower bound for this relative efficiency, and discusses its relationship with the generalized correlation coefficient.

14.
This paper studies the efficiency of a class of single-correlation regression models and its applications. It is shown that for any estimable function c′β of such a model, the least squares (LS) estimator c′(X′X)⁻X′Y is the best linear uniformly unbiased (BLU) estimator, and the infimum of the mean squared error ratio efficiency (inf ρ_MSER) for this class of models is given. Issues to be noted when the least squares estimator is used in place of the best linear uniformly unbiased estimator are also studied.

15.
Robust estimation often relies on a dispersion function that is more slowly varying at large values than the square function. However, the choice of tuning constant in dispersion functions may greatly affect the estimation efficiency. For a given family of dispersion functions, such as the Huber family, we suggest obtaining the "best" tuning constant from the data so that the asymptotic efficiency is maximized. This data-driven approach can automatically adjust the value of the tuning constant to provide the necessary resistance against outliers. Simulation studies show that substantial efficiency can be gained by this data-dependent approach compared with the traditional approach in which the tuning constant is fixed. We briefly illustrate the proposed method using two datasets.
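The following is a minimal sketch of the data-driven idea in the simplest setting, a location model with MAD-standardized residuals; the estimator, the grid of candidate constants and the contaminated sample are assumptions for illustration, not the authors' exact procedure. For each candidate tuning constant c it estimates the asymptotic (sandwich) variance E[ψ_c²] / (E[ψ_c'])² of the Huber M-estimator from the data and keeps the minimizing c, instead of fixing c = 1.345 in advance.

```python
import numpy as np

rng = np.random.default_rng(3)

def huber_psi(r, c):
    """Huber psi-function: identity in [-c, c], clipped outside."""
    return np.clip(r, -c, c)

def asymptotic_variance(resid, c):
    """Empirical sandwich variance of the Huber location estimator at constant c."""
    num = np.mean(huber_psi(resid, c) ** 2)
    den = np.mean(np.abs(resid) <= c) ** 2   # E[psi'] = P(|r| <= c) for the Huber psi
    return num / den

# Contaminated sample: mostly N(0,1) with 10% gross outliers (hypothetical data).
x = np.where(rng.uniform(size=500) < 0.9,
             rng.normal(size=500), rng.normal(0.0, 10.0, 500))
scale = 1.4826 * np.median(np.abs(x - np.median(x)))   # MAD scale estimate
resid = (x - np.median(x)) / scale

grid = np.linspace(0.5, 3.0, 26)
variances = [asymptotic_variance(resid, c) for c in grid]
best_c = grid[int(np.argmin(variances))]
print(f"data-driven tuning constant: {best_c:.2f} "
      f"(variance {min(variances):.3f} vs {asymptotic_variance(resid, 1.345):.3f} at c = 1.345)")
```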

16.
The total chromatic number $\chi_{t}(G)$ of a graph $G(V,E)$ is the minimum number of pairwise disjoint totally independent sets into which $V \cup E$ can be partitioned. If, in addition, the sizes of any two of these totally independent sets differ by at most 1, the minimum number of such sets is called the equitable total chromatic number of $G$, denoted $\chi_{et}(G)$. In this paper we determine the equitable total chromatic numbers of $W_m \vee K_n$, $F_m \vee K_n$ and $S_m \vee K_n$ for $m \geq n \geq 3$.

17.
Effectiveness Analysis of Science and Technology Input–Output Based on an Improved DEA Model
Reasonably evaluating the science and technology input and output of each region is of great practical significance for the rational use of resources and for improving the efficiency with which funds are used. An improved DEA model (M-DEA for short) is used to evaluate and rank the relative efficiency of science and technology input–output across China's regions during 2000–2002. The data for the three years are then pooled into a new reference set and evaluated with the same M-DEA model, yielding the change in each region's relative efficiency of science and technology input–output over the three years.

18.
In this paper, a hybrid approximation method on the sphere is analysed. As the interpolation scheme, we consider a partition of unity method, such as the modified spherical Shepard method, which uses zonal basis functions plus spherical harmonics as local approximants. The associated algorithm is efficiently implemented and works well even when the amount of data is very large, as it is based on an optimized searching procedure. Locality of the method guarantees stability in numerical computations, and numerical results show good accuracy. Moreover, we discuss whether these features are preserved when the method and the related algorithm are applied to experimental data; to this end, we consider the Magnetic Field Satellite data. The goal is achieved, as efficiency and accuracy are maintained on several sets of real data. Copyright © 2013 John Wiley & Sons, Ltd.

19.
Recurrent event time data are common in biomedical follow-up studies, in which a study subject may experience repeated occurrences of an event of interest. In this paper, we evaluate two popular nonparametric tests for recurrent event time data in terms of their relative efficiency. One is the log-rank test for classical survival data and the other is a more recently developed nonparametric test based on comparing mean recurrence rates. We show analytically that, somewhat surprisingly, the log-rank test, which only makes use of the time to the first occurrence, can be more efficient than the test of mean occurrence rates, which makes use of all available recurrence times, provided that the subject-to-subject variation of recurrence times is large. Explicit formulae are derived for the asymptotic relative efficiencies under the frailty model. The findings are demonstrated via extensive simulations. This work was supported by the US National Science Foundation (Grant No. DMS-0504269).

20.