Similar Articles
20 similar articles found
1.
In this paper we introduce a new method for the cluster analysis of longitudinal data, focusing on the determination of uncertainty levels for cluster memberships. The method uses the Dirichlet-t distribution, which exploits the robustness of the Student-t distribution within a Bayesian semi-parametric framework: subjects are clustered robustly while the uncertainty of each subject's cluster membership is evaluated. Both the number of clusters and the uncertainty levels are left unknown while fitting Dirichlet process mixture models. Two simulation studies demonstrate the proposed methodology, and the method is applied to a real data set taken from gene expression studies.

2.
Many computerized production scheduling systems have been implemented in order to make use of job dispatching rules for sequencing work on facilities. Few attempts have been made to implement such rules using manual systems, and it seems to be generally accepted that the use of a "good" dispatching rule is impossible without a computer system. The experimental evidence, however, supports the view that only under very exceptional circumstances would a computerized dispatching rule be worth implementing in preference to a manual system. This paper discusses the relative merits of the more popular dispatching rules with regard to their implementation requirements and their performance characteristics. A manual sequencing system which can be implemented at very little cost is described and the conditions required for successful use are discussed.

3.
Statistical Depth Functions and Their Applications
Order statistics play an important role in the analysis of one-dimensional statistical data. For many years researchers have sought an analogue of order statistics for processing and analysing high-dimensional data, without satisfactory results. Because a natural and effective way of ordering high-dimensional data has been lacking, concepts such as the one-dimensional median are difficult to generalize to higher dimensions. Statistical depth functions provide such an ordering tool: their central idea is to order the points of a high-dimensional data set outward from its centre (the deepest point). Beyond this, statistical depth functions have opened entirely new prospects in exploratory high-dimensional data analysis, statistical decision theory, and other areas, and have found successful applications in industry, engineering, biomedicine, and many other fields. This paper introduces the concept of statistical depth functions and their applications, discusses the criteria a location depth function should satisfy, and presents several commonly used depth functions. It describes the location and scatter estimators induced by depth functions, in particular by the projection depth function, together with many of their desirable properties, such as limiting distributions, robustness, and efficiency. In most settings high-breakdown-point estimators are not very efficient, whereas depth-induced estimators are affine equivariant and strike a good balance between robustness and efficiency. The paper also discusses depth-based statistical tests and confidence regions, introduces further applications of statistical depth functions, such as multivariate regression, errors-in-variables models, and quality control, and addresses practical computation. Directions for further research on statistical depth functions are indicated.
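The center-outward ordering idea can be made concrete with the Tukey (halfspace) depth, one of the commonly used depth functions the survey covers. The sketch below is ours, not code from the paper: it approximates the depth of a point in a 2-D data set by scanning a grid of directions (an exact computation would use an angular sweep).

```python
import math

def tukey_depth(point, data, n_dirs=360):
    """Approximate Tukey (halfspace) depth of a 2-D point: the smallest
    fraction of `data` contained in a closed halfplane whose boundary
    passes through `point`, scanned over a grid of directions."""
    best = len(data)
    for step in range(n_dirs):
        angle = 2.0 * math.pi * step / n_dirs
        ux, uy = math.cos(angle), math.sin(angle)
        # count data points on the non-negative side of this halfplane
        count = sum(1 for (x, y) in data
                    if (x - point[0]) * ux + (y - point[1]) * uy >= 0.0)
        best = min(best, count)
    return best / len(data)
```

Central points of a data cloud have depth near 1/2 (generalizing the median), while extreme points have depth near 1/n, which is exactly the center-outward ordering described above.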

4.
Mathematics courses for students of the social sciences have a vital role to play in providing relevant mathematical backgrounds for increasingly mathematical subjects.

The course described in this article has been designed for those first year degree students reading economics at University College, London, who have not passed G.C.E. A level (General Certificate of Education at Advanced level) mathematics. The aims and structure of the course are described, indicating those features which are thought to be of particular importance. The author reports her impression of the impact of the course on one group of students. The students’ own views about the course have also been collected and are summarized here, together with their more general views about mathematical education. Much of the analysis is thought to be relevant to all those involved in the mathematical education of social scientists.


5.
Participants of a laboratory experiment judgmentally forecast a time series. In order to support their forecasts they are given a highly correlated indicator with a constant lead period of one. The subjects are not given any other information than the time series realizations and have to base their forecasts on pure eyeballing/chart-reading. Standard economic models do not appropriately account for the features of individual forecasts: These are typically affected by intra- and inter-individual instability of behavior. We extend the scheme theory by Otwin Becker for the explanation of individual forecasts by simple schemes based on visually perceived characteristics of the time series. We find that the forecasts of most subjects can be explained very accurately by only a few schemes.

6.
Bong  JM 王维克 《数学进展》1993,22(3):193-233
This is a survey of nonlinear microlocal analysis, a topic of great current importance in the partial differential equations community. As a pioneer of this research area, the author gives, within a modest length and from a considerable theoretical vantage point, a concise account of some of the most remarkable work in the field over the past decade. The paper first explains the basic ideas of general microlocal analysis, then introduces the paradifferential calculus (paraproducts, paradifferential operators, paracomposition, and so on) that has strongly driven progress on nonlinear partial differential equations over the past ten years, together with the deeper idea of higher-order microlocalization. It also surveys at length the applications of these ideas to the analysis of weak singularities of nonlinear partial differential equations, such as propagation of singularities, reflection and diffraction, interaction of conormal singularities, nonlinear hypoellipticity, and the interaction of three singular waves, all of which are active topics in the field.

7.
The Erlang Loss formula is a widely used model for determining values of the long-run proportion of customers that are lost (ploss values) in multi-server loss systems with Poisson arrival processes. There is a need for models that are less restrictive. Here, the general two-server loss system is investigated with no restrictions on the form that the renewal type input process takes; i.e. the underlying model is based on the GI/G/2 model of queueing theory. The analysis is carried out in discrete time leading to a compact system of equations that can be solved numerically, or in special cases exactly, to obtain ploss values. Exact results are obtained for some specific loss systems involving geometric distributions and, by taking appropriate limits, these results are extended to their continuous-time counterparts. A simple numerical procedure is developed to allow systems involving arbitrary continuous distributions to be approximated by the discrete-time model, leading to very accurate results for a set of test problems.
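For reference, the classical Erlang Loss (Erlang B) formula that this paper generalizes can be evaluated with the standard numerically stable recursion. This sketch is ours and does not reproduce the paper's GI/G/2 analysis:

```python
def erlang_b(servers, offered_load):
    """Erlang B blocking probability (ploss for M/G/c/c) via the stable
    recursion B(0) = 1,  B(c) = a*B(c-1) / (c + a*B(c-1)),
    where a = offered_load is measured in Erlangs."""
    b = 1.0
    for c in range(1, servers + 1):
        b = offered_load * b / (c + offered_load * b)
    return b
```

For example, a two-server system offered one Erlang of Poisson traffic loses 20% of customers; the GI/G/2 model in the paper relaxes the Poisson-arrivals assumption behind this formula.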

8.
A new statistical model for random unit vectors
This paper proposes a new statistical model for symmetric axial directional data in dimension p. This proposal is an alternative to the Bingham distribution and to the angular central Gaussian family. The statistical properties of this model are presented. An explicit form for its normalizing constant is given and some moments and limiting distributions are derived. The proposed density is shown to apply to the modeling of 3×3 rotation matrices by representing them as quaternions, which are unit vectors in four-dimensional Euclidean space. The moment estimators of the parameters of the new model are calculated; explicit expressions for their sampling variances are given. The analysis of data measuring the posture of the right arm of subjects performing a drilling task illustrates the application of the proposed model.

9.
This study surveys claims in research articles regarding linguistic properties of mathematical texts, focusing on claims supported by empirical or logical arguments. It also performs a linguistic analysis to determine whether some of these claims hold for school textbooks in mathematics and history. The survey reveals numerous and varied claims, which mainly describe mathematical texts as highly compact, precise, complex, and full of technical vocabulary. However, very few studies present empirical support for their claims, and the few empirical studies that do exist contradict the most common, unsupported ones: no empirical study has shown mathematical texts to be more complex than texts from other subjects, and where significant differences exist they indicate the opposite. The linguistic analysis in this study is in line with previous empirical studies and stands in contrast to the prevailing opinion in the unsupported claims. For example, the mathematics textbooks have significantly shorter sentences than the history textbooks.

10.
Fractal dimension has been shown to characterize the complexity of biological signals. EMG time series are well known to exhibit complex behavior, and several studies have already tried to characterize these signals by their fractal dimension. This paper studies the correlation between the fractal dimension of the surface EMG signal recorded over the Rectus Femoris muscle during a vertical jump and the height reached in that jump. Healthy subjects performed vertical jumps to different heights. Surface EMG from the Rectus Femoris was recorded, and the height of each jump was measured by an optoelectronic motion capture system. The fractal dimension of the sEMG was computed, and its correlation with the height of the jump was studied. Linear regression analysis showed a very high correlation coefficient between the fractal dimension and the height of the jump for all subjects. The results of this study show that the fractal dimension characterizes the EMG signal and can be related to the performance of the jump. Fractal dimension is therefore a useful tool for EMG interpretation.

11.
A discipline such as business and management (B&M) is very broad and has many fields within it, ranging from fairly scientific ones such as management science or economics to softer ones such as information systems. There are at least three reasons why it is important to identify these sub-fields accurately. First, to give insight into the structure of the subject area and identify perhaps unrecognised commonalities; second, for the purpose of normalising citation data as it is well-known that citation rates vary significantly between different disciplines. And third, because journal rankings and lists tend to split their classifications into different subjects—for example, the Association of Business Schools list, which is a standard in the UK, has 22 different fields. Unfortunately, at the moment these are created in an ad-hoc manner with no underlying rigour. The purpose of this paper is to identify possible sub-fields in B&M rigorously based on actual citation patterns. We have examined 450 journals in B&M, which are included in the ISI Web of Science and analysed the cross-citation rates between them enabling us to generate sets of coherent and consistent sub-fields that minimise the extent to which journals appear in several categories. Implications and limitations of the analysis are discussed.

12.
The Herfindahl–Hirschman Index (HHI), which measures the level of concentration in a given industry, is a well-known and commonly accepted indicator of market competition. On the basis of European Union Commission guidelines and HHI values, a given industry can be characterized as unconcentrated, moderately concentrated, or concentrated. The paper presents a sensitivity analysis of HHI values, which allows simulations of concentration changes in relevant markets in order to assess new entries to the market. It derives relationships that allow the setting of boundaries within which the characterization of industry concentration remains the same. The derived relations of the HHI sensitivity analysis can be used as a tool in assessing the entry of new subjects into any industry. In economies with a smaller number of operating undertakings, it may be difficult to apply the analysis based on the methodology of the European Commission; the authors therefore propose an approach of setting boundary ranges to characterize the concentration of the industry. An empirical analysis of the Slovak insurance industry, in which 23 insurance companies currently operate, was also conducted on the basis of the European Commission methodology.
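As a minimal illustration of the index itself (our sketch, not the paper's sensitivity analysis; the 1000/2000 band boundaries are the values commonly quoted from the EC guidelines and should be checked against the current text):

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    with shares expressed in percent, so the index runs from near 0
    (atomistic market) up to 10000 (monopoly)."""
    return sum(s * s for s in shares_pct)

def concentration_band(index):
    """Classify an HHI value using the commonly cited EC thresholds
    (assumed here: <1000 unconcentrated, 1000-2000 moderate, >2000 high)."""
    if index < 1000:
        return "unconcentrated"
    if index <= 2000:
        return "moderately concentrated"
    return "concentrated"
```

A market of ten equal firms (10% each) sits exactly at the 1000 boundary, while a monopoly yields 10000; the paper's contribution is to quantify how far shares can move before the band classification changes.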

13.
Summary. This paper is devoted to the numerical analysis of some finite volume discretizations of Darcy's equations. We propose two finite volume schemes on unstructured meshes and prove their equivalence with either conforming or nonconforming finite element discrete problems. This leads to optimal a priori error estimates. In view of mesh adaptivity, we exhibit residual-type error indicators and prove estimates which allow one to compare them with the error in a very accurate way. Mathematics Subject Classification (2000): 65G99, 65M06, 65M15, 65M60, 65P05. This work was partially supported by Contract C03127/AEE2714 with the Laboratoire National d'Hydraulique of the Division Recherche et Développement of Électricité de France. We thank B. Gest and her research group for very interesting discussions on this subject.

14.
In this paper we study very large-scale neighborhoods for the minimum total weighted completion time problem on parallel machines, which is known to be strongly $\mathcal{NP}$-hard. We develop two different ideas leading to very large-scale neighborhoods in which the best improving neighbor can be determined by calculating a weighted matching. The first neighborhood is introduced in a general fashion using combined operations of a basic neighborhood. Several examples for basic neighborhoods are given. The second approach is based on a partitioning of the job sets on the machines and a reassignment of them. In a computational study we evaluate the possibilities and the limitations of the presented very large-scale neighborhoods.

15.
A new class of algorithms to estimate the cardinality of very large multisets using constant memory and doing only one pass over the data is introduced here. It is based on order statistics rather than on bit patterns in binary representations of numbers. Three families of estimators are analyzed. They attain a standard error of order 1/√M using M units of storage, which places them in the same class as the best known algorithms so far. The algorithms have a very simple internal loop, which gives them an advantage in terms of processing speed. For instance, a memory of only 12 kB and only a few seconds are sufficient to process a multiset with several million elements and to build an estimate with accuracy of order 2 percent. The algorithms are validated both by mathematical analysis and by experiments on real internet traffic.
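One simple member of the order-statistics family is the "k minimum values" estimator: hash each item to a uniform value in (0, 1), retain only the k smallest distinct hashes, and estimate the number of distinct items from the k-th smallest. The sketch below is our illustration of the idea, not the paper's exact estimator:

```python
import hashlib
import heapq

def kmv_estimate(stream, k=256):
    """One-pass, O(k)-memory distinct-count sketch: keep the k smallest
    distinct hash values in (0,1) and return (k-1)/h_k, where h_k is the
    k-th smallest hash (an order statistic of k uniform variates)."""
    heap, members = [], set()   # max-heap via negation; members mirrors heap
    for item in stream:
        digest = hashlib.sha1(str(item).encode()).digest()
        h = int.from_bytes(digest[:8], "big") / 2.0**64
        if h in members:
            continue                          # duplicate of a retained hash
        if len(heap) < k:
            heapq.heappush(heap, -h)
            members.add(h)
        elif h < -heap[0]:                    # beats the current k-th smallest
            members.discard(-heapq.heappushpop(heap, -h))
            members.add(h)
    if len(heap) < k:                         # fewer than k distinct items
        return len(heap)                      # -> count is exact
    return (k - 1) / (-heap[0])
```

With k hashes of 8 bytes each, a few kilobytes of state suffice for estimates with standard error of order 1/√k, matching the storage/accuracy trade-off described in the abstract.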

16.
Motivated by the fact that both long distance and local telephone business are evolving into markets consisting of a few firms having different cost structures and offering multipart pricing schedules, and by the fact that there are almost no analyses of markets of this type in the economics literature, a methodology is developed for the analysis of multipart prices in these markets. The approach makes use of variational inequality theory to model static Nash equilibria in multipart prices, and a marketing-type product space model for differentiated products embedded in a discrete choice model framework to model the demand. The approach is designed to be applicable to real world problems; it is flexible, and constraints encountered in the real world can be imposed. Two specific models are developed for two-part tariffs, one without resale of services allowed and one allowing for resale. Some qualitative results concerning existence and uniqueness are presented, but the strength of the methodology is quantitative analysis. An algorithm for finding equilibria is presented. An example market representing business WATS is presented to demonstrate the method. The variety of scenarios that can be investigated using the methodology demonstrates its potential to be a very useful tool for the analysis of oligopolistic markets in which multipart prices are prevalent.

17.
In this article, the general (composite) Newton-Cotes rules for evaluating Hadamard finite-part integrals with third-order singularity (also called “supersingular integrals”) are investigated, and the emphasis is placed on their pointwise superconvergence and ultraconvergence. The main error term of the general Newton-Cotes rules is derived and shown to be determined by a certain function. Based on the error expansion, corresponding modified quadrature rules are proposed. Finally, some numerical experiments are carried out to validate the theoretical analysis.

18.
In this paper we consider the problem of finding an equilibrium in an economy with non-linear constant returns to scale production activities. To find an equilibrium we propose an adjustment process in which the prices of the commodities and the activity levels of production adjust simultaneously. The process starts at a price vector at which each production activity has non-positive profit. We show that the process follows a path which connects the starting point with an equilibrium of the economy. From this it follows that the existence of a price vector at which each production activity has non-positive profit implies the existence of an equilibrium. The equilibrium can be computed by using a simplicial algorithm or by solving a sequence of Linear Variational Inequality Problems. This research is part of the VF-program Competition and Cooperation. The authors are very grateful to Dolf Talman and two anonymous referees for their valuable comments and suggestions.

19.
Preconditioned Krylov subspace methods [7] are powerful tools for solving linear systems but sometimes they converge very slowly, and often after a long stagnation. A natural way to fix this is by enlarging the space in which the solution is computed at each iteration. Following this idea, we propose in this note two multipreconditioned algorithms: multipreconditioned orthomin and multipreconditioned biCG, which aim at solving general nonsingular linear systems in a small number of iterations. After describing the algorithms, we illustrate their behaviour on systems arising from the FETI domain decomposition method, where in order to enlarge the search space, each local component in the usual preconditioner is kept as a separate preconditioner.

20.
We analyze the dynamic quality of the RR interbeat intervals of electrocardiographic signals from healthy people and from patients with premature ventricular contractions (PVCs) by applying different measure algorithms to standardised public domain data sets of heart rate variability. Our aim is to assess the utility of these algorithms for the above mentioned purposes.

Long and short time series, 24 and 0.50 h respectively, of interbeat intervals of healthy and PVC subjects were compared with the aim of developing a fast method to investigate their temporal organization.

Two different methods were used: power spectral analysis and the integral correlation method.

Power spectral analysis has proven to be a powerful tool for detecting long-range correlations. When applied to a short time series, however, the power spectra of healthy and PVC subjects show similar behavior, which disqualifies power spectral analysis as a fast method to distinguish healthy from PVC subjects.

The integral correlation method allows us to study the fractal properties of interbeat intervals of electrocardiographic signals.

The cardiac activity of healthy and PVC people stems from dynamics of a chaotic nature, characterized by correlation dimensions d_f equal to 3.40±0.50 and 5.00±0.80 for healthy and PVC subjects respectively.

The methodology presented in this article bridges the gap between theoretical and experimental studies of non-linear phenomena. From our results we conclude that the minimum number of coupled differential equations to describe cardiac activity must be six and seven for healthy and PVC individuals respectively.

From the present analysis we conclude that the correlation integral method is particularly suitable, in comparison with the power spectral analysis, for the early detection of arrhythmias on short time (0.5 h) series.



Copyright©北京勤云科技发展有限公司  京ICP备09084417号