Similar Documents
20 similar documents found (search time: 46 ms)
1.
The aim of this paper is to investigate the economic specialization of the Italian local labor systems (sets of contiguous municipalities with a high degree of self-containment of daily commuter travel) by using the Symbolic Data approach, on the basis of data derived from the Census of Industrial and Service Activities. Specifically, the economic structure of a local labor system (LLS) is described by an interval-type variable, a special symbolic data type that accounts for the fact that the municipalities within the same LLS do not all have the same economic structure.
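As a rough illustration of the interval-type representation (hypothetical municipalities, sectors, and employment shares; not data from the paper), each LLS-by-sector cell can be summarized by the minimum and maximum share observed across the member municipalities:

```python
from collections import defaultdict

# Hypothetical municipality-level employment shares: (municipality, LLS, sector, share)
records = [
    ("MunA", "LLS1", "manufacturing", 0.42), ("MunA", "LLS1", "services", 0.58),
    ("MunB", "LLS1", "manufacturing", 0.30), ("MunB", "LLS1", "services", 0.70),
    ("MunC", "LLS2", "manufacturing", 0.15), ("MunC", "LLS2", "services", 0.85),
]

# Aggregate each (LLS, sector) pair into an interval [min share, max share]
intervals = defaultdict(lambda: [float("inf"), float("-inf")])
for _, lls, sector, share in records:
    lo, hi = intervals[(lls, sector)]
    intervals[(lls, sector)] = [min(lo, share), max(hi, share)]

for (lls, sector), (lo, hi) in sorted(intervals.items()):
    print(f"{lls} {sector}: [{lo:.2f}, {hi:.2f}]")
```

The [min, max] interval per sector is what the interval-type symbolic variable records, instead of collapsing the municipalities to a single average.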

2.
Emil Horobeţ, Communications in Algebra, 2017, 45(3): 1177–1186
The generic number of critical points of the Euclidean distance function from a data point to a variety is called the Euclidean distance degree (or ED degree). The two special loci of data points where the number of critical points is smaller than the ED degree are called the Euclidean distance data singular locus and the Euclidean distance data isotropic locus. In this article, we present connections between these two special loci of an affine cone and its dual cone.

3.
In this paper we present a simple dynamization method that preserves the query and storage costs of a static data structure and ensures reasonable update costs. In this method, the majority of data elements are maintained in a single data structure, and the updates are handled using smaller auxiliary data structures. We analyze the query, storage, and amortized update costs for the dynamic version of a static data structure in terms of a function f, such that f(n) < n, that bounds the sizes of the auxiliary data structures (where n is the number of elements in the data structure). The conditions on f for minimal (with respect to asymptotic upper bounds) amortized update costs are then obtained. The proposed method is shown to be particularly suited for the cases where the merging of two data structures is more efficient than building the resultant data structure from scratch. Its effectiveness is illustrated by applying it to a class of data structures that have linear merging cost; this class consists of data structures such as Voronoi diagrams, k-d trees, quadtrees, multiple attribute trees, etc.
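A toy sketch of the general idea, not the paper's exact scheme: queries consult a large static structure plus a small auxiliary buffer, and once the buffer outgrows a threshold f(n), here f(n) = √n, it is merged back into the static structure. Merging two sorted lists is cheaper than rebuilding from scratch, mirroring the linear-merging-cost case the paper highlights.

```python
import bisect
import heapq
import math

class DynamizedSortedSet:
    """Membership queries over a mostly-static sorted list plus a small insert buffer."""

    def __init__(self, items):
        self.static = sorted(items)   # the large static structure
        self.buffer = []              # small auxiliary structure absorbing updates

    def _threshold(self):
        n = len(self.static) + len(self.buffer)
        return max(1, math.isqrt(n))  # f(n) = sqrt(n), satisfying f(n) < n

    def insert(self, x):
        bisect.insort(self.buffer, x)
        if len(self.buffer) > self._threshold():
            # linear-cost merge of the two sorted structures
            self.static = list(heapq.merge(self.static, self.buffer))
            self.buffer = []

    def contains(self, x):
        def found(arr):
            i = bisect.bisect_left(arr, x)
            return i < len(arr) and arr[i] == x
        return found(self.static) or found(self.buffer)

s = DynamizedSortedSet(range(0, 100, 2))
s.insert(7)
print(s.contains(7), s.contains(9))  # True False
```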

4.
An increasingly important problem in exploratory data analysis and visualization is that of scale; more and more data sets are much too large to analyze using traditional techniques, either in terms of the number of variables or the number of records. One approach to addressing this problem is the development and use of multiresolution strategies, where we represent the data at different levels of abstraction or detail through aggregation and summarization. In this paper we present an overview of our recent and current activities in the development of a multiresolution exploratory visualization environment for large-scale multivariate data. We have developed visualization, interaction, and data management techniques for effectively dealing with data sets that contain millions of records and/or hundreds of dimensions, and propose methods for applying similar approaches to extend the system to handle nominal as well as ordinal data.

5.
Data reduction is an important issue in the field of data mining. The goal of data reduction techniques is to extract a subset of data from a massive dataset while maintaining the properties and characteristics of the original data in the reduced set. This allows an otherwise difficult or impossible data mining task to be carried out efficiently and effectively. This paper describes a new method for selecting a subset of data that closely represents the original data in terms of its joint and univariate distributions. A pair of distance criteria, motivated by the χ²-statistic, is used for measuring the goodness of fit between the distributions of the reduced and full datasets. Under these criteria, the data reduction problem can be formulated as a bi-objective quadratic program. A genetic algorithm is used in the search/optimization process. Experiments conducted on several real-world data sets demonstrate the effectiveness of the proposed method.
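A rough sketch of the univariate part of such a criterion, with a plain random search standing in for the paper's genetic algorithm (synthetic data; bin counts and subset size are arbitrary choices): a χ²-style discrepancy between the binned distribution of a candidate subset and that of the full data set is minimized over random candidate subsets.

```python
import numpy as np

rng = np.random.default_rng(0)
full = rng.normal(size=(5000, 2))          # synthetic "full" data set, 2 variables

def chi2_distance(subset, full, bins=10):
    """Chi-square-style discrepancy between univariate bin proportions."""
    d = 0.0
    for j in range(full.shape[1]):
        edges = np.quantile(full[:, j], np.linspace(0, 1, bins + 1))
        p_full, _ = np.histogram(full[:, j], bins=edges)
        p_sub, _ = np.histogram(subset[:, j], bins=edges)
        p_full = p_full / p_full.sum()
        p_sub = p_sub / max(p_sub.sum(), 1)
        d += np.sum((p_sub - p_full) ** 2 / np.maximum(p_full, 1e-12))
    return d

# Random search over candidate subsets (a genetic algorithm would evolve the index sets)
best_idx, best_d = None, np.inf
for _ in range(200):
    idx = rng.choice(len(full), size=250, replace=False)
    d = chi2_distance(full[idx], full)
    if d < best_d:
        best_idx, best_d = idx, d
print(f"best chi2-style distance over 200 candidates: {best_d:.4f}")
```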

6.
Compositional data are data for which the relative contributions of parts to a whole, conveyed by (log-)ratios between them, are essential for the analysis. In Symbolic Data Analysis (SDA), we are in the framework of interval data when elements are characterized by variables whose values are intervals on \(\mathbb{R}\) representing inherent variability. In this paper, we address the special problem of the analysis of interval compositions, i.e., when the interval data are obtained by the aggregation of compositions. It is assumed that the interval information is represented by the respective midpoints and ranges, and both sources of information are considered as compositions. In this context, we introduce the representation of interval data as three-way data. In the framework of the log-ratio approach from compositional data analysis, it is outlined how interval compositions can be treated in an exploratory context. The goal of the analysis is to represent the compositions by coordinates which are interpretable in terms of the original compositional parts. This is achieved by summarizing all relative information (log-ratios) about each part into one coordinate of the coordinate system. Based on an example from the European Union Statistics on Income and Living Conditions (EU-SILC), several possibilities for an exploratory data analysis approach for interval compositions are outlined and investigated.
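A small numerical sketch of the log-ratio step (made-up three-part interval composition; only the centered log-ratio (clr) transform is shown, not the paper's full coordinate construction): the midpoints and ranges are each closed to proportions and then mapped to clr coordinates.

```python
import numpy as np

def closure(x):
    """Rescale a positive vector to proportions summing to 1."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def clr(x):
    """Centered log-ratio transform of a composition."""
    lx = np.log(x)
    return lx - lx.mean()

# Hypothetical interval composition on three parts, given by lower and upper bounds
lower = np.array([10.0, 30.0, 60.0])
upper = np.array([20.0, 40.0, 80.0])

midpoints = closure((lower + upper) / 2)   # midpoint composition
ranges = closure(upper - lower)            # range composition

print("clr(midpoints):", np.round(clr(midpoints), 3))
print("clr(ranges):   ", np.round(clr(ranges), 3))
```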

7.
8.
It was investigated how risk estimates derived from the RERF life span study data sets for cancer incidence and mortality, respectively, differ between the two cities Hiroshima and Nagasaki and between the two sexes. This was done by estimating the excess risk for various age-at-exposure and time-since-exposure groups. The epidemiologically most reliable age group comprises those aged 20–39 years at the time of exposure. As expected, in this group, the relative risk for females in Hiroshima is higher than that for males; however, in Nagasaki, the relative risk for females is lower than that for males. When comparing the risks in the two cities for the same sex, the risks of cancer incidence and mortality of females exposed in Hiroshima are higher than those in Nagasaki. However, for the males, the risks of cancer incidence in Hiroshima are lower than in Nagasaki, and the risks of cancer mortality of males are very similar between both cities. All differences depend on age-at-exposure and time-since-exposure, and are at the borderline of statistical significance. The absorbed dose of neutrons, for the same γ-dose, is about three times as high in Hiroshima as in Nagasaki for both sexes. Because of these observed risk differences between the two cities, it does not appear to be possible to reliably estimate the relative biological effectiveness of neutrons as compared to that of γ-rays from these epidemiological data sets. No evidence was found in this analysis that the radiation weighting factors w_R presently used for neutrons in radiation protection could severely underestimate the risks for somatic late effects induced by neutrons.

9.
The space-time monopole equation is the reduction of the anti-self-dual Yang–Mills equations in \(\mathbb{R}^{2,2}\) to \(\mathbb{R}^{2,1}\). This equation is a non-linear wave equation and can be encoded in a Lax pair. An equivalent Lax pair is used by Dai and Terng to construct monopoles with continuous scattering data; the equation can then be linearized by the scattering data, allowing one to use the inverse scattering method to solve the Cauchy problem with rapidly decaying small initial data. In this paper, we use the language of holomorphic bundles and the transversality of certain maps, parametrized by initial data, to give a larger class of initial data for which the scattering method solves the Cauchy problem of the monopole equation up to gauge transformation.

10.
We present some of the features shown by RHIC and LHC data concerning high densities, emphasizing the differences and similarities between the big bang and the little big bang. We briefly discuss multiplicity rapidity distributions, geometric scaling, the saturation momentum of gluons, harmonic moments, and long-range rapidity correlations.

11.
An iterative procedure for determining temperature fields from Cauchy data given on a part of the boundary is presented. At each iteration step, a series of mixed well-posed boundary value problems is solved for the heat operator and its adjoint. A convergence proof of this method in a weighted L2-space is included, as well as a stopping criterion for the case of noisy data. Moreover, a solvability result in a weighted Sobolev space for a parabolic initial boundary value problem of second order with mixed boundary conditions is presented. Regularity of the solution is proved.

12.
The existence of indeterminacy in the choice of scattering data for the auxiliary linear system of the Davey–Stewartson I equation is noted. A connection is established between different scattering data and the corresponding conjugation matrix for the nonlocal Riemann problem. Translated from Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova AN SSSR, Vol. 164, pp. 170–175, 1987.

13.
In this paper we consider problems related to the sortedness of a data stream. In the first part of this work, we investigate the problem of estimating the distance to monotonicity; given a finite stream of length n over the alphabet {1,...,m}, we give a deterministic (2+ε)-approximation algorithm for estimating its distance to monotonicity in space \(O\bigl(\tfrac{1}{\varepsilon^2}\log^2(\varepsilon n)\bigr)\). This improves over the previous randomized (4+ε)-approximation algorithm due to Gopalan, Jayram, Krauthgamer and Kumar in SODA 2007.
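For intuition, an exact offline computation rather than the paper's streaming approximation: the distance to monotonicity of a sequence equals its length minus the length of a longest non-decreasing subsequence, which the classical patience-sorting method finds in O(n log n) time.

```python
import bisect

def distance_to_monotonicity(seq):
    """Minimum number of deletions needed to leave a non-decreasing sequence."""
    tails = []  # tails[k] = smallest possible tail of a non-decreasing subsequence of length k+1
    for x in seq:
        i = bisect.bisect_right(tails, x)  # bisect_right keeps equal elements admissible
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(seq) - len(tails)

print(distance_to_monotonicity([1, 3, 2, 2, 5]))  # 1 (delete the 3)
print(distance_to_monotonicity([5, 4, 3, 2, 1]))  # 4 (keep a single element)
```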

14.
In this paper, we consider the problem of the nonnegative scalar curvature (NNSC) cobordism of Bartnik data \((\Sigma_1^{n-1},\gamma_1,H_1)\) and \((\Sigma_2^{n-1},\gamma_2,H_2)\). We prove that, given two metrics \(\gamma_1\) and \(\gamma_2\) on \(S^{n-1}\) (\(3\le n\le 7\)) with \(H_1\) fixed, \((S^{n-1},\gamma_1,H_1)\) and \((S^{n-1},\gamma_2,H_2)\) admit no NNSC-cobordism provided the prescribed mean curvature \(H_2\) is large enough (see Theorem 1.3). Moreover, we show that for \(n=3\), a much weaker condition, namely that the total mean curvature \(\int_{S^2} H_2\, d\mu_{\gamma_2}\) is large enough, rules out NNSC-cobordisms (see Theorem 1.2); if we require the Gaussian curvature of \(\gamma_2\) to be positive, we obtain a criterion for the nonexistence of the trivial NNSC-cobordism by using the Hawking mass and the Brown–York mass (see Theorem 1.1). For the general topology case, we prove that \((\Sigma_1^{n-1},\gamma_1,0)\) and \((\Sigma_2^{n-1},\gamma_2,H_2)\) admit no NNSC-cobordism provided the prescribed mean curvature \(H_2\) is large enough (see Theorem 1.5).

15.
The saturation assumption asserts that the best approximation error with piecewise quadratic finite elements is strictly smaller than that with piecewise linear finite elements. We establish a link between this assumption and the oscillation of the data, and prove that small oscillation relative to the best error with piecewise linears implies the saturation assumption. We also show that this condition is necessary, and asymptotically valid provided the data are sufficiently regular.
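In the generic notation commonly used for results of this kind (a sketch; the symbols are not copied from the paper), the saturation assumption states that for some constant β < 1 the piecewise quadratic best approximation error is bounded by β times the piecewise linear one:

```latex
% V_1, V_2: piecewise linear and piecewise quadratic finite element spaces on the same mesh,
% u: exact solution. Saturation assumption:
\exists\,\beta < 1:\qquad
\inf_{v_2 \in V_2} \| u - v_2 \|_{H^1(\Omega)}
\;\le\; \beta\,
\inf_{v_1 \in V_1} \| u - v_1 \|_{H^1(\Omega)}
```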

16.
The user of data envelopment analysis (DEA) has little available guidance on model quality. The technique offers none of the misspecification tests or goodness-of-fit statistics developed for parametric statistical methods. Yet, if a DEA model is to guide managerial policy, the quality of the model is of crucial importance. This paper suggests four alternative purposes of DEA modelling, and offers four measures of the quality of a DEA model which reflect those purposes. Using Monte Carlo simulation methods, it explores the performance of DEA under a wide variety of assumptions. It notes that four issues have an important influence on model results: the distribution of true efficiencies in the study sample; the size of the sample; the number of inputs and outputs included in the analysis; and the degree of correlation between inputs and outputs. The paper concludes that any judgement about the reliability of model results must depend on the objective of the analysis.

17.
All forecast models, whether they represent the state of the weather, the spread of a disease, or levels of economic activity, contain unknown parameters. These parameters may be the model's initial conditions, its boundary conditions, or other tunable parameters which have to be determined. Four-dimensional variational data assimilation (4D-Var) is a method of estimating this set of parameters by optimizing the fit between the solution of the model and a set of observations which the model is meant to predict. Although the method of 4D-Var described in this paper is not restricted to any particular system, the application described here has a numerical weather prediction (NWP) model at its core, and the parameters to be determined are the initial conditions of the model. The purpose of this paper is to give a review covering the assimilation of Doppler radar wind data into an NWP model. Some associated problems, such as sensitivity to small variations in the initial conditions or due to small changes in the background variables, and biases due to nonlinearity, are also studied.
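A toy illustration of the variational idea with a hypothetical scalar linear model (not an operational NWP system): the initial condition is the parameter to be determined, and it is chosen to minimize a cost that combines the misfit to a background guess with the misfit of the model trajectory to the observations.

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear "forecast model": x_{t+1} = a * x_t, observed directly (observation operator = identity)
a = 0.9
times = np.arange(1, 6)
x_true0 = 2.0
obs = x_true0 * a ** times + np.random.default_rng(1).normal(scale=0.05, size=len(times))

x_bg, b_var, r_var = 1.5, 1.0, 0.05 ** 2   # background guess, background and observation error variances

def cost(x):
    x0 = x[0]
    traj = x0 * a ** times                   # run the model from the candidate initial state
    j_b = (x0 - x_bg) ** 2 / b_var           # background (prior) term
    j_o = np.sum((traj - obs) ** 2) / r_var  # observation misfit term
    return j_b + j_o

res = minimize(cost, x0=np.array([x_bg]))
print(f"analysed initial condition: {res.x[0]:.3f} (truth {x_true0})")
```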

18.
This paper considers the problem of interval scale data in the most widely used models of data envelopment analysis (DEA), the CCR and BCC models. Radial models require inputs and outputs measured on the ratio scale. Our focus is on how to deal with interval scale variables, especially when the interval scale variable is a difference of two ratio scale variables, such as profit or the decrease/increase in bank accounts. We suggest the use of these ratio scale variables in a radial DEA model.
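For context, a compact sketch of the standard input-oriented CCR envelopment problem on synthetic ratio-scale data (as the radial models require), solved with a generic LP solver; the paper's treatment of interval scale variables is not reproduced here. For each DMU k, θ is minimized subject to a nonnegative combination of all DMUs using at most θ times DMU k's inputs while producing at least its outputs.

```python
import numpy as np
from scipy.optimize import linprog

# Synthetic ratio-scale data: rows are DMUs
X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [5.0, 6.0]])  # inputs
Y = np.array([[1.0], [2.0], [2.0], [3.0]])                       # outputs
n, m = X.shape
s = Y.shape[1]

def ccr_input_efficiency(k):
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_ik <= 0
    A_in = np.hstack([-X[k].reshape(-1, 1), X.T])
    # Output constraints: -sum_j lambda_j * y_rj <= -y_rk
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for k in range(n):
    print(f"DMU {k}: CCR input efficiency = {ccr_input_efficiency(k):.3f}")
```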

19.
20.
In this paper, we investigate DEA with interval input-output data. First, we consider various extensions of efficiency and show that 25 of them are essential. Second, we formulate the efficiency test problems as mixed integer programming problems. We prove that 14 of the 25 problems can be reduced to linear programming problems and that the other 11 efficiencies can be tested by solving a finite sequence of linear programming problems. Third, in order to obtain efficiency scores, we extend the SBM model to interval input-output data. Fourth, to moderate a possible positive overassessment by DEA, we introduce the inverted DEA model with interval input-output data. Using efficiency and inefficiency scores, we propose a classification of DMUs. Finally, we apply the proposed approach to Japanese bank data and demonstrate its advantages.
