Similar Documents
20 similar documents found.
1.
The purpose of this work is to provide a way to improve the stability and convergence rate of a price adjustment mechanism that converges to a Walrasian equilibrium. We focus on a discrete tâtonnement based on a two-agent, two-good exchange economy, and we introduce memory, assuming that the auctioneer adjusts prices not only using the current excess demand, but also making use of past excess demand functions. In particular, we study the effect of computing a weighted average of the current and the previous excess demands (finite two-level memory) and of all the previous excess demands (infinite memory). We show that suitable weight distributions have a stabilizing effect, so that the resulting price adjustment processes converge toward the competitive equilibrium in a wider range of situations than the process without memory. Finally, we investigate the convergence speed of the proposed mechanisms toward the equilibrium. In particular, we show that using infinite memory with fading weights approaches the competitive equilibrium faster than with a distribution of quasi-uniform weights.
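For concreteness, here is a hedged sketch of the two-level-memory adjustment rule described above. The Cobb-Douglas two-agent exchange economy, the parameter names (gamma for the adjustment step, alpha for the memory weight on the current excess demand) and all numerical values are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: a discrete tatonnement with two-level memory.
# The two-agent Cobb-Douglas exchange economy below is an illustrative assumption,
# not the specification used in the paper; parameters gamma and alpha are ours.
import numpy as np

# Agent data: Cobb-Douglas expenditure share on good 1 and endowments (good 1, good 2).
shares = np.array([0.3, 0.7])
endow = np.array([[1.0, 4.0],
                  [4.0, 1.0]])

def excess_demand(p):
    """Aggregate excess demand for good 1 at price p (good 2 is the numeraire)."""
    income = p * endow[:, 0] + endow[:, 1]
    demand_good1 = shares * income / p
    return demand_good1.sum() - endow[:, 0].sum()

def tatonnement_memory(p0, gamma=0.5, alpha=0.6, steps=200):
    """Price adjustment p_{t+1} = p_t + gamma * (alpha*Z(p_t) + (1-alpha)*Z(p_{t-1}))."""
    p_prev, p = p0, p0
    for _ in range(steps):
        z_now, z_prev = excess_demand(p), excess_demand(p_prev)
        p_prev, p = p, max(1e-8, p + gamma * (alpha * z_now + (1 - alpha) * z_prev))
    return p

print(tatonnement_memory(p0=0.2))   # converges toward the relative price clearing good 1
```

Infinite memory with fading weights would replace the bracketed average by an exponentially weighted average of all past excess demands.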

2.
Some properties of frames of subspaces obtained by operator theory methods
We study the relationship among operators, orthonormal bases of subspaces and frames of subspaces (also called fusion frames) for a separable Hilbert space H. We get sufficient conditions on an orthonormal basis of subspaces E = {E_i}_{i∈I} of a Hilbert space K and a surjective T ∈ L(K, H) in order that {T(E_i)}_{i∈I} is a frame of subspaces with respect to a computable sequence of weights. We also obtain generalizations of results in [J.A. Antezana, G. Corach, M. Ruiz, D. Stojanoff, Oblique projections and frames, Proc. Amer. Math. Soc. 134 (2006) 1031-1037], which relate frames of subspaces (including the computation of their weights) and oblique projections. The notion of refinement of a fusion frame is defined and used to obtain results about the excess of such frames. We study the set of admissible weights for a generating sequence of subspaces. Several examples are given.

3.
In this paper we study lattice rules, which are cubature formulae to approximate integrands over the unit cube [0,1]^s from a weighted reproducing kernel Hilbert space. We assume that the weights are independent random variables with a given mean and variance, for two reasons stemming from practical applications: (i) It is usually not known in practice how to choose the weights. Thus by assuming that the weights are random variables, we obtain robust constructions (with respect to the weights) of lattice rules. This, to some extent, removes the necessity to carefully choose the weights. (ii) In practice it is convenient to use the same lattice rule for many different integrands. The best choice of weights for each integrand may vary to some degree, hence considering the weights as random variables does justice to how lattice rules are used in applications. In this paper the worst-case error is therefore a random variable depending on random weights. We show how one can construct lattice rules which perform well for weights taken from a set with large measure. Such lattice rules are therefore robust with respect to certain changes in the weights. The construction algorithm uses the component-by-component (cbc) idea based on two criteria, one using the mean of the worst-case error and the second using a bound on the variance of the worst-case error. We call the new algorithm the cbc2c (component-by-component with 2 constraints) algorithm. We also study a generalized version which uses r constraints, which we call the cbcrc (component-by-component with r constraints) algorithm. We show that lattice rules generated by the cbcrc algorithm simultaneously work well for all weights in a subspace spanned by the chosen weights γ^(1), . . . , γ^(r). Thus, in applications, instead of finding one set of weights, it is enough to find a convex polytope in which the optimal weights lie. The price for this method is a factor r in the upper bound on the error and in the construction cost of the lattice rule. Thus the burden of determining one set of weights very precisely can be shifted to the construction of good lattice rules. Numerical results indicate the benefit of using the cbc2c algorithm for certain choices of weights.
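For orientation, here is a hedged sketch of the classical single-criterion component-by-component construction in a weighted Korobov space, the building block that the cbc2c/cbcrc algorithms extend with additional weight criteria. The squared worst-case error of a rank-1 lattice rule with generating vector z is e^2(z) = -1 + (1/n) sum_k prod_j (1 + γ_j ω({k z_j / n})) with ω(x) = 2π^2 (x^2 - x + 1/6). The fixed product weights γ_j and all parameters below are illustrative assumptions; the paper's randomized-weight criteria are not reproduced.

```python
# Hedged sketch: classical single-criterion component-by-component (CBC)
# construction of a rank-1 lattice rule in a weighted Korobov space with fixed
# product weights gamma_j.  Not the cbc2c/cbcrc algorithms of the abstract.
import numpy as np

def cbc_lattice(n, s, gamma):
    """Return a generating vector z for an n-point rank-1 lattice rule in s dimensions."""
    # omega(x) = sum_{h != 0} e^{2*pi*i*h*x} / h^2 = 2*pi^2 * (x^2 - x + 1/6)
    omega = lambda x: 2 * np.pi**2 * (x**2 - x + 1.0 / 6.0)
    k = np.arange(n)
    prod = np.ones(n)          # running product over the components chosen so far
    z = []
    for j in range(s):
        best_err, best_c = np.inf, None
        for c in range(1, n):  # try every candidate component
            vals = prod * (1.0 + gamma[j] * omega((k * c % n) / n))
            err2 = -1.0 + vals.mean()
            if err2 < best_err:
                best_err, best_c = err2, c
        z.append(best_c)
        prod *= 1.0 + gamma[j] * omega((k * best_c % n) / n)
    return np.array(z)

z = cbc_lattice(n=251, s=5, gamma=[0.9**j for j in range(1, 6)])
print(z)   # lattice points are {frac(i * z / n) : i = 0, ..., n-1}
```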

4.
In this paper, we study the Drinfeld cusp forms for Γ_1(T) and Γ(T) using Teitelbaum's interpretation as harmonic cocycles. We obtain explicit eigenvalues of Hecke operators associated to degree one prime ideals acting on the cusp forms for Γ_1(T) of small weights and conclude that these Hecke operators are simultaneously diagonalizable. We also show that the Hecke operators are not diagonalizable in general for Γ_1(T) of large weights, nor for Γ(T) even of small weights. The Hecke eigenvalues on cusp forms for Γ(T) with small weights are determined and the eigenspaces characterized.

5.
It is well known that the univariate generalized Pareto distributions (GPD) are characterized by their peaks-over-threshold (POT) stability. We extend this result to multivariate GPDs. It is also shown that this POT stability is asymptotically shared by distributions which are in a certain neighborhood of a multivariate GPD. A multivariate extreme value distribution is a typical example. The usefulness of the results is demonstrated by various applications. We immediately obtain, for example, that the excess distribution of a linear portfolio with positive weights a_i, i ≤ d, is independent of the weights if (U_1,…,U_d) follows a multivariate GPD with identical univariate polynomial or Pareto margins; this was established by Macke [On the distribution of linear combinations of multivariate EVD and GPD distributed random vectors with an application to the expected shortfall of portfolios, Diploma Thesis, University of Würzburg, 2004 (in German)] and Falk and Michel [Testing for tail independence in extreme value models, Ann. Inst. Statist. Math. 58 (2006) 261-290]. This implies, for instance, that the expected shortfall as a measure of risk fails in this case.

6.
Linear hyperbolic partial differential equations in a homogeneous medium, e.g., the wave equation describing the propagation and scattering of acoustic waves, can be reformulated as time-domain boundary integral equations. We propose an efficient implementation of a numerical discretization of such equations when the strong Huygens' principle does not hold. For the numerical discretization, we make use of convolution quadrature in time and a standard Galerkin boundary element method in space. The quadrature in time results in a discrete convolution of weights W_j with the boundary density evaluated at equally spaced time points. If the strong Huygens' principle holds, the W_j converge to 0 exponentially quickly for large enough j. If the strong Huygens' principle does not hold, e.g., in even space dimensions or when some damping is present, the weights are never zero, thereby presenting a difficulty for efficient numerical computation. In this paper we prove that the kernels of the convolution weights approximate in a certain sense the time-domain fundamental solution, and that the same holds if both are differentiated in space. Since the tails of the fundamental solution are very smooth, this implies that the tails of the weights are smooth and can efficiently be interpolated. Further, we hint at the possibility of applying the fast and oblivious convolution quadrature algorithm of Schädle et al. to further reduce memory requirements for long-time computation. We discuss the efficient implementation of the whole numerical scheme and present numerical experiments.
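The convolution weights W_j mentioned above are the coefficients in the expansion K(δ(ζ)/Δt) = Σ_j W_j ζ^j, where K is the Laplace-domain kernel and δ is the generating polynomial of the time-stepping method. Here is a hedged scalar sketch computing them by a scaled FFT; the BDF2 choice of δ, the delay kernel K(s) = exp(-s·d) (the 3D wave kernel up to a 1/(4πd) factor) and all parameter values are illustrative assumptions.

```python
# Hedged sketch: scalar convolution quadrature (CQ) weights via FFT.
# K(s) is a Laplace-domain kernel; delta(zeta) is the BDF2 generating function.
# The choice K(s) = exp(-s*d) (pure time delay) and the parameters d, dt, N are
# illustrative assumptions.
import numpy as np

def cq_weights(K, dt, N, rho=None):
    """First N convolution weights W_0..W_{N-1} of K for BDF2-based CQ."""
    L = 2 * N                                               # number of contour points
    rho = rho if rho is not None else 1e-12 ** (1.0 / L)    # contour radius
    zeta = rho * np.exp(2j * np.pi * np.arange(L) / L)
    delta = lambda z: (1 - z) + 0.5 * (1 - z) ** 2          # BDF2 generating function
    vals = K(delta(zeta) / dt)
    # Cauchy formula: W_j ~ rho^{-j} / L * sum_l vals_l * exp(-2*pi*i*j*l/L)
    w = np.fft.fft(vals) / L
    return (w[:N] * rho ** (-np.arange(N))).real

d, dt = 1.0, 0.1
W = cq_weights(lambda s: np.exp(-s * d), dt, N=40)
print(W[:15])   # weights concentrate around j ~ d/dt = 10, the discrete travel time
```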

7.
Recently, quasi-Monte Carlo algorithms have been successfully used for multivariate integration of high dimension d, and were significantly more efficient than Monte Carlo algorithms. The existing theory of the worst case error bounds of quasi-Monte Carlo algorithms does not explain this phenomenon. This paper presents a partial answer to why quasi-Monte Carlo algorithms can work well for arbitrarily large d. It is done by identifying classes of functions for which the effect of the dimension d is negligible. These are weighted classes in which the behavior in the successive dimensions is moderated by a sequence of weights. We prove that the minimal worst case error of quasi-Monte Carlo algorithms does not depend on the dimension d iff the sum of the weights is finite. We also prove that the minimal number of function values in the worst case setting needed to reduce the initial error by ε is bounded by Cε^(-p), where the exponent p ∈ [1, 2] and C depends exponentially on the sum of the weights. Hence, a relatively small sum of the weights makes some quasi-Monte Carlo algorithms strongly tractable. We show in a nonconstructive way that many quasi-Monte Carlo algorithms are strongly tractable. Even random selection of sample points (done once for the whole weighted class of functions, with the worst case error then established for that particular selection, in contrast to Monte Carlo, where random selection of sample points is carried out for a fixed function) leads to strongly tractable quasi-Monte Carlo algorithms. In this case the minimal number of function values in the worst case setting is of order ε^(-p) with the exponent p = 2. The deterministic construction of strongly tractable quasi-Monte Carlo algorithms as well as the minimal exponent p are open problems.

8.
Given n points in the plane with nonnegative weights, the inverse Fermat–Weber problem consists in changing the weights at minimum cost such that a prespecified point in the plane becomes the Euclidean 1-median. The cost is proportional to the increase or decrease of the corresponding weight. In case the prespecified point does not coincide with one of the given n points, the inverse Fermat–Weber problem can be formulated as a linear program. We derive a purely combinatorial algorithm which solves the inverse Fermat–Weber problem with unit costs using O(n) greedy-like iterations, each of which can be done in constant time if the points are sorted according to their slopes. If the prespecified point coincides with one of the given n points, it is shown that the corresponding inverse problem can be written as a convex problem and hence is solvable in polynomial time to any fixed precision.
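A hedged sketch of the linear-programming formulation for the case where the prespecified point x0 coincides with none of the given points (this is not the paper's combinatorial algorithm): since the Weber objective is differentiable there, x0 is the Euclidean 1-median exactly when the weighted unit vectors from x0 to the points sum to zero, so minimizing the total weight modification subject to that condition and to nonnegative weights is an LP. The variable names and the unit-cost assumption are ours.

```python
# Hedged sketch: inverse Fermat-Weber as a linear program, assuming x0 is none of
# the given points and unit modification costs.  Variables: u = weight increases,
# v = weight decreases.
import numpy as np
from scipy.optimize import linprog

def inverse_fermat_weber(points, weights, x0):
    points, weights, x0 = map(np.asarray, (points, weights, x0))
    dirs = points - x0
    units = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)   # unit vectors from x0
    n = len(points)
    c = np.ones(2 * n)                                   # minimize sum(u) + sum(v)
    # optimality condition: sum_i (w_i + u_i - v_i) * units_i = 0  (x and y components)
    A_eq = np.hstack([units.T, -units.T])
    b_eq = -units.T @ weights
    # keep modified weights nonnegative: v_i - u_i <= w_i
    A_ub = np.hstack([-np.eye(n), np.eye(n)])
    b_ub = weights
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    return weights + u - v, res.fun                      # new weights, total cost

pts = [(0, 0), (4, 0), (0, 3), (5, 5)]
new_w, cost = inverse_fermat_weber(pts, [1.0, 1.0, 1.0, 1.0], x0=(1.0, 1.0))
print(new_w, cost)
```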

9.
In repetitive judgmental discrete decision-making with multiple criteria, the decision maker usually behaves as if there were a set of appropriate criterion weights such that the decisions chosen are based on the weighted sum of all the criteria. Many different procedures for estimating these implied criterion weights have been proposed. Most of these procedures emphasize the preference trade-off among the multiple criteria of the decision maker, and thus the criterion weights obtained are not directly related to the hit ratio of matching decisions. Based on past data, statistical discriminant analysis can be used to determine the implied criterion weights that would reflect the past decisions. The most interesting performance measure is the hit ratio. In this work, we use the integer linear goal-programming technique to determine optimal criterion weights which minimize the number of misclassified decisions. The linear goal-programming formulation has m constraints and m + k + 1 variables, where m is the number of cases and k is the number of criteria. An empirical study is carried out by applying two different procedures to the actual past admission data of an M.B.A. programme. The hit ratios of the different procedures are compared.

10.
The Spanning Tree Protocol routes traffic on shortest path trees. If some edges fail, the traffic has to be rerouted accordingly, setting up alternative trees. In this paper we design efficient algorithms to compute polynomial-size integer weights so as to enforce the following stability property: if q = O(1) edges fail, traffic demands that are not affected by the failures are not redirected. Stability is a goal pursued by network operators in order to minimize transmission delays due to the restoration process.

11.
In this work we give extrapolation results on weighted Lebesgue spaces for weights associated to a family of operators. The starting point for the extrapolation can be the knowledge of boundedness on a particular Lebesgue space as well as boundedness on the extremal case L^∞. This analysis can be applied to a variety of operators appearing in the context of a Schrödinger operator −Δ + V, where V satisfies a reverse Hölder inequality. In that case the weights involved are a localized version of Muckenhoupt weights.

12.
In this paper, we discuss the notion of reducibility of matrix weights and introduce a real vector space 𝒞_ℝ which encodes all information about the reducibility of a matrix weight W. In particular, a weight W reduces if and only if there is a nonscalar matrix T such that TW = WT*. Also, we prove that reducibility can be studied by looking at the commutant of the monic orthogonal polynomials or by looking at the coefficients of the corresponding three-term recursion relation. A matrix weight may not be expressible as a direct sum of irreducible weights, but it is always equivalent to a direct sum of irreducible weights. We also establish that the decompositions of two equivalent weights as sums of irreducible weights have the same number of terms and that, up to a permutation, they are equivalent. We consider the algebra 𝒟(W) of right-hand-side matrix differential operators of a reducible weight W, giving its general structure. Finally, we make a change of emphasis by considering the reducibility of polynomials instead of the reducibility of matrix weights.

13.
We introduce a method for calculating rational interpolants when some (but not necessarily all) of their poles are prescribed. The algorithm determines the weights in the barycentric representation of the rationals; it simply consists in multiplying each interpolated value by a certain number, computing the weights of a rational interpolant without poles, and finally multiplying the weights by those same numbers. The supplementary cost in comparison with interpolation without poles is about (v + 2)N, where v is the number of poles and N the number of interpolation points. We also give a condition under which the computed rational interpolant really has the desired poles.
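A hedged sketch of one standard realization of this recipe, assuming a polynomial-numerator barycentric interpolant: the scaling numbers are the values q(x_i) of the monic polynomial q whose roots are the prescribed poles, and the pole-free weights are the ordinary Lagrange barycentric weights. The paper's exact variant may differ.

```python
# Hedged sketch: barycentric rational interpolation with prescribed poles.
# Recipe: scale each value f_i by q(x_i), where q is the monic polynomial with
# the prescribed poles as roots; take the weights of an interpolant without
# poles (here: polynomial barycentric weights); multiply them by the same q(x_i).
import numpy as np

def prescribed_pole_weights(x, poles):
    x = np.asarray(x, dtype=float)
    # weights of the interpolant without poles (polynomial barycentric weights)
    w = np.array([1.0 / np.prod(xi - np.delete(x, i)) for i, xi in enumerate(x)])
    q = np.array([np.prod(xi - np.asarray(poles)) for xi in x])   # the scaling numbers
    return w * q

def bary_eval(t, x, f, u):
    """Evaluate the barycentric formula with nodes x, values f, weights u at t."""
    diff = t - x
    if np.any(diff == 0):                      # t is a node: return the data value
        return f[np.argmin(np.abs(diff))]
    return np.sum(u * f / diff) / np.sum(u / diff)

x = np.linspace(0.0, 1.0, 8)
f = 1.0 / (x + 0.2)                            # data with a pole at -0.2
u = prescribed_pole_weights(x, poles=[-0.2])   # prescribe that pole
print([bary_eval(t, x, f, u) - 1.0 / (t + 0.2) for t in (0.33, 0.77)])  # ~0: exact
```

In this example the interpolant reproduces 1/(x + 0.2) exactly, since the scaled values f_i·q(x_i) are constant and the prescribed denominator q carries the pole.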

14.
Let G be a simple graph and assume that a mapping assigns integer weights to the edges of G such that, for each induced subgraph that is a cycle, the sum of the weights assigned to its edges is positive; let σ be the sum of the weights of all edges of G. It has been proved (Vijayakumar, Discrete Math 311(14):1385–1387, 2011) that (1) if G is 2-connected and the weight of each edge is at most 1, then σ is positive. It has been conjectured (Xu, Discrete Math 309(4):1007–1012, 2009) that (2) if the minimum degree of G is 3 and the weight of each edge is ±1, then σ > 0. In this article, we prove a generalization of (1) and, using this, we settle a vast generalization of (2).

15.
Given an undirected, connected network G = (V, E) with weights on the edges, the cut basis problem asks for a maximal number of linearly independent cuts such that the sum of the cut weights is minimized. Surprisingly, this problem has not attracted as much attention as a closely related graph-theoretic problem, namely the cycle basis problem. We consider two versions of the problem: the unconstrained and the fundamental cut basis problem. For the unconstrained case, where the cuts in the basis can be of an arbitrary kind, the problem can be written as a multiterminal network flow problem and is thus solvable in strongly polynomial time. In contrast, the fundamental cut basis problem, where each cut in the basis is obtained by deleting an edge from a single spanning tree T, is shown to be NP-hard. In this proof, we also show that a tree which induces the minimum fundamental cycle basis is also an optimal solution for the minimum fundamental cut basis problem in unweighted graphs. We present heuristics and integer programming formulations, and summarize first experiences with numerical tests.

16.
Cohen’s linearly weighted kappa is a weighted average
An n × n agreement table F = {f_ij} with n ≥ 3 ordered categories can, for fixed m (2 ≤ m ≤ n − 1), be collapsed into (n−1 choose m−1) distinct m × m tables by combining adjacent categories. It is shown that the components (observed and expected agreement) of Cohen's weighted kappa with linear weights can be obtained from the m × m subtables. A consequence is that weighted kappa with linear weights can be interpreted as a weighted average of the linearly weighted kappas corresponding to the m × m tables, where the weights are the denominators of the kappas. Moreover, weighted kappa with linear weights can be interpreted as a weighted average of the linearly weighted kappas corresponding to all nontrivial subtables.
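For reference, Cohen's weighted kappa with linear weights for an n × n agreement table F is κ_w = (P_o − P_e)/(1 − P_e), with observed and expected weighted agreement P_o = Σ_ij w_ij f_ij / N and P_e = Σ_ij w_ij r_i c_j / N², where w_ij = 1 − |i − j|/(n − 1), r_i and c_j are the row and column totals and N the grand total. A minimal sketch (the collapsing/averaging decomposition from the abstract is not reproduced, and the example table is illustrative):

```python
# Minimal sketch: Cohen's weighted kappa with linear weights for an n x n
# agreement table F (rows: rater 1, columns: rater 2).
import numpy as np

def linear_weighted_kappa(F):
    F = np.asarray(F, dtype=float)
    n = F.shape[0]
    N = F.sum()
    i, j = np.indices((n, n))
    w = 1.0 - np.abs(i - j) / (n - 1)            # linear agreement weights
    r, c = F.sum(axis=1), F.sum(axis=0)          # row and column totals
    p_obs = (w * F).sum() / N                    # observed weighted agreement
    p_exp = (w * np.outer(r, c)).sum() / N**2    # expected weighted agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

F = [[20, 5, 1],
     [4, 15, 6],
     [1, 7, 18]]
print(linear_weighted_kappa(F))
```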

17.
The purpose of this study is to develop a new method which, for given inputs and outputs, provides the best common weights for all the units, discriminating optimally between the efficient and inefficient units as pre-specified by Data Envelopment Analysis (DEA), in order to rank all the units on the same scale. This new method, Discriminant Data Envelopment Analysis of Ratios (DR/DEA), presents a further post-optimality analysis of DEA for organizational units when their multiple inputs and outputs are given. We construct the ratio between the composite output and the composite input, where the common weights are computed by a new nonlinear optimization of the goodness of separation between the two pregiven groups. A practical use of DR/DEA is that the common weights may be utilized for ranking the units on a unified scale. DR/DEA is a new use of a two-group discriminant criterion, presented here for ratios rather than the traditional discriminant analysis, which applies to a linear function. Moreover, non-parametric statistical tests are employed to verify the consistency between the classification from DEA (efficient and inefficient units) and the post-classification generated by DR/DEA.

18.
In this paper we revisit an existing dynamic programming algorithm for finding optimal subtrees in edge-weighted trees. This algorithm was sketched by Maffioli in a technical report in 1991. First, we adapt this algorithm for application to trees that can have both node and edge weights. Second, we extend the algorithm so that it not only delivers the values of optimal subtrees but also the subtrees themselves. Finally, we use our extended algorithm to develop heuristics for the k-cardinality tree problem in undirected graphs G with node and edge weights. This NP-hard problem consists of finding in the given graph a tree with exactly k edges such that the sum of the node and edge weights is minimal. In order to show the usefulness of our heuristics we conduct an extensive computational analysis covering most of the existing problem instances. Our results show that with growing problem size the proposed heuristics reach the performance of state-of-the-art metaheuristics. Therefore, this study can be seen as a cautionary note on the scaling of metaheuristics.

19.
Graph Mates     
A weighted digraph D is said to be doubly stochastic if all the edge weights in D lie in [0, 1] and the sum of the weights of the edges incident to each vertex in D is one. Let Ω(G) denote the set of all doubly stochastic digraphs with n vertices. We define Graph Mates in Ω(G) and derive a necessary and sufficient condition for two doubly stochastic digraphs to be Graph Mates.
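A hedged sketch of the doubly stochastic condition in matrix form; we read "doubly" in the usual matrix sense, i.e., both the outgoing and the incoming weights at every vertex sum to one, which is an interpretation of the abstract's phrasing rather than a definition taken from the paper.

```python
# Hedged sketch: check whether a weighted digraph, given by its weight matrix W
# (W[i][j] = weight of the edge from vertex i to vertex j, 0 if absent), is
# doubly stochastic: weights in [0, 1], row sums and column sums equal to one.
import numpy as np

def is_doubly_stochastic(W, tol=1e-9):
    W = np.asarray(W, dtype=float)
    in_range = np.all((W >= -tol) & (W <= 1 + tol))
    rows_ok = np.allclose(W.sum(axis=1), 1.0, atol=tol)   # outgoing weights
    cols_ok = np.allclose(W.sum(axis=0), 1.0, atol=tol)   # incoming weights
    return bool(in_range and rows_ok and cols_ok)

print(is_doubly_stochastic([[0.5, 0.5], [0.5, 0.5]]))   # True
print(is_doubly_stochastic([[0.9, 0.1], [0.5, 0.5]]))   # False: column sums are off
```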

20.
We prove weighted inequalities for the Bochner-Riesz means for Fourier-Bessel series with more general weights w(x) than the previously considered power weights. These estimates are obtained by using the local A_p theory and Hardy's inequalities with weights. Moreover, we also obtain weighted weak type (1,1) inequalities. The case w(x) = x^a is sketched and follows as a corollary of the main result.
