Similar Literature
Found 20 similar articles (search time: 828 ms)
1.
This article is a slightly extended and revised version of a conference talk at “Arithmetik an der A7” in Würzburg, June 23rd, 2017. We present a conjecture on the coincidence of Hecke theta series of weight 1 on three distinct quadratic fields. Then we discuss a special instance of the Deligne–Serre Theorem, implying that the decomposition of prime numbers in a certain extension of the rationals is governed by the coefficients of the eta product \(\eta^{2}(z)\).
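As a generic illustration (not taken from the talk): since \(\eta(z)=q^{1/24}\prod_{n\ge 1}(1-q^n)\) with \(q=e^{2\pi i z}\), the coefficients of \(\eta^{2}(z)\), up to the prefactor \(q^{1/12}\), are those of the squared Euler product, computable by truncated polynomial multiplication. The function name is our own.

```python
def euler_product_coeffs(power, n_terms):
    """Coefficients of prod_{n>=1} (1 - q^n)**power, truncated at q**n_terms."""
    coeffs = [0] * (n_terms + 1)
    coeffs[0] = 1
    for n in range(1, n_terms + 1):
        for _ in range(power):
            # Multiply the truncated series by (1 - q^n) in place,
            # working downward so each coefficient is used before update.
            for k in range(n_terms, n - 1, -1):
                coeffs[k] -= coeffs[k - n]
    return coeffs

# power=1 recovers Euler's pentagonal-number pattern: 1,-1,-1,0,0,1,0,1
print(euler_product_coeffs(1, 7))
print(euler_product_coeffs(2, 5))
```

For `power=1` the output exhibits the pentagonal number theorem, a quick sanity check that the truncation is handled correctly.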

2.
The crushing operation of Jaco and Rubinstein is a powerful technique in algorithmic 3-manifold topology: it enabled the first practical implementations of 3-sphere recognition and prime decomposition of orientable manifolds, and it plays a prominent role in state-of-the-art algorithms for unknot recognition and testing for essential surfaces. Although the crushing operation will always reduce the size of a triangulation, it might alter its topology, and so it requires a careful theoretical analysis for the settings in which it is used. The aim of this short paper is to make the crushing operation more accessible to practitioners and easier to generalise to new settings. When the crushing operation was first introduced, the analysis was powerful but extremely complex. Here we give a new treatment that reduces the crushing process to a sequential combination of three “atomic” operations on a cell decomposition, all of which are simple to analyse. As an application, we generalise the crushing operation to the setting of non-orientable 3-manifolds, where we obtain a new practical and robust algorithm for non-orientable prime decomposition. We also apply our crushing techniques to the study of non-orientable minimal triangulations.

3.
Lebesgue Decomposition of Non-additive Set Functions    (cited 1 time: 0 self-citations, 1 by others)
Zhang Qiang, Liu Ke. Acta Mathematica Sinica, 2002, 45(5): 899–904
This paper discusses the Lebesgue decomposition theorem for general non-additive set functions, which extends the corresponding result of classical measure theory; at the same time, it provides an alternative proof of the Lebesgue decomposition theorem for classical additive measures.
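For context, the classical statement being extended is the Lebesgue decomposition of measure theory: given σ-finite measures \(\mu\) and \(\nu\) on a common measurable space,

```latex
% Classical Lebesgue decomposition: mu splits uniquely into a
% nu-absolutely-continuous part and a nu-singular part.
\[
  \mu = \mu_{\mathrm{ac}} + \mu_{\mathrm{s}},
  \qquad \mu_{\mathrm{ac}} \ll \nu,
  \qquad \mu_{\mathrm{s}} \perp \nu,
\]
```

and this decomposition is unique.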

4.
The Lebesgue decomposition theorem and the Radon–Nikodym theorem are the cornerstones of classical measure theory. These theorems have been generalized in several settings and in several ways. Hassi, Sebestyén, and de Snoo recently proved a Lebesgue type decomposition theorem for nonnegative sesquilinear forms defined on complex linear spaces. The main purpose of this paper is to formulate and prove a Radon–Nikodym type result in this setting as well. As an application, we present a Lebesgue type decomposition theorem and solve a special case of the infimum problem for densely defined (not necessarily bounded) positive operators.

5.
In the paper we present results on the continuity of nonlinear superposition operators acting in the space of functions of bounded variation in the sense of Jordan. It is shown that the continuity of an autonomous superposition operator is automatically guaranteed once the acting condition is met. We also give a simple proof of the fact that a nonautonomous superposition operator generated by a continuously differentiable function is uniformly continuous on bounded sets. Moreover, we present necessary and sufficient conditions for the continuity of a superposition operator (autonomous or nonautonomous) in a general setting. Thus, we answer two basic open problems mentioned in the monograph (Appell et al. in Bounded Variation and Around, Series in Nonlinear Analysis and Applications, De Gruyter, Berlin, 2014).

6.
We provide the general solution of problems concerning AC star circuits by turning them into geometric problems. We show that one problem is strongly related to the Fermat point of a triangle. We present a solution that is well adapted to the practical application the problem is based on. Furthermore, we solve a generalization of the geometric situation and discuss the relation to non-symmetric, unbalanced AC star circuits.
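The Fermat point mentioned above (the point minimising the sum of distances to the three vertices) can be approximated numerically with Weiszfeld's classical iterative scheme. This sketch is our own illustration, not the paper's construction; it assumes a triangle whose angles are all below 120°, so that the minimiser lies in the interior.

```python
import math

def fermat_point(vertices, iterations=1000):
    # Weiszfeld's scheme: iteratively re-weight the vertices by inverse
    # distance to the current iterate.  Start at the centroid.
    x = (sum(v[0] for v in vertices) / 3.0, sum(v[1] for v in vertices) / 3.0)
    for _ in range(iterations):
        num_x = num_y = den = 0.0
        for (px, py) in vertices:
            d = math.hypot(x[0] - px, x[1] - py)
            if d < 1e-12:          # iterate landed on a vertex; stop there
                return (px, py)
            num_x += px / d
            num_y += py / d
            den += 1.0 / d
        x = (num_x / den, num_y / den)
    return x
```

A useful correctness check: at the Fermat point of such a triangle, each pair of vertices subtends an angle of exactly 120°.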

7.
We study financial market models with different liquidity effects. In the first part of this paper, we extend the short-term price impact model introduced by Rogers and Singh (2007) to a general semimartingale setup. We show that the discrete-time modeling framework converges to its continuous-time counterpart as trading times approach each other. In the second part, arbitrage opportunities in illiquid economies are considered, in particular a modification of the feedback effect model of Bank and Baum (2004). Assuming that the price process is described by a continuous semimartingale, we demonstrate that a large trader cannot create wealth at no risk within this framework.
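As a toy illustration of short-term price impact (a generic linear-impact sketch of our own, not the Rogers–Singh model; `execution_cost` and all parameters are hypothetical): with purely temporary impact, each trade slice of size v executes at an average price of price + impact·v, so splitting an order over more trading times reduces the total cost.

```python
# Temporary linear price impact: buying `total` units in n equal slices
# costs  price*total + impact*total**2 / n_slices,  which decreases as
# the order is split more finely -- the discrete-to-continuous intuition.
def execution_cost(total, n_slices, price=100.0, impact=0.1):
    slice_size = total / n_slices
    return sum(price * slice_size + impact * slice_size ** 2
               for _ in range(n_slices))
```

As the number of slices grows, the cost approaches the frictionless value `price * total` from above.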

8.
We prove some new properties of the small Lebesgue spaces introduced by Fiorenza [7]. Combining these properties with the Poincaré–Sobolev inequalities for the relative rearrangement (see [11]), we derive some new and precise estimates either for small Lebesgue–Sobolev spaces or for quasilinear equations with data in the small Lebesgue spaces. To cite this article: A. Fiorenza, J.-M. Rakotoson, C. R. Acad. Sci. Paris, Ser. I 334 (2002) 23–26

9.
In this paper we consider the notion of dynamic risk measures, which we will motivate as a reasonable tool in risk management. It is possible to reformulate an example of such a risk measure in terms of the value functions of a Markov decision model (MDM). Based on this observation the model is generalized to a setting with incomplete information about the risk distribution which can be seen as model uncertainty. This issue can be incorporated in the dynamic risk measure by extending the MDM to a Bayesian decision model. Moreover, it is possible to discuss the effect of model uncertainty on the risk measure in binomial models. All investigations are illustrated by a simple but useful coin tossing game proposed by Artzner and by the classic Cox–Ross–Rubinstein model.
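A minimal sketch of the model-uncertainty effect in a binomial model: a worst-case ("robust") expectation evaluated by backward induction, with the one-step up-probability chosen adversarially from an interval. This is a simplified stand-in for the Bayesian dynamic risk measures discussed above; the function name and the interval parametrization are our own assumptions.

```python
def robust_binomial_value(terminal, p_low, p_high):
    """Backward induction of a worst-case expectation on a recombining
    binomial tree.  `terminal` lists payoffs from the all-up node down to
    the all-down node; at every node the up-probability is picked
    adversarially from [p_low, p_high]."""
    level = list(terminal)
    while len(level) > 1:
        nxt = []
        for up, down in zip(level[:-1], level[1:]):
            # The one-step expectation is linear in p, so the worst case
            # is attained at an endpoint of the interval.
            nxt.append(min(p_low * up + (1.0 - p_low) * down,
                           p_high * up + (1.0 - p_high) * down))
        level = nxt
    return level[0]
```

With `p_low == p_high` the recursion collapses to the ordinary binomial expectation; widening the interval lowers the robust value, quantifying the price of model uncertainty.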

10.
We prove two results concerning an Ulam-type stability problem for homomorphisms between lattices. One of them involves estimates by quite general error functions; the other deals with approximate (join) homomorphisms in terms of certain systems of lattice neighborhoods. As a corollary, we obtain a stability result for approximately monotone functions.

11.
First we present a short overview of the long history of projectively flat Finsler spaces. We give a simple and quite elementary proof of the already known condition for the projective flatness, and we give a criterion for the projective flatness of a special Lagrange space (Theorem 1). After this we obtain a second-order PDE system, whose solvability is necessary and sufficient for a Finsler space to be projectively flat (Theorem 2). We also derive a condition in order that an infinitesimal transformation takes geodesics of a Finsler space into geodesics. This yields a Killing type vector field (Theorem 3). In the last section we present a characterization of the Finsler spaces which are projectively flat in a parameter-preserving manner (Theorem 4), and we show that these spaces over ${\mathbb {R}}^{n}$ are exactly the Minkowski spaces (Theorems 5 and 6).
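For orientation, the classical projective-flatness criterion going back to Hamel (stated here as background in one common formulation, not quoted from the paper) says that a Finsler function $F(x,y)$ on a domain of ${\mathbb {R}}^{n}$ is projectively flat if and only if

```latex
% Hamel's equations: geodesics of F are straight lines iff
\[
  \frac{\partial^{2} F}{\partial x^{k}\,\partial y^{l}}\, y^{k}
  \;=\;
  \frac{\partial F}{\partial x^{l}},
  \qquad l = 1, \dots, n,
\]
```

with summation over the repeated index $k$.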

12.
We provide a full characterization of the lattices that can be blocks of the skeleton tolerance relation of a finite lattice. Moreover, we formulate a necessary condition for a lattice to be such a block in the case of finite distributive lattices with at most k-dimensional maximal Boolean intervals.

13.
In our former paper (Fund. Math. 166, 281–303, 2000) we discussed densities and liftings in the product of two probability spaces with good section properties, analogous to those for measures and measurable sets in the Fubini Theorem. In the present paper we investigate the following more delicate problem: Let (Ω,Σ,μ) and (Θ,T,ν) be two probability spaces endowed with densities υ and τ, respectively. Can we define a density on the product space by means of a Fubini type formula \((\upsilon\odot\tau)(E)=\{(\omega,\theta):\omega\in\upsilon(\{\bar {\omega}:\theta\in\tau(E_{\bar{\omega}})\})\}\), for E measurable in the product, and the same for liftings instead of densities? We single out classes of marginal densities υ and τ which admit a positive solution in the case of densities, where we sometimes have to replace the Fubini type product by its upper hull, which we call the box product. For liftings the answer is in general negative, but our analysis of the above problem leads to a new method that allows us to find a positive solution. In this way we solve one of the main problems of Musiał, Strauss and Macheras (Fund. Math. 166, 281–303, 2000).

14.
A nonnegative form defined on a complex linear space is decomposed with respect to another nonnegative form: it has a Lebesgue decomposition into an almost dominated form and a singular form. The almost dominated part is the largest form majorized by the given form that is almost dominated by the other form. The construction of the Lebesgue decomposition only involves notions from the complex linear space. An important ingredient in the construction is the new concept of the parallel sum of forms. By means of Hilbert space techniques the almost dominated and the singular parts are identified with the regular and singular parts of the form. This decomposition addresses a problem posed by B. Simon. The Lebesgue decomposition of a pair of finite measures corresponds to the present decomposition of the forms which are induced by the measures. T. Ando's decomposition of a nonnegative bounded linear operator in a Hilbert space with respect to another nonnegative bounded linear operator is a consequence. It is shown that the decomposition of positive definite kernels involving families of forms also belongs to the present context. The Lebesgue decomposition is an example of a Lebesgue type decomposition, i.e., any decomposition into an almost dominated and a singular part. There is a necessary and sufficient condition for a Lebesgue type decomposition to be unique. This condition is inspired by the work of Ando concerning uniqueness questions.
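The parallel sum mentioned above is, in the form standard in this literature (recalled here as background, so the exact convention of the paper may differ), defined for nonnegative forms $\mathfrak{t}$ and $\mathfrak{w}$ on a common linear space by the variational formula

```latex
% Parallel sum of two nonnegative forms, evaluated at a vector x:
\[
  (\mathfrak{t} : \mathfrak{w})[x]
  \;=\;
  \inf_{y} \bigl\{\, \mathfrak{t}[x - y] + \mathfrak{w}[y] \,\bigr\},
\]
```

the infimum being taken over all vectors $y$ of the space.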

15.
In this paper we give some new criteria for identifying the components of a probability measure in its Lebesgue decomposition. This enables us to give new criteria to identify the spectral types of self-adjoint operators on Hilbert spaces, especially those of interest.

16.
The empirical part of this article is based on a study on car insurance data to explore how global and local geographical effects on frequency and size of claims can be assessed with appropriate statistical methodology. Because these spatial effects have to be modeled and estimated simultaneously with linear and possibly nonlinear effects of further covariates such as age of policy holder, age of car or bonus-malus score, generalized linear models cannot be applied. Also, compared to previous analyses, the geographical information is given by the exact location of the residence of policy holders. Therefore, we employ a new class of geoadditive models, where the spatial component is modeled based on stationary Gaussian random fields, common in geostatistics (Kriging). Statistical inference is carried out by an empirical Bayes or penalized likelihood approach using mixed model technology. The results confirm that the methodological concept provides useful tools for exploratory analyses of the data at hand and in similar situations.
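A bare-bones sketch of the geostatistical idea behind the spatial component: noise-free, simple-kriging-style prediction with a Gaussian covariance. This toy is our own illustration, not the empirical-Bayes geoadditive machinery of the paper, and all names are hypothetical.

```python
import math

def rbf_interpolate(points, values, query, length_scale=1.0):
    """Solve K w = y for the training data (Gaussian covariance K),
    then predict at `query` as sum_i w_i * k(query, x_i)."""
    n = len(points)

    def k(a, b):
        return math.exp(-sum((u - v) ** 2 for u, v in zip(a, b))
                        / (2.0 * length_scale ** 2))

    # Augmented covariance system [K | y], solved by Gauss-Jordan
    # elimination with partial pivoting (fine for tiny examples).
    A = [[k(points[i], points[j]) for j in range(n)] + [values[i]]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col] != 0.0:
                factor = A[r][col] / A[col][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
    weights = [A[i][n] / A[i][i] for i in range(n)]
    return sum(w * k(query, p) for w, p in zip(weights, points))
```

Being noise-free, the predictor reproduces the training values exactly at the training locations and decays toward zero far away from them, which is the qualitative behaviour a stationary-Gaussian-field spatial effect contributes.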

17.
The German proposal for a Solvency II-compatible standard model for the life insurance branch calculates the risk capital necessary for a sufficient risk capitalisation of the company at hand. This capital is called "target capital" or Solvency Capital Requirement (SCR for short). To this end, the well-known market value formula is applied to the book value of the actuarial reserve, yielding the market value (or present value) by means of the classical duration concept as a global approach (cf. the documentation of the standard model of the GDV, p. 26). This formula takes into account the impact of the interest rate but leaves aside all other actuarial assumptions; in particular, the influence of the biometric assumptions is not considered. This is at least one reason why this ansatz is, at the time being, no longer compatible with the Solvency II requirements and thus no longer satisfies its own claims. The present work proposes and develops a concept that overcomes this drawback. The result is a formula by which the present value of the actuarial liabilities is calculated from their book value, taking into account the interest rate as well as the biometric assumptions. It is worth remarking that the proposed two-dimensional duration concept can be developed completely along the lines of the classical one-dimensional analogue, leaving some arbitrariness only in determining the biometric gauge, i.e., the mapping of the vector that represents the remaining active lives onto its average value. To achieve this, one has to consider the underlying business in force. The broader relevance of such a two-dimensional ansatz lies in the fact that the developments of the Solvency II project during the last months have shown that its success depends crucially on the availability of efficient and well-elaborated approximation procedures.
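The classical one-dimensional duration concept referred to above can be recalled with a generic sketch (not the GDV standard-model formula; the cashflow data and function names are hypothetical): the Macaulay duration is the present-value-weighted mean time of the cashflows.

```python
def present_value(cashflows, rate):
    """Discounted value of (time, amount) pairs at a flat annual rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in cashflows)

def macaulay_duration(cashflows, rate):
    """PV-weighted mean time of the cashflows -- the classical
    one-dimensional duration used in book-to-market-value formulas."""
    pv = present_value(cashflows, rate)
    return sum(t * cf / (1.0 + rate) ** t for t, cf in cashflows) / pv
```

A zero-coupon cashflow at time t has duration exactly t; a coupon bond's duration lies strictly between its first and last payment dates.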

18.
Finite element exterior calculus (FEEC) has been developed over the past decade as a framework for constructing and analyzing stable and accurate numerical methods for partial differential equations by employing differential complexes. The recent work of Arnold, Falk, and Winther includes a well-developed theory of finite element methods for Hodge–Laplace problems, including a priori error estimates. In this work we focus on developing a posteriori error estimates in which the computational error is bounded by some computable functional of the discrete solution and problem data. More precisely, we prove a posteriori error estimates of a residual type for Arnold–Falk–Winther mixed finite element methods for Hodge–de Rham–Laplace problems. While a number of previous works consider a posteriori error estimation for Maxwell’s equations and mixed formulations of the scalar Laplacian, the approach we take is distinguished by a unified treatment of the various Hodge–Laplace problems arising in the de Rham complex, consistent use of the language and analytical framework of differential forms, and the development of a posteriori error estimates for harmonic forms and the effects of their approximation on the resulting numerical method for the Hodge–Laplacian.

19.
We obtain new convolutions for quadratic-phase Fourier integral operators (which include, as special cases, e.g., the fractional Fourier transform and the linear canonical transform). The structure of these convolutions is based on properties of the mentioned integral operators and takes advantage of weight functions associated with certain amplitude and Gaussian functions. Accordingly, the fundamental properties of these quadratic-phase Fourier integral operators are also studied (including a Riemann–Lebesgue type lemma, invertibility results, a Plancherel type theorem and a Parseval type identity). As applications, we obtain new Young type inequalities, the asymptotic behaviour of certain oscillatory integrals, and the solvability of convolution integral equations.
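One common normalization of the quadratic-phase Fourier transform (stated as background; the paper's exact convention may differ) depends on a parameter tuple $\Lambda=(a,b,c,d,e)$ with $b\neq 0$:

```latex
% Quadratic-phase Fourier transform of f at frequency w:
\[
  (Q_{\Lambda} f)(w)
  \;=\;
  \frac{1}{\sqrt{2\pi}}
  \int_{\mathbb{R}} f(t)\,
  e^{\, i \left( a t^{2} + b t w + c w^{2} + d t + e w \right)} \, dt .
\]
```

Choosing $\Lambda$ suitably recovers, up to chirp factors, the ordinary Fourier transform, the fractional Fourier transform and the linear canonical transform.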

20.
In this paper, we present an inexact version of the steepest descent method with Armijo’s rule for multicriteria optimization in the Riemannian context given in Bento et al. (J. Optim. Theory Appl., 154: 88–107, 2012). Under mild assumptions on the multicriteria function, we prove that each accumulation point (if any) satisfies first-order necessary conditions for Pareto optimality. Moreover, assuming that the multicriteria function is quasi-convex and the Riemannian manifold has nonnegative curvature, we show full convergence of any sequence generated by the method to a Pareto critical point.
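A flat-space, scalar-objective simplification of the method above: Euclidean steepest descent with Armijo backtracking. The Riemannian and multicriteria ingredients of the paper are deliberately omitted, and all names and parameter values are our own.

```python
def steepest_descent_armijo(f, grad, x0, beta=0.5, sigma=1e-4,
                            tol=1e-8, max_iter=10000):
    """Minimise f by moving along -grad(x) with a step length found by
    Armijo backtracking: shrink t by beta until the sufficient-decrease
    condition  f(x - t g) <= f(x) - sigma * t * |g|^2  holds."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break
        fx = f(x)
        t = 1.0
        while f([xi - t * gi for xi, gi in zip(x, g)]) > \
                fx - sigma * t * sum(gi * gi for gi in g):
            t *= beta
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

# Quadratic test problem with minimiser (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)]
```

For smooth objectives the Armijo rule guarantees sufficient decrease at every step, which drives the stationarity arguments behind the accumulation-point results quoted above.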


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号