1.
In a wide range of applications it is required to compute the nearest correlation matrix in the Frobenius norm to a given symmetric but indefinite matrix. Of the available methods with guaranteed convergence to the unique solution of this problem the easiest to implement, and perhaps the most widely used, is the alternating projections method. However, the rate of convergence of this method is at best linear, and it can require a large number of iterations to converge to within a given tolerance. We show that Anderson acceleration, a technique for accelerating the convergence of fixed-point iterations, can be applied to the alternating projections method and that in practice it brings a significant reduction in both the number of iterations and the computation time. We also show that Anderson acceleration remains effective, and indeed can provide even greater improvements, when it is applied to the variants of the nearest correlation matrix problem in which specified elements are fixed or a lower bound is imposed on the smallest eigenvalue. Alternating projections is a general method for finding a point in the intersection of several sets and ours appears to be the first demonstration that this class of methods can benefit from Anderson acceleration.
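The unaccelerated alternating projections method described above can be sketched in a few lines of NumPy. The following is a minimal sketch (the function name `nearcorr` and all parameter values are ours), alternating projection onto the positive semidefinite cone with projection onto the unit-diagonal matrices, using Dykstra's correction for the semidefinite step; the Anderson acceleration the paper proposes is not included:

```python
import numpy as np

def nearcorr(A, tol=1e-7, max_iter=500):
    """Nearest correlation matrix by alternating projections (no acceleration)."""
    Y = np.asarray(A, dtype=float).copy()
    dS = np.zeros_like(Y)
    for _ in range(max_iter):
        R = Y - dS                            # apply Dykstra's correction
        w, V = np.linalg.eigh((R + R.T) / 2)
        X = (V * np.maximum(w, 0)) @ V.T      # project onto the PSD cone
        dS = X - R
        Y_new = X.copy()
        np.fill_diagonal(Y_new, 1.0)          # project onto unit-diagonal matrices
        if np.linalg.norm(Y_new - Y, 'fro') <= tol * max(np.linalg.norm(Y, 'fro'), 1.0):
            return Y_new
        Y = Y_new
    return Y
```

Anderson acceleration would treat one sweep of this loop as a fixed-point map and extrapolate over its last few iterates instead of simply repeating the sweep.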
2.
Betweenness measures provide quantitative tools to pick out fine details from the massive amount of interaction data that is available from large complex networks. They allow us to study the extent to which a node takes part when information is passed around the network. Nodes with high betweenness may be regarded as key players that have a highly active role. At one extreme, betweenness has been defined by considering information passing only through the shortest paths between pairs of nodes. At the other extreme, an alternative type of betweenness has been defined by considering all possible walks of any length. In this work, we propose a betweenness measure that lies between these two opposing viewpoints. We allow information to pass through all possible routes, but introduce a scaling so that longer walks carry less importance. This new definition shares a similar philosophy to that of communicability for pairs of nodes in a network, which was introduced by Estrada and Hatano [E. Estrada, N. Hatano, Phys. Rev. E 77 (2008) 036111]. Having defined this new communicability betweenness measure, we show that it can be characterized neatly in terms of the exponential of the adjacency matrix. We also show that this measure is closely related to a Fréchet derivative of the matrix exponential. This allows us to conclude that it also describes network sensitivity when the edges of a given node are subject to infinitesimally small perturbations. Using illustrative synthetic and real-life networks, we show that the new betweenness measure behaves differently to existing versions, and in particular we show that it recovers meaningful biological information from a protein-protein interaction network.
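The characterization in terms of the matrix exponential admits a direct implementation. The sketch below is our reading of a communicability-betweenness computation (function name and the normalization constant are assumptions on our part): for each node r, it compares communicabilities of the full network with those of the network in which r's edges are removed, so that only walks passing through r contribute:

```python
import numpy as np
from scipy.linalg import expm

def communicability_betweenness(A):
    """Communicability betweenness of each node of an undirected graph
    with adjacency matrix A (a sketch; normalization is an assumption)."""
    n = A.shape[0]
    G = expm(A)                       # communicability: weighted count of all walks
    cb = np.zeros(n)
    norm = (n - 1) ** 2 - (n - 1)     # number of ordered pairs p != q, both != r
    for r in range(n):
        Ar = A.copy()
        Ar[r, :] = 0.0                # delete the edges of node r
        Ar[:, r] = 0.0
        Gr = expm(Ar)
        W = G - Gr                    # walks that use at least one edge of r
        mask = np.ones((n, n), dtype=bool)
        mask[r, :] = False
        mask[:, r] = False
        np.fill_diagonal(mask, False)
        cb[r] = np.sum(W[mask] / G[mask]) / norm
    return cb
```

On a three-node path graph, for example, the centre node scores strictly higher than the two endpoints, matching the intuition that it mediates all communication between them.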
3.
Light without fright: a synthetic route to fluorescent primary phosphanes (RPH(2)) that are resistant to air oxidation both in the solid state and in chloroform solution is described. These versatile precursors undergo hydrophosphination to give tripodal ligands and subsequently fluorescent transition-metal complexes.
4.
We present theory and algorithms for the equality constrained indefinite least squares problem, which requires minimization of an indefinite quadratic form subject to a linear equality constraint. A generalized hyperbolic QR factorization is introduced and used in the derivation of perturbation bounds and to construct a numerical method. An alternative method is obtained by employing a generalized QR factorization in combination with a Cholesky factorization. Rounding error analysis is given to show that both methods have satisfactory numerical stability properties and numerical experiments are given for illustration. This work builds on recent work on the unconstrained indefinite least squares problem by Chandrasekaran, Gu, and Sayed and by the present authors.
5.
A highly nonnormal Jacobian may give rise to large transients. This behaviour has been shown to have implications for (a) the relevance of linearising a nonlinear system and (b) the timestep restrictions required to keep a numerical method stable. Here, we show that nonnormality also manifests itself for stochastic differential equations. We give an example of a family of systems that is stable without noise, but can be made exponentially unstable in mean-square by a noise perturbation that shrinks to zero as the nonnormality increases. We then show via finite-time convergence theory that an Euler approximation shares the same property, giving a discrete analogue of the result. In memory of Germund Dahlquist (1925–2005). AMS subject classification (2000): 65C30, 34F05.
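The mean-square effect described here can be observed without any sampling: for a linear test system dX = AX dt + σBX dW, the second moment S(t) = E[XXᵀ] obeys a deterministic Lyapunov-type ODE, and mean-square stability is decided by the eigenvalues of a Kronecker-product matrix. The sketch below uses our own illustrative matrices (not necessarily the family studied in the paper): a nonnormal drift with a large off-diagonal entry K is stable without noise, yet a small noise coupling destabilizes it:

```python
import numpy as np

def ms_lyapunov_exponent(A, B, sigma):
    """Largest real part of the spectrum of the second-moment operator.

    S = E[X X^T] obeys S' = A S + S A^T + sigma^2 B S B^T; vectorising
    gives the matrix L below. Mean-square stability holds iff all
    eigenvalues of L have negative real part.
    """
    n = A.shape[0]
    I = np.eye(n)
    L = np.kron(A, I) + np.kron(I, A) + sigma**2 * np.kron(B, B)
    return np.max(np.linalg.eigvals(L).real)

K = 100.0
A = np.array([[-1.0, K], [0.0, -1.0]])   # highly nonnormal for large K
B = np.array([[0.0, 0.0], [1.0, 0.0]])   # noise feeds component 1 into component 2

print(ms_lyapunov_exponent(A, B, 0.0))   # negative: stable without noise
print(ms_lyapunov_exponent(A, B, 0.05))  # positive: small noise destabilizes
```

As K grows, an ever smaller σ suffices to push the exponent positive, which is the continuous-time counterpart of the abstract's "noise perturbation that shrinks to zero as the nonnormality increases".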
6.
Summary. The Hölder p-norm of an m×n matrix has no explicit representation unless p = 1, 2 or ∞. It is shown here that the p-norm can be estimated reliably in O(mn) operations. A generalization of the power method is used, with a starting vector determined by a technique with a condition estimation flavour. The algorithm nearly always computes a p-norm estimate correct to the specified accuracy, and the estimate is always within a factor n^(1-1/p) of ||A||_p. As a by-product, a new way is obtained to estimate the 2-norm of a rectangular matrix; this method is more general and produces better estimates in practice than a similar technique of Cline, Conn and Van Loan.
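The power-method generalization can be condensed as follows. This is a sketch under our own naming, valid for 1 < p < ∞, using the standard dual-vector update and first-order optimality test; the condition-estimation starting vector mentioned in the summary is replaced by a plain vector of ones:

```python
import numpy as np

def _dual(y, p):
    """Dual vector z of y in the p-norm: z^T y = ||y||_p and ||z||_q = 1,
    where 1/p + 1/q = 1."""
    if not np.any(y):
        return y
    z = np.sign(y) * np.abs(y) ** (p - 1)
    return z / np.linalg.norm(y, p) ** (p - 1)

def pnorm_est(A, p, tol=1e-10, max_iter=100):
    """Lower-bound estimate of ||A||_p by a power-method iteration (1 < p < inf)."""
    q = p / (p - 1)                      # dual exponent
    n = A.shape[1]
    x = np.ones(n) / n ** (1.0 / p)      # simple start; paper uses a smarter one
    est = 0.0
    for _ in range(max_iter):
        y = A @ x
        est = np.linalg.norm(y, p)
        z = A.T @ _dual(y, p)
        if np.linalg.norm(z, q) <= (z @ x) * (1 + tol):
            break                        # first-order optimality: x is a local maximizer
        x = _dual(z, q)
    return est
```

Every iterate yields a valid lower bound ||Ax||_p ≤ ||A||_p, so even early termination returns a usable estimate; for p = 2 the iteration reduces to the usual power method on AᵀA.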
7.
The majority of aquatic vertebrates are suction feeders: by rapidly expanding the mouth cavity they generate a fluid flow outside of their head in order to draw prey into their mouth. In addition to the biological relevance, the generated flow field is interesting fluid mechanically as it incorporates high velocities, is localized in front of the mouth, and is unsteady, typically lasting between 10 and 50 ms. This is the first study to use manometry and high-speed particle image velocimetry to quantify pressure within and outside the mouth of a feeding fish while simultaneously measuring the velocity field outside the mouth. Measurements with a high temporal (2 ms) and spatial (<1 mm) resolution were made for several feeding events of a single largemouth bass (Micropterus salmoides). General properties of the flow were evaluated, including the transient velocity field, its relationship to pressure within the mouth and pressure at the prey. We find that throughout the feeding event a relationship exists for the magnitude of fluid speed as a function of distance from the predator mouth that is based on scaling the velocity field according to the size of the mouth opening and the magnitude of fluid speed at the mouth. The velocity field is concentrated within an area extending approximately one mouth diameter from the fish and the generated pressure field is even more local to the mouth aperture. Although peak suction pressures measured inside the mouth were slightly larger than those that were predicted using the equations of motion, we find that these equations give a very accurate prediction of the timing of peak pressure, so long as the unsteady nature of the flow is included.
8.
This work examines the stability of explicit Runge-Kutta methods applied to a certain linear ordinary differential equation with periodic coefficients. On this problem naïve use of the eigenvalues of the Jacobian results in misleading conclusions about stable behaviour. It is shown, however, that a valid analogue of the classical absolute stability theory can be developed. Further, using a suitable generalisation of the equilibrium theory of Hall [ACM Trans. on Math. Soft. 11 (1985), pp. 289–301], accurate predictions are made about the performance of modern, adaptive algorithms. Supported by the University of Dundee Research Initiatives Fund.
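The failure of frozen-eigenvalue reasoning for periodic coefficients is illustrated by the classical Vinograd/Markus-Yamabe-type example below; this specific system is a standard textbook case, not necessarily the equation studied in the paper. The Jacobian A(t) has constant eigenvalues with real part -0.25, yet the solution x(t) = e^{0.5t}(cos t, -sin t) grows without bound, as a short RK4 integration confirms:

```python
import numpy as np

def A(t, a=1.5):
    """Periodic-coefficient matrix whose frozen eigenvalues are constant
    with negative real part, yet the system is unstable (a > 1)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[-1 + a * c * c,  1 - a * s * c],
                     [-1 - a * s * c, -1 + a * s * s]])

def rk4_step(t, x, h):
    """One classical fourth-order Runge-Kutta step for x' = A(t) x."""
    f = lambda t, x: A(t) @ x
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

x, t, h = np.array([1.0, 0.0]), 0.0, 0.01
for _ in range(2000):          # integrate to t = 20
    x = rk4_step(t, x, h)
    t += h
print(np.linalg.norm(x))       # grows roughly like e^{0.5 t}, despite "stable" eigenvalues
```

Any stability analysis based only on the instantaneous spectrum of A(t) would predict decay here, which is exactly the kind of misleading conclusion the abstract warns about.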
9.
Iterative refinement is a well-known technique for improving the quality of an approximate solution to a linear system. In the traditional usage, residuals are computed in extended precision, but more recent work has shown that fixed precision is sufficient to yield benefits for stability. We extend existing results to show that fixed precision iterative refinement renders an arbitrary linear equations solver backward stable in a strong, componentwise sense, under suitable assumptions. Two particular applications involving the QR factorization are discussed in detail: solution of square linear systems and solution of least squares problems. In the former case we show that one step of iterative refinement suffices to produce a small componentwise relative backward error. Our results are weaker for the least squares problem, but again we find that iterative refinement improves a componentwise measure of backward stability. In particular, iterative refinement mitigates the effect of poor row scaling of the coefficient matrix, and so provides an alternative to the use of row interchanges in the Householder QR factorization. A further application of the results is described to fast methods for solving Vandermonde-like systems.
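Fixed precision refinement is simple to bolt onto any solver. The sketch below (all names ours) wraps a Householder QR solve for a square system and performs refinement steps in which the residual is computed in the same working precision as the factorization, rather than in extended precision:

```python
import numpy as np

def solve_qr_refined(A, b, steps=1):
    """Solve Ax = b via QR, then apply fixed precision iterative refinement.

    The residual r = b - A x is formed in working precision; the QR factors
    are reused to solve the correction equation A d = r.
    """
    Q, R = np.linalg.qr(A)
    x = np.linalg.solve(R, Q.T @ b)
    for _ in range(steps):
        r = b - A @ x                    # residual in working precision
        d = np.linalg.solve(R, Q.T @ r)  # correction from the existing factors
        x = x + d
    return x
```

A badly row-scaled coefficient matrix is a natural test case, since the abstract notes that refinement mitigates poor row scaling without resorting to row interchanges.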
10.
Probabilistic algorithms are developed for a basic problem in distributed computation, assuming anonymous, asynchronous, unidirectional rings of processors. The problem, known as Solitude Detection, requires that a nonempty subset of the processors, called contenders, determine whether or not there is exactly one contender. Monte Carlo algorithms are developed that err with probability bounded by a specified parameter and exhibit either message or processor termination. The algorithms transmit an optimal expected number of bits, to within a constant factor. Their bit complexities display a surprisingly rich dependence on the kind of termination exhibited and on the processors' knowledge of the size of the ring. Two probabilistic tools are isolated and then combined in various ways to achieve all our algorithms.