Search results: 12 articles in total (Mathematics: 4; Physics: 8), published between 1980 and 2021.
1.
We report the results of a one-loop calculation of the decay Z0 → g̃g̃, where g̃ is the gluino, the proposed supersymmetric partner of the gluon. Depending on the masses of the scalar quarks and of the top quark, the branching ratio for the decay is in the 10^{-5} to 10^{-4} range for gluino masses below about 40 GeV. The signature for gluinos should allow detection in this range.
2.
A method is developed for the removal of the redundancy that is known to plague the calculation of low-lying spectra of odd-mass nuclei by the equation of motion method. The feasibility of the method is verified numerically.
3.
We show quantitative versions of classical results in discrete geometry, where the size of a convex set is determined by some non-negative function. We give versions of this kind for the selection theorem of Bárány, the existence of weak epsilon-nets for convex sets and the (p,q) theorem of Alon and Kleitman. These methods can be applied to functions such as the volume, surface area or number of points of a discrete set. We also give general quantitative versions of the colorful Helly theorem for continuous functions.
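For orientation, the classical (non-quantitative) form of Bárány's selection theorem, one of the results being quantified above, can be stated as follows; this standard statement is supplied as background and is not quoted from the article.

% Bárány's first selection theorem, classical form (background statement).
\begin{theorem}[Bárány's selection theorem]
For every dimension $d$ there is a constant $c_d > 0$ such that for any set
$X$ of $n$ points in $\mathbb{R}^d$ there exists a point of $\mathbb{R}^d$
contained in at least $c_d \binom{n}{d+1}$ of the $d$-dimensional simplices
spanned by points of $X$.
\end{theorem}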
4.
We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through “cheap learning” with exponentially fewer parameters than generic ones. We explore how properties frequently encountered in physics such as symmetry, locality, compositionality, and polynomial log-probability translate into exceptionally simple neural networks. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to the renormalization group. We prove various “no-flattening theorems” showing when efficient linear deep networks cannot be accurately approximated by shallow ones without efficiency loss; for example, we show that n variables cannot be multiplied using fewer than \(2^n\) neurons in a single hidden layer.
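As a toy illustration of the kind of "cheap" structure involved (my own sketch, not code from the paper): four hidden neurons with a smooth nonlinearity whose second derivative at the origin is nonzero, such as softplus, can approximate the product of two inputs arbitrarily well. Deep networks can compose such two-input product gates, whereas the abstract states that flattening the multiplication of n inputs into a single hidden layer requires at least \(2^n\) neurons.

import numpy as np

def softplus(u):
    # Smooth nonlinearity with nonzero curvature at 0: softplus''(0) = 1/4.
    return np.log1p(np.exp(u))

def approx_product(x, y, lam=0.01):
    """Four-neuron approximation of x*y (illustrative sketch only).

    Taylor expansion around 0 shows that
    softplus(a+b) + softplus(-a-b) - softplus(a-b) - softplus(b-a)
    equals softplus''(0) * 4ab plus fourth-order terms, so rescaling by
    1 / (4 * lam**2 * 0.25) recovers x*y up to O(lam**2) error.
    """
    a, b = lam * x, lam * y
    s = softplus(a + b) + softplus(-a - b) - softplus(a - b) - softplus(b - a)
    return s / (4 * lam**2 * 0.25)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, y = rng.uniform(-1, 1, 5), rng.uniform(-1, 1, 5)
    # Maximum absolute error is tiny and shrinks like lam**2.
    print(np.max(np.abs(approx_product(x, y) - x * y)))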
5.
6.
In 1943, McCulloch and Pitts introduced a discrete recurrent neural network as a model for computation in brains. The work inspired breakthroughs such as the first computer design and the theory of finite automata. We focus on learning in Hopfield networks, a special case with symmetric weights and fixed-point attractor dynamics. Specifically, we explore minimum energy flow (MEF) as a scalable convex objective for determining network parameters. We catalog various properties of MEF, such as biological plausibility, and then compare it to classical approaches in the theory of learning. Trained Hopfield networks can perform unsupervised clustering and define novel error-correcting coding schemes. They also efficiently find hidden cliques in graphs. We extend this known connection from graphs to hypergraphs and discover n-node networks with robust storage of 2^{Ω(n^{1−ε})} memories for any ε > 0. In the case of graphs, we also determine a critical ratio of training samples at which networks generalize completely.
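As background, a minimal Hopfield-network sketch (symmetric weights, fixed-point attractor dynamics) is given below; it uses the textbook Hebbian outer-product storage rule rather than the MEF objective described in the abstract, and all names in it are my own illustrative choices.

import numpy as np

def train_hebbian(patterns):
    """Store +/-1 patterns with the classical outer-product (Hebbian) rule.

    This is the textbook storage rule, shown only to illustrate the model;
    the abstract above instead fits the weights via an MEF objective.
    """
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)  # no self-connections; W stays symmetric
    return W

def recall(W, state, steps=20):
    """Asynchronous sign updates; they never increase the energy -0.5 * s^T W s."""
    s = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    patterns = rng.choice([-1, 1], size=(3, 64))   # three random memories
    W = train_hebbian(patterns)
    noisy = patterns[0].copy()
    flip = rng.choice(64, size=6, replace=False)   # corrupt 6 of 64 bits
    noisy[flip] *= -1
    print(np.array_equal(recall(W, noisy), patterns[0]))  # typically True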
7.
Summary. In this work, new interpolation error estimates are derived in quasi-norms for some well-known interpolation operators. These estimates are essential for obtaining optimal a priori error bounds under weakened regularity conditions for the piecewise linear finite element approximation of a class of degenerate equations. In particular, by using these estimates, we can close the existing gap between the regularity required for deriving the optimal error bounds and the regularity achievable for smooth data for the 2-d and 3-d p-Laplacian. Mathematics Subject Classification (1991): 65N30
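For context, the degenerate model problem referred to as the p-Laplacian is usually written as the following boundary value problem (a standard formulation supplied as background, not quoted from the paper):

\[
  -\,\nabla\cdot\bigl(|\nabla u|^{p-2}\,\nabla u\bigr) = f \quad\text{in }\Omega,
  \qquad u = 0 \quad\text{on }\partial\Omega, \qquad 1 < p < \infty .
\]

For p ≠ 2 the coefficient |∇u|^{p-2} degenerates or blows up wherever ∇u vanishes, which is why quasi-norm estimates are used in place of the standard W^{1,p}-norm error bounds in this setting.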
8.
We present the predictions of various models for the D → Kπℓν decay for the K-π system in the region of the K1 resonance. In this system both vector and axial-vector currents can be studied. One of these models also applies to the D → Kℓν decay mode. Also, tests of the pure vector hypothesis for the (c, s) current are given for general Kπℓν decays.
9.
10.
The computation of Gröbner bases is an established hard problem. By contrast with many other problems, however, there has been little investigation of whether this hardness is robust. In this paper, we frame and present results on the problem of approximate computation of Gröbner bases. We show that it is NP-hard to construct a Gröbner basis of the ideal generated by a set of polynomials, even when the algorithm is allowed to discard a (1 − ε) fraction of the generators, and likewise when the algorithm is allowed to discard variables (and the generators containing them). Our results show that computation of Gröbner bases is robustly hard even for simple polynomial systems (e.g. maximum degree 2, with at most 3 variables per generator). We conclude by greatly strengthening results for the Strong c-Partial Gröbner problem posed by De Loera et al. [10]. Our proofs also establish interesting connections between the robust hardness of Gröbner bases and that of SAT variants and graph coloring.
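To make the underlying object concrete, here is a tiny Gröbner-basis computation using SymPy's groebner routine; this is a generic illustration of what such a basis looks like and has no connection to the paper's hardness constructions.

from sympy import groebner, symbols

x, y = symbols("x y")

# A small polynomial system: a circle and a hyperbola.
system = [x**2 + y**2 - 4, x*y - 1]

# Lexicographic Groebner basis; the last element depends on y alone,
# so the basis "triangularizes" the system much like Gaussian elimination
# triangularizes a linear one.
G = groebner(system, x, y, order="lex")
print(G)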