Hidden Hypergraphs, Error-Correcting Codes, and Critical Learning in Hopfield Networks
Authors: Christopher Hillar, Tenzin Chan, Rachel Taubman, David Rolnick
Institution: 1. Awecom, Inc., San Francisco, CA 94103, USA; 2. Singapore University of Technology and Design, Singapore 487372, Singapore; 3. School of Computer Science, McGill University, Montreal, QC H3A 0G4, Canada
Abstract: In 1943, McCulloch and Pitts introduced a discrete recurrent neural network as a model for computation in brains. Their work inspired breakthroughs such as the first computer design and the theory of finite automata. We focus on learning in Hopfield networks, a special case with symmetric weights and fixed-point attractor dynamics. Specifically, we explore minimum energy flow (MEF) as a scalable convex objective for determining network parameters. We catalog various properties of MEF, such as biological plausibility, and then compare it to classical approaches in the theory of learning. Trained Hopfield networks can perform unsupervised clustering and define novel error-correcting coding schemes. They can also efficiently find hidden structures (cliques) in graphs. We extend this known connection from graphs to hypergraphs and discover n-node networks with robust storage of $2^{\Omega(n^{1-\epsilon})}$ memories for any $\epsilon > 0$. In the case of graphs, we also determine a critical ratio of training samples at which networks generalize completely.
Keywords: Hopfield networks; clustering; error-correcting codes; exponential memory; hidden graph; neuroscience
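
To make the setup in the abstract concrete, here is a minimal Python sketch of a binary Hopfield network and an MEF-style training objective. This is not the authors' implementation: the exact functional form of MEF in the paper is not reproduced here, and the MPF-style objective below (summing exp((E(x) − E(x'))/2) over the single-bit-flip neighbors x' of each stored pattern x), the {0,1} state convention, and all function names are illustrative assumptions.

```python
import numpy as np

def energy(W, theta, x):
    """Hopfield energy E(x) = -1/2 x^T W x + theta^T x for x in {0,1}^n.
    W is assumed symmetric with zero diagonal."""
    return -0.5 * x @ W @ x + theta @ x

def mef_objective(W, theta, X):
    """MPF-style flow objective (assumed form): for each training pattern x
    and each single-bit-flip neighbor x', accumulate exp((E(x) - E(x')) / 2).
    Each term is the exp of a function linear in (W, theta), so the sum is
    convex in the parameters, and it is small exactly when every pattern
    sits in a strict local minimum of the energy."""
    total = 0.0
    for x in X:
        for i in range(len(x)):
            x_flip = x.copy()
            x_flip[i] = 1 - x_flip[i]
            total += np.exp(0.5 * (energy(W, theta, x) - energy(W, theta, x_flip)))
    return total

def recall(W, theta, x, max_sweeps=100):
    """Asynchronous fixed-point dynamics: set x_i = 1 iff its input
    sum_j W_ij x_j exceeds the threshold theta_i; sweep until no bit
    changes. With symmetric W this never increases the energy, so the
    dynamics converge to a fixed-point attractor."""
    x = x.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(x)):
            new_bit = int(W[i] @ x - W[i, i] * x[i] > theta[i])
            if new_bit != x[i]:
                x[i], changed = new_bit, True
        if not changed:
            break
    return x
```

Minimizing the objective over the entries of (W, theta) with any off-the-shelf convex optimizer (keeping W symmetric and zero-diagonal) drives each training pattern toward being a strict local energy minimum, i.e., a fixed point of the recall dynamics above.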