Learning knowledge representation with meta knowledge distillation for single image super-resolution
Institution: 1. Faculty of Information Science and Engineering, Ningbo University, Ningbo, China; 2. School of Computer Science and Mathematics, Fujian University of Technology, Fuzhou 350118, China; 1. Department of Electronics and Communication, Dr B R Ambedkar National Institute of Technology, Jalandhar, Punjab 144011, India; 2. Department of Computer Science and Engineering, UIET, Sector 25, Panjab University, Chandigarh 160023, India
Abstract: Although deep CNN-based super-resolution methods have achieved outstanding performance, their memory cost and computational complexity severely limit their practical deployment. Knowledge distillation (KD), which efficiently transfers knowledge from a cumbersome network (the teacher) to a compact network (the student), has demonstrated its advantages in several computer vision applications. The representation of knowledge is vital for knowledge transfer and student learning, yet it is typically defined in a hand-crafted manner or taken directly from intermediate features. In this paper, we propose a model-agnostic meta knowledge distillation method under the teacher–student architecture for single image super-resolution. It provides a more flexible and accurate way for the teacher to transmit knowledge matched to the student's capacity, via knowledge representation networks (KRNets) with learnable parameters. Specifically, texture-aware dynamic kernels are generated from local information to decompose the distillation problem into texture-wise supervision, further improving the recovery of high-frequency details. In addition, the KRNets are optimized in a meta-learning manner, so that knowledge transfer and student learning both contribute to the student's reconstruction quality. Experiments on various single image super-resolution datasets demonstrate that the proposed method outperforms existing distillation methods based on predefined knowledge representations and helps super-resolution algorithms achieve better reconstruction quality without introducing any extra inference complexity.
Keywords: Knowledge distillation; Single image super-resolution; Representation of knowledge; Meta learning; Texture-aware dynamic kernel
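
The abstract describes the method only at a high level. The sketch below is one plausible PyTorch rendering of the two ingredients it names, a learnable knowledge representation network (KRNet) and texture-aware dynamic kernels used for texture-wise distillation supervision; the module structure, channel sizes, kernel-generation scheme, and loss form here are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KRNet(nn.Module):
    """Learnable knowledge representation network: maps teacher features into
    a representation the student is asked to match (assumed two-conv form)."""
    def __init__(self, teacher_ch, student_ch, hidden_ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(teacher_ch, hidden_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden_ch, student_ch, 3, padding=1),
        )

    def forward(self, teacher_feat):
        return self.body(teacher_feat)

class TextureAwareKernelGen(nn.Module):
    """Generates one spatial kernel per location from local feature statistics
    (a plausible reading of 'texture-aware dynamic kernels'; details assumed)."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.k = kernel_size
        self.gen = nn.Conv2d(channels, kernel_size * kernel_size, 3, padding=1)

    def forward(self, feat):
        kernels = self.gen(feat)              # (B, k*k, H, W)
        return F.softmax(kernels, dim=1)      # normalize over the k*k taps

def dynamic_filter(feat, kernels, k=3):
    """Apply the per-location kernels depth-wise to every channel of `feat`."""
    b, c, h, w = feat.shape
    patches = F.unfold(feat, k, padding=k // 2)          # (B, C*k*k, H*W)
    patches = patches.view(b, c, k * k, h, w)
    return (patches * kernels.unsqueeze(1)).sum(dim=2)   # (B, C, H, W)

def distillation_loss(teacher_feat, student_feat, krnet, kgen):
    """Texture-wise supervision: compare dynamically filtered representations."""
    target = krnet(teacher_feat)              # learnable knowledge representation
    kernels = kgen(target)
    return F.l1_loss(dynamic_filter(student_feat, kernels),
                     dynamic_filter(target, kernels))

# Usage (assumed shapes): 64-channel teacher features, 32-channel student features.
# krnet = KRNet(teacher_ch=64, student_ch=32)
# kgen = TextureAwareKernelGen(channels=32)
# loss = distillation_loss(torch.randn(2, 64, 48, 48),
#                          torch.randn(2, 32, 48, 48), krnet, kgen)
```

In the paper, the KRNet parameters are further updated with a meta-learning objective (so that the distillation target itself adapts to what helps the student reconstruct better); that outer optimization loop is omitted from this sketch.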
This article has been indexed by ScienceDirect and other databases.