Survey on Self-Supervised Learning: Auxiliary Pretext Tasks and Contrastive Learning Methods in Imaging
Authors: Saleh Albelwi

Affiliation: 1. Faculty of Computing and Information Technology, University of Tabuk, Tabuk 47731, Saudi Arabia; 2. Industrial Innovation and Robotic Center (IIRC), University of Tabuk, Tabuk 47731, Saudi Arabia
Abstract: Although deep learning algorithms have achieved significant progress in a variety of domains, they require costly annotations on huge datasets. Self-supervised learning (SSL) using unlabeled data has emerged as an alternative, as it eliminates the need for manual annotation. To do this, SSL constructs feature representations using pretext tasks that operate without manual annotation, allowing models trained on these tasks to extract useful latent representations that later improve downstream tasks such as object classification and detection. Early SSL methods are based on auxiliary pretext tasks as a way to learn representations using pseudo-labels, i.e., labels created automatically from the dataset's attributes. Contrastive learning has also performed well in learning representations via SSL; it pulls positive samples closer together, and pushes negative ones further apart, in the latent space. This paper provides a comprehensive literature review of the top-performing SSL methods using auxiliary pretext and contrastive learning techniques. It covers the motivation for this research, the general SSL pipeline, the terminology of the field, and an examination of pretext tasks and self-supervised methods. It also examines how self-supervised methods compare to supervised ones, and then discusses both further considerations and ongoing challenges faced by SSL.
| |
Keywords: self-supervised learning (SSL); auxiliary pretext tasks; contrastive learning; pretext tasks; data augmentation; contrastive loss; encoder; downstream tasks
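The contrastive objective summarized in the abstract (pulling a positive pair together while pushing negatives apart in the latent space) is commonly realized as an InfoNCE-style loss. Below is a minimal NumPy sketch under assumed choices not specified in the survey: cosine similarity as the distance measure, a temperature of 0.1, and illustrative names (`info_nce`, `anchor`, `positive`, `negatives`).

```python
# InfoNCE-style contrastive loss sketch: lower loss when the anchor is more
# similar to its positive (e.g., an augmented view) than to the negatives.
# Similarity measure, temperature, and all names are illustrative assumptions.
import numpy as np


def info_nce(anchor, positive, negatives, temperature=0.1):
    """Contrastive loss for one anchor, one positive, and K negatives."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Similarity logits; the positive sits at index 0.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability before exponentiation
    # Softmax cross-entropy against the positive's index.
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())


rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.05 * rng.normal(size=8)      # a nearby "augmented view"
negatives = [rng.normal(size=8) for _ in range(5)]  # unrelated samples

loss_good = info_nce(anchor, positive, negatives)
# Swapping in an unrelated vector as the "positive" should raise the loss.
loss_bad = info_nce(anchor, negatives[0], negatives[1:] + [positive])
```

In practice the anchor and positive are two augmentations of the same image encoded by a shared network, and the negatives are other images in the batch; the loss above drives exactly the push-pull geometry the abstract describes.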