Istanbul Technical University
ABSTRACT: Data are the essential component of the model-training pipeline and largely determine a model's performance. However, for some tasks there may not be enough data that meet their requirements. In this paper, we introduce a knowledge distillation-based approach that mitigates the disadvantages of data scarcity. Specifically, we propose a method that boosts the pixel-domain performance of a model by exploiting compressed-domain knowledge through cross distillation between these two modalities. To evaluate our approach, we conduct experiments on two computer vision tasks: object detection and recognition. The results indicate that, via our approach, compressed-domain features can be leveraged for a pixel-domain task where data are scarce or not fully available due to privacy or copyright issues.
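As context for the cross-distillation idea summarized above, such objectives typically combine a soft-label distillation term with a term that aligns the student's pixel-domain features to the teacher's compressed-domain features. The sketch below is a minimal, hypothetical NumPy illustration; the function names, the KL-plus-MSE combination, and the weighting scheme are assumptions for exposition, not the paper's exact formulation:

```python
import numpy as np

def softmax(z, t=1.0):
    # Temperature-scaled softmax over the last axis (numerically stable).
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_distillation_loss(student_logits, teacher_logits,
                            student_feat, teacher_feat,
                            temperature=4.0, alpha=0.5):
    """Hypothetical cross-domain distillation objective:
    KL(teacher || student) on temperature-softened logits, plus an
    MSE term aligning pixel-domain student features with
    compressed-domain teacher features."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    # Soft-label term; the T^2 factor is the usual gradient rescaling.
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)),
                axis=-1).mean()
    # Cross-modal feature alignment term.
    feat_mse = np.mean((student_feat - teacher_feat) ** 2)
    return alpha * kl * temperature**2 + (1.0 - alpha) * feat_mse
```

When the student reproduces the teacher's logits and features exactly, both terms vanish; otherwise the loss penalizes divergence in either the output distribution or the feature space.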
This work was supported by the Scientific and Technological Research Council of Turkiye (TUBITAK) under the 1515 Frontier R&D Laboratories Support Program for the BTS Advanced AI Hub: BTS Autonomous Networks and Data Innovation Lab (Project 5239903, grant number 121E378); and in part by ITU-BAP PMA-2023-44299.