arXiv Preprint – ReCLIP: Refine Contrastive Language Image Pre-Training with Source Free Domain Adaptation


In this episode we discuss ReCLIP: Refine Contrastive Language Image Pre-Training with Source Free Domain Adaptation
by Xuefeng Hu, Ke Zhang, Lu Xia, Albert Chen, Jiajia Luo, Yuyin Sun, Ken Wang, Nan Qiao, Xiao Zeng, Min Sun, Cheng-Hao Kuo, Ram Nevatia. The paper presents ReCLIP, a source-free domain adaptation method for large-scale pre-trained vision-language models such as CLIP. ReCLIP addresses the domain gap and the misalignment between visual and text embeddings by learning a projection space and then applying cross-modality self-training with pseudo labels to refine both encoders. Experimental results show that ReCLIP reduces CLIP's average error rate across 22 image classification benchmarks.
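To make the idea of self-training with pseudo labels more concrete, here is a minimal sketch of the generic technique for a CLIP-like zero-shot classifier: pseudo-label each image with the current model, keep only confident predictions, and fine-tune the encoders on them. This is not the paper's full ReCLIP procedure (which also learns a projection space and refines labels iteratively); the encoder modules, prompt tensors, and the confidence threshold below are illustrative assumptions.

```python
# Minimal sketch of confidence-filtered pseudo-label self-training
# for a CLIP-like model (illustrative, not the ReCLIP implementation).
import torch
import torch.nn.functional as F

def generate_pseudo_labels(image_feats, text_feats, threshold=0.5):
    """Label each image with its nearest class embedding, keeping only
    predictions whose softmax confidence exceeds `threshold`."""
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = 100.0 * image_feats @ text_feats.t()   # scaled cosine similarity
    conf, labels = logits.softmax(dim=-1).max(dim=-1)
    keep = conf >= threshold                        # confidence filter
    return labels[keep], keep

def self_training_step(image_encoder, text_encoder, images,
                       class_prompt_tokens, optimizer, threshold=0.5):
    """One unsupervised adaptation step: pseudo-label the batch with the
    current model, then fine-tune on the confident subset."""
    with torch.no_grad():
        labels, keep = generate_pseudo_labels(
            image_encoder(images), text_encoder(class_prompt_tokens), threshold)
    if keep.sum() == 0:
        return None                                 # no confident samples this batch
    img_f = F.normalize(image_encoder(images[keep]), dim=-1)
    txt_f = F.normalize(text_encoder(class_prompt_tokens), dim=-1)
    loss = F.cross_entropy(100.0 * img_f @ txt_f.t(), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy stand-ins for the CLIP encoders, just to exercise the loop.
    image_encoder = torch.nn.Linear(64, 32)
    text_encoder = torch.nn.Linear(16, 32)
    images = torch.randn(8, 64)
    class_prompt_tokens = torch.randn(10, 16)       # one "prompt" per class
    opt = torch.optim.SGD(list(image_encoder.parameters()) +
                          list(text_encoder.parameters()), lr=1e-3)
    print(self_training_step(image_encoder, text_encoder, images,
                             class_prompt_tokens, opt, threshold=0.05))
```

The confidence filter is the key design choice in this style of self-training: it keeps the encoders from being updated on noisy pseudo labels, which is the main failure mode when adapting without any source data or target annotations.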

