
DiverSeed: Integrating Active Learning for Target Domain Data Generation in Instruction Tuning

Instruction tuning has demonstrated its potential for aligning large language models (LLMs) with downstream domains; however, this approach relies heavily on extensive, high-quality instruction datasets for fine-tuning. The construction of a …

Editing outdoor scenes with a large annotated synthetic dataset

Deep multimodal representation learning for generalizable person re-identification

Person re-identification plays a significant role in realistic scenarios due to its wide range of applications in public security and video surveillance. Recently, leveraging supervised or semi-unsupervised learning paradigms, which benefit from the …

Rethinking person re-identification via semantic-based pretraining

Pretraining is a dominant paradigm in computer vision. Supervised ImageNet pretraining is commonly used to initialize the backbones of person re-identification (Re-ID) models. However, recent works show the surprising result that CNN-based …

Deep unsupervised progressive learning for distant domain adaptation

The superiority of deeply learned representations has been reported in the recent literature on the re-identification (Re-ID) task. In this paper, we study a novel transfer learning problem termed Distant Domain Transfer Learning (DDTL) for the Re-ID task. …