Universal Representation Learning from Multiple Domains for Few-shot Classification

     

Wei-Hong Li, Xialei Liu, Hakan Bilen

University of Edinburgh

Figure 1. Universal Representation Learning (URL). To learn universal representations from multiple domains that generalize to previously unseen domains, one strategy is to learn one feature extractor per domain and then retrieve or combine these extractors for the target task during the meta-test stage, as in (a). We instead propose a universal representation network (b), learned by distilling the knowledge of multiple domain-specific networks \(\{f_{\phi^{\ast}_\tau}\}_{\tau=1}^{K}\) into a single feature extractor \(f_{\phi}\) shared across all domains. In the meta-test stage, a linear transformation \(A_{\vartheta}\) further refines the universal representations for better generalization to unseen domains. Our universal representation network achieves better generalization than using multiple domain-specific extractors while being more efficient than (a).
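As a rough illustration of the meta-test adaptation step in (b), the sketch below fits a task-specific linear transformation on the support set of an unseen task while the universal feature extractor stays frozen, here paired with a nearest-centroid classifier on cosine similarity. This is a minimal PyTorch sketch under these assumptions; function names, the optimizer, and the similarity scaling are illustrative, not the exact released implementation.

```python
# Minimal sketch (PyTorch): fit a linear map A on the support set of an unseen
# task on top of the frozen universal feature extractor f_phi. Names and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F


def ncc_loss(z, y):
    """Nearest-centroid classification loss with cosine similarity."""
    z = F.normalize(z, dim=1)
    classes = torch.unique(y)
    centroids = F.normalize(
        torch.stack([z[y == c].mean(0) for c in classes]), dim=1)
    logits = z @ centroids.t() * 10.0   # scaled cosine similarities (assumed scale)
    targets = torch.stack([(classes == c).nonzero().squeeze() for c in y])
    return F.cross_entropy(logits, targets)


def adapt_linear_transform(f_phi, support_x, support_y, steps=40, lr=0.1):
    """Fit a task-specific linear transformation A; f_phi stays frozen."""
    with torch.no_grad():
        feats = f_phi(support_x)                 # [N, D] frozen universal features
    d = feats.shape[1]
    A = torch.eye(d, requires_grad=True)         # initialised as identity
    opt = torch.optim.Adadelta([A], lr=lr)
    for _ in range(steps):
        loss = ncc_loss(feats @ A, support_y)    # refine representations via A
        opt.zero_grad()
        loss.backward()
        opt.step()
    return A.detach()
```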
 

In this paper, we study the problem of few-shot classification, which aims to learn a classifier for previously unseen classes and domains from a few labeled samples. Recent methods use adaptation networks to align their features to new domains or select relevant features from multiple domain-specific feature extractors. In this work, we propose to learn a single set of universal deep representations by distilling the knowledge of multiple separately trained networks after co-aligning their features with the help of adapters and centered kernel alignment. We show that the universal representations can be further refined for previously unseen domains by an efficient adaptation step, in a similar spirit to distance learning methods. We rigorously evaluate our model on the recent Meta-Dataset benchmark and demonstrate that it significantly outperforms previous methods while being more efficient.
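To make the distillation objective concrete, the sketch below matches the shared (student) network's features, mapped by small per-domain adapters, to each frozen domain-specific teacher using a loss based on linear centered kernel alignment (CKA). This is a simplified sketch under stated assumptions; the names `adapters`, `teachers`, the one-batch-per-domain loop, and the plain (1 - CKA) weighting are illustrative and omit other loss terms used in practice.

```python
# Hedged sketch (PyTorch) of multi-domain distillation with a CKA-based loss.
# Teachers are frozen domain-specific networks; adapters co-align the shared
# student's features with each teacher. Names are illustrative assumptions.
import torch


def linear_cka(x, y, eps=1e-6):
    """Linear centered kernel alignment between two feature matrices [N, D]."""
    x = x - x.mean(0, keepdim=True)
    y = y - y.mean(0, keepdim=True)
    xty = x.t() @ y
    return (xty.norm() ** 2) / ((x.t() @ x).norm() * (y.t() @ y).norm() + eps)


def distillation_loss(student, teachers, adapters, batches):
    """Sum of (1 - CKA) terms over domains; one batch per domain tau."""
    loss = 0.0
    for tau, (x, _) in enumerate(batches):
        z_s = adapters[tau](student(x))      # co-aligned student features
        with torch.no_grad():
            z_t = teachers[tau](x)           # frozen domain-specific teacher features
        loss = loss + (1.0 - linear_cka(z_s, z_t))
    return loss
```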

Please let us know if you have any questions or suggestions.