Cross-domain Few-shot Learning with Task-specific Adapters
   

Wei-Hong Li, Xialei Liu, Hakan Bilen

University of Edinburgh

 
Figure 1. Illustration of our task adaptation for cross-domain few-shot learning. In the meta-test stage (a), our method first attaches a parametric transformation \(r_{\alpha}\) to each layer of the backbone, where the adapter can follow (b) a serial or (c) a residual topology and can be parameterized by (d) matrix multiplication or (e) channel-wise scaling. We find that the residual topology (c) with matrix parameterization performs best, and that it is further improved by attaching a linear transformation \(A_{\beta}\) to the end of the network. We adapt the network to a given task by optimizing \(\alpha\) and \(A_{\beta}\) on the few labeled images in the support set, then map query images into the task-specific space and assign them to the nearest class center.
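As a concrete reading of the residual topology (c) with matrix parameterization (d), the following PyTorch sketch shows one way such an adapter could be implemented. The class name, the realization of the matrix \(\alpha\) as a 1×1 convolution, and the zero initialization are our illustrative assumptions, not code from the paper:

```python
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Residual adapter r_alpha(x) = x + alpha(x), a sketch of Fig. 1(c)/(d);
    not the authors' exact implementation."""
    def __init__(self, num_channels: int):
        super().__init__()
        # Matrix parameterization (Fig. 1d): a full C x C channel-to-channel
        # map applied at every spatial position, expressed as a 1x1 conv.
        # (The channel-wise variant in Fig. 1e would use a per-channel scale
        # vector instead.) Zero init is an assumption: it makes the adapted
        # network start out identical to the pre-trained backbone.
        self.alpha = nn.Conv2d(num_channels, num_channels,
                               kernel_size=1, bias=False)
        nn.init.zeros_(self.alpha.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual topology (Fig. 1c): identity plus the learned transform.
        return x + self.alpha(x)
```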
 

In this paper, we look at the problem of cross-domain few-shot classification, which aims to learn a classifier for previously unseen classes and domains from a few labeled samples. Recent approaches broadly solve this problem by parameterizing their few-shot classifiers with task-agnostic and task-specific weights, where the former is typically learned on a large training set and the latter is dynamically predicted through an auxiliary network conditioned on a small support set. In this work, we focus on the estimation of the latter, and propose to learn task-specific weights from scratch directly on a small support set, in contrast to dynamically estimating them. In particular, through a systematic analysis, we show that learning task-specific weights through parametric adapters in matrix form, connected with residual connections to multiple intermediate layers of a backbone network, significantly improves the performance of state-of-the-art models on the Meta-Dataset benchmark at a minor additional cost.
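To make the adaptation procedure concrete, here is a minimal sketch, in our own code rather than the authors', of learning the task-specific weights from scratch on the support set and then classifying queries by nearest class centroid. The optimizer, step count, learning rate, and cosine-similarity distance are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def adapt_and_classify(backbone, adapter_params, support_x, support_y,
                       query_x, num_classes, steps=40, lr=0.5):
    """Optimize the task-specific weights on the support set, then assign
    each query image to the nearest class centroid. A sketch of the
    procedure described above; hyperparameters are assumptions."""
    opt = torch.optim.Adadelta(adapter_params, lr=lr)
    for _ in range(steps):
        feats = F.normalize(backbone(support_x), dim=-1)
        # Class centroids in the current task-specific embedding space.
        centroids = F.normalize(torch.stack(
            [feats[support_y == c].mean(0) for c in range(num_classes)]),
            dim=-1)
        logits = feats @ centroids.t()  # cosine similarity to each centroid
        loss = F.cross_entropy(logits, support_y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        feats = F.normalize(backbone(support_x), dim=-1)
        centroids = F.normalize(torch.stack(
            [feats[support_y == c].mean(0) for c in range(num_classes)]),
            dim=-1)
        q_feats = F.normalize(backbone(query_x), dim=-1)
        return (q_feats @ centroids.t()).argmax(dim=-1)
```

Here `adapter_params` would collect the \(\alpha\) matrices of all attached adapters together with the final linear map \(A_{\beta}\); the task-agnostic backbone weights stay frozen throughout adaptation.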