

  1. Deep Subspace Networks for Few-Shot Learning. Christian Simon, Piotr Koniusz, Richard Nock, Mehrtash Harandi

  2. Problem Definition • Given: a support set and a query. • The support set contains N ways (classes) and K shots (samples per class). • The classes are unseen during training: can we still classify the query? (Figure: a support set with only a few images per class and an unlabeled query.)
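
As a concrete illustration of the N-way K-shot setup (the function name sample_episode, the dataset layout, and the query count are illustrative choices, not taken from the slides), one episode could be sampled as follows:

import random

def sample_episode(dataset, n_way=5, k_shot=1, n_query=15):
    """Sample one N-way K-shot episode.

    dataset: dict mapping a class label to a list of images.
    Returns (support, query), each a list of (image, episode_label) pairs,
    with episode labels 0..n_way-1.
    """
    classes = random.sample(list(dataset.keys()), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        images = random.sample(dataset[cls], k_shot + n_query)
        support += [(img, episode_label) for img in images[:k_shot]]
        query += [(img, episode_label) for img in images[k_shot:]]
    return support, query

The few-shot learner only sees the labelled support pairs of an episode and must classify the query items.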

  3. Motivation • A common approach to classification is to use a fully connected layer as the classifier, followed by a softmax function. • Let a function extract a feature vector from an input. • The classifier and the softmax function can then be formulated as shown below.
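
The formula itself is an image on the slide; a standard reconstruction, writing the feature extractor as f_theta and the classifier weights and bias for class c as w_c and b_c (notation chosen here, not taken from the slide), is:

\[
p(y = c \mid \mathbf{x})
\;=\; \operatorname{softmax}\!\big(\mathbf{W}^{\top} f_\theta(\mathbf{x}) + \mathbf{b}\big)_c
\;=\; \frac{\exp\!\big(\mathbf{w}_c^{\top} f_\theta(\mathbf{x}) + b_c\big)}
            {\sum_{c'} \exp\!\big(\mathbf{w}_{c'}^{\top} f_\theta(\mathbf{x}) + b_{c'}\big)} .
\]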

  4. Motivation • The classifier needs to be updated (e.g. by iterative gradient descent) on new samples whenever samples from unseen classes appear. (Figure: the fully connected classifier is the part that must be updated.)

  5. Motivation • Some prior approaches use pair-wise [1], prototype [2,4], and binary [3] classifiers. • We define a classifier function for each of these approaches. • For example, the prototype method uses a class prototype, the mean of that class's support features, as the classifier. [1] Vinyals et al., "Matching networks for one-shot learning," NIPS, 2016. [2] Snell et al., "Prototypical networks for few-shot learning," NIPS, 2017. [3] Sung et al., "Learning to compare: relation network for few-shot learning," CVPR, 2018. [4] Gidaris and Komodakis, "Dynamic few-shot visual learning without forgetting," CVPR, 2018.
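
Written out (a reconstruction following the cited Prototypical Networks paper [2]; the symbols f_theta for the feature extractor, S_c for the support examples of class c, and mu_c for the prototype are notation chosen here), the prototype classifier is:

\[
\boldsymbol{\mu}_c = \frac{1}{|S_c|} \sum_{\mathbf{x}_i \in S_c} f_\theta(\mathbf{x}_i),
\qquad
p(y = c \mid \mathbf{q}) =
\frac{\exp\!\big(-\|f_\theta(\mathbf{q}) - \boldsymbol{\mu}_c\|^2\big)}
     {\sum_{c'} \exp\!\big(-\|f_\theta(\mathbf{q}) - \boldsymbol{\mu}_{c'}\|^2\big)} .
\]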

  6. Proposed Method • Use subspace methods as classifiers. • The datapoints of each class are used to span a low-dimensional subspace. • Our formulation projects the query onto each class subspace, where the subspace is given by an orthogonal basis spanning that class's support features (see the sketch below).
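
A minimal NumPy sketch of this kind of subspace classifier, assuming the per-class basis is obtained from a truncated SVD of the mean-centred support features and the query is assigned to the class whose subspace it is closest to; the function name, the subspace dimension n_dim, and the mean-centring step are assumptions for illustration, not details given on the slide:

import numpy as np

def subspace_classify(support_feats, query_feat, n_dim=3):
    """Classify a query feature by its distance to each class subspace.

    support_feats: dict mapping class label -> (K, D) array of support features.
    query_feat:    (D,) array, the embedded query.
    n_dim:         dimension of each class subspace (must be smaller than K).
    Returns the predicted label and a dict of squared distances.
    """
    distances = {}
    for cls, feats in support_feats.items():
        mean = feats.mean(axis=0)
        # Orthonormal basis of the span of the centred support features:
        # the leading right singular vectors of the (K, D) sample matrix.
        _, _, vt = np.linalg.svd(feats - mean, full_matrices=False)
        basis = vt[:n_dim].T                       # (D, n_dim), orthonormal columns
        residual = query_feat - mean
        projection = basis @ (basis.T @ residual)  # projection onto the subspace
        distances[cls] = float(np.sum((residual - projection) ** 2))
    predicted = min(distances, key=distances.get)
    return predicted, distances

In the softmax formulation of slide 3, the negatives of these distances can serve as the logits.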

  7. Proposed Method • Prototype method vs. subspace method. (Figure: side-by-side comparison showing the query's residual vector under each method.)
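
In formulas, the two methods differ only in the distance they assign to a query (a reconstruction consistent with the prototype definition above; P_c denotes an orthogonal basis of class c's subspace and mu_c the class mean, with the mean-centring in the subspace distance being an assumption):

\[
d_{\text{proto}}(\mathbf{q}, c) = \big\| f_\theta(\mathbf{q}) - \boldsymbol{\mu}_c \big\|^2,
\qquad
d_{\text{sub}}(\mathbf{q}, c) = \big\| (\mathbf{I} - \mathbf{P}_c \mathbf{P}_c^{\top})\big(f_\theta(\mathbf{q}) - \boldsymbol{\mu}_c\big) \big\|^2 .
\]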

  8. Experiments • Few-shot classification: the Deep Subspace Network (DSN) is compared against state-of-the-art methods. (Table: 5-way 1-shot and 5-way 5-shot accuracy with 95% confidence intervals on mini-ImageNet.)

  9. Experiments • Robustness, with two types of evaluation: • samples from other classes are injected into the support set; • noise is added to the input images. (Table: 5-way 5-shot and 5-way 10-shot accuracy on mini-ImageNet.)
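
A sketch of how such perturbed support sets might be constructed (the Gaussian noise model, the value of sigma, the swap count, the function names, and the choice of drawing contaminants from classes outside the episode are all assumptions for illustration; the slide does not specify these details):

import random
import numpy as np

def contaminate_support(support, dataset, n_swap=1):
    """Replace n_swap support images with images drawn from other classes."""
    support = list(support)
    episode_labels = {lbl for _, lbl in support}
    other_classes = [c for c in dataset if c not in episode_labels]
    for i in random.sample(range(len(support)), n_swap):
        _, lbl = support[i]
        wrong_cls = random.choice(other_classes)
        support[i] = (random.choice(dataset[wrong_cls]), lbl)  # wrong image, same label
    return support

def add_noise(image, sigma=0.1):
    """Add zero-mean Gaussian noise to an image given as a float array in [0, 1]."""
    noisy = image + np.random.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)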

  10. Conclusion • As a classifier, the subspace method is more expressive at capturing the information in a few samples than prior approaches such as averaging the features. • The subspace method is also more robust than the prototype solution because of the denoising capability of subspaces.
