Learning to accurately classify objects from a single training example is often infeasible due to overfitting. We describe a framework for learning an object classifier from a single example by emphasizing relevant feature dimensions, using available examples of related classes.

Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of "one-shot learning." Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training, and when new data is encountered they must inefficiently relearn their parameters to incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and can therefore potentially obviate these downsides. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data and to leverage that data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses purely on memory content, unlike previous methods that additionally use location-based focusing mechanisms.

Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify metric scaling and metric task conditioning as important for improving the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates; it provides improvements of up to 14% in accuracy for certain metrics on the mini-Imagenet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in a task-dependent metric space, and we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn that space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state of the art on mini-Imagenet, and we confirm these results on a new few-shot dataset, introduced here, based on CIFAR100.
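To make the content-based addressing concrete, here is a minimal NumPy sketch of a read operation that attends to memory by similarity to stored content rather than by slot location. The function name and the single key-strength parameter `beta` are our own simplifications for illustration; a full MANN/NTM read head also handles writing and usage-based slot allocation.

```python
import numpy as np

def content_based_read(memory, key, beta=1.0):
    """Read from an external memory by content similarity alone.

    memory : (num_slots, slot_dim) array of stored vectors.
    key    : (slot_dim,) query vector emitted by the controller.
    beta   : key strength; larger values sharpen the read weights.
    Returns the read vector and the per-slot attention weights.
    """
    eps = 1e-8
    # Cosine similarity between the key and every memory slot.
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + eps)
    # Softmax over slots: addressing depends only on content, never on slot index.
    logits = beta * sims
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights @ memory, weights

# Toy usage: a key close to slot 7's content retrieves (mostly) slot 7.
rng = np.random.default_rng(0)
memory = rng.standard_normal((128, 40))
key = memory[7] + 0.1 * rng.standard_normal(40)
read_vec, w = content_based_read(memory, key, beta=10.0)
assert w.argmax() == 7
```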
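The metric-scaling idea can likewise be sketched in a prototypical-network style: class logits are a learned scale `alpha` times a (negative) distance to per-class prototypes. The helper names and the fixed `alpha` below are illustrative assumptions, not the paper's code; in the full method the scale is learned end to end.

```python
import numpy as np

def prototypes(support_embeddings, support_labels, num_classes):
    """Per-class mean of the embedded support examples (one prototype per class)."""
    return np.stack([support_embeddings[support_labels == c].mean(axis=0)
                     for c in range(num_classes)])

def scaled_metric_logits(query_embedding, protos, alpha):
    """Class logits under a scaled squared-Euclidean metric.

    alpha is a positive scale on the metric; it changes the softmax
    sharpness and hence the gradients flowing back into the embedding
    network, which is what the scaling analysis is about.
    """
    sq_dists = ((protos - query_embedding) ** 2).sum(axis=1)
    return -alpha * sq_dists  # softmax of these gives class probabilities

# Toy 5-way 5-shot episode with random stand-in "embeddings".
rng = np.random.default_rng(1)
emb = rng.standard_normal((25, 64))
labels = np.repeat(np.arange(5), 5)
protos = prototypes(emb, labels, num_classes=5)
logits = scaled_metric_logits(emb[0], protos, alpha=0.5)
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```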
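Conditioning the learner on the task sample set can be sketched as a task-dependent scale and shift applied to the features, in the spirit of FiLM-style conditioning: a summary of the support set predicts per-feature modulation, making the metric space depend on the task. The mean-pooled task summary, the near-identity initialization, and all names here are assumptions for illustration, not the authors' exact architecture.

```python
import numpy as np

def task_conditioned_features(features, task_embedding, W_gamma, b_gamma, W_beta, b_beta):
    """Modulate features with a scale and shift predicted from the task summary."""
    gamma = task_embedding @ W_gamma + b_gamma  # per-feature scale
    beta = task_embedding @ W_beta + b_beta     # per-feature shift
    return gamma * features + beta

rng = np.random.default_rng(2)
dim = 64
support = rng.standard_normal((25, dim))       # embedded support set of one episode
task_emb = support.mean(axis=0)                # simplest task summary: mean embedding
# Small weights and b_gamma = 1 start the modulation near the identity map.
W_g, b_g = 0.01 * rng.standard_normal((dim, dim)), np.ones(dim)
W_b, b_b = 0.01 * rng.standard_normal((dim, dim)), np.zeros(dim)
conditioned = task_conditioned_features(support, task_emb, W_g, b_g, W_b, b_b)
```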