Invited talk abstract (NeurIPS 2020 Meta Learning Workshop)

Humans develop learning algorithms that are incredibly general and can be applied across a wide range of tasks. Unfortunately, this development process is often tedious trial and error with numerous possibilities for suboptimal choices. General Meta Learning seeks to automate many of these choices by generating new learning algorithms automatically. Unlike contemporary Meta Learning, where generalization ability has been limited, these learning algorithms ought to be general-purpose. This allows us to leverage data at scale for learning algorithm design, exploring choices that are difficult for humans to consider. I present a General Meta Learner, MetaGenRL, that meta-learns novel Reinforcement Learning algorithms that can be applied to significantly different environments. We further investigate how to reduce inductive biases and simplify Meta Learning. Finally, I introduce Variable Shared Meta Learning (VS-ML), a novel principle that generalizes Learned Learning Rules, Fast Weights, and Meta RNNs (learning in activations). This enables (1) implementing backpropagation purely in the recurrent dynamics of an RNN and (2) meta-learning algorithms for supervised learning from scratch.

Blog & Paper on MetaGenRL
Paper on VS-ML

Variable Shared Meta Learning (VS-ML)

Variable Shared Meta Learning poster (Poster PDF)
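
To give a flavour of the variable-sharing principle, the sketch below is a minimal illustration only, not the implementation from the paper: the names (theta, effective_weights, update_states) are hypothetical, and the rule parameters are randomly initialized here rather than meta-learned. Every connection of a small task network carries its own state vector, and one shared update rule is applied at every connection, so learning unfolds in the states rather than in the shared parameters:

# Minimal sketch of the variable-sharing idea (an illustration,
# not the authors' implementation). Every connection (i, j) of a
# task network carries a small state vector, and ONE shared update
# rule (parameters `theta`, here random instead of meta-learned)
# is applied at every connection. Learning happens in the states.

import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT, STATE = 4, 3, 8  # task-network size, per-connection state size

# Shared parameters of the update rule, reused at every connection.
theta = {
    "W": rng.normal(0, 0.1, (STATE + 2, STATE)),  # inputs: state, pre, error
    "b": np.zeros(STATE),
    "read": rng.normal(0, 0.1, STATE),            # reads out an effective weight
}

# One state vector per connection of the task network.
states = rng.normal(0, 0.1, (N_IN, N_OUT, STATE))

def effective_weights(states):
    """Each connection's 'weight' is a linear readout of its state."""
    return states @ theta["read"]                  # shape (N_IN, N_OUT)

def update_states(states, x, error):
    """Apply the single shared rule at every connection (i, j)."""
    pre = np.broadcast_to(x[:, None, None], (N_IN, N_OUT, 1))
    err = np.broadcast_to(error[None, :, None], (N_IN, N_OUT, 1))
    inp = np.concatenate([states, pre, err], axis=-1)
    return np.tanh(inp @ theta["W"] + theta["b"])

# One "learning" step, carried out entirely by the recurrent dynamics:
x = rng.normal(size=N_IN)
y_hat = x @ effective_weights(states)
error = y_hat - np.ones(N_OUT)                     # error w.r.t. some target
states = update_states(states, x, error)

Because the same parameters theta are reused at every connection, the number of meta-parameters is independent of the size of the task network; meta-learning theta can then, as stated in the abstract above, implement update rules such as backpropagation purely in these recurrent dynamics.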

Invited talk

My invited talk took place at the NeurIPS 2020 Meta Learning Workshop.

Please cite my talk using

@misc{kirsch2020generalmeta,
  title={General Meta Learning},
  author={Louis Kirsch},
  howpublished={Meta Learning Workshop at Advances in Neural Information Processing Systems},
  year={2020}
}

and Variable Shared Meta Learning using

@article{kirsch2020vsml,
  title={Meta Learning Backpropagation And Improving It},
  author={Louis Kirsch and Juergen Schmidhuber},
  journal={Meta Learning Workshop at Advances in Neural Information Processing Systems},
  year={2020}
}