Hi. I’m Louis Kirsch.

I am a PhD student with Jürgen Schmidhuber at IDSIA (The Swiss AI Lab), working on meta reinforcement learning. Previously, I completed my Master of Research at University College London. My long-term research goal is to create RL agents that learn their own learning algorithm, making them truly general in the AGI sense. They should be able to design their own abstractions for prediction, planning, and learning, and continuously improve across a wide range of environments.

Short CV

News

12/2022 When neural networks implement general-purpose in-context learners

07/2022 My invited talk at ICML DARL 2022 covers how we can learn how to learn without any human-engineered meta-optimization.

02/2022 Our work at DeepMind on Introducing Symmetries to Black Box Meta Reinforcement Learning will appear at AAAI 2022!

10/2021 My work on Variable Shared Meta Learning (VSML) will appear at NeurIPS 2021!

09/2021 I collaborated with some great people at DeepMind on general-purpose Meta Learning during an internship.
See our paper, Introducing Symmetries to Black Box Meta Reinforcement Learning.

12/2020 I am an invited speaker at Meta Learn @ NeurIPS 2020.
Join me online on Dec 11th at 16:00 UTC to learn more about my newest work on General Meta Learning.

10/2020 I have been awarded a total of 550,000 GPU compute hours on the Swiss National Supercomputer.
Huge thanks to CSCS for making exciting new Meta Learning research possible!

12/2019 My first work on meta-learning RL algorithms has been accepted at ICLR 2020 with a spotlight talk!
ArXiv link: Improving Generalization in Meta Reinforcement Learning using Learned Objectives
Read more about it in my blog post

Recent publications

A complete list can be found on Google Scholar.

  • General-Purpose In-Context Learning by Meta-Learning Transformers [ArXiv]
    Preprint. Internship project at Google with James Harrison, Jascha Sohl-Dickstein, and Luke Metz

  • Eliminating Meta Optimization Through Self-Referential Meta Learning [ArXiv]
    Workshop paper at ICML 2022 and AutoML 2022

  • Introducing Symmetries to Black Box Meta Reinforcement Learning [ArXiv]
    Conference paper at AAAI 2022
    Internship project at DeepMind with Sebastian Flennerhag, Hado van Hasselt, Abram Friesen, Junhyuk Oh, and Yutian Chen

  • Meta Learning Backpropagation And Improving It [Blog] [ArXiv]
    Workshop paper at NeurIPS Meta Learn 2020 (Kirsch and Schmidhuber 2020)
    Conference paper at NeurIPS 2021

  • Improving Generalization in Meta Reinforcement Learning using Learned Objectives [Blog] [PDF]
    Conference paper at ICLR 2020, preprint on ArXiv (Kirsch et al. 2019)

  • Modular Networks: Learning to Decompose Neural Computation [PDF]
    Conference paper at NeurIPS 2018 (Kirsch et al. 2018)

  • Transfer Learning for Speech Recognition on a Budget [More]
    Workshop paper at ACL 2017 (Kunze and Kirsch et al. 2017)

  • Framework for Exploring and Understanding Multivariate Correlations
    Demo track paper at ECML PKDD 2017 (Kirsch et al. 2017)

Recent blog posts

subscribe via RSS