Hi. I’m Louis Kirsch.

I am a PhD student with Jürgen Schmidhuber at IDSIA (The Swiss AI Lab), working on meta reinforcement learning agents. Previously, I completed my Master of Research at University College London. My long-term research goal is to make RL agents learn their own learning algorithm, making them truly general in the AGI sense: they should be able to design their own abstractions for prediction, planning, and learning, and to learn continually from arbitrarily many environments. I have outlined these challenges in my blog post, and I also recommend reading Jeff Clune’s view and Jürgen’s legacy.

Short CV

News

My first work on learning RL algorithms is on arXiv!
Improving Generalization in Meta Reinforcement Learning using Learned Objectives
Read more about it in my blog post

Recent publications

A complete list can be found on Google Scholar.
I have also worked on other machine learning projects.

  • Improving Generalization in Meta Reinforcement Learning using Learned Objectives [Blog][PDF]
    arXiv preprint (Kirsch et al. 2019)

  • Modular Networks: Learning to Decompose Neural Computation [PDF]
    Published conference paper at NeurIPS 2018 (Kirsch et al. 2018)

  • Contemporary Challenges in Artificial Intelligence [PDF]
    Technical report (Kirsch September 2018)

  • Scaling Neural Networks Through Sparsity [PDF]
    Technical report (Kirsch July–October 2018)

  • Characteristics of Machine Learning Research with Impact [More][PDF]
    Technical report (Kirsch May 2018)

  • Differentiable Convolutional Neural Network Architectures for Time Series Classification [More]
    Bachelor's thesis at Hasso Plattner Institute (Kirsch 2017)

  • Transfer Learning for Speech Recognition on a Budget [More]
    Published workshop paper at ACL 2017 (Kunze, Kirsch, et al. 2017)

  • Framework for Exploring and Understanding Multivariate Correlations [More]
    Published demo track paper at ECML PKDD 2017 (Kirsch et al. 2017)

Recent blog posts

  • MetaGenRL: Improving Generalization in Meta Reinforcement Learning

    Biological evolution has distilled the experiences of many learners into the general learning algorithms of humans. Inspired by this process, MetaGenRL distills the experiences of many complex agents to meta-learn a low-complexity neural objective function that affects how future individuals will learn. Unlike recent meta-RL algorithms, MetaGenRL can generalize to new environments that are entirely different from those used for meta-training. In some cases, it even outperforms human-engineered RL algorithms. MetaGenRL uses off-policy second-order gradients during meta-training that greatly increase its sample efficiency. A minimal code sketch of this idea follows after this list. [Continue reading]

  • NeurIPS 2018, Updates on the AI road map

    I present an updated roadmap to AGI with four critical challenges: Continual Learning, Meta-Learning, Environments, and Scalability. I motivate the respective areas and discuss how research from NeurIPS 2018 has advanced them and where we need to go next. [Continue reading]
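
To make the MetaGenRL summary above concrete, here is a minimal, hypothetical PyTorch sketch of the core idea: a small neural objective function (parameters φ) produces the loss the policy trains on, and φ itself is meta-trained by differentiating through the policy update, i.e. with second-order gradients. All names, shapes, hyperparameters, and the toy linear policy are my own illustration rather than the paper's code; in place of the paper's critic-based off-policy meta-objective, a simple return-weighted estimate stands in.

```python
import torch
import torch.nn as nn

class ObjectiveFn(nn.Module):
    """Low-complexity neural objective: maps per-step statistics to a scalar loss."""
    def __init__(self, in_dim=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, log_probs, rewards, values):
        feats = torch.stack([log_probs, rewards, values], dim=-1)
        return self.net(feats).mean()

# Toy linear policy: logits = obs @ W. W is kept explicit so the inner
# update below stays differentiable with respect to the objective's phi.
def policy_log_prob(W, obs, actions):
    logits = obs @ W
    return torch.distributions.Categorical(logits=logits).log_prob(actions)

objective = ObjectiveFn()
meta_opt = torch.optim.Adam(objective.parameters(), lr=1e-3)
W = torch.randn(4, 2, requires_grad=True)

# Stand-in for a replay-buffer batch (MetaGenRL meta-trains off-policy).
obs = torch.randn(64, 4)
actions = torch.randint(0, 2, (64,))
rewards = torch.randn(64)
values = torch.randn(64)

# Inner step: update the policy on the *learned* objective, keeping the
# graph (create_graph=True) so meta-gradients can flow through the update.
log_probs = policy_log_prob(W, obs, actions)
inner_loss = objective(log_probs, rewards, values)
(grad_W,) = torch.autograd.grad(inner_loss, W, create_graph=True)
W_updated = W - 1e-2 * grad_W

# Meta step: score the updated policy with an ordinary return-weighted
# objective and backpropagate into phi (a second-order gradient).
meta_loss = -(policy_log_prob(W_updated, obs, actions) * rewards).mean()
meta_opt.zero_grad()
meta_loss.backward()
meta_opt.step()
```

In the paper's setup, many agents in different environments share one objective function and all contribute meta-gradients to it; generalization then comes from φ being low-complexity and environment-agnostic.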

subscribe via RSS