<aside> 💡 Topics: self-supervised representation learning, (data-)efficient pretraining, few-shot / zero-shot / long-tail learning, continual learning, continual language models and supervision transfer, interpretability and XAI, medical AI, algorithmic bias, information bias, "fairness"
</aside>
NLP with Friends: Learning, evaluating and explaining transferable text representations from grumpy Big Small Data. 12/2020
ALPS winter school: Tutorial on interpretability in NLP, Lab 2.2
Aleph Alpha: Data-Efficient NLP Representation Learning
NLP with Friends, Featured Friend: Nils Rethmeier
https://github.com/copenlu/ALPS_2021
Project: Cora4NLP, Contextual Reasoning and Adaptation for Natural Language Processing
WIP Paper: Continual Language Modeling and Adaptation