2021

Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers

Talks about CKA (Kornblith et al., 2019) and how Phang et al. (2021) use it to analyze how representations change during NLP fine-tuning
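As a reference point, here is a minimal sketch of linear CKA following the formulation in Kornblith et al. (2019): after column-centering, CKA(X, Y) = ||Yᵀ X||²_F / (||Xᵀ X||_F ||Yᵀ Y||_F). The function name and NumPy usage are illustrative, not from either paper:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices.

    X: (n_examples, d1) activations from one layer/model.
    Y: (n_examples, d2) activations from another layer/model.
    Returns a similarity score in [0, 1].
    """
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (
        np.linalg.norm(X.T @ X, ord="fro")
        * np.linalg.norm(Y.T @ Y, ord="fro")
    )
    return numerator / denominator
```

Computing this score for every pair of layers yields the layer-by-layer similarity heatmaps in which the paper observes clusters of similar representations.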

Contrastive NLP, long-tail LMs, prompt embedding learning

Contrastive NLP and its relation to differentiable prompting (Logan IV et al., 2021)
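For context on the prompt-embedding side: soft prompting replaces discrete prompt tokens with trainable continuous vectors prepended to the input embeddings, which is what makes the prompt differentiable. A minimal sketch of that idea, assuming a PyTorch-style module; the class name, shapes, and initialization are illustrative and not taken from Logan IV et al. (2021):

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Trainable "virtual token" embeddings prepended to the input,
    so the prompt can be optimized by gradient descent while the LM
    itself may stay frozen. (Illustrative sketch, not a paper's API.)
    """

    def __init__(self, n_prompt_tokens: int, d_model: int):
        super().__init__()
        # Small random init; scale is an arbitrary illustrative choice.
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, d_model) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, d_model) token embeddings.
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the learned prompt vectors to every sequence.
        return torch.cat([prompt, input_embeds], dim=1)
```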

20.1.2021 Nils_Rethmeier - DIKU reading group.pptx

Summary of "Transformer Feed-Forward Layers Are Key-Value Memories" by Geva et al, 2020