
Thread

2 tweets

1
OMG, I'm reading papers again. What's interesting is that until a few years ago it all sounded very much like scientific business as usual. Even the GPT-2 paper "Language Models are Unsupervised Multitask Learners" reads as relatively level-headed.
2
Although, if I understood correctly, it introduces the idea of moving from supervised, task-specific training to an unsupervised language model that performs tasks from prompts alone, without which the whole prompt engineering business would not have been possible!