
Thread

8 tweets

3
Started in 1984, it has been ongoing for almost 40 years: a painstaking effort to distill "common sense" so that machines can reason about reality. I see parallels here to the way ML has completely changed machine translation.
4
Machine translation used to be the hunt for a universal grammar, with parsers and generators producing an intermediate representation to translate between languages. As you can imagine, given all the quirks of natural language, this was difficult.
6
It seems we've achieved the same thing using large language models. Essentially, by learning the sequential structure of data, we have somehow created a machine that generates that intermediate representation: one we don't understand, but that seems to be good enough.
7
And at the same time, we extracted the "common sense" that previously took decades to assemble manually. This is remarkable, although there are still challenges ahead. But that's for another thread :)