Mikio Braun
@mikiobraun
Replying to @rasbt
Ah, I was under the impression that in LoRA you keep the weight matrices frozen but learn a low-rank update for fine-tuning, but maybe I misunderstood. Yeah, OK, the weight matrices themselves are not really low-rank, but maybe low-rank "enough" to do some compression...
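The distinction in the tweet can be sketched in a few lines of NumPy (shapes and names here are illustrative, not the LoRA paper's code): the pretrained weight W stays as-is, and only a small low-rank factorization B @ A is trained, so the *update* is low-rank even when W itself is close to full rank.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 64, 64, 4                 # layer dims, with rank r << min(d, k)
W = rng.normal(size=(d, k))         # frozen pretrained weight (near full rank)
B = rng.normal(size=(d, r)) * 0.01  # in LoRA proper, B is initialized to zero
A = rng.normal(size=(r, k)) * 0.01  # only A and B receive gradient updates

delta = B @ A        # trainable update, rank at most r
W_eff = W + delta    # effective weight used in the forward pass

print(np.linalg.matrix_rank(W))      # typically 64: W is not low-rank
print(np.linalg.matrix_rank(delta))  # at most 4: the update is
```

So both readings are compatible: the pretrained matrices are not low-rank, but the fine-tuning *delta* is constrained to be, which is where the parameter savings come from.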