News
Overview

The current Alpaca model is fine-tuned from a 7B LLaMA model [1] on 52K instruction-following data generated by the techniques in the Self-Instruct [2] paper, with some modifications that we ...
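As a rough illustration of what "instruction-following data" means here, the sketch below assumes each of the 52K examples is a JSON-style record with `instruction`, `input`, and `output` fields, and that records with a non-empty `input` are rendered with a slightly different prompt template; the template wording and field names are assumptions for illustration, not taken verbatim from this document.

```python
# Assumed record layout: {"instruction": ..., "input": ..., "output": ...}.
# Two templates: one for records that carry extra context ("input"),
# one for records that do not.

PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def format_example(record: dict) -> str:
    """Render one instruction-following record into a training prompt."""
    if record.get("input"):
        return PROMPT_WITH_INPUT.format(
            instruction=record["instruction"], input=record["input"]
        )
    return PROMPT_NO_INPUT.format(instruction=record["instruction"])

example = {
    "instruction": "Give three tips for staying healthy.",
    "input": "",
    "output": "1. Eat a balanced diet. ...",
}
print(format_example(example))
```

During fine-tuning, the model would be trained to continue such a prompt with the record's `output` text.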