
LLaMA 1 Paper

HEAD1TON 2025. 4. 1. 15:41

https://arxiv.org/abs/2302.13971

 

LLaMA: Open and Efficient Foundation Language Models

We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets.


 
