LoRA: Low-Rank Adaptation of Large Language Models
https://arxiv.org/abs/2106.09685
An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible.
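The core idea is compact enough to sketch in a few lines: the pretrained weight matrix stays frozen, and only a low-rank update B·A (rank r, scaled by alpha/r) is trained on top of it. Below is a minimal PyTorch sketch of that idea; the class name `LoRALinear` and the init constants are illustrative choices, not the paper's released code.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update:
    h = W x + (alpha / r) * B A x, as described in the LoRA paper."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # A starts with small Gaussian values, B starts at zero, so the
        # adapted layer is initially identical to the pretrained one.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap an existing projection; only A and B receive gradients.
layer = LoRALinear(nn.Linear(768, 768), r=8)
```

Because B is zero-initialized, training starts from the pretrained model's behavior, and the update B·A can later be merged back into the base weights for inference at no extra latency.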
Llama 2: Open Foundation and Fine-Tuned Chat Models
https://arxiv.org/abs/2307.09288
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested.
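Since the fine-tuned chat variants are openly released, one way to try them is through the Hugging Face `transformers` library. A minimal sketch, assuming access to the gated `meta-llama/Llama-2-7b-chat-hf` checkpoint (Meta's license must be accepted on the Hub) and an environment with `transformers` and `accelerate` installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Gated repository: requires accepting Meta's license on the Hugging Face Hub.
model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "Explain low-rank adaptation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```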
LLaMA: Open and Efficient Foundation Language Models
https://arxiv.org/abs/2302.13971
We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets.