AI/Paper Analysis (4)

GPT-1 Paper
https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf
Improving Language Understanding by Generative Pre-Training (openai.com)

LoRA Paper
https://arxiv.org/abs/2106.09685
LoRA: Low-Rank Adaptation of Large Language Models — An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. … (arxiv.org) — a minimal code sketch of the LoRA update appears after this list.

Llama 2 Paper
https://arxiv.org/abs/2307.09288
Llama 2: Open Foundation and Fine-Tuned Chat Models — In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. … (arxiv.org)

Llama 1 Paper
https://arxiv.org/abs/2302.13971
LLaMA: Open and Efficient Foundation Language Models — We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. … (arxiv.org)
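The LoRA card above only hints at what "low-rank adaptation" replaces full fine-tuning with. The NumPy sketch below shows the core idea from the paper: freeze the pretrained weight W0 and learn only a low-rank update ΔW = BA, scaled by α/r. The variable names, shapes, and init values here are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Hypothetical sizes: rank r is chosen much smaller than the layer dims.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 16, 8, 4, 8

W0 = rng.normal(size=(d_out, d_in))          # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))   # trainable, small Gaussian init
B = np.zeros((d_out, r))                     # trainable, zero init -> delta_W = 0 at start

def lora_forward(x):
    # h = W0 x + (alpha / r) * B A x  ; only A and B receive gradient updates
    return W0 @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
print(lora_forward(x))  # identical to W0 @ x until B moves away from zero
```

Because B starts at zero, the adapted layer initially reproduces the pretrained layer exactly, and after training the product BA can be merged into W0 so inference incurs no extra latency.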