Update README.md (#444)
Added explicit **bolded** instructions clarifying that people need to request access to the models from Facebook and never through this repo.
This commit is contained in:
parent 863f65e2e3
commit f4f5362edb
1 changed file with 14 additions and 12 deletions

README.md (26 lines changed)
@@ -219,9 +219,11 @@ cadaver, cauliflower, cabbage (vegetable), catalpa (tree) and Cailleach.

### Obtaining and verifying the Facebook LLaMA original model and Stanford Alpaca model data

- **Under no circumstances share IPFS, magnet links, or any other links to model downloads anywhere in this repository, including in issues, discussions, or pull requests. They will be immediately deleted.**
- The LLaMA models are officially distributed by Facebook and will **never** be provided through this repository.
- Refer to [Facebook's LLaMA repository](https://github.com/facebookresearch/llama/pull/73/files) if you need to request access to the model data.
- Please verify the sha256 checksums of all downloaded model files to confirm that you have the correct model data files before creating an issue relating to your model files.
- The following command will verify if you have all possible latest files in your self-installed `./models` subdirectory:

`sha256sum --ignore-missing -c SHA256SUMS` on Linux

@@ -229,15 +231,15 @@ cadaver, cauliflower, cabbage (vegetable), catalpa (tree) and Cailleach.

`shasum -a 256 --ignore-missing -c SHA256SUMS` on macOS
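The check above can be sketched end to end. This is a minimal illustration of how `-c` with `--ignore-missing` behaves; the file name `ggml-model-example.bin` is a placeholder for the example, not a real model file:

```shell
# Create a stand-in file and record its checksum the way a SHA256SUMS
# file stores them: one "<hex digest>  <file name>" line per file.
echo "model data" > ggml-model-example.bin
sha256sum ggml-model-example.bin > SHA256SUMS

# -c re-hashes each listed file and compares against the recorded digest;
# --ignore-missing silently skips entries for files you have not downloaded
# instead of reporting them as errors.
sha256sum --ignore-missing -c SHA256SUMS   # prints "ggml-model-example.bin: OK"
```

On macOS, substitute `shasum -a 256` for `sha256sum` as shown above; the `SHA256SUMS` file format is the same.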
- If your issue is with model generation quality then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:
    - LLaMA:
        - [Introducing LLaMA: A foundational, 65-billion-parameter large language model](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)
        - [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
    - GPT-3:
        - [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
    - GPT-3.5 / InstructGPT / ChatGPT:
        - [Aligning language models to follow instructions](https://openai.com/research/instruction-following)
        - [Training language models to follow instructions with human feedback](https://arxiv.org/abs/2203.02155)

### Perplexity (Measuring model quality)