From c6524f46eb93fdb949330293a8469fd70080bd5a Mon Sep 17 00:00:00 2001
From: Pavol Rusnak
Date: Sun, 23 Apr 2023 10:21:26 +0200
Subject: [PATCH] readme : update gpt4all instructions (#980)

---
 README.md | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/README.md b/README.md
index 324d49f..7bf2cc1 100644
--- a/README.md
+++ b/README.md
@@ -275,18 +275,19 @@ cadaver, cauliflower, cabbage (vegetable), catalpa (tree) and Cailleach.
 
 ### Using [GPT4All](https://github.com/nomic-ai/gpt4all)
 
-- Obtain the `gpt4all-lora-quantized.bin` model
-- It is distributed in the old `ggml` format, which is now obsoleted
-- You have to convert it to the new format using [./convert-gpt4all-to-ggml.py](./convert-gpt4all-to-ggml.py). You may also need to
-convert the model from the old format to the new format with [./migrate-ggml-2023-03-30-pr613.py](./migrate-ggml-2023-03-30-pr613.py):
+- Obtain the `tokenizer.model` file from the LLaMA model and put it into `models`
+- Obtain the `added_tokens.json` file from the Alpaca model and put it into `models`
+- Obtain the `gpt4all-lora-quantized.bin` file from the GPT4All model and put it into `models/gpt4all-7B`
+- It is distributed in the old `ggml` format, which is now obsolete
+- You have to convert it to the new format using `convert.py`:
 
-  ```bash
-  python3 convert-gpt4all-to-ggml.py models/gpt4all-7B/gpt4all-lora-quantized.bin ./models/tokenizer.model
-  python3 migrate-ggml-2023-03-30-pr613.py models/gpt4all-7B/gpt4all-lora-quantized.bin models/gpt4all-7B/gpt4all-lora-quantized-new.bin
-  ```
+```bash
+python3 convert.py models/gpt4all-7B/gpt4all-lora-quantized.bin
+```
 
-- You can now use the newly generated `gpt4all-lora-quantized-new.bin` model in exactly the same way as all other models
-- The original model is saved in the same folder with a suffix `.orig`
+- You can now use the newly generated `models/gpt4all-7B/ggml-model-q4_0.bin` model in exactly the same way as all other models
+
+- The newer GPT4All-J model is not yet supported!
 
 ### Obtaining and verifying the Facebook LLaMA original model and Stanford Alpaca model data
 
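
For context, a minimal end-to-end sketch of the workflow these updated instructions describe is shown below. The `convert.py` invocation and the `ggml-model-q4_0.bin` output name come from the patch itself; the source paths for `tokenizer.model`, `added_tokens.json`, and `gpt4all-lora-quantized.bin`, as well as the `./main` flags and prompt, are illustrative assumptions rather than part of the commit.

```bash
# Sketch of the updated README workflow (source paths are placeholders, not real locations).

# 1. Place the required files where the instructions expect them:
cp /path/to/llama/tokenizer.model models/           # from the LLaMA model
cp /path/to/alpaca/added_tokens.json models/        # from the Alpaca model
mkdir -p models/gpt4all-7B
cp /path/to/gpt4all-lora-quantized.bin models/gpt4all-7B/

# 2. Convert the old-format ggml file (command taken from the patch):
python3 convert.py models/gpt4all-7B/gpt4all-lora-quantized.bin

# 3. Use the converted model like any other llama.cpp model (example flags/prompt assumed):
./main -m models/gpt4all-7B/ggml-model-q4_0.bin \
       -p "Building a website can be done in 10 simple steps:" -n 128
```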