From d8bd0013e8768aaa3dc9cfc1ff01499419d5348e Mon Sep 17 00:00:00 2001
From: Henri Vasserman
Date: Sat, 3 Jun 2023 16:35:20 +0300
Subject: [PATCH] Add info about CUDA_VISIBLE_DEVICES (#1682)

---
 README.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 00571d8..aba22b9 100644
--- a/README.md
+++ b/README.md
@@ -310,6 +310,8 @@ Building the program with BLAS support may lead to some performance improvements
   ```
   Note: Because llama.cpp uses multiple CUDA streams for matrix multiplication results [are not guaranteed to be reproducible](https://docs.nvidia.com/cuda/cublas/index.html#results-reproducibility). If you need reproducibility, set `GGML_CUDA_MAX_STREAMS` in the file `ggml-cuda.cu` to 1.
 
+  The environment variable [`CUDA_VISIBLE_DEVICES`](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars) can be used to specify which GPU(s) will be used.
+
 - **CLBlast**
 
   OpenCL acceleration is provided by the matrix multiplication kernels from the [CLBlast](https://github.com/CNugteren/CLBlast) project and custom kernels for ggml that can generate tokens on the GPU.
@@ -348,7 +350,7 @@ Building the program with BLAS support may lead to some performance improvements
   cmake --install . --prefix /some/path
   ```
 
-  Where `/some/path` is where the built library will be installed (default is `/usr/loca`l`).
+  Where `/some/path` is where the built library will be installed (default is `/usr/local`).
 
   Building: