# CI
In addition to [GitHub Actions](https://github.com/ggerganov/llama.cpp/actions), `llama.cpp` uses a custom CI framework:
https://github.com/ggml-org/ci
It monitors the `master` branch for new commits and runs the
[ci/run.sh](https://github.com/ggerganov/llama.cpp/blob/master/ci/run.sh) script on dedicated cloud instances. This allows us
to execute heavier workloads than are feasible with GitHub Actions alone. Over time, the cloud instances will be scaled
to cover various hardware architectures, including GPU and Apple Silicon instances.
Collaborators can optionally trigger a CI run by adding the `ggml-ci` keyword to their commit message.
Only the branches of this repo are monitored for this keyword.
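
For example, a minimal sketch of triggering a run from a branch of this repo (an empty commit is enough; only the `ggml-ci` keyword in the message matters, the surrounding wording is arbitrary):

```bash
# any commit whose message contains `ggml-ci` triggers the custom CI on push
git commit --allow-empty -m "ci : trigger full run (ggml-ci)"
git push
```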
It is good practice to execute the full CI locally on your machine before publishing changes:
```bash
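# create a scratch directory for the CI results and downloaded data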
mkdir tmp
# CPU-only build
bash ./ci/run.sh ./tmp/results ./tmp/mnt
# with CUDA support
GG_BUILD_CUDA=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt
```
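
Here `./tmp/results` is the output directory where the script writes its logs and results, and `./tmp/mnt` is a scratch/mount directory used for downloaded data such as models. Other `GG_BUILD_*` environment variables may be available for additional backends; check the top of [ci/run.sh](https://github.com/ggerganov/llama.cpp/blob/master/ci/run.sh) for the current set.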