# LLM Install

## NVIDIA

`CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --no-cache-dir`
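
Note: on recent llama-cpp-python releases the CUDA build option has been renamed, so if `LLAMA_CUBLAS` is rejected by CMake, `CMAKE_ARGS="-DGGML_CUDA=on"` is the likely replacement.

As a quick sanity check after installing, a minimal Python sketch like the one below (the model path is a placeholder) can confirm that the GPU build is active; the startup log should mention CUDA/cuBLAS and report how many layers were offloaded.

```python
# Minimal sketch to verify the CUDA-enabled build of llama-cpp-python.
# The GGUF path is a placeholder; point it at any local model file.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model.gguf",  # placeholder path
    n_gpu_layers=-1,   # offload all layers to the GPU
    verbose=True,      # startup output should mention CUDA if the GPU build is in use
)

output = llm("Q: What is the capital of France? A:", max_tokens=16)
print(output["choices"][0]["text"])
```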