Using llama.cpp with parlant #650
sonvhfpoly started this discussion in General
Replies: 1 comment · 1 reply
Hi everyone,

We have hosted some local models using llama.cpp. The server exposes an OpenAI-compatible URL, but I can't find any guidance on how to point an nlp_service at it. Can I use a llama.cpp server with Parlant?
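For context, the server already answers standard OpenAI-style requests. A minimal sketch of that check, assuming llama-server is running on localhost:8080 (the base URL, API key, and model name below are placeholders for your own deployment):

```python
# Sanity check: llama.cpp's llama-server exposes an OpenAI-compatible API
# (e.g. `llama-server -m model.gguf --port 8080` serves /v1/chat/completions).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # placeholder: your llama-server address
    api_key="not-needed",                 # llama.cpp does not require a real key by default
)

response = client.chat.completions.create(
    model="local-model",  # placeholder: llama-server typically serves whatever model it loaded
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```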
Reply:

The easiest way to use local models is with the Ollama NLP provider. However, you may wish to experiment with the LiteLLM NLP provider, as it supports custom OpenAI endpoints. For the latter, you will need to install from the development branch (per #538).
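For reference, this is the general LiteLLM calling convention that such a provider builds on; how Parlant's LiteLLM NLP provider surfaces these settings is covered in #538, so treat the sketch below as an illustration of the underlying mechanism rather than Parlant configuration. The model name and URL are placeholders:

```python
# LiteLLM can reach any OpenAI-compatible endpoint, such as llama-server.
# The "openai/" prefix tells LiteLLM to speak the generic OpenAI protocol.
import litellm

response = litellm.completion(
    model="openai/local-model",           # placeholder: generic OpenAI-compatible route
    api_base="http://localhost:8080/v1",  # placeholder: your llama-server URL
    api_key="not-needed",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```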