# llamacpphtmld
A web interface and API for the LLaMA large language model, based on the [llama.cpp](https://github.com/ggerganov/llama.cpp) runtime.
## Features
- Live streaming responses
- Continuation-based UI, supporting interrupt, modify, and resume
- Configure the maximum number of simultaneous users
- Works with any LLaMA model including [Vicuna](https://huggingface.co/eachadea/ggml-vicuna-13b-4bit)
- Bundled copy of llama.cpp, no separate compilation required
## Usage
All configuration should be supplied as environment variables:
```
LCH_MODEL_PATH=/srv/llama/ggml-vicuna-13b-4bit-rev1.bin \
LCH_NET_BIND=:8090 \
LCH_SIMULTANEOUS_REQUESTS=1 \
./llamacpphtmld
```
## API usage
```
curl -v -X POST -d '{"ConversationID": "", "APIKey": "", "Content": "The quick brown fox"}' 'http://localhost:8090/api/v1/generate'
```
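For programmatic access, a client can POST the same JSON body and read the streamed reply. The following is a minimal Python sketch, not part of the project itself: it assumes the `requests` library is installed and that generated text arrives as plain chunks in the HTTP response body; the exact wire format is not documented here, so adjust the reading loop if the server uses SSE or JSON lines instead.
```
import requests

# Hypothetical client sketch: assumes the server streams generated text
# as plain chunks in the response body.
payload = {
    "ConversationID": "",   # empty string starts a new conversation
    "APIKey": "",           # leave empty if no key is configured
    "Content": "The quick brown fox",
}

with requests.post(
    "http://localhost:8090/api/v1/generate",
    json=payload,
    stream=True,
    timeout=300,
) as resp:
    resp.raise_for_status()
    for chunk in resp.iter_content(chunk_size=None):
        # Print tokens as they arrive, mirroring the live-streaming UI.
        print(chunk.decode("utf-8", errors="replace"), end="", flush=True)
    print()
```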
## License
MIT