f5ba37a10b | api: simplify/combine the llama_eval branches                    | 2023-04-08 16:04:16 +12:00
0c96f2bf6b | api: support a MaxTokens parameter                               | 2023-04-08 16:03:59 +12:00
6c6a5c602e | api: llama_eval only needs to evaluate new tokens                | 2023-04-08 15:48:38 +12:00
2c11e32018 | api: don't log resulting tokens on backend                       | 2023-04-08 15:48:26 +12:00
2cdcf54dd8 | webui: synchronize context size value for clientside warning     | 2023-04-08 15:48:16 +12:00
dc8db75e04 | gitignore                                                        | 2023-04-08 15:31:24 +12:00
a7dd9580a5 | doc/README: initial commit                                       | 2023-04-08 15:30:37 +12:00
fa8db95cc6 | doc/license: add MIT license                                     | 2023-04-08 15:30:32 +12:00
d044a9e424 | initial commit                                                   | 2023-04-08 15:30:15 +12:00
7c6a0cdaa2 | llama.cpp: commit upstream files (as of rev 62cfc54f77e5190)     | 2023-04-08 15:30:02 +12:00