llama.cpp/examples
Latest commit: be87b6ed20 by Gary Linscott, 1 year ago

perplexity : add support for batch size to `--perplexity` (#407)

* Add support to batch size for perplexity
* Revert "Fix memory allocation issues and seg faults" (reverts commit 4870e455b3)
* update from merge
* Remove perplexity from main
* updates
* Update batch size for efficiency
| Name | Last commit | Age |
| --- | --- | --- |
| benchmark | fix whitespace (#944) | 1 year ago |
| embedding | Fix whitespace, add .editorconfig, add GitHub workflow (#883) | 1 year ago |
| main | Fix whitespace, add .editorconfig, add GitHub workflow (#883) | 1 year ago |
| perplexity | perplexity : add support for batch size to `--perplexity` (#407) | 1 year ago |
| quantize | Add enum llama_ftype, sync ggml_type to model files (#709) | 1 year ago |
| quantize-stats | llama : merge llama_internal.h into llama.h | 1 year ago |
| CMakeLists.txt | Add quantize-stats command for testing quantization (#728) | 1 year ago |
| Miku.sh | Fix whitespace, add .editorconfig, add GitHub workflow (#883) | 1 year ago |
| alpaca.sh | examples : add -n to alpaca and gpt4all scripts (#706) | 1 year ago |
| chat-13B.bat | Create chat-13B.bat (#592) | 1 year ago |
| chat-13B.sh | Move chat scripts into "./examples" | 1 year ago |
| chat.sh | If n_predict == -1, generate forever | 1 year ago |
| common.cpp | common : remove unnecessary includes (#947) | 1 year ago |
| common.h | Rewrite loading code to try to satisfy everyone: | 1 year ago |
| gpt4all.sh | examples : add -n to alpaca and gpt4all scripts (#706) | 1 year ago |
| reason-act.sh | add example of re-act pattern (#583) | 1 year ago |