Commit Graph

425 Commits (q4_0-q4_2-range-fix)

Author SHA1 Message Date
Ikko Eltociear Ashimine a717cba844
py: huggingface -> Hugging Face (#686) 1 year ago
rimoliga d0a7f742e7
readme: replace termux links with homepage, play store is deprecated (#680) 1 year ago
Slaren 0d054e292e Show error message when -f fails 1 year ago
Stephan Walter 3525899277
Enable -std= for cmake builds, fix warnings (#598) 1 year ago
slaren 1d08882afa
Optimize AVX2 ggml_vec_dot_q4_0 (#642) 1 year ago
perserk 02c5b27e91
Add AVX acceleration (#617)
* ggml : add AVX quantize_row_q4_0()

* ggml : add AVX ggml_vec_dot_q4_0()

* ggml : refactor AVX part of ggml_vec_dot_q4_0()

https://github.com/ggerganov/llama.cpp/pull/617#issuecomment-1489985645
1 year ago
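For context, the routines above vectorize q4_0 quantization. A minimal scalar sketch of what they compute is shown below, assuming the 32-element block layout with a single fp32 scale that ggml used at the time; the names and exact rounding are illustrative, not the actual AVX implementation.

// Scalar reference for q4_0-style block quantization (illustrative sketch;
// the AVX routines in this commit vectorize the same per-block loop).
#include <cmath>
#include <cstdint>

constexpr int QK = 32;                 // values per block (assumed q4_0 block size)

struct BlockQ4_0 {
    float   d;                          // per-block scale
    uint8_t qs[QK / 2];                 // 32 4-bit quants, two packed per byte
};

static void quantize_row_q4_0_ref(const float *x, BlockQ4_0 *y, int n) {
    for (int i = 0; i < n / QK; ++i) {
        float amax = 0.0f;             // largest |x| in the block
        for (int j = 0; j < QK; ++j) {
            amax = std::fmax(amax, std::fabs(x[i * QK + j]));
        }
        const float d  = amax / 7.0f;               // scale so quants fit in [-7, 7]
        const float id = (d != 0.0f) ? 1.0f / d : 0.0f;
        y[i].d = d;
        for (int j = 0; j < QK; j += 2) {
            const int q0 = (int) std::round(x[i * QK + j + 0] * id) + 8;
            const int q1 = (int) std::round(x[i * QK + j + 1] * id) + 8;
            y[i].qs[j / 2] = (uint8_t) (q0 | (q1 << 4));   // pack two nibbles per byte
        }
    }
}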
Pavol Rusnak cbef542879 py : cleanup the code
- use f-strings where possible
- drop first param of encode/decode functions since "utf-8" is the default
1 year ago
Pavol Rusnak 9733104be5 drop quantize.py (now that models are using a single file) 1 year ago
Georgi Gerganov 3df890aef4
readme : update supported models 1 year ago
Justine Tunney ee0c40dd6d Introduce GGML migration tool for new file format
If you deleted your old Meta LLaMA .pth files, then the
migrate-ggml-2023-03-30-pr613.py script will allow you to convert your
old ggml files into the new mmap()'able format.

See #613
1 year ago
Justine Tunney 6f23ba5ee2 Ensure --mlock works properly with mmap() support 1 year ago
Justine Tunney 78ca9838ee Make loading weights 10-100x faster
This is a breaking change that's going to give you three benefits:

1. Your inference commands should load 100x faster
2. You may be able to safely load models 2x larger
3. You can run many concurrent inference processes

This was accomplished by changing the file format so we can mmap()
weights directly into memory without having to read() or copy them.
This ensures, first, that the kernel can make its file cache pages
directly accessible to our inference processes, and second, that those
pages are much less likely to get evicted (which would force loads to
hit disk), because they're no longer competing with memory pages that
were needlessly created by gigabytes of standard I/O.

The new file format supports single-file models like LLaMA 7B, and
it also supports multi-file models like LLaMA 13B. Our Python tool
now merges the foo.1, foo.2, etc. files back into a single file so
that the C++ code which maps it doesn't need to reshape data every
time. That's made llama.cpp so much simpler. Much of its load code
has now been deleted.

Furthermore, this change ensures that tensors are aligned properly
on a 32-byte boundary. That opens the door to seeing if we can get
additional performance gains on some microprocessors, by using ops
that require memory alignment.

Lastly, note that both POSIX and Windows platforms are supported.

Fixes #91
1 year ago
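A minimal POSIX-only sketch of the mmap() approach described above, kept separate from llama.cpp's actual loader; the file name and error handling are illustrative assumptions.

// POSIX-only sketch of the mmap() idea: map the model file read-only and
// let the kernel's page cache back the tensor data directly, with no read()
// or copy. File name and error handling are illustrative, not llama.cpp code.
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const char *path = "ggml-model-f16.bin";   // hypothetical model file
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); close(fd); return 1; }

    // Pages are faulted in lazily and shared between concurrent processes.
    void *addr = mmap(nullptr, (size_t) st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) { perror("mmap"); close(fd); return 1; }
    close(fd);                                   // the mapping stays valid after close

    // Optionally pin the pages so they are never evicted (cf. --mlock):
    // mlock(addr, (size_t) st.st_size);

    // Because tensor data in the new format sits at 32-byte-aligned offsets,
    // and the mapping base is page aligned, tensors can be used in place by
    // SIMD ops that require aligned loads.
    std::printf("mapped %lld bytes at %p\n", (long long) st.st_size, addr);

    munmap(addr, (size_t) st.st_size);
    return 0;
}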
Slaren a017390358 Initial windows support (untested) 1 year ago
Slaren ac184d5147 Always initialize mm_addr and mm_length in llama_model 1 year ago
Slaren 276e5b7811 Unmap the file in llama_free 1 year ago
Slaren d68c5dc435 Make mmap_file static 1 year ago
Slaren 64bde3ffd4 Fix ggml_init_params in quantize 1 year ago
Slaren c03ae8dca1 Add mmap support for model files 1 year ago
Stephan Walter 3bcc129ba8
cmake : properly invoke CTest (#629) 1 year ago
Casey Primozic a4755cf288
Remove unused variable (#607)
* It seems some new warnings were added recently that exposed this. I wrote the code that included this unused variable originally, and it is indeed not needed.
1 year ago
david raistrick 1f0414feec
make : fix darwin f16c flags check (#615)
...there was no check. Ported upstream from https://github.com/zanussbaum/gpt4all.cpp/pull/2 (I don't see any clean path for upstream patches)
1 year ago
Georgi Gerganov 77efdf5a50
ggml : fix NEON signs (close #620, #622) 1 year ago
slaren ed3c680bcd
Fix GGML_F32Cx8_STORE in AVX without F16C path (#619) 1 year ago
anzz1 9cbc404ba6
ci : re-enable AVX512 testing (Windows-MSVC) (#584)
* CI: Re-enable AVX512 testing (Windows-MSVC)

Now with 100% less base64 encoding

* plain __cpuid is enough here
1 year ago
Georgi Gerganov b51c717d5c
ggml : init time on first ggml_init() call 1 year ago
Georgi Gerganov 0ba76c1e73
llama : fix compile warnings when reading the vocab 1 year ago
Georgi Gerganov cea1c85948
ggml : add ARM_NEON dequantize_row_q4_1() 1 year ago
Georgi Gerganov f202ada131
ggml : add ARM_NEON quantize_row_q4_1() 1 year ago
Georgi Gerganov 3b44d30d9b
ggml : add ARM_NEON ggml_vec_dot_q4_1() 1 year ago
Pavol Rusnak 61cbfff5c9
rename convert_ggml_to_pth.py -> convert-ggml-to-pth.py (#600)
to match filenames of other converters
1 year ago
Thérence d9ad104440
Create chat-13B.bat (#592)
* Create chat-13B.bat

Same script as chat-13B.sh, but for Windows users.
Tested and working on Windows 10/11 v22H2

* Apply suggestions from code review

---------

Co-authored-by: anzz1 <anzz1@live.com>
1 year ago
Georgi Gerganov b467702b87
readme : fix typos 1 year ago
Georgi Gerganov 516d88e75c
readme : add GPT4All instructions (close #588) 1 year ago
Georgi Gerganov 53635c081c
py : add GPT4All conversion script
For now: copy-paste.
Deduplicating the Python code would take too much time.
1 year ago
Maël Kerbiriou 41318d708e
llama : use the same threshold for OpenBLAS and ggml thread limiting (#577) 1 year ago
Tobias Lütke a6956b25a1
add example of re-act pattern (#583)
* add example of re-act pattern

* spelling...

* fixed whitespace in reverse prompt issue
1 year ago
anzz1 83df5639eb
Fix GCC warning about binary literal (#595)
0b10101010 -> 0xAA /* 0b10101010 */
1 year ago
anzz1 a5c42c4b13
Fix typo in llama.h (#593) 1 year ago
anzz1 5a5f8b1501
Enable Fused-Multiply-Add (FMA) and F16C/CVT16 vector extensions on MSVC (#375)
* Enable Fused-Multiply-Add (FMA) instructions on MSVC

__FMA__ macro does not exist in MSVC

* Enable F16C/CVT16 vector extensions on MSVC

__F16C__ macro does not exist in MSVC, but is implied with AVX2/AVX512

* MSVC cvt intrinsics

* Add __SSE3__ macro for MSVC too because why not

even though it's not currently used for anything when AVX is defined
1 year ago
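The shim this commit describes can be sketched roughly as the preprocessor block below; the exact guards in ggml may differ, and the macro values here are assumptions.

// Sketch: MSVC never defines __FMA__ or __F16C__, but both features are
// implied by AVX2/AVX512, so define the GCC/Clang-style macros ourselves
// when building with /arch:AVX2 (MSVC does define __AVX2__ in that case).
#if defined(_MSC_VER) && defined(__AVX2__)
    #ifndef __FMA__
    #define __FMA__ 1
    #endif
    #ifndef __F16C__
    #define __F16C__ 1
    #endif
    #ifndef __SSE3__
    #define __SSE3__ 1   // not strictly needed when AVX is defined; added for symmetry
    #endif
#endif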
anzz1 f1217055ea
CI: fix subdirectory path globbing (#546)
- Changes in subdirectories will now be detected properly
- (Windows-MSVC) AVX512 tests temporarily disabled
1 year ago
anzz1 7f4c5c6651
llama : fix linkage with mingw (#551)
* Revert 7e53955 (#542)

Still needs to be fixed properly

* Fix linking on mingw32
1 year ago
slaren 2a98bc18ea
ggml : add AVX2 implementation of quantize_row_q4_1 (#515)
* Add AVX2 implementation of quantize_row_q4_1

* Actually use AVX2

* Make quantize_row_q4_1 static

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
1 year ago
thement d0aaff571c
py : add temporary script to convert old ggml files to newer version (#539)
Co-authored-by: Jakub Horak <jakub.horak@ibawizard.net>
1 year ago
Tai Duc Nguyen d0330fd783
py : add capability to convert from ggml back to torch or hf format for further consumption/training/finetuning (#403) 1 year ago
Stephan Walter 99c5b27654
ggml : refactor quantized processing functions (#509)
* Refactor quantized processing functions

* ggml : minor

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
1 year ago
DooWoong Lee (David) 692ce3164e
py : removed unused `model` variable; verified that the code functions correctly with the `vocab_only` setting and uses less memory now that the no-longer-needed variable is deleted. (#547) 1 year ago
Georgi Gerganov 96f9c0506f
ci : make ctest verbose, hopefully we see what is wrong with the sanitizer 1 year ago
Georgi Gerganov d502bc7c9d
tests : free llama context at the end of the test 1 year ago
Stephan Walter 436e561931
all : be more strict about converting float to double (#458)
* Be more strict about converting float to double

* Test equivalence of round, SILU implementations

Test module is commented out in CMakeLists.txt because the tests may
take a long time, depending on how much the compiler optimizes.

* Fix softmax in perplexity.cpp

* all : prefer float over double where appropriate

* perplexity : add <cmath>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
1 year ago
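As a small illustration of the float/double strictness above (not code from the PR), the SiLU mentioned in the message is a typical case: a double literal silently promotes the whole expression.

#include <cmath>

// The double literal (1.0) promotes the whole expression to double, and the
// result is narrowed back to float on return; the stricter checks flag this.
float silu_promoting(float x) {
    return x / (1.0 + std::exp(-x));
}

// A float literal keeps the arithmetic in single precision throughout,
// since std::exp has a float overload.
float silu_strict(float x) {
    return x / (1.0f + std::exp(-x));
}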
Jed Fox 20e1e84884
deploy : add a Package.swift for SwiftPM support (#393)
* Add a Package.swift for SwiftPM support

* Swap from exclusions to allowlist
1 year ago