Commit Graph

510 Commits (master)
 

Author SHA1 Message Date
Benjamin Lecaillon a90e96b266
Convert.py @staticmethod (#1327)
* Line 698 needs a @staticmethod decorator; without it, unpickling throws an error because the member is not callable

* Update convert.py

---------

Co-authored-by: Ivan Stepanov <ivanstepanovftw@gmail.com>
12 months ago
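The decorator issue above can be illustrated with a minimal sketch (the `Loader` class and payload here are illustrative, not the actual convert.py code): a plain function assigned in a class body becomes a bound method on attribute access, so calling it through an instance injects the instance as an extra first argument.

```python
import pickle

class Loader:
    # Without @staticmethod, `Loader().load(payload)` would pass the
    # instance as the first argument and raise a TypeError, because a
    # plain function in a class body becomes a bound method.
    @staticmethod
    def load(data):
        return pickle.loads(data)

payload = pickle.dumps({"n_vocab": 32000})
result = Loader().load(payload)  # works the same via class or instance
```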
slaren 94c5652fc0
quantize: make output filename optional, default to ggml-model-<ftype>.bin (#1301) 12 months ago
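A hedged sketch of the new default described above (the helper name is illustrative; the real logic lives in the C++ quantize tool): when no output filename is given, derive it from the ftype string.

```python
import os

def default_outfile(model_dir: str, ftype: str) -> str:
    # when quantize is not given an output filename, default to
    # ggml-model-<ftype>.bin next to the model (illustrative helper)
    return os.path.join(model_dir, f"ggml-model-{ftype}.bin")
```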
Ivan Stepanov 34d9f22f44
Wrap exceptions in std::exception for verbose output on exception. (#1316) 12 months ago
Ivan Stepanov d3e8093e9b
convert: support DT_BF16 tensors (#1309)
Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
12 months ago
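Supporting DT_BF16 tensors in the converter comes down to a widening trick that can be sketched in a few lines (a minimal NumPy illustration, not the actual convert.py code):

```python
import numpy as np

def bf16_to_fp32(raw: np.ndarray) -> np.ndarray:
    # bfloat16 keeps the top 16 bits of an IEEE-754 float32, so
    # widening is just a shift into the high half of a uint32
    return (raw.astype(np.uint32) << 16).view(np.float32)
```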
44670 360cfe5bec
readme : add OpenBuddy link (#1321) 12 months ago
44670 2edbdb0f99
main : add --in-suffix option (#1318)
* adding --in-suffix option

* print input suffix before generation
12 months ago
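The `--in-suffix` option mirrors the existing `--in-prefix`; a minimal sketch of the assembly step (simplified — main.cpp does this in C++, and the function name here is illustrative):

```python
def build_prompt(user_input: str, in_prefix: str = "", in_suffix: str = "") -> str:
    # --in-prefix is prepended and --in-suffix appended to each
    # interactive input before it is fed to the model
    return f"{in_prefix}{user_input}{in_suffix}"
```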
Ron Jailall 20fbf2a2a0
ggml : change immintrin.h to intrin.h for compatibility (#1307)
* change immintrin.h to intrin.h for compatibility

Building on Windows 11 ARM throws an error on this line. It seems using intrin.h covers both x86 and ARM

* conditional def of intrin.h

* fix typo in ggml.c
12 months ago
DannyDaemonic db1080876a
Only escape prompts when used with `-e` (#1311) 12 months ago
DannyDaemonic c65a7fbfa9
Update main's README.md with new features (#1296) 12 months ago
Tomas f647ce040f
fix #1224 reverse prompt and multi line (#1297)
* fix reverse prompt and multi line

* Code Formatting

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
12 months ago
Georgi Gerganov 799fdc1b5d
ggml : vectorize Q8_0 quantization
https://github.com/ggerganov/ggml/pull/127#issuecomment-1533648531
1 year ago
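The commit above vectorizes Q8_0 quantization; a scalar Python reference of the same math may help (this sketch assumes the standard ggml Q8_0 scheme of one float scale per block of values, with weights stored as int8):

```python
import numpy as np

def quantize_q8_0_block(block: np.ndarray):
    # one float scale per block; weights stored as int8 in [-127, 127]
    amax = float(np.abs(block).max())
    d = amax / 127.0
    if d == 0.0:
        return 0.0, np.zeros(block.shape, dtype=np.int8)
    return d, np.round(block / d).astype(np.int8)
```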
khimaros 6daa09d879
examples : read chat prompts from a template file (#1196) 1 year ago
Georgi Gerganov bca9ad938a
minor : fix whitespaces (#1302) 1 year ago
Georgi Gerganov e2a937ca6a
minor : fix trailing whitespaces 1 year ago
KASR b0c71c7b6d
scripts : platform independent script to verify sha256 checksums (#1203)
* python script to verify the checksum of the llama models

Added a Python script for verifying the SHA256 checksums of files in a directory, which can run on multiple platforms. Improved the formatting of the output results for better readability.

* Update README.md

Update the README for improved readability and to explain the usage of the Python checksum verification script

* update the verification script

I've extended the script based on suggestions by @prusnak

The script now checks the available RAM; if there is enough to check the file at once, it will do so. If not, the file is read in chunks.

* minor improvement

small change so that the available RAM is checked rather than the total RAM

* remove the part of the code that reads the file at once if enough ram is available

Based on suggestions from @prusnak, I removed the part of the code that checks whether the user has enough RAM to read the entire model at once. The file is now always read in chunks.

* Update verify-checksum-models.py

quick fix to pass the git check
1 year ago
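The chunked-reading approach the commit above settles on can be sketched with the standard library (a minimal illustration; function name and chunk size are assumptions, not the script's actual code):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    # stream the file in fixed-size chunks so multi-GB models never
    # have to fit in RAM; the digest is updated incrementally
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```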
CRD716 a8a2efdc81
examples : various prompt and example fixes (#1298)
* fix dan.txt

* miku prompt improvements

* use common characters
1 year ago
Evan Jones e216aa0463
llama : only copy used KV cache in get / set state (#1272)
* llama : only copy used KV cache in get / set state

* switch to ggml for copying k, v

* avoid designated initializers
1 year ago
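The idea behind copying only the used KV cache can be sketched as a slice rather than a full-buffer copy (shapes and names here are illustrative; the real code operates on ggml tensors in C++):

```python
import numpy as np

def copy_used_kv(cache: np.ndarray, n_used: int) -> np.ndarray:
    # session state only needs the first n_used positions of the
    # cache, not the whole context-length buffer
    return cache[:, :n_used].copy()
```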
DannyDaemonic 2485d7a4d3
Process escape sequences given in prompts (#1173) 1 year ago
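Processing escape sequences in prompts needs a single left-to-right pass; a naive chain of `str.replace` calls decodes `\\n` as a newline instead of backslash + n. A minimal sketch (the supported escape set here is an assumption, not the exact C++ table):

```python
_ESCAPES = {"n": "\n", "t": "\t", "\\": "\\", '"': '"', "'": "'"}

def process_escapes(s: str) -> str:
    # one pass: an escaped backslash consumes both characters, so the
    # following character is never re-interpreted as an escape
    out, i = [], 0
    while i < len(s):
        if s[i] == "\\" and i + 1 < len(s) and s[i + 1] in _ESCAPES:
            out.append(_ESCAPES[s[i + 1]])
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)
```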
DannyDaemonic 13b0c68ed7
Handle signals properly on Windows (#1123) 1 year ago
DannyDaemonic 55bc5f0900
Call sh on build-info.sh (#1294) 1 year ago
kuvaus 9daff419f6
fix build-info.h for git submodules (#1289)
* make git build info work with submodules

---------

Co-authored-by: Green Sky <green@g-s.xyz>
1 year ago
slaren bf4b22ffe4
fix missing parameters in `llama_init_from_gpt_params` (#1293) 1 year ago
Ron Evans 67c77799e0
examples : add llama_init_from_gpt_params() common function (#1290)
Signed-off-by: deadprogram <ron@hybridgroup.com>
1 year ago
Georgi Gerganov 0e6cbff1b7
llama : fix compile warnings 1 year ago
Georgi Gerganov 5d5817ca60
ggml : fix 32-bit ARM 1 year ago
Ron Evans 8c9be35ff9
examples : improve vertical alignment of a few variables (#1286)
Signed-off-by: deadprogram <ron@hybridgroup.com>
1 year ago
Marvin Gießing cc0bb7235c
ggml : fix ppc64le build error and make cmake detect Power processors (#1284)
* Fix ppc64le build issue

* Added support to detect ppc64* processors
1 year ago
Robert Brisita 2bb992f034
llama : allow 0 as a seed number. (#1275) 1 year ago
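Allowing 0 as a seed means only negative values should trigger randomization; a hedged sketch of the changed check (names illustrative, and the time-based fallback is an assumption about the usual pattern, not the exact C++ code):

```python
import time

def resolve_seed(seed: int) -> int:
    # 0 is a valid fixed seed; only a negative value (e.g. a -1
    # default) requests a time-based random seed
    return int(time.time()) if seed < 0 else seed
```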
Ron Evans e2cd506999
main : switch input_noecho to input_echo to remove negation (#979)
Signed-off-by: deadprogram <ron@hybridgroup.com>
1 year ago
slaren 2d099e5193
ggml: add names to tensors (#1268)
* ggml: add names to tensors

* minor improvements to dot file formatting
1 year ago
DannyDaemonic f4cef87edf
Add git-based build information for better issue tracking (#1232)
* Add git-based build information for better issue tracking

* macOS fix

* "build (hash)" and "CMAKE_SOURCE_DIR" changes

* Redo "CMAKE_CURRENT_SOURCE_DIR" and clearer build messages

* Fix conditional dependency on missing target

* Broke out build-info.cmake, added a find_package fallback, added build info to all examples, and added dependencies to the Makefile

* 4 space indenting for cmake, attempt to clean up my mess in Makefile

* Short hash, less fancy Makefile, and don't rewrite build-info.h if its contents wouldn't change
1 year ago
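The core of the build-info mechanism is asking git for the current commit and tolerating environments where that fails (tarballs, submodules without `.git` directories). A Python sketch of the idea (the real logic is in CMake/Make; the fallback string is an assumption):

```python
import subprocess

def build_commit() -> str:
    # short hash of HEAD for embedding in build info; fall back to
    # a placeholder outside a git checkout
    try:
        return subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"],
            stderr=subprocess.DEVNULL, text=True,
        ).strip()
    except (OSError, subprocess.CalledProcessError):
        return "unknown"
```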
slaren 58b367c2d7
cuBLAS: refactor and optimize f16 mat mul performance (#1259)
* cuBLAS: refactor, convert fp16 to fp32 on device

* cuBLAS: use multiple streams, choose smartly between mul_mat_q and mul_mat_f16

* fix build

* cuBLAS: update block_q5_1
1 year ago
xloem ea3a0ad6b6
llama : update stubs for systems without mmap and mlock (#1266)
Co-authored-by: John Doe <john.doe@example.com>
1 year ago
Kerfuffle 2bdc09646d
ggml : fix ggml_used_mem() (#1264) 1 year ago
Georgi Gerganov 70269cae37
llama : fix session load / save (#1263) 1 year ago
slaren b925f1f1b0
cuBLAS: fall back to pageable memory if pinned alloc fails (#1233)
* cuBLAS: fall back to pageable memory if pinned alloc fails

* cuBLAS: do not use pinned memory if env variable GGML_CUDA_NO_PINNED is set
1 year ago
Alex Klinkhamer 90b19bd6ee
llama : let context be const when accessing const data (#1261) 1 year ago
Georgi Gerganov 7ff0dcd320
ggml : fix UB (int << 31) 1 year ago
Pavol Rusnak 6f79699286
build: add armv{6,7,8} support to cmake (#1251)
- flags copied from Makefile
- updated comments in both CMakeLists.txt and Makefile to match reality
1 year ago
jon-chuang a5d30b1f53
common : better default number of threads (#934)
* commit

* fix

* try-catch

* apply code review

* improve

* improve

* add macos headers

* done

* remove color

* fix windows

* minor

* fix

* Apply suggestions from code review

Co-authored-by: DannyDaemonic <DannyDaemonic@gmail.com>

* remove

* minor

* minor

---------

Co-authored-by: jon-chuang <jon-chuang@users.noreply.github.com>
Co-authored-by: DannyDaemonic <DannyDaemonic@gmail.com>
1 year ago
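A hedged Python sketch of the thread-count idea (the real implementation queries platform APIs in C++; Python's stdlib only exposes the logical count, so halving it is a rough stand-in for counting physical cores):

```python
import os

def default_n_threads() -> int:
    # prefer physical cores: extra hyper-threads rarely help
    # compute-bound matrix multiplication, so halve the logical
    # count as a crude approximation on SMT machines
    logical = os.cpu_count() or 1
    return max(1, logical // 2)
```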
0cc4m 76a884920a
ggml : add CLBlast q5_0, q5_1, q8_0 dequant kernels (#1225)
* Implement q5_0, q5_1 and q8_0

* Work around q5_0 OpenCL issue

* Fix q8_0 dequant kernel

* Move cl kernels into ggml-opencl.c

* Use two memcpy calls for q5_0 buffer transfer
1 year ago
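The commit above ports dequantization kernels to OpenCL; for the simplest of the three formats (q8_0), a scalar Python reference of what a dequant kernel computes per block:

```python
import numpy as np

def dequantize_q8_0_block(d: float, qs: np.ndarray) -> np.ndarray:
    # inverse of Q8_0 quantization: scale each int8 weight by the
    # block scale to recover an approximate float32 value
    return d * qs.astype(np.float32)
```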
Georgi Gerganov 6bc4400e67
ggml : add Q5 WASM SIMD + GGML_FTYPE 1 year ago
Stephan Walter f0d70f147d
Various fixes to mat_mul benchmark (#1253) 1 year ago
Georgi Gerganov 3e5aa8a1c4
ggml : fix labels for GGML_OP_ALIBI 1 year ago
Georgi Gerganov c3ca7a5f05
ggml : fix 32-bit ARM NEON 1 year ago
Georgi Gerganov e8c051611a
ggml : use vzip instead of vuzp for consistency 1 year ago
Georgi Gerganov 0b5a935099
ggml : fix visibility and unused warnings 1 year ago
Georgi Gerganov ec728e44d7
ggml : fix #if for f32_f32 mul_mat (CLBlast) (#1229) 1 year ago
Georgi Gerganov 214b6a3570
ggml : adjust mul_mat_f16 work memory (#1226)
* llama : minor - remove explicit int64_t cast

* ggml : reduce memory buffer for F16 mul_mat when not using cuBLAS

* ggml : add asserts to guard for incorrect wsize
1 year ago
Georgi Gerganov 305eb5afd5
build : fix reference to old llama_util.h 1 year ago