Commit Graph

352 Commits (489093548c89c67520109ab25c4df4a4614a32a0)

Author SHA1 Message Date
Pavol Rusnak 489093548c
py : bump sentencepiece to 0.1.98 to support Python 3.11 (#976) 1 year ago
Stephan Walter 93265e988a
make : fix dependencies, use auto variables (#983) 1 year ago
Pavol Rusnak c56b715269
Expose type name from ggml (#970)
Avoid duplication of type names in utils

Co-authored-by: Håkon H. Hitland <haakon@likedan.net>
1 year ago
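The de-duplication described in this commit suggests a single lookup table behind an accessor, so callers never repeat the type-name strings. A minimal sketch of that pattern, with hypothetical enum members and names (the real API lives in ggml.h/ggml.c):

```c
// Sketch only: one table of names, exposed through an accessor.
// Enum members and table contents here are illustrative assumptions.
enum ggml_type_sketch { TYPE_F32, TYPE_F16, TYPE_Q4_0, TYPE_Q4_1, TYPE_COUNT };

static const char * TYPE_NAME[TYPE_COUNT] = { "f32", "f16", "q4_0", "q4_1" };

const char * type_name(enum ggml_type_sketch t) {
    return TYPE_NAME[t];
}
```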
Tomáš Pazdiora f4d277ae17
main : alternative instruct mode (Vicuna support, etc.) (#863)
* Add support for configs, add configurable prefixes / suffixes, deprecate instruct mode, add stop prompt

* Add multiline mode, update text input.

* bugfix

* update implementation

* typos

* Change --multiline implementation to be toggled by EOF.

* bugfix

* default multiline mode

* add more configs

* update formatting

* update formatting

* apply suggestions
1 year ago
Kerfuffle c9a59b70a5
ggml : add unary and binary map operations (#874)
* GGML map ops proof of concept.

* Various cleanups.

Add handling for task setting.

Add handling for ggml_compute_backward.

Rename functions to ggml_map_unary_f32 and ggml_map_binary_f32

Fix compiler warnings related to casting function pointers and `void *`

Reorder functions and definitions based on the GGML op number.

Use typedefs for map op function pointer types.

* Fix position of map ops cases in ggml_compute_forward
1 year ago
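A minimal sketch of the typedef-based map-op interface this commit describes: a function-pointer type for the element-wise operation, plus a user-supplied op. The exact ggml signatures are an assumption here.

```c
#include <stdio.h>

// Function-pointer typedefs shaped like the commit describes (assumed).
typedef void (*unary_op_f32_t) (const int n, float * dst, const float * src);
typedef void (*binary_op_f32_t)(const int n, float * dst, const float * src0, const float * src1);

// Example user-supplied unary op: element-wise square.
static void op_square_f32(const int n, float * dst, const float * src) {
    for (int i = 0; i < n; ++i) {
        dst[i] = src[i] * src[i];
    }
}

int main(void) {
    float src[4] = {1, 2, 3, 4}, dst[4];
    unary_op_f32_t op = op_square_f32;
    op(4, dst, src);
    printf("%.0f\n", dst[3]); // prints 16
    return 0;
}
```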
Pavol Rusnak a32f7acc9f
py : cleanup dependencies (#962)
after #545 we do not need torch, tqdm and requests in the dependencies
1 year ago
Pavol Rusnak 43ffdefb74
py : fix flake8 and isort nitpicks (#960) 1 year ago
Georgi Gerganov 1623a6e9b4
ggml : minor 1 year ago
Georgi Gerganov c14e0d2f23
ggml : always allocate buffers with size multiple of GGML_MEM_ALIGN 1 year ago
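Rounding a size up to a multiple of an alignment is a one-liner; a sketch of what "size multiple of GGML_MEM_ALIGN" means in practice (the value 16 is an assumption, the real constant is defined in ggml.c):

```c
#include <stddef.h>

#define GGML_MEM_ALIGN 16 // assumed value

// Round a requested buffer size up to the next multiple of GGML_MEM_ALIGN.
static size_t padded_size(size_t size) {
    return ((size + GGML_MEM_ALIGN - 1) / GGML_MEM_ALIGN) * GGML_MEM_ALIGN;
}
```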
comex 723dac55fa
py : new conversion script (#545)
Current status: Working, except for the latest GPTQ-for-LLaMa format
  that includes `g_idx`.  This turns out to require changes to GGML, so
  for now it only works if you use the `--outtype` option to dequantize it
  back to f16 (which is pointless except for debugging).

  I also included some cleanup for the C++ code.

  This script is meant to replace all the existing conversion scripts
  (including the ones that convert from older GGML formats), while also
  adding support for some new formats.  Specifically, I've tested with:

  - [x] `LLaMA` (original)
  - [x] `llama-65b-4bit`
  - [x] `alpaca-native`
  - [x] `alpaca-native-4bit`
  - [x] LLaMA converted to 'transformers' format using
        `convert_llama_weights_to_hf.py`
  - [x] `alpaca-native` quantized with `--true-sequential --act-order
        --groupsize 128` (dequantized only)
  - [x] same as above plus `--save_safetensors`
  - [x] GPT4All
  - [x] stock unversioned ggml
  - [x] ggmh

  There's enough overlap in the logic needed to handle these different
  cases that it seemed best to move to a single script.

  I haven't tried this with Alpaca-LoRA because I don't know where to find
  it.

  Useful features:

  - Uses multiple threads for a speedup in some cases (though the Python
    GIL limits the gain, and sometimes it's disk-bound anyway).

  - Combines split models into a single file (both the intra-tensor split
    of the original and the inter-tensor split of 'transformers' format
    files).  Single files are more convenient to work with and more
    friendly to future changes to use memory mapping on the C++ side.  To
    accomplish this without increasing memory requirements, it has some
    custom loading code which avoids loading whole input files into memory
    at once.

  - Because of the custom loading code, it no longer depends on PyTorch,
    which might make installing dependencies slightly easier or faster...
    although it still depends on NumPy and sentencepiece, so I don't know
    if there's any meaningful difference.  In any case, I also added a
    requirements.txt file to lock the dependency versions in case of any
    future breaking changes.

  - Type annotations checked with mypy.

  - Some attempts to be extra user-friendly:

      - The script tries to be forgiving with arguments, e.g. you can
        specify either the model file itself or the directory containing
        it.

      - The script doesn't depend on config.json / params.json, just in
        case the user downloaded files individually and doesn't have those
        handy.  But you still need tokenizer.model and, for Alpaca,
        added_tokens.json.

      - The script tries to give a helpful error message if
        added_tokens.json is missing.
1 year ago
Georgi Gerganov 0f07cacb05
ggml : fix q4_1 dot product types 1 year ago
Howard Su c5d70f5c9e
ggml : optimize rope function to avoid calling powf in the tight loop (#807) 1 year ago
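The optimization exploits the recurrence theta_scale^(k+1) = theta_scale^k * theta_scale, so the loop needs one multiply per iteration instead of a powf call. A sketch of the idea (not the PR's code):

```c
// Before (sketch): powf on every iteration of the tight loop.
//   theta = p * powf(theta_scale, (float) k);
// After: carry the running power forward with a single multiply.
void compute_thetas(float * theta, int n, float p, float theta_scale) {
    float t = p;
    for (int k = 0; k < n; ++k) {
        theta[k] = t;
        t *= theta_scale;
    }
}
```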
Gary Linscott be87b6ed20
perplexity : add support for batch size to `--perplexity` (#407)
* Add support to batch size for perplexity

* Revert "Fix memory allocation issues and seg faults"

This reverts commit 4870e455b3.

* update from merge

* Remove perplexity from main

* updates

* Update batch size for efficiency
1 year ago
CRD716 0e07e6a839
common : remove unnecessary includes (#947) 1 year ago
Georgi Gerganov a3a2a0eda8
ggml : add GGML_DEFAULT_N_THREADS 1 year ago
Georgi Gerganov d990e3fffc
ggml : speed-up ggml_vec_dot_q4_1() ARM_NEON + 32-bit ARM support (#900)
* ggml : speed-up q4_1 ARM_NEON by ~5%

* ggml : implement vaddvq when missing

* ggml : implement vminvq and vmaxvq when missing

* ggml : implement vzip when missing

* ggml : fix comment

* ggml : try to use correct ifdef
1 year ago
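"Implement vaddvq when missing" means supplying the horizontal-add intrinsic on toolchains that lack it, such as 32-bit ARM. A sketch of one such fallback; the guard macro is an assumption, and ggml's real check may differ:

```c
#include <arm_neon.h>

// Provide vaddvq_f32 (horizontal add of a float32x4_t) where unavailable.
#if !defined(__aarch64__)
static inline float vaddvq_f32(float32x4_t v) {
    return vgetq_lane_f32(v, 0) + vgetq_lane_f32(v, 1) +
           vgetq_lane_f32(v, 2) + vgetq_lane_f32(v, 3);
}
#endif
```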
Georgi Gerganov 9190e8eac8
llama : merge llama_internal.h into llama.h
Hide it behind an #ifdef
1 year ago
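A sketch of the pattern: internal declarations live in llama.h but compile only when a consumer opts in. The macro and function names below are hypothetical.

```c
#ifdef LLAMA_API_INTERNAL
struct llama_context; // forward declaration
// internal-only hook, invisible to normal API consumers (hypothetical name)
void llama_internal_dump_tensors(struct llama_context * ctx);
#endif
```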
Georgi Gerganov c85980acd0
gitignore : benchmark 1 year ago
Stephan Walter 6232f2d7fd
ggml : optimize non-SIMD Q4_0 vector dot product (#703) 1 year ago
Pavol Rusnak 6c248707f5
ggml : introduce GGML_ALIGNED_MALLOC/GGML_ALIGNED_FREE macros (#884)
which allow us to use the aligned_alloc or _aligned_malloc functions
1 year ago
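A sketch of the platform dispatch the commit describes; the real definitions are in ggml.c and may differ in detail:

```c
#include <stdlib.h>

#define GGML_MEM_ALIGN 16 // assumed value

#ifdef _WIN32
    #include <malloc.h>
    #define GGML_ALIGNED_MALLOC(size) _aligned_malloc(size, GGML_MEM_ALIGN)
    #define GGML_ALIGNED_FREE(ptr)    _aligned_free(ptr)
#else
    #define GGML_ALIGNED_MALLOC(size) aligned_alloc(GGML_MEM_ALIGN, size)
    #define GGML_ALIGNED_FREE(ptr)    free(ptr)
#endif
```

Note that C11 aligned_alloc requires the size to be a multiple of the alignment, which is what the earlier commit c14e0d2f23 (padding buffer sizes to GGML_MEM_ALIGN) guarantees.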
CRD716 8cda5c981d
fix whitespace (#944) 1 year ago
CRD716 ec29272175
readme : remove python 3.10 warning (#929) 1 year ago
Genkagaku.GPT 7e941b95eb
readme : llama node binding (#911)
* chore: add nodejs binding

* chore: add nodejs binding
1 year ago
Pavol Rusnak c729ff730a
flake.nix: add all binaries from bin (#848) 1 year ago
Judd 4579af95e8
zig : update build.zig (#872)
* update

* update readme

* minimize the changes.

---------

Co-authored-by: zjli2019 <zhengji.li@ingchips.com>
1 year ago
Vladimir 8c3ffc2f04
ggml : update cblas_sgemm columns var to be more reasonable (#838) 1 year ago
niansa/tuxifan 107980d970
examples : add -n to alpaca and gpt4all scripts (#706) 1 year ago
anzz1 585d91a156
cmake : add explicit F16C option (x86) (#576)
Fixes building for x86 processors missing the F16C featureset
MSVC is not included, since in MSVC F16C is implied by AVX2/AVX512
1 year ago
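F16C provides hardware half-to-single-precision conversion instructions. A sketch of what the featureset enables (compile with -mf16c on gcc/clang; this is illustrative, not code from the PR):

```c
#include <immintrin.h>

// Convert 8 half-precision floats to 8 single-precision floats using F16C.
void f16_to_f32_x8(const unsigned short * src, float * dst) {
    __m128i h = _mm_loadu_si128((const __m128i *) src); // load 8 halves
    __m256  f = _mm256_cvtph_ps(h);                     // convert to 8 floats
    _mm256_storeu_ps(dst, f);                           // store the results
}
```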
SebastianApel 95ea26f6e9
benchmark : add tool for timing q4_0 matrix multiplication (#653)
* Initial version of q4_0 matrix multiplication benchmark

* Bugfix: Added dependency to ggml.o to benchmark

* Reviewer requests: added parameter for threads, switched to ggml_time_us()

* Reviewer input: removed rdtsc, use epsilon for check

* Review comment: Removed set_locale

* Feature: Param for number of iterations, Bugfix for use of parameter threads

* Reviewer suggestion: Moved to examples

* Reviewer feedback: Updated clean: and benchmark: sections

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
1 year ago
Pavol Rusnak 82d146df9b
do not force the prompt file to end with a new line (#908) 1 year ago
Stephan Walter e7f6997f89
Don't crash on ftype (formerly f16) == 4 (#917) 1 year ago
Georgi Gerganov f76cb3a34d
readme : change "GPU support" link to discussion 1 year ago
Georgi Gerganov 782438070f
readme : update hot topics with link to "GPU support" issue 1 year ago
Nicolai Weitkemper 4dbbd40750
readme: link to sha256sums file (#902)
This is to emphasize that these do not need to be obtained from elsewhere.
1 year ago
Pavol Rusnak 8b679987cd
Fix whitespace, add .editorconfig, add GitHub workflow (#883) 1 year ago
Stephan Walter 3e6e70d8e8
Add enum llama_ftype, sync ggml_type to model files (#709) 1 year ago
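A sketch of the kind of enum this commit introduces; the member names and values below are assumptions based on the quantization formats of the time (value 4 is the one that #917, above, guards against crashing on):

```c
enum llama_ftype {
    LLAMA_FTYPE_ALL_F32              = 0,
    LLAMA_FTYPE_MOSTLY_F16           = 1,
    LLAMA_FTYPE_MOSTLY_Q4_0          = 2,
    LLAMA_FTYPE_MOSTLY_Q4_1          = 3,
    LLAMA_FTYPE_MOSTLY_Q4_1_SOME_F16 = 4,
};
```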
comex 2663d2c678
Windows fixes (#890)
Mostly for msys2 and mingw64 builds, which are different from each other
and different from standard Visual Studio builds.  Isn't Windows fun?

- Define _GNU_SOURCE in more files (it's already used in ggml.c for
  Linux's sake).

- Don't use PrefetchVirtualMemory if not building for Windows 8 or later
  (mingw64 doesn't by default).  But warn the user about this situation
  since it's probably not intended.

- Check for NOMINMAX already being defined, which it is on mingw64.

- Actually use the `increment` variable (bug in my `pizza` PR).

- Suppress unused variable warnings in the fake pthread_create and
  pthread_join implementations for Windows.

- (not Windows-related) Remove mention of `asprintf` from comment;
  `asprintf` is no longer used.

Fixes #871.
1 year ago
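Sketches of two of the guards listed in this commit (not verbatim from the PR):

```c
// mingw64 already defines NOMINMAX, so test before defining it.
#ifndef NOMINMAX
#define NOMINMAX
#endif

// PrefetchVirtualMemory requires Windows 8 (_WIN32_WINNT 0x0602) or later.
#if defined(_WIN32) && defined(_WIN32_WINNT) && _WIN32_WINNT >= 0x0602
    // safe to call PrefetchVirtualMemory(...) here
#else
    // fall back, and warn the user that prefetch is unavailable on this target
#endif
```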
qouoq a0caa34b16
Add BAIR's Koala to supported models (#877) 1 year ago
Georgi Gerganov 461ba9e66e
ggml : fix WASM build 1 year ago
Georgi Gerganov c3ac702e5e
ggml : add ggml_cont() + optimize ggml_cpy() for contiguous dst 1 year ago
Georgi Gerganov 9d634ef452
ggml : remove trailing whitespaces 1 year ago
Marco Matthies d9a239c410
Simplify to include lower-case windows.h always, fix compile on mingw32 (#747) 1 year ago
Georgi Gerganov 684da25926
ggml : fix quantize_row_q4_1() ARM_NEON (close #876) 1 year ago
comex 180b693a47
Print model version.
Also improve model type printing, and fix indentation of an unrelated
switch statement.
1 year ago
comex f963b63afa
Rewrite loading code to try to satisfy everyone:
- Support all three formats (ggml, ggmf, ggjt).  (However, I didn't
  include the hack needed to support GPT4All files without conversion.
  Those can still be used after converting them with convert.py from my
  other PR.)

- Support both mmap and read (mmap is used by default, but can be
  disabled with `--no-mmap`, and is automatically disabled for pre-ggjt
  files or on platforms where mmap is not supported).

- Support multi-file models like before, but automatically determine the
  number of parts rather than requiring `--n_parts`.

- Improve validation and error checking.

- Stop using the per-file type field (f16) entirely in favor of just
  relying on the per-tensor type/size fields.  This has no immediate
  benefit, but makes it easier to experiment with different formats, and
  should make it easier to support the new GPTQ-for-LLaMa models in the
  future (I have some work in progress on that front).

- Support VirtualLock on Windows (using the same `--mlock` option as on
  Unix).

    - Indicate loading progress when using mmap + mlock.  (Which led me
      to the interesting observation that on my Linux machine, with a
      warm file cache, mlock actually takes some time, whereas mmap
      without mlock starts almost instantly...)

      - To help implement this, move mlock support from ggml to the
        loading code.

- madvise/PrefetchVirtualMemory support (based on #740)

- Switch from ifstream to the `fopen` family of functions to avoid
  unnecessary copying and, when mmap is enabled, allow reusing the same
  file descriptor for both metadata reads and mmap (whereas the existing
  implementation opens the file a second time to mmap).

- Quantization now produces a single-file output even with multi-file
  inputs (not really a feature as much as 'it was easier this way').

Implementation notes:

I tried to factor the code into more discrete pieces than before.

Regarding code style: I tried to follow the code style, but I'm naughty
and used a few advanced C++ features repeatedly:

- Destructors to make it easier to ensure everything gets cleaned up.

- Exceptions.  I don't even usually use exceptions when writing C++, and
  I can remove them if desired... but here they make the loading code
  much more succinct while still properly handling a variety of errors,
  ranging from API calls failing to integer overflow and allocation
  failure.  The exceptions are converted to error codes at the
  API boundary.

Co-authored-by: Pavol Rusnak <pavol@rusnak.io> (for the bit I copied from #740)
1 year ago
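A POSIX-side sketch of the fopen-plus-mmap pattern described above: open the file once, read metadata through stdio, then reuse the same descriptor for the mapping rather than opening the file a second time. Error handling is trimmed for brevity, and this is an illustration, not the PR's code:

```c
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>

void * map_model(const char * path, size_t * out_size) {
    FILE * fp = fopen(path, "rb");
    if (!fp) {
        return NULL;
    }
    struct stat st;
    fstat(fileno(fp), &st);
    *out_size = (size_t) st.st_size;

    // ... fread() the header/metadata here ...

    // reuse the same descriptor for the mapping
    void * addr = mmap(NULL, *out_size, PROT_READ, MAP_SHARED, fileno(fp), 0);
    fclose(fp); // the mapping remains valid after the FILE is closed
    return addr == MAP_FAILED ? NULL : addr;
}
```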
Tomáš Pazdiora aaf3b23deb
fix for windows utf-8 input (#840)
Use UTF-16 as input on Windows, since UTF-8 does not work and reads multibyte characters as zeros
1 year ago
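A sketch of the workaround: read the console as UTF-16 via ReadConsoleW and convert to UTF-8, since byte-oriented stdin reads drop multibyte characters. Illustrative only; the PR's actual code may differ:

```c
#include <windows.h>

int read_console_utf8(char * buf, int buf_size) {
    wchar_t wbuf[1024];
    DWORD n_read = 0;
    HANDLE h = GetStdHandle(STD_INPUT_HANDLE);
    if (!ReadConsoleW(h, wbuf, 1024, &n_read, NULL)) {
        return -1;
    }
    // convert the UTF-16 characters we read into UTF-8
    return WideCharToMultiByte(CP_UTF8, 0, wbuf, (int) n_read,
                               buf, buf_size, NULL, NULL);
}
```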
eiery f2d1c47294
cmake should link openblas properly with -lopenblas, the way it's done in the makefile (#839) 1 year ago
lon 317fb12fbd
Add new binaries to flake.nix (#847) 1 year ago
unbounded 62cfc54f77
Add quantize-stats command for testing quantization (#728)
Command that calculates some statistics over the errors introduced by
quantization, like mean square error, max error and some percentile errors for layer
weights. Should be useful for testing quantization improvements.

Exposes some internal state from ggml and llama for testing
1 year ago
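A sketch of the core measurement such a tool makes: compare the original weights against their quantize-then-dequantize round trip. Percentile stats would additionally sort the per-weight errors; this is an illustration, not the command's code:

```c
#include <math.h>

void error_stats(const float * orig, const float * dequant, int n,
                 double * mse, double * max_err) {
    *mse = 0.0;
    *max_err = 0.0;
    for (int i = 0; i < n; ++i) {
        double err = fabs((double) orig[i] - (double) dequant[i]);
        *mse += err * err;       // accumulate squared error
        if (err > *max_err) {
            *max_err = err;      // track the worst single weight
        }
    }
    *mse /= n > 0 ? n : 1;
}
```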
bhubbb 698f7b5d63
make : add libllama.so target for llama-cpp-python (#797)
I was able to get llama-cpp-python working, but only when I built libllama.so with make.
1 year ago