From 3366853e41fcc818222a0271c76b6106179106fb Mon Sep 17 00:00:00 2001
From: Georgi Gerganov
Date: Tue, 21 Mar 2023 22:57:35 +0200
Subject: [PATCH] Add notice about pending change

---
 README.md | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index d9a4b1b..6149032 100644
--- a/README.md
+++ b/README.md
@@ -5,15 +5,21 @@
 Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
+---
+
+**TEMPORARY NOTICE:**
+Big code change incoming: https://github.com/ggerganov/llama.cpp/pull/370
+
+Do not merge stuff until we merge this. Probably merge will happen on March 22 ~6:00am UTC
+
+---
+
 **Hot topics:**
 
 - [Added Alpaca support](https://github.com/ggerganov/llama.cpp#instruction-mode-with-alpaca)
 - Cache input prompts for faster initialization: https://github.com/ggerganov/llama.cpp/issues/64
 - Create a `llama.cpp` logo: https://github.com/ggerganov/llama.cpp/issues/105
 
-**TEMPORARY NOTICE:**
-If you're updating to the latest master, you will need to regenerate your model files as the format has changed.
-
 ## Description
 
 The main goal is to run the model using 4-bit quantization on a MacBook