Use `tokenizer.vocab_size()` instead of hardcoding 32000 in convert-pth-to-ggml.py (#142)

Special tokens or other new tokens can be added to the tokenizer, so it's best not to assume the vocabulary is exactly 32000 tokens (see the sketch below).
Ronsor authored · committed by GitHub
parent 113e685d18
commit 956dfda8ad
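To see why the hardcoded count is fragile, here is a minimal sketch of querying the size directly, assuming the SentencePiece Python bindings that convert-pth-to-ggml.py uses (the model path is illustrative):

```python
import sentencepiece as spm

tokenizer = spm.SentencePieceProcessor()
tokenizer.load("models/tokenizer.model")  # illustrative path

# The base LLaMA tokenizer reports 32000, but a tokenizer extended with
# special or user-defined tokens reports more, so query instead of hardcoding.
print(tokenizer.vocab_size())
```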

@@ -99,7 +99,7 @@ for p in range(n_parts):
     fout.write(struct.pack("i", ftype))
 
     # Is this correct??
-    for i in range(32000):
+    for i in range(tokenizer.vocab_size()):
         if tokenizer.is_unknown(i):
             # "<unk>" token (translated as ??)
             text = " \u2047 ".encode("utf-8")
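For context, a hedged sketch of the vocabulary-export loop that this hunk patches, assuming the SentencePiece bindings; the file handle, model path, and the `is_control` branch are reconstructions for illustration, not a verbatim copy of convert-pth-to-ggml.py:

```python
import struct
import sentencepiece as spm

tokenizer = spm.SentencePieceProcessor()
tokenizer.load("models/tokenizer.model")  # illustrative path
fout = open("ggml-model.bin", "wb")       # illustrative output handle

# Iterate over whatever the tokenizer actually contains instead of
# assuming a fixed 32000-entry vocabulary.
for i in range(tokenizer.vocab_size()):
    if tokenizer.is_unknown(i):
        # "<unk>" token (translated as ??)
        text = " \u2047 ".encode("utf-8")
    elif tokenizer.is_control(i):
        # control tokens such as <s> and </s> carry no surface text
        text = b""
    else:
        # SentencePiece marks word boundaries with U+2581; map it to a space
        text = tokenizer.id_to_piece(i).replace("\u2581", " ").encode("utf-8")
    fout.write(struct.pack("i", len(text)))
    fout.write(text)

fout.close()
```

Because the loop bound now comes from the model file itself, the same export code keeps working if the tokenizer's vocabulary is later extended.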
