# PR Checklist
- [x] Did you check if it works normally in all models? *ignore this
when it doesn't use models*
- [x] Did you check if it works normally in all of the web, local, and
node-hosted versions? If it doesn't, did you block it in those versions?
- [x] Did you add a type def?
# Description
References:
https://platform.openai.com/docs/models/gpt-3-5-turbo
https://platform.openai.com/docs/deprecations
1. Added the gpt-3.5-turbo-0125 model: the gpt-3.5-turbo alias currently
points to the 0613 model, but the official documentation says it will be
automatically upgraded to the 0125 model on February 16 anyway (a sketch
of the kind of change follows this list).
2. Fixed the model name of gpt-4-0314, although it is deprecated.
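
For reference, a minimal sketch of the kind of list change involved, assuming a model list shaped roughly like this (the actual file, type, and labels in RisuAI may differ):

```ts
// Hypothetical shape of the model list; only the two entries touched
// by this PR are shown.
interface ModelEntry {
    id: string   // model name sent to the OpenAI API
    name: string // label shown in the UI
}

const openAIModels: ModelEntry[] = [
    { id: 'gpt-3.5-turbo', name: 'GPT-3.5 Turbo' },
    // new: pinned snapshot that the gpt-3.5-turbo alias will point to anyway
    { id: 'gpt-3.5-turbo-0125', name: 'GPT-3.5 Turbo (0125)' },
    // fixed: correct name for the deprecated snapshot
    { id: 'gpt-4-0314', name: 'GPT-4 (0314)' },
]
```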
# PR Checklist
- [ ] Did you check if it works normally in all models? *ignore this
when it doesn't use models*
- [ ] Did you check if it works normally in all of the web, local, and
node-hosted versions? If it doesn't, did you block it in those versions?
- [ ] Did you add a type def?
# Description
I made simple changes to the code that allow the user to choose
tokenizers. As I wrote in https://github.com/kwaroran/RisuAI/issues/280,
differences between tokenizers cause errors when using Mistral-based
models.

As I'm not good at JavaScript, I implemented this simply by writing the
name of the tokenizer model and selecting one in the tokenizer.ts file
(see the sketch below). I tested it on my node-hosted RisuAI and sent a
long context to my own server.
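
Roughly, the selection works like the sketch below. This is a minimal sketch assuming transformers.js-style tokenizers; the repo IDs, map, and function name are placeholders, not necessarily what tokenizer.ts actually uses:

```ts
import { AutoTokenizer } from '@xenova/transformers'

// User-selectable tokenizer names mapped to Hugging Face tokenizer repos.
// These repo IDs are placeholders; substitute the ones the project uses.
const TOKENIZER_REPOS: Record<string, string> = {
    llama: 'Xenova/llama-tokenizer',
    mistral: 'mistralai/Mistral-7B-v0.1',
}

// Count tokens using the tokenizer the user selected in settings.
export async function countTokens(text: string, tokenizerName: string): Promise<number> {
    const repo = TOKENIZER_REPOS[tokenizerName] ?? TOKENIZER_REPOS.llama
    const tokenizer = await AutoTokenizer.from_pretrained(repo)
    return tokenizer.encode(text).length
}
```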

As a result, ooba returned 15858 as the prompt token count.

And when I tested with the official tokenizer implementations, they
showed about a 1k-token difference between the LLaMA tokenizer and the
Mistral tokenizer on the same context. So I think adding this option
will help users use oobabooga with fewer errors.
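
To reproduce the comparison, something along these lines should work, reusing the hypothetical countTokens helper sketched above:

```ts
// Compare token counts for the same prompt under both tokenizers.
const prompt = 'your long test context here' // substitute the actual context
const llamaCount = await countTokens(prompt, 'llama')
const mistralCount = await countTokens(prompt, 'mistral')
console.log(`llama=${llamaCount} mistral=${mistralCount} diff=${llamaCount - mistralCount}`)
```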