# PR Checklist
- [v] Did you check if it works normally in all models? *ignore this when it doesn't use models*
- [v] Did you check if it works normally in all of web, local, and node hosted versions? If it doesn't, did you block it in those versions?
- [v] Did you add a type def?
# Description
Remove an unnecessary check when substituting first_msg_index that was preventing it from being used in non-chat contexts (e.g. lorebooks).
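The sketch below illustrates the intent of the change under assumed names; `ParseContext`, `chatIndex`, and `substituteFirstMsgIndex` are hypothetical and not the project's actual API.

```ts
// Hypothetical sketch of the substitution path; real function and
// variable names in the codebase may differ.
interface ParseContext {
    chatIndex?: number        // undefined outside a chat (e.g. a lorebook entry)
    firstMsgIndex: number     // -1 when the default first message is used
}

function substituteFirstMsgIndex(template: string, ctx: ParseContext): string {
    // Before this PR (sketch): the replacement was skipped when no chat
    // context was present, so non-chat text kept the raw placeholder.
    // if (ctx.chatIndex === undefined) { return template }

    // After the PR: substitute unconditionally so non-chat contexts work too.
    return template.replace(/{{first_msg_index}}/gi, String(ctx.firstMsgIndex))
}
```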
# PR Checklist
- [v] Did you check if it works normally in all models? *ignore this when it doesn't use models*
- [v] Did you check if it works normally in all of web, local, and node hosted versions? If it doesn't, did you block it in those versions?
- [v] Did you add a type def?
# Description
Add a curly syntax for retrieving the index of the selected alternate first message. {{first_msg_index}} is replaced with -1 if the default first message is being used; otherwise it is replaced with the index of the selected alternate first message.
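A minimal sketch of how the value could resolve, assuming a field named `selectedAlternateIndex`; that name and the helper functions are illustrative, not the project's real API.

```ts
// Sketch only: field and function names are assumptions.
interface CharacterLike {
    // index into the alternate greetings, or undefined when the
    // default first message is in use
    selectedAlternateIndex?: number
}

function firstMsgIndexValue(char: CharacterLike): number {
    // -1 signals "default first message"; otherwise the selected alternate index
    const idx = char.selectedAlternateIndex
    return idx === undefined || idx < 0 ? -1 : idx
}

// Usage inside a template replacer (sketch):
function applyCurly(template: string, char: CharacterLike): string {
    return template.replace(/{{first_msg_index}}/gi, String(firstMsgIndexValue(char)))
}
```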
# PR Checklist
- [ ] Did you check if it works normally in all models? *ignore this when it doesn't use models*
- [ ] Did you check if it works normally in all of web, local, and node hosted versions? If it doesn't, did you block it in those versions?
- [ ] Did you add a type def?
# Description
Apart from this PR, I wish the reverse proxy had showUnrec too, like the official OpenAI setting.
# PR Checklist
- [ ] Did you check if it works normally in all models? *ignore this when it doesn't use models*
- [ ] Did you check if it works normally in all of web, local, and node hosted versions? If it doesn't, did you block it in those versions?
- [ ] Did you add a type def?
# Description
I fixed the code so that it now makes an exception for GPT-based models, allowing logit_bias to be sent to the server, and so that gpt-4o with reverse_proxy automatically chooses the new tokenizer.
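A rough sketch of the intent, not the project's actual code: `isGptModel`, `buildRequestBody`, and `getTokenizerName` are hypothetical names, and the only assumed fact is that gpt-4o models use the newer o200k_base encoding.

```ts
// Sketch only: helper names are assumptions.
function isGptModel(model: string): boolean {
    return model.startsWith('gpt-')
}

function buildRequestBody(model: string, logitBias?: Record<string, number>) {
    const body: Record<string, unknown> = { model }
    // Exception for GPT-based models: include logit_bias so it still
    // reaches the server when a reverse proxy endpoint is configured.
    if (logitBias && isGptModel(model)) {
        body.logit_bias = logitBias
    }
    return body
}

function getTokenizerName(model: string): string {
    // gpt-4o models use the newer o200k_base encoding; selecting by model
    // name means the choice also applies when requests go through a reverse proxy.
    return model.startsWith('gpt-4o') ? 'o200k_base' : 'cl100k_base'
}
```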