# PR Checklist
- [ ] Did you check if it works normally in all models? *ignore this
when it doesn't use models*
- [ ] Did you check if it works normally in all of the web, local, and
node-hosted versions? If it doesn't, did you block it in those versions?
- [ ] Did you add a type def?
# Description
Apart from this PR, I wish the reverse proxy had showUnrec too, like the
official OpenAI setting.
# PR Checklist
- [ ] Did you check if it works normally in all models? *ignore this
when it doesn't use models*
- [ ] Did you check if it works normally in all of the web, local, and
node-hosted versions? If it doesn't, did you block it in those versions?
- [x] Did you add a type def?
# Description
I knew there was a `ParseMarkdown` function, but I didn't think it fit
the current situation, so I created a new `applyMarkdownToNode` function.
I didn't see much difference in the results, though, so if you think
`ParseMarkdown` is better, feel free to change my code to use it.
(To use `ParseMarkdown`, we would need to add a parameter that lets it
call `mconverted.parseInline` instead of `mconverted.parse`; see the
sketch below.)
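
For reference, here is a minimal sketch of the kind of parameter described above. It assumes `ParseMarkdown` wraps a marked-style converter exposed as `mconverted`; the actual function in the codebase likely has a different signature, so the `inline` flag and the standalone converter shown here are only illustrative.

```ts
import { Marked } from 'marked'

// Stand-in for the converter instance referenced above (illustrative only).
const mconverted = new Marked()

// Hypothetical `inline` parameter: when true, use the inline renderer so the
// output is not wrapped in block-level tags such as <p>.
function ParseMarkdown(data: string, inline = false): string {
    return inline
        ? (mconverted.parseInline(data) as string)
        : (mconverted.parse(data) as string)
}
```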
# PR Checklist
- [x] Did you check if it works normally in all models? *ignore this
when it doesn't use models*
- [x] Did you check if it works normally in all of the web, local, and
node-hosted versions? If it doesn't, did you block it in those versions?
- [x] Did you add a type def?
# Description
Basically, this feature combines a sentence back into one string and
translates it when a display edit script `(e.g. Automark)` splits that
sentence across multiple HTML tags, as sketched below.
I hope you can confirm it, as this option will significantly improve
translation performance without modifying existing scripts!
I also made `translateHTML` accept a `chatID` and optimized the existing
code that gets the script from `charArg`.
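
To illustrate the idea (not the actual `translateHTML` implementation), here is a rough sketch of combining the fragments before translating. `translateSplitSentence` and the `translate` callback are hypothetical names, and a real implementation would also need to preserve or redistribute the original markup rather than flattening it.

```ts
// Hypothetical helper: gather the text of a container whose sentence was
// split across multiple inline tags by a display edit script, translate the
// combined string once, and write the result back.
async function translateSplitSentence(
    container: HTMLElement,
    translate: (text: string, chatID: string) => Promise<string>,
    chatID: string
): Promise<void> {
    // Combine the fragmented text nodes into a single sentence so the
    // translator sees full context instead of tag-sized pieces.
    const joined = Array.from(container.childNodes)
        .map((node) => node.textContent ?? '')
        .join('')

    // One translation call per sentence instead of one per tag.
    container.textContent = await translate(joined, chatID)
}
```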