# PR Checklist
- [ ] Did you check if it works normally in all models? *ignore this
when it doesn't use models*
- [ ] Did you check if it works normally in all of the web, local, and node
hosted versions? If it doesn't, did you block it in those versions?
- [x] Did you add a type def?
# Description
I knew there was a `ParseMarkdown` function, but I didn't think it fit
the current situation, so I created a new `applyMarkdownToNode` function.
However, I didn't see much difference in the results, so if you think
`ParseMarkdown` is better, feel free to change my code to use it.
(To use `ParseMarkdown`, we would need to add a parameter that lets it
call `mconverted.parseInline` instead of `mconverted.parse`.)
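For reference, a minimal sketch of what that parameter could look like; the `inline` option name and the shape of `mconverted` here are assumptions, not the actual code:

```ts
// Hypothetical sketch only; the real ParseMarkdown signature may differ.
// `mconverted` is the existing Markdown converter, assumed here to expose
// `parse` (block-level) and `parseInline` (inline) methods.
declare const mconverted: {
    parse: (src: string) => string
    parseInline: (src: string) => string
}

// An `inline` flag would let callers reuse ParseMarkdown for text nodes
// without getting block-level wrappers such as <p> around short fragments.
export function ParseMarkdown(data: string, options: { inline?: boolean } = {}): string {
    return options.inline ? mconverted.parseInline(data) : mconverted.parse(data)
}
```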
# PR Checklist
- [x] Did you check if it works normally in all models? *ignore this
when it doesn't use models*
- [x] Did you check if it works normally in all of the web, local, and node
hosted versions? If it doesn't, did you block it in those versions?
- [x] Did you add a type def?
# Description
Basically, this feature was created to combine a sentence back together and
translate it as a whole when a display edit script splits one sentence
across multiple HTML tags.
I hope you can confirm it, as this option significantly improves translation
performance without modifying existing scripts (e.g. Automark)!
I also made `translateHTML` accept a `chatID` and optimized the existing
code that gets the script from `charArg`.
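To illustrate the idea only, here is a rough sketch; the helpers `translateSplitSentence` and `translate` are placeholder names, not the functions actually added in this PR:

```ts
// Sketch of the concept, under assumed helper names.
// A display edit script may turn one sentence into something like
//   <span>Hello </span><em>beautiful</em><span> world.</span>
// Translating each tag on its own loses context, so we gather the text
// nodes first and translate the joined sentence in a single request.

declare function translate(text: string, chatID: number): Promise<string>

async function translateSplitSentence(root: HTMLElement, chatID: number): Promise<string> {
    // Collect every text node under the element in document order.
    const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT)
    const parts: string[] = []
    let node: Node | null
    while ((node = walker.nextNode())) {
        parts.push(node.textContent ?? '')
    }
    // Translate the whole sentence at once instead of tag by tag.
    return translate(parts.join(''), chatID)
}
```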
# PR Checklist
- [ ] Did you check if it works normally in all models? *ignore this
when it doesn't use models*
It does not need models.
- [ ] Did you check if it works normally in all of the web, local, and node
hosted versions? If it doesn't, did you block it in those versions?
Since I use NixOS, I can't really build it for my system, but I tested the
web version and don't see why it wouldn't work in the other versions.
Furthermore, this is a backend issue, so I doubt anything changed in the UI.
- [ ] Did you add a type def?
I don't know what that is, but I only changed one line, so I assume no.
# Description
As described in #402, RisuAI would always try to reach localhost:11434 to
connect to Ollama, completely ignoring the user-supplied address. This fix
addresses that: it now actually uses the correct URL.
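Conceptually the change amounts to something like the sketch below; the setting field `ollamaURL`, the helper name, and the `/api/chat` path are used for illustration and are not the exact identifiers touched in this PR:

```ts
// Before (conceptually): the endpoint was hardcoded, so a user-supplied
// URL was ignored:
//   const url = 'http://localhost:11434/api/chat'

// After: prefer the URL from the user's settings and fall back to
// localhost only when nothing is configured.
interface OllamaSettings {
    ollamaURL?: string
}

function getOllamaEndpoint(settings: OllamaSettings): string {
    const base = settings.ollamaURL || 'http://localhost:11434'
    // Strip trailing slashes so the path joins cleanly.
    return base.replace(/\/+$/, '') + '/api/chat'
}
```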
# PR Checklist
- [ ] Did you check if it works normally in all models? *ignore this
when it doesn't use models*
- [ ] Did you check if it works normally in all of the web, local, and node
hosted versions? If it doesn't, did you block it in those versions?
- [ ] Did you add a type def?
# Description
This PR fixes the bug in issue #399.
I think it's a bug that needs to be fixed regardless of what we discuss
in the issue, so I'm sending a pull request before we discuss it.
# PR Checklist
- [x] Did you check if it works normally in all models? *ignore this
when it doesn't use models*
- [ ] Did you check if it works normally in all of the web, local, and node
hosted versions? If it doesn't, did you block it in those versions?
- [ ] Did you add a type def?
# Description
Add support for an OpenAI-compatible custom embedding server on the playground.
OpenAI-compatible embedding servers:
- https://github.com/toshsan/embedding-server
- https://github.com/limcheekin/open-text-embeddings
- https://github.com/michaelfeil/infinity
I only tested it with infinity.
There is also a feature that automatically appends /embeddings when the URL
does not end in /embeddings.
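Roughly, the normalization works like this sketch; the function name is a placeholder, not the actual code:

```ts
// Sketch of the URL normalization described above; the name is assumed.
function normalizeEmbeddingURL(url: string): string {
    // Drop trailing slashes so the suffix check is consistent.
    const trimmed = url.replace(/\/+$/, '')
    // OpenAI-compatible servers expose embeddings at .../embeddings,
    // so append the path when it is missing.
    return trimmed.endsWith('/embeddings') ? trimmed : trimmed + '/embeddings'
}

// Example: 'http://localhost:7997/v1' -> 'http://localhost:7997/v1/embeddings'
```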
I think I need to test it with more local embedding servers, but I think
it'll be okay since it's only the Playground.