The memo field was used in the prompt text logic, but it affected the
string-merging conditions for `formated` inside pushPrompts(). To avoid
unintended merging issues, the logic involving the memo field has been
rolled back.
# PR Checklist
- [ ] Have you checked if it works normally in all models? *Ignore this
if it doesn't use models.*
- [ ] Have you checked if it works normally in all web, local, and node
hosted versions? If it doesn't, have you blocked it in those versions?
- [ ] Have you added type definitions?
# Description
This PR introduces the following:
- fix: pass message index when processing regex script in HypaV3 Modal
- feat: add BGE-m3-ko embedding
# PR Checklist
- [ ] Have you checked if it works normally in all models? *Ignore this
if it doesn't use models.*
- [ ] Have you checked if it works normally in all web, local, and node
hosted versions? If it doesn't, have you blocked it in those versions?
- [ ] Have you added type definitions?
# Description
Added the Novel AI V4 Vibe feature.
Allowed SelectInput to also accept `number` values (since Svelte's select
reportedly handles number/string conversion on its own, only the type was
added).
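A minimal TypeScript sketch of the type widening described above. The names `SelectValue` and `normalize` are assumptions for illustration, not the actual component code; the point is that a numeric option round-trips through Svelte's string binding without extra handling.

```typescript
// Hedged sketch: SelectInput's value type widened to accept numbers.
// Svelte binds <select> values as strings, so a numeric option such as 3
// is converted via String(3) === "3" automatically; only the type union
// needs widening. Names here are illustrative, not from the real code.
type SelectValue = string | number;

function normalize(value: SelectValue): string {
  // Svelte performs an equivalent conversion internally for bound selects.
  return String(value);
}

console.log(normalize(3));      // "3"
console.log(normalize("auto")); // "auto"
```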
# PR Checklist
- [ ] Have you checked if it works normally in all models? *Ignore this
if it doesn't use models.*
- [x] Have you checked if it works normally in all web, local, and node
hosted versions? If it doesn't, have you blocked it in those versions?
- [x] Have you added type definitions?
# Description
This PR adds read-only lore book access from Lua.
- `getLoreBooks(triggerId, search)`: Gets all lore books matching the name
(comment). No additional sorting is done; API users will need to sort the
results themselves. All lores are parsed before returning.
- `loadLoreBooks(triggerId, reserve)`: Retrieves all active lore books in
the current context. This function takes the max context length into
account and cuts low-priority lores, similar to what happens when a user
submits a message. All lores are parsed before returning.
- Specifying a `reserve` higher than `0` reserves that many tokens for
other prompts.
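A minimal Lua sketch of how the two functions above might be called from a trigger script. The `triggerId` value is provided by the host environment, and the `comment`/`content` field names on each returned lore entry are assumptions based on the description above, not confirmed API shapes.

```lua
-- Hedged sketch: assumes getLoreBooks/loadLoreBooks return an array of
-- lore tables; field names (comment, content) are assumptions.

-- Fetch lores by name (comment) and sort them ourselves, since the API
-- does no sorting.
local books = getLoreBooks(triggerId, "world-history")
table.sort(books, function(a, b) return a.comment < b.comment end)
for _, lore in ipairs(books) do
    print(lore.comment, lore.content)
end

-- Load the currently active lores, reserving 2000 tokens for other
-- prompts so low-priority lores are cut first.
local active = loadLoreBooks(triggerId, 2000)
```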
With `loadLoreBooks()`, character and module creators can split token- and
context-heavy data generation out into Lua and a separate LLM workflow for
improved accuracy.