sub-hub
2a36743cb6
Fix: Potential problem in tikJS function
2025-04-21 14:57:19 +09:00
sub-hub
09228f3f86
Fix: Correct tokenize flow in tokenizer encode function
2025-04-21 13:34:01 +09:00
sub-hub
33d8ed4568
restore tokenizer caching with old-bug version
2025-04-21 13:27:43 +09:00
kwaroran
22a50904f8
Revert #805 due to tokenizing error
2025-04-21 05:27:05 +09:00
kwaroran
36e0935bb0
Make tokenizer caching an option
2025-04-16 10:57:48 +09:00
sub-hub
a32e670108
reduce max tokenize cache size
2025-04-03 12:17:01 +09:00
sub-hub
99efcc5f23
Remove md5 function from getHash
2025-04-03 10:28:40 +09:00
sub-hub
4df80bf98b
remove log for check caching
2025-04-02 22:34:22 +09:00
sub-hub
c553478a78
Refactor: caching tokenize result
2025-04-02 22:01:01 +09:00
Kwaroran
fc80552749
feat: enhance tokenizeChat to accept optional countThoughts argument
2025-02-25 04:39:01 +09:00
Kwaroran
338d1cfec2
Add models
2025-01-29 05:38:26 +09:00
Kwaroran
191be6d5c1
feat: add image translation feature and enhance regex list functionality
2024-12-27 15:51:29 +09:00
Kwaroran
259181bbe2
Merge branch 'main' of https://github.com/kwaroran/RisuAI
2024-12-26 06:00:24 +09:00
Kwaroran
fe47f58c61
Add LoadingStatusState and improve tokenizer functionality
2024-12-26 06:00:07 +09:00
kwaroran
b874ed42ed
HypaV2 context deletion safety ( #680 )
...
# PR Checklist
- [x] Did you check if it works normally in all models? *ignore this when it doesn't use models*
- [ ] Did you check if it works normally in all of the web, local, and node-hosted versions? If it doesn't, did you block it in those versions? << Checked on browser only, works fine
- [x] Did you add a type def?
# Description
A large update to the HypaV2 data type definition.
```ts
mainChunks: { // the summary itself
  id: number;
  text: string;
  chatMemos: Set<string>; // UUIDs of the summarized chats
  lastChatMemo: string;
}[];
chunks: { // mainChunks split up for retrieval; the logic is awkward, so it may be removed soon
  mainChunkID: number;
  text: string;
}[];
```
With this, mainChunks stays consistent when the chat context changes through deletion: if a UUID is missing from the chat context but still present in a mainChunk's chatMemos, that mainChunk is removed (see the sketch below).
Changed index.svelte.ts to update args on each call so that HypaV2 stays adaptive to this change without refreshing the page.
Also changed mainChunks to be pushed instead of unshifted.
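A minimal sketch of the deletion-safety check described above, under stated assumptions: the names `HypaV2Data`, `MainChunk`, `pruneMainChunks`, and `chatMemosInContext` are illustrative, not the actual RisuAI implementation.
```ts
interface MainChunk {
  id: number;
  text: string;
  chatMemos: Set<string>; // UUIDs of the summarized chats
  lastChatMemo: string;
}

interface HypaV2Data {
  mainChunks: MainChunk[];
  chunks: { mainChunkID: number; text: string }[];
}

// Hypothetical helper: drop any mainChunk whose chatMemos reference a chat
// that is no longer in the current context, along with its derived chunks.
function pruneMainChunks(data: HypaV2Data, chatMemosInContext: Set<string>): HypaV2Data {
  const mainChunks = data.mainChunks.filter((mc) =>
    [...mc.chatMemos].every((memo) => chatMemosInContext.has(memo))
  );
  const keptIDs = new Set(mainChunks.map((mc) => mc.id));
  return {
    mainChunks,
    chunks: data.chunks.filter((c) => keptIDs.has(c.mainChunkID)),
  };
}
```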
2024-12-26 05:02:33 +09:00
Kwaroran
1c51afc626
Enhance plugin functionality by adding optional provider parameters and improving thoughts extraction regex
2024-12-25 04:55:05 +09:00
HyperBlaze
cd3294d529
Merge branch 'kwaroran:main' into main
2024-12-15 14:13:02 -08:00
Kwaroran
5174082796
Add Gemini related features
2024-12-12 08:38:33 +09:00
LightningHyperBlaze45654
69f44c03c6
Merge remote-tracking branch 'upstream/main'
2024-12-08 20:08:25 -08:00
kwaroran
9d8f239250
Refactor tokenizer
2024-12-07 06:24:33 +09:00
kwaroran
34b4a1245b
Add google cloud tokenizer
2024-12-07 03:49:56 +09:00
LightningHyperBlaze45654
60d4e33893
feat: add validation
...
Also revoked a potentially problematic feature (add hypav2data chunk).
TODO:
1. Mid-context editing is currently not treated as a deletion. An optional editedChatIndex is there to dive into this later.
2. Re-roll mainChunks (re-summarization) functionality was added, but there is no way to access it yet.
2024-12-01 13:00:00 -08:00
kwaroran
cc8d753dc8
Rework custom API
2024-11-25 23:04:32 +09:00
kwaroran
efbda2333d
Change saving
2024-11-02 01:46:21 +09:00
kwaroran
ffa6308ca3
change globalApi path
2024-10-26 20:40:40 +09:00
kwaroran
e255199fcc
Change setDatabase and getDatabase to accessing dbState
2024-10-25 19:11:41 +09:00
kwaroran
b3fddb814e
Migrate all DataBase to DBState
2024-10-24 01:59:57 +09:00
kwaroran
2044d9b63b
Change DataBase inside svelte to DBState for performance
2024-10-23 23:46:32 +09:00
kwaroran
c7330719ad
Migrate to svelte 5
2024-10-23 02:31:37 +09:00
kwaroran
614087ae97
Add cohere tokenizer
2024-09-09 03:46:48 +09:00
kwaroran
ccda92cc49
Add autopilot
2024-09-01 19:30:06 +09:00
kwaroran
b62120c02c
Add gemma tokenizer to custom
2024-07-06 01:21:04 +09:00
kwaroran
b94323510d
feat: Add support for Cohere AI model in tokenizer
2024-05-28 02:30:43 +09:00
kwaroran
007f6bf59e
feat: Add gemini tokenizer for gemma model
2024-05-28 02:25:06 +09:00
IHaBiS02
1aade242ec
Add supports of gpt-4o tokenizer for reverse_proxy
2024-05-14 06:30:36 +09:00
kwaroran
a79a00bd00
chore: Update version to 1.102.4
2024-05-14 03:02:48 +09:00
kwaroran
58513ed54a
Add tokenizer playground
2024-04-24 22:06:02 +09:00
kwaroran
f46df1af20
Add Llama3 tokenizer support
2024-04-23 23:02:27 +09:00
kwaroran
da272d83d8
Add custom tokenizers
2024-04-19 13:35:56 +09:00
kwaroran
dbe1a45317
Refactor multimodal and add claude-3 vision support
2024-03-17 23:48:24 +09:00
kwaroran
e6f6ef829c
Remove console.log statements
2024-03-16 14:41:44 +09:00
kwaroran
66fd70c01a
Add postfile function
2024-02-26 23:13:29 +09:00
kwaroran
9db4810bbc
Experimental llamacpp support
2024-01-16 10:56:23 +09:00
justpain02
ff4c67b993
Add selectable tokenizer supports on Ooba
2024-01-05 23:57:59 +09:00
kwaroran
6f54c90187
[feat] add mistral support
2023-12-13 12:33:58 +09:00
kwaroran
341ea3c364
[ref] remove unused
2023-12-10 03:38:21 +09:00
kwaroran
10710b6bc2
[fix] ban
2023-12-07 00:03:50 +09:00
kwaroran
a3dba9f306
[feat] strongban
2023-12-06 18:21:06 +09:00
kwaroran
6a6321c5dc
[fix] ooba to use llama tokenizer
2023-12-03 21:31:14 +09:00
kwaroran
26f4ce94fa
[fix] consistent tokenizing
2023-11-24 14:50:06 +09:00