# PR Checklist
- [x] Did you check if it works normally in all models? *ignore this when it doesn't use models*
- [ ] Did you check if it works normally in all of web, local and node hosted versions? If it doesn't, did you block it in those versions? << Checked on browser only, works fine
- [x] Did you add a type def?
# Description
A large update to HypaV2's data type definitions.
```ts
mainChunks: { // the summary itself
    id: number;
    text: string;
    chatMemos: Set<string>; // UUIDs of the summarized chats
    lastChatMemo: string;
}[];
chunks: { // mainChunks split up for retrieval; the logic is somewhat awkward, so it may be removed soon
    mainChunkID: number;
    text: string;
}[];
```
With this, mainChunks is kept relevant when the chat context changes, by deletion: if a UUID is no longer present in the chat context but still exists in a mainChunk's chatMemos, it is removed.
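A minimal sketch of this pruning step, assuming "removed" means the whole mainChunk is dropped once one of its summarized chat UUIDs disappears from the context (the function name, the `MainChunk` alias, and the `contextIds` parameter are hypothetical):

```ts
// Hypothetical sketch, not the actual implementation: drop any mainChunk whose
// chatMemos contain a UUID that is no longer part of the current chat context.
type MainChunk = {
    id: number;
    text: string;
    chatMemos: Set<string>; // UUIDs of the summarized chats
    lastChatMemo: string;
};

function pruneMainChunks(mainChunks: MainChunk[], contextIds: Set<string>): MainChunk[] {
    return mainChunks.filter((chunk) =>
        // keep the chunk only if every summarized chat UUID still exists in the context
        [...chunk.chatMemos].every((memo) => contextIds.has(memo))
    );
}
```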
index.svelte.ts was also changed to update its args on each call, so that HypaV2 stays adaptive to this change without refreshing the page.
mainChunks are now pushed instead of unshifted.
# PR Checklist
- [x] Did you check if it works normally in all models? *ignore this when it doesn't use models*
- [x] Did you check if it works normally in all of web, local and node hosted versions? If it doesn't, did you block it in those versions?
- [ ] Did you add a type def?
# Description
This PR updates the regex used for extracting `<Thoughts>` content so that it handles multiline cases reliably. The previous regex, `(.+?)`, could not match content containing line breaks, so it failed on text spanning multiple lines.
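As an illustration of the underlying issue (the patterns below are assumptions, not the exact regex from the codebase), `.` does not match newlines by default, so the `s` (dotAll) flag is one way to make the lazy group span line breaks:

```ts
const text = "<Thoughts>\nline one\nline two\n</Thoughts>";

// "." does not match "\n" by default, so this fails on multiline content
const singleLine = /<Thoughts>(.+?)<\/Thoughts>/;
console.log(singleLine.test(text)); // false

// the "s" (dotAll) flag lets "." match line breaks as well
const multiLine = /<Thoughts>(.+?)<\/Thoughts>/s;
console.log(multiLine.exec(text)?.[1]); // "\nline one\nline two\n"
```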
# PR Checklist
- [ ] Did you check if it works normally in all models? *ignore this when it doesn't use models*
- [x] Did you check if it works normally in all of web, local and node hosted versions? If it doesn't, did you block it in those versions?
- [x] Did you add a type def?
# Description
This PR fixes the image input functionality for the Gemini model.
### Issue with Existing Code
The previous implementation attempted to process images in the following
way:
1. In the `OpenAIChat` type, if the `memo` field started with
`inlayImage`, the `content` field's value was copied into a variable
called `pendingImage`.
2. Later, if the `chat` role was `'user'` and `pendingImage` was not an
empty string, the code processed the image.
However, this flow does not seem to work as expected in its current state.
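For context, a rough reconstruction of that previous flow (the loop shape and the `attachImageToChat` helper are hypothetical; only `memo`, `content`, `role`, and `pendingImage` come from the description above):

```ts
// Hypothetical reconstruction of the previous, non-working flow; not the actual code.
type OpenAIChat = { role: string; content: string; memo?: string };

declare function attachImageToChat(chat: OpenAIChat, image: string): void; // hypothetical helper

function processImages(chats: OpenAIChat[]): void {
    let pendingImage = "";
    for (const chat of chats) {
        if (chat.memo?.startsWith("inlayImage")) {
            // the image data was stashed away for later...
            pendingImage = chat.content;
            continue;
        }
        if (chat.role === "user" && pendingImage !== "") {
            // ...and was supposed to be attached to the next user message
            attachImageToChat(chat, pendingImage);
            pendingImage = "";
        }
    }
}
```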
### Update
I updated the image input handling to align with the method used in
RisuAI for handling GPT’s image input. Specifically, the changes
include:
1. In `image.ts`, I explicitly specified the `gemini-exp` model.
2. If the `chat` object has a `multimodals` field and the `role` is
`user`:
- I created an array called `geminiParts` to store `GeminiPart` objects.
- The `chat.content` value is set as the `text` field of the
`GeminiPart` object in the array.
- I then iterated over `chat.multimodals` and created an object for each
`image` type, formatting it to match the Gemini structure, and added it
to the `geminiParts` array.
- After the iteration, the `geminiParts` array is assigned to the `parts` field of `reformatedChat` (see the sketch after this list).
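A minimal sketch of that flow, under assumed types: the `GeminiPart` inline-image shape (`inlineData`, `mimeType`, `data`) and the multimodal fields (`type`, `base64Data`) are assumptions rather than the codebase's actual definitions:

```ts
// Hypothetical sketch of the new handling, not the actual implementation.
type Multimodal = { type: string; base64Data: string }; // assumed shape
type GeminiPart =
    | { text: string }
    | { inlineData: { mimeType: string; data: string } }; // assumed inline-image shape

function toGeminiParts(chat: { role: string; content: string; multimodals?: Multimodal[] }): GeminiPart[] {
    const geminiParts: GeminiPart[] = [];

    // the chat text becomes the first part
    geminiParts.push({ text: chat.content });

    if (chat.multimodals && chat.role === "user") {
        // each image multimodal is converted into a Gemini-style inline data part
        for (const multimodal of chat.multimodals) {
            if (multimodal.type === "image") {
                geminiParts.push({
                    inlineData: {
                        mimeType: "image/png", // assumed; the real MIME type would come from the image data
                        data: multimodal.base64Data, // assumed field name for the base64 payload
                    },
                });
            }
        }
    }

    return geminiParts;
}

// usage (illustrative): reformatedChat.parts = toGeminiParts(chat);
```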
### Notes
- I removed the previous non-functional code entirely. If this causes
any inconvenience or violates any conventions, I sincerely apologize.
- As the final name of the next-generation Gemini model is currently
unknown, I restricted the functionality to the **gemini-exp** model in
the `image.ts` file for now. This can be updated later when the official
name is confirmed.
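For reference, the restriction could look roughly like this (illustrative only; the actual check in `image.ts` may be written differently):

```ts
// Illustrative sketch; the real check in image.ts may differ.
function supportsInlineImages(model: string): boolean {
    // restricted to the experimental model until the final model name is known
    return model.startsWith("gemini-exp");
}
```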
The Gemini models are currently very widely used, so I kindly request that you review the updated code. If you have any feedback, or if the changes are not acceptable, I completely understand if this PR is rejected.
Thank you for your time and consideration! Let me know if there's
anything I can improve or clarify.