Everything works as intended.
When the first chat limit is reached, it successfully summarizes the previous chats according to the chunk size, then returns the chat with the summarized text as a system prompt and the chats after the summarized point correctly appended.
Automatic sub-chunking also works perfectly.
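Roughly, the flow looks like the sketch below. This is only an illustration of the behavior described above; names like Message, CHUNK_SIZE, SUB_CHUNK_SIZE, summarizeChunk, and buildSummarizedChat are assumptions, not the actual implementation.

```ts
// Minimal sketch of the summarize-then-rebuild flow (assumed names throughout).
type Message = { role: "system" | "user" | "assistant"; content: string };

const CHUNK_SIZE = 20;     // messages per summarization chunk (assumed value)
const SUB_CHUNK_SIZE = 5;  // size used when a chunk is split further (assumed value)

// Hypothetical summarizer backed by the sub-model.
async function summarizeChunk(messages: Message[]): Promise<string> {
  // ...call the sub-model with the chunk and return its summary text
  return "summary of " + messages.length + " messages";
}

// Split the older messages into chunks, summarize each, and rebuild the chat:
// the combined summaries go in as a system prompt, followed by the messages
// after the summarized point.
async function buildSummarizedChat(history: Message[], limit: number): Promise<Message[]> {
  if (history.length <= limit) return history;

  const cutoff = history.length - limit;   // everything before this gets summarized
  const older = history.slice(0, cutoff);
  const recent = history.slice(cutoff);

  const summaries: string[] = [];
  for (let i = 0; i < older.length; i += CHUNK_SIZE) {
    const chunk = older.slice(i, i + CHUNK_SIZE);
    // Automatic sub-chunking (simplified): split a chunk that is too large
    // into smaller pieces before summarizing.
    if (chunk.length > SUB_CHUNK_SIZE * 2) {
      for (let j = 0; j < chunk.length; j += SUB_CHUNK_SIZE) {
        summaries.push(await summarizeChunk(chunk.slice(j, j + SUB_CHUNK_SIZE)));
      }
    } else {
      summaries.push(await summarizeChunk(chunk));
    }
  }

  return [
    { role: "system", content: "Conversation so far:\n" + summaries.join("\n") },
    ...recent,
  ];
}
```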
Tested with the sub-model.
Back to the original logic; only model selection is implemented.
Previous issue: the mainChunks object holds the list of summarized texts, but for some reason they were deleted after use and then re-added, summarizing the same chats again.
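A hedged sketch of how that issue is avoided, reusing the Message and summarizeChunk names from the sketch above: keep each computed summary in mainChunks keyed by chunk index and never delete it after use, so a chunk is summarized at most once. The helper name getChunkSummary is hypothetical.

```ts
// Cache of chunk summaries; entries stay in place so already-summarized
// chats are never re-summarized (assumed shape for illustration).
const mainChunks = new Map<number, string>();

async function getChunkSummary(chunkIndex: number, chunk: Message[]): Promise<string> {
  const cached = mainChunks.get(chunkIndex);
  if (cached !== undefined) return cached;   // reuse the stored summary

  const summary = await summarizeChunk(chunk);
  mainChunks.set(chunkIndex, summary);       // keep it; do not delete after use
  return summary;
}
```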