[feat] Update AI model settings for GPT-3.5 and add GPT-3.5 Turbo 16k option
This commit updates the AI model settings for GPT-3.5, allowing a context size of up to 4000 tokens, and adds a new GPT-3.5 Turbo 16k option with a maximum context of 16000 tokens. It also clamps the requested context size to each model's maximum when it would exceed that limit.
@@ -57,6 +57,8 @@ export async function requestChatDataMain(arg:requestDataArgument, model:'model'
     switch(aiModel){
         case 'gpt35':
+        case 'gpt35_16k':
+        case 'gpt35_16k_0613':
         case 'gpt4':
         case 'gpt4_32k':{
@@ -69,6 +71,8 @@ export async function requestChatDataMain(arg:requestDataArgument, model:'model'
     const body = ({
         model: aiModel === 'gpt35' ? 'gpt-3.5-turbo'
+        : aiModel === 'gpt35_16k' ? 'gpt-3.5-turbo-16k'
+        : aiModel === 'gpt35_16k_0613' ? 'gpt-3.5-turbo-16k-0613'
         : aiModel === 'gpt4' ? 'gpt-4' : 'gpt-4-32k',
         messages: formated,
         temperature: temperature,
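The clamping behavior described in the commit message (limiting the context size when it exceeds each model's maximum) is not shown in the diff hunks above. A minimal sketch of that logic in TypeScript follows; the `maxContextByModel` table and `clampContextSize` helper are illustrative assumptions, not the repository's actual identifiers, and only the 4000/16000 limits come from the commit message (the GPT-4 values are placeholders).

```typescript
// Model identifiers as used in the diff's switch statement.
type AIModel = 'gpt35' | 'gpt35_16k' | 'gpt35_16k_0613' | 'gpt4' | 'gpt4_32k'

// Maximum context size per model. The GPT-3.5 values (4000 and 16000)
// are from the commit message; the GPT-4 entries are assumed placeholders.
const maxContextByModel: Record<AIModel, number> = {
    gpt35: 4000,
    gpt35_16k: 16000,
    gpt35_16k_0613: 16000,
    gpt4: 8000,
    gpt4_32k: 32000,
}

// Clamp a requested context size so it never exceeds the model's maximum.
function clampContextSize(aiModel: AIModel, requested: number): number {
    return Math.min(requested, maxContextByModel[aiModel])
}
```

With this helper, a request of 6000 tokens against `gpt35` is reduced to 4000, while the same request against `gpt35_16k` passes through unchanged.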