Streaming
Stream responses token by token.
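The streaming toggle corresponds to rendering each token as it arrives instead of waiting for the complete reply. A minimal sketch of consuming such a stream, assuming a hypothetical /api/chat endpoint that returns a plain-text token stream (the real endpoint and payload shape are not specified here):

```ts
// Minimal sketch: read a streamed reply chunk by chunk and hand each
// piece to the UI as soon as it arrives. The endpoint is an assumption.
async function streamReply(prompt: string, onToken: (t: string) => void): Promise<void> {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, stream: true }),
  });
  if (!res.ok || !res.body) throw new Error(`Request failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Decode each chunk incrementally and push it to the caller.
    onToken(decoder.decode(value, { stream: true }));
  }
}
```

When streaming is off, the same endpoint would be read to completion before anything is shown.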
Lower values keep replies focused; higher values add more variety.
Enable caching
Toggle response caching on or off.
Auto refresh
Automatically refresh cache data.
Latest cache activity
Enable caching to start collecting cache stats.
Minimum tokens required to cache (selected model): 0/1,024
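Taken together, these settings describe a response cache that only activates above a per-model minimum token count. A sketch of that gating logic, assuming a simple in-memory cache and a stand-in tokenizer (the 1,024 threshold mirrors the setting above; everything else is an assumption):

```ts
// Sketch: cache a reply only when caching is enabled and the prompt
// meets the selected model's minimum token count (assumed 1,024 here).
const MIN_TOKENS_TO_CACHE = 1024;
const responseCache = new Map<string, string>();

function countTokens(text: string): number {
  // Rough whitespace split stands in for the model's real tokenizer.
  return text.trim().split(/\s+/).length;
}

function maybeCache(prompt: string, reply: string, cachingEnabled: boolean): void {
  if (!cachingEnabled) return;                         // "Enable caching" toggle is off
  if (countTokens(prompt) < MIN_TOKENS_TO_CACHE) return; // below the minimum, skip
  responseCache.set(prompt, reply);
}

function getCached(prompt: string): string | undefined {
  return responseCache.get(prompt);
}
```

The "Latest cache activity" panel would then simply report hits and writes from this store once the toggle is on.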
Choose how you want to fork this chat.
Loading...
Please wait while we finish up.
Something went wrong
Please try again.
No AI requests sent yet.
Set your name
Profile preview
Are you sure you want to delete this preset?
Choose how you want to start your preset.
Browse your saved presets as cards. Click a card to load it into the editor.
Loading cards...
System prompt (Locked)
Assistant prefill (Locked)
Chat history
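The preset editor groups a system prompt, an assistant prefill, and the chat history, with the first two marked as locked. A sketch of one way such a preset could be modeled and loaded into the editor; all field and function names here are assumptions, not the app's actual data model:

```ts
// Hypothetical preset shape; names are illustrative only.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface Preset {
  name: string;
  systemPrompt: string;     // shown as "System prompt (Locked)"
  assistantPrefill: string; // shown as "Assistant prefill (Locked)"
  locked: boolean;          // locked presets keep prompt and prefill read-only
  history: ChatMessage[];   // editable chat history
}

// Loading a preset: locked fields are copied as-is and flagged read-only,
// while the chat history remains editable.
function loadPreset(preset: Preset) {
  return {
    systemPrompt: { value: preset.systemPrompt, readOnly: preset.locked },
    assistantPrefill: { value: preset.assistantPrefill, readOnly: preset.locked },
    history: [...preset.history],
  };
}
```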