v1.4.0 - Added support for o1-preview model
- Added the ability to select newer models up to o1-preview, including o1-mini and the previously missing GPT-4o
- Removed all defaults from the resolver, providing a clean slate for generation requests. Results now reflect the model's own defaults rather than our bias.
- The maxTokens setting is now mapped under the hood to max_output_tokens, so you are no longer forced to start a new conversation or modify the maxTokens value. By default, OpenAI generates tokens until it can no longer continue.
- Removed default temperature and topP
- Changed the default model to GPT-4o when no model is configured