0.4.0
- Fill-in-the-middle strategy now supports a custom prompt template, so it can be used with other models.
- Support models that expose a FIM API, for example the Mistral AI FIM endpoint.
- Support setting a suggestion token limit.
- OpenAI-compatible APIs now use streaming responses, so suggestions arrive more promptly when a line limit is set.
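As a rough illustration of how a custom fill-in-the-middle prompt template could work, the sketch below substitutes the code before and after the cursor into a template string. The `{prefix}`/`{suffix}` placeholder syntax, the `render_fim_prompt` helper, and the CodeLlama-style sentinel tokens are all illustrative assumptions, not this project's actual template format; other models (e.g. StarCoder's `<fim_prefix>`/`<fim_suffix>`/`<fim_middle>`) use different tokens.

```python
# Hypothetical FIM prompt templating sketch. Token names follow the
# CodeLlama convention purely as an example; check your model's docs
# for the sentinel tokens it was trained with.

def render_fim_prompt(template: str, prefix: str, suffix: str) -> str:
    """Insert the code before and after the cursor into a FIM template."""
    return template.replace("{prefix}", prefix).replace("{suffix}", suffix)

# Example template using CodeLlama-style infill tokens (assumed format).
CODELLAMA_TEMPLATE = "<PRE> {prefix} <SUF>{suffix} <MID>"

prompt = render_fim_prompt(
    CODELLAMA_TEMPLATE,
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(1, 2))",
)
```

The model is then asked to continue after the final sentinel token, and its completion fills the gap between prefix and suffix.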