async-openai/README.md (1 addition & 2 deletions)
@@ -36,7 +36,6 @@
 - [x] Microsoft Azure OpenAI Service
 - [x] Models
 - [x] Moderations
-- [x] WASM support (experimental and only available in [`experiments`](https://github.com/64bit/async-openai/tree/experiments) branch)
 - Support SSE streaming on available APIs
 - All requests including form submissions (except SSE streaming) are retried with exponential backoff when [rate limited](https://platform.openai.com/docs/guides/rate-limits) by the API server.
 - Ergonomic builder pattern for all request objects.
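The retry bullet above can be sketched in plain Rust. This is a minimal illustration of a capped exponential-backoff delay schedule, not async-openai's actual retry code; the function name and the base/cap values are invented for the example.

```rust
use std::time::Duration;

/// Delay before the nth retry (0-indexed): double `base` on each
/// attempt and cap the result at `max`. This is the usual shape of
/// exponential backoff for rate-limited (HTTP 429) responses.
fn backoff_delay(attempt: u32, base: Duration, max: Duration) -> Duration {
    base.saturating_mul(2u32.saturating_pow(attempt)).min(max)
}

fn main() {
    let base = Duration::from_millis(500);
    let max = Duration::from_secs(8);
    for attempt in 0..6 {
        // 500ms, 1s, 2s, 4s, 8s, 8s (capped)
        println!("retry {attempt}: wait {:?}", backoff_delay(attempt, base, max));
    }
}
```

A real client would also add jitter and honor any `Retry-After` header the server sends, but the doubling-with-a-cap schedule is the core idea.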
@@ -122,7 +121,7 @@ This project adheres to [Rust Code of Conduct](https://www.rust-lang.org/policie
 ## Complimentary Crates
 - [openai-func-enums](https://github.com/frankfralick/openai-func-enums) provides procedural macros that make it easier to use this library with the OpenAI API's tool calling feature. It also provides derive macros you can add to existing [clap](https://github.com/clap-rs/clap) application subcommands for natural-language use of command-line tools. It supports OpenAI's [parallel tool calls](https://platform.openai.com/docs/guides/function-calling/parallel-function-calling) and lets you choose between running multiple tool calls concurrently or on their own OS threads.