Releases: jupyterlab/jupyter-ai
v3.0.0b0
3.0.0b0
This is the first beta release of Jupyter AI v3! We've completed a majority of the new APIs & integrations that we plan to use in v3.0.0. It's now time for us to build features, fix bugs, (greatly) improve the UI, and make Jupyternaut a powerful default AI agent. We plan to move very quickly in the next couple of weeks to make v3.0.0 available to users as soon as we can. If everything works out, we will release v3.0.0 by the end of June. 💪
This release notably restores the "stop streaming" button that existed in Jupyter AI v2 & improves performance by removing thousands of lines of old v2 code. Aside from slash commands (which will be re-implemented as agent tools during the beta), Jupyter AI v3 now has feature parity with Jupyter AI v2. 🎉
Enhancements made
Maintenance and upkeep improvements
- Raise `jupyterlab-chat` version ceiling #1373 (@dlqqq)
- Remove unused code from v3 `main` branch #1369 (@dlqqq)
Documentation improvements
Contributors to this release
v3.0.0a1
3.0.0a1
Hey folks! This v3 release notably introduces AI personas that replace chat handlers, fixes various usability issues encountered in v3.0.0a0, and upgrades to LangChain v0.3 & Pydantic v2. 🎉
AI personas
AI personas redefine how new messages are handled in Jupyter AI and supersede the "chat handler" convention used in v2. An AI persona is like a chatbot that is available in every chat instance, and each persona can use any model or framework of its choice.
- Each chat can have any number of AI personas.
- You have to `@`-mention a persona to get it to reply. Typing `@` opens a menu listing the available personas.
- Currently, Jupyter AI only has a single AI persona by default: Jupyternaut.
- Each message may mention any number of AI personas, so you can send the same question to multiple personas.
- Personas can have a custom name & avatar.
- Custom AI personas can be added to your Jupyter AI instance by writing & installing a new package that provides them as entry points (a minimal sketch follows below).
- We plan to add more AI personas by default and/or provide library packages that add AI personas.
- More information will be available in the v3 user documentation once it is ready.
There's also a new v3 documentation page! Currently, only the developer documentation has been updated. Please read through the v3 developer docs if you are interested in writing your own AI personas. 🤗
- Link to new v3 developer docs: https://jupyter-ai.readthedocs.io/en/v3/developers/index.html
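To give a rough idea of what a third-party persona package might look like, here is a minimal sketch. The import path, class names, method signatures, and the entry point group are assumptions about the still-evolving v3 persona API; the v3 developer docs linked above are the authoritative reference.

```python
# Illustrative sketch only: the import path, base class, method signatures,
# and entry point group below are assumptions about the evolving v3 persona
# API. See the v3 developer docs linked above for the authoritative interface.
from jupyter_ai.personas import BasePersona, PersonaDefaults  # assumed import path


class EchoPersona(BasePersona):
    """A hypothetical persona that simply echoes the user's message."""

    @property
    def defaults(self):
        # Assumed: a persona declares its display name & description here.
        return PersonaDefaults(
            name="EchoBot",
            description="Echoes whatever you @-mention it with.",
            system_prompt="",
        )

    async def process_message(self, message):  # assumed handler signature
        # A real persona would call a chat model or agent framework here.
        self.send_message(f"You said: {message.body}")  # assumed helper


# The package would then advertise the persona through an entry point
# (group name assumed) in its pyproject.toml:
#
#   [project.entry-points."jupyter_ai.personas"]
#   echo = "my_package.personas:EchoPersona"
```

Once such a package is installed into the same environment as Jupyter AI, the persona should appear in the `@`-mention menu of every chat.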
Planned future work
- Jupyternaut in v3 is similar to Jupyternaut in v2, but currently lacks slash commands. We are planning to replace slash commands with agentic tools called by the chat model directly.
- In other words, by v3.0.0 Jupyternaut will infer your intent from your prompt and automatically learn/generate/fix files.
- We will develop this after v3.0.0b0 (i.e., during the beta development phase), once we begin work on APIs for agentic tool use and MCP integration.
- See the roadmap issue & GitHub milestones for more details on our future work: #1052
Enhancements made
- Introduce AI persona framework #1341 (@dlqqq)
- Separate `BaseProvider` for faster import #1338 (@krassowski)
- Added new `gpt-4.1` models #1325 (@srdas)
- Introduce AI persona framework #1324 (@dlqqq)
- [v3] Upgrade to jupyterlab-chat v0.8, restore context command completions #1290 (@dlqqq)
- Added help text fields for embedding providers in the AI Setting page #1288 (@srdas)
- Allow chat handlers to be initialized in any order #1268 (@Darshan808)
- Allow embedding model fields, fix coupled model fields, add custom OpenAI provider #1264 (@srdas)
- Refactor Chat Handlers to Simplify Initialization #1257 (@Darshan808)
- Make Native Chat Handlers Overridable via Entry Points #1249 (@Darshan808)
- Upgrade to LangChain v0.3 and Pydantic v2 #1201 (@dlqqq)
- Show error icon near cursor on inline completion errors #1197 (@Darshan808)
Bugs fixed
- Fix the path missing in inline completion request when there is no kernel #1361 (@krassowski)
- Periodically update the persona awareness to keep it alive #1358 (@brichet)
- Added a local identity provider. #1333 (@3coins)
- Handle missing field in config.json on version upgrade #1330 (@srdas)
- [3.x] Expand edge case handling in ConfigManager #1322 (@dlqqq)
- Open the AI settings in a side panel in Notebook application #1309 (@brichet)
- Add `default_completions_model` trait #1303 (@srdas)
- Pass `model_parameters` trait to embedding & completion models #1298 (@srdas)
- Migrate old config schemas, fix v2.31.0 regression #1294 (@dlqqq)
- Remove error log emitted when FAISS file is absent #1287 (@srdas)
- Ensure magics package version is consistent in future releases #1280 (@dlqqq)
- Correct minimum versions in dependency version ranges #1272 (@dlqqq)
- Allow embedding model fields, fix coupled model fields, add custom OpenAI provider #1264 (@srdas)
- Enforce path imports for MUI icons, upgrade to ESLint v8 #1225 (@krassowski)
- Fixes duplicate api key being passed in `openrouter.py` #1216 (@srdas)
- Fix MUI theme in Jupyter AI Settings #1210 (@MUFFANUJ)
- Fix Amazon Nova support (use `StrOutputParser`) #1202 (@dlqqq)
- Remove remaining shortcut to focus the chat input #1186 (@brichet)
- Fix specifying empty list in provider and model allow/denylists #1185 (@MaicoTimmerman)
- Reply gracefully when chat model is not selected #1183 (@dlqqq)
Maintenance and upkeep improvements
- Revert "Introduce AI persona framework (#1324)" #1340 (@dlqqq)
- Add `pyupgrade --py39-plus` and `autoflake` to `pre-commit` config #1329 (@rominf)
- Ensure magics package version is consistent in future releases #1280 (@dlqqq)
- Correct minimum versions in dependency version ranges #1272 (@dlqqq)
- Remove the dependency on `jupyterlab` #1234 (@jtpio)
- Upgrade to `actions/cache@v4` #1228 (@dlqqq)
- Typo in comment #1217 (@Carreau)
Documentation improvements
- Overhaul v3 developer documentation #1344 (@dlqqq)
- Update documentation to show usage with OpenRouter API and URL #1318 (@srdas)
- Add information about ollama - document it as an available provider and provide clearer troubleshooting help. #1235 (@fperez)
- Add documentation for vLLM usage #1232 (@srdas)
- Update documentation for setting API keys without revealing them #1224 (@srdas)
- Typo in comment #1217 (@Carreau)
- Docs: Update installation steps to work in bash & zsh #1211 (@srdas)
- Update developer docs on Pydantic compatibility #1204 (@dlqqq)
- Update documentation to add usage of `Openrouter` #1193 (@srdas)
- Fix dev install steps in contributor docs [#1188](https://github.com/jupyterlab/jupyter-a...
v2.31.5
2.31.5
Enhancements made
- Separate `BaseProvider` for faster import #1338 (@krassowski)
Bugs fixed
- Fix the path missing in inline completion request when there is no kernel #1361 (@krassowski)
- Added a local identity provider. #1333 (@3coins)
Maintenance and upkeep improvements
Contributors to this release
v2.31.4
v2.31.3
2.31.3
Bugs fixed
Documentation improvements
Contributors to this release
v2.31.2
2.31.2
Bugs fixed
- Add `default_completions_model` trait #1303 (@srdas)
- Pass `model_parameters` trait to embedding & completion models #1298 (@srdas)
Contributors to this release
v2.31.1
2.31.1
Enhancements made
Bugs fixed
- Migrate old config schemas, fix v2.31.0 regression #1294 (@dlqqq)
- Remove error log emitted when FAISS file is absent #1287 (@srdas)
Contributors to this release
v2.31.0
2.31.0
This release notably:
- Allows use of any Ollama embedding model (the model ID must now be entered by the user),
- Adds a custom OpenAI provider for using any model served on an OpenAI API,
- Allows embedding model fields to be specified, and
- Fixes the Jupyter AI settings, which previously used a single dictionary for chat, embedding, and completion model fields. These fields are now stored separately in the Jupyter AI settings file (illustrated in the sketch just below this list).
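For anyone curious what the migrated settings look like, the sketch below reads the settings file and prints the now-separate field dictionaries. The file location and the key names (`fields`, `embeddings_fields`, `completions_fields`) are assumptions about the post-2.31.0 schema, so treat this as illustrative rather than definitive.

```python
# Illustrative sketch: the config location and key names are assumptions
# about the post-2.31.0 Jupyter AI settings schema.
import json
from pathlib import Path

from jupyter_core.paths import jupyter_data_dir  # installed with Jupyter

# Assumed default location of the Jupyter AI settings file.
config_path = Path(jupyter_data_dir()) / "jupyter_ai" / "config.json"
config = json.loads(config_path.read_text())

# Chat, embedding, and completion model fields are now stored under
# separate keys (names assumed) instead of one shared dictionary.
for key in ("fields", "embeddings_fields", "completions_fields"):
    print(f"{key}: {config.get(key, {})}")
```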
Running `pip install -U jupyter_ai` will now also update `jupyter_ai_magics` automatically. This was not the case before, but thankfully it is now fixed.
Special thanks to @srdas for his contributions to this release!
Enhancements made
Bugs fixed
- Ensure magics package version is consistent in future releases #1280 (@dlqqq)
- Allow embedding model fields, fix coupled model fields, add custom OpenAI provider #1264 (@srdas)
Maintenance and upkeep improvements
Contributors to this release
v2.30.0
2.30.0
This release notably allows developers to override or disable Jupyter AI's chat handlers and slash commands via the entry points API. See the new section in the developer documentation for more info; a minimal sketch follows below.
Special thanks to @Darshan808 and @krassowski for their contributions to this release!
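As a rough illustration of the new extension point, here is a minimal sketch of a package-provided slash command. The base class, routing type, reply helper, and entry point group name are assumptions based on the v2 chat handler code; the developer documentation section mentioned above is the authoritative reference.

```python
# Illustrative sketch: base class, routing type, reply helper, and entry
# point group are assumptions about the v2.30 chat handler API. See the
# developer documentation section referenced above for the real interface.
from jupyter_ai.chat_handlers.base import BaseChatHandler, SlashCommandRoutingType


class HelloChatHandler(BaseChatHandler):
    """A hypothetical /hello slash command."""

    id = "hello"
    name = "Hello"
    help = "Replies with a friendly greeting."
    routing_type = SlashCommandRoutingType(slash_id="hello")

    async def process_message(self, message):
        # A real handler would typically invoke a chat model here.
        self.reply("Hello from a custom chat handler!", message)  # assumed helper


# Registered (or used to override/disable a built-in handler with the same
# name) via an entry point in pyproject.toml; group name assumed:
#
#   [project.entry-points."jupyter_ai.chat_handlers"]
#   hello = "my_package.handlers:HelloChatHandler"
```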
Enhancements made
- Make Native Chat Handlers Overridable via Entry Points #1249 (@Darshan808)
- Allow chat handlers to be initialized in any order #1268 (@Darshan808)
- Refactor Chat Handlers to Simplify Initialization #1257 (@Darshan808)
Bugs fixed
- Correct minimum versions in dependency version ranges #1272 (@dlqqq)
- Fix: Enable up and down arrow keys in chat input #1254 (@keerthi-swarna)
Maintenance and upkeep improvements
- Correct minimum versions in dependency version ranges #1272 (@dlqqq)
- Remove the dependency on `jupyterlab` #1234 (@jtpio)
Documentation improvements
- Add information about ollama - document it as an available provider and provide clearer troubleshooting help. #1235 (@fperez)
- Add documentation for vLLM usage #1232 (@srdas)
Contributors to this release
(GitHub contributors page for this release)
@Darshan808 | @dlqqq | @gogakoreli | @keerthi-swarna | @krassowski | @meeseeksmachine | @paulrutter | @srdas
v2.29.1
2.29.1
Enhancements made
- Show error icon near cursor on inline completion errors #1197 (@Darshan808)
Bugs fixed
- Enforce path imports for MUI icons, upgrade to ESLint v8 #1225 (@krassowski)
- Fixes duplicate api key being passed in `openrouter.py` #1216 (@srdas)
Maintenance and upkeep improvements
Documentation improvements
- Update documentation for setting API keys without revealing them #1224 (@srdas)
- Typo in comment #1217 (@Carreau)
- Docs: Update installation steps to work in bash & zsh #1211 (@srdas)