As you know, lots of editors currently display auto-completions as ghost text, similar to GitHub Copilot or Cursor. However, I'm not sure this approach provides the best user experience.
Hi community!
Has anyone successfully implemented streaming rendering for the Milkdown editor? Or does anyone have good ideas or approaches on how to achieve this?
My specific use case is: I'm trying to integrate an AI assistant (e.g., an LLM) with Milkdown to help users auto-complete or continue writing documents. For this to work well, I need to implement:
Streaming Output: The text generated by the AI assistant is returned incrementally, word-by-word or chunk-by-chunk (not as one complete block).
Streaming Rendering: The Milkdown editor needs to render (insert) these incremental chunks into the document at the current cursor position, in real time and smoothly, while providing a good user experience (e.g., cursor tracking, scroll-position maintenance).
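Since Milkdown builds on ProseMirror, actual insertion would presumably go through a ProseMirror transaction (e.g. `tr.insertText` at the current selection). As a toy sketch of the bookkeeping involved — consuming an async stream of chunks and keeping the cursor just after the inserted text — here is a framework-free model; all names (`Doc`, `insertAtCursor`, `streamChunks`) are hypothetical and not Milkdown APIs:

```typescript
// Minimal document model: plain text plus a cursor offset.
interface Doc {
  text: string;
  cursor: number; // offset into text
}

// Insert a chunk at the cursor and advance the cursor past it,
// mimicking what a ProseMirror transaction does for the selection.
function insertAtCursor(doc: Doc, chunk: string): Doc {
  return {
    text: doc.text.slice(0, doc.cursor) + chunk + doc.text.slice(doc.cursor),
    cursor: doc.cursor + chunk.length,
  };
}

// Simulated LLM stream: yields small chunks asynchronously.
async function* streamChunks(chunks: string[]): AsyncGenerator<string> {
  for (const c of chunks) {
    await new Promise((r) => setTimeout(r, 0)); // stand-in for network delay
    yield c;
  }
}

async function run(): Promise<Doc> {
  let doc: Doc = { text: "Hello world", cursor: 5 }; // cursor after "Hello"
  for await (const chunk of streamChunks([",", " stream", "ing"])) {
    doc = insertAtCursor(doc, chunk);
  }
  return doc;
}
```

In a real integration, each iteration of the `for await` loop would dispatch one editor transaction instead of rebuilding a string, but the cursor arithmetic is the same idea.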
I'm currently exploring how to synchronize this streaming output with the editor's rendering. I'd love to hear your thoughts on:
Are there any existing Milkdown plugins or extensions that implement similar functionality?
If not, how would you approach architecting or modifying Milkdown to achieve this? For instance:
What's an efficient and safe way to insert the arriving text fragments into the editor's document model?
How can we ensure the editor's state (cursor, selection, history) remains correct and usable during continuous insertion?
How should we handle network latency or interruptions so that rendering stays continuous?
Are there any suggestions for performance optimization, such as debouncing or throttling?
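On the throttling question, one approach that seems natural is to buffer incoming tokens and flush them to the editor at a fixed interval, so that each flush becomes a single transaction instead of one transaction per token. A minimal, framework-free sketch (`ChunkBuffer` is a hypothetical helper, not a Milkdown API):

```typescript
// Coalesce rapid incoming chunks and deliver them in batches, at most
// one flush per interval, so the editor dispatches fewer transactions.
class ChunkBuffer {
  private pending = "";
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private flushIntervalMs: number,
    private onFlush: (batch: string) => void,
  ) {}

  // Called for every incoming chunk; schedules at most one pending flush.
  push(chunk: string): void {
    this.pending += chunk;
    if (this.timer === null) {
      this.timer = setTimeout(() => {
        this.timer = null;
        const batch = this.pending;
        this.pending = "";
        if (batch) this.onFlush(batch);
      }, this.flushIntervalMs);
    }
  }

  // Flush whatever is left immediately, e.g. when the stream ends
  // or is interrupted, so no buffered text is lost.
  flush(): void {
    if (this.timer !== null) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    const batch = this.pending;
    this.pending = "";
    if (batch) this.onFlush(batch);
  }
}
```

Calling `flush()` on stream end or on a network error also speaks to the interruption question: whatever arrived before the cut-off still reaches the document.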
Any relevant ideas or experiences are highly welcome!
Thanks in advance for your insights and suggestions!