Currently, I have locally deployed the qwq large model, OpenWebUI, and Pipelines. My goal is to use OpenWebUI together with qwq to analyze spreadsheets: for example, I upload a 200-row table and ask it to analyze that data. If I choose Smart Mode, OpenWebUI only feeds the model the first few rows; if I choose Full Context, the table exceeds qwq's maximum token limit. Is it possible to use a Pipeline to process the table iteratively in chunks, send each chunk to the model for analysis, and then summarize the partial results at the end, just as commercial GPT services let you upload anything and ask anything? I'm running entirely offline. Is there any way to achieve this?
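What I have in mind is a map-reduce style pass: split the rows into chunks that fit the context window, ask the model the question once per chunk, and then feed all the partial answers back for a final summary. Below is a minimal sketch of that idea as a Pipeline. It assumes the standard `pipe()` scaffold from the open-webui/pipelines examples, an OpenAI-compatible endpoint for qwq at `http://localhost:11434/v1/chat/completions` (adjust to however qwq is actually served), and that the table arrives as plain text in `user_message`; `CHUNK_ROWS` and the prompts are illustrative placeholders.

```python
from typing import List, Union, Generator, Iterator
import requests

QWQ_URL = "http://localhost:11434/v1/chat/completions"  # assumed local OpenAI-compatible endpoint
MODEL_NAME = "qwq"   # assumed model id on that endpoint
CHUNK_ROWS = 40      # tune so header + chunk + prompt stays inside qwq's context window


def ask_qwq(prompt: str) -> str:
    """Send one prompt to the locally served model and return its reply."""
    resp = requests.post(
        QWQ_URL,
        json={
            "model": MODEL_NAME,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
        },
        timeout=600,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


class Pipeline:
    def __init__(self):
        self.name = "Chunked Table Analysis"

    async def on_startup(self):
        pass

    async def on_shutdown(self):
        pass

    def pipe(
        self, user_message: str, model_id: str, messages: List[dict], body: dict
    ) -> Union[str, Generator, Iterator]:
        # Assumption: the message carries the question on its first line and
        # the table (header row + data rows) as plain text below it.
        lines = [ln for ln in user_message.splitlines() if ln.strip()]
        question, header, rows = lines[0], lines[1], lines[2:]

        # Map step: analyze each chunk of rows independently.
        partials = []
        for i in range(0, len(rows), CHUNK_ROWS):
            chunk = "\n".join([header] + rows[i : i + CHUNK_ROWS])
            partials.append(
                ask_qwq(
                    f"Here is one part of a larger table:\n{chunk}\n\n"
                    f"Question: {question}\n"
                    "Answer for these rows only; a later pass will merge the partial answers."
                )
            )

        # Reduce step: merge the partial answers into one final summary.
        combined = "\n\n".join(f"Chunk {n + 1}: {p}" for n, p in enumerate(partials))
        return ask_qwq(
            "The following are partial analyses of a single table, produced chunk by chunk:\n"
            f"{combined}\n\n"
            f"Merge them into one consolidated answer to: {question}"
        )
```

If the combined partial answers themselves overflow the context window, the reduce step could be applied recursively (summarize the summaries in chunks) using the same pattern.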