Replies: 1 comment
🤖 Hey @nshern! Great to see you diving deeper into LlamaIndex. How's everything going on your end?

To speed up the rebuild-and-load process and prevent your Streamlit frontend application from hanging on start-up, you might consider adjusting the number of workers used by the `run_jobs` utility. Here's how you can do it:

```python
import asyncio
from typing import Any, Coroutine, List

from llama_index.async_utils import get_asyncio_module

# Increase the number of workers
DEFAULT_NUM_WORKERS = 10  # or any number based on your system's capacity


async def run_jobs(
    jobs: List[Coroutine],
    show_progress: bool = False,
    workers: int = DEFAULT_NUM_WORKERS,
) -> List[Any]:
    """Run jobs.

    Args:
        jobs (List[Coroutine]):
            List of jobs to run.
        show_progress (bool):
            Whether to show progress bar.

    Returns:
        List[Any]:
            List of results.
    """
    asyncio_mod = get_asyncio_module(show_progress=show_progress)
    semaphore = asyncio.Semaphore(workers)

    async def worker(job: Coroutine) -> Any:
        async with semaphore:
            return await job

    pool_jobs = [worker(job) for job in jobs]

    return await asyncio_mod.gather(*pool_jobs)
```

Please note that increasing the number of workers will increase the usage of system resources, so make sure your system has enough capacity to handle the increased number of workers. You can find this function in the `llama_index/async_utils.py` file, where the current default is set by the `DEFAULT_NUM_WORKERS` constant.

I hope this helps! If you have any other questions or need further clarification, feel free to ask.

Sources
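To make the semaphore-bounded pattern above concrete, here is a minimal, self-contained sketch using only the standard library. The names `run_bounded` and `fake_job` are hypothetical stand-ins, not part of LlamaIndex:

```python
import asyncio
from typing import Any, Coroutine, List


async def run_bounded(jobs: List[Coroutine], workers: int = 4) -> List[Any]:
    # Bound concurrency with a semaphore: at most `workers` jobs run at once.
    semaphore = asyncio.Semaphore(workers)

    async def worker(job: Coroutine) -> Any:
        async with semaphore:
            return await job

    # gather preserves the input order of results.
    return await asyncio.gather(*(worker(job) for job in jobs))


async def fake_job(i: int) -> int:
    # Hypothetical job standing in for an I/O-bound indexing task.
    await asyncio.sleep(0.01)
    return i * 2


results = asyncio.run(run_bounded([fake_job(i) for i in range(5)], workers=2))
print(results)  # [0, 2, 4, 6, 8]
```

Raising `workers` lets more of these jobs overlap their I/O waits, which is where the speed-up comes from; for CPU-bound work it buys little.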
As referenced here, I am persisting my index and re-using it when my app is rerun. It takes about a minute for this process to complete. Is there any way I can speed this up?

I am using a Streamlit frontend and would like to prevent the app from hanging for a minute on start-up.
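One Streamlit-specific mitigation (an assumption on my part, not from this thread) is to cache the loaded index across reruns, e.g. with `st.cache_resource`, so the minute-long load happens only once per process. The underlying idea can be sketched with the standard library alone, using a hypothetical `load_index` stand-in for the slow persisted-index load:

```python
import functools
import time


@functools.lru_cache(maxsize=1)  # st.cache_resource plays a similar role in Streamlit
def load_index() -> str:
    # Hypothetical stand-in for the expensive persisted-index load.
    time.sleep(0.1)  # simulate the slow disk read / rebuild
    return "index"


start = time.perf_counter()
index = load_index()           # first call pays the full cost
first_call = time.perf_counter() - start

start = time.perf_counter()
index_again = load_index()     # cached: returns almost instantly
second_call = time.perf_counter() - start

print(second_call < first_call)  # True
```

With caching in place, only the very first rerun after the process starts pays the load cost; subsequent reruns reuse the in-memory object.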