Add multi-batch inference support, fix hivemind dependency, and improve installation process #27
+432
−132
Key Changes
Multi-batch inference support
Enables inference with variable batch sizes for better performance and scalability.
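Multi-batch support means the server can accept requests whose batch dimension varies from call to call. A minimal sketch of one piece of that, padding variable-length requests into a single rectangular batch with an attention mask (a hypothetical helper for illustration, not BloomBee's actual code):

```python
def pad_batch(sequences, pad_id=0):
    """Pad variable-length token sequences into one rectangular batch.

    Returns the padded batch and a mask (1 = real token, 0 = padding),
    so a model can ignore the padded positions during inference.
    """
    max_len = max(len(s) for s in sequences)
    batch = [s + [pad_id] * (max_len - len(s)) for s in sequences]
    mask = [[1] * len(s) + [0] * (max_len - len(s)) for s in sequences]
    return batch, mask


# Two requests of different lengths become one 2 x 3 batch.
padded, mask = pad_batch([[1, 2, 3], [4]])
```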
Fix hivemind dependency installation issue
Resolves an issue where the hivemind dependency failed to install in certain environments.
Remove hardcoded model restriction
Removed the hardcoded limitation that only allowed loading LLaMA 7B, enabling support for other model sizes.
Update installation instructions in README.md
Simplified the installation process: the entire BloomBee environment can now be installed and configured with a single command:
pip install -e .