upgrade graphrag-sdk and move to litellm #40
Conversation
Walkthrough
The pull request introduces changes to the project's model configuration and dependency management.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant LLMAgent as Knowledge Graph Agent
    participant LiteModel as LiteModel
    User->>LLMAgent: Create Knowledge Graph
    LLMAgent->>LiteModel: Initialize with model name
    LiteModel-->>LLMAgent: Model Ready
    LLMAgent-->>User: Knowledge Graph Prepared
```
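For orientation, here is a minimal sketch of the wiring the diagram describes, assembled from the snippets quoted in the reviews below; the import path for `KnowledgeGraphModelConfig` is an assumption based on graphrag-sdk's layout, not confirmed by this diff:

```python
import os

from graphrag_sdk.models.litellm import LiteModel
# Assumed import path for KnowledgeGraphModelConfig; verify against graphrag-sdk 0.4.1.
from graphrag_sdk.model_config import KnowledgeGraphModelConfig

# Model name resolution mirrors api/llm.py: environment override with a Gemini default.
model_name = os.getenv("MODEL_NAME", "gemini/gemini-2.0-flash-exp")

# Initialize the LiteLLM-backed model and hand it to the knowledge graph config.
model = LiteModel(model_name=model_name)
model_config = KnowledgeGraphModelConfig.with_model(model)
```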
Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `poetry.lock` is excluded by `!**/*.lock`

📒 Files selected for processing (2)
- `api/llm.py` (1 hunks)
- `pyproject.toml` (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
`api/llm.py`
- 202-202: Undefined name `LiteModel` (F821)
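The finding flags `LiteModel` being used before it is defined; a minimal fix, assuming the import path quoted elsewhere in this review, is to add the import at the top of `api/llm.py`:

```python
# Resolves the F821 "Undefined name LiteModel" finding.
from graphrag_sdk.models.litellm import LiteModel
```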
🔇 Additional comments (1)
api/llm.py (1)
- 208-208: Configuration logic appears sound. The updated `model_config` references the newly created `model` object. This is consistent with the dependency change to `LiteModel`, and no immediate issues are apparent.
Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.
Comments suppressed due to low confidence (3)
`api/llm.py:4`
- [nitpick] The variable name `model` is generic. It should be renamed to `lite_model` to be more descriptive.
  `from graphrag_sdk.models.litellm import LiteModel`

`api/llm.py:202`
- [nitpick] The variable name `model` is generic. It should be renamed to `lite_model` to be more descriptive; a sketch applying this rename follows the list.
  `model = LiteModel(model_name="gemini/gemini-2.0-flash-exp")`

`api/llm.py:208`
- Ensure that the new behavior introduced by switching to LiteLLM is covered by tests.
  `model_config=KnowledgeGraphModelConfig.with_model(model)`
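A minimal sketch applying the rename nitpicks above (the rename is the reviewer's suggestion; the `KnowledgeGraphModelConfig` import path is an assumption):

```python
from graphrag_sdk.models.litellm import LiteModel
# Assumed import path; adjust to the project's actual module layout.
from graphrag_sdk.model_config import KnowledgeGraphModelConfig

# 'lite_model' instead of the generic 'model', per the nitpick above;
# behavior is unchanged, only the identifier is more descriptive.
lite_model = LiteModel(model_name="gemini/gemini-2.0-flash-exp")
model_config = KnowledgeGraphModelConfig.with_model(lite_model)
```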
Actionable comments posted: 0
🧹 Nitpick comments (1)
api/graph.py (1)
- 31-31: Consider a more extensible filtering approach. While this change is valid, if the codebase expands the set of unwanted graph suffixes over time, consider centralizing those suffixes in a set or list, making the logic easier to maintain. For instance:

```diff
-graphs = [g for g in graphs if not (g.endswith('_git') or g.endswith('_schema'))]
+excluded_suffixes = ["_git", "_schema"]
+graphs = [g for g in graphs if not any(g.endswith(suffix) for suffix in excluded_suffixes)]
```
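As a standalone illustration of the suggestion (the sample graph names are hypothetical), note that `str.endswith` also accepts a tuple, which makes the `any` call unnecessary:

```python
# Centralized suffix filtering; sample graph names are made up for the demo.
EXCLUDED_SUFFIXES = ("_git", "_schema")

graphs = ["repoA", "repoA_git", "repoA_schema", "repoB"]
graphs = [g for g in graphs if not g.endswith(EXCLUDED_SUFFIXES)]

print(graphs)  # ['repoA', 'repoB']
```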
Actionable comments posted: 0
🧹 Nitpick comments (1)
.env.template (1)
- 6-6: Add documentation for GEMINI_API_KEY. Consider adding a comment above the GEMINI_API_KEY line to explain:
  - How to obtain the API key
  - Any specific requirements or limitations
  - A link to relevant documentation

```diff
+# Gemini API key from Google AI Studio (https://makersuite.google.com/app/apikey)
 GEMINI_API_KEY=<gemini_api_key>
```
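For context, a small sketch of how the documented key is consumed, assuming the `load_dotenv`/`getenv` pattern that the verification output below shows the project already uses; LiteLLM reads provider keys such as `GEMINI_API_KEY` from the process environment:

```python
import os

from dotenv import load_dotenv  # python-dotenv

# Load .env so GEMINI_API_KEY is visible to LiteLLM, which picks up
# provider keys from the environment.
load_dotenv()
assert os.getenv("GEMINI_API_KEY"), "GEMINI_API_KEY is not set"
```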
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `.env.template` (1 hunks)
🔇 Additional comments (1)
.env.template (1)
- 5-6: Verify if OPENAI_API_KEY is still required. Since the PR objectives mention moving to litellm and the changes introduce GEMINI_API_KEY, OPENAI_API_KEY may no longer be needed; if not, it should be removed to avoid confusion.
✅ Verification successful
Both OpenAI and Gemini API keys are required
Based on the codebase analysis:
- The project uses LiteLLM as an abstraction layer to support multiple LLM providers.
- The dependencies in `poetry.lock` and `requirements.txt` show both OpenAI and LiteLLM are required.
- The code in `api/llm.py` shows the system is configured to use Gemini by default (`model_name = os.getenv('MODEL_NAME', 'gemini/gemini-2.0-flash-exp')`) but can be switched to other providers through LiteLLM, as shown in the sketch after this list.
- Both API keys should be kept in the template to support this flexibility.
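A short sketch of the provider switch described above, built around the `MODEL_NAME` fallback quoted from `api/llm.py` (the OpenAI model string is illustrative only):

```python
import os

# Default mirrors api/llm.py; any LiteLLM "provider/model" string should work,
# e.g. a hypothetical override: export MODEL_NAME=openai/gpt-4o
model_name = os.getenv("MODEL_NAME", "gemini/gemini-2.0-flash-exp")
print(f"Using model: {model_name}")
```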
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Search for OpenAI API key usage in the codebase
echo "Checking for OpenAI API key usage..."
rg -l "OPENAI_API_KEY"

# Search for OpenAI related imports or configurations
echo "Checking for OpenAI related code..."
rg -l "openai\."
```
Length of output: 205
Script:

```bash
#!/bin/bash
# Let's try a more comprehensive search for OpenAI and LiteLLM related code
echo "Checking for OpenAI references..."
rg -i "openai"
echo -e "\nChecking for LiteLLM references..."
rg -i "litellm"
echo -e "\nChecking for environment variable usage..."
rg -i "process\.env\.(OPENAI|GEMINI)"
echo -e "\nChecking Python environment variable usage..."
rg -i "os\.environ\.(get)?.*(OPENAI|GEMINI)"
echo -e "\nLooking for configuration files..."
fd -e json -e yaml -e yml -e config
```
Length of output: 2580
Script:

```bash
#!/bin/bash
# Let's check the LLM implementation to understand how the API keys are used
echo "Checking LLM implementation..."
cat api/llm.py
echo -e "\nChecking for environment variable loading..."
rg -i "load_dotenv|getenv|environ" -A 3 -B 3
```
Length of output: 12356
Summary by CodeRabbit
- Upgraded `graphrag-sdk` to version 0.4.1.