Implement core improvements Batch 1 #211
This commit includes the following improvements:

1. **DB: Ensure Chronological Message Ordering**
   - Modified `MessageDao.loadMessages` to include `ORDER BY created_at ASC`, ensuring messages are fetched in the correct order.
2. **DB: Add Index for Message Loading**
   - Added an `@Index` for `(chat_id, created_at)` to the `Message` entity to optimize query performance for loading ordered messages.
3. **UI: Stable Keys for ChatScreen LazyColumn**
   - Provided stable keys (`message.id`, timestamps) to `LazyColumn` items in `ChatScreen.kt` to improve Compose UI performance and state handling.
4. **Perf: Cache API Client Instances**
   - Implemented caching for `OpenAI` and `GenerativeModel` clients in `ChatRepositoryImpl`. Clients are now reused based on configuration (token, API URL, model name), reducing overhead from redundant instantiation.
5. **Test: Initial Unit Tests for Client Caching**
   - Created `ChatRepositoryImplTest.kt` with unit tests verifying the API client caching logic using reflection to inspect cache state.
   - Made internal config data classes in `ChatRepositoryImpl` accessible for testing.
Walkthrough

The changes introduce client instance caching in the chat repository to optimize API client reuse for OpenAI, Google, and Ollama services. Database queries for messages are now explicitly ordered by creation time, and an index is added for efficient querying. UI updates in the chat screen improve item key stability. Comprehensive unit tests validate client caching behavior.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI
    participant ChatRepositoryImpl
    participant ClientCache
    participant APIClient
    UI->>ChatRepositoryImpl: Request chat completion (platform, config)
    ChatRepositoryImpl->>ClientCache: Check for cached client with config
    alt Client exists
        ClientCache-->>ChatRepositoryImpl: Return cached client
    else Client missing
        ChatRepositoryImpl->>APIClient: Create new client with config
        ChatRepositoryImpl->>ClientCache: Store new client with config
        ClientCache-->>ChatRepositoryImpl: Return new client
    end
    ChatRepositoryImpl->>APIClient: Send chat request
    APIClient-->>ChatRepositoryImpl: Return response
    ChatRepositoryImpl-->>UI: Return chat result
```
Actionable comments posted: 1
🔭 Outside diff range comments (1)
app/src/main/kotlin/dev/chungjungsoo/gptmobile/presentation/ui/chat/ChatScreen.kt (1)
214-217: Fix inconsistent variable usage. These lines still reference the old `key` variable instead of the new `groupKey` variable, which will cause compilation errors. Apply this fix:
```diff
- text = groupedMessages[key]!![0].content,
+ text = groupedMessages[groupKey]!![0].content,
  isLoading = !isIdle,
- onCopyClick = { clipboardManager.setText(AnnotatedString(groupedMessages[key]!![0].content.trim())) },
- onEditClick = { chatViewModel.openEditQuestionDialog(groupedMessages[key]!![0]) }
+ onCopyClick = { clipboardManager.setText(AnnotatedString(groupedMessages[groupKey]!![0].content.trim())) },
+ onEditClick = { chatViewModel.openEditQuestionDialog(groupedMessages[groupKey]!![0]) }
```
🧹 Nitpick comments (1)
app/src/test/kotlin/dev/chungjungsoo/gptmobile/data/repository/ChatRepositoryImplTest.kt (1)
96-168: Test logic is sound but brittle. The test correctly verifies client reuse through reflection, but this approach has maintainability concerns. Consider making the cache maps package-private or adding test-only accessors to avoid reflection.
Consider adding a test-only method to ChatRepositoryImpl:
```kotlin
@VisibleForTesting
internal fun getClientCacheSize(): Pair<Int, Int> {
    return openAIClients.size to googleClients.size
}
```

This would eliminate the need for reflection while maintaining encapsulation.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- app/src/main/kotlin/dev/chungjungsoo/gptmobile/data/database/dao/MessageDao.kt (1 hunks)
- app/src/main/kotlin/dev/chungjungsoo/gptmobile/data/database/entity/Message.kt (2 hunks)
- app/src/main/kotlin/dev/chungjungsoo/gptmobile/data/repository/ChatRepositoryImpl.kt (7 hunks)
- app/src/main/kotlin/dev/chungjungsoo/gptmobile/presentation/ui/chat/ChatScreen.kt (4 hunks)
- app/src/test/kotlin/dev/chungjungsoo/gptmobile/data/repository/ChatRepositoryImplTest.kt (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Build debug APK
🔇 Additional comments (12)
app/src/main/kotlin/dev/chungjungsoo/gptmobile/data/database/entity/Message.kt (1)

6-6: LGTM! Database index properly optimizes message queries. The addition of an index on `(chat_id, created_at)` is well-aligned with the updated query in `MessageDao.kt` that orders messages by `created_at ASC`. This composite index will significantly improve query performance for fetching ordered messages within a chat.
Also applies to: 20-20
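As a concrete illustration, the indexed entity and the ordered query could be wired together as below. This is a sketch only: the table name and any fields other than `chat_id`/`created_at` are assumptions, not the actual `Message.kt` source.

```kotlin
import androidx.room.ColumnInfo
import androidx.room.Dao
import androidx.room.Entity
import androidx.room.Index
import androidx.room.PrimaryKey
import androidx.room.Query

// Composite index on (chat_id, created_at) lets SQLite satisfy both the
// WHERE filter and the ORDER BY from the same index.
@Entity(
    tableName = "messages",
    indices = [Index(value = ["chat_id", "created_at"])]
)
data class Message(
    @PrimaryKey(autoGenerate = true) val id: Int = 0,
    @ColumnInfo(name = "chat_id") val chatId: Int,
    val content: String,
    @ColumnInfo(name = "created_at") val createdAt: Long
)

@Dao
interface MessageDao {
    // Explicit ORDER BY guarantees chronological order regardless of
    // insertion order or SQLite's default row ordering.
    @Query("SELECT * FROM messages WHERE chat_id = :chatId ORDER BY created_at ASC")
    suspend fun loadMessages(chatId: Int): List<Message>
}
```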
app/src/main/kotlin/dev/chungjungsoo/gptmobile/data/database/dao/MessageDao.kt (1)

13-13: LGTM! Explicit ordering ensures consistent message retrieval. Adding `ORDER BY created_at ASC` guarantees that messages are returned in chronological order, which is essential for proper chat functionality. This change works optimally with the database index added in `Message.kt`.

app/src/main/kotlin/dev/chungjungsoo/gptmobile/presentation/ui/chat/ChatScreen.kt (2)
201-203: LGTM! Variable renaming improves code clarity. The change from `key` to `groupKey` and the introduction of `currentGroup` makes the code more readable and explicit about what data is being processed.
205-205: LGTM! Stable keys improve Compose performance. The stable key implementation using message IDs and timestamps will improve LazyColumn performance by providing consistent item identity across recompositions. The key patterns are well-designed:
- User messages: `"user-${currentGroup[0].id}"`
- Assistant groups: `"assistant-group-${currentGroup[0].chatId}-${currentGroup[0].createdAt}"`
- Live messages with timestamp-based keys
Also applies to: 225-225, 254-254, 271-271
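A minimal sketch of the keyed-item pattern the review describes. The composables (`UserBubble`, `AssistantGroup`) and the shape of `groupedMessages` are placeholders, not the real `ChatScreen.kt` code:

```kotlin
LazyColumn {
    groupedMessages.forEach { (groupKey, currentGroup) ->
        // Keying on the message's database id (rather than the list index)
        // lets Compose preserve item identity and state when the list
        // grows or recomposes.
        item(key = "user-${currentGroup[0].id}") {
            UserBubble(text = currentGroup[0].content)
        }
        item(key = "assistant-group-${currentGroup[0].chatId}-${currentGroup[0].createdAt}") {
            AssistantGroup(messages = currentGroup)
        }
    }
}
```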
app/src/main/kotlin/dev/chungjungsoo/gptmobile/data/repository/ChatRepositoryImpl.kt (4)
49-58: LGTM! Well-structured client caching implementation. The configuration data classes and cache maps provide a clean foundation for client instance reuse. The separation between OpenAI and Google client caches with appropriate configuration keys is well-designed.
60-65: LGTM! Clean OpenAI client caching. The `getOpenAIClient` method properly implements caching with configuration-based keys. The use of `getOrPut` ensures each configuration creates its client at most once and reuses it thereafter (note that `getOrPut` on a plain `MutableMap` is not atomic, so a synchronized or concurrent map is needed if clients may be requested from multiple threads).
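The caching pattern can be demonstrated in isolation. The sketch below uses a stand-in `FakeClient` rather than the real OpenAI client, and the config fields are illustrative, but the `getOrPut`-with-a-data-class-key mechanics are the same:

```kotlin
// Data-class keys give structural equals/hashCode, so identical
// configurations map to the same cache entry.
data class ClientConfig(val token: String, val apiUrl: String, val model: String)

class FakeClient(val config: ClientConfig) // stand-in for the real API client

val clients = mutableMapOf<ClientConfig, FakeClient>()
var creations = 0

fun getClient(token: String, apiUrl: String, model: String): FakeClient {
    val key = ClientConfig(token, apiUrl, model)
    // The lambda runs only on a cache miss, so each distinct
    // configuration constructs exactly one client.
    return clients.getOrPut(key) {
        creations++
        FakeClient(key)
    }
}
```

Calling `getClient` twice with the same arguments returns the same instance; a different token or model creates a new one. As noted above, a plain `MutableMap` with `getOrPut` is not thread-safe, so concurrent callers would need a synchronized or concurrent map.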
187-188: LGTM! Proper Ollama URL normalization. The URL normalization logic correctly ensures Ollama's API path ends with `/v1/` while avoiding double slashes. This handles the different URL formats that users might configure.
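A self-contained illustration of the normalization rule described here. The real logic in `ChatRepositoryImpl` may differ in detail; this mirrors the review's description, not the verbatim source:

```kotlin
// Ensure an Ollama base URL ends with "/v1/" without producing
// double slashes or a duplicated "/v1/v1/" segment.
fun normalizeOllamaUrl(apiUrl: String): String {
    val base = apiUrl.trimEnd('/')
    return if (base.endsWith("/v1")) "$base/" else "$base/v1/"
}
```

With this rule, `"http://localhost:11434"`, `"http://localhost:11434/"`, and `"http://localhost:11434/v1/"` all normalize to `"http://localhost:11434/v1/"`.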
91-91: LGTM! Consistent client caching usage. All chat completion methods now properly use the cached client instances instead of creating new ones on each call. This will significantly improve performance by avoiding redundant client initialization.
Also applies to: 142-148, 164-164, 188-188
app/src/test/kotlin/dev/chungjungsoo/gptmobile/data/repository/ChatRepositoryImplTest.kt (4)
28-36: Good test documentation about limitations. The comments accurately identify the challenges of testing client caching without mocking constructors and acknowledge the trade-offs in the current approach.
200-252: Google client tests need update for corrected caching logic. These tests will pass with the current flawed Google client caching implementation, but should be updated once the caching key includes all configuration parameters as suggested in the ChatRepositoryImpl review.
After fixing the Google client caching logic to include all configuration parameters, update these tests to verify that clients are correctly cached/created based on the complete configuration.
256-274: LGTM! Ollama URL normalization test. The test correctly verifies that Ollama's URL normalization logic works as expected and that the normalized URL is used in the cache key.
272-272: Verify OpenAIClientConfig accessibility. The test assumes `ChatRepositoryImpl.OpenAIClientConfig` is accessible for casting. Ensure this data class has appropriate visibility for testing.

```shell
#!/bin/bash
# Check if OpenAIClientConfig is accessible for testing
ast-grep --pattern $'internal data class OpenAIClientConfig'
```
```kotlin
private fun getGoogleClient(token: String?, modelName: String?, systemPrompt: String?, temperature: Float?, topP: Float?): GenerativeModel {
    val configKey = GoogleClientConfig(token ?: "", modelName ?: "") // Simplified key for lookup
    // Actual config for creation uses all params
    return googleClients.getOrPut(configKey) {
        val genConfig = generationConfig {
            this.temperature = temperature
            this.topP = topP
        }
        GenerativeModel(
            modelName = modelName ?: "",
            apiKey = token ?: "",
            systemInstruction = content { text(systemPrompt ?: ModelConstants.DEFAULT_PROMPT) },
            generationConfig = genConfig,
            safetySettings = listOf(
                SafetySetting(HarmCategory.DANGEROUS_CONTENT, BlockThreshold.ONLY_HIGH),
                SafetySetting(HarmCategory.SEXUALLY_EXPLICIT, BlockThreshold.NONE)
            )
        )
    }
}
```
Fix Google client caching logic.
The caching key for Google clients only considers `token` and `modelName`, but the actual client creation uses additional parameters (`systemPrompt`, `temperature`, `topP`). This could lead to incorrect client reuse when these parameters differ.
Consider expanding the cache key to include all configuration parameters:
```diff
-internal data class GoogleClientConfig(val token: String, val modelName: String) // Google client is tied to model name
+internal data class GoogleClientConfig(
+    val token: String,
+    val modelName: String,
+    val systemPrompt: String?,
+    val temperature: Float?,
+    val topP: Float?
+)

 private fun getGoogleClient(token: String?, modelName: String?, systemPrompt: String?, temperature: Float?, topP: Float?): GenerativeModel {
-    val configKey = GoogleClientConfig(token ?: "", modelName ?: "") // Simplified key for lookup
+    val configKey = GoogleClientConfig(token ?: "", modelName ?: "", systemPrompt, temperature, topP)
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```kotlin
// Expand the cache key to include all relevant client parameters
internal data class GoogleClientConfig(
    val token: String,
    val modelName: String,
    val systemPrompt: String?,
    val temperature: Float?,
    val topP: Float?
)

private fun getGoogleClient(
    token: String?,
    modelName: String?,
    systemPrompt: String?,
    temperature: Float?,
    topP: Float?
): GenerativeModel {
    val configKey = GoogleClientConfig(
        token ?: "",
        modelName ?: "",
        systemPrompt,
        temperature,
        topP
    )
    // Actual config for creation uses all params
    return googleClients.getOrPut(configKey) {
        val genConfig = generationConfig {
            this.temperature = temperature
            this.topP = topP
        }
        GenerativeModel(
            modelName = modelName ?: "",
            apiKey = token ?: "",
            systemInstruction = content { text(systemPrompt ?: ModelConstants.DEFAULT_PROMPT) },
            generationConfig = genConfig,
            safetySettings = listOf(
                SafetySetting(HarmCategory.DANGEROUS_CONTENT, BlockThreshold.ONLY_HIGH),
                SafetySetting(HarmCategory.SEXUALLY_EXPLICIT, BlockThreshold.NONE)
            )
        )
    }
}
```
🤖 Prompt for AI Agents
In app/src/main/kotlin/dev/chungjungsoo/gptmobile/data/repository/ChatRepositoryImpl.kt around lines 67 to 86, the cache key for googleClients only includes token and modelName, but the client creation depends also on systemPrompt, temperature, and topP. To fix this, modify the cache key to include all these parameters so that clients are correctly cached and reused only when all configuration parameters match.