Make SimpleApolloStore.apolloStore public and add SimpleApolloStore GC extensions #134


Merged · 11 commits · Apr 28, 2025
3 changes: 2 additions & 1 deletion CHANGELOG.md
@@ -1,6 +1,7 @@
# Next version (unreleased)

PUT_CHANGELOG_HERE
- Rename `ApolloStore` to `CacheManager` and `SimpleApolloStore` to `ApolloStore`.
- Revert the `ApolloClient.apolloStore` deprecation - keeping the original name makes more sense now after the above rename.

# Version 0.0.9
_2025-04-09_
33 changes: 28 additions & 5 deletions Writerside/topics/migration-guide.md
@@ -38,11 +38,10 @@ import com.apollographql.cache.normalized.*
import com.apollographql.apollo.cache.normalized.api.MemoryCacheFactory
// With
import com.apollographql.cache.normalized.memory.MemoryCacheFactory



```

In most cases, this will be enough to migrate your project, but there were a few renames and breaking API changes. Read on for the details.

## Database schema

The SQLite cache now uses a different schema.
@@ -106,7 +105,7 @@ store.writeOperation(operation, data).also { store.publish(it) }

Previously, if you configured custom scalar adapters on your client, you had to pass them to the `ApolloStore` methods.

Now, `ApolloClient.apolloStore` returns a `SimpleApolloStore`, a wrapper around `ApolloStore` which passes the client's `CustomScalarAdapters` automatically.
Now, `ApolloStore` has a reference to the client's `CustomScalarAdapters` so individual methods no longer need an adapters argument.

```kotlin
// Before
@@ -123,11 +122,35 @@ client.apolloStore.writeOperation(
)
```

### Providing your own store

The `ApolloStore` interface has been renamed to `CacheManager`. If you provide your own implementation, change the parent interface to `CacheManager`.
Correspondingly, the `ApolloClient.Builder.store()` extension has been renamed to `ApolloClient.Builder.cacheManager()`.

```kotlin
// Before
val MyStore = object : ApolloStore {
// ...
}
val apolloClient = ApolloClient.Builder()
// ...
.store(MyStore)
.build()

// After
val MyStore = object : CacheManager {
// ...
}
val apolloClient = ApolloClient.Builder()
// ...
.cacheManager(MyStore)
.build()
```

### Other changes

- `readFragment()` now returns a `ReadResult<D>` (it previously returned `<D>` directly). This allows surfacing metadata associated with the returned data, e.g. staleness.
- Records are now rooted per operation type (`QUERY_ROOT`, `MUTATION_ROOT`, `SUBSCRIPTION_ROOT`), whereas previously they all shared the same root, which could cause conflicts.
- `ApolloClient.apolloStore` is deprecated in favor of `ApolloClient.store` for consistency.
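As a minimal sketch of the first bullet, reading a fragment now yields a wrapper rather than the bare data. `UserFragment`, the cache key, and the metadata accessor below are illustrative assumptions, not the exact generated names; check the KDoc for the precise `ReadResult` API:

```kotlin
// Sketch only: readFragment() returns ReadResult<D> instead of D.
// UserFragment and "User:42" are hypothetical; substitute your generated
// fragment class and cache key.
val result = client.apolloStore.readFragment(
    fragment = UserFragment(),
    cacheKey = CacheKey("User:42"),
)
val user = result.data // the fragment data, as before
// ReadResult also carries metadata about the read (e.g. staleness);
// the exact accessors are documented on ReadResult itself.
```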

## CacheResolver, CacheKeyResolver

19 changes: 11 additions & 8 deletions Writerside/topics/pagination/pagination-other.md
@@ -15,7 +15,7 @@ extend type Query
@fieldPolicy(forField: "usersPage" paginationArgs: "page")
```

> This can also be done programmatically by configuring the `ApolloStore` with a [`FieldKeyGenerator`](https://apollographql.github.io/apollo-kotlin-normalized-cache/kdoc/normalized-cache/com.apollographql.cache.normalized.api/-field-key-generator/index.html?query=interface%20FieldKeyGenerator) implementation.
> This can also be done programmatically by configuring your cache with a [`FieldKeyGenerator`](https://apollographql.github.io/apollo-kotlin-normalized-cache/kdoc/normalized-cache/com.apollographql.cache.normalized.api/-field-key-generator/index.html?query=interface%20FieldKeyGenerator) implementation.

With that in place, after fetching the first page, the cache will look like this:

@@ -44,7 +44,7 @@ This is because the field key is now the same for all pages and the default merg
#### Record merging

To fix this, we need to supply the store with a piece of code that can merge the lists in a sensible way.
This is done by passing a [`RecordMerger`](https://apollographql.github.io/apollo-kotlin-normalized-cache/kdoc/normalized-cache/com.apollographql.cache.normalized.api/-record-merger/index.html?query=interface%20RecordMerger) to the `ApolloStore` constructor:
This is done by passing a [`RecordMerger`](https://apollographql.github.io/apollo-kotlin-normalized-cache/kdoc/normalized-cache/com.apollographql.cache.normalized.api/-record-merger/index.html?query=interface%20RecordMerger) when configuring your cache:

```kotlin
object MyFieldMerger : FieldRecordMerger.FieldMerger {
@@ -59,10 +59,13 @@ object MyFieldMerger : FieldRecordMerger.FieldMerger {
}
}

val apolloStore = ApolloStore(
normalizedCacheFactory = cacheFactory,
recordMerger = FieldRecordMerger(MyFieldMerger), // Configure the store with the custom merger
)
val client = ApolloClient.Builder()
// ...
.normalizedCache(
normalizedCacheFactory = cacheFactory,
recordMerger = FieldRecordMerger(MyFieldMerger), // Configure the store with the custom merger
)
.build()
```

With this, the cache will be as expected after fetching the second page:
@@ -99,7 +102,7 @@ Now let's store in the metadata of each `UserConnection` field the values of the
as well as the values of the first and last cursor in its list.
This will allow us to insert new pages in the correct position later on.

This is done by passing a [`MetadataGenerator`](https://apollographql.github.io/apollo-kotlin-normalized-cache/kdoc/normalized-cache/com.apollographql.cache.normalized.api/-metadata-generator/index.html?query=interface%20MetadataGenerator) to the `ApolloStore` constructor:
This is done by passing a [`MetadataGenerator`](https://apollographql.github.io/apollo-kotlin-normalized-cache/kdoc/normalized-cache/com.apollographql.cache.normalized.api/-metadata-generator/index.html?query=interface%20MetadataGenerator) when configuring the cache:

```kotlin
class ConnectionMetadataGenerator : MetadataGenerator {
@@ -144,7 +147,7 @@ extend type Query @typePolicy(embeddedFields: "usersConnection")
extend type UserConnection @typePolicy(embeddedFields: "edges")
```

> This can also be done programmatically by configuring the `ApolloStore` with an [`EmbeddedFieldsProvider`](https://apollographql.github.io/apollo-kotlin-normalized-cache/kdoc/normalized-cache/com.apollographql.cache.normalized.api/-embedded-fields-provider/index.html?query=interface%20EmbeddedFieldsProvider) implementation.
> This can also be done programmatically by configuring the cache with an [`EmbeddedFieldsProvider`](https://apollographql.github.io/apollo-kotlin-normalized-cache/kdoc/normalized-cache/com.apollographql.cache.normalized.api/-embedded-fields-provider/index.html?query=interface%20EmbeddedFieldsProvider) implementation.

Now that we have the metadata and embedded fields in place, we can implement the `RecordMerger` (simplified for brevity):

15 changes: 9 additions & 6 deletions Writerside/topics/pagination/pagination-relay-style.md
@@ -55,14 +55,17 @@ that return a connection:
extend type Query @typePolicy(connectionFields: "usersConnection")
```

In Kotlin, configure the `ApolloStore` like this, using the generated `Pagination` object:
In Kotlin, configure the cache like this, using the generated `Pagination` object:

```kotlin
val apolloStore = ApolloStore(
normalizedCacheFactory = cacheFactory,
metadataGenerator = ConnectionMetadataGenerator(Pagination.connectionTypes),
recordMerger = ConnectionRecordMerger
)
val client = ApolloClient.Builder()
// ...
.normalizedCache(
normalizedCacheFactory = cacheFactory,
metadataGenerator = ConnectionMetadataGenerator(Pagination.connectionTypes),
recordMerger = ConnectionRecordMerger
)
.build()
```

Query `UsersConnection()` to fetch new pages and update the cache, and watch it to observe the full list.
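As a sketch of that fetch-and-watch pattern, assuming a generated `UsersConnectionQuery` with `first`/`after` arguments matching the examples above (names are illustrative, not the exact generated classes):

```kotlin
// Sketch: watch one query for the merged list, and fetch further pages
// with the same query so they land in the same cache records.
val watchJob = scope.launch {
  client.query(UsersConnectionQuery(first = 10, after = null))
    .watch()
    .collect { response ->
      // Re-emits the full, merged connection each time a new page
      // is written to the cache.
      handleUsers(response.data?.usersConnection)
    }
}

// Fetching the next page from the network updates the cache,
// which triggers the watcher above.
client.query(UsersConnectionQuery(first = 10, after = lastCursor))
  .fetchPolicy(FetchPolicy.NetworkOnly)
  .execute()
```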
@@ -1,6 +1,6 @@
package com.apollographql.cache.normalized.sql

import com.apollographql.cache.normalized.ApolloStore
import com.apollographql.cache.normalized.CacheManager
import com.apollographql.cache.normalized.api.CacheHeaders
import com.apollographql.cache.normalized.api.CacheKey
import com.apollographql.cache.normalized.api.DefaultRecordMerger
@@ -13,7 +13,7 @@ import kotlin.test.assertNull
class TrimTest {
@Test
fun trimTest() {
val apolloStore = ApolloStore(SqlNormalizedCacheFactory()).also { it.clearAll() }
val cacheManager = CacheManager(SqlNormalizedCacheFactory()).also { it.clearAll() }

val largeString = "".padStart(1024, '?')

@@ -23,7 +23,7 @@ class TrimTest {
mutationId = null,
metadata = emptyMap()
)
apolloStore.accessCache { it.merge(oldRecord, CacheHeaders.NONE, recordMerger = DefaultRecordMerger) }
cacheManager.accessCache { it.merge(oldRecord, CacheHeaders.NONE, recordMerger = DefaultRecordMerger) }

val newRecords = 0.until(2 * 1024).map {
Record(
@@ -33,16 +33,16 @@ class TrimTest {
metadata = emptyMap()
).withDates(receivedDate = it.toString(), expirationDate = null)
}
apolloStore.accessCache { it.merge(newRecords, CacheHeaders.NONE, recordMerger = DefaultRecordMerger) }
cacheManager.accessCache { it.merge(newRecords, CacheHeaders.NONE, recordMerger = DefaultRecordMerger) }

val sizeBeforeTrim = apolloStore.trim(-1, 0.1f)
val sizeBeforeTrim = cacheManager.trim(-1, 0.1f)
assertEquals(8515584, sizeBeforeTrim)

// Trim the cache by 10%
val sizeAfterTrim = apolloStore.trim(8515584, 0.1f)
val sizeAfterTrim = cacheManager.trim(8515584, 0.1f)

assertEquals(7667712, sizeAfterTrim)
// The oldest key must have been removed
assertNull(apolloStore.accessCache { it.loadRecord(CacheKey("old"), CacheHeaders.NONE) })
assertNull(cacheManager.accessCache { it.loadRecord(CacheKey("old"), CacheHeaders.NONE) })
}
}