Db format changes and use CacheKey in more places #108

Merged · 31 commits · Mar 28, 2025
Commits (31)
b3ef6b4
Hashed CacheKeys (WIP)
BoD Mar 11, 2025
88bdae1
Use CacheKey in more places
BoD Mar 12, 2025
515a463
Avoid using .hashed()
BoD Mar 12, 2025
46b783b
New 'fields' db format. Also, remove EVICT_AFTER_READ.
BoD Jan 9, 2025
0c17246
Add support for trim
BoD Jan 10, 2025
9aff3c9
Rename 'key' -> 'record'
BoD Feb 10, 2025
ce594f1
Minor ApolloJsonElementSerializer optimizations
BoD Feb 17, 2025
8263ee2
Merge branch 'hashed-cache-keys' into db-format-fields-with-hashes
BoD Mar 13, 2025
ed1057e
Do not call propagateErrors if there are no errors
BoD Mar 19, 2025
ee20dca
Minor optimization in toRecords()
BoD Mar 19, 2025
2dd3697
Use ByteString in CacheKey and blob in SQL.
BoD Mar 21, 2025
4f6c58d
Revert 1 field per row, and hashes
BoD Mar 24, 2025
1bc2238
Renames for consistency
BoD Mar 25, 2025
ec15cbd
Add ApolloStore.trim()
BoD Mar 25, 2025
001ec2a
Increase SQLite's memory cache to 8 MiB
BoD Mar 25, 2025
cf891be
Report #107
BoD Mar 25, 2025
d4d8b9e
Revert removal of assertChainedCachesAreEqual
BoD Mar 25, 2025
e261f1d
Fix cacheDumpProvider to include errors
BoD Mar 25, 2025
646ee02
Minor tweak/rename
BoD Mar 25, 2025
5865784
Remove debug
BoD Mar 25, 2025
7b3b11a
Revert removed test
BoD Mar 25, 2025
7d45d84
Revert tweaked values
BoD Mar 25, 2025
f0232f1
Update CHANGELOG.md
BoD Mar 25, 2025
91b924c
Merge branch 'main' into db-format-record-with-hashes
BoD Mar 25, 2025
073c4d1
Encode certain known metadata keys as single byte strings to save space
BoD Mar 25, 2025
21ecb03
Make CacheKey extensions internal, and add test-utils
BoD Mar 26, 2025
db4d91d
Make CacheKey.keyToString() internal
BoD Mar 26, 2025
3e7ecde
Add a comment about using string lengths
BoD Mar 26, 2025
59f1296
Remove CacheKey.serialize() and co.
BoD Mar 27, 2025
9840218
Optim: avoid some iterations while avoiding some iterations
BoD Mar 27, 2025
0acb637
RecordSerializer: encode ints smaller than 255-32 as one byte
BoD Mar 27, 2025
6 changes: 5 additions & 1 deletion CHANGELOG.md
@@ -1,6 +1,10 @@
# Next version (unreleased)

PUT_CHANGELOG_HERE
- Storage binary format is changed to be a bit more compact
- Add `ApolloStore.trim()` to remove old data from the cache
- `CacheKey` is used in more APIs instead of `String`, for consistency.
- `ApolloCacheHeaders.EVICT_AFTER_READ` is removed. `ApolloStore.remove()` can be used instead.
- `NormalizedCache.remove(pattern: String)` is removed. Please open an issue if you need this feature back.

# Version 0.0.7
_2025-03-03_
90 changes: 51 additions & 39 deletions normalized-cache-incubating/api/normalized-cache-incubating.api

Large diffs are not rendered by default.

@@ -26,6 +26,7 @@ import com.apollographql.cache.normalized.api.NormalizedCache
import com.apollographql.cache.normalized.api.NormalizedCacheFactory
import com.apollographql.cache.normalized.api.Record
import com.apollographql.cache.normalized.api.RecordMerger
import com.apollographql.cache.normalized.api.RecordValue
import com.apollographql.cache.normalized.api.TypePolicyCacheKeyGenerator
import com.apollographql.cache.normalized.internal.DefaultApolloStore
import com.benasher44.uuid.Uuid
@@ -238,15 +239,27 @@ interface ApolloStore {
*/
fun remove(cacheKeys: List<CacheKey>, cascade: Boolean = true): Int

/**
* Trims the store if its size exceeds [maxSizeBytes]. The amount of data to remove is determined by [trimFactor].
* The oldest records are removed according to their update date.
*
* This may not be supported by all cache implementations (currently this is implemented by the SQL cache).
*
* @param maxSizeBytes the size of the cache in bytes above which the cache should be trimmed.
* @param trimFactor the factor of the cache size to trim.
* @return the cache size in bytes after trimming or -1 if the operation is not supported.
*/
fun trim(maxSizeBytes: Long, trimFactor: Float = 0.1f): Long

/**
* Normalizes executable data to a map of [Record] keyed by [Record.key].
*/
fun <D : Executable.Data> normalize(
executable: Executable<D>,
dataWithErrors: DataWithErrors,
rootKey: String = CacheKey.rootKey().key,
rootKey: CacheKey = CacheKey.rootKey(),
customScalarAdapters: CustomScalarAdapters = CustomScalarAdapters.Empty,
): Map<String, Record>
): Map<CacheKey, Record>

/**
* Publishes a set of keys that have changed. This will notify subscribers of [changedKeys].
@@ -273,7 +286,7 @@ interface ApolloStore {
*
* This is a synchronous operation that might block if the underlying cache is doing IO.
*/
fun dump(): Map<KClass<*>, Map<String, Record>>
fun dump(): Map<KClass<*>, Map<CacheKey, Record>>

/**
* Releases resources associated with this store.
@@ -312,16 +325,18 @@ internal interface ApolloStoreInterceptor : ApolloInterceptor
internal fun ApolloStore.cacheDumpProvider(): () -> Map<String, Map<String, Pair<Int, Map<String, Any?>>>> {
return {
dump().map { (cacheClass, cacheRecords) ->
cacheClass.normalizedCacheName() to cacheRecords.mapValues { (_, record) ->
record.size to record.fields.mapValues { (_, value) ->
value.toExternal()
}
}
cacheClass.normalizedCacheName() to cacheRecords
.mapKeys { (key, _) -> key.keyToString() }
.mapValues { (_, record) ->
record.size to record.fields.mapValues { (_, value) ->
value.toExternal()
}
}
}.toMap()
}
}

private fun Any?.toExternal(): Any? {
private fun RecordValue.toExternal(): Any? {
return when (this) {
null -> null
is String -> this
@@ -330,7 +345,8 @@ private fun Any?.toExternal(): Any? {
is Long -> this
is Double -> this
is JsonNumber -> this
is CacheKey -> this.serialize()
is CacheKey -> "ApolloCacheReference{${this.keyToString()}}"
is Error -> "ApolloCacheError{${this.message}}"
is List<*> -> {
map { it.toExternal() }
}
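
The new `trim()` API above lends itself to an opportunistic size check, e.g. at application startup. A minimal sketch, assuming a `store` built elsewhere and a 10 MiB budget (both illustrative, not part of this PR):

```kotlin
import com.apollographql.cache.normalized.ApolloStore

fun trimIfNeeded(store: ApolloStore) {
  // Trim once the cache exceeds 10 MiB; the default trimFactor of 0.1
  // removes roughly the oldest 10% of the cache, oldest update date first.
  val sizeAfterTrim = store.trim(maxSizeBytes = 10L * 1024 * 1024)
  if (sizeAfterTrim == -1L) {
    // Trimming is not supported by this cache implementation
    // (currently only the SQL cache implements it).
  }
}
```
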
@@ -11,16 +11,17 @@ import com.apollographql.cache.normalized.api.NormalizedCache
import com.apollographql.cache.normalized.api.Record
import com.apollographql.cache.normalized.api.RecordValue
import com.apollographql.cache.normalized.api.expirationDate
import com.apollographql.cache.normalized.api.fieldKey
import com.apollographql.cache.normalized.api.receivedDate
import kotlin.time.Duration

@ApolloInternal
fun Map<String, Record>.getReachableCacheKeys(): Set<CacheKey> {
fun Map<String, Record>.getReachableCacheKeys(roots: List<CacheKey>, reachableCacheKeys: MutableSet<CacheKey>) {
val records = roots.mapNotNull { this[it.key] }
fun Map<CacheKey, Record>.getReachableCacheKeys(): Set<CacheKey> {
fun Map<CacheKey, Record>.getReachableCacheKeys(roots: List<CacheKey>, reachableCacheKeys: MutableSet<CacheKey>) {
val records = roots.mapNotNull { this[it] }
val cacheKeysToCheck = mutableListOf<CacheKey>()
for (record in records) {
reachableCacheKeys.add(CacheKey(record.key))
reachableCacheKeys.add(record.key)
cacheKeysToCheck.addAll(record.referencedFields() - reachableCacheKeys)
}
if (cacheKeysToCheck.isNotEmpty()) {
@@ -34,7 +35,7 @@ fun Map<String, Record>.getReachableCacheKeys(): Set<CacheKey> {
}

@ApolloInternal
fun NormalizedCache.allRecords(): Map<String, Record> {
fun NormalizedCache.allRecords(): Map<CacheKey, Record> {
return dump().values.fold(emptyMap()) { acc, map -> acc + map }
}

@@ -49,8 +50,8 @@ fun NormalizedCache.removeUnreachableRecords(): Set<CacheKey> {
return removeUnreachableRecords(allRecords)
}

private fun NormalizedCache.removeUnreachableRecords(allRecords: Map<String, Record>): Set<CacheKey> {
val unreachableCacheKeys = allRecords.keys.map { CacheKey(it) } - allRecords.getReachableCacheKeys()
private fun NormalizedCache.removeUnreachableRecords(allRecords: Map<CacheKey, Record>): Set<CacheKey> {
val unreachableCacheKeys = allRecords.keys - allRecords.getReachableCacheKeys()
remove(unreachableCacheKeys, cascade = false)
return unreachableCacheKeys.toSet()
}
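
With the extension above, a full garbage-collection pass is a one-liner. A hedged sketch (`cache`, a `NormalizedCache` instance, is an assumption):

```kotlin
// Remove every record that is not reachable from the root query record,
// returning the keys that were removed.
val removedKeys: Set<CacheKey> = cache.removeUnreachableRecords()
```
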
@@ -89,11 +90,11 @@ fun NormalizedCache.removeStaleFields(
}

private fun NormalizedCache.removeStaleFields(
allRecords: MutableMap<String, Record>,
allRecords: MutableMap<CacheKey, Record>,
maxAgeProvider: MaxAgeProvider,
maxStale: Duration,
): RemovedFieldsAndRecords {
val recordsToUpdate = mutableMapOf<String, Record>()
val recordsToUpdate = mutableMapOf<CacheKey, Record>()
val removedFields = mutableSetOf<String>()
for (record in allRecords.values.toList()) {
var recordCopy = record
@@ -115,7 +116,7 @@
if (staleDuration >= maxStale.inWholeSeconds) {
recordCopy -= field.key
recordsToUpdate[record.key] = recordCopy
removedFields.add(record.key + "." + field.key)
removedFields.add(record.key.fieldKey((field.key)))
if (recordCopy.isEmptyRecord()) {
allRecords.remove(record.key)
} else {
@@ -133,7 +134,7 @@
if (staleDuration >= maxStale.inWholeSeconds) {
recordCopy -= field.key
recordsToUpdate[record.key] = recordCopy
removedFields.add(record.key + "." + field.key)
removedFields.add(record.key.fieldKey(field.key))
if (recordCopy.isEmptyRecord()) {
allRecords.remove(record.key)
} else {
@@ -144,15 +145,15 @@
}
}
if (recordsToUpdate.isNotEmpty()) {
remove(recordsToUpdate.keys.map { CacheKey(it) }, cascade = false)
remove(recordsToUpdate.keys, cascade = false)
val emptyRecords = recordsToUpdate.values.filter { it.isEmptyRecord() }.toSet()
val nonEmptyRecords = recordsToUpdate.values - emptyRecords
if (nonEmptyRecords.isNotEmpty()) {
merge(nonEmptyRecords, CacheHeaders.NONE, DefaultRecordMerger)
}
return RemovedFieldsAndRecords(
removedFields = removedFields,
removedRecords = emptyRecords.map { CacheKey(it.key) }.toSet()
removedRecords = emptyRecords.map { it.key }.toSet()
)
}
return RemovedFieldsAndRecords(removedFields = emptySet(), removedRecords = emptySet())
@@ -182,12 +183,12 @@ fun ApolloStore.removeStaleFields(
* @return the fields and records that were removed.
*/
fun NormalizedCache.removeDanglingReferences(): RemovedFieldsAndRecords {
val allRecords: MutableMap<String, Record> = allRecords().toMutableMap()
val allRecords: MutableMap<CacheKey, Record> = allRecords().toMutableMap()
return removeDanglingReferences(allRecords)
}

private fun NormalizedCache.removeDanglingReferences(allRecords: MutableMap<String, Record>): RemovedFieldsAndRecords {
val recordsToUpdate = mutableMapOf<String, Record>()
private fun NormalizedCache.removeDanglingReferences(allRecords: MutableMap<CacheKey, Record>): RemovedFieldsAndRecords {
val recordsToUpdate = mutableMapOf<CacheKey, Record>()
val allRemovedFields = mutableSetOf<String>()
do {
val removedFields = mutableSetOf<String>()
@@ -197,7 +198,7 @@ private fun NormalizedCache.removeDanglingReferences(allRecords: MutableMap<String, Record>): RemovedFieldsAndRecords {
if (field.value.isDanglingReference(allRecords)) {
recordCopy -= field.key
recordsToUpdate[record.key] = recordCopy
removedFields.add(record.key + "." + field.key)
removedFields.add(record.key.fieldKey(field.key))
if (recordCopy.isEmptyRecord()) {
allRecords.remove(record.key)
} else {
@@ -209,15 +210,15 @@ private fun NormalizedCache.removeDanglingReferences(allRecords: MutableMap<String, Record>): RemovedFieldsAndRecords {
allRemovedFields.addAll(removedFields)
} while (removedFields.isNotEmpty())
if (recordsToUpdate.isNotEmpty()) {
remove(recordsToUpdate.keys.map { CacheKey(it) }, cascade = false)
remove(recordsToUpdate.keys, cascade = false)
val emptyRecords = recordsToUpdate.values.filter { it.isEmptyRecord() }.toSet()
val nonEmptyRecords = recordsToUpdate.values - emptyRecords
if (nonEmptyRecords.isNotEmpty()) {
merge(nonEmptyRecords, CacheHeaders.NONE, DefaultRecordMerger)
}
return RemovedFieldsAndRecords(
removedFields = allRemovedFields,
removedRecords = emptyRecords.map { CacheKey(it.key) }.toSet()
removedRecords = emptyRecords.map { it.key }.toSet()
)
}
return RemovedFieldsAndRecords(removedFields = emptySet(), removedRecords = emptySet())
@@ -233,9 +234,9 @@ fun ApolloStore.removeDanglingReferences(): RemovedFieldsAndRecords {
}
}

private fun RecordValue.isDanglingReference(allRecords: Map<String, Record>): Boolean {
private fun RecordValue.isDanglingReference(allRecords: Map<CacheKey, Record>): Boolean {
return when (this) {
is CacheKey -> allRecords[this.key] == null
is CacheKey -> allRecords[this] == null
is List<*> -> any { it.isDanglingReference(allRecords) }
is Map<*, *> -> values.any { it.isDanglingReference(allRecords) }
else -> false
@@ -244,15 +245,15 @@ private fun RecordValue.isDanglingReference(allRecords: Map<String, Record>): Boolean {

private fun Record.isEmptyRecord() = fields.isEmpty() || fields.size == 1 && fields.keys.first() == "__typename"

private fun RecordValue.guessType(allRecords: Map<String, Record>): String {
private fun RecordValue.guessType(allRecords: Map<CacheKey, Record>): String {
return when (this) {
is List<*> -> {
val first = firstOrNull() ?: return ""
first.guessType(allRecords)
}

is CacheKey -> {
allRecords[key]?.get("__typename") as? String ?: ""
allRecords[this]?.get("__typename") as? String ?: ""
}

else -> {
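
Taken together, these helpers support a periodic maintenance pass. A hedged sketch (the wrapping function and logging are illustrative, not part of this PR):

```kotlin
import com.apollographql.cache.normalized.ApolloStore

fun pruneCache(store: ApolloStore) {
  // Remove fields that reference records which no longer exist; records
  // left empty as a result are removed as well.
  val removed = store.removeDanglingReferences()
  println(
      "Removed ${removed.removedFields.size} dangling fields and " +
          "${removed.removedRecords.size} empty records"
  )
}
```
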
@@ -14,9 +14,7 @@ object ApolloCacheHeaders {
*/
const val MEMORY_CACHE_ONLY = "memory-cache-only"

/**
* Records from this request should be evicted after being read.
*/
@Deprecated(level = DeprecationLevel.ERROR, message = "This header has no effect and will be removed in a future release. Use ApolloStore.remove() instead.")
const val EVICT_AFTER_READ = "evict-after-read"

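
For callers migrating off `EVICT_AFTER_READ`, the replacement is an explicit removal after the read, as the deprecation message suggests. A hedged migration sketch (the cache key and `store` are assumptions):

```kotlin
import com.apollographql.cache.normalized.api.CacheKey

// Previously: set ApolloCacheHeaders.EVICT_AFTER_READ on the request.
// Now: read as usual, then remove the records you no longer want cached.
val removedCount: Int = store.remove(listOf(CacheKey("User:42")), cascade = false)
```
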
@@ -1,14 +1,18 @@
package com.apollographql.cache.normalized.api

import kotlin.jvm.JvmInline
import kotlin.jvm.JvmStatic

/**
* A [CacheKey] identifies an object in the cache.
*
* @param key The key of the object in the cache. The key must be globally unique.
*/
class CacheKey(val key: String) {

@JvmInline
value class CacheKey(
/**
* The key of the object in the cache.
*/
val key: String,
) {
/**
* Builds a [CacheKey] from a typename and a list of Strings.
*
@@ -31,38 +35,13 @@
*/
constructor(typename: String, vararg values: String) : this(typename, values.toList())

override fun hashCode() = key.hashCode()
override fun equals(other: Any?): Boolean {
return key == (other as? CacheKey)?.key
internal fun keyToString(): String {
return key
}

override fun toString() = "CacheKey($key)"

fun serialize(): String {
return "$SERIALIZATION_TEMPLATE{$key}"
}
override fun toString() = "CacheKey(${keyToString()})"

companion object {
// IntelliJ complains about the invalid escape but looks like JS still needs it.
// See https://youtrack.jetbrains.com/issue/KT-47189
@Suppress("RegExpRedundantEscape")
private val SERIALIZATION_REGEX_PATTERN = Regex("ApolloCacheReference\\{(.*)\\}")
private const val SERIALIZATION_TEMPLATE = "ApolloCacheReference"

@JvmStatic
fun deserialize(serializedCacheKey: String): CacheKey {
val values = SERIALIZATION_REGEX_PATTERN.matchEntire(serializedCacheKey)?.groupValues
require(values != null && values.size > 1) {
"Not a cache reference: $serializedCacheKey Must be of the form: $SERIALIZATION_TEMPLATE{%s}"
}
return CacheKey(values[1])
}

@JvmStatic
fun canDeserialize(value: String): Boolean {
return SERIALIZATION_REGEX_PATTERN.matches(value)
}

private val ROOT_CACHE_KEY = CacheKey("QUERY_ROOT")

@JvmStatic
@@ -71,3 +50,19 @@
}
}
}

fun CacheKey.isRootKey(): Boolean {
return this == CacheKey.rootKey()
}

internal fun CacheKey.fieldKey(fieldName: String): String {
return "${keyToString()}.$fieldName"
}

internal fun CacheKey.append(vararg keys: String): CacheKey {
var cacheKey: CacheKey = this
for (key in keys) {
cacheKey = CacheKey("${cacheKey.key}.$key")
}
return cacheKey
}
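
Since `CacheKey` is now a value class, equality and hashing delegate to the wrapped `key` string, which is why the explicit `equals`/`hashCode` overrides could be dropped. A small sketch of the public surface (the `User:42` key is illustrative):

```kotlin
import com.apollographql.cache.normalized.api.CacheKey

// Build a key from the raw string, or from a typename plus id values.
val byRawKey = CacheKey("User:42")
val byTypename = CacheKey("User", "42")

// Value-class semantics: keys wrapping the same string compare equal.
check(CacheKey("QUERY_ROOT") == CacheKey.rootKey())
check(CacheKey.rootKey().isRootKey())
```
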
@@ -100,7 +100,7 @@ class ResolverContext(
/**
* The key of the parent. Mainly used for debugging
*/
val parentKey: String,
val parentKey: CacheKey,

/**
* The type of the parent
@@ -135,7 +135,7 @@ object DefaultCacheResolver : CacheResolver {
override fun resolveField(context: ResolverContext): Any? {
val fieldKey = context.getFieldKey()
if (!context.parent.containsKey(fieldKey)) {
throw CacheMissException(context.parentKey, fieldKey)
throw CacheMissException(context.parentKey.keyToString(), fieldKey)
}

return context.parent[fieldKey]
@@ -190,7 +190,7 @@ class CacheControlCacheResolver(
val maxStale = context.cacheHeaders.headerValue(ApolloCacheHeaders.MAX_STALE)?.toLongOrNull() ?: 0L
if (staleDuration >= maxStale) {
throw CacheMissException(
key = context.parentKey,
key = context.parentKey.keyToString(),
fieldName = context.getFieldKey(),
stale = true
)
@@ -206,7 +206,7 @@
val maxStale = context.cacheHeaders.headerValue(ApolloCacheHeaders.MAX_STALE)?.toLongOrNull() ?: 0L
if (staleDuration >= maxStale) {
throw CacheMissException(
key = context.parentKey,
key = context.parentKey.keyToString(),
fieldName = context.getFieldKey(),
stale = true
)
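
`CacheControlCacheResolver` only serves a stale field when the caller opts in through `ApolloCacheHeaders.MAX_STALE`. A hedged sketch of building such headers, assuming the `CacheHeaders.Builder` API from the cache runtime:

```kotlin
import com.apollographql.cache.normalized.api.ApolloCacheHeaders
import com.apollographql.cache.normalized.api.CacheHeaders

// Accept fields up to 60 seconds past their max age for this read; anything
// staler still surfaces as a CacheMissException with stale = true.
val cacheHeaders = CacheHeaders.Builder()
    .addHeader(ApolloCacheHeaders.MAX_STALE, "60")
    .build()
```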