
Commit e96159b

hotfix: more backup restore formatting fixes

1 parent cb4b035 commit e96159b

File tree

1 file changed (+5, -6 lines)


content/influxdb3/clustered/admin/backup-restore.md

Lines changed: 5 additions & 6 deletions
@@ -51,7 +51,7 @@ snapshot. When a snapshot is restored to the Catalog, the Compactor
 - [Resources](#resources)
 - [prep_pg_dump.awk](#preppgdumpawk)
 
-### Soft delete
+## Soft delete
 
 A _soft delete_ refers to when, on compaction, the Compactor sets a `deleted_at`
 timestamp on the Parquet file entry in the Catalog.
@@ -63,23 +63,22 @@ longer queryable, but remains intact in the object store.
 A _hard delete_ refers to when a Parquet file is actually deleted from object
 storage and no longer exists.
 
-
 ## Recovery Point Objective (RPO)
 
 RPO is the maximum amount of data loss (based on time) allowed after a disruptive event.
 It indicates how much time can pass between data snapshots before data is considered lost if a disaster occurs.
 
 The InfluxDB Clustered snapshot strategy RPO allows for the following maximum data loss:
 
-- 1 hour for hourly snapshots _(up to the configured hourly snapshot expiration)_
-- 1 day for daily snapshots _(up to the configured daily snapshot expiration)_
+- 1 hour for hourly snapshots _(up to the configured hourly snapshot expiration)_
+- 1 day for daily snapshots _(up to the configured daily snapshot expiration)_
 
 ## Recovery Time Objective (RTO)
 
 RTO is the maximum amount of downtime allowed for an InfluxDB cluster after a failure.
 RTO varies depending on the size of your Catalog database, network speeds
-between the client machine and the Catalog database, cluster load, the status
-of your underlying hosting provider, and other factors.
+between the client machine and the Catalog database, cluster load, the status
+of your underlying hosting provider, and other factors.
 
 ## Data written just before a snapshot may not be present after restoring
 
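As the changed section describes, a soft delete only marks the Parquet file entry in the Catalog with a `deleted_at` timestamp (the file is no longer queryable but remains intact in the object store), while a hard delete actually removes the file from object storage. The following minimal Python sketch illustrates that distinction; the `catalog` and `object_store` clients and the `parquet_file` table and column names are hypothetical assumptions for illustration, not the actual InfluxDB Clustered schema or API.

```python
from datetime import datetime, timezone


def soft_delete(catalog, file_id):
    """Soft delete (sketch): mark the Parquet file entry in the Catalog as
    deleted. The file itself stays intact in object storage; it simply
    stops being queryable. Table and column names are illustrative only."""
    catalog.execute(
        "UPDATE parquet_file SET deleted_at = %s WHERE id = %s",
        (datetime.now(timezone.utc), file_id),
    )


def hard_delete(object_store, object_key):
    """Hard delete (sketch): the Parquet file is actually removed from
    object storage and no longer exists."""
    object_store.delete(object_key)
```

This only models the shape of the two operations; in InfluxDB Clustered the soft delete is applied by the Compactor during compaction, as described above, and a soft-deleted file remains in the object store until it is hard deleted.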
0 commit comments
