Rolling HDFS upgrade #571

Merged
merged 30 commits, Aug 28, 2024
Changes from 7 commits
Commits
30 commits
a5e9547
Add upgrade mode with serialized deployments
nightkr Aug 2, 2024
fc6cc0d
Use deployedProductVersion to decide upgrade mode (but do not automat…
nightkr Aug 2, 2024
2eb38a8
Upgrade docs
nightkr Aug 2, 2024
38809e2
Remove dummy log message
nightkr Aug 2, 2024
a36de0f
Move upgrade readiness check into utils module
nightkr Aug 2, 2024
acffa82
Fix test build issue
nightkr Aug 5, 2024
98baaad
Regenerate CRDs
nightkr Aug 5, 2024
5a552d3
Docs
nightkr Aug 5, 2024
c1e13a2
s/terminal/shell/g
nightkr Aug 5, 2024
e1476a2
Update rust/operator-binary/src/hdfs_controller.rs
nightkr Aug 5, 2024
8af1db6
Update docs/modules/hdfs/pages/usage-guide/upgrading.adoc
nightkr Aug 6, 2024
44b5e59
Update docs/modules/hdfs/pages/usage-guide/upgrading.adoc
nightkr Aug 6, 2024
947931e
Update docs/modules/hdfs/pages/usage-guide/upgrading.adoc
nightkr Aug 7, 2024
5970585
Update docs/modules/hdfs/pages/usage-guide/upgrading.adoc
nightkr Aug 7, 2024
13129b5
Update docs/modules/hdfs/pages/usage-guide/upgrading.adoc
nightkr Aug 7, 2024
eb19010
Move upgrade_args to a separate variable
nightkr Aug 7, 2024
d5a092a
Merge branch 'feature/upgrade' of github.com:stackabletech/hdfs-opera…
nightkr Aug 7, 2024
f0df2b7
Upgrade mode -> compatibility mode
nightkr Aug 8, 2024
49cf9d9
Move rollout tracker into operator-rs
nightkr Aug 8, 2024
c582a3a
Update docs/modules/hdfs/pages/usage-guide/upgrading.adoc
nightkr Aug 8, 2024
b24c25f
Add note on downgrades
nightkr Aug 9, 2024
1e68f1d
Merge branch 'feature/upgrade' of github.com:stackabletech/hdfs-opera…
nightkr Aug 9, 2024
10e5220
Perform downgrades in order
nightkr Aug 9, 2024
808f926
Add note about status subresource
nightkr Aug 9, 2024
a9809ba
Update CRDs
nightkr Aug 9, 2024
0604aa6
s/upgrading_product_version/upgrade_target_product_version/g
nightkr Aug 12, 2024
c142421
Switch to main operator-rs
nightkr Aug 12, 2024
6ae8e0b
Update rust/crd/src/lib.rs
nightkr Aug 21, 2024
46eedee
Merge branch 'main' into feature/upgrade
nightkr Aug 26, 2024
2a25ff4
Add guardrail against trying to crossgrade in the middle of another u…
nightkr Aug 26, 2024
6 changes: 3 additions & 3 deletions Cargo.lock

Some generated files are not rendered by default.

10 changes: 5 additions & 5 deletions Cargo.nix

Some generated files are not rendered by default.

5 changes: 2 additions & 3 deletions Cargo.toml
@@ -21,7 +21,7 @@ serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
serde_yaml = "0.9"
snafu = "0.8"
stackable-operator = { git = "https://github.com/stackabletech/operator-rs.git", tag = "stackable-operator-0.70.0" }
stackable-operator = { git = "https://github.com/stackabletech/operator-rs.git", tag = "stackable-operator-0.73.0" }
product-config = { git = "https://github.com/stackabletech/product-config.git", tag = "0.7.0" }
strum = { version = "0.26", features = ["derive"] }
tokio = { version = "1.38", features = ["full"] }
@@ -30,5 +30,4 @@ tracing-futures = { version = "0.2", features = ["futures-03"] }

[patch."https://github.com/stackabletech/operator-rs.git"]
#stackable-operator = { path = "../operator-rs/crates/stackable-operator" }
#stackable-operator = { git = "https://github.com/stackabletech//operator-rs.git", branch = "main" }
stackable-operator = { git = "https://github.com/stackabletech//operator-rs.git", branch = "revert/lazylock" }
stackable-operator = { git = "https://github.com/stackabletech//operator-rs.git", branch = "main" }
4 changes: 2 additions & 2 deletions crate-hashes.json

Some generated files are not rendered by default.

22,772 changes: 1,277 additions & 21,495 deletions deploy/helm/hdfs-operator/crds/crds.yaml

Large diffs are not rendered by default.

22 changes: 21 additions & 1 deletion docs/modules/hdfs/pages/usage-guide/upgrading.adoc
@@ -4,6 +4,23 @@

HDFS currently requires a manual process to upgrade. This guide takes you through upgrading an example cluster (from our xref:getting_started/index.adoc[Getting Started] guide) from HDFS 3.3.6 to 3.4.0.

== Preparing for the worst

Upgrades can fail, and it is important to prepare for when that happens. Apache HDFS supports https://hadoop.apache.org/docs/r3.4.0/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html#Downgrade_and_Rollback[two ways to revert an upgrade]:

Rollback:: Reverts all user data to the pre-upgrade state. Requires taking the cluster offline.
Downgrade:: Downgrades the HDFS software but preserves all changes made by users. Can be performed as a rolling change, keeping the cluster online.

The Stackable Operator for HDFS supports downgrades but not rollbacks.

To downgrade, revert the `.spec.image.productVersion` field, then proceed to xref:#finalize[finalizing] once the cluster has been downgraded:

[source,shell]
----
$ kubectl patch hdfs/simple-hdfs --patch '{"spec": {"image": {"productVersion": "3.3.6"}}}' --type=merge
hdfscluster.hdfs.stackable.tech/simple-hdfs patched
----
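
As a quick sanity check while the downgrade rolls out, you can watch the role StatefulSets being updated; the label selector below is illustrative and may differ in your installation:

[source,shell]
----
$ kubectl get statefulsets --watch --selector app.kubernetes.io/instance=simple-hdfs
----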

== Preparing HDFS

HDFS must be configured to initiate the upgrade process. To do this, put the cluster into upgrade mode by running the following commands in an HDFS superuser environment
@@ -14,7 +31,7 @@

[source,shell]
----
$ hdfs dfsadmin -rollingUpgrade prepare
PREPARE rolling upgrade ...
Preparing for upgrade. Data is being saved for rollback.
Run "dfsadmin -rollingUpgrade query" to check the status
@@ -24,7 +41,7 @@
Finalize Time: <NOT FINALIZED>

$ # Then run the query command until HDFS is ready to proceed
$ hdfs dfsadmin -rollingUpgrade query
QUERY rolling upgrade ...
Preparing for upgrade. Data is being saved for rollback.
Run "dfsadmin -rollingUpgrade query" to check the status
@@ -34,7 +51,7 @@
Finalize Time: <NOT FINALIZED>

$ # It is safe to proceed when the output indicates so, like this:
$ hdfs dfsadmin -rollingUpgrade query
QUERY rolling upgrade ...
Proceed with rolling upgrade:
Block Pool ID: BP-841432641-10.244.0.29-1722612757853
@@ -58,13 +75,14 @@

NOTE: Services will be upgraded in order: JournalNodes, then NameNodes, then DataNodes.
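
One hypothetical way to follow this order is to check which image each pod is running while the upgrade progresses (again, the label selector is illustrative and may differ in your installation):

[source,shell]
----
$ kubectl get pods --selector app.kubernetes.io/instance=simple-hdfs \
    --output 'custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[*].image'
----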

[#finalize]
== Finalizing the upgrade

Once all HDFS pods are running the new version, the HDFS upgrade can be finalized (from the HDFS superuser environment as described in the preparation step):

[source,shell]
----
$ hdfs dfsadmin -rollingUpgrade finalize
FINALIZE rolling upgrade ...
Rolling upgrade is finalized.
Block Pool ID: BP-841432641-10.244.0.29-1722612757853
@@ -84,4 +102,6 @@
hdfscluster.hdfs.stackable.tech/simple-hdfs patched
----

NOTE: The NameNodes will be restarted a final time, taking them out of compatibility mode.
NOTE: `deployedProductVersion` is located in the _status_ subresource, which will not be modified by most graphical editors, and `kubectl` requires the `--subresource=status` flag.

The NameNodes will then be restarted a final time, taking them out of compatibility mode.
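
As an illustration (not part of the documented procedure), the deployed version tracked by the operator can be read back from the status subresource:

[source,shell]
----
$ kubectl get hdfs/simple-hdfs --output jsonpath='{.status.deployedProductVersion}{"\n"}'
----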
37 changes: 30 additions & 7 deletions rust/crd/src/lib.rs
@@ -804,13 +804,22 @@ impl HdfsCluster {
Ok(result)
}

pub fn is_upgrading(&self) -> bool {
self.status
.as_ref()
.and_then(|status| status.deployed_product_version.as_deref())
.map_or(false, |deployed_version| {
deployed_version != self.spec.image.product_version()
})
pub fn upgrade_state(&self) -> Option<UpgradeState> {
let status = self.status.as_ref()?;
let requested_version = self.spec.image.product_version();

if requested_version != status.deployed_product_version.as_deref()? {
// If we're requesting a different version than what is deployed, assume that we're upgrading.
// Could also be a downgrade to an older version, but we don't support downgrades after upgrade finalization.
Some(UpgradeState::Upgrading)
} else if requested_version != status.upgrade_target_product_version.as_deref()? {
// If we're requesting the old version mid-upgrade, assume that we're downgrading.
// We only support downgrading to the exact previous version.
Some(UpgradeState::Downgrading)
} else {
// All three versions match; the upgrade was completed without clearing `upgrade_target_product_version`.
None
}
}

pub fn authentication_config(&self) -> Option<&AuthenticationConfig> {
@@ -966,6 +975,14 @@ impl HdfsPodRef {
}
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum UpgradeState {
/// The cluster is currently being upgraded to a new version.
Upgrading,
/// The cluster is currently being downgraded to the previous version.
Downgrading,
}

#[derive(
Clone,
Debug,
@@ -1334,7 +1351,13 @@ pub struct HdfsClusterStatus {
#[serde(default)]
pub conditions: Vec<ClusterCondition>,

/// The product version that the HDFS cluster is currently running.
///
/// During upgrades, this field contains the *old* version.
pub deployed_product_version: Option<String>,

/// The product version that is currently being upgraded to, otherwise null.
pub upgrade_target_product_version: Option<String>,
}

impl HasStatusCondition for HdfsCluster {
5 changes: 4 additions & 1 deletion rust/operator-binary/src/container.rs
@@ -12,6 +12,7 @@
use crate::DATANODE_ROOT_DATA_DIR_PREFIX;
use crate::JVM_SECURITY_PROPERTIES_FILE;
use crate::LOG4J_PROPERTIES;
use stackable_hdfs_crd::UpgradeState;
use stackable_operator::utils::COMMON_BASH_TRAP_FUNCTIONS;
use std::{collections::BTreeMap, str::FromStr};

@@ -548,7 +549,9 @@ impl ContainerConfig {
args.push_str(&Self::export_kerberos_real_env_var_command());
}

let upgrade_args = if hdfs.is_upgrading() && *role == HdfsRole::NameNode {
let upgrade_args = if hdfs.upgrade_state() == Some(UpgradeState::Upgrading)
&& *role == HdfsRole::NameNode
{
"-rollingUpgrade started"
} else {
""
51 changes: 39 additions & 12 deletions rust/operator-binary/src/hdfs_controller.rs
@@ -21,6 +21,7 @@ use stackable_operator::{
product_image_selection::ResolvedProductImage,
rbac::{build_rbac_resources, service_account_name},
},
iter::reverse_if,
k8s_openapi::{
api::{
apps::v1::{StatefulSet, StatefulSetSpec},
@@ -50,7 +51,7 @@
use strum::{EnumDiscriminants, IntoEnumIterator, IntoStaticStr};

use stackable_hdfs_crd::{
constants::*, AnyNodeConfig, HdfsCluster, HdfsClusterStatus, HdfsPodRef, HdfsRole,
constants::*, AnyNodeConfig, HdfsCluster, HdfsClusterStatus, HdfsPodRef, HdfsRole, UpgradeState,
};

use crate::{
@@ -325,10 +326,23 @@ pub async fn reconcile_hdfs(hdfs: Arc<HdfsCluster>, ctx: Arc<Ctx>) -> HdfsOperat
let dfs_replication = hdfs.spec.cluster_config.dfs_replication;
let mut ss_cond_builder = StatefulSetConditionBuilder::default();

let upgrade_state = hdfs.upgrade_state();
let mut deploy_done = true;

// Roles must be deployed in order during rolling upgrades
'roles: for role in HdfsRole::iter() {
// Roles must be deployed in order during rolling upgrades,
// namenode version must be >= datanode version (and so on).
let roles = reverse_if(
match upgrade_state {
// Downgrades have the opposite version relationship, so they need to be rolled out in reverse order.
Some(UpgradeState::Downgrading) => {
tracing::info!("HdfsCluster is being downgraded, deploying in reverse order");
true
}
_ => false,
},
HdfsRole::iter(),
);
'roles: for role in roles {
let role_name: &str = role.into();
let Some(group_config) = validated_config.get(role_name) else {
tracing::debug!(?role, "role has no configuration, skipping");
@@ -422,12 +436,12 @@ pub async fn reconcile_hdfs(hdfs: Arc<HdfsCluster>, ctx: Arc<Ctx>) -> HdfsOperat
name: rg_statefulset_name,
})?;
ss_cond_builder.add(deployed_rg_statefulset.clone());
if hdfs.is_upgrading() {
if upgrade_state.is_some() {
// When upgrading, ensure that each role is upgraded before moving on to the next as recommended by
// https://hadoop.apache.org/docs/r3.4.0/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html#Upgrading_Non-Federated_Clusters
if let Err(reason) = check_statefulset_rollout_complete(&deployed_rg_statefulset) {
tracing::info!(
object = %ObjectRef::from_obj(&deployed_rg_statefulset),
rolegroup.statefulset = %ObjectRef::from_obj(&deployed_rg_statefulset),
reason = &reason as &dyn std::error::Error,
"rolegroup is still upgrading, waiting..."
);
@@ -482,17 +496,30 @@ pub async fn reconcile_hdfs(hdfs: Arc<HdfsCluster>, ctx: Arc<Ctx>) -> HdfsOperat
deployed_product_version: Some(
hdfs.status
.as_ref()
// Keep current version if set
.and_then(|status| status.deployed_product_version.as_deref())
// Otherwise (on initial deploy) fall back to user's specified version
.unwrap_or(hdfs.spec.image.product_version())
.to_string(),
),
// deployed_product_version: if deploy_done {
// Some(hdfs.spec.image.product_version().to_string())
// } else {
// hdfs.status
// .as_ref()
// .and_then(|status| status.deployed_product_version.clone())
// },
upgrade_target_product_version: match upgrade_state {
// User is upgrading, whatever they're upgrading to is (by definition) the target
Some(UpgradeState::Upgrading) => Some(hdfs.spec.image.product_version().to_string()),
Some(UpgradeState::Downgrading) => {
if deploy_done {
// Downgrade is done, clear
tracing::info!("downgrade deployed, clearing upgrade state");
None
} else {
// Downgrade is still in progress, preserve the current value
hdfs.status
.as_ref()
.and_then(|status| status.upgrade_target_product_version.clone())
}
}
// Upgrade is complete (if any), clear
None => None,
},
};

// During upgrades we do partial deployments, so we don't want to garbage collect after those