Replies: 1 comment
I myself had similar issues and figured out how to do a complete disaster recovery of a whole cluster from an S3 backup. Take a look at the
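For anyone finding this later, a minimal sketch of what enabling scheduled etcd snapshots to S3 can look like in a kube-hetzner `kube.tf`. The endpoint, bucket, and credential variables are placeholders, and the exact option names should be verified against the module's current `kube.tf.example`:

```hcl
module "kube-hetzner" {
  source = "kube-hetzner/kube-hetzner/hcloud"
  # ... existing cluster configuration ...

  # Ship automatic etcd snapshots to S3-compatible storage; these keys
  # are assumed to map onto the k3s --etcd-s3-* server flags.
  etcd_s3_backup = {
    etcd-s3-endpoint   = "s3.eu-central-1.amazonaws.com" # placeholder endpoint
    etcd-s3-access-key = var.s3_access_key               # placeholder variable
    etcd-s3-secret-key = var.s3_secret_key               # placeholder variable
    etcd-s3-bucket     = "my-k3s-etcd-snapshots"         # placeholder bucket
  }
}
```

With snapshots landing in the bucket, k3s can later restore from one of them via its `--cluster-reset` / `--etcd-s3` restore options, which is what makes a full rebuild of a destroyed cluster possible.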
I've been using this repository to manage my own k3s cluster on Hetzner for over a year now, and it has been great. Recently I noticed that my control planes haven't been upgrading because of #1163. I've since updated Terraform, and a few changes have been applied, most likely solving my issue. The problem is that 5 resources will be destroyed (I've added the log below in case someone wants to see which ones). Most likely this isn't a problem, but I'm scared to run it on my production cluster. Preferably I would like to run this change first on an exact duplicate to make sure nothing breaks and everything works.
Now the question: what would be the absolute best and easiest way to create an exact duplicate of the current cluster?
My own suggestion would be to create a duplicate of my `kube.tf`, pin the Terraform module to the old version, and deploy it with a new Hetzner project and token. Then upgrade and make sure everything still works as expected. Possibly lower some worker node counts for the test deployment, but that should be it. I just want to make sure that I am not missing something and won't mess up my production cluster.
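To make that concrete, a rough sketch of what the duplicated configuration could look like, assuming it lives in its own directory with its own state and provider setup. The version number and the `var.hcloud_token_test` variable are illustrative, not taken from the actual cluster:

```hcl
# kube-test.tf — a disposable copy of the production kube.tf,
# deployed into a fresh Hetzner project with its own API token.
module "kube-hetzner-test" {
  source  = "kube-hetzner/kube-hetzner/hcloud"
  version = "2.11.8" # illustrative: pin to the version production currently runs

  # Token for the new, empty Hetzner project, so the test cluster
  # cannot touch any production resources.
  hcloud_token = var.hcloud_token_test

  # ... remaining settings copied verbatim from the production kube.tf,
  # optionally with smaller agent nodepool counts to keep costs down ...
}
```

Apply that, confirm the cluster is healthy, then bump the pinned version to the new release and run `terraform plan` again: the same 5-resource destroy should show up on the disposable cluster, where the upgrade can be verified end to end before repeating it in production.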