Moves nodes to private subnets. #3004
base: main
Conversation
This issue is outlined in #3008. @dcmcand, @viniciusdc, and I discussed this limitation and decided to address #3008 first, before merging this PR.

Do not merge until #3008 is fixed, as this will cause difficulties with upgrades.
```hcl
description = "VPC cidr number of bits to support 2^N subnets"
type        = number
default     = 2 # allows 4 /18 subnets with 16382 addresses each
```
Suggested change:

```diff
- default = 2 # allows 4 /18 subnets with 16382 addresses each
+ default = 3 # allows 8 /19 subnets with 8190 addresses each
```
Needed this for my use case, which specifies 3 subnets.
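The subnet arithmetic behind these defaults can be sketched with Python's standard `ipaddress` module: carving N extra prefix bits out of a VPC CIDR yields 2^N subnets. The `10.10.0.0/16` CIDR below is a hypothetical example value, not necessarily Nebari's actual default.

```python
import ipaddress

# Hypothetical VPC CIDR for illustration.
vpc = ipaddress.ip_network("10.10.0.0/16")

# subnet_bits = 2 mirrors the variable's default: 2^2 = 4 subnets.
subnet_bits = 2
subnets = list(vpc.subnets(prefixlen_diff=subnet_bits))

print(len(subnets))                  # 4
print(subnets[0])                    # 10.10.0.0/18
# Usable addresses per subnet: total minus network and broadcast addresses.
print(subnets[0].num_addresses - 2)  # 16382
```

With `subnet_bits = 3` the same /16 would instead split into 8 /19 subnets of 8190 usable addresses each, which is what the suggestion above trades address space for.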
src/_nebari/stages/infrastructure/template/aws/modules/network/main.tf
…/main.tf Co-authored-by: Austin Macdonald <austin@dartmouth.edu>
We need the upgrade path in the next (follow-up) release; this is the last remaining bit to get this going.
Annotation for our "manual patch queue": commit done manually based on @satra's comment at nebari-dev#3004 (comment)
I don't think I can help much with providing an upgrade path, but I did want to share my feedback after using this in production for a while. We eventually dropped this change from our deployment due to the high cost of NAT Gateway usage for moving data around. When we moved back from private to public subnets, our upgrade path was a little awkward. It's the reverse of the upgrade path this PR would need, so in case it's helpful, here's how I managed to move from private to public subnets.

This upgrade path does lose state, though; I had to restore Keycloak and conda-store state from backups.
Reference Issues or PRs
Closes #2952
What does this implement/fix?
Moves nodes to private subnets and removes the autoassign public IP option.
Currently, our nodes are placed in public subnets with a public IP assigned by default. This is a security vulnerability that gives us no benefit whatsoever. The new setup places all nodes in private subnets while keeping load balancers in public subnets. Nebari remains publicly accessible, but the nodes themselves can no longer be reached over the public internet.
The following illustration is from the AWS documentation (https://docs.aws.amazon.com/eks/latest/best-practices/subnets.html) and shows the new setup. Note that this is the recommended setup for EKS on AWS.
Put an `x` in the boxes that apply.

Testing
How to test this PR?
Deploy Nebari to AWS; in the console, validate that the nodes are located in private subnets, then go through the testing checklist to validate that all functionality is unchanged.
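As a sketch of what that console check verifies, the predicate below captures "node is private" in terms of the instance and subnet records the EC2 API returns. The dict shapes mirror boto3's `describe_instances`/`describe_subnets` responses; the values are hypothetical, and fetching real records from AWS is left to the reader.

```python
def node_is_private(instance: dict, subnet: dict) -> bool:
    """A node counts as private when it has no public IP assigned and its
    subnet does not auto-assign public IPs on launch."""
    return (
        "PublicIpAddress" not in instance
        and not subnet.get("MapPublicIpOnLaunch", False)
    )

# Hypothetical example records shaped like boto3 EC2 API responses.
private_node = {"InstanceId": "i-0abc", "SubnetId": "subnet-1"}
public_node = {
    "InstanceId": "i-0def",
    "SubnetId": "subnet-2",
    "PublicIpAddress": "54.0.0.1",
}
private_subnet = {"SubnetId": "subnet-1", "MapPublicIpOnLaunch": False}
public_subnet = {"SubnetId": "subnet-2", "MapPublicIpOnLaunch": True}

print(node_is_private(private_node, private_subnet))  # True
print(node_is_private(public_node, public_subnet))    # False
```

After this PR, every node should satisfy the first case: no `PublicIpAddress` on the instance, and `MapPublicIpOnLaunch` disabled on its subnet.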
Any other comments?
NOTE: This will likely cause issues with the general node failing to restart if it ends up in a different AZ from its EBS volume. This is a known issue and needs to be addressed by changing our storage setup.
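The pitfall above comes from a hard EC2 constraint: an EBS volume can only attach to an instance in the same Availability Zone. A minimal sketch of that check, using dicts shaped like boto3's `describe_instances`/`describe_volumes` responses (values are hypothetical):

```python
def volume_attachable(instance: dict, volume: dict) -> bool:
    """An EBS volume can only attach to an instance in the same AZ."""
    return instance["Placement"]["AvailabilityZone"] == volume["AvailabilityZone"]

# Hypothetical example records.
node = {"InstanceId": "i-0abc", "Placement": {"AvailabilityZone": "us-east-1a"}}
vol_same_az = {"VolumeId": "vol-1", "AvailabilityZone": "us-east-1a"}
vol_other_az = {"VolumeId": "vol-2", "AvailabilityZone": "us-east-1b"}

print(volume_attachable(node, vol_same_az))   # True
print(volume_attachable(node, vol_other_az))  # False
```

If the general node is rescheduled into a different AZ than its volume, the second case applies and the volume cannot reattach, which is why the storage setup needs to change.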