
Using OOS as a Remote Backend #363


Open
cvedia-mdsol opened this issue Jun 13, 2023 · 11 comments
Labels
enhancement New feature or request

@cvedia-mdsol

Current Terraform Version

Terraform v1.4.6
on windows_amd64

Use-cases

Terraform creates the state file in a local path by default, but when working with multiple team members it would be better to have the state available in a remote location that each member can retrieve as needed.

Options for this include using a remote data store (such as Terraform Cloud, AWS S3, Azure Blob Storage, etc.). Since our provisioning happens within Outscale, we would prefer to use OOS.

Attempted Solutions

I tried to connect to OOS using a bucket we created but have been unable to connect. Because of this, I raised an issue on GitHub (see References).

response

Error: error configuring S3 Backend: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
        status code: 403, request id: 5102beb7-41d3-48f3-b480-ab6010fd2464

code

terraform {
    backend "s3" {
        region     = "us-east-2"
        endpoint   = "https://oos.us-east-2.outscale.com"
        bucket     = "bucket_name"
        access_key = "my_accesskey"
        secret_key = "my_secretkey"
        key        = "terraform"
    }
}

Proposal

I am able to successfully perform other actions against OOS (such as creating/deleting buckets and uploading/downloading objects), but I have to do so through the proper endpoint based on the region of my account.

The endpoint does not seem to be able to confirm my identity when the call is made through Terraform. It would be useful if we could pass our profile configuration on the connection attempt, so that we can use the profile's reference name instead of passing sensitive information like the access key and secret key directly.
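
For illustration, something along these lines is what I have in mind (just a sketch; "outscale" is a hypothetical profile name defined in ~/.aws/credentials):

terraform {
    backend "s3" {
        region   = "us-east-2"
        endpoint = "https://oos.us-east-2.outscale.com"
        bucket   = "bucket_name"
        key      = "terraform"
        profile  = "outscale"    # keys resolved from ~/.aws/credentials instead of inline
    }
}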

References

GitHub bug raised due to the errors received when attempting to connect with OOS as a remote backend:
#359

@cvedia-mdsol cvedia-mdsol added the enhancement New feature or request label Jun 13, 2023
@outscale-toa
Member

Hi @cvedia-mdsol,

Thanks for reaching out, we are looking into your issue.

Best regards,

@pavloos

pavloos commented Aug 30, 2023

@cvedia-mdsol you can disable that validation behaviour:

terraform {
  backend "s3" {
    bucket                      = "your-oos-state-bucket"
    endpoint                    = "https://oos.cloudgouv-eu-west-1.outscale.com"
    key                         = "terraform.tfstate"
    profile                     = "outscale-s3-prod"
    region                      = "cloudgouv-eu-west-1"

   # below options make it work just fine
    skip_credentials_validation = true
    skip_region_validation      = true
  }
}
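
After editing the backend block, the backend has to be re-initialised for the change to take effect, e.g.:

terraform init -reconfigure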

@ArnaultMICHEL

From a security point of view, it is a best practice to use the standard AWS CLI environment variables to store your secrets (AK/SK):

AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY

along with these additional environment variables:

AWS_DEFAULT_REGION=eu-west-2
AWS_DATA_PATH=.aws/models

Note: create the .aws/models/endpoints.json file according to the Outscale documentation.
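
For example, they can be exported in the shell before running terraform (a sketch; the values are placeholders):

export AWS_ACCESS_KEY_ID="<ACCESS_KEY>"
export AWS_SECRET_ACCESS_KEY="<SECRET_KEY>"
export AWS_DEFAULT_REGION=eu-west-2
export AWS_DATA_PATH=.aws/models   # directory containing the endpoints.json model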

I found two advantages:

  1. Your Terraform backend config is lighter:

    terraform {
      backend "s3" {
        bucket   = "terraform-tfstate"
        key      = "my-project/terraform.tfstate"
    
        skip_region_validation      = true
        skip_credentials_validation = true
      }
    }
    
  2. aws cli usage needs fewer options:

    aws s3api create-bucket --bucket ${bucket} --acl private
    aws s3api list-buckets
    aws s3api list-objects --bucket ${bucket} |jq -r .Contents[].Key
    ...
    

Warning: this does not handle tfstate locking, so parallel executions of terraform apply could be a problem.

@cvedia-mdsol
Author

@ArnaultMICHEL thanks for the feedback. I typically do not include secret or key information directly in the configuration; it was added in the example to keep things simple for reporting purposes. We usually at least use environment variables as you describe.

Currently, I am looking to use the OOS bucket as a way to keep a better handle on state. I've confirmed that AWS S3 works fine with a quick reference switch, so I'm curious about the requirements for connecting to OOS. As you mentioned, I still need to see what services are available with Outscale to help lock the state.

@cvedia-mdsol
Author

Hi @pavloos. Thanks for sharing. I ran tests using the parameters you've mentioned and received the following:

Using the region my account is set up in, "us-east-2":

terraform {
    backend "s3" {
        profile = "outscale"
        region = "us-east-2"
        bucket = "cvedia-test"
        key = "terraform.tfstate"
        workspace_key_prefix = "terraform/workspace"

        skip_credentials_validation = true
        skip_region_validation = true
    }
}

Response: Error refreshing state: BucketRegionError: incorrect region, the bucket is not in 'us-east-2' region at endpoint '', bucket is in 'us-east-1' region
status code: 301,

Using the region "us-east-1" due to previous response.

terraform {
    backend "s3" {
        profile = "outscale"
        region = "us-east-1"
        bucket = "cvedia-test"
        key = "terraform.tfstate"
        workspace_key_prefix = "terraform/workspace"

        skip_credentials_validation = true
        skip_region_validation = true
    }
}

Response: Error refreshing state: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
status code: 403

All other resources have been in "us-east-2"; not sure why the first response is received. Example below.

osc-cli api ReadVms --profile outscale
{
    "Vms": [
        {
            "VmType": "tinav4.c2r4p2",
            "VmInitiatedShutdownBehavior": "stop",
            "State": "running",
            "StateReason": "",
            "RootDeviceType": "ebs",
            "RootDeviceName": "/dev/sda1",
            "IsSourceDestChecked": true,
            "KeypairName": "****",
            "ImageId": "ami-****",
            "DeletionProtection": false,
            "Architecture": "x86_64",
            "NestedVirtualization": false,
            "BlockDeviceMappings": [
                {
                    "DeviceName": "/dev/sda1",
                    "Bsu": {
                        "VolumeId": "vol-****",
                        "State": "attached",
                        "LinkDate": "2023-02-15T18:19:05.441Z",
                        "DeleteOnVmDeletion": true
                    }
                }
            ],
            "VmId": "i-****",
            "ReservationId": "r-****",
            "Hypervisor": "xen",
            "Placement": {
                "Tenancy": "default",
                "SubregionName": "us-east-2a"
            },

@MMege6317

To use S3, you need to change the profile parameter; it must be your AWS profile, something like the one in ~/.aws/credentials.
See documentation: https://developer.hashicorp.com/terraform/language/settings/backends/s3

@cvedia-mdsol
Author

@MMege6317 I have the profile configured in ~/.aws/credentials. I've re-included my config files below. I have no issues connecting to and using AWS S3 for storing state; the problem is using Outscale's version of S3, called OOS.

Credentials file

[outscale]
aws_access_key_id=xxx
aws_secret_access_key=xxx

Config file

[default]
region = us-east-2
output = json

Outscale supports the same AWS CLI commands, but by default these commands are directed to the AWS APIs, so an endpoint reference is used when issuing commands. For example:

To list buckets:
AWS: aws s3 ls --profile profileName
Outscale: aws s3 ls --endpoint-url "https://SERVICE.REGION.outscale.com" --profile profileName

So to switch from AWS to Outscale for state in Terraform, I change my profile name and also add an 'endpoint', similar to how it is done with this provider (as an example): https://registry.terraform.io/providers/FlexibleEngineCloud/flexibleengine/latest/docs/guides/remote-state-backend
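
In other words, something like the following sketch (bucket, region and profile names are the ones from my earlier tests):

terraform {
    backend "s3" {
        profile  = "outscale"
        region   = "us-east-2"
        endpoint = "https://oos.us-east-2.outscale.com"
        bucket   = "cvedia-test"
        key      = "terraform.tfstate"

        skip_credentials_validation = true
        skip_region_validation      = true
    }
}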

@cvedia-mdsol
Author

@outscale-toa have there been any additional updates from the team on this since the request was made? Is the endpoint I referenced correct? I believe there are other endpoints, such as for FCU and EIM, so I want to make sure.

@cvedia-mdsol
Author

@outscale-toa following up to see if there have been any additional updates on this that can be shared.

@cvedia-mdsol
Author

@outscale-toa I want to follow up on this again to see if there has been anything additional. Having this supported using Outscale OOS would help serve as a centralized location for state within the same platform.

@outscale-mgo
Contributor

outscale-mgo commented Apr 30, 2024

Hello,
Sorry for the delay.
I've just tried with a simple file:

terraform {
  required_providers {
    outscale = {
      source  = "outscale/outscale"
      version = ">= 0.11.0"
    }
  }
  backend "s3" {
    bucket                      = "tf"
    endpoint                    = "https://oos.eu-west-2.outscale.com"
    key                         = "terraform.tfstate"
    profile                     = "default"
    region                      = "eu-west-2"

    # below options make it work just fine                                                                                                                                                                  
    skip_credentials_validation = true
    skip_region_validation      = true
  }
}

provider "outscale" {
  access_key_id = var.access_key_id
  secret_key_id = var.secret_key_id
  region        = "eu-west-2"
}

resource "outscale_volume" "s3-test" {
  subregion_name = "eu-west-2a"
  size           = 10
}

with my profile looking like this:

$ ls ~/.aws
credentials

$ cat ~/.aws/credentials
[default]
aws_access_key_id = <ACCESS_KEY>
aws_secret_access_key = <SECRET_KEY>
And it seems to work, so I don't really understand your problem. Do you have a conflict between different profiles?

I tested using:

curl -X GET https://oos.eu-west-2.outscale.com/tf/terraform.tfstate --aws-sigv4  "aws:amz:eu-west-2:s3" --user $AK:$SK

which shows me the Terraform state.
