Commit b16c319: Docs usage with examples (#11)
1 parent b5e38e4 commit b16c319

1 file changed: README.md (93 additions, 45 deletions)
# Terraform Provider Iterative

The Terraform Iterative provider is a plugin for Terraform that allows for the full lifecycle management of GPU or non-GPU cloud resources with your favourite [vendor](#supported-vendors). The provider offers a simple and homogeneous way to deploy a single GPU or a cluster of them, reducing complexity.

# Usage
#### 1- Set up your provider credentials as ENV variables

```sh
export AWS_SECRET_ACCESS_KEY=YOUR_KEY
export AWS_ACCESS_KEY_ID=YOUR_ID
```

#### 2- Save your Terraform file as main.tf
```tf
terraform {
  required_providers {
    iterative = {
      source  = "iterative/iterative"
      version = "0.5.1"
    }
  }
}

provider "iterative" {}

resource "iterative_machine" "machine" {
  region            = "us-west"
  ami               = "iterative-cml"
  instance_name     = "machine"
  instance_hdd_size = "10"
  instance_type     = "m"
  instance_gpu      = "tesla"
}
```

#### 3- Launch it!

```sh
terraform init
terraform apply --auto-approve

# run it to destroy your instance
# terraform destroy --auto-approve
```
## Pitfalls

To be able to use `instance_type` and `instance_gpu` you will also need to be allowed to launch [such instances](#AWS-instance-equivalences) within your cloud provider. Normally all GPU instances need to be approved by your vendor prior to use. You can always try with an instance type already approved by your vendor just by setting it directly, e.g. `t2.micro`.

<details>
<summary>Example with native AWS instance type and region</summary>
<p>

```tf
terraform {
  required_providers {
    iterative = {
      source  = "iterative/iterative"
      version = "0.5.1"
    }
  }
}

provider "iterative" {}

resource "iterative_machine" "machine" {
  region            = "us-west-1"
  ami               = "iterative-cml"
  instance_name     = "machine"
  instance_hdd_size = "10"
  instance_type     = "t2.micro"
}
```

</p>
</details>
## Argument reference

| Variable | Values | Default | |
| ------------------- | ---------------------------------------- | --------------- | --- |
| `region` | `us-west` `us-east` `eu-west` `eu-north` | `us-west` | Sets the collocation region. AWS regions are also accepted. |
| `ami` | | `iterative-cml` | Sets the AMI to be used. The provider searches the cloud provider by image name, not by id, taking the latest version in case there are many with the same name. Defaults to the [iterative-cml image](#iterative-cml-image). |
| `instance_name` | | cml\_{UID} | Sets the instance name and related resources like the AWS key pair. |
| `instance_hdd_size` | | 10 | Sets the instance hard disk size in GB. |
| `instance_type` | `m`, `l`, `xl` | `m` | Sets the instance computing size. You can also specify vendor-specific machines in AWS, e.g. `t2.micro`. [See the equivalences](#AWS-instance-equivalences) table below. |
| `instance_gpu` | ` `, `tesla`, `k80` | ` ` | Sets the desired GPU if the `instance_type` is one of our types. |
| `key_public` | | | Sets up SSH access with your OpenSSH public key. If not provided, one will be automatically generated and returned in terraform.tfstate. |
| `aws_security_group` | | `cml` | AWS-specific variable to set a specific security group. If specified, the instance will be launched with that security group within the VPC managed by it. If not, a new security group called `cml` will be created under the default VPC. |
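As a sketch of the optional arguments above, the resource below sets `key_public` and `aws_security_group`; the key material and security group name are placeholders, not values from this repository:

```tf
resource "iterative_machine" "machine" {
  region        = "us-west"
  instance_type = "m"

  # Placeholder OpenSSH public key; substitute your own.
  key_public = "ssh-ed25519 AAAA... user@host"

  # Placeholder existing security group; if omitted, a group
  # called `cml` is created under the default VPC.
  aws_security_group = "my-existing-sg"
}
```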

# Supported vendors

- AWS

### AWS instance equivalences

The instance type in AWS is calculated by joining the `instance_type` and `instance_gpu`.

| type | gpu   | aws         |
| ---- | ----- | ----------- |
| m    |       | m5.2xlarge  |
| l    |       | m5.8xlarge  |
| xl   |       | m5.16xlarge |
| m    | k80   | p2.xlarge   |
| l    | k80   | p2.8xlarge  |
| xl   | k80   | p2.16xlarge |
| m    | tesla | p3.xlarge   |
| l    | tesla | p3.8xlarge  |
| xl   | tesla | p3.16xlarge |

| region   | aws        |
| -------- | ---------- |
| us-west  | us-west-1  |
| us-east  | us-east-1  |
| eu-north | eu-north-1 |
| eu-west  | eu-west-1  |
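For example, per the equivalence tables, the following resource (a sketch; the resource name is arbitrary) should resolve to a p3.16xlarge instance in us-east-1:

```tf
resource "iterative_machine" "machine" {
  region        = "us-east" # maps to us-east-1 per the region table
  instance_type = "xl"      # joined with instance_gpu below ...
  instance_gpu  = "tesla"   # ... this resolves to p3.16xlarge
}
```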

# iterative-cml image

It's a GPU-ready image based on Ubuntu 18.04. It has the following stack already installed:

- nvidia drivers
- docker
- nvidia-docker
