diff --git a/docs/assets/images/guides/integrations/aws-role/project-cloud-roles.png b/docs/assets/images/guides/integrations/aws-role/project-cloud-roles.png
deleted file mode 100644
index eac59cba9..000000000
Binary files a/docs/assets/images/guides/integrations/aws-role/project-cloud-roles.png and /dev/null differ
diff --git a/docs/assets/images/guides/integrations/aws-role/role-mapping.png b/docs/assets/images/guides/integrations/aws-role/role-mapping.png
deleted file mode 100644
index db05829af..000000000
Binary files a/docs/assets/images/guides/integrations/aws-role/role-mapping.png and /dev/null differ
diff --git a/docs/assets/images/guides/integrations/aws-role/role-mappings.png b/docs/assets/images/guides/integrations/aws-role/role-mappings.png
deleted file mode 100644
index cd2c60589..000000000
Binary files a/docs/assets/images/guides/integrations/aws-role/role-mappings.png and /dev/null differ
diff --git a/docs/user_guides/integrations/assume_role.md b/docs/user_guides/integrations/assume_role.md
deleted file mode 100644
index 61af6b489..000000000
--- a/docs/user_guides/integrations/assume_role.md
+++ /dev/null
@@ -1,74 +0,0 @@
-# Assuming a role
-
-When deploying Hopsworks on EC2 instances, you might need to assume different roles to access resources
-on AWS. These roles are configured in AWS and mapped to projects in Hopsworks. For a guide on how to
-configure this, see [role mapping](../role_mapping/#iam-role-mapping).
-
-After an administrator has configured role mappings in Hopsworks, you can see the roles you can assume
-by going to your project settings.
-
-*Figure: Cloud roles mapped to project.*
-
-You can then use the Hops Python and Java APIs to assume the roles listed in your project's settings page.
-
-When calling the assume role method, you can pass the role ARN string directly or use the get role method,
-which takes the role id as an argument. If a default role is assigned to your project, you can call
-the assume role method with no argument.
-
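-For illustration, the call styles described above look like this in Python (a minimal sketch; the role ARN below is a placeholder and 1 is an example role id):
-
-```python
-from hops.credentials_provider import assume_role, get_role
-
-# Assume the project's default role (no argument needed).
-credentials = assume_role()
-
-# Pass a role ARN string directly.
-credentials = assume_role(role_arn="arn:aws:iam::xxxxxxxxxxxx:role/s3-role")
-
-# Or look the ARN up by role id first.
-credentials = assume_role(role_arn=get_role(1))
-```
-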
-If you are a Data owner in the project, you can assign a default role by clicking on the **Default** button
-above the role you want to make default. You can set one default per project role. If a default is set both for
-a specific project role (Data scientist or Data owner) and for all members (ALL), the default set for the project
-role takes precedence over the default set for all members. In the image above, a Data scientist calling the
-assume role method with no arguments will assume the role with id 1029, while a Data owner calling the same
-method will assume the role with id 1.
-
-!!! hint "Use temporary credentials."
- Python
-    ```python
-    from hops.credentials_provider import get_role, assume_role
-
-    # Look up the role with id 1, assume it, and configure Spark for S3 access.
-    credentials = assume_role(role_arn=get_role(1))
-    spark.read.csv("s3a://resource/test.csv").show()
- ```
- Scala
-    ```scala
-    import io.hops.util.CredentialsProvider
-
-    // Look up the role with id 1, assume it, and configure Spark for S3 access.
-    val creds = CredentialsProvider.assumeRole(CredentialsProvider.getRole(1))
-    spark.read.csv("s3a://resource/test.csv").show()
- ```
-
-The assume role method sets the Spark Hadoop configuration properties that allow Spark to read S3 buckets.
-The code examples above show how to read an S3 bucket from Python and Scala.
-
-The assume role method also sets the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and
-AWS_SESSION_TOKEN, so that programs running in the container can use the credentials of the newly assumed role.
-
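-Because the credentials are exported as environment variables, other AWS clients running in the container can pick them up automatically. As a sketch, assuming boto3 is installed in the container and using a placeholder bucket name:
-
-```python
-from hops.credentials_provider import assume_role
-import boto3  # assumed to be available in the container
-
-# Exports AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN
-# for the project's default role.
-assume_role()
-
-# boto3 reads the temporary credentials from the environment.
-s3 = boto3.client("s3")
-for obj in s3.list_objects_v2(Bucket="resource").get("Contents", []):
-    print(obj["Key"])
-```
-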
-To read S3 buckets with TensorFlow, you also need to set the AWS_REGION environment variable to the S3
-bucket's region. The code below shows how to read training and validation datasets from an S3 bucket using TensorFlow.
-
-!!! hint "Use temporary credentials with TensorFlow."
-    ```python
- from hops.credentials_provider import get_role, assume_role
- import tensorflow as tf
- import os
-
- assume_role(role_arn=get_role(1))
-    # The S3 bucket's region needs to be set for TensorFlow
- os.environ["AWS_REGION"] = "eu-north-1"
-
- train_filenames = ["s3://resource/train/train.tfrecords"]
-    validation_filenames = ["s3://resource/validation/validation.tfrecords"]
-
- train_dataset = tf.data.TFRecordDataset(train_filenames)
- validation_dataset = tf.data.TFRecordDataset(validation_filenames)
-
-    # Inspect the first record of the training set.
-    for raw_record in train_dataset.take(1):
- example = tf.train.Example()
- example.ParseFromString(raw_record.numpy())
- print(example)
- ```
diff --git a/docs/user_guides/integrations/role_mapping.md b/docs/user_guides/integrations/role_mapping.md
deleted file mode 100644
index b0c047b81..000000000
--- a/docs/user_guides/integrations/role_mapping.md
+++ /dev/null
@@ -1,92 +0,0 @@
-# IAM role mapping
-
-Using an EC2 instance profile enables your Hopsworks cluster to access AWS resources, but it forces all Hopsworks
-users to share the instance profile role and the resource access policies attached to that role. To allow for
-per-project access policies, you could have users put AWS credentials directly in their programs, but this is not
-recommended. Instead, you should use
-[Role chaining](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-role-chaining).
-
-To use role chaining, you first need to set up IAM roles in AWS:
-
-1. Create an instance profile role whose policy allows it to assume the different resource roles that selected users should be able to assume from the Hopsworks cluster.
-   In the example below, we define 4 different resource roles: test-role, s3-role, dev-s3-role, and redshift;
-   later, we will define which users are allowed to assume which of these resource roles.
-
-```json
-{
-    "Version": "2012-10-17",
-    "Statement": [
-        {
-            "Sid": "AssumeDataRoles",
-            "Effect": "Allow",
-            "Action": "sts:AssumeRole",
-            "Resource": [
-                "arn:aws:iam::xxxxxxxxxxxx:role/test-role",
-                "arn:aws:iam::xxxxxxxxxxxx:role/s3-role",
-                "arn:aws:iam::xxxxxxxxxxxx:role/dev-s3-role",
-                "arn:aws:iam::xxxxxxxxxxxx:role/redshift"
-            ]
-        }
-    ]
-}
-```
-Example policy for assuming four roles.
-
-2. Create the resource roles and, for each of them, edit the trust relationship to add a policy document that allows the instance profile to assume the role.
-
-```json
-{
-    "Version": "2012-10-17",
-    "Statement": [
-        {
-            "Effect": "Allow",
-            "Principal": {
-                "AWS": "arn:aws:iam::xxxxxxxxxxxx:role/instance-profile"
-            },
-            "Action": "sts:AssumeRole"
-        }
-    ]
-}
-```
-Example trust policy for a resource role.
-
-3. Finally, attach the instance profile to the master node of your Hopsworks AWS instance.
-
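-To check that the chaining works, you can try assuming one of the resource roles from the master node. Below is a minimal sketch using boto3; the role ARN is a placeholder:
-
-```python
-import boto3
-
-# Runs with the instance profile credentials on the master node.
-sts = boto3.client("sts")
-response = sts.assume_role(
-    RoleArn="arn:aws:iam::xxxxxxxxxxxx:role/s3-role",  # placeholder resource role
-    RoleSessionName="role-chaining-test",
-)
-
-# Temporary credentials for the chained resource role.
-print(response["Credentials"]["Expiration"])
-```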
-
-Role chaining allows the instance profile to assume any of the 4 resource roles in the policy attached in step 1.
-Typically, we do not want every user in Hopsworks to be able to assume every resource role. You can grant selected
-users the ability to assume any of the 4 resource roles from the admin page in Hopsworks. In particular, you specify
-in which project(s) a given resource role can be used. Within a given project, you can further restrict who can
-assume the resource role by mapping the role to a group of users (Data owners or Data scientists).
-
-*Figure: Resource role mapping.*
-
-By clicking the 'Resource role mapping' icon on the admin page shown in the image above, you can add mappings
-by entering the project name and selecting which roles in that project can access the resource role.
-Optionally, you can set a role mapping as the default by ticking the default checkbox.
-The default roles can only be changed by a Data owner, who can do so on the project settings page.
-
-*Figure: Add resource role to project mapping.*
-
-Any member of a project can then go to the project settings page to see which roles they can assume.
-
-*Figure: Resource role mapped to project.*
-
-For instructions on how to use the assume role API see [assuming a role](../assume_role/#assuming-a-role).