Configure your own buckets

Every CARTO Self-Hosted installation needs a set of configured buckets to store resources used by the platform. These storage buckets are part of the required infrastructure for importing and exporting data, map thumbnails, customization assets (custom logos and markers), and other internal data.

You can create and use your own storage buckets in any of the following supported storage providers: Google Cloud Storage, AWS S3, and Azure Blob Storage.

Configuration

Select your preferred storage provider and complete the corresponding configuration fields below:

Google Cloud Storage

When configuring Google Cloud Storage as your storage provider, you'll have to:

  1. Create 3 buckets in GCS:

    • Assets Bucket

    • Temp Bucket

    • Data export Bucket (optional: only needed if you'd like to allow exporting data from your data warehouse)

Custom markers won't work unless the assets bucket is public.
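
For example, here's a minimal sketch using the gcloud CLI; the bucket names and location are illustrative placeholders, not values prescribed by this guide:

# Create the three buckets (names and location are placeholders)
gcloud storage buckets create gs://my-carto-assets --location=us-east1
gcloud storage buckets create gs://my-carto-temp --location=us-east1
gcloud storage buckets create gs://my-carto-exports --location=us-east1

# Make the assets bucket publicly readable so custom markers work
gcloud storage buckets add-iam-policy-binding gs://my-carto-assets \
  --member=allUsers --role=roles/storage.objectViewer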

  2. Configure CORS: the Temp and Assets buckets require the following CORS configuration:

[
    {
      "origin": ["*"],
      "method": ["GET", "PUT", "POST"],
      "responseHeader": ["Content-Type", "Content-MD5", "Content-Disposition", "Cache-Control" , "x-goog-content-length-range", "x-goog-meta-filename"],
      "maxAgeSeconds": 3600
    }
]

How do I set up the CORS configuration? Check the provider docs.
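
For example, assuming the JSON above is saved as cors.json and using placeholder bucket names, you could apply it with the gcloud CLI:

# Apply the CORS configuration to the Temp and Assets buckets
gcloud storage buckets update gs://my-carto-temp --cors-file=cors.json
gcloud storage buckets update gs://my-carto-assets --cors-file=cors.json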

  3. Ensure that the identity used to access your GCS buckets has read/write permissions over all the buckets that will be used.

  4. Provide the Project ID of the Google Cloud Platform (GCP) project where your GCS buckets are located.

  5. Specify the names of the GCS buckets that your application will be using. This allows your application to target the specific buckets for storing and retrieving data.
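
As a rough sketch of these last three steps, assuming a hypothetical service account and placeholder names (roles/storage.admin is just one example of a role granting read/write access):

# Grant the identity read/write access on each bucket (repeat per bucket)
gcloud storage buckets add-iam-policy-binding gs://my-carto-temp \
  --member=serviceAccount:carto@my-project.iam.gserviceaccount.com \
  --role=roles/storage.admin

# Values to provide in the CARTO Self-Hosted configuration (illustrative):
#   Project ID:    my-project
#   Assets bucket: my-carto-assets
#   Temp bucket:   my-carto-temp
#   Export bucket: my-carto-exports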

AWS S3

When configuring AWS S3 as your storage provider, you'll have to:

  1. Create 3 buckets in your AWS S3 account:

    • Assets Bucket

    • Temp Bucket

    • Data export Bucket (optional: only needed if you'd like to allow exporting data from your data warehouse)

Custom markers won't work unless the assets bucket is public.

When creating your buckets, please check that:

  • ACLs should be allowed.

  • If server-side encryption is enabled, the user must be granted permissions on the KMS key, following the AWS documentation
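
For example, a minimal sketch using the AWS CLI with placeholder names (us-east-1 shown; other regions also need --create-bucket-configuration):

# Create the buckets (names and region are placeholders)
aws s3api create-bucket --bucket my-carto-assets --region us-east-1
aws s3api create-bucket --bucket my-carto-temp --region us-east-1
aws s3api create-bucket --bucket my-carto-exports --region us-east-1

# Allow ACLs by setting a non-enforced object ownership mode
aws s3api put-bucket-ownership-controls --bucket my-carto-assets \
  --ownership-controls 'Rules=[{ObjectOwnership=ObjectWriter}]'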

  2. Configure CORS: the Temp and Assets buckets require the following CORS configuration:

[
    {
      "origin": ["*"],
      "method": ["GET", "PUT", "POST"],
      "responseHeader": ["Content-Type", "Content-MD5", "Content-Disposition", "Cache-Control"],
      "maxAgeSeconds": 3600
    }
]

How do I set up the CORS configuration? Check the provider docs.
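
Note that the AWS CLI expects these rules in S3's own CORS schema; a sketch of the equivalent document (saved, for instance, as cors.json):

{
  "CORSRules": [
    {
      "AllowedOrigins": ["*"],
      "AllowedMethods": ["GET", "PUT", "POST"],
      "AllowedHeaders": ["Content-Type", "Content-MD5", "Content-Disposition", "Cache-Control"],
      "MaxAgeSeconds": 3600
    }
  ]
}

You could then apply it to both buckets (names are placeholders):

aws s3api put-bucket-cors --bucket my-carto-temp --cors-configuration file://cors.json
aws s3api put-bucket-cors --bucket my-carto-assets --cors-configuration file://cors.json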

  3. Provide an Access Key ID and Secret Access Key that will be used to access your S3 buckets. You can generate these credentials through the AWS Management Console by creating an IAM user with appropriate permissions for accessing S3 resources; a CLI sketch follows after this list.

  4. Configure the region in which these buckets are located. All the buckets must be created in the same AWS region.

  5. Specify the names of the AWS buckets that your application will be using. This allows your application to target the specific buckets for storing and retrieving data.
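
A sketch of generating those credentials with the AWS CLI, using a hypothetical IAM user name:

# Create a dedicated IAM user and generate an access key pair for it
aws iam create-user --user-name carto-selfhosted
aws iam create-access-key --user-name carto-selfhosted
# Remember to attach a policy granting this user read/write access to the three buckets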

Configuration for Redshift

Create an AWS IAM role with the following settings:

  1. Trusted entity type: Custom trust policy

  2. Custom trust policy: Make sure to replace <your_aws_user_arn> with the ARN of the user whose Access Key has been configured in the CARTO deployment configuration:

{
  "Version": "2012-10-17",
  "Statement": [
      {
          "Effect": "Allow",
          "Principal": {
              "AWS": "<your_aws_user_arn>"
          },
          "Action": [
              "sts:AssumeRole",
              "sts:TagSession"
          ]
      }
  ]
}

  3. Add permissions: Create a new permissions policy. Note that you can omit the export bucket permissions if you don't want to enable exporting data from the CARTO platform.

{
   "Version": "2012-10-17",
   "Statement": [
       {
           "Effect": "Allow",
           "Action": "s3:ListBucket",
           "Resource": "arn:aws:s3:::<your_aws_s3_data_export_bucket_name>"
       },
       {
           "Effect": "Allow",
           "Action": "s3:*Object",
           "Resource": "arn:aws:s3:::<your_aws_s3_data_export_bucket_name>/*"
       },
       {
           "Effect": "Allow",
           "Action": "s3:ListBucket",
           "Resource": "arn:aws:s3:::<your_aws_s3_temp_bucket_name>"
       },
       {
           "Effect": "Allow",
           "Action": "s3:*Object",
           "Resource": "arn:aws:s3:::<your_aws_s3_temp_bucket_name>/*"
       }
   ]
}
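
If you prefer the CLI over the console, a minimal sketch that creates this role, assuming the trust policy and permissions policy above are saved as trust-policy.json and permissions.json (the role and policy names are placeholders):

# Create the role with the custom trust policy
aws iam create-role --role-name carto-redshift-export \
  --assume-role-policy-document file://trust-policy.json

# Attach the S3 permissions as an inline policy
aws iam put-role-policy --role-name carto-redshift-export \
  --policy-name carto-s3-access --policy-document file://permissions.json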

This role has permissions to use both the exports bucket and the temp bucket to store the data that will be imported into Redshift. To enable exporting data from Redshift, you'll have to specify the ARN of the role and the name of the exports bucket in the CARTO Self-Hosted configuration.

If you'd only like to enable importing data to Redshift, providing the exports bucket name is not mandatory, but you'll have to follow these instructions once the CARTO Self-Hosted deployment is ready.
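
The exact configuration keys depend on how you deploy CARTO Self-Hosted; as an illustrative sketch only (these variable names are assumptions, not documented keys):

# Hypothetical configuration values for enabling Redshift exports
EXPORTS_BUCKET_NAME=my-carto-exports
EXPORTS_BUCKET_ROLE_ARN=arn:aws:iam::123456789012:role/carto-redshift-export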

Configuration for Snowflake

Create an AWS IAM role with the following settings:

  1. Trusted entity type: Custom trust policy

  2. Custom trust policy: Make sure to replace <your_aws_user_arn> with the ARN of the user whose Access Key has been configured in the CARTO deployment configuration:

{
  "Version": "2012-10-17",
  "Statement": [
      {
          "Effect": "Allow",
          "Principal": {
              "AWS": "<your_aws_user_arn>"
          },
          "Action": [
              "sts:AssumeRole",
              "sts:TagSession"
          ]
      }
  ]
}

  3. Add permissions: Create a new permissions policy. Note that you can omit the export bucket permissions if you don't want to enable exporting data from the CARTO platform.

{
   "Version": "2012-10-17",
   "Statement": [
       {
           "Effect": "Allow",
           "Action": "s3:ListBucket",
           "Resource": "arn:aws:s3:::<your_aws_s3_data_export_bucket_name>"
       },
       {
           "Effect": "Allow",
           "Action": "s3:*Object",
           "Resource": "arn:aws:s3:::<your_aws_s3_data_export_bucket_name>/*"
       }
   ]
}
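
The same aws iam create-role / aws iam put-role-policy sketch shown in the Redshift section above applies here as well, pointing at this trust policy and permissions document.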

This role has permissions to use the exports bucket to store the data exported from Snowflake. To enable exporting data from Snowflake, you'll have to specify the ARN of the role and the name of the data export bucket in the CARTO Self-Hosted configuration.

Azure Blob

When configuring Azure Blob as your storage provider, you'll have to:

  1. Create 3 containers in your Azure Blob storage account:

    • Assets Bucket

    • Temp Bucket

    • Data export Bucket (optional: only needed if you'd like to allow exporting data from your data warehouse)

Custom markers won't work unless the assets container is public.
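
For example, a minimal sketch using the Azure CLI, with the storage account and container names as placeholders:

# Create the three containers; the assets container gets public blob access for custom markers
az storage container create --name carto-assets --account-name mycartostorage --public-access blob
az storage container create --name carto-temp --account-name mycartostorage
az storage container create --name carto-exports --account-name mycartostorage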

  2. Configure CORS: the Temp and Assets buckets require the following CORS configuration:

[
    {
      "origin": ["*"],
      "method": ["GET", "PUT", "POST"],
      "responseHeader": ["Content-Type", "Content-MD5", "Content-Disposition", "Cache-Control" , "Access-Control-Request-Headers", "X-MS-Blob-Type"],
      "maxAgeSeconds": 3600
    }
]

How do I set up the CORS configuration? Check the provider docs.
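
With the Azure CLI, CORS is configured at the storage account level for the Blob service; a sketch of the equivalent rules (the account name is a placeholder):

# Apply CORS rules to the Blob service of the storage account
az storage cors add --services b --methods GET PUT POST \
  --origins "*" --max-age 3600 \
  --allowed-headers "Content-Type" "Content-MD5" "Content-Disposition" "Cache-Control" "Access-Control-Request-Headers" "X-MS-Blob-Type" \
  --account-name mycartostorage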

  3. Provide an Access Key that will be used to access your containers.

  4. Specify the names of the containers that your application will be using. This allows your application to target the specific containers for storing and retrieving data.
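
To retrieve the account Access Keys with the Azure CLI, for example:

# List the access keys of the storage account (use key1 or key2 in the CARTO configuration)
az storage account keys list --account-name mycartostorage --resource-group my-resource-group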
