Configure your own buckets

For CARTO Self-Hosted using Kots

1. Overview

Every CARTO Self-Hosted installation requires two mandatory cloud storage buckets, plus an optional third, to handle data and assets used by the platform:

| Purpose | Is mandatory? | Description | Example contents |
|---|---|---|---|
| Temp Bucket | Yes | Used to upload and import datasets into CARTO. | .csv, .geojson, .zip |
| Assets Bucket | Yes | Stores generated map thumbnails and customization assets (logos, markers, etc.). | .png |
| Export Bucket | No (optional) | Used for exporting data from your data warehouse (BigQuery, Snowflake, Redshift, or Amazon RDS). Create this only if you plan to use data export features. | .csv, .json, .parquet |

You can create and use your own storage buckets in any of the following supported storage providers: Google Cloud Storage, AWS S3, or Azure Blob Storage.

2. Pre-requisites

2.1. Create the required buckets

  1. Temp Bucket

  2. Assets Bucket

  3. Export Bucket (optional)

There are no naming constraints for these buckets.

3. Configuration notes

3.1. Assets bucket (thumbnails) access

  • This bucket stores the .png image files used for map previews.

  • Bucket naming: there are no specific naming constraints for this bucket; you can use any name that complies with your cloud provider's rules.

  • Access control: by default, all thumbnail objects are configured to be publicly accessible. This ensures that features like custom branding and map markers function correctly. You can change this in the Admin Console using the "Assets bucket is Public" config.

Warning: to maintain a private bucket while still enabling these features, you must configure your bucket with a hybrid access model that meets the following conditions:

  1. Allow public objects: The bucket policy must allow individual objects to be made public, even if the bucket itself is private by default.

  2. Enable fine-grained ACLs: You must enable Access Control Lists (ACLs), sometimes referred to as "non-uniform permissions." This allows CARTO to set public-read permissions on specific thumbnail files while all other objects remain private.

  3. Disable server-side encryption: Bucket-level server-side encryption is often incompatible with setting public ACLs and must be disabled.
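As a sketch, on AWS S3 the first two conditions of this hybrid model map to the Block Public Access and Object Ownership settings. The bucket name below is a placeholder, and the commands assume a configured AWS CLI:

```shell
# Hypothetical bucket name; replace with your assets bucket.
BUCKET=carto-assets

# 1. Allow individual objects to be made public, even though the
#    bucket itself stays private by default.
aws s3api put-public-access-block \
  --bucket "$BUCKET" \
  --public-access-block-configuration \
  BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false

# 2. Enable fine-grained (non-uniform) ACLs so public-read can be set
#    on specific thumbnail objects only.
aws s3api put-bucket-ownership-controls \
  --bucket "$BUCKET" \
  --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerPreferred}]'

# On GCS, the equivalent of condition 2 is disabling uniform
# bucket-level access:
#   gsutil uniformbucketlevelaccess set off gs://$BUCKET
```

These are configuration commands against your cloud account; adapt the bucket name and verify the settings in your provider's console.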

3.2. Export bucket provider requirements

Optionally, you can create an Exports bucket to configure exports from your data warehouse.

| Data Warehouse | Required Storage Provider |
|---|---|
| BigQuery | Google Cloud Storage |
| Snowflake | AWS S3 |
| Redshift | AWS S3 |
| Amazon RDS (PostgreSQL) | AWS S3 |

3.3. CORS configuration

You need to set up a CORS policy for the Assets and Temp buckets; this is mandatory for them to work as expected. Use the following settings to configure the policy:

| Setting | Value |
|---|---|
| Allowed origins | * |
| Allowed methods | GET, PUT, POST |
| Allowed headers (common) | Content-Type, Content-MD5, Content-Disposition, Cache-Control |
| GCS extra headers (Google Cloud only) | x-goog-content-length-range, x-goog-meta-filename |
| Azure Blob extra headers (Azure only) | Access-Control-Request-Headers, X-MS-Blob-Type |
| Max age | 3600 |

CORS configuration location:

  • GCS / S3: Bucket level

  • Azure Blob: Storage account level
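For example, on Google Cloud Storage the policy above could be expressed as the following cors.json file (a sketch; apply it at the bucket level):

```json
[
  {
    "origin": ["*"],
    "method": ["GET", "PUT", "POST"],
    "responseHeader": [
      "Content-Type",
      "Content-MD5",
      "Content-Disposition",
      "Cache-Control",
      "x-goog-content-length-range",
      "x-goog-meta-filename"
    ],
    "maxAgeSeconds": 3600
  }
]
```

You would apply it with `gsutil cors set cors.json gs://<your-bucket>`; on AWS S3 the equivalent mechanism is `aws s3api put-bucket-cors`, with the same settings expressed in S3's CORSRules format.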

For more details, refer to your cloud provider's CORS documentation.

3.4. Authentication requirements

Access to the buckets from the CARTO platform requires authentication. The authentication methods available for each provider are listed below:

| Provider | Auth Method |
|---|---|
| GCS | Service Account Key |
| AWS S3 | Access Key ID + Secret Access Key |
| Azure Blob | Access Key |

Note: if you can't set up Service Account Keys, Access Keys, or a Secret Access Key due to security constraints or other reasons, you can set up GCP Workload Identity or EKS Pod Identity using the Advanced Orchestrated Deployment Method with Helm.

4. Configuration

Select your preferred storage provider (Google Cloud Storage, AWS S3, or Azure Blob Storage), then configure your storage preferences by completing the necessary fields described below:

4.1 Google Cloud Storage

When configuring Google Cloud Storage as your storage provider, you'll have to:

  1. Create 2 buckets in GCS:

    • Assets Bucket

    • Temp Bucket

  2. Optionally, create the Data export bucket if you'd like to allow exporting data from your data warehouse.

Note: custom markers won't work unless the assets bucket is public.

  3. Configure CORS: the Temp and Assets buckets require the CORS headers described in section 3.3.

Note: to set up the CORS configuration, check your provider's documentation.

  4. Ensure that the identity used to access your GCS buckets has read/write permissions on all of them. It should have the Storage Admin role on the buckets that will be used.

  5. Provide the Project ID of the Google Cloud Platform (GCP) project where your GCS buckets are located.

  6. Specify the names of the GCS buckets that your application will be using. This allows your application to target the specific buckets for storing and retrieving data.
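The bucket creation and permission steps above could be sketched with the gcloud SDK's gsutil tool. Project, region, bucket, and service account names are placeholders:

```shell
# Placeholders: replace with your project, region, buckets, and service account.
PROJECT=my-gcp-project
REGION=us-east1
SA=carto-storage@$PROJECT.iam.gserviceaccount.com

# Create the Temp and Assets buckets (add a third for data export if needed).
gsutil mb -p "$PROJECT" -l "$REGION" gs://my-carto-temp
gsutil mb -p "$PROJECT" -l "$REGION" gs://my-carto-assets

# Grant the service account the Storage Admin role on each bucket.
gsutil iam ch "serviceAccount:$SA:roles/storage.admin" gs://my-carto-temp
gsutil iam ch "serviceAccount:$SA:roles/storage.admin" gs://my-carto-assets
```

These commands require authenticated gcloud credentials with permission to create buckets in the project.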

4.2 AWS S3

When configuring AWS S3 as your storage provider, you'll have to:

  1. Create 3 buckets in your AWS S3 account:

    • Assets Bucket

    • Temp Bucket

    • Data export Bucket (optional in case you'd like to allow exporting data from your data warehouse)

Note: custom markers won't work unless the assets bucket is public.

  2. Configure CORS: the Temp and Assets buckets require the CORS headers described in section 3.3.

Note: to set up the CORS configuration, check your provider's documentation.

  3. Provide an Access Key ID and Secret Access Key that will be used to access your S3 buckets. You can generate these credentials through the AWS Management Console by creating an IAM user with appropriate permissions for accessing S3 resources.

  4. Configure the region in which these buckets are located. All the buckets must be created in the same AWS region.

  5. Specify the names of the AWS buckets that your application will be using. This allows your application to target the specific buckets for storing and retrieving data.
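As a sketch, the bucket and credential setup could be done with the AWS CLI. Bucket names, the IAM user name, and the region are placeholders:

```shell
# Placeholders: region, bucket names, and IAM user are illustrative.
REGION=us-east-1

# Create the three buckets in the same region.
aws s3api create-bucket --bucket my-carto-temp   --region "$REGION"
aws s3api create-bucket --bucket my-carto-assets --region "$REGION"
aws s3api create-bucket --bucket my-carto-export --region "$REGION"
# (outside us-east-1, each create-bucket call also needs:
#  --create-bucket-configuration LocationConstraint=$REGION)

# Create an IAM user, then generate its Access Key ID / Secret Access Key.
aws iam create-user --user-name carto-storage
aws iam create-access-key --user-name carto-storage
```

The IAM user still needs a permissions policy granting read/write access to the three buckets before CARTO can use the generated credentials.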

4.3 Configuration for Redshift

Create an AWS IAM role with the following settings:

  1. Trusted entity type: Custom trust policy

  2. Custom trust policy: make sure to replace <your_aws_user_arn> with the ARN of the user whose Access Key has been configured in the CARTO deployment configuration.

  3. Add permissions: create a new permissions policy. Note that you can omit the export bucket permissions if you don't want to enable exporting data from the CARTO platform.
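As a sketch, the custom trust policy keeps the <your_aws_user_arn> placeholder described above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "<your_aws_user_arn>" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

And a permissions policy scoped to the temp and exports buckets (bucket names are placeholders) might look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-carto-temp",
        "arn:aws:s3:::my-carto-temp/*",
        "arn:aws:s3:::my-carto-export",
        "arn:aws:s3:::my-carto-export/*"
      ]
    }
  ]
}
```

The exact set of S3 actions your deployment needs may differ; drop the export bucket resources if you don't plan to enable exports.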

This role has permissions to use both the exports bucket and the temp bucket to store data that will be imported into Redshift. To enable exporting data from Redshift, you'll have to specify the ARN of the role and the name of the exports bucket in the CARTO Self-Hosted configuration.

If you'd like to enable importing data into Redshift, providing the exports bucket's name is not mandatory, but you'll have to follow these instructions once the CARTO Self-Hosted deployment is ready.

4.4 Configuration for Snowflake

Create an AWS IAM role with the following settings:

  1. Trusted entity type: Custom trust policy

  2. Custom trust policy: make sure to replace <your_aws_user_arn> with the ARN of the user whose Access Key has been configured in the CARTO deployment configuration.

  3. Add permissions: create a new permissions policy. Note that you can omit the export bucket permissions if you don't want to enable exporting data from the CARTO platform.
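For Snowflake, a permissions policy sketch scoped to the exports bucket only (bucket name is a placeholder; the trust policy is the standard sts:AssumeRole document with <your_aws_user_arn> as the principal):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-carto-export",
        "arn:aws:s3:::my-carto-export/*"
      ]
    }
  ]
}
```

Verify the action list against your security requirements; a narrower set may suffice if you only export data.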

This role has permissions to use the exports bucket to store the data exported from Snowflake. To enable exporting data from Snowflake, you'll have to specify the ARN of the role and the name of the data export bucket in the CARTO Self-Hosted configuration.

4.5 Azure Blob

When configuring Azure Blob as your storage provider, you'll have to:

  1. Create 3 containers in your Azure Blob storage account:

    • Assets Bucket

    • Temp Bucket

    • Data export Bucket (optional in case you'd like to allow exporting data from your data warehouse)

Note: custom markers won't work unless the assets bucket is public.

  2. Configure CORS: the Temp and Assets buckets require the CORS headers described in section 3.3.

Note: to set up the CORS configuration, check your provider's documentation.

  3. Provide an Access Key that will be used to access your containers.

  4. Specify the names of the containers that your application will be using. This allows your application to target the specific containers for storing and retrieving data.
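The container and CORS setup above could be sketched with the Azure CLI. The storage account and container names are placeholders:

```shell
# Placeholders: storage account and container names are illustrative.
ACCOUNT=mycartostorage

# Create the containers (the third is optional, for data export).
az storage container create --account-name "$ACCOUNT" --name carto-assets
az storage container create --account-name "$ACCOUNT" --name carto-temp
az storage container create --account-name "$ACCOUNT" --name carto-export

# Retrieve the Access Key that CARTO will use.
az storage account keys list --account-name "$ACCOUNT"

# CORS is set at the storage account level for the Blob service.
az storage cors add --account-name "$ACCOUNT" --services b \
  --origins '*' --methods GET PUT POST \
  --allowed-headers '*' --max-age 3600
```

These commands require an authenticated Azure CLI session with access to the storage account; tighten the allowed headers to the list in section 3.3 if a wildcard is too permissive for your policy.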
