Configure your own buckets

Every CARTO Self-Hosted installation requires a set of configured storage buckets for resources used by the platform. These buckets are part of the required infrastructure for importing data, map thumbnails, customization assets (custom logos and markers), and other internal data.
You can create and use your own storage buckets in any of the following supported storage providers: Google Cloud Storage, AWS S3, and Azure Blob Storage.

Pre-requisites

1. Create two buckets in your preferred cloud provider:
  • Import Bucket
  • Thumbnails Bucket
There are no constraints on the bucket names.
Map thumbnails storage objects (.png files) can be configured to be public (default) or private. To change this, set WORKSPACE_THUMBNAILS_PUBLIC="false". Some features, such as branding and custom markers, won't work unless the bucket is public. However, there's a workaround to avoid making the whole bucket public: it requires allowing public objects, allowing ACLs (or non-uniform permissions), and disabling server-side encryption.
2. CORS configuration: the Thumbnails and Import buckets require the following CORS headers to be configured (a sample configuration is shown after this list):
  • Allowed origins: *
  • Allowed methods: GET, PUT, POST
  • Allowed headers (common): Content-Type, Content-MD5, Content-Disposition, Cache-Control
    • GCS (extra): x-goog-content-length-range, x-goog-meta-filename
    • Azure (extra): Access-Control-Request-Headers, X-MS-Blob-Type
  • Max age: 3600
CORS is configured at the bucket level in GCS and S3, and at the storage account level in Azure.
How do I set up the CORS configuration? Check the provider docs: GCS, AWS S3, Azure Blob Storage.
3. Generate credentials with read/write permissions to access those buckets. The supported authentication methods are:
  • GCS: Service Account Key
  • AWS: Access Key ID and Secret Access Key
  • Azure Blob: Access Key
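As a reference, this is a minimal sketch of how the CORS settings above could be applied to a GCS bucket with gsutil; the bucket names are placeholders, and the equivalent rules can be set on S3 and Azure through their own tooling (see the provider docs linked above).
# Write the CORS rules matching the requirements above (GCS syntax)
cat > cors.json <<'EOF'
[
  {
    "origin": ["*"],
    "method": ["GET", "PUT", "POST"],
    "responseHeader": [
      "Content-Type",
      "Content-MD5",
      "Content-Disposition",
      "Cache-Control",
      "x-goog-content-length-range",
      "x-goog-meta-filename"
    ],
    "maxAgeSeconds": 3600
  }
]
EOF
# Apply the rules to both buckets
gsutil cors set cors.json gs://<import_bucket_name>
gsutil cors set cors.json gs://<thumbnails_bucket_name>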

Single VM deployments (Docker Compose)

Google Cloud Storage
In order to use Google Cloud Storage custom buckets you need to:
1. Create a custom Service Account.
2. Grant this Service Account the following role, in addition to access to the buckets: roles/iam.serviceAccountTokenCreator (see the example commands after these steps).
3. Set the following variables in your customer.env file:
# Thumbnails bucket
WORKSPACE_THUMBNAILS_PROVIDER='gcp'
WORKSPACE_THUMBNAILS_PUBLIC=<true|false>
WORKSPACE_THUMBNAILS_BUCKET=<thumbnails_bucket_name>
WORKSPACE_THUMBNAILS_KEYFILENAME=/usr/src/certs/<gcp_key>.json
WORKSPACE_THUMBNAILS_PROJECTID=<gcp_project_id>
# Import bucket
IMPORT_PROVIDER='gcp'
IMPORT_BUCKET=<import_bucket_name>
IMPORT_KEYFILENAME=/usr/src/certs/<gcp_key>.json
IMPORT_PROJECTID=<gcp_project_id>
The service account key file that is used to access the GCP buckets should be copied into the certs folder, which is located inside the CARTO installation folder.
If <BUCKET>_KEYFILENAME is not defined, the GOOGLE_APPLICATION_CREDENTIALS environment variable is used as the default value. When the self-hosted service account is set up as the default service account of a Compute Engine instance, there's no need to set any of these variables, as the containers will inherit the instance's default credentials.
If <BUCKET>_PROJECTID is not defined, the GOOGLE_CLOUD_PROJECT environment variable is used as the default value.
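As a reference, a sketch of granting the required permissions with the gcloud and gsutil CLIs; the service account email and bucket names are placeholders for your own values.
# Allow the service account to create tokens for itself (roles/iam.serviceAccountTokenCreator)
gcloud iam service-accounts add-iam-policy-binding \
  <service_account_name>@<gcp_project_id>.iam.gserviceaccount.com \
  --member="serviceAccount:<service_account_name>@<gcp_project_id>.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountTokenCreator"
# Grant read/write access on both buckets
gsutil iam ch \
  serviceAccount:<service_account_name>@<gcp_project_id>.iam.gserviceaccount.com:roles/storage.admin \
  gs://<import_bucket_name> gs://<thumbnails_bucket_name>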
AWS S3
In order to use AWS S3 custom buckets you need to:
1. Create an IAM user and generate a programmatic key ID and secret. If server-side encryption is enabled, the user must be granted permissions on the KMS key used.
2. Grant this user read/write access permissions on the buckets (see the example policy after these steps).
3. Set the following variables in your customer.env file:
# Thumbnails bucket
WORKSPACE_THUMBNAILS_PROVIDER='s3'
WORKSPACE_THUMBNAILS_PUBLIC=<true|false>
WORKSPACE_THUMBNAILS_BUCKET=<thumbnails_bucket_name>
WORKSPACE_THUMBNAILS_ACCESSKEYID=<aws_access_key_id>
WORKSPACE_THUMBNAILS_SECRETACCESSKEY=<aws_access_key_secret>
WORKSPACE_THUMBNAILS_REGION=<aws_s3_region>
# Import bucket
IMPORT_PROVIDER='s3'
IMPORT_BUCKET=<import_bucket_name>
IMPORT_ACCESSKEYID=<aws_access_key_id>
IMPORT_SECRETACCESSKEY=<aws_access_key_secret>
IMPORT_REGION=<aws_s3_region>
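As a reference, one possible way to grant the read/write access described above is an inline IAM policy like the sketch below; the user, policy, and bucket names are placeholders, and your installation may need a narrower or broader set of actions (plus permissions on the KMS key if server-side encryption is enabled).
# Attach an inline policy to the IAM user used by CARTO
aws iam put-user-policy \
  --user-name <carto_iam_user> \
  --policy-name carto-buckets-access \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:ListBucket"],
        "Resource": [
          "arn:aws:s3:::<import_bucket_name>",
          "arn:aws:s3:::<thumbnails_bucket_name>"
        ]
      },
      {
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
        "Resource": [
          "arn:aws:s3:::<import_bucket_name>/*",
          "arn:aws:s3:::<thumbnails_bucket_name>/*"
        ]
      }
    ]
  }'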
Azure Blob Storage
In order to use Azure Blob Storage buckets you need to:
1. Create a storage account if you don't have one already.
2. Generate an Access Key from the storage account's Security properties (see the example command after these steps).
3. Set the following variables in your customer.env file:
# Thumbnails bucket
WORKSPACE_THUMBNAILS_PROVIDER='azure-blob'
WORKSPACE_THUMBNAILS_PUBLIC=<true|false>
WORKSPACE_THUMBNAILS_BUCKET=<thumbnails_bucket_name>
WORKSPACE_THUMBNAILS_STORAGE_ACCOUNT=<storage_account_name>
WORKSPACE_THUMBNAILS_STORAGE_ACCESSKEY=<access_key>
# Import bucket
IMPORT_PROVIDER='azure-blob'
IMPORT_BUCKET=<import_bucket_name>
IMPORT_STORAGE_ACCOUNT=<storage_account_name>
IMPORT_STORAGE_ACCESSKEY=<access_key>
Note that, unlike GCS, the Azure Access Key is passed directly through the environment variables above, so no key file needs to be copied into the certs folder.
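As a reference, the Access Key can also be retrieved with the Azure CLI; the account and resource group names are placeholders.
# List the access keys of the storage account and print the first one
az storage account keys list \
  --account-name <storage_account_name> \
  --resource-group <resource_group_name> \
  --query "[0].value" --output tsv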

Orchestrated container deployment (Kubernetes)

Google Cloud Storage
In order to use Google Cloud Storage custom buckets you need to:
1. Add the following lines to your customizations.yaml and replace the <values> with your own settings:
appConfigValues:
  storageProvider: "gcp"
  workspaceImportsBucket: <import_bucket_name>
  workspaceImportsPublic: <false|true>
  workspaceThumbnailsBucket: <thumbnails_bucket_name>
  workspaceThumbnailsPublic: <false|true>
  thumbnailsBucketExternalURL: <public or authenticated external bucket URL>
  googleCloudStorageProjectId: <gcp_project_id>
Note that thumbnailsBucketExternalURL could be https://storage.googleapis.com/<thumbnails_bucket_name>/ for public access or https://storage.cloud.google.com/<thumbnails_bucket_name>/ for authenticated access.
2. Select a Service Account that will be used by the application to interact with the buckets. There are two options:
    1. Using a custom Service Account that will be used not only for the buckets, but also for the services deployed by CARTO. If you are using Workload Identity, this is your option.
    2. Using a dedicated Service Account only for the buckets.
3. Grant the selected Service Account the role roles/iam.serviceAccountTokenCreator in the GCP project where it was created.
⚠️ We don't recommend granting this role at the project IAM level, but instead at the Service Account permissions level (IAM > Service Accounts > your_service_account > Permissions).
4. Grant the selected Service Account the role roles/storage.admin on the buckets created.
5. [OPTIONAL] Pass your GCP credentials as secrets. This is only required if you are going to use a dedicated Service Account just for the buckets:
    • Option 1: Automatically create the secret:
      appSecrets:
        googleCloudStorageServiceAccountKey:
          value: |
            <REDACTED>
    appSecrets.googleCloudStorageServiceAccountKey.value should be in plain text, preserving the multiline format and correct indentation.
    • Option 2: Using an existing secret: Create a secret by running the command below, after replacing the <PATH_TO_YOUR_SECRET.json> value with the path to the Service Account key file (an example of generating this file is shown after this section):
      kubectl create secret generic \
      [-n my-namespace] \
      mycarto-google-storage-service-account \
      --from-file=key=<PATH_TO_YOUR_SECRET.json>
      Add the following lines to your customizations.yaml, without replacing any value:
      appSecrets:
        googleCloudStorageServiceAccountKey:
          existingSecret:
            name: mycarto-google-storage-service-account
            key: key
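As a reference, a sketch of generating the key file referenced as <PATH_TO_YOUR_SECRET.json> for a dedicated Service Account; the service account email is a placeholder.
# Create a JSON key for the dedicated Service Account used only for the buckets
gcloud iam service-accounts keys create key.json \
  --iam-account=<service_account_name>@<gcp_project_id>.iam.gserviceaccount.com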
AWS S3
In order to use AWS S3 custom buckets you need to:
1. Create an IAM user and generate a programmatic key ID and secret.
2. Grant this user read/write access permissions on the buckets. If server-side encryption is enabled, the user must be granted permissions on the KMS key used.
3. Add the following lines to your customizations.yaml and replace the <values> with your own settings:
appConfigValues:
  storageProvider: "s3"
  workspaceImportsBucket: <import_bucket_name>
  workspaceImportsPublic: <false|true>
  workspaceThumbnailsBucket: <thumbnails_bucket_name>
  workspaceThumbnailsPublic: <false|true>
  thumbnailsBucketExternalURL: <external bucket URL>
  awsS3Region: <s3_buckets_region>
Note that thumbnailsBucketExternalURL should be https://<thumbnails_bucket_name>.s3.amazonaws.com/
4. Pass your AWS credentials as secrets by using one of the options below:
    • Option 1: Automatically create a secret
      Add the following lines to your customizations.yaml, replacing the values with your access key values:
      appSecrets:
        awsAccessKeyId:
          value: "<REDACTED>"
        awsAccessKeySecret:
          value: "<REDACTED>"
      appSecrets.awsAccessKeyId.value and appSecrets.awsAccessKeySecret.value should be in plain text.
    • Option 2: Using an existing secret
      Create a secret by running the command below, after replacing the <REDACTED> values with your key values:
      kubectl create secret generic \
      [-n my-namespace] \
      mycarto-custom-s3-secret \
      --from-literal=awsAccessKeyId=<REDACTED> \
      --from-literal=awsSecretAccessKey=<REDACTED>
      Use the same namespace where you are installing the Helm chart.
      Add the following lines to your customizations.yaml, without replacing any value:
      appSecrets:
        awsAccessKeyId:
          existingSecret:
            name: mycarto-custom-s3-secret
            key: awsAccessKeyId
        awsAccessKeySecret:
          existingSecret:
            name: mycarto-custom-s3-secret
            key: awsSecretAccessKey
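With either option, you can check that the secret exists and contains both keys before installing or upgrading the chart, for example:
# Inspect the secret created above (data keys should be awsAccessKeyId and awsSecretAccessKey)
kubectl describe secret mycarto-custom-s3-secret -n my-namespace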
Azure Blob Storage
In order to use Azure Blob Storage buckets you need to:
1. Generate an Access Key from the storage account's Security properties.
2. Add the following lines to your customizations.yaml and replace the <values> with your own settings:
appConfigValues:
  storageProvider: "azure-blob"
  azureStorageAccount: <storage_account_name>
  workspaceImportsBucket: <import_bucket_name>
  workspaceImportsPublic: <false|true>
  workspaceThumbnailsBucket: <thumbnails_bucket_name>
  thumbnailsBucketExternalURL: <external bucket URL>
  workspaceThumbnailsPublic: <false|true>
Note that thumbnailsBucketExternalURL should be https://<azure_resource_group>.blob.core.windows.net/<thumbnails_bucket_name>/
3. Pass your credentials as secrets by using one of the options below:
    • Option 1: Automatically create the secret: Add the following lines to your customizations.yaml, replacing the value with your access key:
      appSecrets:
        azureStorageAccessKey:
          value: "<REDACTED>"
      appSecrets.azureStorageAccessKey.value should be in plain text.
    • Option 2: Using an existing secret: Create a secret by running the command below, after replacing the <REDACTED> value with your access key:
      kubectl create secret generic \
      [-n my-namespace] \
      mycarto-custom-azure-secret \
      --from-literal=azureStorageAccessKey=<REDACTED>
      Use the same namespace where you are installing the Helm chart.
      Add the following lines to your customizations.yaml, without replacing any value:
      appSecrets:
        azureStorageAccessKey:
          existingSecret:
            name: mycarto-custom-azure-secret
            key: azureStorageAccessKey
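Once customizations.yaml contains the storage settings and secrets, apply it when installing or upgrading the deployment. A minimal sketch, assuming mycarto is your release name and <carto_chart_reference> is the CARTO Self-Hosted chart reference you already use:
# Install or upgrade the deployment with the customized storage configuration
helm upgrade --install mycarto <carto_chart_reference> \
  -n my-namespace \
  -f customizations.yaml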