# Single VM deployment (docker-compose)

{% hint style="danger" %}
**This documentation is for the CARTO Self-Hosted Legacy Version**. Use only if you've installed this specific version. Explore our latest documentation for updated features.
{% endhint %}

{% hint style="info" %}
**Estimated time:** Completing this deployment guide is expected to take approximately **3 hours**. This estimate may vary based on individual familiarity with the technology stack involved and the complexity of your organization's environment.
{% endhint %}

## Requirements

To deploy CARTO Self-Hosted based on a Single VM deployment, you need:

* A CARTO Self-Hosted installation package containing your environment configuration and a license key. The package has two files: `customer.env` and `key.json`. If you don't have it yet, you can ask for it at <support@carto.com>.
* A domain you own, to which you can add a DNS record.
* Familiarity with and installations of [Docker Engine](https://docs.docker.com/engine/install/) and [Docker Compose](https://docs.docker.com/compose/).
* Familiarity as a SysAdmin in the cloud environment where you are running your installation: GCP, AWS, or Azure.

## Create a Linux VM instance <a href="#id-1-create-a-linux-vm-instance" id="id-1-create-a-linux-vm-instance"></a>

CARTO Self-Hosted can be deployed in any Virtual Machine that meets the minimum requirements specified at [Single VM deployments (docker-compose)](https://docs.carto.com/key-concepts/deployment-requirements#single-vm-deployments-docker-compose).

{% tabs %}
{% tab title="Google Cloud GCP Instance" %}
Create a new Linux VM in the [Google Cloud console](https://console.cloud.google.com/) that meets the minimum requirements specified at [Single VM deployments (docker-compose)](https://docs.carto.com/key-concepts/deployment-requirements#single-vm-deployments-docker-compose).

Refer to the [Google Cloud documentation](https://cloud.google.com/compute/docs/instances/create-start-instance) to learn how to create a new virtual machine.

* Configure the firewall to allow HTTPS traffic.
* Specify an **SSD persistent disk** with a size that meets or exceeds the minimum requirements.
  {% endtab %}

{% tab title="AWS EC2 Instance" %}
Create a new Linux EC2 instance in the [Amazon EC2 console](https://console.aws.amazon.com/ec2/) using the Ubuntu Server 22.04 LTS (x86) Amazon Machine Image (AMI).

Refer to the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html) to learn how to create a new virtual machine.

* Configure the firewall to allow HTTPS traffic.
* Specify an **SSD persistent disk** with a size that meets or exceeds the minimum requirements.
  {% endtab %}

{% tab title="Azure VM" %}
Create a new Linux VM in the [Azure Portal](https://portal.azure.com/) that meets the minimum requirements.

Refer to the [Azure documentation](https://learn.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu) to learn how to create a new virtual machine.

* Configure the firewall to allow HTTPS traffic.
* When creating the VM, use SSH public key authentication and provide a username. Generate a new key pair and specify a name. Azure generates the key and stores it in Azure Key Vault so you can download it later.
* Specify an **SSD persistent disk** with a size that meets or exceeds the minimum requirements.
* Once the VM is initialized, download the private key when prompted, then restrict its permissions so your SSH client accepts it:

```bash
chmod 400 <path_to_pem_file>
ssh -i <path_to_pem_file> <username>@<public_ip>
```

Ensure **Delete public IP and NIC when VM is deleted** is enabled.
{% endtab %}
{% endtabs %}

Once your VM is ready, log in via SSH and install the latest versions of [Docker Engine](https://docs.docker.com/engine/install/) and [Docker Compose](https://docs.docker.com/compose/).
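
Before continuing, it can help to confirm the tooling is actually available on the VM. A minimal sketch that only reports what it finds (it checks the binaries this guide uses; adjust the list for your setup):

```bash
# Report whether each binary this guide relies on is available.
missing=0
for bin in git docker docker-compose; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: found"
  else
    echo "$bin: missing"
    missing=$((missing + 1))
  fi
done
echo "prerequisite check done ($missing missing)"
```

If anything is reported missing, install it before moving on to the installation steps.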

## Installation steps

Clone this repository:

```
git clone https://github.com/CartoDB/carto-selfhosted.git
cd carto-selfhosted
```

Check out the [latest stable release](https://github.com/CartoDB/carto-selfhosted/releases):

```
git checkout tags/2026.3.10
```

Copy the two files from the installation package into the `carto-selfhosted` folder:

* `customer.env`
* `key.json`

### Domain configuration

Configure your CARTO Self-hosted domain by updating the env var `SELFHOSTED_DOMAIN` in `customer.env` to <mark style="color:orange;">my.domain.com</mark>.

{% hint style="info" %}
A full domain is required. You cannot install CARTO under a domain path such as <https://my.domain.com/carto>.
{% endhint %}

Create a DNS record that points <mark style="color:orange;">my.domain.com</mark> to the **External IP** of your VM. For debugging purposes, you might want to add an entry to your `/etc/hosts` file:

```bash
echo "34.172.214.74 my.domain.com" | sudo tee -a /etc/hosts
```

### Configure the external database

Add to <mark style="color:orange;">customer.env</mark> the configuration of the [external database](https://docs.carto.com/key-concepts/deployment-requirements#external-database). At this point, you need to provide a PostgreSQL admin user (typically `postgres`) with permission to create users and databases.

* POSTGRES\_ADMIN\_USER: Your PostgreSQL admin user.
* POSTGRES\_ADMIN\_PASSWORD: The password of your admin user.
* WORKSPACE\_POSTGRES\_USER: The workspace user to be created (using the admin credentials above).
* WORKSPACE\_POSTGRES\_PASSWORD: The password for the new workspace user.
* WORKSPACE\_POSTGRES\_DB: The database to be created.

```
# Set to 0 to not create the PostgreSQL container locally
LOCAL_POSTGRES_SCALE=0
WORKSPACE_POSTGRES_HOST=<YourServerIP>
WORKSPACE_POSTGRES_PORT=5432
WORKSPACE_POSTGRES_USER=carto_workspace_admin
WORKSPACE_POSTGRES_PASSWORD=carto_workspace_admin
WORKSPACE_POSTGRES_DB=carto_workspace
# SSL will be enabled later.
WORKSPACE_POSTGRES_SSL_ENABLED=false
WORKSPACE_POSTGRES_SSL_MODE=disable
POSTGRES_ADMIN_USER=postgres
POSTGRES_ADMIN_PASSWORD=postgres
```
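
Before running `install.sh` later on, a quick sanity check that `customer.env` defines every database variable can save a debugging round-trip. A sketch under the assumption that you run it in a scratch directory — the sample file it writes is illustrative, and your real `customer.env` comes from CARTO:

```bash
# Illustrative customer.env (your real one is part of the installation package).
# Don't run this first block over your real file.
cat > customer.env <<'EOF'
LOCAL_POSTGRES_SCALE=0
WORKSPACE_POSTGRES_HOST=10.0.0.5
WORKSPACE_POSTGRES_PORT=5432
WORKSPACE_POSTGRES_USER=carto_workspace_admin
WORKSPACE_POSTGRES_PASSWORD=changeme
WORKSPACE_POSTGRES_DB=carto_workspace
POSTGRES_ADMIN_USER=postgres
POSTGRES_ADMIN_PASSWORD=changeme
EOF

# Fail fast if any required database variable is missing.
for key in WORKSPACE_POSTGRES_HOST WORKSPACE_POSTGRES_PORT \
           WORKSPACE_POSTGRES_USER WORKSPACE_POSTGRES_PASSWORD \
           WORKSPACE_POSTGRES_DB POSTGRES_ADMIN_USER POSTGRES_ADMIN_PASSWORD; do
  grep -q "^${key}=" customer.env || { echo "missing: ${key}"; exit 1; }
done
echo "customer.env: all required database keys present"
```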

In some scenarios, an SSL connection is required between the external database and the APIs. In that case, you should provide the SSL certificate and add the SSL configuration of your server to <mark style="color:orange;">customer.env</mark>.

```
WORKSPACE_POSTGRES_SSL_ENABLED=true
WORKSPACE_POSTGRES_SSL_MODE=require
# Only applies if Postgres SSL certificate is self-signed
WORKSPACE_POSTGRES_SSL_CA=/usr/src/certs/<CERTIFICATE_NAME>.pem
```

{% hint style="danger" %}
Mutual TLS connections between the external database and the APIs are not supported, so client certificates can't be configured on your external database.
{% endhint %}

Copy your certificate in `.pem` format into the `certs` folder located inside your installation directory. The whole `certs` folder is automatically mounted inside the required containers so that they can use the SSL certificate.
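
Before restarting the containers, you can confirm the file in `certs/` really is a readable PEM certificate. A self-contained sketch — the filename `postgres-ca.pem` and the CN are made up for illustration; in practice you copy your database's real CA certificate into `certs/`:

```bash
mkdir -p certs
# Throwaway self-signed certificate, purely to make the check runnable;
# replace certs/postgres-ca.pem with your database's real CA certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out certs/postgres-ca.pem -days 1 -subj "/CN=demo-postgres-ca" 2>/dev/null
# A valid PEM certificate prints its subject; a corrupt file makes openssl fail.
openssl x509 -in certs/postgres-ca.pem -noout -subject
```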

### Bring up the environment

Run the `install.sh` script to generate the `.env` file out of the `customer.env` file:

```
bash install.sh
```

Bring up the environment:

```
docker-compose up -d
```

Check that all the containers are up and running:

```
docker-compose ps
```

{% hint style="info" %}
All containers should be in the `Up` state, except for `workspace-migrations`, whose state should be `Exit 0`, meaning the database migrations finished correctly.
{% endhint %}

A **non-production-ready** deployment of CARTO should be available at <mark style="color:orange;">`https://my.domain.com`</mark>.

## Configure your storage buckets

The CARTO Self-hosted platform needs access to storage buckets where it stores assets such as imported datasets, map snapshots, and custom markers.

You can create and use your own storage buckets in any of the following supported storage providers:

* [Google Cloud Storage](https://cloud.google.com/storage)
* [AWS S3](https://aws.amazon.com/s3/)
* [Azure Blob Storage](https://azure.microsoft.com/en-us/products/storage/blobs/)

To configure them, follow the [detailed guide](#configure-your-custom-buckets) to complete the Self-Hosted configuration process.

## Add your SSL certificate

By default, CARTO Self-hosted will generate and use a self-signed certificate. In production environments, you need to provide your own SSL certificate.

{% hint style="info" %}
If you don't yet have a valid certificate for your Self-hosted domain, you can use <https://letsencrypt.org/> to obtain one.
{% endhint %}

A valid certificate contains:

* A `.crt` file with your custom domain x509 certificate.
* A `.key` file with your custom domain private key.

{% hint style="info" %}
If your TLS certificate key is protected with a passphrase, the CARTO Self-hosted installation won't work as expected. You can generate a new key file without passphrase protection using the following command:

```bash
openssl rsa -in keyfile_with_passphrase.key -out new_keyfile.key
```

{% endhint %}

1. Create a `certs` folder in the current directory (`carto-selfhosted`)
2. Copy your `<cert>.crt` and `<cert>.key` files into the `certs` folder
3. Modify the following vars in the `customer.env` file:

   ```
   ROUTER_SSL_AUTOGENERATE=0
   ROUTER_SSL_CERTIFICATE_PATH=/etc/nginx/ssl/my.domain.com.crt
   ROUTER_SSL_CERTIFICATE_KEY_PATH=/etc/nginx/ssl/my.domain.com.key
   ```
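
A common failure mode at this point is a certificate and key that don't belong together. The sketch below compares the RSA modulus of both files; it generates a throwaway pair so it runs standalone — with your real certificate, run only the comparison lines against your own `.crt` and `.key` paths:

```bash
# Throwaway pair for illustration; use your real files in practice.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout my.domain.com.key -out my.domain.com.crt \
  -days 1 -subj "/CN=my.domain.com" 2>/dev/null

# The certificate and key match if their moduli are identical.
crt_mod=$(openssl x509 -noout -modulus -in my.domain.com.crt | openssl md5)
key_mod=$(openssl rsa  -noout -modulus -in my.domain.com.key | openssl md5)
[ "$crt_mod" = "$key_mod" ] && echo "certificate and key match"
```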

Refresh:

```bash
bash install.sh
docker-compose up -d 
```

## Post-installation checks

To verify that CARTO Self-Hosted was correctly installed and is functional, we recommend performing the following checks:

1. Sign in to your Self Hosted, create a user and a new organization.
2. Go to the `Connections` page in the left-hand menu and create a new connection to one of the available providers.
3. Go to the `Data Explorer` page, click on the `Upload` button right next to the `Connections` panel. Import a dataset from a local file.
4. Go back to the `Maps` page, and create a new map.
5. In this new map, add a new layer from a table using the connection created in step 2.
6. Create a new layer from a SQL query against the same table. You can use a simple query like:

```
SELECT * FROM <dataset_name.table_name> LIMIT 100;
```

7. Create a new layer from the dataset imported in step 3.
8. Make the map public, copy the sharing URL, and open it in a new incognito window.
9. Go back to the `Maps` page, and verify your map appears there, and the map thumbnail represents the latest changes you made to the map.

**Congrats**! Once you've configured your custom buckets, you should have a production-ready deployment of CARTO Self-Hosted at <mark style="color:orange;">`https://my.domain.com`</mark>

{% hint style="info" %}
You may notice that the **onboarding experience** (demo maps, demo workflows...) and the **Data Observatory-automated features** (subscriptions, enrichment...) are disabled by default in your new organization, because the CARTO Data Warehouse is not enabled.

If you'd like to enable the onboarding experience and the Data Observatory features, follow the [guide to enable the CARTO Data Warehouse](https://docs.carto.com/carto-self-hosted/guides/guides/enable-the-carto-data-warehouse) or contact <support@carto.com>.

If you prefer not to enable the CARTO Data Warehouse, you can still use the Data Observatory without the UI features: after getting in touch, our team can deliver the data (both premium and public subscriptions) manually to your data warehouse.
{% endhint %}

## Analytics Toolbox in CARTO Self-Hosted

To fully leverage CARTO's capabilities, you need access to the Analytics Toolbox functions. Please refer to the documentation of your data warehouse provider for detailed instructions:

* [Google BigQuery](https://docs.carto.com/data-and-analysis/analytics-toolbox-for-bigquery/getting-access)
* [Amazon Redshift](https://docs.carto.com/data-and-analysis/analytics-toolbox-for-redshift/getting-access)
* [Snowflake](https://docs.carto.com/data-and-analysis/analytics-toolbox-for-snowflake/getting-access)
* [PostgreSQL](https://docs.carto.com/data-and-analysis/analytics-toolbox-for-postgresql/getting-access)

## Root Privileges

The installation of CARTO Self-Hosted doesn't require root privileges. Once the dependencies and prerequisites are satisfied, it can be performed by a regular system user whose only special permission is the ability to execute the `docker` and `docker-compose` binaries.

This is usually achieved by adding the system user to the docker group, but there is more detailed information [here](https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user).

## Troubleshooting

The following standard `docker-compose` commands can be used to debug issues that might arise:

`docker-compose logs` and `docker-compose ps`

#### Database

The <mark style="color:orange;">workspace-migrations</mark> container is responsible for creating a new user <mark style="color:orange;">carto\_workspace\_admin</mark> and a database <mark style="color:orange;">carto\_workspace</mark>.

To debug possible errors with the external database connection, check the logs of this container:

```bash
docker-compose logs workspace-migrations
```

For further assistance, check our [Support](https://docs.carto.com/carto-self-hosted/support) page.

