Depending on your chosen cloud provider and type of deployment, CARTO Self-Hosted can be deployed with different architectural setups, as illustrated below.

Single Virtual Machine Deployment

Orchestrated Container Deployment


The following diagram describes the different components of CARTO Self-Hosted. Each of these components maps to either a container or an external service in the deployment.


It's the main entry point of the application: an nginx reverse proxy that routes HTTPS traffic to the right components.
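The routing logic of such a reverse proxy can be sketched as a prefix-to-upstream lookup. This is a minimal illustration, not CARTO's actual nginx configuration; only the "/" (workspace-www) and "/acc/" (accounts-www) mappings appear in this document, and the upstream names are taken from the component descriptions below.

```python
# Illustrative path-based dispatch, similar to ordered nginx `location` blocks.
# More specific prefixes must come first.
ROUTES = [
    ("/acc/", "accounts-www"),  # login/signup web application
    ("/", "workspace-www"),     # workspace web application (root path)
]

def resolve_upstream(path: str) -> str:
    """Return the first upstream whose prefix matches the request path."""
    for prefix, upstream in ROUTES:
        if path.startswith(prefix):
            return upstream
    return "workspace-www"  # default to the root application

# resolve_upstream("/acc/login") -> "accounts-www"
# resolve_upstream("/")          -> "workspace-www"
```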


PostgreSQL database to manage the metadata of the CARTO platform.

Subscribers and APIs use this database to perform their operations.

More info is available in the external database section of the deployment requirements.


Message broker to exchange messages between the different pieces of the platform using the producer-consumer paradigm.

APIs or scheduler processes often produce messages to be consumed by a subscriber. For example, the Import API produces a message to import a file, and the import-worker consumes that message and performs the import operation.

This component is not a container included in the Self-Hosted deployment; it uses Google Cloud Pub/Sub.
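The producer-consumer flow described above can be sketched with an in-process queue standing in for the broker. In the real deployment the broker is Google Cloud Pub/Sub; the message shape and the `import_file` type here are illustrative, not CARTO's actual message schema.

```python
import queue
import threading

# In-process stand-in for the message broker (really Google Cloud Pub/Sub).
broker: "queue.Queue[dict]" = queue.Queue()
processed = []

def produce_import_request(file_url: str) -> None:
    """Producer side, e.g. the Import API publishing an import job."""
    broker.put({"type": "import_file", "url": file_url})

def import_worker() -> None:
    """Consumer side: take one message off the bus and handle it."""
    msg = broker.get()
    processed.append(msg["url"])  # the real worker would run the import here
    broker.task_done()

produce_import_request("https://example.com/data.geojson")
t = threading.Thread(target=import_worker)
t.start()
t.join()
# processed == ["https://example.com/data.geojson"]
```

The key property this models is decoupling: the producer returns immediately after publishing, and the worker can run on a different machine and at its own pace.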


In-memory Redis cache for the APIs and subscribers.

CARTO Self-Hosted can work without this cache; the platform will simply fall back to running the queries against the metadata database.

You can use an external service of your cloud provider as explained in more detail here.
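The fallback behavior described above resembles the cache-aside pattern with graceful degradation: on a cache miss or a cache outage, the query goes to the metadata database. This is a minimal sketch under that assumption; the `FakeRedis` client, exception type, and `fetch_metadata` helper are all hypothetical, not CARTO's actual code.

```python
class CacheUnavailable(Exception):
    """Raised when the cache backend cannot be reached."""

class FakeRedis:
    """Toy stand-in for a Redis client, with an on/off switch."""
    def __init__(self, up: bool = True):
        self.up, self.store = up, {}
    def get(self, key):
        if not self.up:
            raise CacheUnavailable
        return self.store.get(key)
    def set(self, key, value):
        if not self.up:
            raise CacheUnavailable
        self.store[key] = value

def fetch_metadata(key, cache, query_db):
    """Try the cache first; on a miss or a cache outage, query the database."""
    try:
        cached = cache.get(key)
        if cached is not None:
            return cached
    except CacheUnavailable:
        return query_db(key)  # cache is down: go straight to the database
    value = query_db(key)
    try:
        cache.set(key, value)  # populate the cache for the next lookup
    except CacheUnavailable:
        pass  # best effort: a dead cache must not fail the request
    return value

db = {"account:1": {"name": "acme"}}
# Works identically whether the cache is up or down:
assert fetch_metadata("account:1", FakeRedis(up=True), db.get) == {"name": "acme"}
assert fetch_metadata("account:1", FakeRedis(up=False), db.get) == {"name": "acme"}
```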


The Web Application of the workspace. It's a modularized React application. Applications in the CARTO platform such as Builder or Workflows are inside this container.

It serves the root path (/) of the Self-Hosted: a basic nginx container with static files like HTML, CSS, and JS.


The Web Application manages the login and signup process. It's a modularized React application.

It's served at the /acc/ path of the Self-Hosted: a basic nginx container with static files like HTML, CSS, and JS.

It requires a connection with the CARTO accounts API to perform different operations: create accounts, create users, invite users, etc.


An HTTP cache that works as a CDN for the maps-api and some endpoints of workspace-api. It uses Varnish HTTP Cache.


API to support the Web Application of the workspace (workspace-www).


Maps and SQL high-performance API. It's the component with the highest traffic, as it's heavily used by other components.

It doesn't perform batch operations. Batch operations of SQL API are executed by sql-worker.

You should run multiple instances of this container to scale according to your needs.


Import API. This component doesn't perform the actual import operations; it just creates jobs for import-worker.


Location Data Services (LDS) API.


Consumer of the messages related to workspace at the Message Broker. It reads the messages published on that bus and performs the required actions.


Consumer of the messages related to imports at the Message Broker. It uploads geospatial files into the customer Data Warehouse.

It requires 8GB of RAM to process up to 1GB of geospatial files.


Consumer of the messages related to SQL at the Message Broker.

This worker is mainly used to execute SQL in batch for customers using PostgreSQL or Redshift. BigQuery and Snowflake data warehouses don't use this component, as the platform relies on the batch APIs (jobs) provided by those data warehouses.


Consumer of the messages related to invalidation at the Message Broker. It invalidates content at http-cache.


Containers used for the deployment of the Admin Console. They serve the static assets needed by the Admin Console.

This piece manages the changes applied to your CARTO Self-Hosted configuration, and it handles licensing-related processes and upgrade checks.
