# Orchestrated container deployment (Kots)

**Estimated time:** Completing this deployment guide takes approximately **2 hours**, depending on your familiarity with the technology stack and the complexity of your organization's environment.

This guide provides step-by-step instructions for deploying CARTO Self-Hosted on a Kubernetes cluster using Kots.

## 1. Prerequisites

Before you begin, ensure you have the following tools and assets ready.

### **1.1 Required tools**

* [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) installed and configured to access your cluster. To generate a kubeconfig entry, see the documentation for [Google Cloud Platform](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl#generate_kubeconfig_entry), [AWS](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html), and [Azure](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-cli#connect-to-the-cluster).
* A working [installation of Helm](https://helm.sh/docs/intro/install/), version 3.6.0 or later.
* [Troubleshoot.sh](https://troubleshoot.sh/docs/) installed to run the preflight checks (see [step 8](#id-8.-run-preflight-checks-and-deploy)).
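Before continuing, you can quickly confirm that the required CLIs are on your `PATH` with a short shell loop like this one (a sketch; add any other tools you rely on to the list):

```shell
# Report which of the required CLIs are installed; never fails, just prints
for tool in kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool (install it before continuing)"
  fi
done
```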

### **1.2 Required assets**

* **CARTO Installation Package**: You should have received a package from CARTO support containing two key files:
  * `carto-values.yaml`: Contains the base configuration.
  * `carto-secrets.yaml`: Contains your license and private credentials.
  * If you don't have it yet, you can ask for it at <support@carto.com>.
* **Kubernetes Cluster**: A running cluster that meets CARTO's hardware and software requirements.
  * To create a cluster, see documentation on [Google Cloud Platform](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster), [AWS](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html), and [Azure](https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-deploy-cluster?tabs=azure-cli). This cluster **must fit** our hardware and software [requirements](https://docs.carto.com/key-concepts/deployment-requirements#hardware-requirements) for Kubernetes.
* **External PostgreSQL Database**: A running PostgreSQL instance accessible from your cluster.
* **Cloud Storage Buckets**: Pre-configured storage buckets on a supported provider (GCS, S3, or Azure Blob Storage). Follow the guide [Configure your own buckets](https://docs.carto.com/carto-self-hosted/guides/guides-helm/configure-your-own-buckets) to set them up.
* **DNS name and Certificate**: A domain you own, to which you can add a DNS record, and an SSL certificate for that domain.

## 2. Install Kots plugin for kubectl

Run the following command to install the Kots plugin for kubectl:

```bash
curl https://kots.io/install | sudo bash
```

## 3. Install CARTO

Check that your cluster meets the [deployment requirements](https://docs.carto.com/carto-self-hosted/key-concepts/deployment-requirements) and use the following command to install the CARTO Admin Console in your Kubernetes cluster:

```bash
kubectl kots install carto -n <namespace>
```

During the installation you will be prompted to set a password for the Admin Console, which serves as the central hub for managing your CARTO Self-Hosted deployment.

Once the Admin Console is successfully deployed, follow the link shown in the output to open the Kots console, where you'll be able to upload your license file and further configure your CARTO Self-Hosted deployment.

Click the "Continue" button to upload the license of your CARTO Self-Hosted installation and start configuring the different settings of your platform.

## 4. Log in to the Admin Console

The Admin Console is a web-based UI for managing your CARTO installation, including configuration, updates, and license management.

1. Forward the Admin Console port to your local machine. This command creates a secure tunnel to the console running in your cluster. Replace `<namespace>` with the namespace you used during installation.

   ```bash
   kubectl kots admin-console --namespace <namespace>
   ```
2. You'll see output like this:

   ```bash
   kubectl kots admin-console -n <namespace>
     • Press Ctrl+C to exit
     • Go to http://localhost:8800 to access the Admin Console
   ```

3. Open your web browser and navigate to `http://localhost:8800` (or the port shown in the output).
4. Log in using the password you created during the installation step.

## 5. Setup metadata database connection

At this point, we'll set up the configuration of the [external database](https://docs.carto.com/key-concepts/deployment-requirements#external-database). You need to provide a PostgreSQL user and a database that user can access; this will be the metadata database used by the CARTO platform.

<figure><img src="https://3029946802-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FybPdpmLltPkzGFvz7m8A%2Fuploads%2Fgit-blob-8f7a38b0318a656b61ea85937211d450805ffcca%2FScreenshot%202024-02-20%20at%2010.29.08.png?alt=media" alt=""><figcaption></figcaption></figure>

If you already have a PostgreSQL deployment that your CARTO Self-Hosted platform can use, you'll have to [create a new database](https://www.postgresql.org/docs/current/manage-ag-createdb.html) for CARTO and a user with sufficient permissions on that database.
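As a sketch of that setup (the host, user names, and password here are placeholders; run it as a PostgreSQL admin user against your own instance):

```bash
# Placeholders throughout: replace host, user, and password values
psql -h <postgres-host> -U <admin-user> -d postgres <<'SQL'
CREATE DATABASE carto;
CREATE USER carto_user WITH ENCRYPTED PASSWORD '<secure-password>';
GRANT ALL PRIVILEGES ON DATABASE carto TO carto_user;
ALTER DATABASE carto OWNER TO carto_user;
SQL
```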

To enable TLS connections, you'll also have to provide the SSL certificate of your PostgreSQL database.

{% hint style="danger" %}
Mutual TLS connections between the external database and the APIs are not supported, so client certificates can't be configured on your external database.
{% endhint %}

{% hint style="warning" %}
**Azure PostgreSQL Flexible Server**:

* Make sure the CARTO user has ownership of the `carto` database and all privileges on its schema.
* Extensions must be allowlisted before creation. Run `az postgres flexible-server parameter set --name azure.extensions --value "pgcrypto"`, then create the extension as the admin user. See the [Azure docs](https://go.microsoft.com/fwlink/?linkid=2301063).
{% endhint %}

## 6. Setup access to CARTO

Configure your CARTO Self-Hosted domain to <mark style="color:orange;">my.domain.com</mark>.

<figure><img src="https://3029946802-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FybPdpmLltPkzGFvz7m8A%2Fuploads%2Fgit-blob-119b31970cac883d9641509b6185ef0a702f36f5%2FScreenshot%202024-03-19%20at%2011.42.11.png?alt=media" alt=""><figcaption></figcaption></figure>

{% hint style="info" %}
A full domain is required. You cannot install CARTO in a domain path like <https://my.domain.com/carto>
{% endhint %}

Depending on your Kubernetes provider, you'll find the following options to configure the access to CARTO Self-Hosted platform:

{% tabs %}
{% tab title="GKE" %}
**Default access mode**

With the default access option, we'll set up a standard [**gke-l7-global-external-managed**](https://cloud.google.com/kubernetes-engine/docs/concepts/gateway-api) load balancer within your cluster to expose the platform through a public IP. Note that **enabling the Gateway API on your cluster is mandatory**.

To configure the load balancer you'll have to provide the name of a valid SSL certificate managed on GCP.

You can [configure/request your SSL certificate in GCP](https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs#terraform_1) by navigating to Security > Data protection > Certificate Manager > Classic certificates. In this panel you can either add a custom SSL certificate or request a certificate managed by GCP.

{% hint style="info" %}
If you don't have a valid certificate for your Self-Hosted domain yet, you can use a self-signed one. Generate a self-signed certificate valid for GCP with the following command:

```bash
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -sha256 -days 365
```

Note that you'll then have to strip the passphrase from the generated key:

```bash
openssl rsa -in key.pem -out key_without_pass.pem
```

{% endhint %}
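The two interactive commands in the hint above can also be combined into a single non-interactive step: `-nodes` writes the key without a passphrase and `-subj` presets the certificate subject (the domain here is a placeholder):

```shell
# Non-interactive self-signed certificate: -nodes skips the key passphrase,
# -subj presets the subject (replace the domain with yours)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem -sha256 -days 365 \
  -subj "/CN=my.domain.com"

# Inspect the generated certificate
openssl x509 -in cert.pem -noout -subject -enddate
```

Because `-nodes` already writes the key unencrypted, the separate `openssl rsa` step isn't needed in this variant.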

If you don't provide a static IP address for your CARTO Self-Hosted platform, the gateway deployed on your GKE cluster will use an automatically assigned one.

**Custom access mode**

If you'd like to configure your own load balancer, you can select this mode and connect it to the CARTO router service.

Please refer to the "**Other providers**" tab for more information about how to configure access to the CARTO platform.
{% endtab %}

{% tab title="EKS" %}
**Default access mode**

With the default access option, we'll set up a standard load balancer service within your cluster to expose the platform through a public IP.

To configure the load balancer you'll have to provide the name of a valid SSL certificate managed on AWS.

You can [request your SSL certificate](https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html) or [import a self-signed certificate](https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html) in AWS by navigating to the AWS Certificate Manager.

{% hint style="info" %}
If you don't have a valid certificate for your Self-Hosted domain yet, you can use a self-signed one. Generate a self-signed certificate valid for AWS with the following command:

```bash
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -sha256 -days 365
```

Note that you'll then have to strip the passphrase from the generated key:

```bash
openssl rsa -in key.pem -out key_without_pass.pem
```

{% endhint %}

If you don't provide a static IP address for your CARTO Self-Hosted platform, the load balancer deployed on your EKS cluster will use an automatically assigned one. Note that AWS will automatically assign a domain to your load balancer, but you'll only be able to access the CARTO platform from the domain you configured before.

If you prefer not to provide a static IP for your load balancer, you'll have to configure your domain to point to the load balancer's assigned address. More information on the required changes is available [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/using-domain-names-with-elb.html#dns-associate-custom-elb).

**Custom access mode**

If you'd like to configure your own load balancer, you can select this mode and connect it to the CARTO router service.

Please refer to the "**Other providers**" tab to obtain more information about how to configure the access to CARTO platform.
{% endtab %}

{% tab title="Other providers" %}
Configuring access to the CARTO platform involves multiple steps, particularly around networking and security. CARTO supports HTTPS only, so any deployment must ensure secure communication over HTTPS.

You'll have to deploy your own load balancer. This load balancer will handle incoming HTTPS traffic and direct it to the appropriate services within your Kubernetes cluster.

CARTO includes a router service responsible for directing incoming requests to the appropriate components within its deployment. **You'll need to ensure that your custom load balancer is connected to this router service** so that it can correctly forward incoming requests to CARTO.

Since CARTO requires HTTPS, you'll need to configure SSL certificates for securing communication. TLS termination can be configured at different levels within the deployment:

* **Terminate TLS inside CARTO application**: You can configure SSL certificates directly at the router service level. This ensures that all incoming traffic to the CARTO platform is encrypted right at the entry point.
* **Terminate TLS in a higher layer and connect to CARTO over HTTP**: Alternatively, SSL certificates can be configured at a higher layer, such as the Load Balancer level. In this setup, the Load Balancer terminates SSL connections and forwards decrypted traffic to the router service and other components of the CARTO platform within the Kubernetes cluster. This approach offloads SSL termination from individual components within the Kubernetes cluster, simplifying their configuration.

By following these steps, you can deploy the CARTO platform with secure HTTPS communication while accommodating your organization's SSL certificate requirements.
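As a sketch of the second approach (TLS terminated above CARTO, with plain HTTP forwarded to the router service), an Ingress might look like the following. The ingress class, service name, secret name, and port are all assumptions; check the actual router service name and port with `kubectl get svc -n <namespace>`.

```yaml
# Hypothetical example: names, class, and port are placeholders
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: carto-ingress
  namespace: <namespace>
spec:
  ingressClassName: nginx          # assumes an NGINX ingress controller
  tls:
    - hosts:
        - my.domain.com
      secretName: carto-tls-cert   # TLS secret holding your certificate and key
  rules:
    - host: my.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <carto-router-service>   # check with kubectl get svc
                port:
                  number: 80
```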

**Ingress testing mode**

When configuring access to the CARTO platform, you can enable the ingress testing mode by selecting the following option:

<figure><img src="https://3029946802-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FybPdpmLltPkzGFvz7m8A%2Fuploads%2Fgit-blob-6c865393845b4ecc5d9886b10707c2a9bd4f4503%2FScreenshot%202024-07-16%20at%2011.52.32.png?alt=media" alt=""><figcaption></figcaption></figure>

This deploys the minimum components of the CARTO platform needed to verify that the layers you configure on top of CARTO are working as expected. If everything is correctly configured, you should see the following static website when navigating to the CARTO platform:

<figure><img src="https://3029946802-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FybPdpmLltPkzGFvz7m8A%2Fuploads%2Fgit-blob-11f1b809b11dc52ecd94c3b76cb5ffba9c239ff0%2FScreenshot%202024-07-16%20at%2011.55.04.png?alt=media" alt=""><figcaption></figcaption></figure>

Once you can see that website, the access mode for the CARTO platform is correctly configured; disable the ingress testing mode and apply the change to start using the application.
{% endtab %}
{% endtabs %}

## 7. Setup cloud storage buckets configurations

The CARTO Self-Hosted platform needs access to storage buckets to store resources used by the platform, such as imported datasets, map snapshots, and custom markers.

You can create and use your own storage buckets in any of the following supported storage providers:

* [Google Cloud Storage](https://cloud.google.com/storage)
* [AWS S3](https://aws.amazon.com/s3/)
* [Azure Blob Storage](https://azure.microsoft.com/es-es/products/storage/blobs/)

To configure them, follow the [detailed guide](https://docs.carto.com/carto-self-hosted/guides/guides/configure-your-own-buckets) to complete the Self-Hosted configuration process.

<figure><img src="https://3029946802-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FybPdpmLltPkzGFvz7m8A%2Fuploads%2Fgit-blob-6f8591ca0a40d9f1e40641359a0617583e06f381%2Fimage%20(1)%20(2).png?alt=media" alt=""><figcaption></figcaption></figure>

## 8. Run preflight checks and deploy

After saving your configuration, you'll be taken to the Admin Console dashboard to begin the deployment.

### **8.1 Review preflight checks**

The dashboard will automatically run a series of preflight checks to validate that your cluster is ready for CARTO. Review the results carefully to identify and resolve any failures before proceeding.

{% hint style="warning" %}
Clusters in GKE with Autopilot enabled may show anomalies in the preflight checks, causing unnecessary alerts or warnings during deployment. You can safely ignore these alerts and proceed with your deployment as usual.
{% endhint %}


### **8.2 Deploy the platform**

Once all checks pass, click the Deploy button to begin the installation. *This process may take several minutes to complete*. After the deployment is finished, you can monitor the status of all Kubernetes pods directly from the dashboard. Click the details link to ensure all services show a healthy and running status.
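From a terminal, you can also watch the pods directly until they all report a Ready status (replace `<namespace>` with the namespace used during installation):

```bash
# Watch pod status; press Ctrl+C to stop once all pods are Running and Ready
kubectl get pods -n <namespace> --watch
```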

### **8.3 Configure your DNS**

The final step is to point your DNS record to the application's external IP address. If you selected the default access mode, you can find the IP by running the command corresponding to your cloud provider:

{% tabs %}
{% tab title="GKE" %}
Obtain the IP of the Gateway deployed for the CARTO router:

```bash
kubectl get gateway -n <namespace> -o jsonpath="{.items[0].status.addresses[0].value}"
```

{% endtab %}

{% tab title="EKS" %}
Obtain the IP of the Load Balancer service deployed for the CARTO router (the service name below is a placeholder; check it with `kubectl get svc -n <namespace>`):

```bash
kubectl get svc -n <namespace> <carto-router-service-name> -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
```

{% endtab %}
{% endtabs %}

If you selected the custom access mode or you're using another provider, you'll have to obtain the IP according to the way you configured access to your CARTO Self-Hosted platform.
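For reference, the DNS record pointing your domain at the deployment typically looks like this in zone-file syntax (all values are illustrative; AWS load balancers usually expose a hostname rather than an IP, in which case a CNAME or alias record is used instead):

```
my.domain.com.    300    IN    A        <external-ip>
; for AWS load balancers that expose a hostname instead of an IP:
; my.domain.com.  300    IN    CNAME    <load-balancer-hostname>
```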

## 9. Post-installation checks

To verify that CARTO Self-Hosted was correctly installed and is functional, we recommend performing the following checks:

1. Sign in to your Self Hosted, create a user and a new organization.
2. Go to the `Connections` page in the left-hand menu and create a new connection to one of the available providers.
3. Go to the `Data Explorer` page, click on the `Upload` button right next to the `Connections` panel. Import a dataset from a local file.
4. Go back to the `Maps` page, and create a new map.
5. In this new map, add a new layer from a table using the connection created in step 2.
6. Create a new layer from a SQL query on the same table. You can use a simple query like:

```sql
SELECT * FROM <dataset_name.table_name> LIMIT 100;
```

7. Create a new layer from the dataset imported in step 3.
8. Make the map public, copy the sharing URL and open it in a new incognito window.
9. Go back to the `Maps` page, and verify your map appears there, and the map thumbnail represents the latest changes you made to the map.

**Congrats**! Once you've configured your custom buckets, you should have a production-ready deployment of CARTO Self-Hosted at <mark style="color:orange;">`https://my.domain.com`</mark>

{% hint style="info" %}
You may notice that the **onboarding experience** (demo maps, demo workflows...) and the **Data Observatory-automated features** (subscriptions, enrichment...) are disabled by default in your new organization, because the CARTO Data Warehouse is not enabled.

If you'd like to enable the onboarding experience and the Data Observatory features, follow the [guide to enable the CARTO Data Warehouse](https://docs.carto.com/carto-self-hosted/guides/guides/enable-the-carto-data-warehouse) or contact <support@carto.com>.

If you prefer not to enable the CARTO Data Warehouse, you can still use the Data Observatory without the UI features: after getting in touch, our team can deliver the data (both premium and public subscriptions) manually to your data warehouse.
{% endhint %}

### 9.1 Analytics Toolbox in CARTO Self-Hosted

To fully leverage CARTO's capabilities, you need access to the Analytics Toolbox functions. Please refer to the documentation for your data warehouse provider for detailed instructions:

* [Google BigQuery](https://docs.carto.com/data-and-analysis/analytics-toolbox-for-bigquery/getting-access)
* [Amazon Redshift](https://docs.carto.com/data-and-analysis/analytics-toolbox-for-redshift/getting-access)
* [Snowflake](https://docs.carto.com/data-and-analysis/analytics-toolbox-for-snowflake/getting-access)
* [Databricks](https://docs.carto.com/data-and-analysis/analytics-toolbox-for-databricks)
* [PostgreSQL](https://docs.carto.com/data-and-analysis/analytics-toolbox-for-postgresql/getting-access)

## Troubleshooting

From the Admin Console, you can analyze your CARTO installation by clicking on the Troubleshoot section. From this view you can generate a support bundle that collects all the information required to check the status of your deployment.

For further assistance, check our [Support](https://docs.carto.com/carto-self-hosted/support) page.
