# Load Balancing best practices (Helm)

This guide provides best-practice guidance for exposing CARTO Self-Hosted to users in production environments. It covers load balancing architectures, configuration patterns, and cloud provider-specific recommendations for Kubernetes deployments using Helm.

## 1. Prerequisites

* CARTO Self-Hosted deployed via Helm.
* Kubernetes cluster running on a cloud provider (GCP, AWS, or Azure).
* Understanding of Kubernetes Services and Ingress resources.
* Domain name configured for your CARTO deployment.

## 2. Load balancing and publishing architecture

CARTO Self-Hosted uses a component called `router-http` as the main entry point for all HTTP/HTTPS traffic. You can find more details in the [CARTO Self-Hosted architecture documentation](https://docs.carto.com/carto-self-hosted/key-concepts/architecture "mention").

This router service can be exposed through different Kubernetes networking patterns:

<table><thead><tr><th width="159.93359375">Pattern</th><th width="306.578125">Description</th><th>Recommended For</th><th>Helm Management</th></tr></thead><tbody><tr><td><a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-clusterip"><strong>ClusterIP</strong></a> <strong>+</strong> <a href="https://kubernetes.io/docs/concepts/services-networking/gateway/"><strong>GatewayAPI</strong></a></td><td>Service internal to the cluster, managed by the Helm chart. Exposed via a user-managed GatewayAPI controller.</td><td>Production deployments in private/public clusters</td><td><span data-gb-custom-inline data-tag="emoji" data-code="2705">✅</span> Gateway managed separately</td></tr><tr><td><a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-clusterip"><strong>ClusterIP</strong></a> <strong>+</strong> <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/"><strong>Ingress Layer</strong></a></td><td>Service internal to the cluster, managed by the Helm chart. Exposed via a user-managed Ingress controller.</td><td>Production deployments in private/public clusters</td><td><span data-gb-custom-inline data-tag="emoji" data-code="2705">✅</span> Ingress managed separately</td></tr><tr><td><strong>Cloud Load Balancer +</strong> <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer"><strong>LoadBalancer Service</strong></a></td><td>Kubernetes creates the cloud LB automatically (internal recommended). Exposed via a user-managed Cloud Load Balancer.</td><td>Production deployments in private/public clusters</td><td><span data-gb-custom-inline data-tag="emoji" data-code="2705">✅</span> Cloud LB managed separately</td></tr><tr><td><strong>Helm-managed</strong> <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer"><strong>LoadBalancer Service</strong></a></td><td>Kubernetes automatically provisions a cloud load balancer, fully managed through Helm. <em><strong>This approach offers limited flexibility for advanced security customizations.</strong></em></td><td>Production with soft network security requirements</td><td><span data-gb-custom-inline data-tag="emoji" data-code="2705">✅</span> LB managed by Helm chart</td></tr><tr><td><a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport"><strong>NodePort</strong></a> <strong>+ Cloud Load Balancer</strong></td><td>Service internal to the cluster, managed by the Helm chart. A user-managed Cloud LB points to NodePorts.</td><td>Production deployments in private clusters</td><td>⚠️ Not recommended for production</td></tr><tr><td><strong>Helm-managed</strong> <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport"><strong>NodePort Service</strong></a></td><td>Service exposed on a static port on each node.</td><td>Development/testing</td><td>⚠️ Not recommended for production</td></tr><tr><td><strong>Helm-managed</strong> <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/"><strong>Ingress Layer</strong></a></td><td>Ingress layer managed by the Helm chart.</td><td>Not recommended.</td><td><span data-gb-custom-inline data-tag="emoji" data-code="274c">❌</span> To be deprecated.</td></tr><tr><td><strong>Helm-managed</strong> <a href="https://kubernetes.io/docs/concepts/services-networking/gateway/"><strong>GatewayAPI</strong></a></td><td>GatewayAPI usage managed by the Helm chart.</td><td>Not recommended for Helm deployments.</td><td><span data-gb-custom-inline data-tag="emoji" data-code="274c">❌</span> To be deprecated.</td></tr></tbody></table>

## 3. Recommended load balancing patterns

Below you'll find, in detail, the two recommended patterns for publishing your CARTO Self-Hosted instance through a load balancer:

* Cloud Load Balancer + Router LoadBalancer Service.&#x20;
* Ingress/GatewayAPI Layer + Router ClusterIP Service.<br>

### 3.1 Load Balancer service + Cloud Load Balancer

In this pattern, you set the service type to `LoadBalancer`. The Kubernetes Cloud Controller Manager talks directly to your cloud provider (AWS, GCP, Azure) to automatically provision and configure a native Cloud Load Balancer. You can then manage your own cloud load-balancing layer on top of this Kubernetes integration.

<figure><img src="https://3029946802-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FybPdpmLltPkzGFvz7m8A%2Fuploads%2Fdd9ftL4JC0O5fnSdqa3x%2FLoad%20Balancing.drawio%20(4).png?alt=media&#x26;token=3e0417e1-34f4-4877-ada9-3203fc8a4f80" alt=""><figcaption></figcaption></figure>

#### 3.1.1 How it works

Kubernetes requests a Load Balancer from the cloud provider. The cloud provider provisions it, assigns it a public/private IP, and automatically updates the backend target groups (nodes) as the cluster autoscales or nodes are replaced.

**When to use:**

* **Simplicity:** You want a dedicated Load Balancer for CARTO without the complexity of configuring an Ingress Controller.
* **Automation:** You want Kubernetes to automatically manage the backend node registration (avoiding the risk of stale Node IPs).
* **Cloud-specific features:** You need to utilize specific cloud features (like AWS NLB or Azure Internal LB) triggered via Service Annotations.

#### 3.1.2 Configuration

To enable this pattern, simply update the `router` service type in your values file.

```yaml
router:
  service:
    type: LoadBalancer
```

#### 3.1.3 Advanced configuration: annotations

You can customize the behavior of the cloud-provisioned Load Balancer (such as enabling SSL, setting timeouts, or making it internal-only) using **annotations**.

These annotations are specific to your Cloud Provider (AWS, GCP, Azure) and are passed directly to the `router.service.annotations` field.

**Key configuration options:**

| **Parameter**                             | **Description**                                         |
| ----------------------------------------- | ------------------------------------------------------- |
| `router.service.type`                     | Set to `LoadBalancer`                                   |
| `router.service.annotations`              | Key-value pairs for cloud-specific settings             |
| `router.service.loadBalancerIP`           | Request a specific static IP (if supported by provider) |
| `router.service.loadBalancerSourceRanges` | Firewall allow-list (CIDR ranges) for access control    |
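
For instance, the sketch below combines a pre-reserved static IP with a firewall allow-list (both values are placeholders for your own environment; note that `loadBalancerIP` is deprecated in recent Kubernetes versions in favor of provider-specific annotations):

```yaml
router:
  service:
    type: LoadBalancer
    # Pre-reserved static IP (provider-dependent; placeholder value)
    loadBalancerIP: "203.0.113.10"
    # Only these CIDR ranges may reach the load balancer (placeholder range)
    loadBalancerSourceRanges:
      - "10.0.0.0/8"
```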

#### 3.1.4 Example: AWS Load Balancer configuration

This example configures an AWS Load Balancer with SSL termination, specific timeouts, and an internet-facing scheme.

```yaml
router:
  service:
    type: LoadBalancer
    annotations:
      # Protocol used between the load balancer and the backend nodes
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      # Reference an ACM Certificate ARN for SSL termination
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:123456789012:certificate/uuid"
      # Configure SSL ports
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      # Connection settings
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "605"
      service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
```

#### 3.1.5 Common annotation cheatsheet

* **GCP Internal LB:**

  ```yaml
  router:
    service:
      annotations:
        cloud.google.com/load-balancer-type: "Internal"
  ```
* **Azure Internal LB:**

  ```yaml
  router:
    service:
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  ```
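* **AWS Internal NLB** (these annotations assume the in-tree AWS provider or the AWS Load Balancer Controller; consult your controller's documentation, as supported annotations vary):

  ```yaml
  router:
    service:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
        service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
  ```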

#### 3.1.6 Key characteristics

* ✅ **Automated Lifecycle:** Kubernetes automatically updates the Load Balancer when nodes change or die.
* ✅ **Cloud Integration:** Deep integration with cloud provider features via annotations.
* ✅ **Reduced Maintenance:** No need to manually track Node Ports or update target groups.
* ⚠️ **Cost:** Typically provisions one dedicated Load Balancer per service (higher cost than a shared Ingress), plus any standalone load balancer you set up on your own.

### 3.2 ClusterIP service + User-managed Ingress/GatewayAPI layer

<figure><img src="https://3029946802-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FybPdpmLltPkzGFvz7m8A%2Fuploads%2FvEll5jian6oASsw87Zsc%2Fimage.png?alt=media&#x26;token=33a96941-b375-40c5-8c8f-868922bd76a9" alt=""><figcaption></figcaption></figure>

The CARTO router service remains internal to the cluster (ClusterIP). A Kubernetes Ingress or GatewayAPI resource (managed separately from Helm) handles external access, SSL termination, and routing.

**When to use:**

* Standard Kubernetes deployments on cloud providers (GKE, EKS, AKS)
* Teams familiar with Kubernetes networking patterns
* Need for native Kubernetes certificate management (cert-manager, cloud-managed certs)
* Multi-service deployments sharing the same ingress controller

**Configuration:**

```yaml
router:
  service:
    type: ClusterIP  # Default value
```
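
As an illustration, a user-managed GatewayAPI route for the router service might look like the following sketch (the Gateway name, hostname, and the router Service name all depend on your environment and Helm release; `carto-router-http` is a placeholder):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: carto-http-route
spec:
  parentRefs:
    # Existing Gateway managed by your platform team (placeholder name)
    - name: my-gateway
  hostnames:
    - "carto.example.com"
  rules:
    - backendRefs:
        # ClusterIP service created by the CARTO Helm chart (name is release-dependent)
        - name: carto-router-http
          port: 80
```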

**Key characteristics:**

* ✅ Kubernetes-native: Uses standard Ingress resources and controllers
* ✅ Ecosystem integration: Works seamlessly with cert-manager, external-dns, etc.
* ✅ Simpler architecture: No need to manage node IPs or port mappings
* ✅ Cloud provider integration: GKE Ingress, AWS ALB Controller, Azure AGIC available
* ⚠️ Ingress/GatewayAPI controller required: Must install/configure if not pre-existing
* ⚠️ Learning curve: Requires understanding of Ingress concepts and annotations<br>

## 4. TLS/SSL configuration

CARTO Self-Hosted supports two primary SSL/TLS termination strategies. Your choice should align with your security requirements, compliance needs, and operational preferences:<br>

* **TLS Offloading (recommended):** Termination occurs at the external Load Balancer or Ingress.
* **End-to-End TLS:** TLS traffic passes through to the CARTO Router, which handles termination.

{% hint style="info" %}
Certificates and private keys used in this process **must not be encrypted or protected with a passphrase**. CARTO Self-Hosted does not support encrypted certificates. This requirement applies to all components and configuration steps where TLS certificates are used.
{% endhint %}

### 4.1 Decision guide

Use this table to quickly determine the best TLS/SSL strategy for your CARTO deployment.

| **Requirement**               | **Option A: TLS Offloading (recommended)**   | **Option B: End-to-End TLS**                           |
| ----------------------------- | -------------------------------------------- | ------------------------------------------------------ |
| Automatic certificate renewal | ✅ Yes (Managed by LB/Ingress or third party) | ⚠️ Manual process or other integration required         |
| Zero operational overhead     | ✅ Yes                                        | ❌ Regular maintenance needed                           |
| Compliance                    | ⚠️ May suffice, check policy                 | ✅ Required by stricter policies                        |
| Zero-trust architecture       | ⚠️ Not ideal                                 | ✅ Required                                             |

### 4.2 Option A: TLS Offloading (recommended)

In this recommended architecture, SSL/TLS is terminated at your Load Balancer or Ingress Controller. HTTP traffic is then forwarded internally to the CARTO Router.

{% hint style="info" %}
**Key Benefit:** The entire certificate lifecycle is managed automatically at the load balancer or ingress level. This eliminates the operational burden of certificate rotation.
{% endhint %}

**When to Use**

* Standard production deployments.
* Requirement for automated certificate provisioning and renewal (zero manual intervention).
* Integration with WAF or DDoS protection.
* Simplified operations with no certificate expiration concerns.

For standard deployments, use TLS Offloading: it provides the best operational experience, with zero certificate expiration risk.<br>

#### 4.2.1 Customizations.yaml for TLS Offloading

No explicit TLS configuration is needed within the CARTO Helm chart; the external component handles all TLS aspects.

<table data-header-hidden><thead><tr><th width="107.58984375">Pattern</th><th width="325.9765625">values.yaml Configuration</th><th>Notes</th></tr></thead><tbody><tr><td>ClusterIP + Ingress</td><td><p><code>router.service.type: ClusterIP</code><br></p><p><code>router.service.ports.http: 80</code></p></td><td>The Ingress Controller performs TLS termination and forwards to ClusterIP port 80.</td></tr><tr><td>ClusterIP + Cloud LB</td><td><p><code>router.service.type: ClusterIP</code><br></p><p><code>router.service.ports.http: 80</code></p></td><td>The Cloud Load Balancer performs TLS termination and forwards to ClusterIP.</td></tr></tbody></table>

```yaml
# Example: Configuration for TLS Offloading
## Disable HTTPS and disable cert autogeneration.
tlsCerts:
  httpsEnabled: false
  autoGenerate: false
router:
  service:
    type: ClusterIP
    ports:
      http: 80
```
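
For example, with an NGINX Ingress Controller and cert-manager handling certificate issuance, a user-managed Ingress that terminates TLS and forwards plain HTTP to the router could be sketched as follows (the issuer name, hostname, and Service name `carto-router-http` are placeholders for your environment):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: carto-ingress
  annotations:
    # cert-manager issues and renews the certificate automatically (placeholder issuer)
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - carto.example.com
      # cert-manager stores the issued certificate in this secret
      secretName: carto-tls
  rules:
    - host: carto.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: carto-router-http
                port:
                  number: 80
```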

### 4.3 Option B: End-to-End TLS

In this setup, SSL/TLS remains encrypted all the way to the CARTO Router, which is responsible for TLS termination.

{% hint style="info" %}
**Crucial Responsibility**: You are fully responsible for the certificate lifecycle management (acquisition, renewal, and rotation).
{% endhint %}

**When to Use**

* Strict compliance requirements mandating encryption throughout the infrastructure.
* Zero-trust architectures that require absolute end-to-end encryption.
* Requirement for Client Certificate Authentication.<br>

#### 4.3.1 Certificate preparation

Before configuring the Helm chart, you must obtain a valid SSL/TLS certificate from a Certificate Authority (CA).

**Certificate Requirements:**

* Valid SSL certificate for your CARTO domain
* Full certificate chain (server certificate + intermediate CA certificates)
* Private key corresponding to the certificate
* PEM-encoded format for both certificate and key<br>
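
Before encoding, it's worth verifying the files locally with `openssl` (the paths are placeholders, matching the encoding example in the next step; the modulus check assumes an RSA key, so adapt it for other key types):

```shell
# Inspect the certificate's subject and expiry date
openssl x509 -in /path/to/fullchain.pem -noout -subject -enddate

# The certificate and private key must belong together:
# these two commands must print the same digest
openssl x509 -in /path/to/fullchain.pem -noout -modulus | openssl md5
openssl rsa  -in /path/to/privkey.pem  -noout -modulus | openssl md5
```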

#### 4.3.2 Encode certificates to Base64

The Helm chart requires certificates to be base64-encoded **without line breaks**.

```bash
# 1. Encode the full certificate chain (including intermediates)
cat /path/to/fullchain.pem | base64 | tr -d '\n' > cert.b64

# 2. Encode the private key
cat /path/to/privkey.pem | base64 | tr -d '\n' > key.b64

# 3. View the encoded values (you'll copy these into your values file)
echo "Certificate (base64):"
cat cert.b64
echo ""
echo "Private Key (base64):"
cat key.b64
```

{% hint style="info" %}
**Notes:**

* The `tr -d '\n'` command removes all line breaks, creating a single-line base64 string
* On macOS, you can also use `base64 -i /path/to/file` without line breaks
* Keep `key.b64` secure; it contains your private key
{% endhint %}
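
To sanity-check the encoded values before pasting them, you can decode them back and inspect the result (this assumes GNU `base64 -d`; on macOS use `base64 -D` or `base64 --decode`):

```shell
# Decode the single-line string and confirm it is still a valid certificate
base64 -d cert.b64 | openssl x509 -noout -subject -enddate

# Confirm the decoded private key still parses correctly
base64 -d key.b64 | openssl pkey -noout && echo "key parses OK"
```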

#### 4.3.3 Customizations.yaml for End-to-End TLS

You must explicitly enable TLS and provide your base64-encoded certificate and private key.

```yaml
# Example: Configuration for End-to-End TLS
## Enable HTTPS and disable cert autogeneration.
tlsCerts:
  httpsEnabled: true
  autoGenerate: false

router:
  service:
    type: ClusterIP
    ports:
      https: 443
      httpsTargetPort: "https"

  ## Add the base64-encoded certificate and private key to the router.
  tlsCertificates:
    certificateValueBase64: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZaRENDQk..."
    privateKeyValueBase64: "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQU..."
```
