Load Balancing best practices (Helm)

This guide provides best practices for exposing CARTO Self-Hosted to users in production environments. It covers load balancing architectures, configuration patterns, and cloud provider-specific recommendations for Kubernetes deployments using Helm.

1. Prerequisites

  • CARTO Self-Hosted deployed via Helm.

  • Kubernetes cluster running on a cloud provider (GCP, AWS, or Azure).

  • Understanding of Kubernetes Services and Ingress resources.

  • Domain name configured for your CARTO deployment.

2. Load balancing and publishing architecture

CARTO Self-Hosted uses a component called router-http as the main entry point for all HTTP/HTTPS traffic. See the Architecture page of the CARTO Self-Hosted documentation for details.

This router service can be exposed through different Kubernetes networking patterns:

| Pattern | Description | Recommended For | Helm Management |
| --- | --- | --- | --- |
| ClusterIP Service + user-managed GatewayAPI | Service internal to the cluster, managed by the Helm chart. Exposed via a user-managed GatewayAPI controller. | Production deployments in private/public clusters | Ingress managed separately |
| ClusterIP Service + user-managed Ingress | Service internal to the cluster, managed by the Helm chart. Exposed via a user-managed Ingress controller. | Production deployments in private/public clusters | Ingress managed separately |
| Cloud Load Balancer + LoadBalancer Service | Kubernetes creates the cloud LB automatically (internal recommended). Exposed via a user-managed Cloud Load Balancer. | Production deployments in private/public clusters | Ingress managed separately |
| Helm-managed LoadBalancer Service | Kubernetes automatically provisions a cloud load balancer, fully managed through Helm. Limited flexibility for advanced security customizations. | Production with soft network security requirements | LB managed by Helm chart |
| NodePort + Cloud Load Balancer | Service internal to the cluster, managed by the Helm chart. A user-managed Cloud LB points to NodePorts. | Production deployments in private clusters | ⚠️ Not recommended for production |
| Helm-managed NodePort Service | Service exposed on a static port on each node. | Development/testing | ⚠️ Not recommended for production |
| Helm-managed Ingress Layer | Ingress layer managed by the Helm chart. | Not recommended | To be deprecated |
| Helm-managed GatewayAPI | GatewayAPI usage managed by the Helm chart. | Not recommended for Helm deployments | To be deprecated |

3. Recommended patterns

Below you'll find the two recommended patterns for publishing your CARTO Self-Hosted instance through a Load Balancer, in detail:

  • Cloud Load Balancer + Router LoadBalancer Service.

  • Ingress/GatewayAPI Layer + Router ClusterIP Service.

3.1 Load Balancer service + Cloud Load Balancer

In this pattern, you define the service type as LoadBalancer. The Kubernetes Cloud Controller Manager talks directly to your cloud provider (AWS, GCP, Azure) to automatically provision and configure a native Cloud Load Balancer. You handle your own Cloud Load Balancing layer on top of this K8s integration.

3.1.1 How it works:

Kubernetes requests a Load Balancer from the cloud provider. The cloud provider provisions it, assigns it a public/private IP, and automatically updates the backend target groups (nodes) as the cluster autoscales or nodes are replaced.

When to use:

  • Simplicity: You want a dedicated Load Balancer for CARTO without the complexity of configuring an Ingress Controller.

  • Automation: You want Kubernetes to automatically manage the backend node registration (avoiding the risk of stale Node IPs).

  • Cloud-specific features: You need to utilize specific cloud features (like AWS NLB or Azure Internal LB) triggered via Service Annotations.

3.1.2 Configuration

To enable this pattern, set router.service.type to LoadBalancer in your values file.
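A minimal values override might look like the following sketch (the parameter path matches the options listed in 3.1.3):

```yaml
# customizations.yaml — minimal sketch: expose the router via a cloud LoadBalancer
router:
  service:
    type: LoadBalancer
```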

3.1.3 Advanced configuration: annotations

You can customize the behavior of the cloud-provisioned Load Balancer (such as enabling SSL, setting timeouts, or making it internal-only) using annotations.

These annotations are specific to your Cloud Provider (AWS, GCP, Azure) and are passed directly to the router.service.annotations field.

Key configuration options:

| Parameter | Description |
| --- | --- |
| router.service.type | Set to LoadBalancer |
| router.service.annotations | Key-value pairs for cloud-specific settings |
| router.service.loadBalancerIP | Request a specific static IP (if supported by the provider) |
| router.service.loadBalancerSourceRanges | Firewall allow-list (CIDR ranges) for access control |

3.1.4 Example: AWS Load Balancer configuration

This example configures an AWS Load Balancer with SSL termination, specific timeouts, and an internet-facing scheme.
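A sketch of such a values file is shown below. The annotations are the standard in-tree AWS ELB Service annotations; verify them against the load balancer controller version running in your cluster, and note that the ACM certificate ARN is a placeholder.

```yaml
# customizations.yaml — illustrative AWS example (verify annotations for your setup)
router:
  service:
    type: LoadBalancer
    annotations:
      # ACM certificate used for SSL termination at the load balancer (placeholder ARN)
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:eu-west-1:123456789012:certificate/REPLACE-ME"
      # Terminate TLS on port 443 and speak plain HTTP to the backends
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      # Idle timeout in seconds
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
      # AWS load balancers are internet-facing by default; set
      # service.beta.kubernetes.io/aws-load-balancer-internal: "true" for internal-only.
```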

3.1.5 Common annotation cheatsheet:

  • GCP Internal LB:

  • Azure Internal LB:
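The internal-LB annotations referenced above are shown below (use the block matching your provider; both annotation keys are the ones documented by GKE and AKS respectively, but double-check your provider's docs for your cluster version):

```yaml
# GCP internal LB (GKE):
router:
  service:
    annotations:
      networking.gke.io/load-balancer-type: "Internal"

# Azure internal LB (AKS) — alternative to the above, not in addition:
# router:
#   service:
#     annotations:
#       service.beta.kubernetes.io/azure-load-balancer-internal: "true"
```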

Key characteristics

  • Automated Lifecycle: Kubernetes automatically updates the Load Balancer when nodes change or die.

  • Cloud Integration: Deep integration with cloud provider features via annotations.

  • Reduced Maintenance: No need to manually track Node Ports or update target groups.

  • ⚠️ Cost: Typically provisions one dedicated Load Balancer per service (higher cost than Ingress), plus the standalone load balancer you set up on your own.

3.2 ClusterIP service + User-managed Ingress/GatewayAPI layer

The CARTO router service remains internal to the cluster (ClusterIP). A Kubernetes Ingress or GatewayAPI resource (managed separately from Helm) handles external access, SSL termination, and routing.

When to use:

  • Standard Kubernetes deployments on cloud providers (GKE, EKS, AKS)

  • Teams familiar with Kubernetes networking patterns

  • Need for native Kubernetes certificate management (cert-manager, cloud-managed certs)

  • Multi-service deployments sharing the same ingress controller

Configuration:
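A minimal sketch: the chart only needs the router left as ClusterIP, while an Ingress managed outside the chart routes to it. The Service name carto-router, the namespace, the ingress class, and the host are placeholders for illustration; use the actual Service name created by your release.

```yaml
# customizations.yaml — router stays internal to the cluster
router:
  service:
    type: ClusterIP
```

```yaml
# Ingress managed outside the Helm chart (hypothetical names and host)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: carto-router
  namespace: carto
spec:
  ingressClassName: nginx
  rules:
    - host: carto.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: carto-router   # the router Service created by your release
                port:
                  number: 80
```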

Key characteristics:

  • ✅ Kubernetes-native: Uses standard Ingress resources and controllers

  • ✅ Ecosystem integration: Works seamlessly with cert-manager, external-dns, etc.

  • ✅ Simpler architecture: No need to manage node IPs or port mappings

  • ✅ Cloud provider integration: GKE Ingress, AWS ALB Controller, Azure AGIC available

  • ⚠️ Ingress/GatewayAPI controller required: Must install/configure if not pre-existing

  • ⚠️ Learning curve: Requires understanding of Ingress concepts and annotations

4. TLS/SSL configuration

CARTO Self-Hosted supports two primary SSL/TLS termination strategies. Your choice should align with your security requirements, compliance needs, and operational preferences:

  • TLS Offloading (recommended): Termination occurs at the external Load Balancer or Ingress.

  • End-to-End TLS: TLS traffic passes through to the CARTO Router, which handles termination.

Certificates and private keys used in this process must not be encrypted or protected with a passphrase. CARTO Self-Hosted does not support encrypted certificates. This requirement applies to all components and configuration steps where TLS certificates are used.

4.1 Decision guide

Use this table to quickly determine the best TLS/SSL strategy for your CARTO deployment.

| Requirement | Option A: TLS Offloading (recommended) | Option B: End-to-End TLS |
| --- | --- | --- |
| Automatic certificate renewal | ✅ Yes (managed by LB/Ingress or third party) | ⚠️ Manual process or other integration required |
| Zero operational overhead | ✅ Yes | ❌ Regular maintenance needed |
| Compliance | ⚠️ May suffice, check policy | ✅ Required by stricter policies |
| Zero-trust architecture | ⚠️ Not ideal | ✅ Required |

4.2 Option A: TLS Offloading (recommended)

In this recommended architecture, SSL/TLS is terminated at your Load Balancer or Ingress Controller. HTTP traffic is then forwarded internally to the CARTO Router.

Key Benefit: The entire certificate lifecycle is managed automatically at the load balancer or ingress level. This eliminates the operational burden of certificate rotation.

When to Use

  • Standard production deployments.

  • Requirement for automated certificate provisioning and renewal (zero manual intervention).

  • Integration with WAF or DDoS protection.

  • Simplified operations with no certificate expiration concerns.

For standard deployments, use TLS Offloading: it provides the best operational experience with zero certificate expiration risk.

4.2.1 Customizations.yaml for TLS Offloading

No explicit TLS configuration is needed within the CARTO Helm chart; the external component handles all SSL aspects.

ClusterIP + Ingress

  • router.service.type: ClusterIP

  • router.service.ports.http: 80

The Ingress Controller performs TLS termination and forwards plain HTTP to ClusterIP port 80.

ClusterIP + Cloud LB

  • router.service.type: ClusterIP

  • router.service.ports.http: 80

The Cloud Load Balancer performs TLS termination and forwards plain HTTP to the ClusterIP service.
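Both variants reduce to the same minimal values snippet, matching the settings above:

```yaml
# customizations.yaml — TLS offloading: the router serves plain HTTP internally
router:
  service:
    type: ClusterIP
    ports:
      http: 80
```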

4.3 Option B: End-to-End TLS

In this setup, SSL/TLS remains encrypted all the way to the CARTO Router, which is responsible for TLS termination.

Crucial Responsibility: You are fully responsible for the certificate lifecycle management (acquisition, renewal, and rotation).

When to Use

  • Strict compliance requirements mandating encryption throughout the infrastructure.

  • Zero-trust architectures that require absolute end-to-end encryption.

  • Requirement for Client Certificate Authentication.

4.3.1 Certificate preparation

Before configuring the Helm chart, you must obtain a valid SSL/TLS certificate from a Certificate Authority (CA).

Certificate Requirements:

  • Valid SSL certificate for your CARTO domain

  • Full certificate chain (server certificate + intermediate CA certificates)

  • Private key corresponding to the certificate

  • PEM-encoded format for both certificate and key
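Before encoding anything, it is worth sanity-checking that the private key actually matches the certificate. The sketch below generates a throwaway self-signed pair so it is self-contained; substitute your real fullchain.pem and privkey.pem (the modulus comparison assumes an RSA key).

```shell
# Throwaway self-signed pair for demonstration only; use your real files instead.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=carto.example.com" \
  -keyout /tmp/privkey.pem -out /tmp/cert.pem 2>/dev/null

# The two digests must be identical if the key matches the certificate.
openssl x509 -noout -modulus -in /tmp/cert.pem | openssl md5
openssl rsa -noout -modulus -in /tmp/privkey.pem | openssl md5
```

Because the key is read without any passphrase prompt, this also confirms it is unencrypted, as required above.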

4.3.2 Encode certificates to Base64

The Helm chart requires certificates to be base64-encoded without line breaks.
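The commands below sketch how to produce the single-line string; a throwaway file stands in for a real PEM so the snippet is self-contained, and you would point it at your actual certificate and key files.

```shell
# Stand-in for a real PEM file; replace with your certificate or key path.
PEM=/tmp/example.pem
printf 'example-pem-content\n' > "$PEM"

# Encode and strip line breaks so the result is a single line
# (on GNU coreutils, `base64 -w 0` achieves the same in one step).
base64 "$PEM" | tr -d '\n'; echo
```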

Notes:

  • The tr -d '\n' command removes all line breaks, creating a single-line base64 string

  • On macOS, you can also use base64 -i /path/to/file without line breaks

  • Keep these base64 strings secure - they contain your private key

4.3.3 Customizations.yaml for End-to-End TLS

You must explicitly enable TLS and reference a Kubernetes secret containing your certificate and key.
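The exact value keys depend on your chart version, so the shape below is only a sketch: the existingSecret key name is illustrative, while the kubectl command for creating the TLS secret is standard Kubernetes. Consult the chart's values reference for the authoritative key names.

```yaml
# customizations.yaml — end-to-end TLS (illustrative key names; check your chart)
router:
  tlsCertificates:
    # Reference a pre-created Kubernetes TLS secret, created e.g. with:
    #   kubectl create secret tls carto-router-tls \
    #     --cert=fullchain.pem --key=privkey.pem
    existingSecret: "carto-router-tls"
```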
