Data Enrichment

Components to enrich your data with variables from other data sources. These components work with both simple features and spatial index grids.

Enrich H3 Grid

Description

This component enriches a target table with data from a source. Enriching here means adding columns with aggregated data from the source rows that match the target geographies.

  • The target (the upper input connection of this component) must have a column containing H3 indices, which will be used to join with the source.

  • The source (the lower input connection) can be either a CARTO Data Observatory subscription or a table (or the result of another component) with a geography column.

The enrichment operation requires the CARTO Analytics Toolbox, and one of the following procedures will be called:

  • DATAOBS_ENRICH_GRID if the source is a Data Observatory subscription

  • ENRICH_GRID otherwise
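Conceptually, a grid enrichment is a join on the grid index followed by a per-cell aggregation. The following is a highly simplified Python sketch of those semantics, with hypothetical data; the actual procedures run inside your data warehouse and also handle apportioning source geographies to grid cells:

```python
# Simplified sketch of the join-and-aggregate semantics of a grid enrichment.
# Hypothetical data: the real procedures run inside your data warehouse.

from collections import defaultdict

# Target: rows keyed by an H3 index (the "Target geo column").
target = [{"h3": "8928308280fffff"}, {"h3": "8928308280bffff"}]

# Source: rows assigned to H3 cells, carrying a numeric variable.
source = [
    {"h3": "8928308280fffff", "population": 120},
    {"h3": "8928308280fffff", "population": 80},
    {"h3": "8928308280bffff", "population": 50},
]

def enrich_grid(target, source, variable, agg):
    """Add one aggregated column from the source to each target row."""
    groups = defaultdict(list)
    for row in source:
        groups[row["h3"]].append(row[variable])
    aggs = {"sum": sum, "avg": lambda v: sum(v) / len(v), "min": min, "max": max}
    out = []
    for row in target:
        values = groups.get(row["h3"], [])
        enriched = dict(row)
        enriched[f"{variable}_{agg}"] = aggs[agg](values) if values else None
        out.append(enriched)
    return out

print(enrich_grid(target, source, "population", "sum"))
# First cell aggregates 120 + 80; the second cell only has one matching row
```

The same variable can be requested several times with different aggregation methods, which in this sketch would simply mean calling `enrich_grid` once per (variable, method) pair.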

Inputs

  • Target geo column: the column of the target that will be used to join with the source, selecting the rows to be aggregated for each target row.

  • Source geo column: (only needed for non-DO sources) the column of the source that will be joined with the target.

  • Variables: the data from the source that will be aggregated and added to the target.

    • For Data Observatory subscriptions, the variables can be selected from the DO variables of the subscription, identified by their variable slug;

    • for other sources, they are the columns of the source table.

    Each variable added must be assigned an aggregation method, and the same variable can be added with different aggregation methods. At the moment, only numeric variables are supported.

For spatially smoothed enrichments that take into account the surrounding cells, use the following input parameters:

  • Kring size: size of the k-ring where the decay function will be applied. This value can be 0, in which case no k-ring will be computed and the decay function won't be applied.

  • Decay function: decay function to aggregate and smooth the data. Supported values are uniform, inverse, inverse_square and exponential.
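As a rough intuition, the decay function controls how much a cell at k-ring distance k contributes relative to the target cell itself. The formulas below are illustrative assumptions, not the exact expressions used by the Analytics Toolbox:

```python
import math

def decay_weight(function, k):
    """Illustrative weight for cells at k-ring distance k (k = 0 is the target cell)."""
    if function == "uniform":
        return 1.0                      # every cell in the k-ring counts equally
    if function == "inverse":
        return 1.0 / (1 + k)            # weight falls off with distance
    if function == "inverse_square":
        return 1.0 / (1 + k) ** 2       # faster fall-off
    if function == "exponential":
        return math.exp(-k)             # fastest fall-off
    raise ValueError(f"unknown decay function: {function}")

for fn in ("uniform", "inverse", "inverse_square", "exponential"):
    print(fn, [round(decay_weight(fn, k), 3) for k in range(3)])
```

With `Kring size` 0 no neighbors are considered, so the decay function has no effect, as noted above.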

Outputs

  • Result table [Table]

Enrich Points

Description

This component enriches a target table with data from a source. Enriching here means adding columns with aggregated data from the source rows that match (intersect) the target geographies.

  • The target (the upper input connection of this component) must have a geo column, which will be used to intersect with the source.

  • The source (the lower input connection) can be either a CARTO Data Observatory subscription or a table (or the result of another component) with a geo column.

The enrichment operation requires the CARTO Analytics Toolbox, and one of the following procedures will be called:

  • DATAOBS_ENRICH_POINTS if the source is a Data Observatory subscription

  • ENRICH_POINTS otherwise

Inputs

  • Target geo column: the column of the target that will be used to intersect with the source, selecting the rows to be aggregated for each target row.

  • Source geo column: (only needed for non-DO sources) the column of the source that will be intersected with the target.

  • Variables: the data from the source that will be aggregated and added to the target.

    • For Data Observatory subscriptions, the variables can be selected from the DO variables of the subscription, identified by their variable slug;

    • for other sources, they are the columns of the source table.

    Each variable added must be assigned an aggregation method, and the same variable can be added with different aggregation methods. At the moment, only numeric variables are supported.
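The intersect-and-aggregate semantics can be sketched as follows, using hypothetical axis-aligned boxes in place of real geographies (the actual procedures perform true geography intersection in the data warehouse):

```python
# Sketch of intersect-and-aggregate semantics for a point enrichment.
# Source geographies are axis-aligned boxes here for simplicity only.

target_points = [{"id": 1, "x": 2, "y": 2}, {"id": 2, "x": 8, "y": 8}]

source = [
    {"box": (0, 0, 5, 5), "income": 30000},   # (xmin, ymin, xmax, ymax)
    {"box": (0, 0, 10, 10), "income": 50000},
]

def contains(box, x, y):
    xmin, ymin, xmax, ymax = box
    return xmin <= x <= xmax and ymin <= y <= ymax

def enrich_points(points, source, variable, agg):
    """For each target point, aggregate the variable over intersecting source rows."""
    aggs = {"sum": sum, "avg": lambda v: sum(v) / len(v), "min": min, "max": max}
    out = []
    for p in points:
        values = [r[variable] for r in source if contains(r["box"], p["x"], p["y"])]
        enriched = dict(p)
        enriched[f"{variable}_{agg}"] = aggs[agg](values) if values else None
        out.append(enriched)
    return out

print(enrich_points(target_points, source, "income", "avg"))
# Point 1 falls inside both boxes; point 2 only inside the larger one
```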

Outputs

  • Result table [Table]

Enrich Polygons

Description

This component enriches a target table with data from a source. Enriching here means adding columns with aggregated data from the source rows that match (intersect) the target geographies.

  • The target (the upper input connection of this component) must have a geo column, which will be used to intersect with the source.

  • The source (the lower input connection) can be either a CARTO Data Observatory subscription or a table (or the result of another component) with a geo column.

The enrichment operation requires the CARTO Analytics Toolbox, and one of the following procedures will be called:

  • DATAOBS_ENRICH_POLYGONS if the source is a Data Observatory subscription

  • ENRICH_POLYGONS otherwise

Inputs

  • Target geo column: the column of the target that will be used to intersect with the source, selecting the rows to be aggregated for each target row.

  • Source geo column: (only needed for non-DO sources) the column of the source that will be intersected with the target.

  • Variables: the data from the source that will be aggregated and added to the target.

    • For Data Observatory subscriptions, the variables can be selected from the DO variables of the subscription, identified by their variable slug;

    • for other sources, they are the columns of the source table.

    Each variable added must be assigned an aggregation method, and the same variable can be added with different aggregation methods. At the moment, only numeric variables are supported.

Outputs

  • Result table [Table]

Enrich Polygons with Weights

Description

This component uses a data source (either a table or a DO subscription) to enrich a target table, using weights to control the enrichment.

Inputs

  • Target table to be enriched

  • Source table with data for the enrichment

  • Weights table with data to weight the enrichment

Settings

  • Target polygons geo column: Select the column from the target table that contains a valid geography.

  • Source table geo column: Select the column from the source table that contains a valid geography.

  • Variables: Select a list of variables and aggregation methods from the source table to be used to enrich the target table. Valid aggregation methods are:

    • COUNT: computes the number of features that contain the enrichment variable and are intersected by the input geography.

    • SUM: assumes the aggregated variable is an extensive property (e.g. population). Accordingly, the value corresponding to the intersected feature is weighted by the fraction of the intersected weight variable.

    • MIN: assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the weight variable.

    • MAX: assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the weight variable.

    • AVG: assumes the aggregated variable is an intensive property (e.g. temperature, population density). A weighted average is computed, using the value of the intersected weight variable as weights.

💡 The component will return an error if all variables selected are aggregated as MIN or MAX, since the result wouldn't actually be weighted.

  • Weights geo column: Select the column from the weights table that contains a valid geography.

  • Weights variable: Select one variable and aggregation operation to be used as weight for the enrichment.

If your weight variables are included in the same table as the source variables, you can connect the same node to both inputs in this component.

The source and the weights inputs must be of the same kind: when the source for the enrichment is a standard table, the weights source can't be a DO subscription, and when the source for the enrichment is a DO subscription, the weights source can't be a standard table.

Outputs

  • Output table with the following schema

    • All columns from Target

    • A column for each variable in 'Variables', named like 'name_sum', 'name_avg' or 'name_max', depending on the original column name and the aggregation method.
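The extensive vs. intensive distinction among the aggregation methods can be illustrated with a toy example: SUM apportions the value by the fraction of the weight variable that falls in the intersection, while AVG computes a weighted average. The overlap fractions below are hypothetical; the component derives them from the geography intersections:

```python
# Toy illustration of weighted enrichment semantics.
# Each tuple is (source_value, weight_in_intersection, total_weight_of_feature);
# in the real component these come from the weights table geographies.

intersections = [
    (1000, 30, 100),  # 30% of this feature's weight falls inside the target polygon
    (2000, 50, 100),  # 50% of this one
]

def enrich_sum(intersections):
    """Extensive variable (e.g. population): apportion by intersected weight fraction."""
    return sum(value * (w_in / w_total) for value, w_in, w_total in intersections)

def enrich_avg(intersections):
    """Intensive variable (e.g. density): weighted average over intersected weights."""
    total_w = sum(w_in for _, w_in, _ in intersections)
    return sum(value * w_in for value, w_in, _ in intersections) / total_w

print(enrich_sum(intersections))  # 1000*0.3 + 2000*0.5 = 1300.0
print(enrich_avg(intersections))  # (1000*30 + 2000*50) / 80 = 1625.0
```

This also shows why an all-MIN/MAX selection is rejected: neither method uses the weights at all, so the result wouldn't actually be weighted.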

Enrich Quadbin Grid

Description

This component enriches a target table with data from a source. Enriching here means adding columns with aggregated data from the source rows that match the target geographies.

  • The target (the upper input connection of this component) must have a column containing Quadbin indices, which will be used to join with the source.

  • The source (the lower input connection) can be either a CARTO Data Observatory subscription or a table (or the result of another component) with a geography column.

The enrichment operation requires the CARTO Analytics Toolbox, and one of the following procedures will be called:

  • DATAOBS_ENRICH_GRID if the source is a Data Observatory subscription

  • ENRICH_GRID otherwise

Inputs

  • Target geo column: the column of the target that will be used to join with the source, selecting the rows to be aggregated for each target row.

  • Source geo column: (only needed for non-DO sources) the column of the source that will be joined with the target.

  • Variables: the data from the source that will be aggregated and added to the target.

    • For Data Observatory subscriptions, the variables can be selected from the DO variables of the subscription, identified by their variable slug;

    • for other sources, they are the columns of the source table.

    Each variable added must be assigned an aggregation method, and the same variable can be added with different aggregation methods. At the moment, only numeric variables are supported.

For spatially smoothed enrichments that take into account the surrounding cells, use the following input parameters:

  • Kring size: size of the k-ring where the decay function will be applied. This value can be 0, in which case no k-ring will be computed and the decay function won't be applied.

  • Decay function: decay function to aggregate and smooth the data. Supported values are uniform, inverse, inverse_square and exponential.

Outputs

  • Result table [Table]

Last updated 7 months ago
