data


This module contains functions and procedures that make use of data (either from the Data Observatory or provided by the user) in their computations.

For the DATAOBS_ methods, you can find the tables involved (and their qualified names) in your data warehouse by opening the subscription page in the Data Explorer, clicking the "Access in" option, and picking the data warehouse where that data has been made available.

DATAOBS_ENRICH_GRID

DATAOBS_ENRICH_GRID(grid_type, input_query, input_index_column, variables, filters, output, source)

Description

This procedure enriches a set of grid cells of one of the supported types (quadbin) with data from the Data Observatory. The user must be subscribed to all the Data Observatory datasets involved in the enrichment. The cells are identified by their indices.

As a result of this process, each input grid cell will be enriched with the data of the Data Observatory datasets that spatially intersect it. When the input cell intersects with more than one polygon, point, or line of the Data Observatory datasets, the data is aggregated using the aggregation methods specified.

Valid aggregation methods are:

  • SUM: It assumes the aggregated variable is an extensive property (e.g. population). Accordingly, the value corresponding to the Data Observatory feature intersected is weighted by the fraction of area or length intersected. If the Data Observatory features are points, then a simple sum is performed.

  • MIN: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length as is the case for SUM.

  • MAX: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length as is the case for SUM.

  • AVG: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length as is the case for SUM. However, a weighted average is computed, using the intersection areas or lengths as the weight. When the Data Observatory features are points, a simple average is computed.

  • COUNT: It computes the number of Data Observatory features that contain the enrichment variable and are intersected by the input geography.
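
For reference, the weighting described above can be sketched as follows; the notation is ours and for illustration only. Let $v_i$ be the value of the variable for the $i$-th intersected feature, $A_i$ its total area (or length), and $A_i^{\cap}$ the area (or length) of its intersection with the input cell. Then:

$$\mathrm{SUM} = \sum_i v_i \,\frac{A_i^{\cap}}{A_i} \qquad\qquad \mathrm{AVG} = \frac{\sum_i v_i \, A_i^{\cap}}{\sum_i A_i^{\cap}}$$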

Input parameters

  • grid_type: Type of grid: "quadbin".

  • input_query: VARCHAR query to be enriched; this query must produce valid grid indices for the selected grid type in a column of the proper type (BIGINT for quadbin). It can include additional columns with data associated with the grid cells that will be preserved. A qualified table name can be given as well, e.g. 'database.schema.table-name' or 'schema.table-name'. (A sketch of such a query is shown after this list.)

  • input_index_column: VARCHAR name of a column in the query that contains the grid indices.

  • variables: VARCHAR JSON array of pairs with variables of the Data Observatory that will be used to enrich the input grid cells. For each variable, its slug and the aggregation method to be used must be provided. Use 'default' to use the variable's default aggregation method. Valid aggregation methods are: SUM, AVG, MAX, MIN, and COUNT, e.g. [["var1","sum"],["var2","count"]].

  • filters: VARCHAR JSON array of filters to be applied to the Data Observatory datasets used in the enrichment. Each filter applies to the Data Observatory dataset or geography identified by the slug in its dataset field; its expression field is an SQL expression that will be inserted in a WHERE clause and can reference any column of the dataset or geography table. Note that column names (not slugs) must be used here.

  • output: VARCHAR containing the name of an output table to store the results, e.g. 'database.table-name'. The resulting table cannot exist before running the procedure.

  • source: VARCHAR name of the location where the Data Observatory subscriptions of the user are stored, in 'database.schema' or 'schema' format.
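
For illustration, an input_query producing quadbin indices from point data could look like the following sketch. The table my-schema.my-points and its columns are hypothetical, and the query assumes the QUADBIN_FROMGEOGPOINT function from this toolbox's quadbin module (resolution 15 chosen arbitrarily):

-- Hypothetical input_query: one quadbin cell index per point, plus a data column
SELECT carto.QUADBIN_FROMGEOGPOINT(geom, 15) AS index, value
FROM my-schema.my-points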

Output

The output table will contain all the input columns provided in the input_query and one extra column for each variable in variables, named after its corresponding enrichment column and including a suffix indicating the aggregation method used.

Examples

CALL carto.DATAOBS_ENRICH_GRID(
   'quadbin',
   'my-database.my-schema.my-table',
   'index',
   '[["population_93405ad7", "avg"]]',
   '[["ags_sociodemogr_8a89f775", "population > 0"]]',
   'my-database.my-schema.my-table',
   'my-database.my-dataobs-schema'
);
-- The table `my-database.my-schema.my-table` will be augmented
-- with the column population_93405ad7_avg.
-- The enrichment will also filter the rows of the Data Observatory
-- dataset with slug ags_sociodemogr_8a89f775 to those with a population
-- (the name of the actual column) greater than 0

DATAOBS_ENRICH_POINTS

DATAOBS_ENRICH_POINTS(input_query, input_geography_column, variables, filters, output, source)

Description

This procedure enriches a query containing geographic points with data from the Data Observatory. The user must be subscribed to all the Data Observatory datasets involved in the enrichment.

As a result of this process, each input point will be enriched with the data of the Data Observatory datasets that spatially intersect it. When the input point intersects with more than one polygon, point or line of the Data Observatory datasets, the data is aggregated using the aggregation methods specified.

Valid aggregation methods are: SUM, MIN, MAX, AVG, and COUNT.

Input parameters

  • input_query: VARCHAR query to be enriched. A qualified table name can be given as well, e.g. 'database.schema.table-name' or 'schema.table-name'.

  • input_geography_column: VARCHAR name of the GEOGRAPHY column in the query containing the points to be enriched.

  • variables: VARCHAR JSON array of pairs with variables of the Data Observatory that will be used to enrich the input points. For each variable, its slug and the aggregation method to be used must be provided. Use 'default' to use the variable's default aggregation method. Valid aggregation methods are: SUM, AVG, MAX, MIN, and COUNT, e.g. [["var1","sum"],["var2","count"]].

  • filters: VARCHAR JSON array of filters to be applied to the Data Observatory datasets used in the enrichment. Each filter applies to the Data Observatory dataset or geography identified by the slug in its dataset field; its expression field is an SQL expression that will be inserted in a WHERE clause and can reference any column of the dataset or geography table. Note that column names (not slugs) must be used here (see the sketch after this list).

  • output: VARCHAR containing the name of an output table to store the results e.g. 'database.table-name'. The resulting table cannot exist before running the procedure.

  • source: VARCHAR name of the location where the Data Observatory subscriptions of the user are stored, in 'database.schema' or 'schema' format.
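
For illustration, a filters value combining two filters might look like this; the slugs and column names are hypothetical:

-- Hypothetical filters argument: each pair is [dataset slug, WHERE-clause expression]
'[["dataset_slug_a", "total_pop > 1000"], ["geography_slug_b", "state_code = ''NY''"]]'

Note that the doubled single quotes escape the inner quotes when the value is passed as a VARCHAR literal.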

Output

The output table will contain all the input columns provided in the input_query and one extra column for each variable in variables, named after its corresponding slug and including a suffix indicating the aggregation method used.

Examples

CALL carto.DATAOBS_ENRICH_POINTS(
   'SELECT id, geom FROM my-schema.my-input',
   'geom',
   '[["population_93405ad7", "sum"]]',
   NULL,
   'my-database.my-schema.my-enriched-table',
   'my-database.my-dataobs-schema'
);
-- The table 'my-database.my-schema.my-enriched-table' will be created
-- with columns: id, geom, population_93405ad7_sum
CALL carto.DATAOBS_ENRICH_POINTS(
   'my-schema.my-input',
   'geom',
   '[["population_93405ad7", "avg"]]',
   NULL,
   'my-database.my-schema.my-enriched-table',
   'my-database.my-dataobs-schema'
);
-- The table 'my-database.my-schema.my-enriched-table' will be created
-- with the same columns as 'my-schema.my-input'
-- plus population_93405ad7_avg
CALL carto.DATAOBS_ENRICH_POINTS(
   'my-schema.my-input',
   'geom',
   '[["population_93405ad7", "avg"]]',
   '[["ags_sociodemogr_8a89f775", "population > 0"]]',
   'my-database.my-schema.my-enriched-table',
   'my-database.my-dataobs-schema'
);
-- The table 'my-database.my-schema.my-enriched-table' will be created
-- with the same columns as 'my-schema.my-input'
-- plus population_93405ad7_avg. The enrichment will also filter the
-- rows in the Data Observatory dataset with slug ags_sociodemogr_8a89f775
-- to those with a population (the name of the actual column) greater than 0

DATAOBS_ENRICH_POLYGONS

DATAOBS_ENRICH_POLYGONS(input_query, input_geography_column, variables, filters, output, source)

Description

This procedure enriches a query containing geographic polygons with data from the Data Observatory. The user must be subscribed to all the Data Observatory datasets involved in the enrichment.

As a result of this process, each input polygon will be enriched with the data of the Data Observatory datasets that spatially intersect it. When the input polygon intersects with more than one polygon, point, or line of the Data Observatory datasets, the data is aggregated using the aggregation methods specified.

Valid aggregation methods are:

  • SUM: It assumes the aggregated variable is an extensive property (e.g. population). Accordingly, the value corresponding to the enrichment feature intersected is weighted by the fraction of area or length intersected. If the enrichment features are points, then a simple sum is performed.

  • MIN: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length as is the case for SUM.

  • MAX: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length as is the case for SUM.

  • AVG: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length as is the case for SUM. However, a weighted average is computed, using the intersection areas or lengths as the weight. When the enrichment features are points, a simple average is computed.

  • COUNT: It computes the number of enrichment features that contain the enrichment variable and are intersected by the input geography.

Input parameters

  • input_query: VARCHAR query to be enriched. A qualified table name can be given as well, e.g. 'database.schema.table-name' or 'schema.table-name'.

  • input_geography_column: VARCHAR name of the GEOGRAPHY column in the query containing the polygons to be enriched.

  • variables: VARCHAR JSON array of pairs with variables of the Data Observatory that will be used to enrich the input polygons. For each variable, its slug and the aggregation method to be used must be provided. Use 'default' to use the variable's default aggregation method. Valid aggregation methods are: SUM, AVG, MAX, MIN, and COUNT, e.g. [["var1","sum"],["var2","count"]].

  • filters: VARCHAR JSON array of filters to be applied to the Data Observatory datasets used in the enrichment. Each filter applies to the Data Observatory dataset or geography identified by the slug in its dataset field; its expression field is an SQL expression that will be inserted in a WHERE clause and can reference any column of the dataset or geography table. Note that column names (not slugs) must be used here.

  • output: VARCHAR containing the name of an output table to store the results e.g. 'database.table-name'. The resulting table cannot exist before running the procedure.

  • source: VARCHAR name of the location where the Data Observatory subscriptions of the user are stored, in 'database.schema' or 'schema' format.

Output

The output table will contain all the input columns provided in the input_query and one extra column for each variable in variables, named after its corresponding slug and including a suffix indicating the aggregation method used.

Examples

CALL carto.DATAOBS_ENRICH_POLYGONS(
   'SELECT id, geom FROM my-schema.my-input',
   'geom',
   '[["population_93405ad7", "sum"]]',
   NULL,
   'my-database.my-schema.my-enriched-table',
   'my-database.my-dataobs-schema'
);
-- The table 'my-database.my-schema.my-enriched-table' will be created
-- with columns: id, geom, population_93405ad7_sum
CALL carto.DATAOBS_ENRICH_POLYGONS(
   'my-schema.my-input',
   'geom',
   '[["population_93405ad7", "avg"]]',
   NULL,
   'my-database.my-schema.my-enriched-table',
   'my-database.my-dataobs-schema'
);
-- The table 'my-database.my-schema.my-enriched-table' will be created
-- with the same columns as 'my-schema.my-input'
-- plus population_93405ad7_avg
CALL carto.DATAOBS_ENRICH_POLYGONS(
   'my-schema.my-input',
   'geom',
   '[["population_93405ad7", "avg"]]',
   '[["ags_sociodemogr_8a89f775", "population > 0"]]',
   'my-database.my-schema.my-enriched-table',
   'my-database.my-dataobs-schema'
);
-- The table 'my-database.my-schema.my-enriched-table' will be created
-- with the same columns as 'my-schema.my-input'
-- plus population_93405ad7_avg. The enrichment will also filter the
-- rows in the Data Observatory dataset with slug ags_sociodemogr_8a89f775
-- to those with a population (the name of the actual column) greater than 0

ENRICH_GRID

ENRICH_GRID(grid_type, input_query, input_index_column, data_query, data_geography_column, variables, output)

Description

This procedure enriches a set of grid cells of one of the supported types (quadbin) with data from another enrichment query. The cells are identified by their indices.

As a result of this process, each input grid cell will be enriched with the data of the enrichment query that spatially intersects it. When the input cell intersects with more than one feature of the enrichment query, the data is aggregated using the aggregation methods specified.

Valid aggregation methods are:

  • SUM: It assumes the aggregated variable is an extensive property (e.g. population). Accordingly, the value corresponding to the enrichment feature intersected is weighted by the fraction of area or length intersected. If the enrichment features are points, then a simple sum is performed.

  • MIN: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length as is the case for SUM.

  • MAX: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length as is the case for SUM.

  • AVG: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length as is the case for SUM. However, a weighted average is computed, using the intersection areas or lengths as the weight. When the enrichment features are points, a simple average is computed.

  • COUNT: It computes the number of enrichment features that contain the enrichment variable and are intersected by the input geography.

Input parameters

  • grid_type: Type of grid: "quadbin".

  • input_query: VARCHAR query to be enriched; this query must produce valid grid indices for the selected grid type in a column of the proper type (BIGINT for quadbin). It can include additional columns with data associated with the grid cells that will be preserved. A qualified table name can be given as well, e.g. 'database.schema.table-name' or 'schema.table-name'.

  • input_index_column: VARCHAR name of a column in the query that contains the grid indices.

  • data_query: VARCHAR query that contains both a geography column and the columns with the data that will be used to enrich the grid cells provided in the input query.

  • data_geography_column: VARCHAR name of the GEOGRAPHY column provided in the data_query.

  • variables: VARCHAR JSON array of pairs, each containing a column that will be used to enrich the input grid cells and its corresponding aggregation method, e.g. [["var1","sum"],["var2","count"]].

  • output: VARCHAR containing the name of an output table to store the results e.g. 'schema.table-name'. The resulting table cannot exist before running the procedure.

Output

The output table will contain all the input columns provided in the input_query and one extra column for each variable in variables, named after its corresponding enrichment column and including a suffix indicating the aggregation method used.

Examples

CALL carto.ENRICH_GRID(
   'quadbin',
   'SELECT 5256779493799886847 AS index
   UNION ALL SELECT 5256779493812469759
   UNION ALL SELECT 5256779493816664063
   UNION ALL SELECT 5256779493808275455
   UNION ALL SELECT 5256779493820858367
   UNION ALL SELECT 5256779493825052671
   UNION ALL SELECT 5256779493900550143
   UNION ALL SELECT 5256779493913133055
   UNION ALL SELECT 5256779493917327359',
   'index',
   'SELECT geom, var1, var2 FROM my-database.my-schema.my-data',
   'geom',
   '[["var1", "sum"],["var2", "sum"],["var2", "max"]]',
   'my-database.my-schema.my-enriched-table'
);
-- The table `my-database.my-schema.my-enriched-table` will be created
-- with columns: index, var1_sum, var2_sum, var2_max
CALL carto.ENRICH_GRID(
   'quadbin',
   'my-database.my-schema.my-table',
   'index',
   'SELECT geom, var1, var2 FROM my-database.my-schema.my-data',
   'geom',
   '[["var1", "sum"],["var2", "sum"],["var2", "max"]]',
   'my-database.my-schema.my-table'
);
-- The table `my-database.my-schema.my-table` will be augmented
-- with columns: var1_sum, var2_sum, var2_max

ENRICH_POINTS

ENRICH_POINTS(input_query, input_geography_column, data_query, data_geography_column, variables, output)

Description

This procedure enriches a query containing geographic points with data from another query, spatially matching both and aggregating the result.

As a result of this process, each input point will be enriched with the data from the enrichment query that spatially intersects it. When an input point intersects with more than one enrichment polygon, point, or line, the data is aggregated using the aggregation methods specified.

Valid aggregation methods are: SUM, MIN, MAX, AVG, and COUNT.

Input parameters

  • input_query: VARCHAR query to be enriched. A qualified table name can be given as well, e.g. 'database.schema.table-name' or 'schema.table-name'.

  • input_geography_column: VARCHAR name of the GEOGRAPHY column in the query containing the points to be enriched.

  • data_query: VARCHAR query that contains both a geography column and the columns with the data that will be used to enrich the points provided in the input query.

  • data_geography_column: VARCHAR name of the GEOGRAPHY column provided in the data_query.

  • variables: VARCHAR JSON array of pairs, each containing a column that will be used to enrich the input points and its corresponding aggregation method, e.g. [["var1","sum"],["var2","count"]].

  • output: VARCHAR containing the name of an output table to store the results e.g. 'schema.table-name'. The resulting table cannot exist before running the procedure.

Output

The output table will contain all the input columns provided in the input_query and one extra column for each variable in variables, named after its corresponding enrichment column and including a suffix indicating the aggregation method used.

Examples

CALL carto.ENRICH_POINTS(
   'SELECT id, geom FROM my-schema.my-input',
   'geom',
   'SELECT geom, var1, var2 FROM my-database.my-schema.my-data',
   'geom',
   '[["var1", "sum"],["var2", "count"]]',
   'my-database.my-schema.my-enriched-table'
);
-- The table 'my-database.my-schema.my-enriched-table' will be created
-- with columns: id, geom, var1_sum, var2_count
CALL carto.ENRICH_POINTS(
   'my-schema.my-input',
   'geom',
   'SELECT geom, var1, var2 FROM my-database.my-schema.my-data',
   'geom',
   '[["var1", "sum"],["var2", "count"]]',
   'my-database.my-schema.my-enriched-table'
);
-- The table 'my-database.my-schema.my-enriched-table' will be created
-- with columns: id, geom, var1_sum, var2_count

ENRICH_POLYGONS

ENRICH_POLYGONS(input_query, input_geography_column, data_query, data_geography_column, variables, output)

Description

This procedure enriches a query containing geographic polygons with data from another query, spatially matching both and aggregating the result.

As a result of this process, each input polygon will be enriched with the data from the enrichment query that spatially intersects it. When an input polygon intersects with more than one enrichment polygon, point, or line, the data is aggregated using the aggregation methods specified.

Valid aggregation methods are:

  • SUM: It assumes the aggregated variable is an extensive property (e.g. population). Accordingly, the value corresponding to the enrichment feature intersected is weighted by the fraction of area or length intersected. If the enrichment features are points, then a simple sum is performed.

  • MIN: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length as is the case for SUM.

  • MAX: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length as is the case for SUM.

  • AVG: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length as is the case for SUM. However, a weighted average is computed, using the intersection areas or lengths as the weight. When the enrichment features are points, a simple average is computed.

  • COUNT: It computes the number of enrichment features that contain the enrichment variable and are intersected by the input geography.

Input parameters

  • input_query: VARCHAR query to be enriched. A qualified table name can be given as well, e.g. 'database.schema.table-name' or 'schema.table-name'.

  • input_geography_column: VARCHAR name of the GEOGRAPHY column in the query containing the polygons to be enriched. The input cannot be a GeometryCollection.

  • data_query: VARCHAR query that contains both a geography column and the columns with the data that will be used to enrich the polygons provided in the input query.

  • data_geography_column: VARCHAR name of the GEOGRAPHY column provided in the data_query.

  • variables: VARCHAR JSON array of pairs, each containing a column that will be used to enrich the input polygons and its corresponding aggregation method, e.g. [["var1","sum"],["var2","count"]].

  • output: VARCHAR containing the name of an output table to store the results, e.g. 'schema.table-name'. The resulting table cannot exist before running the procedure.

The input and data geography columns need to be in the same spatial reference system. If they are not, you will need to convert them.
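
For example, if the enrichment geometries were stored in a different spatial reference system, the data_query can reproject them on the fly. A minimal sketch, assuming a hypothetical table my-schema.my-data-3857 stored in EPSG:3857 and using Redshift's ST_Transform:

-- Reproject enrichment geometries to WGS84 (EPSG:4326) before the spatial match
SELECT ST_Transform(geom, 4326) AS geom, var1, var2
FROM my-schema.my-data-3857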

Output

The output table will contain all the input columns provided in the input_query and one extra column for each variable in variables, named after its corresponding enrichment column and including a suffix indicating the aggregation method used.

Examples

CALL carto.ENRICH_POLYGONS(
   'SELECT id, geom FROM my-schema.my-input',
   'geom',
   'SELECT geom, var1, var2 FROM my-database.my-schema.my-data',
   'geom',
   '[["var1", "sum"],["var2", "count"]]',
   'my-database.my-schema.my-enriched-table'
);
-- The table 'my-database.my-schema.my-enriched-table' will be created
-- with columns: id, geom, var1_sum, var2_count
CALL carto.ENRICH_POLYGONS(
   'my-schema.my-input',
   'geom',
   'SELECT geom, var1, var2 FROM my-database.my-schema.my-data',
   'geom',
   '[["var1", "sum"],["var2", "count"]]',
   'my-database.my-schema.my-enriched-table'
);
-- The table 'my-database.my-schema.my-enriched-table' will be created
-- with columns: id, geom, var1_sum, var2_count
