
tiler

We currently provide procedures to create the following kinds of tilesets:

  • Spatial index tilesets (aggregating spatial indexes into tiles at specific resolutions)

  • Geometry-based tiles of two types:

    • simple tilesets to visualize features individually

    • aggregation tilesets to generate aggregated point visualizations

CREATE_VECTOR_TILESET

CREATE_VECTOR_TILESET(input, output, options)

Description

Generates a simple tileset.

  • input: STRING that can either contain a table name (e.g. database.schema.tablename) or a full query (e.g. (SELECT * FROM database.schema.tablename)).

  • output: STRING of the format database.schema.tablename where the resulting tileset will be stored. The database and schema must exist and the caller needs to have permissions to create a new table in it.

  • options: STRING containing a valid JSON with the different options. Valid options are described in the table below.

warning

If a query is passed as input, it might be evaluated multiple times to generate the tileset, so non-deterministic functions such as ROW_NUMBER should be avoided. If such a function is needed, save the query results into a table first and pass that table as input, to avoid inconsistent results.
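As a sketch of that recommendation, with hypothetical table and column names, the query can be materialized once and the resulting table passed as input:

```sql
-- Materialize the non-deterministic query a single time...
CREATE TABLE database.schema.stable_input AS
SELECT *, ROW_NUMBER() OVER (ORDER BY id) AS rn
FROM database.schema.source_table;

-- ...then pass 'database.schema.stable_input' as the input argument
-- of the tileset procedure instead of the query itself.
```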

Option
Description

if_exists

Default: "fail". A STRING that indicates if the process will fail if the table already exists, if set to "fail". Or any existing table will be replaced, if set to "replace".

geom_column

Default: "geom". A STRING that indicates the name of the geography column that will be used. The geography column must be a WKB with type BINARY. If your input table contains geographies in WKT format, they can be converted to WKB by using ST_ASWKB(ST_GEOMFROMWKT(<geom_column>)) AS <geom_column> in your input query.

zoom_min

Default: 0. An INTEGER that defines the minimum zoom level at which tiles will be generated. Any zoom level under this level won't be generated.

zoom_max

Default: 12. An INTEGER that defines the maximum zoom level at which tiles will be generated. Any zoom level over this level won't be generated.

max_tile_vertices

Default: 200000. An INTEGER that sets the maximum number of vertices a tile can contain. This limit only applies when the input geometries are lines or polygons. When this maximum is reached, the procedure will drop features according to the chosen max_tile_size_strategy. You can configure in which order the features are kept by setting the tile_feature_order property.

max_tile_features

Default: 10000. An INTEGER that sets the maximum number of features a tile can contain. This limit only applies when the input geometries are points. When this limit is reached, the procedure will stop adding features into the tile. You can configure in which order the features are kept by setting the tile_feature_order property.

tile_feature_order

Default: RANDOM() for points, ST_AREA() DESC for polygons, ST_LENGTH() DESC for lines. A STRING defining the order in which features are added to a tile, using SQL ORDER BY syntax, such as "aggregated_total DESC"; the "ORDER BY" keyword itself must not be included. You can use any source column, even if it is not included in the tileset as a property.

include_geoids

Default: false. When enabled, generates an additional geoids column that contains the geoid value from each row of the input data that intersects the tile. This option is required to use the resulting tileset as a boundary.

metadata

Default: {}. A JSON object that specifies the metadata associated with the tileset. Use it to set the name, description and legend to be included in the TileJSON. Any other fields will be included in the extra_metadata object.

properties

Default: "". A STRING that defines the properties that will be associated with each feature. Each property is defined using SQL syntax and can make use of the columns present in the input table. Note that every property other than Number will be cast to String. Different properties must be separated by ;.

Example

import com.carto.analytics.toolbox.ATExecute

ATExecute.sql(
 """
  |CALL_CARTO carto_un.carto.CREATE_VECTOR_TILESET(
  |
  | '(SELECT geom, population, category FROM database.schema.population_table)',
  | 'database.schema.population_tileset',
  | '{
  | "if_exists": "replace",
  | "geom_column": "geom",
  | "zoom_min": 0,
  | "zoom_max": 6,
  | "properties": "population; category",
  | "metadata": {
  |   "name": "Population",
  |   "description": "Population in the cities"
  |   }
  | }'
  | );
  | """.stripMargin,
  spark
)

CREATE_POINT_AGG_TILESET

CREATE_POINT_AGG_TILESET(input, output, options)

Description

Generates a point aggregation tileset.

  • input: STRING that can either contain a table name (e.g. database.schema.tablename) or a full query (e.g. (SELECT * FROM database.schema.tablename)).

  • output: STRING of the format database.schema.tablename where the resulting tileset will be stored. The database and schema must exist and the caller needs to have permissions to create a new table in it.

  • options: STRING containing a valid JSON with the different options. Valid options are described in the table below.

warning

If a query is passed as input, it might be evaluated multiple times to generate the tileset, so non-deterministic functions such as ROW_NUMBER should be avoided. If such a function is needed, save the query results into a table first and pass that table as input, to avoid inconsistent results.

Option
Description

if_exists

Default: "fail". A STRING that indicates if the process will fail if the table already exists, if set to "fail". Or any existing table will be replaced, if set to "replace".

geom_column

Default: "geom". A STRING that indicates the name of the geography column that will be used. The geography column must be a WKB with type BINARY. If your input table contains geographies in WKT format, they can be converted to WKB by using ST_ASWKB(ST_GEOMFROMWKT(<geom_column>)) AS <geom_column> in your input query.

zoom_min

Default: 0. An INTEGER that defines the minimum zoom level at which tiles will be generated. Any zoom level under this level won't be generated.

zoom_max

Default: 12; maximum: 20. An INTEGER that defines the maximum zoom level at which tiles will be generated. Any zoom level over this level won't be generated.

aggregation_resolution

Default: 6. An INTEGER that specifies the resolution of the spatial aggregation. Aggregation for zoom z is based on quadgrid cells at z + resolution level. For example, with resolution 6, the z0 tile will be divided into cells that match the z6 tiles, or the cells contained in the z10 tile will be the boundaries of the z16 tiles within them. In other words, each tile is subdivided into 4^resolution cells, which is the maximum number of resulting features (aggregated) that the tiles will contain. Note that adding more granularity necessarily means heavier tiles which take longer to be transmitted and processed in the final client, and you are more likely to hit the internal memory limits.

aggregation_placement

Default: "cell-centroid". A STRING that defines what type of geometry will be used to represent the cells generated in the aggregation, which will be the features of the resulting tileset. There are currently four options:

  • "cell-centroid": Each feature will be defined as the centroid of the cell, that is, all points that are aggregated together into the cell will be represented in the tile by a single point positioned at the centroid of the cell.

  • "features-any": The aggregation cell will be represented by any random point from the source data contained within it. That is, if 10 points fall inside a cell, the procedure will randomly choose the location of one of them to represent the aggregation cell.

  • "features-centroid": The feature will be defined as the centroid (point) of the collection of points within the cell.

metadata

Default: {}. A JSON object that specifies the metadata associated with the tileset. Use it to set the name, description and legend to be included in the TileJSON. Any other fields will be included in the extra_metadata object.

properties

Default: "". A STRING that defines the properties that will be associated with each cell feature. Each property is defined using SQL syntax and must include a formula to be applied to the values of the points that fall inside the cell. This formula can be any SQL formula that uses an aggregate function supported by Databricks. Note that every property other than Number will be cast to String. Different properties must be separated by ;.

FEATURES PER TILE LIMITS

The value of aggregation_resolution sets an upper bound to how many features can be present in a tile: for a value of n, a maximum of 4^n (4 raised to n) features can be present in a tile. For example, for an aggregation resolution of 8, the maximum number of features (points) will be 65536 per tile. This can produce tiles that are too large when the aggregation resolution is high or many properties are included. In that case, to improve the performance of the map visualizations, the max_tile_features option should be used to limit the size of the tiles to about 1MB.
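The arithmetic in that note can be sketched in a few lines (a hypothetical helper, not part of the toolbox):

```python
# Each tile is subdivided into 4^n cells when aggregation_resolution is n,
# so 4^n is the maximum number of aggregated features a tile can contain.
def max_features_per_tile(aggregation_resolution: int) -> int:
    return 4 ** aggregation_resolution

print(max_features_per_tile(6))  # 4096 (the default resolution)
print(max_features_per_tile(8))  # 65536, as in the note above
```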

Result

The generated tileset consists of a table with the following columns, where each row represents a tile:

  • Z: zoom level of the tile.

  • X: X-index of the tile (0 to 2^Z-1).

  • Y: Y-index of the tile (0 to 2^Z-1).

  • DATA: contents of the tile, encoded as a gzip-compressed MVT binary. It will contain the resulting points (the locations of the aggregated features) and their attributes (as defined by properties).

Additionally, there is a row identified by Z=-1 which contains metadata about the tileset in the DATA column in JSON format. It contains the following properties:

  • bounds: geographical extents of the source as a string in Xmin, Ymin, Xmax, Ymax format.

  • center: center of the geographical extents as X, Y, Z, where the Z represents the zoom level where a single tile spans the whole extents size.

  • zmin: minimum zoom level in the tileset.

  • zmax: maximum zoom level in the tileset.

  • tilestats: statistics about the features' properties. In addition to each property's name (attribute) and type, it contains min, max, average, sum and quantiles for numeric attributes, and categories for text attributes.
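As an illustration, the metadata row can be parsed like any other JSON document; the values below are hypothetical, not taken from a real tileset:

```python
import json

# Hypothetical DATA payload of the Z=-1 metadata row (illustrative values).
metadata_json = """{
  "bounds": "-3.9, 40.2, -3.5, 40.6",
  "center": "-3.7, 40.4, 10",
  "zmin": 0,
  "zmax": 12
}"""

meta = json.loads(metadata_json)
xmin, ymin, xmax, ymax = (float(v) for v in meta["bounds"].split(","))
print(xmin, ymax)                  # western and northern extents
print(meta["zmin"], meta["zmax"])  # zoom range covered by the tileset
```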

Example

import com.carto.analytics.toolbox.ATExecute

ATExecute.sql(
 """
  |CALL_CARTO carto_un.carto.CREATE_POINT_AGG_TILESET(
  |
  | '(SELECT * FROM database.schema.cities_table)',
  | 'database.schema.cities_tileset',
  | '{
  | "if_exists": "replace",
  | "geom_column": "geom",
  | "zoom_min": 0,
  | "zoom_max": 12,
  | "aggregation_resolution": 6,
  | "aggregation_placement": "cell-centroid",
  | "properties": "COUNT(*) AS num_cities; SUM(POPULATION) AS population_sum; CASE WHEN COUNT(*) <= 1 THEN ANY_VALUE(city_name) ELSE NULL END AS city_name; ANY_VALUE(date) AS date",
  | "metadata": {
  |   "name": "Population",
  |   "description": "Population in the cities"
  |   }
  | }'
  | );
  | """.stripMargin,
  spark
)

In the example above, each feature will get a property "num_cities" with the number of points that fall in its cell and "population_sum" with the sum of the population of those cities. In addition, when only one point belongs to the cell (and only in that case), we will also get the column value from the source data in "city_name".

CREATE_H3_AGG_TILESET

CREATE_H3_AGG_TILESET(input, output, options)

Description

Creates a tileset that uses an H3 spatial index, aggregating data from an input table that uses that same spatial index.

Aggregated data is computed for all levels between resolution_min and resolution_max. For each resolution level, all tiles for the area covered by the source table are added, with data aggregated at level resolution + aggregation_resolution.
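To make the relationship concrete, here is a small sketch (plain Python, independent of the procedure) listing, for each tile resolution, the level at which data would be aggregated, assuming resolution_min = 0, resolution_max = 6 and aggregation_resolution = 4:

```python
resolution_min, resolution_max, aggregation_resolution = 0, 6, 4

# (tile resolution, level at which data inside that tile is aggregated)
levels = [(r, r + aggregation_resolution)
          for r in range(resolution_min, resolution_max + 1)]

print(levels[0])   # resolution-0 tiles carry cells aggregated at level 4
print(levels[-1])  # resolution-6 tiles carry cells aggregated at level 10
```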

  • input: STRING that can either contain a table name (e.g. database.schema.tablename) or a full query (e.g. (SELECT * FROM database.schema.tablename)).

  • output: STRING of the format database.schema.tablename where the resulting tileset will be stored. The database and schema must exist and the caller needs to have permissions to create a new table in it.

  • options: STRING containing a valid JSON with the different options. Valid options are described in the table below.

warning

If a query is passed as input, it might be evaluated multiple times to generate the tileset, so non-deterministic functions such as ROW_NUMBER should be avoided. If such a function is needed, save the query results into a table first and pass that table as input, to avoid inconsistent results.

Option
Description

if_exists

Default: "fail". A STRING that indicates if the process will fail if the table already exists, if set to "fail". Or any existing table will be replaced, if set to "replace".

resolution_min

Default: 0. An INTEGER that defines the minimum resolution level for tiles. Any resolution level under this level won't be generated.

resolution_max

Default: 6. An INTEGER that defines the maximum resolution level for tiles. Any resolution level over this level won't be generated.

h3_column

Default: "h3". A STRING that indicates the name of the H3 spatial index column that will be used.

h3_resolution

An INTEGER defining the resolution of the H3 indexes in the input table.

aggregation_resolution

Default: 4. An INTEGER defining the resolution to use when aggregating data at each resolution level: for a given resolution level, data is aggregated at resolution + aggregation_resolution.

metadata

Default: {}. A JSON object that specifies the metadata associated with the tileset. Use it to set the name, description and legend to be included in the TileJSON. Any other fields will be included in the extra_metadata object.

properties

Default: "". A STRING that defines the properties that will be associated with each cell feature. Each property is defined using SQL syntax and must include a formula to be applied to the values of the points that fall inside the cell. This formula can be any SQL formula that uses an aggregate function supported by Databricks. Note that every property other than Number will be cast to String. Different properties must be separated by ;.

Example

import com.carto.analytics.toolbox.ATExecute

ATExecute.sql(
 """
  |CALL_CARTO carto_un.carto.CREATE_H3_AGG_TILESET(
  |
  | '(SELECT * FROM database.schema.input_table_h3_level10)',
  | 'your_database.your_schema.output_tileset_h3_level10',
  | '{
  | "if_exists": "replace",
  | "h3_column": "h3",
  | "h3_resolution": 10,
  | "resolution_min": 0,
  | "resolution_max": 6,
  | "aggregation_resolution": 4,
  | "properties": "SUM(population) AS population; ANY_VALUE(date) AS date",
  | "metadata": {
  |   "name": "Population",
  |   "description": "Population in the cities"
  |   }
  | }'
  | );
  | """.stripMargin,
  spark
)

CREATE_QUADBIN_AGG_TILESET

CREATE_QUADBIN_AGG_TILESET(input, output, options)

Description

Creates a tileset that uses a quadbin spatial index, aggregating data from an input table that uses that same spatial index.

Aggregated data is computed for all levels between resolution_min and resolution_max. For each resolution level, all tiles for the area covered by the source table are added, with data aggregated at level resolution + aggregation_resolution.

  • input: STRING that can either contain a table name (e.g. database.schema.tablename) or a full query (e.g. (SELECT * FROM database.schema.tablename)).

  • output: STRING of the format database.schema.tablename where the resulting tileset will be stored. The database and schema must exist and the caller needs to have permissions to create a new table in it.

  • options: STRING containing a valid JSON with the different options. Valid options are described in the table below.

warning

If a query is passed as input, it might be evaluated multiple times to generate the tileset, so non-deterministic functions such as ROW_NUMBER should be avoided. If such a function is needed, save the query results into a table first and pass that table as input, to avoid inconsistent results.

Option
Description

if_exists

Default: "fail". A STRING that indicates if the process will fail if the table already exists, if set to "fail". Or any existing table will be replaced, if set to "replace".

resolution_min

Default: 0. An INTEGER that defines the minimum resolution level for tiles. Any resolution level under this level won't be generated.

resolution_max

Default: 12. An INTEGER that defines the maximum resolution level for tiles. Any resolution level over this level won't be generated.

quadbin_column

Default: "quadbin". A STRING that indicates the name of the quadbin spatial index column that will be used.

quadbin_resolution

An INTEGER defining the resolution of the quadbin indexes in the input table.

aggregation_resolution

Default: 6. An INTEGER defining the resolution to use when aggregating data at each resolution level: for a given resolution level, data is aggregated at resolution + aggregation_resolution.

metadata

Default: {}. A JSON object that specifies the metadata associated with the tileset. Use it to set the name, description and legend to be included in the TileJSON. Any other fields will be included in the extra_metadata object.

properties

Default: "". A STRING that defines the properties that will be associated with each cell feature. Each property is defined using SQL syntax and must include a formula to be applied to the values of the points that fall inside the cell. This formula can be any SQL formula that uses an aggregate function supported by Databricks. Note that every property other than Number will be cast to String. Different properties must be separated by ;.

Example

import com.carto.analytics.toolbox.ATExecute

ATExecute.sql(
 """
  |CALL_CARTO carto_un.carto.CREATE_QUADBIN_AGG_TILESET(
  |
  | '(SELECT * FROM database.schema.input_table_quadbin_level14)',
  | 'your_database.your_schema.output_tileset_quadbin_level14',
  | '{
  | "if_exists": "replace",
  | "quadbin_column": "quadbin",
  | "quadbin_resolution": 14,
  | "resolution_min": 0,
  | "resolution_max": 8,
  | "aggregation_resolution": 4,
  | "properties": "SUM(population) AS population; ANY_VALUE(date) AS date",
  | "metadata": {
  |   "name": "Population",
  |   "description": "Population in the cities"
  |   }
  | }'
  | );
  | """.stripMargin,
  spark
)

Last updated 6 months ago
