data
This module contains functions and procedures that make use of data (either Data Observatory or user-provided data) for their computations.
For the DATAOBS_ methods, users can find the details on where to locate the tables (and their qualified names) within the corresponding data warehouse by going to the "Access in" option on the subscription page in the Data Explorer and picking the data warehouse where that data has been made available.
Description
This procedure enriches a query containing grid cell indexes of one of the supported types (H3, Quadbin) with data from the Data Observatory. The user must be subscribed to all the Data Observatory datasets involved in the enrichment. The cells must all have the same resolution.
The Data Observatory Catalog tables (SPATIAL_CATALOG_DATASETS and SPATIAL_CATALOG_VARIABLES) must also be present in the same schema as the procedures.
If the enrichment data is indexed by an H3 or Quadbin grid compatible with the input (same grid type and equal or greater resolution), the enrichment will be performed much more efficiently by matching the index values rather than intersecting the associated GEOGRAPHY elements.
As a result of this process, each input grid cell will be enriched with the data of the Data Observatory datasets that spatially intersect it. When the input cell intersects with more than one polygon, point, or line of the Data Observatory datasets, the data is aggregated using the aggregation methods specified.
Valid aggregation methods are:
SUM: It assumes the aggregated variable is an extensive property (e.g. population). Accordingly, the value corresponding to the intersected Data Observatory feature is weighted by the fraction of area or length intersected. If the Data Observatory features are points, or grid index matching is used, a simple sum is performed.
MIN: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length, as is the case for SUM.
MAX: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length, as is the case for SUM.
AVG: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length, as is the case for SUM. However, a weighted average is computed, using the intersection areas or lengths as the weights. When the Data Observatory features are points, or grid index matching is used, a simple average is computed.
COUNT: It computes the number of Data Observatory features that contain the enrichment variable and are intersected by the input geography.
For other types of aggregation, the DATAOBS_ENRICH_GRID_RAW
procedure can be used to obtain non-aggregated data that can be later applied to any desired custom aggregation.
Input parameters
grid_type
: VARCHAR
Type of grid: "h3" or "quadbin".
input_query
: VARCHAR
query to be enriched; this query must produce valid grid indexes for the selected grid type in a column of the proper type (VARCHAR for H3, and INT for Quadbin). It can include additional columns with data associated with the grid cells that will be preserved. A qualified table name can be given as well.
input_index_column
: VARCHAR
name of a column in the query that contains the grid indexes.
variables
: ARRAY
of OBJECT
s with fields variable
and aggregation
. Variables of the Data Observatory that will be used to enrich the input polygons. For each variable, its slug and the aggregation method must be provided. Use 'default'
to use the variable's default aggregation method. Valid aggregation methods are: SUM
, AVG
, MAX
, MIN
, and COUNT
. The catalog procedure DATAOBS_SUBSCRIPTION_VARIABLES
can be used to find available variables and their slugs and default aggregation.
filters
ARRAY
of OBJECT
s with fields dataset
and expression
. Filters to be applied to the Data Observatory datasets used in the enrichment can be passed here. Each filter is applied to the Data Observatory dataset or geography, identified by its corresponding slug, passed in the dataset
field of the structure. The second field of the structure, expression
, is an SQL expression that will be inserted in a WHERE
clause and that can reference any column of the dataset or geography table. Please note that column names (not slugs) should be applied here. The catalog procedures DATAOBS_SUBSCRIPTIONS
and DATAOBS_SUBSCRIPTION_VARIABLES
can be used to find both the column names and the corresponding table slugs.
output
: ARRAY
of VARCHAR
containing the name of an output table to store the results and optionally an SQL clause that can be used to partition it, e.g. 'CLUSTER BY number'
. When the output table is the same as the input, the input table will be enriched in place.
source
: VARCHAR
name of the location where the Data Observatory subscriptions of the user are stored, in DATABASE.SCHEMA
format. If only the SCHEMA
is included, it uses the database where the Analytics Toolbox is installed by default.
Output
The output table will contain all the input columns provided in the input_query
and one extra column for each variable in variables
, named after its corresponding slug and including a suffix indicating the aggregation method used.
If a new output table is created, it will be clustered by the spatial index to optimize its performance when filtering data by it or using it to join to other grid tables. This is important to visualize the results in a map efficiently. If an SQL clause is included in the output
parameter this optimization will not be performed.
Examples
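The following is a minimal, hypothetical call. It assumes the Analytics Toolbox is installed in CARTO.CARTO, the Data Observatory subscriptions are stored in MYDB.MYSCHEMA, and MYDB.MYSCHEMA.MY_H3_GRID has an H3 index column named INDEX; the variable slug population_14d9cf55 is a placeholder for a real slug returned by DATAOBS_SUBSCRIPTION_VARIABLES, and NULL is passed to skip the optional filters.

CALL CARTO.CARTO.DATAOBS_ENRICH_GRID(
    'h3',
    'SELECT INDEX FROM MYDB.MYSCHEMA.MY_H3_GRID',
    'INDEX',
    ARRAY_CONSTRUCT(OBJECT_CONSTRUCT('variable', 'population_14d9cf55', 'aggregation', 'SUM')),
    NULL,
    ARRAY_CONSTRUCT('MYDB.MYSCHEMA.MY_H3_GRID_ENRICHED'),
    'MYDB.MYSCHEMA'
);

The enriched result is written to MYDB.MYSCHEMA.MY_H3_GRID_ENRICHED with one extra column named after the variable slug and the SUM aggregation.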
Description
This procedure enriches a query containing grid cell indexes of one of the supported types (H3, Quadbin) with data from the Data Observatory. The user must be subscribed to all the Data Observatory datasets involved in the enrichment. The cells must all have the same resolution.
The Data Observatory Catalog tables (SPATIAL_CATALOG_DATASETS and SPATIAL_CATALOG_VARIABLES) must also be present in the same schema as the procedures.
As a result of this process, each input grid cell will be enriched with the data from the Data Observatory datasets that spatially intersect it. The variable values corresponding to all intersecting Data Observatory features for a given input cell will be returned in an ARRAY column. When variables come from multiple Data Observatory geographies, one ARRAY column will be included for each source geography table. Data Observatory geography slugs are used for the names of these columns. Each array contains OBJECTs with one field for each variable (named after the variable slug) and additional measure fields __carto_intersection, __carto_total, and __carto_dimension. See the output information for more details.
If the enrichment data is indexed by an H3 or Quadbin grid compatible with the input (same grid type and equal or greater resolution), the enrichment will be performed much more efficiently by matching the index values rather than intersecting the associated GEOGRAPHY elements. In this case the additional measure fields are omitted.
Input parameters
grid_type
: VARCHAR
Type of grid: "h3" or "quadbin"
input_query
: VARCHAR
query to be enriched; this query must produce valid grid indexes for the selected grid type in a column of the proper type (VARCHAR for H3, and INT for Quadbin). It can include additional columns with data associated with the grid cells that will be preserved. A qualified table name can be given as well.
input_index_column
: VARCHAR
name of a column in the query that contains the grid indexes.
variables
: ARRAY
of VARCHAR with slugs (unique identifiers) of the Data Observatory variables to enrich the input polygons. The catalog procedure DATAOBS_SUBSCRIPTION_VARIABLES
can be used to find available variables and their slugs and default aggregation.
filters
ARRAY
of OBJECT
s with fields dataset
and expression
. Filters to be applied to the Data Observatory datasets used in the enrichment can be passed here. Each filter is applied to the Data Observatory dataset or geography, identified by its corresponding slug, passed in the dataset
field of the structure. The second field of the structure, expression
, is an SQL expression that will be inserted in a WHERE
clause and that can reference any column of the dataset or geography table. Please note that column names (not slugs) should be applied here. The catalog procedures DATAOBS_SUBSCRIPTIONS
and DATAOBS_SUBSCRIPTION_VARIABLES
can be used to find both the column names and the corresponding table slugs.
output
: ARRAY
of VARCHAR
containing the name of an output table to store the results and optionally an SQL clause that can be used to partition it, e.g. 'CLUSTER BY number'
. When the output table is the same as the input, the input table will be enriched in place.
source
: VARCHAR
name of the location where the Data Observatory subscriptions of the user are stored, in DATABASE.SCHEMA
format. If only the SCHEMA
is included, it uses the database where the Analytics Toolbox is installed by default.
Output
The output table will contain all the input columns provided in the input_query
and one extra ARRAY column for each Data Observatory geography containing enrichment variables, named after their corresponding geography slug.
The array contains OBJECTs with one field for each variable, using the variable slug as the field name. Additional fields will be included with information about the intersection of the geographies unless the grid matching described above is performed:
__carto_dimension
dimension of the Data Observatory geography: 2 for areas (polygons), 1 for lines, and 0 for points.
__carto_intersection
area in square meters (for dimension = 2) or length in meters (for dimension = 1) of the intersection.
__carto_total
area in square meters (for dimension = 2) or length in meters (for dimension = 1) of the Data Observatory feature.
Moreover, another column named __CARTO_INPUT_AREA
will be added containing the area of the input polygon in square meters.
If a new output table is created, it will be clustered by the spatial index to optimize its performance when filtering data by it or using it to join to other grid tables. This is important to visualize the results in a map efficiently. If an SQL clause is included in the output
parameter this optimization will not be performed.
Examples
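A hypothetical call, under the same assumptions as the earlier example (CARTO.CARTO installation, placeholder table names and variable slugs); here variables is simply an array of slugs and no aggregation is performed:

CALL CARTO.CARTO.DATAOBS_ENRICH_GRID_RAW(
    'quadbin',
    'SELECT INDEX FROM MYDB.MYSCHEMA.MY_QUADBIN_GRID',
    'INDEX',
    ARRAY_CONSTRUCT('population_14d9cf55'),
    NULL,
    ARRAY_CONSTRUCT('MYDB.MYSCHEMA.MY_QUADBIN_GRID_ENRICHED'),
    'MYDB.MYSCHEMA'
);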
Description
This procedure enriches a query containing grid cell indexes of one of the supported types (H3, Quadbin) with data from the Data Observatory, using a specified feature as the weight for the enrichment. The weight variable apportions each intersection appropriately with respect to the total original feature: the share of a variable attributed to each intersected segment is given by the value of the weight feature in the intersection over its value in the total original segment. The user must be subscribed to all the Data Observatory datasets involved in the enrichment. The cells must all have the same resolution. The feature used for the weights can either come from the Data Observatory or be provided by the user.
If the enrichment data is indexed by an H3 or Quadbin grid compatible with the input (same grid type and equal or greater resolution), the enrichment will be performed much more efficiently by matching the index values rather than intersecting the associated GEOGRAPHY elements.
As a result of this process, each input grid cell will be enriched with the data of the Data Observatory datasets that spatially intersect it, weighted accordingly by the specified feature. When the input cell intersects with more than one polygon, point, or line of the Data Observatory datasets, the data is aggregated using the specified aggregation methods, with weights derived from the specified feature.
Valid aggregation methods are:
SUM: It assumes the aggregated variable is an extensive property (e.g. population). Accordingly, the value corresponding to the intersected Data Observatory feature is weighted by the fraction of area or length intersected. If the Data Observatory features are points, or grid index matching is used, a simple sum is performed.
MIN: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length, as is the case for SUM.
MAX: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length, as is the case for SUM.
AVG: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length, as is the case for SUM. However, a weighted average is computed, using the intersection areas or lengths as the weights. When the Data Observatory features are points, or grid index matching is used, a simple average is computed.
COUNT: It computes the number of Data Observatory features that contain the enrichment variable and are intersected by the input geography.
For other types of aggregation, the DATAOBS_ENRICH_GRID_RAW
procedure can be used to obtain non-aggregated data that can be later applied to any desired custom aggregation.
If the enrichment of an input table needs to be repeated, please note that dropping the added columns will cause problems in subsequent enrichments, as BigQuery retains those columns for 7 days for time-travel purposes. We recommend storing the original table columns in a temporary table, dropping the input table, and then recreating the input table from the temporary table.
Input parameters
grid_type
: STRING
Type of grid: "h3" or "quadbin".
input_query
: STRING
query to be enriched (Standard SQL); this query must produce valid grid indexes for the selected grid type in a column of the proper type (STRING for H3, and INT64 for Quadbin). It can include additional columns with data associated with the grid cells that will be preserved. A qualified table name can be given as well, e.g. 'project-id.dataset-id.table-name'
.
input_index_column
: STRING
name of a column in the query that contains the grid indexes.
variables
: ARRAY
of OBJECT
s with fields variable
and aggregation
. Variables of the Data Observatory that will be used to enrich the input polygons. For each variable, its slug and the aggregation method must be provided. Use 'default'
to use the variable's default aggregation method. Valid aggregation methods are: SUM
, AVG
, MAX
, MIN
, and COUNT
. The catalog procedure DATAOBS_SUBSCRIPTION_VARIABLES
can be used to find available variables and their slugs and default aggregation.
filters
ARRAY
of OBJECT
s with fields dataset
and expression
. Filters to be applied to the Data Observatory datasets used in the enrichment can be passed here. Each filter is applied to the Data Observatory dataset or geography, identified by its corresponding slug, passed in the dataset
field of the structure. The second field of the structure, expression
, is an SQL expression that will be inserted in a WHERE
clause and that can reference any column of the dataset or geography table. Please note that column names (not slugs) should be applied here. The catalog procedures DATAOBS_SUBSCRIPTIONS
and DATAOBS_SUBSCRIPTION_VARIABLES
can be used to find both the column names and the corresponding table slugs.
weight_variable
: OBJECT
with fields variable
and aggregation
. Variable that will be used to weight the intersections of the input polygons with the Data Observatory datasets polygons, lines, points. Its slug and the aggregation method must be provided. Valid aggregation methods are: SUM
, AVG
, MAX
, MIN
, COUNT
. This variable can either originate from the Data Observatory or be provided by the user, and it is mandatory. If NULL
then an error is raised.
custom_weight_query
: STRING
query that contains the custom variable to be used as weight together with a geography column geom
. This field is compulsory when a custom variable is passed in weight_variable, otherwise this must be set to NULL
.
do_weight_filters
: OBJECT
with fields dataset
and expression
. In case the weight_variable is from the DO, filters can be applied. Each filter is applied to the Data Observatory dataset or geography, identified by its corresponding slug, passed in the dataset
field of the structure. The second field of the structure, expression
, is an SQL expression that will be inserted in a WHERE
clause and that can reference any column of the dataset or geography table. Please note that column names (not slugs) should be applied here. The catalog procedures DATAOBS_SUBSCRIPTIONS
and DATAOBS_SUBSCRIPTION_VARIABLES
can be used to find both the column names and the corresponding table slugs.
source
: VARCHAR
name of the location where the Data Observatory subscriptions of the user are stored, in DATABASE.SCHEMA
format. If only the SCHEMA
is included, it uses the database where the Analytics Toolbox is installed by default.
Output
The output table will contain all the input columns provided in the input_query
and one extra column for each variable in variables
, named after its corresponding slug and including a suffix indicating the aggregation method used.
If a new output table is created, it will be clustered by the spatial index to optimize its performance when filtering data by it or using it to join to other grid tables. This is important to visualize the results in a map efficiently. If an SQL clause is included in the output
parameter this optimization will not be performed.
Examples
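The following is a sketch only: the procedure name DATAOBS_ENRICH_GRID_WEIGHTED, the CARTO.CARTO installation location, the table names and the variable slugs are assumptions, and the exact argument order (including the output table argument referenced in the Output section but not listed in the parameters above) should be verified against your deployment. The weight variable here comes from the Data Observatory, so custom_weight_query and do_weight_filters are left as NULL.

CALL CARTO.CARTO.DATAOBS_ENRICH_GRID_WEIGHTED(
    'quadbin',
    'SELECT INDEX FROM MYDB.MYSCHEMA.MY_QUADBIN_GRID',
    'INDEX',
    ARRAY_CONSTRUCT(OBJECT_CONSTRUCT('variable', 'retail_spend_93b83a2e', 'aggregation', 'SUM')),
    NULL,                                                                       -- filters
    OBJECT_CONSTRUCT('variable', 'population_14d9cf55', 'aggregation', 'SUM'),  -- weight_variable
    NULL,                                                                       -- custom_weight_query (weight comes from the DO)
    NULL,                                                                       -- do_weight_filters
    ARRAY_CONSTRUCT('MYDB.MYSCHEMA.MY_QUADBIN_GRID_ENRICHED'),                  -- output (assumed)
    'MYDB.MYSCHEMA'
);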
Description
This procedure enriches a query containing geographic points with data from the Data Observatory. The user must be subscribed to all the Data Observatory datasets involved in the enrichment. The Data Observatory Catalog tables (SPATIAL_CATALOG_DATASETS and SPATIAL_CATALOG_VARIABLES) must also be present in the same schema as the procedures.
As a result of this process, each input point will be enriched with the data from the Data Observatory that spatially intersects it. When the input point intersects with more than one Data Observatory polygon, point, or line, the data is aggregated using the specified aggregation methods.
Valid aggregation methods are: SUM, MIN, MAX, AVG, and COUNT.
For other types of aggregation, the DATAOBS_ENRICH_POINTS_RAW
procedure can be used to obtain non-aggregated data that can be later applied to any desired custom aggregation.
Input parameters
input_query
: VARCHAR
query to be enriched. A qualified table name can be given as well.
input_geography_column
: VARCHAR
name of the GEOGRAPHY column in the query containing the points to be enriched.
variables
: ARRAY
of OBJECT
s with fields variable
and aggregation
. Variables of the Data Observatory that will be used to enrich the input points. For each variable, its slug and the aggregation method must be provided. Use 'default'
to use the variable's default aggregation method. Valid aggregation methods are: SUM
, AVG
, MAX
, MIN
, and COUNT
. The catalog procedure DATAOBS_SUBSCRIPTION_VARIABLES
can be used to find available variables and their slugs and default aggregation.
filters
ARRAY
of OBJECT
s with fields dataset
and expression
. Filters to be applied to the Data Observatory datasets used in the enrichment can be passed here. Each filter is applied to the Data Observatory dataset or geography, identified by its corresponding slug, passed in the dataset
field of the structure. The second field of the structure, expression
, is an SQL expression that will be inserted in a WHERE
clause and that can reference any column of the dataset or geography table. Please note that column names (not slugs) should be applied here. The catalog procedures DATAOBS_SUBSCRIPTIONS
and DATAOBS_SUBSCRIPTION_VARIABLES
can be used to find both the column names and the corresponding table slugs.
output
: ARRAY
of VARCHAR
containing the name of an output table to store the results and optionally an SQL clause that can be used to partition it, e.g. 'CLUSTER BY number'
. When the output table is the same as the input, the input table will be enriched in place.
source
: VARCHAR
name of the location where the Data Observatory subscriptions of the user are stored, in DATABASE.SCHEMA
format. If only the SCHEMA
is included, it uses the database where the Analytics Toolbox is installed by default.
Output
The output table will contain all the input columns provided in the input_query
and one extra column for each variable in variables
, named after its corresponding slug and including a suffix indicating the aggregation method used.
If a new output table is created, it will be ordered by the x and y coordinates to optimize the performance of spatial filters and joins. This is important to visualize the results in a map efficiently.
Examples
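A hypothetical call (CARTO.CARTO installation and placeholder names, as in the earlier examples), enriching a table of points whose GEOGRAPHY column is GEOM:

CALL CARTO.CARTO.DATAOBS_ENRICH_POINTS(
    'SELECT ID, GEOM FROM MYDB.MYSCHEMA.MY_POINTS',
    'GEOM',
    ARRAY_CONSTRUCT(OBJECT_CONSTRUCT('variable', 'population_14d9cf55', 'aggregation', 'SUM')),
    NULL,
    ARRAY_CONSTRUCT('MYDB.MYSCHEMA.MY_POINTS_ENRICHED'),
    'MYDB.MYSCHEMA'
);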
Description
This procedure enriches a query containing geographic points with data from the Data Observatory. The user must be subscribed to all the Data Observatory datasets involved in the enrichment. The Data Observatory Catalog tables (SPATIAL_CATALOG_DATASETS and SPATIAL_CATALOG_VARIABLES) must also be present in the same schema as the procedures.
As a result of this process, each input point will be enriched with the data of the Data Observatory datasets that spatially intersect it. The variable values corresponding to all intersecting Data Observatory features for a given input point will be returned in an ARRAY column. When variables come from multiple Data Observatory geographies, one ARRAY column will be included for each source geography. Data Observatory geography slugs are used for the names of these columns. Each array contains OBJECTs with one field for each variable (named after the variable slug) and additional measure fields __carto_total and __carto_dimension. See the output information for more details.
Input parameters
input_query
: VARCHAR
query to be enriched. A qualified table name can be given as well.
input_geography_column
: VARCHAR
name of the GEOGRAPHY column in the query containing the points to be enriched.
variables
: ARRAY
of VARCHAR with slugs (unique identifiers) of the Data Observatory variables to enrich the input points. The catalog procedure DATAOBS_SUBSCRIPTION_VARIABLES
can be used to find available variables and their slugs and default aggregation.
filters
ARRAY
of OBJECT
s with fields dataset
and expression
. Filters to be applied to the Data Observatory datasets used in the enrichment can be passed here. Each filter is applied to the Data Observatory dataset or geography, identified by its corresponding slug, passed in the dataset
field of the structure. The second field of the structure, expression
, is an SQL expression that will be inserted in a WHERE
clause and that can reference any column of the dataset or geography table. Please note that column names (not slugs) should be applied here. The catalog procedures DATAOBS_SUBSCRIPTIONS
and DATAOBS_SUBSCRIPTION_VARIABLES
can be used to find both the column names and the corresponding table slugs.
output
: ARRAY
of VARCHAR
containing the name of an output table to store the results and optionally an SQL clause that can be used to partition it, e.g. 'CLUSTER BY number'
. When the output table is the same as the input, the input table will be enriched in place.
source
: VARCHAR
name of the location where the Data Observatory subscriptions of the user are stored, in DATABASE.SCHEMA
format. If only the SCHEMA
is included, it uses the database where the Analytics Toolbox is installed by default.
Output
The output table will contain all the input columns provided in the input_query
and one extra ARRAY column for each Data Observatory geography containing enrichment variables, named after their corresponding geography slug. The array contains OBJECTs with one field for each variable, using the variable slug as the field name. Additional fields will be included with information about the intersected enrichment geographies:
__carto_dimension
dimension of the Data Observatory geography: 2 for areas (polygons), 1 for lines, and 0 for points.
__carto_total
area in square meters (for dimension = 2) or length in meters (for dimension = 1) of the Data Observatory feature.
If a new output table is created, it will be ordered by the x and y coordinates to optimize the performance of spatial filters and joins. This is important to visualize the results in a map efficiently.
Examples
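A hypothetical call under the same assumptions; variables is an array of slugs and each enriched row carries the raw intersecting values in an ARRAY column:

CALL CARTO.CARTO.DATAOBS_ENRICH_POINTS_RAW(
    'SELECT ID, GEOM FROM MYDB.MYSCHEMA.MY_POINTS',
    'GEOM',
    ARRAY_CONSTRUCT('population_14d9cf55'),
    NULL,
    ARRAY_CONSTRUCT('MYDB.MYSCHEMA.MY_POINTS_ENRICHED'),
    'MYDB.MYSCHEMA'
);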
Description
This procedure enriches a query containing geographic polygons with data from the Data Observatory. The user must be subscribed to all the Data Observatory datasets involved in the enrichment. The Data Observatory Catalog tables (SPATIAL_CATALOG_DATASETS and SPATIAL_CATALOG_VARIABLES) must also be present in the same schema as the procedures.
As a result of this process, each input polygon will be enriched with the data from the Data Observatory that spatially intersects it. When the input polygon intersects with more than one Data Observatory polygon, point, or line, the data is aggregated using the specified aggregation methods.
Valid aggregation methods are:
SUM: It assumes the aggregated variable is an extensive property (e.g. population). Accordingly, the value corresponding to the intersected Data Observatory feature is weighted by the fraction of area or length intersected. If the Data Observatory features are points, a simple sum is performed.
MIN: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length, as is the case for SUM.
MAX: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length, as is the case for SUM.
AVG: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length, as is the case for SUM. However, a weighted average is computed, using the intersection areas or lengths as the weights. When the Data Observatory features are points, a simple average is computed.
COUNT: It computes the number of Data Observatory features that contain the enrichment variable and are intersected by the input geography.
For other types of aggregation, the DATAOBS_ENRICH_POLYGONS_RAW
procedure can be used to obtain non-aggregated data that can be later applied to any desired custom aggregation.
Input parameters
input_query
: VARCHAR
query to be enriched. A qualified table name can be given as well.
input_geography_column
: VARCHAR
name of the GEOGRAPHY column in the query containing the polygons to be enriched.
variables
: ARRAY
of OBJECT
s with fields variable
and aggregation
. Variables of the Data Observatory that will be used to enrich the input polygons. For each variable, its slug and the aggregation method must be provided. Use 'default'
to use the variable's default aggregation method. Valid aggregation methods are: SUM
, AVG
, MAX
, MIN
, and COUNT
. The catalog procedure DATAOBS_SUBSCRIPTION_VARIABLES
can be used to find available variables and their slugs and default aggregation.
filters
ARRAY
of OBJECT
s with fields dataset
and expression
. Filters to be applied to the Data Observatory datasets used in the enrichment can be passed here. Each filter is applied to the Data Observatory dataset or geography, identified by its corresponding slug, passed in the dataset
field of the structure. The second field of the structure, expression
, is an SQL expression that will be inserted in a WHERE
clause and that can reference any column of the dataset or geography table. Please note that column names (not slugs) should be applied here. The catalog tables SPATIAL_CATALOG_VARIABLES
and SPATIAL_CATALOG_DATASETS
can be used to find both the column names and the corresponding table slugs.
output
: ARRAY
of VARCHAR
containing the name of an output table to store the results and optionally an SQL clause that can be used to partition it, e.g. 'CLUSTER BY number'
. When the output table is the same as the input, the input table will be enriched in place.
source
: VARCHAR
name of the location where the Data Observatory subscriptions of the user are stored, in DATABASE.SCHEMA
format. If only the SCHEMA
is included, it uses the database where the Analytics Toolbox is installed by default.
Note that GeometryCollection/FeatureCollection geographies are not supported at the moment.
Output
The output table will contain all the input columns provided in the input_query
and one extra column for each variable in variables
, named after its corresponding slug and including a suffix indicating the aggregation method used.
If a new output table is created, it will be ordered by the minimum x and y coordinates of each polygon to optimize the performance of spatial filters and joins. This is important to visualize the results in a map efficiently.
Examples
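A hypothetical call (same assumptions about the installation location and placeholder names), enriching polygons with a population variable aggregated with SUM:

CALL CARTO.CARTO.DATAOBS_ENRICH_POLYGONS(
    'SELECT ID, GEOM FROM MYDB.MYSCHEMA.MY_AREAS',
    'GEOM',
    ARRAY_CONSTRUCT(OBJECT_CONSTRUCT('variable', 'population_14d9cf55', 'aggregation', 'SUM')),
    NULL,
    ARRAY_CONSTRUCT('MYDB.MYSCHEMA.MY_AREAS_ENRICHED'),
    'MYDB.MYSCHEMA'
);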
Description
This procedure enriches a query containing geographic polygons with data from the Data Observatory. The user must be subscribed to all the Data Observatory datasets involved in the enrichment.
As a result of this process, each input polygon will be enriched with the data of the Data Observatory datasets that spatially intersect it. The variable values corresponding to all intersecting Data Observatory features for a given input polygon will be returned in an ARRAY column. When variables come from multiple Data Observatory geographies, one ARRAY column will be included for each source geography. Data Observatory geography slugs are used for the names of these columns. Each array contains OBJECTs with one field for each variable (named after the variable slug) and additional measure fields __carto_intersection, __carto_total, and __carto_dimension. See the output information for more details.
Input parameters
input_query
: VARCHAR
query to be enriched. A qualified table name can be given as well.
input_geography_column
: VARCHAR
name of the GEOGRAPHY column in the query containing the polygons to be enriched.
variables
: ARRAY
of VARCHAR with slugs (unique identifiers) of the Data Observatory variables to enrich the input polygons. The catalog procedure DATAOBS_SUBSCRIPTION_VARIABLES
can be used to find available variables and their slugs and default aggregation.
filters
ARRAY
of OBJECT
s with fields dataset
and expression
. Filters to be applied to the Data Observatory datasets used in the enrichment can be passed here. Each filter is applied to the Data Observatory dataset or geography, identified by its corresponding slug, passed in the dataset
field of the structure. The second field of the structure, expression
, is an SQL expression that will be inserted in a WHERE
clause and that can reference any column of the dataset or geography table. Please note that column names (not slugs) should be applied here. The catalog tables SPATIAL_CATALOG_VARIABLES
and SPATIAL_CATALOG_DATASETS
can be used to find both the column names and the corresponding table slugs.
output
: ARRAY
of VARCHAR
containing the name of an output table to store the results and optionally an SQL clause that can be used to partition it, e.g. 'CLUSTER BY number'
. When the output table is the same as the input, the input table will be enriched in place.
source
: VARCHAR
name of the location where the Data Observatory subscriptions of the user are stored, in DATABASE.SCHEMA
format. If only the SCHEMA
is included, it uses the database where the Analytics Toolbox is installed by default.
Note that GeometryCollection/FeatureCollection geographies are not supported at the moment.
Output
The output table will contain all the input columns provided in the input_query
and one extra ARRAY column for each Data Observatory geography containing enrichment variables, named after their corresponding geography slug. The array contains OBJECTs with one field for each variable, using the variable slug as the field name. Additional fields will be included with information about the intersection of the geographies:
__carto_dimension
dimension of the Data Observatory geography: 2 for areas (polygons), 1 for lines, and 0 for points.
__carto_intersection
area in square meters (for dimension = 2) or length in meters (for dimension = 1) of the intersection.
__carto_total
area in square meters (for dimension = 2) or length in meters (for dimension = 1) of the Data Observatory feature.
Moreover, another column named __CARTO_INPUT_AREA
will be added containing the area of the input polygon in square meters.
If a new output table is created, it will be ordered by the minimum x and y coordinates of each polygon to optimize the performance of spatial filters and joins. This is important to visualize the results in a map efficiently.
Examples
Imagine that you need some information about the population in two areas of interest defined by polygons.
Now let's compute some aggregated statistics for our areas (see the sketch below):
The sum of the population, adjusted by the fraction of intersected enrichment areas
The average density of population, weighted by the intersection areas
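The following sketch assumes the toolbox lives in CARTO.CARTO and uses placeholder table names, placeholder variable slugs (population_14d9cf55 for the population count, population_density_7c3b9e2a for the density) and a placeholder name (POPULATION_GEOGRAPHY_SLUG) for the geography-slug ARRAY column that the enrichment adds; substitute the actual slugs and column name from your subscription.

CALL CARTO.CARTO.DATAOBS_ENRICH_POLYGONS_RAW(
    'SELECT ID, GEOM FROM MYDB.MYSCHEMA.MY_AREAS',
    'GEOM',
    ARRAY_CONSTRUCT('population_14d9cf55', 'population_density_7c3b9e2a'),
    NULL,
    ARRAY_CONSTRUCT('MYDB.MYSCHEMA.MY_AREAS_ENRICHED'),
    'MYDB.MYSCHEMA'
);

-- Population sum adjusted by the intersected fraction, and density averaged using the
-- intersection areas as weights, both computed from the raw enrichment arrays.
SELECT
    T.ID,
    SUM(F.VALUE:population_14d9cf55::FLOAT
        * F.VALUE:__carto_intersection::FLOAT / F.VALUE:__carto_total::FLOAT) AS POPULATION_SUM,
    SUM(F.VALUE:population_density_7c3b9e2a::FLOAT * F.VALUE:__carto_intersection::FLOAT)
        / SUM(F.VALUE:__carto_intersection::FLOAT) AS POPULATION_DENSITY_AVG
FROM MYDB.MYSCHEMA.MY_AREAS_ENRICHED T,
    LATERAL FLATTEN(INPUT => T.POPULATION_GEOGRAPHY_SLUG) F
GROUP BY T.ID;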
Description
This procedure enriches a query containing geographic polygons with data from the Data Observatory. The user must be subscribed to all the Data Observatory datasets involved in the enrichment.
As a result of this process, each input polygon will be enriched with the data of the Data Observatory datasets that spatially intersect it, weighted accordingly by the specified feature. When the input polygon intersects with more than one polygon, point, or line of the Data Observatory datasets, the data is aggregated using the specified aggregation methods. Using a specified feature as the weight for the enrichment weights the intersections appropriately with respect to the total original feature: the share of a variable attributed to each intersected segment is given by the value of the weight feature in the intersection over its value in the total original segment.
Valid aggregation methods are:
SUM: It assumes the aggregated variable is an extensive property (e.g. population). Accordingly, the value corresponding to the intersected Data Observatory feature is weighted by the fraction of the intersected weight variable. If the Data Observatory features are points, a simple sum is performed.
MIN: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the weight variable.
MAX: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the weight variable.
AVG: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length. However, a weighted average is computed, using the intersection values as the weights. When the Data Observatory features are points, a simple average is computed.
COUNT: It computes the number of Data Observatory features that contain the enrichment variable and are intersected by the input geography.
For other types of aggregation, the DATAOBS_ENRICH_POLYGONS_RAW
procedure can be used to obtain non-aggregated data that can later be applied to any desired custom aggregation.
If the enrichment of an input table needs to be repeated, please note that dropping the added columns will cause problems in subsequent enrichments, as BigQuery retains those columns for 7 days for time-travel purposes. We recommend storing the original table columns in a temporary table, dropping the input table, and then recreating the input table from the temporary table.
Input parameters
input_query
: STRING
query to be enriched (Standard SQL). A qualified table name can be given as well, e.g. 'project-id.dataset-id.table-name'
.
input_geography_column
: STRING
name of the GEOGRAPHY column in the query containing the polygons to be enriched.
variables
: ARRAY
of OBJECT
s with fields variable
and aggregation
. Variables of the Data Observatory that will be used to enrich the input polygons. For each variable, its slug and the aggregation method must be provided. Use 'default'
to use the variable's default aggregation method. Valid aggregation methods are: SUM
, AVG
, MAX
, MIN
, and COUNT
. The catalog procedure DATAOBS_SUBSCRIPTION_VARIABLES
can be used to find available variables and their slugs and default aggregation.
filters
ARRAY
of OBJECT
s with fields dataset
and expression
. Filters to be applied to the Data Observatory datasets used in the enrichment can be passed here. Each filter is applied to the Data Observatory dataset or geography, identified by its corresponding slug, passed in the dataset
field of the structure. The second field of the structure, expression
, is an SQL expression that will be inserted in a WHERE
clause and that can reference any column of the dataset or geography table. Please note that column names (not slugs) should be applied here. The catalog procedures DATAOBS_SUBSCRIPTIONS
and DATAOBS_SUBSCRIPTION_VARIABLES
can be used to find both the column names and the corresponding table slugs.
weight_variable
: OBJECT
with fields variable
and aggregation
. Variable that will be used to weight the intersections of the input polygons with the Data Observatory datasets polygons, lines, points. Its slug and the aggregation method must be provided. Valid aggregation methods are: SUM
, AVG
, MAX
, MIN
, COUNT
. This variable can either originate from the Data Observatory or be provided by the user, and it is mandatory. If NULL
then an error is raised.
custom_weight_query
: STRING
query that contains the custom variable to be used as weight together with a geography column geom
. This field is compulsory when a custom variable is passed in weight_variable, otherwise this must be set to NULL
.
do_weight_filters
: OBJECT
with fields dataset
and expression
. In case the weight_variable is from the DO, filters can be applied. Each filter is applied to the Data Observatory dataset or geography, identified by its corresponding slug, passed in the dataset
field of the structure. The second field of the structure, expression
, is an SQL expression that will be inserted in a WHERE
clause and that can reference any column of the dataset or geography table. Please note that column names (not slugs) should be applied here. The catalog procedures DATAOBS_SUBSCRIPTIONS
and DATAOBS_SUBSCRIPTION_VARIABLES
can be used to find both the column names and the corresponding table slugs.
source
: VARCHAR
name of the location where the Data Observatory subscriptions of the user are stored, in DATABASE.SCHEMA
format. If only the SCHEMA
is included, it uses the database where the Analytics Toolbox is installed by default.
Output
The output table will contain all the input columns provided in the input_query
and one extra column for each variable in variables
, named after its corresponding slug and including a suffix indicating the aggregation method used.
If a new output table is created, it will be clustered by the geography column to optimize the performance of spatial filters and joins. This is important to visualize the results in a map efficiently. If an SQL clause is included in the output
parameter this optimization will not be performed.
Examples
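As a sketch only: the procedure name DATAOBS_ENRICH_POLYGONS_WEIGHTED, the CARTO.CARTO location, the table names and variable slugs are assumptions, and the exact argument order (including the output table argument referenced above but absent from the parameter list) should be checked against your deployment. Here the weight is a Data Observatory variable, so custom_weight_query and do_weight_filters are NULL.

CALL CARTO.CARTO.DATAOBS_ENRICH_POLYGONS_WEIGHTED(
    'SELECT ID, GEOM FROM MYDB.MYSCHEMA.MY_AREAS',
    'GEOM',
    ARRAY_CONSTRUCT(OBJECT_CONSTRUCT('variable', 'retail_spend_93b83a2e', 'aggregation', 'SUM')),
    NULL,                                                                       -- filters
    OBJECT_CONSTRUCT('variable', 'population_14d9cf55', 'aggregation', 'SUM'),  -- weight_variable
    NULL,                                                                       -- custom_weight_query
    NULL,                                                                       -- do_weight_filters
    ARRAY_CONSTRUCT('MYDB.MYSCHEMA.MY_AREAS_ENRICHED'),                         -- output (assumed)
    'MYDB.MYSCHEMA'
);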
Description
When calling this procedure, the result shows a list of the DO subscriptions available.
source
: VARCHAR
name of the location where the Data Observatory subscriptions of the user are stored, in DATABASE.SCHEMA
format. If only the SCHEMA
is included, it uses the database where the Analytics Toolbox is installed by default.
filters
: VARCHAR
SQL expression to filter the results, e.g. $$dataset_category='Housing'$$
. An empty string ''
or NULL
can be used to omit the filtering.
Output
The result is an array of objects with the following fields:
dataset_slug
Internal identifier of the DO dataset.
dataset_name
name of the DO dataset.
dataset_country
name of the country the dataset belongs to.
dataset_category
name of the dataset category.
dataset_license
type of license, either "Public data" or "Premium data".
dataset_provider
name of the dataset provider.
dataset_version
version of the dataset.
dataset_geo_type
type of geometry used by the geography: "POINT"/"MULTIPOINT"/"LINESTRING"/"MULTILINESTRING"/"POLYGON"/"MULTIPOLYGON"/"GEOMETRYCOLLECTION".
dataset_table
name of the user BigQuery subscription table to access the dataset.
associated_geography_table
geography associated with the dataset (NULL if category is Geography, meaning the dataset itself is a geography); contains a subscription table/view if available for the geography or the original (public) BigQuery dataset qualified name otherwise.
associated_geography_slug
internal identifier of the geography associated with the dataset (NULL if category is Geography
).
Example
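A hypothetical call, assuming the toolbox is installed in CARTO.CARTO and the subscriptions live in MYDB.MYSCHEMA; the filter expression is optional, and an empty string or NULL skips it:

CALL CARTO.CARTO.DATAOBS_SUBSCRIPTIONS('MYDB.MYSCHEMA', $$dataset_category='Housing'$$);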
Description
When calling this procedure, the result shows a list of the DO subscriptions and variables available.
source
: VARCHAR
name of the location where the Data Observatory subscriptions of the user are stored, in DATABASE.SCHEMA
format. If only the SCHEMA
is included, it uses the database where the Analytics Toolbox is installed by default.
filters
: VARCHAR
SQL expression to filter the results, e.g. $$variable_type='STRING'$$
. An empty string ''
or NULL
can be used to omit the filtering.
Output
The result is an array of objects with the following fields:
variable_slug
unique identifier of the variable. This can be used for enrichment.
variable_name
column name of the variable.
variable_description
description of the variable.
variable_type
type of the variable column.
variable_aggregation
default aggregation method for the variable.
dataset_slug
identifier of the dataset the variable belongs to.
associated_geography_slug
identifier of the corresponding geography. Note that this is NULL if the dataset itself is a geography.
Example
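A hypothetical call under the same assumptions, listing only string-typed variables:

CALL CARTO.CARTO.DATAOBS_SUBSCRIPTION_VARIABLES('MYDB.MYSCHEMA', $$variable_type='STRING'$$);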
Description
This procedure enriches a query containing grid cell indexes of one of the supported types (H3, Quadbin) with data from another enrichment query that contains geographies, thus effectively transferring geography-based data to a spatial grid.
As a result of this process, each input grid cell will be enriched with the data of the enrichment query that spatially intersects it. When the input cell intersects with more than one feature of the enrichment query, the data is aggregated using the aggregation methods specified.
Valid aggregation methods are:
SUM: It assumes the aggregated variable is an extensive property (e.g. population). Accordingly, the value corresponding to the intersected enrichment feature is weighted by the fraction of area or length intersected. If the enrichment features are points, a simple sum is performed.
MIN: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length, as is the case for SUM.
MAX: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length, as is the case for SUM.
AVG: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length, as is the case for SUM. However, a weighted average is computed, using the intersection areas or lengths as the weights. When the enrichment features are points, a simple average is computed.
COUNT: It computes the number of enrichment features that contain the enrichment variable and are intersected by the input geography.
For other types of aggregation, the ENRICH_GRID_RAW
procedure can be used to obtain non-aggregated data that can be later applied to any desired custom aggregation.
Input parameters
grid_type
: Type of grid: "h3" or "quadbin".
input_query
: VARCHAR
query to be enriched; this query must produce valid grid indexes for the selected grid type in a column of the proper type (VARCHAR for H3 and INT for Quadbin). It can include additional columns with data associated with the grid cells that will be preserved. A qualified table name can be given as well.
input_index_column
: VARCHAR
name of a column in the query that contains the grid indexes.
data_query
: VARCHAR
query that contains both a geography column and the columns with the data that will be used to enrich the polygons provided in the input query. All rows in the table must contain the same kind of geometry (points/lines/polygons) in the geography column.
data_geography_column
: VARCHAR
name of the GEOGRAPHY column provided in the data_query
.
variables
: ARRAY
with the columns that will be used to enrich the input grid cells and their corresponding aggregation method (SUM
, AVG
, MAX
, MIN
, COUNT
). Each element in this array should be an OBJECT
with fields column
and aggregation
.
output
: ARRAY
of VARCHAR
containing the name of an output table to store the results and optionally an SQL clause that can be used to partition it, e.g. 'CLUSTER BY number'
. When the output table is the same as the input, the input table will be enriched in place.
Note that GeometryCollection/FeatureCollection geographies are not supported at the moment.
Output
The output table will contain all the input columns provided in the input_query
and one extra column for each variable in variables
, named after its corresponding enrichment column and including a suffix indicating the aggregation method used.
If a new output table is created, it will be clustered by the spatial index to optimize its performance when filtering data by it or using it to join to other grid tables. This is important to visualize the results in a map efficiently. If an SQL clause is included in the output
parameter this optimization will not be performed.
Examples
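A hypothetical call (CARTO.CARTO installation and placeholder table/column names), transferring a POPULATION column from a polygon table onto an H3 grid:

CALL CARTO.CARTO.ENRICH_GRID(
    'h3',
    'SELECT INDEX FROM MYDB.MYSCHEMA.MY_H3_GRID',
    'INDEX',
    'SELECT GEOM, POPULATION FROM MYDB.MYSCHEMA.CENSUS_TRACTS',
    'GEOM',
    ARRAY_CONSTRUCT(OBJECT_CONSTRUCT('column', 'POPULATION', 'aggregation', 'SUM')),
    ARRAY_CONSTRUCT('MYDB.MYSCHEMA.MY_H3_GRID_ENRICHED')
);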
Description
This procedure enriches a query containing grid cell indexes of one of the supported types (H3, Quadbin) with data from another enrichment query that contains geographies, thus effectively transferring geography-based data to a spatial grid.
Input parameters
grid_type
: Type of grid: "h3" or "quadbin".
input_query
: VARCHAR
query to be enriched; this query must produce valid grid indexes for the selected grid type in a column of the proper type (VARCHAR for H3 and INT for Quadbin). It can include additional columns with data associated with the grid cells that will be preserved. A qualified table name can be given as well.
input_index_column
: VARCHAR
name of a column in the query that contains the grid indexes.
data_query
: VARCHAR
query that contains both a geography column and the columns with the data that will be used to enrich the polygons provided in the input query. All rows in the table must contain the same kind of geometry (points/lines/polygons) in the geography column.
data_geography_column
: VARCHAR
name of the GEOGRAPHY column provided in the data_query
.
variables
: ARRAY
of VARCHAR
elements with names of the columns in the enrichment query that will be added to the enriched results.
output
: ARRAY
of VARCHAR
containing the name of an output table to store the results and optionally an SQL clause that can be used to partition it, e.g. 'CLUSTER BY number'
. When the output table is the same as the input, the input table will be enriched in place.
Note that GeometryCollection/FeatureCollection geographies are not supported at the moment.
Output
The output table will contain all the input columns provided in the input_query
and one extra ARRAY column named __CARTO_ENRICHMENT
. The array contains OBJECTs with one field for each variable. Additional fields will be included with information about the intersection of the geographies:
__carto_dimension
dimension of the enrichment geography: 2 for areas (polygons), 1 for lines and 0 for points.
__carto_intersection
area in square meters (for dimension = 2) or length in meters (for dimension = 1) of the intersection.
__carto_total
area in square meters (for dimension = 2) or length in meters (for dimension = 1) of the enrichment feature.
If a new output table is created, it will be clustered by the spatial index to optimize its performance when filtering data by it or using it to join to other grid tables. This is important to visualize the results in a map efficiently. If an SQL clause is included in the output
parameter this optimization will not be performed.
Examples
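A hypothetical call under the same assumptions; variables is just the list of column names to carry over, and the raw values end up in the __CARTO_ENRICHMENT array:

CALL CARTO.CARTO.ENRICH_GRID_RAW(
    'h3',
    'SELECT INDEX FROM MYDB.MYSCHEMA.MY_H3_GRID',
    'INDEX',
    'SELECT GEOM, POPULATION FROM MYDB.MYSCHEMA.CENSUS_TRACTS',
    'GEOM',
    ARRAY_CONSTRUCT('POPULATION'),
    ARRAY_CONSTRUCT('MYDB.MYSCHEMA.MY_H3_GRID_ENRICHED')
);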
Description
This procedure enriches a query containing geographic points with data from another query, spatially matching both and aggregating the result.
As a result of this process, each input point will be enriched with the data from the enrichment query that spatially intersects it. When the input point intersects with more than one enrichment polygon, point or line, the data is aggregated using the aggregation methods specified.
Valid aggregation methods are: SUM, MIN, MAX, AVG, and COUNT.
For special types of aggregation, the ENRICH_POINTS_RAW
procedure can be used to obtain non-aggregated data that can be later applied to any desired custom aggregation.
Input parameters
input_query
: VARCHAR
query to be enriched. A qualified table name can be given as well.
input_geography_column
: VARCHAR
name of the GEOGRAPHY column in the query containing the points to be enriched.
data_query
: VARCHAR
query that contains both a geography column and the columns with the data that will be used to enrich the points provided in the input query. All rows in the table must contain the same kind of geometry (points/lines/polygons) in the geography column.
data_geography_column
: VARCHAR
name of the GEOGRAPHY column provided in the data_query
.
variables
: ARRAY
with the columns that will be used to enrich the input points and their corresponding aggregation method (SUM
, AVG
, MAX
, MIN
, COUNT
). Each element in this array should be an OBJECT
with fields column
and aggregation
.
output
: ARRAY
of VARCHAR
containing the name of an output table to store the results and optionally an SQL clause that can be used to partition it, e.g. 'CLUSTER BY number'
. When the output table is the same as the input, the input table will be enriched in place.
Output
The output table will contain all the input columns provided in the input_query
and one extra column for each variable in variables
, named after its corresponding enrichment column and including a suffix indicating the aggregation method used.
If a new output table is created, it will be ordered by the x and y coordinates to optimize the performance of spatial filters and joins. This is important to visualize the results in a map efficiently.
Examples
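A hypothetical call with placeholder names (toolbox assumed in CARTO.CARTO), enriching points with a column from a polygon table:

CALL CARTO.CARTO.ENRICH_POINTS(
    'SELECT ID, GEOM FROM MYDB.MYSCHEMA.MY_POINTS',
    'GEOM',
    'SELECT GEOM, POPULATION FROM MYDB.MYSCHEMA.CENSUS_TRACTS',
    'GEOM',
    ARRAY_CONSTRUCT(OBJECT_CONSTRUCT('column', 'POPULATION', 'aggregation', 'SUM')),
    ARRAY_CONSTRUCT('MYDB.MYSCHEMA.MY_POINTS_ENRICHED')
);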
Description
This procedure enriches a query containing geographic points with data from another query, spatially matching both.
As a result of this process, each input point will be enriched with the data from the enrichment query that spatially intersects it. The variable values corresponding to all intersecting enrichment features for a given input point will be returned in an ARRAY column named __CARTO_ENRICHMENT. Each array value in this column contains OBJECTs with one field for each variable and additional measure fields __carto_total and __carto_dimension. See the output information for details.
Input parameters
input_query
: VARCHAR
query to be enriched. A qualified table name can be given as well.
input_geography_column
: VARCHAR
name of the GEOGRAPHY column in the query containing the points to be enriched.
data_query
: VARCHAR
query that contains both a geography column and the columns with the data that will be used to enrich the points provided in the input query. All rows in the table must contain the same kind of geometry (points/lines/polygons) in the geography column.
data_geography_column
: VARCHAR
name of the GEOGRAPHY column provided in the data_query
.
variables
: ARRAY
of VARCHAR
elements with names of the columns in the enrichment query that will be added to the enriched results.
output
: ARRAY
of VARCHAR
containing the name of an output table to store the results and optionally an SQL clause that can be used to partition it, e.g. 'CLUSTER BY number'
. When the output table is the same as the input, the input table will be enriched in place.
Output
The output table will contain all the input columns provided in the input_query
and one extra ARRAY column named __CARTO_ENRICHMENT
. The array contains OBJECTs with one field for each variable. Additional fields will be included with information about the intersection of the geographies:
__carto_dimension
dimension of the enrichment geography: 2 for areas (polygons), 1 for lines, and 0 for points.
__carto_total
area in square meters (for dimension = 2) or length in meters (for dimension = 1) of the enrichment feature.
If a new output table is created, it will be ordered by the x and y coordinates to optimize the performance of spatial filters and joins. This is important to visualize the results in a map efficiently.
Examples
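A hypothetical call under the same assumptions, returning the raw intersecting values in the __CARTO_ENRICHMENT array:

CALL CARTO.CARTO.ENRICH_POINTS_RAW(
    'SELECT ID, GEOM FROM MYDB.MYSCHEMA.MY_POINTS',
    'GEOM',
    'SELECT GEOM, POPULATION FROM MYDB.MYSCHEMA.CENSUS_TRACTS',
    'GEOM',
    ARRAY_CONSTRUCT('POPULATION'),
    ARRAY_CONSTRUCT('MYDB.MYSCHEMA.MY_POINTS_ENRICHED')
);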
Description
This procedure enriches a query containing geographic polygons with data from another query, spatially matching both and aggregating the result.
As a result of this process, each input polygon will be enriched with the data from the enrichment query that spatially intersects it. When the input polygon intersects with more than one enrichment polygon, point or line, the data is aggregated using the aggregation methods specified.
Valid aggregation methods are:
SUM: It assumes the aggregated variable is an extensive property (e.g. population). Accordingly, the value corresponding to the intersected enrichment feature is weighted by the fraction of area or length intersected. If the enrichment features are points, a simple sum is performed.
MIN: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length, as is the case for SUM.
MAX: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length, as is the case for SUM.
AVG: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the intersected area/length, as is the case for SUM. However, a weighted average is computed, using the intersection areas or lengths as the weights. When the enrichment features are points, a simple average is computed.
COUNT: It computes the number of enrichment features that contain the enrichment variable and are intersected by the input geography.
For other types of aggregation, the ENRICH_POLYGONS_RAW
procedure can be used to obtain non-aggregated data that can be later applied to any desired custom aggregation.
Input parameters
input_query
: VARCHAR
query to be enriched. A qualified table name can be given as well.
input_geography_column
: VARCHAR
name of the GEOGRAPHY column in the query containing the polygons to be enriched.
data_query
: VARCHAR
query that contains both a geography column and the columns with the data that will be used to enrich the polygons provided in the input query. All rows in the table must contain the same kind of geometry (points/lines/polygons) in the geography column.
data_geography_column
: VARCHAR
name of the GEOGRAPHY column provided in the data_query
.
variables
: ARRAY
with the columns that will be used to enrich the input polygons and their corresponding aggregation method (SUM
, AVG
, MAX
, MIN
, COUNT
). Each element in this array should be an OBJECT
with fields column
and aggregation
.
output
: ARRAY
of VARCHAR
containing the name of an output table to store the results and optionally an SQL clause that can be used to partition it, e.g. 'CLUSTER BY number'
. When the output table is the same as the input, the input table will be enriched in place.
Note that GeometryCollection/FeatureCollection geographies are not supported at the moment.
Output
The output table will contain all the input columns provided in the input_query
and one extra column for each variable in variables
, named after its corresponding enrichment column and including a suffix indicating the aggregation method used.
If a new output table is created, it will be ordered by the minimum x and y coordinates of each polygon to optimize the performance of spatial filters and joins. This is important to visualize the results in a map efficiently.
Examples
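The following is a minimal illustrative call. The schema qualification (carto) and the table, column, and variable names are hypothetical; adjust them to where the toolbox is installed and to your own data.
CALL carto.ENRICH_POLYGONS(
    'SELECT id, geom FROM mydb.myschema.input_polygons',
    'geom',
    'SELECT geom, population FROM mydb.myschema.custom_data',
    'geom',
    ARRAY_CONSTRUCT(OBJECT_CONSTRUCT('column', 'population', 'aggregation', 'SUM')),
    ARRAY_CONSTRUCT('mydb.myschema.enriched_polygons')
);
Under these assumptions, the output table would contain the id and geom input columns plus an aggregated column such as population_sum.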
Description
This procedure enriches a query containing geographic polygons with data from another query, spatially matching both.
As a result of this process, each input polygon will be enriched with the data of the enrichment query that spatially intersects it. The variable values corresponding to all intersecting enrichment features for a given input polygon will be returned in an ARRAY column named __CARTO_ENRICHMENT
. Each array value in this column contains OBJECTs with one field for each variable and additional measure fields __carto_intersection
, __carto_total
and __carto_dimension
. See the output information for details.
Input parameters
input_query
: VARCHAR
query to be enriched. A qualified table name can be given as well.
input_geography_column
: VARCHAR
name of the GEOGRAPHY column in the query containing the polygons to be enriched.
data_query
: VARCHAR
query that contains both a geography column and the columns with the data that will be used to enrich the polygons provided in the input query. All rows in the table must contain the same kind of geometry (points/lines/polygons) in the geography column.
data_geography_column
: VARCHAR
name of the GEOGRAPHY column provided in the data_query
.
variables
: ARRAY
of VARCHAR
elements with names of the columns in the enrichment query that will be added to the enriched results.
output
: ARRAY
of VARCHAR
containing the name of an output table to store the results and optionally an SQL clause that can be used to partition it, e.g. 'CLUSTER BY number'
. When the output table is the same as the input, the input table will be enriched in place.
Note that GeometryCollection/FeatureCollection geographies are not supported at the moment.
Output
The output table will contain all the input columns provided in the input_query
and one extra ARRAY column named __CARTO_ENRICHMENT
. The array contains OBJECTs with one field for each variable. Additional fields will be included with information about the intersection of the geographies:
__carto_dimension
dimension of the enrichment geography: 2 for areas (polygons), 1 for lines and 0 for points.
__carto_intersection
area in square meters (for dimension = 2) or length in meters (for dimension = 1) of the intersection.
__carto_total
area in square meters (for dimension = 2) or length in meters (for dimension = 1) of the enrichment feature.
Moreover, another field named __carto_input_area
will be included in __CARTO_ENRICHMENT
, containing the area of the input polygon in square meters.
If a new output table is created, it will be ordered by the minimum x and y coordinates of each polygon to optimize the performance of spatial filters and joins. This is important to visualize the results in a map efficiently.
Examples
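The following is a minimal illustrative call of the RAW variant, assuming it is exposed as ENRICH_POLYGONS_RAW (referenced in the previous section); the schema qualification and the table, column, and variable names are hypothetical.
CALL carto.ENRICH_POLYGONS_RAW(
    'SELECT id, geom FROM mydb.myschema.input_polygons',
    'geom',
    'SELECT geom, population FROM mydb.myschema.custom_data',
    'geom',
    ARRAY_CONSTRUCT('population'),
    ARRAY_CONSTRUCT('mydb.myschema.enriched_polygons_raw')
);
The non-aggregated results can then be combined with any custom aggregation. A sketch of an area-weighted sum, assuming polygon enrichment features (__carto_dimension = 2) and a lowercase population field in the enrichment objects:
SELECT e.id,
       SUM(f.value:population::FLOAT * f.value:__carto_intersection::FLOAT / f.value:__carto_total::FLOAT) AS population_sum
FROM mydb.myschema.enriched_polygons_raw AS e,
     LATERAL FLATTEN(input => e.__CARTO_ENRICHMENT) AS f
GROUP BY e.id;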
Description
This procedure enriches a query containing geographic polygons with custom data provided by the user.
As a result of this process, each input polygon will be enriched with the custom data that spatially intersects it, weighted by the specified feature. When the input polygon intersects with more than one polygon, point, or line of the provided datasets, the data is aggregated using the aggregation methods specified. The specified weight feature is used to apportion each enrichment feature among the segments produced by the intersection: the fraction attributed to an intersected segment is the value of the weight feature within that segment divided by its value over the whole original feature. For example, if 30% of a feature's weight (e.g. its population) falls inside the input polygon, a SUM aggregation attributes 30% of that feature's value to the polygon.
Valid aggregation methods are:
SUM
: It assumes the aggregated variable is an extensive property (e.g. population). Accordingly, the value corresponding to the feature intersected is weighted by the fraction of the intersected weight variable.
MIN
: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the weight variable.
MAX
: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). Thus, the value is not altered by the weight variable.
AVG
: It assumes the aggregated variable is an intensive property (e.g. temperature, population density). A weighted average is computed, using the value of the intersected weight variable as weights.
COUNT
It computes the number of features that contain the enrichment variable and are intersected by the input geography.
If the enrichment of an input table needs to be repeated, note that dropping the previously added columns will cause problems in subsequent enrichments, as BigQuery keeps those columns for 7 days for time travel purposes. We recommend storing the original table columns in a temporary table, dropping the input table, and then recreating the input table from the temporary table, as sketched below.
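A sketch of this workaround, using hypothetical table and column names:
-- Keep only the original (pre-enrichment) columns in a backup copy
CREATE TABLE `project-id.dataset-id.input_backup` AS
SELECT id, geom FROM `project-id.dataset-id.input_table`;
-- Drop the table that already contains the enrichment columns
DROP TABLE `project-id.dataset-id.input_table`;
-- Recreate the input table from the backup before enriching again
CREATE TABLE `project-id.dataset-id.input_table` AS
SELECT * FROM `project-id.dataset-id.input_backup`;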
Input parameters
input_query
: STRING
query to be enriched (Standard SQL). A qualified table name can be given as well, e.g. 'project-id.dataset-id.table-name'
.
input_geography_column
: STRING
name of the GEOGRAPHY column in the query containing the polygons to be enriched.
data_query
: STRING
query that contains both a geography column and the columns with the data that will be used to enrich the polygons provided in the input query.
data_geography_column
: STRING
name of the GEOGRAPHY column provided in the data_query
.
variables
: ARRAY
with the columns that will be used to enrich the input polygons and their corresponding aggregation method (SUM
, AVG
, MAX
, MIN
, COUNT
). Each element in this array should be an OBJECT
with fields column
and aggregation
.
weight_variable
: OBJECT
with fields variable
and aggregation
. Variable that will be used to weight the intersections of the input polygons with the enrichment data (polygons, lines, or points). Its name and the aggregation method must be provided. Valid aggregation methods are: SUM
, AVG
, MAX
, MIN
, COUNT
. This variable is mandatory. If NULL
then an error is raised.
custom_weight_query
: STRING
query that contains the custom variable to be used as weight together with a geography column geom
. If it is set to NULL
, then the data_query
is used for the weight_variable
.
output
: ARRAY
of VARCHAR
containing the name of an output table to store the results and optionally an SQL clause that can be used to partition it, e.g. 'CLUSTER BY number'
. When the output table is the same as the input, the input table will be enriched in place.
Output
The output table will contain all the input columns provided in the input_query
and one extra column for each variable in variables
, named after its corresponding enrichment column and including a suffix indicating the aggregation method used.
If a new output table is created, it will be clustered by the geography column to optimize the performance of spatial filters and joins. This is important to visualize the results in a map efficiently. If an SQL clause is included in the output
parameter, this optimization will not be performed.
Examples
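The following call is purely illustrative: the procedure name ENRICH_POLYGONS_WEIGHTED, its qualification, and the table, column, and variable names are hypothetical, and only the parameter order described above is taken from this documentation.
CALL carto.ENRICH_POLYGONS_WEIGHTED(
    'SELECT id, geom FROM `project-id.dataset-id.input_polygons`',
    'geom',
    'SELECT geom, revenue, population FROM `project-id.dataset-id.custom_data`',
    'geom',
    [STRUCT('revenue' AS column, 'SUM' AS aggregation)],
    STRUCT('population' AS variable, 'SUM' AS aggregation),
    NULL,
    ['project-id.dataset-id.enriched_output']
);
Here custom_weight_query is set to NULL, so the population weight variable is read from the data_query itself.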