
Analytics Toolbox for BigQuery
retail
This module contains procedures to solve specific retail analytics use cases, such as revenue prediction.
BUILD_REVENUE_MODEL
Description
This procedure is the second step of the Revenue Prediction analysis workflow. It creates the model and its description tables from the input model data (output of the BUILD_REVENUE_MODEL_DATA procedure). It performs the following steps:
- Compute the model from the input query and options.
- Compute the revenue `model_shap` and `model_stats` tables (see the output description for more details).
Input parameters
- `revenue_model_data`: STRING table with the revenue model data generated with the BUILD_REVENUE_MODEL_DATA procedure.
- `options`: STRING JSON string to overwrite the model default options. If set to NULL or empty, the default options will be used. Available options are: NUM_PARALLEL_TREE, TREE_METHOD, COLSAMPLE_BYTREE, MAX_TREE_DEPTH, SUBSAMPLE, L1_REG, L2_REG, EARLY_STOP, MAX_ITERATIONS, DATA_SPLIT_METHOD. More information about the model options can be found here.
- `output_prefix`: STRING destination prefix for the output tables. It must contain the project, dataset and prefix. For example `<my-project>.<my-dataset>.<output-prefix>`.
Output
The procedure will output the following:
- Model: contains the trained model to be used for the revenue prediction. The name of the model includes the suffix `_model`, for example `<my-project>.<my-dataset>.<output-prefix>_model`.
- Shap table: contains a list of the features and their attribution to the model, computed with ML.GLOBAL_EXPLAIN. The name of the table includes the suffix `_model_shap`, for example `<my-project>.<my-dataset>.<output-prefix>_model_shap`.
- Stats table: contains the model stats (mean_error, variance, etc.), computed with ML.EVALUATE. The name of the table includes the suffix `_model_stats`, for example `<my-project>.<my-dataset>.<output-prefix>_model_stats`.
To learn more about how to evaluate the results of your model through the concept of explainability, refer to this article (https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-xai-overview).
Example
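A minimal sketch of a call, assuming the procedure is deployed under the `carto-un` project and `carto` dataset (as with the boundary functions used elsewhere in this module); the table names, output prefix and options JSON are placeholders:

```sql
-- Hypothetical call; assumes BUILD_REVENUE_MODEL_DATA has already been run
-- with the same output prefix.
CALL `carto-un`.carto.BUILD_REVENUE_MODEL(
    -- Model data table produced by BUILD_REVENUE_MODEL_DATA
    '<my-project>.<my-dataset>.<output-prefix>_model_data',
    -- Options JSON; NULL or empty keeps the default options
    '{"MAX_ITERATIONS": 50}',
    -- Destination prefix for the model and its description tables
    '<my-project>.<my-dataset>.<output-prefix>'
);
```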
BUILD_REVENUE_MODEL_DATA
Description
This procedure is the first step of the Revenue Prediction analysis workflow. It prepares the model data to be used in the training and prediction phases by performing the following steps:
- Polyfill the geometry from the area of interest using the grid type and resolution level.
- Enrich the grid cells with the revenue, stores, Data Observatory (DO) variables and custom variables.
- Apply a kring decay function to the enriched DO variables and custom variables. This operation smooths the features for a given cell by taking into account the values of these features in the neighboring cells (defined as those within the specified kring size), applying a scaling factor determined by the decay function of choice (see the illustrative formulation after this list).
- Create the revenue `model_data` table (see the output description for more details).
- Create the revenue `model_data_stats` table (see the output description for more details).
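The exact scaling factors applied by each decay option are implementation details of the Toolbox; purely as an illustration, one common formulation of this kind of kring smoothing is a normalized weighted average of the feature over the neighborhood,

$$
\tilde{x}_i \;=\; \frac{\sum_{j\,:\,d(i,j)\le k} f\bigl(d(i,j)\bigr)\,x_j}{\sum_{j\,:\,d(i,j)\le k} f\bigl(d(i,j)\bigr)},
$$

where $d(i,j)$ is the grid distance between cells $i$ and $j$, $k$ is the kring size, and $f$ is the weighting implied by the chosen decay option: constant for `uniform`, and decreasing with distance for `inverse`, `inverse_square` and `exponential`.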
Input parameters
- `stores_query`: STRING query with variables related to the stores to be used in the model, including their revenue per store (required) and other optional variables. It must contain the columns `revenue` (revenue of the store), `store` (store unique id) and `geom` (the geographical point of the store). The values of these columns cannot be NULL.
- `stores_variables`: ARRAY<STRUCT<variable STRING, aggregation STRING>> list with the columns of the `stores_query` and their corresponding aggregation method (`sum`, `avg`, `max`, `min`, `count`) that will be used to enrich the grid cells. It can be set to NULL.
- `competitors_query`: STRING query with the competitors information to be used in the model. It must contain the columns `competitor` (competitor store unique id) and `geom` (the geographical point of the store).
- `aoi_query`: STRING query with the geography of the area of interest. It must contain a column `geom` with a single area (Polygon or MultiPolygon).
- `grid_type`: STRING type of the cell grid. Supported values are `h3` and `quadkey`.
- `grid_level`: INT64 level or resolution of the cell grid. Check the available h3 levels and quadkey levels.
- `kring`: INT64 size of the kring where the decay function will be applied. This value can be 0, in which case no kring will be computed and the decay function won't be applied.
- `decay`: STRING decay function. Supported values are `uniform`, `inverse`, `inverse_square` and `exponential`. If set to NULL or `''`, `uniform` is used by default.
- `do_variables`: ARRAY<STRUCT<variable STRING, aggregation STRING>> variables of the Data Observatory that will be used to enrich the grid cells and therefore train the revenue prediction model in the subsequent step of the Revenue Prediction workflow. For each variable, its slug and the aggregation method must be provided. Use `default` to use the variable's default aggregation method. Valid aggregation methods are: `sum`, `avg`, `max`, `min`, `count`. The catalog procedure DATAOBS_SUBSCRIPTION_VARIABLES can be used to find available variables, their slugs and default aggregation. It can be set to NULL.
- `do_source`: STRING name of the location where the Data Observatory subscriptions of the user are stored, in `<my-dataobs-project>.<my-dataobs-dataset>` format. If only the `<my-dataobs-dataset>` is included, the project `carto-data` is used by default. It can be set to NULL or `''`.
- `custom_variables`: ARRAY<STRUCT<variable STRING, aggregation STRING>> list with the columns of the `custom_query` and their corresponding aggregation method (`sum`, `avg`, `max`, `min`, `count`) that will be used to enrich the grid cells. It can be set to NULL.
- `custom_query`: STRING query that contains a geography column `geom` and the columns with the custom data that will be used to enrich the grid cells. It can be set to NULL or `''`.
- `output_prefix`: STRING destination prefix for the output tables. It must contain the project, dataset and prefix. For example `<my-project>.<my-dataset>.<output-prefix>`.
Output
The procedure will output two tables:
- Model data table: contains an `index` column with the cell ids and all the enriched columns: `revenue_avg`, `store_count`, `competitor_count`, the `stores_variables` suffixed by aggregation method, DO variables and custom variables. The name of the table includes the suffix `_model_data`, for example `<my-project>.<my-dataset>.<output-prefix>_model_data`.
- Model data stats table: contains the `morans_i` value computed for the `revenue_avg` column, computed with kring 1 and decay `uniform`. The name of the table includes the suffix `_model_data_stats`, for example `<my-project>.<my-dataset>.<output-prefix>_model_data_stats`.
Example
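A minimal sketch of a call, assuming the procedure is deployed under the `carto-un` project and `carto` dataset; all table names, the Data Observatory variable slug and the output prefix are placeholders:

```sql
-- Hypothetical call; every name below is a placeholder.
CALL `carto-un`.carto.BUILD_REVENUE_MODEL_DATA(
    -- stores_query: revenue, store id and location of each store
    'SELECT revenue, store, geom FROM `<my-project>.<my-dataset>.stores`',
    -- stores_variables: no additional store columns used for enrichment
    NULL,
    -- competitors_query
    'SELECT competitor, geom FROM `<my-project>.<my-dataset>.competitors`',
    -- aoi_query: a single Polygon or MultiPolygon
    'SELECT geom FROM `<my-project>.<my-dataset>.area_of_interest`',
    -- grid_type and grid_level
    'h3', 8,
    -- kring size and decay function
    1, 'uniform',
    -- do_variables: placeholder slug; real slugs can be found with DATAOBS_SUBSCRIPTION_VARIABLES
    [STRUCT('<variable-slug>' AS variable, 'sum' AS aggregation)],
    -- do_source
    '<my-dataobs-project>.<my-dataobs-dataset>',
    -- custom_variables and custom_query: not used in this sketch
    NULL, NULL,
    -- output_prefix
    '<my-project>.<my-dataset>.<output-prefix>'
);
```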
COMMERCIAL_HOTSPOTS
Description
This procedure is used to locate hotspot areas by calculating a combined Getis-Ord Gi* statistic using a uniform kernel over several variables. The input data should be in either an H3 or quadkey grid. Variables can be optionally weighted using the `variable_weights` parameter; otherwise, uniform weights are applied. The combined Gi* statistic for each cell is computed by taking into account the neighboring cells within the kring of size `kring`.
Only those cells where the Gi* statistic is significant are returned, i.e., those whose p-value is below the threshold (`pvalue_thresh`) set by the user. Hotspots can be identified as those cells with the highest Gi* values.
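For reference, the standard single-variable Getis-Ord Gi* statistic for a cell $i$ with a uniform (binary) kernel over its kring neighborhood is shown below; how the per-variable statistics are combined using the normalized `variable_weights` is specific to this procedure and not detailed here.

$$
G_i^* \;=\; \frac{\sum_{j} w_{ij}\,x_j \;-\; \bar{X}\sum_{j} w_{ij}}
{S\,\sqrt{\dfrac{n\sum_{j} w_{ij}^{2} - \bigl(\sum_{j} w_{ij}\bigr)^{2}}{n-1}}},
\qquad
\bar{X} = \frac{1}{n}\sum_{j} x_j,
\qquad
S = \sqrt{\frac{1}{n}\sum_{j} x_j^{2} - \bar{X}^{2}},
$$

where $w_{ij} = 1$ if cell $j$ lies within the kring of cell $i$ (including $i$ itself) and $w_{ij} = 0$ otherwise, and $n$ is the total number of cells.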
Input parameters
- `input`: STRING name of the table containing the input data. It should include project and dataset, i.e., follow the format `<project-id>.<dataset-id>.<table-name>`.
- `output`: STRING name of the table where the output data will be stored. It should include project and dataset, i.e., follow the format `<project-id>.<dataset-id>.<table-name>`. If NULL, the procedure will return the output but it will not be persisted.
- `index_column`: STRING name of the column containing the H3 or quadkey indexes.
- `index_type`: STRING type of the input cell indexes. Supported values are `h3` and `quadkey`.
- `variable_columns`: ARRAY<STRING> names of the columns containing the variables to take into account when computing the combined Gi* statistic.
- `variable_weights`: ARRAY<FLOAT64> containing the weights associated with each of the variables. These weights can take any value but will be normalized to sum up to 1. If NULL, uniform weights will be considered.
- `kring`: INT64 size of the kring (distance from the origin). This defines the area around each cell that will be taken into account to compute its Gi* statistic. If NULL, uniform weights will be considered.
- `pvalue_thresh`: threshold for the Gi* value significance, ranging from 0 (most significant) to 1 (least significant). It defaults to 0.05. Cells with a p-value above this threshold won't be returned.
Output
The output will contain the following columns:
- `index`: STRING containing the cell index.
- `combined_gi`: FLOAT64 with the resulting combined Gi*.
- `p_value`: FLOAT64 with the p-value associated with the combined Gi* statistic.
If the output table is not specified when calling the procedure, the result will be returned but it won’t be persisted.
Examples
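The following sketches assume the procedure is available under the `carto-un` project and `carto` dataset; table and column names are placeholders. The first call persists the result to an output table, the second returns it without persisting it (output set to NULL):

```sql
-- Hypothetical call persisting the hotspots to an output table.
CALL `carto-un`.carto.COMMERCIAL_HOTSPOTS(
    '<project-id>.<dataset-id>.<input-table>',
    '<project-id>.<dataset-id>.<output-table>',
    'index',                          -- index_column
    'h3',                             -- index_type
    ['revenue_avg', 'store_count'],   -- variable_columns (placeholders)
    [0.7, 0.3],                       -- variable_weights, normalized to sum up to 1
    1,                                -- kring size
    0.05                              -- pvalue_thresh
);
```

```sql
-- Hypothetical call returning the result without persisting it.
CALL `carto-un`.carto.COMMERCIAL_HOTSPOTS(
    '<project-id>.<dataset-id>.<input-table>',
    NULL,                             -- output: result is returned, not persisted
    'index',
    'quadkey',
    ['revenue_avg'],
    NULL,                             -- uniform variable weights
    2,
    0.01
);
```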
FIND_TWIN_AREAS
Description
Procedure to obtain the twin areas for a given origin location in a target area. The full description of the method, based on Principal Component Analysis (PCA), can be found here.
The output twin areas are those of the target area considered to be the most similar to the origin location, based on the values of a set of variables. Only variables with numerical values are supported. Both origin and target areas should be provided in grid format (h3 or quadkey) of the same resolution. We recommend using the data.GRIDIFY_ENRICH procedure to prepare the data in the format expected by this procedure.
Input
- `origin_query`: STRING query to provide the origin cell (`index` column) and its associated data columns. No NULL values should be contained in any of the data columns provided. The cell can be an h3 or a quadkey index. For quadkey, the value should be cast to STRING (`CAST(index AS STRING)`). Example origin queries are:

  ```sql
  -- When selecting the origin cell from a dataset of gridified data
  SELECT * FROM `<project>.<dataset>.<origin_table>`
  WHERE index_column = <cell_id>
  ```

  ```sql
  -- When the input H3 cell ID is inferred from a (longitude, latitude) pair
  SELECT * FROM `<project>.<dataset>.<origin_table>`
  WHERE ST_INTERSECTS(`carto-un`.carto.H3_BOUNDARY(index_column), ST_GEOGPOINT(<longitude>, <latitude>))
  ```

  ```sql
  -- When the input quadkey cell ID is inferred from a (longitude, latitude) pair
  SELECT * FROM `<project>.<dataset>.<origin_table>`
  WHERE ST_INTERSECTS(`carto-un`.carto.QUADINT_BOUNDARY(index_column), ST_GEOGPOINT(<longitude>, <latitude>))
  ```

  ```sql
  -- When the cell ID is a quadkey and requires to be cast
  SELECT * EXCEPT(index_column), CAST(index_column AS STRING)
  FROM `<project>.<dataset>.<origin_table>`
  ```

- `target_query`: STRING query to provide the target area grid cells (`index` column) and their associated data columns, e.g. `SELECT * FROM <project>.<dataset>.<target_table>`. The data columns should be similar to those provided in the `origin_query`, otherwise the procedure will fail. Grid cells with any NULL values will be excluded from the analysis.
- `index_column`: STRING name of the index column for both the `origin_query` and the `target_query`.
- `pca_explained_variance_ratio`: FLOAT64 with the explained variance retained in the PCA analysis. It defaults to 0.9 if set to NULL.
- `max_results`: INT64 with the maximum number of twin areas returned. If set to NULL, all target cells are returned.
- `output_prefix`: STRING destination and prefix for the output tables. It must contain the project, dataset and prefix: `<project>.<dataset>.<prefix>`.
Output
The procedure outputs the following:
- Twin area model, named `<project>.<dataset>.<prefix>_model`. Please note that the model computation only depends on the `target_query` and therefore the same model can be used if the procedure is re-run for a different `origin_query`. To allow for this scenario in which the model is reused, if the output model already exists, it won't be recomputed. To avoid this behavior, simply choose a different `<prefix>` in the `output_prefix` parameter.
- Results table, named `<project>.<dataset>.<prefix>_<origin_index>_results`, containing in each row the index of the target cells (`index_column`) and its associated `similarity_score` and `similarity_skill_score`. The `similarity_score` corresponds to the distance between the origin and target cell in the Principal Component (PC) scores space; the `similarity_skill_score` for a given target cell t is computed as `1 - similarity_score(t) / similarity_score(<t>)`, where `<t>` is the average target cell, computed by averaging each retained PC score for all the target cells. This `similarity_skill_score` represents a relative measure: the score will be positive if and only if the target cell is more similar to the origin than the mean vector data, with a score of 1 meaning perfect matching or zero distance. Therefore, a target cell with a larger score will be more similar to the origin under this scoring rule.
Example
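A minimal sketch of a call, assuming the procedure is deployed under the `carto-un` project and `carto` dataset; the queries, index column name and output prefix are placeholders:

```sql
-- Hypothetical call; assumes origin and target data share the same grid type,
-- resolution and data columns.
CALL `carto-un`.carto.FIND_TWIN_AREAS(
    -- origin_query: a single origin cell
    'SELECT * FROM `<project>.<dataset>.<origin_table>` WHERE index_column = <cell_id>',
    -- target_query: the target area grid cells
    'SELECT * FROM `<project>.<dataset>.<target_table>`',
    -- index_column shared by both queries
    'index_column',
    -- pca_explained_variance_ratio (NULL would default to 0.9)
    0.9,
    -- max_results (NULL would return all target cells)
    100,
    -- output_prefix
    '<project>.<dataset>.<prefix>'
);
```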
FIND_WHITESPACE_AREAS
Description
This is a postprocessing step that may be used after completing a Revenue Prediction analysis workflow. It allows you to identify cells with the highest potential revenue (whitespaces), while satisfying a series of criteria (e.g. presence of competitors).
It requires as input the model data (output of the BUILD_REVENUE_MODEL_DATA procedure) and the trained model (output of the BUILD_REVENUE_MODEL procedure), as well as a query with points to use as generators for the area of applicability of the model, plus a series of optional filters.
A cell is eligible to be considered a whitespace if it complies with the filtering criteria (minimum revenue, presence of competitors, etc.) and is within the area of applicability of the revenue model provided.
Input parameters
- `revenue_model`: STRING with the fully qualified `model` name.
- `revenue_model_data`: STRING with the fully qualified `model_data` table name.
- `generator_query`: STRING query with the location of a set of generator points as a geography column named `geom`. The algorithm will look for whitespaces in the surroundings of these locations, thus avoiding returning results in locations that are not of interest to the user. Good options to use as generator locations are, for instance, the location of the stores and competitors, or a collection of POIs that are known to drive commercial activity to an area.
- `aoi_query`: STRING query with the geography of the area of interest in which to perform the search. May be NULL, in which case no spatial filter will be applied.
- `minimum_revenue`: FLOAT64 minimum revenue to filter results by. May be NULL, in which case no revenue threshold will be applied.
- `max_results`: INT64 maximum number of results, ordered by decreasing predicted revenue. May be NULL, in which case all eligible cells are returned.
- `with_own_stores`: BOOL specifying whether to consider cells that already have own stores in them. If NULL, defaults to TRUE.
- `with_competitors`: BOOL specifying whether to consider cells that already have competitors in them. If NULL, defaults to TRUE.
Output
The procedure will output a table of cells with the following columns:
- `index`: identifying the H3 or quadkey cell.
- `predicted_revenue_avg`: average revenue of an additional store located in the grid cell.
- `store_count`: number of own stores present in the grid cell.
- `competitor_count`: number of competitors present in the grid cell.
Example
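A minimal sketch of a call, assuming the procedure is deployed under the `carto-un` project and `carto` dataset; the table names, generator query and filter values are placeholders:

```sql
-- Hypothetical call; assumes BUILD_REVENUE_MODEL_DATA and BUILD_REVENUE_MODEL
-- were run with the same output prefix.
CALL `carto-un`.carto.FIND_WHITESPACE_AREAS(
    '<my-project>.<my-dataset>.<output-prefix>_model',
    '<my-project>.<my-dataset>.<output-prefix>_model_data',
    -- generator_query: own stores used as generator points
    'SELECT geom FROM `<my-project>.<my-dataset>.stores`',
    -- aoi_query: NULL applies no spatial filter
    NULL,
    50000,    -- minimum_revenue (placeholder, in the units of the revenue column)
    10,       -- max_results
    FALSE,    -- with_own_stores: skip cells that already contain own stores
    TRUE      -- with_competitors
);
```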
PREDICT_REVENUE_AVERAGE
Description
This procedure is the third and final step of the Revenue Prediction analysis workflow. It predicts the average revenue of an additional store located in the specified grid cell. It requires as input the model data (output of the BUILD_REVENUE_MODEL_DATA procedure) and the trained model (output of the BUILD_REVENUE_MODEL procedure).
Input parameters
- `index`: STRING cell index where the new store will be located. It can be an `h3` or a `quadkey` index. For `quadkey`, the value should be cast to string: `CAST(index AS STRING)`. It can also be `'ALL'`, in which case the predictions for all the grid cells of the model data are returned.
- `revenue_model`: STRING the fully qualified `model` name.
- `revenue_model_data`: STRING the fully qualified `model_data` table name.
- `candidate_data`: STRING the fully qualified `candidate_data` table name. It can be set to NULL.
- `stores_variables`: ARRAY<STRUCT<variable STRING, aggregation STRING>> list with the columns of the `stores_query` and their corresponding aggregation method (`sum`, `avg`, `max`, `min`, `count`) that will be used to enrich the grid cells. It can be set to NULL.
Output
The procedure will output the `index` and the `predicted_revenue_avg` value for the cell, expressed in the same units as the `revenue` column.
Example
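A minimal sketch of a call, assuming the procedure is deployed under the `carto-un` project and `carto` dataset; the table names and prefix are placeholders:

```sql
-- Hypothetical call; 'ALL' returns the prediction for every grid cell
-- of the model data.
CALL `carto-un`.carto.PREDICT_REVENUE_AVERAGE(
    'ALL',
    '<my-project>.<my-dataset>.<output-prefix>_model',
    '<my-project>.<my-dataset>.<output-prefix>_model_data',
    NULL,    -- candidate_data not used in this sketch
    NULL     -- stores_variables not used in this sketch
);
```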

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 960401.