statistics
This module contains functions to perform spatial statistics calculations.
P_VALUE
Description
This function computes the p-value (two-tailed test) of a given z-score, assuming the population follows a normal distribution with mean 0 and standard deviation 1. The z-score measures how many standard deviations below or above the population mean a value is, giving an idea of how far from the mean a data point lies. The p-value is the probability that a randomly sampled point has a value at least as extreme as the point whose z-score is being tested.
z_score
:FLOAT64
the z-score whose two-tailed p-value is computed.
Return type
FLOAT64
Example
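A minimal sketch of a call, assuming the module is deployed as `carto-un`.carto (adjust the qualifier to your own installation):

SELECT `carto-un`.carto.P_VALUE(1.96);
-- two-tailed p-value for z = 1.96, approximately 0.05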
KNN_TABLE
Description
This procedure returns, for each point in a given set of points, its k-nearest neighbors.
input
:STRING
the query to the data used to compute the KNN. A qualified table name can be given as well: <project-id>.<dataset-id>.<table-name>.
output_table
:STRING
qualified name of the output table: <project-id>.<dataset-id>.<table-name>.
geoid_col
:STRING
name of the column with unique ids.
geo_col
:STRING
name of the column with the geometries.
k
:INT64
number of nearest neighbors (positive, typically small).
Output
The results are stored in the table named <output_table>, which contains the following columns:
geo
:GEOGRAPHY
the geometry of the considered point.
geo_knn
:GEOGRAPHY
the k-nearest neighbor point.
geoid
:STRING
the unique identifier of the considered point.
geoid_knn
:STRING
the unique identifier of the k-nearest neighbor.
distance
:FLOAT64
the distance from the considered point to its k-nearest neighbor.
knn
:INT64
the k-order (knn).
Example
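A hypothetical invocation; the `carto-un`.carto qualifier, the source table my-project.my-dataset.points and its geoid/geo columns are placeholders:

CALL `carto-un`.carto.KNN_TABLE(
  'SELECT geoid, geo FROM `my-project.my-dataset.points`',  -- input query
  'my-project.my-dataset.knn_results',                      -- output table to be created
  'geoid',  -- unique id column
  'geo',    -- geometry column
  10        -- k
);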
KNN
Description
This function returns, for each point in a given set of points, its k-nearest neighbors.
points
:ARRAY<STRUCT<geoid STRING, geo GEOGRAPHY>>
input data with unique id and geography.
k
:INT64
number of nearest neighbors (positive, typically small).
Return type
ARRAY<STRUCT<geo GEOGRAPHY, geo_knn GEOGRAPHY, geoid STRING, geoid_knn STRING, distance FLOAT64, knn INT64>>
where:
geo
: the geometry of the considered point.
geo_knn
: the k-nearest neighbor point.
geoid
: the unique identifier of the considered point.
geoid_knn
: the unique identifier of the k-nearest neighbor.
distance
: the distance from the considered point to its k-nearest neighbor.
knn
: the k-order (knn).
Example
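A sketch with the same placeholder table and qualifier; the input array is built with ARRAY_AGG and the returned array is unnested back into rows:

SELECT knn.*
FROM UNNEST((
  SELECT `carto-un`.carto.KNN(ARRAY_AGG(STRUCT(geoid, geo)), 10)
  FROM `my-project.my-dataset.points`
)) AS knn;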
LOF_TABLE
Description
This procedure computes the Local Outlier Factor (LOF) for each point in a specified column and stores the result in an output table along with the other input columns.
src_fullname
:STRING
The input table. A STRING of the form project-id.dataset-id.table-name is expected. The project-id can be omitted (in which case the default one will be used).
target_fullname
:STRING
The resulting table where the LOF will be stored. A STRING of the form project-id.dataset-id.table-name is expected. The project-id can be omitted (in which case the default one will be used). The dataset must exist and the caller needs to have permissions to create a new table in it. The process will fail if the target table already exists.
geoid_column_name
:STRING
The column name with a unique identifier for each point.
geo_column_name
:STRING
The column name containing the points.
lof_target_column_name
:STRING
The column name where the resulting Local Outlier Factor will be stored in the output table.
k
:INT64
Number of nearest neighbors (positive, typically small).
Output
The results are stored in the table named <output_table>, which contains the following columns:
geo
:GEOGRAPHY
the geometry of the considered point.
geoid
:STRING
the unique identifier of the considered point.
lof
:FLOAT64
the Local Outlier Factor score.
Example
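A hypothetical call; the qualifier, table names and column names are placeholders. Note that the target table must not already exist:

CALL `carto-un`.carto.LOF_TABLE(
  'my-project.my-dataset.points',      -- source table
  'my-project.my-dataset.points_lof',  -- target table (must not exist)
  'geoid',  -- unique id column
  'geo',    -- geometry column
  'lof',    -- name for the output LOF column
  10        -- k
);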
LOF
Description
This function computes the Local Outlier Factor of each point of a given set of points.
points
:ARRAY<STRUCT<geoid STRING, geo GEOGRAPHY>>
input data points with unique id and geography.
k
:INT64
number of nearest neighbors (positive, typically small).
Return type
ARRAY<STRUCT<geo GEOGRAPHY, geoid STRING, lof FLOAT64>>
where:
geo
: the geometry of the considered point.
geoid
: the unique identifier of the considered point.
lof
: the Local Outlier Factor score.
Example
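A sketch with placeholder names, following the same ARRAY_AGG/UNNEST pattern as KNN:

SELECT lof.*
FROM UNNEST((
  SELECT `carto-un`.carto.LOF(ARRAY_AGG(STRUCT(geoid, geo)), 10)
  FROM `my-project.my-dataset.points`
)) AS lof;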
GFUN_TABLE
Description
This procedure computes the G-function of a given set of points.
input
:STRING
the query to the data used to compute the G-Function. A qualified table name can be given as well: <project-id>.<dataset-id>.<table-name>.
output_table
:STRING
qualified name of the output table: <project-id>.<dataset-id>.<table-name>.
geo_col
:STRING
name of the column with the geometries.
Output
The results are stored in the table named <output_table>, which contains the following columns:
distance
:FLOAT64
the nearest neighbors distances.
gfun_G
:FLOAT64
the empirical G evaluated for each distance in the support.
gfun_ev
:FLOAT64
the theoretical Poisson G evaluated for each distance in the support.
Example
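A hypothetical call with placeholder qualifier and table names:

CALL `carto-un`.carto.GFUN_TABLE(
  'SELECT geo FROM `my-project.my-dataset.points`',  -- input query
  'my-project.my-dataset.gfun_results',              -- output table
  'geo'                                              -- geometry column
);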
GFUN
Description
This function computes the G-function of a given set of points.
points
:ARRAY<GEOGRAPHY>
input data points.
Return type
ARRAY<STRUCT<distance FLOAT64, gfun_G FLOAT64, gfun_ev FLOAT64>>
where:
distance
: the nearest neighbors distances.
gfun_G
: the empirical G evaluated for each distance in the support.
gfun_ev
: the theoretical Poisson G evaluated for each distance in the support.
Example
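A sketch with placeholder names; the input is a plain array of geographies:

SELECT gf.*
FROM UNNEST((
  SELECT `carto-un`.carto.GFUN(ARRAY_AGG(geo))
  FROM `my-project.my-dataset.points`
)) AS gf;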
CREATE_SPATIAL_COMPOSITE_SUPERVISED
Description
This procedure derives a spatial composite score as the residuals of a regression model, which is used to detect areas of under- and over-prediction. The response variable should be measurable and correlated with the set of variables defining the score. For each data point, the residual is defined as the observed value minus the predicted value. Rows with a NULL value in any of the individual variables are dropped.
Input parameters
input_query
:STRING
the query to the data used to compute the spatial composite. It must contain all the individual variables that should be included in the computation of the composite as well as a unique geographic id for each row. A qualified table name can be given as well, e.g. 'project-id.dataset-id.table-name'.
index_column
:STRING
the name of the column with the unique geographic identifier.
output_prefix
:STRING
the prefix for the output table. It should include project and dataset, e.g. 'project-id.dataset-id.table-name'.
options
:STRING
containing a valid JSON with the different options. Valid options are described below.
model_transform
:STRING
containing the TRANSFORM clause in a BigQuery ML CREATE MODEL statement. If NULL, no TRANSFORM clause is applied.
model_options
:JSON
with the different options allowed by the BigQuery ML CREATE MODEL statement for regression models. Any model is allowed as long as it can deal with numerical inputs for the response variable. At least the INPUT_LABEL_COLS and MODEL_TYPE parameters must be specified. By default, data will not be split into train and test (DATA_SPLIT_METHOD = 'NO_SPLIT'). Hyperparameter tuning is not currently supported.
r2_thr
:FLOAT64
the minimum allowed value for the R2 model score. If the R2 of the regression model is lower than this threshold, this implies poor fitting and a warning is raised. The default value is 0.5.
bucketize_method
:STRING
the method used to discretize the spatial composite score. The default value is NULL. Possible options are:
EQUAL_INTERVALS_ZERO_CENTERED: the values of the spatial composite score are discretized into buckets of equal widths centered in zero. The lower and upper limits are derived from the outliers-removed maximum of the absolute values of the score.
nbuckets
:INT64
the number of buckets used when a bucketization method is specified. The default number of buckets is selected using Freedman and Diaconis’s (1981) rule. Ignored if bucketize_method is not specified.
remove_outliers
:BOOL
when bucketize_method is specified, if remove_outliers is set to TRUE the buckets are derived from the outlier-removed data. The outliers are computed using Tukey’s fences k parameter for outlier detection. The default value is TRUE. For large inputs, setting this option to TRUE might cause a "Query exceeds CPU resources" error. Ignored if bucketize_method is not specified.
Return type
The results are stored in the table named <output_prefix>, which contains the following columns:
index_column
: the unique geographic identifier. The type of this column depends on the type of index_column in input_query.
spatial_score
: the value of the composite score. The type of this column is FLOAT64 if the score is not discretized and INT64 otherwise.
When the score is discretized by specifying the bucketize_method parameter, the procedure also returns a lookup table named <output_prefix>_lookup_table with the following columns:
lower_bound
:FLOAT64
the lower bound of the bin.
upper_bound
:FLOAT64
the upper bound of the bin.
spatial_score
:INT64
the value of the (discretized) composite score.
Example
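A hypothetical call; the qualifier, tables, index column, and response variable (revenue) are placeholders. The options are passed as a JSON string; model_options must at least set MODEL_TYPE and INPUT_LABEL_COLS:

CALL `carto-un`.carto.CREATE_SPATIAL_COMPOSITE_SUPERVISED(
  'SELECT * FROM `my-project.my-dataset.input`',  -- input query
  'geoid',                                        -- unique geographic id
  'my-project.my-dataset.composite_supervised',   -- output prefix
  '''{
    "model_options": {"MODEL_TYPE": "LINEAR_REG", "INPUT_LABEL_COLS": ["revenue"]},
    "bucketize_method": "EQUAL_INTERVALS_ZERO_CENTERED"
  }'''
);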
CREATE_SPATIAL_COMPOSITE_UNSUPERVISED
Description
This procedure combines (spatial) variables into a meaningful composite score. The composite score can be derived using different methods, scaling and aggregation functions and weights. Rows with a NULL value in any of the model predictors are dropped.
Input parameters
input_query
:STRING
the query to the data used to compute the spatial composite. It must contain all the individual variables that should be included in the computation of the composite as well as a unique geographic id for each row. A qualified table name can be given as well, e.g. 'project-id.dataset-id.table-name'.
index_column
:STRING
the name of the column with the unique geographic identifier.
output_prefix
:STRING
the prefix for the output table. It should include project and dataset, e.g. 'project-id.dataset-id.table-name'.
options
:STRING
containing a valid JSON with the different options. Valid options are described below. If options is set to NULL then all options are set to default values, as specified in the table below.
scoring_method
:STRING
Possible options are ENTROPY, CUSTOM_WEIGHTS, FIRST_PC. With the ENTROPY method the spatial composite is derived as the weighted sum of the proportion of the min-max scaled individual variables, where the weights are based on the entropy of the proportion of each variable. Only numerical variables are allowed. With the CUSTOM_WEIGHTS method, the spatial composite is computed by first scaling each individual variable and then aggregating them according to user-defined scaling and aggregation methods and individual weights. Depending on the scaling parameter, both numerical and ordinal variables are allowed (categorical and boolean variables need to be transformed to ordinal). With the FIRST_PC method, the spatial composite is derived from a Principal Component Analysis as the first principal component score. Only numerical variables are allowed.
weights
:STRUCT
the (optional) weights for each variable used to compute the spatial composite when scoring_method is set to CUSTOM_WEIGHTS, passed as {"name":value, …}. If a different scoring method is selected, then this input parameter is ignored. If specified, the sum of the weights must be lower than 1. If no weights are specified, equal weights are assumed. If weights are specified only for some variables and the sum of weights is less than 1, the remainder is distributed equally between the remaining variables. If weights are specified for all the variables and the sum of weights is less than 1, the remainder is distributed equally between all the variables.
scaling
:STRING
the user-defined scaling when the scoring_method is set to CUSTOM_WEIGHTS. Possible options are:
MIN_MAX_SCALER: data is rescaled into the range [0,1] based on minimum and maximum values. Only numerical variables are allowed.
STANDARD_SCALER: data is rescaled by subtracting the mean value and dividing the result by the standard deviation. Only numerical variables are allowed.
RANKING: data is replaced by its percent rank, that is by values ranging from 0 (lowest) to 1. Both numerical and ordinal variables are allowed (categorical and boolean variables need to be transformed to ordinal).
DISTANCE_TO_TARGET_MIN (_MAX, _AVG): data is rescaled by dividing by the minimum, maximum, or mean of all the values. Only numerical variables are allowed.
PROPORTION: data is rescaled by dividing by the sum total of all the values. Only numerical variables are allowed.
aggregation
:STRING
the aggregation function used when the scoring_method is set to CUSTOM_WEIGHTS. Possible options are:
LINEAR: the spatial composite is derived as the weighted sum of the scaled individual variables.
GEOMETRIC: the spatial composite is given by the product of the scaled individual variables, each to the power of its weight.
correlation_var
:STRING
when scoring_method is set to FIRST_PC, the spatial score will be positively correlated with the selected variable (i.e. the sign of the spatial score is set such that the correlation between the selected variable and the first principal component score is positive).
correlation_thr
:FLOAT64
the minimum absolute value of the correlation between each individual variable and the first principal component score when scoring_method is set to FIRST_PC.
return_range
:ARRAY<FLOAT64>
the user-defined normalization range of the spatial composite score, e.g. [0.0,1.0]. Ignored if bucketize_method is specified.
bucketize_method
:STRING
the method used to discretize the spatial composite score. Possible options are:
EQUAL_INTERVALS: the values of the spatial composite score are discretized into buckets of equal widths.
QUANTILES: the values of the spatial composite score are discretized into buckets based on quantiles.
JENKS: the values of the spatial composite score are discretized into buckets obtained using k-means clustering.
nbuckets
:INT64
the number of buckets used when a bucketization method is specified. When bucketize_method is set to EQUAL_INTERVALS, if nbuckets is NULL, the default number of buckets is selected using Freedman and Diaconis’s (1981) rule. When bucketize_method is set to JENKS or QUANTILES, nbuckets cannot be NULL. When bucketize_method is set to JENKS, the maximum value is 100, i.e. the maximum number of clusters allowed by BigQuery k-means clustering.
| Option | ENTROPY | CUSTOM_WEIGHTS | FIRST_PC | Valid options | Default value |
|---|---|---|---|---|---|
| scoring_method | Optional | Optional | Optional | ENTROPY, CUSTOM_WEIGHTS, FIRST_PC | ENTROPY |
| weights | Ignored | Optional | Ignored | - | NULL |
| scaling | Ignored | Optional | Ignored | MIN_MAX_SCALER, STANDARD_SCALER, RANKING, DISTANCE_TO_TARGET_MIN, DISTANCE_TO_TARGET_MAX, DISTANCE_TO_TARGET_AVG, PROPORTION | MIN_MAX_SCALER |
| aggregation | Ignored | Optional | Ignored | LINEAR, GEOMETRIC | LINEAR |
| correlation_var | Ignored | Optional | Mandatory | - | NULL |
| correlation_thr | Ignored | Optional | Optional | - | NULL |
| return_range | Optional | Optional | Optional | - | NULL |
| bucketize_method | Optional | Optional | Optional | EQUAL_INTERVALS, QUANTILES, JENKS | NULL |
| nbuckets | Optional | Optional | Optional | - | When bucketize_method is EQUAL_INTERVALS, the Freedman-Diaconis rule; otherwise NULL |
Return type
The results are stored in the table named <output_prefix>, which contains the following columns:
index_column
: the unique geographic identifier. The type of this column depends on the type of index_column in input_query.
spatial_score
: the value of the composite score. The type of this column is FLOAT64 if the score is not discretized and INT64 otherwise.
When the score is discretized by specifying the bucketize_method parameter, the procedure also returns a lookup table named <output_prefix>_lookup_table with the following columns:
lower_bound
:FLOAT64
the lower bound of the bin.
upper_bound
:FLOAT64
the upper bound of the bin.
spatial_score
:INT64
the value of the (discretized) composite score.
Examples
With the ENTROPY method:
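A hypothetical call with placeholder qualifier and names; all numerical variables in the input query are combined with entropy-based weights and the score is discretized into five quantile buckets:

CALL `carto-un`.carto.CREATE_SPATIAL_COMPOSITE_UNSUPERVISED(
  'SELECT * FROM `my-project.my-dataset.input`',
  'geoid',
  'my-project.my-dataset.composite_entropy',
  '''{
    "scoring_method": "ENTROPY",
    "bucketize_method": "QUANTILES",
    "nbuckets": 5
  }'''
);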
With the CUSTOM_WEIGHTS method:
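A hypothetical call; var1 and var2 stand for variables in the input query. Since the given weights sum to 0.7, the remaining 0.3 is distributed equally among the other variables, per the weights rule above:

CALL `carto-un`.carto.CREATE_SPATIAL_COMPOSITE_UNSUPERVISED(
  'SELECT * FROM `my-project.my-dataset.input`',
  'geoid',
  'my-project.my-dataset.composite_custom',
  '''{
    "scoring_method": "CUSTOM_WEIGHTS",
    "weights": {"var1": 0.4, "var2": 0.3},
    "scaling": "RANKING",
    "aggregation": "LINEAR"
  }'''
);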
With the FIRST_PC method:
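A hypothetical call; var1 is a placeholder for the variable the score should correlate positively with:

CALL `carto-un`.carto.CREATE_SPATIAL_COMPOSITE_UNSUPERVISED(
  'SELECT * FROM `my-project.my-dataset.input`',
  'geoid',
  'my-project.my-dataset.composite_pca',
  '''{
    "scoring_method": "FIRST_PC",
    "correlation_var": "var1",
    "correlation_thr": 0.6
  }'''
);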
CRONBACH_ALPHA_COEFFICIENT
Description
This procedure computes Cronbach’s alpha coefficient for a set of (spatial) variables. This coefficient can be used as a measure of internal consistency or reliability of the data, based on the strength of correlations between individual variables. Cronbach’s alpha normally ranges between 0 and 1, although there is actually no lower limit to the coefficient. A higher alpha (closer to 1) indicates higher internal consistency, a lower alpha (closer to 0) lower consistency, with 0.65 usually taken as the minimum acceptable value. Rows with a NULL value in any of the individual variables are dropped.
Input parameters