Data Sources
Data sources, or simply sources in CARTO for React, are objects that represent the source of the data that is going to be visualized with layers or displayed and filtered with widgets. They need to include spatial information, either using traditional geometry/geography representations or geospatial indexes (H3 / Quadbins).
The source objects include the following properties:
connection. Name of the connection to the data warehouse in the Workspace. Not used for CARTO 2.
credentials. Specific credentials to use for this source. We usually define global credentials in the initialState slice, but sometimes we need to retrieve data sources using different credentials.
data. Table name, SQL query or static tileset name.
id. Unique identifier.
type. Type of data source: it can be a data warehouse table (MAP_TYPES.TABLE), an SQL query that is executed in the data warehouse to retrieve data (MAP_TYPES.QUERY), or a static tileset that has been pre-generated in the data warehouse (MAP_TYPES.TILESET). MAP_TYPES.TABLE is not supported for CARTO 2.
geoColumn and aggregationExp. To support spatial indexes; see below.
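For reference, a source created by the code generator is a plain object like the following (the connection name and query are placeholders):

```js
import { MAP_TYPES } from '@deck.gl/carto';

const STORES_SOURCE_ID = 'storesSource';

const source = {
  id: STORES_SOURCE_ID,
  type: MAP_TYPES.QUERY,
  connection: 'carto_dw', // placeholder connection name
  data: `SELECT * FROM carto-demo-data.demo_tables.retail_stores`, // placeholder query
};

export default source;
```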
Filtering and Linking Layers and Widgets
CARTO for React applications synchronize the different components (map, layers, widgets…) through a centralized store managed by Redux Toolkit.
The data sources link layers with widgets in order to filter the visualization or the information displayed in the widgets.
When we filter a data source using a widget (e.g. selecting a category in a CategoryWidget), what we are doing is adding this filter to the data source in the Redux store. Then, the layers that use this data source react to the change and apply the same filter to the features displayed on the map.
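This is roughly what happens under the hood when a widget applies a filter; a minimal sketch using the addFilter / removeFilter actions, assuming a storesSource source with a hypothetical storetype column:

```js
import { useDispatch } from 'react-redux';
import { addFilter, removeFilter } from '@carto/react-redux';
import { FilterTypes } from '@carto/react-core';
import storesSource from 'data/sources/storesSource';

// Adding a filter to the source in the store; every layer and widget
// linked to the same source id reacts to this change.
export function useCategoryFilter() {
  const dispatch = useDispatch();

  const filterByCategories = (categories) =>
    dispatch(
      addFilter({
        id: storesSource.id,
        column: 'storetype', // hypothetical column
        type: FilterTypes.IN,
        values: categories,
        owner: 'myCategoryWidget', // hypothetical widget id
      })
    );

  const clearFilter = () =>
    dispatch(removeFilter({ id: storesSource.id, column: 'storetype' }));

  return { filterByCategories, clearFilter };
}
```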
The same approach is followed to implement the viewport mode in the widgets. When the user changes the view state by zooming or panning, the viewState object in the store is updated and the widgets react to the change and update their calculations.
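For reference, any component can read the shared view state directly from the store; a minimal sketch assuming the default carto slice:

```js
import { useSelector } from 'react-redux';

// The shared view state lives in the carto slice of the Redux store;
// widgets in viewport mode recalculate whenever it changes.
export function useCurrentZoom() {
  return useSelector((state) => state.carto.viewState.zoom);
}
```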
This is also used to implement the FeatureSelectionWidget. In this case, when the user completes the shape, the geometry is added to the spatialFilter object in the store. The widgets are updated to take into account the new filter, and deck.gl applies a mask to the visualization to hide all the features outside the shape drawn by the user.
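A minimal sketch of this flow, assuming CARTO for React 1.4+ with the FeatureSelectionWidget component and the selectSpatialFilter selector:

```jsx
import React from 'react';
import { useSelector } from 'react-redux';
import { FeatureSelectionWidget } from '@carto/react-widgets';
import { selectSpatialFilter } from '@carto/react-redux';

// Render the drawing tools, e.g. in the toolbar of a view.
export function DrawingTools() {
  return <FeatureSelectionWidget />;
}

// Read back the geometry drawn by the user (undefined when nothing is drawn).
export function useDrawnGeometry() {
  return useSelector((state) => selectSpatialFilter(state));
}
```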
Dropping features
Since deck.gl 8.8, the CartoLayer only works with vector tiles, both dynamic and static. Vector tiles are designed with visualization in mind: geometries are simplified depending on the zoom level and features are discarded in order to be able to visualize large datasets quickly. This means that feature dropping can happen at lower zoom levels, when there are too many features in the same tile or when two or more points fall on the same tile coordinates.
In the case of dynamic tiling, the widget calculations are decoupled from the data used for visualization. The widgets make a request to the API to get the data, so the calculations are accurate even if there is feature dropping in the tiles.
For static tilesets, it is not possible to request these calculations from the API; they must be performed locally using the features in the tiles. This means that, for those zoom levels where feature dropping is happening, the widgets linked to the same source will not display results, because we don't have all the information needed to make the calculations locally in the browser.
If you are working with a large dataset and aggregating data at lower zoom levels is not an issue, another solution to avoid feature dropping is to use spatial indexes, like H3 or Quadbins, instead of traditional geometries. With this solution, no features are dropped and performance is good at all zoom levels, because the spatial index resolution is adapted to the current zoom level. Spatial indexes are supported with both dynamic and static tiles.
Document mode
If we have a table or a query result that is not large (< 50 MB), it is still possible to download the full dataset to the browser and avoid requests for new tiles when the viewport is updated. We need to fetch the data in GeoJSON format and then use the GeoJsonLayer instead of the CartoLayer.
We can create a memoized data promise to fetch the data in GeoJSON format using the fetchLayerData function from the CARTO module for deck.gl. Then, we create a GeoJsonLayer with the data property pointing to the promise, and we extract the actual FeatureCollection from the response when it arrives using the dataTransform callback.
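A minimal sketch of this approach (the query and connection name are placeholders):

```js
import { useMemo } from 'react';
import { useSelector } from 'react-redux';
import { GeoJsonLayer } from '@deck.gl/layers';
import { fetchLayerData, FORMATS, MAP_TYPES } from '@deck.gl/carto';

export default function SmallDatasetLayer() {
  const credentials = useSelector((state) => state.carto.credentials);

  // Memoize the promise so the dataset is downloaded only once per credentials change.
  const dataPromise = useMemo(
    () =>
      fetchLayerData({
        type: MAP_TYPES.QUERY,
        source: 'SELECT * FROM carto-demo-data.demo_tables.retail_stores', // placeholder query
        connection: 'carto_dw', // placeholder connection name
        format: FORMATS.GEOJSON,
        credentials,
      }),
    [credentials]
  );

  return new GeoJsonLayer({
    id: 'smallDatasetLayer',
    data: dataPromise,
    // fetchLayerData resolves to { data, format }; keep only the FeatureCollection
    dataTransform: (res) => res.data,
    pointRadiusMinPixels: 2,
  });
}
```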
This solution will only work if the dataset is of small to medium size. The larger the dataset, the more powerful the computer you need to render all the features client-side. You can look at the total number of points/vertices in your dataset, but you need to take into consideration that rendering polygons is more computationally expensive than rendering lines, and rendering lines is more expensive than rendering points. Please read the Performance Considerations section in the User Manual for additional information.
Spatial Indexes and custom geometry column names
CARTO for React >= 1.3 includes support for data sources using spatial indexes from discrete global grid systems, in addition to datasets using traditional geometries. The CARTO platform is compatible with datasets using H3 and Quadbins (an improved encoding of quadkeys) indexes. You can add a source (table, query or static tileset) where there is a column using a spatial index and then use this source for layers and widgets.
In order for the source to be correctly processed and visualized, the name of the column containing the spatial index must be 'h3' for H3 or 'quadbin' for Quadbins. Then you need to define the geoColumn and aggregationExp properties in the CartoLayer as explained in the layer guide.
If you create the source using the code generator, it expects to find a column named geom in the data containing traditional geometries. Specifying a custom geometry column name is also required for table/query data sources when the column containing the geometry is named differently than the default value of 'geom'. Then you need to define the geoColumn and aggregationExp properties in the source as follows:
geoColumn. The table/query must contain a column with the spatial index for each feature. You need to set this property to 'h3' when using H3 or 'quadbin' when using Quadbins. You can also use this property to specify a column name different than the default 'geom' for table/query sources that contain regular geometries instead of a spatial index.
aggregationExp. When we are visualizing a data source that is using spatial indexes, the data is aggregated at different resolutions depending on the zoom level. We need to set this property to define how the attributes are aggregated. A source definition using these properties is shown below.
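For example, a source pointing to a hypothetical table with an h3 column and a population attribute could be defined like this:

```js
import { MAP_TYPES } from '@deck.gl/carto';

const source = {
  id: 'populationH3Source',
  type: MAP_TYPES.TABLE,
  connection: 'carto_dw', // placeholder connection name
  data: 'carto-demo-data.demo_tables.population_h3res8', // placeholder table
  geoColumn: 'h3', // column containing the H3 index
  aggregationExp: 'SUM(population) as population', // how attributes are aggregated per cell
};

export default source;
```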
As explained above, spatial indexes are a powerful solution when you are dealing with very large datasets. In addition to providing a way to visualize those datasets, some spatial join operations, like intersections, become regular joins. Spatial join operations can be quite expensive to compute when we are dealing with datasets with dozens or hundreds of millions of features; using spatial indexes, we can execute these operations in a fraction of the time.
For additional information, check the Aggregated Grids section in the User Manual.
Dynamic Sources
Sometimes you want a source to be dynamic, so the data property is updated as a reaction to the user actions. For instance, you can point the data source to a different table or tileset, or you can add a condition to a SQL query.
The right way to update the data property is to dispatch the action to add the source to the store with the new data property, but using the same id to ensure the existing source is updated.
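A minimal sketch, assuming a storesSource source file and a hypothetical storetype column to filter by:

```js
import { useDispatch } from 'react-redux';
import { addSource } from '@carto/react-redux';
import storesSource from 'data/sources/storesSource';

// Dispatching addSource with the same id but a new `data` property
// updates the existing source; linked layers and widgets refresh automatically.
export function useStoreTypeSource() {
  const dispatch = useDispatch();

  return (storeType) =>
    dispatch(
      addSource({
        ...storesSource,
        data: `SELECT * FROM carto-demo-data.demo_tables.retail_stores
               WHERE storetype = '${storeType}'`, // placeholder query
      })
    );
}
```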
We need to take into account that the updated source keeps the new value for the data property only as long as we stay in the same view, if we are adding the static source in a useEffect hook, as happens with the automatic code created by the code generator. If we switch to another view and then come back to the view where we updated the data property, the source will use again the data property value from the source file.
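This is approximately what the views created by the code generator do; the useEffect runs again every time the view mounts, re-adding the source with the data value defined in the source file:

```js
import { useEffect } from 'react';
import { useDispatch } from 'react-redux';
import { addSource, removeSource } from '@carto/react-redux';
import storesSource from 'data/sources/storesSource';

export default function StoresView() {
  const dispatch = useDispatch();

  useEffect(() => {
    // Re-added on every mount, so any `data` update dispatched while the view
    // was open is lost when we navigate away and come back.
    dispatch(addSource(storesSource));
    return () => dispatch(removeSource(storesSource.id));
  }, [dispatch]);

  return null; // layout and widgets omitted
}
```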