Builder
What methods can I use to create a map layer?
To add a data source to a map as a new layer, you can either:
- Pick a table or tileset from one of your active connections to cloud data warehouses.
- Add data resulting from a custom SQL query. You can also leverage the SQL functions available in CARTO’s Analytics Toolbox (see the sketch after this list).
- Import data from a local or remote file. We currently support GeoJSON, Shapefile (in a zip package), and CSV files, and we’re working to support more formats in the future.
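As an illustration, here is a minimal sketch of a custom SQL query that could serve as a layer source, assuming a BigQuery connection; the `my-project.my_dataset.stores` table and its columns are hypothetical placeholders:

```sql
-- Hypothetical example: shape and filter the data that will feed the new layer.
-- Table and column names are placeholders; adapt them to your own connection.
SELECT
  store_id,
  revenue,
  geom
FROM `my-project.my_dataset.stores`
WHERE revenue > 100000
```

Builder sends this query directly to the connected data warehouse, so any SQL supported by that warehouse can be used to define the layer.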
How can I run spatial analyses in Builder?
To run spatial analyses in Builder, you can use the SQL Editor, which is accessible when adding a data source as a custom query. The SQL Editor lets you execute SQL commands directly in your cloud data warehouse (e.g., BigQuery, Snowflake, etc.), taking advantage of the full capabilities of the platform, including the functions and operations available there. You can also leverage UDFs from the Analytics Toolbox for enhanced spatial analysis, as in the sketch below.

While the SQL Editor is ideal for simple analyses or for using SQL Parameters, for more complex or multi-step analyses we recommend using Workflows. Workflows enable you to perform detailed, step-by-step analyses and save the results as a materialized table, which can then be used as a source in Builder. This approach provides greater flexibility and scalability for more advanced spatial analysis tasks.
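For example, a sketch of the kind of query you might run in the SQL Editor, assuming a BigQuery connection with the Analytics Toolbox available under the `carto-un` project; the source table and column names are placeholders:

```sql
-- Hypothetical example: aggregate points into H3 cells with an Analytics Toolbox UDF.
-- The `carto-un` location of the Analytics Toolbox and the source table are assumptions;
-- adjust them to match your own deployment.
SELECT
  `carto-un`.carto.H3_FROMGEOGPOINT(geom, 8) AS h3,
  COUNT(*) AS n_points
FROM `my-project.my_dataset.signals`
GROUP BY h3
```

The result of a query like this can be styled directly in Builder, for instance as an H3 aggregation layer.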
How does the export mechanism from Builder work?
The export mechanism in Builder leverages the built-in export functionality of the connected cloud data warehouse to handle and process data. This ensures efficient export of datasets, aligning with the performance optimization strategies of the underlying platform.
When using BigQuery specifically, the export process stores data in a Google Cloud Storage (GCS) bucket. For performance and scalability, BigQuery splits the exported data into multiple smaller files rather than a single large file. This behavior is expected and is a result of BigQuery’s internal strategies to parallelize the export jobs for optimal performance.
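For context, this is the general BigQuery export pattern that produces multiple output files; it is only a hedged illustration, not Builder’s exact internal statement, and the bucket, path, and query are placeholders:

```sql
-- Illustration of BigQuery's native export: the wildcard (*) in the URI is what
-- lets BigQuery write the result as several smaller files in parallel.
EXPORT DATA OPTIONS (
  uri = 'gs://my-export-bucket/builder-export/result-*.csv',
  format = 'CSV',
  overwrite = true,
  header = true
) AS
SELECT *
FROM `my-project.my_dataset.stores`;
```

The split into multiple files is therefore a property of BigQuery’s parallel export, not of Builder itself; the resulting files can be downloaded individually or concatenated afterwards.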