Importing data

CARTO allows you to create geospatial tables by importing local files (from your computer) or files available via URL. The imported data becomes a new geospatial table in your data warehouse. Currently, importing data is available for the following data warehouses:

  • CARTO Data Warehouse

  • Google BigQuery

  • Snowflake

  • Amazon Redshift

  • PostgreSQL

To import data into Amazon Redshift in Self-Hosted environments, you'll first need to configure your own S3 Bucket in Settings.

Once a file is imported, the resulting table can be previewed in Data Explorer and used in Builder and external applications to create maps.

How to import data

To import your data, go to the Data Explorer section and click on the Import data button at the top right (or the import data icon, if you have a connection or folder selected).

Selecting your data

A new dialog will open, allowing you to import your data into any of the available connections. Here you can choose between uploading a local file (from your computer) or importing from a URL. Read more about how each import method works below. Once you have selected your file, click on Continue.

Selecting a destination

The next screen will allow you to set the location and name of the output table. Once you have completed this configuration, click on Save here.

Schema

You can decide how to manage the schema (data types for each column) of the new table. You can let CARTO automatically define the schema, or you can turn it off to manually define a custom schema.

If you let CARTO automatically define the schema, we will make a best effort to infer the data type for each column based on a sample of the table. This is the recommended approach for most cases.

A common case where you'll need to define the schema manually is when dealing with postal codes. Postal codes in certain countries are interpreted as numbers by most systems, but they should be stored as a STRING. If you need more information on defining the schema manually, check Defining a custom schema.
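
Outside of CARTO, the same issue is easy to reproduce with any type-inferring reader. The short pandas sketch below is illustrative only (it is not part of CARTO's importer): automatic inference drops the leading zeros, while forcing a string type, the equivalent of a STRING column in a custom schema, preserves them.

```python
import io
import pandas as pd

csv_data = "city,postal_code\nHoboken,07030\nBoston,02110\n"

# Automatic type inference reads the postal codes as integers,
# silently dropping the leading zeros.
auto = pd.read_csv(io.StringIO(csv_data))
print(auto["postal_code"].tolist())    # [7030, 2110]

# Declaring the column as a string keeps the codes intact.
manual = pd.read_csv(io.StringIO(csv_data), dtype={"postal_code": str})
print(manual["postal_code"].tolist())  # ['07030', '02110']
```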

Confirmation and status

Lastly, review the details of the file you're going to import, and when you're ready, click on Import.

A new dialog will open informing you that the import may take a while to process, and giving you the option to follow the status from a dialog that appears at the top-right corner of the screen.

There are three possible statuses: importing, imported successfully, or dataset creation error.

When an error occurs, you can click on Read more to get more information about the error, or hover over the dataset name or the information icon. You can also click on Clear to clear the list once the imports have finished.

Once your data has been imported, it will be available as a table on your selected connection. Feel free to use Data Explorer to check the preview, or create a map using Builder.

Overwriting files

You can also overwrite existing tables. When you import a file whose table name already exists in the destination folder, a message will appear warning you that the table already exists. Click on Save here to continue and overwrite it, or click on Cancel if you don’t want the changes to be applied.

Write permissions

In order to import data, your connection needs to have write permissions on the chosen destination. If the connection does not have permissions, a warning will appear. Select a new location or click on Cancel if you don’t want the changes to be applied. Read more about why we require certain permissions on connections.

Importing methods

As previously shown, you can import data through two different methods: Local or Remote.

  • Local

This method allows you to upload your data from your computer. To import a local file, select the icon on the left.

  • Remote

This method allows you to enter a supported URL file. To import a remote URL, select the icon on the right.

When you import data via a remote URL, the data is imported only once and will not stay in sync with the remote URL.

Supported formats

Currently, CARTO supports importing CSV, GeoJSON, GeoPackage, KML, KMZ, TAB, Shapefile (in a zip package), and GeoParquet files with at least two columns. The size limit for a single import process is 1 GB. Please get in touch with us if you need a higher limit.
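
If you want to catch obvious problems before starting an upload, you can pre-check files on your side. The sketch below is an illustration only and is not a check performed by CARTO; the extension-to-format mapping is an assumption (for example, whether a plain .json GeoJSON file is accepted).

```python
import os

# Assumed file extensions for the supported formats listed above.
SUPPORTED_EXTS = {".csv", ".geojson", ".gpkg", ".kml", ".kmz", ".tab", ".zip", ".parquet"}
MAX_BYTES = 1 * 1024**3  # 1 GB limit for a single import process

def check_importable(path: str) -> None:
    """Raise ValueError if the file is unlikely to be accepted by the importer."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in SUPPORTED_EXTS:
        raise ValueError(f"unsupported extension: {ext or 'none'}")
    if os.path.getsize(path) > MAX_BYTES:
        raise ValueError("file exceeds the 1 GB single-import limit")

# Example usage:
# check_importable("my_data.geojson")
```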

For CSV files, CARTO will try to autodetect the geometry column or create the geometries from latitude/longitude columns. The supported column names are:

  • For geometry: geom, Geom, geometry, the_geom, wkt, wkb

  • For latitude: latitude, lat, Latitude

  • For longitude: longitude, lon, Lon, Longitude, lng, Lng

The expected delimiters are: comma (,), semicolon (;), or tab.
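
For instance, a point CSV like the one generated below should be picked up automatically, because it uses one of the recognized latitude/longitude column-name pairs and a comma delimiter. This is only a sketch for illustration; any of the column names listed above would work equally well.

```python
import csv

# Points for the importer to turn into geometries. The "latitude"/"longitude"
# headers match the recognized column names listed above; a WKT "geom" column
# would be detected as well.
rows = [
    {"name": "Madrid", "latitude": 40.4168, "longitude": -3.7038},
    {"name": "New York", "latitude": 40.7128, "longitude": -74.0060},
]

with open("points.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "latitude", "longitude"])
    writer.writeheader()    # header row: name,latitude,longitude
    writer.writerows(rows)  # comma-delimited, one of the expected delimiters
```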

For Snowflake and BigQuery connections, the import of Cloud Optimized GeoTIFF raster files is supported. Learn more about the requirements for this format in the corresponding guides for each platform.

Custom schema limitations

As seen in the guide above, you can choose between letting CARTO automatically set the data type for each column (schema) or defining it yourself manually. When defining a custom schema, here are the most important things to take into account:

  • The options available for each column are the native data types of the destination data warehouse (e.g. Google BigQuery, Snowflake, etc.)

  • Object-like data types such as RECORD, STRUCT, ARRAY or OBJECT are not supported.

  • When using a custom schema, the error tolerance will be 0 rows.

To define a custom schema, make sure to disable the Let CARTO automatically define the schema switch.
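
Conceptually, a custom schema is just a column-to-type mapping that you reproduce in the UI once the switch is off. The snippet below is purely illustrative (it is not a payload accepted by CARTO); the type names shown are BigQuery native types, as an example of the destination-specific options you would pick from.

```python
# Illustrative column-to-type mapping for a custom schema (BigQuery types shown).
custom_schema = {
    "postal_code": "STRING",   # keeps leading zeros (see the postal code example above)
    "population": "INT64",
    "area_km2": "FLOAT64",
    "geom": "GEOGRAPHY",
    # Object-like types such as RECORD, STRUCT, ARRAY or OBJECT are not supported,
    # and with a custom schema the error tolerance drops to 0 rows.
}
```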

Error tolerance

By default, CARTO will work with the destination data warehouse to avoid failing an entire import process when only a small subset of rows fails. This means that for a file with 10,000 rows, if one row fails, CARTO will still create a table with the remaining 9,999 rows.

This is the default error tolerance for each destination data warehouse:

  • BigQuery and CARTO Data Warehouse: up to 1,000 rows

  • Snowflake: unlimited rows

  • Redshift: up to 1,000 rows

  • PostgreSQL: 0 rows (importing will fail if a row throws an error)

When using a custom schema, the error tolerance will be 0 rows in all cases. If you need to customize the error tolerance for your imports, consider using our Imports API.
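
As a rough illustration of what moving to the Imports API looks like, the sketch below starts an import programmatically with Python's requests library. The base URL, endpoint path, and request-body fields shown here are assumptions made for illustration, not the documented contract; check the Imports API reference for the actual endpoint and parameters.

```python
import requests

API_BASE = "https://gcp-us-east1.api.carto.com"  # assumption: use your tenant's API base URL
ACCESS_TOKEN = "<OAuth access token>"

# Hypothetical request body; field names are illustrative only.
payload = {
    "connection": "my-bigquery-connection",
    "destination": "my-project.my_dataset.imported_table",
    "url": "https://example.com/data.geojson",
}

response = requests.post(
    f"{API_BASE}/v3/imports",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json())  # usually an import job you can poll until it finishes
```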

Region-related errors

If your data warehouse is hosted in a different cloud region than your CARTO SaaS organization, imports might fail. This depends on limitations and setup in each data warehouse and cloud provider, and these limitations are not set by CARTO.

Deleting data

In the Data Explorer section of the Workspace, you can view the list of your current data warehouse connection(s) and Data Observatory subscriptions. You can access the quick actions menu to manage your data by clicking on the “three dots” icon in the top-right corner. Different options are available depending on whether the selected item is a table or a tileset.

If you click the Delete quick action, a dialog will appear asking you to confirm that you want to delete the selected table or tileset. It includes information about data sources, layers, applications, and API calls related to the existing dataset that could be affected by the action. Click the Yes, delete button to confirm the changes, or click Cancel if you don’t want the changes to be applied.
