Importing data

CARTO allows you to create geospatial tables by importing local files (from your computer) or files accessible via URL. Each import creates a new geospatial table in your data warehouse. Importing data is currently available for the following data warehouses:

  • CARTO Data Warehouse

  • BigQuery

  • Snowflake

  • PostgreSQL

  • Redshift

To import data into Amazon Redshift in Self-Hosted environments, you'll first need to configure your own S3 Bucket in Settings.

Once a file is imported, the resulting table can be previewed in Data Explorer and used in Builder and external applications to create maps.

How to import data

To import your data, go to the Data Explorer section and click on the Import data button at the top right (or the import data icon, if you have a connection or folder selected).

Selecting your data

A new dialog will open, allowing you to import your data into the available connections. You can choose between uploading a local file (from your computer) or importing from a URL (see Importing methods below). Once you have selected your file, click on Continue.

Selecting a destination

The next screen will allow you to set the location and name of the output table. Once you have completed this configuration, click on Save here.

Schema

You can decide how to manage the schema (the data type of each column) of the new table: either let CARTO define it automatically, or turn that option off to define a custom schema manually.

If you let CARTO define the schema automatically, it will make a best effort to infer the data type of each column from a sample of the table. This is the recommended approach for most cases.

A common case where you'll need to define the schema manually is postal codes. Postal codes in certain countries are interpreted as numbers by most systems, but they should be stored as a STRING, otherwise leading zeros are lost. If you need more information on defining the schema manually, check Defining a custom schema.
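To see the problem in action, here is a minimal illustrative sketch using pandas (the postal_code column name and values are hypothetical, and this runs outside the CARTO import flow):

import io
import pandas as pd

csv_data = io.StringIO("city,postal_code\nBoston,02116\n")

# Default type inference reads the postal code as an integer, dropping the leading zero
inferred = pd.read_csv(csv_data)
print(inferred["postal_code"][0])  # 2116

csv_data.seek(0)

# Forcing a string type preserves the original value, matching a STRING column in the schema
typed = pd.read_csv(csv_data, dtype={"postal_code": str})
print(typed["postal_code"][0])  # 02116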

Confirmation and status

Lastly, review the details of the file you're going to import, and when you're ready, click on Import.

A new dialog will open, informing you that the import may take a while to process and giving you the option to follow its status from a box that appears at the top right corner of the screen.

There are three possible statuses: importing, imported successfully, or dataset creation error.

When an error occurs, you can click on Read more, or hover over the dataset name or the information icon, to get more information about the error. You can also click on Clear to clear the list once the imports have finished.

Once your data has been imported, it will be available as a table on your selected connection. Feel free to use Data Explorer to check the preview, or create a map using Builder.

Overwriting files

You can also overwrite existing files. When you import a file with an existing name, a message will warn you that the table already exists in the destination folder. Click on Save here to continue and overwrite it, or click on Cancel if you don't want the changes to be applied.

Write permissions

In order to import data, your connection needs write permissions on the chosen destination. If it does not, a warning will appear: select a new location, or click on Cancel if you don't want the changes to be applied. Read more about why we require certain permissions on connections.

Importing methods

As previously shown, you can import data through two different methods: Local or Remote.

  • Local

This method allows you to upload data from your computer. To import a local file, select the icon on the left.

  • Remote

This method allows you to enter the URL of a supported file. To import from a remote URL, select the icon on the right.

When you import data via a remote URL, the data is imported only once and will not stay in sync with the remote URL.

Supported formats

Currently, CSV, GeoJSON, GeoPackage, KML, KMZ, TAB, Shapefile (in a zip package), and GeoParquet files with at least two columns are supported. The size limit for a single import process is 1 GB; please get in touch with us if you need a higher limit.
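If you script your uploads, a quick local check can avoid a failed import. Below is a minimal sketch based on the limits described above; the extension-to-format mapping and the file path are assumptions for illustration:

import os

# Assumed extensions for the supported formats listed above (zipped Shapefiles and TAB files arrive as .zip)
SUPPORTED_EXTENSIONS = {".csv", ".geojson", ".gpkg", ".kml", ".kmz", ".zip", ".parquet"}
MAX_SIZE_BYTES = 1 * 1024 ** 3  # 1 GB limit per import process

def check_importable(path):
    ext = os.path.splitext(path)[1].lower()
    if ext not in SUPPORTED_EXTENSIONS:
        raise ValueError(f"Unsupported format: {ext}")
    if os.path.getsize(path) > MAX_SIZE_BYTES:
        raise ValueError("File exceeds the 1 GB import limit")

check_importable("my_data.geojson")  # hypothetical file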

TIP 💡

When working with zipped TAB files or Shapefiles, use the sozip utility that ships with GDAL (3.7+) to generate a seek-optimized ZIP file that is much faster to import. For example:

sozip --optimize-from=size_big_2.zip output.shp.zip

Read more about sozip.

For CSV files, CARTO will try to autodetect the geometry column or build geometries from latitude/longitude columns. The recognized column names are:

  • For geometry: geom, Geom, geometry, the_geom, wkt, wkb

  • For latitude: latitude, lat, Latitude

  • For longitude: longitude, lon, Lon, Longitude, lng, Lng

The expected delimiters are comma (,), semicolon (;), or tab.
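If your CSV uses coordinate columns that CARTO will not recognize, you can rename them before importing. A minimal pandas sketch (the file names and original column names are hypothetical):

import pandas as pd

# Read a semicolon-delimited source file (comma, semicolon, and tab are all accepted by CARTO)
df = pd.read_csv("raw_points.csv", sep=";")

# Rename coordinate columns to names from the recognized lists above
df = df.rename(columns={"y_coord": "latitude", "x_coord": "longitude"})

# Write a comma-delimited CSV ready for import
df.to_csv("ready_for_import.csv", index=False)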

For Snowflake and BigQuery connections, importing Cloud Optimized GeoTIFF (COG) raster files is supported. Learn more about the requirements to import COG files into your data warehouse in this section.

Custom schema limitations

As seen in the guide above, you can choose between letting CARTO automatically set the data type for each column (the schema) or defining it yourself manually. When defining a custom schema, keep the following in mind:

  • The options available for each column are the native data types of the destination data warehouse (e.g. Google BigQuery, Snowflake...)

  • Object-like data types such as RECORD, STRUCT, ARRAY, or OBJECT are not supported.

  • When using a custom schema, the error tolerance is 0 rows (see Error tolerance below).

To define a custom schema, make sure to disable the Let CARTO automatically define the schema switch.

Error tolerance

By default, CARTO works with the destination data warehouse to avoid failing an entire import process when only a small subset of rows fails. For example, for a file with 10,000 rows where one row fails, CARTO will still create a table with the remaining 9,999 rows.

This is the default error tolerance for each destination data warehouse:

  • BigQuery and CARTO Data Warehouse: up to 1,000 rows

  • Snowflake: unlimited rows

  • Redshift: up to 1,000 rows

  • PostgreSQL: 0 rows (the import will fail if any row throws an error)

When using a custom schema, the error tolerance is 0 rows in all cases. If you need to customize the error tolerance for your imports, consider using the Imports API.
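For reference, here is a hypothetical sketch of a programmatic import via the Imports API; the base URL, token, and payload fields shown are assumptions for illustration, so check Managing Credentials and the CARTO APIs Reference for the actual endpoint and request shape:

import requests

API_BASE_URL = "https://gcp-us-east1.api.carto.com"  # assumed: your organization's API Base URL
ACCESS_TOKEN = "..."                                 # assumed: an API Access Token

# Assumed request shape: import a remote file into a named connection
response = requests.post(
    f"{API_BASE_URL}/v3/imports",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"url": "https://example.com/data.geojson", "connection": "my-connection"},
)
response.raise_for_status()
print(response.json())  # import job details, including its status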

Region-related errors

If your data warehouse is located in a different cloud region than your CARTO SaaS organization, imports might fail. This depends on limitations and setup in each data warehouse and cloud provider; these restrictions are not set by CARTO.

  • To check your CARTO SaaS region, go to Settings.

  • In Self-Hosted deployments, this problem can be avoided by setting up your own import buckets in the correct region.

  • In SaaS deployments, you can solve this problem for Amazon Redshift connections by customizing your S3 import bucket.

Deleting data

In the Data Explorer section of the Workspace, you can view the list of your current data warehouse(s) and Data Observatory subscriptions. To manage your data, open the quick actions menu by clicking on the “three dots” icon in the top-right corner. Different options are available depending on whether the item is a table or a tileset.

If you click the Delete quick action, a dialog will appear asking you to confirm that you want to delete the selected table or tileset. It includes information about data sources, layers, applications, and API calls related to the dataset that could be affected by the action. Click the Yes, delete button to confirm, or click Cancel if you don't want the changes to be applied.
