Migrating your content to the new CARTO platform

In 2021 we launched a new version of the CARTO platform, designed to connect seamlessly with leading cloud data warehouses. The new CARTO Cloud-native platform delivers unparalleled performance, scalability, and innovative features such as CARTO Workflows and AI Agents, plus an enhanced CARTO Builder experience and improved developer tools.

We encourage all CARTO users to explore and use the new platform, as the Legacy version will be progressively retired, with due notice.

This guide covers how to migrate your content (data, maps, and applications) from legacy accounts to the new version of the CARTO platform.

Considerations

How do I know if I'm using a Legacy version of CARTO?
  • For the Legacy version of CARTO, the URL will typically be something like my_username.carto.com, and it can be accessed via https://carto.com/login.

  • When using the new CARTO platform, the URL will be app.carto.com, and it can be accessed via https://app.carto.com.

If you already access CARTO through app.carto.com and all your content is there, no need to worry: you’re already using the new platform and no migration is required.

I’m using the Legacy version, what content do I need to migrate and why?

Technically speaking, there are many substantial differences between the legacy platform and the new cloud-native platform.

For example, CARTO Legacy uses a fully managed PostgreSQL database where you had to upload data in order to use it, while CARTO Cloud-native uses a live connection to your preferred data warehouse: BigQuery, Snowflake, Redshift, Databricks or PostgreSQL.

Another example is the mapping engine: our legacy platform uses Leaflet, whereas the new CARTO platform uses modern vector rendering techniques on top of deck.gl.

Consequently, the data and content generated in the Legacy platform needs to be migrated manually, following the steps described in the sections below.

What if I don’t have a data warehouse of my own?

No problem! CARTO provides a built-in data warehouse for every organization, no setup required. The included CARTO Data Warehouse is highly scalable and performant, though less flexible than a warehouse you manage yourself. In the long run, you should consider using your own data warehouse.

Accessing your new CARTO organization

To start your migration process, let’s make sure you have a valid organization in the new CARTO. The legacy and the new platform use separate login credentials, so you might need to sign up first.

  • If you already have an organization in the new CARTO, just log in via app.carto.com.

  • If you don’t have an organization in the new CARTO, create a new 14-day trial via app.carto.com/signup. Then, email your CARTO representative or our Support team (support@carto.com) with your details and our team will complete the setup for your new organization as needed.

You should now see the CARTO Workspace — we’re ready!

Migrating data

There are several methods to migrate your data, and the best one for you will depend on your code expertise, the amount of data in your legacy CARTO account, and the data warehouse you plan to migrate to.

Method 1: UI-based export and import

For small amounts of data, the recommended way is to simply use the User Interface (UI) of both platforms to export and import your data:

  1. Download your data from the legacy CARTO platform in any of these formats: CSV, SHP, KML, GeoJSON, or GeoPackage.

  2. Upload your data to the new CARTO platform: use the new Import functionality to upload the legacy files into the desired data warehouse connection.

Method 2: Connect directly to your legacy CARTO DB and import using your own data warehouse

If you’re comfortable with data warehouse ETLs and you have larger volumes of data, you can create an external connection to your CARTO 2 DB to access and create a backup copy of your data:

  1. Exporting your backup: Use pg_dump to create a SQL file containing the data from your DB. Alternatively, you can also use CARTOframes to export the data using Python.

  2. Importing to your data warehouse: Use your preferred library to copy this data into your data warehouse.

    1. BigQuery: Restore the backup to a PostgreSQL instance and export it as CSV/JSON (for example with psql’s \copy command), then load it into BigQuery via the web UI, the bq CLI, or the Data Transfer Service. Ensure geometry columns are converted to WKT format.

    2. Snowflake: Export the data to CSV (for example with psql’s \copy command) and stage it in an S3 bucket or a Snowflake stage. Use the COPY INTO command to load the data.

    3. Redshift: Restore the dump to a PostgreSQL instance, then use AWS DMS or COPY commands with CSV files staged in S3 to load data into Redshift.

    4. Databricks: Export the data to CSV/Parquet, upload it to a cloud storage location (e.g., S3 or Azure Blob), and use Databricks' spark.read to ingest it. Geometry columns may need transformation to WKT/WKB.

    5. PostgreSQL: Use pg_restore with the --dbname flag to directly import the dump into another PostgreSQL database.
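As a scripted alternative to pg_dump for step 1, the legacy SQL API can export a table directly as CSV. A minimal Node.js sketch; the username, table name, and API key below are placeholders for your own values:

```javascript
// Export a table from a legacy CARTO account as CSV via the legacy SQL API.
// buildExportUrl assembles the legacy endpoint:
//   https://<username>.carto.com/api/v2/sql?q=<query>&format=csv&api_key=<key>
const buildExportUrl = (username, table, apiKey) => {
  const query = encodeURIComponent(`SELECT * FROM ${table}`);
  return `https://${username}.carto.com/api/v2/sql?q=${query}&format=csv&api_key=${apiKey}`;
};

// Fetch the CSV contents (Node 18+ provides a global fetch).
async function exportTable(username, table, apiKey) {
  const response = await fetch(buildExportUrl(username, table, apiKey));
  if (!response.ok) throw new Error(`Export failed: ${response.status}`);
  return response.text(); // CSV, ready to write to disk
}

// Usage (placeholders):
// exportTable('my_username', 'my_table', 'my-api-key')
//   .then(csv => require('node:fs').writeFileSync('my_table.csv', csv));
```

The resulting CSV files can then be loaded with the warehouse-specific methods above.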

Method 3: Connect directly to your legacy CARTO DB and import using the CARTO Imports API

Lastly, for those with larger volumes of data who are less comfortable with data warehouse ETLs (or who are using the included CARTO Data Warehouse), you can create an external connection to your CARTO 2 DB to access and create a backup copy of your data, and then import it using our Imports API:

  1. Exporting your backup: Use pg_dump to create a SQL file containing the data from your DB. Alternatively, you can also use CARTOframes to export the data using Python.

  2. Convert it to CSV: Restore the dump to a PostgreSQL instance and export CSV with psql’s \copy command (for pg_dump backups), or use a script (for Python exports) to convert your backup into CSV files.

  3. Upload it to an object storage bucket (S3, GCS, Azure Blob…): Use your preferred bucket system and expose the files using public or signed URLs.

  4. Using the Imports API to import your backup: The Imports API is a REST API that allows you to import data from a URL into your own data warehouse, leveraging your CARTO connection. It performs the necessary checks and optimizations for visualization and analysis using CARTO. This is an example of an import job using our Imports API:

Example using the Imports API
const myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");
myHeaders.append("Authorization", "Bearer your-API-access-token");


const raw = JSON.stringify({
  "connection": "your-connection-name",
  "url": "https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1005861/2b.__FC_-_Over_25k_-_June_21.csv",
  "destination": "project.dataset.table",
  "overwrite": false,
  "autoguessing": true,
  "schema": {
    "columns": [
      {
        "name": "geom",
        "type": "GEOGRAPHY",
        "nullable": true
      },
      {
        "name": "cartodb_id",
        "type": "INT64",
        "nullable": true
      },
      {
        "name": "lat",
        "type": "FLOAT64",
        "nullable": true
      },
      {
        "name": "lon",
        "type": "FLOAT64",
        "nullable": true
      },
      {
        "name": "pais",
        "type": "STRING",
        "nullable": true
      },
      {
        "name": "pais_long",
        "type": "STRING",
        "nullable": true
      },
      {
        "name": "iso3",
        "type": "STRING",
        "nullable": true
      },
      {
        "name": "continente",
        "type": "STRING",
        "nullable": true
      },
      {
        "name": "subregion",
        "type": "STRING",
        "nullable": true
      },
      {
        "name": "iso2",
        "type": "STRING",
        "nullable": true
      },
      {
        "name": "capital",
        "type": "STRING",
        "nullable": true
      },
      {
        "name": "iso_n3",
        "type": "STRING",
        "nullable": true
      },
      {
        "name": "poblacion",
        "type": "STRING",
        "nullable": true
      }
    ],
    "options": [
      {
        "name": "STRING"
      },
      {
        "name": "BYTES"
      },
      {
        "name": "INT64"
      },
      {
        "name": "NUMERIC"
      },
      {
        "name": "BIGNUMERIC"
      },
      {
        "name": "FLOAT64"
      },
      {
        "name": "BOOL"
      },
      {
        "name": "DATE"
      },
      {
        "name": "DATETIME"
      },
      {
        "name": "TIME"
      },
      {
        "name": "TIMESTAMP"
      },
      {
        "name": "JSON"
      },
      {
        "name": "GEOGRAPHY"
      }
    ]
  }
});


const requestOptions = {
  method: 'POST',
  headers: myHeaders,
  body: raw,
  redirect: 'follow'
};


fetch("https://gcp-us-east1.api.carto.com/v3/imports", requestOptions)
  .then(response => response.text())
  .then(result => console.log(result))
  .catch(error => console.error('error', error));

Please note that the Imports API is not yet available for Databricks connections.
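The POST above responds with a job id rather than importing synchronously, so you will usually want to poll the job until it completes. A minimal sketch of that polling loop; the GET endpoint and the status values shown here are assumptions based on the common job-polling pattern, so check the Imports API reference for the exact response shape:

```javascript
// Poll an Imports API job until it finishes.
// API_BASE matches the region used in the example above; adjust for yours.
const API_BASE = 'https://gcp-us-east1.api.carto.com/v3/imports';

// Build the status URL for a given job id (assumed endpoint shape).
const statusUrl = jobId => `${API_BASE}/${jobId}`;

async function waitForImport(jobId, accessToken, intervalMs = 5000) {
  for (;;) {
    const response = await fetch(statusUrl(jobId), {
      headers: { Authorization: `Bearer ${accessToken}` }
    });
    const job = await response.json();
    // Status names are illustrative; verify against the API reference.
    if (job.status === 'success') return job;
    if (job.status === 'failed') throw new Error('Import job failed');
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
}
```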

Re-creating maps

Once your data is in the new CARTO, you can re-create your maps using the new CARTO Builder. This is a manual process that can’t be automated. Here are some tips and considerations to guide you through building your new maps:

  1. Get familiar with the new CARTO Builder using our documentation and the tutorials in the CARTO Academy.

  2. Use the brand new CARTO Workflows if you want to edit or transform your data.

  3. Performance with smaller amounts of data can be similar, but the new CARTO is significantly more powerful for larger volumes of data.

  4. If you were using custom basemaps, you’ll need to first register them in Settings > Customizations > Basemaps.

  5. If you re-use color palettes across multiple maps, you can store them in Settings > Customizations > Palettes.

  6. The new maps do not need to exactly match the old ones. Consider using new functionality such as SQL Parameters, split view, 3D views, and more. Likewise, some features in the legacy CARTO might not have an exact equivalent in the new platform, and you might need to choose a different solution.

Once you’re comfortable with your new map, make sure to review the title and description, and share it with your team or with the world! And don’t forget: if you hit a roadblock in the map re-building process, our team (support@carto.com) will be there to help you.

Migrating applications

If you were running custom applications using our legacy tools, you’ll need to migrate them to use our new stack. The new CARTO offers a complete set of tools for developers, based on our new REST APIs and deck.gl, a modern WebGL/WebGPU-based JavaScript library to visualize data.

This migration can bring a positive impact: you’ll have access to more modern tools, built on top of your data warehouse (potentially removing components and ETL processes), and designed for security and scalability.

When facing an application migration project, these are the steps we recommend taking:

  1. Get familiar with CARTO + deck.gl: Explore our documentation and examples to understand how applications are built in the new platform.

  2. Get familiar with the CARTO APIs: Explore our new APIs to unlock additional functionality such as imports or geocoding.

  3. Get familiar with authentication mechanisms in the new CARTO: From public applications to large-scale integrations using SSO, the new CARTO offers a flexible set of authentication strategies for developers.

  4. Make a list of the functionalities and technologies in your current application that use CARTO: in order to design and scope the migration effort, review the features that are currently using CARTO, along with the technologies used.

  5. Choose a replacement for each of the functionalities/technologies: while this should be done on a case-by-case basis, this table provides general guidelines you can follow:

| Functionality | Legacy Technology | Replace with… |
| --- | --- | --- |
| Database | CARTODB | Your own cloud data warehouse (BigQuery, Snowflake, Redshift, Databricks or PostgreSQL), connected using CARTO |
| JavaScript-based rendering of maps, layers, symbols, popups, tooltips, etc. | CARTO.js, CARTO VL | CARTO + deck.gl |
| Styling data-driven visualizations | CartoCSS | CARTO + deck.gl |
| SQL API for querying data | CARTO SQL API | CARTO SQL API |
| Python notebooks | CARTOframes | CARTO + Python |
| Use maps from CARTO Builder in a custom application | Named maps, viz.json | fetchMap in CARTO + deck.gl |
| Build applications in CARTO without exposing SQL client-side | Named maps | CARTO APIs with access tokens |
  6. Migrate your codebase: follow the plan and take the opportunity to achieve even better performance, add new features, and load even more types of data in your application, such as raster or spatial indexes.

Each migration project will be a unique process. If you have questions or need guidance, our support team (support@carto.com) will be happy to help you.
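As an illustration of one replacement in the table above, querying data moves from the legacy SQL API to the new CARTO SQL API, which runs queries through a named connection. A minimal sketch; the region prefix and connection name are placeholders for your own deployment:

```javascript
// Query a data warehouse through the new CARTO SQL API.
// The URL shape is /v3/sql/<connection>/query?q=<sql>, authorized
// with a Bearer access token.
const queryUrl = (connection, sql) =>
  `https://gcp-us-east1.api.carto.com/v3/sql/${connection}/query?q=${encodeURIComponent(sql)}`;

async function runQuery(connection, sql, accessToken) {
  const response = await fetch(queryUrl(connection, sql), {
    headers: { Authorization: `Bearer ${accessToken}` }
  });
  if (!response.ok) throw new Error(`Query failed: ${response.status}`);
  return response.json(); // rows plus metadata
}

// Usage (placeholders):
// runQuery('carto_dw', 'SELECT * FROM mydataset.mytable LIMIT 10', token)
//   .then(result => console.log(result));
```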

Conclusions

The new CARTO platform is an important leap into the future of cloud-native geospatial analysis. Content from previous legacy versions needs to be migrated manually due to the substantial differences in each version’s technical approach. During the migration, you will be in charge of defining what your new maps, workflows, and applications will look like in the new platform.

If you’ve already migrated your data, maps, and applications

Then… Congrats! You’re fully migrated and ready to experience the new CARTO.

FAQs

Can I start using the new CARTO without migrating?

Totally! Migrating is optional: you only need it if you want to preserve your old datasets, maps, and applications by bringing them into the new platform.

Moreover, our recommendation is that you start using the new CARTO right away for all upcoming projects, even if the migration isn’t fully finished.

Why can’t CARTO do this migration automatically?

The new CARTO and the legacy versions have different technical approaches, user interfaces, and feature sets. That is why during the migration process you will need to make a considerable number of decisions to define how your data, maps, and applications will exist in the new CARTO. We can’t make those decisions for you, and therefore we can’t migrate your content automatically.
