Google BigQuery


CARTO can connect to your BigQuery Data Warehouse, allowing you to use your data for building Maps, Workflows and custom applications. There are three ways to set up a connection to Google BigQuery.

Recommended methods:

  • Sign in with Google: Connect your own Google account and use all the Google BigQuery permissions you have access to, with the option of enforcing viewer credentials. This method is ideal if you want to use in CARTO exactly the same permissions you have in your BigQuery console. This method is also called OAuth 2.0.

  • Workload Identity Federation: Leverage CARTO identities directly in Google Cloud Platform, with permissions granted via IAM to a Workload Identity Pool previously configured by your GCP administrators. This method is ideal if you want to use granular, restricted permissions exclusively for CARTO.

Other methods

  • Service Account: These are a set of credentials (a key in JSON format) generated in Google Cloud, representing a set of permissions for a database or a project and not associated with an individual. This is usually a quicker and more flexible solution for testing, but the other methods are a more secure strategy for production environments.

  • In all methods you will need to indicate a billing project. All queries performed by CARTO will use the billing account associated with the selected billing project. We recommend you review the different BigQuery pricing models, and more importantly, configure specific limits in BigQuery to avoid any unexpected charges.

CARTO is a fully cloud-native platform that runs queries on your behalf to power maps, workflows, etc. We never create or maintain any copies of your data. Learn more about what it means to be fully cloud native.

Please make sure that your credentials (regardless of the method used) have the necessary permissions for CARTO to run. For more information, see Required BigQuery permissions.

Using OAuth 2.0

To connect CARTO and BigQuery using your Google account, simply click the Continue with Google button. This will open a Google login flow that will request the necessary scopes for CARTO to connect to your BigQuery data.

After allowing CARTO to access your Google BigQuery data, you will see a form where you'll specify the remaining details for this connection:

  • Name: This will be the name used to identify this connection across CARTO. It needs to be unique and follow these format rules: 3-50 characters long, containing only lowercase letters and numbers. Dashes and underscores are allowed if they're not leading or trailing (see the validation sketch below).

  • Billing project: All queries performed by CARTO will run against this Google Cloud Platform project, and its associated billing account.
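If you script the setup of your connections, it can help to validate the name before submitting the form. The following is a minimal sketch of the naming rule above; the helper name and the exact regular expression are our own illustration, not part of any CARTO SDK.

```python
# Hypothetical helper illustrating the connection-name rule described above.
import re

# 3-50 characters, lowercase letters and numbers, with dashes/underscores
# allowed only in non-leading, non-trailing positions.
_CONNECTION_NAME = re.compile(r"[a-z0-9][a-z0-9_-]{1,48}[a-z0-9]")

def is_valid_connection_name(name: str) -> bool:
    return _CONNECTION_NAME.fullmatch(name) is not None

assert is_valid_connection_name("bigquery-prod")
assert not is_valid_connection_name("-bigquery-prod")  # leading dash
assert not is_valid_connection_name("BQ")              # uppercase and too short
```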

Please make sure your Google account has adequate permissions for CARTO, at least at the billing project level. Learn more at Required BigQuery permissions.

When using OAuth-based connections (such as this "Continue with Google" method), you might be asked to reconnect at any time, for example after a few months or after changing your password. This happens because this type of connection is linked to your Google account's consent to CARTO, which you can revoke at any moment.

Using Workload Identity Federation

CARTO can connect to BigQuery by leveraging Workload Identity Federation.

Initial setup required

To connect to BigQuery using this method, the organization admin must first set up a Workload Identity Federation integration in CARTO. Once this is done, it will be available to all users within the organization. Read more about setting up a Workload Identity Federation for BigQuery integration.

To use it in your connections (and after your administrator has completed the integration), simply click Connect with Workload Identity Federation in the connection method selection screen. A new connection form will appear, and you'll need to complete the setup for your connection:

  • Name: This will be the name used to identify this connection across CARTO. It needs to be unique and there are special format rules: 3-50 characters long, containing only lowercase letters and numbers. Dashes and underscores are allowed if they're not leading or trailing.

  • IAM Principal: This is the identity that this connection will use in Google Cloud. Use this IAM principal directly in Google Cloud to grant permissions to this connection (see the sketch below).

    • Service Account email for impersonation (optional): if you want to impersonate a Service Account using this connection, enter the service account email here. This assumes the IAM Principal has permissions to impersonate the service account.

  • Billing project: All queries performed by CARTO will run against this Google Cloud Platform project, and its associated billing account.

Please make sure that your Workload Identity Federation IAM Principal has adequate permissions in Google BigQuery for CARTO, at least at the billing project level. Learn more at Required BigQuery permissions. If you're not sure, check with the Google Cloud administrator in your organization that integrated CARTO via Workload Identity Federation.
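As an illustration of granting those roles, the sketch below uses the Cloud Resource Manager API from Python to bind the recommended BigQuery roles to the IAM Principal shown in the form. The project ID and principal string are placeholders you would replace with your own values; granting the roles from the Google Cloud console or gcloud works just as well.

```python
# Minimal sketch: bind the recommended BigQuery roles to the connection's IAM Principal.
# Assumptions: Application Default Credentials with permission to edit IAM on the
# billing project; PRINCIPAL is the exact IAM Principal value copied from the CARTO form.
from googleapiclient import discovery

PROJECT_ID = "my-billing-project"                  # placeholder
PRINCIPAL = "principal://iam.googleapis.com/..."   # copy the value shown by CARTO

crm = discovery.build("cloudresourcemanager", "v1")

# Read-modify-write of the project IAM policy.
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()
bindings = policy.setdefault("bindings", [])
for role in ("roles/bigquery.user", "roles/bigquery.dataEditor"):
    binding = next((b for b in bindings if b["role"] == role), None)
    if binding is None:
        binding = {"role": role, "members": []}
        bindings.append(binding)
    if PRINCIPAL not in binding["members"]:
        binding["members"].append(PRINCIPAL)
crm.projects().setIamPolicy(resource=PROJECT_ID, body={"policy": policy}).execute()
```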

Using a Service Account

If you select Connect using a Service Account, you'll see a form where you'll specify the details for this connection:

  • Name: This will be the name used to identify this connection across CARTO. It needs to be unique and there are special format rules: 3-50 characters long, containing only lowercase letters and numbers. Dashes and underscores are allowed if they're not leading or trailing.

  • Service account key: The credentials file in JSON format. Please read the following instructions to learn how to create a service account and a key file in Google Cloud. A quick way to verify the key is sketched below.

  • Billing project: All queries performed by CARTO will run against this Google Cloud Platform project, and its associated billing account.

Please make sure the Service Account has adequate permissions for CARTO, at least at the billing project level. Learn more at Required BigQuery permissions.
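Before pasting the key into CARTO, you may want to confirm that it can actually run jobs against the chosen billing project. This is a minimal sketch using the google-cloud-bigquery client; the file name and project ID are placeholders.

```python
# Minimal sketch: verify the service account key can run a query in the billing project.
# Assumptions: credentials.json is the key you will upload to CARTO, and
# "my-billing-project" is the billing project you will select in the form.
from google.cloud import bigquery
from google.oauth2 import service_account

creds = service_account.Credentials.from_service_account_file(
    "credentials.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)

# A trivial query exercises bigquery.jobs.create on the billing project,
# one of the minimum permissions CARTO needs (see Required BigQuery permissions below).
client = bigquery.Client(project="my-billing-project", credentials=creds)
print(list(client.query("SELECT 1 AS ok").result()))
```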

Requiring Viewer Credentials

Connections to Google BigQuery using both OAuth and Workload Identity Federation can be set up to require viewer credentials. This means that when the connection is shared, other users trying to access it will have to provide their own credentials to use it, instead of using the credentials (token) of the user that created the connection.

When you share Workload Identity Federation connections in "Requiring Viewer Credentials" mode, users will automatically use their own identity and do not need to provide any additional information.

For more information, see Requiring viewing credentials for shared connections.

Required BigQuery permissions

For each area + resource combination, connection credentials must have at least the “Minimum permissions” to work. Some optional features may require additional permissions to work as expected.

For the best experience in CARTO, we advise you to set up the "Recommended roles":

  • bigquery.dataEditor

  • bigquery.user

When creating the connection, CARTO will check that you have a minimum set of permissions that will allow the connection to operate with CARTO. These checks are performed at the Billing-project level.

You can then granularly specify a different set of permissions for each resource. For example, the connection could have edit permissions on some tables but read-only access on others. Please note that you can give limited and granular permissions to resources in completely different projects than the billing project. We call this the Resource level.

You can also check our (more generic) guide about why CARTO requires each permission, with examples on setting different connections for different teams.

Billing-project level

CARTO requires the following permissions at the billing project to connect to BigQuery:

  • Billing project (as specified in the connection)
    • Recommended roles: bigquery.dataEditor, bigquery.user
    • Minimum permissions required: bigquery.jobs.list, bigquery.jobs.create, resourcemanager.projects.get
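If you want to verify these minimum permissions yourself, one option is to call testIamPermissions on the billing project with the connection's credentials. The sketch below does this with the Cloud Resource Manager API; the key file and project ID are placeholders, and the same check can be run with whichever identity your connection uses.

```python
# Minimal sketch: check which of the minimum billing-project permissions the
# connection's credentials actually hold. Placeholders: credentials.json, my-billing-project.
from google.oauth2 import service_account
from googleapiclient import discovery

REQUIRED = [
    "bigquery.jobs.list",
    "bigquery.jobs.create",
    "resourcemanager.projects.get",
]

creds = service_account.Credentials.from_service_account_file(
    "credentials.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
crm = discovery.build("cloudresourcemanager", "v1", credentials=creds)

# testIamPermissions returns only the subset of permissions the caller holds.
granted = crm.projects().testIamPermissions(
    resource="my-billing-project",
    body={"permissions": REQUIRED},
).execute().get("permissions", [])

print("Missing permissions:", sorted(set(REQUIRED) - set(granted)) or "none")
```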

Resource level

CARTO requires the following permissions for each BigQuery resource (project, dataset, or table) in order to operate with it:

  • Listing projects, datasets and tables in CARTO (Data Explorer)
    • Recommended roles: bigquery.dataEditor, bigquery.user
    • Minimum permissions required: resourcemanager.projects.get, resourcemanager.projects.list, bigquery.tables.list

  • Projects, datasets, and tables used for map visualization (Builder)
    • Recommended roles: bigquery.dataEditor, bigquery.user
    • Minimum permissions required: bigquery.jobs.create, bigquery.tables.list

  • Projects, datasets, and tables used for spatial analysis (Workflows)
    • Recommended roles: bigquery.dataEditor, bigquery.user
    • Minimum permissions required: bigquery.jobs.create, bigquery.jobs.list, bigquery.tables.list, bigquery.tables.create, bigquery.datasets.create, bigquery.datasets.get

  • Projects, datasets, and tables used in custom applications (CARTO for Developers)
    • Recommended roles: bigquery.dataEditor, bigquery.user
    • Minimum permissions required: bigquery.jobs.create
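As an example of resource-level permissions, the sketch below grants a service account write access to a single dataset using the google-cloud-bigquery client, leaving the rest of the project untouched. The dataset and service account names are placeholders; the same result can be achieved from the BigQuery console.

```python
# Minimal sketch: grant the connection's service account write access to one dataset only.
# Placeholders: my-project.analytics and the service account email.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

dataset = client.get_dataset("my-project.analytics")
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="WRITER",  # dataset-level equivalent of BigQuery Data Editor
        entity_type="userByEmail",
        entity_id="carto-connection@my-project.iam.gserviceaccount.com",
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])  # update only the ACL field
```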

Advanced options

  • Analytics Toolbox location: This setting controls the location of the Analytics Toolbox used in SQL queries generated by Workflows components, Builder SQL Analyses, 'Create Tileset', 'Geocode Table' and 'Enrich Data' functionalities. By default, CARTO will automatically determine the corresponding AT Location based on the actual region of the data.

  • Data Observatory location: This setting controls the location of the Data Observatory subscriptions. This setting will be observed by Data Explorer, Workflows and Enrichment to access your data subscriptions. By default, a specific project for your account (created automatically and maintained by CARTO) will be used, for example carto-data.ac_xxxxxxxx.

  • Max number of concurrent queries: This setting controls the maximum number of simultaneous queries that CARTO will send to BigQuery using this connection.

  • Max query timeout: This setting controls the maximum allowed duration of queries that CARTO runs in BigQuery using this connection.

  • Workflows temp. location: This setting controls the location (project.dataset) where Workflows will create temporary tables for each node. By default, it's a carto dataset that will be created in the connection's project during the execution of a workflow. Learn more about it here.

  • Data Transfer Version Info: This setting is only necessary for Scheduling Workflows with 'Sign in with Google' connections. Learn about how to generate the code here.

  • Restrict this connection to only use Named Sources: When this setting is enabled, this connection will only work within apps that use Named Sources, and it will NOT work in Data Explorer, Builder and Workflows. This prevents the usage of arbitrary SQL in applications for this connection.

IP Whitelisting

If you're using the cloud version of CARTO (SaaS), CARTO will connect to BigQuery using a set of static IPs for each region. Check this guide to find the IPs you need to allow for your specific region.