AI in Tableau and Trust
Note: Einstein Copilot has been renamed Tableau Agent, as the platform expands to accommodate more AI agent functionality. Starting in October 2024, you'll see updates to page sections, field names, and other UI text throughout Tableau Prep, Tableau Catalog, and Tableau Cloud Web Authoring. Help content and Trailhead modules are also being updated to reflect these changes.
It’s important that your data stays safe while you innovate with new technology. With AI in Tableau, we keep Trust as the #1 value, and we strive to make sure your data is secure while also creating experiences that are accurate and safe.
AI in Tableau and your data
To keep your data secure, Salesforce has agreements in place with Large Language Model (LLM) providers, like OpenAI. Organizations can leverage generative AI capabilities without their private data being used to train the LLM.
Trusted generative AI
Salesforce’s Einstein generative AI solutions are designed, developed, and delivered based on five principles for trusted generative AI.
Accuracy: We prioritize accuracy, precision, and recall in our models, and we back up our model outputs with explanations and sources whenever possible. We recommend that a human check model output before sharing it with end users.
Safety: We work to detect and mitigate bias, toxicity, and harmful outputs from models used in our products through industry-leading detection and mitigation techniques.
Transparency: We ensure that our models and features respect data provenance and are grounded in your data whenever possible.
Empowerment: We believe our products should augment people’s capabilities and make them more efficient and purposeful in their work.
Sustainability: We strive towards building right-sized models that prioritize accuracy and reduce our carbon footprint.
To learn more about trusted AI, see Salesforce Research: Trusted AI.
AI in Tableau features support English (en_US). Starting in version 2025.1 (February release), additional languages are supported in the following feature areas. Currently, generative AI in Tableau Pulse Enhanced Q&A (Discover) and Tableau Catalog supports only English (en_US).
Tableau Pulse Insight Summaries and better semantic matching in Ask Q&A
Tableau Agent in Tableau Prep Web Authoring
Tableau Agent in Tableau Cloud Web Authoring
Tableau Agent in Tableau Desktop
AI in Tableau features inherit the Einstein Trust Layer and security controls for supported languages. Some languages may not fully support pattern-based data masking, toxicity detection, or audit and feedback data.
For more information about the data types supported and toxicity detection available by language, see Einstein Trust Layer Region and Language Support in the Salesforce help. For more information about the types of audit data collected for generative AI, see Generative AI Audit and Feedback Data in the Salesforce help.
Languages and locales by feature area
When you use Tableau Agent in Tableau Cloud, the language of the generative AI response is based on the Language setting in your My Account Settings. If that setting is Unspecified, the browser language setting is used instead. When you use Tableau Agent in Tableau Desktop, the response language is based on the language selected in the Help > Choose Language menu.
The following languages are currently supported.
Tableau Pulse Insight Summaries
Language | Code |
Chinese (Simplified) | zh_CN |
Chinese (Traditional) | zh_TW |
Dutch | nl_NL |
English (United Kingdom) | en_GB |
English (United States) | en_US |
French (Canada) | fr_CA |
French (France) | fr_FR |
German | de_DE |
Italian | it_IT |
Japanese | ja_JP |
Korean | ko_KR |
Portuguese (Brazil) | pt_BR |
Spanish | es_ES |
Swedish | sv_SE |
Thai | th_TH |
Tableau Agent in Tableau Prep Web Authoring
Note: Pattern-based data masking and toxicity detection are not currently supported for Portuguese.
Language | Code |
English (United Kingdom) | en_GB |
English (United States) | en_US |
French (France) | fr_FR |
German | de_DE |
Italian | it_IT |
Japanese | ja_JP |
Portuguese (Brazil) | pt_BR |
Spanish | es_ES |
Tableau Agent in Tableau Cloud Web Authoring
Note: Pattern-based data masking and toxicity detection are not currently supported for Portuguese.
Language | Code |
English (United Kingdom) | en_GB |
English (United States) | en_US |
French (France) | fr_FR |
German | de_DE |
Italian | it_IT |
Japanese | ja_JP |
Portuguese (Brazil) | pt_BR |
Spanish | es_ES |
Tableau Agent in Tableau Desktop
Note: Pattern-based data masking and toxicity detection are not currently supported for Portuguese.
Language | Code |
English (United Kingdom) | en_GB |
English (United States) | en_US |
French (France) | fr_FR |
German | de_DE |
Italian | it_IT |
Japanese | ja_JP |
Portuguese (Brazil) | pt_BR |
Spanish | es_ES |
Geo-aware LLM request routing
With Tableau Agent, choosing a Salesforce-managed Large Language Model (LLM) isn't supported. Instead, the development team at Tableau tests and selects the best model to use, based on performance, accuracy, and cost.
Currently, the models used by Tableau Agent and Tableau Pulse Discover don't support geo-aware routing. All LLMs used by Tableau Agent are hosted in the United States, where all models are available. For information about which specific LLMs are currently used in your version of Tableau Agent, contact your Tableau Account Executive.
For more information about geo-aware LLM request routing, see Geo-Aware LLM Request Routing on the Einstein Generative AI Platform in the Salesforce help.
The Einstein Trust Layer in action
AI in Tableau is powered by Einstein AI and inherits the Einstein Trust Layer and security controls.
In Tableau Pulse
Insight summaries are grounded using templated natural language insights and values calculated using deterministic statistical models. Tableau Pulse is also based on a metric layer that provides a bounded, safe space for insights to be detected. Tableau Pulse uses generative AI to enhance and synthesize the language of the insights generated by Tableau. The result is summarized insights in easy-to-understand language that the user can quickly engage with.
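To picture this pattern, here is a simplified, hypothetical sketch (the function, template, and prompt wording below are invented for illustration and aren't Tableau's actual code): the numbers come from a deterministic calculation, the sentence comes from a fixed template, and generative AI is only asked to reword the result, not to supply any facts.

```python
# Hypothetical illustration of the grounding pattern described above.
# The statistics are computed deterministically; generative AI is only
# asked to reword the templated sentence, never to invent numbers.

def templated_insight(metric_name: str, current: float, prior: float) -> str:
    """Build a templated natural-language insight from deterministic values."""
    change_pct = (current - prior) / prior * 100  # deterministic calculation
    direction = "up" if change_pct >= 0 else "down"
    return (
        f"{metric_name} is {direction} {abs(change_pct):.1f}% "
        f"compared to the prior period ({prior:,.0f} -> {current:,.0f})."
    )

insight = templated_insight("Weekly Sales", current=132_500, prior=118_200)

# The rephrasing prompt constrains the model to rewording only.
rephrase_prompt = (
    "Rewrite the following insight in friendly, easy-to-understand language. "
    "Do not change any numbers or add new claims:\n" + insight
)
print(rephrase_prompt)
```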
Better semantic matching improves how Ask Q&A matches questions to insights in any language. Question and insight text is sent to OpenAI as part of calculating semantic matches. All calls to OpenAI go through the Einstein Trust Layer.
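Conceptually, semantic matching compares the meaning of a question with the text of candidate insights. The sketch below shows one common way such matching can be done, by embedding both and ranking by cosine similarity; it is an illustrative assumption rather than Tableau's actual implementation, and the toy vectors stand in for embeddings that would be produced by a model call routed through the Einstein Trust Layer.

```python
# Purely illustrative: one common way to implement semantic matching is to
# embed the question and each candidate insight, then rank by cosine
# similarity. This is not Tableau's actual implementation; the toy vectors
# below stand in for embeddings returned by a model call routed through
# the Einstein Trust Layer.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def best_match(question_vec: list[float], insight_vecs: dict[str, list[float]]) -> str:
    """Return the insight whose embedding is closest to the question embedding."""
    return max(insight_vecs, key=lambda k: cosine_similarity(question_vec, insight_vecs[k]))

# Toy 3-dimensional vectors standing in for real embeddings.
question = [0.9, 0.1, 0.2]
insights = {
    "Sales grew 12% week over week": [0.85, 0.15, 0.25],
    "Returns are flat this month": [0.1, 0.9, 0.3],
}
print(best_match(question, insights))  # prints the closest insight
```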
Enhanced Q&A (Discover) uses statistical algorithms to examine grouped metrics and surface insights that are relevant, interesting, and worth investigating. It uses generative AI to generate intuitive key insights, relevant visualizations, source references, and suggested follow-up questions. Because it is powered by AI in Tableau, you can ask questions in your own words to get answers in natural language about your data.
In Tableau Agent
To enable Tableau Agent to return a viz, a calculation, or an asset description, we first need to ground Tableau Agent in your data.
When you launch Tableau Agent, we query the data source that you're connected to and create a summary that includes field metadata (field captions, field descriptions, data roles, and data types) and up to 1000 unique field values if the data type is string (text). This summary is sent to the Large Language Model (LLM) to create vector embeddings so that Tableau Agent can understand the context of your data. The summary creation happens within Tableau and the summary context data is forgotten by the LLM as soon as the vector embeddings are created.
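The shape of that summary can be pictured roughly as follows. This is a simplified, hypothetical sketch: the function and data structures are invented for illustration, but it mirrors the description above, collecting field captions, descriptions, data roles, and data types for every field, and unique values (capped at 1,000 per field) only for string fields.

```python
# Simplified, hypothetical sketch of the grounding summary described above.
# Field metadata is collected for every field; unique values are gathered
# only for string fields and capped at 1,000 per field. The structures and
# function here are illustrative, not Tableau's internal API.

MAX_UNIQUE_STRING_VALUES = 1000

def summarize_data_source(fields: list[dict]) -> list[dict]:
    """Build the per-field summary that is sent to the LLM to create embeddings."""
    summary = []
    for field in fields:
        entry = {
            "caption": field["caption"],
            "description": field.get("description", ""),
            "data_role": field["data_role"],  # e.g. dimension or measure
            "data_type": field["data_type"],  # e.g. string, integer, date
        }
        if field["data_type"] == "string":
            entry["sample_values"] = sorted(set(field["values"]))[:MAX_UNIQUE_STRING_VALUES]
        summary.append(entry)
    return summary

fields = [
    {"caption": "Region", "data_role": "dimension", "data_type": "string",
     "values": ["East", "West", "East", "Central"]},
    {"caption": "Sales", "data_role": "measure", "data_type": "float",
     "values": [120.5, 88.0]},
]
print(summarize_data_source(fields))
```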
When you type a question or request into the conversation pane, a combined prompt consisting of your input, metadata describing the current state of the viz (web authoring), and historical context from the conversation pane flows through the Einstein Trust Layer to the LLM. The Einstein Trust Layer can mask Personally Identifiable Information (PII) using pattern-based data masking before the prompt is sent to the LLM. Using machine learning and pattern-matching techniques, PII in the prompt is replaced with generic tokens and then unmasked with the original values in the response. For more information about PII masking, see Einstein Trust Layer Region and Language Support in the Salesforce help.
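The masking round trip can be pictured with the following conceptual sketch. It isn't the Einstein Trust Layer's actual implementation: a single regular expression for one PII pattern (email addresses) stands in for the full set of patterns, matches are swapped for generic tokens before the prompt leaves, the token-to-value map stays local, and the original values are restored in the response.

```python
# Conceptual sketch of pattern-based masking and unmasking, not the Einstein
# Trust Layer's actual implementation. One regex (email addresses) stands in
# for the full set of PII patterns; matches are replaced with generic tokens
# before the prompt is sent, and the original values are restored in the
# response using a locally held token map.
import re

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_pii(prompt: str) -> tuple[str, dict[str, str]]:
    token_map: dict[str, str] = {}

    def _replace(match: re.Match) -> str:
        token = f"<PII_EMAIL_{len(token_map)}>"
        token_map[token] = match.group(0)
        return token

    return EMAIL_PATTERN.sub(_replace, prompt), token_map

def unmask_pii(response: str, token_map: dict[str, str]) -> str:
    for token, original in token_map.items():
        response = response.replace(token, original)
    return response

masked_prompt, tokens = mask_pii("Filter orders placed by ana@example.com last month")
# masked_prompt is what leaves for the LLM; the token map never leaves.
llm_response = f"Here is a viz of orders for {list(tokens)[0]} from last month."
print(unmask_pii(llm_response, tokens))
```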
The response flows back through the Einstein Trust Layer, which checks for toxicity and unmasks any masked data. Because of the zero-data-retention policies in place with our third-party LLM providers, any data sent to the LLM isn't retained and is deleted after a response is sent back.
The result is a viz, a calculation, or an asset description ready for you to review.
Techniques like these ensure our products adopt generative AI in a trusted manner. At the same time, your customer data isn't used to train any global model.
Want to learn more about the Einstein Trust Layer? See Einstein Trust Layer: Designed for Trust in the Salesforce help, or take the Einstein Trust Layer module on Salesforce Trailhead.
Reviewing generative AI outputs
AI in Tableau is a tool that can help you quickly discover insights, make smarter business decisions, and be more productive. This technology isn’t a replacement for human judgment. You’re ultimately responsible for any LLM-generated outputs you incorporate into your data analysis and share with your users.
Whether it’s generating calculations to use in a Tableau Prep flow, summarizing insights for metrics you follow, creating visualizations for you from your data, or drafting descriptions for your data assets, it’s important to always verify that the LLM output is accurate and appropriate.
Focus on the accuracy and safety of the content before you incorporate it into your flows, visualizations, and analysis.
Accuracy: Generative AI can sometimes “hallucinate”—fabricate output that isn’t grounded in fact or existing sources. Before you incorporate any suggestions, check to make sure that key details are correct. For example, is the proposed syntax for a calculation supported by Tableau?
Bias and Toxicity: Because AI is created by humans and trained on data created by humans, it can also contain bias against historically marginalized groups. Rarely, some outputs can contain harmful language. Check your outputs to make sure they’re appropriate for your users.
If the output doesn't meet your standards or business needs, you don't have to use it. Some features allow you to edit the response directly before applying it to your data, and you can also try starting over to generate another output. To help us improve the output, use the thumbs up and thumbs down buttons where available and provide feedback about what was wrong.