Einstein Generative AI and Trust
It’s important that your data stays safe while you innovate with new technology. With Tableau AI, trust is our #1 value: we strive to keep your data secure while creating experiences that are accurate and safe.
Tableau AI and your data
To keep your data secure, Salesforce has agreements in place with Large Language Model (LLM) providers, like OpenAI. These agreements mean organizations can use generative AI capabilities without their private data being used to train the LLM.
Trusted generative AI
Salesforce’s Einstein generative AI solutions are designed, developed, and delivered based on five principles for trusted generative AI.
Accuracy: We prioritize accuracy, precision, and recall in our models, and we back up our model outputs with explanations and sources whenever possible. We recommend that a human check model output before sharing it with end users.
Safety: We work to detect and mitigate bias, toxicity, and harmful outputs from models used in our products through industry-leading detection and mitigation techniques.
Honesty: We ensure that the data we use in our models respects data provenance and that we have consent to use it.
Empowerment: Whenever possible, we design models to include human involvement as part of the workflow.
Sustainability: We strive towards building right-sized models that prioritize accuracy and reduce our carbon footprint.
To learn more about trusted AI, see Salesforce Research: Trusted AI.
Reviewing generative AI outputs
Generative AI is a tool that can help you quickly discover insights, make smarter business decisions, and be more productive. This technology isn’t a replacement for human judgment. You’re ultimately responsible for any LLM-generated outputs you incorporate into your data analysis and share with your users.
Whether it’s generating calculation syntax to use in a Tableau Prep flow, summarizing insights for metrics you follow, or creating visualizations from your data, it’s important to always verify that the LLM output is accurate and appropriate.
Focus on the accuracy and safety of the content before you incorporate it into your flows, visualizations, and analysis.
Accuracy: Generative AI can sometimes “hallucinate,” fabricating output that isn’t grounded in fact or existing sources. Before you incorporate any suggestions, check that key details are correct. For example, is the proposed syntax for a calculation supported by Tableau?
Bias and Toxicity: Because AI is created by humans and trained on data created by humans, it can also contain bias against historically marginalized groups. Rarely, some outputs can contain harmful language. Check your outputs to make sure they’re appropriate for your users.
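Part of the accuracy check can be automated before a human review. The sketch below, in Python, screens an LLM-suggested calculation string against an allowlist of function names; the `SUPPORTED_FUNCTIONS` set here is a small illustrative sample, not Tableau’s actual function list, and the regex is a rough heuristic rather than a real parser.

```python
# Hypothetical sketch: flag function names in an LLM-suggested calculation
# that aren't on a known allowlist, so a human can review them first.
import re

# Illustrative subset only -- not a complete list of Tableau functions.
SUPPORTED_FUNCTIONS = {
    "SUM", "AVG", "MIN", "MAX", "IF", "IIF", "THEN", "ELSE",
    "END", "DATEDIFF", "DATETRUNC", "ZN", "IFNULL", "ISNULL",
}

def unsupported_functions(calculation: str) -> set[str]:
    """Return function-like tokens in the calculation that aren't allowlisted."""
    # Match uppercase identifiers immediately followed by an opening paren.
    tokens = set(re.findall(r"\b([A-Z_]{2,})\s*\(", calculation))
    return tokens - SUPPORTED_FUNCTIONS

# An LLM may propose SQL-style syntax that Tableau doesn't accept:
suggestion = "COALESCE(SUM([Sales]), 0)"
print(unsupported_functions(suggestion))  # {'COALESCE'} -- review before use
```

A screen like this only catches unknown function names; it can’t judge whether the calculation’s logic is correct, so human review is still required.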
If the output doesn’t meet your standards or business needs, you don’t have to use it. Some features allow you to edit the response directly before applying it to your data, and you can also try starting over to generate another output. To help us improve the output, let us know what was wrong by providing feedback.