Einstein Trust Layer
Trust is our #1 value at Salesforce, and our generative AI applications are no exception. The Einstein Trust Layer elevates the security of generative AI at Salesforce through data and privacy controls that are seamlessly integrated into the end-user experience.
It’s a sequence of gateways and retrieval systems that grounds your prompts in your data while mitigating risk, so you can safely pair your data with generative AI.
To ensure relevant, accurate responses, we send a high-quality prompt to a model. When Tableau AI makes a generative AI request, we translate that request into a prompt. A prompt is how we tell a large language model (LLM) about the task we want it to perform. A prompt includes instructions for the task along with trusted data that grounds the response.
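As an illustration of what "instructions plus grounding data" can look like, here is a minimal sketch of prompt assembly. The function and field names are hypothetical, not Salesforce APIs:

```python
# Hypothetical sketch of assembling a grounded prompt.
# Names and prompt wording are illustrative, not the actual Tableau AI pipeline.

def build_grounded_prompt(task_instructions: str, grounding_facts: dict[str, str]) -> str:
    """Combine task instructions with trusted data into a single prompt string."""
    facts = "\n".join(f"- {name}: {value}" for name, value in grounding_facts.items())
    return (
        "You are a helpful analytics assistant.\n"
        f"Task: {task_instructions}\n"
        "Use only the facts below; do not invent values.\n"
        f"Facts:\n{facts}"
    )

prompt = build_grounded_prompt(
    "Summarize this week's sales metric in one sentence.",
    {"metric": "Weekly Sales", "value": "$1.2M", "change": "+8% vs. prior week"},
)
print(prompt)
```

Grounding the prompt in concrete values, rather than asking the model to recall them, is what keeps the response tied to your data.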
To protect your data, we integrate strict internal and external security measures.
At Salesforce: We use a standard protocol to encrypt the prompt when we send it to an LLM.
Your Data: We have zero-retention agreements in place with LLM providers. These agreements mean that you and your users don’t need to worry about how external LLM providers retain your data. The model forgets the prompt and the response as soon as the response is sent back to Tableau.
The Einstein Trust Layer in action
With Tableau Pulse, we ground the insight summaries we generate using templated natural-language insights and values calculated with deterministic statistical models. Tableau Pulse is also built on a metrics layer that provides a bounded, safe space for insights to be detected.
The result is summarized insights in easy-to-understand language that the user can quickly engage with. Techniques like this ensure our products adopt generative AI in a trusted manner. At the same time, your customer data isn’t used to train any global model.
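To make the idea concrete, here is a minimal sketch of a templated insight: the number comes from a deterministic statistical calculation, and the sentence comes from a fixed template, so no free-form generation touches the value. The function name and template are assumptions for illustration, not Tableau Pulse internals:

```python
# Hypothetical sketch of a templated, deterministic insight.
# The statistic (percent change vs. a recent average) is computed directly,
# and the wording is a fixed template, not model-generated text.
from statistics import mean

def templated_insight(metric_name: str, values: list[float]) -> str:
    """Render a deterministic change detection as a templated sentence."""
    current, baseline = values[-1], mean(values[:-1])
    pct = (current - baseline) / baseline * 100
    direction = "up" if pct >= 0 else "down"
    return f"{metric_name} is {direction} {abs(pct):.1f}% versus its recent average."

sentence = templated_insight("Weekly Sales", [100.0, 110.0, 105.0, 126.0])
print(sentence)  # → Weekly Sales is up 20.0% versus its recent average.
```

Because the value and the template are both deterministic, the same data always produces the same sentence, which is what makes this style of grounding auditable.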
Want to learn more about the Einstein Trust Layer? Take the Einstein Trust Layer module on Salesforce Trailhead.