Trusted AI

Your path to secure, compliant AI.

Before your firm can unlock AI’s full potential, you need compliant AI solutions — and an experienced technology partner — that won’t compromise your firm or client data.

With over a decade of expertise in integrating AI capabilities into our software, Intapp technology includes robust platform safeguards to keep your data safe. And as AI capabilities change and grow, Intapp is continually developing new security protocols that prioritize the integrity and safety of your data.

Unlock the power of AI for your firm with confidence — without putting your data at risk.

Frequently asked questions about Intapp Applied AI

AI ethics

Learn more about Intapp’s approach to AI ethics, including transparency, fairness, bias mitigation, accountability, responsible deployment, and the use of AI in our products.

Most of the generative AI use cases supported at Intapp follow a retrieval-augmented generation (RAG) approach, which enables explicit references to the sources of information used to generate the model response. For those generative AI use cases not implementing RAG — but still using LLMs pre-trained with publicly available data (such as Microsoft Azure OpenAI GPT-3.5) — the potential surfacing of copyrighted content in model outputs cannot be avoided.
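To make the RAG pattern concrete, the sketch below shows how a prompt can be assembled so that each claim in the response can be traced back to a retrieved source. All names here (Passage, build_rag_prompt, doc-42) are invented for illustration; this is not Intapp's implementation.

```python
# Minimal sketch of the RAG prompt-assembly step described above.
# Everything here is illustrative, not Intapp production code.
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str  # identifier of the source document
    text: str    # retrieved snippet from that document

def build_rag_prompt(question: str, passages: list[Passage]) -> str:
    """Assemble a prompt that instructs the model to cite its sources."""
    context = "\n".join(f"[{p.doc_id}] {p.text}" for p in passages)
    return (
        "Answer the question using ONLY the sources below, and cite the "
        "source id in brackets after each claim.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "Who signed the engagement letter?",
    [Passage("doc-42", "The engagement letter was signed by J. Smith.")],
)
print(prompt)  # the model's answer can now reference [doc-42] explicitly
```

Because the model is constrained to the retrieved passages, each statement in its answer can be checked against a known document.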

Intapp uses moderation capabilities provided by the Microsoft Azure Content Safety service. We constantly monitor and evaluate model performance against testing benchmarks. However, Intapp does not monitor the content of user inputs.
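As an illustration of what such a moderation call looks like, here is a minimal sketch using Microsoft's azure-ai-contentsafety Python SDK. The endpoint, key, and severity threshold are placeholders, and this is not Intapp's integration code.

```python
# Hedged sketch: screening a model output with Azure AI Content Safety.
# The endpoint, key, and severity threshold are placeholders.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

def screen(text: str) -> None:
    """Raise if any harm category exceeds an illustrative severity threshold."""
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    for item in result.categories_analysis:
        if item.severity is not None and item.severity >= 2:
            raise ValueError(f"blocked: {item.category} (severity {item.severity})")

screen("Draft a polite follow-up email to the client.")
```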

We believe that algorithmic bias doesn't pose a substantial threat in most of the ways we apply AI at Intapp; however, we do use several techniques to reduce bias in our models. These techniques include, but are not limited to, training data filtering, balancing and augmentation, and client-agnostic model weight calibrations. By default, Intapp includes prompting controls that provide explicit instructions to AI models. However, even with these processes and techniques, Intapp cannot guarantee that our models will be free from bias.
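As a rough illustration of what a prompting control can look like, the sketch below prepends explicit instructions to every request as a system message. The wording and function names are hypothetical, not Intapp's actual controls.

```python
# Hypothetical sketch of a "prompting control": explicit instructions are
# prepended to every model request. Wording is illustrative only.
GUARDRAIL_INSTRUCTIONS = (
    "Base your answer only on the supplied business data. "
    "Do not infer or rely on personal characteristics such as gender, "
    "ethnicity, age, or religion. If the data is insufficient, say so."
)

def with_guardrails(user_prompt: str) -> list[dict]:
    """Wrap a user prompt in a system message carrying the control text."""
    return [
        {"role": "system", "content": GUARDRAIL_INSTRUCTIONS},
        {"role": "user", "content": user_prompt},
    ]

print(with_guardrails("Summarize this client's engagement history."))
```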

AI governance

Learn more about Intapp’s approach to AI governance, including data collection and processing, privacy, enhanced security in our AI systems, and our ongoing review of regulatory standards that fosters a robust framework for responsible AI implementation.

Intapp Applied AI utilizes the information and data that users input into Intapp products as they use them.

In addition, Intapp captures:

  • Client data: This is data uploaded by a client into Intapp cloud services. With client permission, this data is used to deliver private, client-specific AI models. Client data is stored in each client's production environment, where access is restricted to staff responsible for maintaining our services.
  • Application data: This is the metadata associated with data and activity, along with any additional features and metrics that can be derived from it. This data is collected and stored in our infrastructure to monitor system performance and user satisfaction. Application data does not include client data (as defined above).

Client data is processed in different ways depending upon the AI use case. Typically, client data is converted into a machine-comprehensible format as the first step. This may include tokenization, vector format encoding, and other processing steps. For example, in the case of generative AI experiences, client data is first processed in a client’s tenant environment to generate a prompt appropriate for the AI feature being used.
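As a toy illustration of that first step, the sketch below tokenizes text and hashes the tokens into a fixed-length vector. Real pipelines use trained tokenizers and embedding models; this only shows the general shape of converting text into a machine-comprehensible format.

```python
# Toy illustration of converting raw text into tokens and a vector.
# Real systems use trained tokenizers and embedding models; this sketch
# only demonstrates the shape of the idea.
import hashlib
import math

def tokenize(text: str) -> list[str]:
    """Naive whitespace tokenizer (illustrative only)."""
    return text.lower().replace(",", " ").replace(".", " ").split()

def embed(tokens: list[str], dim: int = 8) -> list[float]:
    """Hash each token into a bucket, then L2-normalize the counts."""
    vec = [0.0] * dim
    for tok in tokens:
        bucket = int(hashlib.md5(tok.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

tokens = tokenize("Client data is converted into a machine-comprehensible format.")
print(tokens)
print(embed(tokens))
```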

AI services using shared models are stateless and therefore designed to retain no data. AI service responses, along with the inputs supplied to generate them, are captured, transferred back to the Intapp product that initiated the call, and, when appropriate, stored with client data. Client data is encrypted in transit throughout these steps.

Client data is processed within Microsoft’s Azure infrastructure and other infrastructures as applicable, as set forth here. Private models live alongside a client’s own tenant in their regional cluster. Shared models are deployed among our global clusters as their required infrastructure is supported in different regions. For non-U.S. regions that do not yet support Intapp Applied AI’s needed infrastructure, calls are routed to the EU infrastructure.

Each request processed through our generative AI service is assigned a unique inference ID, which is retained throughout the lifecycle of the request. This measure is designed to ensure that data remains segregated between clients, and that the progression of each request can be traced from initiation to the response presented in the user interface.
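A minimal sketch of that traceability measure appears below: a unique inference ID is minted when a request arrives and travels with it through logging and the response payload. The structure and names are hypothetical, not Intapp's service code.

```python
# Sketch of per-request tracing with a unique inference ID, as described
# above. Names and structure are hypothetical.
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ai-service")

def handle_request(client_id: str, prompt: str) -> dict:
    inference_id = str(uuid.uuid4())  # minted once, kept for the whole lifecycle
    log.info("inference=%s client=%s stage=received", inference_id, client_id)
    response = f"(model output for: {prompt[:40]}...)"  # stand-in for the model call
    log.info("inference=%s client=%s stage=responded", inference_id, client_id)
    # The same ID returns with the payload, so the answer shown in the UI
    # can be traced end to end and kept segregated per client.
    return {"inference_id": inference_id, "client_id": client_id, "response": response}

print(handle_request("client-A", "Summarize the latest engagement notes."))
```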

Intapp’s products are designed to avoid commingling client data when processing calls. When an Intapp product calls a Microsoft Azure OpenAI service, the stateless OpenAI service is not designed to store the content. View the Microsoft Azure OpenAI data policy.

Intapp maintains industry standard security controls on its client tenant environments where client data is stored. Intapp is regularly audited and maintains security and privacy certifications to validate its security and privacy controls. Learn more.

Intapp does not sell or monetize any client data. View the Intapp Privacy Policy to learn more.

Intapp does, however, use third-party sub-processors to provide our services to clients. View the list of sub-processors.

When a client’s entire dataset is deleted, Intapp will delete the fine-tuned model (which can include client data).

Intapp uses specific cloud monitoring infrastructure to monitor and track performance of AI services. User interactions and AI results are also monitored and stored in client-specific databases.

Data that fits Intapp's definition of application data can be used by automated benchmarking and analytics processes to track algorithm performance and reliability.

Intapp AI products are designed to perform inference on business processes, which by the nature of the task minimizes the risk of discrimination and harm. In addition, we implement explicit filters for content moderation where applicable. However, even with these processes, Intapp cannot guarantee that decisions will not be based on or include such harms or discrimination.

Our AI-driven features provide responses that help users complete their tasks efficiently. Intapp Applied AI does not act or make decisions — the user is ultimately responsible for the action taken or decision made. The output of our AI models is designed to be presented with context reminding users their review is necessary before acting on AI-generated information. No outputs of Intapp Applied AI pass directly to systems outside of Intapp infrastructure.

We continually test our AI systems to verify that the AI works as intended. In addition, we use the Azure AI Content Safety service, which is designed to safeguard against potentially harmful content.

From time to time, Intapp also conducts comparative analysis of existing solutions against new technologies, algorithms, and platforms. This can lead to a partial or complete replacement or upgrade of some solutions when deemed necessary by Intapp.

Intapp uses models directly trained with client data only to process or make inferences on the client’s own data.

No, fine-tuned models can only be used with Intapp products.

Intapp utilizes many data sets, including open source and in-house/proprietary data sources, as well as those from third parties. We do this to support our product needs for various use cases, such as buying email data for a specific language. We also collect data from the web (for example, gathering information from trusted websites to build training data for a classifier that implements a legal standard taxonomy).

AI transparency

Learn more about the importance of AI explainability and interpretability, and how we make our AI systems transparent and understandable to foster trust and facilitate informed decision-making.

We use two types of models:

  • Shared models: Built using Intapp proprietary data and client-agnostic data, or based on services provided by other vendors such as Microsoft Azure OpenAI.
  • Private models: Built using a combination of Intapp proprietary data, client-agnostic data, and services provided by other vendors such as Microsoft Azure OpenAI, and further fine-tuned using client-specific data.

We constantly monitor and evaluate model performance against both quantitative and qualitative benchmarks. Specific attention is paid to potential data and concept drift (a common problem for AI models), in which, over time, the data a model encounters in production comes to differ substantially from the data it was trained on. Techniques such as model adaptation, incremental training, and parameter calibration are periodically used to ensure that model integrity and reliability are preserved.
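One common way to quantify such drift is the population stability index (PSI), which compares the binned distribution of a feature at training time against what production traffic looks like now. The sketch below uses made-up bins and an illustrative threshold; it is not Intapp's monitoring code.

```python
# Illustrative drift check using the population stability index (PSI).
# Bins, values, and the review threshold are made up for the example.
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """PSI over pre-binned proportions; higher means more drift."""
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )

train_bins = [0.25, 0.25, 0.25, 0.25]  # feature distribution at training time
live_bins = [0.10, 0.20, 0.30, 0.40]   # distribution in recent production data

score = psi(train_bins, live_bins)
print(f"PSI = {score:.3f}")  # a common rule of thumb flags PSI > 0.2 for review
```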

Most of our AI models return confidence scores associated with inference events. Such scores can be interpreted as the model’s degree of certainty regarding its provided outputs. While these are good indicators of relevance among outputs (i.e., good for ranking purposes), these scores cannot be used as absolute quality metrics.
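The distinction matters in practice: the sketch below shows the valid use (ordering candidates) next to the misuse (reading a score as an absolute probability of correctness). The candidate data is fabricated.

```python
# Confidence scores are useful for ordering candidates relative to one
# another, not as absolute quality metrics. Data below is fabricated.
candidates = [
    {"output": "Acme Corp", "confidence": 0.91},
    {"output": "Acme Holdings", "confidence": 0.64},
    {"output": "ACME Inc.", "confidence": 0.33},
]

# Valid use: rank candidates against each other.
ranked = sorted(candidates, key=lambda c: c["confidence"], reverse=True)
print([c["output"] for c in ranked])

# Misuse: treating 0.91 as "91% likely to be correct". The scores are not
# calibrated probabilities, so absolute comparisons are not meaningful.
```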

Our models are only accessible through internal product APIs. The service behind the API ignores any unauthorized requests. Content filtering and moderation are also in place to deter attempts to elicit malformed or malicious content from the model.

Yes, Intapp leverages few-shot learning, an advanced prompting technique in which examples are added to the prompt to show the model exactly what output the user expects. For example, in Intapp Assist for DealCloud, few-shot learning is used regularly for conversational agents (chatbots).
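A minimal sketch of a few-shot prompt is shown below; the classification task and the examples are fabricated purely to illustrate the technique.

```python
# Hedged sketch of few-shot prompting: worked examples in the prompt show
# the model the expected output format. Examples are fabricated.
FEW_SHOT_PROMPT = """\
Classify each deal note as POSITIVE, NEUTRAL, or NEGATIVE.

Note: "Management is eager to close by Q3."
Label: POSITIVE

Note: "Diligence uncovered unresolved litigation."
Label: NEGATIVE

Note: "{note}"
Label:"""

print(FEW_SHOT_PROMPT.format(note="Revenue figures were in line with forecasts."))
```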

Intapp solutions detect or reduce hallucinations and inaccuracies using multiple methods, including human interaction and testing, cross-testing by different LLMs on the quality of their responses, and grounding the LLM in a client's own data. While even the latter cannot entirely prevent hallucinations, it is useful to ensure that outputs are based on inputs that a firm controls. It is important to remember that while hallucinations can be reduced, they cannot yet be eliminated entirely.
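As a deliberately naive illustration of grounding, the sketch below flags words in a model answer that never appear in the client-controlled source text. Real verifiers are far more sophisticated (entailment models, LLM judges); the function and data here are invented.

```python
# Naive grounding check: flag answer terms that never appear in the
# client-controlled source text. Invented for illustration only.
def ungrounded_terms(answer: str, source: str) -> set[str]:
    source_vocab = set(source.lower().split())
    return {w for w in answer.lower().split() if w.isalpha() and w not in source_vocab}

source = "the engagement letter was signed by j smith on 1 april 2023"
answer = "The letter was signed by j smith on 1 April 2023"
print(ungrounded_terms(answer, source))  # empty set -> nothing obviously invented
```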

Ongoing AI enhancement

Learn more about Intapp’s continuous monitoring and improvement in AI enhancement, including our iterative process of refining AI systems over time through ongoing observation, analysis, and optimization for greater effectiveness and adaptability.

We compare new models with existing models by running inferences on historic data, and we upgrade only when the new model outperforms the existing one.
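A toy version of that upgrade gate is sketched below: both models replay the same historic examples, and the candidate is promoted only if it scores higher. The scoring function, models, and data are placeholders.

```python
# Toy sketch of the upgrade gate described above: replay historic inputs
# through both models and promote the candidate only if it scores better.
def accuracy(model, dataset) -> float:
    correct = sum(1 for x, expected in dataset if model(x) == expected)
    return correct / len(dataset)

historic = [("acme corp", "company"), ("j. smith", "person"), ("q3 close", "event")]

def current_model(x: str) -> str:
    return "company" if "corp" in x else "person"

def candidate_model(x: str) -> str:
    return {"acme corp": "company", "j. smith": "person"}.get(x, "event")

if accuracy(candidate_model, historic) > accuracy(current_model, historic):
    print("promote candidate model")
else:
    print("keep current model")
```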

Intapp does this only for private models, in which we use individual tenant feedback and data to incrementally train the new private version of the model for the same tenant. Shared models are upgraded with data produced by Intapp, including data from vendors and data providers.

No, Microsoft Copilot does not have access to the same features that Intapp's AI products provide. A core value-add of Intapp solutions is the sophisticated application logic and the LLM honed for the industries we serve.

Learn more about Intapp Applied AI

Connect with us

Discover more about how you can unlock the power of AI at your firm — without putting your sensitive data at risk.