How Privue protects your data
Your data is your competitive advantage. At Privue, confidentiality and integrity are non-negotiable.
We use advanced analytics and public AI models responsibly, with guardrails that keep your information isolated, anonymised where appropriate, and accessible only to authorised users.
At a glance
- Certified management systems: ISO 27001:2022 (ISMS) and ISO 9001:2015 (QMS).
- Privacy by design: GDPR and India’s DPDP Act 2023 embedded into our processes, overseen by a dedicated DPO.
- AI governance: alignment with the NIST AI Risk Management Framework for trustworthy, explainable AI.
- Client-dedicated hosting: logically isolated VPCs with unique encryption keys and network segmentation, plus geo-residency choices and high-availability failover.
- Responsible use of public AI: PII is masked before any interaction with public LLMs; we use enterprise instances that prohibit data retention or training on your content.
- Access control: fine-grained role-based permissions with comprehensive audit logging across enterprise LLM instances.
- Data anonymisation: salted SHA-256 hashing and tokenisation for sensitive identifiers.
- Secure infrastructure: enterprise-grade AWS environment protected by Privue’s internal firewall.
Governance and certifications
We operate under independently audited ISO 27001:2022 and ISO 9001:2015 management systems that cover the full data lifecycle from ingestion to archival.
Our privacy program embeds GDPR and DPDP requirements, with a Data Protection Officer (DPO) responsible for oversight and continual improvement.
We further align our AI practice to the NIST AI RMF to ensure responsible, explainable outcomes.
Secure-by-design infrastructure
Each client’s data runs in a logically isolated private environment (VPC) with unique keys and network segmentation.
This dedicated-tenant architecture eliminates data commingling risks and supports geo-residency preferences and high-availability failover.
Our compute runs in an enterprise-grade AWS environment within Privue’s internal firewall perimeter.
Responsible use of public AI models
We leverage enterprise versions of large language models that offer enhanced privacy, reliability, and compliance features.
- Providers are contractually prohibited from retaining or using client data for training.
- Personally Identifiable Information (PII) is masked before any interaction with public models to prevent unintended exposure.
- All access is restricted via fine-grained roles with audit logging.
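The masking step described above can be sketched as a pre-processing pass that replaces detected PII with typed placeholders before any text reaches a public model. This is an illustrative sketch only: the patterns and function names are hypothetical, and a production masker would rely on a dedicated PII-detection service rather than regexes alone.

```python
import re

# Illustrative patterns only -- not Privue's actual detection rules.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII with typed placeholders before any LLM call."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

masked = mask_pii("Contact Asha at asha@example.com or +91 98765 43210.")
# masked text contains placeholders instead of the raw email and phone number
```

Typed placeholders (rather than blanket redaction) preserve enough context for the model to produce useful output while keeping identifying values out of the request.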
Access control and accountability
Only authorised personnel can access sensitive datasets and AI tooling, enforced through role-based access controls (RBAC).
Enterprise LLM instances provide granular permissions and comprehensive audit trails to support compliance and incident forensics.
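An RBAC check with an audit trail, as described above, can be sketched as follows. The roles and permission names here are hypothetical, chosen for illustration; every access decision, allowed or denied, is appended to an audit log to support compliance review and incident forensics.

```python
from enum import Enum, auto

class Permission(Enum):
    READ_DATASET = auto()
    RUN_MODEL = auto()
    EXPORT_RESULTS = auto()

# Hypothetical role definitions -- actual role mappings are deployment-specific.
ROLE_PERMISSIONS = {
    "analyst": {Permission.READ_DATASET, Permission.RUN_MODEL},
    "admin": {Permission.READ_DATASET, Permission.RUN_MODEL,
              Permission.EXPORT_RESULTS},
}

def check_access(role: str, permission: Permission, audit_log: list) -> bool:
    """Return whether the role holds the permission, recording the decision."""
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    audit_log.append((role, permission.name, "ALLOW" if allowed else "DENY"))
    return allowed

audit: list = []
check_access("analyst", Permission.EXPORT_RESULTS, audit)  # denied, but logged
```

Logging denials as well as grants is the key design choice: forensics needs to show not only who accessed data, but who tried and was refused.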
Data minimisation and anonymisation
Where appropriate, we transform names and unique identifiers (e.g., for dealers, vendors, and clients) using salted SHA-256 hashing and tokenisation, ensuring original identities remain inaccessible even to internal analysts.
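Salted SHA-256 pseudonymisation as described above can be sketched in a few lines. The function name and salt handling are illustrative assumptions: the same salt maps an identifier to a stable token (so joins and aggregations still work), while the one-way hash prevents recovery of the original value from the token.

```python
import hashlib
import secrets

def pseudonymise(identifier: str, salt: bytes) -> str:
    """Return a stable, irreversible token for a sensitive identifier.

    Salting defeats precomputed (rainbow-table) lookups; SHA-256 makes
    inverting the token computationally infeasible.
    """
    return hashlib.sha256(salt + identifier.encode("utf-8")).hexdigest()

# The salt would be generated once and stored separately from the data.
salt = secrets.token_bytes(16)
token = pseudonymise("Dealer-4411", salt)
# Same identifier + same salt -> same token; a different salt yields a
# different token, so datasets salted independently cannot be linked.
```

Keeping the salt outside the analytics environment is what makes the tokens meaningless to anyone, internal analysts included, who sees only the transformed data.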
Client choice and data residency
We support deployment choices that respect your residency needs, with high-availability configurations to meet uptime requirements — all within your logically isolated environment.
Why this matters
AI systems often handle highly sensitive personal and business information.
Our governance framework and technical controls keep these tools safe and compliant, and ensure they never compromise the confidentiality of your data.
Frequently asked questions
Do you use client data to train AI models?
No. Our enterprise LLM instances contractually prohibit providers (such as OpenAI) from retaining or using your data for model training.
What goes to public AI models?
Only non-identifying, masked text. We mask PII before any interaction with public models, and we operate under a formal AI governance approach aligned to NIST AI RMF.
Where is my data stored and processed?
In your own logically isolated VPC with unique keys and network segmentation, with options for geo-residency and high-availability.
How is access controlled?
Through fine-grained, role-based permissions with auditing on enterprise AI services.
Note
This page summarises how Privue secures and governs client data and the responsible use of public AI models.
It is not a legal policy or contract. For policy documents or specific assurances, please contact us.