Security & Trust

RisksRadarAI monitors sensitive organizational data. Here's exactly how we protect it.

Data Privacy Architecture

Self-Hosted: Zero Data Egress

When deployed on your infrastructure, no data ever leaves your network. AI inference runs locally via Nemotron models. We have zero access to your data.

Managed: Tenant Isolation

Each organization has a completely isolated database, separate encryption keys, and dedicated agent sandboxes. No cross-tenant data access is possible.

Communication Metadata Only

We analyze email/chat metadata (frequency, timing, response latency). We never read message content. Ever.
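To illustrate what "metadata only" means in practice, here is a minimal sketch of the kind of features that can be derived from headers and timestamps alone. The field names and records are invented for demonstration; no message body appears anywhere.

```python
from datetime import datetime

# Hypothetical metadata records: sender, recipient, timestamp -- headers only.
messages = [
    {"from": "a@corp.example", "to": "b@corp.example",
     "sent": datetime(2024, 1, 8, 9, 0)},
    {"from": "b@corp.example", "to": "a@corp.example",
     "sent": datetime(2024, 1, 8, 9, 45)},  # reply 45 minutes later
]

def response_latency_minutes(msg, reply):
    """Response latency derived purely from timestamps."""
    return (reply["sent"] - msg["sent"]).total_seconds() / 60

def message_frequency(msgs, sender):
    """How many messages a sender produced (a frequency signal)."""
    return sum(1 for m in msgs if m["from"] == sender)

print(response_latency_minutes(messages[0], messages[1]))  # 45.0
print(message_frequency(messages, "a@corp.example"))       # 1
```

Signals like these support behavioral analysis without ever touching content.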

PII-Stripping Privacy Router

When cloud inference is used, the Privacy Router strips all personally identifiable information before any data reaches external APIs.
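The production Privacy Router's rules are not published, but the principle can be sketched with simple pattern-based redaction. The patterns and placeholder labels below are illustrative assumptions, not the actual rule set.

```python
import re

# Illustrative PII patterns; a real router would use a far richer rule set.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def strip_pii(text: str) -> str:
    """Replace PII spans with typed placeholders before any external API call."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact jane.doe@corp.example or 555-867-5309 re: SSN 123-45-6789"
print(strip_pii(prompt))  # Contact [EMAIL] or [PHONE] re: SSN [SSN]
```

Typed placeholders preserve sentence structure, so downstream inference still works on the redacted prompt.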

Agent Security (NemoClaw Sandboxing)

Every AI agent runs in a policy-controlled sandbox with four isolation layers:

Landlock LSM: Linux Security Module restricting filesystem access
seccomp: system-call filtering that blocks unauthorized operations
Filesystem NS: isolated filesystem namespace per agent
Network NS: deny-by-default networking; only whitelisted endpoints are reachable
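The deny-by-default rule of the network layer can be sketched as a simple allowlist check. The endpoint names below are invented for illustration; the actual whitelist is configured per deployment.

```python
# Hypothetical egress allowlist: only explicitly listed (host, port) pairs pass.
ALLOWED_ENDPOINTS = {
    ("inference.local", 8080),
    ("api.internal.example", 443),
}

def egress_allowed(host: str, port: int) -> bool:
    """Deny by default: anything not on the whitelist is blocked."""
    return (host, port) in ALLOWED_ENDPOINTS

print(egress_allowed("inference.local", 8080))  # True
print(egress_allowed("example.com", 443))       # False: unknown endpoint
```

Because the default answer is "no", a compromised agent cannot reach new destinations without an explicit policy change.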

Encryption & Access Control

Encryption at Rest: AES-256-GCM for all sensitive fields (credentials, PII); the database is encrypted at the storage layer.
Encryption in Transit: TLS 1.3 for all API communications; internal service-to-service traffic is also encrypted.
Role-Based Access Control: seven system roles (Admin, Compliance Officer, CISO, Analyst, Manager, Auditor, Regulator) with granular permissions.
Immutable Audit Trail: database triggers block UPDATE and DELETE on audit logs; every action is permanently recorded.
Credential Management: integration credentials are encrypted with AES-256-GCM and stored separately from configuration; key rotation is supported.
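Trigger-enforced immutability can be demonstrated with a minimal sketch, here using SQLite in place of the production database; the table and trigger names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE audit_log (id INTEGER PRIMARY KEY, actor TEXT, action TEXT);

-- Any UPDATE or DELETE aborts the statement: the log is append-only.
CREATE TRIGGER audit_no_update BEFORE UPDATE ON audit_log
BEGIN SELECT RAISE(ABORT, 'audit logs are immutable'); END;

CREATE TRIGGER audit_no_delete BEFORE DELETE ON audit_log
BEGIN SELECT RAISE(ABORT, 'audit logs are immutable'); END;
""")
conn.execute("INSERT INTO audit_log (actor, action) VALUES ('alice', 'login')")

try:
    conn.execute("DELETE FROM audit_log")
except sqlite3.IntegrityError as exc:
    print(exc)  # audit logs are immutable
```

Enforcing this in the database itself means even a privileged application bug cannot rewrite history.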

Compliance Readiness

BSA/AML: Supported

SAR generation, override monitoring, transaction forensics

SOX: Supported

Internal control monitoring, audit trails, segregation of duties

HIPAA: Supported

On-prem deployment for PHI, access monitoring, breach detection

GDPR: Supported

Article 88 employee-data safeguards, anonymization, right of access

EU AI Act: Aligned

Explainability, human oversight, risk management, transparency

NIST CSF: Aligned

Identify, Protect, Detect, Respond, Recover functions

Responsible AI Practices

Human-in-the-loop: All AI-generated alerts require human review before any action is taken. No automated personnel decisions.

Bias monitoring: Demographic attributes are never used as model inputs. Quarterly fairness audits check for disproportionate flagging across groups.
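A disproportionate-flagging check can be sketched as below, in the spirit of those audits. The group labels, sample data, and the 0.8 ("four-fifths") threshold are illustrative assumptions, not the product's actual methodology.

```python
def flag_rates(records):
    """Per-group alert rates from (group, was_flagged) pairs."""
    totals, flags = {}, {}
    for group, flagged in records:
        totals[group] = totals.get(group, 0) + 1
        flags[group] = flags.get(group, 0) + int(flagged)
    return {g: flags[g] / totals[g] for g in totals}

def disparate_impact_ok(rates, threshold=0.8):
    """Lowest group's flag rate must be at least `threshold` of the highest."""
    lo, hi = min(rates.values()), max(rates.values())
    return hi == 0 or lo / hi >= threshold

# Invented audit sample: one flag out of four reviews in each group.
audit = [("A", True), ("A", False), ("A", False), ("A", False),
         ("B", True), ("B", False), ("B", False), ("B", False)]
rates = flag_rates(audit)
print(rates, disparate_impact_ok(rates))  # equal rates pass the check
```

Note that group labels are used only post hoc for auditing; they are never model inputs.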

Explainability: Every alert includes a step-by-step reasoning chain showing exactly how the conclusion was reached, with citations to the source data.

Consent and transparency: Organizations are expected to inform employees about monitoring scope. We provide consent management tools and privacy controls.

Security Questions?

For security assessments, penetration test reports, or compliance documentation:

security@aigovhub.io