Trust Center
Transparency builds trust. Learn how we protect your data, ensure human oversight, and maintain the highest security standards.
Our Commitments
Six foundational promises that guide how we handle your data and conduct security assessments.
Zero AI Training
Your data is NEVER used to train AI models. We use Anthropic Claude via API with a data processing agreement that explicitly prohibits training on customer data.
Data Location
The customer database is stored in Norway. For AI-assisted analysis, only anonymized technical details are sent to our AI provider.
End-to-End Encryption
All credentials and sensitive data are encrypted with AES-256-GCM. Data in transit is protected with TLS 1.3.
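For readers who want a concrete picture, here is a minimal sketch of authenticated encryption with AES-256-GCM using the open-source Python `cryptography` library. It illustrates the algorithm named above and is not an excerpt from our production code; key management and rotation are deliberately out of scope.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative only: encrypt a single credential with AES-256-GCM.
key = AESGCM.generate_key(bit_length=256)   # 256-bit key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # unique 96-bit nonce per message
plaintext = b"db-password-example"
aad = b"engagement-id:1234"                 # authenticated, but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, aad)
assert aesgcm.decrypt(nonce, ciphertext, aad) == plaintext
```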
Right to Deletion
Request complete deletion of your data at any time. We process deletion requests within 24 hours.
Human-in-Control
No AI action happens without human approval. Every test plan, finding, and report is validated by certified security professionals.
Clear Liability
We carry professional liability insurance and take full responsibility for any damage caused during authorized testing.
Human-in-Control Process
Our four-gate system ensures that humans approve every critical decision. AI assists with efficiency, but humans remain in control.
Gate 0: Authorization
Customer defines scope and digitally signs authorization for testing.
Gate 1: Attack Plan
AI generates attack plan based on scope. Security analyst reviews and approves before execution.
Gate 2: Validation
All findings are manually validated by security experts to eliminate false positives.
Gate 3: Delivery
Report is reviewed and enhanced by analysts before delivery to customer.
AI does the heavy lifting. Humans make the decisions.
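To make the gate flow concrete, here is a hedged sketch of how the four gates can be modeled in code. The class and field names are hypothetical illustrations, not our internal platform API; the key property is that a later gate cannot be approved until a named human has approved every earlier one.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of the four-gate approval flow, not production code.
@dataclass
class Gate:
    name: str
    approved_by: Optional[str] = None  # e-mail of the human approver

    @property
    def approved(self) -> bool:
        return self.approved_by is not None

@dataclass
class Engagement:
    gates: List[Gate] = field(default_factory=lambda: [
        Gate("Gate 0: Authorization"),
        Gate("Gate 1: Attack Plan"),
        Gate("Gate 2: Validation"),
        Gate("Gate 3: Delivery"),
    ])

    def approve(self, index: int, analyst: str) -> None:
        # A gate can only be approved once every earlier gate has been.
        if any(not g.approved for g in self.gates[:index]):
            raise RuntimeError("Earlier gates must be approved first")
        self.gates[index].approved_by = analyst

# Example: Gate 1 cannot be approved before Gate 0 has a human approver.
engagement = Engagement()
engagement.approve(0, "analyst@example.com")
engagement.approve(1, "analyst@example.com")
```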
How We Use AI
We use AI assistants (Anthropic Claude) to enhance security analysis efficiency.
Data Protection
- Data processing agreement with AI provider
- Your data is never used for model training
- Sensitive information (credentials, tokens) is not sent to AI
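As an illustration of the last point, a redaction pass can strip credentials and tokens before any technical context leaves our environment. The patterns below are hypothetical examples, not our full production rule set.

```python
import re

# Hypothetical redaction pass applied before text is shared with the AI API.
REDACTION_PATTERNS = [
    (re.compile(r"(?i)(password|passwd|pwd)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
    (re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
    (re.compile(r"Bearer\s+[A-Za-z0-9\-._~+/]+=*"), "Bearer [REDACTED]"),
]

def redact(text: str) -> str:
    """Mask credentials and tokens before the text leaves our environment."""
    for pattern, replacement in REDACTION_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(redact("Authorization: Bearer eyJhbGciOi... api_key=abc123"))
```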
Our Approach
We prefer dedicated test environments over production data whenever possible:
- Human-in-Control: AI suggests, humans approve
- All findings are validated by certified security professionals
- Credentials are never sent to AI
Data Retention Policy
| Data Type | Retention Period |
|---|---|
| Final Report | 12 months, then deleted |
| Raw Test Data | 30 days after engagement |
| Test Credentials | Deleted immediately after engagement |
| Audit Logs | Anonymized after 90 days |
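As a hedged sketch of how such a policy can be enforced automatically (the durations mirror the table above; the helper itself is hypothetical, not our production scheduler):

```python
from datetime import timedelta

# Retention periods mirroring the table above.
RETENTION = {
    "final_report":     timedelta(days=365),  # 12 months, then deleted
    "raw_test_data":    timedelta(days=30),   # 30 days after engagement
    "test_credentials": timedelta(days=0),    # deleted immediately
    "audit_logs":       timedelta(days=90),   # anonymized after 90 days
}

def is_expired(data_type: str, age: timedelta) -> bool:
    """Return True once a record has outlived its retention period."""
    return age >= RETENTION[data_type]
```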
Frequently Asked Questions
Answers to common concerns about AI-assisted security testing.
Is my data used to train the AI model?
No. We use Anthropic Claude via API with data processing agreements that explicitly prohibit training on customer data. Your sensitive business data, source code, and vulnerability information are never used to improve AI models.
What if the AI damages my systems?
Our Human-in-Control approach means no action is taken without analyst approval. We test against designated test environments, follow strict scope boundaries, and carry professional liability insurance. If anything goes wrong during authorized testing, we take full responsibility.
Who has access to the vulnerabilities found?
Only the assigned security analyst and necessary quality assurance personnel. All findings are encrypted at rest, and we conduct background checks on all employees. We can sign an NDA before the engagement if required, and we never report findings to authorities or third parties without your explicit consent.
Is AI as good as human pentesters?
AI and humans have complementary strengths. AI excels at systematic testing, consistent methodology, and complete documentation. Humans excel at creative attack vectors, business logic flaws, and contextual understanding. Our Human-in-Control approach gives you the best of both: AI efficiency with human judgement.
This is new technology - how mature is it?
Our platform combines established AI capabilities with proven security testing methodologies. Every engagement benefits from human oversight at critical gates. We're transparent about capabilities and limitations, and our gate system ensures quality regardless of AI output.
Will the report satisfy compliance requirements?
Yes. Our reports meet requirements for ISO 27001, PCI-DSS, SOC 2, and GDPR security testing. All findings are human-validated, not raw AI output. We follow OWASP, PTES, and industry-standard methodologies, and reports can be attested by our certified security professionals.
Our Guarantees
Written commitments included in every engagement contract.
No Data Leakage
Contractual liability if we leak your data
No AI Training
Written guarantee in contract
Deletion on Request
Within 24 hours of request
Norwegian Support
Always available in Norwegian