Three cyber security
practitioners built
this product.
The threat model came first.
SECUVA was not built by software engineers who hired a CISO at Series A. All three co-founders come from cyber security - offensive and defensive. The architecture reflects that. PHI never leaving your network is not a feature. It is the consequence of modelling what an attacker would try first.
Security is not a department at SECUVA.
It is what the founders do.
Most healthcare data platforms treat security as a procurement requirement - something you address with a questionnaire and a penetration test every 12 months. We treat it the same way a red team would: assume compromise, reduce blast radius, eliminate implicit trust.
The on-prem architecture is not a product decision. It is the output of asking: if this system were targeted, what would an attacker try? The answer was the same every time - get to the clinical data. So we built a system where that data is never reachable.
Offensive security
The founding team has practitioner experience in adversarial simulation and red team operations - not just defending against known threats but reasoning about novel attack surfaces in clinical network environments.
Defensive architecture
Zero trust, network segmentation, privilege minimisation, and cryptographic controls designed by engineers who have operated them in production - not specified by a consultant and implemented by someone else.
Australian privacy law
The Privacy Act 1988, OAIC de-identification guidance, and the My Health Records Act are the legal context for this product - not HIPAA retrofitted for a different jurisdiction. The compliance posture is AU-native.
What SECUVA can access. What it architecturally cannot.
Most security pages list controls. This one starts with the architectural separation that makes those controls meaningful - and makes their absence irrelevant for patient data.
Zero-trust from the
network layer up.
Every layer of the SECUVA stack operates on explicit verification - no implicit trust, no permanent credentials, no broad network access. Even the on-prem agent and the control plane do not trust each other without a live mTLS handshake against a pinned leaf certificate.
The cryptographic choices are deliberate: TLS 1.3 only (1.2 disabled at the load balancer, not just deprioritised). Strong, modern encryption at rest. Asymmetric signing for agent binaries. These are not defaults - they are choices made by engineers who know what the alternatives expose.
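As an illustrative sketch only (not SECUVA's actual implementation - the function names and pin values here are placeholders), a client that enforces TLS 1.3 and pins the leaf certificate can be expressed with Python's standard `ssl` and `hashlib` modules:

```python
import hashlib
import ssl


def make_client_context() -> ssl.SSLContext:
    # TLS 1.3 only: refuse to negotiate 1.2 outright,
    # rather than merely deprioritising it.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    return ctx


def leaf_matches_pin(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    # Pin the leaf certificate: compare the SHA-256 of its DER encoding
    # against a value baked into the client at build time.
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex
```

The point of pinning the leaf, rather than trusting any chain to a public CA, is that a mis-issued or compromised CA certificate still cannot impersonate the control plane.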
Every dependency is accounted for.
Every binary is signed.
Supply chain compromise is among the highest-impact, lowest-visibility attack vectors available to a sophisticated adversary. We treat it accordingly - not as a checkbox, but as a standing operational concern with automated controls and a clear escalation path.
The agent binary that runs in your environment is built reproducibly, signed with an offline asymmetric key, and verified by the agent before any update is applied. The signing key is never accessible from the build pipeline.
Software Bill of Materials (SBOM)
A software bill of materials is generated for every agent release and published alongside the binary. Customers can verify the dependency tree independently. Any new critical CVE introduced by a release blocks promotion to production.
Binary signing
Agent binaries are signed with an asymmetric key held offline - air-gapped from the build pipeline. The agent verifies the signature cryptographically before applying any update. A build pipeline compromise cannot produce a valid signature.
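A minimal sketch of the gate this describes, with the actual asymmetric check (e.g. an Ed25519 verification) abstracted behind a `verify` callable - the names here are illustrative, not SECUVA's API:

```python
from typing import Callable


def apply_update(binary: bytes, signature: bytes,
                 verify: Callable[[bytes, bytes], bool]) -> bool:
    # verify() wraps the platform's asymmetric signature check; the public
    # key ships inside the agent, the private key never leaves offline storage.
    if not verify(binary, signature):
        return False  # invalid signature: the update is never staged or run
    # ...stage the verified binary and swap it in atomically...
    return True
```

Because only the public half of the key is reachable from any online system, a compromised build pipeline can produce a malicious binary but never a valid signature for it - the agent simply refuses the update.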
Dependency vulnerability scanning
All dependencies scanned on every PR and every scheduled build. Critical CVEs block merge. High CVEs trigger a 48-hour remediation SLA. Findings are triaged by an engineer - not auto-dismissed by severity score alone.
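The merge gate described above reduces to a small policy function. The severities and the 48-hour figure come from the text; the data shape is an assumption for illustration:

```python
from datetime import timedelta
from typing import Optional, Tuple


def scan_gate(findings: list) -> Tuple[bool, Optional[timedelta]]:
    """Return (block_merge, remediation_sla) for a dependency scan.

    Mirrors the stated policy: any critical CVE blocks merge outright;
    a high CVE allows merge but starts a 48-hour remediation SLA.
    """
    severities = {f["severity"] for f in findings}
    if "critical" in severities:
        return True, None
    if "high" in severities:
        return False, timedelta(hours=48)
    return False, None
```

In practice the output of a gate like this feeds a human triage step, per the text - the function decides what blocks, an engineer decides what matters.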
Reproducible builds
Agent builds are reproducible - given the same source and toolchain, the output binary is byte-for-byte identical. This means customers, auditors, or external researchers can verify that a released binary corresponds to the published source.
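Verification is then a byte-level comparison: rebuild from the published source with the pinned toolchain, and check that the result is identical to the released binary. A sketch - the toolchain pinning matters far more than the comparison itself:

```python
import hashlib


def is_reproduced(released: bytes, rebuilt: bytes) -> bool:
    # A reproducible build means the locally rebuilt binary is
    # byte-for-byte identical to the released one; comparing SHA-256
    # digests is a convenient proxy for full byte equality.
    return hashlib.sha256(released).digest() == hashlib.sha256(rebuilt).digest()
```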
Two environments. One hard boundary.
Patient data and the infrastructure that processes it do not share a network path - not by policy, but by network topology.
The agent communicates outbound to the control plane over mTLS. The control plane has no inbound connection to the agent. A control plane compromise - however severe - cannot reach raw patient data. The two environments are architecturally separated. The boundary cannot be misconfigured away.
When something goes wrong,
the plan was already written.
Incident response plans written during an incident are not incident response plans. SECUVA maintains a tested IR playbook with defined roles, escalation paths, and customer notification SLAs. We practise it with tabletop exercises - not just keep it in a document.
Given the on-prem architecture, the blast radius of a SECUVA control plane incident is bounded - patient data cannot be exfiltrated through us. But we treat every security event with the same urgency regardless: a fast, transparent response is a security posture, not just a reputational one.
Detection
Automated alerting surfaces anomalies to the on-call security engineer within minutes. The event is classified by severity (P1–P4). P1 and P2 events trigger the full IR team immediately.
Containment
Isolation of affected systems. Preservation of forensic state before remediation. For P1: affected customers are notified within 4 hours of detection - they hear it from us, not discover it themselves.
Remediation
Root cause analysis. Patch or configuration remediation deployed. For any event involving potential data exposure: OAIC notification assessment initiated (the 72-hour NDB obligation clock starts from detection).
Post-incident review
Post-incident report issued to affected customers. Controls gaps identified. Process improvements or architectural changes tracked to close. Responsible disclosure coordinated with any external researcher involved.
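The notification clocks in these steps can be expressed as a small policy table. The severities and hours come from the text above; the structure and function name are illustrative:

```python
from datetime import datetime, timedelta


def notification_deadlines(severity: str, detected_at: datetime,
                           data_exposure: bool) -> dict:
    # P1: affected customers notified within 4 hours of detection.
    # Any potential data exposure: the 72-hour NDB assessment clock
    # (OAIC notifiable data breaches scheme) starts at detection.
    deadlines = {}
    if severity == "P1":
        deadlines["customer_notice"] = detected_at + timedelta(hours=4)
    if data_exposure:
        deadlines["oaic_assessment"] = detected_at + timedelta(hours=72)
    return deadlines
```

Note that both clocks are anchored to detection, not to public discovery - which is the point of the "not discovery by the customer" commitment.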
This is not HIPAA.
It was never designed to be.
The majority of "healthcare data security" content is written by companies whose primary market is the United States. HIPAA, HITECH, and 21st Century Cures are US statutory obligations. They do not map directly - and in some cases do not map at all - to the Australian Privacy Act 1988, the My Health Records Act 2012, or the OAIC's de-identification guidance.
SECUVA was engineered against Australian law from day one. The Privacy Act's definition of 'sensitive information', the Notifiable Data Breaches scheme, and OAIC's technical guidance on de-identification are the legal instruments that shaped the product - not an afterthought applied to a US-built platform.
Continuous. Not periodic.
Security events in a healthcare data platform warrant continuous monitoring - not quarterly assessments alone. SECUVA runs automated scanning on every deployment, with real-time alerting to our security team on anomalous access patterns, policy violations, and agent communication irregularities.
We do not treat a vulnerability scan as evidence of security - we treat it as one input into an ongoing programme. The on-call security engineer is a practitioner, not a service desk. Escalations go to someone who can triage a finding technically, not open a ticket.
Real-time threat detection
Automated correlation of agent logs, control plane events, and network telemetry. Anomalies surface within minutes. Policy violations generate immediate alerts - not batch reports.
Continuous vulnerability scanning
Every deployment scanned. Critical CVEs: 4-hour escalation SLA. Zero-day response: 24-hour SLA. Findings triaged by an engineer. Remediation tracked to close.
Independent penetration testing
Annual third-party pen test by an independent Australian security firm. Internal red team exercises quarterly. Findings - including unmitigated items - disclosed to enterprise customers on request.
PHI egress alerting
Any attempted pipeline routing to a non-allowlisted destination is blocked, and an alert is raised immediately. Attempted PHI egress triggers on-call security team notification within 60 seconds.
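The block-and-alert behaviour reduces to a deny-by-default check. The hostnames and the alert hook below are placeholders, not SECUVA's real configuration:

```python
from typing import Callable

# Hypothetical allowlist: the only destinations a pipeline may route to.
ALLOWED_DESTINATIONS = {"dicom-archive.internal", "secuva-agent.internal"}


def route_allowed(dest: str, alert: Callable[[str], None]) -> bool:
    # Deny by default: anything not on the allowlist is blocked, and the
    # on-call security team is paged (within 60 seconds, per the SLA above).
    if dest not in ALLOWED_DESTINATIONS:
        alert(f"blocked egress attempt to {dest}")
        return False
    return True
```

Deny-by-default matters here: a new exfiltration path does not need to be anticipated and blocked, because nothing is reachable until it is explicitly allowed.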
Found something?
We are practitioners too. Tell us.
We welcome security researchers, red teamers, and the broader community. We have been on your side of this conversation - we know what it is like to find something real and wonder how it will be received. We commit to a technically engaged, prompt response and public credit for legitimate findings.
Reports go directly to our security team - not to a triage queue staffed by people who will look up the CVE score and determine urgency by number. We will engage with the technical detail.
We practise responsible disclosure ourselves. When we find vulnerabilities in third-party software or infrastructure we use, we follow the same process we ask researchers to follow with us - coordinated disclosure, reasonable timelines, and no surprises.