Full audit methodology

How our AI Readiness Audit works.

A structured end-to-end socio-technical audit (E2EST/AA) tailored to Australian regulatory requirements. This page explains every phase, every domain, every check, and every deliverable — so you know exactly what you're getting before you commit.

What is an AI Readiness Audit?

Our AI Readiness Audit is an independent, end-to-end, socio-technical assessment of your organisation's AI systems, data practices, governance structures, and people capability. It inspects AI systems in their actual implementation context — not in theory — and validates that you have taken all necessary measures at every stage to ensure your AI operates in line with Australian law, ethics principles, and regulatory expectations.

The methodology is based on internationally recognised AI auditing practice, adapted specifically for the Australian regulatory environment: the Privacy Act 1988, Australia's 8 AI Ethics Principles, APRA CPS 230, and ASIC AI guidance.

Days 1–3: Kick-off & document review
Days 4–7: Stakeholder interviews
Days 8–11: Analysis & gap mapping
Days 12–14: Deliverable production
1. Model Card & System Inventory
We begin by building a complete picture of every AI system your organisation uses — both intentional deployments and embedded AI within SaaS tools. This is your Model Card: a structured record of each system's purpose, owner, data inputs, decision outputs, and regulatory exposure.
AI system name, version and purpose documented
System owner and governance roles identified
Risk level assessed against AU Ethics Principles
Existing documentation catalogue compiled
Training data sources and characteristics recorded
Decision variables and output types mapped
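As a rough illustration only (not part of the audit deliverables), the fields in the checklist above map naturally onto a simple structured record. Every name and value below is invented for the example:

```python
from dataclasses import dataclass, field

# Illustrative sketch: field names mirror the checklist above, not a formal schema.
@dataclass
class ModelCard:
    name: str                 # AI system name and version
    purpose: str              # documented business purpose
    owner: str                # accountable system owner / governance role
    risk_level: str           # e.g. "low" / "medium" / "high" against the AU Ethics Principles
    data_sources: list[str] = field(default_factory=list)      # training data sources
    decision_outputs: list[str] = field(default_factory=list)  # decision variables and output types

# A hypothetical entry in the system inventory:
card = ModelCard(
    name="loan-scorer v2.1",
    purpose="Pre-screen consumer loan applications",
    owner="Head of Credit Risk",
    risk_level="high",
    data_sources=["CRM records", "credit bureau data"],
    decision_outputs=["approve/refer flag", "risk score"],
)
```

In practice one such record is compiled per system, covering both deliberate deployments and AI embedded in SaaS tools.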
2. System Map & Process Analysis
We map the full system — the trained model, the technical infrastructure, and the decision-making process it supports. This establishes accountability, identifies gaps in responsibility assignment, and documents the end-to-end data flow that regulators and auditors expect to see.
AI component identified and version-controlled
Responsibility distribution documented (Privacy Act APPs)
Data flows traced pre-, in- and post-processing
Transparency obligations identified (ADM disclosure)
Proportionality and necessity analysis conducted
Data retention and storage limits reviewed
3. Five-Domain Assessment
The core of the audit. We assess your organisation across five domains, scoring each against a structured checklist calibrated to the Australian regulatory environment and the 8 AI Ethics Principles.
Domain 01
Strategy
  • AI vision and board mandate
  • Roadmap maturity and ownership
  • Executive accountability structure
  • AI investment decision framework
Domain 02
Data
  • Data quality assurance procedures
  • Data source documentation and legal grounds
  • Pre-processing and bias control measures
  • Privacy Act data minimisation compliance
Domain 03
Infrastructure
  • Cloud posture and sovereignty controls
  • MLOps capability and model lifecycle
  • API security and access management
  • Version control and traceability systems
Domain 04
Governance
  • AI Use Policy existence and currency
  • ADM disclosure obligations mapped
  • Model risk register maintained
  • Incident response plan covers AI
Domain 05
People
  • Staff AI literacy baseline
  • Change readiness assessment
  • Training completion records
  • Human oversight roles defined
4. Bias & Fairness Assessment
AI bias in the Australian context goes far beyond inaccurate predictions. It includes systematic errors that cause Privacy Act breaches, discriminatory automated decisions, and failures to meet the Fairness and Contestability principles. We identify bias across all three stages of the AI lifecycle.
Bias assessed across the AI lifecycle
  • Pre-processing: selection bias, historical bias, label bias
  • In-processing: statistical bias, omitted variable bias, measurement bias
  • Post-processing: automation bias, deployment bias, aggregation bias
Protected groups defined and impact rates measured
Training data examined for representativeness
Fairness metrics applied (demographic parity, equal opportunity)
Recourse and contestability mechanisms reviewed
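The two fairness metrics named above have simple definitions. As a toy sketch only (invented data, not audit tooling): demographic parity compares positive-prediction rates between groups, while equal opportunity compares true-positive rates.

```python
# Illustrative only: two-group versions of the metrics named in the checklist.
def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rate between groups A and B."""
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(preds, labels, groups):
    """Absolute difference in true-positive rate (recall) between groups A and B."""
    def tpr(g):
        pos = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
        return sum(pos) / len(pos)
    return abs(tpr("A") - tpr("B"))

# Invented toy data: 1 = positive prediction/outcome.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

dp_gap = demographic_parity_gap(preds, groups)
eo_gap = equal_opportunity_gap(preds, labels, groups)
```

A gap near zero indicates parity on that metric; what gap is acceptable is a judgment made against the relevant protected-attribute and anti-discrimination context, not a fixed threshold.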
5. Verification, Validation & Security
We review how your AI systems are tested and validated before and after deployment — and whether the security controls required by the Privacy Act and APRA CPS 230 are in place. This phase specifically addresses the performance, consistency, stability, and traceability requirements expected by Australian regulators.
Testing strategy and validation plan documented
False positive / false negative rates analysed
Version control across datasets and code
Incident log and anomaly detection in place
Security risk analysis conducted (Privacy Act APP 11)
Human override mechanisms documented
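For context on the false positive / false negative analysis above, a minimal sketch with invented data (real validation work uses the system's own labelled outcomes):

```python
# Illustrative only: false positive rate (FPR) and false negative rate (FNR)
# computed from predictions against ground-truth labels.
def error_rates(preds, labels):
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)  # false positives
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)  # false negatives
    fpr = fp / labels.count(0)  # share of actual negatives flagged positive
    fnr = fn / labels.count(1)  # share of actual positives missed
    return fpr, fnr

# Invented toy data for the example:
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
labels = [1, 0, 0, 1, 1, 0, 1, 0]
fpr, fnr = error_rates(preds, labels)
```

Which of the two rates matters more depends on the decision the system supports; for example, a missed flag (false negative) in a compliance screen typically carries more regulatory exposure than a spurious one.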
Audit credit
The Audit fee is credited in full against an AI Strategy Sprint — so starting here doesn't cost you extra.

The Audit report

01. Internal E2EST report
Captures the full process, issues identified, and mitigation measures. Provided to your leadership team in PDF format within 24 hours of completion. Confidential — not published.
02. Board-ready summary
A 6–10 page executive summary designed to be presented to your board without modification. Includes the maturity scorecard, top 5 gaps, and prioritised action plan with cost estimates.
03. Governance Starter Pack
7 editable policy templates in Word format: AI Use Policy, ADM Disclosure Notice, Vendor Due Diligence Checklist, AI Incident Response Plan, and more.

Ready to get started?

Book a free 30-minute health check first. We'll confirm the Audit is right for your business before you commit a cent.