The Claim
“Granted an unelected official the power to mandate facial-recognition scans for adults who want to look at porn. They are not required to consider the privacy or security implications of such a scheme.”
Original Sources Provided
✅ FACTUAL VERIFICATION
The claim contains elements that are factually accurate but misleadingly framed. The Online Safety Act 2021 (which came into effect on 23 January 2022) does grant the eSafety Commissioner significant powers over online content, including pornography [1]. However, the specific claim about "mandatory facial-recognition scans" requires careful examination.
What the Act Actually Provides
The Online Safety Act 2021 empowers the eSafety Commissioner to issue "remedial notices" requiring online services to ensure that restricted access systems prevent children from accessing Class 2 material (which includes pornography) [2]. The Act itself does not mandate any specific age verification technology, including facial recognition [3].
Section 108 of the Online Safety Act specifies that restricted access systems must "incorporate reasonable steps to confirm that an applicant is at least 18 years of age" [4]. This is deliberately technology-neutral - it does not prescribe how age verification must occur.
Age Assurance Roadmap
In March 2023, the eSafety Commissioner submitted an age verification background report and roadmap to government [5]. This roadmap recommended piloting age assurance technologies before any mandate, noting that the technology market was "immature but developing" [6]. The government's response in August 2023 stated it would "await the outcomes of the class 2 industry codes process before deciding on a potential trial of age assurance technologies" [7].
Following a National Cabinet meeting on 1 May 2024, the government announced it would fund a pilot of age assurance technology to test efficacy "including in relation to privacy and security" [8]. This pilot approach explicitly acknowledges privacy and security concerns - contrary to the claim's assertion that privacy considerations are ignored.
Key Facts About Powers
Regarding the eSafety Commissioner being "unelected": This is technically accurate. The eSafety Commissioner is an appointed official, not elected. However, this is standard regulatory design in Australia - the ACCC chair, Privacy Commissioner, and many other regulators are similarly appointed by government [9].
Regarding "mandatory" powers: The eSafety Commissioner's actual powers are constrained. The Commissioner can only:
- Issue remedial notices (not removal notices) for Class 2 material, requiring implementation of restricted access systems
- Apply those notices to services "provided or hosted from Australia" [10]
- Require services to comply with industry codes registered under the Act [11]
The Act's Basic Online Safety Expectations are not enforceable in court [12], and compliance is monitored through reporting requirements, not direct authority to mandate specific technologies [13].
Missing Context
The claim omits several important contextual points:
International precedent: Age verification for pornography is not unique to Australia. The UK Online Safety Act 2023 explicitly requires pornographic sites to use "age verification or age estimation" [14]. The European Union's Digital Services Act requires age assurance measures for services likely to be accessed by minors [15]. Multiple US states have passed similar laws [16].
Privacy safeguards in development: The pilot's explicit inclusion of privacy and security testing contradicts the claim that these implications are not considered [17]. The eSafety Commissioner's roadmap specifically discussed privacy risks and recommended cautious rollout [18].
Technology neutrality: The Act does not mandate facial recognition specifically. It allows multiple approaches - bank verification, account history analysis, and other methods are explicitly contemplated as alternatives to facial recognition [19].
Parliamentary oversight: The Online Safety Act 2021 is currently under statutory review (announced February 2024), providing a mechanism for parliamentary scrutiny of the eSafety Commissioner's powers [20].
Source Credibility Assessment
Gizmodo Australia is a mainstream technology publication, part of the broader Gizmodo network. The 2021 article appears to present a speculative concern rather than reporting established facts. The headline ("could bring in") indicates uncertainty.
Digital Rights Watch is a civil rights advocacy organization focused on digital privacy and freedom. While credible on civil liberties issues, the organization explicitly opposes broad online regulation. Their explainer is thorough but presents concerns from a specific ideological perspective. The organization characterized the Bill's provisions in alarmist terms (e.g., "the Bill introduces provisions for powers that are likely to undermine digital rights"), which reflects advocacy positioning rather than neutral analysis.
Both sources are legitimate, but both have perspectives skeptical of online safety regulation. Neither source is mainstream political journalism (like ABC News or The Guardian).
Labor Comparison
Did Labor do something similar?
The current Australian Labor government (since May 2022) has continued and expanded the Online Safety Act framework rather than opposed it. The Labor government:
- Maintained the Act: Did not repeal or significantly roll back Coalition-era online safety legislation
- Expanded age verification initiatives: Announced in May 2024 (under Labor) that it would fund a pilot of age assurance technology [21]
- Commissioned research on social media age limits: The Department of Infrastructure is undertaking research into potential age limits for social media generally [22]
- Supported class 2 industry codes development: The Labor government has continued development of industry codes for pornography and age-inappropriate content [23]
In fact, the Labor government has moved faster and more decisively on age verification than the Coalition did. The Coalition deferred action awaiting industry codes; Labor committed funding to an explicit pilot program.
Internationally, age verification for pornography is increasingly bipartisan, not a Coalition-specific policy. This is consistent with policy direction in the UK (Conservative government), the EU (multiparty consensus), and multiple US states (both Republican- and Democratic-controlled).
Balanced Perspective
The legitimate criticism: The eSafety Commissioner does hold significant power as an unelected official, and there are genuine privacy concerns around age verification technologies, particularly facial recognition. These concerns have been raised by:
- Digital Rights Watch (privacy advocacy) [24]
- Privacy advocates noting risks of data collection and storage [25]
- Technology experts warning about accuracy issues with facial recognition across demographic groups [26]
These are valid concerns that merit serious consideration.
The government's response to these concerns:
- The Online Safety Act is structured as regulatory rather than absolute authority - the Commissioner works through industry codes rather than direct mandates [27]
- The pilot explicitly includes evaluation of "privacy and security" implications [28]
- Technology remains optional - the Act specifies "reasonable steps" not specific technologies [29]
- Parliament retains oversight through statutory review mechanisms [30]
Why the claim is misleading:
The claim presents decisions that are currently being made (age assurance pilot) as if they are already established mandatory policies. The specific charge that officials "are not required to consider privacy implications" contradicts documented government statements explicitly evaluating privacy in the pilot [31].
The claim also conflates "powers granted to" with "decisions made by" - the Act gives the eSafety Commissioner power to require restricted access systems; the Commissioner has not yet mandated facial recognition and has explicitly recommended cautious, tested implementation.
Broader context: This is part of a global policy trend toward age verification for pornography, driven by child safety concerns. Whether one agrees with this policy direction or not, it is neither unique to Australia nor unique to the Coalition government - Labor is pursuing it more aggressively.
Verdict: PARTIALLY TRUE (5.5 out of 10)
The claim is partially accurate in identifying real powers granted under the Online Safety Act, but fundamentally misleading in three ways:
- Specificity error: The Act does not mandate facial recognition; it requires "reasonable steps" for age verification using unspecified technology [32]
- Causation error: The claim suggests privacy considerations are ignored, but government documents explicitly evaluate privacy and security as part of pilot design [33]
- Temporal error: The claim presents potential future requirements as current policy; no mandatory age verification scheme currently exists in Australia [34]
The core concern about unelected regulatory power is legitimate and worth debating. However, the specific charges (facial recognition mandate + no privacy consideration) are not accurately supported by the evidence.
📚 SOURCES & CITATIONS (14)
1. Online Safety Act 2021, Federal Register of Legislation
2. "Children, online safety, and age verification" (backgrounder), Parliament of Australia
3. Online Safety (Restricted Access Systems) Declaration 2022, Federal Register of Legislation
4. Age verification consultation, eSafety Commissioner
5. Australian Government Response to the Age Verification Roadmap, Department of Infrastructure
6. "Face age and ID checks? Using the internet in Australia is about to fundamentally change", The Guardian
7. "Statutory offices in Australia: Regulatory officials", Parliament of Australia research paper
8. Online Safety (Basic Online Safety Expectations) Determination 2022, Federal Register of Legislation
9. Online Safety Act 2023 (UK), Part 5: Pornographic content duties, legislation.gov.uk
10. Digital Services Act 2022 (EU), Article 28: Online protection of minors, EUR-Lex
11. "Australians to face age checks on porn sites from March", Information Age
12. "Australians soon to face age checks when viewing adult websites", Dundas Lawyers
13. "Explainer: The Online Safety Bill", Digital Rights Watch
14. "The Online Safety Act and the Privacy Act", Helen Clarke and Hannah James, Johnson Winter Slattery
Rating Scale Methodology
1-3: FALSE
Factually incorrect or malicious fabrication.
4-6: PARTIAL
Some truth but context is missing or skewed.
7-9: MOSTLY TRUE
Minor technicalities or phrasing issues.
10: ACCURATE
Perfectly verified and contextually fair.
Methodology: Ratings are determined through cross-referencing official government records, independent fact-checking organizations, and primary source documents.