The Online Safety Act 2021 did establish the eSafety Commissioner's powers to issue removal notices, but key details contradict the claim [1][2].
**What is true:**
- The Coalition Government did pass the Online Safety Act 2021, which commenced on 23 January 2022 [1]
- The eSafety Commissioner (an independent statutory office) was granted statutory powers to issue removal notices for certain content [2]
- The eSafety Commissioner is an appointed statutory office holder, not a directly elected official [3]
**What is misleading or false:**
- The eSafety Commissioner **cannot arbitrarily delete political posts** or ban politicians [2][4].
- The power is specifically limited to removal of "class 1 material" (extremely violent content, child sexual abuse material, material likely to be rated RC, i.e. Refused Classification) and "cyber-abuse material targeted at an Australian adult" [4][5]
- Removal notices require the content to meet strict statutory definitions, not subjective political judgment [5]
- **The most recent court case explicitly shows these powers do NOT extend to political speech.** In 2025, the Administrative Review Tribunal set aside the eSafety Commissioner's removal notice over a post by Chris Elston (Billboard Chris) about transgender activist Teddy Cook, finding the post did not meet the statutory definition of cyber-abuse material [6].
The claim omits critical limitations on eSafety's powers:
1. **Strict statutory definitions:** The eSafety Commissioner can only act on content meeting specific legal definitions, not "controversial political opinions" [5].
Material must be intended to cause serious harm and be menacing, harassing, or offensive in all the circumstances [5].
2. **Court oversight:** The eSafety Commissioner's decisions are subject to merits review by the Administrative Review Tribunal (formerly the Administrative Appeals Tribunal) and to challenge in the Federal Court [2][6].
In May 2024, the Federal Court in *eSafety Commissioner v X Corp* [2024] FCA 499 significantly limited the Commissioner's enforcement powers, holding that a global removal notice was unreasonable and conflicted with international comity [2][7].
3. **Recent defeats on political/opinion content:** The 2025 Administrative Review Tribunal case shows the Commissioner attempting to remove political speech (the transgender debate) and losing [6].
This demonstrates that the system is NOT functioning as the claim suggests.
4. **Exemptions for news content:** Material published as part of legitimate news reporting is exempt, protecting political reporting and commentary [1].
5. **Platform compliance issues:** In practice, platforms often resist or ignore removal notices.
The original sources provided are a mix of industry and official records:
- Facebook's response to the exposure draft: a corporate submission by an affected platform, likely reflecting concerns about regulatory overreach and representing an industry perspective rather than neutral analysis [9]
- Parliamentary records: Official government records of the legislative process, reliable for what was debated and decided [2]
Neither source explicitly makes the claim in question.
The claim appears to be an interpretation/extrapolation of the legislative powers, not a direct quotation.
**Missing context about actual source credibility:** The mdavis.xyz source (from the claim file header) is Labor-aligned and has incentive to present Coalition policies negatively.
**Did Labor support similar regulation?**
Labor did not establish the Online Safety Act, but it has maintained and extended the framework in government:
- Labor has supported **expanded** online safety regulation, including banning social media for under-16s (the Online Safety Amendment (Social Media Minimum Age) Act 2024, passed under the Labor Government) [10]
- Labor has **not opposed** the eSafety Commissioner's powers; the agency has continued under Labor administration with expanded responsibilities [10][11]
- In fact, Labor moved to strengthen content regulation further with age restrictions and "duty of care" obligations on platforms, suggesting Labor wants MORE regulatory power, not less [11]
**Key distinction:** Neither party has proposed (nor would propose) regulations allowing arbitrary deletion of political speech.
Labor's approach is actually MORE expansive in terms of regulating platforms generally [11].
**Criticisms of eSafety's powers (some valid):**
Critics argue the eSafety Commissioner has been overly aggressive in interpretation [2]:
- The Commissioner attempted to force global removal of content (not just Australian removal) in the X Corp case, which the Federal Court rejected as unreasonable [2][7]
- The Commissioner has issued "informal" notices that may reduce transparency and due process [12]
- The Commissioner attempted to remove speech about transgender issues, which some view as political overreach [6]
**Government/regulatory perspective:**
The Coalition and subsequent Labor governments justify these powers as necessary to:
- Protect children from extreme violence and abuse material [1]
- Prevent severe online harassment of real individuals [5]
- Maintain minimum safety standards across platforms [1]
**Critical fact:** Despite possessing these powers, the eSafety Commissioner has been **repeatedly limited by courts** when attempting broad enforcement [2][6][7].
The system has built-in judicial checks that prevent arbitrary use of power [2].
**The actual risk:** Not that the Commissioner can delete political speech (courts prevent this), but that the regulatory framework's vague language ("menacing, harassing or offensive in all the circumstances") creates uncertainty and may chill political speech through the threat of regulatory action [2].
The agency's actions are subject to full court review and override [2][6].
The core claim—that the government granted power to delete political posts—is contradicted by both statute and recent judicial interpretation.
The claim confuses regulatory powers over harmful content (legitimate) with arbitrary political censorship powers (which do not exist and courts have explicitly prevented).