Misleading

Rating: 3.0/10

Coalition
C0108

The Claim

“Granted an unelected official the power to delete online posts by politicians and ban them from platforms for expressing controversial political opinions.”
Original Source: Matthew Davis
Analyzed: 29 Jan 2026

FACTUAL VERIFICATION

The claim conflates several factual and non-factual elements. The Online Safety Act 2021 did establish the eSafety Commissioner's powers to issue removal notices, but key details contradict the claim [1][2].

What is true:

  • The Coalition Government did pass the Online Safety Act 2021, which commenced on 23 January 2022 [1]
  • The eSafety Commissioner (an independent statutory office) was granted statutory powers to issue removal notices for certain content [2]
  • The eSafety Commissioner is appointed by the Governor-General on advice and not directly elected [3]

What is misleading or false:

  • The eSafety Commissioner cannot arbitrarily delete political posts or ban politicians [2][4]. The power is specifically limited to the removal of "class 1 material" (extremely violent content, child sexual abuse material, and material likely to be Refused Classification) and "cyber-abuse material targeted at an Australian adult" [4][5]
  • Removal notices require the content to meet strict statutory definitions, not subjective political judgment [5]
  • The most recent court case explicitly shows these powers do NOT extend to political speech. In 2025, the Administrative Review Tribunal set aside the eSafety Commissioner's removal notice over a post by Chris Elston (Billboard Chris) attacking transgender activist Teddy Cook, finding the post did not meet the statutory definition of cyber-abuse material [6]. The tribunal noted that the post, while offensive, was not shown to be intended to cause serious harm [6]

Missing Context

The claim omits critical limitations on eSafety's powers:

  1. Strict statutory definitions: The eSafety Commissioner can only act on content meeting specific legal definitions, not "controversial political opinions" [5]. Material must be intended to cause serious harm, and be menacing, harassing, or offensive in all the circumstances [5]

  2. Court oversight: The eSafety Commissioner's decisions are subject to merits review in the Administrative Review Tribunal (formerly the Administrative Appeals Tribunal) and to challenge in the Federal Court [2][6]. In May 2024, the Federal Court in eSafety Commissioner v X Corp [2024] FCA 499 significantly limited the Commissioner's enforcement reach, finding that requiring global (rather than Australia-only) removal of content would not be reasonable and would conflict with international comity [2][7]

  3. Recent defeats on political/opinion content: The 2025 Administrative Review Tribunal case shows the Commissioner attempting to remove contested opinion content (the Billboard Chris post on transgender issues) and losing [6]. This demonstrates the system is NOT functioning as the claim suggests

  4. Exemptions for news content: Material published as part of legitimate news reporting has exemptions, protecting political reporting and commentary [1]

  5. Platform compliance issues: In practice, platforms often resist or ignore removal notices. X Corp (formerly Twitter) won its legal battle and the eSafety Commissioner dropped the case [6][8]

Source Credibility Assessment

The original sources provided are parliamentary and official:

  • Facebook's response to the exposure draft: A corporate submission by an affected platform, likely to reflect concerns about regulatory overreach but representing an industry perspective rather than neutral analysis [9]
  • Parliamentary records: Official government records of the legislative process, reliable for what was debated and decided [2]

Neither source explicitly makes the claim in question. The claim appears to be an interpretation/extrapolation of the legislative powers, not a direct quotation.

Missing context about actual source credibility: The mdavis.xyz source (from the claim file header) is Labor-aligned and has incentive to present Coalition policies negatively. The framing of "unelected official" is politically charged language designed to undermine the regulatory agency's legitimacy [3].

Labor Comparison

Did Labor support similar regulation?

Labor did not establish the Online Safety Act, but in government it has maintained and extended the framework:

  • Labor has supported expanded online safety regulation, including banning social media for under-16s via the Online Safety Amendment (Social Media Minimum Age) Act 2024, passed under the Labor Government [10]
  • Labor has not opposed the eSafety Commissioner's powers; the agency has continued under Labor administration with expanded responsibilities [10][11]
  • In fact, Labor moved to strengthen content regulation further with age restrictions and "duty of care" obligations on platforms, suggesting Labor wants MORE regulatory power, not less [11]

Key distinction: Neither party has proposed (nor would propose) regulations allowing arbitrary deletion of political speech. Labor's approach is actually MORE expansive in terms of regulating platforms generally [11].

Balanced Perspective

Criticisms of eSafety's powers (some valid):

Critics argue the eSafety Commissioner has interpreted these powers too aggressively [2]:

  • The Commissioner attempted to force global removal of content (not just Australian removal) in the X Corp case, which the Federal Court rejected as unreasonable [2][7]
  • The Commissioner has issued "informal" notices that may reduce transparency and due process [12]
  • The Commissioner attempted to remove speech about transgender issues, which some view as political overreach [6]

Government/regulatory perspective:

The Coalition and subsequent Labor governments justify these powers as necessary to:

  • Protect children from extreme violence and abuse material [1]
  • Prevent severe online harassment of real individuals [5]
  • Maintain minimum safety standards across platforms [1]

Critical fact: Despite possessing these powers, the eSafety Commissioner has been repeatedly limited by courts when attempting broad enforcement [2][6][7]. The system has built-in judicial checks that prevent arbitrary use of power [2].

The actual risk: Not that the Commissioner can delete political speech (courts prevent this), but that the regulatory framework's vague language ("menacing, harassing or offensive in all the circumstances") creates uncertainty and may chill political speech through the threat of regulatory action [2]. However, court decisions are now clarifying the boundaries [6].

MISLEADING

3.0 out of 10

The claim implies the eSafety Commissioner has arbitrary power to delete political posts and ban politicians. The evidence shows:

  1. Powers are limited to specific harmful content categories, not political speech [1][2][5]
  2. Recent court decisions (2024-2025) explicitly prevent the use of these powers against political/opinion speech [2][6][7]
  3. The Commissioner has lost multiple legal challenges attempting broader enforcement [2][6]
  4. The agency's actions are subject to full court review and override [2][6]

The core claim—that the government granted power to delete political posts—is contradicted by both statute and recent judicial interpretation. The claim confuses regulatory powers over harmful content (legitimate) with arbitrary political censorship powers (which do not exist and courts have explicitly prevented).

📚 SOURCES & CITATIONS (11)

  1. Online Safety Act 2021 - Federal Register of Legislation (legislation.gov.au)
  2. About the Commissioner - eSafety Commissioner (esafety.gov.au)
  3. Regulatory guidance - eSafety Commissioner (esafety.gov.au)
  4. Online Content Scheme Regulatory Guidance - eSafety Commissioner (PDF, esafety.gov.au)
  5. Elon Musk's X wins 'free speech' fight against eSafety Commissioner - Sydney Morning Herald (2025-07-01). A court overruled the eSafety Commissioner's order requiring X to remove a post attacking an Australian trans rights activist.
  6. Statement from the eSafety Commissioner re: Federal Court proceedings - eSafety Commissioner (esafety.gov.au)
  7. eSafety reaching across borders: Federal Court grants injunctions in X Corp proceedings - Clifford Chance (2024-05). The Australian eSafety Commissioner succeeded in obtaining an interim injunction requiring X Corp to hide extreme violent video content of an alleged terrorist act.
  8. Facebook response to exposure draft for new Online Safety Act (PDF). Original link unavailable; archived version available.
  9. Online Safety Amendment (Social Media Minimum Age) Bill 2024 - Parliament of Australia (aph.gov.au)
  10. Government ramps up digital platforms online safety agenda by proposing duty of care obligations - Gilbert + Tobin (gtlaw.com.au)
  11. Avoiding statutory steps when enforcing eSafety - X removal of post - Administrative Power and the Law (2025-02-13). The eSafety Commissioner issues 'informal' notices to social media providers like X; avoiding statutory steps reduces transparency.

Rating Scale Methodology

1-3: FALSE

Factually incorrect or malicious fabrication.

4-6: PARTIAL

Some truth but context is missing or skewed.

7-9: MOSTLY TRUE

Minor technicalities or phrasing issues.

10: ACCURATE

Perfectly verified and contextually fair.

Methodology: Ratings are determined through cross-referencing official government records, independent fact-checking organizations, and primary source documents.