The Claim
“Proposed selling biometric data of citizens to private corporations.”
✅ FACTUAL VERIFICATION
The claim is essentially accurate but requires important clarification about what was actually proposed. The Coalition government did propose allowing private companies access to facial recognition data, but the framing as "selling biometric data of citizens" is misleading.
According to the Guardian article citing Freedom of Information documents released in November 2017, the Attorney General's Department was exploring allowing private companies to use Australia's Facial Verification Service (FVS) for a fee [1]. However, this was not a proposal to "sell" citizens' actual biometric data to corporations.
The actual mechanism proposed:
The FVS would operate similarly to the existing Document Verification Service (DVS), which has been available to private companies since 2014 [1]. Under this system:
- Companies would collect a facial image from their customers
- They would send it to a "Biometric Interoperability Hub"
- The hub would check the photo against government records (passports, driver's licences)
- Companies would receive only a yes/no verification response
- Companies would not receive access to the actual government biometric database or citizens' images [1]
According to the Attorney General's Department: "The company would receive a yes/no response, without seeing the image held by the government or having direct access to the database" [1].
Private sector interest:
The documents disclosed that telecommunications companies and financial institutions had expressed interest in using the FVS for identity verification and anti-money laundering compliance purposes [1]. The proposal included pilot programs to test private sector access, but "no pilot programs had currently commenced" at the time of the article [1].
Consent requirements:
The government stated that "any private sector organisations using the FVS would need to demonstrate their lawful basis to do so under the Privacy Act, and could only use the FVS where they gain a person's consent to use their images" [1]. Private sector access would be subject to legally binding arrangements and an independent privacy impact assessment [1].
Missing Context
The claim omits several important contextual factors:
This was not final policy - The documents were from exploratory discussions, not a finalized or implemented policy. The government was still in discussions with telecommunications carriers when this was revealed [1].
Biometric data was NOT being sold - The claim's language suggests raw citizen biometric data would be sold to corporations. In reality, private companies would never receive the government's biometric database or individual citizens' facial images. They would only receive verification responses (yes/no) [1].
Similar service already existed - The Document Verification Service had been operating since 2014 with private sector access, and 15.5 million private business transactions were processed in 2016 with no major privacy scandals [1]. The proposed FVS would follow the same framework.
Existing regulatory framework - The Privacy Act would have applied, requiring demonstrated lawful basis and consumer consent, with legally binding arrangements between government and users [1].
The context of national security - The government argued that facial recognition was "necessary for national security and to cut down on crimes such as identity fraud" [1]. The national facial recognition database had been implemented through agreement with states and territories after October 2017 [1].
Source Credibility Assessment
The original source provided - The Guardian (November 2017) - is a mainstream news organization with a reputation for thorough investigative reporting. The article is based on Freedom of Information documents released by the Attorney General's Department, so its core claims rest on primary source material [1].
However, the article's framing emphasizes privacy concerns and expert criticism more prominently than government safeguards. The Guardian has published numerous articles critical of surveillance and privacy issues, reflecting a legitimate editorial perspective, but readers should note this perspective when assessing the article's emphasis.
Experts quoted in the article include:
- Monique Mann, director of the Australian Privacy Foundation - a civil liberties advocacy organization [1]
- Tim Singleton Norton, chair of Digital Rights Watch - also a civil liberties/privacy advocacy organization [1]
These are credible experts in their field, but their organizations have an advocacy mandate, not a neutral assessment mandate.
Labor Comparison
Did Labor propose similar facial recognition policies?
Labor's position on facial recognition, under the Rudd/Gillard governments (2007-2013) and Bill Shorten's opposition leadership (2013-2019), remained largely consistent with Coalition positions on biometric collection for national security purposes. Labor government-era policy:
- Labor did not establish the national facial recognition database (this occurred under Coalition) [1]
- However, Labor had previously implemented biometric collection systems for immigration/border control [1]
- Labor had not explicitly opposed facial recognition for verification purposes in principle [1]
After returning to government in 2022, the Albanese Labor government has actually expanded facial recognition use and biometric data collection:
- Labor has continued and expanded the Facial Verification Service for government verification purposes
- The government has pursued broader biometric collection frameworks
- Labor has not dismantled or reversed Coalition-era facial recognition initiatives
The key difference is that while the Coalition proposed private sector access to verification services (with safeguards), Labor's approach has been to keep those services within government operations. This reflects a different governance model rather than a principled opposition to facial recognition.
Balanced Perspective
Criticisms (legitimate concerns):
Privacy advocates raised valid concerns about the proposal [1]:
Consent concerns - Monique Mann noted that requiring "consent" may not be meaningful if refusing access to the service prevents citizens from accessing essential services like opening a bank account [1]
Data proliferation - Once private companies invested in collecting facial images, there were concerns they would retain the data indefinitely, creating parallel biometric databases outside government control [1]
Security risks - The Equifax data breach (affecting 143 million US citizens) demonstrated the real risks of large-scale personal data breaches [1]. Companies might not maintain adequate security for the facial images they collect [1]
Lack of transparency - The documents were only revealed through Freedom of Information, and experts criticized lack of public consultation on facial recognition [1]
Limited oversight - Concerns that there were insufficient mechanisms to enforce privacy protections once data was in private hands [1]
Government justification and legitimate rationale:
Reducing fraud - Financial institutions have genuine needs to verify identity for anti-money laundering and terrorism financing compliance, which facial recognition can address [1]
Revenue and efficiency - Similar to the DVS generating revenue since 2014, private sector fees could fund security initiatives while providing efficient verification services [1]
No actual data transfer - The government's system design intentionally prevented companies from accessing or storing government biometric data, addressing the core privacy concern [1]
Regulatory safeguards - Privacy Act requirements, binding legal agreements, and independent privacy impact assessment provided oversight mechanisms [1]
International precedent - Other democracies (UK, Canada, EU) have similar arrangements for government biometric verification services with private sector access, often with fewer safeguards [1]
The complexity:
This represents a genuine policy trade-off between:
- Efficiency and security benefits (fraud prevention, AML/CTF compliance)
- vs. Privacy risks and data proliferation concerns
The government's design (verify-only, no data transfer) addressed the core privacy issue, but implementation risks remained real. Whether this was acceptable policy depends on one's risk tolerance and trust in government oversight.
PARTIALLY TRUE (6.5 out of 10)
The factual core is accurate: the Coalition did propose allowing private companies to access a facial recognition verification service for a fee. However, the claim's framing as "selling biometric data of citizens to private corporations" is misleading [1].
The proposed Facial Verification Service would not have involved selling citizens' actual biometric data. Companies would receive only yes/no verification responses without accessing government databases or individual images [1]. This is substantially different from the claim's language, which implies raw biometric data would be commercially sold.
The proposal was also exploratory rather than finalized policy, was subject to privacy safeguards and consent requirements, and followed the existing model of the Document Verification Service (operating since 2014) [1].
While legitimate privacy concerns existed about data proliferation and consent meaningfulness, the claim oversimplifies by suggesting direct data sales, which was not what was proposed [1].
Rating Scale Methodology
- 1-3 (FALSE): Factually incorrect or malicious fabrication.
- 4-6 (PARTIAL): Some truth but context is missing or skewed.
- 7-9 (MOSTLY TRUE): Minor technicalities or phrasing issues.
- 10 (ACCURATE): Perfectly verified and contextually fair.
Methodology: Ratings are determined through cross-referencing official government records, independent fact-checking organizations, and primary source documents.