Australia’s two leading human rights groups have called on the parliament to scrap the government’s planned facial recognition laws, arguing the proposed scheme is “more draconian” than a similar UK system which was found to be highly inaccurate and likely to break human rights laws.

The Parliamentary Joint Committee on Intelligence and Security (PJCIS) is currently reviewing two bills that would create a national database of Australian citizens’ faces and hand authorities new surveillance powers, including expanded use of facial recognition technology. The agency responsible, the Department of Home Affairs, has previously argued the scheme is “not intended for mass surveillance”.

The same bills – the Identity-matching Services Bill 2019 and the Australian Passports Amendment (Identity-matching Services) Bill 2019 – lapsed at the conclusion of the last parliament but were quickly reintroduced unchanged when the Coalition government was returned to power.

The new PJCIS is considering submissions from the previous review and accepted new submissions up until last week.

In a new supplementary submission, the Human Rights Law Centre (HRLC) reiterated its call to halt the new powers, describing the government’s bill as “manifestly dangerous and insufficient” for its purported aim of establishing a legal framework for the use of facial recognition and matching technologies.

“The Bill does not provide a legal basis for use of identity matching services and little, if any, safeguards for Australians whose information will be vacuumed into databases used for search comparisons,” the HRLC submission says.

“The Bill can be characterised as providing authorities with extraordinarily broad capabilities to use facial recognition technology without any apparent regard for the civil liberties of all of us who will be affected.” 

‘More draconian’ than struggling UK scheme

In its submission, the human rights group says Australia’s proposed laws are “more draconian” than a UK facial recognition scheme. The UK scheme uses facial recognition technology at live events to scan people’s faces and compare them against a “watchlist” of known offenders and suspects. An independent review of the UK trials found matches were correct in only one in five cases and that the scheme was likely to break human rights laws.

Also of concern, the HRLC says, is that while the UK system only checks against a watchlist, Australia’s proposed system would effectively match against a database of “everyone with government-approved documentation … regardless of their being suspected of a crime”.

The HRLC submission also notes the UK system has stronger data retention protections and the technology is used in a more transparent way.


Emily Howie, a legal director at the Human Rights Law Centre, said the proposed Australian laws are “something you’d expect in an authoritarian state”.

“The facial recognition scheme effectively hands control of powerful new forms of surveillance to the Home Affairs department with virtual carte blanche to collect and use some of our most sensitive personal data. 

“The laws are a recipe for disaster, they put at risk everyone’s privacy and contain no meaningful safeguards. This law is sloppy, it’s dangerous, and it has no place in a democracy.” 

Concerns over technology accuracy and bias

In its latest submission, also supplementary to evidence it gave to the previous review, the Australian Human Rights Commission agrees the bill should not be passed and calls for Australia to remain vigilant to the threats some technologies pose to human rights.

“The Commission continues to hold serious concerns that the Bill would impinge on a number of human rights in ways not demonstrated to be necessary and proportionate to achieving its objectives,” its submission says.

“Rights that are particularly likely to be limited are the right to privacy, freedom of movement, the right to non-discrimination, and the right to a fair trial, though this is not an exhaustive list.”

The Commission notes that facial recognition technology is not yet reliable and is particularly poor at identifying certain groups of people.

“… the leading academic research makes clear that the technology, generally, remains unreliable, particularly compared to humans’ capacity to recognise faces, which is itself prone to error. This is particularly the case in ‘real-world’ applications, which generally involve the use of lower-quality images taken in sub-optimal conditions.”

Last year, research from the MIT Media Lab revealed that leading facial recognition software can carry inherent bias against certain groups, stemming from the data facial recognition algorithms are “trained” on.

According to the study, the error rate when matching the faces of dark-skinned women was as high as 34.7 per cent, while the maximum error rate for white men in the same analysis was 0.8 per cent.

The Australian Human Rights Commission refers to the study in its submission, explaining that this inherent bias disadvantages the affected groups in several ways.

“In the context of law enforcement, lower reliability increases the likelihood that innocent people will be misidentified and become subject to investigation or coercive action by law enforcement and intelligence agencies,” the submission says.

“In the case of service delivery, it may make it more difficult for members of those groups to establish their identity and to access services including government and financial services.”
